Show HN: A Go service that exposes a FIFO message queue in RAM
17 points | 3 days ago | 9 comments | github.com
brodo
4 hours ago
[-]
I've checked the repo and this looks a little AI-generated to me.
reply
kgeist
2 hours ago
[-]
I gave ChatGPT the project's README and asked it to write Go code based on it. The code it produced was very similar in structure to the original, with many parts almost identical.

For example, what ChatGPT gave me:

  func (q *Queue) Enqueue(b []byte) error {
    if len(b) > maxBodySize {
      return fmt.Errorf("payload too large")
    }
    q.mu.Lock()
    defer q.mu.Unlock()

    if len(q.data) >= q.maxMsg || q.bytes+len(b) > q.maxB {
      return errors.New("queue full")
    }
    cp := append([]byte(nil), b...)
    q.data = append(q.data, cp)
    q.bytes += len(cp)
    atomic.AddUint64(&q.enqueueCnt, 1)
    return nil
  }
The project:

  func (q *queue) enqueue(b []byte) error {
    if len(b) > maxBody {
      return errors.New("payload too large")
    }
    q.mu.Lock()
    defer q.mu.Unlock()
    if len(q.data) >= maxMsgs || q.bytes+int64(len(b)) > maxBytes {
      return errors.New("queue full")
    }
    cp := append([]byte(nil), b...)
    q.data = append(q.data, cp)
    q.bytes += int64(len(cp))
    atomic.AddUint64(&enqueueCnt, 1)
    return nil
  }
reply
stevekemp
4 hours ago
[-]
Flagged this submission for that very reason.
reply
hotpocket777
4 hours ago
[-]
More than a little
reply
RaiyanYahya
2 hours ago
[-]
I apologize if anyone was offended by this. I thought it was pretty cool; the intention was not to show off. I did use AI to help me build this. The goal was to have a local implementation of a queue that can be used to decouple systems and process messages. I should have included the intention of the post. I thought some other people would find it cool.

If you have ever used AWS SQS, I wanted to build something like that: something that runs locally and can be used on simple servers. I apologize if it came across as something else. I apologize again.

reply
kgeist
2 hours ago
[-]
At work, we use RabbitMQ in most projects. Runs OK on a laptop.
reply
RaiyanYahya
2 hours ago
[-]
Yes, I have used it. I just made this for the sake of it. I wanted it to be easy to use.
reply
KaleBab
5 hours ago
[-]
Congratulations on publishing your project!

Since you haven't written much in your post, I'm not entirely sure what your specific intentions are. Do you expect people to use your application, or are you looking for feedback or tips?

I think my first tip is, if your intention is for it to be used by others, to ensure a somewhat higher level of quality before publishing the project. If it's more of a learning project, that's fine, but then it would be nice to say so! And perhaps ask for specific help.

I looked at the code a little bit, and I see some quirks and interesting things which make me hesitant to use this application in production. I think a bit more polish is required before it's production-ready. I'm not about to do a full code review, but here are some random pointers of things that I noticed.

* In the README it says the endpoints are POST and GET specifically. But are they? What happens when you GET enqueue or HEAD dequeue?

* The queue methods update global variables; is this really the best design choice? Why not consider putting them as fields on the queue? If not, why are the queue's fields not simply global variables, too?

* Go has an excellent `slog` package for structured JSON logging, and it would be quite idiomatic to use it (rough sketch after this list).

* In some environments, such as Kubernetes, the implemented graceful shutdown procedure will terminate the app while the service might still be routing requests to it (as far as I know, it sends the SIGINT concurrently with asking the service to remove the pod from the service [simplified]).

* Is "too many requests" the most appropriate status code when the enqueue method returns an error?

* The server timeouts could be considered rather large when all it's doing is reading and writing a maximum of 128 KiB payloads; preferably such options would be configurable in any case.
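To make the method-check, logging and status-code points concrete, here is a rough sketch of the kind of thing I mean; the `enqueue` stub and the route name are placeholders, not the project's actual code:

  package main

  import (
    "errors"
    "io"
    "log/slog"
    "net/http"
    "os"
  )

  var errQueueFull = errors.New("queue full")

  // enqueue is a stand-in for the project's queue so the sketch compiles.
  func enqueue(b []byte) error { return nil }

  func enqueueHandler(log *slog.Logger) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
      if r.Method != http.MethodPost { // reject GET/HEAD etc. explicitly
        w.Header().Set("Allow", http.MethodPost)
        http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
        return
      }
      b, err := io.ReadAll(http.MaxBytesReader(w, r.Body, 128<<10)) // 128 KiB cap
      if err != nil {
        http.Error(w, "payload too large", http.StatusRequestEntityTooLarge)
        return
      }
      if err := enqueue(b); err != nil {
        if errors.Is(err, errQueueFull) {
          // 503 + Retry-After arguably fits "queue full" better than 429,
          // which usually signals client-side rate limiting.
          w.Header().Set("Retry-After", "1")
          http.Error(w, err.Error(), http.StatusServiceUnavailable)
          return
        }
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
      }
      log.Info("enqueued", "bytes", len(b)) // structured JSON via slog
      w.WriteHeader(http.StatusAccepted)
    }
  }

  func main() {
    log := slog.New(slog.NewJSONHandler(os.Stdout, nil))
    http.Handle("/enqueue", enqueueHandler(log))
    log.Info("listening", "addr", ":8080")
    log.Error("server stopped", "err", http.ListenAndServe(":8080", nil))
  }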

Honestly, all in all, it looks like a university student project, especially due to the emphasis on spelling out computer science concepts and Go specifics in the README (e.g. "Nils every slice entry so Go’s tri‑colour GC can reclaim memory quickly; resets counters", "Mutex ensures linearizability (each op appears instantaneous). Because we hold the lock only long enough to modify the slice, throughput scales linearly until contention on enqueue/dequeue dominates"). Nothing wrong with that, but it would be good to mark it as such.

Also I can highly recommend the book 100 Go Mistakes and How to Avoid Them which will address some points I see in this code as well.

Anyway keep up the good work and happy coding!

reply
wredcoll
3 hours ago
[-]
Is this AI-generated?
reply
hotpocket777
48 minutes ago
[-]
Yes
reply
lormayna
4 hours ago
[-]
I am only an amateur Go developer, but I have some questions:

* what is the sense of this project? NATS is pretty much the standard for this use case in Go, and you can also embed it in a Go binary (rough sketch below)

* the code seems a bit "strange" to me: why not use an existing library for structured logging? No unit tests, no use of interfaces (e.g. persistence could implement the io.Writer interface), etc.
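For reference, embedding NATS looks roughly like this (written from memory, so double-check the exact API against the nats-server docs):

  package main

  import (
    "log"
    "time"

    "github.com/nats-io/nats-server/v2/server"
    "github.com/nats-io/nats.go"
  )

  func main() {
    // Start a NATS server inside this process; no external broker to run.
    ns, err := server.NewServer(&server.Options{Port: 4222})
    if err != nil {
      log.Fatal(err)
    }
    go ns.Start()
    if !ns.ReadyForConnections(5 * time.Second) {
      log.Fatal("embedded NATS server did not start in time")
    }

    nc, err := nats.Connect(ns.ClientURL())
    if err != nil {
      log.Fatal(err)
    }
    defer nc.Close()

    // Minimal work-queue style round trip.
    nc.Subscribe("jobs", func(m *nats.Msg) { log.Printf("got job: %s", m.Data) })
    nc.Publish("jobs", []byte("hello"))
    nc.Flush()
    time.Sleep(100 * time.Millisecond) // give the handler a moment before exiting
  }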

reply
hotpocket777
4 hours ago
[-]
There is no sense
reply
reactordev
3 hours ago
[-]
Why is this here? I can prompt this in 5 minutes. Am I missing something? Are channels that useful for it to be an API without purpose?
reply
pokstad
4 hours ago
[-]
Why not just create a Go library that wraps mkfifo? What’s the advantage here?
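Roughly what I have in mind, as a Unix-only sketch with a hypothetical path:

  package main

  import (
    "fmt"
    "io"
    "os"
    "syscall"
  )

  func main() {
    const path = "/tmp/demo.fifo"
    // mkfifo(2) gives you FIFO semantics for free via a named pipe.
    if err := syscall.Mkfifo(path, 0o600); err != nil && !os.IsExist(err) {
      panic(err)
    }
    defer os.Remove(path)

    // Writer side: opening for write blocks until a reader opens the other end.
    go func() {
      w, err := os.OpenFile(path, os.O_WRONLY, 0)
      if err != nil {
        panic(err)
      }
      defer w.Close()
      fmt.Fprintln(w, "hello through the fifo")
    }()

    // Reader side.
    r, err := os.Open(path)
    if err != nil {
      panic(err)
    }
    defer r.Close()
    io.Copy(os.Stdout, r)
  }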
reply
vips7L
5 hours ago
[-]
Does Go not have a built-in concurrent queue?
reply
brodo
5 hours ago
[-]
Yes, Go channels are concurrent FIFO queues.
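For example, a buffered channel already gives you a bounded, concurrency-safe FIFO:

  package main

  import (
    "fmt"
    "sync"
  )

  func main() {
    queue := make(chan []byte, 1024) // buffered: bounded FIFO
    var wg sync.WaitGroup

    // Consumer: receives in the order messages were sent.
    wg.Add(1)
    go func() {
      defer wg.Done()
      for msg := range queue {
        fmt.Printf("dequeued: %s\n", msg)
      }
    }()

    // Producer: send blocks when the buffer is full (back-pressure for free).
    for i := 0; i < 3; i++ {
      queue <- []byte(fmt.Sprintf("message %d", i))
    }
    close(queue)
    wg.Wait()
  }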
reply
vips7L
2 hours ago
[-]
Ah right, I forgot, they’re rendezvous queues, right? I guess I’m confused about why you would need an in-memory queue as a service, then.
reply
teeray
4 hours ago
[-]
> Critical Section – guarded by a single mutex. Simpler than read‑write locks because writes dominate.

To call out this triviality feels like a hallmark of AI slop for sure. “No, use a regular mutex instead of an RWMutex,” and then the AI helpfully puts that footnote into the README.

reply
nurettin
4 hours ago
[-]
Why not multiple queues?
reply