For example, what ChatGPT gave me:
```go
func (q *Queue) Enqueue(b []byte) error {
	if len(b) > maxBodySize {
		return fmt.Errorf("payload too large")
	}
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.data) >= q.maxMsg || q.bytes+len(b) > q.maxB {
		return errors.New("queue full")
	}
	cp := append([]byte(nil), b...)
	q.data = append(q.data, cp)
	q.bytes += len(cp)
	atomic.AddUint64(&q.enqueueCnt, 1)
	return nil
}
```
The project:

```go
func (q *queue) enqueue(b []byte) error {
	if len(b) > maxBody {
		return errors.New("payload too large")
	}
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.data) >= maxMsgs || q.bytes+int64(len(b)) > maxBytes {
		return errors.New("queue full")
	}
	cp := append([]byte(nil), b...)
	q.data = append(q.data, cp)
	q.bytes += int64(len(cp))
	atomic.AddUint64(&enqueueCnt, 1)
	return nil
}
```
If you have ever used AWS SQS, I wanted to build something like that: something that runs locally and can be used on simple servers. I apologize if it came out as something else.
Since you haven't written much in your post, I'm not entirely sure what your specific intentions are. Do you expect people to use your application, or are you looking for feedback or tips?
My first tip is: if your intention is for it to be used by others, ensure a somewhat higher level of quality before publishing the project. If it's more of a learning project, that's fine, but then it would be nice to say so! And perhaps ask for specific help.
I looked at the code a little, and I see some quirks and interesting things which make me hesitant to use this application in production. I think a bit more polish is required before it's production-ready. I'm not about to do a full code review, but here are some random pointers of things I noticed.
* In the Readme it says the endpoints are POST and GET specifically. But are they? What happens when you GET enqueue or HEAD dequeue?
* The queue methods update global variables; is this really the best design choice? Why not consider making them fields on the queue? If not, why aren't the queue's fields simply global variables, too?
* Go has an excellent `slog` package for structured JSON logging, and it would be quite idiomatic to use it.
* In some environments, such as Kubernetes, the implemented graceful shutdown procedure will terminate the app while the service might still be routing requests to it (as far as I know, the SIGTERM is sent concurrently with removing the pod from the service's endpoints [simplified]).
* Is "too many requests" the most appropriate status code when the enqueue method returns an error?
* The server timeouts could be considered rather large when all it's doing is reading and writing payloads of at most 128 KiB; preferably such options would be configurable in any case.
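To illustrate the globals-versus-fields point: here is a minimal sketch (names are illustrative, not taken from the project) where the limits and the enqueue counter live on the queue itself, so two queue instances no longer share state and the counter can still be read without taking the lock:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"sync/atomic"
)

// queue keeps all state, including limits and counters, on the struct
// instead of in package-level globals.
type queue struct {
	mu         sync.Mutex
	data       [][]byte
	bytes      int64
	maxMsgs    int
	maxBytes   int64
	enqueueCnt atomic.Uint64 // safe to Load() without holding mu
}

func (q *queue) enqueue(b []byte) error {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.data) >= q.maxMsgs || q.bytes+int64(len(b)) > q.maxBytes {
		return errors.New("queue full")
	}
	cp := append([]byte(nil), b...) // defensive copy of the caller's slice
	q.data = append(q.data, cp)
	q.bytes += int64(len(cp))
	q.enqueueCnt.Add(1)
	return nil
}

func main() {
	q := &queue{maxMsgs: 2, maxBytes: 1 << 20}
	fmt.Println(q.enqueue([]byte("hello"))) // <nil>
	fmt.Println(q.enqueueCnt.Load())        // 1
}
```

With the counter as an `atomic.Uint64` field, a metrics endpoint can read it per-queue without a global and without blocking writers.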
Honestly, all in all, it looks like a university student project, especially given the emphasis on spelling out computer-science concepts and Go specifics in the Readme (e.g. "Nils every slice entry so Go's tri-colour GC can reclaim memory quickly; resets counters", "Mutex ensures linearizability (each op appears instantaneous). Because we hold the lock only long enough to modify the slice, throughput scales linearly until contention on enqueue/dequeue dominates"). Nothing wrong with that, but it would be good to mark it as such.
Also I can highly recommend the book 100 Go Mistakes and How to Avoid Them which will address some points I see in this code as well.
Anyway keep up the good work and happy coding!
* What is the point of this project? NATS is pretty much the standard for this use case in Go, and you can also embed it in a Go binary.
* The code seems a bit "strange" to me: why not use existing libraries for structured logging? No unit tests, no use of interfaces (e.g. persistence could implement the writer interface), etc.
Calling out this triviality feels like a hallmark of AI slop for sure: "No, use a regular mutex instead of an RWMutex," and then the AI helpfully puts that footnote into the README.