I'm working on small SaaS projects and keep running into the same issue: background jobs require a lot of infrastructure. Even for simple things like delayed tasks or scheduled jobs I end up running Redis, queue workers, cron, retries, monitoring, etc. For bigger systems this makes sense, but for small apps it feels like too much.
I'm thinking about building a small service that would let you send a job via API and get an HTTP callback when it's time to run, without running your own queue or workers. Basically: no Redis, no workers, no cron, no queue server.
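For concreteness, the shape I have in mind is roughly this; the endpoint, host, and field names are all made up:

    # Hypothetical client call: schedule a job, get an HTTP callback later.
    # The URL and payload fields are invented for illustration.
    import requests

    requests.post(
        "https://jobs.example.com/v1/schedule",
        json={
            "run_at": "2026-03-01T12:00:00Z",  # when to fire
            "callback_url": "https://myapp.com/hooks/run-report",
            "payload": {"report_id": 42},      # echoed back in the callback
            "max_retries": 3,                  # the service owns retries
        },
        timeout=10,
    )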
Would something like this actually be useful, or am I trying to solve a problem that isn't really there?
I use Django with Procrastinate which uses Postgres for the task backend. Took a while to find the right Django setup, but it works like a dream.
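For anyone curious, the core of it looks roughly like this; treat it as a sketch rather than copy-paste, since the connector classes differ across Procrastinate versions:

    # Minimal Procrastinate sketch. Connector class names vary by version
    # (PsycopgConnector, SyncPsycopgConnector, ...), so check the docs for
    # yours; the task/defer API is the stable part.
    from procrastinate import App, PsycopgConnector

    app = App(connector=PsycopgConnector(conninfo="postgresql:///mydb"))

    @app.task
    def send_welcome_email(user_id: int):
        ...  # job logic lives here

    # Enqueue from anywhere in the app; a separate worker process started
    # with the `procrastinate worker` CLI pulls it from Postgres.
    with app.open():
        send_welcome_email.defer(user_id=42)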
If you’re running a single instance, you don’t even need any synchronization. If you’re running multiple instances of your app, try implementing locking. (This actually works in any language, not just Go; Go just helps with the multiple long-running workers part. With other languages, just run multiple instances.)
Process:
1. Each worker starts up with its own ID; a random UUID works fine.
2. When you need to create a task, add it to the tasks table, do nothing else and exit.
3. Each worker, running on some loop or cron, sets a lock on a subset of the tasks. Something like:

       update tasks
          set workerId = myUUID,
              lockUntil = now() + interval '10 minutes'
        where (workerId is null or lockUntil < now())
          and completed = false

Or you can do a SELECT ... FOR UPDATE, or whatever else keeps other workers from setting their IDs at the same time (there's a fuller sketch after this list).
4. When that's done, pull all tasks assigned to your worker, execute them, then clear the lock and mark them completed.
5. If your worker crashes, another will be able to pick it up after the lock expires.
No Redis, no additional libraries, still distributed.
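Here's the whole process as a minimal Python sketch, assuming psycopg 3 and a hypothetical tasks(id, payload, workerId, lockUntil, completed) table; names are illustrative, and the SKIP LOCKED clause is one way to keep concurrent workers off the same rows:

    # Sketch of the claim/execute/complete loop described above.
    import uuid
    import psycopg

    WORKER_ID = str(uuid.uuid4())  # step 1: random ID per worker
    DSN = "postgresql:///mydb"     # assumption: your connection string

    def run_task(payload):
        print("running", payload)  # stand-in for real job logic

    def claim_and_run(batch_size=10):
        with psycopg.connect(DSN) as conn:
            # Step 3: atomically claim unclaimed or expired tasks.
            # UPDATE ... RETURNING sets the lock and hands back the rows,
            # and SKIP LOCKED keeps concurrent workers from colliding.
            rows = conn.execute(
                """
                update tasks
                   set workerId = %s,
                       lockUntil = now() + interval '10 minutes'
                 where id in (
                         select id from tasks
                          where (workerId is null or lockUntil < now())
                            and completed = false
                          limit %s
                          for update skip locked
                       )
                returning id, payload
                """,
                (WORKER_ID, batch_size),
            ).fetchall()
            conn.commit()

            # Step 4: execute, then mark completed and clear the lock.
            # Step 5 comes for free: if we crash mid-loop, lockUntil
            # expires and another worker reclaims the task.
            for task_id, payload in rows:
                run_task(payload)
                conn.execute(
                    "update tasks set completed = true, workerId = null,"
                    " lockUntil = null where id = %s",
                    (task_id,),
                )
                conn.commit()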
It works, but I keep rewriting the same task table / locking / retry logic in every project, which is why I'm wondering if it makes sense to move this out into a separate service.
Not sure if it's actually a real problem or just my workflow though.
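Concretely, the retry part I keep rewriting looks something like this, assuming the same hypothetical tasks table from the sketch above plus an attempts column, with lockUntil doubling as a backoff timer:

    # Retry bookkeeping sketch (assumes an extra "attempts" integer column).
    import psycopg

    def mark_failed(conn: psycopg.Connection, task_id: int) -> None:
        conn.execute(
            """
            update tasks
               set workerId = null,
                   attempts = attempts + 1,
                   -- exponential backoff: retry after 1, 2, 4, ... minutes
                   lockUntil = now() + interval '1 minute' * power(2, attempts)
             where id = %s
            """,
            (task_id,),
        )
        conn.commit()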
Turn this into a library and import it into your projects.
Make the library work standalone. But also build a task manager service that people can deploy if they want to run it outside their code.
Then offer a hosted solution that does the webhooks.
I’m sure someone will want to pay for it.
I usually start with something simple, then add a task table, then locking, retries, then some kind of worker process, and eventually it turns into a small job system anyway.
At some point it starts feeling like I'm rebuilding the same queue/worker setup over and over, which is why I'm wondering if this should live outside the app entirely.
Thanks, this discussion is really helpful.
Even without Redis I still end up rebuilding some kind of job system on top of the DB, which makes me think it might belong outside the app.
I did this for a financial calculator API — every request is pure computation, inputs in, result out, nothing persisted. No Redis, no workers, no task table, no locking. The response is ready before a user would notice a queue anyway (sub-50ms).
Obviously only works when tasks complete in milliseconds. But figassis's pattern of "starts simple, then incrementally grows into a small job system anyway" often happens because the initial scope could have been fully synchronous — the async complexity creeps in before it's actually needed.
Worth asking first: does this task genuinely have to be async, or is it just easier to model it that way?
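For contrast, in that world the whole "job system" is just a request handler. A toy version (FastAPI here; endpoint and formula invented, not my actual API):

    # Pure computation in the request: inputs in, result out, nothing
    # persisted, no queue anywhere.
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/compound-interest")
    def compound_interest(principal: float, rate: float, years: int):
        # Finishes in microseconds; a queue would only add latency.
        return {"final_value": principal * (1 + rate) ** years}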
In practice async usually shows up once there are external APIs, retries, scheduling, or anything that shouldn't block the request, and that's where I end up building some kind of job system again.
I'm trying to figure out if that point happens often enough to justify moving this outside the app entirely.