So it's decorators for scheduling (@scheduler.every(5).minutes, @scheduler.daily.at("09:00")), state is saved to JSON so jobs survive restarts, and there's an optional FastAPI dashboard if you want to see what's running.
No Redis, no message broker; it runs in-process with your app. The trade-off is that it's single-process only, so if you need distributed workers, stick with Celery.
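For anyone wondering how a chain like every(5).minutes can even be a decorator: here's a toy, self-contained reconstruction of the pattern. To be clear, this is my own sketch, not the library's actual code; the Scheduler class, the state-file handling, and run() are all assumptions on my part, daily.at("09:00") would be the same trick with a next-run-time calculation instead of a fixed interval, and the decorator expression itself needs Python 3.9+'s relaxed decorator grammar (PEP 614).

```python
import json
import time
from pathlib import Path


class _Interval:
    """Returned by every(n); its attributes are the actual decorators."""

    def __init__(self, scheduler, n):
        self._scheduler = scheduler
        self._n = n

    @property
    def minutes(self):
        def decorator(fn):
            self._scheduler._register(fn, interval_s=self._n * 60)
            return fn
        return decorator


class Scheduler:
    def __init__(self, state_file="jobs.json"):
        self._state_path = Path(state_file)
        self._jobs = {}  # job name -> (callable, interval in seconds)
        # Reload last-run timestamps so jobs survive restarts.
        if self._state_path.exists():
            self._last_run = json.loads(self._state_path.read_text())
        else:
            self._last_run = {}

    def every(self, n):
        return _Interval(self, n)

    def _register(self, fn, interval_s):
        self._jobs[fn.__name__] = (fn, interval_s)

    def run(self):
        # Blocking polling loop; this is the "in-process, single process" part.
        while True:
            now = time.time()
            for name, (fn, interval_s) in self._jobs.items():
                if now - self._last_run.get(name, 0.0) >= interval_s:
                    fn()
                    self._last_run[name] = now
                    # Persist after every run; the JSON file is the whole "database".
                    self._state_path.write_text(json.dumps(self._last_run))
            time.sleep(1)


scheduler = Scheduler()

@scheduler.every(5).minutes
def poll_feed():
    print("polling the feed")

# scheduler.run()  # blocks forever; call it from your app's entry point
```

The fluent part is just every(n) returning an object whose attribute access produces the real decorator, which is also why a crash between fn() and the write_text() call could double-run a job: the persistence guarantees depend entirely on when the state file is flushed.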
I can think of two major ways to operationalize a Python script that needs to run continuously. One is containerization, which usually means Kubernetes, which already has a perfectly fine resource definition for cron jobs (CronJob). The other is to run the script on bare metal or in a VM, which means defining a service (a systemd unit, say) so the process can be managed appropriately: restarted if it dies, and the like. In other words, defining a service is about as much effort as defining a cronjob, and there's no escaping some amount of "ops work" that isn't encapsulated in a Python script.
So why not just use the tried-and-true prior art rather than worry about building and supporting your own secret third thing that others would then need to learn, support, maintain, and keep in mind when debugging a problem?