I did something similar 4 years ago with Ultralytics YOLO.
Back then I used chat message spikes as one of several variables to detect highlight and fail moments. It needed a lot of human validation, but it was so much fun.
Keep going
The Tech:
GPU Heavy: It uses decord and PyTorch for scene analysis. I'm calculating action density and spectral flux locally to find hooks before hitting an LLM (rough sketch below).
Local Audio: I'm using ChatterBox locally for TTS to avoid recurring API costs and privacy leaks (minimal quickstart below).
Rendering: Final assembly is offloaded to NVENC (example encode call below).
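For anyone curious what the hook scoring looks like in practice, here's a rough sketch of the idea, assuming decord for frame decoding and torchaudio for an audio track that's already been extracted to WAV. Function names, the sampling stride, and the FFT settings are illustrative, not lifted from the repo.

```python
# Rough sketch: score a VOD by "action density" (mean absolute pixel change
# between sampled frames) and audio spectral flux. Windows where both spike
# become hook candidates to send to the LLM. Run per-window in practice so
# you don't load an entire VOD into memory at once.
import torch
import torchaudio
from decord import VideoReader, cpu
from decord.bridge import set_bridge

set_bridge("torch")  # decord hands back torch tensors directly

def action_density(video_path: str, stride: int = 5) -> torch.Tensor:
    """Mean absolute difference between frames sampled every `stride` frames."""
    vr = VideoReader(video_path, ctx=cpu(0))
    idx = list(range(0, len(vr), stride))
    frames = vr.get_batch(idx).float() / 255.0      # (N, H, W, C)
    diffs = (frames[1:] - frames[:-1]).abs()
    return diffs.mean(dim=(1, 2, 3))                # one score per sampled gap

def spectral_flux(audio_path: str, n_fft: int = 2048, hop: int = 512) -> torch.Tensor:
    """Sum of positive magnitude-spectrum changes between consecutive STFT frames."""
    wav, sr = torchaudio.load(audio_path)
    mono = wav.mean(dim=0)
    spec = torch.stft(mono, n_fft=n_fft, hop_length=hop,
                      window=torch.hann_window(n_fft), return_complex=True).abs()
    return (spec[:, 1:] - spec[:, :-1]).clamp(min=0).sum(dim=0)

# Usage idea: normalise both signals, resample them onto a common timeline,
# take a weighted sum, and keep windows above a percentile threshold.
```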
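For the local TTS piece, a minimal sketch of generating one narration clip, assuming Resemble AI's chatterbox-tts package; the line of text is obviously a placeholder:

```python
# Minimal local TTS sketch, assuming Resemble AI's chatterbox-tts package.
# Everything stays on the local GPU, so there are no per-character API costs.
import torchaudio
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")
wav = model.generate("And that's how the run fell apart at the final boss.")
torchaudio.save("narration.wav", wav, model.sr)
```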
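And the NVENC hand-off boils down to an ffmpeg call with the h264_nvenc encoder; a sketch via subprocess, with placeholder paths and quality settings rather than the repo's exact flags:

```python
# Rough sketch of handing final assembly to NVENC via ffmpeg's h264_nvenc
# encoder. Paths, preset, and bitrates are placeholders.
import subprocess

def render_clip(src: str, dst: str, start: float, duration: float) -> None:
    subprocess.run([
        "ffmpeg", "-y",
        "-hwaccel", "cuda",              # decode on the GPU too, if supported
        "-ss", str(start), "-t", str(duration),
        "-i", src,
        "-c:v", "h264_nvenc", "-preset", "p5", "-cq", "23",
        "-c:a", "aac", "-b:a", "160k",
        dst,
    ], check=True)
```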
Looking for Collaborators: I'm currently looking for PRs specifically around:
Intelligent Auto-Zoom: Using YOLO/RT-DETR to follow the action in a 9:16 crop (rough tracking-crop sketch after this list).
Voice Engine Upgrades: Moving toward ChatterBoxTurbo or NVIDIA's latest TTS.
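For anyone eyeing the auto-zoom item: the rough shape is detect people per frame, smooth the detection centers over time, and slide a 9:16 window to follow them. A minimal sketch with Ultralytics YOLO follows; the detector weights, smoothing factor, and class filter are open questions, not settled choices.

```python
# Sketch of the auto-zoom idea: find people with YOLO, smooth the detection
# centers with an EMA, and crop a 9:16 window that follows the action.
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # or ultralytics.RTDETR for the RT-DETR route

def follow_crop(frames: list[np.ndarray], alpha: float = 0.2) -> list[np.ndarray]:
    h, w = frames[0].shape[:2]
    crop_w = int(h * 9 / 16)        # 9:16 window cut from a landscape frame
    cx = w / 2                      # smoothed horizontal center of the action
    out = []
    for frame in frames:
        result = model(frame, classes=[0], verbose=False)[0]  # class 0 = person
        boxes = result.boxes.xyxy.cpu().numpy()
        if len(boxes):
            target = (boxes[:, 0].mean() + boxes[:, 2].mean()) / 2
            cx = (1 - alpha) * cx + alpha * target             # EMA smoothing
        left = int(np.clip(cx - crop_w / 2, 0, w - crop_w))
        out.append(frame[:, left:left + crop_w])
    return out
```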
It's fully dockerized and ships with a Makefile. Would love some feedback on the pipeline architecture!

Still a cool tool, though, even if it does seem partly AI-generated.
HN is a niche audience, but "was this AI-generated?" seems to be the first question everyone has when opening a repo.
Which is odd, because the first question we should be asking is: does it work?
Personally, I can't see myself ever writing the bulk of a README by hand again; life's too short.
However, orchestrating things like decord with CUDA kernels, managing VRAM across parallel processes, and getting audio sync right with local TTS requires a deep understanding of the stack. An LLM can help write a function, but it won't solve the architectural 'glue' needed to make it a reliable CLI tool.
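To make the "glue" point concrete, here's a toy illustration of one such piece: serialising GPU-heavy stages and releasing cached VRAM between them, so decoding, TTS, and encoding don't fight over the card. The names are illustrative and not taken from the repo.

```python
# Toy illustration of the "glue": GPU-heavy stages share one device, so each
# takes a lock, does its work, then releases PyTorch's cached VRAM before the
# next stage runs. Stage functions are placeholders.
import gc
import threading
import torch

_gpu_lock = threading.Lock()

def run_gpu_stage(stage_fn, *args, **kwargs):
    """Run one GPU-bound pipeline stage exclusively, then free cached memory."""
    with _gpu_lock:
        try:
            return stage_fn(*args, **kwargs)
        finally:
            gc.collect()
            if torch.cuda.is_available():
                torch.cuda.empty_cache()   # hand the VRAM back before TTS/encode
```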
The project is open-source precisely because it’s a work in progress. It needs the 'human touch' for things like the RT-DETR auto-zoom and more nuanced video editing logic. PRs are more than welcome—I'd love to see where the community can push this beyond its current state.