If you’re looking for the official LTXV model and working ComfyUI flows, make sure to visit the right sources:
- Official site: https://www.lightricks.com
- Model + Playground: https://huggingface.co/Lightricks/LTX-Video
The LTXV model runs on consumer GPUs, and all ComfyUI flows should work reliably from these official resources. Some third-party sites (like ltxvideo.net or wanai.pro) are broken, misconfigured, or heavy on unnecessary assets—so stick to the official ones to avoid issues and missing content.
I am surprised that it can run on consumer hardware.
I assume it's for SEO or supply chain attacks or overcharging for subscriptions.
https://www.bleepingcomputer.com/news/security/fake-ai-video...
Seems unsafe to have a weird fake landing page as the main article.
For more information about the model, refer to these sources:
Model repository: https://github.com/Lightricks/LTX-Video
ComfyUI integration: https://github.com/Lightricks/ComfyUI-LTXVideo
Early community LoRAs: https://huggingface.co/Lightricks/LTXV-LoRAs
Banadoco Discord server, an excellent place for discussing LTXV and other open models (Wan/Hunyuan):
https://discord.com/channels/1076117621407223829/13693260067...
Loads of sites like this get submitted; what's the motivation, I wonder?
This is not an official page created by Lightricks, and we do not know who the owner of this page is or why they created it.
Best hint is the submission history of https://news.ycombinator.com/submitted?id=zoudong376 which shows similar unofficial sites for other projects.
Best case, an overenthusiastic fan; worst case, some bad actor trying to establish a "sleeper page".
Also shown: a warning that cdn.tailwindcss.com should not be used in production. To use Tailwind CSS in production, install it as a PostCSS plugin or use the Tailwind CLI: https://tailwindcss.com/docs/installation
There are a couple of JS errors, which I presume keep the videos from appearing.
[0] https://github.com/Lightricks/LTX-Video/blob/main/configs/lt...
NVIDIA 4090/5090 GPU 8GB+ VRAM (Full Version)
I have a 3070 with 8GB of VRAM.
Is there any reason I couldn’t run it (albeit slower) on my card?
But I don't own an AMD card to check; when I did, it randomly crashed too often doing machine learning work.
*OOM = Out Of Memory Error
It won't speed it up, but using a quantization that fits in VRAM will prevent the offload penalty.
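To make that concrete, here is a minimal sketch (my own, not from the LTX-Video repo) of loading the 13B transformer 4-bit quantized through diffusers so the per-step weights stay resident in VRAM rather than being shuttled back and forth. The class names, repo id, and exact arguments assume a recent diffusers release with LTX and bitsandbytes support, and may need adjusting:

```python
# Sketch: 4-bit (nf4) quantization of the LTX-Video transformer via bitsandbytes,
# so the ~13B weights (~7 GB at 4-bit) can stay in VRAM on an 8 GB card.
# Assumes: recent diffusers with the LTX pipelines, plus bitsandbytes installed.
import torch
from diffusers import LTXPipeline, LTXVideoTransformer3DModel, BitsAndBytesConfig

quant = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = LTXVideoTransformer3DModel.from_pretrained(
    "Lightricks/LTX-Video", subfolder="transformer",
    quantization_config=quant, torch_dtype=torch.bfloat16,
)
pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", transformer=transformer, torch_dtype=torch.bfloat16,
)
# The text encoder and VAE each run once per generation, so offloading them is
# cheap; the transformer, which runs on every denoising step, stays on the GPU.
pipe.enable_model_cpu_offload()

frames = pipe(
    prompt="a sailboat drifting at sunset, cinematic",
    width=704, height=480, num_frames=97,
).frames[0]
```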
> Yes, LTXV-13B is available under the LTXV Open Weights License. The model and its tools are open source, allowing for community development and customization.
UPDATE: This is text on an unofficial website unaffiliated with the project. BUT https://www.lightricks.com/ has "LTXV open source video model" in a big header at the top of the page, so my complaint still stands, even though the FAQ copy I'm critiquing here is likely not the fault of Lightricks themselves.
So it's open weights, not open source.
Open weights is great! No need to use the wrong term for it.
From https://static.lightricks.com/legal/LTXV-2B-Distilled-04-25-... it looks like the key non-open-source terms (by the OSI definition which I consider to be canon) are:
- Section 2: entities with annual revenues of at least $10,000,000 (the “Commercial Entities”) are eligible to obtain a paid commercial use license, subject to the terms and provisions of a different license (the “Commercial Use Agreement”)
- Section 6: To the maximum extent permitted by law, Licensor reserves the right to restrict (remotely or otherwise) usage of the Model in violation of this Agreement, update the Model through electronic means, or modify the Output of the Model based on updates
This is an easy fix: change that FAQ entry to:
> Is LTXV-13B open weights?
> Yes, LTXV-13B is available under the LTXV Open Weights License. The model is open weights and the underlying code is open source (Apache 2.0), allowing for community development and customization.
Here's where the code became Apache 2.0 6 months ago: https://github.com/Lightricks/LTX-Video/commit/cfbb059629b99...
However, if there is any active human involvement during training, one could claim that this makes the weights a human work and therefore copyrightable. For example, not too long ago I wrote a simple upscaler for gamescope while learning how to implement neural networks, and I did it in a somewhat "manual" manner: running the training for a bit, testing the output, modifying the code a little, adding or changing training data, then picking up from where the training stopped and continuing from there, and so on. One could claim that the weights I ended up with are the result of my own creative process (though TBH I wouldn't make that claim, nor am I comfortable with the idea myself, since we're talking about a few hundred numbers).
No good yet for anything professional, but more than enough for pornography.
The autoregressive generation feature allows you to condition new segments based on previously generated content. A ComfyUI implementation example is available here:
https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/e...
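The linked workflow is graph-based; as a rough code-level illustration of the same idea (my own sketch, not the ComfyUI example), you can chain segments by feeding the last frame of one generated clip back in as the conditioning image for the next. This assumes the diffusers LTX image-to-video pipeline and PIL-format frame output; names and files here are illustrative:

```python
# Sketch of autoregressive chaining: each new segment is conditioned on the
# last frame of the previous one. The official ComfyUI graph linked above
# does the equivalent with nodes.
import torch
from diffusers import LTXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = LTXImageToVideoPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "a hot air balloon drifting over snowy mountains"
image = load_image("first_frame.png")   # hypothetical starting frame
all_frames = []

for segment in range(3):                # three chained segments
    frames = pipe(image=image, prompt=prompt,
                  width=704, height=480, num_frames=97).frames[0]
    all_frames.extend(frames)
    image = frames[-1]                  # condition the next segment on the last frame

export_to_video(all_frames, "chained.mp4", fps=24)
```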
Wow.
https://x.com/lenscowboy/status/1920353671352623182
https://x.com/lenscowboy/status/1920513512679616600
https://discord.com/channels/1076117621407223829/13693260067...
The parameter count is more useful and concrete information than anything OpenAI or their competitors have put into the names of their models.
The parameter count gives you a heuristic for estimating if you can run this model on your own hardware, and how capable you might expect it to be compared to the broader spectrum of smaller models.
It also allows you to easily distinguish between different sizes of model trained in the same way, but with more parameters. It’s likely there is a higher parameter count model in the works and this makes it easy to distinguish between the two.
In this case it looks like this is the higher parameter count version; the 2B was released previously. (Not that it excludes them from making an even larger one in the future, although that seems atypical of video/image/audio models.)
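To make the "can I run it" heuristic concrete, here is a rough weights-only back-of-the-envelope calculation from the parameter count (my own sketch; real usage is higher once activations, the text encoder, and the VAE are included):

```python
# Rough VRAM needed just to hold the weights of a 13B-parameter model at
# different precisions; activations and other components add on top of this.
def weight_vram_gib(params_billions: float, bits_per_param: int) -> float:
    return params_billions * 1e9 * bits_per_param / 8 / 2**30

for bits in (16, 8, 4):
    print(f"13B @ {bits:2d}-bit ≈ {weight_vram_gib(13, bits):.1f} GiB")
# 13B @ 16-bit ≈ 24.2 GiB, @ 8-bit ≈ 12.1 GiB, @ 4-bit ≈ 6.1 GiB
```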
re: GP: I sincerely wish 'Open'AI were this forthcoming with things like param count. If they have a 'b' in their naming, it's only to distinguish it from the previous 'a' version, and don't ask me what an 'o' is supposed to mean.
OpenAI’s naming conventions have gotten out of hand.
I believe the “o” is supposed to mean “Omni” and indicate that the model is multi-modal.