NeuTTS Air – Open-source, on-device TTS
81 points
4 days ago
| 11 comments
| github.com
mrklol
11 minutes ago
[-]
The model says it only supports English; the demos on their page for other languages seem to use an older model, as the quality is worse.

But the current one seems really good. I tested it for quite a while with multiple kinds of input.

reply
joshstrange
11 hours ago
[-]
This is really neat. I cloned my voice and can generate speech from text, but I can't seem to generate longer clips. The README.md says:

> Context Window: 2048 tokens, enough for processing ~30 seconds of audio (including prompt duration)

But it's cutting off for me before even that point. I fed it a paragraph of text and it got part of the way through before skipping a few words ahead, saying a few more words, then cutting off at 17 seconds. Another test just cut off after 21 seconds (no skipping).

Lastly, I'm on an MBP M3 Max with 128GB running Sequoia. I'm following all the "Guidelines for minimizing Latency", but generating a 4.16-second clip takes 16.51s for me. Not sure what I'm doing wrong, or how you would use this in practice since it's not realtime and the limit is so low (and unclear). Maybe you're supposed to cut your text into smaller chunks and run them in parallel or in sequence to get around the limit?
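
For reference, the workaround I have in mind is roughly this (a minimal Python sketch; synthesize() is just a placeholder for whatever call the library actually exposes, since I haven't dug into its API):

    import re
    import numpy as np

    def synthesize(text: str) -> np.ndarray:
        """Placeholder for the actual model call; not the library's real API."""
        raise NotImplementedError("wire this up to the TTS model")

    def chunk_text(text: str, max_chars: int = 300) -> list[str]:
        # Split on sentence boundaries, then pack sentences into chunks that
        # stay comfortably under the ~30 s context window.
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        chunks, current = [], ""
        for s in sentences:
            if current and len(current) + len(s) + 1 > max_chars:
                chunks.append(current)
                current = s
            else:
                current = (current + " " + s).strip()
        if current:
            chunks.append(current)
        return chunks

    def synthesize_long(text: str) -> np.ndarray:
        # Generate each chunk separately and concatenate the waveforms.
        return np.concatenate([synthesize(c) for c in chunk_text(text)])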

reply
gardnr
12 hours ago
[-]
The model weighs 1.5GB [1] (the q4 quant is ~500MB)

The demo is impressive. It uses reference audio at inference time, and it looks like the training code is mostly available [2][3] with a reference dataset [4] as well.

From the README:

> NeuTTS Air is built off Qwen 0.5B

1. https://huggingface.co/neuphonic/neutts-air/tree/main

2. https://github.com/neuphonic/neutts-air/issues/7

3. https://github.com/neuphonic/neutts-air/blob/feat/example-fi...

4. https://huggingface.co/datasets/neuphonic/emilia-yodas-engli...
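
The sizes are easy to check locally, by the way: pull the repo from [1] and total up the files. A rough Python sketch (assumes huggingface_hub is installed and the repo isn't gated):

    from pathlib import Path
    from huggingface_hub import snapshot_download

    # Download (or reuse the cached copy of) the model repo, then sum file sizes.
    local_dir = snapshot_download(repo_id="neuphonic/neutts-air")
    total = sum(p.stat().st_size for p in Path(local_dir).rglob("*") if p.is_file())
    print(f"{total / 1e9:.2f} GB in {local_dir}")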

reply
kanwisher
1 hour ago
[-]
Need to hook this up to Home Assistant.
reply
nopelynopington
4 days ago
[-]
If this lives up to the demo, it's a huge development for anyone looking to do realistic TTS without paying to use an API.
reply
kristopolous
10 hours ago
[-]
There are quite a number of pretty low-overhead models around that do that in real time these days.
reply
foofoo12
2 hours ago
[-]
no
reply
ks2048
11 hours ago
[-]
Every couple of weeks I see a new TTS model showcased here, and it's always difficult to see how they differ from one another. Why don't they describe the architecture and the details of the training data?

My cynical side thinks people just take the state-of-the-art open-source model, use an LLM to alter the source, do minimal fine-tuning to change the weights, and then claim "we built our own state-of-the-art TTS".

I know it's open source, so I can dig into the details myself, but are there any good high-level overviews of modern TTS comparing/contrasting the top models?

reply
popalchemist
6 hours ago
[-]
The special sauce here is that it is built on a very small LLM (Qwen), which means it can run CPU-only, or even on micro devices like a Raspberry Pi or a mobile phone.

Architecturally it's similar to other LLM-based TTS models (like OuteTTS), but the underlying LLM allows them to release it under an Apache 2 license.
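
The general shape of these LLM-based TTS systems, as I understand it: phonemize the text, encode a reference clip into neural-codec tokens, let the small LLM predict codec tokens for the target speech, then decode them back into a waveform. The names below are illustrative stand-ins, not the project's actual API:

    # Conceptual sketch of an LLM-based TTS pipeline; every function is a stub.

    def phonemize(text: str) -> str:
        """Text -> phoneme string (this is where the espeak dependency comes in)."""
        ...

    def encode_reference(ref_wav) -> list[int]:
        """Reference clip -> neural-codec tokens that carry the speaker identity."""
        ...

    def generate_codes(phonemes: str, ref_codes: list[int]) -> list[int]:
        """The small (Qwen 0.5B-class) LLM autoregressively predicts codec tokens."""
        ...

    def decode(codes: list[int]):
        """Codec tokens -> waveform."""
        ...

    def tts(text: str, ref_wav):
        return decode(generate_codes(phonemize(text), encode_reference(ref_wav)))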

reply
DecoPerson
9 hours ago
[-]
Without the resources to run a study of whether the quality is actually better or worse than the other options, these open TTS models have to be judged by what you think of their output. (That is, do your own study.)

I've found some of them to be surprisingly good. I keep a list of them, as I have future project ideas that might need a good one, and each has its own merits.

I've yet to find one that does good spoken informal Chinese. I'd appreciate it if anyone can suggest one!

reply
aitchnyu
3 hours ago
[-]
Tangential, but how easy is it to verify the watermark with a smartphone, and how easy is it to erase it?
reply
miki123211
8 hours ago
[-]
> Install espeak (required dependency)

This means using this TTS in a commercial project is very dicey due to the GPL-3.0.
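
For context, espeak is only there for grapheme-to-phoneme conversion, presumably reached through the phonemizer package (I haven't verified which wrapper this repo uses), e.g.:

    # Assumes espeak is installed and reachable by the `phonemizer` package;
    # this is the usual setup, not verified against this particular repo.
    from phonemizer import phonemize

    print(phonemize("Hello world", language="en-us", backend="espeak", strip=True))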

reply
mlla
2 hours ago
[-]
If only English support is required, eSpeak could be replaced with MisakiSwift, which is under Apache 2.0: https://github.com/mlalma/MisakiSwift
reply
diggan
1 hour ago
[-]
Unfortunately it seems to be Mac/iPhone only. Any cross-platform alternatives?
reply
baby
5 hours ago
[-]
BTW, I was looking to train a TTS on my voice. What's the best way to do that locally today?
reply
oidar
10 hours ago
[-]
I really wish these voice-cloning TTS models would incorporate some sort of prosody control.
reply
curioussquirrel
11 hours ago
[-]
Could we finally get a decent open-source TTS app for Android? This project is very cool.
reply
noman-land
10 hours ago
[-]
SherpaTTS is decent.
reply
hsjdbsjeveb
10 hours ago
[-]
SherpaTTS?

On F-Droid

reply
deknos
3 hours ago
[-]
I thought this used Coqui, which is not really open source?
reply