VibeVoice: A Frontier Open-Source Text-to-Speech Model
448 points
5 months ago
| 46 comments
| microsoft.github.io
| HN
simiones
5 months ago
[-]
I read the comments praising these voices as very lifelike, and went to the page primed to hear very convincing voices. That is not at all what I heard, though.

The voices are decent, but the intonation is off on almost every phrase, and there is a very clear robotic-sounding modulation. It's generally very impressive compared to many text-to-speech solutions from a few years ago, but for today, I find it very uninspiring. The AI generated voice you hear all over YouTube shorts is at least as good as most of the samples on this page.

The only part that seemed impressive to me was the English + (Mandarin?) Chinese sample; that one seemed to switch very seamlessly between the two. But this may well be simply because (1) I'm not familiar with any Chinese language, so I couldn't really judge the pronunciation, and (2) the different character systems make it extremely clear that the model needs to switch between languages. Peut-être que cela n'aurait pas été si simple [perhaps it would not have been so simple] if it had been switching between two languages using the same writing system. I'm particularly curious how it would have read "simple" in the phrase above (I think it should be read with the French pronunciation, for example).

And, of course, the singing part is painfully bad, I am very curious why they even included it.

reply
Uehreka
5 months ago
[-]
Their comments about the singing and background music are odd. It's been a while since I've done academic research, but something about those comments gave me a strong "we couldn't figure out how to make background music go away in time for our paper submission, so we're calling it a feature" vibe, as opposed to a "we genuinely like this and think it's a differentiator" vibe.
reply
phildougherty
5 months ago
[-]
Totally felt the same way! Singing happens spontaneously? What?
reply
lyu07282
5 months ago
[-]
They mention that in the FAQ here: https://github.com/microsoft/VibeVoice/tree/main?tab=readme-...

> In fact, we intentionally decided not to denoise our training data because we think it's an interesting feature for BGM to show up at just the right moment. You can think of it as a little easter egg we left for you.

It's not a bug, it's a feature! Okaaaaay

reply
jstummbillig
5 months ago
[-]
Is there any better model you can point at? I would be interested in having a listen.

There are people (and it does not matter what the subject is) who will overstate the progress made, and others who will understate it, case in point. Neither should put a damper on progress. This is the best I personally have heard so far, but I certainly might have missed something.

reply
Uehreka
5 months ago
[-]
It’s tough to name the best local TTS since they all seem to trade off on quality and features and none of them are as good as ElevenLabs’ closed-source offering.

However Kokoro-82M is an absolute triumph in the small model space. It curbstomps models 10-20x its size in terms of quality while also being runnable on like, a Raspberry Pi. It’s the kind of thing I’m surprised even exists. Its downside is that it isn’t super expressive, but the af_heart voice is extremely clean, and Kokoro is way more reliable than other TTS models: It doesn’t have the common failure mode where you occasionally have a couple extra syllables thrown in because you picked a bad seed.

If you want something that can do convincing voice acting, either pay for ElevenLabs or keep waiting. If you’re trying to build a local AI assistant, Kokoro is perfect, just use that and check the space again in like 6 months to see if something’s beaten it. https://huggingface.co/hexgrad/Kokoro-82M

reply
refulgentis
5 months ago
[-]
There's a certain know-nothing feeling I get, and it makes me worried, when we start at the link (which has data showing it beats ElevenLabs on quality), jump to "eh, it's actually worse than anything I've heard in the last 2 years", and end up at "none are as good as ElevenLabs". The recommendation and the commentary on it, of course, have nothing to do with my feeling, cheers
reply
sandreas
5 months ago
[-]
What is your opinion about F5-TTS or Fish-TTS?
reply
brettpro
5 months ago
[-]
I recently implemented Fish for a project and found it adequate for TTS but wildly impressive at voice cloning. My POC originally required 3-10 audio samples, but I removed the minimum because it could usually one-shot it.

The model is good, but I will say their inference code leaves a lot to be desired. I had to rewrite large portions of it for simple things like correct chunking and streaming. The advertised expressive keywords are very much hit and miss, and the devs have gone dark unfortunately.

reply
sandreas
5 months ago
[-]
Did you consider contributing your improvements?
reply
lynx97
5 months ago
[-]
I cobbled together llm-tts to run as many local (and remote) TTS models as I could find and get working.

https://github.com/mlang/llm-tts

Strictly speaking, even music generation fits the usage pattern: text in, audio out.

llm-tts is far from complete, but it makes it relatively "easy" to try a few models in a uniform way.

reply
nipponese
5 months ago
[-]
Not OS or local, but just try ChatGPT Voice Conversation mode. To my ears, it's a generation ahead of these VibeVoice samples.
reply
riquito
5 months ago
[-]
Probably not even the best ones, but among some recent models I find Dia and Orpheus more natural

- http://dia-tts.com/

- https://github.com/canopyai/Orpheus-TTS

reply
popalchemist
5 months ago
[-]
Higgs Audio v2 is currently SOTA in OSS TTS.
reply
satellite2
5 months ago
[-]
Elevenlabs v3 (not local)
reply
whimsicalism
5 months ago
[-]
i think orpheus and sesame sound better
reply
rcarmo
5 months ago
[-]
One of the things this model is actually quite good at is voice cloning. Drop a recorded sample of your voice into the voices folder, and it just works.
reply
watsonmusic
5 months ago
[-]
bonus usage
reply
IshKebab
5 months ago
[-]
I agree. For some reason the female voices are waaay more convincing than the male ones too, which sound barely better than speech synthesis from a decade ago.
reply
selkin
5 months ago
[-]
Results correlate with investment, and there's more of it in synthesizing female-coded voices. As for why female-coded voices get more investment, we all know; the only difference is in attitude towards that (the correct answer, of course, is "it sucks")
reply
recursive
5 months ago
[-]
We all know? Female voices have better intelligibility? That's my guess anyway.
reply
kadoban
5 months ago
[-]
There's a lot of money and effort spent in satisfying the sexual desires of (predominantly straight) men. There's not typically quite as much interest in doing the same for women.

For example I've been looking at models and loras for generating images, and the boards are _full_ of ones that will generate women well or in some particular style. Quite often at least a couple of the preview images for each are hidden behind a button because they contain nudity. Clearly the intent is that they are at least able to generate porn containing women. There's a small handful that are focused on men and they're very aware of it, they all have notes lampshading how oddball they are to even exist.

I would expect that this is not as pronounced an effect in the world generating speech, but it must still exist.

reply
lacy_tinpot
5 months ago
[-]
I think this is a very lazy kind of cultural analysis. The reason female voices are being chosen over male ones is a little more multifaceted than just SEX. Heterosexual women also tend to prefer female voices over male ones.

Female voices are often rated as being clearer, easier to understand, "warmer", etc.

Why this is the case is still an open question, but it's definitely more complex than just SEX.

reply
kadoban
5 months ago
[-]
I don't think that this is the only factor, I just suspect that it is _a_ factor.
reply
lacy_tinpot
5 months ago
[-]
>There's not typically quite as much interest in doing the same for women.

Women also prefer female voices.

reply
kadoban
5 months ago
[-]
Okay. I'd happily believe that, it doesn't contradict what I said.

The quote you have from me is from this context:

> There's a lot of money and effort spent in satisfying the sexual desires of (predominantly straight) men. There's not typically quite as much interest in doing the same for women.

In that context, your response is impossible to respond to. Do you even disagree with what I said or do you (like me) just think that there are other factors in addition?

Any particular reason you're being kind of a dick btw?

reply
selkin
5 months ago
[-]
That you consider it sex (rather than gender) is exactly why there's a preference for female-coded voices. Consider where we do hear male recorded voices used as defaults.
reply
recursive
5 months ago
[-]
Overloaded term. It was a reference to the parent's reference.

> satisfying the sexual desires of

So, "sex" as a reference to "sexual desires". In English, it just so happens that "sex" has other meanings, but those weren't in play at the time.

reply
akimbostrawman
5 months ago
[-]
How the hell would you determine someone's self-assigned social gender based on their voice, which is a result of their physical sex?
reply
pylotlight
5 months ago
[-]
woosh
reply
selkin
5 months ago
[-]
If you don't know, it's on you to learn. If you do know and prefer to make an asshole of yourself, that's also on you.
reply
odie5533
5 months ago
[-]
It's good but not the best free model. I find Chatterbox to be more realistic, with no robotic sound and better (though not perfect) intonation.
reply
lastdong
5 months ago
[-]
Chatterbox sounds great, their demo page is a good introduction: https://resemble-ai.github.io/chatterbox_demopage/
reply
eaglehead
5 months ago
[-]
I agree. We switched from elevenlabs to chatterbox (hosted on Resemble.ai) and it is much much cheaper and better.
reply
iansinnott
5 months ago
[-]
The English/Mandarin section was VERY impressive. The accents of both the woman speaking English and the man speaking Chinese were spot on. Both sound very convincingly like they are speaking a second language; anyone here can hear that in the Chinese woman's English. I'd like to add that the foreigner speaking Chinese was also spot on.
reply
echelon
5 months ago
[-]
This is close to SOTA emotional performance, at least the female voices.

I trust the human scores in the paper. At least my ear aligns with that figure.

With stuff like this coming out in the open, I wonder if ElevenLabs will maintain its huge ARR lead in the field. I really don't see how they can continue to maintain a lead when their offering is getting trounced by open models.

reply
kamranjon
5 months ago
[-]
Hmmmm… what is your opinion on the examples showcased here vs the ones on the Dia demo page?

https://yummy-fir-7a4.notion.site/dia

I am not sure why but I find the pacing of the parakeet based models (like Dia) to be much more realistic.

reply
watsonmusic
5 months ago
[-]
11labs is facing a real competitor
reply
skripp
5 months ago
[-]
The male Chinese speakers had THICK American accents. Nothing really wrong with the language, but think of the stereotypical German speaking English. That was kind of strange to me.
reply
ascorbic
5 months ago
[-]
I think it's because it was using the American voice for it. Conversely the female voice in the Mandarin conversation spoke English with a Chinese accent.
reply
mclau157
5 months ago
[-]
ElevenLabs has a much more convincing voice model
reply
sys32768
5 months ago
[-]
They also offer an AI Voice Changer that will take a recording and transform it into a different voice but retain the cadence and intonation.
reply
DrBenCarson
5 months ago
[-]
Open source?
reply
watsonmusic
5 months ago
[-]
it's not oss
reply
johanyc
5 months ago
[-]
The Chinese is good. In the Mandarin-to-English example she sounds native. The English-to-Mandarin one sounds good too, but he does have an English speaker's accent, which I think is intentional.
reply
MengerSponge
5 months ago
[-]
> (1) I'm not familiar with any Chinese language, so I couldn't really judge the pronunciation of that

https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect

reply
giancarlostoro
5 months ago
[-]
I really hope someone within Microsoft is naming their open source coding agent Microsoft VibeCode. Let this be a thing. It's either that or "Lo", then you can have Lo work with Phi, so you can vibe code with Lo Phi.

https://techcommunity.microsoft.com/blog/azure-ai-foundry-bl...

reply
simiones
5 months ago
[-]
Knowing the history of Microsoft marketing, it will either be called something like "Microsoft Copilot Code Generator for VSCode" or something like "Zunega"...
reply
giancarlostoro
5 months ago
[-]
Well don't forget "Microsoft SQL" ;) They'll name something as though they invented it and then have the worst possible way to google it.
reply
kelvinjps10
5 months ago
[-]
For me it doesn't sound like they invented it, but that it's Microsoft's version of SQL. Idk, I just hate Microsoft's version of anything.
reply
loloquwowndueo
5 months ago
[-]
“Microsoft Word” haha reminds me of the old joke : “Microsoft Works” is an oxymoron.
reply
giancarlostoro
5 months ago
[-]
Oh my goodness, I forgot about "Microsoft Works" you just shot me back in time to the 2000s
reply
esafak
5 months ago
[-]
You misquoted Microsoft "Works"
reply
parineum
5 months ago
[-]
Just like MariaDB sounds as though they invented databases, right?
reply
cush
5 months ago
[-]
Later renamed to Microsoft Zune, a handheld AI companion that lives in your pocket
reply
polytely
5 months ago
[-]
GitHub Dotnet Copilot Code Generator for VSC (new)
reply
datadrivenangel
5 months ago
[-]
(preview)
reply
yellowapple
5 months ago
[-]
Microsoft Copilot .NET for Workgroups
reply
airstrike
5 months ago
[-]
Now I need a new project just so I can call it Zunega... lmao
reply
watsonmusic
5 months ago
[-]
genius
reply
malnourish
5 months ago
[-]
This is clearly high quality, but there's something about the voices, the male voices in particular, that immediately registers as computer generated. My audio vocabulary is not rich enough to articulate what it is.
reply
heeton
5 months ago
[-]
I'm no audio engineer either, but those computer voices sound "saw-tooth"y to me.

From what I understand, it's the more basic models/techniques that undersample, so there is a series of audio pulses which gives the output that buzzy quality. Better models produce smoother output.

https://www.perfectcircuit.com/signal/difference-between-wav...
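As a rough illustration of that buzzy quality: a naive (non-band-limited) sawtooth dumps energy into high harmonics that a pure tone doesn't have. A quick numpy sketch (the waveforms and cutoff are my own illustrative choices, not anything from a TTS internals):

```python
import numpy as np

sr = 16_000             # sample rate (Hz)
f0 = 220.0              # fundamental (Hz)
t = np.arange(sr) / sr  # one second of time samples

sine = np.sin(2 * np.pi * f0 * t)
# Naive sawtooth: ramps from -1 to 1 once per period
saw = 2 * ((f0 * t) % 1.0) - 1

def high_freq_ratio(x, cutoff_hz=2_000):
    """Fraction of spectral energy above cutoff."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / sr)
    return spec[freqs > cutoff_hz].sum() / spec.sum()

# The sawtooth's harmonics (falling off as 1/n) put far more
# energy above 2 kHz than the pure sine does: that's the "buzz".
print(high_freq_ratio(sine), high_freq_ratio(saw))
```

The sine concentrates essentially all its energy at 220 Hz, while the sawtooth spreads a measurable fraction above 2 kHz.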

reply
codebastard
5 months ago
[-]
I would describe it as blocky: as if, when we visualize the sound wave, it is without peaks, cut off at the top and bottom, producing a metallic, boxy echo.
reply
jofzar
5 months ago
[-]
Yeah, it sounds super low-bitrate to me; reminds me of someone on a Bluetooth microphone.
reply
lvncelot
5 months ago
[-]
After hearing them myself, I think I know what you mean. The voices get a bit warbly and sound at times like they are very mp3-compressed.
reply
strangescript
5 months ago
[-]
The male voices seem much worse than the female voices, borderline robotic. Every sample on their website starts with a female voice. They are clearly aware of the issue.
reply
jsomedon
5 months ago
[-]
I felt the same, male voice feels kinda artificial.
reply
davorak
5 months ago
[-]
Any insight on why the code and the large model were removed? Some copies are floating around and are MIT-licensed. In cases like this I don't understand why projects are yanked: if the project was mistakenly released under MIT and copied elsewhere, is any damage control possible by yanking the copies you have control over? Mostly it seems like bad PR, if minor.
reply
androiddrew
5 months ago
[-]
Ok anyone have a link to the code and weights?
reply
fivestones
5 months ago
[-]
Wondering this too.
reply
aargh_aargh
5 months ago
[-]
Is there a current, updated list (ideally, a ranking) of the best open weights TTS models?

I'm actually more interested in STT (ASR) but the choices there are rather limited.

reply
Uehreka
5 months ago
[-]
Yes: https://huggingface.co/models?pipeline_tag=text-to-speech

Generally if a model is trending on that page, there’s enough juice for it to be worth a try. There’s a lot of subjective-opinion-having in this space, so beyond “is it trending on HF” the best eval is your own ears. But if something is not trending on HF it is unlikely to be much good.

reply
odie5533
5 months ago
[-]
Best TTS: VibeVoice, Chatterbox, Dia, Higgs, F5 TTS, Kokoro, Cosy Voice, XTTS-2.
reply
kroaton
5 months ago
[-]
Unmute.sh (same team as Kokoro) gets slept on, but it's really good.
reply
xnx
5 months ago
[-]
Click leaderboard in the hamburger menu: https://huggingface.co/spaces/TTS-AGI/TTS-Arena-V2
reply
prophesi
5 months ago
[-]
Is there a way to filter out hosted models? The top three winners currently are all proprietary as far as I can tell.

edit: Ah, there's a lock icon next to the name of each proprietary model.

reply
odie5533
5 months ago
[-]
That's a highly incomplete comparison
reply
watsonmusic
5 months ago
[-]
yes the best
reply
TheAceOfHearts
5 months ago
[-]
Unfortunately it's not usable if you're GPU-poor. Couldn't figure out how to run this with an old 1080. I tried VibeVoice-1.5B on my old CPU with torch.float32 and it took 832 seconds to generate a 66 second audio clip. Switching from torch.bfloat16 also introduced some weird sound artifacts in the audio output. If you're GPU-poor the best TTS model I've tried so far is Kokoro.

Someone else mentioned in this thread that you cannot add annotations to the text to control the output. I think for these models to really level up there will have to be an intermediate step that takes your regular text as input and it generates an annotated output, which can be passed to the TTS model. That would give users way more control over the final output, since they would be able to inspect and tweak any details instead of expecting the model to get everything correctly in a single pass.
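That two-pass idea can be sketched with a toy rule-based annotator standing in for the first model. The tag vocabulary here (`<pause>`, `<emphasis>`) and the function names are invented for illustration, not taken from any real TTS:

```python
import re

def annotate(text: str) -> str:
    """Toy rule-based stand-in for a hypothetical annotation model."""
    # Mark a longer pause after sentence-final punctuation...
    text = re.sub(r'([.!?])\s+', r'\1 <pause ms="400"> ', text)
    # ...and flag ALL-CAPS words for emphasis.
    text = re.sub(r'\b([A-Z]{2,})\b', r'<emphasis>\1</emphasis>', text)
    return text

def synthesize(annotated: str) -> bytes:
    """Placeholder for the TTS pass that would consume the tags."""
    raise NotImplementedError

draft = annotate("This is NOT what I said. Try again.")
print(draft)
# The user can inspect and tweak the tags here before synthesis,
# instead of hoping the model gets everything right in one pass.
```

The point is the workflow: the intermediate annotated text is plain, human-editable, and sits between the two model passes.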

reply
tempodox
5 months ago
[-]
This is ludicrous. macOS has had text-to-speech for ages with acceptable quality, and they never needed energy- and compute-expensive models for it. And it reacts instantly, not after ridiculous delays. I cannot believe this hype about “AI”, it’s just too absurd.
reply
NitpickLawyer
5 months ago
[-]
> with acceptable quality

Compared to IBM's Stephen Hawking chair, maybe. But Apple TTS is not acceptable quality by any modern understanding of SotA, IMO.

reply
selkin
5 months ago
[-]
Different use cases:

If you need non-visual output of text, SotA is a waste of electrons.

If you want to try and mimic a human speaker, then it ain’t.

The question is why you would need the computer to sound more human, except for "because I can".

reply
NitpickLawyer
5 months ago
[-]
I tried listening to audiobooks generated with tts. It takes me out of it most of the time, and I lose focus. That podcast thing from google was the first time I felt like I could listen to an entire thing without feeling the uncanny valley thing. And I knew it was genAI. So I'm looking for that, but for my content. Grab a bunch of articles (long form, deeply researched) and "podcast" them but with natural voices, sans hype. Or books. Have them ready when I'm out and about.
reply
andrew_lettuce
5 months ago
[-]
The Google podcasts are so cringey positive it emotionally pains me. Nobody finds pineapple on pizza that amazing.
reply
lagniappe
5 months ago
[-]
>Nobody finds pineapple on pizza that amazing

We can't be friends

reply
crazygringo
5 months ago
[-]
Audiobooks and other material you want to listen to (articles, blog posts, etc.).

There's a lot of stuff I don't have time to sit down and read, but want to listen to while I cook/laundry/shower/drive/etc.

Often recordings don't exist. Or when they do, an audiobook just has a bad voiceover artist, or one that just rubs you the wrong way.

The more human text-to-speech sounds, the easier and less distracting it is to listen to. There's real value in it, it's not "because I can".

You know how it's nicer to read in 300 dpi instead of 72 dpi? Or in Garamond rather than Courier? Or in Helvetica rather than Comic Sans? It's like that, only for speech.

reply
Ukv
5 months ago
[-]
> Question is why would you need to have the computer sound more human

I think translation would be a big use - maybe translating your voice to another language while maintaining emotion and intonation, or dubbing content (videos, movies, podcasts, ...) that isn't otherwise available in your native language.

Traditional non-ML TTS for longer content like podcasts or audiobooks seems like it'd become grating to the point of being unlistenable, or at least a significantly worse experience. Stands to benefit from more natural sounding voices that can place emphasis in the right places.

Since Stephen Hawking was brought up, there are likely also people with voice-impairing illnesses who would like to speak in their own voice again (in addition to those who are fine with a robotic voice). Or alternatively, people who are uncomfortable with their natural voice and want to communicate closer to how they wish to be perceived.

Could also potentially be used for new forms of interactive media that aren't currently feasible - customised movies, audio dramas where the listener plays a role, videogame NPCs that react with more than just prerecorded lines, etc.

reply
baxuz
5 months ago
[-]
Looking forward to the day when tts and speech recognition will work on Croatian, or other less prevalent languages.

It seems that only variants of English, Spanish, and Chinese are somewhat working.

reply
lukax
5 months ago
[-]
Have you tried Soniox for speech recognition? It supports Croatian. Or are you just looking for self-hosted open-source models? Soniox is very cheap ($0.1/h for async, $0.12/h for real-time) and you get $200 free credits on signup.

https://soniox.com/

Disclaimer: I used to work for Soniox

reply
baxuz
5 months ago
[-]
I meant in general-purpose tools from Google and Apple. Most of this assistant and "AI" stuff is practically useless for me because I refuse to talk to my devices in English.

In Android Auto / CarPlay I can't even get voice guidance that works properly, much less notification reading or composing a reply using STT.

reply
Insanity
5 months ago
[-]
What an odd name to me, because "vibe" is, in my mind, equal to somewhat poor quality. Like "vibe coding". But that's probably just some bias on my side.
reply
mxfh
5 months ago
[-]
Vibe coding only became a term this spring. I doubt that the substantial part of this research project, like giving it a project code name and getting company approval, started after that. It's not like vibe has a negative connotation in general yet.
reply
Insanity
5 months ago
[-]
'Vibe' as a word / product was definitely less common though. I kinda doubt that 'VibeVoice' is _not_ a consequence of 'VibeCode'.

But I do agree with you in that generally there's probably no negative connotation (yet).

reply
andrew_lettuce
5 months ago
[-]
Vibe always meant "specific feel", and it makes sense for AI coding "by touch" vs. understanding what's actually happening. It's just that the results have now made the word pejorative.
reply
rafaelmn
5 months ago
[-]
The Spontaneous Emotion dialog sounds like a team member venting through LLMs.

They could have skipped the singing part, it would be better if the model did not try to do that :)

reply
kridsdale1
5 months ago
[-]
It did get me to look up the song [1] again though, which is a great stimulator of emotion. The robot singing has a long way to go.

1. https://music.youtube.com/watch?v=xl8thVrlvjI&si=dU6aIJIPWSs...

reply
eibrahim
5 months ago
[-]
Hahahah. Thats what I thought too
reply
Meneth
5 months ago
[-]
Open-source, eh? Where's the training data, then?
reply
Joel_Mckay
5 months ago
[-]
Scraped data is often full of copyright, usage agreement, and privacy law violations.

Making it "open" would be unwise for a commercial entity. =3

reply
zoobab
5 months ago
[-]
Open source is being abused to not provide the actual source. Stop this.
reply
Joel_Mckay
5 months ago
[-]
A lot of code carries multiple FOSS licenses that are not contaminating like the GPL. GPL violations do occur on code, but they have nothing to do with the training data.

For example, many academic data sets are not public domain and can't be used in a commercial context. A GPL claim on that data is often an argument over which thief showed up first.

Rule #24: a lawyer's strategic truth is to never lie, but also to avoid voluntarily disclosing information that may help opponents.

Thus, a business will never disclose they paid a fool to break laws for them... =3

reply
nullc
5 months ago
[-]
Perhaps, but it is not Open Source in the traditional sense if they do not provide the preferred form for modifications.
reply
Joel_Mckay
5 months ago
[-]
There are also some weird OSS license rules that only trip the disclosure obligation when distributing the build to end users.

Indeed, these adversarial behaviors do not follow the spirit of FOSS community standards. If a project started as FOSS, then FOSS it should remain. =3

reply
crvdgc
5 months ago
[-]
Very impressive that it can reproduce the Mandarin accent when speaking English and English accent when speaking Mandarin.
reply
stuffoverflow
5 months ago
[-]
VibeVoice-Large is the first local TTS that can produce convincing Finnish speech with little to no accent. I tinkered with it yesterday and was pleasantly surprised at how good the voice cloning is and how it "clones" the emotion in the speech as well.
reply
lxe
5 months ago
[-]
There are 2 "best" TTS models out right now: HiggsAudio and VibeVoice. I found that Higgs is both faster and much higher fidelity than Vibe. Can't speak to expressiveness, but don't sleep on it.
reply
data-ottawa
5 months ago
[-]
Looks like the repo went private

https://github.com/microsoft/VibeVoice

I was trying to get this working on strix halo.

reply
glenstein
5 months ago
[-]
Very good, and I could see how I might believe they are real people if I let my guard down. The male voice sounded a little sedated, though, and there was a smoothness to it that could get samey over long stretches.

Still not at the astonishing level of Google Notebook text to speech which has been out for a while now. I still can't believe how good that one is.

reply
regularfry
5 months ago
[-]
Ok, this is nit-picking, but it's very obvious that the sample voices these were trained with were captured in different audio environments. There's noticeable reverb on the male voice that's not there on the other.

So that's a useful next step: for multi-voice TTS models, make them sound like they're in the same room.
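One post-hoc workaround (a mixing fix, not a model fix): render each voice dry, then convolve both with the same synthetic room impulse response so they share an acoustic space. A minimal numpy sketch, with random noise standing in for the two voices:

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 24_000

# Toy room impulse response: exponentially decaying noise tail
ir = rng.standard_normal(sr // 4) * np.exp(-np.linspace(0, 8, sr // 4))
ir /= np.abs(ir).sum()  # normalize so the convolution doesn't blow up

def put_in_room(dry: np.ndarray) -> np.ndarray:
    """Convolve a dry voice with the shared impulse response."""
    return np.convolve(dry, ir)[: len(dry)]

voice_a = rng.standard_normal(sr)  # stand-in for dry male voice
voice_b = rng.standard_normal(sr)  # stand-in for dry female voice
wet_a, wet_b = put_in_room(voice_a), put_in_room(voice_b)
```

Because both voices pass through the same impulse response, they pick up identical reverb character, which is the "same room" effect the comment asks for.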

reply
viggity
5 months ago
[-]
I feel like this is a step in the right direction: a lot of emotive text-to-speech models only change the duration and loudness of each word, whereas here the timing/pauses are better too.

I would love to have a model that can make sense of things like stressing particular syllables or phonemes to make a point.

reply
watsonmusic
5 months ago
[-]
this model is superb
reply
cush
5 months ago
[-]
To me this is like early generative AI art, where the images came out very "smooth" and visually buttery, except here there's no timbre to the voices. Intonation issues aside, these models could use a touch of vocal fry and some body to be more believable.
reply
bityard
5 months ago
[-]
I thought the name sounded familiar, I'm guessing its no relation to this project which has been around for 7 months? https://github.com/mpaepper/vibevoice
reply
mpaepper
5 months ago
[-]
Unfortunate naming, given that 7 months ago I gave the name vibevoice to my repo, which does open-source, locally running speech-to-text:

https://github.com/mpaepper/vibevoice

reply
faxmeyourcode
5 months ago
[-]
I tried the colab notebook that they link to and couldn't replicate the quality for whatever reason. I just swapped out the text and let it run on the introduction paragraph of Metamorphosis by Franz Kafka and it seemingly could not handle the intricacies.
reply
wewewedxfgdf
5 months ago
[-]
I'm really hoping one day there will be a TTS that does really nice British accents. I've surveyed them all deeply; none do.

Most that claim to do a British accent end up sounding like Kelsey Grammer: sort of an American accent pretending to be British.

reply
specproc
5 months ago
[-]
I'd like one that really nails Brummie.
reply
xp84
5 months ago
[-]
I’m just a yank, but a lot of the AI-voiced videos on YouTube that I’ve been listening to while I’m falling asleep lately have British voices that sound quite nice to me.
reply
ndkap
5 months ago
[-]
Here is AI being as close as possible to the most animated person I know and here I am sounding robotic in every conversation I have, despite my best efforts to sound otherwise. Sometimes, I just wish I could have an AI speak for me
reply
lyu07282
5 months ago
[-]
Did they delete the repo? It's 404 for me now: https://github.com/microsoft/VibeVoice
reply
RealtyDAO
5 months ago
[-]
they must have removed it.. been down for hrs.
reply
lyu07282
5 months ago
[-]
Repo is back but code is gone, with this statement:

> 2025-09-05: VibeVoice is an open-source research framework intended to advance collaboration in the speech synthesis community. After release, we discovered instances where the tool was used in ways inconsistent with the stated intent. Since responsible use of AI is one of Microsoft’s guiding principles, we have disabled the repo until we are confident that out-of-scope use is no longer possible.

What was that about?

reply
bazlan
5 months ago
[-]
Sad to not see vui on the comparisons!

A 100M podcast model

https://huggingface.co/spaces/fluxions/vui-space

reply
ementally
5 months ago
[-]
they vibecoded their demo website? the text is invisible on Firefox.
reply
double_one
5 months ago
[-]
Same problem here. A quick refresh solved it for me — maybe try that?
reply
recursive
5 months ago
[-]
Works for me
reply
anarticle
5 months ago
[-]
The first example sounds like a cry for help.

Some of them have tone wobbles, which IIRC were more common in early TTS models. Looks like the huge context window is really helping out here.

reply
baal80spam
5 months ago
[-]
Wow. I admit that I am not a native speaker, but this looks (or rather, sounds) VERY impressive and I could mistake it for hearing two people talking.
reply
x187463
5 months ago
[-]
The giveaway is they will never talk over each other. Only one speaker at a time, consistently.
reply
tracker1
5 months ago
[-]
Fair enough... though it would be possible to generate the turns and then edit the output to overlay the speech, introducing stutters/pauses at the beginnings and ends of statements.

You'd probably want to do something similar for balance/crossfade anyway, having each speaker's output offset from center instead of straight mono.
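A sketch of that overlay step, assuming each turn comes back from the TTS as a mono numpy array: constant-power panning to offset the speakers from center, plus an overlap when mixing. Sine tones stand in for the rendered turns:

```python
import numpy as np

def pan(mono: np.ndarray, position: float) -> np.ndarray:
    """Constant-power pan: position -1 (left) .. +1 (right) -> stereo."""
    theta = (position + 1) * np.pi / 4
    return np.stack([mono * np.cos(theta), mono * np.sin(theta)], axis=1)

def overlap_mix(a: np.ndarray, b: np.ndarray, overlap: int) -> np.ndarray:
    """Mix stereo clip b so it starts `overlap` samples before a ends."""
    out = np.zeros((len(a) + len(b) - overlap, 2))
    out[:len(a)] += a
    out[len(a) - overlap:] += b
    return out

sr = 24_000
t = np.arange(sr) / sr
speaker1 = pan(0.3 * np.sin(2 * np.pi * 200 * t), -0.4)  # slightly left
speaker2 = pan(0.3 * np.sin(2 * np.pi * 300 * t), +0.4)  # slightly right
mixed = overlap_mix(speaker1, speaker2, overlap=sr // 4)  # 250 ms overlap
```

The 250 ms overlap is where you'd place the interruption/stutter; panning each speaker slightly off-center keeps the two voices spatially distinct in the mix.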

reply
kaptainscarlet
5 months ago
[-]
Also, the lack of stutter and the perfect flow of speech are a dead giveaway.
reply
kridsdale1
5 months ago
[-]
And longer pause between turns than humans would do.
reply
tracker1
5 months ago
[-]
Yeah, a lot of the TTS has gotten really impressive in general. Definitely a clear leap from the TTS stuff I worked with for training simulations a bit over a decade ago. Aside: installing a sound card (unused) on a Windows server just to be able to generate TTS was interesting. It was required by the platform, even if it wasn't used for playback.

I generally don't like a lot of the AI-generated slop that's starting to pop up on YouTube these days. I did enjoy some of the reddit story channels, but have completely stopped with it all now. With the AI stuff, it really becomes apparent with dates/ages and when numbers are spoken. Dates/ages/timelines are just off in the generated stories and really should be human-tweaked. As for the voice gen, the way it reads a year or a measurement is just not how English speakers (US or otherwise) speak.

reply
qwertytyyuu
5 months ago
[-]
Whoa, they even imitate the western Chinese accent well
reply
ml_basics
5 months ago
[-]
what's the relationship between this work and the recently announced voice models from Microsoft AI? https://microsoft.ai/news/two-new-in-house-models/
reply
ehutch79
5 months ago
[-]
The examples are kind of off-putting. We're definitely in uncanny valley territory here.
reply
nextworddev
5 months ago
[-]
Still haven’t found anything better than Kokoro TTS. Anyone know of something that beats it?
reply
egorfine
5 months ago
[-]
[deleted - I'm an idiot]
reply
x187463
5 months ago
[-]
Whisper is speech-to-text. VibeVoice is text-to-speech.
reply
mpeg
5 months ago
[-]
There is a text-to-speech version of whisper, but IMHO the quality is much worse than the demos of this model.
reply
x187463
5 months ago
[-]
Are you referring to this?

https://github.com/WhisperSpeech/WhisperSpeech

Or is there some OpenAI official Whisper TTS?

reply
mpeg
5 months ago
[-]
Yep, nothing official that I know, but that one is fairly popular so maybe they were referring to it (although AFAIK it's not frontier?)
reply
egorfine
5 months ago
[-]
I stand corrected
reply
weeb
5 months ago
[-]
does anyone know of recent TTS options that let you specify IPA rather than written words? Azure lets you do this, but something local (and better than existing OS voices) would be great for my project.
reply
andybug
5 months ago
[-]
I'm using Kokoro via https://github.com/remsky/Kokoro-FastAPI. It has a `generate_audio_from_phonemes()` endpoint that I'm sure maps to the Kokoro library if you want to use it directly.

My usage is for Chinese, but the phonemes it generated looked very much like IPA.

reply
swiftcoder
5 months ago
[-]
Ah, yes, the Furious 7 soundtrack. Definitely something everyone recalls
reply
closewith
5 months ago
[-]
The most popular song of the year from one of the most popular movie franchises that had been in the global news due to the death of its star. Probably the most memorable song from a soundtrack of the century so far.
reply
agos
5 months ago
[-]
I'm Just Ken (Barbie), Skyfall, Let It Go (Frozen), Remember Me (Coco), Happy (Despicable Me 2), and Shallow (A Star Is Born) are all arguably wayyyyy more memorable, and these are just off the top of my head. We've had quite a few memorable soundtrack songs this millennium.

edit: I had forgotten about Jai Ho (Slumdog Millionaire) and Lose Yourself (8 Mile)

reply
closewith
5 months ago
[-]
It's obviously subjective, but in terms of numbers the only contender in that list is Let It Go, which had about 1/3rd the reach.

Nothing on that list - movies or songs - had the cultural impact of Furious 7 or See You Again.

reply
ascorbic
5 months ago
[-]
And most recently "Golden"
reply
throwaw12
5 months ago
[-]
Will there be support for SSML to allow more control over the conversation?
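For context, SSML is the W3C standard markup that engines like Azure Speech and Amazon Polly accept for this kind of control; VibeVoice does not document support for it. A typical snippet looks like:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  Hey look! <break time="300ms"/>
  <prosody rate="slow" pitch="+10%">Should we tell the others?</prosody>
  <emphasis level="strong">Maybe not...</emphasis>
</speak>
```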
reply
tehlike
5 months ago
[-]
The comments in the HTML code are in Chinese, which is very interesting.
reply
Havoc
5 months ago
[-]
MIT license - very nice!
reply
ComputerGuru
5 months ago
[-]
The application of known FOSS licenses to what is effectively a binary-only release is misleading and borderline meaningless.
reply
Havoc
5 months ago
[-]
It is an unfortunate recycling of an existing regime that no doubt offends Stallman to his very core, but I wouldn't call it meaningless.

If you're in a company and need a model which one do you think you're getting past compliance & legal - the one that says MIT or the one that says "non-commercial use only"?

reply
em-bee
5 months ago
[-]
What does that mean in this context? It seems to depend on an LLM. So can I run this completely offline? If I have to sign up and pay for an LLM to make it work, then it's not really any more useful than a non-free system.
reply
watsonmusic
5 months ago
[-]
Microsoft is cool
reply
lagniappe
5 months ago
[-]
Bots should never sing.
reply
agos
5 months ago
[-]
seemingly supports only English, Indian and Chinese
reply
plingamp
5 months ago
[-]
Indian and Chinese are not languages
reply
agos
5 months ago
[-]
I'm very aware of this. The project does not specify more than an in- and zh- prefix.
reply
ascorbic
5 months ago
[-]
Voices, not languages. The "English" one is American though.
reply
cush
5 months ago
[-]
I tried using the demo but it just errors out
reply
amelius
5 months ago
[-]
I tried some TTS models a while ago, but I noticed that none of them allowed putting markup statements in the text. For example, it would be nice to do something like:

     Hey look! [enthusiastic] Should we tell the others? Maybe not ... [giggles]
etc.

In fact, I think this kind of thing is absolutely necessary if you want to use this to replace a voice actor.
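The front end for such tags is simple enough to sketch: split the script into (style, text) segments that a synthesis pipeline could dispatch on. The tag names are made up for illustration, and no real model API is assumed — the hard part is a model actually trained to render the styles.

```python
import re

TAG_RE = re.compile(r"\[(\w+)\]")  # matches inline tags like [giggles]

def parse_script(text):
    """Split text with inline [tags] into (tag, text) segments.

    A tag applies to the text that follows it, until the next tag;
    text before the first tag gets tag=None. A trailing tag with no
    following text (e.g. a final [giggles]) becomes a segment with
    empty text, which a pipeline could render as a standalone effect.
    """
    segments = []
    tag = None
    pos = 0
    for m in TAG_RE.finditer(text):
        chunk = text[pos:m.start()].strip()
        if chunk:
            segments.append((tag, chunk))
        tag = m.group(1)
        pos = m.end()
    tail = text[pos:].strip()
    if tail:
        segments.append((tag, tail))
    elif tag:
        segments.append((tag, ""))
    return segments

script = "Hey look! [enthusiastic] Should we tell the others? Maybe not ... [giggles]"
print(parse_script(script))
```

Each segment could then be synthesized separately with the corresponding style and concatenated.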

reply
data-ottawa
5 months ago
[-]
Eleven labs has some models with support for that.

https://elevenlabs.io/blog/v3-audiotags

reply
sciencesama
5 months ago
[-]
Need this for mac
reply
double_one
5 months ago
[-]
I tried it on my MacBook Pro — works great!
reply
watsonmusic
5 months ago
[-]
one of the best models built by Microsoft
reply
enigma101
5 months ago
[-]
only microsoft could come up with such a name rofl
reply
defrost
5 months ago
[-]
Lippy got vetoed.
reply