Launch HN: Uplift (YC S25) – Voice models for under-served languages
110 points | 3 days ago | 16 comments
Hi HN, we are Zaid, Muhammad, and Hammad, the co-founders of Uplift AI (https://upliftai.org). We build models that speak under-served languages; today, that means Urdu, Sindhi, and Balochi.

A billion people worldwide can't read. In countries like Pakistan, the 5th most populous country in the world, 42% of adults are illiterate. This holds back the entire economy: patients can't read medical reports, parents can't help with homework, banks can't go fully digital, farmers can't research best practices, and people navigate smartphone apps by memorizing button sequences. Voice AI interfaces can fix all of this, and we think this may prove to be one of the great benefits of modern AI.

Right now, existing voice models barely work for these languages, and big tech is moving slowly.

Uplift AI was originally a side project to build datasets for translation and voice models. For us it was a "cool side-thing", not an "important full-time thing". With some initial data we hacked together an Urdu voice bot on WhatsApp and gave it to one domestic worker. Within two days, 800 people were using it. When we dug deeper into understanding the users, we learned that text interfaces just don't work for so many people. So we started Uplift AI to solve this problem full-time.

The most challenging part is that all the building blocks needed for great voice models are broken for these languages. For example, to create a speech synthesis model you would scrape a lot of data from YouTube and auto-label it using a transcription model… all very easy to do in English. But it doesn't work in under-served languages, because the transcription models are not accurate.

There are many other challenges. For example, when you hire human transcribers to label the data, they often don't have any spell checkers for their languages, and this creates lots of noise in the data… making it hard to train models with limited data. There are many more challenges around phonemes, silence detection, diacritization, etc.

We solve these problems by building great internal tooling to help with data labeling. We also source our own data rather than buying it. This is counterintuitive, but it is a big advantage over companies that buy data and then train. By sourcing our own data we create the right data distributions and get much better models from much less data. By doing the entire thing in-house (data, labeling, training, deploying), we are able to make much faster progress.

Today we publicly offer text-to-speech APIs for Urdu, Sindhi, and Balochi. Here's a video demo: https://www.loom.com/share/dcd5020967444c228e9c127151e7a9f5.
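Calling the API looks roughly like this (a sketch only: the endpoint path, field names, and voice ID below are illustrative placeholders, not the real API; see https://docs.upliftai.org for the actual details):

    import requests

    # Hypothetical endpoint and request shape, for illustration only.
    resp = requests.post(
        "https://api.upliftai.org/v1/synthesize",   # placeholder URL
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "text": "آپ کیسے ہیں؟",          # "How are you?" in Urdu
            "voiceId": "urdu-female-1",       # placeholder voice ID
            "outputFormat": "mp3",
        },
    )
    resp.raise_for_status()
    with open("output.mp3", "wb") as f:
        f.write(resp.content)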

Khan Academy is using our tech to dub videos to Urdu (https://ur.khanacademy.org).

Our models excel at informational use cases (like AI bots) but need more work for emotive use cases like poetry.

We have been giving a lot of people private access in beta mode, and today we are launching our models publicly. We believe this is the fastest way for us to learn which areas are not performing well so we can fix them quickly.

We'd love to hear from all of you, especially about your experiences with under-served languages (not just the Pakistani ones we're starting with), and your comments in general.

jnmandal
3 days ago
[-]
Looks really cool, exciting to see. I have two questions around this:

1. Given that you are concerned with providing access to a class of folks that are traditionally ignored by technologists, do you plan to make these models usable for offline purposes? For example, an illiterate person I know from Uttarakhand: his home village is not connected by road. Interestingly, he does speak Hindi, but his native language I believe is something more obscure. To get home, he walks five hours from the terminus of a road. Connectivity is obviously both limited and intermittent. A usable device might want the voice interface embedded on it. Any plans for this?

2. I have minimal understanding of this, but as someone who has learned Hindi/Urdu as a foreign language in the US, I am often in mixed conversation with both Indians and Pakistanis. There never seem to be any issues with communication. I have heard that certain terms (like, for example, "khub suraat", "shukria", "kitaab") are more Urdu than Hindi. I also studied Arabic, Farsi, and Swahili, so I am familiar with these as loanwords from Arabic and/or Persian, but in practice I hear Hindi speakers using these terms often. Is the primary value add here political? Is it an accent thing? Thanks in advance for any explanation. This is still very much a mystery to me.

reply
muhammadbsabir
3 days ago
[-]
To increase access we're also exploring telco hotlines. Carrier penetration is much higher than internet penetration, so this could let people use AI through a simple phone call. Some users already pay for similar services, like weather updates (for farmers), via SIM balance. But scaling this will likely require government or telco partnerships.
reply
jnmandal
2 days ago
[-]
Telco integration sounds amazing. Wishing yall success
reply
muhammadbsabir
2 days ago
[-]
Thanks!
reply
hammadmlk
3 days ago
[-]
1. Offline models: Yes, that is on the roadmap. There is big demand for them, especially in interactive educational use cases.

2. Urdu and modern Hindi can be cross-understood in spoken form. Very formal Hindi is quite different though, and I can't understand press releases written in that register. The writing systems of Urdu and Hindi are completely different too, so even if there is a great TTS system for Hindi, I can't use it. Accents are very different too.

Scripts: ہیلو हेलो

reply
primitivesuave
2 days ago
[-]
The output quality is remarkable. You mentioned that there are 1 billion illiterate people who would benefit from this, and I would add that there are at least 1 billion additional people who would benefit because they speak a regional dialect. There are many countries across the developing world where the AI tools and translation apps only produce output in the official government dialect (e.g. the Thai spoken in Bangkok, the Hindi spoken in Delhi, or the Mandarin spoken in Beijing). It would be interesting to see how a voice model could be "fine tuned" to better serve a specific regional dialect.
reply
zaidqureshi
2 days ago
[-]
Yes! The first goal is to get coverage ASAP. I think it will be easy to get dialects in with the current model architecture. The hard part will be LLMs catching up on producing consistent text that respects the linguistics as we drill deeper.
reply
_waqas_ali_
3 days ago
[-]
As a Sindhi speaker myself: amazing stuff. The output is so good. This unlocks the vastness of the internet for millions of people. I am imagining something like NotebookLM but for under-served languages, or a hotline where people can call and talk/learn about anything. Do you guys have plans to create B2C products yourselves?
reply
zaidqureshi
3 days ago
[-]
At the moment we are focused on making the models available through the API so developers can build some cool things. We are actively watching for opportunities that we would be better positioned to solve ourselves.

We are planning on hosting an online hackathon soon, so will suggest these things as ideas!

reply
_waqas_ali_
3 days ago
[-]
Fair enough. I don't have a use case for the API yet, but I am looking forward to the products that come out of this.
reply
zaidqureshi
3 days ago
[-]
Maybe we'll make another post in a month with all the cool products that have come out so far :)
reply
pavlov
3 days ago
[-]
Nice! Clearly a big and underserved market for voice AI solutions.

Would be nice to have some code examples for using your TTS API with Pipecat.

reply
zaidqureshi
3 days ago
[-]
I have to make that.. I did make one for LiveKit, which uses our WebSocket API designed for real-time conversations:

https://docs.upliftai.org/tutorials/livekit-voice-agent
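
The gist of the streaming flow is roughly this (a sketch: the endpoint and message shapes are illustrative assumptions; the tutorial above shows the actual protocol):

    import asyncio
    import json
    import websockets  # pip install websockets

    async def stream_tts(text: str) -> bytes:
        # Placeholder endpoint; see the LiveKit tutorial for the real one.
        uri = "wss://api.upliftai.org/v1/tts/stream?apiKey=YOUR_API_KEY"
        audio = bytearray()
        async with websockets.connect(uri) as ws:
            # Illustrative message shape for the synthesis request.
            await ws.send(json.dumps({"text": text, "voiceId": "urdu-female-1"}))
            async for message in ws:
                if isinstance(message, (bytes, bytearray)):
                    audio.extend(message)  # audio chunks arrive as binary frames
                else:
                    break  # assume a text frame signals end-of-stream
        return bytes(audio)

    # audio = asyncio.run(stream_tts("ہیلو"))

The point of the WebSocket design is that audio chunks start arriving before the full utterance is synthesized, which keeps conversational latency low.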

reply
zaidqureshi
3 days ago
[-]
Btw, I did try to make it with Pipecat first, but was having some annoying Windows issues getting the libraries installed for Daily etc., so I posted something that was easily reproducible for the tutorial...
reply
mdbackman
2 days ago
[-]
Hi! Pipecat maintainer here. There is no Windows restriction for Pipecat, in general. The DailyTransport does not support Windows, but works on WSL. Though, you don't have to use the DailyTransport. Pipecat has interchangeable transport support. You can do all of your testing on a free, P2P WebRTC transport (SmallWebRTCTransport, based on aiortc) without system restrictions.

Reach out on Discord if you have any challenges.

reply
zaidqureshi
2 days ago
[-]
will do!
reply
willwade
2 days ago
[-]
Your datasets: are they public? For under-represented languages we DON'T need closed voice models; what the world really needs is open voice data repositories (e.g. TTS-ready voice banks AND phonemization DBs in projects like Mozilla Common Voice). Why? Because commercial demand is so small that these languages are not commercially viable, but we DO need TTS for assistive technology purposes, and that has very little $$$ associated with it.

(That said, Urdu is NOT a small population, so well done..!)

reply
zaidqureshi
2 days ago
[-]
They aren't public. Agreed on commercial viability: even in Pakistan, businesses are price sensitive, so they're currently priced really cheap (just because they are small).
reply
nojs
3 days ago
[-]
Nice, this is really needed. Would be cool to see some of the less common regional Chinese dialects, which are widely spoken and often the only language older people speak. And even just more accurate regional accents for Mandarin.
reply
zaidqureshi
3 days ago
[-]
Wow, I did not know that! Do you feel there is a gap in speech understanding here, or is personalization missing with current TTS?
reply
tugdual
1 day ago
[-]
This is what my Master's project was about, working on the case of Wolof. I've trained XTTSv2 and had solid results with less than 20h of paired data that wasn't of the highest quality either. HMU: tkerjan@outlook.com
reply
Lienetic
3 days ago
[-]
Very cool, congrats on the launch! What's your plan for when one of the larger players like ElevenLabs or Google adds support for these languages? I would guess the reason why they haven't is because they don't see a large opportunity. How are you thinking about it?
reply
muhammadbsabir
3 days ago
[-]
Thanks! You’re right, the big players mostly ignore these languages. The additional challenge is the lack of online data, so we spend a lot of effort on data collection and labeling on the ground.

Also, companies like ElevenLabs and Deepgram have done well by focusing on specific use cases, even though the big labs are amazing at English.

Right now these languages are underserved, so there’s a window to build the best models for these languages.

reply
hammadmlk
3 days ago
[-]
I think the voice models market will be like e-commerce: there will be no global winner, just a few regional winners, each being really big.

We plan to be one of those winners.

reply
chirau
2 days ago
[-]
What does it take to build such a model? As in, the key steps. And how expensive does it get? I might be interested in being a regional player and winner as well, lol. In my own corner of the world in Africa.
reply
hammadmlk
2 days ago
[-]
Not much... just the willingness to work hard on this problem instead of other problems where large revenue is perhaps quicker :)

Ingredients: decent audio-scraping skills, hiring great voice actors for each language, algorithms to gather text/audio with diverse phonetics (toy sketch below), decent ML skills (enough to merge the best features of a few different papers), lots and lots of data labels (and your own tools to get the data labeled efficiently), and finally GPUs!!!!

None of this is technically hard... the hardest thing is working with Voice Actors (oh man!!!)
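
On the "diverse phonetics" point: picking which sentences to record is basically a coverage problem. Here's a toy sketch (not our actual pipeline; phonemize() stands in for whatever G2P model or lexicon you have):

    from collections import Counter

    def select_sentences(candidates, phonemize, k):
        """Greedily pick k sentences that maximize phoneme coverage.

        candidates: list of text strings to choose from
        phonemize:  hypothetical text -> list-of-phonemes function
                    (in practice a G2P model or lexicon lookup)
        """
        covered = Counter()
        pool = list(candidates)
        chosen = []
        for _ in range(min(k, len(pool))):
            # Weight unseen/rare phonemes more, so each new sentence
            # fills gaps rather than repeating common sounds.
            best = max(pool, key=lambda s: sum(
                1.0 / (1 + covered[p]) for p in set(phonemize(s))))
            pool.remove(best)
            chosen.append(best)
            covered.update(phonemize(best))
        return chosen

The idea is to get broad phonetic coverage out of far fewer recording hours with the voice actors.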

reply
sanman8119
3 days ago
[-]
Would love to see Malayalam here one day!
reply
zaidqureshi
3 days ago
[-]
Yes! I will keep track of this comment for the day we do :P
reply
yorwba
3 days ago
[-]
Unless that happens within a week or so, this thread will be locked and you won't be able to reply anymore.

It would be good to have a company blog with an RSS feed that people can subscribe to for updates.

reply
zaidqureshi
3 days ago
[-]
Ah, I created a quick Google Form for language requests! https://forms.gle/XA6nZbmBNK5K7GJv5
reply
sanman8119
3 days ago
[-]
Submitted!
reply
zaidqureshi
2 days ago
[-]
appreciate it!
reply
adz_6891
3 days ago
[-]
This is really cool. Congrats on the launch. Would be interested to know which low-resource languages in Sub-Saharan Africa you'd be working on, particularly in Nigeria and South Africa.
reply
zaidqureshi
3 days ago
[-]
If you have interest in or insights on specific languages, we'd love it if you could fill out this form so we can reach out in the future: https://forms.gle/XA6nZbmBNK5K7GJv5

Lots of areas to cover for sure!

reply
adz_6891
1 day ago
[-]
Submitted!
reply
aneeqdhk
2 days ago
[-]
Any plans for speech-to-text? I want to automatically generate subtitles for videos that have Urdu audio.
reply
muhammadbsabir
2 days ago
[-]
Yes, we are working on speech-to-text as well. It should be out in the next 2 months.
reply
asadm
3 days ago
[-]
Congrats on launch, I have been sole-funding a dataset for Sindhi on Common Voice. Did you check that out by any chance?
reply
muhammadbsabir
3 days ago
[-]
Amazing! Not yet, I will check it out.

Also, some super cool projects on your website :)

reply
akshayp29
3 days ago
[-]
Pretty cool! Do you think the model would be good at other under-served languages as well? Or is it hypertuned to just these?
reply
zaidqureshi
3 days ago
[-]
The model itself can work well for new languages; it's just that the process of gathering data and maintaining high data quality is what we have to figure out as we scale across languages.

Currently the model is only given data for these languages so it doesn't know anything else.

reply
mandeepj
3 days ago
[-]
> it's just that the process of gathering data and maintaining high data quality is what we have to figure out as we scale across languages.

A crawler and data ingestion pipeline will not help with that?

reply
zaidqureshi
3 days ago
[-]
Gathering audio data online is not that hard, but getting it accurately labelled is challenging: the speech understanding systems for those languages aren't there either, so we can't do the labeling automatically.
reply
akshayp29
3 days ago
[-]
Cool - makes sense!
reply
moinism
3 days ago
[-]
Congrats on the launch! Having support for regional voices is going to open up so many opportunities.
reply
zaidqureshi
3 days ago
[-]
Agreed!
reply
Bilal_io
3 days ago
[-]
Congratulations on the launch! I really hope it doesn't get used to launch misinformation campaigns against the country.

Are you aware of any effort to educate and fight against misinformation in Pakistan?

reply
zaidqureshi
3 days ago
[-]
Hope so! It is great that overall it has a big impact on making knowledge more accessible (e.g. Khan Academy using it to dub their content in minutes instead of weeks). But there are lots of other areas where it applies as well.
reply
ks2048
2 days ago
[-]
Nice work.

Have you looked at the MMS models from Meta and how do they compare?

By public release, does that mean offering an API, or have you considered a Hugging Face model release? I understand why that might not be best for your business model, but what would be your goal from a business perspective?

reply
hammadmlk1
2 days ago
[-]
Yes, we read the paper when it came out and reviewed the audio samples. We didn't find it good enough for adoption. We didn't compare results with MMS in a systematic way because it seemed irrelevant.
reply
zaidqureshi
2 days ago
[-]
We launched them through an API. The goal from a business perspective is to get adoption of voice apps in the targeted regions. Some companies can now create voice agents, etc.
reply