Another recent example: https://github.com/supertone-inc/supertonic
It seems like it is being trained by one person, and it is surprisingly natural for such a small model.
I remember when TTS always meant the most robotic, barely comprehensible voices.
https://www.reddit.com/r/LocalLLaMA/comments/1qcusnt/soprano...
For voice cloning, Pocket TTS is gated, so I can't tell.
It seems like Kokoro is the smaller model, also runs on CPU in real time, and is more open and fine-tunable, with more scripts and extensions, etc., whereas this is new and doesn't have any fine-tuning code yet.
I couldn't tell an audio quality difference.
If it were a big model trained on a diverse set of speakers, and it could remember how to replicate them all, then zero-shot would be a potentially bigger deal. But this is a tiny model.
I'll try out the zero shot functionality of Pocket TTS and report back.
Btw, I would love to hear from someone (who knows what they're talking about) to clear this up for me. Dealing with potential GPL contamination is a nightmare.
If you could find another compatible grapheme-to-phoneme converter, you could probably swap it in for eSpeak. The data could be a bit OOD, so you may need to fiddle with it, but it should work.
Because the GPL is outdated and doesn't really consider modern gen AI, what you could also do is generate a bunch of text-to-phoneme pairs with eSpeak and train your own transformer on them. This would free you from the GPL completely, and the task is easy enough that even a very small model should be able to do it.
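The pair-generation half of that is trivial, for what it's worth. A minimal sketch, assuming espeak-ng is installed (-q suppresses audio, --ipa writes IPA phonemes to stdout; corpus.txt and pairs.tsv are placeholder names):

    # emit one "text<TAB>phonemes" pair per non-empty input line
    while IFS= read -r line; do
      [ -n "$line" ] || continue
      printf '%s\t%s\n' "$line" "$(espeak-ng -q --ipa "$line")"
    done < corpus.txt > pairs.tsv

Run that over a few million lines of text and you have a clean supervised dataset for a tiny seq2seq model.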
There's a bunch of inference stuff though, which is cool I guess. And it really is a quite nice little model in its niche. But let's not pretend there aren't huge tradeoffs in the design: synthetic data, phonemization, lack of train code, sharp boundary effects, etc.
Of course, English only
Cool tech demo though!
Non sequitur. Unless the 'art' in question is the 'art of adding features', this phrase usually describes the quality of one very specific development; these are often not even feature-complete products.
There are about 1.5B English speakers on the planet.
You pull up a map and start navigation. All the street names are in the local language, and no, transliterating the local names into the English alphabet does not make them understandable when spoken by TTS. Not to mention localised foreign names, which then get completely mangled by transliterating them to English.
You pull up a browser and open a news article in your local language to read during your commute. You now have to reach for a translation model first before passing the text to the English-only TTS software.
You're driving, one of your friends Signals you. Your phone UI is in English, you get a notification (interrupting your Spotify) saying 'Signal message', followed by 5 minutes of gibberish.
But let's say you have a TTS model that supports your local language natively. Well, since '1.5B English speakers' apparently exist on the planet, many texts in other languages include English or Latin names and words. Now you have the opposite issue -- your TTS software needs to switch to English to pronounce these correctly...
And mind you, these are just very simple use cases for TTS. If you consider people with limited sight, who experience the entire Internet and all mobile and desktop applications (often with poor localisation) via TTS, you see how monolingual TTS is mostly useless and would be swapped for a robotic old-school TTS in a flash...
> only that but it's also common to have system language set to English
Ask a German whether their system language is English. Ask a French person. I can go on.
Multilingual doesn't mean language agnostic. We humans are always monolingual, just multi-language hot-swappable if trained. It's more like you can `make; make install` docker, after which you can attach/detach into/out of alternate environments from the terminal to do things or take notes in and out.
People sometimes picture multilingualism as owning a single joined-together super-language in the brain. That usually doesn't happen. Attempting it, especially at a young age, can leave a person in a "semi-lingual" or "double-limited" state, where they are not fully fluent or articulate in any particular language.
And so, criticizing someone for not devoting significant resources to an omnilingual TTS doesn't make much sense.
This is plainly not true.
> Multilingual doesn't mean language agnostic. We humans are always monolingual, just multi-language hot-swappable if trained
This and the analogy make no sense to me. Mind you I am trilingual.
I also did not imply that the model itself needs to be multilingual. I implied that the software that uses the model to generate speech must be multilingual and support language change detection and switching mid-sentence.
I'm German but my system language is English
Because translations often suck, are incomplete or inconsistent
Not abundantly obviously satire, so interjecting: humans, including professional "simultaneous" interpreters, can't do this. This is not how languages work.
I think it's the wrong example, because this is actually very common if you're a Chinese speaker.
Actually, people tend to say the name of the cities in their own countries in their native language.
> I went to Nantes [0], to eat some kouign-amann [1].
As a French person, both [0] and [1] will be spoken the French way on the fly within the sentence, while the other words are in English. Switching happens without any pause whatsoever (because there is really only one way to pronounce those names in my mind; no thinking required).
Note that with speech recognition, it is fairly common to have models that understand language switches within a sentence, like Parakeet.
Perhaps I have to admit that my primary language is officially the human equivalent of an esoteric programming language; the myth that it's a complex language is increasingly becoming obsolete (for good!), but maybe it still qualifies as an esoteric one that is more than a little incompatible with the others.
I think they should at least have noted in the title that it's English only.
https://github.com/supertone-inc/supertonic
https://github.com/ekwek1/soprano
No affiliation with either. Just made it an MCP server so Claude can tell me when it's done with something :)
How am I supposed to enable this?
It says MIT license, but then the readme has a separate section on prohibited uses that maybe adds restrictions making it nonfree? Not sure about the legal implications here.
If a license says "you may use this, you are prohibited from using this", and I use it, did I break the license?
In this case, I'd interpret it as them making up a new licence based on MIT, where the addendum makes it not MIT but something else. I agree with what others said; this "new" license has internal conflicts.
Simply, it's MIT licensed. If they want to change that, they have to remove that license file OR clearly update it to be a modified version of MIT.
But don't make the mistake I did and use an HF token that doesn't have read access to repos! The error message said that I had to request access to the repo, but I had already done that, so I couldn't figure out what was wrong. Turns out my HF token only had access to inference.
So, on my M1 Mac, I ran `uvx pocket-tts serve`. Plugged in:
> It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way—in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only
(Beginning of Tale of Two Cities)
but the problem is Javert skips over parts of sentences! Eg, it starts:
> "It was the best of times, it was the worst of times, it was the age of wisdom, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the spring of hope, it was the winter of despair, we had everything before us, ..."
Notice how it skips over "it was the age of foolishness," and "it was the season of Darkness,"
Which... Doesn't exactly inspire faith in a TTS system.
(Marius seems better; posted https://github.com/kyutai-labs/pocket-tts/issues/38)
- "its noisiest superlative insisted on its being received"
Win10 RTX 5070 Ti
I wonder what's going wrong in there
I also find that Javert in particular seems to put in huge gaps and pauses... a side effect of the voice?
I've seen some agentic models at 4B or so that can punch above their weight, even beating some basic models. I can definitely see them being used in a home lab without costing too much money.
I think unmute.sh, at least, is similar to / competes with ChatGPT's voice model. It's crazy how good (and effective) open-source models are from top to bottom. There's basically something for just about everyone.
I feel like the only true moat might exist in coding models. Some are pretty good, but it's the only area where people might pay 10x-20x more for the best (MiniMax/z.ai subscription fees vs Claude Code).
It will be interesting to see if we get another DeepSeek moment in AI that beats Claude Sonnet or similar. I think DeepSeek has DeepSeek 4 coming, so it will be interesting to see how/if it can beat Sonnet.
(Sorry for going offtopic)
In English it's perfect, and in other languages it's so funny. It sounds exactly like someone who doesn't actually speak the language but goes for it anyway.
I don't know why, but Fantine is just better than the others in other languages. Javert seems to be the worst.
Try Jean in Spanish: « ¡Es lo suficientemente pequeño como para caber en tu bolsillo! » ("It's small enough to fit in your pocket!") sounds a lot like someone who doesn't understand the language.
Or Azelma in French: « C'est suffisamment petit pour tenir dans ta poche. » (the same sentence) is very good. I mean, half of the words come out with a Québécois accent and half with a France-French one, but hey, it's correct French.
But it doesn't understand Italian ("Però non capisce l'italiano").
Speech-dispatcher commonly uses espeak-ng, which sounds robotic but is reportedly better for visually impaired users because it stays intelligible at higher speeds, letting them hear UI labels more quickly. Sighted users generally want natural-sounding voices instead, using TTS the way we'd listen to podcasts or a bedtime story.
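You can hear that trade-off directly with speech-dispatcher's bundled spd-say client (a minimal illustration; -r sets the rate from -100 to 100 and -o picks the output module):

    spd-say -o espeak-ng -r 0 "Open settings"    # normal rate
    spd-say -o espeak-ng -r 70 "Open settings"   # much faster, still intelligible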
With this system, users are in full control and can swap TTS models easily. If a particular model were shipped and, two weeks later, a smaller, newer, or better one appeared, that work would become obsolete very quickly.
All too often, new models' codebases are just a dump of code that installs half the universe in dependencies for no reason, etc.
To me, what you're saying is the same as saying the art of a movie is in the script, and the video is just the method of making it available. I don't think that's a valid take.
So yes, I mostly agree with GP. An audiobook is a different rendering of the same subject. The content is in the text, regardless of whether it's delivered in written or oral form.
It's just too flat/dead compared to a human reader.
Set up SherpaTTS as the voice model for your phone (I like the en_GB-jenny_dioco-medium voice option, but there are several to choose from). Add an ebook to Librera Reader and open it. There's an icon with a little person wearing headphones, which lets you send the text continuously to your phone's TTS, using just local processing on the phone. I don't have the latest phone, but mine processes it faster than the audio is read, so the audio doesn't stop and start.
The voice isn't totally human-sounding, but it's a lot better than the Microsoft Sam days, and once you get used to it, the roboticness fades into the background and you can just listen to the story. You may get better results with Kokoro (I couldn't get it running on my phone) or similar TTS engines and a more powerful phone.
One thing I like about this setup is that if you want to swap back and forth between audio and text, you can. The reader scrolls automatically as it generates the audio, and you can pause it, read silently for a while, and later set it going again from a new point.
https://gist.github.com/britannio/481aca8cb81a70e8fd5b7dfa2f...
I just tried some sample verses, sounds natural.
But there seems to be a bug, maybe? Just for fun, I had it read the Real Slim Shady lyrics. It always seems to add one extra "please stand up" in the chorus. Anyone else see that?
"so and so," he <verb>
and the verb is not just "said" but "chuckled", "whispered", or "said shakily", the output is modified accordingly; or if there's an indication that it's a woman speaking, it may pitch up during the quotation. It also tries to guess emotive content from the text itself: if a passage reads angry, it may try to make it sound angry. That's more hit-and-miss, but when it hits, it hits really well. A very common failure case: imagine someone trying to psych themselves up, saying internally "come on, Steve, stand up and keep going"; it'll read it in a deeper voice, like it's being spoken by a WW2 sergeant to a soldier.
Ok, who knows where I can get those high-quality recordings of Majel Barrett's voice that she made before she died?
Like, what if I want to graft TTS onto an existing text chat system and give each person a unique, randomly generated voice? Or want something that's not quite human, like some sort of alien or monster?
claude plugin marketplace add pchalasani/claude-code-tools
claude plugin install voice@cctools-plugins
More here: https://github.com/pchalasani/claude-code-tools?tab=readme-o...
https://github.com/jamesfebin/pocket-tts-candle
The port supports:
- Native compilation with zero Python runtime dependency
- Streaming inference
- Metal acceleration for macOS
- Voice cloning (with the mimi feature)
Note: This was vibecoded (AI-assisted), but features were manually tested.
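For example, to build with voice cloning enabled (assuming mimi is the literal Cargo feature name, per the list above):

    cargo build --release                   # base build
    cargo build --release --features mimi   # adds voice cloning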