13 comments

  • magicalhippo 31 minutes ago
    A lot of good small TTS models have come out recently. Most seem to struggle hard with prosody, though.

    Kokoro TTS, for example, has a very good Norwegian voice, but the rhythm and emphasis are often so out of whack that the generated speech is almost incomprehensible.

    Haven't had time to check this model out yet, how does it fare here? What's needed to improve the models in this area now that the voice part is more or less solved?

    • soco 16 minutes ago
      That, and English words in the middle of a phrase in another language also confuse them a lot.
  • kevin42 1 hour ago
    What I love about OpenClaw is that I was able to send it a message on Discord with just this github URL and it started sending me voice messages using it within a few minutes. It also gave me a bunch of different benchmarks and sample audio.

    I'm impressed with the quality given the size. I don't love the voices, but it's not bad. Running on an intel 9700 CPU, it's about 1.5x realtime using the 80M model. It wasn't any faster running on a 3080 GPU though.

    • rohan_joshi 1 hour ago
      yeah we'll add some more professional-sounding voices and also support for diy custom voices. we tried to add more anime/cartoon-ish voices to showcase the expressivity.

      Regarding running on the 3080 gpu, can you share more details via github issues, discord, or email? it should be blazing fast on that. i'll add an example for running the model on gpu too.

  • Remi_Etien 13 minutes ago
    25MB is impressive. What's the tradeoff vs the 80M model — is it mainly voice quality or does it also affect pronunciation accuracy on less common words?
  • ks2048 1 hour ago
    You should put examples comparing the 4 models you released - same text spoken by each.
  • DavidTompkins 14 minutes ago
    This would be great as a js package - 25mb is small enough that I think it'd be worth it (in-browser tts is still pretty bad and varies by browser)
  • altruios 1 hour ago
    One of the core features I look for is expressive control.

    Either in the form of API-level pitch/speed/volume parameters, for more deterministic control.

    Or in expressive tags such as [coughs], [urgently], or [laughs in melodic ascending and descending arpeggiated gibberish babbles].

    the 25MB model is amazingly good for being 25MB. How does it handle expressive tags?
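
    A quick way to get the deterministic pitch/speed/volume kind of control, even from a model that doesn't expose it, is cheap post-processing of the output waveform. A minimal sketch, assuming the TTS returns a float numpy array (naive resampling shifts pitch and speed together):

```python
import numpy as np

def change_speed_pitch(audio: np.ndarray, factor: float) -> np.ndarray:
    """Naive resampling: factor > 1 -> faster and higher-pitched,
    factor < 1 -> slower and lower-pitched (the two shift together)."""
    n_out = int(len(audio) / factor)
    src_positions = np.linspace(0, len(audio) - 1, n_out)
    return np.interp(src_positions, np.arange(len(audio)), audio)

def change_volume(audio: np.ndarray, gain: float) -> np.ndarray:
    """Scale amplitude, clipping to the valid [-1, 1] float range."""
    return np.clip(audio * gain, -1.0, 1.0)

# Stand-in for model output: a 1-second 440 Hz tone at 24 kHz.
sr = 24000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)

faster = change_speed_pitch(tone, 1.5)   # 1.5x speed, higher pitch
louder = change_volume(tone, 1.8)        # louder, safely clipped
```

    Independent speed-vs-pitch control and the [tags] above need model-side support; this only covers the deterministic knobs.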

    • rohan_joshi 1 hour ago
      thank you so much. Right now, it cannot handle expressive tags. what kind of tags would be most helpful according to you?
      • altruios 20 minutes ago
        Narrowing it down: emotion-based tag control would be the most helpful. Tags like [sarcastically], [happily], [joyfully], [fearfully]: so a subset of adverbs.

        A stretch goal is 'arbitrary tags' from [singing] [sung to the tune of {x}] [pausing for emphasis] [slowly decreasing speed for emphasis] [emphasizing the object of this sentence] [clapping] [car crash in the distance] [laser's pew pew].

        But yeah: instruction/control via [tags] is the deciding feature for me, provided prompt adherence is strong enough.

        Also: a thought...

        Everyone in this space uses [] for all kinds of tags, which is very simple. Maybe it makes sense to differentiate kinds of tags? I.e. [tags that modify how text is spoken] vs {tags that create a sound that isn't speech: not modifying anything, but its own 'sound/word'}.

  • ks2048 1 hour ago
    There are a number of recent, good-quality small TTS models.

    If the author doesn't describe some detail about the data, the training, a novel architecture, etc., I can only assume they took another one, did a little finetuning, and repackaged it as a new product.

  • devinprater 24 minutes ago
    A lot of these models struggle with small text strings, like "next button" that screen readers are going to speak a lot.
    • soco 17 minutes ago
      I think I've tried everything I could on my Android, and: 1. outside of webpage reading, not many options; 2. as browser extensions, also not many (I don't like copying URLs into your app); 3. they all insist on reading every little shit, not only buttons but also "wave arrow pointing directly right", which some people use in their texts. So basically reading text aloud is a bunch of shitty options. Anyone jumping on this market opening?
  • fwsgonzo 1 hour ago
    How much work would it be to use the C++ ONNX run-time with this instead of Python? Is it a Claudeable amount of work?

    The iOS version is Swift-based.

    • rohan_joshi 1 hour ago
      shouldn't be hard. what backend/hardware are you interested in running this with? i'll add an example for using the C++ onnx runtime. btw check out the roadmap: our inference engine will be out in 1-2 weeks and is expected to be faster than onnx.
  • ilaksh 1 hour ago
    Thanks for open sourcing this.

    Is there any way to do a custom voice as a DIY, or do we need to go through you? If so, would you consider making a pricing page for purchasing a license/alternative voice? All but one of the voices are unusable in a business context.

    • rohan_joshi 1 hour ago
      thanks a lot for the feedback. yes, we're working on a diy way to add custom voices and will also be releasing a model with more professional voices in the next 2-3 weeks. as of now, we're providing commercial support for custom voices, languages and deployment through the support form on our github. can you share more about your business use-case? if possible, i'd like to ensure the next release can serve that.
  • great_psy 1 hour ago
    Thanks for working on this!

    Is there any way to get this running on an iPhone? I would love for it to read articles to me like a podcast.

    • rohan_joshi 1 hour ago
      yes, we're releasing an official mobile sdk and inference engine very soon. if you want to use something until then, some folks from the oss community have built ways to run kitten on ios. if you search kittentts ios on github you should find a few. if you can't find it, feel free to ping me and i can help you set it up. thanks a lot for your support and feedback!
  • wiradikusuma 31 minutes ago
    I'm thinking of giving "voice" to my virtual pets (think Pokemon, but fewer than a dozen). The pets are made-up animals based on real ones, like a Mouseier based on a Mouse (something like that). Is this possible?

    TL;DR: generate a human-like voice based on an animal sound. Anyway, maybe it doesn't make sense.
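
    It can make sense as post-processing: generate normal speech, then pitch it up and ring-modulate it to get a non-human "creature" timbre. A toy sketch with numpy (all parameter values here are made up, and the sine wave stands in for real TTS output):

```python
import numpy as np

def creature_voice(speech: np.ndarray, sr: int,
                   pitch_factor: float = 1.3, mod_hz: float = 60.0) -> np.ndarray:
    """Crude 'creature' effect:
    1) naive resampling raises the pitch (mouse-like),
    2) ring modulation by a low-frequency sine adds a non-human buzz."""
    idx = np.linspace(0, len(speech) - 1, int(len(speech) / pitch_factor))
    shifted = np.interp(idx, np.arange(len(speech)), speech)
    t = np.arange(len(shifted)) / sr
    return shifted * np.sin(2 * np.pi * mod_hz * t)

sr = 24000
t = np.arange(sr) / sr
speech = 0.3 * np.sin(2 * np.pi * 220 * t)  # stand-in for a TTS waveform
pet = creature_voice(speech, sr)
```

    Mapping from an actual animal sound to a voice would need a model trained for it; this is just an effect chain on top of a normal TTS voice.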

  • Tacite 1 hour ago
    Is it English only?
    • rohan_joshi 1 hour ago
      as of now it's english only. training for the multilingual model is underway and should be out in April! what languages are you most interested in? right now, we are providing deployments for custom languages + voices through the support form on the github.