Hi all,
Before I start heating my brains trying to do this myself, has anyone managed to get Espeak working with VoiceOver on the Mac?
If not, any pointers where to start? I'm currently reading Apple's Speech Synthesiser programming guide, but it's slow work. I was kind of hoping there would be a .plist I could just edit with the arguments to the speak command line executable, but it doesn't look like I'm going to be that lucky.
Any hints / assistance welcome.
Have a lovely day.
Comments
Not Really
Hello, No, this will require someone to take Espeak and make it compatible with the Mac. I suspect the best way will be a Cocoa wrapper, which will act like a bridge between what Espeak expects and needs and what the Mac expects and needs. Espeak is all C or C++ (I don't remember which), so it will work. The trick will be getting everything wired up correctly, then keeping it updated as Espeak is updated. The only other resources I can think of are the Speech Dev list and the Espeak list.
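For what it's worth, the eSpeak half of that bridge is the easy part, since eSpeak exposes a plain C API. A minimal sketch of the calls such a wrapper would ultimately make might look like the following (assuming the header lives where a Homebrew install puts it; the hard part, Apple's synthesizer plugin side, isn't shown):

/* Minimal sketch of the eSpeak side of a wrapper, using eSpeak's C API.
   The header location assumes a Homebrew install; the Cocoa bridge to
   Apple's synthesizer interface is the hard part and is not shown. */
#include <stdio.h>
#include <string.h>
#include <espeak/speak_lib.h>

int main(void) {
    /* Initialize eSpeak for direct audio playback; returns the sample
       rate, or a negative value on failure. */
    if (espeak_Initialize(AUDIO_OUTPUT_PLAYBACK, 0, NULL, 0) < 0) {
        fprintf(stderr, "eSpeak failed to initialize\n");
        return 1;
    }

    const char *text = "Hello from eSpeak on the Mac.";
    /* Queue the text for synthesis. A real wrapper would also pass a
       callback and map VoiceOver's rate and stop requests onto
       espeak_SetParameter and espeak_Cancel. */
    espeak_Synth(text, strlen(text) + 1, 0, POS_CHARACTER, 0,
                 espeakCHARS_AUTO, NULL, NULL);
    espeak_Synchronize();   /* Block until speech has finished. */
    espeak_Terminate();
    return 0;
}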
I thought someone tried to do
I thought someone tried to do this and it failed with a bang. I don't think there is a way to do this at all, not yet.
Take care.
I think it's doable…
I think it's definitely doable, it just requires someone who understands speech synthesis at a far deeper level than I do, and who has solid C, Objective-C, and Cocoa skills. I think it hasn't happened yet simply because such a person has no need to take on the project.
Well, it can't be impossible.
Well, it can't be impossible. I mean, in /System/Library/Speech there's all the voice thingies that VoiceOver uses, they're just not anything readable.
Espeak already works on OS X, i.e. you can do:
brew install espeak
Then speak will talk merrily from the command line, so it's just the wrapper.
I think a better approach would be to make a generic module client that reads a plist or JSON file telling it how to interact with one or more executables. That way you could use one module to work with Espeak, Festival, whatever else you care to think of really... even people's beloved Eloquence, if you could find an archaeologist to make it work on OS X somehow.
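As a rough, purely illustrative sketch of that idea (the names and config shape here are made up; a real module would parse the plist/JSON instead of hard-coding values, and the VoiceOver-facing end is the part that's still missing):

/* Hypothetical sketch of a config-driven "generic module client".
   In a real version the command and its rate flag would come from a
   plist or JSON file; here they are hard-coded for illustration. */
#include <stdio.h>
#include <stdlib.h>

/* Imaginary per-engine configuration, as it might be parsed from JSON. */
typedef struct {
    const char *command;    /* executable to run, e.g. "espeak" */
    const char *rate_flag;  /* how this engine takes a speed, e.g. "-s" */
} EngineConfig;

/* Speak one phrase by shelling out to the configured engine. */
static int speak(const EngineConfig *cfg, int wpm, const char *text) {
    char cmdline[1024];
    snprintf(cmdline, sizeof cmdline, "%s %s %d \"%s\"",
             cfg->command, cfg->rate_flag, wpm, text);
    /* A real module would track the child process so speech could be
       interrupted; system() just waits for it to finish. */
    return system(cmdline);
}

int main(void) {
    /* Could just as easily describe Festival or anything else. */
    EngineConfig espeak_cfg = { "espeak", "-s" };
    return speak(&espeak_cfg, 300, "Hello from a generic speech module.");
}

Spawning one process per utterance keeps the module trivially generic, but it makes interrupting speech awkward, which is exactly the sort of problem a proper synthesizer plugin has to solve.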
Cheers.
eSpeak runs fine on OS X. The
eSpeak runs fine on OS X. The problem is simply that there is no interface available for it from Apple's SpeechSynthesis APIs. Alex has it about right: somebody who knows Objective-C will need to do the work. I'm not that person, unfortunately--not yet, anyway. It doesn't help that Apple makes the documentation sparsely available (a sample called "Morse" appears to be the sole example of any engine at all) or that most people who know C/C++ already have fairly compelling reasons for not wanting to learn Objective-C unless they are well within the Apple bubble. (This, incidentally, is also why we have so much difficulty with cross-platform toolkits; many of them have C-based APIs that don't adapt well to Apple's accessibility protocol.)
So in summary: someone will have to do the work. Tell Apple you want it done, if you are so inclined.
I am not sure if voiceover is
I am not sure if VoiceOver is that compatible with third-party synthesizers. I could write such an interface, but how to register it as an accepted voice in the VoiceOver control panel is what I really don't know.
E.g., VoiceOver has to control speech: pause it, request speed changes, and so on.
It probably calls a library which accesses all the voice drivers, and these are probably written in a standard way, such that all of them respond to a function called, say, changeSpeed(int) and react accordingly.
Without knowing what interface the driver has to have and how we register this driver in the voices list, it will be very hard to make a working implementation.
Based on what I know about Apple, I'd say that these APIs and documentation are not available.
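Just to illustrate the kind of contract such a driver would presumably have to satisfy (this is purely a guess at its shape, not Apple's actual interface), it would be something along these lines:

/* Purely hypothetical sketch of the sort of driver interface a voice
   plugin might have to implement -- NOT Apple's actual API, just an
   illustration of the functions the system would need to call. */
#include <stddef.h>

typedef struct VoiceDriver {
    int (*speakText)(struct VoiceDriver *self, const char *text, size_t len);
    int (*stopSpeaking)(struct VoiceDriver *self);
    int (*pauseSpeaking)(struct VoiceDriver *self);
    int (*changeSpeed)(struct VoiceDriver *self, int wordsPerMinute);
    int (*changePitch)(struct VoiceDriver *self, int pitch);
} VoiceDriver;

/* Registration is exactly the unknown part: without documentation there
   is no telling how a filled-in VoiceDriver would be handed to the
   system so that it shows up in the VoiceOver voice list. */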
Well, I've got no idea
Well, I've got no idea naturally, but I've read articles on the net about companies providing third-party voices to VoiceOver, so it must be doable... The problem is going to be whether or not it's doable with freely-available APIs, or whether you have to use the ones Apple doesn't tell you about.
If a way can be found, I'd personally be very happy to pay any developer who could come up with a working system.
It can be done…
No, it can definitely be done. Apple has their Morse sample, and the speech API guide, plus the speech-dev list I linked to in my original comment. It just needs someone with more experience in both programming and speech synthesis to get it working.
Reading
Just reading through the speech API documentation now... It's mostly over my head, but I have two things on my to-learn list; the other is to use the Dvorak keyboard layout, and this seems slightly less horrible, LOL.
Cheers for the info.
eSpeak does work!
So, I do have a build, which uses the older Eloquence framework, except that the calls VoiceOver makes to the speech synth are quite different from what eSpeak expects, such that speech cannot be stopped once it starts (VoiceOver only; all other system apps work), and there's weirdness when reading. Did you know that VoiceOver sends words and phrases to the speech synthesizer so the TTS can infer context as to how to pronounce words?
If you wish to get the experimental build and work on it yourself, here you go:
https://www.dropbox.com/s/js8vikqsrnwhlbk/eSpeak.zip?raw=1
(Note: Mojave and High Sierra work best with this, and you must have SIP disabled to make the system modifications.)
Move the files into the synthesizers and voices directories according to their extensions, restart VO, and customize the voices. Check "Reed" and you're good to go. Note that 450 WPM is the 50% value, so if you need it slower, you should account for that before you switch voices.