[DECtalk] DECtalk TTS licensing

Jayson Smith jaybird at bluegrasspals.com
Mon Aug 30 13:11:02 EDT 2021


Hi,

Slight correction. Stephen Hawking's voice sounded very much like 
DECtalk, but it wasn't DECtalk, although I don't know how much of its 
code was based on it. His speech synthesizer was a device called the 
CallText 5010, by a company called Speech Plus, manufactured if I 
recall correctly in 1986.

Jayson

On 8/30/2021 12:32 PM, Karen Lewellen wrote:
> speaking personally?
> Yes, I do believe a set of clear, easy-to-understand voices 
> should apply to everything I do with my computer... after all, one does 
> not get a different set of ears, or a different brain, to read email, 
> research the net, shop, or read a book.
> Nor does one get a different set of eyes for a different task.
> The *major* problem with your method, for the end user, is quality 
> and consistency across activities: having to configure things over and 
> over again leads to poor performance.
> With my DECtalk, I can choose from nine voices, one for the main 
> work and one for alerts, and just. get. to. work.
> My hands stay on the keyboard, and I know that the voices will 
> provide consistent information no matter what I am doing, just as the 
> human body actually works.
> I say: please give this gift back to end users who are not programmers, 
> and who just want to have baseline quality and consistency...
> and communication. After all, there is a reason why DECtalk remained 
> the synthesizer for Stephen Hawking:
> being understood, as well as flawless understanding.
>
>
> On Mon, 30 Aug 2021, Don wrote:
>
>> On 8/30/2021 6:06 AM, Devin Prater wrote:
>>>  I mean, there is eSpeak.
>>
>> There are *lots* of (FOSS/exposed) synthesizers out there,
>> if you are looking to incorporate one into a product (and
>> likely need to work with sources)!
>>
>> It's an ancient technology that has been replicated by many
>> people in many different ways over the past 4 decades (longer
>> if you want to look at cruder synthesis technologies).  You
>> just have to decide what features you want *in* the synthesizer
>> and what resources you have to devote to it.
>>
>> ["Making noises" that sound like bits of speech is relatively easy]
>>
>> DECtalk made sense in the 80's -- when resources were scarce
>> and the synthesizer had to be a "bag" that was bolted onto
>> an existing product/system.  It was a "one-size-fits-all",
>> standalone solution to "converting text into speech".  But,
>> *it* had no idea what purpose it was serving in any particular
>> application.  So, it could never adjust its approach to
>> synthesis to match the expectations placed on it.
>>
>> Nowadays, one would *integrate* the synthesizer into the
>> product/system to improve performance, intelligibility, etc.
>>
>> Do you really think ONE set of synthesis rules should apply to:
>> - reading your email
>> - reading a URL
>> - reading a password
>> - reading a web page
>> - reading a novel
>> - reading a child's book
>> - reading stock tickers/quotes
>> - reading picture captions
>>
>> A synthesizer should understand context; HOW it is being used
>> AT THIS PARTICULAR TIME.  If you push that responsibility into
>> the application that is driving the synthesizer, then much
>> of the value of the "TTS as black box" disappears -- you're
>> doing the work *for* it!
>>
>> Why not just implement a "LETTER-to-speech" synthesizer, and
>> *spell* everything out loud?  (Ans:  because, while it could
>> TRULY be "one-size-fits-all" -- since it pushes all of the
>> real work into the listener's brain -- it would be incredibly
>> unhelpful.)
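
[Editor's note: Don's point about context-dependent rules can be sketched
as a thin preprocessing layer that sits between the application and a
generic TTS engine. The contexts and rewrite rules below are illustrative
assumptions, not the API of DECtalk, eSpeak, or any real synthesizer.]

```python
# A minimal sketch of context-aware text preprocessing for TTS.
# The idea: the *application* knows whether it is reading a password,
# a URL, or ordinary prose, and rewrites the text accordingly before
# handing it to a context-blind synthesizer.

def preprocess(text, context):
    """Rewrite text so a generic synthesizer reads it sensibly."""
    if context == "password":
        # Spell every character, distinguishing case and punctuation,
        # since "one-size-fits-all" prose rules would garble it.
        names = {".": "dot", "-": "dash", "_": "underscore"}
        out = []
        for ch in text:
            if ch.isupper():
                out.append("capital " + ch.lower())
            elif ch in names:
                out.append(names[ch])
            else:
                out.append(ch)
        return ", ".join(out)
    if context == "url":
        # Expand punctuation that prose rules would swallow or misread.
        return (text.replace("://", " colon slash slash ")
                    .replace(".", " dot ")
                    .replace("/", " slash "))
    # Default context: plain prose, pass through unchanged.
    return text

print(preprocess("aB9", "password"))   # spelled out character by character
print(preprocess("example.com/page", "url"))
print(preprocess("Hello there.", "prose"))
```

In a real screen reader or reading application, the same engine call is
made in every case; only this small, per-context rewrite differs, which
is the "integration" Don describes above.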
>> _______________________________________________
>> Dectalk mailing list
>> Dectalk at bluegrasspals.com
>> https://bluegrasspals.com/mailman/listinfo/dectalk
>>
>>
>>
>