June 27th
Restoration of voice
The DAC5311IDCKT and FRAM buffer have been reconnected.
This involves connecting to the same SPI CLK and data pins as the LCD display.
The DAC is connected to a unity-gain op amp, which in turn is connected to the toy's tapped audio input point.
The FRAM buffer is used to store the .WAV data that produces the desired audio waveforms.
The FRAM buffer also stores the corresponding lipsync file.
The lipsync file contains the mouth position (either open or shut) that directly corresponds to the .WAV value at that point.
To make the toy speak, the FRAM buffer is loaded with both the .WAV and lipsync data.
The MSP430 then reads both buffers, sending the current .WAV value to the DAC and the current lipsync value to the toy's mouth.
This gives the toy the illusion of speaking.
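The per-sample step of that loop can be sketched as below. This is only an illustration of the interleaved read: `dac_write` and `mouth_set` are hypothetical stand-ins for the real SPI/GPIO driver code, and on the MSP430 `play_frame` would be driven from a timer interrupt at the .WAV sample rate.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical driver hooks -- stand-ins for the real SPI DAC write
 * and the GPIO that drives the toy's mouth. Here they just record
 * the last value so the logic can be exercised off-target. */
static uint8_t last_dac_value;
static uint8_t last_mouth_state;

static void dac_write(uint8_t sample) { last_dac_value = sample; }
static void mouth_set(uint8_t open)   { last_mouth_state = open; }

/* Play one frame: send the current .WAV sample to the DAC and the
 * matching lipsync flag (0 = shut, 1 = open) to the mouth.
 * `wav` and `lipsync` are the two buffers loaded into FRAM. */
static void play_frame(const uint8_t *wav, const uint8_t *lipsync, size_t i)
{
    dac_write(wav[i]);
    mouth_set(lipsync[i]);
}
```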
Text to Speech and Lipsync Data Generation
For the proof of concept, the text-to-speech has been successfully tested on a Windows-based PC.
This functionality comes standard with Windows: you provide a text string and it produces a corresponding .WAV file.
Here is my test, written in VBA, which can be run from any Microsoft Office application.
It can easily be modified into a function that accepts an input string.
'Speech SAPI 5 version - Test
' Requires reference to "Microsoft Speech Object Library" (SAPI 5.1 or later)
Sub mctest()
Dim FileName As String
Dim FileStream As New SpFileStream
Dim Voice As SpVoice
Dim F As SpAudioFormat
' *** Beginning of MC alternate format definition***
Set F = New SpAudioFormat
F.Type = SAFT8kHz8BitMono
Set FileStream.Format = F
' *** End of MC alternate format definition ***
'Create a SAPI voice
Set Voice = New SpVoice
'The output audio data will be saved to ttstemp.wav file
FileName = "c:\temp\ttstemp.wav"
'Create a file; set DoEvents=True so TTS events will be saved to the file
FileStream.Open FileName, SSFMCreateForWrite, True
'Set the output to the FileStream
Set Voice.AudioOutputStream = FileStream
'Speak the text
Voice.Speak "Can you hear this?"
'Close the Stream
FileStream.Close
'Release the objects
Set FileStream = Nothing
Set Voice = Nothing
End Sub
The resulting .WAV file can be sent to the CC3000 via Wi-Fi to enable the toy to speak.
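Before those samples can be loaded into the FRAM buffer, the .WAV file's RIFF header has to be skipped to get at the raw 8-bit PCM data. A minimal sketch, assuming the canonical 44-byte PCM header layout (which is what the SAFT8kHz8BitMono format above produces; real-world files can carry extra chunks, which this deliberately ignores):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Locate the raw 8-bit PCM samples inside a .WAV image held in memory.
 * Assumes the canonical 44-byte layout: "RIFF"..."WAVE", an "fmt "
 * chunk, then a "data" chunk. Returns NULL if the header doesn't match. */
static const uint8_t *wav_pcm_data(const uint8_t *buf, size_t len,
                                   size_t *n_samples)
{
    if (len < 44 || memcmp(buf, "RIFF", 4) || memcmp(buf + 8, "WAVE", 4))
        return NULL;
    if (memcmp(buf + 36, "data", 4))
        return NULL;
    /* data chunk size: 32-bit little-endian value at offset 40 */
    *n_samples = (size_t)buf[40] | ((size_t)buf[41] << 8)
               | ((size_t)buf[42] << 16) | ((size_t)buf[43] << 24);
    return buf + 44;
}
```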
Any good puppeteer will tell you that producing suitable mouth movements is complex and requires lots of practice, so I don't think it can be done successfully with a simplistic scan of the .WAV data. If it were, I'd guess the mouth would start yabbering and look stupid under certain conditions.
I think this needs to be accomplished using a suitable algorithm that recognises at least the phonemes.
I'll have to look into this later.
Connection of additional peripherals.
To allow connection to several serial-based peripherals, I have obtained a TI CD4052 dual 4-channel multiplexer/demultiplexer and connected it to the MSP430FR5739 serial port that is usually connected to the USB debug interface.
Two MSP430FR5739 output pins are used to select the connected device (which, by the way, can be the USB debug interface).
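Selecting the active peripheral then comes down to driving the CD4052's two address inputs (A and B) from those two output pins. A sketch, with hypothetical pin masks and a plain variable standing in for the MSP430 port output register (e.g. P2OUT) so the logic can be shown off-target:

```c
#include <stdint.h>

/* Hypothetical pin assignments -- adjust to the actual wiring. */
#define MUX_SEL_A  (1u << 0)   /* CD4052 address input A */
#define MUX_SEL_B  (1u << 1)   /* CD4052 address input B */

/* Stand-in for the MSP430 port output register. */
static volatile uint8_t port_out;

/* Route the shared serial port to mux channel 0-3
 * (one of which is the USB debug interface itself).
 * Other bits of the port are left untouched. */
static void mux_select(uint8_t channel)
{
    uint8_t v = port_out & (uint8_t)~(MUX_SEL_A | MUX_SEL_B);
    if (channel & 1u) v |= MUX_SEL_A;
    if (channel & 2u) v |= MUX_SEL_B;
    port_out = v;
}
```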
Let there be Light!
Professional quality lighting can be accomplished using the C2000 Multi DC/DC LED Lighting Module.
This can be controlled via Wi-Fi using the CC3000.
The C2000 Multi DC/DC LED lighting module is connected to the MSP430 serial port via the CD4052 multiplexer/demultiplexer.
I have used Microsoft C# Express to understand the protocol used by the GUI example, so that the CC3000 can replace it.
Full details of the C2000 Multi DC/DC LED lighting module are accessible via the controlSUITE in CCS.
Another Surprise!!
I'm just about to give the CC3000 sight!