
Calling all script writers!!

Posted: Sun Oct 23, 2011 4:26 pm
by Poptoogi
Is it possible to create some kind of dialog box where I can type in a word or words and have them keyframed with MY own set of phoneme mouths I've created in switch layers, stored in a database? Then, when I'm trying to lip sync over an animation, they can be recalled and it will auto-place as many keyframes as it can according to the database I've made.

For example, if I type the word "the" into the scripting dialog box, I'll always want it keyed with the mouths I've made: **Closed mouth-TH mouth-Ah mouth-Closed mouth**. Now, if the scripting box had a button that said something like "Add to library", you could push it, and then any time you typed out sentences for lip syncing you could hit another button that said something like "Keyframe available words or sounds", which would automatically place the keyframes in the timeline. Of course, you'd have to move the frames around to match the audio, but I'd still prefer doing that over keyframing every sound.
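To make the idea concrete, here is a minimal sketch of that word library in plain Python (not Anime Studio's Lua API). The file name, mouth names, and two-frame spacing are placeholders for whatever a real script would use:

```python
# Minimal sketch of the "word library" idea in plain Python (not Moho Lua):
# words map to a fixed sequence of switch-layer mouth names, and a sentence
# is expanded into evenly spaced keyframes that you then slide around by hand.
# The mouth names, spacing, and file name here are only placeholders.

import json

LIBRARY_FILE = "mouth_library.json"   # hypothetical library location

def add_to_library(word, mouths, library):
    """'Add to library' button: remember which mouths spell this word."""
    library[word.lower()] = mouths

def keyframe_available_words(sentence, library, start_frame=1, spacing=2):
    """'Keyframe available words' button: expand known words into
    (frame, mouth) pairs; unknown words are skipped for manual keying."""
    keys = []
    frame = start_frame
    for word in sentence.lower().split():
        for mouth in library.get(word, []):
            keys.append((frame, mouth))
            frame += spacing
    return keys

library = {}
add_to_library("the", ["Closed", "TH", "Ah", "Closed"], library)

# Save the library so it can be reloaded between sessions
with open(LIBRARY_FILE, "w") as f:
    json.dump(library, f)

print(keyframe_available_words("the cat", library))
# -> [(1, 'Closed'), (3, 'TH'), (5, 'Ah'), (7, 'Closed')]
```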

I don't know if this makes any sense to anyone, but if it does, let me know how doable it is or isn't. If anyone is willing, I would PAY for this type of script!!!!!

Posted: Sun Oct 23, 2011 8:26 pm
by funksmaname
Hey dude,
I'm not sure if I understand you correctly, but why not use a custom mouth set with Papagayo? That would export a .dat file you can use with your switch layer...

Posted: Wed Oct 26, 2011 1:17 pm
by Breinmeester
Papagayo has a text file that is basically a phoneme dictionary, meaning it has (almost) every word in the English language and indicates which mouth shapes each one is made of. It's in the program folder somewhere.
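For illustration, here is a rough sketch of how such a dictionary could be read, assuming a CMU-style file where each line is a word followed by its phonemes. The file name and the tiny phoneme-to-mouth mapping below are stand-ins, not Papagayo's actual data:

```python
# Sketch of reading a CMU-style phoneme dictionary and mapping phonemes to
# mouth shapes. Papagayo ships its own dictionary and phoneme-set files in
# its install folder; the mapping below is assumed and heavily simplified.

PHONEME_TO_MOUTH = {          # assumed, simplified mapping
    "DH": "TH", "TH": "TH",
    "AH": "Ah", "AA": "Ah",
    "K": "Closed", "T": "Closed",
}

def load_dictionary(path):
    """Each line: WORD  PH1 PH2 ... (stress digits like AH0 are stripped)."""
    entries = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2 or parts[0].startswith(";"):
                continue
            word, phonemes = parts[0].lower(), parts[1:]
            entries[word] = [p.rstrip("012") for p in phonemes]
    return entries

def mouths_for(word, dictionary):
    return [PHONEME_TO_MOUTH.get(p, "rest") for p in dictionary.get(word, [])]

# e.g. with a line "THE  DH AH0" in the file, mouths_for("the", d)
# would give ["TH", "Ah"].
```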

Now, timing those mouth shapes is a different thing, and I think Papagayo doesn't do the best job there. You'd need a program that analyses the waveform of the sound file to get a better result.

Also, as an animator I feel you should never trust automation techniques too much. In some cases the pronunciation of words is more convincing when using different phonemes or unconventional timing. But automated track reading could be a great basis to start from.

Posted: Wed Oct 26, 2011 2:49 pm
by hayasidist
There's a recent exchange on switch layers and how to control them at viewtopic.php?t=20188. So if you have a set of custom mouth shapes in a switch layer and you want to create the control for them "outside" AS, then create the .dat file with the frame number and the name of the vector layer...
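As a sketch, that switch-data file is just a "MohoSwitch1" header followed by one frame-number / layer-name pair per line (the format Papagayo exports), so writing one yourself is straightforward; the helper below is only an illustration:

```python
# Minimal sketch of writing a switch-data (.dat) file of the kind Papagayo
# exports and AS can load onto a switch layer: a "MohoSwitch1" header, then
# one "frame layer-name" pair per line. The keys list would come from
# whatever tool produced your timing.

def write_switch_dat(path, keys):
    """keys: list of (frame_number, sub_layer_name) tuples."""
    with open(path, "w") as f:
        f.write("MohoSwitch1\n")
        for frame, layer_name in sorted(keys):
            f.write("%d %s\n" % (frame, layer_name))

write_switch_dat("the.dat", [(1, "Closed"), (3, "TH"), (5, "Ah"), (7, "Closed")])
```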

If you want to change the pronunciation of words and/or the phoneme set used, take a look at viewtopic.php?t=19382 and http://www.lostmarble.com/papagayo/index.shtml for some ideas.

Changing the pronunciation of a word depending on context is, I fear, a bridge too far for the automation, but it can be done manually -- you'll need to invent a new spelling. For example, the baseline word is "the" and variants are thu, ther, de ... Or (again e.g.) to distinguish between the UK and US pronunciation of "aluminium" in the same sentence, you'd have the UK version spelt (say) aluminyum, mapped (phonetically) to ah-loo-MIN-ee-um (not the US ah-LOO-min-um). These new words can be added to additional dictionaries with mappings to standard or bespoke mouth shapes -- again see http://www.lostmarble.com/papagayo/index.shtml
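A hedged sketch of that last step, assuming a CMU-style user dictionary where each line is the invented spelling followed by its phonemes (the file name here is made up; see the link above for where Papagayo actually looks):

```python
# Sketch of adding an invented spelling to a user dictionary so it breaks
# down the way you want. The file name and the CMU-style line format are
# assumptions, not Papagayo's documented behaviour.

def add_custom_word(path, word, phonemes):
    """Append e.g. 'ALUMINYUM  AH L UW M IH N IY AH M' to a user dictionary."""
    with open(path, "a") as f:
        f.write("%s  %s\n" % (word.upper(), " ".join(phonemes)))

add_custom_word("user_dictionary.txt", "aluminyum",
                ["AH", "L", "UW", "M", "IH", "N", "IY", "AH", "M"])
```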