lipsync... hard to do?

General Moho topics.

Moderators: Víctor Paredes, Belgarath, slowtiger

Post Reply
rpc9943
Posts: 227
Joined: Tue Dec 06, 2005 11:10 pm

lipsync... hard to do?

Post by rpc9943 »

hey

just wondered, what is the lip sync like in Moho? I'm very interested here; please tell me, is it a hard process? Especially since I read you can only have one soundtrack, which sucks, but how would you line it all up? Is there audio preview as you change frames?

(no need to worry, I own Adobe Audition, which people have apparently recommended)

RonC
User avatar
7feet
Posts: 840
Joined: Wed Aug 04, 2004 5:45 am
Location: L.I., New Yawk.
Contact:

Post by 7feet »

In a word, no. I did some of my own work for lip sync, but Papagayo works really well. Audio file, type in the words, and you're generally good to go. If you find some software where it's easier, let me know.
User avatar
mr. blaaa
Posts: 622
Joined: Sun Jul 31, 2005 12:41 am
Location: ---
Contact:

Post by mr. blaaa »

Exactly.
Basic AND advanced lip syncing in Moho is very easy.

I. BASIC LIPSYNCING:

The basic lip sync process is already integrated in Moho:
you create a switch layer and, for the very basics, put several mouth shapes into it, from open to closed.
Then you edit the properties of that parent switch layer and import switch data.

Let's say you bounced a dialogue between two characters into "dialogue1.wav".
You only need to tell Moho that this file is the switch data source.
Moho then generates all the necessary keys for your switch layer.
There you are. See how simple that is? :wink:

That was the very basic lip sync process.

But you can improve this for a better quality:

1. Get yourself a good mic. A crackling audio source will throw off Moho's automatic gain-based key generation.
So the rule of thumb for the audio source is:
the less crackling, the better for Moho.

2. With good audio software you can bounce the voice channels for each character separately. This will save you a bit of work, so if your audio software is able to export single channels, it's worth doing.
But if not, it won't be a pain in the ass.

If you have two voices in your "dialogue1.wav" source, the generated switch data will not distinguish between voice 1 and voice 2.
As you may guess, the keys of the switch layer are generated by reading gain-related information,
so the keys for voice 1 and voice 2 end up combined in your timeline as one stream of keyframes.
Of course we don't want character 1 to move his mouth when character 2 is talking, right?
Now comes the workaround:
You simply import the same *.wav file you used as the switch data source as your "animation soundtrack".
Now you can see the audio track in the timeline and play it back right in Moho,
so you should be able to tell voice 1 from voice 2 manually. By yourself. By your ears.
Then you simply play it back a couple of times and delete the keyframes that aren't needed in the switch layer of voice 1.
Later on you repeat this for voice 2.

(You can improve on this; I myself use cats, dogs, and llamas for this kind of work, because their ears are so much better than mine :roll: )

II. ADVANCED LIPSYNCING:
What I call advanced lip syncing is an improved result, the high standard.
It will cost you a bit of time and preparation, but hey, you can't make something out of nothing.
Now what were we about to do?
Err... right, we wanted to make our lip sync better.

(You know, if you make an animation with very good results, you will have made yourself elite.
All animators who have done good lip sync jobs are in a club called the
"Order of the cool animators who made such a damn good job" (O.T.C.A.W.M.S.A.D.G.J).
In that club all members wear hats with a big band around them, imprinted with "Am I Cool or What?".
They spend the whole day doing what they like to do, because in their mansions they simply have everything.)

Let's say you want to make a separate drawing for every single phoneme,
so the drawing for the "A" sound looks different from, e.g., the "O" sound.
Logical, right?

Okay let's get it on.

There are three lip syncing programs available for use with Moho: Pamela, Magpie, and Papagayo. I think Papagayo is the best of them all, so:

Get yourself Lost Marble's Papagayo.

With this very simple but cool program you can import and view your audio track and type in (or copy and paste) the spoken words.
Then you simply rearrange the words until they are synchronized with the audio track.
When you are happy, you export the data you created with Papagayo in the *.dat format.
This data tells Moho exactly when to use which drawing for each phoneme.

In Moho, first create a switch layer.
It must contain the following sublayers:
AI, E, etc, FV, L, MBP, O, rest, U, WQ.
Logically, you will have to create a drawing for each phoneme.
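For the curious: as far as I recall, Papagayo's Moho export is just a plain text file with a "MohoSwitch1" header followed by one "frame phoneme" pair per line. A tiny sketch of a parser, assuming that layout (the `parse_switch_dat` helper is my own; check against a real export before relying on it):

```python
def parse_switch_dat(text):
    """Parse a Papagayo-style Moho switch-data file into
    (frame, phoneme) pairs, one per keyframe."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if not lines or lines[0] != "MohoSwitch1":
        raise ValueError("not a Moho switch-data file")
    keys = []
    for ln in lines[1:]:
        frame, phoneme = ln.split()
        keys.append((int(frame), phoneme))
    return keys
```

Each phoneme name in the file must match one of the sublayer names above, which is why the sublayers have to be spelled exactly that way.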

If you don't know how to draw the different phonemes properly, these links should give you the basic knowledge:
http://www.garycmartin.com/mouth_shapes.html
and
http://www.garycmartin.com/phoneme_examples.html

Finally, once you've done all the drawing, just import the *.dat into the switch layer and it will look fine.

Of course there are some more steps you can take to improve this, but for now I won't tell...
8)
User avatar
spoooze!
Posts: 689
Joined: Fri Feb 18, 2005 11:42 pm
Location: USA
Contact:

Post by spoooze! »

Hey,
Did you email me from the Hiya website?
If so, I'm finishing up answering it.

Anyway:
This is the way I do lip sync:

1. I set up the shot with all the characters and backgrounds, in whatever poses I want my characters to be in (I call this the "starting pose").

2. I animate the body language to the dialogue. I don't touch the lip sync yet.

3. I then go in and animate the lip sync by hand, by manipulating the vectors in the head layer. I also animate the teeth and tongue by hand.

I've never been a fan of automated lip sync with the switch layers and such. It doesn't really let you add emotion to the mouth.

James 8)
User avatar
Rasheed
Posts: 2008
Joined: Tue May 17, 2005 8:30 am
Location: The Netherlands

Post by Rasheed »

spoooze! wrote:I've never been a fan of automated lip sync with the switch layers and such. It doesn't really let you add emotion to the mouth.
Adding emotion can be done with switch layers as well. You just need a different set of mouth shapes, and a switch layer to switch between the happy mouth set, the angry mouth set, etc. And emotional expression in a face is a funny thing; look at the sequence of expressions in this movie:
http://members.home.nl/rene.van.belzen/ ... tfaces.mov

So it seems you can put some emotion into a face with the eye region alone. In theory, you can use the same mouth shapes for speech and different sets of eye/eyebrow shapes for expressing emotion.

BTW, I only used switch layers in this animation.
User avatar
mr. blaaa
Posts: 622
Joined: Sun Jul 31, 2005 12:41 am
Location: ---
Contact:

Post by mr. blaaa »

Rasheed wrote: Adding emotion can be done with switch layers as well. You just need a different set of mouth shapes, and a switch layer to switch between the happy mouth set, the angry mouth set, etc.

http://members.home.nl/rene.van.belzen/ ... tfaces.mov
Rasheed, nice demonstration! I also work with switches to get emotion into my animations, and as far as I can tell, it works marvelously!

Especially when you have interpolated switch data.
User avatar
Rasheed
Posts: 2008
Joined: Tue May 17, 2005 8:30 am
Location: The Netherlands

Post by Rasheed »

mr. blaaa wrote:I also work with switches to get emotion into my animations, and as far as I can tell, it works marvelously!
Of course, you can also do things without switch layers, but switch layers can save a lot of work. It also depends, I guess, on your workflow and personal preferences.
mr. blaaa wrote:Especially when you have interpolated switch data.
I know, but sometimes you don't want that kind of realism and would rather suggest something than actually show it.

For those who don't already know: for interpolation to work, all child layers of the switch layer must have matching points (the same number of points, connected exactly the same, point to point). In essence, you start off with one shape in a vector layer, make several copies of that layer, and modify the shape in each copy without adding or removing points or curves. You can, however, modify the style of the curves (color, line width, etc.).
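The reason for the matching-points rule is that interpolation just blends corresponding points; with different point counts there is no correspondence to blend. A sketch of the idea (nothing Moho-specific, the `interpolate_shape` helper is made up for illustration):

```python
def interpolate_shape(points_a, points_b, t):
    """Linearly blend two shapes given as (x, y) point lists;
    only meaningful if the points correspond one-to-one."""
    if len(points_a) != len(points_b):
        raise ValueError("shapes must have the same number of points")
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(points_a, points_b)]
```

At t = 0 you get the first shape, at t = 1 the second, and in between a smooth in-between mouth, which is exactly what interpolated switch data buys you.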

Before version 5.3 you were very limited in this, because there was no way to hide unwanted lines (Hide Edge didn't interpolate) other than changing the transparency of a line. Now you can also vary the line width between zero (invisible) and greater than zero (visible), so in theory interpolation can be smoother than with transparency alone.

Can you (or anyone else) show me some examples of this new type of interpolation between switch layers? I would love to see those! It doesn't have to be a finished piece of animation. Alas, my drawing skills are too limited, otherwise I would try to show some examples myself.
User avatar
spoooze!
Posts: 689
Joined: Fri Feb 18, 2005 11:42 pm
Location: USA
Contact:

Post by spoooze! »

Adding emotion can be done with switch layers as well. You just need a different set of mouth shapes, and a switch layer to switch between the happy mouth set, the angry mouth set, etc.
Everything you said was true :)
I guess it's pretty much my personal preference to do it by hand. I dunno, I just feel like I have more freedom with the mouth. I can make it as wide or as small as I like without having to draw out every switch layer.
I think it looks smoother also.

But I DO use switches for the eyes, hands, and eyelids, because they look fine without doing it all by hand.

Oh well, to each his own ;)

James 8)

BTW, nice animation :D
Post Reply