Posted: Fri Sep 23, 2011 8:51 pm
That's a good suggestion about using hotkeys to trigger predefined actions, Hayasidist. Trying to queue up multiple actions might be difficult to manage in real-time, though. I think you would be better off allowing an action to be triggered before the current one is finished and letting the script mix the actions together, like my MorphDials script does. You could even use a modifier key to make the action loop. I think you could create some pretty interesting combinations even with just a few basic actions, which I guess is the basis of puppetry.
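To illustrate the mixing idea, here's a minimal sketch in Python. The pose representation and function names are purely hypothetical (this is not the MorphDials API), but the core of it is just a weighted blend between the current pose and the incoming action's pose:

```python
# Hypothetical sketch: cross-fading a newly triggered action into the
# current one instead of queuing it. Poses are dicts of channel -> value.

def blend(current_pose, new_pose, t):
    """Linearly mix two poses by weight t in [0, 1].

    Channels missing from new_pose keep their current value.
    """
    return {channel: (1 - t) * current_pose[channel]
            + t * new_pose.get(channel, current_pose[channel])
            for channel in current_pose}

# Example: halfway through a crossfade from a wave into a nod.
wave = {"arm": 45.0, "head": 0.0}
nod = {"arm": 45.0, "head": -20.0}
print(blend(wave, nod, 0.5))  # {'arm': 45.0, 'head': -10.0}
```

Ramping t from 0 to 1 over a few frames gives you the overlap: the new action takes over smoothly while the old one is still playing.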
Also, I disagree that you would have to adhere to a standard set of supported actions. You could easily just have a text file that maps shortcut keys to actions.
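The text file could be as simple as one "key = action" pair per line. A rough sketch of what reading it might look like (the format and names here are just my assumption, not anything standardized):

```python
# Hypothetical sketch: a plain-text keymap, one "key = action" per line,
# so each user can define their own set of actions with no fixed standard.

KEYMAP_TEXT = """\
# puppet keymap
w = wave
n = nod
b = blink
"""

def parse_keymap(text):
    """Parse "key = action" lines into a dict, skipping blanks and comments."""
    keymap = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, action = (part.strip() for part in line.split("=", 1))
        keymap[key] = action
    return keymap

print(parse_keymap(KEYMAP_TEXT))  # {'w': 'wave', 'n': 'nod', 'b': 'blink'}
```

The script would just look up the pressed key in that dict and trigger whatever action name it finds, so adding a new action is a one-line edit to the file.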
About the lip-syncing, I'm sure it could be done in real-time. After all, you really only need the front end of a speech recognition engine, since there's no point going from speech to phones to phonemes to words then back to phonemes to get the visemes. I think you could just go speech to phones to visemes. In fact, since it's puppetry, you might be able to get away with just using the volume to control the mouth, like AS used to do before it got a more powerful lip-sync engine.
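The volume-driven approach is especially cheap to do in real-time: compute the RMS level of each incoming audio chunk and map it to how open the mouth is. A minimal sketch, with a made-up gain parameter you would tune by ear:

```python
# Hypothetical sketch: drive mouth openness from audio volume, the way
# early Anime Studio lip-sync did, instead of doing phoneme recognition.

import math

def mouth_openness(samples, gain=2.0):
    """Map a chunk of audio samples (floats in [-1, 1]) to openness in [0, 1].

    gain is an illustrative tuning knob: higher values make quiet
    speech open the mouth more.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return min(1.0, rms * gain)

silence = [0.0] * 256
loud = [0.5, -0.5] * 128
print(mouth_openness(silence))  # 0.0
print(mouth_openness(loud))     # 1.0
```

In practice you would also smooth the value across frames so the mouth doesn't flutter on every syllable, but even this crude mapping is convincing enough for puppetry.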
Also, DK, that Animata software does look pretty good. Since it's open-source and built for the task, I think it would be a lot easier to build your platform around it instead of AS, but it would require getting familiar with it first. I'm sure it would be a lot better for supporting specialty controllers, like the Kinect or a 3D mouse.
