Great lesson today creating face controllers in the perspective model space, using NURBS curves (rectangles, circles and text) to control each facial expression or lip-sync phoneme shape...
The controllers then act as 'drivers' to the 'driven' blend shapes.
This gives two advantages.
One: The controllers are clearly visible in the viewport and are a simple representation of the blend shape command interface.
Two: The controllers are clearly visible in the graph editor, which in turn makes the animation of the blend shapes far easier to amend there, as each one can be selected individually in the graph and edited accordingly.
I am finding rigging to be very logical, and enjoyable in equal measure to animating!!
Above:
NURBS curves used to create the 'rectangular' controller box and 'circle' controller. The circle was then limited in its translate Y axis, so that its position would extend either above or below the box; its values were restricted and locked, '0' being the lowest position and '1' its topmost position.
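In Maya this limiting is set via the transform's limit information (or the transformLimits command), but the underlying behaviour is just a clamp. A minimal plain-Python sketch of that idea (the function name and range are my own, not Maya's API):

```python
def clamp(value, lo=0.0, hi=1.0):
    """Hold a controller's translate Y inside its allowed range,
    mimicking Maya's transform limits on the circle controller."""
    return max(lo, min(hi, value))

# The circle can only sit between the bottom ('0') and top ('1') of the box.
print(clamp(-0.3))  # 0.0 - pushed below the box, held at the bottom
print(clamp(0.5))   # 0.5 - free to move inside the range
print(clamp(1.7))   # 1.0 - pushed above the box, held at the top
```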
The pink circled areas indicate the controller, the corresponding blend shape values, and the 'Set Driven Key' option box.
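Set Driven Key essentially records (driver value, driven value) pairs and interpolates between them, so the controller's translate Y drives the blend shape weight. A rough stand-in for that behaviour in plain Python, assuming simple linear interpolation (Maya's actual animation curves can be eased, and these names are illustrative, not Maya's API):

```python
def driven_value(driver, keys):
    """Interpolate a driven attribute (e.g. a blend shape weight)
    from a driver attribute (e.g. a controller's translate Y),
    given (driver, driven) pairs as set with Set Driven Key."""
    keys = sorted(keys)
    if driver <= keys[0][0]:
        return keys[0][1]
    if driver >= keys[-1][0]:
        return keys[-1][1]
    for (d0, v0), (d1, v1) in zip(keys, keys[1:]):
        if d0 <= driver <= d1:
            t = (driver - d0) / (d1 - d0)
            return v0 + t * (v1 - v0)

# Controller at 0 -> 'smile' shape off; controller at 1 -> fully on.
smile_keys = [(0.0, 0.0), (1.0, 1.0)]
print(driven_value(0.5, smile_keys))  # 0.5
```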
Above:
Constraining and parenting the controller to the head joint (highlighted in green)
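Parenting the controller to the head joint means its world position is computed through the joint's transform, so it rides along as the head moves. A toy 2D version of that relationship (my own illustration, not Maya code):

```python
import math

def world_position(parent_pos, parent_rot_deg, local_offset):
    """World position of a child (the controller) parented to a
    joint (the head): rotate the local offset by the parent's
    rotation, then add the parent's translation."""
    r = math.radians(parent_rot_deg)
    x, y = local_offset
    rx = x * math.cos(r) - y * math.sin(r)
    ry = x * math.sin(r) + y * math.cos(r)
    return (parent_pos[0] + rx, parent_pos[1] + ry)

# The controller sits 2 units in front of the head joint.
print(world_position((0.0, 5.0), 0.0, (2.0, 0.0)))   # (2.0, 5.0)
print(world_position((0.0, 5.0), 90.0, (2.0, 0.0)))  # ~(0.0, 7.0)
```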
Above:
Tear-off copy of a camera viewport, shown on the right. This was not a 'standard' camera view: we created a 'facial camera' and then constrained the camera (in the perspective viewport) to the controller, meaning that when the head was rotated, the camera's view of the head appeared to remain static and in line with the head....only the body appeared to move. This would be utilised when creating an animated scene where a close-up of the face is desired simultaneously...
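The reason the head looks static is that the constrained camera is carried by the same transform as the head, so the head's position in the camera's frame never changes. A small plain-Python check of that idea, using 2D rotation (the point positions and names are my own illustration):

```python
import math

def rotate(point, deg):
    """Rotate a 2D point about the origin by deg degrees."""
    r = math.radians(deg)
    x, y = point
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

head = (0.0, 1.0)    # a point on the head
camera = (0.0, 3.0)  # the facial camera, constrained to the head

def head_in_camera_space(angle):
    """Where the head sits from the camera's point of view when the
    head rotates by 'angle' and the constrained camera rotates with it."""
    h = rotate(head, angle)
    c = rotate(camera, angle)
    world_offset = (h[0] - c[0], h[1] - c[1])
    return rotate(world_offset, -angle)  # express it in the camera's frame

for angle in (0.0, 45.0, 90.0):
    print(head_in_camera_space(angle))  # stays at (0, -2): head looks static
```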