CS408 Animation Software Design
Answers to Sample Exam Questions
1. Define the following terms
a) computer animation - the use of computers to create a sequence of images in order to
simulate an event involving motion; computer animation is used on the internet, in
movies, on tv, and in digital simulators.
b) persistence of vision - the sensation of continuous imagery produced by individual
images being shown in sufficiently rapid succession.
c) playback rate - the number of images displayed per second; it determines flicker.
d) sampling rate - the number of different images that occur per second; it determines
strobing.
e) thaumatrope - flat disc with an image on both sides and two strings connected to either
edge. By twirling the strings the disc is spun and the images are superimposed on one
another.
f) zoetrope - (wheel of life), a short fat cylinder that rotates on an axis of symmetry with a
series of still images around the inside with long vertical slits in between them so that the
viewer can see the opposite side. When it is spun, the sequence of slits presents the
sequence of images, creating the illusion of motion.
g) multiplane camera - developed by Walt Disney; a camera is mounted above multiple
planes, each of which holds an animation cel. Each plane can be moved in 6 directions
(up, down, left, right, in, out) and the camera can move farther away or closer in order to
create parallax and produce the illusion of depth.
h) stop-motion animation - involves first modeling objects and an environment, and then
taking still images of the scene, between which the objects are manipulated in order to
produce the illusion of continuous motion.
i) squash and stretch - displays mass by changing form (e.g., during a collision); can be
used to show exaggeration and good action flow.
j) slow in and slow out – a method of suggesting physics such as friction and inertia by
making motions between still poses gradually increase in speed at the beginning and
decrease in speed at the end.
2. Define the following terms
a) sequence - major episode (usually associated with a single area).
b) shot - continuous camera recording.
c) frame - single recorded image.
d) storyboard - layout of action schemes using sketches meant to represent frames at key
moments in a yet-to-be-created animation.
e) key frame - created by lead animators so the frames between the key frames can be
interpolated by other animators. Keyframes are extremes that are identified and produced
by the master animators to help confirm character development and image quality.
f) inbetweening - creation of the frames between the keyframes, traditionally done by the
assistant animators.
g) linear editing - type of editing where the shots are assembled in the order in which they
will appear in the final animation.
h) nonlinear editing - type of editing where sequences can be inserted in any order at any
time to assemble the final presentation.
i) rendering - process of using the components of an animation and converting them to a
format that allows them to be viewed, i.e. generating the images used in the animation.
3. To simulate the multiplane camera using computerized techniques you could generate
a series of images (with some form of transparency), each image corresponding to a
different layer of the multiplane camera, then use the images as textures for a set of
planes generated in Maya, 3D Studio Max, etc. After doing that, set up the camera in the
environment above the planes. This allows for the same range of motion of the images
and camera that the multiplane camera uses.
4. i) Animation language - a procedural, special-purpose programming language
particularly suited to constructing animations, e.g. AL.
advantages: powerful, the user can control every detail, cleaner and easier than an
animation hard-coded into a traditional programming language.
disadvantages: steep learning curve, often not intuitive to use, may be slower in
execution.
ii) Graphics software - computer program designed to create still images used to make
each frame. Files representing the frames are joined together later.
advantages: can generate very detailed images, allows the most control of each
individual frame.
disadvantages: very tedious, must go through two stages of processing to create
an animation, the images may not create a smooth consistent animation.
iii) Animation software - computer programs that provide interactive creation and
manipulation of the animation, eg. Maya, 3D Studio Max, etc.
advantages: most intuitive and easiest to learn to use.
disadvantages: may not be able to control every detail of the animation, must be
rendered, and can be slow.
iv) Hard-coded animation - use of a traditional programming language with calls to
some graphics API in order to produce an animation.
advantages: fast execution if done well, language may already be known by
animator.
disadvantages: coding can be very tedious to produce even simple animations.
5. The gulf of evaluation is the problem of bridging the gap from computer output to the
brain (going from the screen to the user's mental model). The three phases are
perception, interpretation, and making sense.
Perception uses the Gestalt principles of closure, continuity, proximity, similarity, area,
and symmetry. It also involves careful organization of the interface to make essential
information easy to perceive.
Interpretation uses visual language, familiar words and pictures, abstracted icons, and
affordances.
Making sense involves consistency to help users make sense of information (gives them
some default expectations). Also, visual metaphors are used, hierarchical information
structure, maps, fisheye views, focus + context, semantic filtering, and multiple
coordinated views may be used to provide a way for the user to better understand the
information being presented to them.
3D Studio Max:
Collapsible lists of properties allow users to focus only on the important factors of the
properties available without being confused by enormous amounts of information.
It uses a multiple coordinated view of the layout of the frame. This allows the user more
precision in modifying objects, as well as giving them a better sense of what is happening
in the program.
3D Studio Max has well-organized toolbars. They separate many of the available options
into well-defined categories, making it easier to perceive that these objects do not have
the same sort of operations. The sets of icons within each tab pane exemplify the Gestalt
principles.
Maya:
The object creation icons are all together, displaying the proximity property.
The icons for object translation, rotation, and scaling show those actions being performed
on an object, and are thus an example of clear affordance.
Many icons are organized under tabs for their respective groups which form a hierarchy.
6. The gulf of execution is the problem of bridging the gap from the mental model to
computer input. It involves identifying the task goal, system goal, action plan, and
execution.
Identifying the task goal is deciding what you want to accomplish.
Selecting a system goal involves semantic directness, direct manipulation, command
languages, and opportunistic behaviour.
The action plan is the series of steps that need to be taken to accomplish the system goal.
Action plans use things like affordances, chunks, and windows to help users, whereas
things like interference can be a problem.
Execution of the action plan is physically carrying out the actions required in the action
plan. This requires articulatory directness, pragmatics, feedback, and short cuts such as
keyboard shortcuts, macros and defaults to cut down the amount of work the user must
do.
3D Studio Max:
When rendering, feedback is provided on how far the rendering has progressed.
The autokey option can reduce the amount of work the user has to do to keyframe
objects.
Default settings are provided for any modifiers used on an object, so that the user does
not have to define each property value.
Maya:
Users can control the program through Maya’s command language, MEL, which can be a
quick and precise way for experienced users to produce animations.
The workspace can be viewed either as one or four viewports, which is an example of
windowing.
The translation, rotation, and scaling of objects can be done with the mouse using handles
on the objects, which is more direct, or by typing in values for the various parameters.
7. Maya:
When the program is opened for the first time, it brings up a screen providing instructions
on how to navigate the screen as well as links to tutorials.
Another example is when you try to close the program without saving the scene it pops
up a dialog box asking if you would like to save the current scene.
3D Studio Max:
The user can immediately specify the dimensions and initial coordinates of objects when
they are created.
The program also includes an option for rendering where it will automatically send an
email when it is complete, allowing the user to leave but still find out when this often lengthy process is done.
8A. Maya:
To make a particle system, one must select the object from which the particles are
desired, and make it an emitter. Particles then come out of the vertices of this object, and
one can specify the rate, number, colour, size, shape, etc. of the particles themselves.
These particles can also be affected by fields applied to the emitter, which control their
velocity, direction, etc.
8B, 8C, 8D: No answers available
9. Rotate: Rotate the Part of Body at the joint, around an axis perpendicular to that
running the length of the Part of Body.
Twist: Twist the Part of Body by rotating it around an axis running the length of the
Part of Body.
Translate/Pull: Move the Part of Body along the camera plane (Screen left and right, up
and down), possibly moving with it other connected Parts of Body. Pulling it too far
will cause unpredictable results.
Translate In/Out: Move Part of Body towards or away from camera, possibly moving
with it other connected Parts of Body.
Scale: Change the size of the Part of Body in any direction.
Morph: Allows fine-grained editing of the Part of Body.
10.
i. Squash & stretch displays mass by changing form (ex/during collision), can be
used to show exaggeration and good action flow.
ii. Timing is how actions are spaced according to weight, size, etc, giving
appropriate duration to actions.
iii. Secondary actions support main action, demonstrate reaction.
iv. Slow in & slow out of poses show how things move through space.
v. Arcs show how things move through space due to other forces (i.e. gravity).
vi. Follow though/overlapping action displays flowing actions.
vii. Exaggeration accentuates important movements or traits.
viii. Appeal keeps audience interested.
ix. Anticipation sets up upcoming actions so the audience knows it (or something)
is going to happen.
x. Staging presents an action so the audience does not miss it.
xi. Straight ahead versus pose to pose is progressing from a point and developing
the motion continually along the way versus having key-frames and interpolating
intermediate frames.
11A. open the file
read the file header, make sure it is a bitmap by checking to see if the first two characters
are "B" and "M"
read the image header character by character; you need only record the image size (x, y)
and the number of colors in the palette
read the palette
You already know how many colors are in the palette. For each color, 4 values
(blue, green, red, and an extra one used for masking that you can ignore) are listed
for i < PaletteSize
{
for j < 4
{
read the character
}
}
allocate storage for 3 arrays, one for each color: red, green, blue using malloc.
the array size depends on the size of the image. Try
ArraySize = ( xImageSize ) * ( yImageSize ) * sizeof ( int );
read the data (for each pixel, starting on the lower-left corner, put read values into your 3
arrays)
for j < yImageSize
{
for i < xImageSize
{
read one character into blue array [j][i]
increment blue counter
read one character into green array [j][i]
increment green counter
read one character into red array [j][i]
increment red counter
}
}
close the file
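For comparison, here is a minimal Python sketch of the same procedure, written for an
uncompressed 24-bit BMP (so any palette is skipped and the blue, green, red bytes are read
per pixel); the function name and the use of the struct module are illustrative, not part of
the course material:

import struct

def read_bmp(filename):
    with open(filename, "rb") as f:
        data = f.read()
    if data[0:2] != b"BM":                     # check the "B", "M" signature
        raise ValueError("not a bitmap file")
    pixel_offset = struct.unpack_from("<I", data, 10)[0]   # where the pixel data starts
    width, height = struct.unpack_from("<ii", data, 18)    # image size (x, y)
    # allocate one 2D array per colour channel
    red   = [[0] * width for _ in range(height)]
    green = [[0] * width for _ in range(height)]
    blue  = [[0] * width for _ in range(height)]
    row_size = (3 * width + 3) // 4 * 4        # each row is padded to a multiple of 4 bytes
    # pixel data starts at the lower-left corner, stored as blue, green, red
    for j in range(height):
        for i in range(width):
            pos = pixel_offset + j * row_size + 3 * i
            blue[j][i], green[j][i], red[j][i] = data[pos], data[pos + 1], data[pos + 2]
    return red, green, blue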
11B. No answer given here.
12A. i. playback rate - how fast it is displayed can minimize flicker, can it be controlled
by the user?
ii. start and stop points - can they be set or changed, can the length be changed, can
you play only a section of the animation?
iii. keyframes - are they visually indicated, can they be manipulated, can you
interpolate or approximate values between the keyframes?
iv. dynamic behaviour - can it be represented on the timeline?
v. frames vs. seconds - can you toggle between dividing the timeline into frames or
into seconds, which is more user friendly?
vi. timelines - are there local timelines for individual objects, layers, groups, etc. in
addition to the global timeline?
vii. scale - what is the scale of the timeline, can it be adjusted, and does it include some
sort of fisheye view?
viii. setting keyframes - do keyframes have to be manually set or are they implicitly
added when an object is somehow manipulated?
*examples
12B. No answer given here.
13.
Let the point (x, y) lie at a distance h from the origin, at an angle c above the x axis, and
rotate it by an angle b about the origin to get (x', y'), which lies at angle a = b + c:
h = sqrt(x^2 + y^2) = sqrt(x'^2 + y'^2)
x = h cos c        y = h sin c
x' = h cos a       y' = h sin a
cos a = cos(b + c) = cos b cos c - sin b sin c
sin a = sin(b + c) = sin b cos c + cos b sin c
x' = h cos b cos c - h sin b sin c = x cos b - y sin b
y' = h sin b cos c + h cos b sin c = x sin b + y cos b
In matrix form:
[x']   [cos b  -sin b] [x]
[y'] = [sin b   cos b] [y]
14. a) [16 8]T
b) [3 -2] T
c) [3 1 -8] T
d) 3*4 + 2*5 + 3*1 = 25
15. a)
b)
16. [1 0 0](Uy*Vz - Uz*Vy) + [0 1 0](Uz*Vx - Ux*Vz) + [0 0 1](Ux*Vy - Uy*Vx)
= [1 0 0](0) + [0 1 0](0) + [0 0 1](2*4 - 3*1)
= [0 0 1](5)
= [0 0 5]
17. a) positional, tangential, curvature
b) none
c) positional
d) positional
e) positional
f) positional, tangential
18. a) Native binary format is a binary representation of the state of a program at a certain
point in time. For an animation program, such a file will contain all relevant data about
the state of objects at each keyframe, and anything else that is necessary for reloading the
animation. Such a format is specific to a particular piece of software.
b) For each type of shape (in a particular order), write to the file the number of objects
of that type, and then for each one, write its static properties, followed by the number of
keyframes for that object, and finally for each keyframe, the values for the properties that
can be animated. To reload this information, it would be read back into the program in
the same order in which it was written out, using the numbers of objects and keyframes
for each object to control what to read in next. The rectangle objects would require
position (say of the top left corner), height and width information. Ovals would require
position (likely of the center), radius, and height to width ratio. Finally, curved paths
would require positions of all points on the segment, both end points (if applicable), and
control points. Information relevant to all shapes would be colour, transparency, etc.
19. Flocking behavior: This could be stored in native binary format by writing out the
positions of the members of the flock, the constant of attraction, the safety distance, the
flock center, the migratory urge, and other relevant information to the file.
Gaseous phenomena: If the particle-based approach is used, the positions, colours,
velocities, sizes, lifetimes, and other such information should be written to the file. If the
grid-based approach is used, write out the 2D or 3D array and for each cell containing
gas, the density, velocity vector, etc.
20. An animation language is a procedural, special-purpose programming language
particularly suited to constructing animations, e.g. AL, MEL
eg. in a hypothetical language:
//make a box with width 10, height 40
some_box = box(10, 40);
//set the current frame to 25
setCurrentFrame(25);
//scale the box 20% along width, 50% along height
scale(some_box, .2, .5);
Example AL code:
(CAMERA "main" "perspective"
  :TO (VEC 3 0 0.5 0)
  :FROM (VEC 3 1 1 3)
  :FOV 45)
(MOTION-BLUR ON)
(LIGHT-SHADOWS OFF)
(POLYGON :P
  '(#<-1 0 0> #<0 1 0> #<1 0 0>))
21. advantages: powerful, the user can control every detail, fast and effective for
experienced users.
disadvantages: steep learning curve, user must remember syntax, manipulation is less
direct, harder to use if you don’t know exactly what you want.
22. (a) ((4 + 7) / 2, (5+8)/2) = (11/2, 13/2) = (5.5, 6.5)
(b) ((10.3+3.2)/2, (2.5+5.4)/2) = (13.5/2, 7.9/2) = (6.75, 3.95)
23.
24. Put each equation into the form Bi(u) = (1/6)(a·u^3 + b·u^2 + c·u + d):
B0(u) = (1/6)(-u^3 + 3u^2 - 3u + 1)
B1(u) = (1/6)(3u^3 - 6u^2 + 0u + 4)
B2(u) = (1/6)(-3u^3 + 3u^2 + 3u + 1)
B3(u) = (1/6)(u^3 + 0u^2 + 0u + 0)
Take the coefficients of u and put them into a matrix, with one column per basis function
(B0 B1 B2 B3) and one row per power of u (u^3, u^2, u, 1):

(1/6) | -1  3 -3  1 |
      |  3 -6  3  0 |
      | -3  0  3  0 |
      |  1  4  1  0 |

so that [B0(u) B1(u) B2(u) B3(u)] = [u^3 u^2 u 1] multiplied by this matrix.
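As a quick check on the matrix above, a short Python sketch (the helper name is mine) that
evaluates one segment of a uniform cubic B-spline directly from these basis functions:

def bspline_point(p0, p1, p2, p3, u):
    # uniform cubic B-spline basis functions, 0 <= u <= 1
    b0 = (-u**3 + 3*u**2 - 3*u + 1) / 6.0
    b1 = (3*u**3 - 6*u**2 + 4) / 6.0
    b2 = (-3*u**3 + 3*u**2 + 3*u + 1) / 6.0
    b3 = u**3 / 6.0
    return (b0*p0[0] + b1*p1[0] + b2*p2[0] + b3*p3[0],
            b0*p0[1] + b1*p1[1] + b2*p2[1] + b3*p3[1])

# at u = 0 the curve sits at (P0 + 4*P1 + P2)/6, as the bottom row of the matrix predicts
print(bspline_point((0, 0), (1, 1), (2, 1), (3, 0), 0.0))   # (1.0, 0.833...)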
25. To draw a Hermite curve, one needs a start and an end point, and two control points.
The curve is drawn from the start to the end point guided by the control points, which
define vectors (one from the start point to the first control point and one from the end
point to the second control point) that are tangent to the curve at the start and end points,
respectively.
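For reference, a small Python sketch of the standard Hermite evaluation, with m0 and m1
playing the role of the tangent vectors described above (the helper name is mine):

def hermite_point(p0, p1, m0, m1, u):
    # cubic Hermite basis: p0, p1 are the end points, m0, m1 the tangents there
    h00 = 2*u**3 - 3*u**2 + 1      # weight of p0
    h10 = u**3 - 2*u**2 + u        # weight of m0
    h01 = -2*u**3 + 3*u**2         # weight of p1
    h11 = u**3 - u**2              # weight of m1
    return tuple(h00*a + h01*b + h10*c + h11*d
                 for a, b, c, d in zip(p0, p1, m0, m1))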
26. Bezier
v0 = x0
v1 = v0 + 1/3*d0
v2 = v3 - 1/3*d1
v3 = x1
d0 = -3v0 + 3v1
d1 = -3v2 + 3v3
a3 = 2x0 - 2x1 + d0 + d1
a2 = -3x0 + 3x1 - 2d0 - d1
a1 = d0
a0 = x0
a3 = 2v0 - 2v3 + [-3v0 + 3v1] + [-3v2 + 3v3] = -v0 + 3v1 - 3v2 + v3
a2 = -3v0 + 3v3 - 2[-3v0 + 3v1] - [-3v2 + 3v3] = 3v0 - 6v1 + 3v2
a1 = -3v0 + 3v1
a0 = v0
27.
Bezier(v0, v1, v2, v3, u)
{
a3 = -v0 + 3v1 - 3v2 + v3
a2 = 3v0 - 6v1 + 3v2
a1 = -3v0 + 3v1
a0 = v0
return((a3 * u * u * u) + (a2 * u * u) + (a1 * u) + (a0))
}
DrawCurve(P, numPoints, colour)
{
for (i = 0 to numpoints - 1)
{
for (j = 0 to MAX_STEPS)
{
u = j / MAX_STEPS
x = Bezier(P[i].X, P[i].endX, P[i + 1].startX, P[i + 1].X, u)
y = Bezier(P[i].Y, P[i].endY, P[i + 1].startY, P[i + 1].Y, u)
DrawPoint(x, y, colour)
}
}
}
Could also calculate P[i].end from P[i+1].start, as described in class notes.
28. Catmull-Rom (catrom) curves are similar to Bezier curves. They estimate the tangent
at point A using the direction of the vector between A’s adjacent points. They were
developed so that the user could define a curve without control points, using only points
through which the curve will pass.
IF THE QUESTION WERE WORTH MORE MARKS, YOU WOULD ALSO
EXPLAIN HOW YOU CAN DO THIS.
29. You can view the answer to this question here: http://www-inst.eecs.berkeley.edu/~cs184/Sp2003/Discussion/2003-04-11-Discussion-Solutions.pdf
30. [Not required for 200810] Forward differencing - samples the curve at many
different points, creating a table of all points and estimating the curve length based on
table values. This method was developed because it is simple, intuitive, and fast.
Gaussian Quadrature - it is adaptive, takes samples on uneven intervals, spends more
time on areas where error is more likely to occur. It was developed as a more accurate
alternative to forward differencing.
Note: The course notes also give two more possibilities.
31.
Forward differencing is where points are taken at regular intervals on the curve and the
linear distances are accumulated in a table. To estimate the arc length for any point, simply
linearly interpolate it along the straight line between the closest two calculated points. In
the formula, the Value field refers to the parameter value u and the ArcLength field refers
to the total arc length to this point on the curve.
Index (i)   Parameter Value u(i)   Curve(u(i))   Linear Segment Length   Arc Length G
0           0.00                   (0.0, 5.0)                            0.00
1           0.25                   (1.0, 6.0)    1.41                    1.41
2           0.50                   (2.0, 3.0)    3.16                    4.57
3           0.75                   (3.0, 2.0)    1.41                    5.98
4           1.00                   (4.0, 1.0)    1.41                    7.39
i = (int)(u/distance between entries)
= (int)(0.6/0.25)
= (int)(2.4)
=2
This means look in the row of the table with index value i = 2.
L = ArcLength[i]+(u - Value[i])/(Value[i+1] - Value[i])*(ArcLength[i+1] - ArcLength[i])
= 4.57 + (0.6 – 0.50)/(0.75 – 0.50)*(5.98 - 4.57)
= 4.57 + (0.1)/(0.25)*(1.41)
= 4.57 + (0.4)*(1.41)
= 4.57 + 0.56
= 5.13
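The lookup and interpolation above can also be written directly; a minimal Python sketch
using the table values from this answer:

# parameter values u(i) and cumulative arc lengths G(i) from the table above
values     = [0.00, 0.25, 0.50, 0.75, 1.00]
arc_length = [0.00, 1.41, 4.57, 5.98, 7.39]

def lookup_arc_length(u, spacing=0.25):
    i = int(u / spacing)           # row of the table just below u
    return arc_length[i] + (u - values[i]) / (values[i + 1] - values[i]) \
                         * (arc_length[i + 1] - arc_length[i])

print(lookup_arc_length(0.6))      # 5.134, matching the hand calculation above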
32. Repeat for u = 0.8
Index (i)   Parameter Value u(i)   Curve(u(i))   Linear Segment Length   Arc Length
0           0.00                   (0.0, 5.0)                            0.00
1           0.25                   (1.0, 6.0)    1.41                    1.41
2           0.50                   (2.0, 3.0)    3.16                    4.57
3           0.75                   (3.0, 2.0)    1.41                    5.98
4           1.00                   (4.0, 1.0)    1.41                    7.39
i = (int)(u/distance between entries)
= (int)(0.8/0.25)
= (int)(3.2)
=3
L = ArcLength[i]+(u - Value[i])/(Value[i+1] - Value[i])*(ArcLength[i+1] - ArcLength[i])
= 5.98 + (0.8 – 0.75)/(1.00 – 0.75)*(7.39 - 5.98)
= 5.98 + (0.05)/(0.25)*(1.41)
= 5.98 + 0.20*1.41
= 5.98 + 0.28
= 6.26
33. We assume that the points are stored in an array called P with X and Y fields. We
also assume that there is a Curve function that can calculate the position of a point on the
curve. To implement the curve function, all you need is the matrix that defines the type
of the curve. We further assume that the Curve function is defined on four points, Pi,
Pi+1, Pi+2, and Pi+3, as a uniform B-spline curve would be.
FollowCurve(P, numPoints, Rectangle)
{
for (i = 0 to numPoints - 4)
{
for (j = 0 to FRAMES_PER_SEGMENT)
{
x = Curve(P[i].X, P[i + 1].X, P[i+2].X, P[i+3].X, j
/FRAMES_PER_SEGMENT)
y = Curve(P[i].Y, P[i + 1].Y, P[i+2].Y, P[i+3].Y, j /
FRAMES_PER_SEGMENT)
Rectangle.X = x - 0.5 * Rectangle.width
Rectangle.Y = y - 0.5 * Rectangle.height
Rectangle.Draw()
}
}
}
34. To control the speed of the object, the arc length should be parameterized for
consistency. Then make some velocity vs. time function, the total area under which is the
estimated length of the curve. At any time t on the graph, the area under the curve up to t
is the distance traveled along the curve by that time (and the value of the function at t is
the velocity of the object at time t). To make the velocity constant, the function would be
a rectangle.
*example
35. To make an object ease in and out, the velocity vs. time function used could be a
trapezoid (which gives constant acceleration and deceleration), or its left and right sides
could be the –π/2 to π/2 (increasing) and the π/2 to 3π/2 (decreasing) segments of a sine
function, respectively, among other possibilities. The sine calculation is costly, so it is
easier to draw basic velocity-time functions, which lets you derive the distance-time
function by integration. For constant acceleration, if the object starts and ends at rest,
starts at position 0, and time runs from 0 to 1, the velocity-time curve can be defined as:
v = v0*(t/t1)                        for t < t1
v = v0                               for t1 ≤ t ≤ t2
v = v0*(1.0 - (t - t2)/(1.0 - t2))   for t > t2
Then integrate to get the equations for the distance-time function:
d = v0*t^2/(2*t1)                                                        for t < t1
d = v0*t1/2 + v0*(t - t1)                                                for t1 ≤ t ≤ t2
d = v0*t1/2 + v0*(t2 - t1) + (v0 - (v0*(t - t2)/(1 - t2))/2)*(t - t2)    for t > t2
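A minimal Python sketch of this distance-time function (time normalized to run from 0 to 1;
the function name is mine):

def ease_distance(t, t1, t2, v0):
    # constant acceleration up to t1, constant speed v0 until t2,
    # constant deceleration back to zero at t = 1
    if t < t1:
        return v0 * t * t / (2.0 * t1)
    if t <= t2:
        return v0 * t1 / 2.0 + v0 * (t - t1)
    d = v0 * t1 / 2.0 + v0 * (t2 - t1)
    return d + (v0 - (v0 * (t - t2) / (1.0 - t2)) / 2.0) * (t - t2)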
35B and 35C. No answer given here.
36. The Frenet frame of reference is a moving right-handed coordinate system which
defines the position, direction, and orientation of the camera as it moves along a curve. It
consists of vectors w, u, and v defined as:
w = P'(s)
u = P'(s) × P''(s)
v = w × u
Where P'(s) denotes the first derivative of the curve at the point given by the current
position of the camera, P''(s) denotes its second derivative, and × is the cross product
operation. The w vector is the direction the camera is pointing and the v vector defines
the upward orientation of the camera.
37. The disadvantage of having the camera point straight off along the direction of the
tangent of the curve is that it wouldn’t be looking ahead to where it would be, and would
look unnatural. One alternative to this approach would be to point the camera at a point a
certain distance ahead on the curve, or where the camera will be in a certain number of
time units. This often looks more natural as it is “looking ahead” on the curve as opposed
to looking off of it. Another alternative would be to define a center of interest, often on
the “inside” of the curve, and have the camera pointed at that throughout the motion.
This is useful for tracking an object while the camera moves.
38. Smoothing can be performed by linear interpolation of adjacent values. Take the two
values on either side, average them, and average the result with the original point. That
is, the old point is given a weight of half when determining the new point, and the
immediately surrounding points are given a weight of one quarter each.
Assuming that the points along the curve are in the order given, and assuming that end
points are not smoothed:
b2 = ((a + c)/2 + b1)/2
the resulting points would be:
(1, 4)   (end point, unchanged)
b2x = ((1 + 2)/2 + 1)/2 = 1.25,  b2y = ((4 + 7)/2 + 6)/2 = 5.75  ->  (1.25, 5.75)
b2x = ((1 + 3)/2 + 2)/2 = 2,     b2y = ((6 + 8)/2 + 7)/2 = 7     ->  (2, 7)
b2x = ((2 + 4)/2 + 3)/2 = 3,     b2y = ((7 + 9)/2 + 8)/2 = 8     ->  (3, 8)
(4, 9)   (end point, unchanged)
Or in other words, with
p1 = (1, 4), p2 = (1, 6), p3 = (2, 7), p4 = (3, 8), p5 = (4, 9)
we have
p2' = 1/4 P1 + 1/2 P2 + 1/4 P3 = 1/4 (1, 4) + 1/2 (1, 6) + 1/4 (2, 7) = (5/4, 23/4)
p3' = 1/4 P2 + 1/2 P3 + 1/4 P4 = 1/4 (1, 6) + 1/2 (2, 7) + 1/4 (3, 8) = (2, 7)
p4' = 1/4 P3 + 1/2 P4 + 1/4 P5 = 1/4 (2, 7) + 1/2 (3, 8) + 1/4 (4, 9) = (3, 8)
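The same smoothing as a short Python sketch (end points left unchanged, as above):

def smooth(points):
    # weights: 1/4 for each neighbour, 1/2 for the point itself
    result = [points[0]]
    for i in range(1, len(points) - 1):
        (ax, ay), (bx, by), (cx, cy) = points[i - 1], points[i], points[i + 1]
        result.append((0.25 * ax + 0.5 * bx + 0.25 * cx,
                       0.25 * ay + 0.5 * by + 0.25 * cy))
    result.append(points[-1])
    return result

print(smooth([(1, 4), (1, 6), (2, 7), (3, 8), (4, 9)]))
# [(1, 4), (1.25, 5.75), (2.0, 7.0), (3.0, 8.0), (4, 9)]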
39.
Recall that
S(i) = (1 - i/(n + 1))^(-k+1)   for k < 0
S(i) = 1 - (i/(n + 1))^(k+1)    for k ≥ 0
Here k = 1, n = 2,
so the fractional displacements for the points around vertex a should be:
S(0) = 1 - (0/3)^2 = 1
S(1) = 1 - (1/3)^2 = 1 - 1/9 = 8/9
S(2) = 1 - (2/3)^2 = 1 - 4/9 = 5/9
Since we are moving vertex a from [3,3]T to [3,4]T, which is a displacement of [0, 1]T, we
can calculate the other displacements as fractions of this displacement. The new
locations for the vertices shown in red below, from left to right, are:
S(2), left:
S(1), left
S(0)
S(1), right
S(2), right
[1,4]T + 5/9*[0,1]T = [1,4] + [0,5/9]T = [1, 4 5/9]T.
[2,3]T + 8/9*[0,1]T = [2,3] + [0,8/9]T = [2, 3 8/9]T.
[3,3]T + 1*[0,1]T = [3,3] + [0,1]T = [3, 4]T.
[4,3]T + 8/9*[0,1]T = [4,3] + [0,8/9]T = [4, 3 8/9]T.
[4,5]T + 5/9*[0,1]T = [4,5] + [0,5/9]T = [4, 5 5/9]T.
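A small Python sketch of the attenuation function assumed above, reproducing the fractions
1, 8/9, and 5/9 (the function name is mine):

def attenuation(i, n, k):
    # fraction of the seed vertex's displacement applied to a vertex i steps away
    if k >= 0:
        return 1.0 - (i / (n + 1.0)) ** (k + 1)
    return (1.0 - i / (n + 1.0)) ** (-k + 1)

n, k = 2, 1
dx, dy = 0.0, 1.0                        # displacement applied to vertex a
for i in range(n + 1):
    s = attenuation(i, n, k)
    print(i, s, (s * dx, s * dy))        # S(0) = 1, S(1) = 8/9, S(2) = 5/9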
40.
Since vertex a = [4.6 2.5]T, we have u = 0.6 and v = 0.5.
P00 = [2.7 0.1]T,
P01 = [3.6 1.3] T,
P10 = [4.5 0.0]T,
P11 = [4.5 0.3]T
Pu0 = (1 - u) * P00 + u * P10
= 0.4 * [2.7 0.1]T + 0.6 * [4.5 0.0]T
= [1.08 0.04]T + [2.70 0.00]T
= [3.78 0.04]T
Pu1 = (1 - u) * P01 + u * P11
= 0.4 * [3.6 1.3] T + 0.6 * [4.5 0.3]T
= [1.44 0.52]T + [2.70 0.18]T
= [4.14 0.70]T
Puv = (1 – v) * Pu0 + v * Pu1
= 0.5 * [3.78 0.04]T + 0.5 * [4.14 0.70]T
= [1.89 0.02]T + [2.07 0.35]T
= [3.96 0.37]T
vertex a’s new position in global space is [3.96 0.37]T.
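The bilinear interpolation used above, as a short Python sketch:

def bilinear(p00, p10, p01, p11, u, v):
    # interpolate along u on the bottom and top edges, then along v between them
    pu0 = ((1 - u) * p00[0] + u * p10[0], (1 - u) * p00[1] + u * p10[1])
    pu1 = ((1 - u) * p01[0] + u * p11[0], (1 - u) * p01[1] + u * p11[1])
    return ((1 - v) * pu0[0] + v * pu1[0], (1 - v) * pu0[1] + v * pu1[1])

# vertex a with local coordinates u = 0.6, v = 0.5 in the deformed cell above
print(bilinear((2.7, 0.1), (4.5, 0.0), (3.6, 1.3), (4.5, 0.3), 0.6, 0.5))   # (3.96, 0.37)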
41. Both free form and 2D grid deformation involve mapping global to local coordinates.
Neither of them manipulates vertices. They manipulate the grids instead. Free form
deformation can be done in 3D, whereas 2D grid deformation is only 2-dimensional.
53. Calculate the forces applied to an object at its current state, then calculate the
acceleration from the object's mass, then calculate the change in the object's state, then
update the object's state, and repeat.
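A minimal one-dimensional Python sketch of this loop, using simple Euler integration and an
illustrative constant gravity force:

def step(position, velocity, mass, dt, force):
    f = force(position, velocity)              # 1. forces at the current state
    acceleration = f / mass                    # 2. acceleration from the mass
    velocity = velocity + acceleration * dt    # 3. change in the state
    position = position + velocity * dt
    return position, velocity                  # 4. updated state; repeat next time step

# e.g. a 2 kg object dropped from height 10 under gravity
pos, vel = 10.0, 0.0
for _ in range(30):
    pos, vel = step(pos, vel, 2.0, 1.0 / 30.0, lambda p, v: 2.0 * -9.8)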
54. Kinematic Method
Recall that you can calculate the ball's velocity when leaving the surface using this
equation:
v(ti+1) = v(ti) – (1 + k) · (v(ti) · N) · N
where v is the velocity, t is the time, N is the surface normal, and k is a dampening factor
such that 0 < k < 1.
Since the ball goes from [1, 5] T to [3, 2] T from time t to time t+1, its velocity is [2, -3] T,
assuming that the ball is going at a constant velocity, since it has gone this far in one time
step.
k = 0.7
N = [0, 1]T, which means that the normal points straight up from a flat surface
(the “T” just means I am writing the vectors across instead of up and down to save
complicated formatting on the computer)
v(ti+1) = v(ti) – (1 + k) · (v(ti) · N) · N
= [2, -3]T – (1 + 0.7) ([2, -3]T · [0, 1]T) · [0, 1]T
= [2, -3]T – (1.7) (2*0 + -3*1) · [0, 1] T
= [2, -3]T – (1.7) (-3) · [0, 1] T
= [2, -3]T – (-5.1) · [0, 1] T
= [2, -3]T – [0, -5.1] T
= [2, 2.1]T
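The same formula as a tiny Python sketch (N is assumed to be a unit normal):

def bounce(velocity, normal, k):
    # v(t+1) = v(t) - (1 + k) * (v(t) . N) * N
    vx, vy = velocity
    nx, ny = normal
    dot = vx * nx + vy * ny
    return (vx - (1 + k) * dot * nx, vy - (1 + k) * dot * ny)

print(bounce((2, -3), (0, 1), 0.7))    # (2, 2.1), as in the worked example above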
54B. No answer provided here.
55. Recall that the penalty method attaches a virtual spring to one (or both) of the
colliding object(s) and calculates what impact the spring has on the target. The escape
vector of an object after a collision is calculated as if the objects passed through each
other with the surfaces attached by an imaginary spring, and then rebounded out, with the
force on the objects determined by Hooke’s law: F = -k · d, and then the objects’
accelerations are calculated as a = F/m.
With mass m = 1.5 and initial velocity v0 = 0:
F = k·d = 0.5
a = F/m = 0.5/1.5 = 0.33
v = sqrt(v0^2 + 2·a·d) = sqrt(0 + 2·0.33·1) = sqrt(0.66) ≈ 0.82
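A minimal Python sketch of the penalty calculation; the penetration vector, spring constant,
and mass below are chosen so that k·d = 0.5 and m = 1.5, matching the numbers above:

def penalty_acceleration(d, k, mass):
    # Hooke's law: F = -k * d, where d is the penetration vector into the surface
    fx, fy = -k * d[0], -k * d[1]
    return (fx / mass, fy / mass)      # a = F / m pushes the object back out

# a mass of 1.5 that has sunk 1 unit straight down into the surface, with k = 0.5
a = penalty_acceleration((0.0, -1.0), 0.5, 1.5)
print(a[1])                            # 0.333...: upward acceleration, matching a = F/m above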
56. The impulse force method is a way of more accurately defining future motion after
the impact. It records and remembers what happened before the collision.
steps: 1. detect a collision using one or various collision tests
2. back up to the point of intersection
3. calculate the magnitude of the impulse (j)
4. scale the contact normal (vector)
5. update the linear and angular momentum of each object
57. No answer provided here.
58. snow - small white particles
rain - semi-transparent drop shaped particles moving in a straight line
fireworks - self illuminating particles that shoot out from a given point
tornado - dark particles spinning around an axis
fire - many tiny particles emitting light and rising
smoke - dark particles rising, affected by wind, air pressure and turbulence, etc.
clouds - small particles in spheres, with graded density and wispiness
59. the basic model for a particle system includes five modules:
attributes - initial position, initial velocity, initial size, initial colour, initial
transparency, shape, lifetime, density.
generation - generated by a controlled stochastic process; each particle is generated
and transformed through time independent of all other particles.
dynamics - individual particles within a particle system move in 3D space and can
also change colour, transparency, and size.
extinction - when generated particles are given a lifetime measured in frames; the
particle is removed when its lifetime reaches 0, or when it contributes nothing to the
image, goes beneath a transparency threshold, or moves too far away from the origin of
its parent particle system.
rendering - the position and appearance parameters for all particles are calculated for
a frame before rendering then the particles are rendered into the frame buffer in whatever
order they are generated.
60. A stochastic process is used to introduce randomness over a time period. It concerns
sequences of events governed by probabilistic laws. A use of stochastic processes in
animation is in particle emitters to vary the number of particles emitted, or the properties
of the particles themselves.
61. Four properties that particles might have in a particle system are size, colour,
velocity, and lifetime.
62.
//generate new particles for this time step (t)
newParticles(2t)
//initialize generated particles
for each new particle p
//initializes the attributes given the mean values and the maximum variance
p.xVelocity = meanXVelocity + random(-1, 1) * varXVelocity
p.yVelocity = meanYVelocity + random(-1, 1) * varYVelocity
p.colour = meanColour + random(-1, 1) * varColor
p.Lifetime = meanLifetime + random(-1, 1) * varLifetime
p.size = meanSize + random(-1, 1) * varSize
//updates the existing particles
for each particle p
//updates the position based on the velocity
p.x += p.xVelocity
p.y += p.yVelocity
//moves the colour one step closer to the “final” colour
p.colour += (endColor - p.colour)/p.lifetime
//and the size one step closer to zero
p.size -= p.size/p.lifetime
//updates the velocity based on some acceleration (like from wind)
p.xVelocity += xAcceleration
p.yVelocity += yAcceleration
//counts down the particle’s lifetime
p.lifetime -= 1
//if it is extinct, delete it
if p.lifetime = 0 then
deleteParticle(p)
//otherwise, draw it
else
drawParticle(p)
63. No answer provided here.
64. Emergent behaviour is when each member of a group is governed locally, but the
members end up working together as a group. It is the overall impression created by the
individual members’ actions within the group. It is used in animation for flocking (e.g.,
schools of fish, flocks of birds, etc.)
65. The two main tendencies of flocking behaviour are collision avoidance (short-distance repulsion) and flock centering (longer-distance attraction). Collision avoidance
causes the members to keep from getting too close to each other, whereas flock centering
causes them to try to stay near the center of the flock. Conflicts between these two
tendencies can be resolved by adjusting the repulsion constant between the members of
the flock, and the distance at which they “sense” each other.
66. Local control is controlling the individual objects completely within the object itself.
Each member is typically aware of two or three of the closest members of the flock and
stays close to them while avoiding collisions with them. This is easier to compute, more
intuitive, and more reflective of actual flock behaviour.
Global control is controlling the entire flock using a single source, for example a flock
center point or a flock member defined as the flock leader, which the other flock
members attempt to stay close to and match velocities with, but without getting in
collisions. This reduces the complexity of the flock and is better for controlling the
overall direction of the flock.
67. For a flock of n members, a full calculation of their instantaneous influences on each
other is O(n^2). An example of a phenomenon that requires an O(n^2) algorithm is repulsion:
each member must know the distance to every other member of the flock to be able to tell
which direction it should move to try to avoid colliding with any other member of the
flock.
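A short Python sketch of such an O(n^2) repulsion pass (the safety distance and strength
constants are illustrative):

def repulsion_forces(positions, safety_distance=2.0, strength=1.0):
    # every member checks its distance to every other member: O(n^2) pairs
    n = len(positions)
    forces = [(0.0, 0.0)] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            dist = (dx * dx + dy * dy) ** 0.5
            if 0.0 < dist < safety_distance:
                # push member i directly away from member j, harder when closer
                push = strength * (safety_distance - dist) / dist
                forces[i] = (forces[i][0] + push * dx, forces[i][1] + push * dy)
    return forces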
68.
Calculate k, the distance the object travels along its direction of motion before reaching
the point of closest approach to the centre C:
k = (C - P) · V/|V|
and then t, the closest distance between the object's path and C:
t = sqrt(s^2 - k^2), where s = |C - P|
and finally check if t < r1 + r2; if it is true, there is a collision, otherwise there is none.
51.
r = 4, C = [7 9]T, P = [1 3]T, V = [1 3]T
C - P = [6 6]T
s = |C - P| = sqrt(6^2 + 6^2) = sqrt(72) = 8.485
k = (C - P) · V/|V| = [6 6]T · [1/sqrt(10) 3/sqrt(10)]T = 6/sqrt(10) + 18/sqrt(10)
  = 24/sqrt(10) = 7.589
t = sqrt(s^2 - k^2) = sqrt(8.485^2 - 7.589^2) = 3.79
t < r => collision
3.79 < 4 => a collision occurs
69. No answer given here.
70. In order for a flock to split when approaching an object and then rejoin, there must be
a good balance between collision avoidance and flock centering. The collision avoidance
must be strong enough to allow the flock to split as it approaches the object, with each
member going on the best side of the object for its own trajectory. The flock centering
must be strong enough to cause the parts of the flock to rejoin when there is no longer an
object between them. If the flock centering is too weak, the flock may split permanently,
and if the collision avoidance is too weak, the members of the flock will not respond to
the upcoming object soon enough. In such a case, the members of the flock will either hit
each other or be forced to make awkward motions to avoid collisions, and the flock will
lose its formation.
71. Iteration 1: A -> FF
Iteration 2: A -> FF -> F[+F][-F]FF[+F][-F]F
Iteration 3: A -> FF -> F[+F][-F]FF[+F][-F]F ->
F[+F][-F]F[+F[+F][-F]F][-F[+F][-F]F]F[+F][-F]FF[+F][-F]F[+F[+F][-F]F][-F[+F][-F]F]F[+F][-F]F
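A minimal Python sketch of the rewriting, assuming the productions A -> FF and
F -> F[+F][-F]F implied by the derivation above:

def rewrite(s, rules):
    # apply every production in parallel to each symbol of the string
    return "".join(rules.get(ch, ch) for ch in s)

rules = {"A": "FF", "F": "F[+F][-F]F"}
s = "A"
for i in range(3):
    s = rewrite(s, rules)
    print("Iteration", i + 1, ":", s)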
72. A(3)B(1,2)
(A(4)f+F)A(3)B(0,4)[-F]F
((A(6)f+F)f+F) (A(5)f+F) B(1,4)F[+F][-F]F
((Ff+F)f+F) (Ff+F) (B(2,3)F[+F])F[+F][-F][F]
((Ff+F)f+F) (Ff+F) (A(5)B(0,6)[-F]F)F[+F]F[+F][-F][F]
((Ff+F)f+F) (Ff+F) (FB(1,6)F[+F])[-F]FF[+F]F[+F][-F][F]
72B, 72C, 72D, 72E. No answers given here.
73. The three main approaches to model gas are the grid-based, particle-based and hybrid
methods.
The grid-based method breaks space into cells and calculates the flow of gas through
each individual cell, and density and velocity in each cell are updated at each time
interval.
The particle-based method breaks the gas into particles that flow through the space.
Each particle has a mass and external forces can act upon it; the method works identically
to a standard particle system.
The hybrid method uses the particle method, but for the purpose of rendering, the
space is divided into cells like in the grid-based method, and the number of particles
currently in the cells determines the densities of the gas in them.
74. First the space is broken up into cells, which can be done before rendering, which
requires a fixed number of cells but has a low overhead, or the cells can be allocated and
freed throughout the procedure, which is more flexible but comes with a higher overhead. Once
this is done, each cell is given a density of gas and a velocity vector. Each frame, the gas
in a cell moves according to the velocity vector, and possibly external forces such as
wind and object movement. If a cell’s gas moves partway between cells, the density is
split between those cells based on the area overlapping each cell. If gas from more than
one cell moves to the same cell, the densities are summed and the velocity is calculated to
be an average of those contributing to the cell, weighted by density. During rendering,
each cell’s density can determine visibility and illumination.
74B, 74C, 74D. No answers given here.
75. [NOT NEEDED FOR 200810] Shadows are one issue for rendering clouds. An
approach to show the shadows is the volumetric shadow technique.
Scattering effects are another issue. Scattering can be shown using illumination
models to determine the albedo effects.
76. [NOT NEEDED FOR 200810] Ebert's volumetric cloud model is a high-level
procedural approach. It allows the animator to model a cloud independent of any physics
model. It provides flexibility, data amplification, abstraction of detail and ease of
parametric control.
The cloud is created using two levels:
1. a cloud macrostructure modeled by implicit functions
   · implicit functions work well here due to their ease of specification and their
     ability to smoothly blend density distributions
   · the animator needs to specify the location, type and weight of the implicit
     primitives. Examples of implicit primitives: sphere, cylinder, cone, etc.
   · the density functions are therefore primitive-based
   · a density blending function can be used to blend densities generated by many
     different primitives. Example of a density blending function (where r is the
     distance from the primitive and varies from 0 to R, the maximum distance):
     F(r) = -4/9 · r^6/R^6 + 17/9 · r^4/R^4 - 22/9 · r^2/R^2 + 1
2. a cloud microstructure modeled by turbulent volume densities
   · created by the Turbulence() and Noise() functions
   · control over density and change in density is put in place in this section
77. [NOT NEEDED FOR 200810] Cumulus clouds are thicker and less wispy than cirrus
clouds, so for cirrus clouds a smaller density value parameter and a larger exponent
should be used to create an emptier cloud. Also, the animator should take into account
wind direction and force to make the clouds look more natural for cirrus clouds, as they
are affected more by such turbulence. Furthermore, cumulus clouds are more dependent
on their macrostructure and cirrus clouds on their microstructure.
Stratus clouds have thicker layers than cirrus clouds, so the size of turbulence should
be decreased to make fewer wisps. They also use smaller exponent values to generate a
more layered effect and increased density. Stratus clouds also use fewer implicit
primitives than cirrus clouds.
78. [NOT NEEDED FOR 200810]
79. [NOT NEEDED FOR 200810]
80. [NOT NEEDED FOR 200810] A fire can be made using a particle system (or
several) of self-illuminating particles. There should either be very many small particles,
or it could be drawn using the hybrid method for gaseous phenomena. The particles
should float upward after being generated, affected by wind and air turbulence to make it
“dance”, and have a relatively short lifetime. The smoke particles would be similar,
except not self-illuminating, with a longer lifetime, and could be drawn similar to a
cloud.
For example, the Wall of Fire was modeled using a two-level ringed particle system.
Level one deals with particles leaving the initial point of impact. Level two deals with
particles leaving concentric circles surrounding the initial point of impact. These rings
are formed of many individual particle systems, all programmed to explode on cue,
traveling at randomly varying angles from the surface normal.
81 a.
Domain length:
dl = sqrt((4 - 1)^2 + (7 - 3)^2) = sqrt(3^2 + 4^2) = sqrt(9 + 16) = sqrt(25) = 5
So, the parametric equations are:
x(t) = 1·t + 4·(1 - t)
z(t) = 7·t + 3·(1 - t)
y(t) = -0.029293 + 0.765979·cosh((t·5 - 2.5)/0.765979)
81 b.
The third point in a sequence of 12 points corresponds to t = 2/11:
x(2/11) = 1·(2/11) + 4·(1 - 2/11) = 2/11 + 36/11 = 38/11 = 3.45455
z(2/11) = 7·(2/11) + 3·(1 - 2/11) = 14/11 + 27/11 = 41/11 = 3.72727
y(2/11) = -0.029293 + 0.765979·cosh(((2/11)·5 - 2.5)/0.765979)
        = -0.029293 + 0.765979·4.0527483 = 3.075027
So the third point is approximately (3.45455, 3.075027, 3.72727).
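A small Python sketch evaluating these parametric equations at the twelve sample points:

from math import cosh

def rope_point(t):
    # the parametric equations from part (a)
    x = 1 * t + 4 * (1 - t)
    z = 7 * t + 3 * (1 - t)
    y = -0.029293 + 0.765979 * cosh((t * 5 - 2.5) / 0.765979)
    return (x, y, z)

points = [rope_point(i / 11) for i in range(12)]   # twelve evenly spaced samples
print(points[2])                                   # approximately (3.45, 3.08, 3.73)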
82.
First, calculate the horizontal force acting on v2. We will use the difference between v2
and v1 to compute the force:
F = k · (|v1* - v2*| - |v1 - v2|) · (v1* - v2*)/|v1* - v2*|
  = 0.45 · (16 - 8) · [16 0]T/16
  = 0.45 · 8 · [1 0]T
  = [3.6 0]T
Next, compute the force between v3 and v4:
F = k · (|v4* - v3*| - |v4 - v3|) · (v4* - v3*)/|v4* - v3*|
  = 0.45 · (12 - 8) · [12 0]T/12
  = 0.45 · 4 · [1 0]T
  = [1.8 0]T
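Reading v1* and v2* as the current (stretched) positions and v1, v2 as the rest positions,
the same calculation can be sketched in Python; the coordinates below are illustrative,
since only the 16-unit separation matters:

from math import sqrt

def spring_force(a, b, rest_length, k):
    # force on b, by Hooke's law: magnitude k * (current length - rest length),
    # directed along the unit vector from b toward a
    dx, dy = a[0] - b[0], a[1] - b[1]
    length = sqrt(dx * dx + dy * dy)
    magnitude = k * (length - rest_length)
    return (magnitude * dx / length, magnitude * dy / length)

# the v1/v2 spring above: current length 16, rest length 8, k = 0.45
print(spring_force((16.0, 0.0), (0.0, 0.0), 8.0, 0.45))   # (3.6, 0.0)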
83. Polygons are simple, easy to create, and are easily deformed. The smoothness of the
model depends on the number of vertices defined.
Splines create smoother surfaces and have lower complexity than polygons, but if
detail is needed, all splines must have many interrelated control points and thus a high
data complexity.
84. A large polygon is divided into smaller polygons to give more detailed control. This
is refined until a smooth surface is created. It is intuitive for a new model, but hard to
apply to an old model. Local complexity may be present but this does not affect global
complexity.
85. Computer aided design - good for fantasy or caricatures and the most flexible, but
requires most experience.
Digitization using physical reference - a physical model is built and then scanned into
animation software.
Modification of an existing model - a parameterized or freehand manipulation of a
previously created model.
86. A phoneme is the most basic unit of speech, any single sound that is part of a
language. A viseme is the appearance of a face when producing a corresponding
phoneme. These are important in facial animation for making modeled heads mimic
speech, by decomposing what the character is saying into its phonemes, and then
animating it to make the visemes at the corresponding times. Usually the viseme is seen
two or three frames before the phoneme is heard in order for the speech to appear natural.
87. FACS is the Facial Action Coding System. It was developed by psychologists
trying to define basic action units (AUs) of the face. Combinations of these AUs can be
used to create expressions. 46 AUs were defined, for example, wink, blink, jaw drop.
They are not time based or related, and were not expanded to include phonemes. An
advantage of FACS for facial animation is that it allows easy controls for creating
expressions, and a disadvantage is that it is meant to be expressive and not generative.
88. Direct parameterized (topological) models - Parameters are altered to animate the
face. This approach parameterizes all values of a face, and models have little theoretical
basis and do not pay careful attention to facial anatomy. Parke's model is an example of
a direct parameterized model.
Pseudomuscle-based models - concerned with approximating the movement of the basic
facial muscles. Uses geometric deformation operators to control
and change the shape of the face. Movement is created by defining a volume around the
face or a skeleton of the face and manipulating it.
Muscle-based models - very sophisticated; they try to mirror human anatomy. Facial
expressions are created by manipulating parameters to contract and relax virtual muscles.
All three models use parameters to control the shape of the model; however, muscle- and pseudomuscle-based models make some attempt at realistic muscle motion.
89. Parke's model has around 400 vertices and 300 polygons in a mesh. The positions of
the vertices change based on various parameters. There are two types of parameters:
conformational and expressive. Conformational parameters, such as jaw width,
differentiate one head from another. Expressive parameters define motions of the same
face and are used for its animation. Parke's model uses up to 50 parameters and is a
direct parameterized model - that is, it is not based on anatomy.
90. Jaw rotation in Parke’s model is based on two-dimensional rotation about the origin,
as shown in the picture below:
To rotate something around point P, translate the points so that P is on the origin, perform
the rotation, and do the inverse translation of the first, as shown in the picture below:
The jaw area of Parke’s model is divided into 5 zones of vertices described below:
The zones are also present on the lips of the model, as in the diagram below:
When the jaw is rotated by some angle, the amount of the rotation performed on each
vertex is dependent on which zone the vertex is in.
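A tiny Python sketch of the rotate-about-a-point step described above; for the jaw, the
angle passed in for each vertex would be scaled according to its zone:

from math import cos, sin

def rotate_about(point, pivot, angle):
    # translate so the pivot is at the origin, rotate, then translate back
    x, y = point[0] - pivot[0], point[1] - pivot[1]
    xr = x * cos(angle) - y * sin(angle)
    yr = x * sin(angle) + y * cos(angle)
    return (xr + pivot[0], yr + pivot[1])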