Slides - My E-town

Week 2 - Wednesday



What did we talk about last time?
Finished computers
Light and colors
Unsurprisingly, digital cameras use the RGB model to detect light and record it in a photograph
 A sensor called a charge-coupled device (CCD) reads light intensities and converts them to electronic signals

 For color photographs, an RGB filter is usually used to register different color intensities separately
 A common type of filter, the Bayer mask, is shown below
 There are twice as many green elements because the human eye is more sensitive to green light
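
For the curious, here is a minimal sketch in Python (not any camera's actual firmware) of the standard RGGB Bayer layout: it just assigns a filter color to each photosite and counts how many of each there are.

```python
# A tiny model of the RGGB Bayer pattern: each photosite records only one
# color, and green filters appear twice as often as red or blue ones.

def bayer_filter_color(row, col):
    """Return which color filter covers the photosite at (row, col)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"   # even rows alternate R, G
    else:
        return "G" if col % 2 == 0 else "B"   # odd rows alternate G, B

# Print a 4x4 patch of the mask and count each filter color.
counts = {"R": 0, "G": 0, "B": 0}
for row in range(4):
    line = []
    for col in range(4):
        color = bayer_filter_color(row, col)
        counts[color] += 1
        line.append(color)
    print(" ".join(line))

print(counts)   # {'R': 4, 'G': 8, 'B': 4} (twice as many green sites)
```
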
The design for a CCD is related to how your eyes work
 The rod cells and cone cells in your eyes are responsible for your sight
 Rods pick up faint light and are not sensitive to different colors
 They are responsible for night and peripheral vision
Cones are less sensitive but pick up different colors
 There are L, M, and S cones that pick up long, medium, and short wavelength light, respectively
 L-cones pick up reddish light, M-cones pick up greenish light, and S-cones pick up bluish light
Photo credit:
https://www.flickr.com/photos/28931095@N03/

An image is a long list of pixel information
 You can think of it as three numbers for each pixel: red, green, and blue values



Bitmaps (.bmp files) are almost that simple
Most common image formats (.jpg, .png, and .gif files) are more complex
They use different forms of compression to keep the image size small
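
Here is a minimal Python sketch of that idea: a toy 2x2 image stored as red, green, and blue values, plus the arithmetic showing why uncompressed images get large. The pixel contents are made up for illustration.

```python
# A tiny 2x2 image stored as the "three numbers per pixel" described above,
# with each channel ranging from 0 (none) to 255 (full intensity).
image = [
    [(255, 0, 0), (0, 255, 0)],       # top row: a red pixel, a green pixel
    [(0, 0, 255), (255, 255, 255)],   # bottom row: a blue pixel, a white pixel
]

height = len(image)
width = len(image[0])
print(width * height * 3)    # 12 bytes of raw pixel data, roughly like a .bmp

# The same arithmetic for a 1920x1080 photo shows why .jpg and .png compress:
print(1920 * 1080 * 3)       # 6,220,800 bytes (about 6 MB) uncompressed
```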





JPEG stands for Joint Photographic Experts Group
Good for images without too much high contrast (sharp edges)
Photographs are often stored as JPEGs
Uses crazy math (the discrete cosine transform) to reduce the amount of data needed
Lossy compression
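
For a taste of that math, here is a minimal Python sketch of the 8x8 discrete cosine transform (not the real libjpeg code): a flat block of pixels turns into one big coefficient and 63 near-zero ones, which is exactly the kind of data that compresses well.

```python
import math

def dct_2d(block):
    """Compute the 8x8 DCT-II coefficients of an 8x8 block of samples."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            total = 0.0
            for x in range(n):
                for y in range(n):
                    total += (block[x][y]
                              * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                              * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = cu * cv * total
    return out

# A flat gray block: all the energy lands in the top-left ("DC") coefficient,
# so the other 63 numbers are (nearly) zero and compress away.
flat = [[128] * 8 for _ in range(8)]
coeffs = dct_2d(flat)
print(round(coeffs[0][0]))   # 1024
print(round(coeffs[3][5]))   # 0
```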



PNG (.png) is good for images with low numbers of colors and high contrast differences
Has built-in compression, sort of like zip files
Similar to the older GIF (.gif) format
 GIFs are unpopular now because they only support 256 colors
 GIFs also suffered from legal battles over the algorithm used for compression
Lossless compression
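
Here is a minimal sketch of that zip-style, lossless idea using Python's zlib module (PNG's actual compressor is DEFLATE, which zlib implements); the pixel row here is made up for illustration.

```python
import zlib

# A hypothetical row of 1,000 identical blue pixels, 3 bytes (R, G, B) each.
row = bytes([0, 0, 255]) * 1000

# DEFLATE squeezes the repetitive data down dramatically...
compressed = zlib.compress(row)
print(len(row), "->", len(compressed), "bytes")   # 3000 -> a few dozen bytes

# ...and "lossless" means decompression restores every byte exactly.
assert zlib.decompress(compressed) == row
```
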
Some ideas for these slides borrowed from the UC Berkeley course "The Beauty and Joy of Computing" designed by Dan Garcia

We were talking about 2D graphics
 Ultimately, almost everything ends up as 2D graphics because our screens display in 2D
3D graphics is another large area of computer science
 Making realistic movies and games is tricky
 Artists are usually involved, but computer scientists make the tools the artists use

From the CS perspective, you can divide 3D graphics into two important categories:
 Offline rendering
 Real-time rendering






Offline rendering will be our focus today
In this case, offline means that the rendering has already happened when you see the images
Offline rendering is used for television, movies, and print media
You can create an entire movie from computer graphics (CG), like Pixar does
Or you can add CG elements to a movie, like Gollum in The Lord of the Rings
Each frame produced by offline rendering often takes hours to render
Real-time rendering is rendering done as you watch it, typically in an interactive way
 Real-time rendering is almost exclusively the province of video games, like The Witcher III shown at right
 Render rates are often between 20 and 60 frames per second
 How much faster is that than offline rendering?
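
A rough, back-of-the-envelope answer in Python, assuming 10 hours per offline frame (a later slide cites 29 hours per frame for Monsters University) and 60 frames per second in real time:

```python
# How much faster is real-time rendering than offline rendering?
offline_seconds_per_frame = 10 * 60 * 60    # assume 10 hours = 36,000 seconds
realtime_seconds_per_frame = 1 / 60         # 60 frames per second

speedup = offline_seconds_per_frame / realtime_seconds_per_frame
print(f"{speedup:,.0f}x")                   # about 2,160,000 times faster
```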


For most offline and real-time rendered graphics, the basic outline of producing images is the same:
 Modeling
 Animation
 Lighting and Shading
 Rendering
Modeling is creating the 3D objects
Animation is making them move
Lighting and shading determine the lighting of the scene and other elements of visual appearance
Rendering is the computation that determines the final image





Artists usually do the modeling of 3D objects
But computer scientists create the programs that they use:
 AutoCAD
 Maya
 3DS Max
 Blender (free!)
 And many others…
Modeling by hand is very common, but it is possible to scan 3D objects or generate objects procedurally (like simulating the growth of a tree)
Model of an eastern banjo frog provided by Autodesk



A spline is a curve in space that is defined as a piecewise function
Splines are a common tool for defining shapes in 2D and 3D
Artists add control points with handles to change the slope of the curves
Non-uniform rational basis splines (NURBS) are a very general form of splines
 Many 3D modeling programs represent surfaces as patches between these splines
 Rendering NURBS usually means turning these mathematically precise surfaces into triangles
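
Here is a minimal Python sketch of the piecewise-curve idea: one cubic Bézier segment evaluated from four control points (the middle two act like the handles an artist drags). Real NURBS add weights and knot vectors, but the flavor is similar; the control points here are made up for illustration.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate one cubic Bezier segment at parameter t in [0, 1]."""
    u = 1 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Hypothetical control points: endpoints at (0,0) and (3,0), handles pulling up.
control = [(0, 0), (1, 2), (2, 2), (3, 0)]

# Sample the curve; a renderer would connect these samples with line segments
# (or, in 3D, turn patches of them into triangles).
for i in range(5):
    t = i / 4
    print(cubic_bezier(*control, t))
```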


If you can't get an artist to model the object for you, there are a few other ways
 Generate the data procedurally
 Visualization of scientific (or other) data as spheres, cubes, or other primitives
 Sampling or scanning the real world
 Reconstruction from photographs
 Combinations!
People have worked a fair bit on modeling trees
New research takes an existing tree model and deforms it to fit its environment
 It approximates biological reactions to space and light constraints
 It's a combination of procedural and artist modeling
 Recent SIGGRAPH paper: "Plastic Trees: Interactive Self-Adapting Botanical Tree Models" by Pirk et al.
 www.youtube.com/watch?v=xlbKL0KoYEU


Once you have the model, you have to make it move around the scene
One part of this process is rigging, which ties parts of the model together
 For example, pull the foot and it pulls the leg
The model can be moved to different key frames
 Then a program can blend between them (see the interpolation sketch below)
Motion capture is also a popular method for animating models
 The results can be more natural
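
Here is a minimal Python sketch of blending between key frames: linear interpolation of a single made-up joint value between two hand-set poses (real systems blend many joints and rotations, usually with smoother curves than a straight lerp).

```python
def lerp(a, b, t):
    """Blend between values a and b; t=0 gives a, t=1 gives b."""
    return a + (b - a) * t

# Hypothetical key frames: an arm joint's height set by hand at frames 0 and 10.
start_height = 0.0    # key frame at frame 0
end_height = 2.0      # key frame at frame 10

# The program fills in ("tweens") every frame in between automatically.
for frame in range(11):
    t = frame / 10
    print(frame, lerp(start_height, end_height, t))
```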


Models created by artists
Movement based on motion capture

Andy Serkis (Gollum, Kong) is perhaps the best-known motion capture artist
 But there is a dispute over whether or not he can get acting awards for his work
Mostly, we're talking about putting the real world inside of a computer
 What if you wanted to turn your 3D (computer) model into a 3D (real) model?
 New research turns a skinned mesh into a model that can be created with articulation points and generated with a 3D printer
  So you can play with it!
Recent SIGGRAPH paper: "Fabricating Articulated Characters from Skinned Meshes" by Bächer, Bickel, James, and Pfister
 www.youtube.com/watch?v=8jwNWOlU6yw




Once the models are moving around the environment, we still need lighting to see them
Virtual lights are placed in the scene
A camera location is chosen
Materials for the models are chosen
 What colors?
 Rough, smooth?
 Shiny, reflective, matte?



Then, rendering is the process of taking all this data and figuring out what the individual color of each pixel in the final 2D image will be
Many parts of the model might overlap with a single pixel
A lot of math has to be done to figure out what the final color is

Most rendering systems divide the models into triangles
 Usually millions of triangles for offline rendering
Each part of a triangle that overlaps with a pixel is called a fragment
Triangles are useful because the math involved is simple, and they are always flat
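
Here is a minimal Python sketch of why that math is simple: three "edge function" sign tests decide whether a pixel center is covered by a triangle and therefore produces a fragment. The triangle coordinates are made up for illustration.

```python
def edge(a, b, p):
    """Signed area test: > 0 if point p is to the left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covers(tri, px, py):
    """True if pixel center (px, py) is inside the counter-clockwise triangle."""
    a, b, c = tri
    return (edge(a, b, (px, py)) >= 0 and
            edge(b, c, (px, py)) >= 0 and
            edge(c, a, (px, py)) >= 0)

# A hypothetical triangle already projected into screen space (2D pixels).
triangle = [(1.0, 1.0), (8.0, 2.0), (4.0, 7.0)]

# Every covered pixel center becomes a fragment to be shaded.
fragments = [(x, y) for y in range(10) for x in range(10)
             if covers(triangle, x + 0.5, y + 0.5)]
print(len(fragments), "fragments")
```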




The amount of math involved is breathtaking
Each triangle exists in 3D space
Matrix multiplication is used to map the location of the object into view space (as seen from the camera) and then screen space (flattening out into 2D)
Shading equations based on physics and the interaction of light with matter determine the final color of the fragment
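
Here is a minimal Python sketch of both steps (not any real renderer's code): a perspective matrix maps a made-up 3D point toward screen space, and a simple Lambertian "N dot L" equation picks a diffuse brightness from a made-up light direction.

```python
import math

def mat_vec(m, v):
    """Multiply a 4x4 matrix m by a 4-component vector v."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A hypothetical perspective projection matrix (90-degree field of view,
# square screen, near plane at 1, far plane at 100).
near, far = 1.0, 100.0
proj = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
    [0, 0, -1, 0],
]

# A made-up point 5 units in front of the camera (view space), mapped to 2D.
point = [1.0, 2.0, -5.0, 1.0]
clip = mat_vec(proj, point)
screen = [clip[0] / clip[3], clip[1] / clip[3]]    # perspective divide
print("screen position:", screen)                  # [0.2, 0.4]

# A simple shading equation: Lambertian "N dot L" diffuse brightness.
def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

normal = normalize([0.0, 1.0, 0.2])     # surface facing mostly up
to_light = normalize([0.5, 1.0, 0.0])   # direction toward a made-up light
brightness = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
print("diffuse brightness:", round(brightness, 3))   # about 0.877
```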


And that's just the color of the fragment, assuming nothing is blocking the light
Adding shadows and reflections means dealing with interactions between different objects



Older video games didn't have shadows at all
But shadows add important cues about relative depth and size of objects
Unless you're using a global illumination model, shadows are tricky to make
 In older Pixar movies, artists had to decide which lights shadowed which objects



Ray tracing is one type of global illumination model
Rays are traced from the camera through the screen until they hit the closest object; the hit location is called the intersection point
For each intersection point:
 Trace a ray to each light source
 If the object is shiny, trace a reflection ray
 If the object is not opaque, trace a refraction ray
Opaque objects can block the rays, while transparent objects attenuate the light
It's even more complicated, since rays scatter when they bounce
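
Here is a minimal Python sketch of that loop for a single made-up scene: one primary ray finds the closest sphere intersection, then a shadow ray checks whether the light is blocked (reflection, refraction, and scattering are left out).

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def hit_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None on a miss."""
    oc = sub(origin, center)
    a = dot(direction, direction)
    b = 2 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 1e-6 else None      # ignore hits at the ray's own origin

# Made-up scene: a camera at the origin, one sphere ahead, a light above it.
camera = [0.0, 0.0, 0.0]
sphere_center, sphere_radius = [0.0, 0.0, -5.0], 1.0
light = [0.0, 10.0, 0.0]

# Primary ray through the middle of the screen.
ray_dir = [0.0, 0.0, -1.0]
t = hit_sphere(camera, ray_dir, sphere_center, sphere_radius)
if t is not None:
    hit_point = [camera[i] + t * ray_dir[i] for i in range(3)]
    print("intersection point:", hit_point)     # [0.0, 0.0, -4.0]

    # Shadow ray: if nothing blocks the path to the light, the point is lit.
    to_light = sub(light, hit_point)
    blocked = hit_sphere(hit_point, to_light, sphere_center, sphere_radius)
    print("lit directly by the light:", blocked is None)   # True
```
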
Monsters University took 100 million CPU hours to render
 That's 2 years of actual time on 2,000 computers with more than 24,000 cores
 Sully's fur has 5.5 million hairs
  Five times as many as in the original film!
It can still take 29 hours to render a single frame
 You need 24 frames per second for movie quality
 They upgraded to a global illumination model for Monsters University
Almost 10 years old but still impressive in many ways


We will talk about video games and real-time rendering
Lab 2


Keep playing with Scratch
Form your teams for Project 1