2D Geometric Transformations
Basic Transformations
Animations are produced by moving the 'camera' or the objects in a scene along animation
paths. Changes in orientation, size, and shape are accomplished with geometric
transformations that alter the coordinate descriptions of the objects. The basic geometric
transformations are translation, rotation, and scaling. Other transformations that are often
applied to objects include reflection and shear.
Use of transformations in CAD
In mathematics, "Transformation" is the elementary term used for a variety of operation
such as rotation, translation, scaling, reflection, shearing etc. CAD is used throughout the
engineering process from conceptual design and layout, through detailed engineering and
analysis of components to definition of manufacturing methods. Every aspect of modeling in
CAD is dependent on the transformation to view model from different directions we need to
perform rotation operation. To move an object to a different location translation operation is
done. Similarly Scaling operation is done to resize the object.
Coordinate Systems
In CAD three types of coordinate systems are needed in order to input, store and display
model geometry and graphics. These are the Model Coordinate System (MCS), the World
Coordinate System (WCS) and the Screen Coordinate System (SCS).
Model Coordinate System
The MCS is defined as the reference space of the model with respect to which all the model
geometrical data is stored. The origin of the MCS can be arbitrarily chosen by the user.
World Coordinate System
As discussed above, every object has its own MCS relative to which its geometrical data is
stored. When there are multiple objects in the same working space, a World Coordinate
System is needed that relates each MCS to the others with respect to the orientation of
the WCS, as can be seen in the picture shown below.
Screen Coordinate System
In contrast to the MCS and WCS the Screen Coordinate System is defined as a two
dimensional device-dependent coordinate system whose origin is usually located at the
lower left corner of the graphics display as shown in the picture below. A transformation
operation from MCS coordinates to SCS coordinates is performed by the software before
displaying the model views and graphics.
Viewing Transformations
As discussed, objects are modeled in the WCS; before these object descriptions can be
projected to the view plane, they must be transferred to the viewing coordinate system. The
view plane, or projection plane, is set up perpendicular to the viewing zv axis. The world
coordinate positions in the scene are transformed to viewing coordinates, and the viewing
coordinates are then projected onto the view plane.
The transformation sequence to align the WCS with the Viewing Coordinate System (sketched in code after the list) is:
1. Translate the view reference point to the origin of the world coordinate system.
2. Apply rotations to align the xv, yv, and zv axes with the world xw, yw, and zw axes, respectively.
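Since the section gives no code, here is a minimal C sketch of this two-step alignment for the 2D case, assuming the viewing y-axis makes an angle theta with the world y-axis and the view reference point is (x0, y0); the names and conventions (row vectors, 3 × 3 homogeneous matrices) are illustrative, not taken from any particular graphics library.

#include <math.h>

typedef struct { double m[3][3]; } Mat3;

/* Step 1: translation matrix moving points by (tx, ty). */
static Mat3 translate(double tx, double ty) {
    Mat3 t = {{{1, 0, 0}, {0, 1, 0}, {tx, ty, 1}}};
    return t;
}

/* Step 2: rotation matrix through angle a (row-vector convention). */
static Mat3 rotate(double a) {
    Mat3 r = {{{cos(a), sin(a), 0}, {-sin(a), cos(a), 0}, {0, 0, 1}}};
    return r;
}

/* Matrix product: with row vectors, mul(a, b) applies a first, then b. */
static Mat3 mul(Mat3 a, Mat3 b) {
    Mat3 c = {{{0}}};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
                c.m[i][j] += a.m[i][k] * b.m[k][j];
    return c;
}

/* Translate the view reference point (x0, y0) to the origin, then rotate
   by -theta so the viewing axes coincide with the world axes. */
Mat3 world_to_viewing(double x0, double y0, double theta) {
    return mul(translate(-x0, -y0), rotate(-theta));
}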
Matrix Representations
Matrix representation is a method used by a computer language to store matrices of more than
one dimension in memory. Fortran and C use different schemes. Fortran uses "Column Major",
in which all the elements for a given column are stored contiguously in memory. C uses "Row
Major", which stores all the elements for a given row contiguously in memory. LAPACK defines
various matrix representations in memory. There is also Sparse matrix representation and
Morton-order matrix representation. According to the documentation, in LAPACK the unitary
matrix representation is optimized.[1] Some languages such as Java store matrices using Iliffe
vectors. These are particularly useful for storing irregular matrices. Matrices are of primary
importance in linear algebra.
Basic mathematical operations
An m × n (read as m by n) order matrix is a set of numbers arranged in m rows and n columns.
Matrices of the same order can be added by adding the corresponding elements. Two matrices
can be multiplied, the condition being that the number of columns of the first matrix is equal to
the number of rows of the second matrix. Hence, if an m × n matrix is multiplied with an n × r
matrix, then the resultant matrix will be of the order m × r.[2]
Operations like row operations or column operations can be performed on a matrix, using which
we can obtain the inverse of a matrix. The inverse may be obtained by determining the adjoint as
well.[2]
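As a short illustration of the adjoint method, here is a minimal C sketch for the 2 × 2 case, where inv(A) = adj(A)/det(A); the function name is illustrative, not from any library.

#include <math.h>

/* Invert a 2x2 matrix via its adjoint divided by its determinant.
   Returns 0 on success, -1 if the matrix is (nearly) singular. */
int invert2x2(double a[2][2], double inv[2][2]) {
    double det = a[0][0]*a[1][1] - a[0][1]*a[1][0];
    if (fabs(det) < 1e-12) return -1;   /* no inverse exists */
    inv[0][0] =  a[1][1] / det;         /* adjoint entries scaled by 1/det */
    inv[0][1] = -a[0][1] / det;
    inv[1][0] = -a[1][0] / det;
    inv[1][1] =  a[0][0] / det;
    return 0;
}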
Basics of 2D array
The mathematical definition of a matrix finds applications in computing and database
management, a basic starting point being the concept of arrays. A two-dimensional array can
function exactly like a matrix. Two-dimensional arrays can be visualized as a table consisting of
rows and columns.
The declaration int a[3][4] declares an integer array of 3 rows and 4 columns. The row index will start
from 0 and go up to 2.
Similarly, the column index will start from 0 and go up to 3.[3]
         Column 0   Column 1   Column 2   Column 3
row 0    a[0][0]    a[0][1]    a[0][2]    a[0][3]
row 1    a[1][0]    a[1][1]    a[1][2]    a[1][3]
row 2    a[2][0]    a[2][1]    a[2][2]    a[2][3]
This table shows arrangement of elements with their indices.
Initializing Two-Dimensional arrays: Two-Dimensional arrays may be initialized by providing a
list of initial values.
int a[2][3] = {1,2,3,4,5,6}; or int a[2][3] = {{2,3,4},{4,4,5}};
Calculation of Address: For an m x n matrix a[1...m][1...n], where the row index varies from 1 to
m and the column index from 1 to n, aij denotes the number in the ith row and the jth column. In the
computer memory, all elements are stored linearly using contiguous addresses. Therefore, in
order to store a two-dimensional matrix a, the two-dimensional address space must be mapped to a one-
dimensional address space. In the computer's memory, matrices are stored in either row-major
order or column-major order form.
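As a hedged sketch of these two mappings, the following C functions compute the address of a[i][j] for an m x n matrix with the 1-based indices used above; base and size (the start address and element size) are assumed inputs, and the function names are illustrative.

/* Row-major: the elements of row i are contiguous in memory. */
unsigned long row_major_addr(unsigned long base, int size,
                             int i, int j, int m, int n) {
    (void)m;                            /* m is not needed in this mapping */
    return base + (unsigned long)((i - 1) * n + (j - 1)) * size;
}

/* Column-major: the elements of column j are contiguous in memory. */
unsigned long col_major_addr(unsigned long base, int size,
                             int i, int j, int m, int n) {
    (void)n;                            /* n is not needed in this mapping */
    return base + (unsigned long)((j - 1) * m + (i - 1)) * size;
}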
An m×n matrix is a set of numbers arranged in m rows and n columns. The following illustration shows several
matrices.
You can add two matrices of the same size by adding individual elements. The following illustration shows two
examples of matrix addition.
An m×n matrix can be multiplied by an n×p matrix, and the result is an m×p matrix. The number of columns in
the first matrix must be the same as the number of rows in the second matrix. For example, a 4 × 2 matrix can
be multiplied by a 2 × 3 matrix to produce a 4 × 3 matrix.
Points in the plane and rows and columns of a matrix can be thought of as vectors. For example, (2, 5) is a
vector with two components, and (3, 7, 1) is a vector with three components. The dot product of two vectors is
defined as follows:
(a, b) • (c, d) = ac + bd
(a, b, c) • (d, e, f) = ad + be + cf
For example, the dot product of (2, 3) and (5, 4) is (2)(5) + (3)(4) = 22. The dot product of (2, 5, 1) and (4, 3, 1) is
(2)(4) + (5)(3) + (1)(1) = 24. Note that the dot product of two vectors is a number, not another vector. Also note
that you can calculate the dot product only if the two vectors have the same number of components.
Let A(i, j) be the entry in matrix A in the ith row and the jth column. For example A(3, 2) is the entry in matrix A
in the 3rd row and the 2nd column. Suppose A, B, and C are matrices, and AB = C. The entries of C are
calculated as follows:
C(i, j) = (row i of A) • (column j of B)
The following illustration shows several examples of matrix multiplication.
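Since the illustrations are not reproduced here, the following self-contained C sketch demonstrates the rule C(i, j) = (row i of A) • (column j of B) on a 2 × 3 times 3 × 2 product; the dimensions and values are purely illustrative.

#include <stdio.h>

#define M 2
#define N 3
#define P 2

void matmul(double a[M][N], double b[N][P], double c[M][P]) {
    for (int i = 0; i < M; i++)
        for (int j = 0; j < P; j++) {
            c[i][j] = 0.0;
            for (int k = 0; k < N; k++)  /* dot product of row i and column j */
                c[i][j] += a[i][k] * b[k][j];
        }
}

int main(void) {
    double a[M][N] = {{1, 2, 3}, {4, 5, 6}};
    double b[N][P] = {{7, 8}, {9, 10}, {11, 12}};
    double c[M][P];
    matmul(a, b, c);                     /* c is 2 x 2 */
    printf("%g %g\n%g %g\n", c[0][0], c[0][1], c[1][0], c[1][1]);
    return 0;                            /* prints 58 64 / 139 154 */
}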
If you think of a point in the plane as a 1 × 2 matrix, you can transform that point by multiplying it by a 2 × 2
matrix. The following illustration shows several transformations applied to the point (2, 1).
All the transformations shown in the previous figure are linear transformations. Certain other transformations,
such as translation, are not linear, and cannot be expressed as multiplication by a 2 × 2 matrix. Suppose you
want to start with the point (2, 1), rotate it 90 degrees, translate it 3 units in the x direction, and translate it 4
units in the y direction. You can accomplish this by performing a matrix multiplication followed by a matrix
addition.
A linear transformation (multiplication by a 2 × 2 matrix) followed by a translation (addition of a 1 × 2 matrix) is
called an affine transformation. An alternative to storing an affine transformation in a pair of matrices (one for
the linear part and one for the translation) is to store the entire transformation in a 3 × 3 matrix. To make this
work, a point in the plane must be stored in a 1 × 3 matrix with a dummy 3rd coordinate. The usual technique is
to make all 3rd coordinates equal to 1. For example, the point (2, 1) is represented by the matrix [2 1 1]. The
following illustration shows an affine transformation (rotate 90 degrees; translate 3 units in the x direction, 4
units in the y direction) expressed as multiplication by a single 3 × 3 matrix.
In the previous example, the point (2, 1) is mapped to the point (2, 6). Note that the third column of the 3 × 3
matrix contains the numbers 0, 0, 1. This will always be the case for the 3 × 3 matrix of an affine transformation.
The important numbers are the six numbers in columns 1 and 2. The upper-left 2 × 2 portion of the matrix
represents the linear part of the transformation, and the first two entries in the 3rd row represent the
translation.
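A short C check of this representation (a sketch, using the row-vector convention described above) multiplies [2 1 1] by the 3 × 3 matrix for "rotate 90 degrees, then translate 3 units in x and 4 units in y" and recovers the point (2, 6).

#include <stdio.h>

int main(void) {
    double p[3] = {2, 1, 1};            /* point with dummy 3rd coordinate */
    double m[3][3] = {{ 0, 1, 0},       /* upper-left 2x2: rotate 90 degrees */
                      {-1, 0, 0},
                      { 3, 4, 1}};      /* 3rd row: translate 3 in x, 4 in y */
    double r[3] = {0, 0, 0};
    for (int j = 0; j < 3; j++)
        for (int k = 0; k < 3; k++)
            r[j] += p[k] * m[k][j];     /* row vector times matrix */
    printf("(%g, %g)\n", r[0], r[1]);   /* prints (2, 6) */
    return 0;
}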
In Windows GDI+ you can store an affine transformation in a Matrix object. Because the third column of a
matrix that represents an affine transformation is always (0, 0, 1), you specify only the six numbers in the first
two columns when you construct a Matrix object. The statement Matrix myMatrix(0.0f, 1.0f, -1.0f, 0.0f, 3.0f, 4.0f); constructs the matrix shown in the previous figure.
Composite Transformations
A composite transformation is a sequence of transformations, one followed by the other. Consider the matrices
and transformations in the following list:
Matrix A
Matrix B
Matrix C
Rotate 90 degrees
Scale by a factor of 2 in the x direction
Translate 3 units in the y direction
If you start with the point (2, 1) — represented by the matrix [2 1 1] — and multiply by A, then B, then C, the
point (2,1) will undergo the three transformations in the order listed.
[2 1 1]ABC = [ –2 5 1]
Rather than store the three parts of the composite transformation in three separate matrices, you can multiply
A, B, and C together to get a single 3 × 3 matrix that stores the entire composite transformation. Suppose ABC
= D. Then a point multiplied by D gives the same result as a point multiplied by A, then B, then C.
[2 1 1]D = [ –2 5 1]
The following illustration shows the matrices A, B, C, and D.
The fact that the matrix of a composite transformation can be formed by multiplying the individual
transformation matrices means that any sequence of affine transformations can be stored in a single Matrix
object.
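The illustration of A, B, C, and D is not reproduced here, but the matrices can be reconstructed from the list above. The following C sketch builds them, forms D = ABC, and checks that [2 1 1]D = [-2 5 1]; the row-vector convention is assumed throughout.

#include <stdio.h>

typedef double Mat[3][3];

/* c = a * b; with row vectors this applies a first, then b. */
void mul(double a[3][3], double b[3][3], double c[3][3]) {
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            c[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}

int main(void) {
    Mat A = {{0, 1, 0}, {-1, 0, 0}, {0, 0, 1}};  /* rotate 90 degrees */
    Mat B = {{2, 0, 0}, { 0, 1, 0}, {0, 0, 1}};  /* scale by 2 in x   */
    Mat C = {{1, 0, 0}, { 0, 1, 0}, {0, 3, 1}};  /* translate 3 in y  */
    Mat AB, D;
    mul(A, B, AB);
    mul(AB, C, D);                               /* D = ABC */
    double p[3] = {2, 1, 1}, r[3] = {0, 0, 0};
    for (int j = 0; j < 3; j++)
        for (int k = 0; k < 3; k++)
            r[j] += p[k] * D[k][j];
    printf("[%g %g %g]\n", r[0], r[1], r[2]);    /* prints [-2 5 1] */
    return 0;
}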
Note: The order of a composite transformation is important. In general, rotate, then scale, then translate is not
the same as scale, then rotate, then translate. Similarly, the order of matrix multiplication is important. In
general, ABC is not the same as BAC.
The Matrix class provides several methods for building a composite transformation: Matrix::Multiply,
Matrix::Rotate, Matrix::RotateAt, Matrix::Scale, Matrix::Shear, and Matrix::Translate. The following
example creates the matrix of a composite transformation that first rotates 30 degrees, then scales by a factor
of 2 in the y direction, and then translates 5 units in the x direction.
Matrix myMatrix;
myMatrix.Rotate(30.0f);
myMatrix.Scale(1.0f, 2.0f, MatrixOrderAppend);
myMatrix.Translate(5.0f, 0.0f, MatrixOrderAppend);
The following illustration shows the matrix.
Other Transformations
2D Viewing: The Viewing Pipeline
The ability to perform transformations on objects in an image is an important feature of a graphics system. Operators
were added to this system which allow the user to create 2D transformation matrices that perform the following
transformations:
scale around (0, 0)
scale around an arbitrary point, with the x-scale direction oriented to a specified angle
translate
rotate around (0, 0)
rotate around an arbitrary point
In addition, helper functions were created that perform matrix multiplication, reset a matrix to the
identity, reset a matrix to the zero matrix, and print a matrix. Below are some sample images created
using these methods. The original polygons are in red, and the transformed ones are in blue. The images
show translation, rotation of PI/4 about (0, 0), rotation of PI/4 about (80, 40), and scaling by
0.8 in x and y, respectively.
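The operator code itself is not reproduced in this writeup, so the following is a hedged C sketch of one of the listed operators, rotation about an arbitrary point, built by composing translate-to-origin, rotate, and translate-back matrices; the names are illustrative.

#include <math.h>

typedef double Mat[3][3];

/* c = a * b; row-vector convention, so a is applied first, then b. */
static void mul(double a[3][3], double b[3][3], double c[3][3]) {
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            c[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}

/* Rotation by `angle` radians about the point (cx, cy). */
void rotate_about(double angle, double cx, double cy, Mat out) {
    Mat toOrigin = {{1, 0, 0}, {0, 1, 0}, {-cx, -cy, 1}};
    Mat rot      = {{ cos(angle), sin(angle), 0},
                    {-sin(angle), cos(angle), 0},
                    { 0, 0, 1}};
    Mat back     = {{1, 0, 0}, {0, 1, 0}, {cx, cy, 1}};
    Mat tmp;
    mul(toOrigin, rot, tmp);  /* move (cx, cy) to the origin, then rotate */
    mul(tmp, back, out);      /* then move the origin back to (cx, cy)   */
}

/* e.g. rotate_about(M_PI / 4, 80, 40, m) corresponds to the sample
   rotation of PI/4 about (80, 40) mentioned above. */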
Part 2: The Starship Enterprise
The next task was to create an image of the Starship Enterprise by applying the transformation tools developed in the
previous task to a unit circle and square. Below is the generated image.
Part 3: Animating the Starship Enterprise
The next step was to create an animation of the Enterprise orbiting a planet.
Part 4: The Viewing Pipeline
The final task consisted of creating a 2D viewing pipeline that would allow the designer to pan around the image. The
animation below is an example of this feature. The viewing pipeline moves in a diamond shape that shadows the
motion of the Enterprise.
Extensions
As an extension, we used the transformation matrices that we had created to make an animated gif of the Enterprise
going to warp speed. The parts of the Enterprise are stretched and then shrunk over time, and a filled polygon was
added at the end to create the effect of an explosion.
Questions
1. Who did you work with on this assignment, and what tasks did each of you do?
I worked with Casey Smith, and we shared the work on most of the tasks.
2. Describe the mechanism you developed for handling the global transformation parameters and
matrix.
The "global transformation parameters" weren't really global, they were local to the test
program. Each new graphics object had its own transformation matrix, and each of these was
passed to the appropriate drawing function (polyLineM, drawUnfilledCircleM,
drawFilledCircleM). The transformation was applied to the points in the the draw functions.
3. Describe the mechanism you developed for handling the viewing pipeline parameters and
transformation matrix.
For this assignment, the global transformation parameters were hardcoded into the test
programs, and the matrix was constructed by translating and scaling the View Transformation
Matrix using the translation and rotation functions. In the future, however, a function will be
created that will take in the points at the lower left and upper right of the viewing window and
the desired scale and will perform the translation and rotation on the VTM.
4. Once you had the code in place, what was the process and how difficult was it to modify the view
window and the position of the Enterprise?
The transformation functions made it very easy to move the objects in the scene and to change
the view window. To change the position of the enterprise, simply apply the same translation or
rotation to each piece of the figure. This will, of course, become even simpler when we have
created a hierarchical modeling system in which individual graphical elements can be combined
into one object and transformed as a unit.
Moving the viewing window is equally easy. Points are stored for the lower left and upper right
corners of the view window, along with the desired dimensions of the output image. Changing
the viewing window is as simple as changing the points for the corners of the viewing window
and then applying the transformation matrix to the image objects (lines, circles, etc.).
5. If you extended this assignment in any way, describe what you did and how you did it. Include
pictures, or links to pictures that show what you did. As an extension, we created an image of
the Starship Enterprise going into warp speed (see above). We just created a series of images in
which the Enterprise was stretched, followed by a series of images in which it was collapsed into
an explosion (a filled polygon).
Viewing Co-ordinate Reference Frame
A frame of reference in physics, may refer to a coordinate system or set of axes within which to
measure the position, orientation, and other properties of objects in it, or it may refer to an
observational reference frame tied to the state of motion of an observer. It may also refer to both
an observational reference frame and an attached coordinate system as a unit.
Different aspects of "frame of reference"
The need to distinguish between the various meanings of "frame of reference" has led to a variety of terms. For
example, sometimes the type of coordinate system is attached as a modifier, as in Cartesian frame of reference.
Sometimes the state of motion is emphasized, as in rotating frame of reference. Sometimes the way it transforms to
frames considered as related is emphasized as in Galilean frame of reference. Sometimes frames are distinguished
by the scale of their observations, as in macroscopic and microscopic frames of reference.[1]
In this article the term observational frame of reference is used when emphasis is upon the state of motion rather
than upon the coordinate choice or the character of the observations or observational apparatus. In this sense, an
observational frame of reference allows study of the effect of motion upon an entire family of coordinate systems that
could be attached to this frame. On the other hand, a coordinate system may be employed for many purposes where
the state of motion is not the primary concern. For example, a coordinate system may be adopted to take advantage
of the symmetry of a system. In a still broader perspective, of course, the formulation of many problems in physics
employs generalized coordinates, normal modes or eigenvectors, which are only indirectly related to space and time.
It seems useful to divorce the various aspects of a reference frame for the discussion below. We therefore take
observational frames of reference, coordinate systems, and observational equipment as independent concepts,
separated as below:
An observational frame (such as an inertial frame or non-inertial frame of reference) is a physical
concept related to state of motion.
A coordinate system is a mathematical concept, amounting to a choice of language used to describe
observations.[2] Consequently, an observer in an observational frame of reference can choose to
employ any coordinate system (Cartesian, polar, curvilinear, generalized, …) to describe
observations made from that frame of reference. A change in the choice of this coordinate system
does not change an observer's state of motion, and so does not entail a change in the observer's
observational frame of reference. This viewpoint can be found elsewhere as well.[3] This is not to
dispute that some coordinate systems may be a better choice for some observations than are
others.
Choice of what to measure and with what observational apparatus is a matter separate from the
observer's state of motion and choice of coordinate system.
Here is a quotation applicable to moving observational frames and various associated Euclidean three-space
coordinate systems [R, R' , etc.]: [4]
“
We first introduce the notion of reference frame, itself related to the idea of observer: the
reference frame is, in some sense, the "Euclidean space carried by the observer". Let us give
a more mathematical definition:… the reference frame is... the set of all points in the
Euclidean space with the rigid body motion of the observer. The frame, denoted , is said to
move with the observer.… The spatial positions of particles are labelled relative to a frame
by establishing a coordinate system R with origin O. The corresponding set of axes, sharing
the rigid body motion of the frame , can be considered to give a physical realization of . In a
frame , coordinates are changed from R to R' by carrying out, at each instant of time, the
same coordinate transformation on the components of intrinsic objects (vectors and
tensors) introduced to represent physical quantities in this frame.
”
and this on the utility of separating the notions of and [R, R' , etc.]:[5]
“
As noted by Brillouin, a distinction between mathematical sets of coordinates and physical
frames of reference must be made. The ignorance of such distinction is the source of much
confusion… the dependent functions such as velocity for example, are measured with
respect to a physical reference frame, but one is free to choose any mathematical
coordinate system in which the equations are specified.
”
and this, also on the distinction between and [R, R' , etc.]:[6]
“
The idea of a reference frame is really quite different from that of a coordinate system.
Frames differ just when they define different spaces (sets of rest points) or times (sets of
simultaneous events). So the ideas of a space, a time, of rest and simultaneity, go
inextricably together with that of frame. However, a mere shift of origin, or a purely spatial
rotation of space coordinates results in a new coordinate system. So frames correspond at
best to classes of coordinate systems.
”
and from J. D. Norton:[7]
“
In traditional developments of special and general relativity it has been customary not to
distinguish between two quite distinct ideas. The first is the notion of a coordinate system,
understood simply as the smooth, invertible assignment of four numbers to events in
spacetime neighborhoods. The second, the frame of reference, refers to an idealized system
used to assign such numbers … To avoid unnecessary restrictions, we can divorce this
arrangement from metrical notions. … Of special importance for our purposes is that each
frame of reference has a definite state of motion at each event of spacetime.…Within the
context of special relativity and as long as we restrict ourselves to frames of reference in
inertial motion, then little of importance depends on the difference between an inertial
frame of reference and the inertial coordinate system it induces. This comfortable
circumstance ceases immediately once we begin to consider frames of reference in
nonuniform motion even within special relativity.…More recently, to negotiate the obvious
ambiguities of Einstein’s treatment, the notion of frame of reference has reappeared as a
structure distinct from a coordinate system.
”
Window-to-Viewport Co-ordinate Transformation
Once object descriptions have been transferred to the viewing reference frame, we choose the
window extents in viewing coordinates and select the viewport limits in normalized coordinates .
Object descriptions are then transferred to normalized device coordinates. We do this using a
transformation that maintains the same relative placement of objects in normalized space as they
had in viewing coordinates. If a coordinate position is at the center of the viewing window, for
instance, it will be displayed at the center of the viewport.
Figure (a) illustrates the window-to-viewport mapping. A point at position (xw, yw) in the window is
mapped into position (xv, yv) in the associated viewport. To maintain the same relative placement in
the viewport as in the window, we require that

(xv - xvmin)/(xvmax - xvmin) = (xw - xwmin)/(xwmax - xwmin)
(yv - yvmin)/(yvmax - yvmin) = (yw - ywmin)/(ywmax - ywmin)

Figure (a): A point at position (xw, yw) in a designated window is mapped to viewport coordinates
(xv, yv) so that relative positions in the two areas are the same.
Solving these expressions for the viewport position (xv, yv), we have
xv = xvmin + (xw - xwmin)sx
yv = yvmin + (yw - ywmin)sy
where the scaling factors are
sx = (xvmax - xvmin)/(xwmax - xwmin)
sy = (yvmax - yvmin)/(ywmax - ywmin)
These equations can also be derived with a set of transformations that converts the window area into the
viewport area. This conversion is performed with the following sequence of transformations:
1. Perform a scaling transformation using a fixed-point position of (xwmin, ywmin) that scales the
window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport.
Relative proportions of objects are maintained if the scaling factors are the same (sx = sy). Otherwise,
world objects will be stretched or contracted in either the x or y direction when displayed on the output
device.
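A minimal C sketch of this mapping, with illustrative struct and function names, might look as follows.

typedef struct { double xmin, xmax, ymin, ymax; } Rect;

/* Map a window point (xw, yw) to viewport coordinates (*xv, *yv)
   using the equations above. */
void window_to_viewport(Rect win, Rect vp,
                        double xw, double yw,
                        double *xv, double *yv) {
    /* scaling factors sx and sy from the equations above */
    double sx = (vp.xmax - vp.xmin) / (win.xmax - win.xmin);
    double sy = (vp.ymax - vp.ymin) / (win.ymax - win.ymin);
    *xv = vp.xmin + (xw - win.xmin) * sx;
    *yv = vp.ymin + (yw - win.ymin) * sy;
}

With sx != sy, objects are stretched or contracted as noted above; choosing a window and viewport with the same aspect ratio keeps sx equal to sy.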
Character strings can be handled in two ways when they are mapped to a viewport. The simplest
mapping maintains a constant character size, even though the viewport area may be enlarged or
reduced relative to the window. This method would be employed when text is formed with standard
character fonts that cannot be changed. In systems that allow for changes in character size, string
definitions can be windowed the same as other primitives. For characters formed with line segments,
the mapping to the viewport can be carried out as a sequence of line transformations.
From normalized coordinates, object descriptions are mapped to the viewport display devices. Any
number of output devices can be open in a particular application, and another window-to-viewport
transformation can be performed for each open output device. This mapping, called the workstation
transformation, is accomplished by selecting a window area in normalized space and a viewport
area in the coordinates of the display device. With the workstation transformation, we gain some
additional control over the positioning of parts of a scene on individual output devices. As illustrated
in Fig. (b), we can use workstation transformations to partition a view so that different parts of
normalized space can be displayed on different output devices.
Figure (b)
Figure (b) Mapping selected parts of a scene in normalized coordinated to different video
monitors with workstation transformations.
Exterior Clipping
So far, we have considered only procedures for clipping a picture to the interior of a region by
eliminating everything outside the clipping region. What is saved by these procedures is inside the
region. In some cases, we want to do the reverse, that is, we want to clip a picture to the exterior of
a specified region. The picture parts to be saved are those that are outside the region. This is
referred to as exterior clipping.
A typical example of the application of exterior clipping is in multiple-window systems. To correctly
display the screen windows, we often need to apply both internal and external clipping. Figure 6-31
illustrates a multiple-window display. Objects within a window are clipped to the interior of that
window. When other higher-priority windows overlap these objects, the objects are also clipped to
the exterior of the overlapping windows.
Exterior clipping is used also in other applications that require overlapping pictures. Examples here
include the design of page layouts in advertising or publishing applications or for adding labels or
design patterns to a picture. The technique can also be used for combining graphs, maps, or
schematics. For these applications, we can use exterior clipping to provide a space for an insert into
a larger picture.
Procedures for clipping objects to the interior of concave polygon windows can also make use of
external clipping. Consider a line P1P2 that is to be clipped to the interior of a concave window with vertices
V1V2V3V4V5. Line P1P2 can be clipped in two passes: (1) first, P1P2 is clipped to the interior of the
convex polygon V1V2V3V4 to yield the clipped segment P1'P2'; (2) then an external clip of P1'P2' is
performed against the convex polygon V1V4V5 to yield the final clipped line segment P1''P2'.
Figure (c): A multiple-window screen display showing examples of both interior and exterior
clipping.
2D Viewing Functions
Viewing in PHIGS can be split into two separate sections. The first takes a PHIGS structure that
is to be traversed and describes how it is to be mapped onto a device independent coordinate
space called Normalized Projection Coordinates (NPC). PHIGS guarantees that it is possible to
display on a device the part of NPC space in the range 0 to 1 in the X and Y directions. The
second section of viewing is to take the picture defined in NPC space and describe where it is to
be positioned on the display of the device or on the sheet of paper.
In this Chapter, the first section of viewing will be described which produces the NPC picture.
The term scene is used to describe the graphics produced by the structure traversal and picture
to describe the parts of the scene, converted to NPC coordinates, that are available for display.
The viewing process which maps the scene to the picture has 4 main components:
1. Partition by view index: the scene to be displayed may consist of a number of separate parts
some of which may be magnified or have different viewing characteristics from the rest. PHIGS
allows an attribute called the view index to be associated with the graphics primitives generated
on structure traversal. This is used to differentiate the parts of a scene.
2. Orientate the view: for each part of the scene differentiated by a view index, the application may
redefine the origin and orientation of the world coordinate system. This new coordinate system
is called the View Reference Coordinate (VRC) system. It should be defined such that it is the
most appropriate for the view mapping.
3. Map to NPC: define the window to viewport mapping that maps part of a scene defined by a
specific view index from its description in VRC coordinates to one in NPC.
4. Clip: decide whether the resulting NPC picture part should be clipped against a specified
boundary defined in NPC coordinates. This can be distinct from the viewport used in the window
to viewport mapping.
View indices in PHIGS play a similar role to Normalization Transformations in GKS. They give
the application the ability to compose a picture in the NPC space out of a set of distinct parts.
Similar to GKS, PHIGS has a default setting of view index which is 0 and the view associated
with view index 0 cannot be changed. For view index 0, both orientation and mapping are
identity matrices with clipping set at the boundary of the unit square. This is why the examples
so far have all worked as long as the output has been constrained to the unit square in world
coordinates. Effectively, WC, VRC and NPC are the same coordinate system. That is, the
position (X,Y) in each coordinate system represents the same point in the graphics to be
displayed. A major difference between the viewing function in GKS and PHIGS is that in PHIGS
the mapping from world to NPC coordinates can be defined differently for each workstation.
6.3 VIEW INDEX
The view index attribute is defined by a structure element:
SET VIEW INDEX(I)
On traversal, all subsequent output primitives generated have view index I associated with them
until the view index attribute is changed by another SET VIEW INDEX element. The current
value of view index is stored in the traversal state list. The initial default value is 0. For example:
OPEN STRUCTURE(ENV)
POLYLINE(5, XA, YA)
SET VIEW INDEX(1)
EXECUTE STRUCTURE(DESK)
SET VIEW INDEX(2)
EXECUTE STRUCTURE (DESK)
CLOSE STRUCTURE
will, on traversal, define a polyline to be viewed using the default view 0 and two views of the
same desk. How they will appear in the NPC picture will depend on the definition of the views
set up for view index 1 and 2.
6.4 METRIC DESK
The desk defined previously could be redefined using a more realistic coordinate system, say
metres:
SUBROUTINE DLARGE
REAL XL(5), YL(5)
DATA XL/0, 0, 2, 2, 0/
DATA YL/0, 1, 1, 0, 0/
POLYLINE(5, XL, YL)
RETURN
END
This defines a desk 2 metres long and 1 metre across with the origin at the bottom left corner.
The corner desk defined previously could also be redefined as:
SUBROUTINE DCORNR
REAL XC(12), YC(12)
REAL PI
INTEGER I
XC(1)=0
YC(1)=0
XC(2)=0
YC(2)=1
PI=4*ATAN(1.0)
DO 50 I=3,10
XC(I)=COS((11-I)*PI/18)
YC(I)=SIN((11-I)*PI/18)
50 CONTINUE
XC(11)=1
YC(11)=0
XC(12)=0
YC(12)=0
POLYLINE(12, XC, YC)
RETURN
END
This is a desk with 1 metre radius. The complete desk could be defined by:
OPEN STRUCTURE(DESK)
DLARGE
MVRP(2, 0)
DCORNR
MVRP(2, -1)
BUILD LOCAL TRANSFORMATION(0, 0, 0, 0, 0, 0.5, 1, ER, MT)
SET LOCAL TRANSFORMATION(MT, PRE)
DLARGE
CLOSE STRUCTURE
The small desk has been defined as an asymmetrically scaled version of the large desk where
the X-dimension has been halved, giving a one-metre-square desk.
The individual pieces of the desk have been defined in their own coordinate systems (modelling
coordinates). On traversal, the complete desk produced is defined in World Coordinates (WC)
which extends from 0 to 3 metres in the X-direction and -1 to 1 metres in the Y-direction as the
origin is at the bottom left corner of the large desk. The various desk accessories could also be
defined and scaled to fit on the desk as before.
6.5 VIEW ORIENTATION
View orientation is specified in PHIGS by a 3 × 3 homogeneous matrix similar to the modelling
transformations specified using BUILD TRANSFORMATION MATRIX. A special utility function
is provided specifically to define the View orientation matrix:
EVALUATE VIEW ORIENTATION MATRIX (VRPX, VRPY, VUPDX, VUPDY, ER, VOM)
The aim of view orientation is to change the origin and orientation of the world coordinate scene
to be viewed to one more appropriate for the mapping to NPC coordinates. The point
(VRPX,VRPY) defines the new origin to be used and (VUPDX,VUPDY) is a vector from
(VRPX,VRPY) that specifies the new Y-direction of the axes. The function builds the matrix
VOM that performs this change of origin and orientation. The parameter ER is set to 0 if a matrix
has been built successfully or to a non-zero error value otherwise.
In the example above, it may be desired to view the desk, which extends from 0 to 3 metres in
the X-direction and -1 to 1 metres in the Y-direction, at an angle of rotation about the centre
point. To do this conveniently would require the origin to be moved to the point (1.5, 0). By
specifying the Y-axis as a vector from this point of dimension (1,1), this would define the Y-axis
as being in the direction from (1.5, 0) to (2.5, 1). Effectively, the desk is rotated by 45° anticlockwise
relative to the initial orientation. Note that the up direction could equally well have been defined by
the vector (2,2); the direction remains the same.
To produce the desired change of orientation requires:
EVALUATE VIEW ORIENTATION MATRIX(1.5, 0, 1, 1, ER, VOM)
A mistake often made is to take (VUPDX,VUPDY) to mean an absolute position rather than a
vector direction.
Although changing the orientation angle is provided mainly to allow the viewing of 3D objects
from different angles, it can be equally effective in the 2D area.
6.6 VIEW MAPPING
Once the view reference coordinates have been established, the mapping to NPC space needs
to be defined. Again, a specific utility function is provided to construct the 3 × 3 homogeneous
view mapping matrix:
EVALUATE VIEW MAPPING MATRIX(WL, PVL, ER, VMM)
WL defines the X and Y-limits of an area in view reference coordinates (called the window) to be
mapped onto the area defined by the X and Y-limits specified in PVL (called the viewport) in
NPC coordinates. The limits are specified in the order XMIN, XMAX, YMIN, YMAX.
If the desk defined above has its origin moved to the middle, it will extend by 1.5 metres in the X
or Y-direction depending on the orientation given. To map this onto the centre part of the NPC
unit square would require:
WL(1)=-1.6
WL(2)= 1.6
WL(3)=-1.6
WL(4)= 1.6
PVL(1)=0.25
PVL(2)=0.75
PVL(3)=0.25
PVL(4)=0.75
EVALUATE VIEW MAPPING MATRIX(WL, PVL, ER, VMM)
As before, the required view mapping matrix is returned in VMM with ER set to 0 if successful
otherwise a non-zero error value is returned.
6.7 VIEW DEFINITION AND CLIPPING
So far, the production of the two matrices that define view orientation and view mapping have
been described. The definition of the view itself is given by:
SET VIEW REPRESENTATION(WS, VI, VOM, VMM, VC, XYC)
The workstation WS has the view transformation for view index VI defined by the two matrices
VOM and VMM generated by the utility functions described above. The view representation is
defined by this single function invocation to ensure that intermediate definitions cannot be
produced with illegal transformations. As the view representation can be redefined while a
structure is posted, the effect occurs as soon as is possible. All that remains is to describe the
final two parameters which decide whether clipping should be applied and where.
The parameter VC defines that part of NPC space to clip against. Frequently, the values of the
clipping limits VC and the viewport specified by PVL in the function EVALUATE VIEW
MAPPING MATRIX are identical. However, this is not essential and in some applications it is
necessary to separate the definition of the window/viewport mapping which defines the
coordinate transformation from the clipping limits themselves. Although the parameter VC
defines the clipping region, it does not specify that it is operative. The final parameter XYC can
be set to CLIP or NOCLIP and specifies whether the clipping limits VC have any effect.
To clip the desk defined above so that only the part between 0.4 and 0.6 is visible would
require:
VC(1)=0.4
VC(2)=0.6
VC(3)=0.4
VC(4)=0.6
XYC=CLIP
SET VIEW REPRESENTATION(WS, VI, VOM, VMM, VC, XYC)
A COMPLETE EXAMPLE
Let us assume that the desk with its 3 parts has been defined in metres with all the accessories
placed on it. Suppose a view of the complete desk is required at some angle of rotation and, at
the same time, a detail of a part of the desk is required. To delineate the two views a boundary
round each view is drawn. An example of the overall picture is shown in Figure 6.1. The desk
has been displayed without rotation and the detail is of the phone on the left side of the corner
desk.
Figure 6.1: Two views of desk
View index 0, the default view, could be used to specify the boundary around the two separate
views of the desk:
REAL AX(8), AY(8)
DATA AX /0.666, 0, 0, 0.666, 0.666, 1, 1, 0.666/
DATA AY /1, 1, 0.333, 0.333, 1, 1, 0.333, 0.333/
OPEN STRUCTURE(ENV)
POLYLINE(8, AX, AY)
SET VIEW INDEX(1)
EXECUTE STRUCTURE(DESK)
SET VIEW INDEX(2)
EXECUTE STRUCTURE (DESK)
CLOSE STRUCTURE
The polyline defines the two areas in NPC space as the square from (0,0.333) to (0.666,1) and
the rectangle from (0.666,0.333) to (1,1). Before posting the structure ENV, the two views 1 and
2 need to be defined otherwise the default view which is the same as view 0 will be used:
EVALUATE VIEW ORIENTATION MATRIX(1.5, 0, VUPDX, VUPDY, ER, VOM1)
WL1(1)=-1.6
WL1(2)= 1.6
WL1(3)=-1.6
WL1(4)= 1.6
PVL1(1)=0
PVL1(2)=0.666
PVL1(3)=0.333
PVL1(4)=1
EVALUATE VIEW MAPPING MATRIX(WL1, PVL1, ER, VMM1)
VC1(1)=0
VC1(2)=0.666
VC1(3)=0.333
VC1(4)=1
XYC1=CLIP
SET VIEW REPRESENTATION(WS, 1, VOM1, VMM1, VC1, XYC1)
This defines the first view in the left top square of the NPC space. The orientation of the desk
will depend on the values of VUPDX and VUPDY. If VUPDX=0 and VUPDY=1, the desk will not
be rotated. The clipping rectangle is initially defined so that the clipping boundary coincides with
the limits of the viewport. As the window is defined greater than the boundary of the desk, all the
desk should be visible in this viewport.
In the rectangular area to the right, a detail of the desk at normal orientation is to be displayed.
The main item to be focussed on initially is the left telephone on the corner desk so the origin is
placed at the bottom left of the corner desk and the orientation is with the Y-axis vertical:
EVALUATE VIEW ORIENTATION MATRIX(2, 0, 0, 1, ER, VOM2)
WL2(1)=0
WL2(2)=0.5
WL2(3)=0
WL2(4)=H
PVL2(1)=0.666
PVL2(2)=1
PVL2(3)=0.333
PVL2(4)=1
VC2(1)=0.666
VC2(2)=1
VC2(3)=0.333
VC2(4)=1
XYC2=CLIP
SET VIEW REPRESENTATION(WS,2, VOM2, VMM2, VC2, XYC2)
POST STRUCTURE(WS, ENV, 0.2)
With the value of H set to 1, the aspect ratio in the right area (1:2) is the same for both the
window in view reference coordinates and the viewport in normalized projection coordinates
giving a picture as in Figure 6.1.
Figure 6.2: Changed orientation in first view
If the values of (VUPDX, VUPDY) are set to (1,1), the orientation of the desk in view 1 would be
rotated by 45° anti-clockwise. If the clipping limits for view 1 were reset to (0.1, 0.566, 0.433,
0.9), the result would be as in Figure 6.2. If H is changed to 0.5, the aspect ratio is changed in
the viewing transformation giving the picture in Figure 6.3.
This type of display is often used in computer aided design when the operator requires to
manipulate the complete scene while still having a detailed view of the particular part currently
being defined. As will be seen in Chapter 10, it is possible for the application to allow the
operator to interact with the display in either of the two regions.
Figure 6.3: Changing aspect ratio
This emphasizes the point that the viewing transformation can change the aspect ratio in the
transformation from world coordinates to NPC coordinates and this can be set differently for
each view.
Clipping Operations
It is sometimes necessary to extract information about the fundamental period (and thus also the
fundamental frequency) or about the period of the longest component (lowest frequency) in a
complex signal. The fundamental period (frequency) is the longest period (lowest frequency) in a
spectrum of harmonically related tones. In a given signal with non-harmonically related
components, it may not necessarily be the lowest frequency. Two simple measures are available
for this purpose:
1. distance between peaks of the same sign (i.e. positive or negative),
2. the distance between zero-crossings of the same sense (i.e. positive-going or negative-going).
The measures may be combined. The measures are valid for sinusoidal signals. However,
sometimes it is necessary to extract the fundamental period (and thereby also the fundamental
frequency) of a complex signal. For complex signals, however, the method will not work in the
general case, because the ripples which are due to harmonics or other higher frequency
components may have a peak amplitude which is comparable with that of the fundamental
frequency (or, more generally, with the lowest frequency), or they may cause additional zero
crossings, as shown in the Figures.
The time-domain operations used as a heuristic measure to reduce the effects of higher amplitude
harmonics are the clipping operations:
1. Peak clipping. A maximum amplitude level whose absolute value is lower than the
expected amplitude of spurious peaks is defined. Positive and negative instantaneous
amplitude values of the signal are limited to the corresponding positive and negative
signed values.
2. Centre clipping. A minimum amplitude level whose absolute value is higher than the
expected amplitude of peaks associated with spurious zero-crossings is defined. Positive
and negative instantaneous amplitude values of the signal are limited to the
corresponding positive and negative signed values.
The two operations may of course be combined; applications of these operations are shown in
the Figures. The operations are in fact quite widely used to pre-process the signal for
fundamental frequency analysis. The operations are also used in simple forms of 'signal
compression' in narrow-band radio transmission, to increase the total energy (at the expense of
distortion) in the transmitted signal; more sophisticated compression techniques are used in
practice, however.
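The text gives no code, so the following C sketch shows one common form of the two operations on an array of samples; the level names limit and cmin are illustrative assumptions, and other variants of centre clipping exist.

#include <stddef.h>

/* Peak clipping: limit instantaneous amplitudes to the range [-limit, +limit]. */
void peak_clip(double *x, size_t n, double limit) {
    for (size_t i = 0; i < n; i++) {
        if (x[i] >  limit) x[i] =  limit;
        if (x[i] < -limit) x[i] = -limit;
    }
}

/* Centre clipping: zero samples whose magnitude is below cmin, so that
   low-level ripple cannot produce spurious zero-crossings. */
void centre_clip(double *x, size_t n, double cmin) {
    for (size_t i = 0; i < n; i++) {
        if (x[i] > -cmin && x[i] < cmin) x[i] = 0.0;
    }
}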
Figure 14: Complex signal with non-fundamental peaks and zero crossings.
Figure 15: Peak clipped signal.
Figure 16: Centre clipped signal.
Figure 17: Peak and centre clipped signal.
Figure 6.4: Change rotation in first view
By changing (VUPDX,VUPDY) to (-1,1), the desk rotation is in the opposite direction in view 1
while retaining the same view in the right area (see Figure 6.4).
This shows the flexibility possible using multiple views in 2 dimensions. The potential uses in 3
dimensions are much greater as it is often only through multiple views that an impression of the
scene can be obtained.