Geometric Modeling Using Six Degrees of Freedom Input Devices
Jiandong Liang, Mark Green
Department of Computing Science, University of Alberta
Edmonton, Alberta, Canada T6G 2H1
Abstract
A highly interactive 3D modeling system (JDCAD) is
presented in this paper. The motivation behind the
project is to explore the use of input devices with
many degrees of freedom to assist 3D geometric modeling. Several design issues are discussed in light of ergonomic considerations, ease of use, proper visual feedback, and related factors. New 3D interaction techniques based on the bat are introduced. They include general purpose techniques (such as navigation, 3D menus, object selection techniques, and dials) and application-specific techniques (such as modeling operations and interrogation methods). Preliminary results are encouraging: modeling a mechanical component of moderate complexity usually takes about one tenth of the time required by traditional systems.
1 Introduction
Three dimensional geometric modeling has been a challenging topic for computer graphics and computer-aided
design [12]. Although many systems have been developed to facilitate 3D geometric modeling, few have penetrated into the early stage of the design process, that
is, the conceptualization or sketching phase [4]. The major reason for this is the generally low efficiency of interaction. Using a commercial system, it usually takes several hours to model a mechanical part of moderate complexity [6]. This turn-around time makes it virtually impossible to "sketch" with a computer [5].
We believe that this difficulty in sketching is largely
due to poor user interface design. In particular, the traditional mouse-based interaction style is a bottleneck to
3D geometric design, as it forces the user to decompose
a 3D task into a series of 1D or 2D tasks.
Based on this observation, we have implemented a
new highly interactive 3D modeling system, currently
dubbed JDCAD. In this system, the user is equipped
with six degrees of freedom input devices. They allow the user to directly manipulate objects in 3D space,
without having to map the task into 1D or 2D space.
The naturalness and intuitiveness of such interactions
can greatly enhance the efficiency of geometric design,
thus making it possible to sketch with a computer. Using JDCAD, we show that the prototyping time can be
reduced to about one tenth of that required by commercial systems.
We first review some of the related work in section 2.
An overview of JDCAD and discussion of several low
level design issues will be given in section 3. We present
some basic bat-based interaction techniques in section 4.
Interaction techniques specific to 3D modeling tasks are
discussed in section 5. Finally, we give a brief conclusion
in section 6.
2 Previous Work
Mouse-based 3D manipulations have two major limitations. First, simultaneous translations along all three
directions are not possible, due to the 2D nature of the
mouse. This is the fundamental obstacle barring mice
from being useful in 3D quick prototyping. When a user
wants to create a box-shaped object, he/she would certainly wish to do it in a single stroke, as in the case of
creating a rectangle with a 2D drawing tool. If a system forces the user to break such a task into sub-tasks
(such as creating a rectangle, and then adding the third
dimension), the user will have to think in terms of subspaces, rather than focusing on the 3D nature of the
object. In other words, the level of abstraction will be
lowered, and the result is low interaction efficiency.
Second, 3D rotations are not effectively supported. Most techniques limit the rotations to around one of the major axes at a given time. To effect a general 3D rotation, three consecutive rotations have to be performed. The fact that rotations are not commutative makes the situation worse. Shoemake developed the arcball metaphor for 3D rotation using a mouse [11]. However, the user still has to mentally map the rotation onto a great arc on a sphere, and certain rotations cannot be executed in one operation. It also takes screen space to display the sphere. In general it is difficult to control 3D rotation using 2D metaphors, due to the discrepancy in degrees of freedom.
To overcome these problems, several experiments have been carried out using interaction techniques with many degrees of freedom to support 3D modeling. Three of them are of particular importance.
In the mid 1970's, James H. Clark built an experimental system for designing surfaces in 3D [2]. It used
a head-mounted display to provide a stereoscopic view
to the designer, and let the designer use a mechanical
3D "wand" to drag control points in 3D. The system offered the designer a strong kinaesthetic correspondence
which gave it a great advantage over the more common
2D techniques of viewing and designing curves and surfaces.
Emanuel Sachs et al. at MIT built the 3-Draw system for interactive 3D shape design [9]. The system was based on a pair of Polhemus Isotrak six degree-of-freedom tracking devices. The designer held one sensor
(in the form of a palette) in his left hand to specify a
moving reference frame, and used another sensor (in the
form of a stylus) to draw and edit 3D curves in space.
A graphics display was used to visualize the scene from
a virtual camera position.
Butterworth et al. built the 3DM system at the University of North Carolina [1]. The system uses a head-mounted display to place the designer in the modeling space. The input device is a Polhemus 3Space Isotrak mounted in a hollowed-out billiard ball with two buttons on it. The designer uses the hand-held tracker to select functions from a toolbox, and to create and manipulate objects in 3D space.
Both 3-Draw and 3DM offered easy and intuitive ways of modeling 3D shapes, and demonstrated that 3D direct manipulation interfaces are superior to mouse-based ones. However, engineering design tasks demand
more consideration for ergonomics and more support for
precision than is provided by these systems.
3 System Overview
Our system is based on an IRIS 4D/310 VGX graphics
workstation and a pair of Polhemus Isotrak six degree-of-freedom tracking devices. We use one tracking device to dynamically monitor the user's head position, and provide the kinetic 3D display effect [10] [8]. The second tracking device is used as a hand-held "bat" [13], which is the main input device for 3D direct manipulation. The schematic drawing in figure 1 shows the
overall setting.
Figure 1: A typical setting of our system (sources, head tracker, bat, 3D cursor, visible volume).
When operating the bat, the user can rest his/her elbow on the desktop. This is designed to reduce physical
fatigue.
3.1 Kinetic 3D Display
To enable 3D direct manipulation, a system must first
provide enough spatial information to the user. Since
we don't have true 3D display techniques yet, we have
to rely on 2D displays to visualize the 3D scene.
In addition to common depth cues such as perspective
projection, hidden surface removal, and shading, we also
supply a kinetic depth effect by tracking the user's head
position. When sitting in front of the graphics workstation, the user wears the sensor of the tracking device
on his head. Through a series of transformations, the
approximate eye position with respect to the display
window is obtained and used as the center of the perspective projection. Since the display is updated at 15
to 60 frames per second, the user has a strong spatial
perception as if he/she is looking into a real 3D space.
Similar experiments with the kinetic depth effect were also
made by other researchers [3] [10].
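To make the projection step concrete, the following minimal Python sketch shows one way a tracked eye position could drive an off-axis perspective frustum. It is only an illustration: the window-centered coordinate frame, the function name, and the parameters are our assumptions, not JDCAD's actual code.

    # A minimal sketch: the display window is taken to be the rectangle z = 0,
    # |x| <= w/2, |y| <= h/2, and the filtered eye position (ex, ey, ez) with
    # ez > 0 becomes the center of projection of an asymmetric frustum.
    def off_axis_frustum(eye, w, h, near, far):
        ex, ey, ez = eye
        s = near / ez                       # project window edges onto the near plane
        left = (-w / 2.0 - ex) * s
        right = (w / 2.0 - ex) * s
        bottom = (-h / 2.0 - ey) * s
        top = (h / 2.0 - ey) * s
        return left, right, bottom, top, near, far

    # Example: eye 60 cm in front of a 34 cm x 27 cm window, 10 cm to the right.
    print(off_axis_frustum((10.0, 0.0, 60.0), 34.0, 27.0, 1.0, 1000.0))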
We avoided using a head-mounted display, for two
reasons. First, ergonomic considerations. Since a user
is expected to spend a significant period of time (say,
several hours a day) using such a system, the physical
burden of wearing a head-mounted display will be intolerable. Second, the head-mounted display prevents the
user from accessing traditional input devices such as the
keyboard and the mouse, thus, it is not compatible with
existing applications.
Despite the lack of binocular parallax (as could be
provided by a head-mounted display), we found the kinetic display to be reasonably effective. Our experience shows that the motion parallax resulting from the user grabbing and rotating the model with the bat provides
adequate depth information when it is needed.
3.2 3D Input Using the Bat
The main input device for our system is a bat, a hand-held Isotrak sensor. The position and orientation of the bat are monitored in real time, and echoed on the
display as a 3D cursor (in the form of a jack). The user
uses the bat to interact with objects and to navigate
through the design space.
We do not use glove-based input, for three reasons. First, the need to wear a glove at all times is too encumbering for long-term use. Second, the precision of finger tracking is often too low with today's glove techniques. Third, the hand has certain kinematic constraints; for example, it is far easier to rotate something held by the fingers than to rotate the whole hand itself.
Many design issues have to be carefully treated before
one can make good use of the bat in such an environment.
In order to achieve correct kinaesthetic correspondence between hand movement and the 3D cursor motion, the tracking data from the bat has to go through
a series of transformations.
First, the measurement from the Isotrak must be filtered [7] and transformed into a useful reference frame
(in our case, that of the graphics terminal).
Second, since the Isotrak is an absolute locator, a
clutch mechanism is necessary for re-mapping its physical location into a logical location. In addition, we found two kinds of relocation operations to be very useful: (a) a "homing" operation, which brings the 3D cursor to the center of the visible volume, and (b) a "masking" operation, which disables the passing of either positional or orientational change from the bat to the user interface.
With these mechanisms, the bat can be operated with
the same ease as a relative locator such as a mouse.
Third, any zoom factor should be applied to the positional part of the bat data before it is transformed into
the modeling coordinate system. This ensures a constant
ratio between hand movement and the apparent cursor
motion. So, at a zoom factor of 10, the 3D cursor still
seems to move 1 cm to the user when the hand moves
1 cm, although the distance it travels in the modeling
coordinate system is actually 0.1 cm.
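The three steps can be summarized in a minimal Python sketch. The frame-change helpers to_terminal and to_modeling stand in for the application's own transformations and are hypothetical, as are the other names.

    # A minimal sketch of the bat-to-cursor pipeline: filtered tracker data is
    # expressed in the terminal frame, a clutch offset re-maps the absolute
    # reading into a logical location, and the zoom factor is applied to the
    # positional part only, so hand motion and apparent cursor motion keep a
    # constant ratio.
    def bat_to_cursor(position, orientation, clutch_offset, zoom,
                      to_terminal, to_modeling):
        p, q = to_terminal(position, orientation)           # tracker -> terminal frame
        p = [pi + oi for pi, oi in zip(p, clutch_offset)]   # clutch re-mapping
        p = [pi / zoom for pi in p]                         # constant apparent motion
        return to_modeling(p, q)                            # terminal -> modeling frame

    # With identity frame changes and a zoom factor of 10, a 1 cm hand movement
    # becomes a 0.1 cm movement in modeling coordinates.
    identity = lambda p, q: (p, q)
    print(bat_to_cursor([1.0, 0.0, 0.0], None, [0.0, 0.0, 0.0], 10.0,
                        identity, identity))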
4 Basic Interaction Techniques
Since the user usually holds the bat in one hand, and uses the other hand for keyboard interactions (see figure 1), it is usually inconvenient to use the mouse, as this requires switching the hand from one device to the other. We designed several basic interaction techniques to alleviate this problem.
4.1 Menu Selection
To provide menu selection, we first designed a spherical menu called a daisy [8]. The menu items are evenly
distributed over a virtual sphere (for example, on the
vertices of a dodecahedron or icosahedron). When the
user presses down the key for a pop-up daisy, the daisy
shows up around the 3D cursor, and thus its orientation follows the orientation of the bat. In addition, a
selection basket (a truncated cone in wireframe) is displayed, which is centered at the 3D cursor and always
faces the user. The user may rotate the daisy so the
desired item falls within the selection basket (the item is color-highlighted when it is in the selection basket), and releases the key to confirm the selection and dissolve the pop-up daisy. Figure 2 shows a snapshot of a daisy
for choosing among various kinds of primitives.
Figure 2: A snapshot of a pop-up daisy.
The daisy worked quite well. It let the user select menu items without switching the dominant hand
between bat and mouse. The selection operation requires only finger movement, since the bat is only about 1 x 1.5 x 1.5 cm in size and can be easily rotated by two fingers and the thumb.
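One way the highlighted daisy item could be determined is sketched below in Python; the rotate helper, which applies the bat's current orientation to a vector, and the basket angle are assumptions made for illustration.

    # A minimal sketch: each daisy item has a fixed direction on the unit
    # sphere; after applying the bat's orientation, the item whose direction
    # points most nearly at the viewer, and lies inside the selection basket,
    # is highlighted.
    import math

    def pick_daisy_item(item_dirs, rotate, basket_half_angle,
                        view_dir=(0.0, 0.0, 1.0)):
        best, best_dot = None, math.cos(basket_half_angle)
        for i, d in enumerate(item_dirs):
            r = rotate(d)                                # item direction after rotation
            dot = sum(a * b for a, b in zip(r, view_dir))
            if dot > best_dot:                           # inside the basket, closest so far
                best, best_dot = i, dot
        return best                                      # None if nothing is in the basket

    print(pick_daisy_item([(0, 0, 1), (1, 0, 0), (0, 1, 0)],
                          lambda v: v, math.radians(20)))   # -> 0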
After a while, we discovered room for further improvement. First, we noticed that sometimes there could be
a depth ambiguity among the items when the daisy is
relatively static (as seen in figure 2). This is mainly because of the lack of occlusion among the items, as this depth cue is often crucial for objects in a close neighborhood. Second, the cable extending from the sensor makes it anisotropic with respect to rotation (see figure 3): rotations around the X-axis are easier to execute than those around other directions. These two
observations prompted the design of the ring menu.
Figure 3: Directions (X, Y, Z) associated with the Isotrak sensor.
In a ring menu, the items are arranged along the circumference of a circle (see figure 4). Two transparent
belts are used to eliminate depth ambiguity, and to signify selection. The gap on the front belt always faces the
user (just like the selection basket in the daisy), and the
belts remain static with respect to the X-axis. The user
turns the items in the ring by rotating the bat around its
X-axis. The item that falls into the gap is highlighted
as the current selection. The user may rotate the bat
to tilt the entire ring in any direction, but only rotation
around the X-axis turns the items with respect to the
belts.
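The roll-to-item mapping just described is small enough to show as a Python sketch; the accumulated roll angle and the item count are illustrative parameters.

    # A minimal sketch: the items sit evenly spaced on a circle, and only the
    # roll of the bat about its own X-axis moves them past the fixed gap in
    # the front belt; the item currently in the gap is the current selection.
    import math

    def ring_selection(roll, n_items):
        step = 2.0 * math.pi / n_items          # angular spacing between items
        return int(round(roll / step)) % n_items

    print(ring_selection(math.pi / 2, 12))      # a quarter turn on a 12-item ring -> 3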
Figure 4: A snapshot of a ring menu.
The ring menu proved to be superior to the daisy in its ease of use, as there is no depth ambiguity, and the rotation is easy to execute. This is one example showing that it takes more than naive use of the orientation data to produce satisfactory results.
4.2 Object Selection
Since the bat produces both position and orientation information, new techniques can be developed to facilitate object selection.
We first experimented with a simple ray-based selection mechanism, which we called "laser gun" selection. A ray is emitted from the bat along its X-axis (cf. figure 3), and the first object it intersects becomes the candidate for selection. By rotating the bat with the fingers, the user can easily aim at different objects and select them. There is virtually no need to move the wrist or the arm, which is an ergonomic advantage over mouse-based selection methods.
This method works well in most situations. However, when the target for selection is relatively small and far away from the center region, the user may have difficulty in aiming due to the high angular accuracy required. This problem is often exacerbated by the fact that there is a certain level of noise in the orientation data.
To cope with this problem, we developed a more sophisticated selection mechanism called "spotlight selection". The idea is to make the selection softer than a mathematical ray, so that a certain level of error in aiming can be accommodated (just as a spotlight can light a subject without having to be perfectly directed at it).
Figure 5: A snapshot of spotlight selection.
To make such "soft" selection, we first establish a selection cone with its apex at the bat and its axis being the X-axis of the bat (same as the ray of the laser gun selection). Objects within this cone are potential candidates for selection. Second, an anisotropic distance metric S is defined to emulate the spotlight effect: (a)
if two objects are at equal Euclidean distance from the
bat, the one which is closer to the center line of the
cone should be selected, and (b) if two objects subtend
the same angle at the bat with the center line, the one
with the smaller Euclidean distance to the bat should
be selected.
There are infinitely many distance metrics that satisfy these two requirements. In our current implementation, we chose a metric that produces ellipsoidal isodistance surfaces (see figure 6).
Figure 6: A cross section of the selection cone showing isodistance surfaces of the anisotropic metric S. Point P is considered closer to the origin than point Q.
Suppose the spread angle of the selection cone is α and K = 1/sin α. With x measured along the axis of the cone, the anisotropic distance from a point P = (x, y, z) to the origin is given by
\[ S(P) = \frac{|P|}{K}\,\sqrt{1 + (K^2 - 1)\,\frac{y^2 + z^2}{|P|^2}} = \sqrt{\frac{x^2}{K^2} + y^2 + z^2}, \]
where |P| = \sqrt{x^2 + y^2 + z^2} is the ordinary Euclidean distance. The isodistance surfaces S(P) = c are ellipsoids with semi-axis Kc along the cone axis and c across it, so a point near the center line is treated as closer than a point of equal Euclidean distance near the edge of the cone.
With the spotlight selection, the user can select a distant object without having to aim at it perfectly, while
selection of nearby objects is still as easy as with the
laser gun. It makes good use of both the positional and orientational information of the bat, and reduces the impact of measurement noise.
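Putting the cone test and the metric S together, spotlight selection can be sketched as follows in Python. The object list, frame handling, and names are simplified illustrations rather than the system's code.

    # A minimal sketch: every object whose direction from the bat lies inside
    # the selection cone is ranked by the anisotropic metric S defined above,
    # and the object with the smallest S becomes the candidate.
    import math

    def spotlight_candidate(objects, bat_pos, cone_axis, alpha):
        K = 1.0 / math.sin(alpha)
        best, best_s = None, float("inf")
        for name, pos in objects:
            v = [p - b for p, b in zip(pos, bat_pos)]
            dist = math.sqrt(sum(c * c for c in v))        # Euclidean distance |P|
            if dist == 0.0:
                return name
            x = sum(c * a for c, a in zip(v, cone_axis))   # component along the cone axis
            if x < dist * math.cos(alpha):
                continue                                   # outside the selection cone
            s = math.sqrt(x * x / (K * K) + (dist * dist - x * x))   # the metric S
            if s < best_s:
                best, best_s = name, s
        return best

    # Of two objects at about the same Euclidean distance, the one nearer the
    # beam axis wins.
    print(spotlight_candidate([("a", (5.0, 0.5, 0.0)), ("b", (4.5, 2.2, 0.0))],
                              (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), math.radians(30)))   # -> a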
Since the graphics workstation we use supports spotlight lighting calculations, we use them as visual feedback for selection (see figure 5). We also draw a transparent cone to signify the selection area. The user can see which objects are being lit by the spotlight, and can tell whether a certain object is close to the center line by the intensity of the light it receives from the spotlight (we use a white light for environmental lighting and a red light for the spotlight) and by its intersection line with the cone.
4.3 Bat-Operated Dial
From our experience of using the bat for modeling operations, we found that a mapping between positional change and orientational change is sometimes difficult to understand and use (cf. section 5.1). In order to facilitate the input of intrinsically angular data, we designed the bat-operated dial (see figure 7).
The novelty of this dial is that it is only coupled with
the rotation of the bat around its X-axis, regardless of
the overall orientation of the bat. In this way, the user
can start the dial when the bat is in an arbitrary orientation and adjust it by rotating the bat around its X-axis,
which is very easy to operate.
In order to achieve higher resolution, secondary and
tertiary dials may be used. Each new dial takes the
current reading of the previous one as its center value
and amplies the angular resolution by a factor of 10.
Figure 7: A bat-operated dial is being used to adjust the
spread angle of a cone.
When used in conjunction with angular snap gridding, a relative precision of 10^-4 can be readily achieved (the usable angular resolution of a single dial is about 5 degrees). This usually suffices for engineering use.
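The cascading of dial readings can be sketched in Python as follows; the full-scale value and the level numbering are illustrative assumptions.

    # A minimal sketch: each level maps roll about the bat's X-axis to a change
    # in the controlled value, and each finer dial is centered on the previous
    # reading with ten times the angular resolution.
    def dial_value(center, roll, full_scale, level):
        # center: reading taken over from the coarser dial; roll: rotation about
        # the bat's X-axis in degrees; full_scale: value change for one full turn
        # of the primary dial; level: 0 = primary, 1 = secondary, 2 = tertiary.
        sensitivity = full_scale / 360.0 / (10 ** level)
        return center + roll * sensitivity

    # A coarse setting of 42.0 followed by +90 degrees on the secondary dial,
    # when a full turn of the primary dial spans 100 units, gives 44.5.
    print(dial_value(42.0, 90.0, 100.0, level=1))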
For easy reading, the dials are always placed perpendicular to the viewing direction, and only translate with
the 3D cursor. The plate of the dials is translucent in
order not to obscure the scene.
The dial proves to be an intuitive way of adjusting
angular measurements.
4.4 Navigation in Space
Since the user has a space of virtually unlimited volume
to work with, but at any given moment can only see
a part of it (i.e. the visible frustum), some forms of
navigation are necessary.
We found the following four types of navigation useful.
First, a grabbing operation lets the user move the
entire model with the bat. This mechanism, when used
in conjunction with zooming, provides a natural way of
bringing any part of a model into the visible volume.
Second, a "focus" operation brings a selected object
to the center of the apparent coordinate system, i.e. the
visual sweet spot, so subsequent manipulations on the
object become convenient.
Third, a "fly-with-bat" operation lets the user move around in space and inspect different parts of the model. This is useful for tasks such as architectural walk-throughs. Since the apparent flying speed is made zoom-invariant, one can effectively change the pace by adjusting the zoom factor (a sketch of this speed mapping is given at the end of this section).
Fourth, the user can mark a viewing setting by putting down a "spacemark". Later, he/she can come back to that setting by traversing the chain of spacemarks. Also, the user can always bring the model back to its canonical position by a "homing" operation.
These navigation mechanisms are very useful when
the model is large and complex.
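The zoom-invariant flying speed mentioned in the third item amounts to dividing the hand velocity by the zoom factor before applying it in modeling coordinates, as in the following Python sketch (names and units are illustrative).

    # A minimal sketch: the apparent speed on the screen stays constant, while
    # the effective pace through the model changes with the zoom setting.
    def fly_step(viewpoint, hand_velocity, zoom, dt):
        return [c + v * dt / zoom for c, v in zip(viewpoint, hand_velocity)]

    # At zoom 10, one second of a 5 cm/s hand motion advances the viewpoint by
    # only 0.5 cm in modeling coordinates, but appears as 5 cm on the screen.
    print(fly_step([0.0, 0.0, 0.0], [5.0, 0.0, 0.0], 10.0, 1.0))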
5 Modeling Operations
The most important advantage of a bat over a mouse
is the ability to create and make changes to objects by
3D direct manipulation. The fact that the user is in the same space as the geometric model relieves the user from having to break down 3D tasks into 2D or 1D operations, and thus makes the interface "transparent".
We will discuss the design of basic modeling operations
in the following sections.
5.1 Primitive Creation
A primitive solid can be created in a single stroke using 3D rubber-banding. The user first chooses a shape from a ring menu, then presses a key to set the current position of the 3D cursor as one corner of the bounding box; the other corner of the bounding box follows the 3D cursor as it moves with the bat. When the solid is stretched to the right shape, the user releases the key to finalize the solid.
For shapes with three axial measurements (such as
a box), the mapping from the bounding box to object
geometry is straightforward. However, certain shapes
have non-axial measurements (such as a circular shape)
or more than three measurements (such as an extruded
trapezoid). In those situations, proper mappings have
to be devised.
Initially, we tried to employ the data of all six degrees of freedom simultaneously. It proved to be the
wrong choice. We observed three facts. First, controlling all six degrees of freedom simultaneously tends to
overload the hand. Second, orientation appears intrinsically more difficult to control than position. Third, mappings from an orientational change to a positional change are usually difficult to understand and use.
Based on these observations, we adopted the following
two rules.
First, if the shape in question has three measurements, but some of them are not axial in nature, we
associate each one with an axial change. For example,
a truncated cone has three measurements: the height,
the diameter of the top, and the diameter of the base.
We map the width of the bounding box to the top diameter, and the length of the bounding box to the base
diameter. (The height of the bounding box obviously
maps to the height of the truncated cone.) In this way,
we made good use of the ability of the user to control
3D positions.
Second, if the shape in question has more than three
measurements, we only directly control three of them,
and compute the rest with a pre-defined proportion. In this way, the user can quickly get the object into roughly the desired shape and then come back to fine-tune it. For example, an extruded trapezoid has four measurements: the width, the height, the bottom length, and the top length. We map the length of the bounding box to the bottom length, and keep the top length at a fixed proportion to it (we use the golden proportion as the default for these cases). The linear mapping makes it equally easy to set either the top length or the bottom length to a desired amount.
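The two mapping rules can be illustrated with a small Python sketch; the dictionary keys and function names are ours, not JDCAD's.

    # A minimal sketch of the two rules: a shape with three (possibly
    # non-axial) measurements gets one measurement per bounding-box dimension;
    # a shape with more than three measurements has the extra ones derived
    # from a default proportion (the golden proportion) until fine-tuned.
    GOLDEN = (1 + 5 ** 0.5) / 2

    def truncated_cone_from_box(width, length, height):
        return {"top_diameter": width, "base_diameter": length, "height": height}

    def extruded_trapezoid_from_box(width, length, height):
        return {"width": width, "height": height,
                "bottom_length": length, "top_length": length / GOLDEN}

    print(extruded_trapezoid_from_box(2.0, 3.0, 1.0))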
5.2 Region-Based Reshaping
Modification of a model is at least as important as its creation, and usually occurs much more frequently than creation. Also, modification often requires more flexibility than creation, as the user may wish to change a
certain set of measurements while leaving the rest untouched.
In 2D interfaces, the reshaping operation is often based on a set of "handles" around the object. The user drags a handle with the mouse to enact a certain type of reshaping. Although this method can be directly extended into 3D, we found it less effective than in the 2D case, for two reasons. First, the measurement noise and the lack of sufficient depth cues often make it difficult to place the 3D cursor at an exact handle point. Second, the handles tend to clutter up the scene.
To cope with this, we expanded the handles to fill the vicinity of the object, and made them invisible. The 3D cursor takes over for visual feedback, by showing different shapes when in different regions. This region-based technique maintains the flexibility of the handle-based metaphor while relaxing the precision requirement and also avoiding the visual clutter.
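One way such invisible regions could be derived from the cursor position alone is sketched below for a cylinder; the local frame and the band width are assumptions made for illustration.

    # A minimal sketch for a cylinder centered at the origin with its axis
    # along the local z-axis: the cursor position alone decides whether a drag
    # will change the height, the radius, or both.
    def cylinder_region(cursor, radius, height, band=0.25):
        x, y, z = cursor
        r = (x * x + y * y) ** 0.5
        near_end = abs(abs(z) - height / 2.0) < band * height   # close to an end cap
        near_side = abs(r - radius) < band * radius             # close to the side wall
        if near_end and near_side:
            return "height+radius"     # near the rim: both may change
        if near_end:
            return "height"            # near the center of an end
        if near_side:
            return "radius"            # near the middle of the side
        return None

    print(cylinder_region((0.0, 0.0, 1.0), radius=1.0, height=2.0))   # -> height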
Figure 8 illustrates the region-based reshaping of a cylinder. Figure 8 (a) shows that the 3D cursor takes the shape of a double-headed arrow when it is near the center of the cylinder end, meaning that should the user start to reshape, he/she will be changing the length of the cylinder. Figure 8 (b) shows that the 3D cursor changes its shape to a bow-arrow when it is near the middle of the cylinder, meaning that a reshape operation will change the radius of the cylinder. Figure 8 (c) shows that the 3D cursor has both shapes on it when it is near the "rim" of the cylinder, meaning that both the height and radius of the cylinder may be changed simultaneously. Figure 8 (d) shows a reshaping operation changing both the height and radius of the cylinder. At the beginning of a reshaping operation, the 3D cursor is first snapped to the corresponding point on the object, so the object will not undergo a sudden change in shape.
Figure 8: Region-based reshaping. (a) Ready for height change; (b) ready for radius change; (c) ready for height and radius change; (d) during reshaping.
Composite objects (such as the results of CSG operations) can be reshaped via their bounding boxes, in
one, two, or three dimensions. If the starting position
of the cursor is near the center of a face, then the object
will be stretched only along the dimension perpendicular to that face. If the cursor starts near the center
of an edge, then only the two dimensions perpendicular
to that edge will change with the cursor. Similarly, the
user can pick a corner and change all three dimensions
at the same time.
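The face/edge/corner rule can be expressed compactly, as in the following Python sketch with an arbitrary tolerance.

    # A minimal sketch: depending on whether the drag starts near the center of
    # a face, the center of an edge, or a corner of the bounding box, one, two,
    # or three dimensions follow the cursor.
    def dims_to_stretch(start, box_min, box_max, tol=0.1):
        free = []
        for i in range(3):
            size = box_max[i] - box_min[i]
            near_lo = abs(start[i] - box_min[i]) < tol * size
            near_hi = abs(start[i] - box_max[i]) < tol * size
            if near_lo or near_hi:
                free.append(i)         # the cursor is close to this pair of faces
        return free                    # 1 entry: face, 2: edge, 3: corner

    print(dims_to_stretch((1.0, 0.5, 0.5), (0, 0, 0), (1, 1, 1)))   # -> [0]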
5.3 Placement and Alignment
Translations and rotations of an object with the bat are
straightforward. The user first selects the object, then
presses the key for translation, rotation, or both, and
the object moves with the 3D cursor as the bat moves.
Releasing the key finalizes the position and orientation
of the object.
To defeat the measurement noise of the bat and the
unsteadiness of the hand, an optional snap grid can be
turned on for position and/or orientation.
The bat is also used to specify an axis for certain operations. The user points the bat's X-axis in roughly the direction of the desired axis (cf. figure 3), and presses a key to instruct the system to find the nearest axis to that direction. The system echoes the axis it chose as a thin rod, and the user can start a constrained translation, rotation, reshaping, or alignment between selected objects, with respect to that axis. This natural way of specifying an axis eliminates the need for the user to know the names of the axes.
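The nearest-axis computation admits a very small implementation; the following Python sketch, with the three principal axes as candidates, is an illustration rather than the system's code.

    # A minimal sketch: the bat's X-axis direction is compared against the
    # candidate axes, and the one with the largest absolute dot product (so
    # either direction along an axis counts) is chosen.
    def nearest_axis(bat_x, axes=((1, 0, 0), (0, 1, 0), (0, 0, 1))):
        def absdot(a):
            return abs(sum(c * d for c, d in zip(bat_x, a)))
        return max(axes, key=absdot)

    print(nearest_axis((0.1, 0.9, 0.2)))     # -> (0, 1, 0)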
For visual continuity, all the changes in position, orientation, and size are visualized with animation. Each command results in a 10-frame animation sequence, and is carried out in about half a second. This eliminates the "jumping" of objects, which often requires the user to mentally re-assimilate the scene.
The user may also select several objects and group them together or perform CSG operations. A hierarchical model can be built in this way.
5.4 Bat-Based Clipping
In designing a complex model, the user often needs to inspect the internal structure of a certain part (e.g., an assembled component). Cross sectioning is the traditional method for this purpose. We designed a bat-based clipping tool to further facilitate such operations.
Since the VGX supports six additional clipping planes in rendering, we use them to form several clipping tools which the user can dynamically place in space with the bat. The user can use these dynamic clipping tools to "poke around" and examine internal structures and profiles of cavities.
Figure 9: A V-shaped clipping tool is being used to reveal the inner structure of a mechanical component.
In the current version of JDCAD, the user can dynamically clip out a half space, a quarter space (as in figure 9), or a one-eighth space. These tools would also
be useful for interrogating a model under simulation, for
example, collision checking for a simulated robot arm
motion.
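How such a tool could be assembled from the bat's pose is sketched below in Python; the plane representation and the names are ours, and how the planes are combined in the renderer is left open.

    # A minimal sketch: a half, quarter (V-shaped), or one-eighth space tool is
    # described by one, two, or three planes anchored at the bat's position,
    # with normals taken from the bat's local axes.  Each plane is returned as
    # a (normal, point) pair for the renderer's clipping setup.
    def clipping_tool(bat_pos, bat_axes, tool="quarter"):
        count = {"half": 1, "quarter": 2, "eighth": 3}[tool]
        return [(axis, bat_pos) for axis in bat_axes[:count]]

    print(clipping_tool((0.0, 0.0, 0.0),
                        ((1, 0, 0), (0, 1, 0), (0, 0, 1)), "quarter"))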
6 Conclusions
In this paper, we presented a highly interactive 3D modeling system. The aim is to remove the bottleneck between the user and the modeling system by using six
degrees of freedom input devices and novel interaction
techniques. We combined a kinetic 3D display scheme
with hand-held bat input to provide the user with a 3D
direct manipulation interface.
Figure 10: A sample mechanical component.
Although only at a preliminary stage, JDCAD has been shown to be very efficient in 3D quick prototyping. Compared with commercially available systems, it usually takes only one-fifth to one-tenth of the time to build a prototype. For example, the sample mechanical component in figure 10 can be prototyped in about five minutes.
Some of the issues discussed here may also be applied
to other areas of interaction design. For example, the
ring menu can be used with glove-based interaction in
an immersive virtual reality system, as can the spotlight
selection.
Our next step is to experiment with free-form surface
design using the bat.
Acknowledgements
The first author wishes to thank Chris Shaw for many
helpful discussions and comments.
References
[1] Butterworth, J., A. Davidson, S. Hench, and T. M. Olano, "3DM: A Three Dimensional Modeler Using a Head-Mounted Display," Proc. 1992 Symposium on Interactive 3D Graphics, Cambridge, Massachusetts, March 29-April 1, 1992, 135-138.
[2] Clark, James H., "Designing Surfaces in 3-D," Communications of the ACM, 19, 8, August 1976, 454-460.
[3] Deering, Michael, "High Resolution Virtual Reality," Proc. ACM SIGGRAPH '92, Chicago, Illinois, July 26-31, 1992, 195-202.
[4] Encarnação, J., "CAD Technology: A Survey on CAD Systems and Applications," Computers in Industry, 8, 1987, 145-150.
[5] Hatvany, Jozef, "Can Computers Compete with Used Envelopes?" Proc. IFAC 9th Triennial World Congress, Budapest, Hungary, 1984, 3373-3375.
[6] Hodskinson, M., and R. Patel, "3-D Modeling in a Teaching Company Scheme," Computer-Aided Engineering Journal, October 1989, 177-180.
[7] Liang, J., C. Shaw, and M. Green, "On Temporal-Spatial Realism in the Virtual Reality Environment," Proc. ACM Symp. on User Interface Software and Technology, Hilton Head, South Carolina, November 11-13, 1991, 19-25.
[8] Liang, J., and M. Green, "Quick Prototyping for Engineering Design," Computer Applications and Design Abstraction, PD-Vol. 49, ASME, Houston, Texas, January 31-February 4, 1993, 157-162.
[9] Sachs, Emanuel, Andrew Roberts, and David Stoops, "3-Draw: A Tool for Designing 3D Shapes," IEEE Computer Graphics and Applications, 11, November 1991, 18-24.
[10] Schmandt, C., "Spatial Input/Output Correspondence in a Stereoscopic Computer Graphics Workstation," Computer Graphics, 17, 3, July 1983, 253-261.
[11] Shoemake, Ken, "ARCBALL: A User Interface for Specifying Three-Dimensional Orientation Using a Mouse," Proc. Graphics Interface '92, Vancouver, B.C., May 11-15, 1992, 151-156.
[12] Sproull, R. F., "Parts of the Frontier Are Hard to Move," Computer Graphics, 24, 2, March 1990, 9.
[13] Ware, Colin, and Danny R. Jessome, "Using the Bat: A Six Dimensional Mouse for Object Placement," Proc. Graphics Interface '88, Edmonton, Alberta, June 6-10, 1988, 119-124.