Sketch-based Model Prototype Tool
By:
Nathaniel Tan
Charles Lee
Adam Kromm
Andrew Pugi
Overview
The goal was to create an application that lets professionals in the entertainment industry
quickly create 3D data assets for various CG/media projects.
This meant building a tool that is easy to use and that supports quick creation of data from
either user input or pre-created image input. The UI is designed to be similar to other
applications, to decrease the time a new user needs to grow accustomed to our application.
Our application allows users to sketch out images in the provided workspace and render the
corresponding model using either rotational blending or cross-sectional blending. The model
can also be textured if there is a pre-loaded image in the workspace. Our application also
allows the creation of 3D models simply by importing an image file and clicking a button to
automatically generate a model based on the image.
UI Breakdown
Overview
The UI was designed to be simple but neat, letting users find the tools they need by
grouping related functions together. It was also meant to be somewhat similar to other 3D
applications, as layout familiarity decreases the time taken to learn a new application.
The controls are split into four sections:
Workspace View Controls
Image/Texture Controls
Mesh Editing Controls
Model Export
Workspace View Controls
This section allows the user to adjust the workspace view to their own preferences.
Show Grid: Check/uncheck to display gridlines in the background.
Show Curves: Check/uncheck to display the sketch curves for each object.
Reset View: Reset the workspace view to default.
In addition, the user can move the workspace view by holding down the right mouse button and
dragging the mouse to obtain a preferred view.
The user can also zoom in and out with the mouse scroll wheel.
Image/Texture Controls
The user can choose an image to load into the workspace by entering the path of the image
file. Preferably, the image file should be in the same directory as the application files, so
that all you have to type into the directory text box is “image.jpg”.
The program supports any image format supported by FreeImage v3.15.
The loaded image is used to automatically texture any object that is generated after the
image is loaded and drawn on top of.
Load Image: Load the declared image.
Remove Image: Remove a loaded workspace image.
Auto Generate Surface: Automatically generate a model from a valid image.
Remove Texture: Remove a texture from the currently selected model.
Any unwanted textures can also be removed by selecting the object and clicking the “Remove
Texture from Current Model” button.
Mesh Controls
This is the most important set of controls, allowing you to add, select, or edit existing
objects in the workspace.
To add a new object by mouse drawing:
Ensure that you select the “Surface Type” that you want.
Check that the “Add Object” radio-button is selected.
Draw the appropriate curves that correspond to the “Surface Type” that you have
selected.
Cross Sectional Blending requires 2 curves and a profile curve.
Rotational Blending requires only 2 curves.
Delete All: Delete all objects in the workspace.
Delete Selected: Delete a selected object.
Surface Type: The user-selected mesh action:
- Add Object (draw curves).
- Select Existing Mesh.
- Edit a Selected Mesh Object (by selecting Current Curve and Surface Type).
Rotate Object: Rotate a currently selected mesh. Move the mouse on the trackball in any
direction to rotate.
Current Curve: The curve currently being drawn or edited. Profile is only for
cross-sectional blending.
Red-Green-Blue: Sets the color of a currently selected mesh.
Editing an object requires the following:
Click on the “Select Object” radio-button.
Use the mouse to select the object that you want to edit.
Click on the “Edit Object” radio-button.
Under “Current Curve”, click the radio-button of the curve that you wish to edit.
Make the changes in the workspace.
The surface regenerates automatically once you finish your changes.
Deleting a specific object:
Click on the “Select Object” radio-button.
Use the mouse to select the object that you want to delete.
Click on the “Delete Selected” button.
You can also delete all objects in the workspace by clicking “Delete All”.
Export OBJ
A single-click button that exports the objects in the workspace into an OBJ file that can be
opened by other applications. The file is written to the application's directory.
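For reference, writing OBJ data amounts to emitting one “v x y z” line per vertex and one
“f i j k” line per face, with 1-based vertex indices. The following is a minimal sketch of
such a writer, not the application's actual exporter; the function name and the
triangulated-face assumption are ours:

#include <fstream>
#include <string>
#include <vector>
#include <glm/glm.hpp>

// Sketch: write triangulated geometry to a Wavefront OBJ file.
void exportOBJ(const std::string& path,
               const std::vector<glm::vec3>& vertices,
               const std::vector<glm::ivec3>& faces)  // 1-based indices
{
    std::ofstream out(path);
    for (const glm::vec3& v : vertices)
        out << "v " << v.x << " " << v.y << " " << v.z << "\n";
    for (const glm::ivec3& f : faces)
        out << "f " << f.x << " " << f.y << " " << f.z << "\n";
}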
Implementation
589 Core Features
We implemented four main features based on the material described in the course. Each group
member was assigned a core feature and was responsible for its functionality and
implementation.
These features were the most closely tied to the course material and the most important for
the application. Below are the responsibilities of the group members:
Nathaniel Tan – Rotational Blending, proposal, final presentation, final report composition and
summary of system.
Adam Kromm – Cross Sectional Blending (including report section), application core
Charles Lee – Image Texture Mapping (including report section)
Andrew Pugi – Image Extraction for automatic model generation (including report section)
Other Features
Aside from the core features, we also had to decide on and implement side features such as:
Color assignment
Image loading
Object rotation
Object translation
Grid display
Curve display
Input detection
Object Deletion
Object Editing
These features were a product of iterating on the implementation of the core features, and
were decided upon based on our own usage of early versions of the application.
In addition, debugging was a high priority because the core features were interdependent.
Secondary features were also linked to the core features, which made planning and designing
the algorithms before implementation even more important.
Next, we highlight how each core feature was implemented.
Rotational Blending Implementation
Assumptions:
Rotational blending is based on 2 curves, one on each side.
It should work for any two curve lengths within the OpenGL coordinate system defined by the
application.
Either of a rotationally blended object's two curves can be edited/modified, and the system
should be able to compensate for that.
Naturally, the implementation first needed a way to parametrize each curve over the range 0
to 1, so that we can easily apply our surface parameters.
Step 1: Length Calculation
The user-drawn curves are essentially sets of points stored in a vector, one per curve.
For each curve, we start at the very first point drawn (p1) and traverse to the point
directly after it (p2). We then calculate the distance d between those two points:
Total Curve Length = Total Curve Length + d;
We do this for all the points in the curve until we reach the last point. We then store the
curve length and prepare for the next step.
Step 2: Percentage Calculation
Since we want to express all points along the curve in terms of 0 <= u <= 1, the logical
next step is to find the parameter value of each point on the curve.
So we again traverse the points of the curve and compare each point's accumulated distance
to the total length of the curve. This gives us the percentage of the whole curve covered at
that point.
Input: P1
P1Distance = Calculate Distance(P1);
Percentage = P1Distance / Total Curve Length;
P1.Percentage = Percentage;
We perform this calculation for every point in the curve and we store it. This gives us our initial set of
points ranging from 0 to 1 on the curve.
This step has to be done for both curves of an object before proceeding.
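As an illustration of steps 1 and 2 together, here is a minimal C++ sketch using GLM. It is
a sketch of the idea rather than the application's actual code; the function name and the
flat std::vector storage are assumptions:

#include <vector>
#include <glm/glm.hpp>

// Sketch: compute the normalized arc-length parameter (0 to 1) of every
// point on a user-drawn curve.
std::vector<float> computeArcLengthParams(const std::vector<glm::vec3>& curve)
{
    std::vector<float> params(curve.size(), 0.0f);
    // Step 1: accumulate segment lengths to get the total curve length.
    float total = 0.0f;
    for (size_t i = 0; i + 1 < curve.size(); ++i)
        total += glm::length(curve[i + 1] - curve[i]);
    // Step 2: convert each running distance into a percentage of the total.
    float running = 0.0f;
    for (size_t i = 1; i < curve.size(); ++i) {
        running += glm::length(curve[i] - curve[i - 1]);
        params[i] = running / total;
    }
    return params; // params.front() == 0, params.back() == 1
}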
Step 3: Linear Interpolation
Now that each curve is parametrized from 0 to 1, we traverse both curves according to a
pre-defined increment U. This is done simultaneously for both curves in order to determine
which points on Curve 1 and Curve 2 should meet for a given T value, where:
T = 0;
…
Loop:
T = T + U;
End loop;
This lets us find the point on each curve at every increment, and will also allow us to
easily create the midpoint for each increment value (next step).
Input: T
Find P1 and P2 where P1 <= T < P2;
Range = P2 – P1;
Delta = T – P1;
N = Delta / Range;
PNew = Linear Interpolation(P1, P2, N);
Vector.push_back(PNew);
Where LinearInterpolation(P1, P2, N):
Return (1 – N) * P1 + N * P2;
We have now determined all the coordinates/points on the curve according to the defined U increment
value. We have also inserted them into a new vector which stores them for each curve. Also, since the
U increment value is the same for each curve, they will both have the same number of points.
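A minimal sketch of this resampling step, reusing the params list from the previous sketch
(again an assumption-laden illustration rather than the application's code):

#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

// Sketch: resample a curve at uniform parameter increments U, using the
// per-point arc-length parameters computed earlier.
std::vector<glm::vec3> resampleCurve(const std::vector<glm::vec3>& curve,
                                     const std::vector<float>& params,
                                     float U)
{
    std::vector<glm::vec3> result;
    int steps = static_cast<int>(1.0f / U);
    for (int s = 0; s <= steps; ++s) {
        float T = std::min(s * U, 1.0f);
        // Find the segment [params[i], params[i+1]] that brackets T.
        size_t i = 0;
        while (i + 2 < curve.size() && params[i + 1] <= T) ++i;
        float range = params[i + 1] - params[i];
        float n = (range > 0.0f) ? (T - params[i]) / range : 0.0f;
        // Linear interpolation: (1 - n) * P1 + n * P2.
        result.push_back(glm::mix(curve[i], curve[i + 1], n));
    }
    return result;
}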
Step 4: Midpoint
The midpoint in rotational blending is important because it gives us the point from which we
can calculate a radius, and around which we can rotate with that radius to calculate the
coordinates of the desired surface.
Since the previous step calculated all the points on both curves, we can now traverse the
elements of both curve vectors and calculate the midpoints.
We simply use the formula (P1 + P2) / 2 to calculate the midpoint between the two curves of
an object, and store each midpoint in another vector for surface calculation later on.
Step 5: Surface Point Generation
Now that we have the points of both side curves and the midpoints in parametric form, it is
easy to calculate the surface points based on the defined U increment.
At this point, the two curve vectors and the midpoint vector all have the same number of
elements. This makes them easy to loop through, since for each midpoint there is guaranteed
to be a point on both the right and left curves.
So, in order to generate the surface, we traverse each point of the midpoint curve and find
the radius at that point:
Radius = Length(Curve1.point – Midpoint.point);
Once we have the radius, we then sweep out the circle, 0 <= u2 < 2*pi, around that midpoint.
Once the points are generated, they are stored in a Surface-specific vector in an order that would allow
the surface renderer to link the points into faces.
We traverse through every midpoint and repeat this loop until it is finished.
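A sketch of the sweep follows, assuming left holds the resampled points of one curve and mid
the midpoints from step 4, and that the curves were drawn in the x/y plane (so the circle
bulges along z). Since mid is exactly halfway between the curves, rotating the vector from
mid to the left curve by pi lands on the right curve. Closing the seam is discussed next:

#include <cmath>
#include <vector>
#include <glm/glm.hpp>

// Sketch: sweep a circle of the measured radius around each midpoint.
std::vector<std::vector<glm::vec3>> sweepSurface(
    const std::vector<glm::vec3>& left,
    const std::vector<glm::vec3>& mid,
    int circleElements)
{
    const float PI = 3.14159265f;
    std::vector<std::vector<glm::vec3>> surface(
        circleElements, std::vector<glm::vec3>(mid.size()));
    for (size_t y = 0; y < mid.size(); ++y) {
        glm::vec3 offset = left[y] - mid[y];   // in-plane radius vector
        float radius = glm::length(offset);
        glm::vec3 out(0.0f, 0.0f, radius);     // out-of-plane direction
        for (int x = 0; x < circleElements; ++x) {
            float u2 = 2.0f * PI * x / (circleElements - 1);
            // At u2 = 0 this hits the left curve; at u2 = pi, the right one.
            surface[x][y] = mid[y] + std::cos(u2) * offset
                                   + std::sin(u2) * out;
        }
    }
    return surface;
}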
One way rotational blending differs from cross-sectional blending is that the surface has to
be closed. This means that in the circle loop, the very last element has to meet the first;
if left open, this shows up as a gap in the generated object.
To close it, we simply set the last point of each circular slice equal to the first point of
that circle:
// Close the seam: the last sample of each circle reuses the first one.
if (x == (circleelement - 1))
{
surface[x][y] = surface[0][y];
}
After the loop is finished, we create surface faces from the vertices that have been
generated. They are then rendered, and the resulting object is modeled after the user's
initial two curves.
The application lets you hide each object's initial curves by un-checking a checkbox; this
simply skips drawing the curves of each object.
Cross Sectional Blending
As input, this function requires: a SketchObject with its three curves defined (outlines 1 &
2, and the profile); a step size for the resolution; and a color for the mesh. These input
variables are used to calculate a grid of points for the surface.
Step 1: Parametrize Curves
The first step is to take each input curve and create a list of float values. Call the new
lists p1, p2, and p3, for curve1, curve2, and the profile. Each new list has the same size
as the corresponding input curve, and stores, for each point, the percentage of the total
arc length covered from the beginning of the curve to that point. The following is done for
each curve:
1. p1.size = curve1.size
2. totalDistance = sum over i = 0 .. n-2 of length(curve[i+1] - curve[i])
3. currDistance = 0
4. for i = 0; i < n-1; i = i + 1
   a. p1[i] = currDistance / totalDistance
   b. currDistance += length(curve[i+1] - curve[i])
5. p1[n-1] = 1.0
This parametrizes each curve from 0 to 1, and is done for each of the three curves. Then, to
find the point at a distance of 0.75 (75%) along curve1:
1. Find the largest array index such that p1[index] <= 0.75
2. Find the interpolation value: t = (0.75 - p1[index]) / (p1[index+1] - p1[index])
3. Find the point: p = (1 - t) * curve1[index] + t * curve1[index+1]
A sketch of this lookup follows.
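This lookup is what the GetPoint helper used below does. A minimal sketch, assuming the
parametrization lists built in steps 1-5 above (the function name is ours):

#include <vector>
#include <glm/glm.hpp>

// Sketch: return the point at normalized arc-length t along a curve,
// given the per-point parameter list p built above.
glm::vec3 getPoint(const std::vector<glm::vec3>& curve,
                   const std::vector<float>& p, float t)
{
    // 1. Largest index such that p[index] <= t.
    size_t index = 0;
    while (index + 2 < curve.size() && p[index + 1] <= t) ++index;
    // 2. Interpolation value within the bracketing segment.
    float span = p[index + 1] - p[index];
    float n = (span > 0.0f) ? (t - p[index]) / span : 0.0f;
    // 3. Interpolated point.
    return glm::mix(curve[index], curve[index + 1], n);
}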
Step 2: Create points on surface
Once the curves have been parametrized, the next thing to do is to generate all the points on the
surface.
Get orientation
To create the points on the surface we need to get the size and orientation for the profile curve. To do
this we create a vector from a point on curve1 to the corresponding point on curve2:
dir = curve1[u] – curve2[u]
This dir vector then represents the orientation that the profile curve needs. The length of the dir vector
represents the length that the profile curve will need to have.
Scale and position profile curve
Next we need to scale and rotate the profile curve to fit inside the area specified by the dir vector.
To determine how much to scale the profile curve we do the following:
1. pcDir = profile.front() – profile.back();
2. scale = dir.length / pcDir.length
To get the orientation for the curve we do the following:
1. basis1 = dir.normalized
2. basis2 = (0, 0, 1)
3. // find the angle between the profile curve and the vector between curve points
4. angle = atan2( length(cross(pcDir, basis1)), dot(pcDir, basis1) )
5. // create rotation matrix
6. rotation = rotate by angle around basis2
This rotation matrix is then used to rotate the profile points so that they are positioned
correctly between curve1[u] and curve2[u].
Create points
Next we create the points, using the scale and rotation matrix that we calculated:
1. point = GetPoint(profile, v)
2. point = rotation * point
3. surface[x][y] = scale * point.x * basis1 + scale * point.y * basis2 + curve1[u]
Here u and v are the parameters for traversing the curves, 0 <= u, v <= 1, and x and y are
the corresponding positions in the surface mesh, which has dimensions:
1. X = 1/stepsize
2. Y = 1/stepsize
A sketch of the whole loop follows.
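Putting step 2 together, here is a hedged sketch of the whole point-generation loop. It uses
the getPoint lookup sketched in step 1; the signed atan2 variant, which keeps the rotation
direction, is a small assumption beyond the steps above:

#include <algorithm>
#include <cmath>
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: build the surface grid by scaling, rotating, and placing the
// profile curve between corresponding points of curve1 and curve2.
std::vector<std::vector<glm::vec3>> crossSectionSurface(
    const std::vector<glm::vec3>& curve1, const std::vector<float>& p1,
    const std::vector<glm::vec3>& curve2, const std::vector<float>& p2,
    const std::vector<glm::vec3>& profile, const std::vector<float>& p3,
    float stepSize)
{
    int n = static_cast<int>(1.0f / stepSize) + 1;   // grid resolution
    std::vector<std::vector<glm::vec3>> surface(n, std::vector<glm::vec3>(n));
    glm::vec3 pcDir = profile.front() - profile.back();
    glm::vec3 basis2(0.0f, 0.0f, 1.0f);
    for (int x = 0; x < n; ++x) {
        float u = std::min(x * stepSize, 1.0f);
        glm::vec3 dir = getPoint(curve1, p1, u) - getPoint(curve2, p2, u);
        glm::vec3 basis1 = glm::normalize(dir);
        float scale = glm::length(dir) / glm::length(pcDir);
        // Signed angle between the profile chord and dir (both lie in the
        // sketch plane, so the cross product's z component carries the sign).
        float angle = std::atan2(glm::cross(pcDir, basis1).z,
                                 glm::dot(pcDir, basis1));
        glm::mat4 rotation = glm::rotate(glm::mat4(1.0f), angle, basis2);
        for (int y = 0; y < n; ++y) {
            float v = std::min(y * stepSize, 1.0f);
            glm::vec3 pt = glm::vec3(rotation *
                                     glm::vec4(getPoint(profile, p3, v), 1.0f));
            surface[x][y] = scale * pt.x * basis1
                          + scale * pt.y * basis2
                          + getPoint(curve1, p1, u);
        }
    }
    return surface;
}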
Texture Implementation
Assumptions:
Model is created on top of the texture that is to be applied.
We’re using OpenGL.
The background object that displays the texture is created in OpenGL coordinates, with the
top of the object at -1, the bottom at 1, and the middle of the texture at 0.
For the mesh to render with a texture, each vertex has its own textureUV coordinates and a
textureID. The textureUV controls which part of the texture to use, and the textureID
controls which texture to use.
In order to map the UV coordinates of the vertices to the texture, we have to find a mapping
that takes each point of the object in the texture to the actual mesh itself. Since the
curves are always drawn from the +z-axis, we only need to consider the x and y coordinates.
First, we have to find the mapping from openGL to UV.
Coordinate ranges:
Coordinate   OpenGL     TextureUV
x            -1 to 1    0 to 1
y            -1 to 1    0 to 1
We can map the entire openGL to textureUV plane by:
f(OpenGL) = ((OpenGL + 1) / 2)
This will ensure that the y-coordinates are correctly mapped to the background.
As is, this mapping treats the texture as a square, which will not map correctly if the
texture is a rectangle. So we need to stretch the function to preserve the original ratio of
the texture, and then move the texture back to the middle of the OpenGL coordinate system.
h = height of the texture
w = width of the texture
shrinkFactor = h / w
This preserves the image ratio, with y mapping one-to-one.
offset = ( (w – h) / 2 ) / w
This repositions the image back to the middle after applying shrinkFactor.
So we can now get the final texture coordinate using:
g(openGL) = offset + ( f(openGL) * shrinkFactor )
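As a minimal sketch of the resulting mapping, assuming a landscape texture (w >= h) whose
height fills the y range of the workspace, with the aspect correction applied to the x
coordinate while y maps one-to-one (the function name is ours):

#include <glm/glm.hpp>

// Sketch: map a vertex's workspace x/y position to texture UV coordinates.
glm::vec2 openGLToUV(float x, float y, float texW, float texH)
{
    // f(openGL) = (openGL + 1) / 2 maps [-1, 1] onto [0, 1].
    float u = (x + 1.0f) / 2.0f;
    float v = (y + 1.0f) / 2.0f;
    // Shrink u so the texture keeps its original ratio (y stays 1-to-1),
    // then recentre the image after the shrink.
    float shrinkFactor = texH / texW;
    float offset = ((texW - texH) / 2.0f) / texW;
    u = offset + u * shrinkFactor;
    return glm::vec2(u, v);
}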
Now that we have a mapping to the texture, we can use it to texture a mesh.
The areas of the texture that are not part of the object won't be used, because the model
generated from the curve has no vertices there.
Thus, this texturing technique should work with all meshes generated in this program.
Edge Extraction
Assumptions:
The image has a definite edge that can be determined by a sharp change in pixel value.
The image has a single profile outline to extract.
The outline can be split into the two curves needed for rotational blending.
In order to extract an accurate curve from the edges of the object in the image, the edge
must be found by scanning the image for sharp changes in value between adjacent pixels, and
the resulting pixels must be connected into two usable curves.
Step 1) - Pixel Detection
The first step takes the image loaded by the texture loader and accesses its pixel values.
Each pixel is then compared against its adjacent pixels to the right, the bottom-right, the
bottom, and the bottom-left.
Starting with all pixels in a 2D array, we compare every pixel to its neighbours in the
directions above, which checks all directions with no overlapping comparisons.
A user-controlled threshold, specifying the pixel colour value and the degree of difference
required, determines which pixels constitute an edge in the image. Any pixel whose
difference from an adjacent pixel in any of these directions meets the threshold is flagged
as an edge pixel and saved.
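A sketch of this scan over an 8-bit grayscale copy of the image (the row-major storage and
the threshold parameter are assumptions; the real application reads pixels through
FreeImage):

#include <cstdlib>
#include <vector>

// Sketch: flag every pixel whose value differs sharply from a neighbour to
// the right, bottom-right, bottom, or bottom-left.
std::vector<std::vector<bool>> detectEdgePixels(
    const std::vector<unsigned char>& pixels, int width, int height,
    int threshold)
{
    std::vector<std::vector<bool>> edge(height,
                                        std::vector<bool>(width, false));
    // Checking only these four directions visits each pixel pair once.
    const int dx[4] = {1, 1, 0, -1};
    const int dy[4] = {0, 1, 1, 1};
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            for (int k = 0; k < 4; ++k) {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || nx >= width || ny >= height) continue;
                int diff = std::abs(int(pixels[y * width + x])
                                  - int(pixels[ny * width + nx]));
                if (diff >= threshold) {        // sharp change -> edge
                    edge[y][x] = true;
                    edge[ny][nx] = true;
                }
            }
    return edge;
}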
Step 2) – Line Creation
After the edge pixels are determined, they are converted into a line by connecting each to
its closest neighbour in a single direction until the line loops back on itself. Any gaps
found are bridged across to the next closest edge pixel.
This newly created line is now a full border around the image based on the detected edge.
However, since the rotational blending requires two lines that are on the sides of the image, the line
needs to be split.
Step 3) – Splitting the Line
In order to split the line, the general shape and contour of the object needs to be known.
Ideally the cut would run through the midpoint of the object in relation to its axis of
rotation. However, analyzing the image to determine how an arbitrary shape is oriented was
out of scope, so for the sake of time constraints a simpler way of determining the cut
points is used. The method first finds the middle of the image along the x-axis. This
assumes that most loaded images show an object that is viewed head-on and rotated upright,
and that the object is centered in the photo. The points found at the middle of the x-axis
should therefore lie in the middle of the image, so the two resulting curves always halve
the image properly for more accurate modeling. From these points, the top and bottom of the
curves are determined: the topmost and bottommost points are found by comparing y values,
and these points are set as the endpoints of the two curves. Each curve is then generated by
starting from the top point and travelling either forwards or backwards along the border,
adding points to its respective new array, until the bottom point is reached. These new
arrays now contain the two curves needed for rotational blending; a simplified sketch
follows.
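The sketch below skips the middle-of-the-x-axis refinement described above and just walks
the closed border between its topmost and bottommost points; the names are ours:

#include <vector>
#include <glm/glm.hpp>

// Sketch: split a closed outline into two top-to-bottom curves.
void splitOutline(const std::vector<glm::vec3>& outline,
                  std::vector<glm::vec3>& curveA,
                  std::vector<glm::vec3>& curveB)
{
    // Find the topmost and bottommost points by comparing y values.
    size_t top = 0, bottom = 0;
    for (size_t i = 1; i < outline.size(); ++i) {
        if (outline[i].y > outline[top].y) top = i;
        if (outline[i].y < outline[bottom].y) bottom = i;
    }
    if (top == bottom) return; // degenerate outline
    // Walk forwards from top to bottom for one curve...
    for (size_t i = top; i != bottom; i = (i + 1) % outline.size())
        curveA.push_back(outline[i]);
    curveA.push_back(outline[bottom]);
    // ...and backwards for the other, so both run top to bottom.
    for (size_t i = top; i != bottom;
         i = (i + outline.size() - 1) % outline.size())
        curveB.push_back(outline[i]);
    curveB.push_back(outline[bottom]);
}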
Step 4) – Returning the Curves
The last step is to construct an object that contains the newly generated curves. This
object is returned to the calling program, which uses the other sections of the program to
construct the object mesh, apply a texture, and render the result to the screen. The
resulting curve is more accurate than drawing by hand on the screen, as it follows the exact
pixel values.
Sample Images:
Cross Sectional Blended Shield with
texture.
Cross Sectional Blended pumpkin
with rotational blended stem.
Multiple objects utilizing rotational
blending.
Cross sectional leaf with leaf
texture. Note the nice surface.
Rotational blended Christmas tree.
Wizard formed by multiple objects
utilizing both cross and rotational
blending.
Rotational Blended pear utilizing
mesh color assignment.
Rose consisting of multiple object
parts. Stem and flower use
rotational, while leaves use cross.
3 differently colored roses.
Future Plans
Improvements that can be made are:
New way of editing curves by implementing movable/delete-able control points.
Proper texture extraction from images to suit all types of formats.
A better way of automatically generating models from loaded image files. One solution
would be to somehow divide an image into multiple parts and define each part as an
object. Then decide on which blending would be more suitable for it.
Object templates to quickly create specified types of objects. Ex: biped, quadruped, etc…
Model smoothness controls. The user can select higher- or lower-resolution models for
export. (A basic version can already be implemented by adding another control box for a
user-defined curve smoothness value; the code already supports it.)
Lighting options for workspace, ex: position of lights, color of lights, number of lights,
etc…
See if we can generate closed surfaces properly utilizing the cross-sectional blending.
Allow users to add in basic model skeletons for future animation work.
Pictures on buttons in the UI, to make the function of each button easier to understand.
For industry use, the application would have to be as refined as other industry-standard
software like Maya or 3D Studio Max. However, it is still important that the application
remain simpler to use, with a minimal barrier to entry both on first contact (using it for
the first time) and when learning the controls.
Platform
- C++, Visual Studio.
- OpenGL 3.0
- FreeImagePlus (the C++ wrapper for FreeImage, an image loading library:
http://freeimage.sourceforge.net/)
- GLUI (openGL User Interface: http://www.cs.unc.edu/~rademach/glui/)
- GLM (vectors, matrices, and all the math for them: http://glm.g-truc.net/ )
- GLEW (openGL Extension Wrangler: exposes opengl 3 functions http://glew.sourceforge.net/ )
Reference Papers
Two main papers were used for this project:
1) Cherlin, J.J., Samavati, F.F., Sousa, M.C., and Jorge, J.A., "Sketch-based Modeling with
Few Strokes", Proceedings of the 21st Spring Conference on Computer Graphics (SCCG'05), in
association with ACM SIGGRAPH and Eurographics, Budmerice, Slovakia, May 12-14, 2005. (Best
Presentation Award)
2) Olsen, L., and Samavati, F.F., "Image-Assisted Modeling from Sketches", Proceedings of
Graphics Interface 2010 (GI'10), Ottawa, Canada, May 31 - June 2, 2010.