9.2. OTHER NOTABLE AI ASPECTS / HLSL INTRO
Common board game AI approaches and Strategic AI
[AI model diagram: Execution Management, World Interface, Strategy, Movement, Animation, Physics]
Brief introduction to tactical and strategic AI
Tactical and Strategic AI
Tactical and strategic AI encompasses a wide range of algorithms that try to:
• Derive a tactical assessment of some situation, possibly using incomplete or probabilistic information
• Use tactical assessments to make decisions and coordinate the behaviour of multiple characters
Aside: Not every genre of game needs tactical and/or strategic forms of AI.
Waypoint tactics
A waypoint is simply a position
in the game world.
As with path-finding waypoints
(holding path-finding
information, e.g. terrain cost,
etc.), tactical waypoints hold
tactical information, e.g.:
• Cover points
• Reconnaissance/sniper
locations
• Shadowed locations
• Power-up spawn points
• Exposed locations
Waypoint tactics
Tactical locations can be either
set by the designer or derived
from game data or analytical
algorithms.
Tactical nodes can be combined with
pathfinding nodes to provide tactically
aware pathfinding.
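As an illustrative sketch (not from the slides), tactical information held at a waypoint can be folded into the path-finding cost function so that exposed nodes are penalised and well-covered nodes are favoured. The TacticalWaypoint class, its fields and the weight parameters below are assumed names for illustration, written in C# to match the XNA examples later in the session.

// Illustrative tactical waypoint: normal traversal cost plus tactical scores in [0, 1].
public class TacticalWaypoint
{
    public float BaseTraversalCost;   // normal path-finding cost (e.g. terrain cost)
    public float CoverQuality;        // 1 = excellent cover, 0 = no cover
    public float Exposure;            // 1 = highly exposed to enemy fire

    // Combine the normal traversal cost with tactical penalties and bonuses.
    // The weights are per-game tuning parameters, not fixed values.
    public float TacticalCost(float coverWeight, float exposureWeight)
    {
        return BaseTraversalCost
             + exposureWeight * Exposure
             - coverWeight * CoverQuality;
    }
}

A path-finder that uses TacticalCost in place of BaseTraversalCost will trade route length against tactical safety.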
Influence maps
Influence mapping is widely used
in strategy games to map the
influence/strength of each side.
The game world is split into
chunks (tile-based is a common
representation). Each chunk is
assigned an influence score based
on the combined balance of
influence ‘emitted’ by game
objects that can affect that chunk.
The influence map can be used to
identify points of weakness and
strength and, from this, drive
strategic goal selection.
Influence maps
The influence exerted on a
particular area can depend upon
the proximity of game objects
(e.g. mobile units or stationary
bases), type of surrounding
terrain (e.g. a mountain range
may ‘prevent’ influence passing),
side-specific factors (e.g. current
financial or happiness state), etc.
In most games, influence
emitted by a game object
decays over distance (e.g. using
a linear drop-off alongside a
defined maximum influence
range).
[Figure: example tile-based influence map, showing per-tile influence scores that decay with distance from the emitting units]
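The sketch below (an assumption-laden illustration rather than code from the slides) shows one way of building such a map on a tile grid using a linear drop-off; the Unit structure, its field names and the choice of Euclidean tile distance are all illustrative.

// Sketch of a tile-based influence map with linear drop-off (illustrative only).
public struct Unit
{
    public int TileX, TileY;     // tile occupied by the unit
    public float Influence;      // maximum influence emitted at the unit's own tile
    public float MaxRange;       // influence falls to zero at this tile distance
    public int Side;             // +1 for one side, -1 for the other
}

public static class InfluenceMap
{
    // Returns a grid where positive values favour side +1 and negative values favour side -1.
    public static float[,] Build(int width, int height, Unit[] units)
    {
        var map = new float[width, height];
        foreach (var unit in units)
        {
            for (int x = 0; x < width; x++)
            {
                for (int y = 0; y < height; y++)
                {
                    // Distance from the emitting unit to this tile.
                    float dx = x - unit.TileX;
                    float dy = y - unit.TileY;
                    float distance = (float)System.Math.Sqrt(dx * dx + dy * dy);

                    // Linear drop-off: full influence at the unit, zero at MaxRange.
                    if (distance < unit.MaxRange)
                    {
                        float falloff = 1.0f - (distance / unit.MaxRange);
                        map[x, y] += unit.Side * unit.Influence * falloff;
                    }
                }
            }
        }
        return map;
    }
}

Tiles with strongly positive or strongly negative totals then mark points of strength and weakness for strategic goal selection.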
[AI model diagram: Execution Management, World Interface, Strategy, Movement, Animation, Physics]
Overview of approaches enabling jumping in games
Jumping
Unlike other forms of steering
behaviour, jumps are inherently
risky (i.e. they can fail, possibly
with ‘fatal’ consequences).
To jump, the character must be
moving at the right speed and in
the right direction and ensure
the jump is executed at the right
time.
Also, steering behaviours typically re-evaluate decisions several times per second, correcting small mistakes. A jump action is a one-time, single-event, failure-sensitive decision.
Jumping (jump points)
The simplest approach is to place jump points into the game level.
If characters can move with
different speeds, then the jump
point also needs a minimum jump
speed.
The character can then seek
towards the jump pad, matching
the specified speed, and jump
whenever it is on the jump pad.
[Figure: jump point annotated with its minimum jump velocity]
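A minimal sketch of this trigger logic, assuming an XNA-style Vector2 for positions and velocities; the JumpPoint fields and the use of a circular pad region are illustrative assumptions.

using Microsoft.Xna.Framework;   // assumes an XNA project, as used later in the session

// Illustrative jump point: a pad region plus the minimum speed/direction needed to clear the gap.
public class JumpPoint
{
    public Vector2 PadCentre;
    public float PadRadius;
    public Vector2 JumpDirection;     // normalised direction of the jump
    public float MinimumJumpSpeed;
}

public static class JumpTrigger
{
    // Returns true when the character is on the pad and moving fast enough
    // along the required direction, i.e. the jump should be executed now.
    public static bool ShouldJump(Vector2 position, Vector2 velocity, JumpPoint jump)
    {
        bool onPad = Vector2.Distance(position, jump.PadCentre) <= jump.PadRadius;
        float speedAlongJump = Vector2.Dot(velocity, jump.JumpDirection);
        return onPad && speedAlongJump >= jump.MinimumJumpSpeed;
    }
}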
Jumping (difficulties)
Some forms of jump require a defined target speed or a defined direction/angle of approach to the jump pad.
[Figure: one jump requiring a precise jump speed, another requiring a precise jump direction]
Additionally, some jumps may have a higher 'price' of failure (e.g. 'death' vs. a short delay to climb back up).
Aside: Such information can be incorporated into the jump point, but it is difficult to test extensively.
Jumping (landing pads)
A good approach is to pair a
jump pad with a landing pad.
Doing this permits the game
object to determine the
needed speed and direction (by
solving the trajectory
equations).
This approach is more flexible
(different characters can differ
in their movement approach)
and is less prone to error.
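As a sketch of that trajectory solve (assuming a Y-up XNA coordinate system, a fixed vertical take-off speed and constant gravity; the method and parameter names are illustrative), the time of flight follows from the vertical motion equation and the horizontal velocity from the displacement divided by that time.

using System;
using Microsoft.Xna.Framework;   // assumes an XNA project (Vector3, Y-up)

public static class JumpSolver
{
    // Given a jump pad, a landing pad, gravity magnitude (> 0) and the character's fixed
    // vertical take-off speed, compute the velocity needed at take-off.
    // Returns false if the jump cannot reach the landing pad with that vertical speed.
    public static bool SolveJump(
        Vector3 jumpPad, Vector3 landingPad,
        float gravity, float verticalJumpSpeed,
        out Vector3 requiredVelocity)
    {
        requiredVelocity = Vector3.Zero;

        // Vertical motion: y(t) = v*t - 0.5*g*t^2 must equal the height difference.
        float deltaY = landingPad.Y - jumpPad.Y;
        float discriminant = verticalJumpSpeed * verticalJumpSpeed - 2.0f * gravity * deltaY;
        if (discriminant < 0.0f)
            return false;   // landing pad is too high for this jump speed

        // Take the later root: the character lands on the way down.
        float time = (verticalJumpSpeed + (float)Math.Sqrt(discriminant)) / gravity;

        // Horizontal velocity is the horizontal displacement divided by the time in the air.
        requiredVelocity = new Vector3(
            (landingPad.X - jumpPad.X) / time,
            verticalJumpSpeed,
            (landingPad.Z - jumpPad.Z) / time);
        return true;
    }
}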
Introduction to HLSL
A bit of history (fixed-function pipelines)
Early versions of the DirectX and OpenGL
APIs defined a number of fixed rendering
stages.
This forced all games to use the same
approach with only a few parameters open
to change.
Recent history (shaders)
As GPUs increased in capability it
became possible to inject small
programs (called shaders) allowing the
application to have greater control.
• A list of vertices (points) is sent to the vertex shader.
• In the rasterisation stage, primitives are constructed from the output vertices. The primitives are then rasterised (i.e. the on-screen pixels are determined). Vertex attributes are interpolated between the pixels.
• The pixel shader determines the on-screen colour of each pixel.
[Pipeline diagram: Application → Vertices → Vertex Shader → Rasterisation / Interpolation → Pixel Shader → Z-buffer test → Frame buffer → To screen]
Shaders
Shaders are small programs that run on the
GPU. Different shader languages are available.
Vertex Shader
The vertex shader can set/change rendered
vertices, e.g. object deformation, skeletal
animation, particle motion, etc.
Pixel Shader
The pixel shader sets the colour of the pixel,
e.g. for per-pixel lighting, texturing. Can also
be used to apply effects over an entire scene,
e.g. bloom, depth of field blur, etc.
Aside: DirectX 10 also supports geometry
shaders (not supported by XNA).
HLSL (High Level Shader Language)
HLSL is a shading language
developed by Microsoft for the
Direct3D API.
HLSL offers a number of functions
(mostly centred around branching
control, math functions and
texture access).
Aside: See http://msdn2.microsoft.com/en-us/library/bb509638.aspx for a complete HLSL reference.
HLSL (Data types)
HLSL supports the scalar data types shown below. Note: vector/matrix forms can also be defined, e.g. float3, int2x2, double4x4, etc.
bool
• True or false
int
• 32-bit signed integer
half
• 16-bit floating point
float
• 32-bit floating point
double
• 64-bit floating point
HLSL also provides a sampler type (used
to read, i.e. sample, textures): sampler,
sampler1D, sampler2D, and sampler3D.
texture textureName;

sampler2D textureSampler = sampler_state
{
    Texture = textureName;
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
    AddressU = Wrap;
    AddressV = Wrap;
    AddressW = Wrap;
};
The sampler type is defined using a
number of different states, e.g.
MinFilter, MagFilter, and MipFilter
controlling texture filtering, and
AddressU, AddressV, and AddressW
controlling addressing states.
HLSL (Semantics)
Semantics are used to map input and
output data to variables. All varying
input data (from the application or
between rendering stages) requires a
semantic tag, e.g. all outputs from the
vertex shader must be semantically
tagged.
• Position[n]: vertex position in object space
• Color[n]: colour (e.g. diffuse)
• Normal[n]: normal vector
• Tangent[n]: tangent vector
• Binormal[n]: binormal vector
• Texcoord[n]: texture coordinate
float4 vertexPosition : POSITION0;
Note: [n] is an optional integer that provides support for multiple items of the same semantic, e.g. Texcoord0, Texcoord1, Texcoord2.
Aside: The only valid semantic inputs to the pixel shader are Color[n] and Texcoord[n]. Often, custom data (i.e. not used for texture addressing) is passed using a Texcoord[n] semantic.
HLSL (Functions)
HLSL permits C-like functions to be specified.
A shader must define at least one vertex function (which processes vertex information) and at least one pixel function (which determines pixel colours). These functions must define their inputs and outputs with semantics.
Intrinsic Functions
HLSL offers a set of ‘built-in’
functions, mostly centred around flow
control, math operations and texture
access.
float2 CalculateParallaxOffset(float3 view, float2 texCoord)
{
    // Ensure the view vector is unit length
    view = normalize(view);

    // Sample the height map and apply scale and offset
    float height = parallaxScale * tex2D(HeightSampler, texCoord).r + parallaxOffset;

    // Offset the texture coordinate along the view direction
    float2 viewOffset = view.xy * height;
    return viewOffset;
}
HLSL (Example)
The following is a simple example shader.

Define the world-view-projection matrix:

float4x4 wvpMatrix : WorldViewProjection;

Define the input structure expected by the vertex shader:

struct vertexShaderInput
{
    float4 vertexPosition : Position0;
};

Define the input structure expected by the pixel shader (and also output by the vertex shader):

struct pixelShaderInput
{
    float4 screenPosition : Position;
    float3 colour : Color0;
};

Vertex shader function:

pixelShaderInput SimpleVS(vertexShaderInput input)
{
    pixelShaderInput output;

    // Transform from model space to screen space
    output.screenPosition = mul(input.vertexPosition, wvpMatrix);
    output.colour = float3(1.0f, 1.0f, 1.0f);

    return output;
}

Pixel shader function (outputs the pixel colour):

float4 SimplePS(pixelShaderInput input) : Color0
{
    return float4(input.colour.rgb, 1.0f);
}

Technique definition, specifying the vertex and pixel shader functions and compile type:

technique SimpleShader
{
    pass
    {
        VertexShader = compile vs_1_1 SimpleVS();
        PixelShader = compile ps_1_1 SimplePS();
    }
}
Effects in XNA
Effects in XNA are types of game
asset (alongside textures and
models). The Effect class
represents an effect, permitting
effect parameter configuration,
technique selection, and actual
rendering.
An effect can be
loaded/configured as shown.
Aside: For better performance, effect.Parameters["name"] can be stored as an EffectParameter object (upon effect construction), and the SetValue(...) method called on that parameter.
Effect effect;

// Load the effect
effect = content.Load<Effect>("effectName");

// Select the desired technique
effect.CurrentTechnique = effect.Techniques["technique"];

// Define effect parameters
effect.Parameters["colour"].SetValue(Vector3.One);
effect.Parameters["tolerance"].SetValue(0.8f);

// Begin the effect and iterate over each pass
effect.Begin();
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Begin();

    // Send vertex information to the effect, e.g.
    // graphicsDevice.DrawUserIndexedPrimitives<VertexPositionTexture>(
    //     PrimitiveType.TriangleList, ... );
    // (vertex information can also be sent using other approaches)

    pass.End();
}

// End the effect
effect.End();
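Following the aside above, a short sketch of the cached-parameter approach (assuming the loaded effect defines a parameter named "colour"; the variable names are illustrative):

// Cache the parameter once, e.g. when the effect is first loaded
EffectParameter colourParameter = effect.Parameters["colour"];

// Later, set its value each frame without repeating the string lookup
colourParameter.SetValue(Vector3.One);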
Summary
Today we explored:
• A brief introduction to some types of strategic/tactical AI
• An overview of how jumps can be supported within games
• HLSL and effect usage in XNA