Fast Synthetic Vision, Memory, and Learning Models for Virtual Humans

Purpose
- Model synthetic vision, memory, and learning for virtual humans
- Quickly synthesize motion from high-level goals

Introduction
- Treat the character as a virtual robot
- Combines a path planner and a controller
- Keeps an internal record of perceived objects and their states

Related Work
- Virtual perception
- Modeling the flow of information to the character

Synthetic Vision
- Determine what is currently visible to the character
- Needs to be fast and able to handle dynamic environments

Synthetic Vision (cont.)
- Render an unlit, false-color model of the scene from the character's point of view (see the vision sketch after this outline)
- The list of visible objects, combined with each object's location, determines the observations
- [Figure: a character in a virtual office, shown in true color and in false color]

Internal Representation & Memory
- Internal model: object geometry from the environment plus observed states

Perception-Based Navigation
- The character maintains a set M of observations
- Each observation is represented as (objID_i, P_i, T_i, v_i, t)
- M is updated at regular intervals
- Basic sense-plan-control loop (static environments)

Perception-Based Navigation (cont.)
- Dynamic environments

Perception-Based Navigation (cont.)
- Problem: distinguishing truly missing objects from merely obscured ones
- Solution: re-run the vision module on the internal model (see the memory-update sketch after this outline)
- Revised sense-plan-control loop (dynamic environments)

Learning and Forgetting
- Temporal models
- Different memory rules for different objects (logical or deductive model)

Experimental Results
- Tested on an SGI InfiniteReality2
- Goals and obstacles can be clicked and dragged interactively
- [Figure: a character exploring unknown mazes, four panels]

Conclusions
- Efficient in both storage and update times
- Flexible
- Bottleneck is the synthetic vision module (double rendering)
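
The false-color pass can be summarized with a minimal sketch. It assumes the scene has already been rendered into an ID buffer in which each pixel holds the integer ID of the frontmost object (0 for background); the names Observation, visible_object_ids, and sense are hypothetical helpers for illustration, not part of the original system.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    obj_id: int      # which object was seen
    position: tuple  # (last known) world position of the object
    time: float      # simulation time of the observation

def visible_object_ids(id_buffer):
    """Scan the false-color ID buffer and collect the IDs of visible objects."""
    ids = set()
    for row in id_buffer:
        for pixel in row:
            if pixel != 0:          # 0 encodes "no object"
                ids.add(pixel)
    return ids

def sense(id_buffer, object_positions, now):
    """Turn the set of visible object IDs into timestamped observations."""
    return [Observation(obj_id, object_positions[obj_id], now)
            for obj_id in visible_object_ids(id_buffer)]

# Example: a 3x4 buffer in which objects 5 and 7 are visible.
buffer = [[0, 0, 5, 5],
          [0, 7, 7, 5],
          [0, 0, 7, 0]]
print(sense(buffer, {5: (1.0, 0.0, 2.0), 7: (3.0, 0.0, 1.0)}, now=0.5))
```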
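For the dynamic case, here is a comparable sketch of the revised sense step: the internal model is re-rendered with the same vision module, and a remembered object that should be visible from the current viewpoint but is not seen in the real scene is treated as truly missing rather than merely obscured. The names below (update_memory, scene_buffer, memory_buffer) are assumptions made for illustration, not the paper's API.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    obj_id: int
    position: tuple   # last known world position
    time: float       # time of last sighting

def visible_ids(id_buffer):
    """IDs present in a false-color buffer (0 = background)."""
    return {px for row in id_buffer for px in row if px != 0}

def update_memory(memory, scene_buffer, memory_buffer, positions, now):
    """
    Revised sense step for dynamic environments (sketch).

    memory        : dict obj_id -> Observation (the internal model M)
    scene_buffer  : false-color render of the actual scene
    memory_buffer : false-color render of the remembered scene from the
                    same viewpoint; re-running the vision module on the
                    internal model tells us which remembered objects
                    should be visible right now
    """
    seen = visible_ids(scene_buffer)
    expected = visible_ids(memory_buffer)

    # Refresh or add every object actually seen.
    for obj_id in seen:
        memory[obj_id] = Observation(obj_id, positions[obj_id], now)

    # An object the internal model says should be visible, but is not,
    # is truly missing (moved or removed), not merely obscured: forget it.
    for obj_id in expected - seen:
        memory.pop(obj_id, None)

    # Objects neither seen nor expected are simply out of view or occluded;
    # their remembered state is left untouched.
    return memory
```

In the full sense-plan-control loop, this update would be followed by re-planning a path over the remembered obstacles and a controller step toward the current goal.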