Virtual Humans in Virtual Environments

The participant should be able to animate his virtual human representation in real time.
However, controlling the virtual human is not straightforward: the complexity
of the representation means that a large number of degrees of freedom
must be tracked, and interaction with the environment increases this
difficulty even further. Therefore, human control should rely on
higher-level mechanisms that can animate the representation with
maximal facility and minimal input. Virtual humans can be divided
according to the method used to control them:

  1. Directly controlled virtual humans
  2. User-guided virtual humans
  3. Autonomous virtual humans
  4. Interactive Perceptive Actors

Directly controlled virtual humans

For the most immersive interaction, a complete representation of the
participant's virtual body should reproduce the movements of the real
body. This is best achieved with a large number of sensors tracking
every degree of freedom of the real body. However, most current VE
systems track only the head and hands. The limited tracking information
must therefore be combined with knowledge of the human model and with
different motion generators in order to “extrapolate” the joints of the
body that are not tracked. This is more than a simple inverse
kinematics problem: there are generally multiple joint-angle solutions
that reach the same position, and the most realistic posture must be
selected. In addition, the joint limits must be respected when setting
the joint angles.
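
As an illustration, the minimal sketch below infers the joint angles of a planar three-joint “arm” from a tracked hand position using cyclic coordinate descent with joint limits. The link lengths, joint limits, and target position are invented values, and this is only one simple way to approach the problem, not the posture-selection method described above.

```python
# Minimal sketch: cyclic coordinate descent IK for a planar 3-joint "arm",
# with joint limits. Illustrative only -- link lengths, limits and the
# target are made-up values.
import numpy as np

LINKS  = np.array([0.30, 0.28, 0.18])            # shoulder->elbow->wrist->hand (m)
LIMITS = [(-2.6, 2.6), (0.0, 2.5), (-1.2, 1.2)]  # per-joint angle limits (rad)

def forward(angles):
    """Return the positions of every joint plus the end effector."""
    pts, pos, heading = [np.zeros(2)], np.zeros(2), 0.0
    for a, l in zip(angles, LINKS):
        heading += a
        pos = pos + l * np.array([np.cos(heading), np.sin(heading)])
        pts.append(pos)
    return pts

def ccd_ik(target, angles, iters=50, tol=1e-4):
    """Drive the end effector toward `target`, clamping each joint to its limits."""
    for _ in range(iters):
        for j in reversed(range(len(angles))):
            pts = forward(angles)
            to_end    = pts[-1] - pts[j]
            to_target = target  - pts[j]
            delta = (np.arctan2(to_target[1], to_target[0])
                     - np.arctan2(to_end[1], to_end[0]))
            lo, hi = LIMITS[j]
            angles[j] = np.clip(angles[j] + delta, lo, hi)
        if np.linalg.norm(forward(angles)[-1] - target) < tol:
            break
    return angles

# Tracked hand position (e.g. from a hand tracker); the other joints are inferred.
hand_target = np.array([0.45, 0.35])
print(ccd_ik(hand_target, np.zeros(3)))
```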

Guided virtual humans

Guided virtual humans are driven by the user but do not correspond
directly to the user's motion. They are based on the concept of the
real-time direct metaphor: input data from a VR device is recorded in
real time and used to produce effects of a different nature that
nevertheless correspond to that input. There is no analysis of the real
meaning of the input data. The participant uses the input devices to
update the transformation of the eye position of the virtual human.
This local control is exploited by computing the incremental change in
the eye position and estimating the rotation and velocity of the body
center. The walking motor uses the instantaneous velocity of motion to
compute the walking cycle length and duration, from which it computes
the joint angles of the whole body. The sensor information for walking
can be obtained from various types of input devices, such as a special
gesture with a DataGlove or a SpaceBall, as well as other input
methods.
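
The sketch below illustrates the idea of a walking motor driven only by the instantaneous body velocity: it derives a step length and cycle duration from the speed and then samples placeholder joint curves. The speed-to-step relation and the sinusoidal curves are assumptions for illustration, not the empirical walking model referred to above.

```python
# Minimal sketch of a "walking motor": from the instantaneous body velocity,
# derive a step length and cycle duration, then sample placeholder joint curves.
import math

def walking_motor(speed_mps, t):
    """Return hip and knee flexion (radians) for one leg at time t."""
    if speed_mps <= 0.0:
        return 0.0, 0.0                            # standing posture
    step_length = 0.4 + 0.3 * speed_mps            # assumed linear speed/step relation
    cycle_time  = 2.0 * step_length / speed_mps    # one full cycle = two steps
    phase = (t % cycle_time) / cycle_time          # normalized phase in [0, 1)
    hip  = 0.35 * math.sin(2.0 * math.pi * phase)            # swing fore/aft
    knee = 0.60 * max(0.0, math.sin(2.0 * math.pi * phase))  # bend only during swing
    return hip, knee

# Drive the gait from, e.g., a SpaceBall-derived body velocity.
for t in [0.0, 0.25, 0.5, 0.75]:
    print(walking_motor(1.2, t))
```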

Autonomous virtual humans

Autonomous actors are able to have a behavior, which means they must
have a manner of conducting themselves. The virtual human is assumed to
have an internal state built from its goals and from sensor information
about the environment; the participant modifies this state by defining
high-level motivations and state changes. Typically, the actor perceives
the objects and the other actors in the environment through virtual
sensors: visual, tactile and auditory sensors. Based on the perceived
information, the actor's behavioral mechanism determines the actions he
will perform. An actor may simply evolve in his environment, interact
with this environment, or even communicate with other actors. In this
latter case, we will consider the actor as an interactive perceptive
actor.
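
A minimal sketch of this perceive/decide cycle is given below. The percept representation, sensing range, and selection rule are illustrative assumptions rather than the behavioral mechanism of any particular system.

```python
# Minimal sketch of the perceive -> update-state -> act cycle of an
# autonomous actor. Goals, percepts and the decision rule are invented.
from dataclasses import dataclass, field

@dataclass
class AutonomousActor:
    goal: str = "reach_exit"
    state: dict = field(default_factory=dict)

    def perceive(self, environment):
        """Virtual sensors: keep only objects within a limited sensing range."""
        self.state["nearby"] = [o for o in environment if o["distance"] < 5.0]

    def decide(self):
        """Behavioral mechanism: map internal state and goal to an action."""
        obstacles = [o for o in self.state.get("nearby", []) if o["kind"] == "obstacle"]
        if obstacles:
            return "avoid:" + obstacles[0]["name"]
        return "walk_toward:" + self.goal

# One tick of the behavior loop.
actor = AutonomousActor()
actor.perceive([{"name": "chair", "kind": "obstacle", "distance": 2.0}])
print(actor.decide())          # -> "avoid:chair"
```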

The concept of virtual vision was first introduced by Renault as the
main information channel between the environment and the virtual actor.
The synthetic actor perceives his environment through a small window in
which the environment is rendered from his point of view. Since he can
access the z-buffer values of the pixels, the color of the pixels and
his own position, he can locate the visible objects in his 3D
environment. To recreate virtual audition, a model of the sound
environment is required in which the virtual human can directly access
the positional and semantic information of any audible sound event. For
virtual tactile sensors, our approach is based on spherical
multisensors attached to the articulated figure. A sensor is activated
by any collision with other objects. These sensors have been integrated
into a general methodology for automatic grasping.
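
The sketch below illustrates the virtual vision idea: from the depth values of the small rendered window and the actor's camera parameters, the 3D positions of visible points are recovered. A pinhole camera and linear eye-space depth are assumed; real z-buffers usually store nonlinear values that would first need to be linearized.

```python
# Minimal sketch of "virtual vision": unproject the actor's rendered window,
# using per-pixel depth, back into 3D world positions of visible points.
import numpy as np

def locate_visible_points(depth, fov_y_deg, cam_pos, cam_rot):
    """depth: (H, W) eye-space depths; returns (H, W, 3) world-space points."""
    h, w = depth.shape
    f = (h / 2.0) / np.tan(np.radians(fov_y_deg) / 2.0)   # focal length in pixels
    ys, xs = np.mgrid[0:h, 0:w]
    # Ray directions in camera space (camera looks down -z, image y goes down).
    dirs = np.stack([(xs - w / 2.0) / f,
                     -(ys - h / 2.0) / f,
                     -np.ones_like(depth)], axis=-1)
    cam_points = dirs * depth[..., None]                   # scale rays by depth
    return cam_points @ cam_rot.T + cam_pos                # camera -> world

# A 4x4 window with constant depth 2 m, camera at the actor's eye position.
pts = locate_visible_points(np.full((4, 4), 2.0), 60.0,
                            np.array([0.0, 1.7, 0.0]), np.eye(3))
print(pts[2, 2])
```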

Interactive Perceptive Actors

We define an interactive perceptive synthetic actor as an actor aware
of other actors and of real people. Such an actor is, of course, also
assumed to be autonomous. Moreover, he is able to communicate
interactively with the other actors, whatever their type, and with the
real people. For example, Emering et al. describe how a directly
controlled virtual human performs fight gestures which are recognized
by an autonomous virtual opponent.
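
A minimal sketch of such gesture recognition is given below: the tracked joint angles of the directly controlled virtual human are matched against stored posture templates by nearest neighbor. The templates, joint selection, and threshold are invented for illustration and do not reproduce the recognizer of Emering et al.

```python
# Minimal sketch of posture-based gesture recognition: nearest-neighbor match
# of tracked joint angles against a few hand-made gesture templates.
import numpy as np

TEMPLATES = {                        # a few joint angles (rad) per fight gesture
    "punch": np.array([1.4, 0.2, 0.1]),
    "block": np.array([0.9, 1.3, 0.0]),
    "kick":  np.array([0.1, 0.2, 1.5]),
}

def recognize(joint_angles, threshold=0.4):
    """Return the closest gesture label, or None if nothing is close enough."""
    best, best_dist = None, float("inf")
    for name, template in TEMPLATES.items():
        dist = np.linalg.norm(joint_angles - template)
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist < threshold else None

# The autonomous opponent reacts to the gesture the tracked participant performed.
print(recognize(np.array([1.35, 0.25, 0.05])))   # -> "punch"
```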
