Curs 6

Procedural modeling, Virtual Humans,

Audio modeling, Denavit–Hartenberg


Procedural modeling

  • VEs tend to become more and more complex as 3D graphics hardware improves and allows additional detail to be specified.

  • There are circumstances (data size, degree of repetition of the elements, particular object structures – e.g. vegetation) where it is inconceivable to model the VE using manual or scanning techniques.

  • High-level modeling requires developing techniques that provide an abstraction of the model into classes, allowing high-level specifications.

  • In general, procedural (or parametric) techniques use algorithms and procedures to encode and abstract the details of the model, freeing the graphics engine from the need for detailed specifications.

[Diagram: a real model is analyzed into parameters and procedures, forming an abstract high-level model; synthesis then turns the high-level model into the virtual model.]

Procedural modeling: pros & cons

PROs

  • Simulation of complex models with fewer parameters to specify

  • “Data Amplification” through parameter control

  • Optimal for structures with a certain degree of repetitiveness

  • Optimal for modeling a variety of similar, but not identical, entities that share certain properties

  • Allow on-demand modeling/rendering, avoiding the storage of data that is not needed (Lazy Evaluation)

CONs

  • The techniques are strongly tied to specific applications

  • The details of the required procedures and parameter set are often not easy to identify, understand, conceive and design

  • It is very difficult to maintain sufficient control over the results


Paradigms of procedural modeling

SEMI-AUTOMATIC modeling:

Procedures and parameters are defined, but they do not cover the whole generation process; therefore, in some stages manual intervention by the person doing the modeling is necessary or advisable.

[Diagram: the user intervenes on the procedures, the parameters and the procedural model; synthesis then produces the virtual model.]



Procedural modeling of textures

  • Procedural textures are generated algorithmically, instead of being the result of a sampling process, of images or of drawings

  • There are many approaches, using different procedures or parameters:

    • Blinn & Newell propose Fourier synthesis.

    • Fournier, Fussell & Carpenter propose fractal subdivision.

    • Gagalowicz developed statistical methods to analyze the properties of natural textures and managed to reproduce them.

    • Perlin proposes the use of noise lattices, i.e. random numbers or pseudo-gradients generated on a grid/matrix.


Lattice noise

  • Value Noise: + simple, - high bandwidth

  • Gradient Noise: + low bandwidth, - artefacts (pattern)

  • Value-Gradient Noise

http://en.wikipedia.org/wiki/Lattice_Boltzmann_methods

http://www.nada.kth.se/utbildning/grukth/exjobb/rapportlistor/2004/rapporter04/eriksson_erik_04161.pdf


Perlin Noise

  • E.g. 2D Perlin turbulence:

    value = 0
    for (f = MINFREQ; f < MAXFREQ; f *= 2)
        value += 1/f * noise(P*f)
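
A minimal Python sketch of the same idea, using a hash-based lattice value noise in place of Perlin's gradient noise; the permutation table, the smoothstep interpolant and the frequency bounds are illustrative choices, not taken from the course material.

    import math, random

    # Hypothetical lattice value noise: pseudo-random values at integer grid
    # points, smoothly interpolated in between (a sketch, not Perlin's
    # gradient-noise implementation).
    random.seed(0)
    _values = [random.random() for _ in range(256)]
    _perm = list(range(256)); random.shuffle(_perm)

    def _lattice_value(ix, iy):
        # Hash the integer grid coordinates into the value table.
        return _values[_perm[(ix + _perm[iy % 256]) % 256]]

    def _smoothstep(t):
        return t * t * (3 - 2 * t)

    def noise2d(x, y):
        ix, iy = math.floor(x), math.floor(y)
        fx, fy = x - ix, y - iy
        sx, sy = _smoothstep(fx), _smoothstep(fy)
        # Bilinear interpolation of the four surrounding lattice values.
        v00, v10 = _lattice_value(ix, iy), _lattice_value(ix + 1, iy)
        v01, v11 = _lattice_value(ix, iy + 1), _lattice_value(ix + 1, iy + 1)
        top = v00 + sx * (v10 - v00)
        bottom = v01 + sx * (v11 - v01)
        return top + sy * (bottom - top)

    def turbulence2d(x, y, min_freq=1.0, max_freq=64.0):
        # Sum noise octaves weighted by 1/f, as in the pseudocode above.
        value, f = 0.0, min_freq
        while f < max_freq:
            value += noise2d(x * f, y * f) / f
            f *= 2.0
        return value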


Procedural textures

  • Pros:

    • Bandwidth and memory saving (no storage needed)

    • No tiling

    • High detail, independent of zoom

  • Procedures:

    • IBR (Image-Based Rendering): textures are generated starting from samples

    • Texture synthesis: textures are generated starting from material properties

  • IBR pro: no need to identify different procedures for different materials, only small samples are stored


IBR


IBR + genetic algorithms

http://www.youtube.com/watch?v=Fp9kzoAxsA4


Procedural Texturing – Animation

  • Real-time animated textures

  • Allows producing complex animations on images rather than on polygons (e.g. facial animation, fire or liquids, clouds etc.)


Procedural synthesis of geometry

  • L-Systems:

    Formal grammar proposed by Lindenmayer (1968) and adapted to graphics by Alvy Ray Smith (1984)

  • An L-System is based upon:

    • An alphabet (e.g. “F”, “+”, “-”)

    • A set of production rules (e.g. F → F+F--F+F)

  • Productions are applied in parallel, i.e. the largest possible number of symbols is substituted at each step (the output is independent of the order in which the rules are applied), as in the sketch below
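
To make the parallel substitution concrete, here is a short Python sketch (illustrative, not taken from the course) that rewrites every symbol of the current string in the same pass:

    def rewrite(axiom, rules, iterations):
        # Apply the productions in parallel: every symbol of the current string
        # is replaced in the same pass, so the result does not depend on the
        # order in which the rules are applied.
        s = axiom
        for _ in range(iterations):
            s = "".join(rules.get(c, c) for c in s)  # symbols without a rule are copied
        return s

    # Koch-curve-like rule from the slide: F -> F+F--F+F
    print(rewrite("F", {"F": "F+F--F+F"}, iterations=2))
    # F+F--F+F+F+F--F+F--F+F--F+F+F+F--F+F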


Procedural synthesis of geometry

  • Symbols can have geometrical meanings

  • E.g. Logo:

    • “F” draws a segment

    • “+” rotates θ degrees counterclockwise

    • “-” rotates θ degrees clockwise

http://en.wikipedia.org/wiki/Fractal


Procedural synthesis of geometry

  • L-systems become more flexible by introducing push and pop operations (symbols “[” and “]”)

  • This allows realizing very complex structures, such as the branching rule interpreted in the sketch below:

    • F → F [+F] F [-F] F
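
A possible turtle-style interpreter for bracketed strings, in Python: “[” pushes the turtle state and “]” pops it, which is what produces the branches. The step length and the branching angle are arbitrary illustrative values.

    import math

    def interpret(commands, step=1.0, angle_deg=25.0):
        # Turn an L-system string into 2D line segments using turtle semantics:
        # 'F' draws a segment, '+'/'-' rotate, '[' saves the state, ']' restores it.
        x, y, heading = 0.0, 0.0, math.pi / 2        # start at the origin, pointing "up"
        stack, segments = [], []
        angle = math.radians(angle_deg)
        for c in commands:
            if c == "F":
                nx, ny = x + step * math.cos(heading), y + step * math.sin(heading)
                segments.append(((x, y), (nx, ny)))
                x, y = nx, ny
            elif c == "+":
                heading += angle                     # counterclockwise
            elif c == "-":
                heading -= angle                     # clockwise
            elif c == "[":
                stack.append((x, y, heading))
            elif c == "]":
                x, y, heading = stack.pop()
        return segments

    # Three parallel rewriting passes of the branching rule F -> F[+F]F[-F]F.
    rules, s = {"F": "F[+F]F[-F]F"}, "F"
    for _ in range(3):
        s = "".join(rules.get(c, c) for c in s)
    segments = interpret(s)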


Procedural synthesis of geometry

  • Passing to 3D:

    • “+” and “–” for yaw

    • “^” and “&” for pitch

    • “\” and “/” for roll

  • In 3D, segments may be represented by cylinders or cone frustums


Paradigms of procedural synthesis

  • The typical modeling data flow is:

    • The user specifies a conceptual model to the modeler

    • The modeler converts the conceptual model into an intermediate representation, suitable for being processed and rendered

    • The renderer takes the representation and synthesizes an image

  • Procedural synthesis relies on two different paradigms to specify the intermediate representation:

    • Data Amplification

    • Lazy Evaluation

[Diagram: USER → concept → MODELER → geometry → RENDERER.]


Data amplification

  • A high geometric detail is specified starting from little input information

  • L-Systems are a typical example of data amplification (e.g. poplar tree: 16 KB (concept) -> 6.7 MB (polygons))

  • Data explosion: high memory storage requirements

[Diagram: USER → MODELING (amplification) → geometry → RENDERER.]


Lazy evaluation

  • This paradigm avoids the intermediate representation, generating the synthesis on the fly, only when needed for rendering

  • Low storage requirements, high real-time demands

[Diagram: USER → concept → MODELING (server) → coordinates/geometry → RENDERER (client).]


Parametric L-Systems

  • Parametric L-systems can change their behaviour depending on the passed parameters. This allows altering transformations and enables recursion

  • E.g. inductive instancing (see the sketch below):

    define grass(0) < grass blade >

    define grass(n) < grass(n-1)

      grass(n-1) translate 2^n * (0.1, 0, 0)

      grass(n-1) translate 2^n * (0, 0, 0.1)

      grass(n-1) translate 2^n * (0.1, 0, 0.1)

    >
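
The inductive-instancing definition above can be read as a recursive procedure. A hypothetical Python rendering of the same idea, returning blade positions instead of actual geometry:

    def grass(n, origin=(0.0, 0.0, 0.0)):
        # grass(n) places four copies of grass(n-1), each offset on the ground
        # plane by 2^n * 0.1 as in the definition above; grass(0) is one blade.
        ox, oy, oz = origin
        if n == 0:
            return [origin]
        d = (2 ** n) * 0.1
        positions = []
        for dx, dz in [(0.0, 0.0), (d, 0.0), (0.0, d), (d, d)]:
            positions.extend(grass(n - 1, (ox + dx, oy, oz + dz)))
        return positions

    print(len(grass(3)))   # 4**3 = 64 blade positions from one small definition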


Procedural Geometric Instancing

  • Global coordinates:

    An instance may vary its geometry depending on its world position/orientation

    E.g. Tropism http://en.wikipedia.org/wiki/Tropism

    e.g. top → down: gravity

    bottom → up: searching for sunlight

    side: wind, etc.



Avatar

  • “Icon or interactive representation of a user in a shared environment”

    • “avatar” comes from Hindu tradition, describing an embodiment of the god Vishnu

    • in text-based VR (such as MUDs) an avatar is a text description provided to other users looking at the user

    • in graphics VR, an avatar is a 3D model of a virtual human (or character) directly driven by a human user


Agents

  • An agent is an autonomous VH whose actions are not driven by a human being but rather by a computer

  • VH actions can be guided by:

    • an autonomous sensory system

    • behavioural rules

    • predefined scripts


Virtual Humans: requirements

  • Graphics simulation:

    • Rigid bodies

    • Deformable bodies

  • Physical simulation:

    • Physically based modeling

    • Inverse and direct kinematics

  • Behavioural simulation


Graphical representation of VHs

  • Layered modeling:

    • Simplest case: rigid bodies hierarchy (no deformations)

    • Usually at least two layers: skeleton and skin

    • More layers: anatomy-based approach

  • VH Animation

    • Skeleton motion

    • Consequent skin blending


Graphical representation of VHs

  • Skeleton + skinning


Skinning

  • Simple deformation algorithm allowing a skeleton to continuously animate an associated skin mesh

  • Each skin vertex is influenced by one or more bones of the skeleton, depending on a system of weights.

  • To connect the skin to the bones (rigging), envelopes of predefined shape and size can be used, with an internal and an external boundary

  • Vertices inside the internal boundary are given weight = 1

  • Vertices outside the external boundary are given weight = 0

  • Vertices between the two boundaries are given a weight between 0 and 1

  • A vertex assumes the position related to the envelope it is included into

  • If multiple envelopes include a vertex, it will assume an intermediate position given by the weighted average of the positions (see the blending sketch below)

v_{\mathrm{blend}} = v_1 w_1 + \dots + v_{n-1} w_{n-1} + v_n \left(1 - \sum_{i=1}^{n-1} w_i\right)

Blending formula
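
A minimal linear-blend-skinning sketch of the formula above, assuming each bone's contribution is a 4x4 homogeneous matrix and the weights sum to 1 (the names and the toy data are illustrative):

    import numpy as np

    def blend_vertex(vertex, bone_transforms, weights):
        # The position proposed by each bone is accumulated with that bone's weight.
        v = np.append(np.asarray(vertex, dtype=float), 1.0)   # homogeneous coordinates
        blended = np.zeros(3)
        for transform, w in zip(bone_transforms, weights):
            blended += w * (transform @ v)[:3]
        return blended

    # Toy example: a vertex influenced by two bones (hypothetical matrices).
    identity = np.eye(4)
    shifted = np.eye(4); shifted[0, 3] = 1.0               # bone moving the vertex +1 in x
    print(blend_vertex([0.0, 1.0, 0.0], [identity, shifted], [0.7, 0.3]))  # ~[0.3, 1.0, 0.0]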


Rigid bodies vs. skinning

  • Rigid bodies hierarchy:

    • Pro: simple, lightweight

    • Con: imprecise, does not simulate deformations

It can be considered a borderline case of blending, where each vertex is associated to one bone only, with weight = 1


Skinning – Bind Pose

  • When the skin is linked to the skeleton, the “bind pose” is saved, which is the status, in world space, of the transformation matrices of the bones and of the skin; it will be used in the blending stage.

  • During blending, each skin vertex:

    • Is transformed from the skin local space into world space

    • Then from world space into the local space of each bone it is connected to

    • Undergoes all the animations of each bone it is connected to, and for each a corresponding new position is computed

    • All the new positions are transformed into world space and then weighted (see the sketch below)
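
A sketch of the matrix bookkeeping those steps imply, under the common assumption that both the bind pose and the current pose are stored as world-from-bone matrices (a usual formulation, not necessarily the slide's exact one):

    import numpy as np

    def skinning_matrices(bind_world_from_bone, current_world_from_bone):
        # For each bone: inverse(bind) takes a bind-pose world-space vertex into
        # the bone's local space; the current bone matrix brings it back to
        # world space in the animated pose. Blending then weights the results.
        return [cur @ np.linalg.inv(bind)
                for bind, cur in zip(bind_world_from_bone, current_world_from_bone)]

    # Hypothetical usage: feed these matrices, together with the per-vertex
    # weights, to a blend function such as the blend_vertex() sketched earlier.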


Skinning – Vertex shader

  • When dealing with complex skins and skeletons, blending can be CPU intensive

  • The same operation can be performed on the GPU by programming an appropriate vertex shader

  • Important: in order to produce correct lighting, normals must be blended too!


Animation techniques


Forward kinematics

  • A human body can be sketched as a connected set of:

    • links (arm, forearm, etc.)

    • joints (connecting links: elbow, shoulder etc.)

  • In order to animate a skeleton, the joint angles need to be altered:

    • In a 2-layer representation, only the bones are animated, composing at each frame a particular posture, corresponding to a particular configuration of the joint angle array

    • The skin is then blended accordingly

  • Needed info:

    • Anatomy-based joint angle limits

    • Masses of the links (only for PBM)

(A forward-kinematics sketch based on the Denavit–Hartenberg convention follows the links below.)

http://www.youtube.com/watch?v=VjsuBT4Npvk&NR=1&feature=endscreen

http://www.youtube.com/watch?feature=endscreen&NR=1&v=3ZcYSKVDlOc

http://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters
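
The Denavit–Hartenberg page linked above describes the standard per-joint parameters (theta, d, a, alpha). A small forward-kinematics sketch built on that convention, with purely illustrative joint angles and link lengths:

    import numpy as np

    def dh_matrix(theta, d, a, alpha):
        # Homogeneous transform of one joint in the classic Denavit-Hartenberg
        # convention (joint angle theta, link offset d, link length a, twist alpha).
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([[ct, -st * ca,  st * sa, a * ct],
                         [st,  ct * ca, -ct * sa, a * st],
                         [0.0,      sa,       ca,      d],
                         [0.0,     0.0,      0.0,    1.0]])

    def forward_kinematics(dh_params):
        # Chain the per-joint transforms and return the end-effector position.
        T = np.eye(4)
        for theta, d, a, alpha in dh_params:
            T = T @ dh_matrix(theta, d, a, alpha)
        return T[:3, 3]

    # Planar 2-link "arm + forearm" (illustrative values, not from the slides):
    arm = [(np.pi / 4, 0.0, 0.30, 0.0),    # shoulder: 45 deg, upper arm 0.30 m
           (np.pi / 6, 0.0, 0.25, 0.0)]    # elbow: 30 deg, forearm 0.25 m
    print(forward_kinematics(arm))         # end-effector position in space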


FK: Motion Capture

  • Joint angles can be computed (Motion Synthesis) or sampled using sensors (Motion Capture)

  • Motion capture pros:

    • Realistic animation

    • Little work for modelers (only fine tuning and junctions)

    • Captures animation details that cannot be caught by eye or are difficult to synthesize

  • Cons:

    • Requires expensive devices

    • Data are samples, which are more difficult to process


Inverse kinematics

  • The (forward) kinematics of a connected structure is the process of computing the position in space of the structure's end-effector, given the joint angles

  • Inverse kinematics (IK) is the opposite process: given the end-effector position (goal or target), it retrieves the configuration of the related joint angles. It is a process widely used in robotics.

  • Pros:

    • Allows calculating the joints starting only from some position information (typically hands, feet, head), reducing the number of needed sensors

  • Cons:

    • In some “singularity” points more than one solution is possible. Although anatomical limits can help, it is not always possible to find the correct one (see the 2-link sketch below).

http://demonstrations.wolfram.com/ForwardKinematics/
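
A classic analytic example for a planar 2-link chain (an illustration, not the course's algorithm), which also shows the multiple-solution issue mentioned above:

    import math

    def two_link_ik(x, y, l1, l2, elbow_up=True):
        # Given the end-effector goal (x, y) and the link lengths, return the two
        # joint angles. Two mirrored solutions exist whenever the goal is strictly
        # reachable, which is the ambiguity mentioned in the slide.
        c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
        if c2 < -1.0 or c2 > 1.0:
            raise ValueError("goal out of reach")
        theta2 = math.acos(c2) if elbow_up else -math.acos(c2)
        theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                               l1 + l2 * math.cos(theta2))
        return theta1, theta2

    # Recovers (approximately) the pose of the forward-kinematics example above.
    print(two_link_ik(0.277, 0.454, 0.30, 0.25))   # ~(pi/4, pi/6)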


Shape Interpolation

  • Two or more 3D reference meshes representing human body postures are morphed

  • Generally the needed steps are:

    • 1 – Meshes are morphed

    • 2 - Texture coordinates are morphed

    • 3 - Texture maps are morphed

  • If the meshes are related to the same basic shape, steps 1 and 2 are basically linear interpolations.

  • Step 3 is needed only for some visual effects


Keyframe Interpolation

  • Sometimes in the literature this is a synonym of Shape Interpolation

  • Our definition refers to Skeletal Animation

  • Reference values are not directly meshes, but rather skeleton postures related to particular keyframes

  • Interpolation takes place on these values, determining new skeleton postures (see the sketch below)

  • The skin is blended frame by frame based on the new interpolated skeleton postures
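
A minimal sketch of the idea, assuming a posture is simply a dictionary of joint angles (real systems usually store joint rotations as quaternions and interpolate them with slerp):

    def interpolate_posture(key_a, key_b, t):
        # Linearly interpolate every joint angle between two keyframe postures,
        # with t in [0, 1] measuring the position between the two keyframes.
        return {joint: (1.0 - t) * key_a[joint] + t * key_b[joint] for joint in key_a}

    key0 = {"shoulder": 0.0, "elbow": 0.0}      # keyframe at frame 0 (radians)
    key10 = {"shoulder": 0.8, "elbow": 1.2}     # keyframe at frame 10
    print(interpolate_posture(key0, key10, t=4 / 10))   # posture at frame 4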


Physically based modeling

  • Animations based on kinematics do not take into account physical effects like gravity or inertia

  • Rather than setting up a kinematics problem, we can set up a dynamics problem, also considering masses and forces

  • Inverse dynamics is also possible (for each joint, compute the forces and torques generating a desired movement)

  • Modeling dynamics is DESIRABLE to correctly manage the interaction between the VH and the VE (collision detection and management etc.)


Behavioural modeling

  • VHs must be able to access information about the VE, either directly (unrealistic) or by means of a system of virtual sensors

  • This information will drive their actions (behavioural modeling):

    • locomotion driven by sight

    • object manipulation

    • feedback to acoustic stimuli etc.

  • Behaviours can be scripted or procedurally evaluated

  • Other behavioural issues:

    • Interaction among VHs

    • Interaction between VHs and real humans



H-Anim

  • Humanoid Animation:

    • Target: creation of a library of interchangeable humanoids and authoring tools to create new humanoids and animations

    • Supports keyframe, IK, FK, etc.

    • http://www.h-anim.org/Specifications/H-Anim1.1

  • Features:

    • VRML 97 compliant

    • Flexibility, no assumptions on the application type

    • Simple (deals only with VHs and no other articulated figures)


H-Anim – File format example

...
DEF hanim_l_shoulder Joint {
  name "l_shoulder"
  center 0.167 1.36 -0.0518
  children [
    DEF hanim_l_elbow Joint {
      name "l_elbow"
      center 0.196 1.07 -0.0518
      children [
        DEF hanim_l_wrist Joint {
          name "l_wrist"
          center 0.213 0.811 -0.0338
          children [
            DEF hanim_l_hand Segment {
              name "l_hand"
              ...
            }
          ]
        }
        DEF hanim_l_forearm Segment {
          name "l_forearm"
          ...
        }
      ]
    }
    DEF hanim_l_upperarm Segment {
      name "l_upperarm"
      ...
    }
  ]
}
...


H-Anim – Hierarchy


MPEG-4: body animation

  • FBA: Facial Body Animation in MPEG-4

    • An MPEG-4 body is a collection of nodes

    • The root, BodyNode, contains 3 nodes:

      • BAP (Body Animation Parameters): 296 parameters describing skeleton properties

      • Rendered Body: holds DEFAULT skin information (shape + textures)

      • If a specific body must be rendered, the BDP (Body Definition Parameters) node is added, replacing the default Rendered Body. Skinning parameters may be specified.

  • The H-Anim group, coordinated with the FBA group of MPEG-4, has standardized VH specifications, so as to produce coherent results in both environments.



Virtual Crowds

  • VHs are needed to populate VEs

  • Some VEs (such as cities) need a high number of VHs

  • Crowd simulation, because of its intrinsic complexity, cannot be managed as a sum of individual VHs

  • Applications: entertainment, study of crowd flows (panic, disasters etc.)

  • Requirements:

    • Graphics modeling

    • Behavioural modeling



Virtual Crowds example

  • Real-time oriented

  • Allows rendering ~100K different VHs

  • VHs rendered as precomputed impostors

  • Several VH types, possibility of modulating colors and textures

  • Real-time shadowing using shadow maps


Virtual Crowds

  • Behavioural algorithms:

    • Collision avoidance

    • Height check

    • Interest attractors

    • Exit search, visibility, flow inertia etc.


Acoustical Environment

  • When a sound is generated, it is propagated as waves in a medium. The properties of the medium, and of the surrounding environment, influence how the sound is perceived

  • A complete acoustical field is composed of:

    • Sound sources

    • Listener

    • Environment


Acoustical environment

  • Sound source:

    an object producing sound waves, which are transmitted (usually) along a preferential direction

  • Environment:

    once produced, the wave propagates in the environment, which may modify the wave's properties

  • Listener:

    an object receiving sound waves. By processing these waves, it can retrieve information about the sound source and the environment


Environmental effects

  • Falloff: decay of the sound intensity as the distance between the listener and the sound source increases (a minimal attenuation sketch follows)
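
A minimal attenuation sketch, assuming an inverse-distance rolloff model of the kind commonly used by 3D audio APIs (the reference distance and rolloff factor are illustrative parameters):

    def inverse_distance_gain(distance, ref_distance=1.0, rolloff=1.0):
        # Gain is 1 at the reference distance and decays as the listener
        # moves away from the source.
        distance = max(distance, ref_distance)       # clamp inside the reference radius
        return ref_distance / (ref_distance + rolloff * (distance - ref_distance))

    for d in (1.0, 2.0, 4.0, 8.0):
        print(d, inverse_distance_gain(d))           # 1.0, 0.5, 0.25, 0.125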


Environmental effects

  • Reflection: when a sound wave changes propagation medium, a share of the wave is transmitted into the new medium and the rest is reflected (related phenomena: echo, …)


Environmental effects

  • Diffraction: when a sound wave meets an obstacle, it bends its path so as to move around the obstacle (like water waves)


Environmental effects

Reverberation:

Depending on their shapes and materials, objects inside an environment are characterized by reflection and absorption parameters. These modify the sound wave; the human brain can perceive first-order reflections individually, while higher-order reflections are perceived as combined and form the reverberation. It is an important acoustical phenomenon, as there is only one direct path and many indirect ones: usually most of the acoustic energy reaching a listener comes from reflections.


Reverberation

  • Reverberation provides important perceptual information about the type and the size of the environment:

    • Short duration → sound almost all direct → clue of a small environment

    • Long duration → well distinct from the original sound → clue of a big environment

    • Much energy in high frequencies → clue of a reflective environment (not absorbing high frequencies)

    • Little energy in high frequencies → clue of a soundproofed environment

  • Reverberation also provides positional information: when the sound source moves away, the direct component decreases whilst the reverberation remains unaltered


Sound localization

  • Interaural Intensity Difference (IID):

    The sound is perceived as more intense by the ear closer to the sound source, not only because of the distance but also because of the head masking.


Sound localization

  • Interaural Time Difference (ITD):

    The sound is perceived earlier by the ear closer to the sound source (see the sketch below for a common approximation).
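
ITD is often approximated with a spherical-head model (Woodworth's formula); a small sketch with typical head-radius and speed-of-sound values, not taken from the slides:

    import math

    def itd_seconds(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
        # Spherical-head (Woodworth) approximation: ITD = (r / c) * (theta + sin(theta)),
        # with the azimuth measured from straight ahead.
        theta = math.radians(azimuth_deg)
        return head_radius / speed_of_sound * (theta + math.sin(theta))

    print(itd_seconds(90) * 1e6, "microseconds")   # roughly 650 us for a source at the side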


Sound localization

  • The human brain uses the ITD + IID effects in order to determine the position of a sound source inside a cone.

  • Although useful, ITD and IID have important limits:

    • Internalization: using headphones, the listener correctly perceives lateralization, but the sound appears to be inside the head

    • They do not help in perceiving “up-down” and “front-rear” differences.

    • Each listener perceives his own “version” of a sound, due to his head/body shape. To correctly perceive the 3D localization, these anatomical parameters must be taken into account


HRTF

  • We must move to a head-centered acoustic system and compute its transfer function.

  • HRTF = Head Related Transfer Function

  • Given the input signals (sound sources) and the transfer function, the output signals (perceived sounds) can be calculated

  • The HRTF is unique for each human being (an “ear-print”); however, an “average” approximated HRTF can be computed and used

[Diagram: sound source → HRTF → perceived sound.]


HRTF: analysis

  • A possible process to calculate an HRTF:

    • Two microphones are put close to the L and R ear channels (either of the user or using a mannequin)

    • A loudspeaker is put in a known position P.

    • A known signal is played.

    • The sound is recorded using the microphones.

    • By comparing the original sound wave with the resulting output, the HRTF is computed for THAT P.

    • The process is repeated moving P onto a sphere


HRTF: analysis


HRTF: synthesis

  • The described process produces an HRTF table related to a particular head

  • This HRTF must be synthesized in order to be used in real time in a VR application

  • To this purpose, special real-time DSP (Digital Signal Processing) algorithms are realized, implementing filters to be applied to non-directional input signals, which are, in this way, “localized” (see the sketch below)
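
In the time domain these filters amount to convolving the mono signal with the left and right head-related impulse responses (HRIRs) measured for the desired source position. A minimal numpy sketch with made-up HRIR taps:

    import numpy as np

    def spatialize(mono_signal, hrir_left, hrir_right):
        # A non-directional (mono) signal is "localized" by filtering it with the
        # HRIRs of the desired position; the result is a 2-channel binaural signal.
        left = np.convolve(mono_signal, hrir_left)
        right = np.convolve(mono_signal, hrir_right)
        return np.stack([left, right], axis=0)

    # Toy usage with hypothetical 3-tap impulse responses:
    signal = np.random.randn(1000)
    stereo = spatialize(signal, hrir_left=[0.9, 0.1, 0.0], hrir_right=[0.4, 0.3, 0.1])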



Virtual Environment Work Flow

[Diagram: sampling, synthesis, modelling, behaviour and properties feed the VIRTUAL ENVIRONMENT, which is connected to the USER through management, rendering and interaction.]


Sound Modeling

Techniques:

  • Playback of sampled signals

    Pros: maximum fidelity of THAT single event
    Cons: static, non-reactive, repetitive

  • Signal-based synthesis (subtractive, additive, FM…); a small additive-synthesis sketch follows this list

    Pros: better flexibility, computationally non-demanding
    Cons: limited, “artificial” sound

  • Physically-based modeling

    Pros: realistic, highly reactive
    Cons: complex (both for calculation and control), often “too perfect”

    Stress is on how the sound is produced

    Alternative: stress on how we perceive the sound (modal synthesis)
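
As a concrete instance of signal-based synthesis, a tiny additive-synthesis sketch (the partial frequencies and amplitudes are made up for illustration):

    import numpy as np

    def additive_synth(partials, duration=1.0, sample_rate=44100):
        # The output is a sum of sinusoidal partials, each given as (frequency_hz, amplitude).
        t = np.arange(int(duration * sample_rate)) / sample_rate
        return sum(a * np.sin(2.0 * np.pi * f * t) for f, a in partials)

    # A crude bell-like tone built from a few inharmonic partials.
    tone = additive_synth([(220.0, 0.6), (554.0, 0.3), (1312.0, 0.1)])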



SW for 3D Audio Rendering

  • The basic concepts of a 3D Audio API are very similar to those of 3D Graphics.

  • To define a virtual acoustic scenario we must define (see the sketch below):

    • Listener: corresponding to the camera. It is defined with a position and an orientation (top and front vectors)

    • Sound Sources: like light sources, they can be omnidirectional, directional or spot. Minimum and maximum distances (like far and near planes) exist.

    • Environment: defines properties of reflections, absorption etc. In basic APIs it is often neglected.

  Examples of 3D Audio APIs:

  • DirectSound 3D: Microsoft only, HW support now discontinued.

  • OpenAL: cross platform, HW support, 3D only

  • EAX: available for DS3D and OpenAL, manages environment fx.
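
A minimal, API-agnostic sketch of such a scenario; the classes and the attenuation rule below are illustrative and are not the interface of DirectSound 3D, OpenAL or EAX:

    from dataclasses import dataclass
    from typing import Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class Listener:
        # The acoustic counterpart of the camera: position plus an orientation
        # given by "front" and "top" vectors, as described above.
        position: Vec3 = (0.0, 0.0, 0.0)
        front: Vec3 = (0.0, 0.0, -1.0)
        top: Vec3 = (0.0, 1.0, 0.0)

    @dataclass
    class SoundSource:
        # An emitter with minimum/maximum distances (the analogue of near/far planes).
        position: Vec3
        min_distance: float = 1.0
        max_distance: float = 50.0

    def source_gain(listener: Listener, source: SoundSource) -> float:
        # Illustrative distance attenuation only; a real API also handles source
        # directivity, environment reflections, Doppler, occlusion, etc.
        d = sum((a - b) ** 2 for a, b in zip(listener.position, source.position)) ** 0.5
        d = min(max(d, source.min_distance), source.max_distance)
        return source.min_distance / d

    print(source_gain(Listener(), SoundSource(position=(3.0, 0.0, 4.0))))  # 0.2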