Tag | Ind1 | Ind2 | Content
---|---|---|---
000 | | | 06994cam a2200733Ii 4500
001 | | | u11693
003 | | | SA-PMU
005 | | | 20210418123224.0
006 | | | m o d
007 | | | cr cnu\|unuuu\|\|
008 | | | 150929s2016 sz ob 000 0 eng d
040 | | | _aN$T _beng _erda _epn _cN$T _dIDEBK _dYDXCP _dN$T _dGW5XE _dOCLCF _dEBLCP _dNUI _dCOO _dDEBSZ _dCDX _dOCLCQ _dKSU _dOHI
019 | | | _a931591524 _a932333052
020 | | | _z3319199463 _q(print)
020 | | | _z9783319199467 _q(print)
024 | 7 | | _a10.1007/978-3-319-19947-4 _2doi
035 | | | _a(OCoLC)922528432 _z(OCoLC)931591524 _z(OCoLC)932333052
050 | | 4 | _aTJ211
072 | | 7 | _aTEC _x009000 _2bisacsh
082 | 0 | 4 | _a629.892 _223
245 | 0 | 0 | _aContext aware human-robot and human-agent interaction / _cNadia Magnenat-Thalmann, Junsong Yuan, Daniel Thalmann and Bum-Jae You, editors.
264 | | 1 | _aCham : _bSpringer, _c2016.
300 | | | _a(xiii, 298 pages)
336 | | | _atext _btxt _2rdacontent
337 | | | _acomputer _bc _2rdamedia
338 | | | _aonline resource _bcr _2rdacarrier
490 | 1 | | _aHuman-computer interaction series
588 | 0 | | _aOnline resource; title from PDF title page (EBSCO, viewed October 5, 2015).
504 | | | _aIncludes bibliographical references.
505 | 0 | | _aPreface; Contents; Contributors; Introduction; Part I User Understanding Through Multisensory Perception; 1 Face and Facial Expressions Recognition and Analysis; 1.1 Introduction; 1.2 Literature Review; 1.2.1 Uniform LBP; 1.2.2 Fuzzy LBP; 1.2.3 LTP and Its Variants; 1.2.4 Preprocessing; 1.2.5 Discussion; 1.3 Noise-Resistant Local Binary Patterns; 1.3.1 Problem Analysis of LBP and LTP; 1.3.2 Noise-Resistant LBP; 1.3.3 Extended Noise-Resistant LBP; 1.3.4 Applications on Face and Facial Expression Recognition; 1.4 Experimental Results; 1.4.1 Facial Expression Recognition on the AR Database.
505 | 8 | | _a1.4.2 Face Recognition on the AR Database; 1.4.3 Face Recognition on the Extended Yale B Database; 1.4.4 Face Recognition on the O2FN Mobile Database; 1.4.5 Comparison of Computational Complexity; 1.5 Conclusion; References; 2 Body Movement Analysis and Recognition; 2.1 Introduction; 2.2 Problematic; 2.3 State of the Art; 2.4 Recent Approaches; 2.4.1 System Overview; 2.4.2 Human Upper Body Gesture Understanding; 2.5 Future Avenues; 2.6 Conclusion; References; 3 Sound Source Localization and Tracking; 3.1 Introduction; 3.2 Overview of Sound Source Localization and Tracking Algorithms.
505 | 8 | | _a3.2.1 Mathematical Formulation of Sound Source Localization; 3.2.2 Sound Source Localization Using Beamforming-Based Approach; 3.2.3 Sound Source Tracking Using Particle Filter-Based Approach; 3.3 Proposed Robust Speech Source Tracking; 3.3.1 The Harmonic Structure in the Speech Spectrogram; 3.3.2 Speech Source Tracking in the Presence of Sound Interference; 3.3.3 Simulation Results; 3.4 Integration with Social Robot; 3.5 Future Avenues; 3.6 Conclusions; References; Part II Facial and Body Modelling Animation; 4 Modelling Conversation; 4.1 Learning Conversation Skills; 4.2 The State of the Art.
505 | 8 | | _a4.3 Summary of Our Approach; 4.4 The Capture, Processing and Interpreting of Non-verbal Speech Cues; 4.4.1 Protocols for Capturing of Speech Data; 4.4.2 Processing the Speech Data as Measures; 4.4.3 Interpreting the Measures as Metameasures; 4.5 The Visualisation of the Data; 4.5.1 Metaphor and Data Visualisation; 4.5.2 Time and Data Visualisation; 4.5.3 Game Engines and Data Visualisation; 4.5.4 Our Approach; 4.6 User Study and Discussion; 4.6.1 Task 1; 4.6.2 Task 2; 4.6.3 Task 3; 4.6.4 Task 4; 4.7 Conclusion; References; 5 Personalized Body Modeling; 5.1 Introduction.
505 | 8 | | _a5.2 State of the Art on Personalized Body Shape Reconstruction; 5.2.1 Shape Reconstruction from Measurement Data; 5.2.2 Building and Searching in a Shape Space; 5.2.3 Dynamic data; 5.3 2D-3D Registration of a Morphable Model; 5.3.1 Dynamic data; 5.3.2 Shape Recovery by Searching Deformation Space; 5.3.3 Mapping Textures; 5.3.4 Single Image Input; 5.4 Conclusion; References; 6 Parameterized Facial Modelling and Animation; 6.1 Introduction; 6.2 Parametric Representation of Facial Model; 6.2.1 Linear/Multilinear Space of Facial Meshes; 6.2.2 Linear Space of Mesh Deformation.
520 | | | _aThis is the first book to describe how Autonomous Virtual Humans and Social Robots can interact with real people, be aware of the environment around them, and react to various situations. Researchers from around the world present the main techniques for tracking and analysing humans and their behaviour and contemplate the potential for these virtual humans and robots to replace or stand in for their human counterparts, tackling areas such as awareness and reactions to real world stimuli and using the same modalities as humans do: verbal and body gestures, facial expressions and gaze to aid seamless human-computer interaction (HCI). The research presented in this volume is split into three sections: ·User Understanding through Multisensory Perception: deals with the analysis and recognition of a given situation or stimuli, addressing issues of facial recognition, body gestures and sound localization. ·Facial and Body Modelling Animation: presents the methods used in modelling and animating faces and bodies to generate realistic motion. ·Modelling Human Behaviours: presents the behavioural aspects of virtual humans and social robots when interacting and reacting to real humans and each other. Context Aware Human-Robot and Human-Agent Interaction would be of great use to students, academics and industry specialists in areas like Robotics, HCI, and Computer Graphics.
650 | | 0 | _aRobotics.
650 | | 0 | _aHuman-robot interaction.
650 | | 0 | _aArtificial intelligence.
650 | | 7 | _aTECHNOLOGY & ENGINEERING _xEngineering (General) _2bisacsh
650 | | 7 | _aArtificial intelligence. _2fast _0(OCoLC)fst00817247
650 | | 7 | _aHuman-robot interaction. _2fast _0(OCoLC)fst01784286
650 | | 7 | _aRobotics. _2fast _0(OCoLC)fst01098997
655 | | 4 | _aElectronic books.
700 | 1 | | _aMagnenat-Thalmann, Nadia, _d1946- _eeditor.
700 | 1 | | _aYuan, Junsong, _eeditor.
700 | 1 | | _aThalmann, Daniel, _eeditor.
700 | 1 | | _aYou, Bum-Jae, _eeditor.
776 | 0 | 8 | _iPrint version: _tContext Aware Human-Robot and Human-Agent Interaction. _dCham : Springer International Publishing, ©2015 _z9783319199467
830 | | 0 | _aHuman-computer interaction series.
938 | | | _aCoutts Information Services _bCOUT _n32790963
938 | | | _aEBL - Ebook Library _bEBLB _nEBL4178295
938 | | | _aEBSCOhost _bEBSC _n1072272
938 | | | _aIngram Digital eBook Collection _bIDEB _ncis32790963
938 | | | _aYBP Library Services _bYANK _n12624136
029 | 1 | | _aDEBSZ _b453693814
029 | 1 | | _aNLGGC _b397053053
029 | 1 | | _aDEBBG _bBV043626889
029 | 1 | | _aCHVBK _b374527652
029 | 1 | | _aCHNEW _b000893608
942 | | | _cBOOK
994 | | | _aZ0 _bSUPMU
948 | | | _hNO HOLDINGS IN SUPMU - 229 OTHER HOLDINGS
596 | | | _a1 2
999 | | | _c2137 _d2137