Context aware human-robot and human-agent interaction / Nadia Magnenat-Thalmann, Junsong Yuan, Daniel Thalmann and Bum-Jae You, editors.

Contributor(s): Magnenat-Thalmann, Nadia, 1946- [editor] | Yuan, Junsong [editor] | Thalmann, Daniel [editor] | You, Bum-Jae [editor]
Material type: Text
Series: Human-Computer Interaction Series
Publisher: Cham: Springer, 2016
Description: xiii, 298 pages
Content type: text
Media type: computer
Carrier type: online resource
Subject(s): Robotics | Human-robot interaction | Artificial intelligence | TECHNOLOGY & ENGINEERING -- Engineering (General)
Genre/Form: Electronic books
Additional physical formats: Print version: Context Aware Human-Robot and Human-Agent Interaction
DDC classification: 629.892
LOC classification: TJ211
Contents:
Preface; Contents; Contributors; Introduction

Part I: User Understanding Through Multisensory Perception

1 Face and Facial Expressions Recognition and Analysis: 1.1 Introduction; 1.2 Literature Review; 1.2.1 Uniform LBP; 1.2.2 Fuzzy LBP; 1.2.3 LTP and Its Variants; 1.2.4 Preprocessing; 1.2.5 Discussion; 1.3 Noise-Resistant Local Binary Patterns; 1.3.1 Problem Analysis of LBP and LTP; 1.3.2 Noise-Resistant LBP; 1.3.3 Extended Noise-Resistant LBP; 1.3.4 Applications on Face and Facial Expression Recognition; 1.4 Experimental Results; 1.4.1 Facial Expression Recognition on the AR Database; 1.4.2 Face Recognition on the AR Database; 1.4.3 Face Recognition on the Extended Yale B Database; 1.4.4 Face Recognition on the O2FN Mobile Database; 1.4.5 Comparison of Computational Complexity; 1.5 Conclusion; References

2 Body Movement Analysis and Recognition: 2.1 Introduction; 2.2 Problematic; 2.3 State of the Art; 2.4 Recent Approaches; 2.4.1 System Overview; 2.4.2 Human Upper Body Gesture Understanding; 2.5 Future Avenues; 2.6 Conclusion; References

3 Sound Source Localization and Tracking: 3.1 Introduction; 3.2 Overview of Sound Source Localization and Tracking Algorithms; 3.2.1 Mathematical Formulation of Sound Source Localization; 3.2.2 Sound Source Localization Using Beamforming-Based Approach; 3.2.3 Sound Source Tracking Using Particle Filter-Based Approach; 3.3 Proposed Robust Speech Source Tracking; 3.3.1 The Harmonic Structure in the Speech Spectrogram; 3.3.2 Speech Source Tracking in the Presence of Sound Interference; 3.3.3 Simulation Results; 3.4 Integration with Social Robot; 3.5 Future Avenues; 3.6 Conclusions; References

Part II: Facial and Body Modelling Animation

4 Modelling Conversation: 4.1 Learning Conversation Skills; 4.2 The State of the Art; 4.3 Summary of Our Approach; 4.4 The Capture, Processing and Interpreting of Non-verbal Speech Cues; 4.4.1 Protocols for Capturing of Speech Data; 4.4.2 Processing the Speech Data as Measures; 4.4.3 Interpreting the Measures as Metameasures; 4.5 The Visualisation of the Data; 4.5.1 Metaphor and Data Visualisation; 4.5.2 Time and Data Visualisation; 4.5.3 Game Engines and Data Visualisation; 4.5.4 Our Approach; 4.6 User Study and Discussion; 4.6.1 Task 1; 4.6.2 Task 2; 4.6.3 Task 3; 4.6.4 Task 4; 4.7 Conclusion; References

5 Personalized Body Modeling: 5.1 Introduction; 5.2 State of the Art on Personalized Body Shape Reconstruction; 5.2.1 Shape Reconstruction from Measurement Data; 5.2.2 Building and Searching in a Shape Space; 5.2.3 Dynamic Data; 5.3 2D-3D Registration of a Morphable Model; 5.3.1 Dynamic Data; 5.3.2 Shape Recovery by Searching Deformation Space; 5.3.3 Mapping Textures; 5.3.4 Single Image Input; 5.4 Conclusion; References

6 Parameterized Facial Modelling and Animation: 6.1 Introduction; 6.2 Parametric Representation of Facial Model; 6.2.1 Linear/Multilinear Space of Facial Meshes; 6.2.2 Linear Space of Mesh Deformation.
Summary: This is the first book to describe how autonomous virtual humans and social robots can interact with real people, be aware of the environment around them, and react to various situations. Researchers from around the world present the main techniques for tracking and analysing humans and their behaviour, and consider the potential for these virtual humans and robots to replace or stand in for their human counterparts, tackling areas such as awareness of and reaction to real-world stimuli using the same modalities humans do: verbal and body gestures, facial expressions and gaze, in aid of seamless human-computer interaction (HCI).

The research presented in this volume is split into three sections:

- User Understanding Through Multisensory Perception: the analysis and recognition of a given situation or stimulus, addressing facial recognition, body gestures and sound localization.
- Facial and Body Modelling Animation: the methods used in modelling and animating faces and bodies to generate realistic motion.
- Modelling Human Behaviours: the behavioural aspects of virtual humans and social robots when interacting and reacting to real humans and to each other.

Context Aware Human-Robot and Human-Agent Interaction will be of great use to students, academics and industry specialists in areas such as robotics, HCI and computer graphics.
Item type  Current library  Call number          Copy number  Status     Notes   Date due  Barcode
Books      Female Library   TJ211.C66 .H86 2016  1            Available  STACKS            51952000315803
Books      Main Library     TJ211.C66 .H86 2016  1            Available  STACKS            51952000315810

Online resource; title from PDF title page (EBSCO, viewed October 5, 2015).

Includes bibliographical references.


