AVSP 2013

The 12th International Conference on Auditory-Visual Speech Processing


ISSN 2308-975X (Online)

Editors: Slim Ouni, Frédéric Berthommier and Alexandra Jesse
Publisher: Inria
August 29 - September 1, 2013, Annecy, France.

Foreword

KN1 − Keynote 1: Embodied Language Learning with the Humanoid Robot iCub
Angelo Cangelosi
KN2 − Keynote 2: Audiovisual speech integration: Modulatory factors and the link to sound symbolism
Charles Spence




1 − Who Presents Worst? A Study on Expressions of Negative Feedback in Different Intergroup Contexts
Mandy Visser, Emiel Krahmer and Marc Swerts
p.5-10.

2 − Audio-Visual Speaker Conversion using Prosody Features
Adela Barbulescu, Thomas Hueber, Gérard Bailly and Rémi Ronfard
p.11-16.

3 − Spontaneous synchronisation between repetitive speech and rhythmic gesture
Gregory Zelic, Jeesun Kim and Christopher Davis
p.17-20.

4 − Culture and nonverbal cues: How does power distance influence facial expressions in game contexts?
Phoebe Mui, Martijn Goudbeek, Marc Swerts and Per van der Wijst
p.21-26.

5 − Predicting head motion from prosodic and linguistic features
Angelika Hönemann, Hansjörg Mixdorff, Diego Evin, Alejandro J. Hadad and Sascha Fagel
p.27-30.

6 − Visual Control of Hidden-Semi-Markov-Model based Acoustic Speech Synthesis
Jakob Hollenstein, Michael Pucher and Dietmar Schabus
p.31-36.

7 − Objective and Subjective Feature Evaluation for Speaker-Adaptive Visual Speech Synthesis
Dietmar Schabus, Michael Pucher and Gregor Hofer
p.37-42.

8 − Audio-visual Interaction in Sparse Representation Features for Noise Robust Audio-visual Speech Recognition
Peng Shen, Satoshi Tamura and Satoru Hayamizu
p.43-48.

9 − Assessing the Visual Speech Perception of Sample-Based Talking Heads
Paula Costa and José Mario De Martino
p.49-54.

10 − Speech animation using electromagnetic articulography as motion capture data
Ingmar Steiner, Korin Richmond and Slim Ouni
p.55-60.

11 − Phonetic information in audiovisual speech is more important for adults than for infants: preliminary findings
Martijn Baart, Jean Vroomen, Kathleen Shaw and Heather Bortfeld
p.61-64.

12 − Audiovisual speech perception in children with autism spectrum disorders and typical controls
Julia Irwin and Lawrence Brancazio
p.65-70.

13 − Looking for the bouba-kiki effect in prelexical infants
Mathilde Fort, Alexa Weiß, Alexander Martin and Sharon Peperkamp
p.71-76.

14 − Audiovisual Speech Perception in Children and Adolescents With Developmental Dyslexia: No Deficit With McGurk Stimuli
Margriet Groen and Alexandra Jesse
p.77-80.

15 − Effects of forensically-realistic facial concealment on auditory-visual consonant recognition in quiet and noise conditions
Natalie Fecher and Dominic Watt
p.81-86.

16 − Impact of Cued Speech on audio-visual speech integration in deaf and hearing adults
Clémence Bayard, Cécile Colin and Jacqueline Leybaert
p.87-92.

17 − Acoustic and visual adaptations in speech produced to counter adverse listening conditions
Valerie Hazan and Jeesun Kim
p.93-98.

18 − Role of audiovisual plasticity in speech recovery after adult cochlear implantation
Pascal Barone, Kuzma Strelnikov and Olivier Déguine
p.99-104.

19 − Auditory and Auditory-Visual Lombard Speech Perception by Younger and Older Adults
Michael Fitzpatrick, Jeesun Kim and Chris Davis
p.105-110.

20 − Integration of Acoustic and Visual Cues in Prominence Perception
Hansjörg Mixdorff, Angelika Hönemann and Sascha Fagel
p.111-116.

21 − Detecting auditory-visual speech synchrony: how precise?
Chris Davis and Jeesun Kim
p.117-122.

22 − How far out? The effect of peripheral visual speech on speech perception
Jeesun Kim and Chris Davis
p.123-128.

23 − Temporal integration for live conversational speech
Ragnhild Eg and Dawn Behne
p.129-134.

24 − Mixing faces and voices: a study of the influence of faces and voices on audiovisual intelligibility
Jérémy Miranda and Slim Ouni
p.135-140.

25 − The touch of your lips: haptic information speeds up auditory speech processing
Avril Treille, Camille Cordeboeuf, Coriandre Vilain and Marc Sato
p.141-146.

26 − Data and simulations about audiovisual asynchrony and predictability in speech perception
Jean-Luc Schwartz and Christophe Savariaux
p.147-152.

27 − The effect of musical aptitude on the integration of audiovisual speech and non-speech signals in children
Kaisa Tiippana, Kaupo Viitanen and Riia Kivimäki
p.153-156.

28 − The sight of your tongue: neural correlates of audio-lingual speech perception
Avril Treille, Coriandre Vilain, Thomas Hueber, Jean-Luc Schwartz, Laurent Lamalle and Marc Sato
p.157-162.

29 − Visual Front-End Wars: Viola-Jones Face Detector vs Fourier Lucas-Kanade
Shahram Kalantari, Rajitha Navarathna, David Dean and Sridha Sridharan
p.163-168.

30 − Aspects of co-occurring syllables and head nods in spontaneous dialogue
Simon Alexanderson, David House and Jonas Beskow
p.169-172.

31 − Avatar User Interfaces in an OSGi-based System for Health Care Services
Sascha Fagel, Andreas Hilbert, Christopher Mayer, Martin Morandell, Matthias Gira and Martin Petzold
p.173-174.

32 − Automatic Feature Selection for Acoustic-Visual Concatenative Speech Synthesis: Towards a Perceptual Objective Measure
Utpala Musti, Vincent Colotte, Slim Ouni, Caroline Lavecchia, Brigitte Wrobel-Dautcourt and Marie-Odile Berger
p.175-180.

33 − Modulating fusion in the McGurk effect by binding processes and contextual noise
Olha Nahorna, Ganesh Attigodu Chandrashekara, Frédéric Berthommier and Jean-Luc Schwartz
p.181-186.

34 − Visual Voice Activity Detection at Different Speeds
Bart Joosten, Eric Postma and Emiel Krahmer
p.187-190.

35 − GMM Mapping of Visual Features of Cued Speech with Speech Spectral Features
Denis Beautemps, Zuheng Ming and Gang Feng
p.191-196.

36 − Confusion Modelling for Automated Lip-Reading using Weighted Finite-State Transducers
Dominic Howell, Stephen Cox and Barry-John Theobald
p.197-202.

37 − Transforming Neutral Visual Speech into Expressive Visual Speech
Felix Shaw and Barry-John Theobald
p.203-208.

38 − Differences in the audio-visual detection of word prominence from Japanese and English speakers
Martin Heckmann, Keisuke Nakamura and Kazuhiro Nakadai
p.209-214.

39 − Speaker Separation using Visually-derived Binary Masks
Faheem Khan and Ben Milner
p.215-220.

40 − Improvement of Lipreading Performance Using Discriminative Feature and Speaker Adaptation
Takumi Seko, Naoya Ukai, Satoshi Tamura and Satoru Hayamizu
p.221-226.

41 − Efficient Face Model for Lip Reading
Takeshi Saitoh
p.227-232.




