B. Köprü, E. Erzin, “Use of Affective Visual Information for Summarization of Human-Centric Videos,” submitted.
E. Kesim, T. Numanoglu, O. Bayramoglu, B. B. Turker, N. Hussain, T. M. Sezgin, Y. Yemez, E. Erzin, “The eHRI Database: A Multimodal Database of Engagement in Human-Robot Interactions,” submitted.
N. Hussain, E. Erzin, T. M. Sezgin, Y. Yemez, “Training Socially Engaging Robots: Automation of Backchannel Behaviors with Batch Reinforcement Learning,” submitted.
M. A. T. Turan, E. Erzin, “Domain Adaptation for Food Intake Classification with Teacher/Student Learning,” to appear in IEEE Transactions on Multimedia.
S. Asadiabadi, E. Erzin, “Vocal Tract Contour Tracking in rtMRI Using Deep Temporal Regression Network,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 3053–3064, November 2020.
S. Asadiabadi, E. Erzin, “Automatic Vocal Tract Landmark Tracking in rtMRI Using Fully Convolutional Networks and Kalman Filter,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, May 2020.
R. Sadiq, E. Erzin, “Emotion Dependent Domain Adaptation for Speech Driven Affective Facial Feature Synthesis,” submitted.
E. Bozkurt, Y. Yemez, E. Erzin, “Affective Synthesis and Animation of Arm Gestures from Speech Prosody,” Speech Communication, 2020.
E. Bozkurt, Y. Yemez, E. Erzin, “Multimodal Analysis of Speech and Arm Motion for Prosody-Driven Synthesis of Beat Gestures,” Speech Communication, vol. 85, pp. 29–42, December 2016.
K. Kaşarcı, E. Bozkurt, Y. Yemez, E. Erzin, “Realtime Speech-Driven Gesture Animation,” Signal Processing and Communications Applications Conference (SIU), Zonguldak, Turkey, 2016.
E. Bozkurt, E. Erzin, Y. Yemez, “Affect-Expressive Hand Gestures Synthesis and Animation,” IEEE International Conference on Multimedia and Expo (ICME), Torino, Italy, 2015.
E. Bozkurt, S. Asta, S. Ozkul, Y. Yemez, E. Erzin, “Multimodal Analysis of Speech Prosody and Upper Body Gestures Using Hidden Semi-Markov Models,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, Canada, 2013.
F. Ofli, E. Erzin, Y. Yemez, A. M. Tekalp, “Learn2Dance: Learning Statistical Music-to-Dance Mappings for Choreography Synthesis,” IEEE Transactions on Multimedia, vol. 14, no. 3, pp. 747–759, 2012.
F. Ofli, E. Erzin, Y. Yemez, A. M. Tekalp, “Multi-modal Analysis of Dance Performances for Music-Driven Choreography Synthesis,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Dallas, USA, 2010.
M. E. Sargın, Y. Yemez, E. Erzin, A. M. Tekalp, “Analysis of Head Gesture and Prosody Patterns for Prosody-Driven Head-Gesture Animation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 8, pp. 1330–1345, August 2008.
F. Ofli, Y. Demir, Y. Yemez, E. Erzin, A. M. Tekalp, K. Balci, I. Kizoglu, L. Akarun, C. Canton-Ferrer, J. Tilmanne, E. Bozkurt, “An Audio-Driven Dancing Avatar,” Journal on Multimodal User Interfaces, vol. 2, no. 2, pp. 93–103, September 2008.