Accepted papers

Final papers are now available on the proceedings page.

Oral talk with poster presentation

Eri Kabayama, Muhammad Attamimi, Ichiro Kobayashi, Hideki Asoh, Daichi Mochihashi,
Tomoaki Nakamura and Takayuki Nagai
Evaluation of the Sentences Generated Based on Language Model
Applied by Zero-shot Learning
Tomoaki Nakamura, Kensuke Iwata, Takayuki Nagai, Daichi Mochihashi,
Ichiro Kobayashi, Hideki Asoh and Masahide Kaneko
Continuous Motion Segmentation
Based on Reference Point Dependent GP-HSMM
Peer Neubert, Stefan Schubert and Peter Protzel
Learning Vector Symbolic Architectures for Reactive Robot Behaviours
Akira Taniguchi, Tadahiro Taniguchi and Angelo Cangelosi
Multiple Categorization by iCub:
Learning Relationships between Multiple Modalities and Words
*An oral talk is allotted approximately 12 minutes for presentation and 3 minutes for questions and answers.

Spotlight talk with poster presentation

Lorenzo Jamone, Giovanni Saponaro, Atabak Dehban, Alexandre Bernardino and José Santos-Victor
Bayesian modeling of object and tool affordances
Rico Jonschkowski and Oliver Brock
Towards Combining Robotic Algorithms and Machine Learning:
End-To-End Learnable Histogram Filters
Francesco Riccio, Roberto Capobianco and Daniele Nardi
Using Spatio-Temporal Affordances to Represent
Robot Action Semantics
Hiroki Yokoyama and Hiroyuki Okada
Learning non-parametric policies as random variable transformations
*A spotlight talk is allotted approximately 5 minutes for presentation and 2 minutes for questions and answers.

Poster presentation

Eri Tsunekawa, Muhammad Attamimi, Ichiro Kobayashi, Hideki Asoh,
Daichi Mochihashi, Tomoaki Nakamura and Takayuki Nagai
An Approach to Making a Plan for Tidying up with the Action Concepts
Acquired by Multimodal Information
Yumi Hamazono, Ichiro Kobayashi, Hideki Asoh, Daichi Mochihashi,
Muhammad Attamimi, Tomoaki Nakamura and Takayuki Nagai
Learning the Correspondence
between Distributed Semantics of Words and Robot’s Action
Francisco Cruz, German I. Parisi and Stefan Wermter
Multi-modal Integration of Speech and Gestures
for Interactive Robot Scenarios
Takahiro Kobori, Tomoaki Nakamura, Takayuki Nagai, Naoto Iwahashi,
Mikio Nakano, Kotaro Funakoshi and Masahide Kaneko
Robust Comprehension of Spoken Instructions using Multimodal
Information for a Domestic Service Robot
R. Omar Chavez-Garcia, Mihai Andries, Pierre Luce-Vayrac and Raja Chatila
From Perception and Manipulation to Affordance Formalization
Maxime Petit, Tobias Fischer and Yiannis Demiris
Towards the Emergence of Procedural Memories from Lifelong
Multi-Modal Streaming Memories for Cognitive Robots
Tadashi Matsuo and Nobutaka Shimada
Evaluation Function for Shift Invariant Auto-encoder

Submission of camera-ready papers

The camera-ready copy of your manuscript is due by September 14, 2016.
The camera-ready copy must be submitted by email to
mlhlcr2016 [at]
We recommend that authors update their manuscripts in light of the reviewers' comments.

If they wish, authors may extend their papers to up to 4 pages.
  • Deadline for camera-ready paper submission: September 14, 2016