Abstract:
Social robots are becoming popular in therapy sessions for children with autism because they help develop children's learning abilities. For example, robots encourage children to repeat actions they demonstrate, such as high-fives, handshakes, hugging, singing, and dancing, which positively influences the children's treatment. However, robots cannot perceive children's engagement, which limits how natural the interaction can be. In this work, our goal is to build machine learning models that predict a child's engagement with high accuracy. We aim to train a model that recognizes children's engagement from multi-modal behavioral cues captured on video: facial expressions and body movements.