Learning High-Level Tasks through Imitation


This paper presents the cognitive architecture Con-SCIS (conceptual space based cognitive imitation system), which tightly links low-level data processing with knowledge representation in the context of imitation learning. We use the word imitate to refer to the paradigm of program-level imitation: we are interested in the final effects of actions on objects, not in the particular kinematic or dynamic properties of the motion. The same architecture is used both to analyze and represent the task to be imitated, and to perform the imitation by generalizing to novel and different circumstances. The implemented experimental scenario is a simplified two-dimensional world populated with various objects in which observation/imitation takes place. During the observation phase, the user shows her/his hand while performing arbitrary object-manipulation tasks in front of a single calibrated camera. The task is then segmented into meaningful units, and its properties (the objects' color and shape, their absolute position and orientation, and the relations between objects) are represented in high-level symbolic terms. In the imitation phase, this symbolic information is used to drive the robot's actions. To validate our approach, we report results on teaching a humanoid hand/arm robotic system to assemble different workspace objects.
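The pipeline sketched in the abstract (observed objects with color, shape, pose, and inter-object relations, segmented into symbolic action units that later drive imitation) can be illustrated with a minimal data-structure sketch. All class and field names below are hypothetical, chosen for illustration; the paper's actual representation is based on conceptual spaces and is not reproduced here.

```python
from dataclasses import dataclass, field

# Hypothetical symbolic description of the observed scene.
# These structures are illustrative assumptions, not the paper's code.

@dataclass
class WorldObject:
    name: str
    color: str
    shape: str
    position: tuple   # (x, y) in the simplified 2-D workspace
    orientation: float  # radians

@dataclass
class Relation:
    kind: str       # e.g. "on-top-of", "left-of"
    subject: str
    reference: str

@dataclass
class ActionUnit:
    """One meaningful unit produced by task segmentation."""
    verb: str                        # e.g. "place"
    target: str
    postconditions: list = field(default_factory=list)

# Observation phase: the scene and one segmented assembly step.
red_block = WorldObject("block1", "red", "square", (0.2, 0.4), 0.0)
blue_block = WorldObject("block2", "blue", "square", (0.5, 0.1), 0.0)
step = ActionUnit("place", "block1",
                  postconditions=[Relation("on-top-of", "block1", "block2")])

# Imitation phase: because the goal is symbolic (a relation between objects,
# not a recorded trajectory), the same postcondition generalizes to novel
# object positions in the robot's workspace.
print(step.postconditions[0].kind)  # prints "on-top-of"
```

Encoding the task as effect-level postconditions rather than joint trajectories is what makes program-level imitation position-invariant: the robot replans motions to satisfy the symbolic goal in the current scene.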