International Symposium on Learning 2006 – Day 2 – Part 1

Day 2 – International Symposium on Learning 2006

Thursday, November 30, 2006

4:38 PM

 

International Symposium on Learning 2006, sponsored by KAST

http://learning.kast.or.kr/

Raja Chatila – Learning Robots: From spatial cognition to skill acquisition

LAAS-CNRS

Toulouse, France

Raja.Chatila@laas.fr

What is a cognitive robot?

o        Integration of perception, decision, and action

o        Learning concepts and interpreting the environment

o        Deliberation and decision-making

o        Learning new skills

o        Communication, interaction, and language

Robot companion – European Project COGNIRON

http://www.cogniron.org

Learning Requirements:

o        Objects

·         Multi-sensory 3D object modeling and recognition; from view-based to object-based

o        Space

·         Maps, regions, concepts. Appearance-based, geometrical, and topological labeled models; landmarks

o        Situations

·         Spatial and temporal relationships

Spatial mapping requires a combination of object and topographical processing.

This involves incremental mapping that the robot learns over time.
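The incremental-mapping idea above can be sketched roughly as follows. This is not the speaker's system, just a minimal, generic illustration: each sensor sweep is folded into a running occupancy grid, so the map accumulates over time rather than being built in one pass. The class name, log-odds increments, and threshold are all assumptions for illustration.

```python
# Hypothetical sketch of incremental mapping (not from the talk): the robot
# fuses each new observation into a running occupancy grid, so the map is
# learned gradually across many sensor sweeps.

class OccupancyGrid:
    def __init__(self, width, height):
        # log-odds of each cell being occupied; 0.0 means "unknown"
        self.cells = [[0.0] * width for _ in range(height)]

    def update(self, observations):
        """Fold one sensor sweep into the map incrementally.

        observations: iterable of (x, y, occupied) cell readings.
        """
        HIT, MISS = 0.9, -0.4   # assumed evidence increments
        for x, y, occupied in observations:
            self.cells[y][x] += HIT if occupied else MISS

    def is_occupied(self, x, y, threshold=0.5):
        return self.cells[y][x] > threshold

# Two sweeps over the same cell accumulate evidence:
grid = OccupancyGrid(10, 10)
grid.update([(2, 3, True)])
grid.update([(2, 3, True)])
```

The point of the sketch is only that each `update` call refines the existing map instead of rebuilding it, which is what "learning the map over time" amounts to.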

Beyond spatial cognition, toward communication. This is really a sort of communicative competence: the robot takes signals from the environment and interprets them to devise appropriate responses.

Object modeling: first the robot needs to recognize items in the environment. This requires constant processing of environmental data, including 2D tracking and 3D representations.

A lot of training is required, similar to training voice recognition or even handwriting recognition. They started with videos of people performing a series of actions, which the robot then interprets via a 3D representation of the human.

Move to autonomous learning.

From learning concepts to learning skills:

o        Open-ended

o        Common representations

o        Process guided by utility

o        Incremental learning

Interesting building of temporal knowledge. The robot stores information “maps” about an object from multiple perceptual angles. These maps are then combined to enable the robot to recognize the object at any angle.

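The multi-view idea can be sketched like this. This is a minimal illustration, not the speaker's actual method: an object is stored as several per-viewpoint feature "maps", and recognition succeeds if a query view matches any stored view well enough. The class, the example features, and the Jaccard-overlap scoring are all assumptions made for the sketch.

```python
# Hypothetical sketch (not the talk's system): combining per-angle view
# "maps" so an object can be recognized from any stored viewpoint.

class MultiViewModel:
    def __init__(self, name):
        self.name = name
        self.views = []          # one feature set per perceptual angle

    def add_view(self, features):
        self.views.append(set(features))

    def match_score(self, query):
        # best overlap (Jaccard similarity) across all stored views
        q = set(query)
        return max((len(q & v) / len(q | v) for v in self.views), default=0.0)

mug = MultiViewModel("mug")
mug.add_view({"handle", "rim", "body"})     # side view
mug.add_view({"rim", "opening", "body"})    # top view

# A top-down query still matches, via the stored top-view map:
score = mug.match_score({"rim", "opening", "body"})
```

Taking the best match across views is what makes the combined model viewpoint-tolerant even though each individual map covers only one angle.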

Multiple object recognitions can be combined to recognize groups (scenes) of objects. This is similar to chunking in language learning: grouping items for easier production, or in this case, recognition.

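The chunking analogy might look like this in code. This is a toy sketch under my own assumptions, not anything presented in the talk: individual object detections are grouped, and a scene "chunk" is recognized when all of its member objects appear among the detections. The scene names and member lists are invented for illustration.

```python
# Hypothetical sketch of "chunking" object recognitions into scenes:
# a scene is recognized when all of its member objects have been detected.

SCENES = {
    "desk":    {"monitor", "keyboard", "mouse"},
    "kitchen": {"stove", "sink", "fridge"},
}

def recognize_scene(detected_objects):
    """Return names of scenes whose member objects all appear in the detections."""
    detected = set(detected_objects)
    return [name for name, members in SCENES.items() if members <= detected]

# Extra detections ("mug") don't block the match; missing ones do:
found = recognize_scene({"monitor", "keyboard", "mouse", "mug"})
```

As with chunking in language, the payoff is that a whole scene becomes a single recognizable unit instead of a set of independent objects.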

He provided a cognitive chart at the end, which would have been an hour's discussion in its own right. I wish he could have spent more time on it, given this audience.


Take away – Learning about one's environment is really a precursor to interacting with humans in any sort of naturalistic way. To an extent this is possible at this point, but it will require a lot of work. Also, autonomy is still a ways off, but it's as much a question of time and information as it is of technology: the building of communal knowledge.
