For updated information, you may visit our new website:
The Autonomous Motion Department focuses on research into intelligent systems that can move, perceive, and learn from experience. We are interested in understanding how autonomous movement systems can bootstrap themselves into competent behavior by starting from a relatively simple set of algorithms and pre-structuring, and then learning from interaction with the environment. Using instructions from a teacher to get started can add useful prior information. Trial-and-error learning to improve movement and perceptual skills is another domain of our research. We are interested in investigating such perception-action-learning loops in biological and robotic systems, which can range in scale from nano systems (cells, nano-robots) to macro systems (humans and humanoid robots).
One part of our research is concerned with learning in neural networks, statistical learning, and machine learning, since the abilities to learn and self-organize seem to be among the most important prerequisites of autonomous systems.
Another part of the research program focuses on how movement can be generated, in particular in human-like systems with bodies, limbs, and eyes. This research touches on the fields of control theory, nonlinear control, nonlinear dynamics, optimization theory, and reinforcement learning.
In a third research branch, we investigate perception, in particular 3D perception with visual, tactile, and acoustic senses. Special emphasis is placed on understanding active perception processes, i.e., how action and perception can assist each other for better performance and robustness.
A fourth component of our work is concerned with human performance: we measure human movements in specially designed behavioral tasks, and we also measure brain activity with neuroimaging techniques. Such research connects closely to work in Computational Neuroscience for motor control, and it includes abstract functional models of how brains may organize sensorimotor coordination.
Finally, a large part of the research in the lab emphasizes studies with actual humanoid and biologically inspired robots. With this work, we are first interested in testing our learning and control theories on real physical systems in order to evaluate the robustness of our research results. Another challenge arises from the scalability of our methods to complex robots: our most advanced robot (similar to the pictures on the right) requires nonlinear control of over 50 physical degrees of freedom that need to be coordinated with visual, tactile, and acoustic perception. When attempting to synthesize behavior with such a machine, the shortcomings of state-of-the-art learning and control theories can be discovered and addressed in subsequent research. Finally, we also use humanoid robots for direct comparisons in behavioral experiments in which the robot is treated like a regular human subject.
Experimental equipment is distributed between the Autonomous Motion Department and the Computational Learning and Motor Control Lab at the University of Southern California. Together, the two sites house a wide range of state-of-the-art experimental robots, including a Sarcos Humanoid Robot, a Willow Garage PR2, a KUKA-LWR bimanual manipulation platform, a Sarcos Master Arm, a Sarcos Slave Arm, a Sarcos Active Vision Head, the Boston Dynamics Little Dog robot, a NAO small humanoid, and a Barrett WAM Arm/Hand. Some pictures of these robots are on the right.