MechE PhD Prospectus Defense: Peter Crowley
- Starts: 11:00 am on Friday, June 14, 2024
- Ends: 1:00 pm on Friday, June 14, 2024
ABSTRACT: Reinforcement learning (RL) presents a powerful paradigm for learning complex robotic tasks. However, RL is only as good as the reward function it is given. Many robotic tasks inherently correspond to sparse reward functions, where the reward signal remains at zero until task completion. This lack of guidance in the reward makes it more difficult for RL to succeed. While reward shaping may be a solution in simple cases, it is difficult to design intermediate rewards for long-horizon, multi-objective, and multiagent tasks. In this dissertation, we study how human demonstrations can inform and improve the learning process for complex robotic tasks. We propose multiple imitation learning and inverse reinforcement learning (IRL) algorithms for solving long-horizon, multi-objective, multiagent robotic tasks. We explore the abilities of these algorithms across two domains: a multiagent capture-the-flag game with aquatic surface vehicles, and a magnetic micro-robot control problem. We present initial results indicating that the proposed algorithms can learn expressive reward functions useful for RL, as well as policies capable of completing tasks in these domains. Additionally, we outline plans for improving and testing these algorithms on further tasks in these domains.
COMMITTEE: ADVISOR/CHAIR: Professor Calin Belta (ME/SE/ECE); Professor Andrew Sabelhaus (ME/SE); Professor Roberto Tron (ME/SE)
- Location:
- ENG 245, 110 Cummington Mall
- Hosting Professor:
- Belta