Ryan Julian
I am a Research Scientist and Engineer at Google DeepMind in Mountain View (formerly the Robotics Team at Google Brain). I love doing fundamental and applied research at the intersection of cutting-edge robotics and product development. My goal is to deploy robot learning at massive scales in the wild, and my research interests revolve around that goal: imitation learning in the wild, massively multi-task learning, reinforcement learning, and curricula for efficient task learning.
I earned my PhD at the Robotics and Embedded Systems Laboratory, part of the Department of Computer Science at the University of Southern California. During my PhD, I was also a Student Researcher on the Robotics team at Google Brain. My PhD advisor was Gaurav Sukhatme, and from 2017 to 2018, I was also co-advised by Stefan Schaal. My advisors at Google Research were Karol Hausman and Chelsea Finn.
From 2014 to 2017, I worked at Google and X on a series of robotics projects, including the Everyday Robot project, whose goal was to make a robot that could "learn to help everyone, every day." I worked on many parts of the robotics stack, including high-level programming APIs, 3D visualization, interprocess communication, WiFi and cloud connectivity, automatic calibration, and automation for the manufacturing lines building robots. Some of the robots I helped create can be seen in the company's earliest robot learning work. Prior to X, I was an R&D Engineer at Leap Motion (now Ultraleap), where I worked on hardware, firmware, and test automation for computer vision-based hand-tracking devices, and earned a couple of patents in the process. Before that, I spent a year as a Research Scientist at UC Berkeley, where I did research with, and built controllers for, some of the world's smallest intelligent robots.
I have a BS in EECS from UC Berkeley, where I worked with Ron Fearing at the Biomimetic Millisystems Lab.
Email /
CV /
GitHub /
Google Scholar /
LinkedIn
Research
I'm interested in robotics and machine learning, especially in the science of building learning robots which are reliable and scalable.
Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning
Ryan Julian, Benjamin Swanson, Gaurav S. Sukhatme, Sergey Levine, Chelsea Finn, Karol Hausman
Conference on Robot Learning, 2020
arXiv /
video /
slides /
talk /
site /
Two Minute Papers /
VentureBeat /
USC News
We formulate a fine-tuning procedure for off-policy reinforcement learning which is simple, fast, effective, and sample-efficient. We then show that it can be used to adapt robotic manipulation policies to novel environments, lighting conditions, objects, robot wear-and-tear, etc. in a continual learning setting.
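To make this concrete, here is a minimal, generic sketch, not the paper's implementation, of what off-policy fine-tuning looks like in code: restore pre-trained actor and critic weights, then simply continue the usual off-policy update loop on transitions collected in the changed environment. The toy environment, the DDPG-style update, the network sizes, and the checkpoint filenames are all illustrative assumptions.

```python
# Sketch of off-policy fine-tuning: keep the pre-trained weights and replay
# machinery, and keep running ordinary off-policy updates on new data.
# Everything below (env, networks, hyperparameters) is an illustrative toy.
import copy
import collections
import random

import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 8, 2

class ToyManipulationEnv:
    """Stand-in for the changed environment (new objects, lighting, wear, ...)."""
    def reset(self):
        self.t = 0
        return torch.randn(OBS_DIM)

    def step(self, action):
        self.t += 1
        obs = torch.randn(OBS_DIM)
        reward = -float(action.norm())      # placeholder reward signal
        done = self.t >= 50
        return obs, reward, done

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, out))

actor = mlp(OBS_DIM, ACT_DIM)               # policy pi(s) -> a
critic = mlp(OBS_DIM + ACT_DIM, 1)          # Q(s, a)
# Hypothetical checkpoints from pre-training (paths are made up):
# actor.load_state_dict(torch.load("pretrained_actor.pt"))
# critic.load_state_dict(torch.load("pretrained_critic.pt"))
target_critic = copy.deepcopy(critic)

buffer = collections.deque(maxlen=100_000)  # replay buffer for new experience
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=3e-4)
env, gamma = ToyManipulationEnv(), 0.99

for episode in range(10):                   # fine-tuning episodes in the new setting
    obs, done = env.reset(), False
    while not done:
        with torch.no_grad():
            act = torch.tanh(actor(obs)) + 0.1 * torch.randn(ACT_DIM)  # exploration noise
        next_obs, rew, done = env.step(act)
        buffer.append((obs, act, rew, next_obs, float(done)))
        obs = next_obs

        if len(buffer) < 256:
            continue
        batch = random.sample(list(buffer), 256)
        s, a, r, s2, d = (torch.stack([torch.as_tensor(x[i], dtype=torch.float32)
                                       for x in batch]) for i in range(5))
        # Critic update: one-step TD target from the target network.
        with torch.no_grad():
            a2 = torch.tanh(actor(s2))
            target = r + gamma * (1 - d) * target_critic(torch.cat([s2, a2], -1)).squeeze(-1)
        q = critic(torch.cat([s, a], -1)).squeeze(-1)
        critic_loss = ((q - target) ** 2).mean()
        critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
        # Actor update: deterministic policy gradient through the critic.
        actor_loss = -critic(torch.cat([s, torch.tanh(actor(s))], -1)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
        # Polyak-average the target critic.
        with torch.no_grad():
            for p, tp in zip(critic.parameters(), target_critic.parameters()):
                tp.mul_(0.995).add_(0.005 * p)
```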
Scaling Simulation-to-Real Transfer by Learning a Latent Space of Robot Skills
Ryan Julian, Eric Heiden, Zhanpeng He, Hejia Zhang, Stefan Schaal, Joseph J. Lim, Gaurav S. Sukhatme, Karol Hausman
International Journal of Robotics Research, 2020
paper /
code
We show how to embed robotic manipulation skills into a continuously-parameterized latent space using variational inference, and how useful these learned skill spaces are for addressing several important problems in robot learning. We demonstrate how to use them to achieve sim2real transfer, long-horizon hierarchical RL, and meta-RL, all in real manipulation experiments. We conclude by designing an MPC-like algorithm for zero-shot skill learning in latent space, which allows us to generalize simpler skills, such as reaching, into more complex combinations of those skills, such as drawing.
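As a rough illustration (assumed, not the paper's code), the sketch below shows the two pieces such a latent skill space rests on: a variational encoder that maps a training task to a Gaussian over latent skills z, and a single policy conditioned on (state, z). It ends with a crude random-shooting search over z against a task cost, standing in for the MPC-like latent-space search mentioned above; all dimensions, names, and the cost function are made up for the example.

```python
# Sketch of a learned latent skill space: q(z | task) + pi(a | s, z).
# Dimensions, architectures, and the task cost below are illustrative only.
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM, LATENT_DIM, NUM_TRAIN_SKILLS = 10, 4, 3, 8

class SkillEncoder(nn.Module):
    """Maps a one-hot training-task id to a Gaussian over latent skills z."""
    def __init__(self):
        super().__init__()
        self.mu = nn.Linear(NUM_TRAIN_SKILLS, LATENT_DIM)
        self.log_std = nn.Linear(NUM_TRAIN_SKILLS, LATENT_DIM)

    def forward(self, task_onehot):
        return torch.distributions.Normal(self.mu(task_onehot),
                                          self.log_std(task_onehot).exp())

class LatentConditionedPolicy(nn.Module):
    """One policy shared across all skills: pi(a | s, z)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + LATENT_DIM, 64),
                                 nn.ReLU(), nn.Linear(64, ACT_DIM))

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))

encoder, policy = SkillEncoder(), LatentConditionedPolicy()

# Training-time usage: sample z ~ q(z | task) and act with pi(a | s, z).
# (The RL + variational objective that actually trains these is omitted.)
task = torch.eye(NUM_TRAIN_SKILLS)[2]
z = encoder(task).rsample()
action = policy(torch.randn(STATE_DIM), z)

# Zero-shot usage: search directly over z for a new task, here with naive
# random shooting against a hypothetical cost instead of a real MPC loop.
def task_cost(z):
    # In reality this would come from rolling out pi(a | s, z) in simulation.
    return ((z - torch.tensor([0.5, -0.2, 0.1])) ** 2).sum()

candidates = torch.randn(256, LATENT_DIM)
best_z = candidates[torch.argmin(torch.stack([task_cost(c) for c in candidates]))]
action_for_new_task = policy(torch.randn(STATE_DIM), best_z)
```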
Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning
Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, Sergey Levine
Conference on Robot Learning, 2019
arXiv /
code /
site
A benchmark for multi-task and meta-reinforcement learning consisting of 50 distinct robotic manipulation tasks. We also provide baseline results for 8 popular multi-task and meta-RL algorithms. Meta-World is an active open-source project maintained by me and students I supervise at USC.
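A minimal usage sketch, roughly following the Meta-World README; environment names (e.g. 'pick-place-v2') and the reset/step signatures have changed across releases, so treat the details as assumptions rather than a reference.

```python
# Roll out a random policy in one sampled Meta-World ML1 training task.
# Based on the project README; exact names and Gym API version may differ.
import random
import metaworld

ml1 = metaworld.ML1('pick-place-v2')            # one task family, many goal variations
env = ml1.train_classes['pick-place-v2']()      # instantiate the environment class
env.set_task(random.choice(ml1.train_tasks))    # pick one of the sampled training tasks

obs = env.reset()                               # newer versions return (obs, info)
for _ in range(10):
    action = env.action_space.sample()          # random policy, just to exercise the API
    obs, reward, done, info = env.step(action)  # newer Gymnasium versions return 5 values
```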
Scaling Simulation-to-Real Transfer by Learning Composable Robot Skills
Ryan Julian, Eric Heiden, Zhanpeng He, Hejia Zhang, Stefan Schaal, Joseph J. Lim, Gaurav S. Sukhatme, Karol Hausman
International Symposium on Experimental Robotics, 2018
arXiv /
video /
code /
slides
We push robot manipulation skill representation learning into the real world, and show that skill embeddings can greatly improve the sample efficiency of sim2real robot learning. Using real robot manipulation experiments, we evaluate several approaches for exploiting learned skill spaces, including interpolating between known tasks, end-to-end learning with skill embeddings as macro actions, and search-based planning in the learned latent space.
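One of those uses, interpolating between known tasks, is simple to picture in code. Below is a tiny assumed sketch: blend two learned skill latents linearly and condition the shared policy on the result (the latent values and the `policy` call are hypothetical placeholders).

```python
# Interpolating between two learned skill embeddings in latent space.
# The latent vectors and the policy are stand-ins for the learned ones.
import torch

z_skill_a = torch.tensor([ 0.8, -0.1,  0.3])    # latent for one trained skill (made up)
z_skill_b = torch.tensor([-0.7,  0.2,  0.4])    # latent for another trained skill (made up)

def interpolate(z_a, z_b, alpha):
    """Linear interpolation in the learned latent skill space."""
    return (1.0 - alpha) * z_a + alpha * z_b

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    z = interpolate(z_skill_a, z_skill_b, alpha)
    # action = policy(state, z)                 # condition the shared policy on the blend
    print(alpha, z.tolist())
```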
Cooperative Control and Modeling for Narrow Passage Traversal with an Ornithopter MAV and Lightweight Ground Station
Ryan Julian, Cameron J. Rose, Humphrey Hu, Ronald S. Fearing
Autonomous Agents and Multiagent Systems, 2013
paper /
video /
code
We demonstrate cooperative target-seeking between a 13-gram ornithopter (flapping-wing) MAV and a lightweight ground station equipped with computer vision. The ground station simulates the compute and sensing capabilities of a lightweight insect-inspired walking robot, and communicates with the MAV using live streaming video. This work also introduces a new ornithopter MAV design, which improves on the payload, control, and maneuverability of previous designs. This design, dubbed the H2Bird, served as a model for many future ornithopter designs.
Performance Analysis and Terrain Classification for a Legged Robot Over Rough Terrain
Fernando L. Garcia Bermudez, Ryan Julian, Duncan W. Haldane, Pieter Abbeel, Ronald S. Fearing
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012
paper /
code
We present an accurate, robust, and low-lag terrain classification algorithm for ultra-lightweight walking ground robots. The classifier uses only vibration data from the on-board inertial measurement unit (IMU) and back-EMF sensors. Lightweight running robots experience intense vibrational coupling with the ground, which appears as extreme broad-spectrum noise in the IMU and back-EMF signals. Conventional methods cannot cope with noise of this magnitude and bandwidth. We introduce a novel method-of-moments kernel and apply it to a support vector machine (SVM) to overcome this limitation. The resulting classifier achieves 94% overall classification accuracy across 3 test terrains.
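For a flavor of the general approach, not the paper's exact method-of-moments kernel, the sketch below summarizes each window of noisy vibration data by its per-channel statistical moments and classifies the resulting feature vectors with an off-the-shelf SVM; the data and labels here are synthetic and purely illustrative.

```python
# Moment features from vibration windows + SVM terrain classifier (generic
# illustration; not the paper's method-of-moments kernel). Data is synthetic.
import numpy as np
from sklearn.svm import SVC

def moment_features(window):
    """Per-channel moments of a (samples x channels) vibration window."""
    mean = window.mean(axis=0)
    var = window.var(axis=0)
    centered = window - mean
    skew = (centered ** 3).mean(axis=0) / np.maximum(var, 1e-8) ** 1.5
    kurt = (centered ** 4).mean(axis=0) / np.maximum(var, 1e-8) ** 2
    return np.concatenate([mean, var, skew, kurt])

rng = np.random.default_rng(0)
terrain = rng.integers(1, 4, size=300)                    # pretend terrain ids 1..3
# 300 synthetic windows of 200 samples x 6 channels, noisier on "rougher" terrain.
windows = rng.normal(size=(300, 200, 6)) * terrain[:, None, None]
labels = terrain - 1

X = np.stack([moment_features(w) for w in windows])
clf = SVC(kernel="rbf", C=10.0).fit(X[:200], labels[:200])
print("held-out accuracy:", clf.score(X[200:], labels[200:]))
```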