DevLog: Hello World

Our ambition is to build droids that anyone can use to scale themselves in the physical world, just as programmers do in the digital one. For the framing and roadmap, see Commoditizing the Newton.

In a sentence, the beta I’d like to put in users’ hands is a cheap robot, like a downscaled version of Mobile Aloha, that they can “program” using AR on their phone.

For now, our focus is making a PoC that shows how this can be done in simulation. Today I’ll share the first step towards that goal.

Demo

Below is a clip of a model I trained from scratch to play ring pong; training took about twenty minutes in simulation.

[Image: ring pong training demo]

This is a “hello world” for getting the iteration loop set up between the brain (ML algorithms, code), the bodies (the eight pong paddles), and the environment.

The Droid PoC will use a digital twin of the dev kit robot and provide examples of a next-gen programming interface, but to get there we need to iterate rapidly between ML and simulation.

Discussion

We wanted to test concepts in one tool, and after doing a spike with Isaac Sim, MuJoCo, and Unity, we found that Godot is the simplest, requiring the least specialized knowledge.

What matters now is principles, not production.

So, for our PoC we are using Godot as our simulator and a local container to run PyTorch models. Of course, we likely won’t be able to do high-quality sim2real transfer this way, but once we’ve proven the principle we can move on to higher-fidelity simulators.

To do this we leveraged GodotRL, an open-source Hugging Face project that comes with a Python bridge to drive the simulation and collect environment observations, as well as several state-of-the-art RL algorithms.
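We won’t reproduce GodotRL’s exact API here, but the bridge pattern it implements boils down to a Gym-style reset/step loop driven from Python. Below is a minimal, hedged sketch of that loop using a toy stand-in environment; `ToyPaddleEnv`, `run_episode`, and the reward shaping are our own illustration, not GodotRL code.

```python
import random

class ToyPaddleEnv:
    """Stand-in for the GodotRL Python bridge: exposes the same
    Gym-style reset/step shape, but simulates a single paddle
    chasing a static ball in pure Python."""

    def reset(self):
        self.paddle = 0.5            # paddle position in [0, 1]
        self.ball = random.random()  # target position in [0, 1]
        return self._obs()

    def step(self, action):
        # action: -1 (move left), 0 (stay), +1 (move right)
        self.paddle = min(1.0, max(0.0, self.paddle + 0.1 * action))
        reward = 1.0 - abs(self.paddle - self.ball)  # closer is better
        done = abs(self.paddle - self.ball) < 0.05
        return self._obs(), reward, done, {}

    def _obs(self):
        return (self.paddle, self.ball)


def run_episode(env, policy, max_steps=50):
    """Drive the environment with a policy, the way the bridge's
    training loop drives the Godot simulation over a socket."""
    obs = env.reset()
    total = 0.0
    for _ in range(max_steps):
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
        if done:
            break
    return total

# A trivial greedy policy: step the paddle toward the ball.
greedy = lambda obs: 1 if obs[1] > obs[0] else -1
```

In the real setup, the environment object wraps a running Godot process instead of a Python class, but the loop the RL algorithm sees is the same.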

Be warned: their documentation is sparse, and if you try to follow the README to get set up you will run into errors.

Decision

Where do we go from here? Here’s the zoomed-in roadmap focusing just on the PoC, with v1 being a complete PoC of a virtualized droid obeying spatial programs that the programmer creates and then observes.

Alpha — Demo & Proof of Concept

  • Target completion: Q3 2024 to Q1 2025
  • End result: ability to trivially program a droid using AR in simulation
  • Milestones
    • Droid v0.1: Handcrafted 2D Locomotion
    • Droid v0.2: Auto-Curriculum (generate bodies, envs, programs)
    • Droid v0.3: Scale-up Test (N bodies, envs, programs)
    • Droid v0.X: TBD
    • Droid v1: Upscaling to 4D and Hi-Fi Robots
  • June & July
    1. Consolidate Research / Master plan
    2. Droid v0.1: Handcrafted 2D Locomotion
    3. Set Up Iteration Loop
      • Handcrafted 2D Locomotion
      • Fermi Act v0.1: Novel ML Architecture
      • Gym v0.1: Handcrafted Envs
      • SpaceTime v0.1: Handcrafted Spatial Programs
      • Body v0.1: Simple Zero-Joint Bodies
      • Simple GUI for Spatial Programming

The first step will be the same demo with a different brain behind the scenes. We will implement a decision transformer architecture with a few dashes of secret sauce.
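The secret sauce stays secret, but one standard ingredient of the decision transformer recipe is conditioning on returns-to-go: the sum of the current and all future rewards at each timestep, fed to the model alongside states and actions so it can be prompted with a target return at inference time. A minimal sketch of that preprocessing step (the function name and sample rewards are ours for illustration):

```python
def returns_to_go(rewards):
    """Compute the return-to-go at each timestep: the sum of this
    step's reward and all rewards after it. Decision transformers
    tokenize (return-to-go, state, action) triples per timestep."""
    rtg = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):  # accumulate from the end
        running += rewards[t]
        rtg[t] = running
    return rtg

# Example: a four-step episode.
print(returns_to_go([1.0, 0.0, 2.0, 1.0]))  # [4.0, 3.0, 3.0, 1.0]
```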