Workshop · Intermediate

OpenClaw + Robotics with Solo CLI

Calibrate. Teleoperate. Train. Infer. All From One CLI.

Use Solo CLI to work with real robotic hardware — SO-101, Koch, LeKiwi, OpenDroid (Realman R1D2), Unitree G1, and Booster. Calibrate arms, teleoperate, record training datasets, and train vision-language-action (VLA) models like ACT, Pi0, SmolVLA, and GROOT N1.5 on Nebius GPUs. Then deploy your trained policy behind Nebius Serverless + Token Factory for cloud-based inference. Motors are already set up — you start with calibration and go. Full Solo CLI docs at github.com/GetSoloTech/solo-cli.

Jump to Step-by-Step Guide

Who This Is For

Robotics-curious developers, hardware hackers, and AI engineers who want to bridge the digital and physical worlds

Key Value

End-to-end robotics pipeline: calibrate → teleoperate → record → train VLA → deploy inference

You'll Say

"I trained a VLA model on my own teleop data and watched the arm replay the task autonomously — in one session"

What You'll Build

  1. A calibrated and teleoperated robotic arm (SO-101 or Koch) using Solo CLI
  2. A recorded training dataset from your own teleoperation sessions
  3. A trained VLA policy (ACT, Pi0, or SmolVLA) deployed on Nebius for inference

What We'll Cover

  • Solo CLI: one tool for calibration, teleoperation, recording, training, and inference — github.com/GetSoloTech/solo-cli (docs: github.com/GetSoloTech/solo-cli/blob/main/solo/commands/robots/lerobot/README.md)
  • Supported robots: SO-101, Koch, LeKiwi, OpenDroid (Realman R1D2), Unitree G1, Booster — each with official docs and LeRobot integration
  • VLA models: ACT, Pi0, Pi0Fast, Pi05, SmolVLA, GROOT N1.5, X-VLA — what they are, when to use each, and how to train them
  • Recording imitation learning datasets with 'solo robo --record' and the LeRobot framework (github.com/huggingface/lerobot)
  • Training VLA policies on Nebius GPUs via Token Factory
  • Deploying trained models on Nebius Serverless for cloud-based robotic inference

Schedule

12:00 PM – 12:30 PM

Solo CLI Setup & Robot Calibration

Install Solo CLI, meet the hardware, calibrate, and test teleoperation

  • Install Solo CLI from source or PyPI (Python 3.12+, uv package manager)
  • Available robots: SO-101, Koch, LeKiwi, OpenDroid, Unitree G1, Booster — pick your station
  • Calibrate with 'solo robo --calibrate all' and test with 'solo robo --teleop'
12:30 PM – 1:15 PM

Record Data & Understand VLA Models

Teleoperate your robot, record training datasets, and learn about VLA model architectures

  • Record teleoperation episodes with 'solo robo --record' — guided prompts for each episode
  • VLA model landscape: ACT (fast, simple), Pi0/Pi0Fast (foundation model), SmolVLA (lightweight), GROOT N1.5 (NVIDIA)
  • What makes a good training dataset: episode count, task variety, and data quality
  • Replay recorded episodes with 'solo robo --replay' to verify data quality
1:15 PM – 2:00 PM

Train & Deploy Your VLA Model

Train a policy on Nebius GPUs and deploy it for autonomous inference

  • Launch training with 'solo robo --train' — choose ACT or SmolVLA as your policy
  • Training runs on Nebius GPUs via Token Factory for fast iteration
  • Deploy trained model on Nebius Serverless for cloud-based inference
  • Run inference with 'solo robo --inference' — watch the arm execute the learned task autonomously
2:00 PM – 2:30 PM

Advanced Patterns & Show-and-Tell

Multi-task learning, model comparison, and demoing what you built

  • Compare VLA models: run the same task with ACT vs. SmolVLA and see the difference
  • Multi-task training: combine multiple recorded tasks into one model
  • Record a video of your robot performing autonomously
  • Show-and-tell: demo your workflow to the group

Prerequisites

  • Laptop with Python 3.12+ and terminal access
  • A Nebius AI Cloud account (we'll help you set one up if needed)
  • Basic comfort with Python and CLI tools

You'll Leave With

A calibrated robotic arm you can teleoperate from Solo CLI
A recorded dataset of teleoperation episodes
A trained VLA model (ACT or SmolVLA) that can replay tasks autonomously
Inference running on Nebius Serverless + Token Factory
Video of your robot performing a learned task to show your team

Step-by-Step Guide

Follow these steps during the workshop. Each step includes commands you can copy, tips from our mentors, and a checkpoint to verify before moving on.

Step 1 (~5 min)

Install Solo CLI & Nebius CLI

Install Solo CLI for robotics operations and the Nebius CLI for cloud deployment. Solo CLI handles everything from calibration to training.

Instructions

  1. Clone and install Solo CLI (requires Python 3.12+ and uv)
  2. Install the Nebius CLI for cloud deployment
  3. Verify both are working

Commands

# Install Solo CLI from source (recommended for hackathons)
git clone https://github.com/GetSoloTech/solo-cli.git
cd solo-cli
uv pip install -e .
# Or install from PyPI
uv pip install solo-cli
# Run interactive setup
solo setup
# Install Nebius CLI
curl -sSL https://storage.eu-north1.nebius.cloud/cli/install.sh | bash
nebius auth login

Tips

Solo CLI requires Python 3.12+. If you're on an older version, use 'uv python install 3.12' first.
The 'solo setup' command saves your config to ~/.solo/config.json — it remembers your hardware and preferences.
Full Solo CLI docs: https://github.com/GetSoloTech/solo-cli/blob/main/solo/commands/robots/lerobot/README.md
LeRobot framework (used under the hood): https://github.com/huggingface/lerobot
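To confirm what `solo setup` wrote, you can print the config file directly (path per the tip above; the file only exists after setup has run):

```shell
# Print the Solo CLI config saved by 'solo setup', if present
cat ~/.solo/config.json 2>/dev/null || echo "no config yet: run 'solo setup' first"
```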

Checkpoint

Running 'solo status' shows your system info and 'nebius iam whoami' returns your user info.

Step 2 (~10 min)

Calibrate Your Robot

Connect to your robotic arm and run calibration. Motors are already set up — you just need to calibrate the coordinate system. Available robots: SO-101, Koch, LeKiwi, and more.

Instructions

  1. Connect the robotic arm via USB (motors are pre-configured)
  2. Run Solo CLI calibration for all joints
  3. Follow the on-screen prompts to set joint limits
  4. Verify calibration by testing teleoperation

Commands

# Calibrate all joints (motors are already set up)
solo robo --calibrate all
# Test teleoperation — move the arm with the leader arm
solo robo --teleop

Tips

Motors are already set up — skip motor setup and go straight to calibration.
During calibration, move each joint to its limit when prompted. This sets the workspace boundaries.
SO-101 official docs: https://huggingface.co/docs/lerobot/en/so101
Koch official docs: https://huggingface.co/docs/lerobot/en/koch
LeKiwi official docs: https://huggingface.co/docs/lerobot/en/lekiwi (assembly guide: https://wiki.seeedstudio.com/lerobot_lekiwi/)
OpenDroid (Realman R1D2) quick start: https://develop.realman-robotics.com/en/robot/quickUseManual/

Checkpoint

Calibration completes without errors and you can teleoperate the arm smoothly.

Step 3 (~15 min)

Record Training Data

Teleoperate the arm to perform a task (pick and place, stacking, sorting) and record episodes as training data for your VLA model.

Instructions

  1. Choose a task: pick-and-place, stacking blocks, or sorting by color
  2. Start recording with Solo CLI — it prompts you for task descriptions
  3. Perform the task 10–20 times via teleoperation (more episodes = better model)
  4. Replay episodes to verify data quality

Commands

# Record teleoperation episodes (with guided prompts)
solo robo --record
# Skip prompts and use saved settings
solo robo --record -y
# Replay a recorded episode to check data quality
solo robo --replay

Tips

Aim for 10-20 episodes for a basic task. Consistency matters more than volume — try to perform the task the same way each time.
Include slight variations (object positions, angles) so the model generalizes rather than memorizing one trajectory.
LeRobot GitHub (dataset format & tools): https://github.com/huggingface/lerobot
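You can sanity-check episode counts from the command line too. A minimal sketch, assuming LeRobot's default dataset cache path (`~/.cache/huggingface/lerobot/<user>/<dataset>`) and the `total_episodes`/`fps` keys that recent LeRobot dataset versions write to `meta/info.json`; adjust if your version stores metadata differently:

```shell
# Summarize each recorded dataset found in the LeRobot cache
for f in ~/.cache/huggingface/lerobot/*/*/meta/info.json; do
  [ -f "$f" ] || continue   # skip silently when no datasets exist yet
  echo "== $f"
  python3 -c "import json,sys; d=json.load(open(sys.argv[1])); \
print('episodes:', d.get('total_episodes'), 'fps:', d.get('fps'))" "$f"
done
```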

Checkpoint

You have 10+ recorded episodes and replay shows clean, consistent trajectories.

Step 4 (~10 min)

Deploy Inference Infrastructure on Nebius

Set up Nebius Serverless and Token Factory for training and deploying your VLA model. The agent orchestrator runs on serverless CPU, while training and inference use Nebius GPUs.

Instructions

  1. Get your network and subnet IDs from Nebius
  2. Deploy the OpenClaw orchestrator on Nebius Serverless
  3. Configure Token Factory for GPU-backed model serving

Commands

# Get the subnet ID
export SUBNET_ID=$(nebius vpc subnet get-by-name \
  --name default-subnet --format jsonpath='{.metadata.id}')
# Deploy the orchestrator on Serverless
export AUTH_TOKEN=$(openssl rand -hex 32)
nebius msp serverless v1alpha1 endpoint create \
  --name openclaw-robotics \
  --container-image openclaw:robotics \
  --container-template-resources-platform cpu-d3 \
  --container-template-resources-preset 4vcpu-16gb \
  --port 8080 \
  --username admin \
  --password "$AUTH_TOKEN" \
  --network-id <your-network-id> \
  --parent-id <your-project-id>
# Set the Token Factory key for GPU inference
export TF_API_KEY=<your-token-factory-api-key>
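The `openssl rand -hex 32` call in the commands above emits 32 random bytes as 64 hex characters; a quick sanity check before wiring the token into the endpoint:

```shell
# 32 random bytes, hex-encoded -> 64 characters
AUTH_TOKEN=$(openssl rand -hex 32)
echo "${#AUTH_TOKEN}"
```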

Checkpoint

Your serverless endpoint is active and Token Factory API key is configured.

Step 5 (~15 min)

Train Your VLA Model

Train a vision-language-action model on your recorded dataset. Choose ACT for fast training or SmolVLA for a more capable policy. Training runs on Nebius GPUs.

Instructions

  1. Choose your VLA model: ACT (fast, simple) or SmolVLA (vision-language-action)
  2. Launch training with Solo CLI — it handles data formatting and training config
  3. Monitor training progress
  4. Evaluate the trained model on held-out episodes

Commands

# Train a VLA policy (Solo CLI handles the config)
solo robo --train
# Skip prompts with saved settings
solo robo --train -y

Tips

ACT trains in minutes and works great for single-task policies. Start here. Paper: https://tonyzhaozh.github.io/aloha/ | Code: https://github.com/tonyzhaozh/act | LeRobot: https://huggingface.co/docs/lerobot/en/act
SmolVLA is more capable but takes longer to train — good for multi-task learning. Docs: https://huggingface.co/blog/smolvla
Pi0 / Pi0Fast / Pi05 — Physical Intelligence foundation models. Paper: https://www.pi.website/blog/pi0 | Code: https://github.com/Physical-Intelligence/openpi | LeRobot: https://huggingface.co/docs/lerobot/en/pi0
GROOT N1.5 — NVIDIA's VLA model. Paper: https://research.nvidia.com/labs/gear/gr00t-n1_5/ | Code: https://github.com/NVIDIA/Isaac-GR00T | LeRobot: https://huggingface.co/docs/lerobot/en/groot
X-VLA — Paper: https://arxiv.org/pdf/2510.10274 | LeRobot: https://huggingface.co/docs/lerobot/en/xvla

Checkpoint

Training completes and the loss curve shows convergence. Your model checkpoint is saved.

Step 6 (~10 min)

Run Autonomous Inference

Deploy your trained model and watch the arm execute the learned task autonomously — no teleoperation, just the model driving the robot.

Instructions

  1. Load your trained model checkpoint
  2. Run inference with Solo CLI — the robot executes autonomously
  3. Compare the autonomous behavior to your original teleoperation
  4. Try variations: move objects to different positions and see if the model generalizes

Commands

# Run inference with your trained model
solo robo --inference
# Skip prompts
solo robo --inference -y

Tips

The first autonomous run is the magic moment — watch the arm do what you taught it without any human input.
If the model doesn't generalize well, record 10 more episodes with more position variety and retrain.

Checkpoint

Your robot arm executes the learned task autonomously from your trained VLA model.

Step 7 (~10 min)

Record Your Demo & Explore More Robots

Record a video of your robot performing autonomously, then explore other available platforms if time permits.

Instructions

  1. Record a video of autonomous inference — show the robot completing the task
  2. Try a different robot if available: LeKiwi (mobile), OpenDroid, or Unitree G1
  3. Save your model checkpoint and deployment config for future use
  4. Share your video in the group chat

Commands

# Check Solo CLI status and model info
solo status
# List downloaded models
solo list
# Save your Nebius deployment config
nebius msp serverless v1alpha1 endpoint get $ENDPOINT_ID --format yaml > robotics-deploy.yaml

Tips

The best demos show the full loop: teleoperation → training → autonomous execution.
Other robots to explore: LeKiwi (https://huggingface.co/docs/lerobot/en/lekiwi), OpenDroid R1D2 (https://develop.realman-robotics.com/en/robot/quickUseManual/), Unitree G1 (https://huggingface.co/docs/lerobot/en/unitree_g1), Booster (https://www.booster.tech/open-source/). Each works with Solo CLI.
Unitree G1 extras: SDK (https://github.com/unitreerobotics/unitree_sdk2_python), Replay (https://github.com/GetSoloTech/unitree-g1-replay), IsaacLab Sim (https://github.com/unitreerobotics/unitree_sim_isaaclab), GROOT Wholebody Control (https://github.com/NVlabs/GR00T-WholeBodyControl)
OpenDroid programming guide (teach pendant, ROS, APIs): https://develop.realman-robotics.com/en/robot/teachingPendant/armTeching/
All VLA model LeRobot implementations: ACT (https://huggingface.co/docs/lerobot/en/act), Pi0 (https://huggingface.co/docs/lerobot/en/pi0), Pi0Fast (https://huggingface.co/docs/lerobot/en/pi0fast), Pi05 (https://huggingface.co/docs/lerobot/en/pi05), GROOT (https://huggingface.co/docs/lerobot/en/groot), X-VLA (https://huggingface.co/docs/lerobot/en/xvla), SmolVLA (https://huggingface.co/blog/smolvla)

Checkpoint

You have a video of autonomous execution and your model + config saved for future use.

Ready to Build?

RSVP required. Spots are limited since we provide hands-on support for every attendee.

Register Now