Calibrate. Teleoperate. Train. Infer. All From One CLI.
Use Solo CLI to work with real robotic hardware — SO-101, Koch, LeKiwi, OpenDroid (Realman R1D2), Unitree G1, and Booster. Calibrate arms, teleoperate, record training datasets, and train vision-language-action (VLA) models like ACT, Pi0, SmolVLA, and GROOT N1.5 on Nebius GPUs. Then deploy your trained policy behind Nebius Serverless + Token Factory for cloud-based inference. Motors are already set up — you start with calibration and go. Full Solo CLI docs at github.com/GetSoloTech/solo-cli.
Jump to Step-by-Step Guide
Who it's for: robotics-curious developers, hardware hackers, and AI engineers who want to bridge the digital and physical worlds
End-to-end robotics pipeline: calibrate → teleoperate → record → train VLA → deploy inference
"I trained a VLA model on my own teleop data and watched the arm replay the task autonomously — in one session"
A calibrated and teleoperated robotic arm (SO-101 or Koch) using Solo CLI
A recorded training dataset from your own teleoperation sessions
A trained VLA policy (ACT, Pi0, or SmolVLA) deployed on Nebius for inference
Install Solo CLI, meet the hardware, calibrate, and test teleoperation
Teleoperate your robot, record training datasets, and learn about VLA model architectures
Train a policy on Nebius GPUs and deploy it for autonomous inference
Multi-task learning, model comparison, and demoing what you built
Follow these steps during the workshop. Each step includes commands you can copy, tips from our mentors, and a checkpoint to verify before moving on.
Install Solo CLI for robotics operations and the Nebius CLI for cloud deployment. Solo CLI handles everything from calibration to training.
# Install Solo CLI from source (recommended for hackathons)
git clone https://github.com/GetSoloTech/solo-cli.git
cd solo-cli
uv pip install -e .

# Or install from PyPI
uv pip install solo-cli

# Run interactive setup
solo setup

# Install Nebius CLI
curl -sSL https://storage.eu-north1.nebius.cloud/cli/install.sh | bash
nebius auth login
Running 'solo status' shows your system info and 'nebius iam whoami' returns your user info.
Connect to your robotic arm and run calibration. Motors are already set up — you just need to calibrate the coordinate system. Available robots: SO-101, Koch, LeKiwi, and more.
# Calibrate all joints (motors are already set up)
solo robo --calibrate all

# Test teleoperation: move the arm with the leader arm
solo robo --teleop
Calibration completes without errors and you can teleoperate the arm smoothly.
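Solo CLI handles calibration for you, but it helps to know what it is doing conceptually: joint calibration typically records each motor's raw encoder range at the end stops, then maps readings into a normalized range. A minimal sketch of that mapping (the encoder values below are illustrative, not SO-101 specifics):

```python
def make_normalizer(raw_min: float, raw_max: float):
    """Return a function mapping a raw encoder reading to [-1, 1]."""
    span = raw_max - raw_min

    def normalize(raw: float) -> float:
        # Clamp first so out-of-range readings stay in bounds.
        raw = min(max(raw, raw_min), raw_max)
        return 2.0 * (raw - raw_min) / span - 1.0

    return normalize

# Example: a joint whose encoder sweeps 512..3584 over its full travel.
norm = make_normalizer(512, 3584)
print(norm(512))   # -1.0 (one end stop)
print(norm(2048))  # 0.0 (center)
print(norm(3584))  # 1.0 (other end stop)
```

This is why calibration matters for training: the policy sees consistent, normalized joint values regardless of how each individual motor's encoder happens to be mounted.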
Teleoperate the arm to perform a task (pick and place, stacking, sorting) and record episodes as training data for your VLA model.
# Record teleoperation episodes (with guided prompts)
solo robo --record

# Skip prompts and use saved settings
solo robo --record -y

# Replay a recorded episode to check data quality
solo robo --replay
You have 10+ recorded episodes and replay shows clean, consistent trajectories.
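Before training, it is worth sanity-checking that all your episodes actually landed on disk. The directory layout below is hypothetical (one subdirectory per episode); adjust `root` and `pattern` to wherever and however Solo CLI stores your dataset:

```python
from pathlib import Path


def count_episodes(root: str, pattern: str = "episode_*") -> int:
    """Count episode directories matching `pattern` under `root`."""
    return sum(1 for p in Path(root).expanduser().glob(pattern) if p.is_dir())


# e.g. confirm count_episodes("~/solo_datasets/pick_place") >= 10 before training
```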
Set up Nebius Serverless and Token Factory for training and deploying your VLA model. The agent orchestrator runs on serverless CPU, while training and inference use Nebius GPUs.
# Get subnet
export SUBNET_ID=$(nebius vpc subnet get-by-name \
  --name default-subnet --format jsonpath='{.metadata.id}')

# Deploy orchestrator on Serverless
export AUTH_TOKEN=$(openssl rand -hex 32)
nebius msp serverless v1alpha1 endpoint create \
  --name openclaw-robotics \
  --container-image openclaw:robotics \
  --container-template-resources-platform cpu-d3 \
  --container-template-resources-preset 4vcpu-16gb \
  --port 8080 \
  --username admin \
  --password "$AUTH_TOKEN" \
  --network-id <your-network-id> \
  --parent-id <your-project-id>

# Set Token Factory key for GPU inference
export TF_API_KEY=<your-token-factory-api-key>
Your serverless endpoint is active and Token Factory API key is configured.
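The endpoint you just created is protected by the username/password pair you passed to `endpoint create`, which clients send as an HTTP Basic Authorization header. A minimal sketch of building that header and a request (the endpoint hostname and `/health` path are placeholders; substitute your real endpoint URL):

```python
import base64
import os
import urllib.request


def basic_auth_header(user: str, password: str) -> str:
    """Build an HTTP Basic Authorization header value."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"


# AUTH_TOKEN matches the --password you passed to `endpoint create`.
auth = basic_auth_header("admin", os.environ.get("AUTH_TOKEN", ""))
req = urllib.request.Request(
    "https://<your-endpoint-hostname>/health",  # placeholder URL and path
    headers={"Authorization": auth},
)
# urllib.request.urlopen(req)  # uncomment once the endpoint is live
```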
Train a vision-language-action model on your recorded dataset. Choose ACT for fast training or SmolVLA for a more capable policy. Training runs on Nebius GPUs.
# Train a VLA policy (Solo CLI handles the config)
solo robo --train

# Skip prompts with saved settings
solo robo --train -y
Training completes and the loss curve shows convergence. Your model checkpoint is saved.
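If you want a quick programmatic check of convergence rather than eyeballing the curve, you can compare early and late loss values from your training metrics. The JSONL log format below is hypothetical; the real output format depends on the training framework Solo CLI wraps:

```python
import json


def losses_from_jsonl(path: str, key: str = "loss") -> list:
    """Extract per-step loss values from a JSONL metrics log."""
    with open(path) as f:
        return [json.loads(line)[key] for line in f if line.strip()]


def looks_converged(losses: list, window: int = 50) -> bool:
    """Crude check: mean of the last `window` losses is well below the first `window`."""
    if len(losses) < 2 * window:
        return False
    head = sum(losses[:window]) / window
    tail = sum(losses[-window:]) / window
    return tail < 0.5 * head
```

This is only a smoke test; for a short hackathon run, replaying the policy on the robot (next step) is the real evaluation.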
Deploy your trained model and watch the arm execute the learned task autonomously — no teleoperation, just the model driving the robot.
# Run inference with your trained model
solo robo --inference

# Skip prompts
solo robo --inference -y
Your robot arm executes the learned task autonomously from your trained VLA model.
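Under the hood, autonomous execution is a fixed-rate observe, predict, act loop: the policy repeatedly takes the current observation (camera frames plus joint state) and emits the next joint targets. A generic sketch of that loop; the names and the 6-DoF action shape are illustrative, not Solo CLI's actual API:

```python
import time


class DummyPolicy:
    """Stand-in for a trained VLA policy, which maps images + state to actions."""

    def predict(self, observation: dict) -> list:
        # A real policy would run model inference here.
        return [0.0] * 6  # six joint targets for a 6-DoF arm


def control_loop(policy, get_observation, send_action, steps: int = 100, hz: float = 30.0):
    """Run a fixed-rate observe -> predict -> act loop."""
    period = 1.0 / hz
    for _ in range(steps):
        obs = get_observation()       # e.g. camera frame + joint positions
        action = policy.predict(obs)  # model picks the next joint targets
        send_action(action)           # command the motors
        time.sleep(period)
```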
Record a video of your robot performing autonomously, then explore other available platforms if time permits.
# Check Solo CLI status and model info
solo status

# List downloaded models
solo list

# Save your Nebius deployment config
nebius msp serverless v1alpha1 endpoint get $ENDPOINT_ID --format yaml > robotics-deploy.yaml
You have a video of autonomous execution and your model + config saved for future use.
RSVP required. Spots are limited since we provide hands-on support for every attendee.
Register Now