[Project Notes] PAL Robotics in mjlab
TL;DR
pal_mjlab is not a full robotics framework by itself. It is a thin but useful PAL-specific extension layer on top of mjlab: it packages PAL robot assets for MuJoCo, computes robot-specific actuator settings, and registers a set of ready-to-train RL tasks into the mjlab task registry.
After reading the code rather than only the README, my main takeaway is that the repository is a clean integration package, not a giant system. It adds:
- PAL robot definitions for Talos, TIAGo Pro, and several KANGAROO variants
- 16 registered tasks across velocity tracking, motion imitation, and reaching
- a small amount of custom MDP logic where PAL robots actually need it
- a practical motion preprocessing script for turning retargeted CSV motions into
mjlab-friendly tracking assets
One nice surprise: the repo is broader than the README suggests. The README emphasizes locomotion and motion imitation, but the codebase also includes a real reaching stack: TIAGo Pro dual-arm reaching and Kangaroo locomotion-plus-reaching tasks.
Project Info
- Repo: pal-robotics/pal_mjlab
- Organization: PAL Robotics
- Inspected commit: `17e198d`
- Last inspected: 2026-03-12
- Package shape: Python package with an `mjlab.tasks` entry point
- Core dependency: `mjlab>=1.2.0`
What The Repo Actually Ships
The easiest way to understand the repository is as a task-and-robot plugin for mjlab.
Task matrix
| Area | What is registered | Count |
|---|---|---|
| Velocity tracking | Talos flat/rough + Kangaroo base/hands/grippers flat/rough | 8 |
| Motion tracking | Talos flat and no-state-estimation + Kangaroo flat and no-state-estimation | 4 |
| Reaching | TIAGo Pro dual-arm reaching + Kangaroo base/hands/grippers locomotion-plus-reaching | 4 |
| Total | PAL-specific mjlab task IDs | 16 |
This is a good example of a repository whose real scope is clearer from the source tree than from the landing page. The README is accurate, but incomplete: the implementation reveals a more ambitious task surface than the documentation advertises.
Architecture
The package is intentionally thin. pyproject.toml exposes pal_mjlab.tasks as an mjlab.tasks entry point, and then the package mostly does three things:
- define robot factories that load PAL MJCF/XML assets and attach actuator/collision settings
- define task configs by reusing `mjlab` base environment factories and overriding only the PAL-specific parts
- register task IDs so `mjlab` can discover them from the command line
That layering looks like this:
```mermaid
flowchart TD
    A["mjlab core<br/>registry, env factories, runners, MDP utils"] --> B["pal_mjlab package"]
    B --> C["Robot configs<br/>Talos / TIAGo Pro / Kangaroo variants"]
    B --> D["Task configs<br/>velocity / tracking / reaching"]
    C --> D
    D --> E["register_mjlab_task(...)"]
    E --> F["16 PAL task IDs"]
    F --> G["uv run train / play / list_envs"]
```
I like this design because it does not try to fork mjlab into a PAL-only platform. Instead, it behaves like a clean extension module.
The Three Practical Workflows
1. Velocity tracking
The velocity tasks are the most standard RL part of the repo. They build on mjlab’s make_velocity_env_cfg() and then override the parts that need robot knowledge:
- which robot spec to instantiate
- which joints are actually actuated
- how to scale actions
- where the feet contact sensors and body-contact sensors should point
- which domain-randomization knobs to turn on
- what robot-specific reward terms and limits should exist
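The override pattern itself is simple to sketch. The classes and field names below are hypothetical stand-ins (mjlab's real config objects differ); what they show is the shape of the idea: call the generic factory, then replace only the robot-aware fields.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class VelocityEnvCfg:
    """Hypothetical stand-in for mjlab's velocity env config."""
    robot_xml: str = "generic/robot.xml"
    actuated_joints: tuple[str, ...] = ()
    action_scale: float = 1.0
    foot_contact_sensors: tuple[str, ...] = ()


def make_velocity_env_cfg() -> VelocityEnvCfg:
    """Stand-in for the generic upstream factory."""
    return VelocityEnvCfg()


# PAL-specific task config: reuse the base factory, override only the
# robot-aware fields (asset path, actuated joints, action scaling, sensors).
# All concrete values here are illustrative.
talos_flat_cfg = replace(
    make_velocity_env_cfg(),
    robot_xml="pal/talos.xml",
    actuated_joints=("leg_left_1_joint", "leg_right_1_joint"),
    action_scale=0.5,
    foot_contact_sensors=("left_sole", "right_sole"),
)
```

The base factory stays untouched, so upstream improvements flow into every PAL task for free.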
For KANGAROO, the env config adds more than just naming changes. It includes:
- custom IMU-based observations
- joint-friction and encoder-bias randomization
- self-collision penalties
- convex-hull penalties for hip/ankle joint-limit geometry
- special handling of leg-length joints
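Of these, the joint-friction and encoder-bias randomization is the easiest to sketch. The ranges and magnitudes below are invented for illustration, not KANGAROO's actual values; the point is the mechanism: sample once per episode reset, perturb the dynamics with the friction term, and add the bias to every joint-position observation.

```python
import numpy as np


def randomize_joint_params(
    num_joints: int,
    rng: np.random.Generator,
    friction_range: tuple[float, float] = (0.0, 0.1),  # illustrative
    encoder_bias_std: float = 0.005,                   # radians, illustrative
) -> tuple[np.ndarray, np.ndarray]:
    """Sample per-joint friction and a fixed encoder bias at episode reset.

    Friction perturbs the simulated dynamics; the encoder bias is added to
    joint-position readings for the rest of the episode, forcing the policy
    to be robust to sensor offsets.
    """
    friction = rng.uniform(*friction_range, size=num_joints)
    encoder_bias = rng.normal(0.0, encoder_bias_std, size=num_joints)
    return friction, encoder_bias


rng = np.random.default_rng(0)
friction, bias = randomize_joint_params(12, rng)
observed_q = np.zeros(12) + bias  # the biased joint readings the policy sees
```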
This is the kind of repo where the value is not in inventing a new RL algorithm, but in doing the unglamorous integration work that makes an upstream framework actually fit a new robot family.
2. Motion imitation
The tracking tasks are built around motion assets, and the repo provides a small but important preprocessing bridge in csv_to_npz.py.
The expected path is:
- retarget a motion externally, for example via GMR
- convert the retargeted CSV into an NPZ asset
- train a motion-tracking policy in
mjlab
The conversion step is not just format shuffling. The script:
- loads base position, base orientation, and joint trajectories
- interpolates them from input FPS to output FPS
- uses quaternion slerp for orientation interpolation
- computes base linear velocity, base angular velocity, and joint velocities
- prepares the motion in the shape expected by the tracking environments
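The interpolation steps above can be sketched in a few lines of numpy. This is not the script's actual code, just a minimal self-contained version of the same idea: linear interpolation for positions, quaternion slerp for orientations, finite differences for velocities. Quaternions are assumed unit-norm in wxyz order.

```python
import numpy as np


def slerp(q0: np.ndarray, q1: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two unit quaternions."""
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                 # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:              # nearly parallel: plain lerp is stable
        q = q0 + t * (q1 - q0)
    else:
        theta = np.arccos(np.clip(dot, -1.0, 1.0))
        q = (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)
    return q / np.linalg.norm(q)


def resample_motion(base_pos, base_quat, in_fps, out_fps):
    """Resample base position (linear) and orientation (slerp) to a new
    frame rate, then finite-difference positions for base linear velocity."""
    n_in = base_pos.shape[0]
    t_in = np.arange(n_in) / in_fps
    t_out = np.arange(int((n_in - 1) / in_fps * out_fps) + 1) / out_fps

    # Per-axis linear interpolation of base position.
    pos = np.stack([np.interp(t_out, t_in, base_pos[:, k]) for k in range(3)], axis=1)

    # Slerp between the two input frames bracketing each output timestamp.
    idx = np.clip(np.searchsorted(t_in, t_out, side="right") - 1, 0, n_in - 2)
    alpha = (t_out - t_in[idx]) * in_fps
    quat = np.stack([slerp(base_quat[i], base_quat[i + 1], a)
                     for i, a in zip(idx, alpha)])

    # Base linear velocity via central finite differences.
    lin_vel = np.gradient(pos, 1.0 / out_fps, axis=0)
    return pos, quat, lin_vel
```

Joint trajectories and angular velocities follow the same pattern; slerp matters because naively lerping quaternions produces non-unit, unevenly spaced rotations.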
That pipeline is one of the repo’s most concrete engineering contributions:
```mermaid
flowchart TD
    A["Retargeted motion CSV<br/>for Talos or Kangaroo"] --> B["csv_to_npz.py"]
    B --> C["Interpolate poses<br/>e.g. 30 FPS to 50 FPS"]
    C --> D["Compute base linear / angular velocity<br/>and joint velocities"]
    D --> E["Export motion NPZ / registry asset"]
    E --> F["Mjlab-Tracking-Flat-Pal-*"]
    F --> G["train / play in mjlab"]
```
Another thoughtful detail: both Talos and Kangaroo tracking stacks include no-state-estimation variants, which makes it easier to separate controller quality from estimator assumptions.
3. Reaching
This is the least advertised and most interesting part of the repo.
There are actually two reaching stories:
- TIAGo Pro gets a dual-arm reaching setup with sampled left/right end-effector pose commands
- Kangaroo gets a hybrid task that combines locomotion and dual-arm reaching in the same environment
The reaching base config defines:
- left and right pose-command generators
- actor/critic observation groups
- dual-arm position and orientation rewards
- curriculum schedules on orientation and action-rate penalties
- debug visualization for current and goal frames
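The pose-command generators are the heart of that list. A minimal sketch of the idea, with invented workspace bounds (not TIAGo Pro's real limits) and a target yaw standing in for a full orientation command:

```python
import numpy as np


def sample_arm_pose_commands(rng: np.random.Generator) -> dict:
    """Sample one target pose per arm: a position inside a per-arm
    workspace box plus a target yaw. All bounds are illustrative."""
    boxes = {
        "left":  (np.array([0.3, 0.1, 0.6]), np.array([0.6, 0.5, 1.2])),
        "right": (np.array([0.3, -0.5, 0.6]), np.array([0.6, -0.1, 1.2])),
    }
    commands = {}
    for arm, (lo, hi) in boxes.items():
        commands[arm] = {
            "position": rng.uniform(lo, hi),       # metres, base frame
            "yaw": rng.uniform(-np.pi, np.pi),     # radians
        }
    return commands
```

Resampling these at fixed intervals, and rewarding distance to the current sample, is what turns a static asset into a trainable reaching task.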
That already makes TIAGo Pro more than a placeholder asset dump. But the hybrid Kangaroo task is the more distinctive design choice. It starts from the reaching base, then adds:
- a locomotion twist command
- locomotion observations and contact features
- locomotion rewards such as velocity tracking, uprightness, angular-momentum penalties, and air-time
- separate resets for arms and locomotion state
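Structurally, the hybrid task is a weighted sum over a larger term set. The term names and weights below are invented for illustration; the repo's actual reward terms differ, but the composition pattern is the same:

```python
# Illustrative per-term weights for a hybrid locomotion-plus-reaching
# reward. Positive weights reward, negative weights penalize.
REWARD_WEIGHTS = {
    "reach_pos": 2.0,       # end-effector position tracking
    "reach_orn": 0.5,       # end-effector orientation tracking
    "track_lin_vel": 1.5,   # commanded base-velocity tracking
    "upright": 0.5,         # torso uprightness
    "action_rate": -0.01,   # smoothness penalty
}


def total_reward(terms: dict[str, float]) -> float:
    """Weighted sum over whichever terms the task instance defines."""
    return sum(REWARD_WEIGHTS[name] * value for name, value in terms.items())
```

Because the sum only ranges over the terms a task defines, the pure-reaching and locomotion-plus-reaching variants can share one reward implementation.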
So the repo is not only about “make PAL robots walk in mjlab.” It is also exploring multi-objective control surfaces where locomotion and manipulation coexist in one task.
What Feels Especially Well Judged
The repo’s strongest design choice is scope discipline. It does not reimplement mjlab. It only injects the pieces that must be robot-specific:
- XML/MJCF assets
- actuator models
- action scales
- task registration
- reward/observation tweaks
- motion preprocessing
That keeps the codebase small enough to understand in one sitting, which is rare for robotics integration projects.
Another good choice is the use of robot factories in the constants modules. For Kangaroo, Talos, and TIAGo Pro, the code computes actuator stiffness, damping, armature, and effort limits and writes them directly into EntityCfg builders. That makes the repo feel more serious than a simple mesh drop plus hand-written YAML.
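To give a flavour of what "computed" means here: one common way to derive per-joint PD gains from actuator specs is to target a closed-loop natural frequency. This is a generic rule of thumb, not PAL's actual formula, and the numbers are illustrative:

```python
import math


def pd_gains_from_specs(reflected_inertia: float,
                        natural_freq_hz: float,
                        damping_ratio: float = 1.0) -> tuple[float, float]:
    """Derive PD stiffness/damping for one joint from a target closed-loop
    natural frequency, treating the joint as a second-order system with the
    given reflected inertia. Critically damped by default."""
    wn = 2.0 * math.pi * natural_freq_hz
    kp = reflected_inertia * wn * wn          # stiffness
    kd = 2.0 * damping_ratio * reflected_inertia * wn  # damping
    return kp, kd


# Example: 0.01 kg*m^2 reflected inertia, 10 Hz target bandwidth.
kp, kd = pd_gains_from_specs(0.01, 10.0)
```

Computing gains from physical quantities like this, instead of hand-tuning one YAML per robot, is what keeps three robot families consistent in one small package.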
Limitations
- Documentation is lighter than the implementation. The reaching stack is real, but it is barely surfaced in the README.
- No test suite is visible. I did not find automated tests, so trust currently comes from the example workflows and code structure rather than formal verification.
- The package depends heavily on upstream `mjlab` abstractions. That is mostly good, but it means users need to already understand `mjlab`'s task/config model.
- Motion imitation depends on external retargeting. The repo helps with CSV-to-NPZ conversion, but it does not own the full retargeting pipeline.
- Results are presented mostly as README demos. This is more of an integration and training repo than a benchmark-heavy research release.
Takeaways
- `pal_mjlab` is best read as a plugin layer, not a platform. Its job is to make PAL robots first-class citizens inside `mjlab`.
- The codebase is more capable than the README headline suggests. Reaching, especially the Kangaroo locomotion-plus-reaching task family, is a real part of the repository.
- The most valuable work here is careful systems adaptation. Sensors, actuators, collisions, action scales, motion formats, and task registration are the core product.
- Thin integrations are underrated. A small, disciplined package like this is often more reusable than a bigger robotics repo that tries to own everything.
References
- [Repository] pal-robotics/pal_mjlab
- [Upstream framework] mujocolab/mjlab
- [Motion retargeting tool mentioned in the README] YanjieZe/GMR
