@timschneider94.bsky.social
One more thing — if you are looking to replace the Franka gripper with something more real-time friendly, we've got you covered:
Our dynamixel-api package lets you control any Dynamixel-based gripper directly from Python.
🔗 github.com/TimSchneider...
Special thanks to Erik Helmut!
GitHub - TimSchneider42/dynamixel-api: Easy-to-use Python API for DYNAMIXEL motors and DYNAMIXEL-based grippers.
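To give a flavor of what driving such a gripper looks like, here is a purely hypothetical usage sketch. The import path, class, and method names below are illustrative assumptions, not the package's documented API; see the linked README for the real interface.

```python
# Hypothetical sketch only: DynamixelGripper, connect, set_position, etc.
# are illustrative assumptions, not the documented dynamixel-api interface.
# Check the README in the repository above for the actual API.
from dynamixel_api import DynamixelGripper  # assumed import path

# Assumed constructor arguments: serial device of the USB adapter and baud rate.
gripper = DynamixelGripper(device="/dev/ttyUSB0", baud_rate=57600)
gripper.connect()

gripper.set_position(1.0)   # close (assumed normalized 0..1 command)
gripper.set_position(0.0)   # open

gripper.disconnect()
```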
July 31, 2025 at 5:09 PM
Also, franky gives you access to some functionality that is otherwise only available through the web interface, such as enabling FCI and unlocking the brakes, directly from Python!
But please don't tell Franka Robotics 🤫, because using their API like that is probably illegal.
July 31, 2025 at 5:09 PM
But wait, there is more!
franky exposes most libfranka functionality in its Python API:
🔧 Redefine end-effector properties
⚖️ Tune joint impedance
🛑 Set force/torque thresholds
…and much more!
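As a rough sketch of what that looks like in code: the calls below assume franky exposes libfranka's configuration functions under snake_case names; verify the exact names and signatures in the franky docs before relying on them.

```python
# Sketch only: these setters assume franky mirrors libfranka's configuration
# calls in snake_case — verify names and signatures in the franky docs.
from franky import Robot

robot = Robot("172.16.0.2")  # placeholder IP of the Franka control unit

# Redefine end-effector properties: load mass [kg], center of mass [m],
# and inertia matrix (row-major, kg*m^2), mirroring libfranka's setLoad.
robot.set_load(0.7, [0.0, 0.0, 0.05],
               [0.001, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.001])

# Tune the internal joint impedance (stiffness per joint),
# mirroring libfranka's setJointImpedance.
robot.set_joint_impedance([3000, 3000, 3000, 2500, 2500, 2000, 2000])

# Set contact/collision thresholds (libfranka's setCollisionBehavior also
# accepts separate lower/upper and acceleration-phase thresholds).
robot.set_collision_behavior([20.0] * 7,  # joint torque thresholds [Nm]
                             [30.0] * 6)  # Cartesian force thresholds [N, Nm]
```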
July 31, 2025 at 5:09 PM
Here’s how simple robot control looks with franky 👇

No ROS nodes. No launch files. Just five lines of Python.
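The attached screenshot isn't reproduced here; as a stand-in, a minimal motion command roughly along the lines of the franky README (the IP address is a placeholder):

```python
from franky import Affine, CartesianMotion, ReferenceType, Robot

robot = Robot("172.16.0.2")            # placeholder IP of the control unit
robot.relative_dynamics_factor = 0.05  # run at 5 % speed while testing
motion = CartesianMotion(Affine([0.0, 0.0, -0.05]), ReferenceType.Relative)
robot.move(motion)                     # move the end effector 5 cm down
```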
July 31, 2025 at 5:09 PM
🔧 Installation = 3 simple steps:

1️⃣ Install a real-time kernel
2️⃣ Grant real-time permissions to your user
3️⃣ pip install franky-control

…and you’re ready to control your Franka robot!
July 31, 2025 at 5:09 PM
franky supports position & velocity control in both joint and task space — plus gripper control, contact reactions, and more! 🤖
With franky, you get real-time control both in C++ & Python: commands are fully preemptible, and Ruckig replans smooth trajectories on the fly.
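A rough sketch of what preemption and gripper control can look like; argument values are placeholders and exact signatures may differ slightly, so check the franky docs.

```python
# Rough sketch: placeholder values, signatures may differ — see the franky docs.
from franky import Gripper, JointMotion, Robot

robot = Robot("172.16.0.2")
gripper = Gripper("172.16.0.2")

# Start a joint-space motion without blocking the Python process.
robot.move(JointMotion([0.0, -0.78, 0.0, -2.36, 0.0, 1.57, 0.78]),
           asynchronous=True)

# Sending a new command preempts the running one; Ruckig replans a smooth
# transition from the current kinematic state on the fly.
robot.move(JointMotion([0.2, -0.70, 0.0, -2.30, 0.0, 1.50, 0.80]),
           asynchronous=True)
robot.join_motion()  # block until the latest motion has finished

gripper.move(0.04, 0.02)  # open the fingers to 4 cm at 2 cm/s
```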
July 31, 2025 at 5:09 PM
I would like to thank the first author, Janis Lenz, and my collaborators, Theo Gruner, @daniel-palenicek.bsky.social, Inga Pfenning, and @jan-peters.bsky.social, for this amazing work!
June 13, 2025 at 10:54 AM
6️⃣ If you want to know more, pull up to Poster 100 at RLDM today, 16:30–19:30, and get in touch!
Paper: arxiv.org/abs/2410.23860
Analysing the Interplay of Vision and Touch for Dexterous Insertion Tasks
June 13, 2025 at 10:54 AM
5️⃣ In conclusion, we find that complementing vision with tactile sensing helps to train more robust policies under more challenging settings. In the future, we plan to extend our analysis to even more challenging tasks, such as screw or lightbulb insertion.
June 13, 2025 at 10:54 AM
4️⃣ We also find that even for wider holes, the resulting vision-only policy is significantly less robust to changes in the environment (different hole sizes or angles) when tested zero-shot style. In contrast, the vision-tactile policy is robust even under unseen conditions.
June 13, 2025 at 10:54 AM
3️⃣ Results?
Turns out, as long as the hole is wide enough, a vision-only agent learns to solve the task just as well as a vision-tactile agent. However, once we make the hole tighter, we see that the vision-only agent fails to solve the task and gets stuck in a local minimum.
June 13, 2025 at 10:54 AM
2️⃣ What did we do?
We built a real, fully autonomous, and self-resetting tactile insertion setup and trained model-based RL directly in the real world. Using this setup, we ran extensive experiments to understand the role of vision and touch in this task.
June 13, 2025 at 10:54 AM
1️⃣ Robotic insertion in the real world is still a challenging task. Humans use a combination of vision and touch to exhibit dexterous behavior in the face of uncertainty. We wanted to know: What role do vision and touch play when RL agents learn to solve real-world insertion?
June 13, 2025 at 10:54 AM
Big thanks to my collaborators Cristiana de Farias, Roberto Calandra, Liming Chen, and @jan-peters.bsky.social!
June 12, 2025 at 2:30 PM
9️⃣ Come chat with us!
Interested in active perception, transformers, or tactile robotics? Stop by poster 105 at RLDM this afternoon and let’s connect!
🗓️ 16:30 - 19:30
📍 Poster 105

Paper preprint: arxiv.org/pdf/2505.06182
TactileMNIST benchmark: sites.google.com/robot-learni...
June 12, 2025 at 12:33 PM
8️⃣ Limitations & Future Directions:
Like all deep RL, TAP needs a lot of data. Next steps:
- Improve sample efficiency (think: pre-trained models)
- Apply TAP on real robots (sim2real transfer)
- Scale up to multi-finger/multi-modal (vision+touch) perception
June 12, 2025 at 12:33 PM
7️⃣ What about baselines?
TAP outperformed both random and prior state-of-the-art (HAM) baselines, highlighting the value of attention-based models and off-policy RL for tactile exploration.
June 12, 2025 at 12:33 PM
6️⃣ We observe that TAP learns reasonable strategies. For example, when estimating the pose of a wrench, TAP first scans the surface to find the handle and then moves towards one of its ends to determine position and orientation.
June 12, 2025 at 12:33 PM
5️⃣ Key Experiments:
We tested TAP on a variety of ap_gym (github.com/TimSchneider...) tasks from the TactileMNIST benchmark (sites.google.com/robot-learni...).
In all cases, TAP learns to actively explore & infer object properties efficiently.
June 12, 2025 at 12:33 PM
4️⃣ How Does TAP Work?
TAP jointly learns action and prediction with a shared transformer encoder using a combination of RL and supervised learning. We show that TAP's formulation arises naturally when optimizing a supervised learning objective w.r.t. action and prediction.
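Purely as an illustration of that idea (this is not the paper's code): a schematic of a shared transformer encoder feeding both an action head and a prediction head; all module names and sizes here are made up.

```python
# Schematic illustration only (not the authors' implementation): a shared
# transformer encoder over the history of tactile observations feeds both
# an action head (trained with RL) and a prediction head (trained with a
# supervised loss). Dimensions and names are arbitrary.
import torch
import torch.nn as nn

class SharedEncoderAgent(nn.Module):
    def __init__(self, obs_dim, action_dim, pred_dim, d_model=128):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, action_dim)  # where to touch next
        self.pred_head = nn.Linear(d_model, pred_dim)      # current property estimate

    def forward(self, obs_history):                 # (batch, time, obs_dim)
        h = self.encoder(self.embed(obs_history))
        h_last = h[:, -1]                           # summary of the interaction so far
        return self.action_head(h_last), self.pred_head(h_last)

# The action head would be optimized with an off-policy RL objective and the
# prediction head with a supervised loss, both backpropagating into the
# shared encoder.
```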
June 12, 2025 at 12:33 PM
3️⃣ Introducing TAP:
We propose TAP (Task-agnostic Active Perception) — a novel method that combines RL and transformer models for tactile exploration. Unlike previous methods, TAP is completely task-agnostic, i.e., it can learn to solve a variety of active perception problems.
June 12, 2025 at 12:33 PM