@timschneider94.bsky.social
Also, franky gives you access to some functionality that is otherwise only available through the web interface, such as enabling FCI and unlocking the brakes, straight from Python!
But please don't tell Franka Robotics 🤫, because using their API like that is probably illegal.
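Something like this (purely a hypothetical sketch; the method names below are invented placeholders, not confirmed franky API):

```python
# Hypothetical sketch: every method name below is an invented placeholder,
# not confirmed franky API -- check the franky docs for the actual interface.
from franky import Robot

robot = Robot("172.16.0.2")  # example robot IP
robot.unlock_brakes()        # placeholder: release the joint brakes (normally done in Desk)
robot.enable_fci()           # placeholder: activate the Franka Control Interface
```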
July 31, 2025 at 5:09 PM
But wait, there is more!
franky exposes most of libfranka's functionality in its Python API (rough sketch after the list 👇):
🔧 Redefine end-effector properties
⚖️ Tune joint impedance
🛑 Set force/torque thresholds
…and much more!
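Roughly what a couple of these calls might look like (a sketch modeled on libfranka's setters; franky's exact method names and signatures may differ):

```python
# Sketch modeled on libfranka's setters -- franky's exact method names
# and signatures may differ (assumptions, not verified API).
from franky import Robot

robot = Robot("172.16.0.2")  # example robot IP

# Joint impedance of the internal controller, one stiffness value per joint
robot.set_joint_impedance([3000, 3000, 3000, 2500, 2500, 2000, 2000])

# Contact/collision thresholds: 7 joint-torque limits (Nm) and 6 Cartesian
# force/torque limits (N, Nm) -- simplified signature assumed here
robot.set_collision_behavior([20.0] * 7, [30.0] * 6)
```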
July 31, 2025 at 5:09 PM
Here’s how simple robot control looks with franky 👇

No ROS nodes. No launch files. Just five lines of Python.
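The five lines, roughly (a sketch based on franky's documented basic usage; the IP is a placeholder and the original screenshot may differ):

```python
from franky import Affine, CartesianMotion, ReferenceType, Robot

robot = Robot("172.16.0.2")            # connect to the robot (example IP)
robot.relative_dynamics_factor = 0.05  # run at 5 % of the velocity/acceleration limits

# Move 20 cm along the end-effector's x-axis
motion = CartesianMotion(Affine([0.2, 0.0, 0.0]), ReferenceType.Relative)
robot.move(motion)
```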
July 31, 2025 at 5:09 PM
franky supports position & velocity control in both joint and task space — plus gripper control, contact reactions, and more! 🤖
With franky, you get real-time control both in C++ & Python: commands are fully preemptible, and Ruckig replans smooth trajectories on the fly.
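Preempting a running motion is just a matter of sending a new one. A minimal sketch, assuming the asynchronous move API behaves roughly like this:

```python
# Sketch of preemptible motions; the asynchronous=True flag and join_motion()
# are assumed to behave roughly as shown here.
from franky import Affine, CartesianMotion, ReferenceType, Robot

robot = Robot("172.16.0.2")  # example robot IP
robot.relative_dynamics_factor = 0.05

# Start a motion without blocking the Python process...
robot.move(CartesianMotion(Affine([0.3, 0.0, 0.0]), ReferenceType.Relative), asynchronous=True)

# ...then preempt it mid-way with a new target; Ruckig re-plans a smooth
# trajectory from the current state on the fly.
robot.move(CartesianMotion(Affine([0.0, 0.2, 0.0]), ReferenceType.Relative), asynchronous=True)
robot.join_motion()  # block until the (re-planned) motion finishes
```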
July 31, 2025 at 5:09 PM
4️⃣ We also find that, even for wider holes, the resulting vision-only policy is significantly less robust to changes in the environment (different hole sizes or angles) when tested zero-shot. In contrast, the vision-tactile policy remains robust even under unseen conditions.
June 13, 2025 at 10:54 AM
3️⃣ Results?
Turns out, as long as the hole is wide enough, a vision-only agent learns to solve the task just as well as a vision-tactile agent. However, once we make the hole tighter, the vision-only agent fails and gets stuck in a local minimum.
June 13, 2025 at 10:54 AM
2️⃣ What did we do?
We built a real, fully autonomous, and self-resetting tactile insertion setup and trained model-based RL directly in the real world. Using this setup, we ran extensive experiments to understand the role of vision and touch in this task.
June 13, 2025 at 10:54 AM
Stoked to present another work at RLDM 2025! If you’re into dexterous robotics, multimodal RL, or tactile sensing, swing by Poster 100 today to see what we cooked up 🦾✨
#Robotics #TactileSensing #RL #DexterousManipulation @ias_tudarmstadt

🧵
June 13, 2025 at 10:54 AM
7️⃣ What about baselines?
TAP outperformed both a random baseline and the prior state of the art (HAM), highlighting the value of attention-based models and off-policy RL for tactile exploration.
June 12, 2025 at 12:33 PM
6️⃣ We observe that TAP learns reasonable strategies. For example, when estimating the pose of a wrench, TAP first scans the surface to find the handle and then moves towards one of its ends to determine the object's position and orientation.
June 12, 2025 at 12:33 PM
5️⃣ Key Experiments:
We tested TAP on a variety of ap_gym (github.com/TimSchneider...) tasks from the TactileMNIST benchmark (sites.google.com/robot-learni...).
In all cases, TAP learns to actively explore & infer object properties efficiently.
June 12, 2025 at 12:33 PM
4️⃣ How Does TAP Work?
TAP jointly learns action and prediction with a shared transformer encoder, using a combination of RL and supervised learning. We show that TAP's formulation arises naturally when optimizing a supervised learning objective w.r.t. both action and prediction.
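In spirit, the shared-encoder setup looks something like this (an illustrative sketch with made-up sizes and a placeholder actor loss, not TAP's actual implementation):

```python
# Illustrative sketch only: layer sizes, heads, and the RL loss term are
# placeholders, not TAP's actual architecture or objective.
import torch
import torch.nn as nn

class SharedEncoderAgent(nn.Module):
    def __init__(self, obs_dim=64, act_dim=2, pred_dim=10, d_model=128):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)    # shared encoder
        self.policy_head = nn.Linear(d_model, act_dim)       # trained with RL
        self.prediction_head = nn.Linear(d_model, pred_dim)  # trained with supervision

    def forward(self, obs_seq):
        h = self.encoder(self.embed(obs_seq))  # (batch, time, d_model)
        h_last = h[:, -1]                      # summary of the observation history
        return self.policy_head(h_last), self.prediction_head(h_last)

agent = SharedEncoderAgent()
obs_seq = torch.randn(8, 16, 64)       # batch of tactile observation histories
labels = torch.randint(0, 10, (8,))    # e.g. object class to predict
actions, preds = agent(obs_seq)
supervised_loss = nn.functional.cross_entropy(preds, labels)
rl_loss = actions.pow(2).mean()        # placeholder for the actual actor loss
(supervised_loss + rl_loss).backward() # both losses update the shared encoder
```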
June 12, 2025 at 12:33 PM
3️⃣ Introducing TAP:
We propose TAP (Task-agnostic Active Perception) — a novel method that combines RL and transformer models for tactile exploration. Unlike previous methods, TAP is completely task-agnostic, i.e., it can learn to solve a variety of active perception problems.
June 12, 2025 at 12:33 PM
Excited to present our latest work at RLDM 2025! If you’re curious about tactile sensing, active perception, or RL in robotics, stop by my poster. Here’s what we’ve been up to:
🧵
#Robotics #TactileSensing #ReinforcementLearning #Transformers #ActivePerception @ias-tudarmstadt.bsky.social
June 12, 2025 at 12:33 PM