Control Strategies for Contact-Rich Robotics Systems


Summary

Control strategies for contact-rich robotics systems are specialized methods that help robots safely and accurately interact with people or objects in situations where physical contact and force feedback are vital, such as grasping, pushing, or walking. These approaches combine force sensing, learning from human demonstrations, and advanced algorithms to help robots adapt and respond during real-world manipulation tasks.

  • Prioritize safety: Use control methods that separate intentional movements from unintentional ones, so robots can remain predictable and safe when working near people.
  • Incorporate sensing: Integrate tactile or visual sensors to help robots measure or estimate contact forces, enabling them to adjust their actions in real time.
  • Blend learning approaches: Combine imitation learning from human demonstrations with reinforcement and vision-based models to teach robots versatile manipulation skills that work in diverse environments.
Summarized by AI based on LinkedIn member posts
  • View profile for Supriya Rathi

    105k+ | India #1 Robotics Communicator. World #10 | Share your research, and find new ideas through my community | DM for global collabs

    108,575 followers

    Presenting FEELTHEFORCE (FTF): a robot learning system that models human tactile behavior to learn force-sensitive manipulation. Using a tactile glove to measure contact forces and a vision-based model to estimate hand pose, they train a closed-loop policy that continuously predicts the forces needed for manipulation. This policy is re-targeted to a Franka Panda robot with tactile gripper sensors using shared visual and action representations. At execution, a PD controller modulates gripper closure to track predicted forces, enabling precise, force-aware control. This approach grounds robust low-level force control in scalable human supervision, achieving a 77% success rate across 5 force-sensitive manipulation tasks.

    #research: https://lnkd.in/dXxX7Enw #github: https://lnkd.in/dQVuYTDJ

    #authors: Ademi Adeniji, Zhuoran (Jolia) Chen, Vincent Liu, Venkatesh Pattabiraman, Raunaq Bhirangi, Pieter Abbeel, Lerrel Pinto, Siddhant Haldar (New York University; University of California, Berkeley; NYU Shanghai)

    Controlling fine-grained forces during manipulation remains a core challenge in robotics. While robot policies learned from robot-collected data or simulation show promise, they struggle to generalize across the diverse range of real-world interactions. Learning directly from humans offers a scalable solution, enabling demonstrators to perform skills in their natural embodiment and in everyday environments. However, visual demonstrations alone lack the information needed to infer precise contact forces.
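To make the force-tracking step concrete, here is a minimal sketch of a PD loop that modulates gripper closure so a measured contact force tracks a policy-predicted force. The gains, loop rate, and the sensor/gripper helper functions are illustrative assumptions, not FTF's actual controller or interfaces.

```python
import time

# Minimal sketch (not the paper's controller): a PD loop that adjusts gripper
# width so the measured contact force tracks the force predicted by the policy.
# The sensor/gripper functions below are hypothetical stand-ins.

KP = 0.002   # m per N, proportional gain (illustrative)
KD = 0.0005  # m per (N/s), derivative gain (illustrative)
DT = 0.02    # 50 Hz control loop (illustrative)

def read_tactile_force() -> float:
    """Hypothetical stand-in for reading the gripper's tactile sensor [N]."""
    return 0.0

def command_gripper_width(width: float) -> None:
    """Hypothetical stand-in for commanding the gripper opening [m]."""
    pass

def pd_force_step(predicted_force: float, width: float, prev_error: float, dt: float = DT):
    """One PD step: returns the new width command and the current force error."""
    error = predicted_force - read_tactile_force()
    d_error = (error - prev_error) / dt
    # Too little force (positive error) -> close the gripper further (smaller width).
    width = max(0.0, width - (KP * error + KD * d_error))
    command_gripper_width(width)
    return width, error

def run(policy, get_observation, width: float = 0.08, steps: int = 500):
    """Closed loop: the learned policy predicts a target force, PD tracks it."""
    prev_error = 0.0
    for _ in range(steps):
        predicted_force = policy(get_observation())
        width, prev_error = pd_force_step(predicted_force, width, prev_error)
        time.sleep(DT)
```

The key design point is the division of labor: the learned policy only outputs a target contact force, while a simple low-level PD loop is responsible for realizing it on the hardware.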

  • View profile for Arash Ajoudani

    Director of HRI² Laboratory

    7,628 followers

    #Safety is crucial in human-robot interaction, especially for #mobile #robots. Without safety, #certification is impossible, and real-world applications are unfeasible. To address this, alongside our work on machine learning methods (which, despite their huge potential, are not yet certifiable), we use advanced #passivity and #powerbased control strategies to ensure optimal performance and safety.

    Recently, together with Theodora Kastritsi, we proposed a control strategy that decouples desired #dynamics from unintentional motion. This ensures changes in one direction do not affect the other. In the unintentional space, admittance parameters remain constant, while in the intended motion direction, inertia and damping gains adjust to provide compliance to the human user. We designed these variable terms to ensure a consistent response and perceived behavior, guaranteeing #strict #passivity under human force input for stable manipulation.

    In this video you can observe how smooth and robust the behavior of the proposed controller is across various trajectories and in comparison to advanced baseline controllers. Also, here is a link to our (open access) work: https://lnkd.in/dGfi7mJX
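As a rough illustration of the decoupling idea (not the published controller), the sketch below splits the human force into components along and orthogonal to the intended motion direction, applies variable inertia and damping along the intended direction, and keeps constant admittance parameters in the unintentional subspace. All gains and the adaptation law are placeholder assumptions.

```python
import numpy as np

# Illustrative sketch of direction-decoupled variable admittance:
# variable inertia/damping along the intended motion direction,
# constant admittance in the orthogonal (unintentional) subspace.

DT = 0.001                       # integration step [s]
M_CONST, D_CONST = 5.0, 40.0     # fixed admittance in the unintentional space

def variable_gains(speed_along: float):
    """Placeholder adaptation law: lighter and less damped when the user moves faster."""
    m = np.clip(8.0 - 4.0 * speed_along, 2.0, 8.0)
    d = np.clip(60.0 - 30.0 * speed_along, 15.0, 60.0)
    return m, d

def admittance_step(v: np.ndarray, f_human: np.ndarray, intent_dir: np.ndarray) -> np.ndarray:
    """One integration step; returns the updated Cartesian velocity command."""
    u = intent_dir / np.linalg.norm(intent_dir)
    f_par = np.dot(f_human, u) * u          # human force along intended motion
    f_perp = f_human - f_par                # unintentional component
    v_par = np.dot(v, u) * u
    v_perp = v - v_par
    m_var, d_var = variable_gains(float(np.linalg.norm(v_par)))
    a_par = (f_par - d_var * v_par) / m_var          # compliant along intent
    a_perp = (f_perp - D_CONST * v_perp) / M_CONST   # constant elsewhere
    return v + (a_par + a_perp) * DT

# Usage: v = admittance_step(v, measured_wrench[:3], np.array([1.0, 0.0, 0.0]))
```

Note that this toy adaptation law does not by itself guarantee passivity; the published work designs the variable terms specifically so that strict passivity under human force input is preserved.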

  • View profile for Ilir Aliu

    AI & Robotics | 100k+ | Scaling Deep Tech

    95,794 followers

    🦿 Can legged robots learn to control force and position… without force sensors? [📍 bookmark paper for later]

    This new work introduces a unified policy that enables legged robots to handle loco-manipulation tasks by learning both force and position control without using force sensors. It estimates contact forces from motion history and adapts in real time.

    Why this matters
    ✅ Jointly learns force and position control in one policy
    ✅ Works without force sensors by estimating forces from past states
    ✅ Handles complex tasks like force tracking and compliant behaviors
    ✅ Boosts imitation learning success by ~39.5% in contact-rich tasks

    Learn more
    📄 Paper: https://lnkd.in/d2VnU4uE
    📂 Project: https://lnkd.in/die5gyRA

    This brings us one step closer to agile, adaptable legged robots that can walk, push, and manipulate, all through a single, force-sensor-free policy.
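A minimal sketch of the general idea of estimating contact force from motion history rather than a force sensor: stack a short window of proprioceptive states and pass it through a learned regressor. The window length, state contents, and the linear placeholder model are assumptions for illustration; the paper's actual policy and network are not reproduced here.

```python
from collections import deque
import numpy as np

# Illustrative history-based contact force estimator: no force sensor,
# only a window of recent proprioceptive states fed to a learned mapping.

HISTORY = 25           # number of past time steps kept (assumed)
STATE_DIM = 2 * 12     # e.g. joint positions + velocities for 12 joints (assumed)

class ContactForceEstimator:
    def __init__(self, weights: np.ndarray, bias: np.ndarray):
        # weights: (3, HISTORY * STATE_DIM) mapping stacked history -> 3D contact force.
        # A trained model (e.g. an MLP) would replace this linear placeholder.
        self.weights, self.bias = weights, bias
        self.buffer = deque(maxlen=HISTORY)

    def update(self, q: np.ndarray, dq: np.ndarray) -> np.ndarray:
        """Push the latest joint state and return the current force estimate."""
        self.buffer.append(np.concatenate([q, dq]))
        if len(self.buffer) < HISTORY:          # not enough history yet
            return np.zeros(3)
        features = np.concatenate(list(self.buffer))   # (HISTORY * STATE_DIM,)
        return self.weights @ features + self.bias

# Usage with zero placeholder weights (stands in for a trained regressor):
est = ContactForceEstimator(np.zeros((3, HISTORY * STATE_DIM)), np.zeros(3))
f_hat = est.update(np.zeros(12), np.zeros(12))
```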

  • View profile for Naveen Manwani

    Product Operations Strategy | Scaling 0-to-1 | Bridging Tech & Business | Optimizing Workflows with GenAI

    6,903 followers

    🚨Paper Alert 🚨

    ➡️Paper Title: Touch begins where vision ends: Generalizable policies for contact-rich manipulation

    🌟Few pointers from the paper

    🎯Data-driven approaches struggle with precise manipulation; imitation learning requires many hard-to-obtain demonstrations, while reinforcement learning yields brittle, non-generalizable policies.

    🎯Authors of this paper introduced “VisuoTactile Local (ViTaL)” policy learning, a framework that solves fine-grained manipulation tasks by decomposing them into two phases:
    🧵a reaching phase, where a vision-language model (VLM) enables scene-level reasoning to localize the object of interest,
    🧵and a local interaction phase, where a reusable, scene-agnostic ViTaL policy performs contact-rich manipulation using egocentric vision and tactile sensing.

    🎯This approach is motivated by the observation that while scene context varies, the low-level interaction remains consistent across task instances.

    🎯By training local policies once in a canonical setting, they can generalize via a localize-then-execute strategy.

    🎯ViTaL achieves around 90% success on contact-rich tasks in unseen environments and is robust to distractors.

    🎯ViTaL's effectiveness stems from three key insights: (1) foundation models for segmentation enable training robust visual encoders via behavior cloning; (2) these encoders improve the generalizability of policies learned using residual RL; and (3) tactile sensing significantly boosts performance in contact-rich tasks.

    🎯Ablation studies validate each of these insights, and they demonstrated that ViTaL integrates well with high-level VLMs, enabling robust, reusable low-level skills.

    🏢Organization: New York University Shanghai, New York University, Honda Research

    🧙Paper Authors: Zifan Zhao, Siddhant Haldar, Jinda Cui, Lerrel Pinto, Raunaq Bhirangi

    📝 Read the Full Paper here: https://lnkd.in/gkqFaMh7
    🗂️ Project Page: https://lnkd.in/gDaF7Uvi
    🧑‍💻 Code: https://lnkd.in/gRFfuMTR

    🎥 Be sure to watch the attached Demo Video - Sound on 🔊🔊

    Find this Valuable 💎 ? ♻️REPOST and teach your network something new

    Follow me 👣, Naveen Manwani, for the latest updates on Tech and AI-related news, insightful research papers, and exciting announcements.
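A rough sketch of what a localize-then-execute loop can look like under the two-phase decomposition described above. The VLM localization call, robot interface, and local policy are hypothetical stand-ins, not ViTaL's actual code.

```python
# Illustrative localize-then-execute control flow: a scene-level reaching phase
# followed by a scene-agnostic local visuotactile interaction phase.
# All functions and the `robot` interface are hypothetical stand-ins.

def localize_object(rgb_image, task_prompt):
    """Phase 1 stand-in: scene-level reasoning (e.g. a VLM) returns a goal pose."""
    ...

def move_to(robot, goal_pose):
    """Reaching-phase stand-in: coarse motion to the neighborhood of the object."""
    ...

def run_task(robot, local_policy, task_prompt, max_steps=200):
    # Phase 1: reach. Scene context varies, so it is handled by scene-level reasoning.
    goal = localize_object(robot.camera(), task_prompt)
    move_to(robot, goal)
    # Phase 2: local interaction. Scene-agnostic, driven by egocentric vision + touch.
    for _ in range(max_steps):
        obs = {"wrist_rgb": robot.wrist_camera(), "tactile": robot.tactile()}
        action = local_policy(obs)
        robot.apply(action)
        if robot.task_done():
            break
```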
