Advancing Robotics Technology

Explore top LinkedIn content from expert professionals.

  • View profile for Dr. Martha Boeckenfeld
    Dr. Martha Boeckenfeld is an Influencer

    Master Future Tech (AI, Web3, VR) with Ethics| CEO & Founder, Top 100 Women of the Future | Award winning Fintech and Future Tech Leader| Educator| Keynote Speaker | Advisor| Board Member (ex-UBS, Axa C-Level Executive)|

    139,380 followers

    Surgical robots cost $2 million. Beijing just built one for $200,000.

    Watch it peel a quail egg: Shell removed. Inner membrane intact. Submillimeter accuracy that matches da Vinci at 90% less cost.

    Think about that. Most hospitals can't afford surgical robots. Rural clinics? Forget it. Patients travel hundreds of miles for robotic surgery or settle for traditional operations with higher risks. Beijing's Surgerii Robotics just broke that equation.

    Traditional Surgical Robotics:
    ↳ $2 million purchase price
    ↳ $200,000 annual maintenance
    ↳ Only major hospitals qualify
    ↳ Patients travel or wait

    Chinese Innovation Reality:
    ↳ $200,000 total cost
    ↳ Same precision standards
    ↳ Reaches district hospitals
    ↳ Surgery comes to patients

    But here's what stopped me cold: Professor Samuel Au left da Vinci to build a network of surgical robots. Engineers from Medtronic and GE walked away from Silicon Valley salaries to build this. They're not chasing profit margins. They're chasing one vision: "Every hospital should have one."

    The egg demonstration proves what matters: precision doesn't require premium pricing. The robot's multi-backbone continuum mechanisms deliver the same submillimeter accuracy whether peeling eggs or operating on hearts.

    What This Enables:
    ↳ Thoracic surgery in rural hospitals
    ↳ Urological procedures locally
    ↳ Reduced surgical trauma everywhere
    ↳ Surgeon shortage solutions

    The Multiplication Effect:
    1 affordable robot = 10 hospitals equipped
    100 deployed = provincial healthcare transformed
    1,000 units = surgical access democratized
    At scale = geography stops determining survival

    Traditional robotics kept precision exclusive. Surgerii makes it accessible. We're not watching price competition. We're watching healthcare democratisation.

    Because that farmer needing heart surgery shouldn't die waiting for a $2 million robot his hospital will never afford.

    Follow me, Dr. Martha Boeckenfeld, for innovations that put patients before profit margins.

    ♻️ Share if surgical precision should be accessible, not exclusive.

    #healthcare #innovation #precisionmedicine

  • View profile for Jim Fan
    Jim Fan is an Influencer

    NVIDIA Director of AI & Distinguished Scientist. Co-Lead of Project GR00T (Humanoid Robotics) & GEAR Lab. Stanford Ph.D. OpenAI's first intern. Solving Physical AGI, one motor at a time.

    223,316 followers

    Exciting updates on Project GR00T! We discovered a systematic way to scale up robot data, tackling the most painful pain point in robotics. The idea is simple: a human collects demonstrations on a real robot, and we multiply that data 1000x or more in simulation. Let's break it down:

    1. We use Apple Vision Pro (yes!!) to give the human operator first-person control of the humanoid. Vision Pro parses the human hand pose and retargets the motion to the robot hand, all in real time. From the human's point of view, they are immersed in another body, like in Avatar. Teleoperation is slow and time-consuming, but we can afford to collect a small amount of data.

    2. We use RoboCasa, a generative simulation framework, to multiply the demonstration data by varying the visual appearance and layout of the environment. In Jensen's keynote video below, the humanoid is now placing the cup in hundreds of kitchens with a huge diversity of textures, furniture, and object placement. We only have 1 physical kitchen at the GEAR Lab in NVIDIA HQ, but we can conjure up infinite ones in simulation.

    3. Finally, we apply MimicGen, a technique to multiply the above data even more by varying the *motion* of the robot. MimicGen generates a vast number of new action trajectories based on the original human data and filters out failed ones (e.g. those that drop the cup) to form a much larger dataset.

    To sum up: given 1 human trajectory with Vision Pro -> RoboCasa produces N (varying visuals) -> MimicGen further augments to NxM (varying motions). This is the way to trade compute for expensive human data via GPU-accelerated simulation.

    A while ago, I mentioned that teleoperation is fundamentally not scalable, because we are always limited by 24 hrs/robot/day in the world of atoms. Our new GR00T synthetic data pipeline breaks this barrier in the world of bits. Scaling has been so much fun for LLMs, and it's finally our turn to have fun in robotics!

    We are creating tools to enable everyone in the ecosystem to scale up with us:
    - RoboCasa: our generative simulation framework (Yuke Zhu). It's fully open-source! Here you go: http://robocasa.ai
    - MimicGen: our generative action framework (Ajay Mandlekar). The code is open-source for robot arms, but we will have another version for humanoids and 5-finger hands: https://lnkd.in/gsRArQXy
    - We are building a state-of-the-art Apple Vision Pro -> humanoid robot "Avatar" stack. Xiaolong Wang group's open-source libraries laid the foundation: https://lnkd.in/gUYye7yt
    - Watch Jensen's keynote yesterday. He cannot hide his excitement about Project GR00T and robot foundation models! https://lnkd.in/g3hZteCG

    Finally, the GEAR Lab is hiring! We want the best roboticists in the world to join us on this moon-landing mission to solve physical AGI: https://lnkd.in/gTancpNK
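
    Roughly, the multiplication described in the post can be pictured with a small Python sketch. The randomize_scene, perturb_trajectory, and simulate_success helpers below are hypothetical stand-ins, not the real RoboCasa or MimicGen APIs; only the 1 -> N -> N x M pipeline structure and the success filter at the end are the point.

    ```python
    # Hedged sketch of the 1 -> N -> N x M multiplication described above.
    # randomize_scene, perturb_trajectory, and simulate_success are hypothetical
    # stand-ins, not the real RoboCasa / MimicGen APIs.
    import random

    def randomize_scene(demo, seed):
        """Stand-in for RoboCasa-style randomization of scene appearance and layout."""
        rng = random.Random(seed)
        return {**demo, "scene": {"kitchen_id": rng.randint(0, 999), "texture_seed": rng.random()}}

    def perturb_trajectory(demo, seed):
        """Stand-in for MimicGen-style generation of new motions from the same demo."""
        rng = random.Random(seed)
        return {**demo, "actions": [a + rng.gauss(0.0, 0.01) for a in demo["actions"]]}

    def simulate_success(demo):
        """Stand-in for rolling the trajectory out in simulation and checking task success."""
        return sum(demo["actions"]) > 0  # placeholder criterion

    def multiply_demo(human_demo, n_visual=100, m_motion=10):
        dataset = []
        for i in range(n_visual):                         # RoboCasa: N visual variants
            variant = randomize_scene(human_demo, seed=i)
            for j in range(m_motion):                     # MimicGen: M motion variants each
                candidate = perturb_trajectory(variant, seed=i * m_motion + j)
                if simulate_success(candidate):           # drop failed rollouts
                    dataset.append(candidate)
        return dataset

    # One teleoperated demo (e.g. collected with Vision Pro) becomes up to N x M samples.
    demo = {"actions": [0.2, 0.5, 0.1], "scene": {}}
    print(len(multiply_demo(demo)))
    ```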

  • View profile for Supriya Rathi

    105k+ | India #1 Robotics Communicator. World #10 | Share your research, and find new ideas through my community | DM for global collabs

    108,574 followers

    Presenting FEELTHEFORCE (FTF): a robot learning system that models human tactile behavior to learn force-sensitive manipulation.

    Using a tactile glove to measure contact forces and a vision-based model to estimate hand pose, they train a closed-loop policy that continuously predicts the forces needed for manipulation. This policy is re-targeted to a Franka Panda robot with tactile gripper sensors using shared visual and action representations. At execution, a PD controller modulates gripper closure to track the predicted forces, enabling precise, force-aware control. This approach grounds robust low-level force control in scalable human supervision, achieving a 77% success rate across 5 force-sensitive manipulation tasks.

    #research: https://lnkd.in/dXxX7Enw
    #github: https://lnkd.in/dQVuYTDJ
    #authors: Ademi Adeniji, Zhuoran (Jolia) Chen, Vincent Liu, Venkatesh Pattabiraman, Raunaq Bhirangi, Pieter Abbeel, Lerrel Pinto, Siddhant Haldar
    New York University, University of California, Berkeley, NYU Shanghai

    Controlling fine-grained forces during manipulation remains a core challenge in robotics. While robot policies learned from robot-collected data or simulation show promise, they struggle to generalize across the diverse range of real-world interactions. Learning directly from humans offers a scalable solution, enabling demonstrators to perform skills in their natural embodiment and in everyday environments. However, visual demonstrations alone lack the information needed to infer precise contact forces.
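
    For readers curious what "a PD controller modulates gripper closure to track predicted forces" looks like in practice, here is a minimal, self-contained Python sketch. The toy contact model, gains, and constant 3 N target are illustrative assumptions standing in for the learned policy and the tactile sensor; this is not the FeelTheForce code.

    ```python
    # Minimal sketch: a PD controller adjusts gripper width so the measured contact
    # force tracks a target force. The contact model, gains, and fixed target are
    # illustrative assumptions, not the FeelTheForce implementation.

    def pd_step(target_force, measured_force, prev_error, dt, kp=2e-4, kd=5e-5):
        """Return (gripper-width change in meters, current force error)."""
        error = target_force - measured_force           # too little force -> close further
        d_error = (error - prev_error) / dt
        return -(kp * error + kd * d_error), error      # negative: more force needs a smaller width

    # Toy contact model: force grows linearly once the gripper squeezes below 4 cm.
    def contact_force(width, stiffness=500.0, contact_width=0.04):
        return max(0.0, (contact_width - width) * stiffness)

    width, prev_error, dt = 0.08, 0.0, 0.02             # start fully open, 50 Hz loop
    target = 3.0                                        # stand-in for the policy's predicted force (N)
    for _ in range(200):
        measured = contact_force(width)                 # stand-in for the tactile sensor reading
        delta, prev_error = pd_step(target, measured, prev_error, dt)
        width = max(0.0, width + delta)

    print(f"final width {width*100:.2f} cm, force {contact_force(width):.2f} N")
    ```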

  • View profile for Ted Strazimiri

    Drones & Data

    27,987 followers

    Researchers at the University of Hong Kong's MaRS Lab have just published another jaw-dropping paper featuring their safety-assured high-speed aerial robot path planning system, dubbed "SUPER". With a single MID360 lidar sensor they repeatedly achieved autonomous one-shot navigation at speeds exceeding 20 m/s in obstacle-rich environments. Since it only requires a single lidar, these vehicles can be built with a small footprint and navigate completely independently of light, GPS, and radio links.

    This is not just #SLAM on a #drone. The SUPER system continuously computes two trajectories in each re-planning cycle: a high-speed exploratory trajectory and a conservative backup trajectory. The exploratory trajectory is designed to maximize speed by considering both known free space and unknown areas, allowing the drone to fly aggressively and efficiently toward its goal. In contrast, the backup trajectory is entirely confined within the known free space identified by the point-cloud map, ensuring that if unforeseen obstacles are encountered or the system's perception becomes uncertain, it can safely switch to a precomputed, collision-free path.

    The direct use of lidar point clouds for mapping eliminates the need for time-consuming occupancy grid updates and complex data fusion algorithms. Combined with an efficient dual-trajectory planning framework, this leads to significant reductions in computation time, often an order of magnitude faster than comparable SLAM-based systems, allowing the MAV to operate at higher speeds without sacrificing safety.

    This two-pronged planning strategy is particularly innovative because it directly addresses the classic speed-safety trade-off in autonomous navigation. By planning an exploratory trajectory that pushes the speed envelope and a backup trajectory that guarantees safety, SUPER can achieve high-speed flight (demonstrated speeds exceeding 20 meters per second) without compromising on collision avoidance.

    If you've been tracking the progress of autonomy in aerial robotics and matching it to the winning strategies emerging in Ukraine, it's clear we're likely to experience another ChatGPT moment in this domain, very soon. #LiDAR scanners will continue to get smaller and cheaper, solid-state VCSEL-based sensors are rapidly improving, and it is conceivable that vehicles with this capability can be built and deployed with a bill of materials below $1000.

    Link to the paper in the comments below.
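
    A toy sketch of the dual-trajectory idea, reduced to a 2D grid: the exploratory plan may cross cells the sensor has not yet observed, while the backup plan stays entirely inside known-free space. The grid, cell labels, and breadth-first planner are illustrative assumptions, not the trajectory optimization used in SUPER.

    ```python
    # Hedged, toy-scale sketch of dual-trajectory replanning on a 2D grid.
    # FREE cells are known free, UNKNOWN cells have not been observed yet,
    # OBSTACLE cells are known occupied.
    from collections import deque

    FREE, UNKNOWN, OBSTACLE = ".", "?", "#"
    GRID = [
        "..??.",
        ".#??.",
        ".#.#.",
        ".....",
    ]

    def plan(start, goal, allow_unknown):
        """Breadth-first path search; unknown cells are traversable only if allowed."""
        rows, cols = len(GRID), len(GRID[0])
        ok = lambda r, c: (0 <= r < rows and 0 <= c < cols and GRID[r][c] != OBSTACLE
                           and (allow_unknown or GRID[r][c] != UNKNOWN))
        queue, seen = deque([(start, [start])]), {start}
        while queue:
            (r, c), path = queue.popleft()
            if (r, c) == goal:
                return path
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if ok(nr, nc) and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    queue.append(((nr, nc), path + [(nr, nc)]))
        return None

    start, goal = (0, 0), (0, 4)
    exploratory = plan(start, goal, allow_unknown=True)    # fast route through unknown space
    backup = plan(start, goal, allow_unknown=False)        # guaranteed-safe route in known-free space
    print("exploratory:", exploratory)
    print("backup:     ", backup)
    ```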

  • View profile for Aishwarya Srinivasan
    Aishwarya Srinivasan is an Influencer
    598,968 followers

    Ever wondered about the differences between traditional automation, AI automation, and AI agents? It's a question I get asked a lot, so I put together this infographic for all of you!

    1️⃣ Traditional Automation
    ↳ Primarily rule-based: think straightforward RPA (Robotic Process Automation), basic factory robots, or simple scripted IT tasks.
    ↳ Great for repetitive processes with predictable, static conditions.
    ↳ Still struggles with unpredictable changes, requiring frequent reprogramming by humans.
    ↳ Tools: UiPath, Blue Prism, Automation Anywhere; these remain the dominant RPA solutions, but they're increasingly integrating AI for tasks like document understanding.

    2️⃣ AI Automation
    ↳ How it works: machine learning and other AI approaches learn from data and adapt with minimal human intervention.
    ↳ Adapts to changing inputs, like email spam filters that get better over time or AI chatbots that refine responses.
    ↳ Examples: Fraud detection systems, recommendation engines, advanced chatbots.
    ↳ Tools/Frameworks:
    → Gumloop: A rising platform that lets teams prototype, test, and deploy AI models with minimal coding
    → Zapier: For connecting AI-driven workflows to thousands of apps

    3️⃣ AI Agents
    ↳ How they differ: These go beyond pattern recognition to reason, plan, and act autonomously.
    ↳ They actively make contextual decisions in real time, learning from ongoing interactions.
    ↳ Examples: Self-driving cars orchestrating traffic decisions, personal AI research assistants scouring data for insights, or "smart" systems that can optimize supply chains on the fly.
    ↳ Tools/Frameworks:
    → CrewAI: Focuses on real-time collaboration and multi-agent systems with a Pythonic design
    → LangChain: A framework that enables developers to build applications powered by large language models, suitable for creating custom AI agents
    → AutoGen: An open-source Python-based framework by Microsoft, designed for developers to create advanced AI agents with minimal coding
    → RASA: Open-source framework for building intelligent chatbots and voice assistants with advanced NLU
    → LangGraph: LangChain-created tool for building and managing complex generative AI agent workflows using graph-based architectures
    → OpenAI Swarm: Experimental framework for lightweight, customizable multi-agent systems focusing on flexible task delegation and coordination

    Foundational LLMs/SLMs:
    → Open-source models: Mistral AI, Microsoft Phi, Google Gemma, DeepSeek, Perplexity R1-1776, Meta Llama, Alibaba Qwen
    → Closed-source models: OpenAI, Anthropic, Perplexity, Google

    🚀 Top inference providers: Fireworks AI, Groq, Cerebras Systems

    I'd love to hear your experiences: Have you implemented AI agents recently? Any favorite frameworks or tools you think are game-changers? Share below 👇

    --------
    Share this post with your network ♻️
    Follow me (Aishwarya Srinivasan) for more AI insights, news, and educational content!
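
    To make the three levels concrete, here is a framework-agnostic Python sketch. The llm() stub, tool names, and prompt format are invented for illustration; frameworks like CrewAI, LangChain, or AutoGen manage a loop like the agent one below for you.

    ```python
    # Framework-agnostic sketch of the three levels in the infographic above.
    # The llm() stub and tools are illustrative assumptions, not a specific framework's API.

    def traditional_automation(invoice):
        # 1) Rule-based: a fixed condition that breaks when inputs stop matching the rule.
        return "approve" if invoice["amount"] < 1000 else "escalate"

    def ai_automation(invoice, fraud_score):
        # 2) AI automation: a learned model's score drives the decision and adapts with data.
        return "escalate" if fraud_score(invoice) > 0.8 else "approve"

    def ai_agent(task, llm, tools, max_steps=5):
        # 3) AI agent: reasons over context, picks a tool, observes the result, repeats.
        history = [f"Task: {task}"]
        for _ in range(max_steps):
            decision = llm("\n".join(history))      # e.g. "search: grippers" or "FINAL: ..."
            if decision.startswith("FINAL:"):
                return decision[len("FINAL:"):].strip()
            tool, arg = decision.split(":", 1)
            history.append(f"{decision} -> {tools[tool](arg.strip())}")
        return "step budget exhausted"

    # Tiny canned demo so the agent loop runs end to end.
    stub_llm = lambda ctx: "FINAL: found 3 papers" if "results" in ctx else "search: force-sensitive grippers"
    stub_tools = {"search": lambda q: f"3 results for '{q}'"}
    print(ai_agent("survey recent gripper research", stub_llm, stub_tools))
    ```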

  • View profile for Thomas Wolf

    Co-founder at 🤗 Hugging Face – Angel

    178,712 followers

    Impressive work by the new Amazon Frontier AI & Robotics team (from the Covariant acquisition) and collaborators!

    This research enables mapping long sequences of human motion (>30 sec) onto robots with various shapes, as well as robots interacting with objects (box, table, etc.) of different sizes, in particular sizes different from those in the training data. This enables easier in-simulation data augmentation and zero-shot transfer. It is impressive and a huge potential step toward reducing the need for human teleoperation data (which is hard to gather for humanoids).

    The dataset of trajectories is available on Hugging Face at: https://lnkd.in/eygXVVHx

    The full code framework is coming soon. Check out the project page, which has some pretty nice three.js interactive demos: https://lnkd.in/e2S-6K2T

    And kudos to the authors for open-sourcing the data, releasing the paper and (hopefully soon) the code. This kind of open-science project is a game changer in robotics.

  • It is always great to see something published that you have worked on for a considerable amount of time – but in this case, it feels really special. AI Act, GDPR, DSA, finance, medical devices, automotive regulation: so many things close to my (academic) heart, and I could combine them all in one study on the frictions, interdependencies, and ways forward through this regulatory jungle. Here are the key policy recommendations, structured by addressees; there are many more in the study (someone counted 25 :)). Important: Almost all of them can be achieved without any diminished protection of fundamental rights.

    European Legislators

    1. Designate a "Lead Act": Assign a leading regulatory framework for each sector, such as the AI Act or sector-specific laws, to reduce conflicts and enhance coherence. If that Lead Act is complied with, compliance with the other designated acts should be presumed, unless some specific provisions are exempted from that rule. Example: Art. 17(4) AI Act, one of my favorite norms in the Act, a hidden gem ;)

    2. Clarify AI Act-GDPR Alignment: Address contradictions, such as differing responsibilities for AI providers under the AI Act and data controllers under the GDPR, and rules for training AI on personal data.

    3. Develop Safe Harbor Standards: Create technical standards that provide compliance with the AI Act AND related regulations.

    4. Conduct Regular External Reviews: Periodically and EXTERNALLY evaluate the AI Act's implementation to address contradictions, regulatory gaps and new technological challenges.

    European Commission (AI Office and Sectoral Authorities)

    5. Enhance Risk Analysis for Hybrid Platforms: Develop integrated guidelines for platforms that incorporate generative AI, addressing systemic risks under both the AI Act and the DSA, and the mutual reinforcement of the specific platform and GenAI risks.

    6. Expand Data Access for Research: Establish mechanisms for vetted researchers to access both platform AND AI system data, inspired by the DSA's Article 40.

    National Legislators and Authorities

    7. Support SMEs: Introduce grant programs to help small and medium-sized enterprises comply with the AI Act and sector-specific regulations. This could, for example, fund access to training programs.

    8. Foster Oversight Synergies: Clearly institutionalize the necessary collaboration between national data protection, sectoral and AI Act oversight authorities for cohesive enforcement. Be agile and project-based in solving cases involving multiple Acts.

    Standardization Bodies

    9. Develop Unified Standards: Provide technical standards for the AI Act AND sectoral regulations.

    Industry and Civil Society

    10. Encourage Cross-Disciplinary Collaboration: Establish advisory groups combining industry, academic, and civil society expertise and liaising with the national AI authorities to address sector-specific challenges.

    Many thanks to Bertelsmann Stiftung, Julia Gundlach and Asena Soydas for enabling this!

  • View profile for Ludovic Subran
    Ludovic Subran is an Influencer

    Group Chief Investment Officer at Allianz, Senior Fellow at Harvard University

    47,111 followers

    "Reindustrializing #Europe in the age of AI 🤖": our latest report outlines what it will take.

    Amid intensifying global competition in AI and #Robotics, Europe faces a defining moment: reindustrialize or risk falling irreversibly behind. Robotics can help restore industrial sovereignty, address demographic headwinds, and boost productivity. We propose a 5-point strategic roadmap to reposition Europe as a credible competitor alongside the US and China:

    1️⃣ A European Robotics Roadmap – Focus on building champions in high-impact, under-robotized sectors: logistics, hospitality, agrifood, healthcare, aerospace, and defense. Prioritize strategic autonomy, not chasing lost ground in humanoids or autonomous vehicles.

    2️⃣ Capital Access for Robotics Startups – Address the 7x VC funding gap with the US by scaling Europe's venture capital market and reinforcing complementary funding streams.

    3️⃣ Bridging Innovation and Market – Tackle fragmentation through innovation clusters, regional champions, and greater public-private investment coordination. We recommend increasing the 2028–2034 EU budget by at least 5% with a dedicated robotics allocation.

    4️⃣ Upskilling the Workforce – Tackle skill shortages across factory floors and engineering teams. From frontline operators to system integrators, we need a unified "Robot Skills Framework" and modern vocational training.

    5️⃣ Smart Regulation – Align AI and robotics regulation to promote innovation. Use regulatory sandboxes, harmonized safety standards, and dynamic, risk-based approaches to support adoption, especially among SMEs.

    📘 Download the full report: https://lnkd.in/evxEPDgn

    #Robotics #AI #IndustrialPolicy #Reindustrialization #Innovation #VentureCapital #FutureOfWork #TechSovereignty #Automation #Manufacturing #Ludonomics #AllianzTrade #Allianz

  • View profile for Alexey Navolokin

    FOLLOW ME for breaking tech news & content • helping usher in tech 2.0 • at AMD for a reason w/ purpose • LinkedIn persona •

    769,350 followers

    Micro drones are no longer niche tools; they are becoming a core pillar of surveillance, security, and tactical intelligence across defense, public safety, and critical infrastructure. Have you seen this one? What's remarkable is not just the capability: it's the speed of evolution.

    📈 The Numbers Behind the Momentum
    • The global micro-drone market is growing at 16–19% CAGR, with forecasts projecting growth from ~$10B in 2024 to over $24B by 2029.
    • The small UAV market is expected to exceed $11B by 2030.
    • Defense and surveillance account for one of the largest and fastest-growing segments, driven by border security expansion, urban surveillance demand, and ISR (Intelligence, Surveillance, Reconnaissance) modernization.

    🧠 What Changed the Game?
    Modern micro drones now combine:
    • AI-powered navigation & object recognition
    • Real-time video transmission
    • Autonomous flight and obstacle avoidance
    • Swarm coordination capabilities
    • Ultra-miniaturized thermal + optical sensors

    Some nano-drones weigh under 20 grams, fly for 20–25 minutes, and transmit encrypted HD video over 1.5–2 km, all while operating with extremely low acoustic signatures. This level of capability was military-exclusive just a few years ago. Today, it's rapidly becoming standard.

    Micro surveillance drones are now actively used for:
    • Tactical reconnaissance in conflict zones
    • Law enforcement situational awareness
    • Crowd monitoring & perimeter security
    • Disaster response in collapsed or dangerous environments
    • Critical infrastructure inspection (energy, transport, telecom)

    At the tactical level, they allow frontline units to "see first" before entering hostile or uncertain environments, reducing risk and improving decision speed.

    🤖 The Rise of Swarm Intelligence
    One of the most disruptive developments is coordinated micro-drone swarms:
    • Multiple drones operating as a single intelligent system
    • Real-time terrain mapping
    • Autonomous target identification
    • Dynamic mission adaptation

    This shifts surveillance from isolated viewpoints to distributed intelligence networks in the air.

    ⚠️ The Strategic Challenge
    With power comes responsibility. Micro-drone surveillance forces critical conversations around:
    • Privacy and civil liberties
    • Airspace governance
    • Ethical deployment
    • Counter-drone defense systems
    • Digital sovereignty

    At the same time, governments and enterprises are investing heavily in anti-drone and RF-neutralization technologies, signaling that the drone vs. counter-drone race has already begun.

    #Drones #SurveillanceTechnology #DefenseTech #AI #AutonomousSystems #SecurityInnovation #FutureOfSurveillance

  • View profile for Clem Delangue 🤗
    Clem Delangue 🤗 is an Influencer

    Co-founder & CEO at Hugging Face

    288,423 followers

    🦾 Great milestone for open-source robotics: pi0 & pi0.5 by Physical Intelligence are now on Hugging Face, fully ported to PyTorch in LeRobot and validated side-by-side with OpenPI, for everyone to experiment with, fine-tune & deploy in their robots!

    π₀.₅ is a Vision-Language-Action model which represents a significant evolution from π₀ to address a big challenge in robotics: open-world generalization. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.

    Generalization must occur at multiple levels:
    - Physical Level: Understanding how to pick up a spoon (by the handle) or plate (by the edge), even with unseen objects in cluttered environments
    - Semantic Level: Understanding task semantics, where to put clothes and shoes (laundry hamper, not on the bed), and what tools are appropriate for cleaning spills
    - Environmental Level: Adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals

    The breakthrough innovation in π₀.₅ is co-training on heterogeneous data sources. The model learns from:
    - Multimodal Web Data: Image captioning, visual question answering, object detection
    - Verbal Instructions: Humans coaching robots through complex tasks step-by-step
    - Subtask Commands: High-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
    - Cross-Embodiment Robot Data: Data from various robot platforms with different capabilities
    - Multi-Environment Data: Static robots deployed across many different homes
    - Mobile Manipulation Data: ~400 hours of mobile robot demonstrations

    This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously.

    Huge thanks to the Physical Intelligence team & contributors!

    Model: https://lnkd.in/eAEr7Yk6
    LeRobot: https://lnkd.in/ehzQ3Mqy
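
    One way to picture that co-training mixture is a weighted sampler over data sources. The weights, toy datasets, and sampler below are illustrative assumptions (the source names follow the post), not the actual π₀.₅ training configuration.

    ```python
    # Minimal sketch of a weighted data-mixture "curriculum": each batch is drawn
    # across heterogeneous sources in proportion to assumed mixture weights.
    import random

    MIXTURE_WEIGHTS = {
        "web_multimodal": 0.35,          # captioning, VQA, object detection
        "verbal_instructions": 0.10,
        "subtask_commands": 0.15,
        "cross_embodiment_robot": 0.20,
        "multi_environment_static": 0.10,
        "mobile_manipulation": 0.10,     # ~400 h of mobile demos per the post
    }

    def sample_batch(datasets, batch_size=32, rng=None):
        """Draw one batch whose examples are mixed across all sources by weight."""
        rng = rng or random.Random(0)
        names = list(MIXTURE_WEIGHTS)
        weights = [MIXTURE_WEIGHTS[n] for n in names]
        sources = rng.choices(names, weights=weights, k=batch_size)
        return [(s, rng.choice(datasets[s])) for s in sources]

    # Toy datasets: each source is just a list of placeholder examples here.
    toy = {name: [f"{name}_example_{i}" for i in range(100)] for name in MIXTURE_WEIGHTS}
    batch = sample_batch(toy)
    print(sum(src == "web_multimodal" for src, _ in batch), "of 32 examples came from web data")
    ```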
