Applications of Robotics

Explore top LinkedIn content from expert professionals.

  • View profile for Jim Fan
Jim Fan is an Influencer

    NVIDIA Director of AI & Distinguished Scientist. Co-Lead of Project GR00T (Humanoid Robotics) & GEAR Lab. Stanford Ph.D. OpenAI's first intern. Solving Physical AGI, one motor at a time.

    223,316 followers

    DexCap: a $3,600 open-source hardware stack that records human finger motions to train dexterous robot manipulation. It's like a very "lo-fi" version of Optimus, but affordable to academic researchers. This isn't teleoperation: data collection is decoupled from the robot execution, so that you don't need a one-to-one ratio of human operators babysitting the robots at all times. Great work from Chen Wang et al. at Stanford AI Lab! Website: https://dex-cap.github.io Hardware assembly guide: https://lnkd.in/dDb9zFDe
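For readers who want a concrete picture of what "decoupled" collection can look like, here is a minimal, hedged sketch: hand-pose frames are logged to disk during collection and consumed later by an offline training job, with no robot in the loop. The field names and file layout are hypothetical illustrations, not DexCap's actual data format.

```python
# Hypothetical sketch of decoupled data collection: log human hand poses now,
# train a manipulation policy from the saved files later (no robot in the loop).
import json, time
from pathlib import Path

LOG_DIR = Path("hand_pose_logs")  # hypothetical location
LOG_DIR.mkdir(exist_ok=True)

def record_frame(frame_id: int, fingertip_xyz: list[list[float]], wrist_pose: list[float]) -> None:
    """Append one mocap frame to disk; the robot never sees this at collection time."""
    frame = {
        "t": time.time(),
        "frame_id": frame_id,
        "fingertips": fingertip_xyz,   # e.g. 5 x [x, y, z] in the camera frame
        "wrist": wrist_pose,           # e.g. [x, y, z, qx, qy, qz, qw]
    }
    with open(LOG_DIR / f"{frame_id:08d}.json", "w") as f:
        json.dump(frame, f)

def load_dataset() -> list[dict]:
    """Later, an offline training job replays the logged frames to build a dataset."""
    return [json.loads(p.read_text()) for p in sorted(LOG_DIR.glob("*.json"))]
```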

  • View profile for Miguel Fierro
Miguel Fierro is an Influencer

    I help people understand and apply AI

    78,244 followers

There is a push to use Model Predictive Control (MPC) instead of Reinforcement Learning (RL) in LLMs. MPC is not as common in AI but is well-known in robotics. Here is a simple explanation.

Model Predictive Control (MPC):
• Model-Based: MPC relies on an explicit model of the system. This model is used to predict how the system will respond to different control inputs.
• Optimization in Real-Time: At each time step, MPC solves an optimization problem to find the best sequence of control actions for the future, based on the current state and model predictions.
• Constraints: MPC can handle constraints directly in its optimization problem, which is crucial for systems with operational limits.
• Predictive Horizon: It uses a "rolling horizon" where future states are predicted and optimized over a time window, but only the first action is implemented.
• Feedback: Incorporates feedback by updating the system's state at each step, allowing for adjustments to the control strategy based on actual outcomes.

Reinforcement Learning (RL):
• Model-Free: RL typically does not require an explicit model of the environment. Instead, it learns from interaction, through trial and error.
• Learning from Experience: An RL agent learns by exploring the environment, receiving rewards or penalties for actions taken, and adjusting its policy (strategy) over time.
• Policy or Value Function: RL either learns a policy (what action to take in each state) or a value function (how good it is to be in a particular state or take an action from that state).
• Long-Term Optimization: RL aims to maximize cumulative reward over time, which might not be immediately apparent in short-term actions.
• Exploration vs. Exploitation: RL agents often need to balance between exploiting known good actions and exploring new actions to potentially find better strategies.

Was this useful?
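To make the "rolling horizon" concrete, here is a minimal, hedged sketch of receding-horizon MPC on a toy 1-D double integrator. The dynamics, cost weights, and constants are illustrative assumptions, not tied to any particular system: at each step the controller optimizes a short control sequence using an explicit model, applies only the first action, then re-solves from the newly measured state.

```python
# A minimal receding-horizon MPC sketch: 1-D double integrator, re-optimize a
# short control sequence at every step, apply only the first input.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON, TARGET = 0.1, 10, 1.0       # step size, prediction horizon, goal position
U_MAX = 2.0                              # actuator constraint handled via bounds

def rollout(state, controls):
    """Predict future states with the explicit model x' = x + v*dt, v' = v + u*dt."""
    x, v = state
    cost = 0.0
    for u in controls:
        x, v = x + v * DT, v + u * DT
        cost += (x - TARGET) ** 2 + 0.01 * u ** 2   # tracking error + control effort
    return cost

def mpc_step(state, u_init):
    """Solve the finite-horizon problem from the current state; return first action."""
    res = minimize(lambda u: rollout(state, u), u_init,
                   bounds=[(-U_MAX, U_MAX)] * HORIZON, method="L-BFGS-B")
    return res.x[0], res.x

state, u_warm = np.array([0.0, 0.0]), np.zeros(HORIZON)
for _ in range(50):
    u0, u_warm = mpc_step(state, u_warm)             # optimize over the window
    state = np.array([state[0] + state[1] * DT,      # apply only the first action,
                      state[1] + u0 * DT])           # then feedback: re-measure state
print("final position ~", round(float(state[0]), 3))
```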

  • View profile for Brij kishore Pandey
Brij kishore Pandey is an Influencer

    AI Architect | AI Engineer | Generative AI | Agentic AI

    693,358 followers

As we transition from traditional task-based automation to 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀, understanding 𝘩𝘰𝘄 an agent cognitively processes its environment is no longer optional — it's strategic. This diagram distills the mental model that underpins every intelligent agent architecture — from LangGraph and CrewAI to RAG-based systems and autonomous multi-agent orchestration.

The Workflow at a Glance
1. 𝗣𝗲𝗿𝗰𝗲𝗽𝘁𝗶𝗼𝗻 – The agent observes its environment using sensors or inputs (text, APIs, context, tools).
2. 𝗕𝗿𝗮𝗶𝗻 (𝗥𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 𝗘𝗻𝗴𝗶𝗻𝗲) – It processes observations via a core LLM, enhanced with memory, planning, and retrieval components.
3. 𝗔𝗰𝘁𝗶𝗼𝗻 – It executes a task, invokes a tool, or responds — influencing the environment.
4. 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 (Implicit or Explicit) – Feedback is integrated to improve future decisions.

This feedback loop mirrors principles from:
• The 𝗢𝗢𝗗𝗔 𝗹𝗼𝗼𝗽 (Observe–Orient–Decide–Act)
• 𝗖𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲𝘀 used in robotics and AI
• 𝗚𝗼𝗮𝗹-𝗰𝗼𝗻𝗱𝗶𝘁𝗶𝗼𝗻𝗲𝗱 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 in agent frameworks

Most AI applications today are still "reactive." But agentic AI — autonomous systems that operate continuously and adaptively — requires:
• A 𝗰𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗹𝗼𝗼𝗽 for decision-making
• Persistent 𝗺𝗲𝗺𝗼𝗿𝘆 and contextual awareness
• Tool-use and reasoning across multiple steps
• 𝗣𝗹𝗮𝗻𝗻𝗶𝗻𝗴 for dynamic goal completion
• The ability to 𝗹𝗲𝗮𝗿𝗻 from experience and feedback

This model helps developers, researchers, and architects 𝗿𝗲𝗮𝘀𝗼𝗻 𝗰𝗹𝗲𝗮𝗿𝗹𝘆 𝗮𝗯𝗼𝘂𝘁 𝘄𝗵𝗲𝗿𝗲 𝘁𝗼 𝗲𝗺𝗯𝗲𝗱 𝗶𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲 — and where things tend to break. Whether you're building agentic workflows, orchestrating LLM-powered systems, or designing AI-native applications — I hope this framework adds value to your thinking. Let's elevate the conversation around how AI systems 𝘳𝘦𝘢𝘴𝘰𝘯. Curious to hear how you're modeling cognition in your systems.
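As a minimal, hedged sketch of the perceive-reason-act-learn loop described above (all names are placeholders, and the "reasoning" step is a stand-in for an LLM call, not any specific framework's API):

```python
# Minimal sketch of a perception -> reasoning -> action -> learning loop.
# All names are placeholders; this is not LangGraph/CrewAI code.
from dataclasses import dataclass, field

@dataclass
class Agent:
    memory: list = field(default_factory=list)    # persistent context across steps

    def perceive(self, environment: dict) -> str:
        # Observation could come from text, APIs, tools, or sensors.
        return environment.get("observation", "")

    def reason(self, observation: str) -> str:
        # Stand-in for the core LLM plus planning/retrieval; here just a rule.
        context = " | ".join(self.memory[-3:])
        return f"act_on:{observation}" if observation else f"wait (context: {context})"

    def act(self, decision: str, environment: dict) -> str:
        environment["last_action"] = decision      # acting changes the environment
        return decision

    def learn(self, decision: str, feedback: float) -> None:
        # Feedback (implicit or explicit) is folded back into memory.
        self.memory.append(f"{decision} -> reward {feedback}")

agent, env = Agent(), {"observation": "inbox has 3 new tickets"}
for step in range(3):
    obs = agent.perceive(env)
    decision = agent.reason(obs)
    agent.act(decision, env)
    agent.learn(decision, feedback=1.0)            # e.g. a user-approval signal
    env["observation"] = ""                        # environment evolves between steps
print(agent.memory)
```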

  • View profile for Pascal BORNET

    #1 Top Voice in AI & Automation | Award-Winning Expert | Best-Selling Author | Recognized Keynote Speaker | Agentic AI Pioneer | Forbes Tech Council | 2M+ Followers ✔️

    1,500,877 followers

What a Self-Driving Bike Just Revealed About the Future of AI

A team at the Robotics and AI Institute (RAI) just built a bike that rides itself. No joystick. No remote. No pre-programmed routes. Just reinforcement learning in motion. It learns balance through trial and error — the same way humans do. Every wobble becomes feedback, every near-fall becomes data, every correction becomes memory.

Why it matters
Most AI systems fail when reality gets messy. This one doesn't. It adapts. It treats unpredictability not as a bug to fix, but as a teacher to learn from. That's a quiet but radical shift in how intelligence forms.

What this enables
→ Delivery robots that stay upright in crowded streets
→ Mobility aids that self-stabilize for elderly or disabled users
→ Rescue robots that recover in rough terrain
→ Industrial systems that keep moving safely under pressure

The deeper insight
We've spent years training AI for perfect control. But real intelligence — human or artificial — isn't about control. It's about correction. The ability to recover when the world stops behaving as expected. Maybe the next era of AI won't be about prediction at all. Maybe it will be about recovery.

So here's my question: Should the next generation of AI be trained for resilience before accuracy?

#AI #Robotics #MachineLearning #Resilience #Innovation #FutureOfWork
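As a hedged illustration of "every wobble becomes feedback" (a toy model, not RAI's actual method), the sketch below tunes a linear balance controller by simple trial-and-error search: candidate gains are perturbed at random, and a candidate is kept only if its run leans less and falls later than the best so far.

```python
# Toy trial-and-error sketch (not RAI's method): learn gains that keep a
# linearized "bike" upright; runs with less lean and no falls score higher.
import numpy as np

rng = np.random.default_rng(0)
DT, STEPS = 0.02, 500

def episode(gains):
    """Simulate lean dynamics theta'' = 9.8*theta + u with u = -k*theta - c*theta_dot."""
    k, c = gains
    theta, theta_dot, reward = 0.05, 0.0, 0.0
    for _ in range(STEPS):
        u = -k * theta - c * theta_dot
        theta_dot += (9.8 * theta + u) * DT
        theta += theta_dot * DT
        if abs(theta) > 0.5:             # a fall ends the run
            return reward - 100.0
        reward += 1.0 - abs(theta)       # every wobble lowers the reward
    return reward

gains, best = np.array([5.0, 1.0]), -np.inf
for trial in range(200):                 # trial and error: keep what worked
    candidate = gains + rng.normal(0, 0.5, size=2)
    score = episode(candidate)
    if score > best:
        best, gains = score, candidate
print("learned gains:", gains.round(2), "score:", round(best, 1))
```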

  • View profile for Nicolas Hubacz, M.S.

    90k | TMS | Neuroscience | Psychiatry | Neuromodulation | MedDevice | Business Development at Magstim

    90,355 followers

🧲 Magnetic Slime Robots & Healthcare 🏥

In medical technology, one of the most intriguing and promising innovations is the development of magnetic slime robots. These soft, flexible robots, composed of a magnetic slime material, are poised to revolutionize various aspects of healthcare, offering new possibilities in minimally invasive procedures, targeted drug delivery, and precise medical interventions.

💭 What Are Magnetic Slime Robots?
Magnetic slime robots are made from a combination of magnetic particles and a polymer matrix, resulting in a unique material that is both flexible and controllable through external magnetic fields. This allows the slime to navigate complex environments and change shape as needed, making it highly adaptable for various medical applications.

🔑 Key Applications in Healthcare

1️⃣ Minimally Invasive Surgery
🎯 Precision and Flexibility: Magnetic slime robots can be precisely guided to target areas within the body, minimizing damage to surrounding tissues. Their flexibility allows them to navigate through tight and complex anatomical structures that traditional surgical instruments cannot reach.
🤕 Reduced Recovery Time: The minimally invasive nature of these robots means smaller incisions and less trauma for patients, leading to quicker recovery times and reduced risk of complications.

2️⃣ Targeted Drug Delivery
🚄 Enhanced Efficacy: By navigating to specific sites within the body, magnetic slime robots can deliver medications directly to affected areas, increasing the efficacy of the treatment while minimizing side effects.
💊 Controlled Release: These robots can be engineered to release drugs in a controlled manner, ensuring that the medication is delivered at the right time and in the right dosage.

3️⃣ Medical Diagnostics
📸 Improved Imaging: Magnetic slime robots can carry imaging agents to specific parts of the body, enhancing the quality of medical imaging techniques such as MRI and CT scans. This can lead to more accurate diagnoses and better treatment planning.
👩🔬 Biopsy Procedures: These robots can be used to collect tissue samples from hard-to-reach areas, providing valuable diagnostic information with minimal invasiveness.

#Healthcare #Innovation #Science #MedTech

  • View profile for Clem Delangue 🤗
Clem Delangue 🤗 is an Influencer

    Co-founder & CEO at Hugging Face

    288,423 followers

🦾 Great milestone for open-source robotics: pi0 & pi0.5 by Physical Intelligence are now on Hugging Face, fully ported to PyTorch in LeRobot and validated side-by-side with OpenPI for everyone to experiment with, fine-tune & deploy in their robots!

π₀.₅ is a Vision-Language-Action model which represents a significant evolution from π₀ to address a big challenge in robotics: open-world generalization. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.

Generalization must occur at multiple levels:
- Physical Level: Understanding how to pick up a spoon (by the handle) or plate (by the edge), even with unseen objects in cluttered environments
- Semantic Level: Understanding task semantics, where to put clothes and shoes (laundry hamper, not on the bed), and what tools are appropriate for cleaning spills
- Environmental Level: Adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals

The breakthrough innovation in π₀.₅ is co-training on heterogeneous data sources. The model learns from:
- Multimodal Web Data: Image captioning, visual question answering, object detection
- Verbal Instructions: Humans coaching robots through complex tasks step-by-step
- Subtask Commands: High-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
- Cross-Embodiment Robot Data: Data from various robot platforms with different capabilities
- Multi-Environment Data: Static robots deployed across many different homes
- Mobile Manipulation Data: ~400 hours of mobile robot demonstrations

This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously.

Huge thanks to the Physical Intelligence team & contributors
Model: https://lnkd.in/eAEr7Yk6
LeRobot: https://lnkd.in/ehzQ3Mqy
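A hedged sketch of loading the ported checkpoint through LeRobot follows. The import path, Hub repo id, batch keys, and the select_action entry point are assumptions that may differ between LeRobot versions; the Model and LeRobot links in the post are the authoritative reference.

```python
# Hedged sketch: loading the ported pi0 policy through LeRobot.
# The import path and repo id below are assumptions and may differ between
# LeRobot versions; check the model card linked in the post for exact usage.
import torch
from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy  # assumed path

policy = PI0Policy.from_pretrained("lerobot/pi0")   # assumed Hub repo id
policy.eval()

# A VLA policy maps (images, robot state, language instruction) to actions.
# Keys and shapes here are illustrative only; real batches come from a LeRobot dataset.
batch = {
    "observation.images.top": torch.zeros(1, 3, 224, 224),
    "observation.state": torch.zeros(1, 14),
    "task": ["pick up the spoon by the handle"],
}
with torch.no_grad():
    action = policy.select_action(batch)             # assumed inference entry point
print(action.shape)
```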

  • View profile for Thomas Wolf

    Co-founder at 🤗 Hugging Face – Angel

    178,712 followers

Impressive work by the new Amazon Frontier AI & Robotics team (from the Covariant acquisition) and collaborators! This research enables mapping long sequences of human motion (>30 sec) onto robots with various shapes, as well as robots interacting with objects (box, table, etc.) of different sizes, in particular sizes different from those in the training data. This enables easier in-simulation data augmentation and zero-shot transfer. It is impressive, and potentially a huge step toward reducing the need for human teleoperation data (which is hard to gather for humanoids). The dataset trajectories are available on Hugging Face at: https://lnkd.in/eygXVVHx The full code framework is coming soon. Check out the project page, which has some pretty nice three.js interactive demos: https://lnkd.in/e2S-6K2T And kudos to the authors for open-sourcing the data, releasing the paper, and (hopefully soon) the code. This kind of open-science project is a game changer in robotics.

  • View profile for Greg Coquillo
Greg Coquillo is an Influencer

    Product Leader @AWS | Startup Investor | 2X Linkedin Top Voice for AI, Data Science, Tech, and Innovation | Quantum Computing & Web 3.0 | I build software that scales AI/ML Network infrastructure

    216,356 followers

Many AI agents look impressive in demos, but crash in real-world production. Why? Because scaling agents requires engineering discipline, not just clever prompts. Moving from prototype to production means tackling memory, observability, scalability, and resilience challenges. Let's explore the design principles that make AI agents production-ready.

🔸Why AI Agents Fail
Monolithic designs, missing scalability, and poor observability often break agents under real-world traffic.

🔸Microservices Architecture
Break agents into services like inference, planning, memory, and tools for flexibility and fault tolerance.

🔸Containerization & Orchestration
Use containers for packaging and Kubernetes for orchestration. Make it a habit from prototype to multi-agent production.

🔸Message Queues & Async Processing
Prevent bottlenecks with task queues, event sourcing, and non-blocking communication.

🔸Continuous Delivery (CI/CD)
Automate deployments with a three-stage pipeline for faster, safer updates.

🔸Load Balancing for Real Traffic
Distribute 50–5,000+ requests/minute with API gateways, application layers, and service mesh.

🔸Scalable Memory Layer
Use Redis for short-term context, SQL/NoSQL for structure, and Vector DBs for knowledge.

🔸Observability & Monitoring
Log calls, monitor latency, and enable human-in-the-loop reviews for deeper debugging.

The real test for AI agents is not the demo; it is surviving production traffic at scale. Have you had this experience? #AIAgent
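As a hedged sketch of the short-term memory layer mentioned above, using the standard redis-py client (key naming, window size, and TTL are illustrative assumptions):

```python
# Minimal sketch of a Redis-backed short-term memory for an agent session.
# Key naming and TTL are illustrative assumptions; redis-py is the standard client.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
SESSION_TTL = 3600  # keep a session's working context for an hour

def remember(session_id: str, role: str, content: str, max_turns: int = 20) -> None:
    """Push one conversation turn and cap the window so context stays bounded."""
    key = f"agent:ctx:{session_id}"
    r.rpush(key, json.dumps({"role": role, "content": content}))
    r.ltrim(key, -max_turns, -1)        # keep only the most recent turns
    r.expire(key, SESSION_TTL)          # short-term: idle sessions expire

def recall(session_id: str) -> list[dict]:
    """Read the current window back for the next inference call."""
    return [json.loads(x) for x in r.lrange(f"agent:ctx:{session_id}", 0, -1)]

remember("sess-42", "user", "Check the status of order #1001")
remember("sess-42", "assistant", "Order #1001 shipped yesterday.")
print(recall("sess-42"))
```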

  • This headline captures a growing reality: China’s rapid automation drive is reshaping global industrial competition. The charts below the headline tell the real story — China now installs more industrial robots each year than the rest of the world combined, and its robot density (robots per 10,000 workers) has surged past advanced economies like Germany, the US, and Japan. This transformation isn’t just about scale. It reflects a deep structural shift — from labor-cost advantage to productivity and precision dominance. Chinese factories, powered by robotics and AI, are fast becoming the global benchmark for efficiency, threatening to erode the technological and manufacturing edge long held by Western economies. For multinational executives, the “fear” stems less from politics and more from competitiveness: China’s mix of automation, vertical integration, and government-backed industrial strategy is creating a self-reinforcing ecosystem — one that could define the next industrial era. Sources: on graph

  • View profile for Nicholas Nouri

    Founder | APAC Entrepreneur of the year | Author | AI Global talent awardee | Data Science Wizard

    131,199 followers

Have you ever wondered who cleans the office bathrooms after a long day? It's a tough job that often goes unnoticed and isn't always the most pleasant task. Well, technology is stepping in to make a difference. An American company called Somatic has introduced robots that are already at work cleaning office bathrooms. Yes, you read that right - robots are now handling one of the least glamorous yet essential cleaning tasks in the workplace.

𝐒𝐨, 𝐇𝐨𝐰 𝐃𝐨 𝐓𝐡𝐞𝐬𝐞 𝐑𝐨𝐛𝐨𝐭𝐬 𝐖𝐨𝐫𝐤?
- Autonomous Operation: These robots navigate the bathroom space on their own, using advanced sensors to move around stalls, sinks, and other obstacles without human guidance.
- Thorough Cleaning: Equipped with cleaning tools and disinfectants, they can scrub toilets, mop floors, and sanitize surfaces, ensuring a consistent level of cleanliness every time.
- Safety Measures: They are designed to operate when the bathroom is unoccupied to ensure privacy and safety for everyone.

𝐖𝐡𝐲 𝐈𝐬 𝐓𝐡𝐢𝐬 𝐒𝐢𝐠𝐧𝐢𝐟𝐢𝐜𝐚𝐧𝐭?
- Addressing Labor Shortages: Cleaning jobs, especially in restrooms, can be hard to fill. Robots can take over repetitive and undesirable tasks, allowing human workers to focus on other responsibilities.
- Consistency and Efficiency: Robots perform tasks the same way each time, which means the cleanliness standards are consistently met or even exceeded.
- Health and Hygiene: Automating bathroom cleaning reduces human exposure to germs and hazardous cleaning chemicals, promoting a healthier work environment.

𝐅𝐨𝐫 𝐭𝐡𝐨𝐬𝐞 𝐜𝐮𝐫𝐢𝐨𝐮𝐬 𝐚𝐛𝐨𝐮𝐭 𝐭𝐡𝐞 𝐭𝐞𝐜𝐡𝐧𝐨𝐥𝐨𝐠𝐲
- Sensors: The robots use Lidar and other sensing technologies to map out the bathroom and detect obstacles.
- Programmable Schedules: They can be set to clean at specific times, such as overnight, to minimize disruption.
- Machine Learning: Over time, they learn the layout and can optimize their cleaning routes for better efficiency.

Are you comfortable with robots performing cleaning tasks in spaces like bathrooms? Where else do you think robots like these could make a positive impact?

#innovation #technology #future #management #startups
