Despite the enormous hype surrounding humanoids and their pursuit of “General Physical Intelligence,” we still have not seen a mobile manipulator that can reliably perform gross, non-suction manipulation across even a small set of objects. Achieving tightly coupled perception, planning, and control remains extremely challenging, even for basic manipulation with simple grippers.

Yet the public narrative suggests otherwise. Demos, videos, and headlines often paint an overly optimistic picture of progress. Many are teleoperated, heavily scripted, or engineered specifically for a single staged moment, far from the level of consistency required to deliver real value in the field.

So before we get carried away with buzzwords, we should clarify what we mean by generalization. There is a vast spectrum between a single-purpose robot and a machine capable of everything a human can do. While humanoids aim for the upper extreme of that spectrum, the most immediate opportunity lies somewhere in the middle: robots that can perform a small number of high-value tasks exceptionally well, reliably, repeatedly, and in real-world environments.

#Robotics #MobileManipulation #RoboticsEngineering #Perception #MotionPlanning #ControlSystems #IndustrialRobotics #Humanoids #AIInRobotics #RealWorldAI
Aligning Expectations With Realities in Mobile Robotics
Summary
Aligning expectations with realities in mobile robotics means making sure that what people hope robots can do matches what these machines can reliably achieve in real-world environments. This involves designing robots whose appearance, abilities, and roles are clearly communicated, while also understanding the technical and safety limits that come with cutting-edge robotics technology.
- Clarify capabilities: Communicate what robots can and cannot do in straightforward language to prevent disappointment and confusion.
- Design with purpose: Match a robot’s look and features to its actual abilities and intended tasks, avoiding designs that suggest more than what’s currently possible.
- Follow safety standards: Pay close attention to industry standards and safety thresholds when deploying robots among people, especially where physical or emotional trust is involved.
If you’re in #robotics, you’ve probably heard of Rodney Brooks’s Three Laws of Robotics, which emphasize real-world deployment. If not, here’s a quick summary:

1. Appearance Promise: A robot’s design should match its capabilities to meet user expectations.
2. Preserve Human Agency: Robots should never hinder human actions, especially in critical situations.
3. Reliability: Technologies need at least 10 years of refinement post-lab to achieve 99.9% reliability in real-world use.

These principles focus on creating practical robots that integrate smoothly into human environments. In this series of posts, I want to explain why humanoid robots look the way they do. This first part follows Brooks’s first law, the Appearance Promise: a robot’s outward appearance should accurately reflect its abilities. If a robot looks advanced, users will expect it to perform complex tasks, and misaligned expectations lead to disappointment or misuse. Designing robots that don’t overpromise through their appearance is therefore crucial: it helps users understand what the robot can and cannot do, reducing frustration and errors.

However, many robots today, especially humanoids, have neutral designs that don’t indicate their specific application. Why is this? The most likely root cause is that many companies are still unclear about their robots’ primary applications, so neutral designs are the safest choice. This is evident in several design aspects:

• Forms and Surfaces: Minimalistic surfaces are common, reflecting current trends and simplifying manufacturing. However, these forms often don’t communicate a specific application.
• Color, Material, and Finish (CMF): Neutral tones, often gray with a semi-matte finish, dominate. An example of CMF conveying functionality: metal panels suggest industrial use, while fabric implies household use.
• Proportions: Humanoid robots generally follow human proportions, but most stick to generic ratios without reflecting specific capabilities. Examples of proportions conveying a specific capability are the long lower bodies of marathon runners or the longer fingers of pianists.
• Character: The trend of a black oval head with a simple light interface shows that robots are designed to appear as neutral as possible, without a specific character, or perhaps waiting to adopt one.

While these robots are well designed by professional designers, some of whom I know and highly respect, it should be noted that the design requirements are driven by business needs. Lastly, even “when” general-purpose robots become a reality, their appearance should still be customized for their intended use. After all, don’t we humans wear different clothes for different activities?

End of part 1. #robotdesign #HRI #humanoid
The IEEE Humanoid Study Group just released its comprehensive report on robotics standards. With 160+ humanoid models from 120+ companies globally, we're at a critical juncture where the gap between promise and deployment isn't just technical - it's regulatory.

Standards are the invisible infrastructure that enables entire industries to scale. From USB ports and internet protocols to aviation safety rules: without standards, markets fragment, innovation stalls, and consumer trust evaporates. For humanoids, standards will determine whether a small or a large market gets unlocked in the mid-term.

Three key findings from the report:

1️⃣ Classification Crisis -- Current standards assume fixed-base robots. Humanoids break every assumption: they are machines that need multi-dimensional classification across physical capabilities and form factors, autonomy levels, and application domains (a warehouse deployment is not the same as eldercare). Without a common taxonomy, we can't even discuss safety meaningfully.

2️⃣ The Stability Paradox -- A 66 kg robot falling isn't just damage - it's life-threatening. Yet NO current standards account for actively balancing systems. Key insight: we don't need 100% stability (humans fall too) - we need quantifiable risk thresholds. ISO/AWI 25785 just launched as the first bipedal safety standard, and new metrics are needed: margin of stability, capture point, disturbance recovery (see the sketch after this post).

3️⃣ The Overtrust Problem -- The report includes a survey of 50+ experts, which revealed that humans expect emotional intelligence from humanoids. This creates unprecedented safety risks:
--> Appearance drives false capability assumptions
--> Users expect empathy, especially with vulnerable populations
--> Mismatch between expectations and reality endangers trust

💡 Most compelling finding: users interviewed wanted emotional intelligence MORE than perfect task execution. This reshapes development priorities.

My take: As someone working at the Edge AI/robotics intersection, standards aren't paperwork - they're the gateway to scale. The report's framework and its call to action for SDO collaboration (ASTM for test methods, IEEE for performance metrics, ISO for safety thresholds) is a path worth treading.

👏 Kudos to Aaron Prather for putting this out there: https://lnkd.in/e_x_aVpX

#Robotics #HumanoidRobots #Standards #EdgeAI #Safety #IEEE

Ali Shafti Joe Smallman Riccardo Secoli, PhD Max Middleton Maria J. Alonso Gonzalez, PhD Dev Singh Leila Takayama Vanessa Evers Allison Okamura Bern Grush Emma Ruttkamp-Bloem Khalfan Belhoul Pascale Fung Michael Spranger Paolo Pirjanian Ram Devarajulu Tim Ensor Sally Epstein Tom Shirley
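For readers unfamiliar with the stability metrics named in the second finding, here is a minimal illustrative sketch in Python. It assumes the linear inverted pendulum model and a one-dimensional base of support; the function names, numbers, and interpretation are my own illustrative assumptions, not definitions taken from the report or from ISO/AWI 25785.

```python
import math

GRAVITY = 9.81  # m/s^2


def capture_point(com_pos: float, com_vel: float, com_height: float) -> float:
    """Instantaneous capture point (extrapolated center of mass) along one axis,
    under the linear inverted pendulum model: xi = x + v / omega0, omega0 = sqrt(g / z)."""
    omega0 = math.sqrt(GRAVITY / com_height)
    return com_pos + com_vel / omega0


def margin_of_stability(com_pos: float, com_vel: float, com_height: float,
                        support_min: float, support_max: float) -> float:
    """Signed distance from the capture point to the nearest edge of the base of
    support (1-D). Positive means the capture point lies inside the support,
    i.e., the robot can come to rest without taking a step."""
    xi = capture_point(com_pos, com_vel, com_height)
    return min(support_max - xi, xi - support_min)


if __name__ == "__main__":
    # Hypothetical numbers: CoM 0.9 m high, 2 cm ahead of the ankle, moving
    # forward at 0.3 m/s; the feet cover roughly [-0.05 m, +0.20 m].
    b = margin_of_stability(com_pos=0.02, com_vel=0.3, com_height=0.9,
                            support_min=-0.05, support_max=0.20)
    verdict = "can stop in place" if b > 0.0 else "must step or use push recovery"
    print(f"margin of stability: {b:.3f} m -> {verdict}")
```

In this simplified picture, disturbance recovery can be probed by adding a velocity impulse to com_vel and checking whether the margin stays positive, or whether a step (which moves the base of support back under the capture point) is required.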