Let's talk for a minute about robotics demo videos. With #ces2024 this week, we're likely to see some impressive new demos. But there are also a lot of amazing trick-shot videos out there. Separating reality from stagecraft is hard. Here's how to be a sophisticated consumer of robotics videos.

1) Anything can be animated with stop-motion, even vegetables. https://lnkd.in/gWcncVx9 If you see a robotics video with a lot of frame skips or camera cuts, be wary. You'll notice Boston Dynamics videos are often a single continuous take with no camera cuts; that's impressive.

2) We used to be skeptical of anything in simulation, but in the past few years, simulation-to-real transfer has gotten very good. If you see only simulation, assume it hasn't been translated to the real world yet, and that takes time. But if you see simulation and real hardware in the same video, that suggests a real project, because almost everyone starts in sim now. This Salto video comes across as really credible... https://lnkd.in/ga5ahGFa

3) Wizard of Oz demos are really easy to do; by this I mean a demo where there's really a person behind the curtain. Stanford's Mobile ALOHA work is impressive, but if you just watch the first minute, you miss that the real innovation here is great teleoperation for collecting imitation learning data. Click ahead to minute 2:25 to see the Wizard of Oz rig. https://lnkd.in/gyE2XvCH Unfortunately, it's really hard to know whether someone is doing this, but it's a really low-integrity thing to show a robot doing something and not reveal the human controller behind it. People do it anyway. If you're considering a significant investment in a robotics company, never go just off the video... go see it first hand.

4) Single-task reinforcement learning works. You can learn a controller to do a single task today: open a door, stack a block, turn a crank. These tasks look impressive, and they are, but a good RL engineer can make one work in a couple of months (a minimal sketch of what that training loop looks like follows this post). One step harder is making it robust to subtle variations. Generalizing to multiple similar tasks is much harder. To tell whether a system can generalize, look for multiple trained tasks. TRI's diffusion policy videos show this. https://lnkd.in/gj_GEH_p

5) Pay attention to the environment and what they're not showing the robot do. For instance, Figure's recent video making coffee is just awesome. Fluid, single-cut, shows robustness to failure modes. Still just a single task, so claims of robotics' ChatGPT moment aren't in evidence here. Production quality is great. But you'll notice the robot doesn't lift anything heavier than a Keurig cup. Picking up mugs has been done, but they don't show that. Maybe the robot doesn't have that strength?
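To make point 4 concrete, here is a minimal sketch of what "learning a controller for a single task" looks like with off-the-shelf tools. Everything in it is an assumption for illustration: gymnasium's Pendulum-v1 stands in for a task like opening a door, and stable-baselines3's PPO stands in for whatever learner a team actually uses. The part the post warns about, generalizing beyond the one trained task, is exactly what a script like this does not give you.

```python
# Minimal single-task RL sketch: one environment, one reward, one policy.
# Pendulum-v1 is a stand-in for a real manipulation task such as opening a door.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Pendulum-v1")             # a single task with a fixed reward
model = PPO("MlpPolicy", env, verbose=1)  # standard on-policy learner
model.learn(total_timesteps=200_000)      # train on that one task only

# Roll out the learned controller on the same task it was trained on.
obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```

A policy trained this way can look flawless on video while still being brittle to any change in the task, which is why the post suggests looking for multiple trained tasks before drawing conclusions.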
Robotics Innovation Demonstration Video
Explore top LinkedIn content from expert professionals.
Summary
A robotics innovation demonstration video is a visual presentation that showcases new advancements in robot technology and highlights how these systems perform tasks in real-world or simulated environments. These videos help everyday viewers understand the latest breakthroughs in robotics and their practical applications, making technical progress accessible and engaging.
- Examine production details: Watch for clues such as camera cuts, frame skips, or simulation-only footage to determine whether a robotics demonstration is showing genuine innovation or staged results.
- Notice task diversity: Look for videos where robots perform multiple tasks or adapt to changing environments, as this signals progress toward more capable and flexible robotic systems.
- Consider user interfaces: Pay attention to demonstrations that show how non-experts can interact with robots, as intuitive interfaces are key to making robotics useful for a wide range of people.
Happy to share our latest paper, "Enabling Novel Mission Operations and Interactions with ROSA: The Robot Operating System Agent". This work was led by Rob R. in collaboration with Marcel Kaufmann, Jonathan Becktor, Sangwoo Moon, Kalind Carpenter, Kai Pak, Amanda Towler, Rohan Thakker and myself. Please find the #OpenSource code, paper, and video demonstration linked below.

Operating autonomous robots in the field is often challenging, especially at scale and without the proper support of Subject Matter Experts (SMEs). Traditionally, robotic operations require a team of specialists to monitor diagnostics and troubleshoot specific modules. This dependency can become a bottleneck when an SME is unavailable, making it difficult for operators not only to understand the system's functional state but also to leverage its full capability set. The challenge grows when scaling to 1-to-N operator-to-robot interactions, particularly with a heterogeneous robot fleet (e.g., walking, roving, flying robots).

To address this, we present the ROSA framework, which can leverage state-of-the-art Vision Language Models (VLMs), both on-device and online, to present the autonomy framework's capabilities to operators in an intuitive and accessible way. By enabling a natural language interface, ROSA helps bridge the gap for operators who are not roboticists, such as geologists or first responders, to effectively interact with robots in real-world missions.

In our video, we demonstrate ROSA using the NeBula Autonomy framework developed at NASA Jet Propulsion Laboratory to operate in JPL's #MarsYard. Our paper also showcases ROSA's integration with JPL's EELS (Exobiology Extant Life Surveyor) robot and the NVIDIA Carter robot in the IsaacSim environment (stay tuned for ROSA IsaacSim extension updates!). These examples highlight ROSA's ability to facilitate interactions across diverse robotic platforms and autonomy frameworks.

Paper: https://lnkd.in/g4PRjF4V Github: https://lnkd.in/gwWXmmjR Video: https://lnkd.in/gxKcum27 #Robotics #Autonomy #AI #ROS #FieldRobotics #RobotOperations #NaturalLanguageProcessing #LLM #VLM
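The natural-language interface idea is easy to picture in miniature. Below is a hedged sketch of the general pattern, not ROSA's actual API (that is in the linked GitHub repo): autonomy capabilities are registered as described tools, and a language model would choose one from the operator's question. The model call here is replaced by keyword matching so the snippet runs on its own, and the tool names and return strings are invented for illustration.

```python
# Hedged sketch of a natural-language robot interface (NOT ROSA's actual API):
# expose capabilities as named "tools", let a language model pick one from the
# operator's request, then dispatch. The model call is stubbed with keyword
# matching so the example is self-contained.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Tool:
    name: str
    description: str            # what a real system would show the LLM/VLM
    run: Callable[[], str]

def battery_status() -> str:
    return "battery at 82%"     # stand-in for querying a ROS diagnostics topic

def active_nodes() -> str:
    return "nebula_planner, odometry, lidar_driver"  # stand-in for a node listing

TOOLS: Dict[str, Tool] = {
    "battery": Tool("battery", "Report the robot's charge state", battery_status),
    "nodes":   Tool("nodes", "List the running autonomy modules", active_nodes),
}

def answer(operator_query: str) -> str:
    """Map a natural-language query to a capability and run it. In a real
    system the tool descriptions go to an LLM/VLM; here we keyword-match."""
    q = operator_query.lower()
    for keyword, tool in TOOLS.items():
        if keyword in q:
            return tool.run()
    return "No matching capability found."

print(answer("How much battery does the rover have left?"))  # -> battery at 82%
```

The design point this illustrates is the one the post makes: the operator asks about the robot in plain language and never has to know which diagnostic topic or module answers the question.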
🚀 Mars is unforgiving. Every wheel turn is critical. Every decision matters 🤯 Imagine the day when rovers handle broken wheels on Mars. I did too, and decided to find out.

Having this in mind, my research led me to a Stanford University paper that proposes a unified framework integrating #ModelPredictiveControl (MPC). It leverages transformer-based #neuralnetworks to enhance trajectory generation. This approach is particularly exciting as it addresses a key challenge: keeping robotic systems moving despite hardware damage, much like NASA - National Aeronautics and Space Administration's #Opportunity rover dealing with a damaged wheel.

But it is a long path for me to learn and prototype this paper. The first step is to simulate rover paths using the #BicycleModel and Pure Pursuit Control, a stepping stone in a multi-phase project to replicate the paper.

Umm, so what did I do? I developed a simulation to model how a #rover adapts to mobility constraints. By combining the Bicycle Model with Pure Pursuit Control, my system dynamically tracks a #targetpath, adjusting steering and speed based on real-time conditions.

Key highlights:
- Simplified vehicle dynamics with the Bicycle Model.
- Real-time path tracking with Pure Pursuit Control.
- Simulated challenging terrains and obstacles to test robustness.

💡 Next Steps: Implementing Model Predictive Control (#MPC) and exploring transformer-based models for #advancedpathplanning and obstacle avoidance. This project bridges the gap between theoretical algorithms and practical applications like self-driving cars and autonomous robotics. 🌍

📹 I created a detailed video demonstrating the simulation and its real-world implications (attached below!)
📝 My article on the Project: https://lnkd.in/eWqJ2FWb

Let's connect to discuss innovations in autonomous systems and robotics. I know I have a lot to learn in this field, but I tried my best to learn the basics and implement them. :) I'd love to hear feedback from all of you!

cc. Daniele Gammelli Tommaso Guffanti Simone D'Amico Marco Pavone Elisa Capello Politecnico di Torino Davide Celestini The Knowledge Society (TKS) Andrés R. M. Velarde Raha Francis Limitless Space Institute Josh Roy Kaci Heins Chaka Jaliwa Linda Preece Magalie Renaud

#AutonomousVehicles #Robotics #MarsRover #PurePursuit #Simulation #STEM #girlswhocode
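For readers who want to see the two building blocks named above in code, here is a minimal sketch of a kinematic bicycle model driven by pure pursuit steering. It is not the author's implementation; the wheelbase, lookahead distance, speed, and test path are illustrative assumptions.

```python
# Minimal sketch: kinematic bicycle model + pure pursuit path tracking.
# Parameter values are illustrative, not taken from the post or the paper.
import math

WHEELBASE = 1.5      # m, distance between axles
LOOKAHEAD = 2.0      # m, pure pursuit lookahead distance
DT = 0.1             # s, integration step

def bicycle_step(x, y, yaw, v, steer):
    """Advance the kinematic bicycle model one time step."""
    x += v * math.cos(yaw) * DT
    y += v * math.sin(yaw) * DT
    yaw += v / WHEELBASE * math.tan(steer) * DT
    return x, y, yaw

def pure_pursuit_steer(x, y, yaw, path):
    """Steer toward a lookahead point: find the nearest path point, then the
    first point beyond it that is at least LOOKAHEAD metres away."""
    dists = [math.hypot(px - x, py - y) for px, py in path]
    nearest = dists.index(min(dists))
    target = path[-1]
    for i in range(nearest, len(path)):
        if dists[i] >= LOOKAHEAD:
            target = path[i]
            break
    alpha = math.atan2(target[1] - y, target[0] - x) - yaw
    return math.atan2(2.0 * WHEELBASE * math.sin(alpha), LOOKAHEAD)

# Track a gentle sinusoidal path starting at the origin.
path = [(i * 0.5, 2.0 * math.sin(i * 0.05)) for i in range(200)]
x, y, yaw, v = 0.0, 0.0, 0.0, 1.0   # pose and constant forward speed (m/s)
for _ in range(300):
    steer = pure_pursuit_steer(x, y, yaw, path)
    x, y, yaw = bicycle_step(x, y, yaw, v, steer)
print(f"final pose: x={x:.1f} m, y={y:.1f} m, yaw={math.degrees(yaw):.1f} deg")
```

Degrading one wheel could then be modeled, for example, by capping speed or steering rate inside `bicycle_step`, which is the kind of mobility constraint the post describes simulating before moving on to MPC.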