Lightwheel

Software Development

Santa Clara, California · 3,977 followers

Make Simulation Successful for Embodied AI.

About us

Lightwheel is accelerating the real-world deployment of AI through simulation, reshaping how machines perceive, learn from, and interact with the physical world. Our offering includes highly generalizable, physics-accurate 3D assets and scenes, teleoperation-based data collection in simulation, and scalable simulation platform solutions. Recognized by industry leaders, Lightwheel is the leading provider of synthetic data and simulation technologies for embodied AI.

Website
https://www.lightwheel.ai/
Industry
Software Development
Company size
51-200 employees
Headquarters
Santa Clara, California
Type
Privately Held
Founded
2023

Updates

  • 🚀 Next Stop: Humanoids Summit — CEO Live on the Main Stage
    We’re excited to share that our Founder & CEO, Steve Xie, Ph.D., will deliver a keynote at the Humanoids Summit: “Make Embodied AI Successful: How Synthetic Data Is Being Used in Humanoid Robotics Training.”
    As humanoid systems race toward real-world deployment, the question is no longer whether they can learn, but how quickly, how reliably, and at what scale they can learn. Steve Xie, Ph.D. will walk through how high-quality simulation and synthetic data are reshaping the way humanoids are trained, and why embodied AI needs this data foundation to succeed.
    We can’t wait to join the global humanoid community, meet the teams pushing the field forward, and share what we’ve been building.
    📅 Humanoids Summit 2025 — The Computer History Museum, Silicon Valley
    🎤 Keynote: Steve Xie, Ph.D.
    🕞 3:20 PM, December 11
    📍 Booth #522
    Beyond the keynote, swing by Booth 522, say hello, and grab a coffee with us.
    #HumanoidsSummit #Lightwheel #EmbodiedAI #SyntheticData #Humanoids #AI #Robotics #Simulation

  • 🌊 A coastal evening with the embodied AI community. During NeurIPS week, the BEHAVIOR team and Lightwheel brought the embodied-AI community together by the San Diego waterfront. No agenda, no talks — just good conversations, ocean air, and a group of people shaping the future of embodied intelligence. The venue hit capacity fast — many couldn’t get in — but those who did helped create one of the most energizing evenings of NeurIPS week. We’re grateful to everyone who joined and brought such warm energy to the night. And we’re glad to continue this tradition with BEHAVIOR, creating spaces for the community to meet and grow together. Looking forward to more BEHAVIOR × Lightwheel gatherings — at NeurIPS or along another coastline. #Lightwheel #BEHAVIOR #NeurIPS2025 #EmbodiedAI #RoboticsCommunity #AIEvents

  • 🏆 Celebrating Excellence at the BEHAVIOR Challenge — Live from NeurIPS
    Today at NeurIPS, the BEHAVIOR Challenge brought together outstanding teams pushing the boundaries of embodied AI and long-horizon robot behavior.
    🎉 Huge congratulations to the top 3 teams on the leaderboard: Robot Learning Collective, Comet (NVIDIA), and SimpleAI Robot — outstanding performances across an exceptionally challenging benchmark.
    👏 At the same time, sincere respect to the BEHAVIOR team for building one of the most ambitious and realistic embodied AI challenges to date — a true frontier benchmark for human-scale robot behavior.
    It was a great moment to celebrate the talent, rigor, and momentum across the embodied AI community — and we’re excited to see what comes next.
    #NeurIPS #BehaviorChallenge #EmbodiedAI #Robotics #NVIDIA #Stanford #FoundationModels

  • 🚀 NeurIPS 2025 — We Launched, We Demoed, We Delivered.
    NeurIPS 2025 is officially a wrap — and the energy was unbelievable. From the moment the hall opened, our booth was shoulder-to-shoulder, and we successfully delivered two major product launches that lit up the conference: RoboFinals and EgoSuite.
    🔥 RoboFinals — our industrial-grade simulation evaluation platform, designed to finally challenge frontier robotics foundation models and provide a trustworthy, scalable way to measure real capability. https://lnkd.in/eHX5Yvsx
    🔥 EgoSuite — our full-stack, globally scalable egocentric human data engine, delivering structured, first-person interaction data at a scale the field has never seen before. https://lnkd.in/eWiVv6cz
    The response was incredible — continuous inquiries, deep technical discussions, and teams lining up to understand how these new foundations can support their next generation of embodied-AI systems.
    Thank you to everyone who stopped by, tested our demos, or shared a conversation. We’re building the future of simulation and data infrastructure together.
    📍 Next stop: Humanoids Summit, Silicon Valley. See you there.
    #Lightwheel #NeurIPS2025 #EmbodiedAI #Simulation #RoboFinals #EgoSuite #SyntheticData #Sim2Real #Robotics

  • 🌊 BEHAVIOR × Lightwheel — Coastal Party in San Diego
    Dec 6 · NeurIPS Week · San Diego Waterfront
    We’re excited to co-host a special coastal party with the BEHAVIOR team — bringing together top researchers and builders pushing the frontier of embodied AI and world models.
    Set against the San Diego coastline, this evening is designed for meaningful conversations, new connections, and a moment of calm after a packed NeurIPS week.
    No panels. No slides. Just great people, great energy, and a shared mission to move embodied intelligence forward.
    📅 Date: Dec 6
    📍 Location: San Diego
    🔗 RSVP (limited spots): https://luma.com/7v3ln724
    #Lightwheel #BEHAVIOR #NeurIPS2025 #EmbodiedAI #RoboticsCommunity #Simulation #AIEvents

  • Pushing the Boundaries: Building Hyper-Realistic Simulation Environments with Marble 🎯
    We’ve taken our Marble experiments to the next level! This time, we used a panoramic photo of a modern apartment to construct a high-fidelity simulation environment with a novel dual-representation approach.
    Following Marble’s technical workflow, as suggested by the World Labs team (shoutout to Hang Yin’s integration work!), we leveraged Marble to generate a hybrid scene that decouples visual rendering from physical interaction:
    📸 Input: a single panoramic photo of a modern apartment
    Marble’s dual-stream output:
    🎨 Visual layer: dense 3D Gaussian Splatting (.ply) for photorealistic rendering
    ⚙️ Physics layer: structural mesh (.glb) for accurate collision detection and dynamics
    Simulation integration: transformed the raw Gaussian data into a USDZ asset (via 3DGUT), then spatially aligned both representations in Isaac Sim.
    The result:
    - A stunning photorealistic apartment environment via 3DGS rendering
    - Precise physics simulation through invisible collision meshes
    - LeRobot successfully deployed to execute a cloth-folding task with our SimReady assets (table, t-shirt)
    Why this matters: this hybrid approach solves a critical challenge in robotics simulation — maintaining the visual fidelity of real living spaces while ensuring accurate physics. The 3DGS layer preserves every detail of the actual apartment, while the decoupled mesh layer enables reliable physical interactions for household tasks.
    Key achievement: from a single photo of a real apartment to a robot performing complex manipulation tasks in that exact environment. The visual quality in Isaac Sim is indistinguishable from the original space!
    Incredible work by the World Labs team! This aligns perfectly with Prof. Fei-Fei Li’s vision of spatial intelligence — bridging the gap between visual understanding and physical interaction. Marble is making it possible to train robots in thousands of real-world environments at unprecedented speed.
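    The alignment step above hinges on one invariant: the visual splat layer and the collision mesh describe the same scene, so any pose applied to one must be applied to the other. A minimal NumPy sketch of that idea follows; the point sets and transform are hypothetical stand-ins, not Lightwheel's pipeline or the actual Isaac Sim API.

    ```python
    import numpy as np

    def rigid_transform(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
        """Apply one rigid transform (rotation R, translation t) to an (N, 3) point set."""
        return points @ R.T + t

    # Toy stand-ins for the two Marble outputs: Gaussian-splat centers
    # (visual layer) and collision-mesh vertices (physics layer), expressed
    # in the same scene frame.
    splat_centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    mesh_vertices = splat_centers.copy()

    # Placing the scene in the simulator: rotate 90 degrees about z, then shift.
    theta = np.pi / 2
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    t = np.array([2.0, 0.0, 0.5])

    # The key invariant: apply the SAME transform to both layers, so the
    # invisible collision mesh stays registered with the rendered splats.
    splat_centers = rigid_transform(splat_centers, R, t)
    mesh_vertices = rigid_transform(mesh_vertices, R, t)

    assert np.allclose(splat_centers, mesh_vertices)
    ```

    In a real scene graph the same effect is usually achieved by parenting both representations under a single transform node rather than baking the transform into the geometry.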

  • 🚀 NeurIPS 2025 — Day 3: Still Packed. Still Buzzing. Still Talking Sim2Real.
    Crowds kept coming nonstop today — researchers, founders, and engineers stopping by Booth #1733 to try our teleop demos and ask the same question: “How do you actually solve the sim2real gap?”
    1️⃣ Physical Real2Sim — measure the real world, then solve for it in the physics engine.
    We capture real physical parameters through measurement benchmarks (e.g., bending stiffness, torsion stiffness, and self-friction for cables) from the real world, then reproduce them in simulation. To make these parameters behave correctly, we go deep into the physics engine — developing custom solvers for deformables, liquids, and other complex materials so simulated behavior matches physical reality.
    2️⃣ Visual Real2Sim — match the look, feel, and structure of the real world.
    Our asset-production pipeline, combined with AIGC and human-in-the-loop workflows, generates realistic, structured, and consistent visual assets that reflect real-world appearance and interaction cues.
    This combination is why people try the demo, pause, and ask: “Wait… this is simulation?”
    📍 Booth #1733 — come by, try the teleop yourself, and talk sim2real, embodied AI, or how to scale real robot intelligence.
    Our team on-site: Myles L., Huang Yang, Jonathan Stephens, Siyi Lin, Zimu Gong, Neil Zhou, Shaoze Yang
    #NeurIPS2025 #Lightwheel #Sim2Real #EmbodiedAI #Simulation #Teleoperation #SimReady #RoboFinals #EgoSuite

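    The physical Real2Sim loop described above (measure a real parameter, then tune the simulator until it reproduces the measurement) can be sketched as a toy one-dimensional calibration. Everything here is a hypothetical stand-in: the cantilever-deflection formula plays the role of the simulator, and the numbers are invented, not Lightwheel's actual solvers or benchmarks.

    ```python
    def simulated_deflection(stiffness: float, force: float = 0.5, length: float = 0.3) -> float:
        """Stand-in 'simulator': cantilever tip deflection F * L^3 / (3 * EI)."""
        return force * length**3 / (3.0 * stiffness)

    def calibrate_stiffness(target_deflection: float,
                            lo: float = 1e-4, hi: float = 10.0, iters: int = 60) -> float:
        """Bisect on the sim stiffness until it reproduces the bench measurement.

        Deflection decreases monotonically as stiffness increases, so bisection
        converges to the unique matching parameter.
        """
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if simulated_deflection(mid) > target_deflection:
                lo = mid  # too floppy: raise the stiffness lower bound
            else:
                hi = mid  # too stiff: lower the upper bound
        return 0.5 * (lo + hi)

    measured = 0.012  # meters, from a (hypothetical) bench measurement
    ei = calibrate_stiffness(measured)
    assert abs(simulated_deflection(ei) - measured) < 1e-9
    ```

    Real deformable calibration is of course multi-parameter and runs against a full physics engine, but the structure is the same: a measured target, a forward simulation, and an optimizer closing the gap between them.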
  • 🚀 Announcing Lightwheel RoboFinals
    Today we’re excited to announce RoboFinals, the industrial-grade simulation evaluation platform built to finally challenge frontier-scale robotics foundation models.
    Frontier labs have outgrown nearly all existing academic simulation benchmarks, and real-world testing does not scale. RoboFinals closes this gap — delivering 100 industry-aligned tasks, large-scale deterministic evaluation, and cross-simulator generalization across Isaac Lab with Newton physics, Isaac Lab with NVIDIA PhysX physics, MuJoCo, and Genesis.
    🔥 Built on the upcoming NVIDIA Isaac Lab — Arena, RoboFinals provides:
    - Industrial-grade realism powered by the SimReady asset ecosystem
    - Progressive difficulty and high task diversity across home, factory, and retail domains
    - Unified success criteria and cross-robot evaluation for tabletop arms, mobile manipulators, and loco-manipulation systems
    - Full Real2Sim calibration and an upcoming Sim–Real correlation dataset
    - Co-designed and actively adopted by Qwen
    RoboFinals is now available for frontier labs building generalist robotic systems.
    👉 Contact us to join the early-access program: https://lnkd.in/eHX5Yvsx

  • 🚀 Introducing Lightwheel EgoSuite
    The high-quality, multimodal, globally scalable egocentric human data solution for embodied AI and world models.
    Robotics foundation models are advancing fast — but everyone faces the same bottleneck: the world still doesn’t have enough high-quality, robot-ready data. EgoSuite fills that gap.
    🔥 What EgoSuite delivers:
    - 20,000+ hours/week of continuous egocentric data collection
    - 10,000+ tasks across 500+ environments in 7 countries
    - 300,000+ hours delivered to leading world-model and VLA teams
    - Multimodal capture: VR, exoskeleton, UMI gripper, hand tracking, audio
    - Automated structured annotation for robot learning (pose, actions, semantics)
    🌍 Why it matters: egocentric human data is becoming the critical layer for embodied AI — capturing rich interaction signals while scaling globally like web video.
    🤝 Get early access: https://lnkd.in/eWiVv6cz

  • 🚀 NeurIPS 2025 — Day 1: Absolutely Electric.
    San Diego is buzzing — and the crowd around the Lightwheel booth hasn’t slowed for a minute. Our live teleop cable-manipulation demo has been the magnet of the day. People try it, stop mid-motion, and ask: “Wait… this is all simulation?”
    We’ve had amazing conversations with teams from Scale AI, Amazon, NVIDIA, TSMC, and many others — all pointing to the same thing: the world needs high-quality, physically accurate assets and a scalable pipeline for real robot manipulation. That’s exactly what we’re here to deliver.
    📍 Booth #1733 — Lightwheel
    Come by tomorrow to try the demos yourself, see what’s launching this week, and talk embodied-AI data, simulation pipelines, or how to scale real robot intelligence.
    Our team on the ground: Myles L., Huang Yang, Jonathan Stephens, Siyi Lin, Zimu Gong, Neil Zhou, Shaoze Yang
    Swing by — we’d love to meet you.
    #NeurIPS2025 #Lightwheel #EmbodiedAI #Simulation #Teleoperation #SimReady #RoboFinals #EgoSuite



Funding

Lightwheel: 2 total funding rounds
Last round: Seed