ReportWire

Tag: Boston Dynamics

  • TechCrunch Mobility: ‘Physical AI’ enters the hype machine | TechCrunch


    Welcome back to TechCrunch Mobility, your hub for all things “future of transportation.” To get this in your inbox, sign up here for free — just click TechCrunch Mobility!

    It’s been a minute, folks! As you might recall, the newsletter took a little holiday break. We’re back and well into 2026. And a lot has happened since the last edition. 

    I spent the first week of the year at the Consumer Electronics Show in Las Vegas. And while I wrote about this last January, it’s worth repeating: U.S. automakers have left the building. 

    What has filled the void in the Las Vegas Convention Center? Autonomous vehicle tech companies (Zoox, Tensor Auto, Tier IV, and Waymo, which rebranded its Zeekr RT, to name a few), Chinese automakers like Geely and GWM, software and automotive chip companies, and loads of what Nvidia CEO Jensen Huang calls “physical AI.” 

    The term, sometimes called “embodied AI,” describes AI that moves beyond the digital world into the real, physics-based one. AI models, combined with sensors, cameras, and motorized controls, allow a physical thing — humanoid robot, drone, autonomous forklift, robotaxi — to detect and understand its real environment and make decisions to operate within it. And it was all over CES, from agriculture and robotics to autonomous vehicles and drones, industrial manufacturing, and wearables. 

    Hyundai had one of the busiest and largest exhibits with a near-constant line wrapped around the entrance. The Korean automaker wasn’t showing cars. Nope, it was robots of various forms, including the Atlas humanoid robot, courtesy of its subsidiary Boston Dynamics. There were also innovations that have come out of Hyundai Motor Group Robotics LAB, including a robot that charges electric autonomous vehicles, and a four-wheel electric platform called the Mobile Eccentric Droid (MobEd) that is going into production this year. It seems everyone was embracing and showcasing robotics, particularly humanoids. 

    The hype around humanoids specifically, and physical AI in general, was palpable. I asked Mobileye co-founder and president Amnon Shashua about this because his company had just bought his humanoid robotics startup, Mentee Robotics, for $900 million: “What do you say when people tell you humanoid robots are all hype?” 

    TechCrunch event: San Francisco | October 13-15, 2026

    “The internet was also a hype, remember in 2000, the crisis of the internet,” Shashua said. “It did not mean that [the] internet is not a real thing. Hype means that companies are overvalued for a certain period of time, and then they crash. It does not mean that the domain is not real. I believe that the domain of humanoids is real.”

    A few notable stories from CES:
    Nvidia launches Alpamayo, open AI models that allow autonomous vehicles to ‘think like a human’

    This is Uber’s new robotaxi from Lucid and Nuro

    Mobileye acquires humanoid robot startup Mentee Robotics for $900M

    Now onto the other non-CES and more recent news … 

    A little bird

    Image Credits: Bryce Durbin

    President Trump made comments this week at a Detroit Economic Club meeting about welcoming Chinese automakers into the United States that did not sit well with many in the auto industry. The Alliance for Automotive Innovation (the industry’s lobbying group) is “freaking out,” one DC insider told me. 

    “If they want to come in and build a plant and hire you and hire your friends and your neighbors, that’s great, I love that,” Trump said, according to reporters in attendance. “Let China come in, let Japan come in.”

    A couple of notes. Japanese companies like Toyota are already very much in the United States. The bigger hurdle, beyond protests from within the boardrooms of U.S. automakers, is existing law. In 2025, the U.S. Department of Commerce’s Bureau of Industry and Security issued a rule that restricts the import and sale of certain connected vehicles and related hardware and software linked to China or Russia. This essentially bans the sale of Chinese vehicles in the country. 

    Avery Ash, CEO of SAFE, a nonpartisan organization focused on securing U.S. energy, critical materials, and supply chains, weighed in on the dangers of allowing Chinese automakers to sell their vehicles in the United States. Side note: Ash was on my podcast, the Autonocast, which touches on some of this subject.

    “Welcoming Chinese automakers to build cars here in the U.S. will reverse these hard-won accomplishments and put Americans at risk,” he said. “We’ve seen this strategy backfire in Europe and elsewhere — it would have potentially catastrophic impacts on our automotive industry, have ripple effects on our entire defense industrial base, and make every American less secure.”

    Meanwhile, Canada is opening the door to Chinese automakers. Canadian Prime Minister Mark Carney announced that his country will slash its 100% import tax on Chinese EVs to just 6.1%, Sean O’Kane reports.

    Got a tip for us to share in the Little Bird section? Email Kirsten Korosec at kirsten.korosec@techcrunch.com or my Signal at kkorosec.07, or email Sean O’Kane at sean.okane@techcrunch.com

    Deals!

    Image Credits: Bryce Durbin

    Budget carrier Allegiant agreed to buy rival Sun Country Airlines for about $1.5 billion in cash and stock.

    Dealerware, which sells software services to automotive OEMs and retailers, was acquired by a group of investors led by Wavecrest Growth Partners and Radian Capital. Automotive Ventures and automotive industry executives David Metter and Devin Daly also participated. The terms were not disclosed.

    Long-distance bus and train provider Flix acquired the majority share of European airport transfer-platform Flibco. Luxembourg company SLG will retain some ownership stake in Flibco. Terms weren’t disclosed. 

    JetZero, the Long Beach, California, startup developing a midsized triangular aircraft designed to save on fuel, raised $175 million in a Series B round led by B Capital, Bloomberg reported.

    Joby Aviation, a company developing electric air taxis, reached an agreement to buy a 700,000-square-foot manufacturing facility in Dayton, Ohio, to support its plans to double production to four aircraft per month in 2027.

    Luminar has reached a deal to sell its lidar business to a company called Quantum Computing Inc. for just $22 million. If that seems low, you’re right. Luminar’s valuation peaked in 2021 at $11 billion.

    Notable reads and other tidbits

    Image Credits: Bryce Durbin

    Bluspark Global, a New York-based shipping and supply chain software company, didn’t realize its platform was vulnerable and open to anyone on the internet. Here’s how a security researcher (and TechCrunch) got it fixed.

    The Federal Trade Commission finalized an order that bans General Motors and its OnStar telematics service from sharing certain consumer data with consumer reporting agencies. Read the full story on what that means.

    InDrive, the company that started as a ride-hailing platform that lets users set the price, is diversifying and starting to execute on its “super app” strategy. That means more in-app advertising across its top 20 markets and expanding grocery delivery to Pakistan. Read the full story here. 

    Motional, the majority Hyundai-owned autonomous vehicle company, has rebooted. When Motional paused its operations last year, I wasn’t sure it was going to survive. Other AV companies with big backers have seen their funding disappear in a blink, so it was certainly plausible. But the company is here and with a new AI-first approach. Before you roll your eyes at that term, take a read of my article, which includes a demo ride and an interview with CEO Laura Major. Then feel free to hit my inbox with your thoughts. 

    New York governor Kathy Hochul plans to introduce legislation that would effectively legalize robotaxis in the state with the exception of New York City. No details on this yet; I’ve been told it will all be revealed in her executive budget proposal next week. What we do know is the proposal is designed to expand the state’s existing AV pilot program to allow for “the limited deployment of commercial for-hire autonomous passenger vehicles outside New York City.” My article delves deeper into what she shared and gives an update on Waymo’s NYC permit.

    Tesla is ditching the one-time fee option for its Full Self-Driving (Supervised) software and will now sell access to the feature through a monthly subscription.

    On-demand drone delivery company Wing is bringing its service to another 150 Walmart stores as part of an expanded partnership with the retailer.


    Kirsten Korosec


  • CES 2026: Everything revealed, from Nvidia’s debuts to AMD’s new chips to Razer’s AI oddities | TechCrunch


    CES 2026 is in full swing in Las Vegas, with the show floor open to the public after a packed couple of days occupied by press conferences from the likes of Nvidia, Sony, and AMD and previews from Sunday’s Unveiled event. 

    As has been the case for the past two years at CES, AI is at the forefront of many companies’ messaging, though the hardware upgrades and oddities that have long defined the annual event still have their place on the show floor and in adjacent announcements. We’ll be collecting the biggest reveals and surprises here, though you can still catch the spur-of-the-moment reactions and thoughts from our team on the ground via our live blog right here. 

    Let’s dive right in, starting with some of Monday’s biggest players. 

    Nvidia reveals AI model for autonomous vehicles, showcases Rubin architecture

    Nvidia CEO Jensen Huang delivered an expectedly lengthy presentation at CES, taking a victory lap for the company’s AI-driven successes, setting the stage for 2026, and yes, hanging out with some robots. 

    The Rubin computing architecture, which has been developed to meet the increasing computation demands that AI adoption creates, is set to begin replacing the Blackwell architecture in the second half of this year. It comes with speed and storage upgrades, but our senior AI editor Russell Brandom goes into the nitty-gritty of what distinguishes Rubin. 

    And Nvidia continued its push to bring the AI revolution into the physical world, showcasing its Alpamayo family of open source AI models and tools that will be used by autonomous vehicles this year. That approach, as senior reporter Rebecca Bellan notes, mirrors the company’s broader efforts to make its infrastructure the Android for generalist robots. 

    AMD’s keynote highlights new processors and partnerships 

    AMD chair and CEO Lisa Su delivered the first keynote of CES, with a presentation that featured partners, including OpenAI president Greg Brockman, AI legend Fei-Fei Li, Luma AI CEO Amit Jain, and more. 


    Beyond the partner showcases, senior reporter Rebecca Szkutak detailed AMD’s approach toward expanding the reach of AI through personal computers using its Ryzen AI 400 Series processors. 

    The standout oddities of CES

    Let’s face it, by this point in the show the major announcements have been made, products have been showcased, and it’s time to eye some of the most brow-raising reveals from CES. We started our list of what stood out to us as odd and noteworthy, but we’re open to more suggestions! 

    Highlights from CES breakout sessions

    CES isn’t all hardware showcases and show floor attractions — there are plenty of additional industry panels and speakers drawing eyeballs. We kept tabs on a few notable highlights, ranging from Palmer Luckey pushing retro aesthetics, to why the “learn once, work forever” era may be over, to previews of the new Silicon Valley-based series “The Audacity,” to the expansion of Roku’s $3 streaming service, to All-In host Jason Calacanis putting a $25,000 bounty on an authentic Theranos device. 

    Ford’s AI assistant debuts

    Ford is launching its assistant in the company’s app before a targeted 2027 release in its vehicles, with hosting managed by Google Cloud and the assistant itself built using off-the-shelf LLMs. As we noted in our coverage of the news, however, few details were offered around what drivers should expect from their experience with the assistant. 

    Caterpillar, Nvidia partner on automated construction equipment

    As part of the ever-present push to bring AI into the physical world, Caterpillar and Nvidia announced a pilot program, “Cat AI Assistant,” which was demonstrated at CES Wednesday. The system, coming to one of Caterpillar’s excavators, arrives alongside another project that uses Nvidia’s Omniverse simulation resources to help with construction project planning and execution. 

    Hands-on with Clicks Communicator

    Image Credits: TechCrunch

    One of the buzziest reveals of the show is the debut phone from Clicks Technology, the $499 Communicator, which brings back BlackBerry vibes with its physical keyboard, plus a separate $79 slide-out physical keyboard that can be used with other devices.

    Check out our full rundown from the show floor here, but the Communicator makes a good first impression, per Consumer Editor Sarah Perez:

    “In our hands-on test, the phone felt good to hold — not too heavy or light, and was easy to grip. Gadway told me the company settled on the device’s final form after dozens of 3D-printed shapes. The winning design for the phone features a contoured back that makes it easy to pick up and hold.

    “The device’s screen is also somewhat elevated off the body, and its chin is curved up to create a recess that protects the keys when you place it face down.”

    Check out the Skylight Calendar 2

    Image Credits: Sarah Perez

    This family planning tool caught our eyes on the show floor, not just for its calendar and planning capabilities, but for its AI features, which can sync calendars from different sources, create new to-dos based on messages or photos, set appointment reminders, and more. Check out our full impressions here. 

    Boston Dynamics and Google partner on Atlas robots 

    Hyundai’s press conference focused on its robotics partnerships with Boston Dynamics, but the companies revealed that they’re working with Google’s AI research lab, rather than competitors, to train and operate existing Atlas robots, as well as a new iteration of the humanoid robot that was shown onstage. Transportation editor Kirsten Korosec has the full rundown. 

    Amazon’s AI-centric update with Alexa+ is getting the kind of push you’d expect at CES, with the company launching Alexa.com for Early Access customers looking to use the chatbot via their browsers, along with a similar, revamped bot-focused app. Consumer editor Sarah Perez has the details, along with news on Amazon’s revamp to Fire TV and new Artline TVs, which have their own Alexa+ push. 

    On the Ring front, consumer reporter Ivan Mehta runs through the many announcements, from fire alerts to an app store for third-party camera integration, and more. 

    Razer joins the AI deluge with Project AVA and Motoko 

    In the past, Razer has been all about ridiculous hardware at CES, from three-screen laptops to haptic gaming cushions to a mask that landed the company a federal fine. This year, the first of its two attention-grabbing announcements was Project Motoko, which aims to function similarly to smart glasses, but without the glasses. 

    Then there’s Project AVA, which puts the avatar of an AI companion on your desk. We’ll let you watch the concept video for yourself. 

    Lego Smart Bricks mark the company’s first CES appearance 

    Lego joined CES for the first time to hold a behind-closed-doors showcase of its Smart Play System, which includes bricks, tiles, and Minifigures that can all interact with each other and play sounds, with both debut sets having a Star Wars theme. Senior writer Amanda Silberling has all the details here. 


    Morgan Little


  • 1/4/2026: Maduro; Here Come the Humanoids; Alysa Liu


    First, a report on the capture of Venezuelan President Nicolás Maduro. Then, a look at the progress made on AI-powered humanoid robots. And, Alysa Liu: The 60 Minutes Interview.


  • Boston Dynamics is training an AI-powered humanoid robot to do factory work


    With rapid advances in artificial intelligence, computer scientists and engineers are making progress in developing robots that look and act like humans. A global race is underway to develop humanoid robots for widespread use. 

    Boston Dynamics has established itself as a frontrunner in the field. With support from South Korean carmaker Hyundai, which owns an 88% stake in Boston Dynamics, the Massachusetts company is testing a new generation of its humanoid robot, Atlas. 

    This past October, a 5-foot-9-inch, 200-pound Atlas was put to the test at Hyundai’s new Georgia factory, where it practiced autonomously sorting roof racks for the assembly line.

    Today’s AI-powered humanoids are learning movements that, until recently, were considered a step too far for a machine, according to Scott Kuindersma, who is the head of robotics research at Boston Dynamics.

    “A lot of this has to do with how we’re going about programming these robots now, where it’s more about teaching, and demonstrations, and machine learning than manual programming,” Kuindersma said. 

    How Atlas is trained

    When 60 Minutes visited Boston Dynamics’ headquarters in 2021, Atlas was a bulky, hydraulic robot that could run and jump. Back then, Atlas relied on algorithms written by engineers. The Atlas of today is sleeker, with an all-electric body and an AI brain powered by Nvidia’s advanced microchips, making it smart enough to master hard-to-believe feats.

    Atlas learns in several ways. At Boston Dynamics, machine learning scientist Kevin Bergamin demonstrated an example of supervised learning. Wearing a virtual reality headset, Bergamin took direct control of the humanoid and guided its hands and arms through each task until Atlas succeeded.

    “That generates data that we can use to train the robot’s AI models to then later do that task autonomously,” Kuindersma said.
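    The loop Kuindersma describes — collecting teleoperated demonstrations, then training a model to imitate them — is essentially what roboticists call behavior cloning. Below is a minimal, purely illustrative Python sketch; the toy "grasp or reach" task and every name in it are invented for this example, not Boston Dynamics code:

    ```python
    import random

    def record_demonstrations(teleop_policy, num_demos, obs_low, obs_high):
        """Collect (observation, action) pairs from a teleoperator."""
        dataset = []
        for _ in range(num_demos):
            obs = random.uniform(obs_low, obs_high)
            dataset.append((obs, teleop_policy(obs)))
        return dataset

    def behavior_cloned_policy(dataset):
        """Toy 'learned' policy: replay the action whose recorded
        observation is closest to the current one (1-nearest neighbor)."""
        def policy(obs):
            return min(dataset, key=lambda pair: abs(pair[0] - obs))[1]
        return policy

    # Hypothetical teleoperator: grasp when the part is within reach.
    teleop = lambda distance: "grasp" if distance < 0.5 else "reach"

    random.seed(0)
    demos = record_demonstrations(teleop, num_demos=200, obs_low=0.0, obs_high=1.0)
    policy = behavior_cloned_policy(demos)
    ```

    A production system would use a neural network over camera and joint data rather than a nearest-neighbor lookup, but the data flow is the same: demonstrations in, an autonomous policy out.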

    Boston Dynamics head of robotics research Scott Kuindersma and Bill Whitaker

    60 Minutes


    Another teaching technique involves a motion capture body suit. 60 Minutes correspondent Bill Whitaker wore the suit while performing jumping jacks.

    Since Atlas’ body is different from Whitaker’s, the robot was trained to match his motions. Data collected by the motion capture suit was fed into Boston Dynamics’ machine learning process. 

    More than 4,000 digital Atlases trained for six hours in simulation. The simulation added challenges for the avatars, like slippery floors, inclines, or stiff joints, and homed in on the best way for Atlas to perform the jumping jacks. 

    The Boston Dynamics team then uploaded the new skill into the AI system that controls every Atlas robot. Once one was trained, they were all trained. At the end of the process, Atlas performed jumping jacks that looked just like Whitaker’s. 
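    The simulation stage described above — thousands of virtual robots practicing under randomized floors, inclines, and joint stiffness — uses a technique commonly known as domain randomization. The sketch below illustrates the idea with a simple random search over a single control parameter; the reward model and all names are invented for illustration, not drawn from Boston Dynamics' system:

    ```python
    import random

    def episode_reward(gain, friction, incline, stiffness):
        """Toy reward model: the best control gain shifts with the
        randomized conditions, and reward falls off with distance from it."""
        ideal = 1.0 + 0.5 * incline + 0.3 * stiffness - 0.2 * friction
        return -abs(gain - ideal)

    def train_with_domain_randomization(candidates=50, episodes=100, seed=0):
        """Random search: score each candidate gain across many episodes
        with randomized physics, and keep the best-performing one."""
        rng = random.Random(seed)
        best_gain, best_score = 0.0, float("-inf")
        for _ in range(candidates):
            gain = rng.uniform(0.0, 3.0)
            total = 0.0
            for _ in range(episodes):
                # Domain randomization: vary friction, incline, and joint
                # stiffness every simulated episode so the chosen gain
                # holds up across conditions, not just on one floor.
                total += episode_reward(
                    gain,
                    friction=rng.uniform(0.2, 1.0),
                    incline=rng.uniform(0.0, 0.3),
                    stiffness=rng.uniform(0.0, 0.5),
                )
            score = total / episodes
            if score > best_score:
                best_gain, best_score = gain, score
        return best_gain

    best = train_with_domain_randomization()
    ```

    Real training replaces random search with reinforcement learning over a full neural-network policy, but the payoff is the same: a behavior that survives conditions the robot never saw in any single run.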

    Taught through the same techniques, Atlas has demonstrated the ability to run, crawl, skip, and dance.

    There are limitations, Kuindersma said. Atlas isn’t proficient at performing most of the routine tasks that people do in their daily lives, like putting on clothes or pouring a cup of coffee. 

    “There are no humanoids that do that nearly as well as a person,” Kuindersma said. “But I think the thing that’s really exciting now is we see a pathway to get there.”

    The future of humanoids 

    Boston Dynamics CEO Robert Playter spearheaded the company’s humanoid development. 

    “There’s a lot of excitement in the industry right now about the potential of building robots that are smart enough to really become general purpose,” he said. 

    Boston Dynamics CEO Robert Playter

    60 Minutes


    Goldman Sachs predicts the market for humanoids will reach $38 billion within the decade. Boston Dynamics and other U.S. robot makers are fighting to come out on top. State-supported Chinese companies are also in the race. 

    “The Chinese government has a mission to win the robotics race,” Playter said. “Technically I believe we remain in the lead. But there’s a real threat there that, simply through the scale of investment, we could fall behind.”

    Should humans be worried about humanoids?

    As fears grow that AI will displace workers, humanoid robots are learning to perform human tasks. Boston Dynamics is training Atlas to do a job that human workers currently handle at Hyundai’s Georgia plant.

    Playter said it could be several years before Atlas becomes a full-time worker at Hyundai, but he predicted that humanoids will change the nature of work.

    “The really repetitive, really backbreaking labor is really, is going to end up being done by robots. But these robots are not so autonomous that they don’t need to be managed. They need to be built. They need to be trained. They need to be serviced.”

    Playter said there are benefits to creating robots like Atlas, which can move in ways that humans can’t. 

    Atlas humanoid

    60 Minutes


    “We would like [robots] that could be stronger than us or tolerate more heat than us or definitely go into a dangerous place where we shouldn’t be going,” he said. “So you really want superhuman capabilities.”

    Still, Playter said there’s no reason to worry about a future like the one depicted in “The Terminator.”

    “[If you] saw how hard we have to work to get the robots to just do some of the straightforward tasks we want them to do, that would dispel that worry about sentience and rogue robots,” he said.


  • How Boston Dynamics upgraded the Atlas robot — and what’s next


    In 2021, 60 Minutes visited the offices of robotics company Boston Dynamics and met an early model of its humanoid robot, Atlas. 

    It could run, jump and maintain its balance when pushed. But it was bulky, with stiff, mechanical movements. 

    Now, Atlas can cartwheel, dance, run with human-like fluidity, twist its arms, head and torso 360 degrees, and pick itself up off of the floor using only its feet. 

    “They call it a humanoid, but he stands up in a way no human could possibly stand up,” correspondent Bill Whitaker told Overtime. “His limbs can bend in ways ours can’t.”

    Boston Dynamics CEO Robert Playter told Whitaker that Atlas’ “superhuman” range of motion is in keeping with the company’s vision for humanoid robots. 

    “We think that’s the way you should build robots. Don’t limit yourself to what people can do, but actually go beyond,” Playter said. 

    Whitaker watched demonstrations of the latest Atlas model at Boston Dynamics’ headquarters in Waltham, Massachusetts. Rather than turning around to walk in the other direction, Atlas can simply rotate its upper torso 180 degrees. 

    “For us to turn around, we have to physically turn around,” he told Overtime. “Atlas just pivots on his core.”

    Boston Dynamics’ head of robotics research, Scott Kuindersma, told Whitaker that Atlas doesn’t have wires crossing the joints of its limbs, torso, and head, allowing continuous rotation for tasks and easier maintenance of the robot.

    “The robot’s not really limited in its range of motion,” Kuindersma told Whitaker. “One of the reliability issues that you often find in robots is that their wires start to break over time… we don’t have any wires that go across those rotating parts anymore.”

    Another upgrade to the Atlas humanoid robot is its AI brain, powered by Nvidia chips.

    Atlas’ AI can be trained to do tasks. One way is through teleoperation, in which a human controls the robot. Using virtual reality gear, the teleoperator trains Atlas to do a specific task, repeating it multiple times until the robot succeeds.

    Whitaker watched a teleoperation training session. A Boston Dynamics machine learning scientist showed Atlas how to stack cups and tie a knot.

    Kuindersma told Whitaker robot hands pose a complex engineering problem.

    “Human hands are incredible machines that are very versatile. We can do many, many different manipulation tasks with the same hand,” Kuindersma said. 

    Boston Dynamics’ new Atlas has only three digits on each hand, which can swing into different positions or modes.

    “They can act as if they were a hand with these three digits, or this digit can swing around and act more like a thumb,” Kuindersma said. 

    “It allows the robot to have different shaped grasps, to have two-finger opposing grasp to pick up small objects. And then also make its hands very wide, in order to pick up large objects.”

    Kuindersma said the robot has tactile sensors on its fingers, which provide information to Atlas’ neural network so the robot can learn how to manipulate objects with the right amount of pressure.
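    The closed loop Kuindersma describes — tactile readings steering how hard the hand squeezes — can be approximated by a simple proportional controller. This is an illustrative sketch under an assumed linear sensor model, not the actual Atlas control stack:

    ```python
    def regulate_grip(read_pressure, target_pressure=2.0, gain=0.5, steps=30):
        """Proportional control: adjust the commanded grip force until
        the tactile sensor reports the target contact pressure."""
        force = 0.0
        for _ in range(steps):
            error = target_pressure - read_pressure(force)
            force += gain * error  # squeeze harder if too light, ease off if too hard
        return force

    # Hypothetical object: sensed pressure rises linearly with applied force.
    object_stiffness = 0.8
    sensor = lambda force: object_stiffness * force

    final_force = regulate_grip(sensor)
    ```

    In Atlas the learned neural network stands in for the hand-tuned gain, but the principle is the same: sense contact, compare to the desired pressure, and correct.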

    But Kuindersma said there is still room to improve teleoperation systems.

    “Being able to precisely control not only the shape and the motion, but the force of the grippers, is actually an interesting challenge,” Kuindersma told Whitaker. 

    “I think there’s still a lot of opportunity to improve teleoperation systems, so that we can do even more dexterous manipulation tasks with robots.”

    Whitaker told Overtime, “There is quite a bit of hype around these humanoids right now. Financial institutions predict that we will be living with millions, if not billions, of robots in our future. We’re not there yet.”

    Whitaker asked Boston Dynamics CEO Robert Playter if the humanoid hype was getting ahead of reality. 

    “There is definitely a hype cycle right now. Part of that is created by the optimism and enthusiasm we see for the potential,” Playter said.

    “But while AI, while software, can sort of move ahead at super speeds… these are machines and building reliable machines takes time…  These robots have to be reliable. They have to be affordable. That will take time to deploy.”

    The video above was produced by Will Croxton. It was edited by Scott Rosann. 


  • Boston Dynamics’ AI-powered humanoid robot is learning to work in a factory


    For decades, engineers have been trying to create robots that look and act human. Now, rapid advances in artificial intelligence are taking humanoids from the lab to the factory floor. As fears grow that AI will displace workers, a global race is underway to develop human-like robots able to do human jobs. Competitors include Tesla, startups backed by Amazon and Nvidia, and state-supported Chinese companies. Boston Dynamics is a frontrunner. The Massachusetts company, valued at more than a billion dollars, is hard at work on a humanoid it calls Atlas. South Korean carmaker Hyundai holds an 88% stake in the robot maker. We were invited to see the first real-world test of Atlas at Hyundai’s new factory near Savannah, Georgia. There, we got a glimpse of a humanoid future that’s coming faster than you might think.

    Hyundai’s sprawling auto plant is about as cutting-edge as it gets. More than 1,000 robots work alongside almost 1,500 humans, hoisting, stamping and welding in robotic unison. This may look like the factory of the future, but we found the future of the future in the parts warehouse, tucked away in the back corner, getting ready for work. 

    Meet Atlas: a 5-foot-9, 200-pound, AI-powered humanoid created by Boston Dynamics. The rise of the robots is science fiction no more.

    Bill Whitaker: I have to say, every time I see it, I just can’t believe what my eyes are seeing. Is this the first time Atlas has been out of the lab?

    Zack Jackowski: This is the first time Atlas has been out of the lab doing real work.

    Bill Whitaker and Zack Jackowski

    60 Minutes


    Zack Jackowski heads Atlas development. He has two mechanical engineering degrees from MIT and a mission to turn the robot into a productive worker on the factory floor. We watched as Atlas practiced sorting roof racks for the assembly line without human help. 

    Bill Whitaker: So he’s working autonomously. 

    Zack Jackowski: Correct.

    Bill Whitaker: You’re down here to see how Atlas works in the field, and you’ll be showing Atlas off to your bosses at Hyundai?

    Zack Jackowski: Yeah. 

    Bill Whitaker: Do you feel like a proud papa? 

    Zack Jackowski: I feel like– a nervous engineer. 

    Jackowski has been preparing for this moment for a year. We first met him and Atlas a month earlier at Boston Dynamics’ headquarters just outside the city, where he and his team were teaching Atlas skills needed to work at Hyundai. And Atlas, with its AI brain, was gaining knowledge through experience – in other words, it seemed to be learning.

    Bill Whitaker: You know how crazy that sounds?

    Zack Jackowski: Yeah, a little bit. I– and I– I think a lot of our roboticists would’ve thought that was pretty crazy five, six years ago. 

    When 60 Minutes last visited Boston Dynamics in 2021, Atlas was a bulky, hydraulic robot that could run and jump. Back then, Atlas relied on algorithms written by engineers. When we dropped in again this past fall, we saw a new generation Atlas with a sleek, all-electric body and an AI brain, powered by Nvidia’s advanced microchips, making Atlas smart enough to pull off hard-to-believe feats autonomously. We saw Atlas skip and run with ease.

    Bill Whitaker: Do you ever stop thinking, gee whiz?

    Scott Kuindersma: I remain extremely excited about where we are in the history of robotics but we see that there’s so much more that we can do, as well.

    Scott Kuindersma is head of robotics research, a job he proudly wears on his sleeve.

    Scott Kuindersma

    60 Minutes


    Bill Whitaker: You even have on a robot shirt.

    Scott Kuindersma: Well, once I saw that this shirt existed, there was no way I wasn’t buying it. 

    He told us robots today have learned to master moves that until recently were considered a step too far for a machine.

    Scott Kuindersma: And a lot of this has to do with how we’re going about programming these robots now, where it’s more about teaching, and demonstrations, and machine learning than manual programming.

    Bill Whitaker: So this humanoid, this mechanical human, can actually learn?

    Scott Kuindersma: Yes. And– and we found that that’s actually one of the most effective way to program robots like that.

    Atlas learns in different ways. In supervised learning, machine learning scientist Kevin Bergamin – wearing a virtual reality headset – takes direct control of the humanoid, guiding its hands and arms, move-by-move through each task until Atlas gets it.

    Scott Kuindersma: And if that teleoperator can perform the task that we want the robot to do, and do it multiple times, that generates data that we can use to train the robot’s AI models to then later do that task autonomously. 

    Kuindersma used me to demonstrate another way Atlas learns.

    Scott Kuindersma: That v– very stylish suit that you’re wearing is actually gonna capture all of your body motion to train Atlas to try to mimic exactly your motions. And so you’re about to become a 200-pound metal robot.

    He asked me to pick an exercise. They captured the way I work as well.

    Bill Whitaker: I am here at the AI Lab at Boston Dynamics. All of my movements, my walking, my d– arm gestures are being picked up by these sensors…

    Then engineers put my data into their machine learning process. Atlas’ body is different from mine, so they had to teach it to match my movements virtually – more than 4,000 digital Atlases trained for six hours in simulation.

    Atlas humanoid

    60 Minutes


    Scott Kuindersma: And they’re all trying to do jumping jacks, just like you. And as you can see, they’re just starting to learn, so they’re not very good at it.

    The simulation, he told us, added challenges for the avatars, like slippery floors, inclines, or stiff joints, and then homed in on what works best.

    Scott Kuindersma: And it can eventually get to a state where we have many copies of Atlas doing really good jumping jacks. 

    They uploaded this new skill into the AI system that controls every Atlas robot. Once one is trained, they’re all trained.
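    The training scheme described above, thousands of simulated robots practicing in parallel while the simulator deliberately varies floors, inclines, and joint stiffness, is what roboticists call domain randomization. Here is a toy sketch of the pattern; the “skill score,” the environment parameters, and the brute-force policy search are all invented for illustration and are not Boston Dynamics’ method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each simulated robot practices in its own perturbed world: randomized
# floor friction and incline stand in for the varied conditions above.
frictions = rng.uniform(0.2, 1.0, size=4000)
inclines = rng.uniform(0.0, 0.3, size=4000)

def skill_score(policy_gain, friction, incline):
    """Invented stand-in for 'how good are the jumping jacks': the ideal
    gain shifts with the environment, so no single setting is perfect."""
    target = 1.0 + 0.3 * (friction - 0.5) - 0.2 * incline
    return -(policy_gain - target) ** 2

def robustness(policy_gain):
    # Average performance across every randomized environment.
    return skill_score(policy_gain, frictions, inclines).mean()

# Crude policy search: keep the candidate that holds up best on average
# across all 4,000 perturbed simulations.
candidates = np.linspace(0.0, 2.0, 201)
best = max(candidates, key=robustness)
```

    The winner is not the best policy in any single world but the one that degrades least across all of them, which is the property that lets skills trained this way survive the jump to a physical robot.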

    Scott Kuindersma: So that’s what you look like when you’re exercising. 

    Bill Whitaker: Uh-huh.

    And what I look like doing my job.

    Bill Whitaker: I am here at the AI Lab at Boston Dynamics. All of my movements, my walking, my d– arm gestures are being picked up by these sensors … 

    Bill Whitaker: This is mind-blowing.

    Through the same processes, Atlas was taught to crawl and do cartwheels. It didn’t fare as well with the duck walk. 

    Scott Kuindersma: Oh, that was fun. And then this happens.

    Bill Whitaker: And then this happens. 

    Scott Kuindersma: We love when things like this happen, actually. Because it’s often an opportunity to understand something we didn’t know about the system.

    Bill Whitaker: What are some of the limitations you see now?

    Scott Kuindersma: Well, I’d- I would say that most things that a person does in their daily lives, Atlas or– other humanoids can’t really do that yet. I think we’re start–

    Bill Whitaker: Like- like what?

    Scott Kuindersma: Well, just putting on clothes in the morning, or pouring your cup of coffee and walking around the house with it.

    Bill Whitaker: That’s too difficult for– for Atlas?

    Scott Kuindersma: Yeah, I think there are no humanoids that do that nearly as well as a person would do that. But I think the thing that’s really exciting now is we see a pathway to get there. 

    A pathway provided by AI. What stands out in this Atlas is its brain. Nvidia chips – the ones that helped launch the AI revolution with ChatGPT – process the flood of collected data, moving this humanoid robot closer to something like common sense.

    Scott Kuindersma: So the analogy might be if I was teaching a child how to do free throws in basketball, if I allow them to just explore and come up with their own solutions, sometimes they can come up with a solution that I didn’t anticipate. And that’s true for these systems as well.

    Atlas can see its surroundings and is figuring out how the physical world works. 

    Scott Kuindersma: So that some day you can put a robot like this in a factory and just explain to it what would– you would like it to do, and it has enough knowledge about how the world works that it has a good chance of doing it.

    Robert Playter: There’s a lot of excitement in the industry right now about the potential of building robots that are smart enough to really become general purpose.

    Boston Dynamics CEO Robert Playter


    60 Minutes


    Robert Playter, the CEO of Boston Dynamics, spearheaded the company’s humanoid development. He’s been building toward this moment for more than 30 years. The cornerstone was this robotic dog, Spot, introduced almost a decade ago. Spots are trained in heat, cold and varied terrain, and roam the halls of Boston Dynamics.

    Robert Playter: So we have some cameras– thermal sensors, acoustic sensors. An array of sensors on its back that lets it collect data about the health of a factory.

    Spots carry out quality control checks at Hyundai, making sure the cars have the right parts. They conduct security and industrial inspections at hundreds of sites around the world. What began with Spot has evolved into Atlas. 

    Robert Playter: So this robot is capable of superhuman motion, and so it’s gonna be able to exceed what we can do. 

    Bill Whitaker: So you are creating a robot that is meant to exceed the capabilities of humans.

    Robert Playter: Why not, right? We– we would like things that could be stronger than us or tolerate more heat than us or definitely go into a dangerous place where we shouldn’t be going. So you really want superhuman capabilities. 

    Bill Whitaker: To a lotta people that sounds scary. You don’t foresee– a world of Terminators? 

    Robert Playter: Absolutely not. I think if you saw how hard we have to work to get the robots to just do some of the straightforward tasks we want them to do, that would dispel that– that worry about sentience and rogue robots. 

    We wondered if people might have more immediate concerns. We saw workers doing a job at the Hyundai plant that Atlas is being trained to perform. 

    Bill Whitaker: I guarantee you there are going to be people who will say, “I’m gonna lose my job to a robot.” 

    Robert Playter: Work does change. So the really repetitive, really back-breaking labor is really- is gonna end up being done by robots. But these robots are not so autonomous that they don’t need to be managed. They need to be built. They need to be trained. They need to be serviced. 

    Playter told us it could be several years before Atlas joins the Hyundai workforce fulltime. Goldman Sachs predicts the market for humanoids will reach $38 billion within the decade. Boston Dynamics and other U.S. robot makers are fighting to come out on top. But they’re not the only ones in the ring. Chinese companies are proving to be formidable challengers. They’re running to win.

    Bill Whitaker: Are they outpacing us? 

    Robert Playter: The Chinese government has a mission to win the robotics race. Technically I believe we remain– in the lead. But there’s a real threat there that, simply through the scale of investment– we could fall behind. 

    To stay ahead, Hyundai made that big investment in Boston Dynamics.

    Zack Jackowski: Four robots…

    We were at the Georgia plant when Atlas engineer Zack Jackowski presented Atlas to Heung-soo Kim, Hyundai’s head of global strategy. He came all the way from South Korea to check in on the brave new world the carmaker is funding. 

    Bill Whitaker: What do you think of the progress that they’ve made with Atlas?

    Heung-soo Kim: I think we are on track- about the development. Atlas, so far, it’s very successful. It’s a kind of– a start of great journey. Yeah.

    The destination? That humanoid future we mentioned at the start – robots like us working beside us, walking among us. It’s enough to make your head spin.

    Produced by Marc Lieberman. Associate producer, Cassidy McDonald. Broadcast associate, Mariah Johnson. Edited by Matt Richman.


  • Can AI turn a robot dog into a first responder? – WTOP News


    Researchers with the University of Maryland are turning a dog they nicknamed Spot into a robot that can assess patients at mass casualty scenes.

    At a scene where there are more victims than medics, whether it’s a crime scene, the scene of an accident or a battlefield, the future of that initial screening could be conducted by a robotic dog made by Boston Dynamics.

    Researchers with the University of Maryland, in cooperation with the Defense Advanced Research Projects Agency, are turning a robot dog they nicknamed Spot into a first responder that can talk to and assess patients, and work with medics to make sure whoever needs the most serious amount of help can get it fast.

    “I’m here to help,” the robot says as it approaches a mannequin that, at least in this demo, had suffered a gunshot wound to the leg. “Can you tell me what happened?”

    The computer on the dog includes a large language model artificial intelligence system, similar to ChatGPT, that can communicate with the patient.

    “We buy pretty much the heaviest computer that it could carry, and we put it on there,” said Derek Paley, a professor in Maryland’s Department of Aerospace Engineering and the Institute for Systems Research. “We also add a lot of sensors to the arm here. These are the sensors that are used to assess a patient’s injuries.”

    It all works together to determine someone’s condition.

    “The depth camera can create a 3D image of the casualty, and each of these sensing modalities are fused in what we call an ‘inference engine,’ so that accumulates evidence to support the assessments that are shown here. So each assessment may be determined by combining information from multiple robots, multiple sensors and multiple sensor-processing algorithms,” Paley said.
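    Paley describes the inference engine only at a high level, but “accumulating evidence” from independent sensors is classically done with Bayesian updating: each reading multiplies into the running probability of an assessment. The sketch below is a generic, hypothetical illustration of that pattern; the sensor names and likelihood numbers are invented and are not the Maryland team’s actual values.

```python
def update_belief(prior, likelihood_if_critical, likelihood_if_stable):
    """One Bayesian update: fold a sensor reading's likelihoods into the
    current probability that the casualty is in critical condition."""
    p_critical = prior * likelihood_if_critical
    p_stable = (1 - prior) * likelihood_if_stable
    return p_critical / (p_critical + p_stable)

# Hypothetical fused readings. Each tuple: (description, how likely this
# reading is if the patient is critical, how likely if stable).
readings = [
    ("thermal camera: low skin temperature", 0.7, 0.2),
    ("depth camera: no chest movement", 0.6, 0.1),
    ("microphone: no verbal response", 0.8, 0.4),
]

belief = 0.5  # uninformative prior before any sensor data arrives
for name, l_critical, l_stable in readings:
    belief = update_belief(belief, l_critical, l_stable)
    print(f"after {name}: P(critical) = {belief:.2f}")
```

    Each individually ambiguous reading nudges the estimate, and together they can push the probability high enough to trigger the “Medic, medic!” call-out; fusing across multiple robots works the same way, with each robot’s evidence folded into the same running probability.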

    At the very beginning of the response to the incident, an aerial drone will assess the situation on the ground, mapping out where potential victims are and sending that information to both Spot and medics on scene. Spot can then scour the area to get a closer look with all its cameras and sensors.

    “The robots can explore, they can assess the number of casualties, where they’re all located, and actually provide that information to the medic in real time on a phone that’s attached to the medic’s chest,” Paley said. “So the medic can look down at their chest and see pins on a map where all the casualties are, color coded by the severity of injuries.”

    The robots are all doing it autonomously, too.

    “They build a mosaic of images in a map to show where the casualties are, and then the ground robots, the Spots here, go to each casualty and they get things like vital signs and other assessments that the drones can’t perform,” Paley said. “That’s all preloaded onto the medic’s phone, so they have that information when they get to each casualty. They already know what the robot has assessed.”

    Spot can even call out for a medic urgently if it determines a patient has critical injuries by shouting, “Medic, medic!”

    All of this is still in the testing phase right now, but Paley thinks the technology could be deployable within the next couple of years.

    “We’re able to provide valuable assessments to the medics while they’re under pressure to provide those interventions in timely fashion,” he said.


    © 2025 WTOP. All Rights Reserved. This website is not intended for users located within the European Economic Area.

    John Domen

  • This Robot Only Needs a Single AI Model to Master Humanlike Movements


    While there is a lot of work to do, Tedrake says all of the evidence so far suggests that the approaches used for LLMs also work for robots. “I think it’s changing everything,” he says.

    Gauging progress in robotics has become more challenging of late, of course, with video clips showing commercial humanoids performing complex chores, like loading refrigerators or taking out the trash with seeming ease. YouTube clips can be deceptive, though, and humanoid robots tend to be either teleoperated, carefully programmed in advance, or trained to do a single task in very controlled conditions.

    The new Atlas work is a big sign that robotics is starting to experience the kind of advances that, in generative AI, eventually led to the general language models behind ChatGPT. Eventually, such progress could give us robots that can operate in a wide range of messy environments with ease and rapidly learn new skills, from welding pipes to making espressos, without extensive retraining.

    “It’s definitely a step forward,” says Ken Goldberg, a roboticist at UC Berkeley who receives some funding from TRI but was not involved with the Atlas work. “The coordination of legs and arms is a big deal.”

    Goldberg says, however, that the idea of emergent robot behavior should be treated carefully. Just as the surprising abilities of large language models can sometimes be traced to examples included in their training data, he says that robots may demonstrate skills that seem more novel than they really are. He adds that it is helpful to know details about how often a robot succeeds and in what ways it fails during experiments. TRI has previously been transparent with the work it’s done on LBMs and may well release more data on the new model.

    Whether simply scaling up the data used to train robot models will unlock ever more emergent behavior remains an open question. At a debate held in May at the International Conference on Robotics and Automation in Atlanta, Goldberg and others cautioned that engineering methods will also play an important role going forward.

    Tedrake, for one, is convinced that robotics is nearing an inflection point—one that will enable more real-world use of humanoids and other robots. “I think we need to put these robots out in the world and start doing real work,” he says.

    What do you think of Atlas’ new skills? And do you think we are headed for a ChatGPT-style breakthrough in robotics? Let me know your thoughts at ailab@wired.com.


    This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.

    Will Knight

  • Watch This New Robot Relax in the Creepiest Way Possible


    The past decade has seen humanoid robot makers trying to make their creations more and more like humans. But here in 2024, we seem to be witnessing an odd shift in the dexterity of our bipedal robo-dreams. Put bluntly, robotics companies aren’t afraid of getting weird with the contortions of their latest offerings.

    China-based Unitree Robotics released a new video on Monday, available on YouTube, showing off the new G1, which retails for $16,000. It’s just the latest demonstration of a robot maneuvering in entirely un-human ways to accomplish its goals, as you can see in the GIF above.

    The video includes lots of odd movements, showing how the robot can get up off the ground or greet people by pulling a sort of Exorcist move with its torso, rotating 180 degrees. And it all looks strikingly similar to the new version of Boston Dynamics’ Atlas, which has a novel way of getting on its feet.

    There’s also a demonstration of the Unitree robot getting kicked and pushed, presumably to show how well it can balance, even when it meets resistance. But we’d be lying if we said it didn’t make us uncomfortable. These are, after all, robots made to look like humans. And watching wanton cruelty, even against a machine that doesn’t have feelings, sets off something deep in our brain that says they shouldn’t be doing that.

    Unitree Introducing | Unitree G1 Humanoid Agent | AI Avatar | Price from $16K

    Again, these contortions all seem a bit new. A decade ago, Gizmodo attended the DARPA Robotics Challenge in Southern California, where teams largely competed by trying to make their robots as much like humans as possible. Companies like Boston Dynamics released new videos each year showing its robots walking, running, and then eventually doing backflips, all in the same way that talented humans might do it.

    But we seem to be on the cusp of a new era when it comes to robotics. Most robot makers have achieved basic human-style walking and running. The new frontier is taking that form factor and turning it into something superhuman, whether by performing gymnastics or applying logic and reason to the world in front of it.

    We’re still a long way from AGI that’ll help robots get chores done, but if we continue on this trajectory, it seems unlikely robots will be doing the mundane tasks that humans don’t want to do. We allowed AI to skip all the boring stuff and jump right ahead to making music and writing poetry. It seems silly to think we’re building an army of butlers to serve humanity with that technology. No, we’re probably going to be letting the robots paint beautiful landscapes while we’re all stuck at our desks filling out Excel spreadsheets if the recent past is any guide.

    Matt Novak

  • Humanoid robots are learning to fall well | TechCrunch


    The savvy marketers at Boston Dynamics produced two major robotics news cycles last week. The larger of the two was, naturally, the electric Atlas announcement. As I write this, the sub-40-second video is steadily approaching five million views. A day prior, the company tugged at the community’s heartstrings when it announced that the original hydraulic Atlas was being put out to pasture, a decade after its introduction.

    The accompanying video was a celebration of the older Atlas’ journey from DARPA research project to an impressively nimble bipedal ’bot. A minute in, however, the tone shifts. Ultimately, “Farewell to Atlas” is as much a celebration as it is a blooper reel. It’s a welcome reminder that for every time the robot sticks the landing on video there are dozens of slips, falls and sputters.


    I’ve long championed this sort of transparency. It’s the sort of thing I would like to see more from the robotics world. Simply showcasing the highlight reel does a disservice to the effort that went into getting those shots. In many cases, we’re talking years of trial and error spent getting robots to look good on camera. When you only share the positive outcomes, you’re setting unrealistic expectations. Bipedal robots fall over. In that respect, at least, they’re just like us. As Agility put it recently, “Everyone falls sometimes, it’s how we get back up that defines us.” I would take that a step further, adding that learning how to fall well is equally important.

    The company’s newly appointed CTO, Pras Velagapudi, recently told me that seeing robots fall on the job at this stage is actually a good thing. “When a robot is actually out in the world doing real things, unexpected things are going to happen,” he notes. “You’re going to see some falls, but that’s part of learning to run a really long time in real-world environments. It’s expected, and it’s a sign that you’re not staging things.”

    A quick scan of Harvard’s rules for falling without injury reflects what we intuitively understand about falling as humans:

    1. Protect your head
    2. Use your weight to direct your fall
    3. Bend your knees
    4. Avoid taking other people with you

    As for robots, this IEEE Spectrum piece from last year is a great place to start.

    “We’re not afraid of a fall—we’re not treating the robots like they’re going to break all the time,” Boston Dynamics CTO Aaron Saunders told the publication last year. “Our robot falls a lot, and one of the things we decided a long time ago [is] that we needed to build robots that can fall without breaking. If you can go through that cycle of pushing your robot to failure, studying the failure, and fixing it, you can make progress to where it’s not falling. But if you build a machine or a control system or a culture around never falling, then you’ll never learn what you need to learn to make your robot not fall. We celebrate falls, even the falls that break the robot.”


    The subject of falling also came up when I spoke with Boston Dynamics CEO Robert Playter ahead of the electric Atlas’ launch. Notably, the short video begins with the robot in a prone position. The way the robot’s legs arc around is quite novel, allowing the system to stand up from a completely flat position. At first glance, it almost feels as though the company is showing off, using the flashy move simply as a method to showcase the extremely robust custom-built actuators.

    “There will be very practical uses for that,” Playter told me. “Robots are going to fall. You’d better be able to get up from prone.” He adds that the ability to get up from a prone position may also be useful for charging purposes.

    Much of Boston Dynamics’ learnings around falling came from Spot. While there’s generally more stability in the quadrupedal form factor (as evidenced by decades of people trying and failing to kick the robots over in videos), there are simply way more hours of Spot robots working in real-world conditions.


    “Spot’s walking something like 70,000 km a year on factory floors, doing about 100,000 inspections per month,” adds Playter. “They do fall, eventually. You have to be able to get back up. Hopefully you get your fall rate down — we have. I think we’re falling once every 100 to 200 km. The fall rate has really gotten small, but it does happen.”

    Playter adds that the company has a long history of being “rough” on its robots. “They fall, and they’ve got to be able to survive. Fingers can’t fall off.”

    Watching the above Atlas outtakes, it’s hard not to project a bit of human empathy onto the ’bot. It really does appear to fall like a human, drawing its extremities as close to its body as possible, to protect them from further injury.

    When Agility added arms to Digit, back in 2019, it discussed the role they play in falling. “For us, arms are simultaneously a tool for moving through the world — think getting up after a fall, waving your arms for balance, or pushing open a door — while also being useful for manipulating or carrying objects,” co-founder Jonathan Hurst noted at the time.

    I spoke a bit to Agility about the topic at Modex earlier this year. Video of a Digit robot falling over on a convention floor a year prior had made the social media rounds. “With a 99% success rate over about 20 hours of live demos, Digit still took a couple of falls at ProMat,” Agility noted at the time. “We have no proof, but we think our sales team orchestrated it so they could talk about Digit’s quick-change limbs and durability.”

    As with the Atlas video, the company told me that something akin to a fetal position is useful in terms of protecting the robot’s legs and arms.

    The company has been using reinforcement learning to help fallen robots right themselves. Agility shut off Digit’s obstacle avoidance for the above video to force a fall. In the video, the robot uses its arms to mitigate the fall as much as possible. It then draws on its reinforcement learning to return to a familiar position from which it can stand again with a robotic pushup.
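    Agility hasn’t detailed its recovery controller, but the usual reinforcement-learning recipe is to reward the simulated robot for every move that brings it closer to a pose it already knows how to stand from. Below is a toy tabular Q-learning version of that idea; the poses, actions, dynamics, and rewards are all invented for illustration and are far simpler than any real controller.

```python
import random

random.seed(0)

# Toy fall-recovery problem: coarse body poses on the way back up.
STATES = ["sprawled", "fetal", "kneeling", "crouched", "standing"]
ACTIONS = ["tuck_arms", "push_up", "extend_legs"]

# Invented dynamics: only the "right" action advances the pose; a wrong
# action knocks the robot back toward sprawled.
RIGHT_ACTION = {0: "tuck_arms", 1: "push_up", 2: "push_up", 3: "extend_legs"}

def step(state, action):
    nxt = state + 1 if RIGHT_ACTION[state] == action else max(0, state - 1)
    reward = 10.0 if nxt == 4 else -1.0  # cost per move, bonus for standing
    return nxt, reward

# Tabular Q-learning over many simulated falls.
Q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2
for episode in range(500):
    s = 0  # every episode starts flat on the ground
    while s != 4:
        if random.random() < eps:
            a = random.choice(ACTIONS)  # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])  # exploit
        nxt, r = step(s, a)
        future = 0.0 if nxt == 4 else max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * future - Q[(s, a)])
        s = nxt

policy = {STATES[s]: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(4)}
```

    After training, the greedy policy reads off a recovery sequence (tuck, push up, extend), exactly the kind of “return to a familiar position, then stand” behavior described above.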

    One of humanoid robots’ main selling points is their ability to slot into existing workflows — these factories and warehouses are known as “brownfield,” meaning they weren’t custom built for automation. In many existing cases of factory automation, errors mean the system effectively shuts down until a human intervenes.

    “Rescuing a humanoid robot is not going to be trivial,” says Playter, noting that these systems are heavy and can be difficult to manually right. “How are you going to do that if it can’t get itself off the ground?”

    If these systems are truly going to ensure uninterrupted automation, they’ll need to fall well and get right back up again.

    “Every time Digit falls, we learn something new,” adds Velagapudi. “When it comes to bipedal robotics, falling is a wonderful teacher.”

    Brian Heater

  • The Atlas Robot Is Dead. Long Live the Atlas Robot


    You don’t need to have been petrified by Arnold Schwarzenegger’s Skynet-commissioned cyborg assassin in 1984’s The Terminator to fret that super-strong, all-terrain, bipedal humanoid robots sprinting up steps, pulling backflips, and righting themselves could be programmed to break our necks on sight. (And laser guns, never give them laser guns.)

    With the Old Atlas, we could comfort ourselves with the notion that clever editing meant Atlas wasn’t as self-righting over rough ground as the original viral videos portrayed. The pratfalls in the retirement video prove that hunch was correct. However, today’s video might well resurrect any robot overlord fears you may have since suppressed. This thing is scary, and not just because it has a ringlight for a face. (Who had “Robot YouTube influencer” on their 2024 bingo card?)

    It was nice knowing you, Old Atlas—you awesome, pratfalling, parkouring, metal man machine.

    Scary, too, if you’re an Amazon warehouse worker, because the New Atlas could do that job with one three-fingered hand tied behind its matte gray robotic back. More likely, however, is that Hyundai—which bought Boston Dynamics in 2020, valuing it at $1 billion—could soon set Atlas to work in its car factories. The “journey will start with Hyundai,” confirmed Boston Dynamics in a statement announcing the All New Atlas launch.

    Again, no details have been released, but we can surmise that the new Atlas will be given dull, repetitive tasks in the Korean company’s factories rather than, say, laser welding. (Remember, keep lasers away from robot butlers.)

    Hyundai isn’t the only company planning to use humanoid robots as workers. Beating Tesla’s still-in-development Optimus line of humanoid robots, Sanctuary AI of Canada announced on April 11 that it would be delivering a humanoid robot to Magna, an Austrian automotive firm that assembles cars for Mercedes, Jaguar, and BMW.

    And Californian robotics startup Figure announced in February that it had raised $675 million from investors such as Nvidia, Microsoft, and Amazon to work with OpenAI on generative artificial intelligence for humanoid robots.

    A general-purpose humanoid robot that can learn on the fly. What could possibly go wrong with that?

    [ad_2]

    Carlton Reid

    Source link

  • Inversion Space will test its space-based delivery tech in October | TechCrunch


    Inversion Space is aptly named. The three-year-old startup’s primary concern is not getting things to space, but bringing them back — transforming the ultimate high ground into “a transportation layer for Earth.”

    The company’s plan — ultra-fast, on-demand deliveries to anywhere on Earth — sounds like pie in the sky, but it’s the sort of moonshot goal that could transform terrestrial cargo transportation. The aim is to send up fleets of earth-orbiting vehicles that will be able to shoot back to Earth at Mach speeds, slow with specially made parachutes, and deliver cargo in minutes.

    Inversion has developed a pathfinder vehicle, called Ray, that’s a technical precursor to a larger platform that will debut in 2026. Ray will head to space this October, on SpaceX’s Transporter-12 rideshare mission, paving the way for Inversion’s future plans on orbit (and back).

    Ray is small — about twice the diameter of a standard frisbee — and will spend anywhere from one to five weeks in space, depending on factors like weather and how the orbit aligns with the landing site, Inversion CEO Justin Fiaschetti explained in a recent interview.

    This first mission will have three phases: the initial on-orbit phase, where the spacecraft will power on, charge its batteries, and hopefully send telemetry to the ground. During the second phase, Ray will use its onboard propulsion system to slow down the vehicle so it starts losing altitude and reentering the atmosphere. The reentry capsule will separate from the satellite bus (both designed in-house), with the latter structure burning up.

    The third and final phase will see Ray slow down using a supersonic drogue parachute, from a reentry speed of Mach 1.8 to Mach 0.2. The main parachute will then deploy, further slowing the capsule to a soft splashdown off the coast of California.

    Impressively, the company has designed and built almost all of the Ray vehicle in-house, from the propulsion system to the structure to the parachutes. This last component is key: almost no space company designs parachutes themselves, and they’re incredibly challenging to engineer from the ground up. Inversion’s engineering team completed qualification testing of the deployment and parachute systems last year.

    Fiaschetti said strong vertical integration has helped the company move so quickly.

    “The purpose of our Ray vehicle is to develop technology for our next-gen vehicle. As such, we’ve built basically the entire vehicle in-house,” Fiaschetti said. “What we saw was that if we can build in-house now, do the hard thing first, that allows us to scale very quickly and meet our customer needs.”

    The reentry vehicle is totally passive — meaning it doesn’t have active controls to navigate its reentry to Earth — but the company’s larger next-gen vehicle, called Arc, will have “football field-level” accuracy.

    Inversion was founded by CEO Justin Fiaschetti and CTO Austin Briggs in 2021, but the two go back further: they met for the first time when they sat next to each other at a Boston University freshman matriculation ceremony. The pair eventually got jobs in southern California — Briggs, as a propulsion development engineer at ABL Space Systems, while Fiaschetti had brief engineering stints at Relativity and SpaceX — and they were actually roommates when they first floated the idea of developing technology to deliver cargo anywhere on Earth.

    The company went through Y Combinator in the summer of 2021 (it was one of our favorites from the cohort) and closed its $10 million seed round in November that same year.

    “We’ve been off to the races ever since,” Fiaschetti said. The company’s grown to 25 employees, who are based out of Torrance, California, where they have a 5,000-square-foot facility. The startup also owns five acres of land in the Mojave Desert, where it conducts engine testing. The scaling of the team and this first mission have been entirely financed by that round.

    The startup sees promising markets in both government agencies and private companies; both segments could use Inversion’s reusable platform as an on-orbit testbed, or as a delivery vehicle to a private commercial space station. Inversion is aiming to push both reusability and duration on orbit “to the maximum” to bring down costs and to support different mission profiles, Fiaschetti said.

    Inversion aims to fly the next-gen vehicle, Arc, for the first time in 2026. While the two cofounders declined to provide more details on the spacecraft, the company’s website says it will be capable of carrying over 150 kilograms of cargo, to provide “proliferated” delivery in space.

    “We are testing hardware consistently. We’re developing an infrastructure to be able to scale ourselves. Just as our decision to bring parachutes in-house was a decision because the parachutes are so directly applicable to what we’re building, it’s making those kinds of key decisions that allows us to move much faster; another reentry vehicle would take much longer to develop.”

    Aria Alamalhodaei

  • Elon Musk’s Latest Robot Video Looks Like It Was Shot on a Phone From 2002


    Elon Musk shared a new video on Saturday featuring Optimus, the robot Tesla has been working on since 2021. But anyone who tries to watch the video will immediately notice something weird. The clip of Optimus is so low quality and pixelated that it looks like it was shot on a flip-phone from two decades ago.

    The new video was posted in the early morning hours of Saturday and has been viewed over 35 million times as of this writing. But the video appears to show Optimus just walking around without doing much of anything. That would have been quite impressive around 2013 or so, since it’s relatively difficult to get machines to walk like humans, but it’s not entirely clear why Musk would want the world to see Optimus walking like this.

    Update, 3:58 p.m. ET: At some point in the past 30 minutes or so Elon Musk’s video was swapped out to include a higher resolution version. Curiously, tweets that have been edited will typically show a note at the bottom that says a tweet has been edited and the time it occurred, but Musk’s tweet doesn’t indicate anything has been changed.

    The screenshots below show a side-by-side of what the tweet looked like before it was changed to include a higher resolution video.

    Screenshot: Elon Musk / X

    We’ve reached out to Twitter to see if Musk has special rules as owner of the social media platform and will update this post if we hear back. The rest of this post is being kept up for posterity.

    Incremental technical achievements aside, why does this video look so terrible? We weren’t the only ones to notice the bizarrely pixelated quality, as plenty of Musk fans made jokes about the blurriness.

    “Was this filmed with a potato?” one user quipped.

    “Same photographer?” another X user joked, attaching a photo of Bigfoot.

    Tesla didn’t immediately respond to questions about this new video of Optimus emailed Saturday.

    Musk unveiled Optimus with an unconventional presentation in the summer of 2021 that really felt like the billionaire was desperate to hype virtually anything futuristic. Tesla’s AI Day that year didn’t feature a real robot, but rather someone dressed in a white and black suit moving around like a stereotypical robot before starting to dance a jig.

    Tesla’s robot has made progress since that first jokey unveiling, but Optimus still has quite a ways to go before it can catch up to the most cutting-edge robots of the 2020s. Atlas, a humanoid robot made by Boston Dynamics, learned to pick itself up in 2016, stood on one leg that same year, did backflips in 2017, and achieved parkour-style jumping in 2018.

    And Atlas is still making progress in ways that rival how humans actually move. Last year, the Atlas robot showed off its ability to manipulate its environment to navigate complex worksites.

    Optimus has made improvements since it was first announced, but it has quite a ways to go if it wants to catch up to a company like Boston Dynamics. Arguably the most impressive thing we’ve seen Optimus do is fold laundry, but if you take a close look at the video, there was a person standing just off-screen mimicking the movements. And, frankly, that’s technology that’s been possible since the 1960s.

    Can Tesla develop a truly autonomous robot that can work as a household servant, just as Musk has promised? Only time will tell. But we’ve been waiting on that version of the future for over a century now. Robotics is hard. But we can certainly keep dreaming.

    [ad_2]

    Matt Novak

    Source link

  • Rhizome’s 7×7 Models a Deeper Collaboration Between Art and Science

    [ad_1]

    Ben Shirken (l.) and Reggie Watts at this year’s 7×7. Photo by Owley Studios, Courtesy of Rhizome.

    The intersection of art and technology gets a lot of press these days. In any given headline, it might be the “next frontier.” Or where cultural innovation happens. On some days, it’s spawning new job titles (e.g., curator of digital initiatives). And it always feels bright and shiny and optimistic and most importantly, new, even though artists have been experimenting with new technologies since the dawn of technology itself.

    And therein lies the challenge one faces when considering what exactly is happening at this much-publicized intersection. On one hand, the phrase is applied, seemingly broadly, to everything from NFTs and the ever-morphing works of Refik Anadol to the kinds of immersive installations pioneered by Sandro Kereselidze’s Artechouse. On the other, what reportedly exists at the intersection of art and technology seems strangely circumscribed. There’s computer-generated art and art inspired by technology at these crossroads but very little science.

    Or to put it another way, it seems there’s a lot more digital art being created at the intersection of the arts and technology than there are radical pairings of art and science. It may come down to people simply being more open to art borrowing from science and engineering than the reverse, even though there are plenty of notable examples of art inspiring scientific discovery. Niels Bohr in his development of the non-intuitive complementarity principle of quantum mechanics, for example, drew inspiration from Jean Metzinger’s cubist works.

    Claims that the dividing line between science and art is artificial come off as hyperbolic, but both scientists and artists are dreamers who channel their creative energies into untangling the world’s mysteries and building new things. It’s logical to consider what the intersection of art and technology could look like if the focus was on deep collaboration instead of just tapping into one or the other as a source of inspiration.

    Modeling a stronger synergy of art and science

    On a Saturday in late January, scientists, engineers, artists and the curious gathered at the New Museum in New York City for the relaunch of Seven on Seven (7×7), an event born out of a 2010 hackathon that paired seven engineers with seven artists to demonstrate what could happen when they worked together. The lineup of past participants is a fascinating who’s who of art and tech: Tumblr founder David Karp, Internet entrepreneur Jonah Peretti and Aza Raskin of the Center for Humane Technology… new media artist Tabita Rezaire, moving image artist Hito Steyerl and performance and installation artist Martine Syms. In 2015, Ai Weiwei collaborated with the hacker Jacob Appelbaum. This year, Boston Dynamics’ Spot took to the stage with dancer Mor Mendel as part of a collaboration between Boston Dynamics Director of Human-Robot Interaction David Robert and artist Miriam Simun with Hannah Rossi.

    Scientist–artist collaboration can take many forms: art-based communication can make science more accessible… new technologies become mediums in the hands of artists. What’s less common is what one Eos article calls “ArtScience,” which involves “artists and scientists working together in transdisciplinary ways to ask questions, design experiments and formulate knowledge.” 7×7, which is organized by the born-digital art and culture organization Rhizome, puts ArtScience on display by design. According to Xinran Yuan, this year’s producer and co-curator, it’s as important for the public to see collaboration between artists and scientists in action as it is to see the final output.

    Xin Liu, Christina Agapakis and Joshua Dunn. Photo by Owley Studios, Courtesy of Rhizome.

    That output was fascinating and surprisingly moving—Ginkgo Bioworks Head of Creative Christina Agapakis and artist Xin Liu’s yeast that lactates stood out—though I personally would have liked each duo’s presentations to be longer. Other 2024 7×7 participants included Replika AI CEO and Founder Eugenia Kuyda with artist and filmmaker Lynn Hershman Leeson; Nym Technologies CEO and Co-Founder Harry Halpin with artist Tomás Saraceno; Runway CEO and Co-Founder Cristóbal Valenzuela with comedian, writer, and actor Ana Fabrega; engineer and entrepreneur Alan Steremberg with artist Rindon Johnson; and quantum physicist Dr. Stephon Alexander working with comedian, artist and musician Reggie Watts.

    The focus of this year’s event was A.I.—specifically, the role it might play in our lives moving forward. It’s a blisteringly hot topic in the art world, given the emergence of tools that many artists argue are, at best, plagiarism machines and, at worst, livelihood killers.

    “I’m glad that I’m alive right now at this really precarious time in human history and to be involved with A.I.,” Watts said at the end of an engaging and pleasantly optimistic talk on the potential of artificial intelligence in not only music but also improvisational creation. He was, however, pragmatic about the role artists need to play in the development of the technology. “I think it’s important for artists and technologists, but especially artists, to get ahead of the curve… even if you arrive at ‘this isn’t for me,’ be there at the table to have an opinion so it can be steered in a direction that’s most useful.”

    Simun also feels it’s important to consider the question of what our future with A.I. will look like. “A question I asked during my performance is: What would happen if we defined intelligence less on how well someone/something knows, and rather on how well they react to unexpected, ambiguous, and uncertain situations?” she told Observer. “If this was the metric by which we defined intelligence, how might we build our robots and our A.I. differently?”

    Dancer Mor Mendel with Boston Dynamics’ Spot piloted by Hannah Rossi. Photo by Owley Studios, Courtesy of Rhizome.

    What scientists gain by working with artists

    We’re culturally comfortable with art informed by science but less so with science informed by art—and that means we may be missing out on opportunities for innovation. Matthias C. Rillig, professor of ecology at Freie Universität Berlin, has considered the question in his own lab, which has an established artist-in-residency program, and among the many benefits of artist–scientist collaboration he has identified, idea generation stands out. “In conversations with the artist, unusual terms or connections appear,” he wrote last year. “One recent example of this was the term ‘soundscape stewardship’ that occurred in a conversation with Marcus Maeder,” which led to a paper in Science.

    Observer spoke with David Robert shortly after 7×7 about why Boston Dynamics collaborates with artists. “Putting the robot in other contexts, besides what it’s doing for its ‘job’ to earn its keep helps us figure out what’s possible,” he said. Working on projects with artists, he explained, can help engineers understand not only whether people like or don’t like a robot but also what aspects they like or dislike, which can suggest avenues for improvement.

    On the other hand, he added, “people project on them all the time and that’s a hard thing to design around.” Boston Dynamics has arguably done a top-notch job of getting people excited about robots, and at this point, it’s hard not to anthropomorphize Spot, which is bright yellow, moves like a happy dog and can be outfitted with what is functionally an arm but makes the robot look something like a friendly apatosaurus. It’s also currently painting with artist Agnieszka Pilat at this year’s National Gallery of Victoria (NGV) Triennial and has danced with BTS, walked the Coperni runway during Paris Fashion Week and given many kids and adults their first view of a real robot in action at Boston’s Museum of Science.


    On the other hand, there’s still a ways to go—even with the maximum encutification of robots (see, for example, the University of Manitoba’s Picassnake), people make jokes about killbots and the coming robot apocalypse. “It totally makes sense, given all the narratives that we’ve grown up with,” Robert said. “Most people haven’t had a direct experience with a robot.”

    The arts can change that. Simun’s 7×7 piece, danced to the music of Igor Tkachenko and DJ Dede, offered an alternative to the imaginary robots we grew up with. “I hope the performance I created enabled the audience to gain a new and different perspective on the adoption of robots in our daily lives,” she said. “How are these robots being programmed to behave? To interact with us? To interact with their surroundings? … What kind of relationships with machines do we want, what will we get and what can we dream of?”

    In the end, the answer to those questions will be determined by the types of dreamers who took the stage at the New Museum—those for whom art is more than science’s ambassador and technology isn’t just another artist’s tool.

    [ad_2]

    Christa Terry

    Source link

  • Robotics Q&A with Boston Dynamics’ Aaron Saunders | TechCrunch

    [ad_1]

    For the next few weeks, TechCrunch’s robotics newsletter Actuator will be running Q&As with some of the top minds in robotics. Subscribe here for future updates.

    Part 1: CMU’s Matthew Johnson-Roberson

    Part 2: Toyota Research Institute’s Max Bajracharya and Russ Tedrake

    Part 3: Meta’s Dhruv Batra

    This time it’s Boston Dynamics CTO Aaron Saunders. He has been with the company for more than 20 years, most recently serving as its vice president of Engineering.

    What role(s) will generative AI play in the future of robotics?

    The current rate of change makes it hard to predict very far into the future. Foundation models represent a major shift in how the best machine learning models are created, and we are already seeing some impressive near-term accelerations in natural language interfaces. They offer opportunities to create conversational interfaces to our robots, improve the quality of existing computer vision functions and potentially enable new customer-facing capabilities such as visual question answering. Ultimately we feel these more scalable architectures and training strategies are likely to extend past language and vision into robotic planning and control. Being able to interpret the world around a robot will lead to a much richer understanding of how to interact with it. It’s a really exciting time to be a roboticist!

    What are your thoughts on the humanoid form factor?

    Humanoids aren’t necessarily the best form factor for all tasks. Take Stretch, for example — we originally generated interest in a box-moving robot from a video we shared of Atlas moving boxes. Just because humans can move boxes doesn’t mean we’re the best form factor to complete that task, and we ultimately designed a custom robot in Stretch that can move boxes more efficiently and effectively than a human. With that said, we see great potential in the long-term pursuit of general-purpose robotics, and the humanoid form factor is the most obvious match to a world built around our form. We have always been excited about the potential of humanoids and are working hard to close the technology gap. 

    Following manufacturing and warehouses, what is the next major category for robotics?

    Those two industries still stand out when you look at matching up customer needs with the state of the art in technology. As we fan out, I think we will move slowly from environments that have determinism to those with higher levels of uncertainty. Once we see broad adoption in automation-friendly industries like manufacturing and logistics, the next wave probably happens in areas like construction and healthcare. Sectors like these are compelling opportunities because they have large workforces and high demand for skilled labor, but the supply is not meeting the need. Combine that with the work environments, which sit between the highly structured industrial setting and the totally unstructured consumer market, and it could represent a natural next step along the path to general purpose.

    How far out are true general-purpose robots?

    There are many hard problems standing between today and truly general-purpose robots. Purpose-built robots have become a commodity in the industrial automation world, but we are just now seeing the emergence of multi-purpose robots. To be truly general purpose, robots will need to navigate unstructured environments and tackle problems they have not encountered. They will need to do this in a way that builds trust and delights the user. And they will have to deliver this value at a competitive price point. The good news is that we are seeing an exciting increase in critical mass and interest in the field. Our children are exposed to robotics early, and recent graduates are helping us drive a massive acceleration of technology. Today’s challenge of delivering value to industrial customers is paving the way toward tomorrow’s consumer opportunity and the general purpose future we all dream of. 

    Will home robots (beyond vacuums) take off in the next decade?

    We may see additional introduction of robots into the home in the next decade, but for very limited and specific tasks (as with the Roomba, we will find other clear value cases in our daily lives). We’re still more than a decade away from multifunctional in-home robots that deliver value to the broad consumer market. When would you pay as much for a robot as you would a car? When it achieves the same level of dependability and value you have come to take for granted in the amazing machines we use to transport us around the world.

    What important robotics story/trend isn’t getting enough coverage?

    There is a lot of enthusiasm around AI and its potential to change all industries, including robotics. Although it has a clear role and may unlock domains that have been relatively static for decades, there is a lot more to a good robotic product than 1’s and 0’s. For AI to achieve the physical embodiment we need to interact with the world around us, we need to track progress in key technologies like computers, perception sensors, power sources and all the other bits that make up a full robotic system. The recent pivot in automotive towards electrification and Advanced Driver Assistance Systems (ADAS) is quickly transforming a massive supply chain. Progress in graphics cards, computers and increasingly sophisticated AI-enabled consumer electronics continues to drive value into adjacent supply chains. This massive snowball of technology, rarely in the spotlight, is one of the most exciting trends in robotics because it enables small innovative companies to stand on the backs of giants to create new and exciting products. 

    [ad_2]

    Brian Heater

    Source link

  • Los Angeles approves $278,000 robot police dog despite

    [ad_1]

    LA City Council approves robot police dog for LAPD


    A $278,000 robotic dog was approved by the Los Angeles City Council, despite some council members expressing “grave concerns” about the Boston Dynamics-manufactured device. 

    The “Quadruped Unmanned Ground Vehicle” was offered as a donation to the Los Angeles Police Department by the Los Angeles Police Foundation, according to CBS Los Angeles. If the council hadn’t accepted the donation, the offer would have expired, it reported. 

    On Tuesday, the L.A. city council voted 8-4 in favor of accepting the robot dog, which is unarmed but has surveillance technology. Members of the public spoke at the meeting, with most urging the council against taking up the offer, citing fears that the machine could violate residents’ civil rights, CBS LA reported.

    Such robots have sparked debate and fascination, as videos of the four-legged machines — including one that shows a Boston Dynamics robot dog opening a door — have gone viral. But adding the mechanized creatures to policing efforts has also led to controversy, with concern that the machines could be used against people.

    Demo of Boston Dynamics’ SPOT robot dog at 2022 Mobile World Congress MWC in Barcelona, Spain. The “Quadruped Unmanned Ground Vehicle” was offered as a donation to the LAPD by the Los Angeles Police Foundation.

    Gustavo Valiente/Xinhua via Getty Images


    Los Angeles councilwoman Eunisses Hernandez had previously said she had “grave concerns” about accepting the donation. She wasn’t present at the vote on Tuesday, according to ABC7.

    “We’ve seen these robot dogs crop up in other police departments around the country, including New York and San Francisco, where the community is similarly fighting back against bringing this kind of depersonalized, military-style technology to municipal police forces,” she said in a statement, according to CBS LA.


    The LA council’s approval comes with some strings attached. The city’s police department must issue quarterly reports about the robot’s usage and outcomes, as well as note any problems that arise. The council also has authority to suspend the robot’s use, CBS LA said. 

    New York City has recently added robot dogs to its police force, despite concerns from some residents that the machines could be problematic, especially in poor communities that have experienced aggressive policing. A robot dog was recently used in New York City to go inside a collapsed parking garage, with NYC Mayor Eric Adams praising the machine. 

    “Just one week ago, I was being criticized by all the folks in the bleachers, saying, ‘Well, why are you getting that dog?’ ” Adams said. “Now you see why I got the dog — to save lives.”

    [ad_2]

    Source link