ReportWire

Tag: Boston Dynamics

  • Can AI turn a robot dog into a first responder? – WTOP News

    Researchers with the University of Maryland are turning a robot dog named Spot into a first responder that can assess patients at mass casualty scenes.

    At a scene where there are more victims than medics, whether it’s a crime scene, the scene of an accident or a battlefield, that initial screening could one day be conducted by a robot dog made by Boston Dynamics.

    Researchers with the University of Maryland, in cooperation with the Defense Advanced Research Projects Agency, are turning a robot dog named Spot into a first responder that can talk to and assess patients, and work with medics to make sure whoever needs the most urgent help can get it fast.

    “I’m here to help,” the robot says as it approaches a mannequin that, at least in this demo, had suffered a gunshot wound to the leg. “Can you tell me what happened?”

    The computer on the dog includes a large language model artificial intelligence system, similar to ChatGPT, that can communicate with the patient.

    “We buy pretty much the heaviest computer that it could carry, and we put it on there,” said Derek Paley, a professor in Maryland’s Department of Aerospace Engineering and the Institute for Systems Research. “We also add a lot of sensors to the arm here. These are the sensors that are used to assess a patient’s injuries.”

    It all works together to determine someone’s condition.

    “The depth camera can create a 3D image of the casualty, and each of these sensing modalities are fused in what we call an ‘inference engine,’ so that accumulates evidence to support the assessments that are shown here. So each assessment may be determined by combining information from multiple robots, multiple sensors and multiple sensor-processing algorithms,” Paley said.
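
    Paley is describing multi-sensor evidence fusion. The article doesn’t say how the inference engine actually weights its inputs, but a minimal sketch of the general idea, assuming hypothetical sensor names, likelihood ratios and naive independence between the sensors, might look like this:

    ```python
    # Minimal sketch of multi-sensor evidence fusion, the general idea behind an
    # "inference engine" that accumulates evidence for an assessment. This is an
    # illustration only, not the UMD/Boston Dynamics code; the sensor names,
    # prior, and likelihood ratios below are hypothetical.
    import math

    def fuse_log_odds(prior: float, likelihood_ratios: list[float]) -> float:
        """Combine a prior probability with independent likelihood ratios in
        log-odds space and return the posterior probability."""
        log_odds = math.log(prior / (1.0 - prior))
        for lr in likelihood_ratios:
            log_odds += math.log(lr)
        return 1.0 / (1.0 + math.exp(-log_odds))

    # Each sensor pipeline reports how much more likely its observation is if the
    # injury is present than if it is absent (a likelihood ratio > 1 supports it).
    evidence = {
        "depth_camera_wound_detector": 4.0,   # wound-like geometry on the leg
        "thermal_camera_blood_pool":   2.5,   # temperature anomaly near the leg
        "radar_respiration_elevated":  1.5,   # fast breathing
    }

    posterior = fuse_log_odds(prior=0.05, likelihood_ratios=list(evidence.values()))
    print(f"P(severe leg hemorrhage | evidence) ≈ {posterior:.2f}")
    ```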

    At the very beginning of the response to the incident, an aerial drone will assess the situation on the ground, mapping out where potential victims are and sending that information to both Spot and medics on scene. Spot can then scour the area to get a closer look with all its cameras and sensors.

    “The robots can explore, they can assess the number of casualties, where they’re all located, and actually provide that information to the medic in real time on a phone that’s attached to the medic’s chest,” Paley said. “So the medic can look down at their chest and see pins on a map where all the casualties are, color coded by the severity of injuries.”

    The robots are all doing it autonomously, too.

    “They build a mosaic of images in a map to show where the casualties are, and then the ground robots, the Spots here, go to each casualty and they get things like vital signs and other assessments that the drones can’t perform,” Paley said. “That’s all preloaded onto the medic’s phone, so they have that information when they get to each casualty. They already know what the robot has assessed.”
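
    The article doesn’t describe the data the drones, the Spots and the medic’s phone actually exchange, but the workflow Paley outlines maps naturally onto a simple shared casualty record. The sketch below is illustrative only; its field names and severity-to-color mapping (loosely based on standard START triage categories) are assumptions, not the project’s schema:

    ```python
    # Illustrative casualty record of the kind the drones, ground robots and
    # medic's phone might share. Field names and the severity-to-color mapping
    # (loosely based on START triage categories) are assumptions for this sketch,
    # not the project's actual schema.
    from dataclasses import dataclass, field

    TRIAGE_COLORS = {
        "minor": "green",
        "delayed": "yellow",
        "immediate": "red",
        "expectant": "black",
    }

    @dataclass
    class CasualtyPin:
        casualty_id: str
        lat: float
        lon: float
        severity: str                                 # one of TRIAGE_COLORS' keys
        vitals: dict = field(default_factory=dict)    # filled in by a ground robot
        notes: list = field(default_factory=list)

        @property
        def map_color(self) -> str:
            """Color of the pin shown on the medic's map."""
            return TRIAGE_COLORS.get(self.severity, "gray")

    # A drone pass creates the pin; a later Spot visit adds vitals and notes.
    pin = CasualtyPin("c-01", 38.9907, -76.9378, "immediate")
    pin.vitals.update({"respiration_rate": 28, "pulse_detected": True})
    pin.notes.append("suspected gunshot wound, left leg")
    print(pin.casualty_id, pin.map_color, pin.vitals)
    ```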

    Spot can even call out for a medic urgently if it determines a patient has critical injuries by shouting, “Medic, medic!”

    All of this is still in the testing phase, but Paley thinks the technology could be deployable within the next couple of years.

    “We’re able to provide valuable assessments to the medics while they’re under pressure to provide those interventions in timely fashion,” he said.

    John Domen

    Source link

  • This Robot Only Needs a Single AI Model to Master Humanlike Movements

    While there is a lot of work to do, Tedrake says all of the evidence so far suggests that the approaches used for LLMs also work for robots. “I think it’s changing everything,” he says.

    Gauging progress in robotics has become more challenging of late, of course, with videoclips showing commercial humanoids performing complex chores, like loading refrigerators or taking out the trash with seeming ease. YouTube clips can be deceptive, though, and humanoid robots tend to be either teleoperated, carefully programmed in advance, or trained to do a single task in very controlled conditions.

    The new Atlas work is a big sign that robotics is starting to experience the kind of advances that eventually led to the general language models behind ChatGPT. Eventually, such progress could give us robots that are able to operate in a wide range of messy environments with ease and are able to rapidly learn new skills—from welding pipes to making espressos—without extensive retraining.

    “It’s definitely a step forward,” says Ken Goldberg, a roboticist at UC Berkeley who receives some funding from TRI but was not involved with the Atlas work. “The coordination of legs and arms is a big deal.”

    Goldberg says, however, that the idea of emergent robot behavior should be treated carefully. Just as the surprising abilities of large language models can sometimes be traced to examples included in their training data, he says that robots may demonstrate skills that seem more novel than they really are. He adds that it is helpful to know details about how often a robot succeeds and in what ways it fails during experiments. TRI has previously been transparent with the work it’s done on LBMs and may well release more data on the new model.

    Whether simply scaling up the data used to train robot models will unlock ever-more emergent behavior remains an open question. At a debate held in May at the International Conference on Robotics and Automation in Atlanta, Goldberg and others cautioned that engineering methods will also play an important role going forward.

    Tedrake, for one, is convinced that robotics is nearing an inflection point—one that will enable more real-world use of humanoids and other robots. “I think we need to put these robots out into the world and start doing real work,” he says.

    What do you think of Atlas’ new skills? And do you think that we are headed for a ChatGPT-style breakthrough in robotics? Let me know your thoughts at ailab@wired.com.


    This is an edition of Will Knight’s AI Lab newsletter.

    Will Knight

    Source link

  • Watch This New Robot Relax in the Creepiest Way Possible

    The past decade has seen humanoid robot makers trying to make their creations more and more like humans. But here in 2024, we seem to be witnessing an odd shift in the dexterity of our bipedal robo-dreams. Put bluntly, robotics companies aren’t afraid of getting weird with the contortions of their latest offerings.

    China-based Unitree Robotics released a new video on Monday, available on YouTube, showing off its new G1, which retails for $16,000. It’s just the latest demonstration of a robot maneuvering in entirely un-human ways to accomplish its goals.

    The video includes lots of odd movements, showing how the robot can get up off the ground or greet people by pulling a sort of Exorcist move with its torso, rotating 180 degrees. And it all looks strikingly similar to the new version of Boston Dynamics’ Atlas, which has a novel way of getting on its feet.

    There’s also a demonstration of the Unitree robot getting kicked and pushed, presumably to show how well it can balance, even when it meets resistance. But we’d be lying if we said it didn’t make us uncomfortable. These are, after all, robots made to look like humans. And watching wanton cruelty, even against a machine that doesn’t have feelings, sets off something deep in our brain that says they shouldn’t be doing that.

    Again, these contortions all seem a bit new. A decade ago, Gizmodo attended the DARPA Robotics Challenge in Southern California, where teams largely competed by trying to make their robots as much like humans as possible. Companies like Boston Dynamics released new videos each year showing their robots walking, running, and then eventually doing backflips, all in the same way that talented humans might do it.

    But we seem to be on the cusp of a new era when it comes to robotics. Most robot makers have achieved basic human-style walking and running. The new frontier is taking that form factor and turning those robots into super-humans, whether by performing gymnastics or applying logic and reason to the world in front of them.

    We’re still a long way from AGI that’ll help robots get chores done, but if we continue on this trajectory, it seems unlikely robots will be doing the mundane tasks that humans don’t want to do. We allowed AI to skip all the boring stuff and jump right ahead to making music and writing poetry. It seems silly to think we’re building an army of butlers to serve humanity with that technology. No, we’re probably going to be letting the robots paint beautiful landscapes while we’re all stuck at our desks filling out Excel spreadsheets if the recent past is any guide.

    Matt Novak

    Source link

  • Humanoid robots are learning to fall well | TechCrunch

    The savvy marketers at Boston Dynamics produced two major robotics news cycles last week. The larger of the two was, naturally, the electric Atlas announcement. As I write this, the sub-40-second video is steadily approaching five million views. A day prior, the company tugged at the community’s heartstrings when it announced that the original hydraulic Atlas was being put out to pasture, a decade after its introduction.

    The accompanying video was a celebration of the older Atlas’ journey from DARPA research project to an impressively nimble bipedal ’bot. A minute in, however, the tone shifts. Ultimately, “Farewell to Atlas” is as much a celebration as it is a blooper reel. It’s a welcome reminder that for every time the robot sticks the landing on video there are dozens of slips, falls and sputters.

    I’ve long championed this sort of transparency. It’s the sort of thing I would like to see more of from the robotics world. Simply showcasing the highlight reel does a disservice to the effort that went into getting those shots. In many cases, we’re talking years of trial and error spent getting robots to look good on camera. When you only share the positive outcomes, you’re setting unrealistic expectations. Bipedal robots fall over. In that respect, at least, they’re just like us. As Agility put it recently, “Everyone falls sometimes, it’s how we get back up that defines us.” I would take that a step further, adding that learning how to fall well is equally important.

    The company’s newly appointed CTO, Pras Velagapudi, recently told me that seeing robots fall on the job at this stage is actually a good thing. “When a robot is actually out in the world doing real things, unexpected things are going to happen,” he notes. “You’re going to see some falls, but that’s part of learning to run a really long time in real-world environments. It’s expected, and it’s a sign that you’re not staging things.”

    A quick scan of Harvard’s rules for falling without injury reflects what we intuitively understand about falling as humans:

    1. Protect your head
    2. Use your weight to direct your fall
    3. Bend your knees
    4. Avoid taking other people with you

    As for robots, this IEEE Spectrum piece from last year is a great place to start.

    “We’re not afraid of a fall—we’re not treating the robots like they’re going to break all the time,” Boston Dynamics CTO Aaron Saunders told the publication last year. “Our robot falls a lot, and one of the things we decided a long time ago [is] that we needed to build robots that can fall without breaking. If you can go through that cycle of pushing your robot to failure, studying the failure, and fixing it, you can make progress to where it’s not falling. But if you build a machine or a control system or a culture around never falling, then you’ll never learn what you need to learn to make your robot not fall. We celebrate falls, even the falls that break the robot.”

    The subject of falling also came up when I spoke with Boston Dynamics CEO Robert Playter ahead of the electric Atlas’ launch. Notably, the short video begins with the robot in a prone position. The way the robot’s legs arc around is quite novel, allowing the system to stand up from a completely flat position. At first glance, it almost feels as though the company is showing off, using the flashy move simply as a method to showcase the extremely robust custom-built actuators.

    “There will be very practical uses for that,” Playter told me. “Robots are going to fall. You’d better be able to get up from prone.” He adds that the ability to get up from a prone position may also be useful for charging purposes.

    Much of what Boston Dynamics has learned about falling came from Spot. While there’s generally more stability in the quadrupedal form factor (as evidenced by decades of trying and failing to kick the robots over in videos), there are simply way more hours of Spot robots working in real-world conditions.

    “Spot’s walking something like 70,000 km a year on factory floors, doing about 100,000 inspections per month,” adds Playter. “They do fall, eventually. You have to be able to get back up. Hopefully you get your fall rate down — we have. I think we’re falling once every 100 to 200 km. The fall rate has really gotten small, but it does happen.”
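
    Taken at face value, and assuming the 70,000 km figure is fleet-wide annual mileage (the quote doesn’t say), those numbers work out to a few hundred falls a year across the whole Spot fleet:

    ```python
    # Back-of-the-envelope reading of Playter's figures. Assumes the 70,000 km is
    # fleet-wide annual mileage, which the quote doesn't make explicit.
    fleet_km_per_year = 70_000
    km_per_fall_low, km_per_fall_high = 100, 200

    falls_per_year_high = fleet_km_per_year / km_per_fall_low    # ~700 falls/year
    falls_per_year_low = fleet_km_per_year / km_per_fall_high    # ~350 falls/year
    print(f"Roughly {falls_per_year_low:.0f}-{falls_per_year_high:.0f} "
          "falls per year across the fleet")
    ```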

    Playter adds that the company has a long history of being “rough” on its robots. “They fall, and they’ve got to be able to survive. Fingers can’t fall off.”

    Watching the above Atlas outtakes, it’s hard not to project a bit of human empathy onto the ’bot. It really does appear to fall like a human, drawing its extremities as close to its body as possible, to protect them from further injury.

    When Agility added arms to Digit, back in 2019, it discussed the role they play in falling. “For us, arms are simultaneously a tool for moving through the world — think getting up after a fall, waving your arms for balance, or pushing open a door — while also being useful for manipulating or carrying objects,” co-founder Jonathan Hurst noted at the time.

    I spoke a bit to Agility about the topic at Modex earlier this year. Video of a Digit robot falling over on a convention floor a year prior had made the social media rounds. “With a 99% success rate over about 20 hours of live demos, Digit still took a couple of falls at ProMat,” Agility noted at the time. “We have no proof, but we think our sales team orchestrated it so they could talk about Digit’s quick-change limbs and durability.”

    As with the Atlas video, the company told me that something akin to a fetal position is useful in terms of protecting the robot’s legs and arms.

    The company has been using reinforcement learning to help fallen robots right themselves. Agility shut off Digit’s obstacle avoidance for the above video to force a fall. In the video, the robot uses its arms to mitigate the fall as much as possible. It then draws on that reinforcement learning to return to a familiar position from which it can stand up again with a robotic pushup.
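
    Agility hasn’t published the details of that get-up policy here, but reinforcement-learned recovery behaviors are usually shaped by a reward that favors ending up tall and upright while penalizing violent motion. The following is a hypothetical illustration of that idea, not Agility’s actual training code:

    ```python
    # Hypothetical reward shaping for a "get back up" policy, to illustrate how
    # reinforcement learning is typically pointed at this problem. The terms and
    # weights are invented for this sketch, not Agility's actual training setup.
    def getup_reward(torso_height_m: float,
                     torso_uprightness: float,
                     joint_torque_sq_sum: float,
                     dt: float = 0.02) -> float:
        """Reward rising tall and upright while penalizing violent effort.

        torso_uprightness is the cosine of the angle between the torso's up axis
        and world vertical: 1.0 means fully upright, -1.0 means upside down.
        """
        height_term = 2.0 * torso_height_m           # encourage getting off the ground
        upright_term = 1.0 * torso_uprightness       # encourage righting the torso
        effort_penalty = 1e-4 * joint_torque_sq_sum  # discourage thrashing
        alive_bonus = 0.1                            # small bonus per timestep
        return (height_term + upright_term - effort_penalty + alive_bonus) * dt
    ```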

    One of humanoid robots’ main selling points is their ability to slot into existing workflows — these factories and warehouses are known as “brownfield,” meaning they weren’t custom built for automation. In many existing cases of factory automation, errors mean the system effectively shuts down until a human intervenes.

    “Rescuing a humanoid robot is not going to be trivial,” says Playter, noting that these systems are heavy and can be difficult to manually right. “How are you going to do that if it can’t get itself off the ground?”

    If these systems are truly going to ensure uninterrupted automation, they’ll need to fall well and get right back up again.

    “Every time Digit falls, we learn something new,” adds Velagapudi. “When it comes to bipedal robotics, falling is a wonderful teacher.”

    Brian Heater

    Source link

  • The Atlas Robot Is Dead. Long Live the Atlas Robot

    You don’t need to have been petrified by Arnold Schwarzenegger’s Skynet-commissioned cyborg assassin in 1984’s The Terminator to fret that super-strong, all-terrain, bipedal humanoid robots sprinting up steps, pulling backflips, and righting themselves could be programmed to break our necks on sight. (And laser guns, never give them laser guns.)

    With the Old Atlas, we could comfort ourselves with the notion that clever editing meant Atlas wasn’t as self-righting over rough ground as the original viral videos portrayed. The pratfalls in the retirement video prove that hunch was correct. However, today’s video might well resurrect any robot overlord fears you may have since suppressed. This thing is scary, and not just because it has a ringlight for a face. (Who had “Robot YouTube influencer” on their 2024 bingo card?)

    It was nice knowing you, Old Atlas—you awesome, pratfalling, parkouring, metal man machine.

    Scary, too, if you’re an Amazon warehouse worker, because the New Atlas could do that job with one three-fingered hand tied behind its matte gray robotic back. More likely, however, is that Hyundai—which bought Boston Dynamics in 2020, valuing it at $1 billion—could soon set Atlas to work in its car factories. The “journey will start with Hyundai,” confirmed Boston Dynamics in a statement announcing the All New Atlas launch.

    Again, no details have been released, but we can surmise that the new Atlas will be given dull, repetitive tasks in the Korean company’s factories rather than, say, laser welding. (Remember, keep lasers away from robot butlers.)

    Hyundai isn’t the only company planning to use humanoid robots as workers. Beating Tesla’s still-in-development Optimus line of humanoid robots, Sanctuary AI of Canada announced on April 11 that it would be delivering a humanoid robot to Magna, the automotive manufacturer whose Austrian contract-assembly operation builds cars for Mercedes, Jaguar, and BMW.

    And Californian robotics startup Figure announced in February that it had raised $675 million from investors such as Nvidia, Microsoft, and Amazon to work with OpenAI on generative artificial intelligence for humanoid robots.

    A general-purpose humanoid robot that can learn on the fly. What could possibly go wrong with that?

    Carlton Reid

    Source link

  • Inversion Space will test its space-based delivery tech in October | TechCrunch

    Inversion Space is aptly named. The three-year-old startup’s primary concern is not getting things to space, but bringing them back — transforming the ultimate high ground into “a transportation layer for Earth.”

    The company’s plan — ultra-fast, on-demand deliveries to anywhere on Earth — sounds like pie in the sky, but it’s the sort of moonshot goal that could transform terrestrial cargo transportation. The aim is to send up fleets of earth-orbiting vehicles that will be able to shoot back to Earth at Mach speeds, slow with specially-made parachutes, and deliver cargo in minutes.

    Inversion has developed a pathfinder vehicle, called Ray, that’s a technical precursor to a larger platform that will debut in 2026. Ray will head to space this October, on SpaceX’s Transporter-12 ride share mission, paving the way for Inversion’s future plans on orbit (and back).

    Ray is small — about twice the diameter of a standard frisbee — and will spend anywhere from one to five weeks in space, depending on factors like weather and how the orbit aligns with the landing site, Inversion CEO Justin Fiaschetti explained in a recent interview.

    This first mission will have three phases. In the initial on-orbit phase, the spacecraft will power on, charge its batteries, and hopefully send telemetry to the ground. During the second phase, Ray will use its onboard propulsion system to slow down so it starts losing altitude and reentering the atmosphere. The reentry capsule will separate from the satellite bus (both designed in-house), with the latter structure burning up.

    The third and final phase will see Ray slow down using a supersonic drogue parachute, from a reentry speed of Mach 1.8 to Mach 0.2. The main parachute will then deploy, further slowing the capsule to a soft splashdown off the coast of California.
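
    For a rough sense of scale, converting those Mach numbers at an assumed sea-level speed of sound (the true value varies with altitude, so treat these as order-of-magnitude figures only) gives:

    ```python
    # Rough sense of scale for the drogue-parachute phase described above.
    # Assumes the sea-level speed of sound; the real value varies with altitude,
    # so treat these as order-of-magnitude figures only.
    SPEED_OF_SOUND_M_S = 343.0          # m/s at sea level, ~15 °C

    mach_in, mach_out = 1.8, 0.2
    v_in = mach_in * SPEED_OF_SOUND_M_S     # ~617 m/s entering the drogue phase
    v_out = mach_out * SPEED_OF_SOUND_M_S   # ~69 m/s at main-parachute deploy

    print(f"Drogue sheds roughly {v_in - v_out:.0f} m/s "
          f"({(v_in - v_out) * 3.6:.0f} km/h) before the main chute opens")
    ```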

    Impressively, the company has designed and built almost all of the Ray vehicle in-house, from the propulsion system to the structure to the parachutes. This last component is key: almost no space company designs parachutes themselves, and they’re incredibly challenging to engineer from the ground up. Inversion’s engineering team completed qualification testing of the deployment and parachute systems last year.

    Fiaschetti said strong vertical integration has helped the company move so quickly.

    “The purpose of our Ray vehicle is to develop technology for our next-gen vehicle. As such, we’ve built basically the entire vehicle in-house,” Fiaschetti said. “What we saw was that if we can build in-house now, do the hard thing first, that allows us to scale very quickly and meet our customer needs.”

    The reentry vehicle is totally passive — meaning it doesn’t have active controls to navigate its reentry to Earth — but the company’s larger next-gen vehicle, called Arc, will have “football field-level” accuracy.

    Inversion was founded by CEO Justin Fiaschetti and CTO Austin Briggs in 2021, but the two go back further: they met for the first time when they sat next to each other at a Boston University freshman matriculation ceremony. The pair eventually got jobs in southern California — Briggs, as a propulsion development engineer at ABL Space Systems, while Fiaschetti had brief engineering stints at Relativity and SpaceX — and they were actually roommates when they first floated the idea of developing technology to deliver cargo anywhere on Earth.

    The company went through Y Combinator in the summer of 2021 (it was one of our favorites from the cohort) and closed its $10 million seed round in November that same year.

    “We’ve been off to the races ever since,” Fiaschetti said. The company’s grown to 25 employees, who are based out of Torrance, California, where they have a 5,000-square-foot facility. The startup also owns five acres of land in the Mojave Desert, where it conducts engine testing. The scaling of the team and this first mission have been entirely financed by that round.

    The startup sees promising markets in both government agencies and private companies; both segments could use Inversion’s reusable platform as an on-orbit testbed, or as a delivery vehicle to a private commercial space station. Inversion is aiming to push both reusability and duration on orbit “to the maximum” to bring down costs and to support different mission profiles, Fiaschetti said.

    Inversion aims to fly the next-gen vehicle, Arc, for the first time in 2026. While the two cofounders declined to provide more details on the spacecraft, the company’s website says it will be capable of carrying over 150 kilograms of cargo, to provide “proliferated” delivery in space.

    “We are testing hardware consistently. We’re developing an infrastructure to be able to scale ourselves. Just as we decided to bring parachutes in-house because they are so directly applicable to what we’re building, it’s making those kinds of key decisions that allows us to move much faster, where another reentry vehicle would take much longer to develop.”

    Aria Alamalhodaei

    Source link

  • Elon Musk’s Latest Robot Video Looks Like It Was Shot on a Phone From 2002

    Elon Musk shared a new video on Saturday featuring Optimus, the robot Tesla has been working on since 2021. But anyone who tries to watch the video will immediately notice something weird. The clip of Optimus is so low quality and pixelated that it looks like it was shot on a flip phone from two decades ago.

    The new video was posted in the early morning hours of Saturday and has been viewed over 35 million times as of this writing. But the video appears to show Optimus just walking around without doing much of anything. That would have been quite impressive around 2013 or so, since it’s relatively difficult to get machines to walk like humans, but it’s not entirely clear why Musk would want the world to see Optimus walking like this.

    Update, 3:58 p.m. ET: At some point in the past 30 minutes or so Elon Musk’s video was swapped out to include a higher resolution version. Curiously, tweets that have been edited will typically show a note at the bottom that says a tweet has been edited and the time it occurred, but Musk’s tweet doesn’t indicate anything has been changed.

    The screenshots below show a side-by-side of what the tweet looked like before it was changed to include a higher resolution video.

    We’ve reached out to Twitter to see if Musk has special rules as owner of the social media platform and will update this post if we hear back. The rest of this post is being kept up for posterity.

    Incremental technical achievements aside, why does this video look so terrible? We weren’t the only ones to notice the bizarrely pixelated quality, as plenty of Musk fans made jokes about the blurriness.

    “Was this filmed with a potato?” one user quipped.

    “Same photographer?” another X user quipped with a photo of Bigfoot.

    Tesla didn’t immediately respond to questions emailed Saturday about the new Optimus video.

    Musk unveiled Optimus with an unconventional presentation in the summer of 2021 that really felt like the billionaire was desperate to hype virtually anything futuristic. Tesla’s AI Day that year didn’t feature a real robot, but rather someone dressed in a white and black suit moving around like a stereotypical robot before starting to dance a jig.

    Tesla’s robot has made progress since that first jokey unveiling, but Optimus still has quite a ways to go before it can catch up to the most cutting-edge robots of the 2020s. Atlas, a humanoid robot made by Boston Dynamics, learned how to pick itself up in 2016, stood on one leg that same year, did backflips in 2017, and achieved parkour-style jumping in 2018.

    And Atlas is still making progress in ways that rival how humans actually move. Last year, the Atlas robot showed off its ability to manipulate its environment to navigate complex worksites.

    Optimus has made improvements since it was first announced, but it has quite a ways to go if it wants to catch up to a company like Boston Dynamics. Arguably the most impressive thing we’ve seen Optimus do is fold laundry, but if you take a close look at the video, there was a person standing just off-screen mimicking the movements. And, frankly, that’s technology that’s been possible since the 1960s.

    Can Tesla develop a truly autonomous robot that can work as a household servant, just as Musk has promised? Only time will tell. But we’ve been waiting on that version of the future for over a century now. Robotics is hard. But we can certainly keep dreaming.

    Matt Novak

    Source link

  • Rhizome’s 7×7 Models a Deeper Collaboration Between Art and Science


    Ben Shirken (l.) and Reggie Watts at this year’s 7×7. Photo by Owley Studios, Courtesy of Rhizome.

    The intersection of art and technology gets a lot of press these days. In any given headline, it might be the “next frontier.” Or where cultural innovation happens. On some days, it’s spawning new job titles (e.g., curator of digital initiatives). And it always feels bright and shiny and optimistic and most importantly, new, even though artists have been experimenting with new technologies since the dawn of technology itself.

    And therein lies the challenge one faces when considering what exactly is happening at this much-publicized intersection. On one hand, the phrase is applied, seemingly broadly, to everything from NFTs and the ever-morphing works of Refik Anadol to the kinds of immersive installations pioneered by Sandro Kereselidze’s Artechouse. On the other, what reportedly exists at the intersection of art and technology seems strangely circumscribed. There’s computer-generated art and art inspired by technology at these crossroads but very little science.

    Or to put it another way, it seems there’s a lot more digital art being created at the intersection of the arts and technology than there are radical pairings of art and science. It may come down to people simply being more open to art borrowing from science and engineering than the reverse, even though there are plenty of notable examples of art inspiring scientific discovery. Niels Bohr in his development of the non-intuitive complementarity principle of quantum mechanics, for example, drew inspiration from Jean Metzinger’s cubist works.

    Claims that the dividing line between science and art is artificial come off as hyperbolic, but both scientists and artists are dreamers who channel their creative energies into untangling the world’s mysteries and building new things. It’s logical to consider what the intersection of art and technology could look like if the focus was on deep collaboration instead of just tapping into one or the other as a source of inspiration.

    Modeling a stronger synergy of art and science

    On a Saturday in late January, scientists, engineers, artists and the curious gathered at the New Museum in New York City for the relaunch of Seven on Seven (7×7), an event born out of a 2010 hackathon that paired seven engineers with seven artists to demonstrate what could happen when they worked together. The lineup of past participants is a fascinating who’s who of art and tech: Tumblr founder David Karp, Internet entrepreneur Jonah Peretti and Aza Raskin of the Center for Humane Technology… new media artist Tabita Rezaire, moving image artist Hito Steyerl and performance and installation artist Martine Syms. In 2015, Ai Weiwei collaborated with the hacker Jacob Appelbaum. This year, Boston Dynamics’ Spot took to the stage with dancer Mor Mendel as part of a collaboration between Boston Dynamics Director of Human-Robot Interaction David Robert and artist Miriam Simun with Hannah Rossi.

    Scientist–artist collaboration can take many forms: art-based communication can make science more accessible… new technologies become mediums in the hands of artists. What’s less common is what one Eos article calls “ArtScience,” which involves “artists and scientists working together in transdisciplinary ways to ask questions, design experiments and formulate knowledge.” 7×7, which is organized by the born-digital art and culture organization Rhizome, puts ArtScience on display by design. According to Xinran Yuan, this year’s producer and co-curator, it’s as important for the public to see collaboration between artists and scientists in action as it is to see the final output.

    Xin Liu, Christina Agapakis and Joshua Dunn. Photo by Owley Studios, Courtesy of Rhizome.

    That output was fascinating and surprisingly moving—Ginkgo Bioworks Head of Creative Christina Agapakis and artist Xin Liu’s yeast that lactates stood out—though I personally would have liked each duo’s presentation to be longer. Other 2024 7×7 participants included Replika AI CEO and Founder Eugenia Kuyda with artist and filmmaker Lynn Hershman Leeson; Nym Technologies CEO and Co-Founder Harry Halpin with artist Tomás Saraceno; Runway CEO and Co-Founder Cristóbal Valenzuela with comedian, writer, and actor Ana Fabrega; engineer and entrepreneur Alan Steremberg with artist Rindon Johnson; and quantum physicist Dr. Stephon Alexander working with comedian, artist and musician Reggie Watts.

    The focus of this year’s event was A.I.—specifically, the role it might play in our lives moving forward. It’s a blisteringly hot topic in the art world, given the emergence of tools that many artists argue are, at best, plagiarism machines and, at worst, livelihood killers.

    “I’m glad that I’m alive right now at this really precarious time in human history and to be involved with A.I.,” Watts said at the end of an engaging and pleasantly optimistic talk on the potential of artificial intelligence in not only music but also improvisational creation. He was, however, pragmatic about the role artists need to play in the development of the technology. “I think it’s important for artists and technologists, but especially artists, to get ahead of the curve… even if you arrive at ‘this isn’t for me,’ be there at the table to have an opinion so it can be steered in a direction that’s most useful.”

    Simun also feels it’s important to consider the question of what our future with A.I. will look like. “A question I asked during my performance is: What would happen if we defined intelligence less on how well someone/something knows, and rather on how well they react to unexpected, ambiguous, and uncertain situations?” she told Observer. “If this was the metric by which we defined intelligence, how might we build our robots and our A.I. differently?”

    Dancer Mor Mendel with Boston Dynamics’ Spot piloted by Hannah Rossi. Photo by Owley Studios, Courtesy of Rhizome.

    What scientists gain by working with artists

    We’re culturally comfortable with art informed by science but less so with science informed by art—and that means we may be missing out on opportunities for innovation. Matthias C. Rillig, professor of ecology at Freie Universität Berlin, has considered the question in his own lab, which has an established artist-in-residence program, and among the many benefits he has identified, idea generation stands out. “In conversations with the artist, unusual terms or connections appear,” he wrote last year. “One recent example of this was the term ‘soundscape stewardship’ that occurred in a conversation with Marcus Maeder,” which led to a paper in Science.

    Observer spoke with David Robert shortly after 7×7 about why Boston Dynamics collaborates with artists. “Putting the robot in other contexts, besides what it’s doing for its ‘job’ to earn its keep helps us figure out what’s possible,” he said. Working on projects with artists, he explained, can help engineers understand not only whether people like or don’t like a robot but also what aspects they like or dislike, which can suggest avenues for improvement.

    On the other hand, he added, “people project on them all the time and that’s a hard thing to design around.” Boston Dynamics has arguably done a top-notch job of getting people excited about robots, and at this point, it’s hard not to anthropomorphize Spot, which is bright yellow, moves like a happy dog and can be outfitted with what is functionally an arm but makes the robot look something like a friendly apatosaurus. It’s also currently painting with artist Agnieszka Pilat at this year’s National Gallery of Victoria (NGV) Triennial and has danced with BTS, walked the Coperni runway during Paris Fashion Week and given many kids and adults their first view of a real robot in action at Boston’s Museum of Science.

     

    On the other hand, there’s still a ways to go—even with the maximum encutification of robots (see, for example, the University of Manitoba’s Picassnake), people make jokes about killbots and the coming robot apocalypse. “It totally makes sense, given all the narratives that we’ve grown up with,” Robert said. “Most people haven’t had a direct experience with a robot.”

    The arts can change that. Simun’s 7×7 piece, as danced to the music of Igor Tkachenko and DJ Dede, offered an alternative to the imaginary robots we grew up with. “I hope the performance I created enabled the audience to gain a new and different perspective on the adoption of robots in our daily lives,” she said. “How are these robots being programmed to behave? To interact with us? To interact with their surroundings? … What kind of relationships with machines do we want, what will we get and what can we dream of?”

    In the end, the answer to those questions will be determined by the types of dreamers who took the stage at the New Museum—those for whom art is more than science’s ambassador and technology isn’t just another artist’s tool.


    Christa Terry

    Source link

  • Robotics Q&A with Boston Dynamics’ Aaron Saunders | TechCrunch

    For the next few weeks, TechCrunch’s robotics newsletter Actuator will be running Q&As with some of the top minds in robotics.

    Part 1: CMU’s Matthew Johnson-Roberson

    Part 2: Toyota Research Institute’s Max Bajracharya and Russ Tedrake

    Part 3: Meta’s Dhruv Batra

    This time it’s Boston Dynamics CTO Aaron Saunders. He has been with the company for more than 20 years, most recently serving as its vice president of engineering.

    What role(s) will generative AI play in the future of robotics?

    The current rate of change makes it hard to predict very far into the future. Foundation models represent a major shift in how the best machine learning models are created, and we are already seeing some impressive near-term accelerations in natural language interfaces. They offer opportunities to create conversational interfaces to our robots, improve the quality of existing computer vision functions and potentially enable new customer-facing capabilities such as visual question answering. Ultimately we feel these more scalable architectures and training strategies are likely to extend past language and vision into robotic planning and control. Being able to interpret the world around a robot will lead to a much richer understanding of how to interact with it. It’s a really exciting time to be a roboticist!

     What are your thoughts on the humanoid form factor?

    Humanoids aren’t necessarily the best form factor for all tasks. Take Stretch, for example — we originally generated interest in a box-moving robot from a video we shared of Atlas moving boxes. Just because humans can move boxes doesn’t mean we’re the best form factor to complete that task, and we ultimately designed a custom robot in Stretch that can move boxes more efficiently and effectively than a human. With that said, we see great potential in the long-term pursuit of general-purpose robotics, and the humanoid form factor is the most obvious match to a world built around our form. We have always been excited about the potential of humanoids and are working hard to close the technology gap. 

    Following manufacturing and warehouses, what is the next major category for robotics?

    Those two industries still stand out when you look at matching up customer needs with the state of the art in technology. As we fan out, I think we will move slowly from more deterministic environments to those with higher levels of uncertainty. Once we see broad adoption in automation-friendly industries like manufacturing and logistics, the next wave probably happens in areas like construction and healthcare. Sectors like these are compelling opportunities because they have large workforces and high demand for skilled labor, but the supply is not meeting the need. Combine that with the work environments, which sit between the highly structured industrial setting and the totally unstructured consumer market, and it could represent a natural next step along the path to general purpose.

    How far out are true general-purpose robots?

    There are many hard problems standing between today and truly general-purpose robots. Purpose-built robots have become a commodity in the industrial automation world, but we are just now seeing the emergence of multi-purpose robots. To be truly general purpose, robots will need to navigate unstructured environments and tackle problems they have not encountered. They will need to do this in a way that builds trust and delights the user. And they will have to deliver this value at a competitive price point. The good news is that we are seeing an exciting increase in critical mass and interest in the field. Our children are exposed to robotics early, and recent graduates are helping us drive a massive acceleration of technology. Today’s challenge of delivering value to industrial customers is paving the way toward tomorrow’s consumer opportunity and the general purpose future we all dream of. 

    Will home robots (beyond vacuums) take off in the next decade?

    We may see additional introduction of robots into the home in the next decade, but for very limited and specific tasks (like Roomba, we will find other clear value cases in our daily lives). We’re still more than a decade away from multifunctional in-home robots that deliver value to the broad consumer market. When would you pay as much for a robot as you would a car? When it achieves the same level of dependability and value you have come to take for granted in the amazing machines we use to transport us around the world.  

    What important robotics story/trend isn’t getting enough coverage?

    There is a lot of enthusiasm around AI and its potential to change all industries, including robotics. Although it has a clear role and may unlock domains that have been relatively static for decades, there is a lot more to a good robotic product than 1’s and 0’s. For AI to achieve the physical embodiment we need to interact with the world around us, we need to track progress in key technologies like computers, perception sensors, power sources and all the other bits that make up a full robotic system. The recent pivot in automotive towards electrification and Advanced Driver Assistance Systems (ADAS) is quickly transforming a massive supply chain. Progress in graphics cards, computers and increasingly sophisticated AI-enabled consumer electronics continues to drive value into adjacent supply chains. This massive snowball of technology, rarely in the spotlight, is one of the most exciting trends in robotics because it enables small innovative companies to stand on the backs of giants to create new and exciting products. 

    Brian Heater

    Source link

  • Los Angeles approves $278,000 robot police dog despite “grave concerns”


    A $278,000 robotic dog was approved by the Los Angeles City Council, despite some council members expressing “grave concerns” about the Boston Dynamics-manufactured device. 

    The “Quadruped Unmanned Ground Vehicle” was offered as a donation to the Los Angeles Police Department by the Los Angeles Police Foundation, according to CBS Los Angeles. If the council hadn’t accepted the donation, the offer would have expired, it reported. 

    On Tuesday, the L.A. City Council voted 8-4 in favor of accepting the robot dog, which is unarmed but has surveillance technology. Members of the public spoke at the meeting, with most urging the council against taking up the offer, citing fears that the machine could violate residents’ civil rights, CBS LA reported.

    Such robots have sparked debate and fascination, as videos of the four-legged machines — including one that shows a Boston Dynamics robot dog opening a door — have gone viral. But adding the mechanized creatures to policing efforts has also led to controversy, with concern that the machines could be used against people.

    Demo of Boston Dynamics’ Spot robot dog at the 2022 Mobile World Congress in Barcelona, Spain. The “Quadruped Unmanned Ground Vehicle” was offered as a donation to the LAPD by the Los Angeles Police Foundation. (Gustavo Valiente/Xinhua via Getty Images)


    Los Angeles councilwoman Eunisses Hernandez had previously said she had “grave concerns” about accepting the donation. She wasn’t present at the vote on Tuesday, according to ABC7.

    “We’ve seen these robot dogs crop up in other police departments around the country, including New York and San Francisco, where the community is similarly fighting back against bringing this kind of depersonalized, military-style technology to municipal police forces,” she said in a statement, according to CBS LA.



    The LA council’s approval comes with some strings attached. The city’s police department must issue quarterly reports about the robot’s usage and outcomes, as well as note any problems that arise. The council also has authority to suspend the robot’s use, CBS LA said. 

    New York City has recently added robot dogs to its police force, despite concerns from some residents that the machines could be problematic, especially in poor communities that have experienced aggressive policing. A robot dog was recently used in New York City to go inside a collapsed parking garage, with NYC Mayor Eric Adams praising the machine. 

    “Just one week ago, I was being criticized by all the folks in the bleachers, saying, ‘Well, why are you getting that dog?’ ” Adams said. “Now you see why I got the dog — to save lives.”

    Source link