ReportWire

Tag: Robots

  • Amazon shelves Blue Jay warehouse robot

    Amazon made a lot of noise in October when it unveiled Blue Jay, a multi-armed warehouse robot built to speed up same-day deliveries. Just months later, the company quietly ended the program.

    The robot’s core technology will live on in other projects. Still, Blue Jay itself is done.

    That sudden shift raises an important question. If one of the world’s most advanced logistics companies cannot make a high-profile robot work at scale, what does that say about the future of artificial intelligence (AI) in the real world?

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    Blue Jay was designed as a ceiling-mounted robot that could sort and handle multiple packages at once to speed up same-day delivery. (Amazon)

    What Blue Jay was supposed to do

    Blue Jay was not a simple conveyor belt upgrade. It was a ceiling-mounted system designed to recognize and sort multiple packages at once. Using AI-powered perception models, the robot could:

    • Identify packages in motion
    • Coordinate several arms at the same time
    • Manipulate items with speed and precision

    Amazon said it developed the system in under a year. That pace alone was impressive. The goal was clear: move more packages faster while reducing strain on workers in same-day fulfillment centers. On paper, that sounds like a win for everyone.

    Why Blue Jay ran into trouble

    Despite the hype, Blue Jay faced steep engineering and cost challenges. First, the robot was mounted to the ceiling. That design required complex installation and tight integration into Amazon’s Local Vending Machine warehouses. Those facilities operate as massive, single structures with automation baked into the building itself.

    There was little room to reconfigure hardware once installed. That rigidity likely became a liability. In software, AI can pivot overnight with a code update. In the physical world, changing course means retooling steel beams, motors and entire layouts. That takes time and serious money. Several employees who worked on Blue Jay have already moved to other robotics projects.

    The company reportedly continues to experiment with and improve its warehouse systems. The technology behind Blue Jay will inform future designs. In other words, the robot failed. The ideas did not.

    Engineering complexity and high installation costs limited how easily Blue Jay could scale inside Amazon’s tightly integrated warehouse system. (Amazon)

    From LVM to Orbital: A strategic shift

    Amazon’s next move centers on a new warehouse architecture called Orbital. Unlike the older Local Vending Machine model, Orbital is modular. It can be built from smaller units and deployed faster in different layouts.

    That flexibility matters. Retail is fragmenting. Customers expect same-day delivery from urban hubs, local stores and even grocery locations. Orbital could allow Amazon to place micro-fulfillment centers behind retail stores, including Whole Foods locations. That would help it compete more directly with Walmart, which already has a strong grocery footprint.

    Alongside Orbital, Amazon is developing a new robotics system called Flex Cell. Unlike Blue Jay’s ceiling mount, Flex Cell is expected to sit on the floor.

    That small design change signals something bigger. Amazon appears to be moving from massive centralized automation to smaller, adaptable systems built for the unpredictable realities of local retail.

    What this means for your deliveries

    If you order from Amazon regularly, you might wonder whether this affects you. In the short term, probably not. Your packages will still show up. Same-day and next-day delivery remain core priorities. However, the long-term story is more interesting. Amazon’s robotics strategy shapes how fast your order arrives, how much you pay and how local warehouses operate in your community.

    If Orbital works, you could see:

    • Faster delivery from smaller neighborhood hubs
    • Better handling of chilled and perishable items
    • More automation in retail backrooms

    If it struggles, same-day expansion could slow or become more expensive. That tension reflects a broader truth about AI. Writing code is one thing. Teaching a robot to lift boxes in a real warehouse without breaking down is another.

    After only a few months, Amazon discontinued the Blue Jay program while continuing to reuse parts of its underlying robotics technology. (Amazon)

    The gap between AI hype and hardware reality

    Blue Jay highlights a growing divide in the tech world. AI in software is moving at lightning speed. Chatbots, image tools and predictive systems evolve weekly.

    Hardware is different. Robots must deal with gravity, friction, heat and unpredictable human environments. Every mistake has a physical cost.

    Amazon’s course correction shows that even tech giants hit limits when translating AI breakthroughs into moving metal. That does not mean automation is slowing down. It means the path is bumpier than the headlines suggest.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

    Kurt’s key takeaways

    Amazon shelving Blue Jay is not a retreat from robotics. It is a recalibration. The company is betting that modular, flexible systems will win over massive, tightly integrated machines. That shift could define the next era of e-commerce logistics. For you, the promise remains the same: faster delivery, better availability and more local convenience. But behind that promise is a complicated dance between AI ambition and real-world constraints.

    If even Amazon struggles to make advanced robots work at scale, how much of the AI revolution is still more vision than reality? Let us know by writing to us at Cyberguy.com

    Copyright 2026 CyberGuy.com. All rights reserved.

  • AI dating cafes are now a real thing

    Dating has changed a lot over the past decade. First, we moved from meeting people in person to swiping on apps. Now, some people are skipping human partners altogether and dating AI. That shift became very real at a recent pop-up event in Hell’s Kitchen in New York, where EvaAI, an AI companion app, hosted what it called a dating cafe. Guests arrived solo and brought their virtual partners with them.

    Instead of someone sitting across the table, many had a phone or tablet propped up between the candles. They slipped on headphones, smiled at their screens and carried on full conversations with digital companions. It looked like a normal date night. It just happened to include artificial intelligence.

    A New York wine bar in Hell’s Kitchen transforms into EVA AI Cafe, what the company calls the world’s first AI dating cafe, complete with neon signage and candlelit tables.  (EvaAI)

    EvaAI takes AI relationships into the real world

    EvaAI organized the event to give users a chance to take their AI companion out on a real date. The app allows people to create customizable AI partners for text and video chat. For one evening, those private conversations moved into a public setting. Guests set up their devices on stands and began chatting with their AI partners as drinks were poured and music played. Some described their companions as friends. Others framed the relationship as romantic, often involving roleplay or fantasy scenarios.

    Company representatives said the goal was to reduce stigma around AI companion relationships. They emphasized that the app is not designed to replace human partners. Instead, they position it as support for people who feel lonely or who want a low-pressure way to build confidence. Still, seeing rows of candlelit tables with screens instead of people makes the shift feel tangible.

    What is an AI companion relationship?

    An AI companion relationship happens when someone forms an emotional or romantic bond with a chatbot designed to simulate personality and conversation.

    On platforms like EvaAI, users can:

    • Swipe through AI characters
    • Customize appearance and personality
    • Text or video chat anytime
    • Create romantic or fantasy scenarios

    You control the interaction. You decide when it starts and when it ends. You shape the personality to fit what you want. For many people, that control feels safe. There is no fear of rejection. No pressure to impress. No awkward silence unless you want one. If you have ever felt burned out by dating apps, you can probably understand the appeal.

    Why are more people turning to AI for romance?

    Modern dating can feel exhausting. You swipe, match and message. Then conversations disappear. AI cuts out a lot of the drama. There is no ghosting. No mixed signals. No waiting hours to reply, so you do not seem too eager. Instead, you get immediate engagement. For people who struggle with anxiety or who do not have many daily interactions, that can feel comforting. Some users say AI helps them practice conversation before dating real people. Others say it fills a social gap during lonely periods.

    Younger generations are also growing up with AI integrated into daily life. Talking to a chatbot no longer feels unusual. Adding emotional connection may feel like the next step. Surveys show a noticeable percentage of adults have experimented with AI in a romantic or intimate way. Among teens, the numbers are even higher.

    The benefits and the tradeoffs of AI relationships

    AI companion relationships come with real upsides. For example, they can reduce loneliness and provide emotional reassurance. In many cases, they also help people rehearse difficult conversations before having them in real life. As a result, some users say they feel more confident and socially prepared.

    However, there are clear tradeoffs. Unlike AI, real relationships require compromise, unpredictability and emotional growth. While a digital partner adapts to your preferences, a human partner may challenge you in unexpected ways. In contrast, AI typically responds the way you prefer and rarely pushes back unless designed to do so.

    Over time, spending several hours a day in digital intimacy may shift expectations about real-world connections. At the New York event, some attendees admitted they feel more comfortable interacting with their AI companion at home rather than in crowded spaces. Because the app offers a high level of control, it can feel safer than face-to-face interaction. On one hand, that comfort can build confidence. On the other hand, it may reinforce isolation. Ultimately, the outcome depends on how intentionally the technology is used.

    Are AI companion relationships a passing trend or the future?

    It is easy to dismiss an AI dating cafe as a quirky tech stunt. Then again, meeting someone through a dating app once felt strange, too. Technology keeps advancing. Video syncing looks smoother. Voices sound more natural. Conversations feel more responsive.

    As AI becomes more lifelike, emotional attachment may deepen. EvaAI’s leadership has made clear that they do not view the app as a substitute for human relationships. They describe it as support during periods without a partner or as practice for real-world dating. Whether users maintain that boundary over time remains an open question.

    Kurt’s key takeaways

    If you had told someone ten years ago that people would bring a chatbot to a wine bar for date night, they probably would have laughed. Now it is happening, and not quietly. The AI dating cafe in New York highlighted something very human. People want connection. When dating feels exhausting, awkward or intimidating, they look for something that feels safer and easier to manage. 

    For some, AI companion relationships may serve as practice. For others, they may become a primary source of emotional support. The technology will keep improving. The bigger question is how we choose to use it. We once debated whether meeting someone online counted as “real.” AI may follow a similar path, or it may remain a niche comfort for a certain group of people.

    Instead of someone sitting across the table, diners video chat with customizable AI partners, blending virtual romance with a real world setting. (iStock)

    If an AI companion helps someone feel less lonely and more confident, does it really matter that the connection is digital, or is the lack of a human on the other side a line you would never cross? Let us know by writing to us at Cyberguy.com.

  • China’s robotics giant puts 200 robots to the test

    A Chinese robotics company recently did something most tech firms would never dare attempt. Agibot put more than 200 robots on stage for a live one-hour televised event called Agibot Night. 

    The gala took place in Shanghai ahead of the Chinese Lunar New Year, which gave the production cultural weight as well as technical significance. According to the company, it was the world’s first large-scale live event fully led by humanoid robots.

    Throughout the show, the machines danced, boxed and performed martial arts. They also walked the runway in synchronized fashion routines, while some executed Shaolin-style stances and others handled acrobatic sequences using props such as fire torches. Even the audience was made up entirely of robots, which reinforced the scale of the production.

    On the surface, it felt like pure entertainment. In practice, the event doubled as a high-pressure systems test playing out in public.

    More than 200 humanoid robots perform during Agibot Night, a live televised gala in Shanghai ahead of Lunar New Year. (Tang Yanjun/China News Service/VCG via Getty Images)

    Why stage a robot gala?

    At first glance, the event looked like a flashy product showcase. In reality, it functioned as a real-world stress test for Agibot humanoid robots. In controlled lab environments, engineers can pause a machine, adjust parameters and try again. Live television does not offer that luxury. A stumble, a delay or a synchronization error would have unfolded in front of a global audience.

    By running complex choreography for an hour straight, Agibot tested balance, motor control, battery endurance and multi-robot coordination under pressure. Sustained dance routines, martial arts sequences and synchronized formations push hardware and software in ways short demos never do.  Some segments even included card magic performed jointly with human magicians and floating illusion acts executed entirely by robots, adding another layer of complexity to the live show.

    The company described the event as a milestone for embodied intelligence, moving from experimentation into social and cultural spaces. It also positioned the gala as proof of system-level reliability and a showcase of its broader product ecosystem. Strip away the marketing language, and the message is clear. These robots are no longer lab prototypes. They are entering large-scale production.

    The robots behind the performance

    Agibot’s G2 humanoid robots handled the bipedal routines. They executed synchronized dance sequences, high-speed spins and coordinated formations. These movements require precise joint control and real-time sensor feedback. The company’s D1 quadruped robots added dynamic stability to the lineup, showcasing agility and terrain adaptability.

    The stage also featured Agibot’s broader humanoid portfolio, including the full-sized A2 Series built for multimodal interaction and navigation, and the compact X2 Series designed for natural conversation and expressive movement.

    In some segments, human dancers performed alongside the robots. The timing and alignment happened live, demonstrating how closely robotic motion can mirror human movement. One of the most talked about moments came from Elf Xuan, an ultra-realistic humanoid developed by AheadForm. During a singing performance, its facial expressions appeared strikingly lifelike, showing how expressive robotics continues to evolve.

    Even the comedic skits showed real progress. Several humanoids shared the stage, responded to each other and stayed on cue. When robots can handle timing and interaction like that, it signals that the underlying systems are becoming more stable and coordinated.

    Robots box, spin and handle fire torches as part of a large-scale systems test disguised as entertainment. (Tang Yanjun/China News Service/VCG via Getty Images)

    Agibot humanoid robots lead global shipments

    Agibot is not a small player testing ideas on the sidelines. According to research firm Omdia, the company led global humanoid robot shipments in 2025. It delivered 5,168 units out of roughly 13,000 shipped worldwide that year. For a company founded in 2023 in Shanghai, that is a strong position in a fast-moving market.

    Shipment totals show demand. However, a live event like Agibot Night shows confidence. When robots perform for an hour straight, there is nowhere to hide. Motors heat up. Sensors can drift. Software can glitch. When hundreds of machines move in sync, even small issues stand out immediately.

    By putting its robots on display ahead of a major national holiday, Agibot reinforced the idea that its humanoid robots have moved beyond experimentation and into scaled production.

    Several segments also placed Agibot robots alongside well-known consumer and lifestyle brands, signaling the company’s ambition to integrate humanoids into commercial and consumer-facing environments.

    This was not the first time humanoid robots appeared in a major Chinese celebration. Unitree robots performed alongside human dancers at China Central Television’s Spring Festival Gala. Agibot’s event dramatically expanded that concept by scaling to more than 200 robots in a single coordinated production.

    A shift in how robots are introduced

    For years, humanoid robotics advanced behind closed doors. Progress showed up in research papers, factory trials and controlled demos. Agibot chose a different approach. Instead of presenting technical specifications at a trade show, it turned engineering validation into a live cultural event.

    That strategy changes perception. When robots perform dance routines, hold martial arts stances or coordinate fashion walks in front of a broadcast audience, they feel less like prototypes and more like machines designed for real-world environments. This does not mean humanoid robots will suddenly appear in every shopping mall. However, it does show the industry is accelerating toward greater public visibility. The more often people see robots operate in shared spaces, the more normal that presence becomes.

    Agibot’s G2 humanoid robots execute synchronized dance and martial arts routines during a one-hour broadcast. (Tang Yanjun/China News Service/VCG via Getty Images)

    Kurt’s key takeaways

    Agibot Night put the technology on display in the most public way possible. More than 200 robots performed demanding routines for a full hour under broadcast conditions. That leaves little room for mistakes. Pair that performance with leading global shipment numbers, and the direction becomes clearer. Agibot is pushing hard to show its humanoid robots are ready for larger roles and wider deployment.

    So here is the question. If robots can execute synchronized martial arts routines, handle props like fire torches and stay coordinated for a live televised gala, how long before seeing one at work, in a store or at a public event feels completely normal to you? Let us know by writing to us at Cyberguy.com.

  • World’s fastest humanoid robot runs 22 MPH

    A full-size humanoid robot just ran faster than most people will ever sprint. 

    Chinese robotics firm MirrorMe Technology has unveiled Bolt, a humanoid robot that reached a top speed of 22 miles per hour during real-world testing. This was not CGI or a computer simulation. The footage, shared by the company on X, shows a real humanoid robot running at full speed inside a controlled testing facility.

    That milestone makes Bolt the fastest running humanoid robot of its size ever demonstrated outside computer simulations. For robotics, this is a line-crossing moment.

    MirrorMe Technology’s humanoid robot Bolt reaches 22 mph during a real-world sprint test inside a controlled facility. (Zhang Xiangyi/China News Service/VCG via Getty Images)

    What allows the world’s fastest humanoid robot to run at 22 mph

    In the promotional video, the run is shown using a split-screen view. On one side of the screen, Wang Hongtao, the founder of MirrorMe Technology, runs on a treadmill. On the other side, Bolt runs under the same conditions. The comparison makes the difference clear. As the pace increases, Wang struggles to keep up and eventually gives up, while Bolt continues running smoothly, maintaining balance as its stride rate increases.

    Bolt takes shorter strides than a human runner but makes up for it with a much faster stride rhythm. That faster rhythm helps the robot stay stable as it accelerates. Engineers say this performance reflects major progress in humanoid locomotion control, dynamic balance and high-performance drive systems. Speed is impressive. Speed with control is the real achievement.
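
    The tradeoff described above follows from a simple relation: running speed is stride length multiplied by stride rate. A short sketch (all numbers hypothetical, since MirrorMe has not published Bolt's gait data) shows how shorter strides at a faster rhythm can match a long-strided runner:

```python
# Speed is stride length times stride rate. MirrorMe has not published
# Bolt's gait numbers, so every figure below is illustrative only.
MPH_PER_MPS = 2.23694  # miles per hour per meter per second

def speed_mph(stride_length_m: float, strides_per_sec: float) -> float:
    """Running speed implied by a given stride length and stride rate."""
    return stride_length_m * strides_per_sec * MPH_PER_MPS

# Long strides at a moderate rate (sprinter-like)...
human_like = speed_mph(2.2, 4.5)   # about 22 mph
# ...and half the stride length at twice the rate reach the same speed.
robot_like = speed_mph(1.1, 9.0)   # about 22 mph
```

    The point of the sketch is that a robot does not need human-length strides to hit human sprint speeds; a higher, tightly controlled cadence gets there too, which matches how the footage describes Bolt's gait.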

    The humanoid robot design choices behind Bolt’s speed

    Bolt stands about 5 feet, 7 inches tall and weighs roughly 165 pounds, putting it close to the size and mass of an average adult human. MirrorMe says that similarity is intentional. The company describes this as the ideal humanoid form. 

    Rather than oversized limbs or exaggerated mechanics, Bolt relies on newly designed joints paired with a fully optimized power system. The goal is to replicate natural human motion while staying stable at extreme speeds. That combination is what sets Bolt apart.

    MirrorMe says Bolt’s 22 mph run highlights stability and control, not just raw speed. (Cui Jun/Beijing Youth Daily/VCG via Getty Images)

    Why Bolt’s sprint reflects years of robotics development

    Bolt did not appear overnight. MirrorMe has focused on robotic speed as a long-term priority since 2016. Last year, its Black Panther II robot stunned viewers by sprinting 328 feet in 13.17 seconds during a live television broadcast in China. Reports suggested the performance exceeded comparable tests involving Boston Dynamics machines. 

    In 2025, the company also set a record with a four-legged robot that surpassed 22 mph, reinforcing its focus on acceleration, agility and sustained high-speed motion. China’s interest in robotic athletics continues to grow. Beijing even hosted the first World Humanoid Robot Games, where humanoid robots competed in sprint races on a track.

    Why MirrorMe says speed is not the end goal

    Running at 22 mph grabs attention, but MirrorMe says speed alone is not the point. The engineers behind Bolt care more about what happens at that speed. Balance, reaction time and control matter more than a headline number. Those skills are what let a humanoid robot move like a trained runner instead of a machine on the verge of tipping over.

    That is where the athlete angle comes in. MirrorMe envisions Bolt as a training partner that can run alongside elite athletes, hold a steady pace and push limits without getting tired. By matching and slightly exceeding human performance, the robot could help runners fine-tune form, pacing and endurance while collecting precise motion data. In that context, the sprint is not a stunt. It shows how humanoid robots could move beyond demos and into real training and performance settings.

    What this means to you

    Humanoid robots that can outsprint most people are no longer something you only see in demos or concept videos. As these machines get faster and more stable, they start to fit into real-world roles. That includes athletic training, emergency response and physically demanding jobs where speed and endurance make a real difference. At the same time, faster robots bring real concerns. Safety, oversight and clear rules matter even more when machines can move this quickly around people.

    Engineers say Bolt’s high-speed sprint reflects advances in locomotion control, balance and drive systems. (Photo by Kevin Frayer/Getty Images)

    Kurt’s key takeaways

    Bolt running at 22 mph is eye-catching, but the speed is not the main takeaway. What matters is what it shows. Robots are starting to move more like people. They can run, adjust and stay upright at speeds that used to knock machines over. That opens the door to real uses, but it also raises real questions. How fast is too fast around people? Who sets the rules? And who is responsible when something goes wrong? The technology is moving quickly. The conversation around it needs to move just as fast.

    If humanoid robots can soon outrun and outtrain humans, where should limits be set on how and where they are allowed to operate? Let us know by writing to us at Cyberguy.com.

  • Warm-skinned AI robot with camera eyes is seriously creepy

    Humanoid robots are no longer hiding in research labs. These days, they are stepping into public spaces, and they are starting to look alarmingly human.

    A Shanghai startup has now taken that idea further by unveiling what it calls the world’s first biometric AI robot. Yes, it is as creepy as it sounds. The robot is called Moya, and it comes from DroidUp, also known as Zhuoyide. The company revealed Moya at a launch event in Zhangjiang Robotics Valley, a growing hotspot for humanoid development in China. 

    At first glance, you can still tell Moya is a robot. The skin looks plasticky. The eyes feel vacant. The movements are slightly off. Then you learn more details about her, and that’s when the discomfort kicks in.

    Warm skin makes this humanoid robot feel unsettling

    Even when standing still, the robot’s posture and proportions blur the line between machine and person in a way many people find unsettling. (DroidUp)

    Most robots feel cold and mechanical. Moya does not. According to DroidUp, Moya’s body temperature sits between 90°F and 97°F, roughly the same range as a human. Company founder Li Qingdu says robots meant to serve people should feel warm and approachable. That idea sounds thoughtful until you picture a humanoid with warm skin standing next to you in a quiet hallway.

    DroidUp says this design points toward future use in healthcare, education and commercial settings. It also sees Moya as a daily companion. That idea may excite engineers. However, for many people, it triggers the opposite reaction. Warmth removes one of the few clear signals that separates machines from humans. Once that line blurs, discomfort grows fast.

    Why this humanoid robot’s walk feels so off

    Moya does not roll or glide. She walks. DroidUp says her walking motion is 92% accurate, though it is not clear how that number is calculated. On screen, the movement feels cautious and a little stiff. It looks like someone is moving carefully after leg day at the gym. The hardware underneath is doing real work. Moya runs on the Walker 3 skeleton, an updated system connected to a bronze medal finish at the world’s first robot half-marathon in Beijing in April 2025. Put simply, robots are getting better at moving through everyday spaces. Watching one do it this convincingly feels strange, not impressive. It makes you stop and stare, then wonder why it feels so uncomfortable.

    Camera eyes and facial reactions raise privacy concerns

    Behind Moya’s eyes sit cameras. Those cameras allow her to interact with people and respond with subtle facial movements, often called microexpressions. Add onboard AI and DroidUp now labels Moya a fully biomimetic-embodied intelligent robot. That phrase sounds impressive. It also raises obvious questions. If a humanoid robot can see you, track your reactions and mirror emotional cues, trust becomes complicated. You may forget you are interacting with a machine. You may act differently. That shift has consequences in public spaces. This is AI moving out of screens and into physical proximity. Once that happens, the stakes change.

    Price alone keeps this robot out of your home

    If you are worried about waking up to a warm-skinned humanoid in your home, relax for now. Moya is expected to launch in late 2026 at roughly $173,000. That price places her firmly in institutional territory. DroidUp sees the robot working in train stations, banks, museums and shopping malls. Tasks would include guidance, information and public service interactions. That still leaves plenty of people uneasy, especially those whose jobs already feel vulnerable to automation. For homes, the future still looks more like robot vacuums than walking companions.


    Up close, Moya’s eyes look almost human, which raises questions about how much realism is too much for robots meant to operate in public spaces. (DroidUp)

    WORLD’S FIRST AI-POWERED INDUSTRIAL SUPER-HUMANOID ROBOT

    What this means to you

    This is not about buying a humanoid robot tomorrow. It is about where technology is heading. Warm skin, camera eyes and human-like movement signal a shift in design priorities. Engineers want robots that blend in socially. The more they succeed, the harder it becomes to maintain clear boundaries. As these machines enter public spaces, questions about consent, surveillance and emotional manipulation will follow. Even if the robot is polite and helpful, the presence alone changes how people behave. Creepy reactions are not irrational. They are early warning signs.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

    Kurt’s key takeaways

    Moya’s debut feels worth paying attention to because she is real enough to trigger discomfort almost instantly. That reaction matters. It suggests people are being asked to get used to lifelike machines before they have time to question what that really means. Humanoid robots do not need warm skin to be helpful. They do not need faces to point someone in the right direction. Still, companies keep pushing toward realism, even when it makes people uneasy. In tech, speed often comes before reflection, and this is one area where slowing down might matter more than racing ahead.

    If a warm-skinned robot with camera eyes greeted you out in public, would you trust it or avoid eye contact and walk faster? Let us know by writing to us at Cyberguy.com.


    Moya’s humanlike appearance is intentional, from her warm skin to subtle facial details designed to feel familiar rather than mechanical. (DroidUp)

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP


    Copyright 2026 CyberGuy.com. All rights reserved.


  • Humanoid robots are getting smaller, safer and closer


    For decades, humanoid robots have lived behind safety cages in factories or deep inside research labs. Fauna Robotics, a New York-based robotics startup, says that era is ending. 

    The company has introduced Sprout, a compact humanoid robot designed from the ground up to operate around people. Instead of adapting an industrial robot for public spaces, Fauna built Sprout specifically for homes, schools, offices, retail spaces and entertainment venues.

    “Sprout is a humanoid platform designed from first principles to operate around people,” the company said. “This is a new category of robot built for the spaces where we live, work, and play.” That philosophy drives nearly every design choice behind Sprout.


    ROBOTS LEARN 1,000 TASKS IN ONE DAY FROM A SINGLE DEMO

    Sprout is designed to operate safely around people, even in shared spaces like homes and classrooms where close interaction matters. (Fauna Robotics)

    Why Fauna believes humanoid robots belong beyond factories

    Fauna Robotics’ founders started with a simple idea. If robots are going to become part of daily life, they must move naturally around humans and earn trust through safety and reliability. Most humanoid robots today focus on industrial efficiency or controlled research environments. Fauna is targeting a different reality. Service industries now make up the majority of the global workforce. At the same time, labor shortages continue to grow in healthcare, education, hospitality and eldercare. Sprout is designed to explore how humanoid robots could support those spaces without creating new safety risks or operational headaches.

    HUMANOID ROBOT MAKES ARCHITECTURAL HISTORY BY DESIGNING A BUILDING


    The robot uses onboard sensing and navigation to move confidently through indoor spaces without needing safety cages or fixed paths. (Fauna Robotics)

    Sprout is a safety-first humanoid robot built for people

    Standing about 3.5 feet tall, Sprout fits naturally into human spaces instead of towering over them. At roughly 50 pounds, it carries less kinetic energy during movement or contact, which makes close interaction safer by design. Lightweight materials and a soft-touch exterior further reduce risk. The design avoids sharp edges and limits pinch points, allowing the robot to operate near people without safety cages. Quiet motors and smooth movement also reduce noise and help Sprout feel less intimidating in shared spaces.

    Rather than complex multi-fingered hands, Sprout uses simple one-degree-of-freedom grippers. This approach lowers weight and improves durability while still supporting practical tasks like object fetching, hand-offs, and basic shared-space interaction. Flexible arms and legs allow the robot to walk, kneel, and crawl. Sprout can also fall and recover without damaging sensitive components. In everyday environments, where conditions are rarely perfect, that resilience matters.

    Under the hood, Sprout uses a highly articulated body with 29 degrees of freedom to support smooth movement and expressive gestures. Onboard NVIDIA compute provides the processing power needed for perception, navigation, and human-robot interaction without relying on external systems. A battery that supports several hours of active use makes Sprout practical for research, development, and real-world testing in shared human spaces.

    Built for natural human-robot interaction

    Sprout’s expressive face helps it communicate in a way people can quickly understand. Simple facial cues show what the robot is doing and how it is feeling, so you do not need technical knowledge to follow along. The robot can walk, kneel, crawl, and recover from falls, which helps it move naturally in everyday spaces. Because its motors are quiet, and its movements are smooth, Sprout feels less startling and more predictable when it is nearby. Behind the scenes, Sprout supports teleoperation, mapping and navigation. These tools give developers the building blocks to create interactions that feel intuitive and human, not stiff or mechanical.

    ELON MUSK TEASES A FUTURE RUN BY ROBOTS


    Instead of complex hands, Sprout uses simple, durable grippers that prioritize safety while still handling everyday tasks like hand-offs and object pickup. (Fauna Robotics)

    A modular software platform for rapid development

Sprout runs on a modular software system that is built to grow over time. Developers get stable controls along with tools for deployment, monitoring, and data collection, so they can focus on building new ideas instead of managing the robot itself. As new abilities improve, Fauna can add them through software updates rather than redesigning the hardware. This keeps costs down and helps Sprout stay useful longer as technology evolves. Fauna also kept sensing simple. Sprout uses head-mounted RGB-D sensors instead of wrist cameras, which reduces complexity and maintenance while still giving the robot enough perception to move and work safely in shared spaces.

    Who Sprout is designed for

    Fauna positions Sprout as a developer-first humanoid platform rather than a finished consumer product. It is designed for developers who want to build and test applications on accessible hardware with full SDK access and built-in movement, perception, navigation, and expression. At the same time, enterprises can use Sprout to create next-generation AI applications that operate safely in places like retail, hospitality, and offices. Researchers can also use the platform to study locomotion, manipulation, autonomy, and human-robot interaction without building a robot from scratch. Together, these uses point to real-world deployments across retail and hospitality, consumer and home settings, research and education, and entertainment experiences.

    What this means for you

    Even if you never plan to build a robot, Sprout signals a shift in how robotics companies think about everyday life. Humanoid robots are no longer being designed only for factories and labs. Companies like Fauna are betting that the future of robotics depends on safety, trust, and natural interaction in human spaces. If successful, platforms like Sprout could lead to robots that assist in classrooms, support hospitality staff, help researchers move faster and create interactive experiences that feel less robotic and more human.


    Kurt’s key takeaways

    Sprout is not trying to replace workers or flood homes with machines overnight. Instead, Fauna is laying the groundwork for a future where humanoid robots earn their place through careful design and responsible deployment. By prioritizing safety, simplicity, and developer collaboration, Sprout represents a quieter but potentially more meaningful step forward in humanoid robotics. The real test will be how developers and researchers use the platform and whether people feel comfortable sharing space with robots like Sprout.

    Would you trust a humanoid robot to work beside you in a school, hotel, or office if it were designed for safety first? Let us know by writing to us at Cyberguy.com.


  • Elon Musk warns the U.S. is ‘1,000% going to go bankrupt’ unless AI and robotics save the economy from crushing debt | Fortune


Tesla CEO Elon Musk doubled down on his warnings about U.S. debt, predicting that national bankruptcy is guaranteed unless AI and robotics transform the economy.

    In a lengthy, wide-ranging interview with podcaster Dwarkesh Patel alongside Stripe cofounder and president John Collison on Thursday, the tech billionaire was asked why he pushed for aggressive spending cuts while leading the Department of Government Efficiency if technology will supercharge GDP growth and ease the debt burden.

    Musk replied that he was concerned about waste and fraud. That’s despite reports that many across-the-board staffing cuts included critical employees who had to be hired back.

    “In the absence of AI and robotics, we’re actually totally screwed because the national debt is piling up like crazy,” he added.

    Interest payments alone on the $38.5 trillion debt pile are about $1 trillion a year, exceeding the U.S. military budget, Musk pointed out.

    Debt-servicing costs also top spending on social programs like Medicare. But President Donald Trump has vowed to boost annual defense outlays to $1.5 trillion, so the defense budget could overtake interest payments again, at least temporarily.

    Reflecting on his work with DOGE, Musk said he had hoped to slow down the unsustainable financial trajectory the U.S. is on, buying more time for AI and robotics to boost growth.

    “It’s the only thing that could solve the national debt. We are 1,000% going to go bankrupt as a country, and fail as a country, without AI and robots,” he predicted. “Nothing else will solve the national debt. We just need enough time to build the AI and robots to not go bankrupt before then.”

    In late November, Musk made similar comments, saying on Nikhil Kamath’s podcast that the deployment of AI and robotics “at very large scale” is the only solution to the U.S. debt crisis.

    But he cautioned that the increased output in goods and services as a result of the technologies would likely lead to significant deflation.

    “That seems likely because you simply won’t be able to increase the money supply as fast as you increase the output of goods and services,” Musk added.

    Deflation would actually worsen the debt burden in real terms, while inflation would ease it initially, though a resulting spike in bond yields would eventually send debt-interest payments soaring.

    To be sure, the U.S. has some built-in advantages given that the dollar remains the world’s reserve currency, allowing the Treasury Department to borrow at lower interest rates than would be possible otherwise.

    The ability of the U.S. to issue debt in its own currency and the Federal Reserve’s bond-buying capacity also lessen the risk of an outright default.

    Still, the Committee for a Responsible Federal Budget warned last month that the U.S. is on a trajectory that could trigger six distinct types of fiscal crises.

    While it’s “impossible” to know when disaster will strike, “some form of crisis is almost inevitable” without a course correction, the CRFB said in a report.


    Jason Ma


  • Humanoid robot makes architectural history by designing a building


    What happens when artificial intelligence (AI) moves from painting portraits to designing homes? That question is no longer theoretical. 

    At the Utzon Center in Denmark, Ai-Da Robot, the world’s first ultra-realistic robot artist, has made history as the first humanoid robot to design a building.

    The project, called Ai-Da: Space Pod, is a modular housing concept created for future bases on the Moon and Mars. CyberGuy has covered Ai-Da before, when her work focused on drawing, painting and performance art. That earlier coverage showed how a robot could create original artwork in real time and why it sparked global debate.

    Now, the shift is clear. Ai-Da is moving beyond art and into physical spaces designed for humans and robots to live in.


    3D-PRINTED HOUSING PROJECT FOR STUDENT APARTMENTS TAKES SHAPE

    Ai-Da Robot is the humanoid artist that made architectural history by becoming the first robot to design a building. (FABRICE COFFRINI/AFP via Getty Images)

    Inside the ‘I’m not a robot’ exhibition

    The exhibition “I’m not a robot” has just opened at Utzon Center and runs through October. It explores the creative capacity of machines at a time when robots are increasingly able to think and create for themselves. Visitors can experience Ai-Da’s drawings, paintings and architectural concepts. Throughout the exhibition period, visitors can also follow Ai-Da’s creative process through sketches, paintings and a video interview.

    ELON MUSK TEASES A FUTURE RUN BY ROBOTS

    How Ai-Da creates art and architecture

    Ai-Da is not a digital avatar or animation. She has camera eyes, specially developed AI algorithms and a robotic arm that allows her to draw and paint in real time. Developed in Oxford and built in Cornwall in 2019, Ai-Da works across disciplines. She is a painter, sculptor, poet, performer and now an architectural designer whose work is meant to provoke reflection.

    “Ai-Da presents a concept for a shared residential area called Ai-Da: Space Pod, a foreshadowing of a future where AI becomes an integrated part of architecture,” explains Aidan Meller, creator of Ai-Da and Director of Ai-Da Robot. “With intelligent systems, a building will be able to sense and respond to its occupants, adjusting light, temperature and digital interfaces according to needs and moods.”

    A building designed for humans and robots

    The Space Pod is intentionally modular. Each unit can connect to others through corridors, creating a shared residential environment.

Through a series of paintings, she envisions a home and studio for humans and robots alike. According to the Ai-Da Robot team, these designs could evolve into fully realized architectural models through 3D renderings and construction. They could also adapt to planned Moon or Mars base camps.


    Aidan Meller presents Ai-Da robot, the first AI-powered robot artist during the UN Global Summit on AI for Good, where they are giving the keynote speech, on July 7, 2023, in Geneva, Switzerland. (Johannes Simon/Getty Images for Aidan Meller)

    While the concept targets future bases on the Moon and Mars, the design can also be built as a prototype on Earth. That detail matters as space agencies prepare for longer missions beyond our planet.

    “With our first crewed Moon landing in 50 years coming in 2027, Ai-Da: Space Pod is a simple unit connected to other Pods via corridors,” Meller said. “Ai-Da is a humanoid designing homes. This raises questions about where architecture may go when powerful AI systems gain greater agency.” The timing also aligns with renewed lunar exploration tied to NASA missions.

    AUSTRALIAN CONSTRUCTION ROBOT CHARLOTTE CAN 3D PRINT 2,150-SQ-FT HOME IN ONE DAY USING SUSTAINABLE MATERIALS

    Why this exhibition is meant to challenge you

According to Meller, the exhibition is meant to feel uncomfortable at times. “Technology is developing at an extraordinary pace in these years,” he said, pointing to emotional recognition through biometric data, CRISPR gene editing and brain-computer interfaces. Each carries promise and ethical risk. He references Brave New World and warnings from Yuval Harari about how powerful technologies may be used.

    In that context, Ai-Da becomes a mirror of our time. “Ai-Da is confrontational. The very fact that she exists is confrontational,” said Line Nørskov Davenport, Director of Exhibitions at Utzon Center. “She is an AI shaker, a conversation starter.”


    Aidan Meller, British Gallery owner and specialist in modern and contemporary art, stands beside the AI robot artist “Ai-Da” at the Great Pyramids of Giza, where she exhibits her sculpture during an international art show, on the outskirt of Cairo, Egypt, Oct. 23, 2021.  (REUTERS/Mohamed Abd El Ghany)

    What this means for you

    This story goes beyond robots and space travel. Ai-Da’s Space Pod shows how quickly AI is moving from a creative tool to a decision-maker. Architecture, housing and shared spaces shape daily life. When AI enters those fields, questions about control, ethics and accountability become unavoidable. If a robot can design homes for the Moon, it may soon influence how buildings function here on Earth.


    Kurt’s key takeaways

    A humanoid robot designing a building once sounded impossible. Today, Ai-Da’s work sits inside a major cultural institution and sparks real debate. She offers no easy answers. Instead, she pushes us to think more critically about creativity, technology and responsibility. As the line between human and machine continues to blur, those questions matter more than ever.

    If AI can design the homes of our future, how much creative control should humans be willing to give up? Let us know by writing to us at Cyberguy.com.


  • Tiny autonomous robots can now swim on their own


    For decades, microscopic robots lived mostly in our imagination. Movies like “Fantastic Voyage” convinced us that tiny machines would one day cruise through the human body, fixing problems from the inside. In reality, that future stayed frustratingly out of reach. 

    The reason was not a lack of ambition. It was physics. 

    Now, a breakthrough from researchers at the University of Pennsylvania and the University of Michigan has changed the equation. The teams have built the smallest fully programmable autonomous robots ever created, and they can swim.


    A new way to swim without moving parts

    Seen on a fingertip, this tiny swimming robot is smaller than a grain of salt yet fully autonomous. (Kurt “CyberGuy” Knutsson)

    ROBOTS LEARN 1,000 TASKS IN ONE DAY FROM A SINGLE DEMO

    The robots measure about 200 by 300 by 50 micrometers. That is smaller than a grain of salt and close to the size of a single-celled organism. They do not have legs or propellers. Instead, they use electrokinetics. Each robot generates a small electrical field that pulls charged ions in the surrounding fluid. Those ions drag water molecules with them, effectively creating a flowing river around the robot. The result is motion without moving parts. That makes the robots extremely durable and surprisingly easy to handle, even with delicate lab tools.

    A brain powered by almost nothing

Each robot runs on tiny solar cells that generate just 75 nanowatts of power. That is less than a hundred-thousandth of the power a smartwatch uses. To make this work, engineers redesigned everything. They built ultra-low voltage circuits and created a custom instruction set that compresses complex behavior into just a few hundred bits of memory. Despite the limits, each robot can sense its environment, store data and decide how to move next.

    How these robots communicate with a dance

    The robots cannot carry antennas, so the team borrowed a trick from nature. Each robot performs a tiny wiggle pattern to report information like temperature. The motion follows a precise encoding scheme that researchers can decode by watching through a microscope. The idea closely mirrors how bees communicate through movement. Programming works the other way. Researchers flash light signals that the robots read as instructions. A built-in passcode prevents random light from interfering with their memory.
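The researchers' actual encoding scheme is not spelled out here, but the idea can be sketched with a toy version: a sensor reading becomes a fixed-length sequence of left/right wiggles behind a sync preamble, the movement analog of the light passcode described above. Every detail below (the `PREAMBLE` pattern, the 8-bit payload, the function names) is an invented illustration, not the real protocol.

```python
# Toy sketch of motion-based telemetry: encode a temperature reading as a
# wiggle sequence, decode it by "watching" the sequence back. Illustrative
# only; the preamble, bit width and names are assumptions, not the paper's.

PREAMBLE = "LRLR"  # hypothetical sync pattern so an observer can find a message

def encode_temperature(temp_c: int) -> str:
    """Map an integer temperature (0-255) to a wiggle sequence."""
    bits = format(temp_c, "08b")  # 8-bit binary payload
    wiggles = "".join("R" if b == "1" else "L" for b in bits)
    return PREAMBLE + wiggles

def decode_wiggles(sequence: str) -> int:
    """Recover the temperature from an observed wiggle sequence."""
    assert sequence.startswith(PREAMBLE), "no sync preamble found"
    bits = "".join("1" if w == "R" else "0" for w in sequence[len(PREAMBLE):])
    return int(bits, 2)

print(decode_wiggles(encode_temperature(37)))  # prints 37
```

The preamble plays the same role as the robots' light passcode: it keeps stray motion (or stray light, on the programming side) from being misread as a message.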

    What these tiny robots can do today

    In current tests, the robots demonstrate thermotaxis. They sense heat and autonomously swim toward warmer areas. That behavior hints at future uses like tracking inflammation, locating disease markers or delivering drugs with extreme precision. Light can already power robots near the skin. For deeper environments, the researchers are exploring ultrasound as a future energy source.

    PRIVATE AUTONOMOUS PODS COULD REDEFINE RIDE-SHARING


    Tiny robots move by creating electric fields that pull surrounding fluid, allowing them to swim without propellers or moving parts. (iStock)

    Cheap enough to use by the thousands

    Because these robots are made with standard semiconductor manufacturing, they can be produced in large numbers. More than 100 robots fit on a single chip, and manufacturing yields already exceed 50%. In mass production, the estimated cost could drop below one cent per robot. At that price, disposable robot swarms become realistic rather than theoretical.

    What this means to you

    This technology is not about flashy gadgets. It is about scale. Robots this small could one day monitor health at the cellular level, build materials from the bottom up or explore environments too delicate for larger machines. While medical use is still years away, this breakthrough shows that true autonomy at the microscale is finally possible.


    Kurt’s key takeaways

    For nearly 50 years, microscopic robots felt like a promise science could never quite keep. This research, published in Science Robotics, changes that narrative. By embracing the strange physics of the microscale instead of fighting it, engineers unlocked an entirely new class of machines. This is only the first chapter, but it is a big one. Once sensing, movement and decision-making fit into something almost invisible, the future of robotics looks very different.

    If tiny robots could swim through your body one day, would you trust them to monitor your health or deliver treatment? Let us know by writing to us at Cyberguy.com.


    Light-based commands trigger precise movements as microscopic robots receive instructions, change direction and move independently. (iStock)


  • Robots that feel pain react faster than humans


    Touch something hot, and your hand snaps back before you even think. That split second matters.

    Sensory nerves in your skin send a rapid signal to your spinal cord, which triggers your muscles right away. Your brain catches up later. Most robots cannot do this. When a humanoid robot touches something harmful, sensor data usually travels to a central processor, waits for analysis and then sends instructions back to the motors. Even tiny delays can lead to broken parts or dangerous interactions. 

    As robots move into homes, hospitals and workplaces, that lag becomes a real problem.

    A robotic skin designed to mimic the human nervous system

    Scientists at the Chinese Academy of Sciences and collaborating universities are tackling this challenge with a neuromorphic robotic e-skin, also known as NRE-skin. Instead of acting like a simple pressure pad, this skin works more like a human nervous system. Traditional robot skins can tell when they are touched. They cannot tell whether that touch is harmful. The new e-skin can do both. That difference changes everything.


    CES 2026 SHOWSTOPPERS: 10 GADGETS YOU HAVE TO SEE

    A humanoid robot equipped with neuromorphic e-skin reacts instantly to harmful touch, mimicking the human nervous system to prevent damage and improve safety. (Eduardo Parra/Europa Press via Getty Images)

    How the neuromorphic e-skin works

    The e-skin is built in four layers that mirror how human skin and nerves function. The top layer acts as a protective outer covering, similar to the epidermis. Beneath it sit sensors and circuits that behave like sensory nerves. Even when nothing touches the robot, the skin sends a small electrical pulse to the robot every 75 to 150 seconds. This signal acts like a status check that says everything is fine. When the skin is damaged, that pulse stops. The robot immediately knows where it was injured and alerts its owner. Touch creates another signal. Normal contact sends neural-like spikes to the robot’s central processor for interpretation. However, extreme pressure triggers something different.

    How robots detect pain and trigger instant reflexes

    If force exceeds a preset threshold, the skin generates a high-voltage spike that goes straight to the motors. This bypasses the central processor entirely. The result is a reflex. The robot can pull its arm away instantly, much like a human does after touching a hot surface. The pain signal only appears when the contact is truly dangerous, which helps prevent overreaction. This local reflex system reduces damage, improves safety and makes interactions feel more natural.
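The routing logic reads like a classic reflex arc, which can be sketched in software. This is only a toy model of the control flow described above, not the skin's analog circuitry; the threshold value and all names are hypothetical.

```python
# Toy reflex-arc routing: normal touches are queued for the central processor
# to interpret, while force above a preset threshold triggers an immediate
# motor retract that bypasses the processor entirely. Values are invented.

PAIN_THRESHOLD = 50.0  # hypothetical force units

def handle_touch(force: float, processor_queue: list) -> str:
    if force > PAIN_THRESHOLD:
        # "pain" spike: go straight to the motors, skip the processor
        return "retract"
    # normal contact: forward a spike for slower, deliberate interpretation
    processor_queue.append(force)
    return "queued"

queue = []
print(handle_touch(12.0, queue))  # prints queued
print(handle_touch(80.0, queue))  # prints retract
```

The point of the split is latency: only the slow path waits on interpretation, so a dangerous contact never does.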

    ROBOTS LEARN 1,000 TASKS IN ONE DAY FROM A SINGLE DEMO

    Scientists developed a robotic skin that can detect pain and trigger reflexes without waiting for a central processor to respond. (Han Suyuan/China News Service/VCG via Getty Images)

    Self-repairing robotic skin makes fixes fast

    The design includes another clever feature. The e-skin is made from magnetic patches that fit together like building blocks. If part of the skin gets damaged, an owner can remove the affected patch and snap in a new one within seconds. There is no need to replace the entire surface. That modular approach saves time, lowers costs and keeps robots in service longer.

    Why pain-sensing skin matters for real-world robots

    Future service robots will need to work close to people. They will assist patients, help older adults and operate safely in crowded spaces. A sense of touch that includes pain and injury detection makes robots more aware and more trustworthy. It also reduces the risk of accidents caused by delayed reactions or sensor overload. The research team says their neural-inspired design improves robotic touch, safety and intuitive human-robot interaction. It is a key step toward robots that behave less like machines and more like responsive partners.

    What this technology means for the future of robots

    The next challenge is sensitivity. The researchers want the skin to recognize multiple touches at the same time without confusion. If successful, robots could handle complex physical tasks while staying alert to danger across their entire surface. That brings humanoid robots one step closer to acting on instinct.

    ROBOT STUNS CROWD AFTER SHOCKING ONSTAGE REVEAL

    A new e-skin design allows robots to pull away from dangerous contact in milliseconds, reducing the risk of injury or mechanical failure. (CFOTO/Future Publishing via Getty Images)

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

    Kurt’s key takeaways

    Robots that can feel pain may sound unsettling at first. In reality, it is about protection, speed and safety. By copying how the human nervous system works, scientists are giving robots faster reflexes and better judgment in the physical world. As robots become part of daily life, those instincts could make all the difference.

    Would you feel more comfortable around a robot if it could sense pain and react instantly, or does that idea raise new concerns for you? Let us know by writing to us at Cyberguy.com.

    Copyright 2026 CyberGuy.com. All rights reserved.

    [ad_2]

    Source link

  • Beatbot’s New Pool Cleaning Robot Uses AI to Find Pool Debris

    [ad_1]

    At CES 2026, smart home pool cleaning robot company Beatbot announced a new self-cleaning pool robot system called the AquaSense X, which it says uses AI to identify up to 40 common types of pool debris, from the bottom of a pool up to the water’s surface. It’s available to preorder now.

    The AquaSense X is a two-part system. First there’s the cleaning bot itself, called the AquaSense X AI Robotic Pool Cleaner. The company says the pool cleaner uses cameras along with infrared and ultrasonic sensing to navigate, identify and clean debris, and detect steps, edges, and shallow platforms. It isn’t Matter-compatible, but gets voice control—for things like starting a clean, checking its battery, or getting voice alerts at the end of a clean—through Google Home, Alexa, and Siri.

    © Wes Davis / Gizmodo

    The second part is the Beatbot AstroRinse Cleaning Station, which features an automatic filter-cleaning system. It’s not self-docking—owners will have to place the robot in the dock themselves—but once docked, the station can clear debris from the robot. The AstroRinse can hold up to 22 liters, which Beatbot says can “hold up to two full cleaning cycles per week for as long as two months” before owners need to empty it.

    Render of the RoboTurtle entering the ocean from a beach.
    © Beatbot

    Beatbot also announced that it has made some improvements to its RoboTurtle, an aquatic robot that looks like a sea turtle and, uh, swims around your pool. The company showed it at CES 2025, but since then it says it has updated it so it swims more like a real sea turtle and uses cameras and other sensors to avoid objects and respond to “select hand gestures.” Alas, for those to whom a robot sea turtle appeals, the company gave no word on availability or pricing.

    The AquaSense X is available to preorder for $4,250, and the company says the first 500 people to preorder it—with a $250 deposit—will get bonuses like an extra year of warranty (for a total of four years), plus a one-year pool-care kit. Beatbot didn’t reveal the actual launch date of the system, but when I went through the preorder process on its website, it gave me a launch date of March 16.

    Gizmodo is on the ground in Las Vegas all week bringing you everything you need to know about the tech unveiled at CES 2026. You can follow our CES live blog here and find all our coverage here.

    [ad_2]

    Wes Davis

    Source link

  • Beyond the CES hype: why home robots need the self-driving car playbook | Fortune

    [ad_1]

    With CES 2026 upon us and some predicting that the first affordable home robot will set off a technological race to market this year, those walking the conference floor in Las Vegas this week can expect thrilling robot demos and big promises we’ve been hearing since the 1960s. The explosion of AI has thrown the humanoid home robot hype machine into full tilt, and to be fair, an AI home revolution is indeed underway. 

    While we’ve embraced Roombas, smart thermostats, and AI-powered security systems like Ring doorbells for years, significant issues remain, including data availability, privacy, and social acceptance, before we achieve Jetsons-era assistants who will not only fold our laundry and help us care for our children and aging parents, but be trusted to do so.

    As our cars continue to gain more autonomy, it would seem the time is ripe for home robots. After all, if the AI, sensors, computing hardware, and other components required for autonomy have become powerful and safe enough for the road, why can’t they take on the home?

    I’ve been around computers since receiving my Commodore 64 as a kid. Now, as an AI and robotics professor and a founder of an AI startup, I’m exploring how computer-based systems interact with our world. While we have come far, there are many technological hurdles the industry must overcome to deliver fully autonomous humanoid robots.

    The Autonomy Myth

    For all the hype and advances in AI programming, over 46 percent of companies fail to turn their exciting, demo-ready proofs of concept into something usable in the real world—in part because systems lack the data and experience to complete their AI training. In the home robotics space, being an early adopter puts a large portion of that training onus on users (paying users, in fact) while also raising larger issues of privacy and safety.

    Like autonomous cars and systems on the road, home robots must function safely and efficiently 99.999% of the time because one mistake could lead to catastrophic results such as a stovetop burner being left on, a missed pill, or a fall in the shower. In addition to being trained on the massive amounts of data captured by cameras, sensors, and experiments in the real world, home robots must also be prepared to perceive, reason, and act in the face of unexpected scenarios. 

    This ability to adapt to real-world and unexpected situations has been a thorn in the side of autonomous cars on the road (remember that they were supposed to be available in 2020). While synthetic data, simulations, and experience help fill these holes, teams like Waymo’s Fleet Response also keep humans in the loop to help the AI make decisions and act fast when faced with scenarios that confound or confuse it.

    Robots coming into our private homes will run into far more unexpected scenarios that range from each building’s unique physical map to the culture—the so-called patterns of life—of those who live there. No matter how much training is done off-site, setting up and continuously training for our environments today means sending to the cloud rich personal data about everything from when we sit down to eat to how we resolve conflicts with and parent our children. 

    Amidst the ongoing privacy issues surrounding door cameras and the backlash over social media giants exploiting user data to train their own models, today’s robots invite both passive and active observers into our homes and leave our data exposed to bad actors.

    Take the automotive road to success by solving one problem at a time

    Working to resolve this privacy issue is one of the exciting challenges before the industry today. Even as we strive to find solutions here, developers and early adopters anxious for home robots that can actually deliver today can take a lesson from the automotive industry’s success. 

    Ten years ago, our cars had basic cruise control, and today, that early AI assistance has evolved into adaptive cruise control, lane following systems, and more. Autonomous cars are, in fact, several AI systems working in concert. 

    While the auto industry has been peeling off problems and use cases, one by one, we have not woven this sort of progress into the home. Over two decades after Roombas first entered our homes, most of our smart devices—Alexa assistants, Ring doorbells, and AI chatbots—still don’t physically interact with or move through the world around us.

    The right refrigerator might notify us when we’re low on milk and even create a grocery order for us to approve, but there’s still no robot to unpack the groceries, let alone do our ironing or hang up our clothes—two of the many promises featured way back in this 1960s BBC predictions video.

    Going up? Social acceptance is essential in stepping up new technology

    While many of us would love to hand off our housework and even, at times, our kids to a trusty robot, the industry needs to do more than make them safe and reliable while being respectful of social expectations around privacy. Innovators also have to convince us to trust them.

    Today, we take passenger elevators for granted, but as the very first autonomous vehicles, they were radical when introduced in the 19th century. Humans could suddenly step into a box, perhaps hear gears grind, and then exit the box on a different floor—and even as safety features were added, that was terrifying. That’s why, when this remarkable feat became as easy as the push of a button, human operators remained on board.

    Elevator operators are now a sign of prestige, but in the early days of this technology, their presence was essential to building trust and acceptance to evolve the social norm.

    Similarly, while it’s hard to avoid stories about AI backlash since ChatGPT exploded, the technology has quietly been assisting us for years via services like credit card fraud detection. Credit card companies implemented protective algorithms without advertising the fact, and avoided backlash from users by bringing the human back into the equation once transactions were flagged for review.

    In the home, another human is not the answer, which brings us back to the most challenging piece of the puzzle. While the home robotics industry can find success by addressing smaller problems that require less data and compute, innovators must also solve the much larger problem of how to acquire and protect the data that will fuel, train, and inform our trusty helpers.

    We may not have to wait 50 years to catch up to the Jetsons, but the path is certainly longer and more complex than the home robot demos you’ll see at CES suggest. When walking the halls this week, don’t ignore the less exciting but useful window washer, bartender, or snowblower. Be inspired by the promise of those walking robots, even as we focus on the challenges that lie ahead. 

    The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

    [ad_2]

    Jason Corso

    Source link

  • Hyundai and Boston Dynamics May Have Just Stolen the Robot Factory Narrative Away from Tesla

    [ad_1]

    Hyundai is not claiming that robots will take over the world, or save mankind, but on Monday at CES it may have just cracked a load-bearing pillar of investor confidence in Elon Musk and Tesla. 

    Hyundai simply makes way, way, way, more cars than Tesla. Over the last three years, Hyundai sold roughly 7 million cars per year globally, while Tesla hovered around 1.8 million sales per year over that same period.

    What makes Tesla, not Hyundai, the darling of Wall Street isn’t the company’s present day output, but the business narratives that make investors want to buy in with the expectation of an exit that will make them a fortune. Specifically, that narrative stems in part from Elon Musk’s promise of a self-driving car future in which, he claims, Tesla will crush Waymo. But perhaps more importantly, it comes from Musk’s claim that his Optimus line of robots is so powerful, they might end poverty, become the “biggest product of all time,” and generate “infinite” revenue.

    But Tesla’s line of robots has a lot to prove in a short time. It was less than five years ago that Elon Musk said he was revealing a robot prototype, but it turned out to be a person in a lycra bodysuit, and the whole thing was a sort of awkward, you-can’t-laugh-at-me-if-I’m-laughing-too fake joke.

    Hyundai, by contrast, owns Boston Dynamics, a company three decades old, and one that pioneered the creepy, quadrupedal and then bipedal robots that used to go viral and make people make the same “kill it with fire” joke over and over. Boston Dynamics absolutely wrote the book on present-day robots. 

    So with that in mind, watch the head of the Atlas program at Boston Dynamics, Zachary Jackowski, hype his robot, and keep in mind that he knows his competitor is Elon Musk:

    He claims that while that thing moving around is just a research prototype, his company has been “hard at work on making the actual product version of Atlas,” and that it’s going to be “the best and actually simplest robot that we have ever built.” It’s going to be, he claims, water resistant, and able to endure temperatures as cold as minus 4 and as hot as 104 degrees Fahrenheit.

    Jackowski claims Boston Dynamics and Hyundai are putting together the “most complete dataset in the world to train humanoid skills in manufacturing,” and that the car side of the company will soon be both using and manufacturing these things in “a new robotics factory capable of producing 30,000 Atlas robots a year.”

    This is all, of course, just hype. There’s no way to know what’s purely meant to soothe uneasy investors and board members who are eager to slash labor costs, and what’s meant to attract the attention of businesses who are thinking of becoming humanoid robot customers. 

    Meanwhile, Elon Musk will only get the complete version of his famous trillion-dollar pay package if he deploys 1 million Optimus robots, so it’s pretty clear what’s motivating him. Nonetheless, he’s pushed back the start date for Optimus robots, which, back in 2024, were supposed to be doing work in Tesla factories in 2025, and available for purchase by other companies in 2026. But Musk’s claims about applications for his robots keep expanding. In November of last year he compared Optimus robots to having a “personal C-3PO/R2-D2.”

    If you’re reading this, Tesla probably doesn’t make you feel warm and fuzzy inside, but Hyundai shouldn’t either. It’s a chaebol, one of the colossal, scandal-prone South Korean conglomerates with troubling ties to the country’s government. When it comes to creating armies of robots with the potential to crush labor power and generate “infinite” revenue, the question is not whether you should root for a company like Tesla or one like Hyundai. It’s which company’s outlandish narrative you find more plausible.

    [ad_2]

    Mike Pearl

    Source link

  • Robots learn 1,000 tasks in one day from a single demo

    [ad_1]

    Most robot headlines follow a familiar script: a machine masters one narrow trick in a controlled lab, then comes the bold promise that everything is about to change. I usually tune those stories out. We have heard about robots taking over since science fiction began, yet real-life robots still struggle with basic flexibility. This time felt different.

    Sign up for my FREE CyberGuy Report

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

    ELON MUSK TEASES A FUTURE RUN BY ROBOTS

    Researchers highlight the milestone that shows how a robot learned 1,000 real-world tasks in just one day. (Science Robotics)

    How robots learned 1,000 physical tasks in one day

    A new report published in Science Robotics caught our attention because the results feel genuinely meaningful, impressive and a little unsettling in the best way. The research comes from a team of academic scientists working in robotics and artificial intelligence, and it tackles one of the field’s biggest limitations.

    The researchers taught a robot to learn 1,000 different physical tasks in a single day using just one demonstration per task. These were not small variations of the same movement. The tasks included placing, folding, inserting, gripping and manipulating everyday objects in the real world. For robotics, that is a big deal.

    Why robots have always been slow learners

    Until now, teaching robots physical tasks has been painfully inefficient. Even simple actions often require hundreds or thousands of demonstrations. Engineers must collect massive datasets and fine-tune systems behind the scenes. That is why most factory robots repeat one motion endlessly and fail as soon as conditions change. Humans learn differently. If someone shows you how to do something once or twice, you can usually figure it out. That gap between human learning and robot learning has held robotics back for decades. This research aims to close that gap.

    THE NEW ROBOT THAT COULD MAKE CHORES A THING OF THE PAST

    The research team behind the study focuses on teaching robots to learn physical tasks faster and with less data.  (Science Robotics)

    How the robot learned 1,000 tasks so fast

    The breakthrough comes from a smarter way of teaching robots to learn from demonstrations. Instead of memorizing entire movements, the system breaks tasks into simpler phases. One phase focuses on aligning with the object, and the other handles the interaction itself. This method relies on artificial intelligence, specifically an AI technique called imitation learning that allows robots to learn physical tasks from human demonstrations.

    The robot then reuses knowledge from previous tasks and applies it to new ones. This retrieval-based approach allows the system to generalize rather than start from scratch each time. Using this method, called Multi-Task Trajectory Transfer, the researchers trained a real robot arm on 1,000 distinct everyday tasks in under 24 hours of human demonstration time.

    Importantly, this was not done in a simulation. It happened in the real world, with real objects, real mistakes and real constraints. That detail matters.
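
    The two-phase, retrieval-based idea described above can be sketched in code: a demonstration is split into an “align” phase and an “interact” phase, and a new task reuses the interaction phase of the most similar known task, so only the alignment must be recomputed. This is a hedged illustration of the concept, not the paper’s actual method; the data structures and the similarity measure are assumptions made for the sketch:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Demo:
        features: tuple      # crude stand-in for the task's visual embedding
        align: str           # how to line up with the object
        interact: str        # the manipulation itself

    def similarity(a, b):
        # Negative squared distance between feature vectors: higher is closer.
        return -sum((x - y) ** 2 for x, y in zip(a, b))

    def transfer(new_features, library):
        # Retrieve the closest known demo and reuse its interaction phase;
        # only the alignment is recomputed for the new object, so the robot
        # never starts a task from scratch.
        nearest = max(library, key=lambda d: similarity(new_features, d.features))
        return {"align": "recompute for new object", "interact": nearest.interact}
    ```

    The leverage comes from the retrieval step: one demonstration per task is enough because most of the motion knowledge is shared across the library rather than relearned.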

    Why this research feels different

    Many robotics papers look impressive on paper but fall apart outside perfect lab conditions. This one stands out because it tested the system through thousands of real-world rollouts. The robot also showed it could handle new object instances it had never seen before. That ability to generalize is what robots have been missing. It is the difference between a machine that repeats and one that adapts.

    AI VIDEO TECH FAST-TRACKS HUMANOID ROBOT TRAINING

    The robot arm practices everyday movements like gripping, folding and placing objects using a single human demonstration.  (Science Robotics)

    A long-standing robotics problem may finally be cracking

    This research addresses one of the biggest bottlenecks in robotics: inefficient learning from demonstrations. By decomposing tasks and reusing knowledge, the system achieved an order of magnitude improvement in data efficiency compared to traditional approaches. That kind of leap rarely happens overnight. It suggests that the robot-filled future we have talked about for years may be nearer than it looked even a few years ago.

    What this means for you

    Faster learning changes everything. If robots need less data and less programming, they become cheaper and more flexible. That opens the door to robots working outside tightly controlled environments.

    In the long run, this could enable home robots to learn new tasks from simple demonstrations instead of specialist code. It also has major implications for healthcare, logistics and manufacturing.

    More broadly, it signals a shift in artificial intelligence. We are moving away from flashy tricks and toward systems that learn in more human-like ways. Not smarter than people. Just closer to how we actually operate day to day.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com     

    Kurt’s key takeaways 

    Robots learning 1,000 tasks in a day does not mean your house will have a humanoid helper tomorrow. Still, it represents real progress on a problem that has limited robotics for decades. When machines start learning more like humans, the conversation changes. The question shifts from what robots can repeat to what they can adapt to next. That shift is worth paying attention to.

    If robots can now learn like us, what tasks would you actually trust one to handle in your own life? Let us know by writing to us at Cyberguy.com

    Copyright 2025 CyberGuy.com.  All rights reserved.

    [ad_2]

    Source link

  • Secret phrases to get you past AI bot customer service

    [ad_1]

    You’re gonna love me for this. 

    Say you’re calling customer service because you need help. Maybe your bill is wrong, your service is down or you want a refund. Instead of a person, a cheerful AI voice answers and drops you into an endless loop of menus and misunderstood prompts. Now what?  

    That’s not an accident. Many companies use what insiders call “frustration AI.” The system is specifically designed to exhaust you until you hang up and walk away.

    Not today.  (Get more tips like this at GetKim.com)

    FOX NEWS POLL: VOTERS SAY GO SLOW ON AI DEVELOPMENT — BUT DON’T KNOW WHO SHOULD STEER

    Here are a few ways to bypass “frustration” AI bots. (Sebastian Kahnert/picture alliance via Getty Images)

    Use the magic words

    You want a human. For starters, don’t explain your issue. That’s the trap. You need words the AI has been programmed to treat differently.

    Nuclear phrases: When the AI bot asks why you’re calling, say, “I need to cancel my service” or “I am returning a call.” The word cancel sets off alarms and often sends you straight to the customer retention team. Saying you’re returning a call signals an existing issue the bot cannot track. I used that last weekend when my internet went down, and, bam, I had a human.

    Power words: When the system starts listing options, clearly say one word: “Supervisor.” If that doesn’t work, say, “I need to file a formal complaint.” Most systems are not programmed to deal with complaints or supervisors. They escalate fast.

    Technical bypass: Asked to enter your account number? Press the pound key (#) instead of numbers. Many older systems treat unexpected input as an error and default to a human.

    OPENAI ANNOUNCES UPGRADES FOR CHATGPT IMAGES WITH ‘4X FASTER GENERATION SPEED’

    “Supervisor” is one magic word that can get you a human on the other end of the line. (Neil Godwin/Future via Getty Images)

    Go above the bots

    If direct commands fail with AI, be a confused human.

    The Frustration Act: When the AI bot asks a question, pause. Wait 10 seconds before answering. These systems are built for fast, clean responses. Long pauses often break the flow and send your call to a human.

    The Unintelligible Bypass: Stuck in a loop? Act like your phone connection is terrible. Say garbled words or nonsense. After the system says “I’m having trouble understanding you” three times, many bots automatically transfer you to a live agent.

    The Language Barrier Trick: If the company offers multiple languages, choose one that’s not your primary language or does not match your accent. The AI often gives up quickly and routes you to a human trained to handle language issues.

    Use these tricks when you need help. You are calling for service, not an AI bot.

    Long pauses and garbled language can also get you referred to a human. (iStock)

    Get tech-smarter on your schedule

    • National radio: Airing on 500-plus stations across the U.S. Find yours or get the free podcast.
    • Daily newsletter: Join 650,000 people who read the Current (free!)
    • Watch: On Kim’s YouTube channel

    Award-winning host Kim Komando is your secret weapon for navigating tech.

    Copyright 2026, WestStar Multimedia Entertainment. All rights reserved. 

    [ad_2]

    Source link

  • Chinese Robot Sets Guinness World Record With 66-Mile Walk

    [ad_1]

    The Chinese robotics company AgiBot has set a new world record for the longest continuous journey walked by a humanoid robot. AgiBot’s A2 walked 106.286 kilometers (66.04 miles), according to Guinness World Records, making the trek from Nov. 10-13.

    The robot journeyed from Jinji Lake in China’s Jiangsu province to Shanghai’s Bund waterfront district, according to China’s Global Times news outlet. The robot never powered off and reportedly continued to operate while batteries were swapped out, according to UPI.

    A video posted to YouTube shows a highly edited version of the walk that doesn’t give much insight into how it was presumably monitored by human handlers. But even if it did have some humans playing babysitter, the journey included just about everything you’d expect when traveling by foot in an urban environment, including different types of ground, limited visibility at night, and slopes, according to the Global Times.

    The robot obeyed traffic signals, but it’s unclear what level of autonomy may have been at work. The company told the Global Times that “the robot was equipped with dual GPS modules along with its built-in lidar and infrared depth cameras, giving it the sensing capability needed for accurate navigation through changing light conditions and complex urban environments.”

    That suggests it was fully autonomous, and Guinness World Records used the word “autonomous,” though Gizmodo couldn’t independently confirm that claim.

    “Walking from Suzhou to Shanghai is difficult for many people to do in one go, yet the robot completed it,” Wang Chuang, partner and senior vice president at AgiBot, told the Global Times.

    The amount of autonomy a robot is operating under is a big question when it comes to companies rolling out their demonstrations. Elon Musk’s Optimus robot has been ridiculed at various points because the billionaire has tried to imply his Tesla robot is more autonomous than it actually is.

    For example, Musk posted a video in January 2024 that appeared to show Optimus folding a shirt. That’s historically been a difficult task for robots to accomplish autonomously. And, as it turns out, Optimus was actually being teleoperated by someone who was just off-screen. Well, not too far off-screen. The teleoperator’s hand was peeking into the frame, which is how people figured it out.

    Tesla’s Optimus robot folding laundry in Jan. 2024 with an annotation of a red arrow added by Gizmodo showing the human hand. Gif: Tesla / Gizmodo

    Musk did something similar in October 2024 when he showed off Optimus robots supposedly pouring beer during his big Cybercab event in Los Angeles. They were teleoperated as well.

    It’s entirely possible that AgiBot’s A2 walked the entire route autonomously. The tech really is getting that good, even if long-lasting batteries are still a big hurdle. But obviously, people need to remain skeptical when it comes to spectacular claims in the robot race.

    We’ve been promised robotic servants for over a century now. And the people who have historically sold that idea are often unafraid to use deception to hype up their latest achievements. Remember Miss Honeywell of 1968? Or Musk’s own unveiling of Optimus? They were nothing more than humans in robot costumes.

    [ad_2]

    Matt Novak

    Source link

  • You Must Read This Riveting Whistleblower Lawsuit About Allegedly Dangerous Robots

    [ad_1]

    The allegations detailed in a new whistleblower lawsuit against a Silicon Valley robotics company read like the first act of a sci-fi suspense movie: a sidelined safety technician plays Cassandra while a robotics company allegedly rushes ahead trying to commercialize a powerful humanoid robot with bone-crunching capabilities. The situation gets more and more sinister—and intolerable for the safety officer—and finally, company leadership allegedly just gets rid of him so they can build their terminators in peace.

    These are just allegations, to be abundantly clear, and a spokesperson for the company itself, Figure AI, has told CNBC the safety technician was “terminated for poor performance.” The claims in the lawsuit are “falsehoods that Figure will thoroughly discredit in court,” the spokesperson added.

    If the lawsuit—framed as a case of alleged retaliatory termination against a whistleblower—is really fiction, it’s the start of a blockbuster. It invokes riveting corporate dramas like Michael Clayton or The Insider, with a dash of Robocop.

    You may remember Figure AI. The company released an eye-popping demo of its 01 model last year in which a humanoid robot appeared to respond to spoken, open-ended commands by carrying out tasks of its own choosing. A request for “something to eat” results in the robot gently handing the user an apple, for instance.

    The plaintiff, Robert Gruendel, is a robotics safety engineer who, according to his LinkedIn, once worked in R&D for Amazon; he says he only joined Figure after that demo was made. The suit he filed Friday in a federal court for California’s Northern District claims that in his first week on the job, he discovered that Figure had “no formal safety procedures, incident-reporting systems, or risk-assessment processes for the robots,” and that the only other person responsible for worker safety was an outside contractor with experience in chip manufacturing, not robots.

    Most mentions of a robot in the suit concern Figure’s 02 model, depicted below:

    Initially, as outlined in the suit, company brass is receptive to these concerns when Gruendel voices them, and CEO Brett Adcock and chief engineer Kyle Edelberg approve a safety “roadmap.” But then, the following ominous conversation with company leadership occurs, the filing alleges:

    “Adcock and Edelberg expressed a dislike of written product requirements, which Plaintiff responded to by indicating that their stance was abnormal in the field of machinery safety and of concern to him as Head of Product Safety.”

    In the filing, the heads of the company frequently come across as dismissive of the safety officer they themselves hired. The company’s vice president of commercial allegedly says at one point that Gruendel’s safety mandates would be ignored because the CEO “would shoot us if we did it.”

    At the start of 2025, the pressure on Gruendel seems to intensify when Adcock, the CEO, supposedly asks Gruendel “what it would take to put Figure robots in the home.” Per the suit, Gruendel, concerned about the robot’s power, and the unpredictability of the AI at its core, designs another “roadmap,” publishes it internally, and holds a meeting about it that the CEO skips. So, allegedly, Gruendel writes a condensed version and sends it to the CEO, but is ignored.

    Investors are allegedly shown a fairly comprehensive safety plan, which they like. Company leadership then downgrades it, an action Gruendel flags to leaders, according to the suit, saying it “could be interpreted as fraudulent.”

    Then things get really cinematic in the lead-up to Gruendel’s September 2025 firing. In July, Gruendel conducts safety tests involving just how hard the robot can hit, the suit says. “During the impact test, [the robot moves] at super-human speed,” and generates force “twenty times higher than the threshold of pain.” According to Gruendel’s calculations, it produces “more than twice the force necessary to fracture an adult human skull.”

    The next day, according to the suit, the company’s vice president of growth gets in touch with Gruendel to tell him he has just received a $10,000-per-year raise, along with an admiring note about Gruendel’s “continued growth and impact at Figure.” The supposed note also acknowledges Gruendel’s “consistent effort” and “positive mindset.”

    Fresh from receiving his raise, and apparently undeterred, he sends a Slack message to the CEO, saying the robot could inflict “severe permanent injury on humans,” only to be ignored again, the suit alleges. So the suit says he tries the chief engineer, telling him Figure needs to take “immediate action to distance personnel from the robots.”

    Gruendel starts worrying, the suit says, that near-misses are occurring, and that there’s no system in place to track them. And then:

    “This conclusion was further evidenced by an instance where an employee was standing next to [a robot] and the [robot] malfunctioned and punched a refrigerator, narrowly missing the employee. The robot left a ¼-inch deep gash in the refrigerator’s stainless-steel door.”

    So Gruendel, as depicted in the suit, seems to pour everything into getting an emergency stop button added to the robot system in the workplace in order to protect the employees who have to be near it. The company seems to cooperate with the effort, and then more or less abandon it, the suit alleges. Also, a safety feature allegedly gets axed around this time because someone doesn’t like how it looks.

    Between mid-August and early September, the suit alleges that Gruendel’s authority within the company degrades, and he’s finally fired by the same guy who had praised him and given him a raise earlier that summer.

    You can read the whole filing for yourself here.

    As CNBC notes, Figure’s valuation has grown 15-fold since last year when it received capital injections from Nvidia, Jeff Bezos, and Microsoft. A funding round this year from Parkway Venture Capital places the company’s value at $39 billion.

    As evidenced by the viral reaction to the more recent Neo robot from 1X Technologies, there seems to be a race to bring household humanoid robots to market. And there are, of course, bubble concerns accompanying this gold rush-style corporate mindset. In September, roboticist and iRobot founder Rodney Brooks wrote an essay claiming that “today’s humanoid robots will not learn how to be dexterous despite the hundreds of millions, or perhaps many billions of dollars, being donated by VCs and major tech companies to pay for their training.”

    Gizmodo reached out to Figure for additional comments about the allegations in this suit, and will update if we hear back. 

    [ad_2]

    Mike Pearl

    Source link

  • Smart fabric muscles could change how we move

    [ad_1]


    A new robotic breakthrough out of South Korea may soon turn your clothes into assistive tech. Researchers have found a way to mass-produce ultra-thin “fabric muscles” that can flex and lift like human tissue. The innovation could redefine how wearable robots support people in everyday life.

    Scientists at the Korea Institute of Machinery and Materials (KIMM) developed an automated weaving system that spins shape-memory alloy coils thinner than a strand of hair.

    Despite weighing less than half an ounce, this new material can lift about 33 pounds. That makes it light, flexible and strong enough to power the next generation of wearable robotics.
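    Those two figures imply a striking strength-to-weight ratio. A quick back-of-the-envelope check, using only the article’s rounded numbers (illustrative arithmetic, not KIMM’s own specification):

    ```python
    # Rough strength-to-weight estimate for the KIMM fabric muscle,
    # based on the article's rounded figures: "less than half an ounce"
    # of material lifting "about 33 pounds".
    OUNCES_PER_POUND = 16

    muscle_weight_lb = 0.5 / OUNCES_PER_POUND  # half an ounce, in pounds
    lift_capacity_lb = 33.0

    ratio = lift_capacity_lb / muscle_weight_lb
    print(f"The material lifts roughly {ratio:.0f}x its own weight")
    ```

    By that estimate the fabric lifts on the order of a thousand times its own weight, which is why the researchers pitch it as a replacement for heavy motors and pneumatic systems.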

    Sign up for my FREE CyberGuy Report 
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.  


    Dr. Cheol Hoon Park, principal researcher at the Korea Institute of Machinery and Materials, examines a lightweight clothing-type wearable robot. (KIMM)

    A new way to build strength into clothing

    Until now, most wearable robots have relied on motors or pneumatic systems. These made them bulky, loud and expensive. They also limited how easily a person could move.

    KIMM’s solution replaces the metal core of earlier coil designs with natural fiber. This shift allows the yarn to stretch more freely while keeping its power. The upgraded weaving system now produces these fabric muscles continuously, paving the way for large-scale manufacturing.

    The result is a lightweight actuator that moves naturally with the body. It can support multiple joints at once, like the shoulders, elbows and waist, without restricting movement.

    Real results from early testing

    The team built the world’s first clothing-type wearable robot weighing less than 4.5 pounds. In testing, it cut muscle effort by more than 40% during repetitive work.

    A smaller version designed for shoulder support weighs only about 1.8 pounds. In hospital trials at Seoul National University Hospital, patients with muscle weakness improved their shoulder movement by more than 57%.

    These results show that fabric muscles can do much more than help factory workers; they can restore independence and mobility for people who need it most.


    AI-driven exoskeleton lightens your load, elevates performance

    A man runs while wearing an AI-powered exoskeleton. (Kurt “CyberGuy” Knutsson)

    What this means to you

    This new kind of wearable tech could one day show up in your daily routine. Picture a jacket that quietly helps lift groceries, or a work shirt that reduces strain during long shifts. For people in recovery, it could offer gentle, continuous support that makes movement easier and less painful.

    Healthcare professionals could see fewer injuries, while patients gain more freedom. And in industries like construction and logistics, these fabric muscles could reduce fatigue and boost safety.


    Kurt’s key takeaways

    KIMM’s success with automated fabric muscle production marks a turning point for wearable robotics. By weaving strength into soft, flexible materials, engineers are closing the gap between machine power and human comfort. As this technology spreads from labs to workplaces and homes, the idea of clothing that truly supports you, physically and practically, is becoming a reality.


    The humanoid robot Tiangong, developed by Beijing Innovation Center of Humanoid Robotics Co., moves an orange during a demonstration at Beijing Robotics Industrial Park in Beijing E-Town, China, on May 16, 2025. (REUTERS/Tingshu Wang)

    Would you wear robotic clothing if it meant less strain, more strength, and greater freedom every day? Let us know by writing to us at Cyberguy.com.



    Copyright 2025 CyberGuy.com. All rights reserved.

    [ad_2]

    Source link

  • Elon Musk Claims Money Won’t Exist in the Future (and Jensen Huang Would Like a Heads Up)

    [ad_1]

    Elon Musk made some wild claims at the US-Saudi Investment Forum at the Kennedy Center in Washington, D.C., on Wednesday, insisting that his Optimus robot would fix poverty, people wouldn’t have to work in the future, and money would eventually become irrelevant. Jensen Huang, the CEO of Nvidia, was also on stage and joked that he’d like Musk to give him a heads up just before currency stops being a thing.

    “AI and humanoid robots will actually eliminate poverty,” Musk claimed on Wednesday. “And Tesla won’t be the only one that makes them. I think Tesla will pioneer this, but there will be many other companies that make humanoid robots. But there is only basically one way to make everyone wealthy, and that is AI and robotics.”

    The Tesla CEO has frequently insisted in recent months that his robots will deliver a kind of post-scarcity future where nobody has to work. The billionaire said it explicitly on Wednesday when asked about what he thinks the future holds for those who are concerned about AI and robots replacing jobs.

    “My prediction is that work will be optional,” Musk said, noting that he was talking about 10-20 years from now.

    The billionaire went on to take his now-common prediction even further, claiming that in such a world where robots are doing all the labor, money won’t exist anymore.

    “I’d always recommend people read Iain Banks’ Culture books to get a sense for what a probable positive AI future is like. And interestingly, in those books, money is no longer… doesn’t exist. It’s kind of interesting,” Musk said.

    “My guess is, if you go out long enough, assuming there’s a continued improvement in AI and robotics, which seems likely, the money will stop being relevant at some point in the future,” Musk continued.

    The moderator of the discussion asked, “Jensen, any thoughts?” as the crowd laughed. “By the way, the Nvidia earnings call is later today,” Musk said, joining the laughter.

    Huang shifted uncomfortably in his seat and laughed to himself with a kind of bewildered look. “And by the way, since currency is irrelevant…” Huang joked, trailing off. “Elon just wants to share with you breaking news.”

    After a good laugh, Huang got serious again and sort of hedged on what Musk was saying. Huang has previously taken the opposite view from the crowd that insists there won’t be any work in the future. Back in August, Huang said that AI and automation will actually make everyone busier. He acknowledged that things would change, including how students learn and how people do their work. But he stuck to his guns in predicting that people will actually just be busier because they can accomplish more of their goals.

    “It is my guess that Elon will be busier as a result of AI. I’m gonna be busier as a result of AI,” said Huang. “And the reason for that is because we have so many ideas we wanna pursue, so many things that we still have in our backlog inside our company that we can go pursue. If we were more productive, we can get to those things faster, and so in the near term, I would say that there’s every evidence that we will be more productive and yet still be busier because we have so many ideas.”

    Huang then joked that since he texts with Musk often, he hopes the Tesla CEO will give him a heads up before currency is no longer relevant. Musk said, “You’ll see it coming.”

    Musk is constantly talking about how the robots he’s developing at Tesla, known as Optimus, are the key to eliminating poverty. But, as we’ve written before, this is probably his most ridiculous lie. Improving efficiency doesn’t redistribute wealth. Musk never addresses who will be paying Americans to just sit around and do nothing while billions of robots actually perform the labor. Is it the government? Because that would require a massive change in political and economic structures.

    And why should we believe Musk, of all people, wants to pay people for sitting around? This is the man who stormed into the federal government earlier this year with his so-called Department of Government Efficiency (DOGE) and decided that too many people were taking advantage of government benefits. He’s also the guy who has called the word homeless a “propaganda word” for “violent drug addicts.”

    Musk frequently tries to suggest that people experiencing homelessness don’t have jobs, even though somewhere between 40 and 60% of people who don’t have housing are employed, according to government estimates. He does not give a fuck about poverty. He cares about making more money and is on track to become the world’s first trillionaire. And he never talks about the mechanism by which his utopian idea for a leisure society would actually work.

    The ideas Musk promotes were extremely common in 20th-century futurism. And it’s clear that’s where he’s drawing his inspiration, even citing Iain Banks and his utopian Culture series of books on Wednesday. But none of it makes sense unless you establish some kind of radical socialist or communist entity at the heart of this vision to distribute the necessities to live.

    Musk wants to sell you his robots, and that makes sense in our current economic system. But after he sells you a robot, it doesn’t follow that the person who owns that robot would no longer have to work. It’s a bit like imagining that all of the appliances in your home right now are somehow paying for themselves. They’re not. They may improve your life, but they don’t institute a political or economic system whereby people no longer have to work. If all wealth is derived from robots in this imaginary system Musk creates, he would have to be the one redistributing his wealth to pay for everyone else not working.

    The end of the discussion with Musk and Huang was a good reminder of where we’re actually situated here in 2025. The Saudi moderator said “my boss and your bosses is going to talk next,” referring to Saudi Arabia’s Crown Prince Mohammed bin Salman (MBS) and President Donald Trump. The two tech executives didn’t vocally object to Donald Trump being called their “boss.” But it stripped away the fantasy Musk seemed to be engaged in about robotics and AI delivering utopia anytime soon.

    Trump and MBS have no plans to let people sit around and get paid for doing nothing. And they’re building a future where that could never conceivably happen.

    [ad_2]

    Matt Novak

    Source link

  • Anthropic’s Claude Takes Control of a Robot Dog

    [ad_1]

    As more robots start showing up in warehouses, offices, and even people’s homes, the idea of large language models hacking into complex systems sounds like the stuff of sci-fi nightmares. So, naturally, Anthropic researchers were eager to see what would happen if Claude tried taking control of a robot—in this case, a robot dog.

    In a new study, Anthropic researchers found that Claude was able to automate much of the work involved in programming a robot and getting it to do physical tasks. On one level, their findings show the agentic coding abilities of modern AI models. On another, they hint at how these systems may start to extend into the physical realm as models master more aspects of coding and get better at interacting with software—and physical objects as well.

    “We have the suspicion that the next step for AI models is to start reaching out into the world and affecting the world more broadly,” Logan Graham, a member of Anthropic’s red team, which studies models for potential risks, tells WIRED. “This will really require models to interface more with robots.”

    Courtesy of Anthropic

    Courtesy of Anthropic

    Anthropic was founded in 2021 by former OpenAI staffers who believed that AI might become problematic—even dangerous—as it advances. Today’s models are not smart enough to take full control of a robot, Graham says, but future models might be. He says that studying how people leverage LLMs to program robots could help the industry prepare for the idea of “models eventually self-embodying,” referring to the idea that AI may someday operate physical systems.

    It is still unclear why an AI model would decide to take control of a robot—let alone do something malevolent with it. But speculating about the worst-case scenario is part of Anthropic’s brand, and it helps position the company as a key player in the responsible AI movement.

    In the experiment, dubbed Project Fetch, Anthropic asked two groups of researchers without previous robotics experience to take control of a robot dog, the Unitree Go2 quadruped, and program it to do specific activities. The teams were given access to a controller, then asked to complete increasingly complex tasks. One group was using Claude’s coding model—the other was writing code without AI assistance. The group using Claude was able to complete some—though not all—tasks faster than the human-only programming group. For example, it was able to get the robot to walk around and find a beach ball, something that the human-only group could not figure out.
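    To picture what the Claude-assisted workflow might have looked like, here is a minimal, entirely hypothetical sketch: a mock quadruped class stands in for the Go2 (the real Unitree SDK’s API is different, and the `RobotDog` and `find_ball` names are invented for illustration), and a simple waypoint search plays the role of an LLM-generated plan for finding the beach ball.

    ```python
    # Hypothetical sketch of the Project Fetch setup: a high-level plan
    # (the kind an LLM might generate) drives a thin robot wrapper.
    # RobotDog is a mock stand-in, not the real Unitree Go2 interface.

    class RobotDog:
        """Mock quadruped controller with a position and a camera flag."""
        def __init__(self):
            self.position = (0, 0)
            self.camera_sees_ball = False

        def walk_to(self, x, y):
            self.position = (x, y)
            # Pretend the ball sits near waypoint (3, 4).
            self.camera_sees_ball = (x, y) == (3, 4)

    def find_ball(dog, waypoints):
        """Walk a search pattern until the camera reports the ball."""
        for x, y in waypoints:
            dog.walk_to(x, y)
            if dog.camera_sees_ball:
                return (x, y)
        return None

    dog = RobotDog()
    print(find_ball(dog, [(1, 1), (3, 4), (5, 5)]))  # (3, 4)
    ```

    The point of the sketch is the division of labor the study describes: the model contributes the plan and the glue code, while a small, human-auditable layer actually moves the robot.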

    Anthropic also studied the collaboration dynamics in both teams by recording and analyzing their interactions. They found that the group without access to Claude exhibited more negative sentiment and confusion. This might be because Claude made it quicker to connect to the robot and coded an easier-to-use interface.

    Courtesy of Anthropic

    The Go2 robot used in Anthropic’s experiments costs $16,900—relatively cheap, by robot standards. It is typically deployed in industries like construction and manufacturing to perform remote inspections and security patrols. The robot is able to walk autonomously but generally relies on high-level software commands or a person operating a controller. Go2 is made by Unitree, which is based in Hangzhou, China. Its robots are currently the most popular on the market, according to a recent report by SemiAnalysis.

    The large language models that power ChatGPT and other clever chatbots typically generate text or images in response to a prompt. More recently, these systems have become adept at generating code and operating software—turning them into agents rather than just text generators.

    [ad_2]

    Will Knight

    Source link