ReportWire

Tag: Robotics

  • Robots step in more than ever when credit traders go on vacation

    When US credit traders go to the beach, algorithms are increasingly stepping in for them, allowing transaction volume to stay relatively high even during a traditionally slow period. Algorithmic trading accounted for more than 40% of trading in the US high-grade market in August, a percentage that has climbed steadily since that month in 2020, […]

    Bloomberg News


  • Now summer’s over, here’s what to do with all those photos on your camera roll

    LONDON — The summer holidays are over, and all those great times you had on vacation have been memorialized in hundreds of smartphone photos. Now what?

    Some highlights — the prettiest sunset, the best group shots — have been posted on Instagram or shared in the family chat group. But many more will likely languish in your camera roll.

    Because smartphones come with ever-larger amounts of storage, it’s easy to take photos just because we can. But sorting through them all later is real work, so it’s just as easy to forget about them.

    Here are some quick and easy methods to help deal with the pictures (and videos) overwhelming your phone.

    Some shots will matter most — standout photos that you want to share with others, know you’ll look back on years later, or just want to keep for reference. Star or heart any photos in this category, which puts them into a favorites folder or album.

    After a recent extended family trip to Turkey, I ended up with quite a few photos of restaurant and cafe menus. They were shared in the family WhatsApp group to decide where or what to eat. But we’ll probably never visit those establishments again.

    It’s always good practice to cull photos that you just don’t need anymore, which could also include screenshots, pictures of receipts or duplicate images. But going through hundreds of trip photos can be tedious without some help. Fortunately, there are dozens of photo deletion and cleanup apps that aim to speed the job up.

    Many of them resemble dating apps like Tinder, because they let you swipe left to delete and swipe right to save a photo. Some are free, others need a subscription.

    It starts getting more challenging when you have images that are similar but not identical. Which one should you keep? Some apps have a comparison feature to help you decide.

    I tried a few of these apps and found that Clever Cleaner’s Similars function works well, helping me whittle down, for example, many of the various nearly identical shots I took of Istanbul’s skyline while crossing the Bosphorus Strait by ferry at dusk. The free app grouped similar pictures together and then suggested the best shot to keep. I found that I generally agreed with its suggestions.
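    Cleanup apps like these typically find near-duplicates with a form of perceptual hashing: downscale each image, hash its brightness pattern, and group shots whose hashes nearly match. Below is a minimal sketch of that idea in plain Python (an illustration of the general technique, not Clever Cleaner’s actual method), using toy 4x4 grayscale grids in place of real photos:

```python
# Minimal average-hash sketch: "images" here are toy 4x4 grayscale grids
# (values 0-255). Real apps decode and downscale actual photos first;
# the grouping idea is the same.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def group_similar(images, threshold=2):
    """Greedily group images whose hashes differ by at most `threshold` bits."""
    groups = []
    for name, pixels in images.items():
        h = average_hash(pixels)
        for group in groups:
            if hamming(h, group["hash"]) <= threshold:
                group["names"].append(name)
                break
        else:
            groups.append({"hash": h, "names": [name]})
    return [g["names"] for g in groups]

# Three "shots": two nearly identical skylines and one very different frame.
skyline_a = [[200, 210, 90, 80], [190, 205, 85, 75],
             [60, 65, 50, 45], [55, 60, 40, 35]]
skyline_b = [[198, 212, 92, 78], [188, 207, 83, 77],
             [62, 63, 52, 44], [57, 58, 42, 33]]
menu_shot = [[30, 240, 30, 240], [240, 30, 240, 30],
             [30, 240, 30, 240], [240, 30, 240, 30]]

groups = group_similar({"skyline_a": skyline_a,
                        "skyline_b": skyline_b,
                        "menu": menu_shot})
print(groups)  # the two skylines land in one group, the menu in another
```

    A real implementation would decode and downscale actual image files (for example, with a library like Pillow) before hashing, but the grouping logic would be the same.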

    Even if you’ve managed to sort through your camera roll, it will probably still be a jumble of images stretching back in an unbroken stream.

    So group photos into albums organized by theme. Android and iPhone users can do this in the Photos app on their respective operating systems. Select all the photos from a trip and add them to a new album.

    Planning ahead will make this process easier. Create an album when you start your trip, then save the photos there as you take them.

    You can also create a shared album on Android or iPhone, which lets other people view or comment on photos or add their own.

    If you don’t want to set up a shared album, Google Photos lets Android users create links so others can simply view an album or individual photo. It’s not as easy on iOS, which only lets users export an album’s photos. You can share individual photos with an iCloud link, but it expires after 30 days.

    Now that you’ve edited and curated your holiday pictures, consider taking an analog approach to showing them off.

    Print them out and put them in an album that people can flip — not scroll — through. Or blow up the most eye-catching shot to frame and hang it as wall art.

    Google Photos offers a photo book printing service that uses artificial intelligence to curate photos into generic themes, like Spring 2025, Memories, or They Grow Up So Fast, and generate basic no-frills layouts.

    Other services, such as Mixbook and Shutterfly, can automatically generate more elaborately designed photo books. Mixbook can even provide AI-generated photo captions, though the results might be, well, mixed.

    ____

    AP Technology Writer Barbara Ortutay in San Francisco contributed to this report.

    ____

    Is there a tech topic that you think needs explaining? Write to us at onetechtip@ap.org with your suggestions for future editions of One Tech Tip.


  • Elon Musk in line for $1 trillion pay package if Tesla hits aggressive goals

    Tesla CEO Elon Musk could be in line for a payout of $1 trillion if his electric car company meets a series of extremely aggressive targets over the next 10 years, according to documents released by the company.

    Tesla, which is leaning heavily into robotics and AI, said in a regulatory filing on Friday that the package has a dozen share tranches that include awards for Musk if targets, ranging from car production to the total value of the company, are met over that time period.

    Very early in the plan, Tesla would have to reach a market valuation of $2 trillion and deliver 20 million vehicles. Tesla delivered fewer than 2 million vehicles in 2024.

    That milestone would also require a million robotaxis in commercial operation and the delivery of 1 million artificial intelligence bots.

    Musk needs to remain with Tesla for at least seven and a half years to cash out on any stock, and 10 years to earn the full amount.
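    As a rough sketch of how such a milestone-gated structure works, the tranches can be modeled as plain data and checked against current figures. The first entry below mirrors the $2 trillion valuation and 20-million-delivery milestone described above; the second tranche’s numbers and all names are invented placeholders, not terms from Tesla’s actual filing:

```python
# Illustrative model of milestone-gated pay tranches. Only the first
# tranche's thresholds come from the article; the second is a made-up
# placeholder, and the real plan has a dozen tranches with more criteria.

TRANCHES = [
    {"name": "tranche_1", "market_cap": 2.0e12, "vehicles_delivered": 20_000_000},
    {"name": "tranche_2", "market_cap": 2.5e12, "vehicles_delivered": 25_000_000},
]

def vested(tranches, market_cap, vehicles_delivered):
    """Return the names of tranches whose every milestone has been met."""
    return [t["name"] for t in tranches
            if market_cap >= t["market_cap"]
            and vehicles_delivered >= t["vehicles_delivered"]]

# With a hypothetical $2.1 trillion valuation and 21 million deliveries,
# only the first tranche's milestones are satisfied.
print(vested(TRANCHES, 2.1e12, 21_000_000))
```

    A tranche vests only when every one of its conditions is met, which is why each milestone in the filing pairs a valuation target with operational targets.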

    Musk has been one of the richest people in the world for several years.

    Musk would also receive more voting power over Tesla under the proposed plan. The EV company is set to hold its annual shareholders meeting on Nov. 6. Tesla’s last shareholders meeting was on June 13 of last year, where investors voted to restore Musk’s record $44.9 billion pay package that was thrown out by a Delaware judge earlier that year.

    A condition of the 11th and 12th tranches of the plan includes Musk coming up with a framework for someone to succeed him as CEO.

    The goals set out for Musk and Tesla are extremely ambitious given recent tumult at the Texas company.

    Tesla shares have plunged 25% this year largely due to blowback over Musk’s affiliation with President Donald Trump. But Tesla also faces intensifying competition from the big Detroit automakers and particularly from China.

    Tesla sales have fallen precipitously in Europe after Musk aligned with a far-right political party in Germany.

    Sales plunged 40% in July in the 27 European Union countries compared with a year earlier, even as overall electric vehicle sales soared, according to the European Automobile Manufacturers’ Association. Meanwhile, sales by Chinese rival BYD continued to climb fast, grabbing a 1.1% share of all car sales in the month versus Tesla’s 0.7%.

    In its most recent quarter, Tesla reported that profits plunged from $1.39 billion to $409 million. Revenue also fell, and the company came up short of even Wall Street’s lowered expectations.

    Investors have grown increasingly worried about the company’s trajectory after Musk spent so much time in Washington this year, becoming one of the most prominent figures in the Trump administration’s bid to slash the size of the U.S. government.

    Last month Tesla said that it gave Musk a stock grant of $29 billion as a reward for years of “transformative and unprecedented” growth despite a recent foray into right-wing politics that has hurt its sales, profits and its stock price.

    The award arrived eight months after a judge revoked Musk’s 2018 pay package for a second time, something the company noted in August. Tesla has appealed the ruling.

    Tesla said at the time that the grant was a “first step, good faith” way of retaining Musk and keeping him focused, citing his leadership of SpaceX, xAI and other companies. Musk said recently that he needed more shares and control so he couldn’t be ousted by shareholder activists.

    Tesla’s stock rose nearly 2% in premarket trading.


  • Salesforce CEO Marc Benioff hails Tesla’s Optimus robot as a ‘productivity game-changer,’ in video showing what it can and can’t do

    Salesforce CEO Marc Benioff on Wednesday delivered a strong endorsement of Tesla’s Optimus humanoid robot, with a tweet and accompanying video declaring it the “dawn of the physical Agentforce revolution” and a “productivity game-changer” that could transform how businesses operate. His enthusiastic post endorses Tesla’s strategy and Elon Musk’s bold robotics ambitions.

    “Elon’s Tesla Optimus is here! Dawn of the physical Agentforce revolution, tackling human work for $200K–$500K. Productivity game-changer!” Benioff wrote on X, adding a personal note: “Congrats @elonmusk, and thank you for always being so kind to me!” The tweet was accompanied by a video showing Benioff interacting directly with one of Tesla’s humanoid robots at the company’s California facility.

    The casual exchange captured in the video offers a glimpse into the current capabilities, and limitations, of Tesla’s Optimus. When Benioff asks, “Hey, Optimus. What are you doing there?” the robot responds, “Just chilling, ready to help.” The conversation continues with Benioff requesting directions to find a Coke, to which Optimus replies, “Sorry, I don’t have real-time info, but I can take you to the kitchen if you want to check for a Coke there.” As they prepare to walk together, someone off-camera notes, “We need to give it a bit more room. Right now, it’s kind of paranoid about space. And it’ll be able to walk a lot faster, too.”


    Tesla shares are holding steady at around $333 as of Thursday morning, but they’re up over 22% over the last six months. On Tuesday, Musk shared Tesla’s so-called “Master Plan Part IV,” which positions Optimus as central to the company’s future: The CEO claims “about 80% of Tesla’s value will eventually come from Optimus,” a projection that, applied to Tesla’s current market capitalization of just over $1 trillion, would value the robot program at roughly $850 billion.

    Musk set ambitious targets early in the year, predicting Tesla would manufacture thousands of Optimus units in 2025 and projecting the project could eventually generate more than $10 trillion in revenue. However, production plans encountered significant headwinds when China implemented export restrictions on rare-earth materials essential for the robots’ movement. During Tesla’s April earnings call, Musk explained a magnet issue was disrupting production timelines, noting that China required assurances the rare-earth magnets would not be used for military purposes, adding that Tesla was working with Beijing to secure the necessary export licenses. Optimus’ production challenges deepened in June when Milan Kovac, who had overseen Tesla’s Optimus development since 2022, stepped down to spend more time with family.

    The projected price range for Tesla’s Optimus robot is between $15,000 and $30,000 at launch, with most recent updates suggesting the initial consumer models will be priced around $18,999 to $20,000, depending on features and configuration. Elon Musk and Tesla have publicly targeted keeping the price “under $20,000” for the base version, though more advanced or customized units could cost more.

    Production reality check

    While Musk’s rhetoric remains ambitious, the practical reality of bringing Optimus to market tells a more complex story. Tesla initially targeted producing 5,000 units by the end of 2025, but has manufactured only hundreds of prototypes so far. And with Kovac, the project’s original head, out, the program is now undergoing significant redesign under the leadership of Ashok Elluswamy, Tesla’s AI software vice president.

    Technical challenges continue to plague the project. According to The Information, engineers have reportedly run into issues with joints overheating, limited battery life, difficulties achieving human-like dexterity in the robot’s hands, and overall efficiency. Tesla has reportedly stockpiled mostly complete robot bodies that are still missing critical components like hands and forearms, while production of these intricate parts lags behind. Meanwhile, current Optimus prototypes deployed in Tesla’s own battery workshops are apparently operating at less than half the efficiency of human workers. The company paused parts procurement in June to redesign core systems, with suppliers indicating the fixes could take months.

    Tesla did not immediately respond to Fortune‘s request for comment.

    What it all means for Tesla

    For Benioff, the Optimus endorsement aligns with his broader transformation of Salesforce into what he calls an “agentic” enterprise. Under his leadership, the company has deployed AI agents extensively, reducing its customer support workforce from 9,000 to 5,000 employees while maintaining service levels. This experience with digital labor gives weight to his assessment of physical robots as the next frontier.

    “I’m not just managing human beings—I’m also managing agents, an entirely new type of digital labor,” Benioff said at the Salesforce 2.0 event last December. His vision extends beyond software to encompass robots as “physical manifestations of agents,” positioning companies like Tesla at the forefront of what could potentially be a trillion-dollar market opportunity.

    While Musk has increasingly positioned Tesla as an AI and robotics company rather than a traditional automaker, skeptics point to Tesla’s history of ambitious timelines that have consistently been pushed back—I mean, just look at this list. Many of Musk’s previous promises remain unfulfilled. That said, the stakes for Optimus are enormous. If successful, Optimus could revitalize Tesla and revolutionize manufacturing, caregiving, and countless other industries while justifying Tesla’s premium valuation. If production challenges persist, it risks becoming another example of Musk’s tendency to overpromise on breakthrough technologies.

    For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing.

    Dave Smith


  • This Robot Only Needs a Single AI Model to Master Humanlike Movements

    While there is a lot of work to do, Tedrake says all of the evidence so far suggests that the approaches used for LLMs also work for robots. “I think it’s changing everything,” he says.

    Gauging progress in robotics has become more challenging of late, of course, with video clips showing commercial humanoids performing complex chores, like loading refrigerators or taking out the trash, with seeming ease. YouTube clips can be deceptive, though, and humanoid robots tend to be either teleoperated, carefully programmed in advance, or trained to do a single task in very controlled conditions.

    The new Atlas work is a strong sign that robotics is starting to see the kind of advances that, in generative AI, eventually produced the general language models behind ChatGPT. Eventually, such progress could give us robots that can operate with ease in a wide range of messy environments and rapidly learn new skills—from welding pipes to making espressos—without extensive retraining.

    “It’s definitely a step forward,” says Ken Goldberg, a roboticist at UC Berkeley who receives some funding from TRI but was not involved with the Atlas work. “The coordination of legs and arms is a big deal.”

    Goldberg says, however, that the idea of emergent robot behavior should be treated carefully. Just as the surprising abilities of large language models can sometimes be traced to examples included in their training data, he says that robots may demonstrate skills that seem more novel than they really are. He adds that it is helpful to know details about how often a robot succeeds and in what ways it fails during experiments. TRI has previously been transparent with the work it’s done on LBMs and may well release more data on the new model.

    Whether simply scaling up the data used to train robot models will unlock ever more emergent behavior remains an open question. At a debate held in May at the International Conference on Robotics and Automation in Atlanta, Goldberg and others cautioned that engineering methods will also play an important role going forward.

    Tedrake, for one, is convinced that robotics is nearing an inflection point—one that will enable more real-world use of humanoids and other robots. “I think we need to put these robots out in the world and start doing real work,” he says.

    What do you think of Atlas’ new skills? And do you think that we are headed for a ChatGPT-style breakthrough in robotics? Let me know your thoughts on ailab@wired.com.


    This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.

    Will Knight


  • Humanoid robots showcase skills at Ancient Olympia. But they’re on a long road to catch up to AI

    ANCIENT OLYMPIA, Greece — With jerky determination, robots played soccer, wowed children with shadow-boxing skills and shot arrows on Monday at the birthplace of the Olympic Games.

    As they shuffled and occasionally froze for a battery change, their creators and futurologists debated the central question of when robots will be ready to tidy closets and wash dishes.

    Despite the explosive advance of artificial intelligence in applications like ChatGPT, its physical cousins — robots with humanlike appearances and skills — are lagging years behind.

    “I really believe that humanoids will first go to space and then to houses … the house is the final frontier,” said Minas Liarokapis, a Greek academic and startup founder who organized the International Humanoid Olympiad.

    The four-day event gathered experts and developers at Ancient Olympia in southern Greece where the flame is lit every two years for the modern Summer and Winter Games.

    “To enter the house it’ll take more than 10 years. Definitely more,” Liarokapis said. “I’m talking about executing tasks with dexterity, not about selling robots that are cute and are companions.”

    AI is racing ahead thanks to vast amounts of data readily available online. But training material for humanoid robots is scarce. It involves real-world actions that are slower, more expensive and harder to record than digital data like text or images.

    By one measure, humanlike robots are roughly 100,000 years behind AI in learning from data, according to an article in the current edition of the journal Science Robotics.

    To catch up, author Ken Goldberg, a professor at the University of California, Berkeley, urged makers to move beyond simulations and combine “old-fashioned engineering” with real-world training. That, he argues, would let robots “collect data as they perform useful work, such as driving taxis and sorting packages.”

    Luis Sentis, professor of aerospace engineering and engineering mechanics at The University of Texas at Austin, said that successful robotics requires collaboration between researchers, data companies and major manufacturers to provide scale. Those partnerships, he noted, are already attracting billions of dollars in funding to develop humanoid robots.

    “These synergies are happening very, very quickly. So I do see these problems being cracked on a day-to-day basis,” said Sentis, who’s also a co-founder of humanoid maker Apptronik.

    Developers at the Greek event brought their own ideas.

    Aadeel Akhtar, CEO and founder of advanced prosthetics maker Psyonic, gained international attention after appearing on the U.S. television show “Shark Tank” last year seeking investment for his company’s bionic hand, which offers sensory feedback.

    That data, he told The Associated Press on Monday, could accelerate robot development.

    “We’ve built our hand for both humans and robots,” he said. “So we’re closing that gap by actually using the hand of the prosthetic on humans and then translating that (data) over to robots.”

    Hon Weng Chong, CEO of Cortical Labs, said that the Australian biotech company is developing a so-called biological computer that uses real brain cells grown on a chip. Those cells can learn and respond to information — and potentially teach robots to think and adapt more like humans.

    At the Olympiad, organizers hoped to lay a foundation for annual competitions providing an “honest validation of the progress that has been made in humanoid robots,” said Patrick Jarvis, who with Liarokapis is co-founder of robot maker Acumino.

    Organizers limited events to what humanoids could reasonably attempt.

    “We were trying to get the discus and the javelin, but that’s tough for humanoid robots,” Jarvis said. “We also can’t say whose robot can do a high jump because you’d have to build special legs … and that’s not necessary for most humanoid robots.”

    One company even tested whether its machine could manage the shot put, said Thomas Ryden, executive director of MassRobotics, who worked to “get as many humanoid companies there as possible.”

    In the end, several U.S. roboticists came to Greece to speak, but few brought robots.

    Chinese companies increasingly showcase their machines at public events, such as Beijing’s first Humanoid Robot Games in August, while U.S. rivals mostly stick to polished videos that can mask failures.

    There are exceptions. Elon Musk revealed Tesla’s Optimus in 2022: The prototype walked stiffly onstage, turned and waved to a cheering crowd.

    Boston Dynamics went further. Ten years after launching its dog-like Spot, the company had five of the robots dance in synchrony to a Queen song on “America’s Got Talent.”

    One of the five broke down mid-routine, creating a reality-show punchline, but also highlighting their agility and coordination.

    “Can I be honest with you? I actually think — I don’t mean this in a cruel way — it was weirdly better that one of them died,” judge Simon Cowell said. “Because it showed how difficult this was.”

    ___

    AP Technology Writer Matt O’Brien reported from Providence, Rhode Island.


  • Nvidia Unveils High-Tech ‘Brain’ for Humanoid Robots and Self-Driving Cars

    Could humanoid robots get a lot more human? Nvidia may have brought that possibility a bit closer today with a smarter robot brain that demands less energy.

    The tech giant’s latest robotics offering is Jetson Thor, a supercomputer built for real-time AI computation on humanoid robots and smart machines alike, Nvidia announced in a press release on Monday.

    The new module is built to handle larger amounts of information using less energy than the previous model, Jetson Orin. Powered by the latest Blackwell GPUs, Jetson Thor has more than seven times the AI compute power and twice the memory of its predecessor, at more than three times the speed and energy efficiency, Nvidia claims.

    All this new power is supposed to unlock higher-speed sensor processing and visual reasoning that can help humanoid robots get better at autonomously seeing, moving, and making decisions.

    “Jetson Thor solves one of the most significant challenges in robotics: enabling robots to have real-time, intelligent interactions with people and the physical world,” the company wrote.

    It’s a considerable performance leap that Nvidia hopes will appeal to engineers. The company says early adopters include Amazon, Meta, Caterpillar, and Agility Robotics, a startup that makes commercially available humanoid robots for warehouses and other manufacturing facilities. The model is being considered for adoption by John Deere and OpenAI.

    It’s also being adopted by research labs at Stanford, Carnegie Mellon, and the University of Zurich, to power autonomous robots in medical research settings and more, Nvidia said in a blog post on Monday.

    The developer kit Jetson AGX Thor, which includes the Jetson T5000 module plus a reference carrier board, power supply, and an active heatsink with a fan, is now on sale on the company’s website starting at $3,499.

    Coming soon—and available now on pre-order—is Nvidia Drive AGX Thor, a developer kit using the same technology but for autonomous vehicles instead. Deliveries for that are slated to start in September, the company said.

    Nvidia’s growing bet on robotics

    Although AI chips are Nvidia’s bread and butter, the tech giant is betting big on robotics and autonomous vehicles.

    “This is going to be the decade of AV [autonomous vehicles], robotics, autonomous machines,” CEO Jensen Huang told CNBC in an interview in June.

    Huang elaborated on his trust in just how much the robotics industry can scale at the company’s annual shareholders meeting later that month.

    Along with AI, Nvidia expects robotics to provide the largest growth for the company, and combined, the two represent “a multitrillion-dollar growth opportunity,” Huang told investors.

    Earlier this year, the company also released a family of AI models that can be used to train humanoid robots, called Cosmos.

    Huang’s bet isn’t an empty one. Humanoid robots are advancing.

    Just last week, China, one of the key players in the global robotics race, hosted its first-ever robot Olympics, the World Humanoid Robot Games. At the three-day spectacle, companies showcased robots that can complete a 1,500-meter race in a little over six minutes and demonstrate practical job skills like sorting medicine or taking food orders.

    But still, the technology is hugely limited and far from widespread adoption. Even at the great robotics showcase in China, many of the robots suffered technical difficulties. One robot in the track and field race even ran straight into a bystander, knocking them over.

    Big week ahead for Nvidia

    Nvidia made the announcement at a rather convenient time for the company. The tech giant is reporting fiscal second quarter earnings on Wednesday afternoon, and the market is buzzing already.

    Nvidia dominates the AI market, so the company’s earnings always draw huge speculation, but the importance this week is boosted by volatile policy changes and questions around the economic value of wide-scale AI adoption.

    The company has been on a policy rollercoaster ride in its efforts to sell AI chips in China amid the escalating trade war between Beijing and Washington. China is a major market for Nvidia, and the uncertainty is keeping company investors on the edge of their seats.

    Also keeping investors occupied is a concerning new AI report from MIT researchers. The report found that despite the bold bets on AI in the corporate world, fewer than one in 10 AI pilot programs have translated to real revenue gains.

    Nvidia just hit a $4 trillion market value last month, becoming the first public company to achieve the feat. Now the stakes are high, as it’s up to the tech giant to prove that its valuation is not just built on AI hype.

    Ece Yildirim


  • KnowAtom Launches SocraCircle+: A First-of-Its-Kind Social AI Tool Designed to Deepen Human Connection and Student Thinking Through Live Classroom Dialogue

    New virtual student participant, SocraBot, helps K-12 students engage curiosity, build peer-to-peer understanding, and think more deeply – together

    KnowAtom, a leading provider of hands-on, inquiry-driven core science curriculum for K-8, has announced the launch of SocraCircle+, a groundbreaking classroom discussion tool that uses artificial intelligence not to generate answers – but to cultivate deeper, more human conversations in real time.

    At the center of SocraCircle+ is SocraBot, a virtual student participant who models curiosity, reflects on peer contributions, and helps students explore, agree, disagree, and extend their thinking in meaningful ways.

    “In many of today’s edtech tools, AI is used to help teachers or students do more of the same – create faster, polish more, or extract content through Q&A interfaces,” said Francis Vigeant, CEO of KnowAtom. “But more of the same simply produces more of the same outcomes. SocraCircle+ asks a different question: How can AI help students become more authentic, connected, and curious humans in conversation with one another? That’s where learning lives.”

    SocraCircle+ supports live, student-centered discussion across any K-12 subject – including science, ELA, social studies, SEL, and more. Teachers can launch a discussion in seconds. As students contribute, SocraBot responds naturally in real time, offering thoughtful follow-ups, modeling high-quality reasoning, and prompting students to consider ideas from new angles – just like a reflective peer would.

    Unlike AI tools that replicate or automate teacher work, SocraCircle+ is designed to elevate student voice and reduce reliance on teacher-led prompts or curriculum scripting. It’s built on KnowAtom’s own research and long-standing classroom results: clients consistently rank among the top performers in state science assessments, thanks to a pedagogy rooted in decades of cognitive science, educational research, and practical implementation.

    “At KnowAtom, we understand that learning is the result of interplay between culture, problems, questions, and ideas – and dialogue is the nexus where they meet,” Vigeant said. “SocraCircle+ brings that dialogue to life at a deep level.”

    Built-in multilingual support allows students to participate in their native language, with responses translated for the group, ensuring a voice for all students. SocraBot never provides links or external content, and student identities are anonymized within the platform to create a safe, focused environment.

    SocraCircle+ is now available at no additional cost to KnowAtom’s science curriculum customers. Educators can log in to their KnowAtom portal today to begin using the tool. For teachers who don’t have access to KnowAtom’s next generation science curriculum portal, free trials are available on KnowAtom’s SocraCircle+ page.

    For more information, visit www.knowatom.com or contact press@knowatom.com

    About KnowAtom
    KnowAtom is an award-winning K-8 education company committed to helping students think like scientists and engineers. Through fully integrated, 100% hands-on core science curriculum, professional development, and digital tools, KnowAtom supports schools in building classroom cultures of thinking around next generation science standards in adaptive and adoptive states. Grounded in research and proven through top-tier student outcomes, KnowAtom’s approach empowers educators to shift learning from passive recall to active, authentic engagement.

    Source: KnowAtom, LLC


  • One Firefly Supports the Future of STEM with $10,000 Donation to FIRST Robotics Competition South Florida Regional

    One Firefly, an award-winning marketing agency that caters to technology professionals in the residential and commercial custom integration markets, is reinforcing its commitment to innovation and community impact by donating $10,000 to the 2025 FIRST Robotics Competition South Florida Regional. This sponsorship underscores the company’s dedication to fostering the next generation of STEM leaders and providing opportunities for young minds to engage with science, technology, engineering, and mathematics in meaningful ways.

    FIRST (For Inspiration and Recognition of Science and Technology) is a globally recognized nonprofit that equips students with hands-on experience in robotics, teamwork, and problem-solving. The annual FIRST Robotics Competition brings together high school teams from across the country to design, build, and program industrial-sized robots that compete in high-energy challenges. The South Florida Regional event in April 2025 is one of many held nationwide that encourage students to develop critical STEM skills while fostering creativity, leadership, and collaboration.

    One Firefly CEO, Ron Callis, has had a long-standing connection with FIRST, dating back to 2012 when he co-founded a robotics team in South Florida after being inspired by a keynote address from FIRST founder Dean Kamen at CEDIA. Reflecting on his experience, Callis shared:

    “I’ve seen firsthand the profound impact of FIRST Robotics on students. This program teaches STEM skills and instills confidence, teamwork, and business acumen. Many students, especially those from underprivileged backgrounds, can access opportunities they never imagined possible. At One Firefly, we believe in giving back and investing in the future of our industry. Supporting FIRST Robotics is one way to help shape the next generation of innovators.”

    The connection between FIRST Robotics and the custom integration industry is particularly relevant, as many students develop skills that translate directly into technology, engineering, and automation careers. Callis noted the increasing need for skilled talent in the custom integration space and emphasized how programs like FIRST can serve as a pipeline for the next generation of industry professionals:

    “The custom integration industry faces real challenges regarding labor shortages. Many of these students have the technical aptitude, problem-solving mindset, and hands-on experience that make them ideal candidates for careers in our field. By supporting FIRST, we’re not only investing in these students’ futures but also in the future of our industry.”

    For Jessica Telles, Corporate Programs Lead at One Firefly, the sponsorship holds personal significance. A former FIRST Robotics team captain, Telles experienced firsthand how the program opens doors for students. After participating in FIRST, she interned at One Firefly before joining full-time, where she has now been an integral team member for nearly a decade.

    “FIRST Robotics shaped my career in ways I never expected,” said Telles. “It gave me leadership experience, technical skills, and a network of mentors who supported my growth. Seeing One Firefly support this initiative is incredibly meaningful because I know firsthand how life-changing this program can be for students.”

    As part of its sponsorship, One Firefly will participate in the South Florida Regional event, engage with students, and explore additional opportunities to support STEM education in the future. The company remains dedicated to fostering innovation, education, and career development within the technology industry.

    For more information about the FIRST Robotics Competition South Florida Regional, visit www.firstinspires.org/robotics/frc.

    About One Firefly

    One Firefly is an award-winning marketing agency specializing in custom-tailored marketing solutions, along with growth services such as recruiting and hiring support, for technology professionals in the residential and commercial markets. The company was founded in 2007 to help businesses in the AV and integration industry grow and succeed through effective branding, digital marketing, and web development. A five-time honoree on the Inc. 5000 list of fastest-growing companies in the U.S., One Firefly is proud to have built a reputation for delivering purposeful marketing solutions to the niche audio-visual space. For more information, visit www.onefirefly.com.

    Source: One Firefly

    Source link

  • Faceport Launches Telepresence Helmet and Unveils Facegiving Project to Reunite Families This Thanksgiving

    Introducing the World’s First Telepresence Helmet and Its Accompanying Robot, Ushering in a New Era of Connection with the Facegiving Project.

    Faceport, Inc., a New York City-based telepresence company, proudly announces the launch of its groundbreaking telepresence platform, designed to bring remote individuals face-to-face with others as if they were physically present.

    At the heart of this innovation is the Faceport Helmet, a device worn by a trusted person that displays a remote individual’s face in real-time. This cutting-edge technology fosters natural, seamless conversations, creating an unparalleled sense of presence for everyone in the wearer’s surroundings.

    The Faceport Robot, an optional docking station for the Faceport Helmet, adds versatility by serving as a dressable, portable upper body with a robotic neck. Ideal for use at tables or counters, it allows the remote user to “look around” and engage in group conversations naturally, offering a unique, lifelike telepresence experience.

    To celebrate this milestone, Faceport is proud to introduce the Facegiving Project, a heartfelt initiative offering free telepresence services to reunite loved ones. This Thanksgiving, select applicants will have the chance to “be present” with their families through Faceport’s groundbreaking technology, bridging physical distances for this cherished holiday. Applications are now open, giving people a chance to bring their loved ones home and share in the joy of being together this Thanksgiving.

    “Our mission at Faceport is to break down the barriers of distance and create meaningful human connections,” said Evan Kaye, Founder and CEO of Faceport. “With the launch of our technology and the Facegiving Project, we’re redefining how technology can unite people, enabling them to share life’s most significant moments, no matter where they are.”

    A limited quantity of both products will be available for purchase starting in early December, with exclusive early access available to anyone who signs up for Faceport’s email list. Deliveries are expected to begin in Spring 2025.

    Source: Faceport, Inc.

    Source link

  • A tiny grain of nuclear fuel is pulled from ruined Japanese nuclear plant, in a step toward cleanup

    TOKYO — A robot that has spent months inside the ruins of a nuclear reactor at the tsunami-hit Fukushima Daiichi plant delivered a tiny sample of melted nuclear fuel on Thursday, in what plant officials said was a step toward beginning the cleanup of hundreds of tons of melted fuel debris.

    The sample, the size of a grain of rice, was placed into a secure container, marking the end of the mission, according to Tokyo Electric Power Company Holdings, which manages the plant. It is being transported to a glove box for size and weight measurements before being sent to outside laboratories for detailed analyses over the coming months.

    Plant chief Akira Ono has said the sample will provide key data for planning a decommissioning strategy, developing the necessary technology and robots, and learning how the accident developed.

    Despite multiple probes in the years since the 2011 disaster that wrecked the plant and forced thousands of nearby residents to leave their homes, much about the site’s highly radioactive interior remains a mystery.

    The sample, the first to be retrieved from inside a reactor, was significantly less radioactive than expected. Officials had been concerned that it might be too radioactive to be safely tested even with heavy protective gear, and set an upper limit for removal out of the reactor. The sample came in well under the limit.

    That’s led some to question whether the robot extracted the nuclear fuel it was looking for from an area in which previous probes have detected much higher levels of radioactive contamination, but TEPCO officials insist they believe the sample is melted fuel.

    The extendable robot, nicknamed Telesco, began its mission in August with a plan for a two-week round trip, after previous missions had been delayed since 2021. But progress was suspended twice due to mishaps: the first an assembly error that took nearly three weeks to fix, and the second a camera failure.

    On Oct. 30, it clipped a sample weighing less than 3 grams (0.1 ounces) from the surface of a mound of melted fuel debris sitting on the bottom of the primary containment vessel of the Unit 2 reactor, TEPCO said.

    Three days later, the robot returned to an enclosed container, as workers in full hazmat gear slowly pulled it out.

    On Thursday, the gravel, whose radioactivity was measured earlier this week at well below the upper limit set for environmental and health safety, was placed into a secure container for removal from the compartment.

    The sample return marks the first time melted fuel has been retrieved from the containment vessel.

    Fukushima Daiichi lost its key cooling systems during a 2011 earthquake and tsunami, causing meltdowns in its three reactors. An estimated 880 tons of fatally radioactive melted fuel remains in them.

    The government and TEPCO have set a 30-to-40-year target to finish the cleanup by 2051, which experts say is overly optimistic and should be updated. Some say it could take a century or longer.

    No specific plans for the full removal of the fuel debris or its final disposal have been decided.

    Source link

  • A robot retrieves the first melted fuel from Fukushima nuclear reactor

    TOKYO — A remote-controlled robot has safely returned with a tiny piece of melted fuel it collected from inside one of three damaged reactors at the tsunami-hit Fukushima Daiichi nuclear power plant for the first time since the 2011 meltdown.

    The Tokyo Electric Power Company Holdings, which manages the plant, said Saturday that the extendable fishing rod-like robot successfully clipped a bit of gravel about 5 millimeters (0.2 inches) across, the size of a tiny granola bit, from the top surface of a mound of molten fuel debris that sits on the bottom of the No. 2 reactor’s primary containment vessel.

    The Telesco robot, with its frontal tongs still holding the melted fuel bit, returned to the enclosed container for safe storage after workers in full hazmat gear pulled it out of the containment vessel earlier Saturday.

    The sample return marks the first time melted fuel has been retrieved from the containment vessel. But the mission is not over until the sample’s radioactivity is confirmed to be below a set standard and it is safely placed into a container.

    If the radioactivity exceeds the limit, the robot must go back inside the reactor to find another piece. TEPCO officials said they expect the piece is small enough to meet the requirement.

    The mission initially started in August for what was supposed to be a two-week round trip but had been suspended twice due to mishaps.

    The first was a procedural mistake at the beginning that held up the work for nearly three weeks; then the robot’s two cameras, designed to transmit views of the target areas to operators in the remote control room, failed. The camera problem required the robot to be pulled out all the way for replacement before the mission resumed Monday.

    Fukushima Daiichi lost its key cooling systems during the 2011 earthquake and tsunami, causing meltdowns in its three reactors. An estimated 880 tons of fatally radioactive molten fuel remains in them, and TEPCO has carried out a number of robotic probes to figure out how to decommission the plant.

    Telesco on Wednesday successfully clipped a piece presumably weighing less than 3 grams (0.1 ounce) from the planned area right underneath the Unit 2 reactor core, from which large amounts of melted fuel fell during the meltdown 13 years ago, TEPCO said.

    Plant chief Akira Ono said even the tiny speck can provide key data to plan a decommissioning strategy, develop necessary technology and robots, and retroactively learn how the accident developed.

    The government and TEPCO have set a 30-to-40-year target for the cleanup, which experts say is overly optimistic and should be updated.

    No specific plans for the full removal of the fuel debris or its final disposal have been decided.

    Source link

  • This Is a Glimpse of the Future of AI Robots

    Despite stunning AI progress in recent years, robots remain stubbornly dumb and limited. The ones found in factories and warehouses typically go through precisely choreographed routines without much ability to perceive their surroundings or adapt on the fly. The few industrial robots that can see and grasp objects can only do a limited number of things with minimal dexterity due to a lack of general physical intelligence.

    More generally capable robots could take on a far wider range of industrial tasks, perhaps after minimal demonstrations. Robots will also need more general abilities in order to cope with the enormous variability and messiness of human homes.

    General excitement about AI progress has already translated into optimism about major new leaps in robotics. Elon Musk’s car company, Tesla, is developing a humanoid robot called Optimus, and Musk recently suggested that it would be widely available for $20,000 to $25,000 and capable of doing most tasks by 2040.

    Previous efforts to teach robots to do challenging tasks have focused on training a single machine on a single task because learning seemed untransferable. Some recent academic work has shown that with sufficient scale and fine-tuning, learning can be transferred between different tasks and robots. A 2023 Google project called Open X-Embodiment involved sharing robot learning between 22 different robots at 21 different research labs.

    A key challenge with the strategy Physical Intelligence is pursuing is that robot data is not available for training at anything like the scale of the text used for large language models. So the company has to generate its own data and come up with techniques to improve learning from a more limited dataset. To develop π0, the company combined so-called vision language models, which are trained on images as well as text, with diffusion modeling, a technique borrowed from AI image generation, to enable a more general kind of learning.

    For robots to be able to take on any robot chore that a person asks them to do, such learning will need to be scaled up significantly. “There’s still a long way to go, but we have something that you can think of as scaffolding that illustrates things to come,” Levine says.
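    The recipe described above, a vision-language embedding steering an iterative, diffusion-style refinement of a robot action, can be caricatured in a few lines. Everything here is an illustrative stand-in: the “embedding” is a simple concatenation and the hand-written denoiser plays the role of a learned noise predictor, so this is a sketch of the shape of the sampling loop, not Physical Intelligence’s actual π0 code.

    ```python
    # Caricature of a diffusion-style action head conditioned on a
    # vision-language embedding. A real system learns the denoiser from
    # robot data; here it is hand-written so the loop is runnable.
    import random

    def embed_observation(image_feats, text_feats):
        """Stand-in for a vision-language model: just concatenates features."""
        return image_feats + text_feats

    def denoise_step(action, conditioning, step, total_steps):
        """Stand-in for a learned noise predictor: nudges the noisy action
        toward a target determined by the conditioning vector."""
        target = [2.0 * c for c in conditioning]   # pretend learned mapping
        alpha = 1.0 / (total_steps - step)         # stronger near the end
        return [a + alpha * (t - a) for a, t in zip(action, target)]

    def sample_action(conditioning, total_steps=50, seed=0):
        """Start from Gaussian noise and refine iteratively, as diffusion does."""
        rng = random.Random(seed)
        action = [rng.gauss(0.0, 1.0) for _ in conditioning]
        for step in range(total_steps):
            action = denoise_step(action, conditioning, step, total_steps)
        return action

    obs = embed_observation([0.5, -0.25], [1.0, 0.0])  # 4-dim embedding
    action = sample_action(obs)
    print([round(a, 3) for a in action])
    ```

    The point of the loop is that the action is not predicted in one shot; it is refined step by step from noise, with the conditioning pulling every step, which is the property diffusion modeling brings over from image generation.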

    Will Knight

    Source link

  • Preparing students for Industry 5.0: Rethinking STEM to shape the future workforce

    The global workforce is transforming, propelled by the dawn of the Fifth Industrial Revolution–commonly referred to as Industry 5.0. Unlike previous revolutions that focused solely on technological advancement, Industry 5.0 strongly emphasizes collaboration between humans and machines. While AI, robotics, and drones continue to push boundaries, this era also recognizes the importance of human creativity and problem-solving in conjunction with these tools.

    As we prepare the workforce of the future, it becomes clear that we must rethink our approach to STEM education. It’s no longer enough to teach technical skills in isolation. Instead, we must create learning environments that foster creativity and adaptability–key traits that will help students thrive in an increasingly complex and tech-driven world.

    The imperative for Industry 5.0 readiness

    The rise of AI and automation is reshaping industries, creating an urgent need for students to develop technical competencies and think innovatively about how these technologies can be applied. The future workforce must be able to work alongside machines in ways we can’t even fully anticipate yet. That demands an education system that evolves to meet future challenges–not just by focusing on coding or data analysis but by cultivating skills that will prove invaluable in navigating new, unforeseen challenges.

    Hands-on STEM learning is key to this evolution. Rather than confining students to theoretical exercises, integrating real-world technologies like drones into the classroom can provide students with the physical experiences they need to better understand the evolving job market. As these young minds engage with advanced tools, they gain the technical know-how and develop the mindset required to succeed in Industry 5.0.

    Why drones? Connecting STEM to real-world applications

    Drones are among the most impactful ways to bring STEM education to life. Unlike traditional teaching methods, drones allow students to interface directly with technology, transforming their learning experiences from passive to active. In classrooms incorporating drones, students can experience real-world problem-solving scenarios that transcend textbook learning.

    For example, drones are already playing a crucial role in industries such as agriculture, logistics, and environmental monitoring. By bringing these applications into the classroom, students are provided the opportunity to understand these technologies and explore their potential in solving pressing challenges across industries. Students can learn about everything from engineering and physics to coding and data analysis, all while working on projects with tangible, real-world implications.

    Take, for instance, schools that leverage partnerships with drone providers to deploy curricula that include practical lesson plans, like surveying local farmland and analyzing soil conditions to help improve crop yields. These projects go beyond theoretical knowledge, teaching students to apply data analytics in meaningful ways. In another example, high school students can design drones to support healthcare initiatives, like delivering medical supplies to remote areas–projects that mirror innovations currently being explored in healthcare logistics. These experiences prepare students for real-world careers and illuminate career pathways that may not have otherwise been obvious or desirable options.

    Bridging the skills gap with experiential learning

    Verticalized skills gaps have become a significant barrier to innovation and economic growth, as many students are graduating without the technical and critical thinking abilities demanded by today’s employers. The gap is particularly evident in data analysis, programming, advanced manufacturing, and cybersecurity–fields that are essential for navigating the complexities of the modern digital economy.

    This gap continues to widen as technological advancements outpace traditional education methods. In a world increasingly driven by data, students need to learn how to collect, analyze, and interpret information to make informed decisions. Introducing project-based learning centered around data analysis–such as interpreting data sets from environmental studies or designing experiments that involve data collection–gives students hands-on experience in this critical skill area.

    As work becomes increasingly global and cross-functional, students must develop the ability to communicate effectively in diverse teams. Experiential learning projects, such as team-based STEM competitions or group technology builds, teach students the importance of working together toward shared goals while honing their communication skills, mirroring the collaborative environments they will encounter in the workforce.

    Incorporating creativity and human ingenuity in Industry 5.0

    Technical skills are essential, but the distinguishing factor of Industry 5.0 is the synergy between human ingenuity and machine precision. Our ability to innovate and collaborate with machines to solve complex problems will mark this era. Schools should focus on fostering creativity alongside technical training, as the future workforce will be called upon to design new solutions, lead teams, and tackle challenges that have yet to emerge.

    Schools can consider integrating design thinking into their curriculum, where students engage in iterative processes to ideate, prototype, and test solutions to complex problems. In a classroom setting, students could use design thinking to create smart home devices that integrate human comfort with AI precision, focusing on user-centric solutions.

    Entrepreneurship courses in schools will empower students to develop tech startups where they identify a societal problem, design a technological solution, and pitch their idea to judges, peers, and even potential investors. This encourages both creativity in coming up with new ideas and collaboration with technology to make ideas a reality.

    The classroom as a catalyst for the future workforce

    As we move deeper into Industry 5.0, the demand for a workforce that can blend technical skills with innovative problem-solving increases. Integrating hands-on technology like drones into educational environments offers a dynamic way to address this need. It allows students to connect with STEM fields practically and inspiringly. Educators have the crucial responsibility to provide students with the necessary tools and perspectives. By incorporating creative, physical, and project-based lessons into the curriculum, we foster the innovation, adaptability, and collaboration essential for the future workforce.

    Rob Harvey, FTW Robotics

    Source link

  • CU Boulder hosts robotics showcase to celebrate Research & Innovation Week

    BOULDER, Colo — It is Research & Innovation Week at the University of Colorado at Boulder. To celebrate, the university hosted a robotics showcase for the public.

    The College of Engineering and Applied Science demonstrated some of its best technology in the engineering center. Students at CU Boulder are developing cutting-edge robotics to help with future search and rescue efforts and other dangerous, dark and dirty jobs.

    Dr. William Doe, a research development manager at CU Boulder, said the goal is to find new solutions to the perplexing problems facing the world.

    “Robotics is something that’s being used pretty much in every walk of life. It’s very interdisciplinary. We have students that have interest in electrical engineering, computer science, mechanical engineering, aerospace engineering, all of those kinds of degrees contribute to robotics,” said Doe.

    Robotics lab manager Destin Woods and PhD student Miles Mena showcased a powerful dog-like robot that would be useful in search and rescue operations.

    “It would use these cameras to find different objects within the environment that could detect a human presence of some sort. A 3D map is created from the lidar that sits on top of the robot. It would autonomously plan to create a path that will expand the map so it can explore more areas,” said Woods.

    Spot, the four-legged robot, is capable of autonomous exploration. Parts and all, he is worth about $150,000.
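    The map-expanding autonomy Woods describes is often implemented with frontier-based exploration: find mapped free cells that border unmapped space, then plan a path to the nearest one and repeat. The toy 2D grid below sketches that idea; the grid, cell values, and function names are illustrative, not CU Boulder’s code.

    ```python
    # Toy frontier-based exploration on a 2D occupancy grid.
    # Cell values: 0 = free, 1 = obstacle, -1 = unknown (not yet mapped).
    from collections import deque

    FREE, OBSTACLE, UNKNOWN = 0, 1, -1

    def find_frontiers(grid):
        """Free cells adjacent to unknown space -- candidate exploration targets."""
        rows, cols = len(grid), len(grid[0])
        frontiers = []
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] != FREE:
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN:
                        frontiers.append((r, c))
                        break
        return frontiers

    def shortest_path(grid, start, goal):
        """Breadth-first search through free cells; returns the cell path or None."""
        rows, cols = len(grid), len(grid[0])
        queue = deque([start])
        came_from = {start: None}
        while queue:
            cur = queue.popleft()
            if cur == goal:
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = came_from[cur]
                return path[::-1]
            r, c = cur
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (r + dr, c + dc)
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and grid[nxt[0]][nxt[1]] == FREE and nxt not in came_from):
                    came_from[nxt] = cur
                    queue.append(nxt)
        return None

    # A partially mapped corridor: the robot at (0, 0) has seen the left side;
    # the right side is still unknown.
    grid = [
        [0, 0, 0, -1, -1],
        [1, 1, 0, -1, -1],
        [0, 0, 0, -1, -1],
    ]
    robot = (0, 0)
    frontiers = find_frontiers(grid)
    # Pick the nearest frontier (by path length) and plan a route to it.
    paths = [shortest_path(grid, robot, f) for f in frontiers]
    best = min((p for p in paths if p), key=len)
    print(frontiers)   # cells bordering unknown space
    print(best)        # path the robot would drive to expand its map
    ```

    A real system like Spot’s would do this over a 3D lidar map with a proper motion planner, but the select-a-frontier, plan-a-path loop is the same.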

    “This robot was part of a program that was designed to place robots in dangerous, dirty and dark situations. All those environments you don’t want a human in because they’re detrimental to their health and their safety,” said Mena.

    PhD student Heiko Kabutz demonstrated CLARI and mCLARI, insect-sized robots capable of shifting their shape. Changing form allows for exploration and discovery in the tiniest of places.

    “Our robot is a four-legged, small-scale robot, which is similar to an insect, but a robot. The body is soft, so the body can change shape, which allows the robot to squeeze into gaps where typically a robot can’t fit into,” Kabutz said.

    Next, Kabutz and his team plan to further enhance the mCLARI by adding more features and advancing the shape-shifting and leg motions.

    Richard Butler

    Source link

  • Elon Musk unveils Tesla’s ‘Cybercab,’ plans to bring autonomous driving tech to other models in 2025

    LOS ANGELES (AP) — Tesla unveiled its long-awaited robotaxi at a Hollywood studio Thursday night, though fans of the electric vehicle maker will have to wait until at least 2026 before they are available.

    CEO Elon Musk pulled up to a stage at the Warner Bros. studio lot in one of the company’s “Cybercabs,” telling the crowd that the sleek, AI-powered vehicles don’t have steering wheels or pedals. He also expressed confidence in the progress the company has made on autonomous driving technology that makes it possible for vehicles to drive without human intervention.

    Tesla began selling the software, which is called “Full Self-Driving,” nine years ago. But there are doubts about its reliability.

    “We’ll move from supervised Full Self-Driving to unsupervised Full Self-Driving, where you can fall asleep and wake up at your destination,” he said. “It’s going to be a glorious future.”

    Tesla expects the Cybercabs to cost under $30,000, Musk said. He estimated that the vehicles would become available in 2026, then added “before 2027.”

    The company also expects to make the Full Self-Driving technology available on its popular Model 3 and Model Y vehicles in Texas and California next year.

    “If they’re going to eventually get to robotaxis, they first need to have success with the unsupervised FSD at the current lineup,” said Seth Goldstein, equity strategist at Morningstar Research. “Tonight’s event showed that they’re ready to take that step forward.”

    When Tesla will actually take that step, however, has led to more than a little anxiety for investors who see other automakers deploying similar technology right now. Shares of Tesla Inc. tumbled 9% at the opening bell Friday.

    Waymo, the autonomous vehicle unit of Alphabet Inc., is carrying passengers in vehicles without human safety drivers in Phoenix and other areas. General Motors’ Cruise self-driving unit had been running robotaxis in San Francisco until a crash last year involving one of its vehicles.

    Also, Aurora Innovation said it will start hauling freight in fully autonomous semis on Texas freeways by year’s end. Another autonomous semi company, Gatik, plans to haul freight autonomously by the end of 2025.

    “Tesla yet again claimed it is a year or two away from actual automated driving — just as the company has been claiming for a decade. Indeed, Tesla’s whole event had a 2014 vibe, except that in 2014 there were no automated vehicles actually deployed on public roads,” Bryant Walker Smith, a University of South Carolina law professor who studies automated vehicles, told The Associated Press in an email. “Now there are real AVs carrying real people on real roads, but none of those vehicles are Teslas. Tonight did not change this reality; it only made the irony more glaring.”

    Tesla had 20 or so Cybercabs on hand and offered event attendees the opportunity to take rides inside the movie studio lot — not on Los Angeles’ roads.

    At the presentation, which was dubbed “We, Robot” and was streamed live on Tesla’s website and X, Musk also revealed a sleek minibus-looking vehicle that, like the Cybercab, would be self-driving and could carry up to 20 passengers.

    The company also trotted out several of its black and white Optimus humanoid robots, which walked a few feet from the attendees before showing off dance moves in a futuristic-looking gazebo.

    Musk estimated that the robots would cost between $28,000 and $30,000 and would be able to babysit, mow lawns and fetch groceries, among other tasks.

    “Whatever you can think of, it will do,” he said.

    The unveiling of the Cybercab comes as Musk tries to persuade investors that his company is more about artificial intelligence and robotics even as it labors to sell its core products, an aging lineup of electric vehicles.

    Tesla’s model lineup is struggling and isn’t likely to be refreshed until late next year at the earliest, TD Cowen analyst Jeff Osborne wrote in a research note last week.

    Osborne also noted that, in TD Cowen’s view, the “politicization of Elon” is tarnishing the Tesla brand among Democrat buyers in the U.S.

    Musk has endorsed Republican presidential candidate Donald Trump and has pushed many conservative causes. Last weekend he joined Trump at a Pennsylvania rally.

    Musk has been saying for more than five years that a fleet of robotaxis is near, allowing Tesla owners to make money by having their cars carry passengers while not otherwise in use. Musk said that Tesla owners will be able to put their cars into service on a company robotaxi network.

    But he has acknowledged that past predictions for the use of autonomous driving proved too optimistic. In 2019, he promised the fleet of autonomous vehicles by the end of 2020.

    The announcement comes as U.S. safety regulators are investigating Full Self-Driving and Autopilot based on evidence that they have weak systems for making sure human drivers pay attention.

    In addition, the U.S. National Highway Traffic Safety Administration forced Tesla to recall Full Self-Driving in February because it allowed speeding and violated other traffic laws, especially near intersections. Tesla was to fix the problems with an online software update.

    Last April in Snohomish County, Washington, near Seattle, a Tesla using Full Self-Driving hit and killed a motorcyclist, authorities said. The Tesla driver told authorities that he was using the system while looking at his phone when the car rear-ended the motorcyclist. The motorcyclist was pronounced dead at the scene, authorities said.

    NHTSA says it’s evaluating information on the fatal crash from Tesla and law enforcement officials.

    The Justice Department also has sought information from Tesla about Full Self-Driving and Autopilot, as well as other items.

    ___

    Krisher reported from Detroit.

    Source link

  • One Tech Tip: Here’s what you need to do before and after your phone is stolen or lost

    LONDON (AP) — Phones hold so much of our digital lives — emails, social media and bank accounts, photos, chat messages and more — that if they ever get stolen or go missing, it can cause major disruption beyond just the loss of a device.

    In some places, phone thefts have surged so much it’s now an everyday problem, with thieves on electric bikes snatching them out of pedestrians’ hands, swiping them off restaurant tables or pickpocketing them on the subway.

    In Britain, where 200 phones are stolen every day in “snatch thefts,” the government has pledged to crack down on the crime and is meeting with tech companies and device makers to come up with solutions.

    Here are steps you can take before and after your phone goes missing:

    Basic protections

    There are things you can do to make it less painful if your phone is stolen. Because some of these features are more technical in nature, people often overlook them.

    Lock down as much as you can. At a minimum, require a password or biometric scan to unlock the device. You can also add similar requirements to important individual apps — like your banking account, WhatsApp or Signal — to protect your finances or chats from thieves.

    Also, activate the find my device feature, which is available for both iOS and Android. Samsung also offers its own service called SmartThings Find.

    You’ll probably have lots of precious photos saved on your camera roll. It’s a good idea to back them up, along with contacts, calendar items and other files. Google and Apple offer cloud-based backup services, although the free versions have limited storage space. You can also back up your files to an external hard drive, memory card or a laptop.

    Some police forces and phone companies advise turning off message previews, which prevents thieves trying to break into your accounts from seeing reset or login codes when the phone is locked. To do this on an iPhone, for example, go to the Notifications section of your settings menu and tap Show Previews. You can also scroll down the app list to turn previews off for individual apps but leave them on for less risky ones like news or weather.

    Turn on newer features

    Recent iOS and Android updates include a number of new functions designed to make thefts less attractive.

    iPhone users can turn on Stolen Device Protection, which makes it a lot harder for phone thieves to access key functions and settings. Many thieves will want to wipe the data and reset the phone so they can resell it, but with this feature on, they’ll need a face or fingerprint scan to do so. Apple also recently updated its “activation lock” feature to make it harder for thieves to sell parts from stolen phones.

    Android phones, meanwhile, can now use artificial intelligence to detect motion indicating someone snatched it out of your hand and is racing away on foot or a bike, and then lock the screen immediately. And there’s a feature called Private Spaces that lets you hide sensitive files on your phone.

    Jot down your device number

    Take note of your phone’s unique identifying number, known as an IMEI number. It can link you to the phone if it does eventually get recovered. Call it up by typing *#06# on your phone’s keypad. If you’ve already lost your phone, you can also find it in other places, like the box it came in.
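    As an aside for the technically curious: the last digit of a 15-digit IMEI is a Luhn check digit, so you can sanity-check a number you’ve jotted down. Here is a minimal Python sketch (the function name is ours for illustration, not part of any phone software):

    ```python
    def imei_check_digit_ok(imei: str) -> bool:
        """Validate a 15-digit IMEI using the Luhn checksum.

        Doubling every second digit from the right, folding any
        two-digit result back to one digit (e.g. 14 -> 1 + 4 = 5),
        and summing everything must give a multiple of 10.
        """
        if len(imei) != 15 or not imei.isdigit():
            return False
        total = 0
        for i, ch in enumerate(reversed(imei)):
            d = int(ch)
            if i % 2 == 1:      # every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9      # same as summing the two digits
            total += d
        return total % 10 == 0
    ```

    A mistyped digit (or a transposed pair, in most cases) will fail the check, which is exactly what the check digit is for.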

    If it’s stolen

    If you’re unlucky enough to have your phone stolen, notify police. Call your insurance company if you have a policy that covers the device. Inform your phone company so they can freeze your number and issue a replacement SIM card or eSIM. Notify your bank so they can watch out for suspicious transactions.

    Tracking your device

    Try to locate your phone with the find my device feature. For iPhones, go to iCloud.com/find from a web browser, while Android users should head to www.google.com/android/find. Samsung also has its own service for Galaxy phones.

    These services will show your phone’s current or last known location on a map, which is also handy if you’ve just lost track of it somewhere in the house. Apple says even if a phone can’t connect to the internet or has been turned off, it can use Bluetooth to ping any nearby Apple devices using the same network behind its AirTags tracking devices. Google says newer Pixel phones can be located “for several hours” after they’ve been turned off using similar technology.

    You can get the phone to play a sound, even if it’s on silent. You can also put the phone in lost mode, which locks it and displays a message and contact details on the screen for anyone who finds it. Lost mode on iOS also suspends any Apple Pay cards and passes.

    If the device shows up in an unfamiliar location on the map, and you suspect it has been stolen, experts say it’s better to notify police rather than trying to get it back yourself.

    Cybersecurity company Norton says, “Confronting a thief yourself is not recommended.”

    Final steps

    If you can’t find your phone, there are some final steps to take.

    Log yourself out of all your accounts that might be accessible on the phone, and then remove it from your list of trusted devices that you use to get multifactor authentication codes — but make sure you can get those codes somewhere else, such as email.

    Then, as a last resort, you can erase the phone remotely so that there’s no chance of any data falling into the wrong hands. However, take note: Apple says that if the iPhone is offline, the remote erase will only happen the next time it comes back online. But if you find the phone before it gets erased, you can cancel the request.

    Google warns that SD memory cards plugged into Android phones might not be remotely erased. And after the phone has been wiped, it won’t show up with find my device.

    ___

    Is there a tech challenge you need help figuring out? Write to us at [email protected] with your questions.

    Source link



  • Could This Be the Start of Amazon’s Next Robot Revolution?

    Could This Be the Start of Amazon’s Next Robot Revolution?

    In 2012, Amazon quietly acquired a robotics startup called Kiva Systems, a move that dramatically improved the efficiency of its ecommerce operations and kickstarted a wider revolution in warehouse automation.

    Last week, the ecommerce giant announced another deal that could prove similarly profound, agreeing to hire the founders of Covariant, a startup that has been testing ways for AI to automate more of the picking and handling of a wide range of physical objects.

    Covariant may have found it challenging to commercialize AI-infused industrial robots given the high costs and sharp competition involved; the deal, which will also see Amazon license Covariant’s models and data, could bring about another revolution in ecommerce—one that might prove hard for any competitor to match given Amazon’s vast operational scale and data trove.

    The deal is also an example of a Big Tech company acquiring core talent and expertise from an AI startup without actually buying the company outright. Amazon came to a similar agreement with the startup Adept in June. In March, Microsoft struck a deal with Inflection, and in August, Google hired the founders of Character AI.

    Back in the aughts, Kiva developed a way to move products through warehouses by having squat robots lift and carry stocked shelves over to human pickers—a trick that meant workers no longer needed to walk miles every day to find different items. Kiva’s mobile bots were similar to those employed in manufacturing, and the company used clever algorithms to coordinate the movement of thousands of bots in the same physical space.
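    The coordination problem Kiva solved can be sketched in miniature: each tick, every robot requests the next grid cell on its way to a goal, and a cell granted to one robot is denied to the rest, so no two robots ever collide. The toy Python below illustrates the idea only — it is not Kiva’s actual algorithm, which plans whole paths and handles deadlocks, priorities and much more:

    ```python
    def step_toward(pos, goal):
        """Move one grid cell along x first, then y (Manhattan walk)."""
        x, y = pos
        gx, gy = goal
        if x != gx:
            return (x + (1 if gx > x else -1), y)
        if y != gy:
            return (x, y + (1 if gy > y else -1))
        return pos  # already at goal

    def tick(positions, goals):
        """Advance each robot one step, reserving cells first-come-first-served."""
        occupied = set(positions)
        reserved = set()
        new_positions = []
        for pos, goal in zip(positions, goals):
            nxt = step_toward(pos, goal)
            # Advance only into a cell nobody occupies now and nobody
            # else has claimed this tick; otherwise wait in place.
            if nxt != pos and (nxt in reserved or nxt in occupied):
                nxt = pos
            reserved.add(nxt)
            new_positions.append(nxt)
        return new_positions

    def run(positions, goals, max_ticks=50):
        for _ in range(max_ticks):
            if positions == goals:
                break
            positions = tick(positions, goals)
            # Invariant: no two robots ever share a cell.
            assert len(set(positions)) == len(positions)
        return positions
    ```

    Even this crude first-come-first-served scheme keeps robots from colliding in a shared corridor; the hard part, at Kiva’s scale, is doing it for thousands of robots without gridlock.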

    Amazon’s mobile robot army grew from around 10,000 in 2013 to 750,000 by 2023, and the sheer scale of the company’s operations meant that it could deliver millions of items faster and cheaper than anyone else.

    As WIRED revealed last year, Amazon has in recent years developed new robotic systems that rely on machine learning to do things like perceive, grab, and sort packed boxes. Again, Amazon is leveraging scale to its advantage: the training data gathered as items flow through its facilities helps improve the performance of its algorithms. The effort has already led to further automation of work previously done by humans at some fulfillment centers.

    The one chore that remains stubbornly difficult to mechanize, however, is the physical grasping of products. It requires adaptability to account for things like friction and slippage, and robots will inevitably be confronted with unfamiliar and awkward items among Amazon’s vast inventory.
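    To see why friction and slippage are so troublesome, consider the crudest Coulomb-friction model: a two-fingered grasp holds only if the friction each fingertip can generate (friction coefficient times grip force) adds up to more than the object’s weight. A back-of-the-envelope Python sketch, with hypothetical numbers and nothing to do with Amazon’s or Covariant’s actual models:

    ```python
    def grasp_holds(weight_n: float, grip_force_n: float, mu: float,
                    n_contacts: int = 2) -> bool:
        """Coulomb-friction sketch of a parallel-jaw grasp.

        Each contact resists at most mu * grip_force_n of tangential
        load; the grasp holds if the contacts' combined friction
        exceeds the object's weight (all forces in newtons).
        """
        max_friction = n_contacts * mu * grip_force_n
        return max_friction >= weight_n
    ```

    Squeezing a 10 N object at 20 N per jaw works for a grippy surface (mu = 0.5) but fails for a slick one (mu = 0.2) — and a real robot rarely knows mu in advance, which is one reason a warehouse full of unfamiliar items is such a hard target for automation.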

    Covariant has spent the past few years developing AI algorithms with a more general ability to handle a range of items more reliably. The company was founded in 2020 by Pieter Abbeel, a professor at UC Berkeley who has done pioneering work on applying machine learning to robotics, along with several of his students, including Peter Chen, who became Covariant’s CEO, and Rocky Duan, the company’s CTO. This week’s deal will see all three of them, along with several research scientists at the startup, join Amazon.

    “Covariant’s models will be used to power some of the robotic manipulation systems across our fulfillment network,” Alexandra Miller, an Amazon spokesperson, tells WIRED. The tech giant declined to reveal financial details of the deal.

    Abbeel was an early employee at OpenAI, and his company has taken inspiration from the story of ChatGPT’s success. In March, Covariant demonstrated a chat interface for its robot and said it had developed a foundation model for robotic grasping, meaning an algorithm designed to become

    Will Knight

    Source link