ReportWire

Tag: Artificial Intelligence

  • Gmail’s new AI features, turning it into a personal assistant

    More artificial intelligence is being implanted into Gmail as Google tries to turn the world’s most popular email service into a personal assistant that can improve writing, summarize far-flung information buried in inboxes and deliver daily to-do lists.

    The new AI features announced Thursday could herald a pivotal moment for Gmail, a service that transformed email when it was introduced nearly 22 years ago. Since then, Gmail has amassed more than 3 billion users to become nearly as ubiquitous as Google’s search engine.

    Gmail’s new AI options will only be available in English within the United States for starters, but the company is promising to expand the technology to other countries and other languages as the year unfolds.

    The most broadly available tool will be a “Help Me Write” option designed to learn a user’s writing style so it can personalize emails and make real-time suggestions on how to burnish the message.

Google is also offering subscribers who pay for its Pro and Ultra services access to technology that mirrors the AI Overviews feature built into its search engine since 2023. The expansion will enable subscribers to pose conversational questions in Gmail’s search bar and get instant answers about information they are trying to retrieve from their inboxes.

    In what could turn into another revolutionary step, “AI Inbox” is also being rolled out to a subset of “trusted testers” in the U.S. When it’s turned on, the function will sift through inboxes and suggest to-do lists and topics that users might want to explore.

    “This is us delivering on Gmail proactively having your back,” said Blake Barnes, a Google vice president of product.

All of the new technology is tied to Google’s latest AI model, Gemini 3, which was unleashed into its search engine late last year. The upgrade, designed to turn Google search into a “thought partner,” has been so well received that it prompted OpenAI CEO Sam Altman, whose company makes the popular ChatGPT chatbot, to issue a “code red” following its release.

    But thrusting more AI into Gmail poses potential risks for Google, especially if the technology malfunctions and presents misleading information or crafts emails that get users into trouble — even though people are able to proofread the messages or turn off the features at any time.

Allowing Google’s AI to dig deeper into inboxes to learn more about users’ habits and interests could also raise privacy issues — a challenge that Gmail confronted from the get-go.

    To help subsidize the free service, Google included targeted ads in Gmail that were based on information contained within the electronic conversations. That twist initially triggered a privacy backlash among lawmakers and consumer groups, but the uproar eventually died down and never deterred Gmail’s rapid growth as an email provider. Rivals eventually adopted similar features.

    As it brings more AI into Gmail, Google promises none of the content that the technology analyzes will be used to train the models that help Gemini improve. The Mountain View, California, company says it also has built an “engineering privacy” barrier to corral all the information within inboxes to protect it from prying eyes.


  • At CES 2026, iMogul AI pitches a smarter path into Hollywood – WTOP News

    iMogul AI, created by a Rockville startup, is designed to help screenwriters, actors and producers connect — using artificial intelligence not to create content, but to analyze it.

    iMogul CEO Chris LeSchack at CES 2026 in Las Vegas, Nevada.(Courtesy Steve Winter)

    Breaking into Hollywood has never been easy.

    For decades, aspiring screenwriters have faced a familiar cycle: write a script, submit it, wait, follow up, wait some more — and often never hear back. In an industry where who you know is invariably more valuable than what you know, even strong material can die on the vine before it ever reaches the right decision-makers.

At CES 2026, a Rockville, Maryland-based startup believes it has found a way to disrupt that process.

Exhibiting this year at Eureka Park, CES’s startup showcase, iMogul AI is unveiling a platform designed to help screenwriters, actors and producers connect more efficiently — using artificial intelligence not to create content, but rather to analyze, validate and accelerate the acceptance process, essentially lowering that all-important barrier to entry.

    “The company and the product is called iMogul,” CEO Chris LeSchack said. “As we all know, it’s incredibly hard to get into Hollywood. iMogul is essentially designed for screenwriters who have created screenplays but don’t know where to go with it.”

    LeSchack speaks from personal experience. In 2005, he attempted to pitch a screenplay to Fox Studios. While the studio expressed interest, the project ultimately stalled.

    “They said, ‘Yeah, Jerry Bruckheimer has done this before. Maybe next time,’” LeSchack recalled.

    The experience planted the seed for what would eventually become iMogul AI.

    Rather than acting as another script-hosting site or marketplace, iMogul AI aims to create a feedback-driven ecosystem around each screenplay. Writers upload their scripts to the app, where audiences can read them, vote on elements such as casting, filming locations and creative direction, and provide validation that can be shared with potential investors and producers.

    “What if I had an app and got the demographics or the information from the audience that actually go and read the script, vote on actors, vote on directors and cinematographers?” LeSchack said. “And then I take that information and provide it to friends and family investors or actual real investors who are interested in Hollywood.”

iMogul AI, LeSchack emphasized, does not use generative AI to write or alter scripts.

    “I don’t use AI to do anything with the content itself,” he said. “That’s all the screenwriter.”

    Instead, the platform applies AI to market analysis — evaluating potential audiences, identifying tax incentives and shooting locations, and recommending actors who might align with a project’s budget and goals.

“If the screenwriter is interested in selecting their own talent, they can go and do that,” LeSchack said. “The higher-tier the actor or actress a film engages, the higher the value of the screenplay; but in many instances, we want to bring in relative unknowns … some B-listers and others … talent that might bring the cost down while also helping the screenwriter pitch it to investors and producers.”

    The AI also analyzes scripts to suggest optimal filming locations. By parsing external and internal scenes, settings and themes, the system can flag regions with favorable tax incentives.

    “We’re using AI really to … deal with flow,” LeSchack said. “Help actors, screenwriters get back to work, producers — in fact, everybody in the film industry.”

    Bypassing traditional gatekeepers

    For emerging creatives, that promise resonates strongly.

Zsuzsanna Juhasz, an employee of iMogul AI, is also a junior at USC majoring in film studies and production. As she embarks on a career in the entertainment industry, Juhasz is exactly the sort of person iMogul was created for.

    “One of the scariest things about breaking into the industry is not knowing the right people,” Juhasz said. “If you don’t know the right people, maybe your work won’t be recognized or it won’t get out there. And that’s terrifying as you’ve invested four years into your education building your portfolio.”

    She sees iMogul AI as a way to bypass traditional gatekeepers.

    “This app will bridge that connection,” she said. “My work will be in front of audiences. People can read the kind of worlds I’m building, the characters I’m building, and they’ll be interested in that. They can vote for it.”

    The platform’s casting features are also central to its appeal. Actors can read sides, submit reels and audition directly through the app — opening doors for performers without agency representation.

    “It lets you have a sort of control that the industry doesn’t always offer you,” Juhasz said.

    That functionality will soon expand, thanks to a new feature called iMogul Take One, which LeSchack announced at CES.

    “Take One is going to invite actors to come in and read sides … and then pitch it out into the real world,” he said. “So we might be able to find the next up-and-coming actor.”

The app is currently free to download on Apple’s App Store, with a Google Play version in the works. While screenwriters may eventually pay a modest monthly fee, LeSchack said the priority is growth.

“The more screenwriters that put screenplays up there, the more audience comes in,” he said.

    As iMogul AI makes its CES debut, the company is positioning itself not as a replacement for Hollywood, but as a smarter on-ramp. For creatives long locked out of the system, that may be the most compelling pitch of all.

Thomas Robertson

  • AI company, Google settle lawsuit over Florida teen’s suicide linked to Character.AI chatbot

    A Florida family agreed to settle a wrongful death lawsuit Wednesday with an AI company, Google and others after their teen son died by suicide in 2024. 

The terms of the settlement, which was filed in the U.S. District Court for the Middle District of Florida, were not disclosed. 

Megan Garcia filed a lawsuit in October 2024, saying her 14-year-old son, Sewell Setzer III, died that February after a monthslong virtual emotional and sexual relationship with a chatbot known as “Dany.” Garcia says she found out after her son’s death that he had been having conversations with multiple bots and had carried on a virtual romantic and sexual relationship with one in particular.

    In testimony before Congress in September, Garcia said, “I became the first person in the United States to file a wrongful death lawsuit against an AI company for the suicide of my son.”

She said her 6-foot-3 son was a “gentle giant” who was gracious, obedient and easy to parent, loved music and made his brothers laugh. She said he “had his whole life ahead of him.”

    In this undated photo provided by Megan Garcia of Florida in Oct. 2024, she stands with her son, Sewell Setzer III.

    Courtesy Megan Garcia via AP


Garcia testified that the platform had no mechanisms to protect her son or notify an adult when teens were spending too much time interacting with chatbots. She said the “companion” chatbot was programmed to engage in sexual roleplay, presented itself as a romantic partner and even posed as a psychotherapist, falsely claiming to be licensed.

Users can interact with existing bots or create original chatbots. The bots, which are powered by large language models (LLMs), can send lifelike messages and engage in text conversations with users. 

Character.AI announced new safety features “designed especially with teens in mind” in December 2024, after two lawsuits alleged that its chatbots had interacted inappropriately with underage users. The company said it is collaborating with teen online safety experts to design and update features. Users must be 13 or older to create an account.

    A Character.AI spokesperson told CBS News the company cannot comment further at this time. 


  • At CES, auto and tech companies transform cars into proactive companions

    LAS VEGAS — In a vision of the near future shared at CES, a girl slides into the back seat of her parents’ car and the cabin instantly comes alive. The vehicle recognizes her, knows it’s her birthday and cues up her favorite song without a word spoken.

    “Think of the car as having a soul and being an extension of your family,” Sri Subramanian, Nvidia’s global head of generative AI for automotive, said Tuesday.

    Subramanian’s example, shared with a CES audience on the show’s opening day in Las Vegas, illustrates the growing sophistication of AI-powered in-cabin systems and the expanding scope of personal data that smart vehicles may collect, retain and use to shape the driving experience.

    Across the show floor, the car emerged less as a machine and more as a companion as automakers and tech companies showcased vehicles that can adapt to drivers and passengers in real time — from tracking heart rates and emotions to alerting if a baby or young child is accidentally left in the car.

    Bosch debuted its new AI vehicle extension that aims to turn the cabin into a “proactive companion.” Nvidia, the poster child of the AI boom, announced Alpamayo, its new vehicle AI initiative designed to help autonomous cars think through complex driving decisions. CEO Jensen Huang called it a “ChatGPT moment for physical AI.”

    But experts say the push toward a more personalized driving experience is intensifying questions about how much driver data is being collected.

    “The magic of AI should not just mean all privacy and security protections are off,” said Justin Brookman, director of marketplace policy at Consumer Reports.

    Unlike smartphones or online platforms, cars have only recently become major repositories of personal data, Brookman said. As a result, the industry is still trying to establish the “rules of the road” for what automakers and tech companies are allowed to do with driver data.

    That uncertainty is compounded by the uniquely personal nature of cars, Brookman said. Many people see their vehicles as an extension of themselves — or even their homes — which he said can make the presence of cameras, microphones and other monitoring tools feel especially invasive.

    “Sometimes privacy issues are difficult for folks to internalize,” he said. “People generally feel they wish they had more privacy but also don’t necessarily know what they can do to address it.”

    At the same time, Brookman said, many of these technologies offer real safety benefits for drivers and can be good for the consumer.

    On the CES show floor, some of those conveniences were on display at automotive supplier Gentex’s booth, where attendees sat in a mock six-seater van in front of large screens demonstrating how closely the company’s AI-equipped sensors and cameras could monitor a driver and passengers.

    “Are they sleepy? Are they drowsy? Are they not seated properly? Are they eating, talking on phones? Are they angry? You name it, we can figure out how to detect that in the cabin,” said Brian Brackenbury, director of product line management at Gentex.

Brackenbury said it’s ultimately up to the car manufacturers to decide how the vehicle reacts to the data that’s collected, which he said is stored in the car and deleted after the video frames, for example, have been processed.

“One of the mantras we have at Gentex is we’re not going to do it just because we can, just because the technology allows it,” Brackenbury said, adding that “data privacy is really important.”


  • The coolest technology from Day 2 of CES 2026

LAS VEGAS — Crowds flooded the freshly opened showroom floors on Day 2 of CES and were met by thousands of robots, AI companions and assistants, health and longevity tech, wearables and more.

Siemens President and CEO Roland Busch kicked off the day with a keynote detailing how the company’s customers are harnessing artificial intelligence to transform their businesses. He was joined onstage by Nvidia CEO Jensen Huang to announce an expanded partnership; the two said they are launching a new AI-driven industrial revolution to reinvent all aspects of manufacturing, production and supply chain management.

Lenovo ended the day with a guest-star-rich visual banquet spotlighting how its AI platforms can help people personally (wearables), with their businesses (enterprise platforms) and with the world around them. To drive home his points, Lenovo CEO Yang Yuanqing was joined by tech superstars like Nvidia’s Huang, AMD CEO Lisa Su and Intel CEO Lip-Bu Tan.

CES is an annual opportunity for companies large and small to parade the products they plan to put on shelves this year. Here are the highlights from Day 2:

    Gaming tech company Razer is well known for bringing buzz-worthy hardware to CES, like haptic, or tactile, seat cushions and tri-screen laptops.

    This year, it’s reaching beyond its standard gaming base and demonstrating two AI-powered prototypes — an over-ear gaming headset that doubles as a general-purpose assistant, and an AI desk companion that can provide gaming advice and also organize a user’s life.

The holographic companion, based on a Razer on-screen AI assistant launched last year (Project Ava), has transitioned off-screen into a small glass tube that sits near your computer. The animated sprite has built-in speakers and a camera so it can see the world around it.

    Both devices are AI agnostic, so you can use your preferred model. For the demo, the headset — Project Motoko — ran on OpenAI’s ChatGPT. Project Ava worked off xAI’s Grok. Although still in development, Razer said it expects both to be released commercially later this year.

Imagine your plane lands and, when you look out the window, you see autonomous robots guiding it to the gate and then unloading the luggage. Oshkosh Corporation is pitching that future for airports big and small.

    At CES, it debuted a fleet of autonomous airport robots designed to help airlines pull off what it calls “the perfect turn” — a tightly timed process that happens after a plane lands, including fueling, cleaning, handling cargo and getting passengers off and back on.

    For travelers, CEO John Pfeifer says the goal is fewer delays without compromising safety. The technology is also designed to keep those tarmac tasks moving even during severe weather, like winter storms or extreme heat, when conditions are daunting for human crews, Pfeifer said. Testing with major airlines is already underway, and the robots would likely debut at large hub airports like Atlanta or Dallas, with a goal of rolling them out over the next few years.

    Chinese robovac maker Roborock has introduced a vacuum that literally sprouts chicken-like legs to navigate stairs and clean steps along the way.

The newly introduced Saros Rover was a tad slow in its ascent and descent during the demo (though it cleaned each step along the way), but Roborock says it will be able to traverse almost any style of stairwell, including spiral ones. No release date was given for the Rover, which the company says is still in development.

While it may look like a typical bathroom scale, Withings’ new Body Scan 2 measures much more than weight. After taking off their shoes and socks, people lined up to try out the “smart scale,” which in 90 seconds measures 60 different biomarkers, including heart age, vascular age and metabolism, using the pads of the feet and hands.

    The $600 scale, which will be available for purchase in the spring, also provides a nerve health score and measures changes in someone’s electrodermal activity, or the skin’s electrical properties due to sweat gland activity. The smart scale and a corresponding app, which costs $10 a month or $100 a year, provide personalized advice and a health trajectory for its users. The French company’s goals are to help people monitor their health and reverse bad habits to promote longevity.

Commonwealth Fusion Systems, Nvidia and Siemens announced Tuesday that they are working together to use AI to hasten the arrival of nuclear fusion as a new source of carbon-free energy.

    In Massachusetts, Commonwealth Fusion Systems is building a prototype fusion power plant called SPARC, which is about 70% complete. Through the new partnership, it will create a “digital twin,” or online simulation, of the physical machine.

    CFS CEO Bob Mumgaard said it will ask questions of the simulation to speed up progress on the physical machine and rapidly analyze data, compressing years of manual experimentation into weeks of understanding.

    SPARC is a prototype for the company’s first planned power plant, called ARC, that is meant to connect to the grid in the early 2030s. The device will use very strong magnets to create conditions for fusion to happen. Mumgaard also said CFS’s first high-temperature superconducting magnet has been installed in SPARC.



  • AI renewing some prescriptions in Utah pilot program, Politico reports

    In Utah, artificial intelligence can now renew some prescriptions. A Politico exclusive says the pilot program will “test how far patients and regulators are willing to trust AI in medicine.” Yasmin Khorram, economic policy reporter for Politico, joins CBS News to discuss her reporting.


  • California Tax Revenue Getting a Boost From AI Boom — but for How Long?

    As California becomes more dependent on tax revenue from the tech industry, its stake in the health of the artificial intelligence industry has grown.

    The state is seeing financial benefits from the AI boom, a new analysis by the Legislative Analyst’s Office shows. But the boom raises questions: Will it continue to be accompanied by a decline in tech and other jobs? Is it a bubble?

Tax revenue from stock-option withholding paid by some of the state’s biggest tech companies made up about 10% of all income tax withholding in 2025, estimated Chas Alamo, a principal fiscal and policy analyst with the Legislative Analyst’s Office. Alamo looked at tech companies’ public financial filings and other data through the second quarter of 2025. That figure would be about the same as in 2024, and is up from just over 6% three years ago, when he first did the analysis.

The state’s biggest source of revenue is the personal income tax. It’s common for tech companies to pay employees in stock options in addition to their base wages. The value of stock options that have vested and are fully owned by employees is treated like ordinary income for tax purposes, so companies pay withholding taxes on some of that income to the state and U.S. governments.
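
To make the withholding mechanics concrete, here is a minimal Python sketch with made-up numbers; the share count, prices and flat 10% rate below are hypothetical illustrations, not California’s actual withholding schedule or any company’s figures:

    # Hypothetical illustration of withholding on a vested stock-option gain.
    # All numbers are invented for the example; real withholding schedules differ.
    def withholding_on_equity(shares, market_price, strike_price, withholding_rate):
        """Withholding owed on the spread between market price and strike price."""
        ordinary_income = shares * max(market_price - strike_price, 0.0)
        return ordinary_income * withholding_rate

    # 10,000 options exercised at a $5 strike while shares trade at $150,
    # withheld at a flat 10% for the sake of the example:
    print(withholding_on_equity(10_000, 150.0, 5.0, 0.10))  # -> 145000.0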

Shining a spotlight on where the state’s tax revenue comes from is especially timely at a moment when California needs all the revenue it can get. The state is expected to face a nearly $18 billion budget deficit this year and anticipates having to fill funding gaps created by cuts from President Donald Trump’s administration. But the state’s growing reliance on AI-driven revenue is risky for two reasons: fears that the technology is overhyped, and the threat AI’s rise poses to livelihoods.

    Alamo based his analysis on the performance of the state’s five most valuable tech companies by market value: Apple, Google, Nvidia, Broadcom and Meta. Shares of Nvidia, Broadcom and Google did especially well in 2025: They rose 25%, 46% and 59% for the year, respectively. Alamo also included Intel, Cisco, AMD, Intuit, PayPal, Applied Materials and Qualcomm in his analysis because they paid substantial amounts of withholding on their employees’ stock options.

    “We’re seeing a real boost to income-tax receipts because of this — for a relatively small number of employees,” Alamo told CalMatters. “If the AI market were to deteriorate, we could see these withholdings decline.”

In other words, if the AI bubble pops, California could see a steep drop in tax revenue. That’s because there has been little job growth and wages are not rising, Alamo said, adding that the analyst’s office has been raising concerns about “the stagnant nature of the state’s labor market and broader economy” for the past couple of years. In September, the most recent month for which data is available, California’s unemployment rate rose to 5.6%, the highest among U.S. states.


    ‘AI is not a job-gainer’

    Despite the AI boom, the number of tech jobs in the Bay Area actually decreased from September 2024 to August 2025, according to the latest analysis by the Bay Area Council Economic Institute, a think tank supported by the Bay Area Council, a business coalition. Jobs in the information industry were down 1.3% over that period, while jobs in professional and business services fell 1.5%. Some tech companies, such as San Francisco-based Salesforce, mentioned AI as a factor when they disclosed layoffs of thousands of employees.

    “Right now, on net, AI is not a job-gainer,” said Jeff Bellisario, executive director of the think tank. “The bigger question for us is, you put aside (tech companies’) valuation and think about the number of people employed in these companies.”

    Another analysis of employment data by the California Business Roundtable’s information arm, the California Center for Jobs and the Economy, shows a loss of more than 130,000 jobs in high tech, including manufacturing jobs, through the first quarter of last year.

    “Tech booms in the past have led to an employment boom,” Bellisario said. “This doesn’t feel like that.”

    There’s no consensus about whether this tech boom is set to go bust anytime soon. Some of the biggest AI optimists include Jensen Huang, chief executive of chipmaker Nvidia, who told investors in November: “There has been a lot of talk about an AI bubble. From our vantage point, we see something very different.”

    Another optimist is Dan Ives, longtime tech analyst and managing director at Wedbush Securities.

    “This is not a bubble,” he told CalMatters. “This is Year 3 of an eight- to 10-year buildout of the AI revolution.” Ives said AI could be huge for U.S. innovation, and that this moment in time reminds him “much more of a 1996 moment than a 1999 or 2000 moment.”

    In the mid-1990s, widespread adoption of personal computers and the advent of the graphical web browser paved the way for the dot-com boom and gave rise to companies such as Google, Netflix and PayPal. But by 2000 or shortly afterward, after the founders of those companies made their fortunes, many other internet companies had gone out of business — some in spectacular flameouts, such as Webvan or Pets.com.

Today, there are signs that there are too many startups in certain subsectors, according to analysts at PitchBook, which tracks public and private capital markets. Among the ones they mentioned in their 2026 outlook: AI scribes in health care, which automatically generate medical notes; aerial defense drones; content development in gaming; personal assistant bots; and more. The analysts warned investors that startups will need to clearly differentiate themselves to deliver value.

Researchers for Allianz Trade, the global insurance company, wrote in a November brief: “The financial market frenzy over AI shows classic signs of an asset bubble: widespread consensus, unproven valuations and returns at times detached from earnings.” The researchers also said they were watching corporate spending on AI closely as concerns grow about tightening energy constraints. AI is driving demand for data centers, which are straining the electric grid.

Discussion about a bubble aside, some tech-friendly experts argue that California’s reliance on AI means the state should help the sector succeed, such as by not overregulating it.

“What’s important to remember is that California’s social safety net depends on a healthy tech industry,” said Kaitlyn Harger, an economist for Chamber of Progress, a think tank funded by the tech industry. The financial cushion tech provides helps the state fund public-sector jobs, health services, education, social services and more, Harger said.

California leads all states in trying to regulate AI, and it is expected to fight the president’s recent executive order seeking federal AI rules that would supersede state laws.

    This story was originally published by CalMatters and distributed through a partnership with The Associated Press.

Associated Press

  • Lisa Su Shows Off AMD’s High-End Chips Designed for A.I.’s ‘Yotta-Scale’ Future

    Lisa Su holds up the AMD Ryzen AI Halo, an A.I. developer platform, during AMD’s keynote at CES 2026 on Jan. 5, 2026. Caroline Brehman / AFP via Getty Images

    At CES 2026, AMD CEO Lisa Su used the industry’s biggest stage to outline where the next era of A.I. is headed. The A.I. industry, she said during her keynote yesterday (Jan. 5), is entering the era of “yotta-scale computing,” driven by unprecedented growth in both training and inference. The constraint, Su argued, is no longer the model itself but the computational foundation beneath it.

    “Since the launch of ChatGPT a few years ago, we’ve gone from about a million people using A.I. to more than a billion active users,” Su said. “We see A.I. adoption growing to over five billion active users as it becomes indispensable to every part of our lives, just like the cell phone and the internet today.”

Global A.I. compute capacity, she noted, is now on a path from zettaflops toward yottaflops within the next five years. A yottaflop is a 1 followed by 24 zeros. “Ten yottaflops is 10,000 times more computing power than we had in 2022. There has never been anything like this in the history of computing, because there has never been a technology like A.I.,” Su said.
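
As a quick sanity check on that arithmetic (assuming, as Su’s framing implies, that 2022 capacity was on the order of one zettaflop, i.e. 10^21 operations per second):

    \[
    \frac{10\ \text{yottaflops}}{1\ \text{zettaflop}}
    = \frac{10 \times 10^{24}}{10^{21}}
    = 10^{4} = 10{,}000 ,
    \]

which is consistent with the “10,000 times more computing power” figure she cited.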

    Yet Su cautioned that the industry still lacks the computing power required to support what A.I. will ultimately enable. AMD’s response, she said, is to build the foundation end-to-end—positioning the company as an architect of the next A.I. phase rather than a supplier of isolated components.

    That strategy centers on Helios, a rack-scale data center platform designed for trillion-parameter A.I. training and large-scale inference. A single Helios rack delivers up to three A.I. exaflops, integrating Instinct MI455X accelerators, EPYC “Venice” CPUs, Pensando networking and the ROCm software ecosystem. The emphasis is on durability at scale, with systems built to grow alongside A.I. workloads rather than locking customers into closed, short-lived architectures.

    AMD also previewed the Instinct MI500 Series, slated for launch in 2027. Built on next-generation CDNA 6 architecture, the roadmap targets up to a thousandfold increase in A.I. performance compared with the MI300X GPUs introduced in 2023.

    Su stressed that yotta-scale computing will not be confined to data centers. A.I., she said, is becoming a local, everyday experience for billions of users. AMD announced an expansion of its on-device A.I. push with Ryzen AI Max+ platforms, capable of supporting models with up to 128 billion parameters using unified memory.

    Beyond commercial products, Su tied AMD’s roadmap to public-sector priorities. Joined on stage by Michael Kratsios, President Trump’s science and technology advisor, who is slated to speak at CES later this week, she discussed the U.S. government’s Genesis Mission, a public-private initiative aimed at strengthening national A.I. leadership. As part of that effort, AMD-powered supercomputers Lux and Discovery are coming online at Oak Ridge National Laboratory, reinforcing the company’s role in scientific discovery and national infrastructure.

    The keynote closed with a $150 million commitment to A.I. education, aligned with the U.S. A.I. Literacy Pledge—signaling that, in AMD’s view, sustaining yotta-scale ambition will depend as much on talent development as on silicon.

Victor Dey

  • Jensen Huang Shakes Vegas With Nvidia’s Physical A.I. Vision at CES

    Jensen Huang opened CES 2026 with a 90-minute keynote on Nvidia’s latest innovations. Patrick T. Fallon / AFP via Getty Images

Nvidia CEO Jensen Huang is the biggest celebrity in Las Vegas this week. His CES keynote at the Fontainebleau Resort proved harder to get into than any sold-out Vegas show. Journalists who cleared their schedules for the event waited for hours outside the 3,600-seat BleauLive Theatre. Many who arrived on time—after navigating the sprawling maze of conference venues and, in some cases, flying in from overseas to see the tech king of the moment—were turned away due to overcapacity and redirected to a watch party outside, where some 2,000 attendees gathered in a mix of frustration and reverence.

    Shortly after 1 p.m., Huang jogged onto the stage, wearing a glistening, embossed black leather jacket, and wished the crowd a happy New Year. He opened with a brisk history of A.I., tracing the last few years of exponential progress—from the rise of large language models to OpenAI’s advances in reasoning systems and the explosion of so-called agentic A.I. All of it built toward the theme that dominated the bulk of his 90-minute presentation: physical A.I.

    Physical A.I. is a concept that has gained momentum among leading researchers over the past year. The goal is to train A.I. systems to understand the intuitive rules humans take for granted—such as gravity, causality, motion and object permanence—so machines can reason about and safely interact with real environments.

    Nvidia enters the self-driving race

Huang unveiled Alpamayo, a world foundation model designed to power autonomous driving. He called it “the world’s first reasoning autonomous driving A.I.”

To demonstrate, Nvidia played a single-take video of a Mercedes vehicle equipped with Alpamayo navigating busy downtown San Francisco traffic. The car executed turns, stopped for lights and vehicles, yielded to pedestrians and changed lanes. A human driver sat behind the wheel throughout the drive but did not intervene.

    One particularly interesting thing Huang discussed was how Nvidia trains physical A.I. systems—a fundamentally different challenge from training language models. Large language models learn from text, of which humanity has produced enormous quantities. But how do you teach an A.I. Newton’s second law of motion?

    “Where does that data come from?” Huang asked. “Instead of languages—because we created a bunch of text that we consider ground truths that A.I. can learn from—how do we teach an A.I. the ground truths of physics? There are lots and lots of videos, but it’s hardly enough to capture the diversity of interactions we need.”

    Nvidia’s answer is synthetic data: information generated by A.I. systems based on samples of real-world data. In the case of Alpamayo, another Nvidia world model—called Cosmos—uses limited real-world inputs to generate far more complex, physically plausible videos. A basic traffic scenario becomes a series of realistic camera views of cars interacting on crowded streets. A still image of a robot and vegetables turns into a dynamic kitchen scene. Even a text prompt can be transformed into a video with physically accurate motion.

Nvidia said the first fleet of Alpamayo-powered robotaxis, built into 2025 Mercedes-Benz CLA vehicles, is slated to launch in the U.S. in the first quarter, followed by Europe in the second quarter and Asia later in 2026.

    For now, Alpamayo remains a Level 2 autonomous driving system—similar to Tesla’s Full Self-Driving—which requires a human driver to remain attentive behind the wheel at all times. Nvidia’s longer-term goal is Level 4 autonomy, where vehicles can operate without human supervision in specific, constrained environments. That’s one step below full autonomy, or Level 5.

    “The ChatGPT moment for physical A.I. is nearly here,” Huang said in a voiceover accompanying one of the videos shown during the keynote.

Sissi Cao

  • The Most Interesting Tech AP Saw on Day 1 of CES

LAS VEGAS (AP) — Sure, Nvidia, AMD and Intel all had important chip and AI platform announcements on the first day of CES 2026, but what audiences wanted to see more of was Star Wars and Jensen Huang’s little robot buddies.

CES is an annual opportunity for companies both large and small to parade the products they plan to put on shelves this year. And, as predicted, artificial intelligence was anchored in nearly everything as tech firms continue to look for AI products that will attract customers.

AP has been on the ground looking at booths and covering big announcements. Here is a roundup of the highlights we saw on the first day of CES.

The biggest buzzword in the air at CES is “physical AI,” Nvidia’s term for AI models that are trained in a virtual environment using computer-generated, “synthetic” data, then deployed as physical machines once they’ve mastered their purpose.

CEO Jensen Huang showed off Cosmos, an AI foundation model trained on massive datasets and capable of simulating environments governed by actual physics. He also announced Alpamayo, an AI model specifically designed for autonomous driving. Huang revealed that Nvidia’s next-generation AI superchip platform, dubbed Vera Rubin, is in full production, and that Nvidia has a new partnership with Siemens. All of this shows Nvidia is going to fight increased competition to retain its reputation as the backbone of the AI industry.

    But once Huang called for two little, waddling, chirping robots to join him on stage, that’s all the audience wanted to see more of.


    The chips are back in town

    AMD CEO Lisa Su announced a new line of its famed Ryzen AI processors as the company continues to expand its footprint in the world of AI-powered personal computers.

    For gamers, AMD also showed off the latest version of its gaming-focused processor, the AMD Ryzen 7 9850X3D.

    Meanwhile, Intel announced its new AI chip for laptops, Panther Lake (also known as the Intel Core Ultra Series 3), and said the company has plans to launch a new platform to address a growing market for handheld video gaming machines.

    Intel, a Silicon Valley pioneer that enjoyed decades of growth as its processors powered the personal computer boom, fell into a slump after missing the shift to the mobile computing era unleashed by the iPhone. It fell further behind after the AI boom propelled Nvidia into the spotlight.

    President Donald Trump’s administration stepped in recently to secure a 10% stake in the company, making the government one of Intel’s biggest shareholders. Federal officials said they invested in Intel to support U.S. technology and domestic manufacturing.


    Uber dives back into the robotaxi game

Uber is giving the public a first look at its robotaxi at CES this week. Uber, along with luxury electric vehicle manufacturer Lucid Motors and vehicle tech company Nuro, introduced an autonomous vehicle with an Uber-designed in-cabin experience.

    Uber calls it the most luxurious robotaxi yet. It features cameras, sensors and radars that provide 360-degree perception and a low-profile roof “halo” with integrated LEDs that will display riders’ initials to help them spot their car and track their ride status. Inside, riders can personalize everything from climate and seat heating to music, while real-time visuals show exactly what the vehicle is seeing on the road and the route it plans to take.

    Autonomous on-road testing began last month in San Francisco, led by Nuro, marking a major step toward what the companies said is a planned launch before the end of the year.


Star Wars and Lego announce a new partnership

When Lucasfilm chief creative officer David Filoni brought out an array of X-Wing pilots, Chewbacca, R2-D2 and C-3PO, he won over the Star Wars fandom for Lego.

    Lego announced its Lego Smart Play platform on Monday, which introduces new smart bricks, tags and special minifigs for your collection. The new bricks contain sensors that enable them to sense light and distance, and to provide an array of responses, essentially lights and sounds, when they are used in unison.

Combine this with the newly announced partnership with the Star Wars franchise, and now you can create your own interactive space battles and lightsaber duels.


    LG reveals a new robot to help around the home

    File this one under intrigued, for now.

The South Korean tech giant gave the media a glimpse Monday of its humanoid robot, which is designed to handle household chores such as folding laundry and fetching food. Although many companies have robots on display at CES, LG is certainly one of the biggest tech companies to promise to put a service robot in homes.

    It will be on display — and we assume demonstrating some of its purported abilities — beginning Tuesday, so we’ll have more to report soon.


    What’s new with lollipops?

Music you can taste was on display Monday at CES: Lollipop Star unveiled a candy that plays music while you eat it. The company says it uses something called “bone conduction technology,” which lets you hear songs — like tracks from Ice Spice and Akon — through the lollipop as you lick it or bite it in the back of your mouth, according to spokesperson Cassie Lawrence.

    The musical lollipops will go on sale after CES on Lollipop Star’s website for $8.99 each. And if that wasn’t enough star power, Akon was expected to visit the company’s booth Tuesday when CES opens to the public.


    Atlas holds up Hyundai’s (manufacturing) world

    Hyundai-owned Boston Dynamics publicly demonstrated its humanoid robot Atlas for the first time at the CES tech showcase, ratcheting up a competition with Tesla and other rivals to build robots that look like people and do things that people do.

    The company said a version of the robot that will help assemble cars is already in production and will be deployed by 2028 at Hyundai’s electric vehicle manufacturing facility near Savannah, Georgia.

    Delta Air Lines is taking entertainment to new heights as the “official airline” of the Sphere in Las Vegas. The airline announced a new multiyear partnership with Sphere Entertainment Co. that it says will deliver premium experiences to the venue, including a Delta SKY360° Club lounge.

    The carrier said SkyMiles members can unlock exclusive access to other experiences at the Sphere, starting during the final weekend of the Backstreet Boys’ residency in February with features including private suite seating, food and beverages. The partnership brings Delta branding to the Sphere’s massive exterior LED screen. Delta says more exclusive SkyMiles experiences will roll out in 2026 and beyond.

Associated Press

  • AMD unveils new AI PC processors for general use and gaming at CES | TechCrunch

    AMD Chair and CEO Lisa Su kicked off her keynote at CES 2026 with a message about what compute could deliver: AI for everyone.

As part of that promise, AMD announced a new line of AI processors, betting that AI-powered personal computers are the way of the future.

The semiconductor giant revealed the AMD Ryzen AI 400 Series processor, the latest version of its AI-powered PC chips, at the yearly CES conference on Monday. The company says the new Ryzen processors allow for 1.3x faster multitasking than competing chips and are 1.7x faster at content creation.

The new chips feature 12 CPU cores (the individual processing units inside a processor) and 24 threads (the independent streams of instructions those cores can execute).
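
To see the cores-versus-threads distinction on your own machine, here is a minimal Python sketch (an illustration for this article, not AMD tooling); it uses the standard library plus the third-party psutil package:

    # Count physical cores vs. logical threads on the current machine.
    # Requires: pip install psutil
    import os
    import psutil

    logical = os.cpu_count()                    # hardware threads the OS can schedule
    physical = psutil.cpu_count(logical=False)  # physical CPU cores

    # On a 12-core chip with simultaneous multithreading (two threads per core),
    # this prints: 12 physical cores, 24 logical threads
    print(f"{physical} physical cores, {logical} logical threads")

On a chip like the ones described above, the two counts differ by a factor of two because each core runs two hardware threads.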

    This is an upgrade to the Ryzen AI 300 Series processor that was announced in 2024. AMD started producing the Ryzen processor series in 2017.

Rahul Tikoo, senior vice president and general manager of AMD’s client business, said at the company’s recent press briefing that AMD has expanded to over 250 AI PC platforms. That represents 2x growth over the last year, he added.

    “In the years ahead, AI is going to be a multi-layered fabric that gets woven into every level of computing at the personal layer,” Tikoo said. “Our AI PCs and devices will transform how we work, how we play, how we create and how we connect with each other.”


    AMD also announced the release of the AMD Ryzen 7 9850X3D, the latest version of its gaming-focused processor.

    “No matter who you are and how you use technology on a daily basis, AI is reshaping everyday computing,” Tikoo said. “You have thousands of interactions with your PC every day. AI is able to understand, learn context, bring automation, provide deep reasoning and personal customization to every individual.”

PCs that include either the new Ryzen AI 400 Series processor or the AMD Ryzen 7 9850X3D processor will become available in the first quarter of 2026.

The company also announced the latest version of its Redstone ray tracing technology, which simulates the physical behavior of light, allowing for better video game graphics without a performance or speed lag.

Rebecca Szkutak

  • New Google TV Update Is a Serious Bid to Get You to Watch AI Outputs from Your Couch

Google TV, the operating system mainly serving the successor devices to Google’s defunct Chromecast line of products, is far from ubiquitous compared to the overwhelmingly more popular Roku operating system and Samsung’s Tizen. But for what it’s worth, GTV is the one trying the hardest to shoehorn AI into the user experience, and an upcoming change announced Monday at CES will bring image and video generation, via Google Gemini’s Nano Banana text-to-image model family, to your TV.

    Like anything announced at CES, the implied promise is that people will want to use this, and the suite of features being described here is, I have to admit, intriguing. 

There are some AI assistant features mentioned in this announcement, but since the advantage Google TV has over most smart TV operating systems is that it’s connected to your Google account, the most interesting change is that Gemini will be able to search your Google Photos library and apply the Nano Banana features you may have already futzed around with on your smartphone, this time from the comfort of your couch. That means adding uncanny effects to your family photos via the Photos Remix feature and, according to Google’s press release about the update, the ability to “transform memories into cinematic immersive slideshows.”

    This next ability is listed separately in Google’s press release, even though it sounds a bit like the first: “Use Nano Banana and Veo to reimagine your personal photos or create original media directly on your TV.” 

    As photos accompanying the announcement make clear, much of what’s on offer here is designed to, well, get TV viewers to watch a slop generator.  

    In one image, Google AI Premium users are invited to create videos. Another shows the actual video creation interface, which has what look like Pixar-style animated sample videos with suggested prompts like, “Fluff fish swimming on coral reefs made with squishy yarn.” There’s a popup at the bottom of this menu with the text, “Describe your video…” Below that is instructional text about pressing and holding the mic button on your remote to talk.

    It all paints a picture of an activity you’re meant to enjoy in your living room: the “generate videos of our family members” game, perhaps. But the window dressing is more wholesome and kid-oriented than Sora’s more brainrot-forward approach to user-generated video.

    Anecdotally, most people I know who tried Sora had their curiosity slaked after a few days on the app, and don’t really revisit it. I can see that being a problem with generating custom videos on Google TV as well. But there is, at the very least, something novel about messing around with AI while curled up with the dog and a bowl of popcorn. 

     

    Google’s release says these features will come to certain TCL devices first, and will expand to the rest of the Google TV universe “over the coming months.” 

Mike Pearl

  • AI-generated images and clips shared after Maduro’s capture

    After the Trump administration captured Venezuelan leader Nicolás Maduro and his wife, Cilia Flores, images and videos that claimed to show the aftermath went viral on social media. 

    “Venezuelans are crying on their knees thanking Trump and America for freeing them from Nicolas Maduro,” the caption of one Jan. 3 X post read. 

    The arrest unleashed complicated reactions in the U.S. and abroad. But that X post and other images and videos like it were generated with artificial intelligence, clouding social media with an inaccurate record.

An X user said he created the viral image of Maduro’s capture with AI

    Facebook and X users shared an image of Maduro with his hands behind his back, with soldiers in fatigues flanking him and holding his arms. One of the soldiers has the letters DEA — which stands for the Drug Enforcement Administration — on his uniform. The image is timestamped Jan. 3. Conservative activist Benny Johnson shared the image in a Jan. 3 Facebook post that was shared 14,000 times. 

    Tal Hagin, an open source intelligence analyst, found that the image appeared to have been created by X user Ian Weber, who describes himself as an “AI video art enthusiast.” In a Jan. 5 X post, Weber said, “This photo I created with AI went viral worldwide.”

    Hagin also shared an analysis by Gemini, Google’s AI model, that said the image was created with Google AI.

PolitiFact found uncropped versions of the image, which we used to prompt Gemini. Gemini found that the image contains SynthID, the watermark embedded in images created with Google’s AI tools. The watermark is invisible to humans but detectable by Google’s technology.

    Trump shared an image on Truth Social on Jan. 3 that he said shows “Maduro on board the USS Iwo Jima.” News outlets also released pictures of Maduro in U.S. custody, in which he is wearing a light blue jacket. In the real image, he is with DEA Administrator Terry Cole, who is not wearing fatigues.

    Images of New York protest, celebration in Venezuela show signs of AI

    A Jan. 4 Facebook post shared two images with the caption, “Right now, Americans are marching in New York chanting… ‘Hands off Venezuela,’ ‘Stop the war,’ ‘Free Venezuela’ …while actual Venezuelans are celebrating in the streets because a real dictator is finally gone.”

    The images show signs of being created with AI. The text on some of the protest signs is illegible, and some of the Venezuelan flags are inaccurate. The real Venezuelan flag has eight stars in an arc, and yellow, blue and red horizontal stripes. One Venezuelan flag in the image has the wrong colors, one had only seven stars, and two showed the stars forming a shape other than an arc.

    Protest signs show illegible text. Supposed Venezuelan flags include the wrong colors, or have an inaccurate shape or number of stars. (Screenshots from Facebook)

A protest did occur in Times Square on Jan. 3, but these images do not show it. 

    Videos of Venezuelans reacting show inconsistencies

    The X account “Wall Street Apes” shared a video with the text, “Venezuelans take to the streets to celebrate Maduro’s downfall,” which got 5.3 million views. 

The first clip shows an elderly woman kneeling in the street, clutching a flag and crying, while the second and third clips show young men saying in Spanish, “The dictator finally fell.” The fourth clip shows an elderly woman — wearing a shirt similar, but not identical, to the kneeling woman’s — thanking Trump.

    The earliest version of this video that we found was uploaded Jan. 3 by the TikTok account “curiosmindusa.” The account has shared other AI-generated videos, including fake clips of Trump. 

Some inconsistencies in the videos show they were AI-generated. In the first clip, a girl disappears in the background, and a flag disappears after a man waves it. The second, third and fourth clips showed inaccurate flags: the stars formed the wrong shape or appeared in the wrong number.

Venezuelan flags show stars that form the wrong shape or appear in the wrong number. (Screenshots from TikTok)

    These images and videos were AI-generated and do not depict real events. We rate them Pants on Fire!

    PolitiFact Staff Writer Maria Briceño contributed to this report. 

    RELATED: Fact-checking Donald Trump following U.S. attacks on Venezuela and capture of Nicolás Maduro

  • Can AI chatbots trigger psychosis in vulnerable people?

    Artificial intelligence chatbots are quickly becoming part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most, that interaction feels harmless. However, mental health experts now warn that for a small group of vulnerable people, long and emotionally charged conversations with AI may worsen delusions or psychotic symptoms.

    Doctors stress this does not mean chatbots cause psychosis. Instead, growing evidence suggests that AI tools can reinforce distorted beliefs among individuals already at risk. That possibility has prompted new research and clinical warnings from psychiatrists. Some of those concerns have already surfaced in lawsuits alleging that chatbot interactions may have contributed to serious harm during emotionally sensitive situations.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    What psychiatrists are seeing in patients using AI chatbots

    Psychiatrists describe a repeating pattern. A person shares a belief that does not align with reality. The chatbot accepts that belief and responds as if it were true. Over time, repeated validation can strengthen the belief rather than challenge it.

    Mental health experts warn that emotionally intense conversations with AI chatbots may reinforce delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/picture alliance via Getty Images)

    Clinicians say this feedback loop can deepen delusions in susceptible individuals. In several documented cases, the chatbot became integrated into the person’s distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic raises concern when AI conversations are frequent, emotionally engaging and left unchecked.

    Why AI chatbot conversations feel different from past technology

    Mental health experts note that chatbots differ from earlier technologies linked to delusional thinking. AI tools respond in real time, remember prior conversations and adopt supportive language. That experience can feel personal and validating. 

    For individuals already struggling with reality testing, those qualities may increase fixation rather than encourage grounding. Clinicians caution that risk may rise during periods of sleep deprivation, emotional stress or existing mental health vulnerability.

    How AI chatbots can reinforce false or delusional beliefs

    Doctors say many reported cases center on delusions rather than hallucinations. These beliefs may involve perceived special insight, hidden truths or personal significance. Chatbots are designed to be cooperative and conversational. They often build on what someone types rather than challenge it. While that design improves engagement, clinicians warn it can be problematic when a belief is false and rigid.

    Mental health professionals say the timing of symptom escalation matters. When delusions intensify during prolonged chatbot use, AI interaction may represent a contributing risk factor rather than a coincidence.

    Psychiatrists say some patients report chatbot responses that validate false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)

    What research and case reports reveal about AI chatbots

    Peer-reviewed research and clinical case reports have documented people whose mental health declined during periods of intense chatbot engagement. In some instances, individuals with no prior history of psychosis required hospitalization after developing fixed false beliefs connected to AI conversations. International studies reviewing health records have also identified patients whose chatbot activity coincided with negative mental health outcomes. Researchers emphasize that these findings are early and require further investigation.

    A peer-reviewed Special Report published in Psychiatric News titled “AI-Induced Psychosis: A New Frontier in Mental Health” examined emerging concerns around AI-induced psychosis and cautioned that existing evidence is largely based on isolated cases rather than population-level data. The report states: “To date, these are individual cases or media coverage reports; currently, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.” The authors emphasize that while reported cases are serious and warrant further investigation, the current evidence base remains preliminary and heavily dependent on anecdotal and nonsystematic reporting.

    What AI companies say about mental health risks

    OpenAI says it continues working with mental health experts to improve how its systems respond to signs of emotional distress. The company says newer models aim to reduce excessive agreement and encourage real-world support when appropriate. OpenAI has also announced plans to hire a new Head of Preparedness, a role focused on identifying potential harms tied to its AI models and strengthening safeguards around issues ranging from mental health to cybersecurity as those systems grow more capable.

    Other chatbot developers have adjusted policies as well, particularly around access for younger audiences, after acknowledging mental health concerns. Companies emphasize that most interactions do not result in harm and that safeguards continue to evolve.

    What this means for everyday AI chatbot use

    Mental health experts urge caution, not alarm. The vast majority of people who interact with chatbots experience no psychological issues. Still, doctors advise against treating AI as a therapist or emotional authority. Those with a history of psychosis, severe anxiety or prolonged sleep disruption may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also pay attention to behavioral changes tied to heavy chatbot engagement.

    Researchers are studying whether prolonged chatbot use may contribute to mental health declines among people already at risk for psychosis. (Photo Illustration by Jaque Silva/NurPhoto via Getty Images)

    Tips for using AI chatbots more safely

    Mental health experts stress that most people can interact with AI chatbots without problems. Still, a few practical habits may help reduce risk during emotionally intense conversations.

    • Avoid treating AI chatbots as a replacement for professional mental health care or trusted human support.
    • Take breaks if conversations begin to feel emotionally overwhelming or all-consuming.
    • Be cautious if an AI response strongly reinforces beliefs that feel unrealistic or extreme.
    • Limit late-night or sleep-deprived interactions, which can worsen emotional instability.
    • Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.

    If emotional distress or unusual thoughts increase, experts say it is important to seek help from a qualified mental health professional.

    Kurt’s key takeaways

    AI chatbots are becoming more conversational, more responsive and more emotionally aware. For most people, they remain helpful tools. For a small but important group, they may unintentionally reinforce harmful beliefs. Doctors say clearer safeguards, awareness and continued research are essential as AI becomes more embedded in our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.

    As AI becomes more validating and humanlike, should there be clearer limits on how it engages during emotional or mental health distress? Let us know by writing to us at Cyberguy.com.

    Copyright 2025 CyberGuy.com.  All rights reserved.

  • Altered and misleading images proliferate on social media amid Maduro’s capture

    AI-generated images, old videos and altered photos proliferated on social media in the hours following former Venezuelan President Nicolás Maduro’s capture. Several of these images quickly went viral, fueling false information online. 

    CBS News analyzed circulating images by comparing them to verified content and by using publicly available tools such as reverse image search. In some cases, CBS News ran images through AI detection tools, which can be inconsistent or inaccurate but can still help flag possibly manipulated content.

    Checking the source of the content, along with the date, location and other news coverage, can help determine whether an image is accurate, according to experts.
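
    One simple programmatic check in that toolbox is a perceptual hash, which stays roughly stable when an image is resized or recompressed and so can tie a viral copy back to a verified original. Below is a minimal sketch using the Pillow and imagehash Python libraries; the filenames and the distance cutoff are illustrative assumptions, not a description of CBS News’s actual workflow.

        # Hedged sketch: compare a dubious image to a verified photo with
        # perceptual hashing. Filenames and the cutoff are assumptions.
        from PIL import Image
        import imagehash

        dubious = imagehash.phash(Image.open("viral_post.jpg"))   # hypothetical
        verified = imagehash.phash(Image.open("wire_photo.jpg"))  # hypothetical

        # Subtracting two hashes gives their Hamming distance: 0 means
        # near-identical, and small values suggest the same underlying image
        # after cropping, resizing or recompression.
        distance = dubious - verified
        print(f"hash distance: {distance}")
        if distance <= 8:  # a common but arbitrary similarity cutoff
            print("Likely the same underlying image.")
        else:
            print("Probably different images; keep checking source and date.")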

    AI-generated images of Maduro flood social media

    After President Trump announced Maduro’s capture in a social media post early Saturday morning, questions brewed about the logistics of the mission, where Maduro would be flown and the future of Venezuela. Meanwhile, images of Maduro that were likely manipulated or generated with AI tools circulated on social media, garnering millions of views and thousands of likes across platforms. 

    One photo purporting to show Maduro after his capture was shared widely, including by the mayor of Coral Gables, Florida, Vince Lago, and in a joint Instagram post by two popular conservative content accounts with over 6 million combined followers. Using Google’s SynthID tool, the CBS News Confirmed team found the photo was likely edited or generated using Google AI.

    CBS News also found a video generated from the photo, showing military personnel escorting Maduro from an aircraft. It was posted around 6:30 a.m. — 12 hours before CBS News reported that a person in shackles was seen disembarking the plane carrying Maduro and confirmed his eventual arrival Saturday evening at the Metropolitan Detention Center, a federal facility in Brooklyn.

    Another unverified photo that made the rounds on social media depicts Maduro in an aircraft with U.S. soldiers. Two different AI detection tools gave inconsistent results as to its authenticity, and CBS News was not able to confirm its legitimacy.

    On Saturday, Mr. Trump posted an image captioned “Nicolas Maduro on board the USS Iwo Jima” after the South American leader’s capture. Later that evening, the White House Rapid Response account shared a video that appeared to show Maduro being escorted down a hallway by federal agents.

    Old images recirculate

    Old videos and images from past events recirculated, purporting to show reactions to Maduro’s capture and strikes in Caracas. One video showing people tearing down a billboard image of Maduro dates as far back as July 2024. Another video purporting to show a strike in Venezuela had circulated on social media as far back as June 2025.

    Another image showing a man with a sack over his head while sitting in the back of a car circulated widely, sparking online speculation as to whether the photo showed Maduro’s capture. Many users flagged that the photo was probably not of Maduro, but as of this afternoon the post had 30,000 likes and over a thousand reposts. A Daily Mail article from 2023 reported that the photo shows Saddam Hussein after his capture, sitting with a Delta Force member, but CBS has not independently confirmed this.

    CBS News reached out to X and Meta regarding the companies’ policies on AI-generated images, but has not received a response. X’s rules page says it may label posts containing synthetic and manipulated media, and Meta says it prohibits AI that contributes to misinformation or disinformation.

  • What to Expect From CES 2026, the Annual Show of All Things Tech?

    LAS VEGAS (AP) — With the start of the New Year squarely behind us, it’s once again time for the annual CES trade show to shine a spotlight on the latest tech that companies plan to offer in 2026.

    The multi-day event, organized by the Consumer Technology Association, kicks off this week in Las Vegas, where advances across industries like robotics, healthcare, vehicles, wearables, gaming and more are set to be on display.

    Artificial intelligence will be anchored in nearly everything, again, as the tech industry explores offerings consumers will want to buy. AI industry heavyweight Jensen Huang will be taking the stage to showcase Nvidia’s latest productivity solutions, and AMD CEO Lisa Su will keynote to “share her vision for delivering future AI solutions.” Expect AI to come up in other keynotes, like from Lenovo’s CEO, Yuanqing Yang.

    The AI industry is out in full force tackling issues in healthcare, with a particular emphasis on changing individual health habits to treat conditions — such as Beyond Medicine’s prescription app focused on a particular jaw disorder — or addressing data shortages in subjects such as breast milk production.

    Expect more unveils around domestic robots too. Korean tech giant LG already has announced it will show off a helper bot named “CLOiD,” which allegedly will handle a range of household tasks. Hyundai also is announcing a major push on robotics and manufacturing advancements. Extended reality, basically a virtual training ground for robots and other physical AI, is also in the buzz around CES.

    In 2025, more than 141,000 attendees from over 150 countries, regions, and territories attended CES. Organizers expect around the same numbers for this year’s show, with more than 3,500 exhibitors across the floor space this week.

    The AP spoke with CTA Executive Chair and CEO Gary Shapiro about what to expect for CES 2026. The conversation has been edited for clarity and length.


    What are the main themes we can expect this week?

    Well, we have a lot at this year’s show.

    Obviously, using AI in a way that makes sense for people. We’re seeing a lot in robotics. More robots and humanoid-looking robots than we’ve ever had before.

    We also see longevity in health, there’s a lot of focus on that. All sorts of wearable devices for almost every part of the body. Technology is answering healthcare’s gaps very quickly and that’s great for everyone.

    Mobility is big with not only self-driving vehicles but also with boats and drones and all sorts of other ways of getting around. That’s very important.

    And of course, content creation is always very big.


    Is 2026 the year we finally see humanoid robots in people’s homes?

    You are seeing humanoid robots right now. It sometimes works, sometimes doesn’t.

    But yes, there are more and more humanoid robots. And when we talk about CES 5, 10, 15, 20 years from now, we’re going to see an even larger range of humanoid robots.

    Obviously, last year we saw a great interest in them. The number one product of the show was a little robotic dog that seems so life-like and fun, and affectionate for people that need that type of affection.

    But of course, the humanoid robots are just one aspect of that industry. There’s a lot of specialization in robot creation, depending on what you want the robot to do. And robots can do many things that humans can’t.


    Will we start seeing more innovative use of AI tools in entertainment?

    AI is the future of creativity.

    Certainly AI itself may be arguably creative, but the human mind is so unique that you definitely get new ideas that way. So I think the future is more of a hybrid approach, where content creators are working with AI to craft variations on a theme or to better monetize what they have to a broader audience.


    Any interesting AI-powered devices or services that consumers will want to buy?

    We’re seeing all sorts of different devices that are implementing AI. But we have a special focus at this show, for the first time, on the disability community. Verizon set this whole stage up where we have all different ways of taking this technology and having it help people with disabilities and older people.


    Are you concerned about a potential AI bubble?

    Well, there’s definitely no bubble when it comes to what AI can do. And what AI can do is perform miracles and solve fundamental human problems in food production and clean air and clean water. Obviously in healthcare, it’s gonna be overwhelming.

    But this was like the internet itself. There was a lot of talk about a bubble, and there actually was a bubble. The difference is that in the late 1990s there basically were no revenue models. Companies were raising a lot of money with no plans for revenue.

    These AI companies have significant revenues today, and companies are investing in it.

    What I’m more concerned about, honestly, is not Wall Street and a bubble. Others can be concerned about that. I’m concerned about getting enough energy to process all that AI. And at this show, for the first time, we have a Korean company showing the first ever small-scale nuclear-powered energy creation device. We expect more and more of these people rushing to fill this gap because we need the energy, we need it clean and we need a kind of all-of-the-above solution.

    Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

  • Boston Dynamics is training an AI-powered humanoid robot to do factory work

    With rapid advances in artificial intelligence, computer scientists and engineers are making progress in developing robots that look and act like humans. A global race is underway to develop humanoid robots for widespread use. 

    Boston Dynamics has established itself as a frontrunner in the field. With support from South Korean carmaker Hyundai, which owns an 88% stake in Boston Dynamics, the Massachusetts company is testing a new generation of its humanoid robot, Atlas. 

    This past October, a 5-foot-9-inch, 200-pound Atlas was put to the test at Hyundai’s new Georgia factory, where it practiced autonomously sorting roof racks for the assembly line.

    Today’s AI-powered humanoids are learning movements that, until recently, were considered a step too far for a machine, according to Scott Kuindersma, who is the head of robotics research at Boston Dynamics.

    “A lot of this has to do with how we’re going about programming these robots now, where it’s more about teaching, and demonstrations, and machine learning than manual programming,” Kuindersma said.

    How Atlas is trained

    When 60 Minutes visited Boston Dynamics’ headquarters in 2021, Atlas was a bulky, hydraulic robot that could run and jump. Back then, Atlas relied on algorithms written by engineers. The Atlas of today is sleeker, with an all-electric body and an AI brain powered by Nvidia’s advanced microchips, making it smart enough to master hard-to-believe feats.

    Atlas learns in several ways. At Boston Dynamics, machine learning scientist Kevin Bergamin demonstrated an example of supervised learning. Wearing a virtual reality headset, Bergamin took direct control of the humanoid and guided its hands and arms through each task until Atlas succeeded.

    “That generates data that we can use to train the robot’s AI models to then later do that task autonomously,” Kuindersma said.
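
    The workflow Kuindersma describes is what researchers call behavioral cloning: logged pairs of robot observations and teleoperator actions become supervised training data for a policy network. The PyTorch sketch below illustrates the idea only; the dimensions, architecture and random stand-in data are assumptions, since Boston Dynamics’ actual models and training setup are not public.

        # Hedged sketch of behavioral cloning: train a policy network to
        # imitate a teleoperator's demonstrated actions. All sizes and data
        # here are illustrative stand-ins, not Boston Dynamics' real setup.
        import torch
        import torch.nn as nn

        OBS_DIM, ACT_DIM = 64, 16  # assumed sensor and joint-command sizes

        policy = nn.Sequential(
            nn.Linear(OBS_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACT_DIM),
        )
        optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

        # Stand-in for logged (observation, operator action) demonstrations.
        demo_obs = torch.randn(10_000, OBS_DIM)
        demo_act = torch.randn(10_000, ACT_DIM)

        for epoch in range(10):
            pred = policy(demo_obs)                        # robot's guess
            loss = nn.functional.mse_loss(pred, demo_act)  # match the human
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"final imitation loss: {loss.item():.4f}")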

    Boston Dynamics head of robotics research Scott Kuindersma and Bill Whitaker

    60 Minutes


    Another teaching technique involves a motion capture body suit. 60 Minutes correspondent Bill Whitaker wore the suit while performing jumping jacks.

    Since Atlas’ body is different from Whitaker’s, the robot was trained to match his motions. Data collected by the motion capture suit was fed into Boston Dynamics’ machine learning process. 

    More than 4,000 digital Atlases trained for six hours in simulation. The simulation added challenges for the avatars, like slippery floors, inclines or stiff joints, and homed in on the best way for Atlas to perform the jumping jacks.

    The Boston Dynamics team then uploaded the new skill into the AI system that controls every Atlas robot. Once one was trained, they were all trained. At the end of the process, Atlas performed jumping jacks that looked just like Whitaker’s. 
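
    In machine-learning terms, this simulation step resembles domain randomization: each simulated robot trains under slightly different physics so the learned skill survives the gap between simulation and imperfect real-world hardware. The toy sketch below conveys only the idea; the parameter ranges, the fake scoring and the function names are invented for illustration.

        # Hedged toy sketch of domain randomization: every simulated trainee
        # draws its own physics, echoing the slippery floors, inclines and
        # stiff joints described above. All values here are invented.
        import random

        def randomized_physics():
            # Sample one simulated world's conditions.
            return {
                "floor_friction": random.uniform(0.2, 1.0),   # slick to grippy
                "incline_deg": random.uniform(-5.0, 5.0),     # tilted ground
                "joint_stiffness": random.uniform(0.8, 1.2),  # worn vs. new
            }

        def train_in_simulation(num_robots=4_000):
            scores = []
            for _ in range(num_robots):
                physics = randomized_physics()
                # A real pipeline would roll out a physics simulator under
                # `physics` and update the policy; we fake a skill score.
                scores.append(random.random() * physics["floor_friction"])
            return sum(scores) / len(scores)

        print(f"mean simulated skill score: {train_in_simulation():.3f}")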

    Trained with the same technique, Atlas demonstrated the ability to run, crawl, skip, and dance.

    There are limitations, Kuindersma said. Atlas isn’t proficient at performing most of the routine tasks that people do in their daily lives, like putting on clothes or pouring a cup of coffee. 

    “There are no humanoids that do that nearly as well as a person,” Kuindersma said. “But I think the thing that’s really exciting now is we see a pathway to get there.”

    The future of humanoids 

    Boston Dynamics CEO Robert Playter spearheaded the company’s humanoid development. 

    “There’s a lot of excitement in the industry right now about the potential of building robots that are smart enough to really become general purpose,” he said. 

    Boston Dynamics CEO Robert Playter

    60 Minutes


    Goldman Sachs predicts the market for humanoids will reach $38 billion within the decade. Boston Dynamics and other U.S. robot makers are fighting to come out on top. State-supported Chinese companies are also in the race. 

    “The Chinese government has a mission to win the robotics race,” Playter said. “Technically I believe we remain in the lead. But there’s a real threat there that, simply through the scale of investment, we could fall behind.”

    Should humans be worried about humanoids?

    As fears grow that AI will displace workers, humanoid robots are learning to perform human tasks. Boston Dynamics is training Atlas to do a job that human workers currently handle at Hyundai’s Georgia plant.

    Playter said it could be several years before Atlas becomes a full-time worker at Hyundai, but he predicted that humanoids will change the nature of work.

    “The really repetitive, really backbreaking labor is going to end up being done by robots. But these robots are not so autonomous that they don’t need to be managed. They need to be built. They need to be trained. They need to be serviced.”

    Playter said there are benefits to creating robots like Atlas, which can move in ways that humans can’t. 

    Atlas humanoid

    60 Minutes


    “We would like [robots] that could be stronger than us or tolerate more heat than us or definitely go into a dangerous place where we shouldn’t be going,” he said. “So you really want superhuman capabilities.”

    Still, Playter said there’s no reason to worry about a future like the one depicted in “The Terminator.”

    “[If you] saw how hard we have to work to get the robots to just do some of the straightforward tasks we want them to do, that would dispel that worry about sentience and rogue robots,” he said.

  • Boston Dynamics’ AI-powered humanoid robot is learning to work in a factory

    For decades, engineers have been trying to create robots that look and act human. Now, rapid advances in artificial intelligence are taking humanoids from the lab to the factory floor. As fears grow that AI will displace workers, a global race is underway to develop human-like robots able to do human jobs. Competitors include Tesla, startups backed by Amazon and Nvidia, and state-supported Chinese companies. Boston Dynamics is a frontrunner. The Massachusetts company, valued at more than a billion dollars, is hard at work on a humanoid it calls Atlas. South Korean carmaker Hyundai holds an 88% stake in the robot maker. We were invited to see the first real-world test of Atlas at Hyundai’s new factory near Savannah, Georgia. There, we got a glimpse of a humanoid future that’s coming faster than you might think.

    Hyundai’s sprawling auto plant is about as cutting-edge as it gets. More than 1,000 robots work alongside almost 1,500 humans, hoisting, stamping and welding in robotic unison. This may look like the factory of the future, but we found the future of the future in the parts warehouse, tucked away in the back corner, getting ready for work. 

    Meet Atlas: a 5-foot-9-inch, 200-pound, AI-powered humanoid created by Boston Dynamics. The rise of the robots is science fiction no more.

    Bill Whitaker: I have to say, every time I see it, I just can’t believe what my eyes are seeing. Is this the first time Atlas has been out of the lab?

    Zack Jackowski: This is the first time Atlas has been out of the lab doing real work.

    Bill Whitaker and Zack Jackowski

    60 Minutes


    Zack Jackowski heads Atlas development. He has two mechanical engineering degrees from MIT and a mission to turn the robot into a productive worker on the factory floor. We watched as Atlas practiced sorting roof racks for the assembly line without human help. 

    Bill Whitaker: So he’s working autonomously. 

    Zack Jackowski: Correct.

    Bill Whitaker: You’re down here to see how Atlas works in the field, and you’ll be showing Atlas off to your bosses at Hyundai?

    Zack Jackowski: Yeah. 

    Bill Whitaker: Do you feel like a proud papa? 

    Zack Jackowski: I feel like– a nervous engineer. 

    Jackowski has been preparing for this moment for a year. We first met him and Atlas a month earlier at Boston Dynamics’ headquarters just outside the city, where he and his team were teaching Atlas skills needed to work at Hyundai. And Atlas, with its AI brain, was gaining knowledge through experience – in other words, it seemed to be learning.

    Bill Whitaker: You know how crazy that sounds?

    Zack Jackowski: Yeah, a little bit. I– and I– I think a lot of our roboticists would’ve thought that was pretty crazy five, six years ago. 

    When 60 Minutes last visited Boston Dynamics in 2021, Atlas was a bulky, hydraulic robot that could run and jump. Back then, Atlas relied on algorithms written by engineers. When we dropped in again this past fall, we saw a new-generation Atlas with a sleek, all-electric body and an AI brain, powered by Nvidia’s advanced microchips, making Atlas smart enough to pull off hard-to-believe feats autonomously. We saw Atlas skip and run with ease.

    Bill Whitaker: Do you ever stop thinking, gee whiz?

    Scott Kuindersma: I remain extremely excited about where we are in the history of robotics but we see that there’s so much more that we can do, as well.

    Scott Kuindersma is head of robotics research, a job he proudly wears on his sleeve.

    Scott Kuindersma

    60 Minutes


    Bill Whitaker: You even have on a robot shirt.

    Scott Kuindersma: Well, once I saw that this shirt existed, there was no way I wasn’t buying it. 

    He told us robots today have learned to master moves that until recently were considered a step too far for a machine.

    Scott Kuindersma: And a lot of this has to do with how we’re going about programming these robots now, where it’s more about teaching, and demonstrations, and machine learning than manual programming.

    Bill Whitaker: So this humanoid, this mechanical human, can actually learn?

    Scott Kuindersma: Yes. And– and we found that that’s actually one of the most effective ways to program robots like that.

    Atlas learns in different ways. In supervised learning, machine learning scientist Kevin Bergamin – wearing a virtual reality headset – takes direct control of the humanoid, guiding its hands and arms, move-by-move through each task until Atlas gets it.

    Scott Kuindersma: And if that teleoperator can perform the task that we want the robot to do, and do it multiple times, that generates data that we can use to train the robot’s AI models to then later do that task autonomously. 

    Kuindersma used me to demonstrate another way Atlas learns.

    Scott Kuindersma: That v– very stylish suit that you’re wearing is actually gonna capture all of your body motion to train Atlas to try to mimic exactly your motions. And so you’re about to become a 200-pound metal robot.

    He asked me to pick an exercise. They captured the way I work as well.

    Bill Whitaker: I am here at the AI Lab at Boston Dynamics. All of my movements, my walking, my d– arm gestures are being picked up by these sensors…

    Then engineers put my data into their machine learning process. Atlas’ body is different from mine, so they had to teach it to match my movements virtually – more than 4,000 digital Atlases trained for six hours in simulation.

    Atlas humanoid

    60 Minutes


    Scott Kuindersma: And they’re all trying to do jumping jacks, just like you. And as you can see, they’re just starting to learn, so they’re not very good at it.

    The simulation, he told us, added challenges for the avatars, like slippery floors, inclines, or stiff joints, and then homed in on what works best.

    Scott Kuindersma: And it can eventually get to a state where we have many copies of Atlas doing really good jumping jacks. 

    They uploaded this new skill into the AI system that controls every Atlas robot. Once one is trained, they’re all trained.

    Scott Kuindersma: So that’s what you look like when you’re exercising. 

    Bill Whitaker: Uh-huh.

    And what I look like doing my job.

    Bill Whitaker: I am here at the AI Lab at Boston Dynamics. All of my movements, my walking, my d– arm gestures are being picked up by these sensors … 

    Bill Whitaker: This is mind-blowing.

    Through the same processes, Atlas was taught to crawl and do cartwheels. It didn’t fare as well with the duck walk.

    Scott Kuindersma: Oh, that was fun. And then this happens.

    Bill Whitaker: And then this happens. 

    Scott Kuindersma: We love when things like this happen, actually. Because it’s often an opportunity to understand something we didn’t know about the system.

    Bill Whitaker: What are some of the limitations you see now?

    Scott Kuindersma: Well, I’d- I would say that most things that a person does in their daily lives, Atlas or– other humanoids can’t really do that yet. I think we’re start–

    Bill Whitaker: Like- like what?

    Scott Kuindersma: Well, just putting on clothes in the morning, or pouring your cup of coffee and walking around the house with it.

    Bill Whitaker: That’s too difficult for– for Atlas?

    Scott Kuindersma: Yeah, I think there are no humanoids that do that nearly as well as a person would do that. But I think the thing that’s really exciting now is we see a pathway to get there. 

    A pathway provided by AI. What stands out in this Atlas is its brain. Nvidia chips – the ones that helped launch the AI revolution with ChatGPT – process the flood of collected data, moving this humanoid robot closer to something like common sense.

    Scott Kuindersma: So the analogy might be if I was teaching a child how to do free throws in basketball, if I allow them to just explore and come up with their own solutions, sometimes they can come up with a solution that I didn’t anticipate. And that’s true for these systems as well.

    Atlas can see its surroundings and is figuring out how the physical world works. 

    Scott Kuindersma: So that some day you can put a robot like this in a factory and just explain to it what would– you would like it to do, and it has enough knowledge about how the world works that it has a good chance of doing it.

    Robert Playter: There’s a lot of excitement in the industry right now about the potential of building robots that are smart enough to really become general purpose.

    Boston Dynamics CEO Robert Playter

    60 Minutes


    Robert Playter, the CEO of Boston Dynamics, spearheaded the company’s humanoid development. He’s been building toward this moment for more than 30 years. The cornerstone was this robotic dog, Spot, introduced almost a decade ago. Spots are trained in heat, cold and varied terrain, and roam the halls of Boston Dynamics.

    Robert Playter: So we have some cameras– thermal sensors, acoustic sensors. An array of sensors on its back that lets it collect data about the health of a factory.

    Spots carry out quality control checks at Hyundai, making sure the cars have the right parts. They conduct security and industrial inspections at hundreds of sites around the world. What began with Spot has evolved into Atlas. 

    Robert Playter: So this robot is capable of superhuman motion, and so it’s gonna be able to exceed what we can do. 

    Bill Whitaker: So you are creating a robot that is meant to exceed the capabilities of humans.

    Robert Playter: Why not, right? We– we would like things that could be stronger than us or tolerate more heat than us or definitely go into a dangerous place where we shouldn’t be going. So you really want superhuman capabilities. 

    Bill Whitaker: To a lotta people that sounds scary. You don’t foresee– a world of Terminators? 

    Robert Playter: Absolutely not. I think if you saw how hard we have to work to get the robots to just do some of the straightforward tasks we want them to do, that would dispel that– that worry about sentience and rogue robots. 

    We wondered if people might have more immediate concerns. We saw workers doing a job at the Hyundai plant that Atlas is being trained to perform. 

    Bill Whitaker: I guarantee you there are going to be people who will say, “I’m gonna lose my job to a robot.” 

    Robert Playter: Work does change. So the really repetitive, really back-breaking labor is really- is gonna end up being done by robots. But these robots are not so autonomous that they don’t need to be managed. They need to be built. They need to be trained. They need to be serviced. 

    Playter told us it could be several years before Atlas joins the Hyundai workforce fulltime. Goldman Sachs predicts the market for humanoids will reach $38 billion within the decade. Boston Dynamics and other U.S. robot makers are fighting to come out on top. But they’re not the only ones in the ring. Chinese companies are proving to be formidable challengers. They’re running to win.

    Bill Whitaker: Are they outpacing us? 

    Robert Playter: The Chinese government has a mission to win the robotics race. Technically I believe we remain– in the lead. But there’s a real threat there that, simply through the scale of investment– we could fall behind. 

    To stay ahead, Hyundai made that big investment in Boston Dynamics.

    Zack Jackowski: Four robots…

    We were at the Georgia plant when Atlas engineer Zack Jackowski presented Atlas to Heung-soo Kim, Hyundai’s head of global strategy. He came all the way from South Korea to check in on the brave new world the carmaker is funding. 

    Bill Whitaker: What do you think of the progress that they’ve made with Atlas?

    Heung-soo Kim: I think we are on track- about the development. Atlas, so far, it’s very successful. It’s a kind of– a start of great journey. Yeah.

    The destination? That humanoid future we mentioned at the start – robots like us working beside us, walking among us. It’s enough to make your head spin.

    Produced by Marc Lieberman. Associate producer, Cassidy McDonald. Broadcast associate, Mariah Johnson. Edited by Matt Richman.
