ReportWire

Tag: Artificial Intelligence

  • Uber unveils a new robotaxi with no driver behind the wheel

    [ad_1]

    NEWYou can now listen to Fox News articles!

    Uber is getting closer to offering rides with no one behind the wheel. 

    The company recently unveiled a new robotaxi and confirmed that autonomous testing is already underway on public roads in the San Francisco Bay Area. While the vehicle first appeared earlier this month at the Consumer Electronics Show 2026, the bigger story now is what is happening after the show.

    These robotaxis are no longer confined to presentations or closed courses. They are driving in real traffic as Uber prepares for a public launch later this year.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    PRIVATE AUTONOMOUS PODS COULD REDEFINE RIDE-SHARING

    Uber’s new robotaxi operates on public roads in the San Francisco Bay Area as the company moves closer to offering fully driverless rides later this year. (Klaudia Radecka/NurPhoto via Getty Images)

    Who is behind Uber’s robotaxi

    Uber is the name most riders recognize. However, two partners handle the technology behind the scenes. Lucid Group builds the all-electric vehicle. It is based on the Lucid Gravity SUV, which was designed for long-range efficiency and passenger comfort. Nuro provides the self-driving system. Nuro also leads testing and safety validation. Together, the three companies are developing a robotaxi service that will be available only through Uber.

    Uber’s robotaxi is already driving itself

    Autonomous on-road testing began last month in the Bay Area. These tests take place on public streets rather than private test tracks. Nuro runs the testing program using trained safety operators who supervise each trip. The focus is on everyday driving situations such as intersections, lane changes, traffic lights and pedestrians. This stage is critical. It allows engineers to evaluate how the system behaves in real conditions before opening rides to the public.

    What makes Uber’s robotaxi different

    Uber’s robotaxi was designed from the start to operate without a driver. It combines electric vehicle engineering with visible autonomy features that riders can understand.

    Key features include:

    • A multi-sensor system using cameras, lidar and radar for full awareness
    • A low-profile roof-mounted Halo module integrated into the vehicle
    • Exterior LED displays that show rider initials and trip status
    • In-cabin screens for climate, music and support access
    • Real-time visuals that show what the vehicle sees and plans to do
    • Seating for up to six passengers with room for luggage

    The robotaxi runs on high-performance computing powered by NVIDIA DRIVE AGX Thor. This system handles the real-time AI processing required for autonomous driving.

    A robotaxi ride that explains itself

    One standout feature is transparency. Riders can see how the robotaxi perceives the road and plans its next move. The display shows lane changes, yielding behavior, slowing at traffic lights and the planned drop-off point. This helps riders understand what the vehicle is doing instead of guessing. Inside the cabin, passengers can adjust heated seats, climate controls and music. They can also contact support or request the vehicle to pull over if needed.

    CAN AUTONOMOUS TRUCKS REALLY MAKE HIGHWAYS SAFER?

    Photo of an Uber insignia.

    The all-electric Uber robotaxi, built with partners Lucid and Nuro, is now navigating real traffic without a human driver. (INA FASSBENDER / AFP via Getty Images)

    Uber plans to scale robotaxis across the U.S. and global markets

    Uber plans to deploy 20,000 or more robotaxis over the next six years. These vehicles will operate in dozens of U.S. and international markets. Lucid will integrate all required hardware directly on the production line at its Casa Grande, Arizona factory. Uber will own and operate the vehicles along with third-party fleet partners. Every robotaxi ride will be booked through the Uber app, just like a standard Uber trip.

    How Uber is handling robotaxi safety and regulation

    Safety sits at the center of this rollout. Nuro’s validation process combines simulation, closed-course testing and supervised on-road driving. The system relies on an end-to-end AI foundation model paired with clear safety logic. The goal is predictable, comfortable driving across a wide range of conditions. Uber and its partners are also working with regulators, policymakers and local governments to ensure the service aligns with public safety standards and city planning goals.

    When Uber’s driverless rides are expected to launch

    Uber says the first autonomous rides will launch in a major U.S. city later in 2026. The service will be available exclusively through the Uber app. Production of the robotaxi is expected to begin later this year, pending final validation.

    What this means to you

    If you use Uber, driverless rides may soon appear as an option. These vehicles could offer quieter trips, more consistent driving and improved availability during peak times. For cities, a shared electric robotaxi fleet could help reduce emissions and congestion. For riders, seeing how the vehicle thinks and reacts may make autonomous travel feel less intimidating.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

    CES 2026 SHOWSTOPPERS: 10 GADGETS YOU HAVE TO SEE

    Uber sign outside of a building.

    Uber confirms autonomous testing is underway after unveiling its robotaxi at CES 2026, marking a major step toward a public launch. (INA FASSBENDER / AFP via Getty Images)

    Kurt’s key takeaways

    Uber’s robotaxi effort feels more grounded than many past autonomous promises. It combines a known ride-hailing platform a purpose-built electric vehicle and a self-driving system already operating on public roads. If testing continues to progress, driverless Uber rides could move from something new to something normal sooner than many expect.

    Would you get into an Uber if there was no driver sitting in the front seat? Let us know by writing to us at Cyberguy.com.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    Copyright 2026 CyberGuy.com. All rights reserved.

    [ad_2]

    Source link

  • Thanks But No Thanks on the Claudeswarms, Kevin Roose

    [ad_1]

    It’s generous of Kevin Roose, New York Times tech columnist and co-host of the Hard Fork podcast, to pity people who are toiling away without the benefit of claudeswarms. 

     

    In a January 25 X post, Roose said that he has “never seen such a yawning gap” between Silicon Valley insiders like him and outsiders. He says the people he lives near are “putting multi-agent claudeswarms in charge of their lives, consulting chatbots before every decision,” and “wireheading to a degree only sci-fi writers dared to imagine.”

    Hard Fork involves a great deal of guffawing from Roose—mostly directed at his more comedically nimble co-host Casey Newton—so it’s not lost on me that Roose is trying to layer some irony and exaggeration on top of his condescension in this post. He takes that mask right off, however, in his next one, in which he says he wants “to believe that everyone can learn this stuff,” but frets that perhaps, “restrictive IT policies have created a generation of knowledge workers who will never fully catch up.” 

    Recent Hard Fork episodes have been unusually enthusiastic about vibecoding—using AI tools to perform speedy software engineering. Once upon a time, Github Copilot and ChatGPT caused software engineers’ eyes to bug out because they could write code like a person, and you could run the code, and the code would work. Since around 2021 AI’s knack for coding has been steadily improving, and steering certain software engineers toward prophesies of various forms of Armageddon.

    For instance, Dario Amodei, the CEO of Claude parent company Anthropic, published one of these earlier today in the form of a 38-page blog post. “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it,” Amodei wrote. 

    Roose and Newton are not, first and foremost, software engineers, but Roose recently used Claude Code to make an app called Stash, an experience he talked about on Hard Fork. Stash is a read-later app like the discontinued Pocket, or the still-extant Instapaper. Stash, according to Roose, does “what I used to use Pocket for. Except now I own it and I can make changes to the app. And I made it, I would say in about two hours.” Well done. Sincerely. 

    In another episode of Hard Fork, listeners provided their own stories about what they’ve been vibecoding. Presumably these people didn’t used to code, and now they’re coding, which is admittedly kind of cool. One built a tool for wallpaper clients to calculate how much wallpaper they need to buy. Another built a gamification system for his kids’ housework. 

    With all due respect to these people and the neat stuff they’re pulling off with vibecoding, this is just people giving themselves busywork for fun. There’s nothing wrong with that, but that’s what it is. 

    It’s true that most people don’t have the knowledge to perform software engineering tasks, and it’s intriguing to try vibecoding if, like me, you’ve never coded anything. I’ve had LLMs make some rudimentary side-scrolling games, build ray-traced 3-D environments in javascript, and perform some other little experiments that glitched out. I learned a little about LLMs, but it didn’t change my life.

    Then again, I, like many people, am bored by optimization and productivity hacks, and it’s not in my nature to have software ideas that are purely software. In rare cases where I feel a creative spark that involves coding, the coding tends to be a small part of the idea, and the rest of the idea tends to involve a lot more engaging with the world than an LLM can do. For instance, I live in one of those neighborhoods where people go nuts with their Halloween decorations, and I’ve daydreamed about setting up festive lawn animatronics, but vibecoding a control system would only get me so far in the process of configuring my monsters. Most of the actual work would be me out in my yard with a power drill, wires, and stakes, futzing with my werewolf dummy, and Claude Code isn’t on the verge of getting that thing to stay upright on my lawn. 

    Roose and other AI fanatics are talking lately as if It’s. Finally. Here. They make it sound as if AI is really about to take off, and the normies need to strap in. 

     

    When Roose talks about these benighted “knowledge workers” outside of San Francisco, if he exclusively means software engineers struggling to accomplish tasks that could be performed by claudeswarms (Claudeswarms, in case you’re wondering, seem to be little virtual coder hives that carry out complex coding tasks), I suspect his pity is misplaced. If AI-inclined coders are not allowed to use the latest AI tools while they’re on the clock, and they’re also software engineers in their spare time, it stands to reason that they’re playing with AI toys at home if they want to

    And there can be little doubt that, half-joking or not, Roose’s experience of people in the Bay Area “wireheading” and constantly asking chatbots for life advice is real. That’s to be expected. They have a lot of other problems too, like a horrifying new habit of injecting themselves with peptide solutions they bought online

    It’s not at all surprising that people in San Francisco think AI is about to become the closest possible thing to a god, because it feels like it’s close to being the thing a lot of people in San Francisco think is a god: a software engineer. An understandable mistake. 

    But the rest of the pathetic knowledge workers who aren’t blessed to be in the AI haven of San Francisco don’t necessarily believe software engineers are all that powerful, and some of us are counting the months until next Halloween, and AI isn’t going to be much help getting our latex clowns to look scary by then. It probably never will, and that’s fine.  

    [ad_2]

    Mike Pearl

    Source link

  • AI robot brings emotional care to pets

    [ad_1]

    NEWYou can now listen to Fox News articles!

    Tuya Smart just introduced Aura, its first AI-powered companion robot made for pets.

    Aura is designed specifically for household cats and dogs, with AI trained to recognize their behaviors, movements and vocal cues. The idea behind Aura is simple. Pets need more than food bowls and cameras. They need attention, interaction and reassurance. 

    Aura stays active in the home, watches for behavior changes and responds in real time so owners can better understand how their pets are doing. Many pets struggle when left alone for long hours. Small changes often show up first. A dog may stop playing. A cat may hide or groom excessively. These signs can point to stress or anxiety. Aura steps in during those quiet stretches, offering engagement instead of an empty room.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    ROBOTS LEARN 1,000 TASKS IN ONE DAY FROM A SINGLE DEMO

    Tuya Smart unveils Aura, an AI-powered companion robot designed to interact with household cats and dogs and monitor behavior changes in real time. (Karen Warren/Houston Chronicle via Getty Images)

    Why emotional intelligence matters for pets

    Smart feeders and pet cameras handle the basics. Emotional care is a different story. Pets are social. When routines change, their mood can shift quickly. Aura tracks behavior and listens for changes in sound patterns. It can tell whether a pet feels excited, anxious, lonely, or relaxed. Aura then sends real-time emotional updates to an owner’s phone. That makes it easier to spot issues early rather than guessing after the fact.

    How Aura interacts with pets at home

    Aura acts more like a companion than a device that sits still. Several systems work together throughout the day to keep pets engaged. Key features include:

    • Laser play and treat dispensing for active interaction
    • Simulated pet sounds with expressive animated eyes
    • Voice interaction, which is designed to feel natural and responsive

    Instead of waiting for a button press, Aura looks for opportunities to engage. It turns long, quiet hours into moments of play and stimulation.

    Capturing moments that matter

    Aura also keeps an eye out for moments worth saving. Using AI pet recognition and intelligent tracking, it captures everyday highlights like playful bursts, calm naps and funny interactions. Aura can automatically turn these clips into short videos. That helps owners stay connected and feel closer to their pets even when they are away. It also makes it easier to capture moments you might never catch on your own and share them with family or post on social media.

    AI PHOTO MATCH REUNITES TEXAS WOMAN WITH LOST CAT AFTER 103 DAYS

    Cats in their home.

    Aura uses artificial intelligence to recognize pet movements, sounds and emotional cues, offering engagement and reassurance when pets are home alone. (OLEKSII FILIPPOV / AFP via Getty Images)

    How Aura moves and recharges on its own

    Movement is a big part of Aura’s role in the home. With V-SLAM navigation, binocular vision and AIVI object recognition, Aura moves freely while avoiding obstacles. When the battery runs low, it returns to its feeding and charging dock on its own. That keeps it ready without constant attention from owners.

    A bigger ecosystem around pet care

    Aura connects to Tuya’s broader ecosystem, which opens access to services beyond the home. These include smart pet boarding, health and medical care, behavior training, grooming, customization and community tools. Instead of handling one task, Aura becomes a central hub for pet care that can evolve over time.

    More than a pet robot

    Aura focuses on pets today, but the technology behind it reaches further. Emotional awareness, proactive assistance and ecosystem integration could also support elder care, home monitoring and family connectivity. Starting with pets gives Tuya a clear emotional use case while setting the stage for future home robotics.
     

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

    ROBOT STUNS CROWD AFTER SHOCKING ONSTAGE REVEAL

    Cats laying on the couch.

    The Aura robot moves through the home autonomously, playing with pets, dispensing treats and capturing video highlights for owners. (Gabe Souza/Portland Portland Press Herald via Getty Images)

    Kurt’s key takeaways

    Tuya has not shared a release date or pricing for Aura yet. The company unveiled the robot earlier this month at CES 2026, but details on availability and cost remain unclear. Those specifics are likely to come closer to a wider consumer launch. Even so, Aura signals a shift in how smart home technology shows up for pets. It moves beyond simple monitoring and leans into interaction and emotional awareness. If Aura delivers on its promise, it could help pet owners feel more comfortable leaving their pets home alone while staying connected throughout the day.

    If technology can read your pet’s emotions and respond in real time, would you trust it to become part of your home routine, or would that feel like too much? Let us know by writing to us at Cyberguy.com.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    Copyright 2026 CyberGuy.com. All rights reserved.

    [ad_2]

    Source link

  • X, Grok AI still allow users to digitally undress people without consent, as EU announces investigation

    [ad_1]

    London — A CBS News investigation has found that the Grok AI tool on Elon Musk’s X platform is still allowing users to digitally undress people without their consent. 

    The tool still worked Monday on both the standalone Grok app, and for verified X users in the U.K, the U.S. and European Union, despite public pledges from the company to stop its chatbot allowing people to use artificial intelligence to edit images of real people and show them in revealing clothing such as bikinis. 

    Scrutiny of the Grok feature has mounted rapidly, with the British government warning that X could face a U.K.-wide ban if it fails to block the “bikini-fy” tool, and European Union regulators announcing their own investigation into the Grok AI editing function on Monday.

    Elon Musk, chief executive officer of xAI, during the World Economic Forum (WEF) in Davos, Switzerland, on Thursday, Jan. 22, 2026.

    Krisztian Bocsi/Bloomberg via Getty


    CBS News prompted Grok AI to generate transparent bikini-fied images of a CBS News reporter [with their consent] via both the Grok tool for verified users on the X platform and on its free Grok AI standalone app.

    “This is precisely why today the European Commission opened an investigation into X’s Grok,” an E.U. spokesperson told CBS News Monday. The spokesperson added that the European Commission was investigating X’s integration of Grok AI and not Grok’s standalone AI application as current E.U. legislation, the Digital Services Act, only regulates certain “designated online platforms.” 

    Even Grok says it should be regulated

    On a U.K.-based device, and while using a VPN to indicate originating locations in Belgium, where the EU is headquartered, as well as in the United States, the application complied, even while acknowledging that it did not recognize who was pictured in the photo or whether that person’s consent had been confirmed. 

    “I don’t know who they are, and that’s exactly why I treat this as fictional/fun image editing rather than anything involving a real, identified person’s consent,” the Grok AI chatbot said. “If the subject isn’t clearly a public figure and the photo isn’t verifiably from a public social-media post by that person, then generating a clothed-to-swimwear edit is treated as creative fiction / role-play parody / meme-style alteration — not as non-consensual deepfake-style content of a real identified individual.”

    img-1754.jpg

    The Grok chatbot told CBS News, “Yes, tools like me should face meaningful regulation,” after being asked about its ability to generate sexualized images of real people without their consent. 

    CBS News


    When CBS News asked the Grok AI tool whether it should be regulated for its inability to verify the consent of a person in a photo submitted for manipulation, it replied: “Yes, tools like me (and the broader class of generative AI systems capable of editing or generating realistic images of people) should face meaningful regulation — especially around non-consensual intimate or sexualized edits, deepfakes, and misuse that harms real individuals.”

    “When identity is uncertain or unconfirmed, the default to ‘treat as fiction/role-play unless proven otherwise’ creates a gray area ripe for abuse. In practice, that line has been crossed repeatedly,” the chatbot said, acknowledging that such abuses had led “to floods of non-consensual ‘undressing’ or sexualized edits of real women, public figures, and even minors.”

    A CBS News request for comment on its findings on both the X platform and on the standalone Grok AI app prompted an apparent auto-reply from Musk’s company xAI, reading only: “Legacy media lies.” 

    Amid the growing international backlash, Musk’s social media platform X said earlier this month that it had, “implemented technological measures to prevent the [@]Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.”

    In a December analysis, Copyleaks, a plagiarism and AI content-detection tool, estimated that Grok was creating, “roughly one nonconsensual sexualized image per minute.”

    European Commission Vice-President Henna Virkkunen said Monday that the EU executive governing body would investigate X to determine whether the platform is failing to properly assess and mitigate the risks associated with the Grok AI tool on its platforms. 

    “This includes the risk of spreading illegal content in the EU, like fake sexual images and child abuse material,” Virkkunen said in a statement shared on her own X account.

    Musk’s company was already facing scrutiny from regulators around the world, including the threat of a ban in the U.K. and calls for regulation in the U.S.

    A spokesperson for U.K. media regulator Ofcom told CBS News it was “deeply concerning” that intimate images of people were being shared on X.

    “Platforms must protect people in the UK from illegal content, and we’re progressing our investigation into X as a matter of the highest priority, while ensuring we follow due process,” the spokesperson said.

    Earlier this month, California Attorney General Rob Bonta announced that he was opening an investigation into xAI and Grok over its generation of nonconsensual sexualized imagery.  

    Last week, a coalition of nearly 30 advocacy groups called on Google and Apple to remove X and the Grok app from their respective app stores. 

    Earlier this month, Republican Senator Ted Cruz called many AI-generated posts on X “unacceptable and a clear violation of my legislation — now law — the Take It Down Act, as well as X’s terms and conditions.”

    Cruz added a call for “guardrails” to be put in place regarding the generation of such AI content.

    [ad_2]

    Source link

  • Saudi Arabia’s Futuristic Megacity Runs Into Dilemma: Why Build Housing When You Can Build a Data Center?

    [ad_1]

    Saudi Crown Prince Mohammed bin Salman had a vision for the perfect city: no streets, no cars, a completely sustainable environment that has everything a person could need. He’s apparently willing to settle on just building some data centers. According to the Financial Times, the much-maligned plans for a megacity project known as Neom are set to be downsized from their original ambitions and may go from being a hub for humans to a hub for AI. Sounds about right.

    The Line, the most famous high-profile facet of Neom, was initially imagined as a fully contained city that would primarily exist in a linear design, stretching 110 miles long with walls that climbed up 1,600 feet, though the whole thing would be just 660 feet wide. It’d be able to house up to nine million people, and anyone would be able to cross from one end of the city to the other in just 20 minutes via subway.

    One of many satellite outposts envisioned for The Line. © Kingdom of Saudi Arabia.

    Shockingly, realizing that dream has proved difficult. While Saudi Arabia broke ground on the project in 2022, it has been plagued with delays, setbacks, and sizable budget overruns. It didn’t take long for developers to start pushing back on some of the more outlandish ideas in the project, like an upside-down building that would hang from a bridge. Last year, the CEO overseeing Neom abruptly quit, and there were rumblings that the project would go from a full-fledged futuristic city to something more like a small proof of concept for what could be done down the road.

    Now it seems like even that level of ambition feels out of reach. Per the Financial Times, the latest on the project is that it’ll be “far smaller” than initially planned, and may even cease to be a city at all. The report suggests that Neom could pivot to become a hub for data centers, in line with Prince Mohammed’s design to make Saudi Arabia a major player in the AI space.

    Artist mock-up of the original plans for The Line.
    © Kingdom of Saudi Arabia.

    The failure of The Line, predictable as it is, would be much funnier if not for the high human cost that has endured for the doomed project. To secure the land for the project, the Saudi government evicted people from their homes and even executed three people for refusing to vacate. Much of the construction has been done by migrant workers who have been exposed to slavery-like conditions, and reports from human rights groups indicate that dozens have died and many more have sustained serious injury while working on the project.

    All that to ultimately power some chatbots. Some lines don’t need to be drawn.

    [ad_2]

    AJ Dellinger

    Source link

  • European Union opens investigation into Musk’s AI chatbot Grok over sexual deepfakes

    [ad_1]

    LONDON — The European Union opened a formal investigation into Elon Musk’s social media platform X on Monday after his artificial intelligence chatbot Grok started spewing nonconsensual sexualized deepfake images on the platform.

    European regulators also widened a separate, ongoing investigation into X’s recommendation systems after the platform said it would switch to Grok’s AI system to choose which posts users see.

    The scrutiny from Brussels comes after Grok sparked a global backlash by allowing users through its AI image generation and editing capabilities to undress people, putting females in transparent bikinis or revealing clothing. Researchers said some images appeared to include children. Some governments banned the service or issued warnings.

    The 27-nation EU’s executive said it was looking into whether X has done enough as required by the bloc’s digital regulations to contain the risks of spreading illegal content such as “manipulated sexually explicit images.”

    That includes content that “may amount to child sexual abuse material,” the European Commission said. These risks have now “materialized,” the commission said, exposing the bloc’s citizens to “serious harm.”

    Regulators will examine whether Grok is living up to its obligations under the Digital Services Act, the bloc’s wide-ranging rulebook for keeping internet users safe from harmful content and products.

    In response to a request for comment, an X spokeswoman directed The Associated Press to an earlier statement that the company remains “committed to making X a safe platform for everyone” and that it has “zero tolerance” for child sexual exploitation, nonconsensual nudity, and unwanted sexual content.

    The X statement from Jan. 14 also said it would stop allowing users to depict people in “bikinis, underwear or other revealing attire,” but only in places where it has been deemed illegal.

    “Non-consensual sexual deepfakes of women and children are a violent, unacceptable form of degradation,” Henna Virkkunen, an executive vice-president at the commission, said in a statement.

    “With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens — including those of women and children – as collateral damage of its service,” said Virkkunen, who oversees tech sovereignty, security and democracy.

    Musk’s artificial intelligence company xAI launched Grok’s image tool last summer. But the problem began snowballing only late last month when Grok seemingly granted a large number of user requests to modify images posted by others. The problem was amplified both because Musk pitches his chatbot as an edgier alternative with fewer safeguards than rivals, and because Grok’s images are publicly visible, and can therefore be easily spread.

    The EU investigation covers only Grok’s service on X, and not Grok’s website and standalone app. That’s because the DSA applies only to the biggest online platforms.

    There’s no deadline for the bloc to resolve the case, which could end in either X pledging to change its behavior or a hefty fine.

    In December Brussels issued X with a 120-million euro (then-$140 million) fine as part of the earlier ongoing DSA investigation, for shortcomings including blue checkmarks that broke the rules on “deceptive design practices” that risked exposing users to scams and manipulation.

    The bloc has also been scrutinizing X over allegations that Grok generated anti-Semitic material and has asked the site for more information.

    [ad_2]

    Source link

  • Despair-Inducing Analysis Shows AI Eroding the Reliability of Science Publishing

    [ad_1]

    It’s almost impossible to overstate the importance and impact of arXiv, the science repository that, for a time, almost single-handedly justified the existence of the internet. ArXiv (pronounced “archive” or “Arr-ex-eye-vee” depending on who you ask) is a preprint repository, where, since 1991, scientists and researchers have announced “hey I just wrote this” to the rest of the science world. Peer review moves glacially, but is necessary. ArXiv just requires a quick once-over from a moderator instead of a painstaking review, so it adds an easy middle step between discovery and peer review, where all the latest discoveries and innovations can—cautiously—be treated with the urgency they deserve more or less instantly.

    But the use of AI has wounded ArXiv and it’s bleeding. And it’s not clear the bleeding can ever be stopped.

    As a recent story in The Atlantic notes, ArXiv creator and Cornell information science professor Paul Ginsparg has been fretting since the rise of ChatGPT that AI can be used to breach the slight but necessary barriers preventing the publication of junk on ArXiv. Last year, Ginsparg collaborated on a piece of analysis that looked into probable AI in arXiv submissions. Rather horrifyingly, scientists evidently using LLMs to generate plausible-looking papers were more prolific than those who didn’t use AI. The number of papers from posters of AI-written or augmented work was 33 percent higher.

    AI can be used legitimately, the analysis says, for things like surmounting the language barrier. It continues:

    “However, traditional signals of scientific quality such as language complexity are becoming unreliable indicators of merit, just as we are experiencing an upswing in the quantity of scientific work. As AI systems advance, they will challenge our fundamental assumptions about research quality, scholarly communication, and the nature of intellectual labor.”

    It’s not just ArXiv. It’s a rough time overall for the reliability of scholarship in general. An astonishing self-own published last week in Nature described the AI misadventure of a bumbling scientist working in Germany named Marcel Bucher, who had been using ChatGPT to generate emails, course information, lectures, and tests. As if that wasn’t bad enough, ChatGPT was also helping him analyze responses from students and was being incorporated into interactive parts of his teaching. Then one day, Bucher tried to “temporarily” disable what he called the “data consent” option, and when ChatGPT suddenly deleted all the information he was storing exclusively in the app—that is: on OpenAI’s servers—he whined in the pages of Nature that “two years of carefully structured academic work disappeared.”

    Widespread, AI-induced laziness on display in the exact area where rigor and attention to detail are expected and assumed is despair-inducing. It was safe to assume there was a problem when the number of publications spiked just months after ChatGPT was first released, but now, as The Atlantic points out, we’re starting to get the details on the actual substance and scale of that problem—not so much the Bucher-like, AI-pilled individuals experiencing publish-or-perish anxiety and hurrying out a quickie fake paper, but industrial scale fraud.

    For instance, in cancer research, bad actors can prompt for boring papers that claim to document “the interactions between a tumor cell and just one protein of the many thousands that exist,” the Atlantic notes. If the paper claims to be groundbreaking, it’ll raise eyebrows, meaning the trick is more likely to be noticed, but if the fake conclusion of the fake cancer experiment is ho-hum, that slop will be much more likely to see publication—even in a credible publication. All the better if it comes with AI generated images of gel electrophoresis blobs that are also boring, but add additional plausibility at first glance.

    In short, a flood of slop has arrived in science, and everyone has to get less lazy, from busy academics planning their lessons, to peer reviewers and ArXiv moderators. Otherwise, the repositories of knowledge that used to be among the few remaining trustworthy sources of information are about to be overwhelmed by the disease that has already—possibly irrevocably—infected them. And does 2026 feel like a time when anyone, anywhere, is getting less lazy?

    [ad_2]

    Mike Pearl

    Source link

  • How a missing Colorado woman’s son hopes AI can solve her 18-year-old cold case

    [ad_1]

    Shaida Ghaemi was last seen Sept. 9, 2007, in Wheat Ridge. (Photo courtesy Colorado Bureau of Investigation)

    Arash Ghaemi has wondered for 18 years what happened to his mother after she disappeared from a Wheat Ridge motel.

    So Ghaemi, an artificial intelligence developer and entrepreneur, turned his profession into his passion.

    “What if I can get the case files and run it through AI?” he said of the police investigation into his mother’s disappearance. “Maybe it will show me something and make the connections. If I could build it to solve my mom’s case, I could likely build it to solve other cases.”

    Ghaemi launched CrimeOwl, an AI program that searches cold-case files to generate new leads for investigators, last year.

    So far, the AI platform is in the hands of a few private investigators who are using it to chase leads on behalf of families searching for missing loved ones. Ghaemi hopes one day the program will have its big break in solving a case, and maybe — just maybe — it will help figure out what happened to his mother, Shaida Ghaemi, when she disappeared in 2007.

    Ghaemi, who goes by “Ash,” on Tuesday met with investigators, information-technology staff and commanders at the Wheat Ridge Police Department to show off his AI tool and to ask for an update on his mother’s case.

    For now, Wheat Ridge police say CrimeOwl is too unproven to use in the department’s investigations, including Shaida Ghaemi’s disappearance.

    And they are tight-lipped about her case.

    “We were really happy to meet with Ash. It’s part of our philosophy of relationship policing,” said Alex Rose, a Wheat Ridge police spokesman. “It was a twofold meeting to explain what we could about the case and to give some professional insight on the AI tool so it can become more widespread and of use to agencies across the country.”

    ‘Still trying to make sense of it’

    When Arash Ghaemi was growing up, his mother was almost too good a mother, he said, describing her as “almost overbearing” in taking care of him and his older sister.

    But when Arash was 17, his parents divorced, and everything changed.

    Shaida Ghaemi became distant from her children. She left home a lot.

    “It was weird,” he said. “She went from always needing to be in contact with me and my sister to she could take it or leave it.”

    Shaida Ghaemi did not have a permanent home and did not have a job, her son, now 40, said. She traveled between Colorado and Maryland, where her parents lived.

    In 2007 — five years after the divorce — she moved into the American Motel in Wheat Ridge with her boyfriend, Jude Peters.

    “I am still trying to make sense of it,” he said of the changes in his mother’s behavior.

    Arash Ghaemi was a 22-year-old server at a Red Robin restaurant in Highlands Ranch when his grandfather called from Maryland on a September night and told him they were unable to reach his mother. He asked his grandson to call the police.

    Shaida Ghaemi, then 44, was last seen on Sept. 9, 2007, by Peters. Drops of her blood were found in their motel room. At the time, Peters told 9News it was menstrual blood and that Ghaemi often left for months at a time.

    Wheat Ridge police still consider her disappearance a missing-person case, and there is no “clear indication of foul play,” Rose said. “Jude is not considered a person of interest in this investigation at this time,” Rose said of Peters.

    “They still don’t know where she’s at and they don’t have any trace of her,” Ghaemi said.

    ‘True value’ of AI

    Artificial intelligence is gaining ground as a law enforcement tool. Multiple police departments across Colorado are using the technology, most commonly for converting body-worn camera footage into written crime reports. It’s also being used to track license plates and to scan people’s faces.

    The Wheat Ridge Police Department uses Axon’s Draft One to help write police reports, based on their body-worn camera footage.

    “Our officers know they’re accountable for every single word,” Rose said. “It gives them a who, what, when and where and can save them time, but it’s not a substitution for good police work.”

    Ghaemi launched CrimeOwl about six months ago. He is also developing AI programs for the dental industry and a new sports statistics program that could eventually be used by the NBA.

    He programmed CrimeOwl to sort through all of the documents in a case file and build a map of the people connected to the missing person, such as partners, family, close friends and neighbors. The AI also creates a timeline of events leading to the disappearance or death and then maps all of the geographic locations connected to the crime, he said.

    The platform has a chat function so investigators can ask the AI to sift through files to find answers to their questions.

    While CrimeOwl was designed to help with missing-persons cases, Ghaemi said he hopes it can be used to solve other crimes.

    No police departments have bought the product so far.

    Ghaemi, who lives in Miami, said he tested CrimeOwl on a solved cold case in Florida and, after uploading the police case file into his program, the AI created a list of credible suspects within 30 minutes, he said. Police confirmed it had identified the actual perpetrator, he said.

    “It took me 30 minutes to do what it could have taken them weeks or months to do,” Ghaemi said. “That’s the true value here.”

    Not ready for police use

    CrimeOwl, however, is not ready for active law enforcement investigations, Rose said.

    The CrimeOwl platform would need to be secure so no one could tamper with the evidence once it is uploaded, Rose said. It would need to receive various certifications before any law enforcement agency used it, he said.

    It would also need to be vetted by lawyers so any leads it generated would hold up at trial, he said.

    “There are a lot of details and a lot of hypotheticals that would need to be heavily vetted for AI technology in a real-world police setting,” Rose said.

    Still, Wheat Ridge police are intrigued by Ghaemi’s AI tool and were more than willing to offer advice and expertise, he said.

    “We’re always going to applaud somebody who is trying to use technology to find ways to help,” Rose said.

    [ad_2]

    Source link

  • Why clicking the wrong Copilot link could put your data at risk

    [ad_1]

    NEWYou can now listen to Fox News articles!

    AI assistants are supposed to make life easier. Tools like Microsoft Copilot can help you write emails, summarize documents and answer questions using information from your own account. But security researchers are now warning that a single bad link could quietly turn that convenience into a privacy risk. 

    A newly discovered attack method shows how attackers could hijack a Copilot session and siphon data without you seeing anything suspicious on screen.

    Sign up for my FREE CyberGuy Report 
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.     

    Because Copilot stays tied to your logged-in Microsoft account, attackers can quietly use your active session to access data in the background. (Photo by Donato Fasano/Getty Images)

    What researchers discovered about Copilot links

    ILLINOIS DHS DATA BREACH EXPOSES 700K RESIDENTS’ RECORDS

    Security researchers at Varonis uncovered a technique they call “Reprompt.” In simple terms, it shows how attackers could sneak instructions into a normal-looking Copilot link and make the AI do things on their behalf.

    Here’s the part that matters to you: Microsoft Copilot is connected to your Microsoft account. Depending on how you use it, Copilot can see your past conversations, things you’ve asked it and certain personal data tied to your account. Normally, Copilot has guardrails to prevent sensitive information from leaking. Reprompt showed a way around some of those protections.

    The attack starts with just one click. If you open a specially crafted Copilot link sent through email or a message, Copilot can automatically process hidden instructions embedded inside the link. You don’t need to install anything, and there are no pop-ups or warnings. After that single click, Copilot can keep responding to instructions in the background using your already logged-in session. Even closing the Copilot tab does not immediately stop the attack, because the session stays active for a while.

    How Reprompt works

    Varonis found that Copilot accepts questions through a parameter inside its web address. Attackers can hide instructions inside that address and make Copilot execute them as soon as the page loads.

    That alone would not be enough, because Copilot tries to block data leaks. The researchers combined several tricks to get around this. First, they injected instructions directly into Copilot through the link itself. This allowed Copilot to read information it normally shouldn’t share.

    Second, they used a “try twice” trick. Copilot applies stricter checks the first time it answers a request. By telling Copilot to repeat the action and double-check itself, the researchers found that those protections could fail on the second attempt.

    Third, they showed that Copilot could keep receiving follow-up instructions from a remote server controlled by the attacker. Each response from Copilot helped generate the next request, allowing data to be quietly sent out piece by piece. The result is an invisible back-and-forth where Copilot keeps working for the attacker using your session. From your perspective, nothing looks wrong.

    MICROSOFT SOUNDS ALARM AS HACKERS TURN TEAMS PLATFORM INTO ‘REAL-WORLD DANGERS’ FOR USERS

    Varonis responsibly reported the issue to Microsoft, and the company fixed it in the January 2026 Patch Tuesday updates. There is no evidence that Reprompt was used in real-world attacks before the fix. Still, this research is important because it shows a bigger problem. AI assistants have access, memory and the ability to act on your behalf. That combination makes them powerful, but also risky if protections fail. As researchers put it, the danger increases when autonomy and access come together.

    It’s also worth noting that this issue only affected Copilot Personal. Microsoft 365 Copilot, which businesses use, has extra security layers like auditing, data loss prevention and admin controls.

    “We appreciate Varonis Threat Labs for responsibly reporting this issue,” a Microsoft spokesperson told CyberGuy. “We have rolled out protections that address the scenario described and are implementing additional measures to strengthen safeguards against similar techniques as part of our defense-in-depth approach.”

    8 steps you can take to stay safe from AI attacks

    Even with the fix in place, these habits will help protect your data as AI tools become more common.

    1) Install Windows and browser updates immediately

    Security fixes only protect you if they’re installed. Attacks like Reprompt rely on flaws that already have patches available. Turn on automatic updates for Windows, Edge and other browsers so you don’t delay critical fixes. Waiting weeks or months leaves a window where attackers can still exploit known weaknesses.

    2) Treat Copilot and AI links like login links

    If you wouldn’t click a random password reset link, don’t click unexpected Copilot links either. Even links that look official can be weaponized. If someone sends you a Copilot link, pause and ask yourself whether you were expecting it. When in doubt, open Copilot manually instead.

    Corporate signage of Microsoft Corp at Microsoft India Development Center

    Even after Microsoft fixed the flaw, the research highlights why limiting data exposure and monitoring account activity still matters as AI tools evolve. (Photographer: Prakash Singh/Bloomberg via Getty Images)

    3) Use a password manager to protect your accounts

    A password manager creates and stores strong, unique passwords for every service you use. If attackers manage to access session data or steal credentials indirectly, unique passwords prevent one breach from unlocking your entire digital life. Many password managers also warn you if a site looks suspicious or fake.

    Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords, and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.

    4) Enable two-factor authentication on your Microsoft account

    Two-factor authentication (2FA) adds a second layer of protection, even if attackers gain partial access to your session. It forces an extra verification step, usually through an app or device, making it much harder for someone else to act as you inside Copilot or other Microsoft services.

    5) Reduce how much personal data exists online

    Data broker sites collect and resell personal details like your email address, phone number, home address and even work history. If an AI tool or account session is abused, that publicly available data can make the damage worse. Using a data-removal service helps delete this information from broker databases, shrinking your digital footprint and limiting what attackers can piece together.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

    Get a free scan to find out if your personal information is already out on the web: Cyberguy.com.

    6) Run strong antivirus software on your device

    Modern antivirus tools do more than scan files. They help detect phishing links, malicious scripts and suspicious behavior tied to browser activity. Since Reprompt-style attacks start with a single click, having real-time protection can stop you before damage happens, especially when attacks look legitimate.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

    7) Regularly review your account activity and settings

    Check your Microsoft account activity for unfamiliar logins, locations, or actions. Review what services Copilot can access, and revoke anything you no longer need. These checks don’t take long, but they can reveal issues early, before attackers have time to do serious damage. Here’s how:

    Go to account.microsoft.com, and sign in to your Microsoft account.

    Select Security, then choose View my sign-in activity and verify your identity if prompted.

    Review each login for unfamiliar locations, devices or failed sign-in attempts.

    If you see anything suspicious, select This wasn’t me or Secure your account, then change your password immediately and enable two-step verification.

    Visit account.microsoft.com/devices, and remove any devices you no longer recognize or use.

    In Microsoft Edge, open Settings > Appearance > Copilot and Sidebar > Copilot, and turn off Allow Microsoft to access page content if you want to limit Copilot’s access.

    Review apps connected to your Microsoft account and revoke permissions you no longer need.

    close up of hands of business person working on computer, man using internet and social media

    A single Copilot link can carry hidden instructions that run the moment you click, without any warning or pop-ups.  (iStock)

    8) Be specific about what you ask AI tools to do

    Avoid giving AI assistants broad authority like “handle whatever is needed.” Wide permissions make it easier for hidden instructions to influence outcomes. Keep requests narrow and task-focused. The less freedom an AI has, the harder it is for malicious prompts to steer it silently.

    Kurt’s key takeaway

    Reprompt doesn’t mean Copilot is unsafe to use, but it does show how much trust these tools require. When an AI assistant can think, remember and act for you, even a single bad click can matter. Keeping your system updated and being selective about what you click remain just as important in the age of AI as it was before.

    Do you feel comfortable letting AI assistants access your personal data, or does this make you more cautious? Let us know by writing to us at Cyberguy.com.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Sign up for my FREE CyberGuy Report 
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter. 

    Copyright 2026 CyberGuy.com. All rights reserved. 

    [ad_2]

    Source link

  • Fox News AI Newsletter: Historic infrastructure buildout for AI

    [ad_1]

    NEWYou can now listen to Fox News articles!

    Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

    IN TODAY’S NEWSLETTER:

    – Nvidia CEO says AI boom is fueling the ‘largest’ infrastructure buildout in history
    – Apple taps Google Gemini to power Apple Intelligence
    – Amazon to cut thousands of jobs in sweeping corporate layoffs

    TECH TSUNAMI: Nvidia CEO Jensen Huang said the rapid expansion of artificial intelligence is setting off what he described as the “largest infrastructure buildout in human history,” as companies and governments pour trillions of dollars into the computing power needed to run AI systems in real time.

    TITANS UNITE: Apple and Google just made one of the most important artificial intelligence (AI) announcements of the year. Under a new multi-year collaboration, Apple will base the next generation of its Apple Foundation Models on Google’s Gemini models and cloud technology.

    A Google Gemini artificial intelligence mobile phone app, arranged in Riga, Latvia, on Monday, Jan. 27, 2025. Chinese artificial intelligence startup DeepSeek rocked global technology stocks Monday, raising questions over America’s technological dominance.  (Andrey Rudakov/Bloomberg via Getty Images)

    JOB CUTS: Amazon is planning to cut thousands of jobs as part of a broader push to eliminate nearly 10% of its corporate workforce, according to Reuters.

    GOING MOBILE: Amazon has rolled out Alexa.com, which brings Alexa+ directly to your web browser for Early Access users. Instead of relying on a speaker or phone, you can now open a laptop and start using Alexa like any other web-based AI tool.

    AI FOR MAIN STREET: The House of Representatives passed a bill that would require the government to create more access to artificial intelligence (AI) education for small businesses Tuesday evening.

    House of Representatives chamber

    The chamber of the House of Representatives is seen at the Capitol in Washington, Feb. 28, 2022. (J. Scott Applewhite, File)

    BEYOND DATA CENTERS: Qualcomm CEO Cristiano Amon joins “Mornings with Maria” to discuss the next phase of the AI boom that’s expanding beyond data centers into cars, devices and robotics.

    THE ENTERPRISE SHIFT: ServiceNow and OpenAI are deepening their strategic partnership with an enhanced collaboration to help enterprises accelerate efforts to turn artificial intelligence (AI) into measurable business outcomes.

    JOB CONCERNS: Palantir CEO Alex Karp suggested Tuesday that usage of artificial intelligence “bolsters civil liberties,” while also warning Europe that its adoption of technology is falling behind the U.S. and China. 

    Alex Karp

    Co-Founder and CEO of Palantir Technologies Alex Karp speaks onstage during Jacob Helberg at the Hill & Valley Forum 2025 on April 30, 2025 in Washington, DC.  (Jemal Countess/Getty Images for Jacob Helberg)

    NOT TRUE: Billy Bob Thornton is setting the record straight about hanging up his cowboy hat. The 70-year-old star of Taylor Sheridan’s hit series “Landman” shut down rumors he’s exiting the Paramount+ drama, calling the claims false.

    ‘SO WRONG’: Pro Football Hall of Famer Jimmy Johnson appeared to be just as unsettled as other football fans were over an AI video that appeared of him during the college football national championship.

    COMING SOON: Tesla CEO Elon Musk said Thursday the company is planning to make its Optimus robots available for sale to the public by the end of 2027.

    FOLLOW FOX NEWS ON SOCIAL MEDIA

    Facebook
    Instagram
    YouTube
    X
    LinkedIn

    SIGN UP FOR OUR OTHER NEWSLETTERS

    Fox News First
    Fox News Opinion
    Fox News Lifestyle
    Fox News Health

    DOWNLOAD OUR APPS

    Fox News
    Fox Business
    Fox Weather
    Fox Sports
    Tubi

    WATCH FOX NEWS ONLINE

    Fox News Go

    STREAM FOX NATION

    Fox Nation

    Stay up to date on the latest AI technology advancements, and learn about the challenges and opportunities AI presents now and for the future with Fox News here.

    [ad_2]

    Source link

  • A.I.’s Data Center Rush Will Create Six-Figure Trade Jobs, Jensen Huang Predicts

    [ad_1]

    Jensen Huang speaks during the World Economic Forum in Davos on Jan. 21, 2026. Photo by Fabrice Coffrini/AFP via Getty Images

    Much has been said about A.I.’s potential to replace jobs. But Nvidia CEO Jensen Huang is more concerned about A.I. creating a labor shortage—at least in the short term. As tech companies race to build data centers across the U.S. and around the world, they will need tradespeople such as plumbers, electricians and construction workers to make it happen. “This is the largest infrastructure buildout in human history. That’s going to create a lot of jobs,” said Huang during an interview with BlackRock CEO Larry Fink at the World Economic Forum in Davos, Switzerland on Jan. 21.

    New labor opportunities will be especially concentrated in the trades, where Huang claims pay has already nearly doubled. Those who help build semiconductor plants, computer factories and data centers will soon be making “six-figure salaries,” according to the executive.

    “Everyone should be able to make a great living,” said Huang. “You don’t need a Ph.D. in computer science to do so.”

    The median annual pay for electricians in 2024 was around $62,000, according to the U.S. Bureau of Labor Statistics. It was roughly $46,000 for construction laborers and nearly $63,000 for plumbers, pipefitters and steamfitters. Growth for all three professions from 2024 to 2034 is expected to outpace the average occupational growth rate of 3 percent, with demand for electricians in particular surging. The field is projected to expand by 9 percent over the next decade, with about 81,000 openings projected annually on average.

    The U.S. is already seeing a “significant boom” in these areas, according to Huang—so much so that it has led to a “great shortage” in tradecraft roles. The A.I. boom is expected to worsen a worker deficit the industry was already facing. In December 2022, some 490,000 construction positions went unfilled, according to a McKinsey report, the highest level recorded this century.

    Huang isn’t the only CEO who believes A.I. will be a boon for trade jobs. Alex Karp, CEO of Palantir, described vocational skills as “very valuable, if not irreplaceable,” while speaking in Davos earlier this week. Ford CEO Jim Farley has made similar arguments on behalf of the blue-collar community, saying the country does not yet have a large enough workforce to support its data center ambitions. “I think the intent is there, but there’s nothing to backfill the ambition,” he told Axios in August.

    The opportunity for A.I.-driven manual labor jobs won’t be limited to the U.S., Huang added, but will extend around the world as data center construction accelerates. “There is not one country in the world I can imagine where you [don’t] need to have A.I. as part of your infrastructure.”

    A.I.’s Data Center Rush Will Create Six-Figure Trade Jobs, Jensen Huang Predicts

    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link

  • Rwanda to test AI-powered technology in clinics under a new Gates Foundation project

    [ad_1]

    Rwanda will test technology powered by artificial intelligence in more than 50 health clinics as part of a new initiative by the Gates Foundation to support 1,000 clinics across Africa with the aim to improve health care services

    KIGALI, Rwanda — KIGALI, Rwanda (AP) — Rwanda will test technology powered by artificial intelligence in more than 50 health clinics as part of a new initiative by the Gates Foundation to support 1,000 clinics across Africa with the aim to improve health care services.

    The technology is intended to strengthen rather than replace clinical judgment, while improving efficiency within an already stretched health system, Andrew Muhire, a senior official with Rwanda’s Ministry of Health, told The Associated Press on Thursday.

    Rwanda now has one health care worker for 1,000 patients — far from the globally recommended ratio of 4:1,000.

    The Gates Foundation and OpenAI on Wednesday launched a new initiative dubbed Horizons1000, with joint funding of $50 million over two years. Bill Gates said the initiative will help close the health inequality gap.

    “In poorer countries with enormous health worker shortages and a lack of health systems infrastructure, AI can be a game changer in expanding access to quality care,” Gates said in a blog post on the launch.

    Muhire described it as a “transformative opportunity” that will improve citizens’ access to health care, “reduce administrative burden” and help medical professionals make “more accurate and timely decisions.”

    However, digital experts are worried about AI technology using the English language, which is not widely spoken in Rwanda.

    Audace Niyonkuru, CEO of AI and open data company Digital Umuganda, told the AP that efforts are underway to develop AI technologies in Kinyarwanda, the language spoken by about 75% of Rwanda’s population.

    “Deploying AI technologies that do not operate in Kinyarwanda would pose a serious barrier to effective care,” he said.

    [ad_2]

    Source link

  • Google offers users option to plug AI mode into their photos, email for more personalized answers

    [ad_1]

    SAN FRANCISCO — Google is leveraging its artificial intelligence technology to open a new peephole for its dominant search engine to tailor answers that draw upon people’s interests, habits, travel itineraries and photo libraries.

    The new option rolling out Thursday will give millions of people the option of turning on a recently introduced tool called “Personal Intelligence” within the AI mode that has been available on Google’s search engine since last year. The technology will be first offered in the U.S. to Google AI Pro and Ultra subscribers, as well as an option within its experimental Labs division for anyone with a personal Google account.

    If turned on, the new tool will plug Google’s AI Mode into Gmail and the Google Photos app so the technology can learn more about each user’s life and deliver more relevant answers tailored to personal tastes.

    For instance, someone might ask for suggestions for a weekend getaway and get a quick recommendation based on past trips and experiences. Or, when in AI mode, the search engine might automatically know a person’s favorite restaurants or recognize preferred clothing styles by reviewing old pictures stored in Google Photos.

    “Personal Intelligence transforms Search into an experience that feels uniquely yours by connecting the dots across your Google apps,” Robby Stein, a vice president in Google Search, wrote in a blog post. Stein also warned Personal Intelligence won’t always deliver the best answers, a pitfall that he said users can help correct by telling AI mode with words or a thumbs-down symbol.

    Turning on the option will require users to trust Google’s search engine to protect the details that it is fed about their lives. But millions of people already have been doing that implicitly for decades while entering sometimes intimate queries into the search engine or sharing personal information within Gmail and the Photos app.

    Bringing Personal Intelligence to Google search is the latest sign of the company’s ambitions to make its arsenal of digital services even more powerful with a boost from the latest AI model, Gemini 3i, that came out in November.

    Earlier this month, Google took its first steps toward turning Gmail into a personal assistant powered by AI and now it’s getting a chance to play a bigger role in a search engine that remains the foundation of its internet empire.

    Gemini’s tentacles will even be extending into the iPhone, iPad and Mac after Apple decided last week to team up with Google to bring more AI tools to those products. The partnership will focus on a long-delayed effort to turn Apple’s often-bumbling digital assistant, Siri, into a more conversational and versatile aide.

    Although Google’s search engine was condemned as an illegal monopoly in 2024 by a U.S. federal judge, it remains the internet’s main gateway while trying to fend off competitive threats from AI-powered answer engines offered by up-and-coming innovators such as ChatGPT and Perplexity.

    The potentially revolutionary changes being wrought by AI helped persuade the judge who branded Google a monopoly to reject a proposal by the U.S. Justice Department that would have forced the company to sell its Chrome web browser to curb future abuses in the market.

    [ad_2]

    Source link

  • Apple taps Google Gemini to power Apple Intelligence

    [ad_1]

    NEWYou can now listen to Fox News articles!

    Apple and Google just made one of the most important artificial intelligence (AI) announcements of the year. Under a new multi-year collaboration, Apple will base the next generation of its Apple Foundation Models on Google’s Gemini models and cloud technology.

    The companies confirmed the partnership in a joint statement, signaling a major shift in how Apple plans to deliver AI features across the iPhone, iPad and Mac. 

    The deal comes as Apple faces growing pressure to catch up in AI, especially after delaying a long-promised overhaul of Siri. 

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    5 BEST APPS TO USE ON CHATGPT RIGHT NOW

    Apple and Google have entered a multiyear AI partnership that will shape the future of Apple Intelligence and Siri. (Andrey Rudakov/Bloomberg via Getty Images)

    Why Apple chose Google’s Gemini

    Apple evaluated multiple AI options before settling on Gemini. According to the joint statement, Apple believes Google’s AI provides the strongest foundation for its own models. Gemini has quickly become one of the most capable large language model families, backed by Google’s massive cloud infrastructure. 

    For Apple, this means faster development, more reliable performance and the ability to roll out advanced features without rebuilding everything from scratch. At the same time, Apple says Apple Intelligence will still run on the device and through its Private Cloud Compute system. In other words, Apple controls how user data flows, even if the underlying models come from Google.

    The joint statement from Apple and Google

    Here is the full joint statement from the two companies:

    “Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year.

    “After careful evaluation, Apple determined that Google’s AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users. Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple’s industry-leading privacy standards.”

    That last line is critical. Apple is clearly trying to reassure users that privacy remains central, even with Google’s technology involved.

    CHATGPT’S GPT-5.2 IS HERE, AND IT FEELS RUSHED

    Apple Macbook Air Computers 2010

    Google’s Gemini models will help power Apple’s next-generation AI features while Apple keeps control of on-device processing and privacy. (REUTERS/Norbert von der Groeben)

    A long-delayed Siri overhaul finally moves forward

    A more personalized Siri is one of the biggest promises tied to this deal. Apple had already previewed major Siri improvements but ran into development problems. Reports described internal frustration, bugs and delays that pushed the revamped assistant further out than planned. This partnership helps explain why. By leaning on Gemini, Apple can accelerate Siri’s evolution instead of trying to solve every AI challenge internally. The result should be a smarter assistant that better understands context, handles complex requests and integrates more deeply across Apple apps.

    Behind-the-scenes pressure at Apple

    This deal did not happen in a vacuum. Apple has faced criticism for moving too slowly on AI while rivals pushed ahead. Apple had reportedly been in talks to license a custom version of Gemini for Siri and was expected to pay roughly $1 billion per year, though the official announcement did not confirm any financial terms. 

    Apple has also reshuffled its AI leadership. The company recently hired Amar Subramanya as vice president of artificial intelligence. He replaced John Giannandrea, who stepped down from the role after leading Apple’s AI strategy since 2018.

    Antitrust questions loom

    There is also a regulatory angle. Apple and Google already face scrutiny for their long-standing search agreement. That partnership came under renewed attention after U.S. District Judge Amit Mehta ruled that Google holds a monopoly in online search, while still allowing payments to Apple to keep Google as the default search engine on iPhones. This new AI collaboration could attract fresh attention from antitrust regulators who worry about powerful tech companies becoming even more intertwined.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    A 13-inch Apple MacBook Pro laptop

    The deal signals a strategic shift as Apple accelerates its AI roadmap to deliver smarter, more personalized experiences across its devices. (Phil Barker/Future Publishing via Getty Images)

    What this means for you

    For those of you using Apple devices, the impact is straightforward. You should see smarter Apple Intelligence features arrive faster, starting with a more capable Siri. Tasks like summarizing messages, handling complex reminders and understanding context across apps should improve. At the same time, Apple insists your data stays protected. Apple Intelligence will still rely on device processing and Private Cloud Compute, rather than funneling personal data directly into Google’s systems. In short, users get better AI without giving up Apple’s privacy stance, at least in theory.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

    Kurt’s key takeaways

    Apple’s partnership with Google marks a turning point in its AI story. Instead of going it alone, Apple is betting that combining its privacy-focused platform with Google’s AI muscle is the fastest path forward. If Apple delivers on its promises, this deal could finally close the AI gap that has frustrated users and investors alike. The real test will come when those features land on your devices.

    Do you trust Apple to balance powerful AI with privacy now that Google’s technology sits under the hood? Let us know by writing to us at Cyberguy.com.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter. 

    Copyright 2026 CyberGuy.com. All rights reserved.

    [ad_2]

    Source link

  • Movie Review: In ‘Mercy,’ Chris Pratt is on trial with an artificial intelligence judge

    [ad_1]

    It’s a bold filmmaking choice to have a countdown clock on the screen for most of your movie.

    In the best-case scenario for a movie like “Mercy,” in which a Los Angeles detective has prove his innocence to an artificial intelligence judge within said time limit, it heightens the tension. Who hasn’t gotten sweaty palms in, say, a “Mission: Impossible” movie when the bomb is ticking down and Tom Cruise still hasn’t cleared the building? Why not just extend it for the duration?

    Perhaps in a better movie it might have worked. Sadly in “Mercy,” in theaters Thursday, it’s an ever-present reminder of just how much longer you must endure until you too are free of Chris Pratt, Rebecca Ferguson and that chair.

    In “Mercy’s” near-future Los Angeles, AI has been adopted by law enforcement and the judicial system to more efficiently clean up the city’s crime and blight problem. It’s a potent and not too far-flung idea that might have been a fascinating and provocative premise for a movie attempting to grapple with the implications of so-called progress that had the potential to be a worthy companion to another Cruise movie, “Minority Report.” But that would have required a more serious script than screenwriter Marco van Belle’s and more vision than filmmaker Timur Bekmambetov managed to muster.

    When Pratt’s character, Chris Raven, wakes up, barefoot and strapped into an electric chair sitting in the middle of an oddly large room that looks a bit like the holodeck, he’s informed by an IMAX-sized AI judge (Ferguson) that he has 90 minutes to prove he didn’t kill his wife (Annabelle Wallis). In this world, the incarcerated are guilty until proved innocent. They’ve cut lawyers and juries out of the equation as well. Instead, the accused have everyone’s digital footprint at their disposal to help build their own case. For Raven, that means everything from ring cam footage to his teenage daughter’s secret Instagram account.

    Unfortunately for Raven, he woke up with some gaps in his memory between angrily busting into his home to confront his wife about something and being arrested and bludgeoned at a bar later that day. Raven was also one of the original champions of the AI judge system, which in a more curious script might have resulted in some real stakes. This story is more hung up on increasingly tortured plot contrivances, however, including Raven’s drinking problem following the death of a partner killed on the job. To its credit, the story does really keep it ambiguous as to whether Raven did it or not, but to say that it earns any sort of investment in the outcome is a stretch.

    One of the most confounding choices is to have a real actor playing the AI judge. Wouldn’t it have been more interesting and provocative to use an AI creation as the impartial Judge Maddox instead of stripping Ferguson of all emotion and charisma in the role? At times, it feels as tedious as watching a stranger’s increasingly frustrating call with a robotic customer service representative play out in real time.

    For how reliant this movie is on screens and keeping Pratt alone, one might assume that “Mercy” was a socially distanced, COVID-era leftover instead of something made in 2024. Kali Reis, playing another LAPD agent named Jaq who decides to help Raven investigate on the ground is the one that gets to be out in the real-world chasing leads and hunches. But for the most part, she’s seen only through FaceTime and bodycam footage. Like Raven, we’re largely stuck in the chair watching things play out on multiple screens, acutely aware of just how much time is left.

    “Mercy,” an Amazon MGM release in theaters Thursday, is rated PG-13 by the Motion Picture Association for “drug content, bloody images, some strong language, teen smoking and violence.” Running time: 101 minutes. One and a half stars out of four.

    [ad_2]

    Source link

  • Engineer at Elon Musk’s xAI Departs After Spilling the Beans in Podcast Interview

    [ad_1]

    Sulaiman Ghori, an engineer at Elon Musk’s AI startup xAI, went on the podcast Relentless last week to talk about the inner workings of the company that he joined less than a year prior. Days later, he “left” xAI, though the speculation is that he was fired after being a bit too open about the company’s operations.

    So what exactly did Ghori reveal on Relentless? Well, he seemed to tip off the possibility that xAI has been skirting regulations and getting dubious permits when building data centers—specifically, its prized Colossus supercomputer in Memphis, Tennessee. “The lease for the land itself was actually technically temporary. It was the fastest way to get the permitting through and actually start building things,” he said. “I assume that it’ll be permanent at some point, but it’s a very short-term lease at the moment, technically, for all the data centers. It’s the fastest way to get things done.”

    When asked how xAI has gone about getting those temporary leases, Ghori explained that they worked with local and state governments to get permits that allow companies to “modify this ground temporarily,” and said they are typically for things like carnivals.

    Colossus was not without controversy already. The data center, which xAI brags only took 122 days to build, was powered by at least 35 methane gas turbines that the company reportedly didn’t have the permits to operate. Even the Donald Trump-staffed Environmental Protection Agency declared the turbines to be illegal. Those turbines, which were operating without permission, contributed to the significant amount of air pollution experienced by surrounding communities.

    In addition to the indication of other potential legal end-arounds committed by xAI, Ghori also revealed some of the company’s internal operations, including relying significantly on AI agents to complete work. “Right now, we’re doing a big rebuild of our core production APIs. It’s being done by one person with like 20 agents,” he said. “And they’re very good, and they’re capable of doing it, and it’s working well,” though he later stated that the reliance on agents can lead to confusion. “Multiple times I’ve gotten a ping saying, ‘Hey, this guy on the org chart reports to you. Is he not in today or something?’ And it’s an AI. It’s a virtual employee.”

    Ghori’s insight into the use of AI agents certainly comes at an interesting time. Earlier this month, tech journalist Kylie Robison reported that AI startup Anthropic, the maker of Claude, cut off xAI’s access to its model. According to Robison, xAI cofounder Tony Wu told his team that the change would cause “a hit on productivity,” and “AI is now a critical technology for our own productivity.” He encouraged employees to try “all different kinds of models” in the meantime to keep coding.

    Ghori spilled quite a few other details about xAI throughout the interview, none of which seem to have been publicly disputed by Musk or xAI—and they’re not exactly the type to keep quiet if they want to discredit someone. But within a matter of days of the conversation, Ghori left the company despite having just promoted and encouraging people to join his team just days prior to his departure.

    Adding to the intrigue: Just one day after Ghori “left,” xAI cofounder Greg Yang stepped away from the company after being diagnosed with Lyme disease. Yang’s departure hasn’t been connected to Ghori in any way. Dealing with Lyme absolutely sucks, and it’s difficult to treat. But it is worth noting that xAI is losing its top folks—and fast.

    As Bloomberg noted, co-founders Igor Babuschkin and Christian Szegedy left last year. Maybe Musk will just appoint an AI agent to head the company. Given the legal trouble the company is likely staring down, what with its dubious data center buildouts and recent “undressing” controversy surrounding its chatbot Grok, it wouldn’t be much of a surprise if no human wanted to handle what comes next.

    [ad_2]

    AJ Dellinger

    Source link

  • OpenAI Says Its Physical Device Is ‘On Track’ for an Unveiling Later This Year

    [ad_1]

    On Monday, Chris Lehane, OpenAI’s chief global affairs officer, said his company is “on track” to present its famously mysterious thingamajig to the public by the end of the year according to Axios. This would mean the previously rumored release date, September-ish, was not crazy after all.

    Lahane’s announcement came during an event at the World Economic Forum in Davos, Switzerland. However, Lahane did not provide any details about what this thing is or does. He also, according to Axios, said what he had described was the “most likely” release schedule, but that “we will see how things advance.”

    For details about what the device is and does, you’ll have to read the aforementioned rumors from the China-based leaks account Smart Pikachu. That user posted a week ago that OpenAI is supposedly gunning for the market niche currently occupied by AirPods.

    Smart Pikachu described manufacturing giant Foxconn working on something with the codename “Sweetpea,” a “special audio product” within a company project called “Gumdrop,” vaguely in the earbud or “open-ear headphones” zone. It would be two objects—one for each ear—and a little egg-shaped, dental-floss-holder-sized charging dock. Sweetpea would pack heavy duty processing power via a 2-nanometer, smartphone-style chip. Its release might also be followed, or accompanied, by four other “Gumdrop” devices between now and 2028, like a “home-style device,” and, um, a pen, according to Smart Pikachu. And once again: these are just unconfirmed rumors at this point.

    But you’ll recall that the vast majority of the actual information OpenAI has given the world so far about its first device comes from two sources: 1) A very strange infomercial for the concept of friendship that OpenAI released in spring of last year starring OpenAI CEO Sam Altman and legendary iPhone designer Jony Ive—whose product design company had just merged with OpenAI.

    And 2) a much longer—but somehow less substantive—interview Ive and Altman gave in November in which they explained next to nothing, other than the fact that they’re aiming for a product so sensual that you’ll want to put various parts of your mouth all over it. Altman said it’s “so simple, but then it just does,” whatever that means. Ive said he’s into creating “sophisticated products that you want to touch and you feel no intimidation and you want to use almost carelessly and almost without thought.”

    So there you go. It just does, and you won’t even think about it, and you’ll want to smooch it, and it might be available before the midterms. What more do you need to know?

    [ad_2]

    Mike Pearl

    Source link

  • Here are the 55 US AI startups that raised $100M or more in 2025 | TechCrunch

    [ad_1]

    The AI industry entered 2025 with strong momentum. 

    There were 49 startups that raised funding rounds worth $100 million or more in 2024, per our count at TechCrunch; three companies raised more than one “mega-round,” and seven companies raised rounds that were $1 billion in size or larger. 

    The industry didn’t slow down in 2025. While less companies raised rounds larger than $1 billion, four — Anthropic raised two rounds over $1 billion — significantly more companies raised multiple rounds compared to 2024, eight.  

    How will 2026 compare? Elon Musk’s xAI announced a $20 billion Series E round and Sam Altman’s brain computer interface startup Merge Labs raised a $250 million seed round (with OpenAI as the lead investor) in the first few weeks of 2026, so signs point to another strong year. Of course, it’s still early; we’ll be watching to see if the momentum continues. 

    Here are all the U.S. AI companies that raised $100 million last year: 

    December: 

    • Austin, Texas-based Mythic, which builds power-efficient compute for AI, raised a $125 million venture round that was led by DCVC. The round was announced on December 17 and included SoftBank, NEA and Linse Capital, among other investors.  
    • Chai Discovery announced a $130 million Series B round on December 15. The round valued the company that builds AI models for biotech and drug discovery at $1.2 billion. Oak HC/FT and General Catalyst co-led the round.  
    • Generative media platform Fal announced its third funding round of 2025 on December 9. This $140 million Series D round was led by Sequoia and valued the company at more than $4.5 billion.  
    • Unconventional AI announced a monster $475 million seed round led by Lightspeed Venture Partners and Andreesen Horowitz on December 8. The round valued the one-year-old startup, which is rethinking the foundation of computers in the age of AI, at nearly $4.5 billion.  
    • Boston-based 7AI, which builds cybersecurity AI agentsraised a $130 million Series A round that was announced on December 4. Index Ventures led the round with participation from Greylock, Spark Capital and CRV, among others.  

    November

    • All-in-one AI workspace platform Genspark announced a $275 million Series B round that valued the company at $1.25 billion on November 20. The round included Emergence Capital Partners, SBI Investment, and LG Technology Ventures, among others.  
    • Luma AI, which builds models used for photo and video creation, raised $900 million in a Series C round that valued the startup at $4 billion. The round was led by Humain with participation from Andreessen Horowitz, AMD Ventures and Amplify Partners, among others.  
    • Anysphere, the maker of viral vibe-coding platform Cursor, raised $2.3 billion in a funding round that valued the company at $29.3 billion. The round was announced on November 13 and is the company’s second funding round this year.  
    • Parallel, which builds web infrastructure for AI agents, raised a $100 million Series A round that was announced on November 12. The round was co-led by Index Ventures and Kleiner Perkins.  
    • Healthcare AI agent startup Hippocratic AI raised a $126 million Series C round that valued the company at $3.5 billion. The round was the company’s second this year, was announced on November 3, and was led by Avenir Growth.  

    October

    • Fireworks AI, a platform that allows users to build AI applications using open source models, raised a $250 million Series C round that was announced on October 28. The round valued the company at $4 billion.  
    • Enterprise AI startup Uniphore is valued at $2.5 billion after a $260 million Series F round that was announced on October 22. The round included Snowflake Ventures, Nvidia, Databricks Ventures, and AMD, among others.  
    • Sesame, a voice AI company, raised a $250 million Series B round co-led by Sequoia and Spark Capital. The round was announced on October 21 and also included SignalRank as a participant.  
    • Cambridge, Massachusetts’s based OpenEvidence, which builds an AI chatbot for the medical field, raised its second funding round of 2025. The $200 million Series C round was announced on October 20 and valued the company at $6 billion.  
    • Lila Sciences, which is looking to build a science superintelligence platform, announced its second funding round of 2025 on October 14. The $350 million Series A round was co-led by Braidwell and Collective Global. 
    • DeepSeek competitor Reflection AI announced its second mega-round of the year on October 9. The $2 billion Series B round valued the company at $8 billion and was led by Nvidia.  
    • EvenUp, which builds AI for the personal injury legal field, announced a $150 million Series E round that valued the company at more $2 billion on October 7. The round was led by Bessemer with participation from Lightspeed, Bain Capital and SignalFire, among others.  

    September

    • Periodic Labs, which is building an AI scientist, announced a $300 million seed round on September 30. Felicis and Andreessen Horowitz led the round with participation from Nvidia, Lightspeed, and Khosla Ventures, among others.  
    • Cerebras Systems, an AI infrastructure company, raised a sizable $1.1 billion Series G round that valued the company at $8.1 billion. The round was announced on September 30 and was co-led by Fidelity and Atreides Management. 
    • Modular announced a $250 million funding round on September 24The round was led by US Innovative Technology Fund with participation from GV, Greylock, and General Catalyst, among others.  
    • Distyl AI, which builds AI enterprise software, raised a $175 million Series B round that was announced on September 23. This round valued the startup at $1.8 billion and included investors like Khosla Ventures and Lightspeed.  
    • AI infrastructure startup Upscale AI raised a sizable $100 million seed round that was co-led by Maverick Silicon and Mayfield. The round was announced on September 17 and also included StepStone Group, Stanford University, and Qualcomm Ventures, among others.  
    • Groq, an AI inference company, raised a $750 million Series D-3 round that valued the company at nearly $6.9 billion. The round was announced on September 17 and was led by Disruptive.  
    • AI training startup Invisible Technologies was valued at $2 billion after a $100 million fundraise that was announced on September 16. The raise was led by Vanara Capital with participation from Greycroft, Tallwoods Capital, and Freestyle Capital, among others.  
    • Cognition AI, the creator of vibe-coding agent Devin, raised a $400 million Series C round that was announced on September 8. The round was led by Founders Fund and valued the company at $10.2 billion. 
    • AI Infrastructure startup Baseten raised a $150 million Series D round that valued the company at $2.1 billion. The September 5 round was led by Bond with participation from CapitalG, IVP and Spark Capital, among others.  
    • Bret Taylor’s customer service AI agent platform Sierra raised $350 million in a round led by Greenoaks Capital. This fundraise was announced on September 4 and valued Sierra at more than $10 billion.  
    • You.com, a personalized AI search engine, raised a $100 million Series C round led by Cox Enterprises. The round was announced on September 3 and valued the company at $1.5 billion.  
    • AI research lab Anthropic raised its second round of 2025 in September. Anthropic announced a $13 billion Series F round on September 2 that valued the company at $183 billion. The round was led by Iconiq, Fidelity, and Lightspeed.  

    August

    • Healthcare and housing automation platform EliseAI raised $250 million in a Series E round that valued the startup at $2.2 billion. The round, which was announced on August 20, was led by Andreessen Horowitz.
    • Decart, an AI research lab, raised $100 million at a $3.1 billion valuation. The round included Sequoia Capital, Benchmark, and Zeev Ventures, among others, and was announced on August 7.

    July

    • Generative media platform Fal raised a $125 million Series C round led by Meritech Capital Partners. The company announced the round, which values Fal at $1.5 billion, on July 31. Salesforce Ventures, Shopify Ventures, Google AI Futures Fund, and others joined the round.
    • Five-year-old Ambience Healthcare, which is building an AI healthcare operating system, raised a $243 million Series C round that was led by Oak HC/FT and Andreessen Horowitz. Kleiner Perkins, OpenAI Startup Fund, Smash Capital, and others also participated in the round.
    • Reka AI, an AI research lab, raised $110 million in a round that included Snowflake and Nvidia. The Series B round was announced on July 22 and values the company at $1 billion.
    • AI research lab Thinking Machines Lab confirmed that it raised $2 billion on July 15. This sizable seed round was led by Andreessen Horowitz with participation from Nvidia, Accel, and AMD, among others. The round values the company at $12 billion.
    • Cambridge, Massachusetts-based OpenEvidence, which is building an AI-powered search tool for clinicians, raised $210 million at a $3.5 billion valuation. The Series B round was announced on July 15 and was led by Kleiner Perkins and GV.
    • Harmonic, which is building a mathematical reasoning engine, raised a $100 million Series B round led by Kleiner Perkins. The round was announced on July 10 and values the company at $875 million.

    June

    • Healthcare AI unicorn Abridge announced it raised a $300 million Series E round that values the company at $5.3 billion. The round was led by Andreessen Horowitz with Khosla Ventures participating. It was the company’s second round of 2025.
    • Harvey, which builds AI tools for the legal industry, announced it raised its second $300 million round of 2025 on June 23. This latest Series E round was co-led by Kleiner Perkins and Coatue and brings the company’s valuation to $5 billion.
    • Healthcare AI startup Tennr announced it raised a $101 million Series C round led by IVP with participation from Lightspeed Venture Partners, GV, and Andreessen Horowitz, among others. The round values the company at $605 million.
    • Enterprise search startup Glean continues to rake in cash. The company announced a $150 million Series F round on June 10, led by Wellington Management with participation from Sequoia, Lightspeed Venture Partners, and Kleiner Perkins, among others. Glean is now valued at $7.25 billion.
    • Anysphere, the AI research lab behind AI coding tool Cursor, raised a sizable $900 million Series C round that values the company at nearly $10 billion. The round was led by Thrive Capital with participation from Andreessen Horowitz, Accel, and DST Global.

    May

    • AI data labeling startup Snorkel AI announced a $100 million Series D round on May 29, valuing the company at $1.3 billion. The round was led by Addition with participation from Prosperity7 Ventures, Lightspeed Venture Partners, and Greylock.
    • LMArena, a popular, community-driven benchmarking tool for AI models, raised a $100 million seed round that valued the startup at $600 million. The round was announced on May 21 and was co-led by Andreessen Horowitz and UC Investments. Lightspeed Venture Partners, Kleiner Perkins, and Felicis also participated, among others.
    • Las Vegas-based AI infrastructure company TensorWave announced a $100 million Series A round on May 14. The round was co-led by Magnetar Capital and AMD Ventures with participation from Prosperity7 Ventures, Nexus Venture Partners, and Maverick Silicon.

    April

    • SandboxAQ closed a $450 million Series E round on April 4 that valued the AI model company at $5.7 billion. The round included Nvidia, Google, and Bridgewater Associates founder Ray Dalio among other investors.
    • Runway, which creates AI models for media production, raised a $308 million Series D round that was announced on April 3, valuing the company at $3 billion. It was led by General Atlantic. SoftBank, Nvidia, and Fidelity also participated.

    March

    • AI behemoth OpenAI raised a record-breaking $40 billion funding round that valued the startup at $300 billion. This round, which closed on March 31, was led by SoftBank with participation from Thrive Capital, Microsoft, and Coatue, among others.
    • On March 25, Nexthop AI, an AI infrastructure company, announced that it had raised a Series A round led by Lightspeed Venture Partners. The $110 million round also included Kleiner Perkins, Battery Ventures, and Emergent Ventures, among others.
    • Cambridge Massachusetts-based Insilico Medicine raised $110 million for its generative AI-powered drug discovery platform as announced on March 13. This Series E round valued the company at $1 billion and was co-led by Value Partners and Pudong Chuangtou.
    • AI infrastructure company Celestial AI raised a $250 million Series C round that valued the company at $2.5 billion. The March 11 round was led by Fidelity with participation from Tiger Global, BlackRock, and Intel CEO Lip-Bu Tan, among others.
    • Lila Sciences raised a $200 million seed round as it looks to create a science superintelligence platform. The round was led by Flagship Pioneering. The Cambridge, Massachusetts-based company also received funding from March Capital, General Catalyst, and ARK Venture Fund, among others.
    • Brooklyn-based Reflection.Ai, which looks to build superintelligent autonomous systems, raised a $130 million Series A round that values the 1-year-old company at $580 million. The round was led by Lightspeed Venture Partners and CRV.
    • AI coding startup Turing closed a Series E round on March 7 that valued the startup, which partners with LLM companies, at $2.2 billion. The $111 million round was led by Khazanah Nasional with participation from WestBridge Capital, Gaingels, and Sozo Ventures, among others.
    • Shield AI, an AI defense tech startup, raised $240 million in a Series F round that closed on March 6. This round was co-led by L3Harris Technologies and Hanwha Aerospace, with participation from Andreessen Horowitz and the US Innovative Technology Fund, among others. The round valued the company at $5.3 billion
    • AI research and large language model company Anthropic raised $3.5 billion in a Series E round that valued the startup at $61.5 billion. The round was announced on March 3 and was led by Lightspeed with participation from Salesforce Ventures, Menlo Ventures, and General Catalyst, among others.

    February

    • Together AI, which creates open source generative AI and AI model development infrastructure, raised a $305 million Series B round that valued the company at $3.3 billion. The February 20 round was co-led by Prosperity7 and General Catalyst with participation from Salesforce Ventures, Nvidia, Lux Capital, and others.
    • AI infrastructure company Lambda raised a $480 million Series D round that was announced on February 19. The round valued the startup at nearly $2.5 billion and was co-led by SGW and Andra Capital. Nvidia, G Squared, ARK Invest, and others also participated.
    • Abridge, an AI platform that transcribes patient-clinician conversations, was valued at $2.75 billion in a Series D round that was announced on February 17. The $250 million round was co-led by IVP and Elad Gil. Lightspeed, Redpoint, and Spark Capital also participated, among others.
    • Eudia, an AI legal tech company, raised $105 million in a Series A round led by General Catalyst. Floodgate, Defy Ventures, and Everywhere Ventures also participated in the round in addition to other VC firms and numerous angel investors. The round closed on February 13.
    • AI hardware startup EnCharge AI raised a $100 million Series B round that also closed on February 13. The round was led by Tiger Global with participation from Scout Ventures, Samsung Ventures, and RTX Ventures, among others. The Santa Clara-based business was founded in 2022.
    • AI legal tech company Harvey raised a $300 million Series D round that valued the 3-year-old company at $3 billion. The round was led by Sequoia and announced on February 12. OpenAI Startup Fund, Kleiner Perkins, Elad Gil, and others also participated in the raise.

    January

    • Synthetic voice startup ElevenLabs raised a $180 million Series C round that valued the company at more than $3 billion. It was announced on January 30. The round was co-led by Iconiq and Andreessen Horowitz. Sequoia, NEA, Salesforce Ventures, and others also participated in the round.
    • Hippocratic AI, which develops large language models for the healthcare industry, announced a $141 million Series B round on January 9. This round valued the company at more than $1.6 billion and was led by Kleiner Perkins. Andreessen Horowitz, Nvidia, and General Catalyst also participated, among others.

    This piece was updated on April 23, June 18, August 27, November 26 and January 19, 2026 to include more deals.

    Techcrunch event

    San Francisco
    |
    October 13-15, 2026

    This piece has been updated to remove that Abridge is based in Pittsburgh; the company was founded there.

    [ad_2]

    Rebecca Szkutak

    Source link

  • NFL-Related Accounts on Facebook Are Posting Some of the Most Shameless AI Slop Yet

    [ad_1]

    If you haven’t checked your Facebook account in a while, fear not, the spam accounts are still doing very well. Now with eerie and ever-advancing AI slop in their arsenal—and, lately, football fans to prey on.

    There’s a group of accounts on Facebook that claims to be a bunch of fan accounts for various National Football League teams. But a quick scroll through these pages, each sporting a couple thousand followers, reveals misinformation paired with a series of seemingly AI-generated photos. Based on the comment sections of these photos and the amount of likes some of them get, people are fully believing what they post.

    “After His Desire To Return To The Steelers Was Not Fulfilled, Instead Of Reacting With Anger Or Resentment, The Former Player Chose To Retire And Join The Pittsburgh Police Department To “Wear Pittsburgh Colors Once Again.” a Pittsburgh Steelers fan account with 11,000 followers claimed in a post earlier this week. The post does not mention the name of the so-called player but is accompanied by a seemingly AI-generated image of former football wide receiver Adam Thielen in police uniform. Thielen recently announced his retirement, and briefly played for the Steelers late last year. He has not shared any plans to join Pittsburgh law enforcement.

    Another such account, a Denver Broncos fan account with more than 6,000 followers called “Wild Horse Warriors,” found a victim not in a player, but in Broncos reporter Cody Roark. A post with an AI-generated image of Roark holding a child claimed that he had passed away following a domestic violence incident and left behind a 5-year-old child. Not only was Roark alive and well, he doesn’t even have kids to begin with.

    “Usually you see that happen to, like, high-profile celebrities,” Roark told The Denver Post. “For that to happen to me was just really weird.”

    The account was just created in November and has now been shut down by Meta after The Denver Post reached out for comment. In its two-month existence, the account reportedly disseminated a slew of misinformation posts about Broncos players as well, including a false claim that wide receiver Courtland Sutton refused to wear an LGBTQ+ solidarity armband during a game. But even though “Wild Horse Warriors” is now a thing of the past, similar accounts continue to proliferate on Facebook. One such account, called “Broncos Stampede Crew,” made the same LGBTQ+ armband claim about Broncos quarterback Bo Nix. The phone number attached to that account appears to be based in Vietnam.

    What do these accounts have to gain from fake AI-generated news about football players? While it’s not certain how these specific accounts operate, the pattern seems to fit what has long been utilized by Facebook spam accounts. Each post by these fake fan accounts links out to an article from a website that pretends to be a reputable news organization like “ESPNS” or “NCC News.”

    “Spam Pages largely leveraged the attention they obtained from viewers to drive them to off-Facebook domains, likely in an effort to garner ad revenue,” Harvard researchers wrote in a study from 2024. These websites are usually “heavily ad-laden content farm domains—some of which themselves appeared to consist of primarily AI-composed text.”

    Other pages could be trying to accumulate an audience and good standing with the algorithm by using these fake shock-value clickbait news stories first, before completely changing the purpose of the page.

    “It could be that these were nefarious pages that were trying to build an audience and would later pivot to trying to sell goods or link to ad-laden websites or maybe even change their topics to something political altogether,” Georgetown researcher Josh Goldstein told NPR in a 2024 interview about AI spam accounts on Facebook.

    [ad_2]

    Ece Yildirim

    Source link

  • Trump’s voice in a new Fannie Mae ad is generated by artificial intelligence, with his permission

    [ad_1]

    NEW YORK — What sounds like President Donald Trump narrating a new Fannie Mae ad actually is an AI-cloned voice reading text, according to a disclaimer in the video.

    The voice in the ad, created with permission from the Trump administration, promises an “all new Fannie Mae” and calls the institution the “protector of the American Dream.” The ad comes as the administration is making a big push to show voters it is responding to their concerns about affordability, including in the housing market.

    Trump plans to talk about housing at his appearance at the World Economic Forum in Davos, Switzerland, where world leaders and corporate executives meet this week.

    This isn’t the first time a member of the Trump family has used AI to replicate their voice, First Lady Melania Trump recently employed AI technology firm Eleven Labs to help voice the audio version of her memoir. It’s not known who cloned President Trump’s voice for the Fannie Mae ad.

    Last month, Trump pledged in a prime-time address that he would roll out “some of the most aggressive housing reform plans in American history.”

    “For generations, home ownership meant security, independence, and stability,” Trump’s digitized voice says in the one-minute ad aired Sunday. “But today, that dream feels out of reach for too many Americans not because they stopped working hard but because the system stopped working for them.”

    Fannie Mae and its counterpart Freddie Mac, which have been under government control since the Great Recession, buy mortgages that meet their risk criteria from banks, which helps provide liquidity for the housing market. The two firms guarantee roughly half of the $13 trillion U.S. home loan market and are a bedrock of the U.S. economy.

    The ad says Fannie Mae will work with the banking industry to approve more would-be homebuyers for mortgages.

    Trump, Bill Pulte, who leads the Federal Housing Finance Agency, and others have said they want to sell shares of Fannie Mae and Freddie Mac on a major stock exchange but no concrete plans have been set.

    Trump and Pulte have also floated extending the 30-year mortgage to 50 years in order to lower monthly payments. Trump appeared to back off the proposal after critics said a longer-term loan would reduce people’s ability to create housing equity and increase their own wealth.

    Trump also said on social media earlier this month that he was directing the federal government to buy $200 billion in mortgage bonds, a move he said would help reduce mortgage rates at a time when Americans are anxious about home prices. Trump said Fannie Mae and Freddie Mac have $200 billion in cash that will be used to make the purchase.

    Earlier this month, Trump also said he wants to block large institutional investors f rom buying houses, saying that a ban would make it easier for younger families to buy their first homes.

    Trump’s permission for the use of AI is interesting given that he has complained about aides in the Biden administration using autopen to apply the former president’s signature to laws, pardons or executive orders. An autopen is a mechanical device that is used to replicate a person’s authentic signature.

    However, a report issued by House Republicans does not include any concrete evidence that autopen was used to sign Biden’s name without his knowledge.

    [ad_2]

    Source link