Chrome on Android now offers a fresh way to digest information when your hands are busy or your eyes need a break.
A new update powered by Google Gemini can turn written webpages into short podcast-style summaries. Two virtual hosts chat about the content, making it feel easier to follow during your commute or while you multitask.
This upgrade builds on Chrome’s long-standing read-aloud tool, yet now adds a more natural and lively delivery. It does not work on every website, so some pages will still use the original word-for-word reading. When the AI option appears, though, the audio feels polished and smooth.
Below is how to try it on your Android phone right now.
Sign up for my FREE CyberGuy Report Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Make sure you have the newest Chrome version so the AI podcast feature works.(Cyberguy.com)
Update Chrome before you start
First, make sure Chrome is current in the Play Store by opening the Play Store, searching for Google Chrome and tapping Update if it appears. The AI podcast feature works with version 140.0.7339.124 or newer, so confirm you have at least that version installed. Once you finish the update, open Chrome and pick any webpage with text you want to hear.
Settings may vary depending on your Android phone’s manufacturer.
Open the More menu
Tap the More icon or the three vertical dots in the upper right corner. This reveals a set of options that control how Chrome displays or reads the page.
Select Listen to this page
Choose Listen to this page. You will see a small Generating AI playback banner at the bottom. The processing is fast, so you will not wait long.
Hear the AI hosts discuss the page
Chrome will start a mini podcast with two voices talking through the content. You can tap the playback bar to pause, rewind or jump ahead. The panel stays on screen and follows you as you scroll.
Switch to standard playback when you want
The AI audio keeps going even if you leave the webpage. If you prefer a traditional word-for-word readback, tap the AI playback icon in the lower left and pick Standard Playback.
Chrome begins creating the AI audio as soon as you tap the “Listen to this Page” option.(iStock)
This feature can make long articles easier to absorb when you are on the move. You get a quick, conversational rundown without having to read a full page. It also helps you revisit information faster since the controls work like any audio player. If you enjoy podcasts, this tool gives you a familiar way to stay informed without draining your attention.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.
Kurt’s key takeaways
Chrome’s AI podcast feature brings a new layer of convenience to Android. It saves time, reduces eye strain and turns everyday browsing into a hands-free audio experience. Since it still supports the standard read-aloud mode, you can switch back anytime.
Would you use AI hosts to read your favorite websites, or do you prefer the classic readback style? Let us know by writing to us at Cyberguy.com.
Using the new update powered by Google Gemini, you can change from the AI podcast to a simple word-for-word reading at any time.(“I’ve Had It” YouTube channel)
Sign up for my FREE CyberGuy Report Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Copyright 2025 CyberGuy.com. All rights reserved.
Kurt “CyberGuy” Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on “FOX & Friends.” Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
This week, editors Peter Suderman, Katherine Mangu-Ward, and Matt Welch are joined by associate editor Liz Wolfe to discuss President Donald Trump’s executive order blocking states from enforcing their own artificial intelligence regulations. The panel debates whether a single national framework for AI is necessary to keep American tech companies competitive or whether it represents a serious blow to federalism. They also examine the White House potentially reclassifying marijuana as a Schedule III drug and what that change could mean for the cannabis industry, tax policy, and federal drug enforcement.
The editors then turn to mass shootings in Australia and at Brown University, including the actions of a bystander credited with saving lives at Bondi Beach, and what these incidents suggest about gun control debates. They discuss the U.S. seizure of a Venezuelan oil tanker and threats of land strikes against the Nicolás Maduro regime, and cover the conviction of Hong Kong media tycoon Jimmy Lai under China’s national security law and what it signals for press freedom and U.S.-China relations. A listener asks whether modern socialism reflects moral aspirations that could be redirected toward liberty rather than centralized power.
0:00—Trump blocks states from regulating AI
10:31—Reclassifying marijuana as a Schedule III drug
Sen. Chuck Schumer urges the FTC to investigate Instacart’s pricing practices
Grocery prices reportedly varied by as much as 23 percent between shoppers
Instacart says retailers control pricing and randomized pricing tests follow “strict guardrails”
An investigation by Consumer Reports and the Groundwork Collaborative alleges that Instacart uses artificial intelligence to charge different shoppers on Long Island and elsewhere different prices for the same items. Citing those findings, U.S. Sen. Charles Schumer is urging the Federal Trade Commission to intervene.
“When a shopper fills their grocery cart whether in real life or digitally, they should trust that they are being treated fairly and that prices are transparent,” Schumer said in a news release.
“What we are seeing more and more of is that companies like Instacart are using artificial intelligence to rip off consumers by charging different shoppers different prices for the same exact items,” he said.
“This is jacking up grocery costs across New York City, Long Island, and across the nation,” he added. “So, today, I am sounding the alarm on this predatory practice and demanding the federal government take new action to protect families from this shakedown pricing.”
The investigation comes at a time when consumers are grappling with rising grocery costs, with 71 percent saying they are spending more this year than last year, according to an ABC News/Washington Post/Ipsos poll published in November.
Last week, President Donald Trump issued an executive order establishing task forces to examine potential price-fixing across the food supply chain.
Meanwhile, the Consumer Report and Groundwork Collaborative investigation release last week alleged that in some instances grocery prices varied by as much as 23 percent from shopper to shopper.
Instacart, meanwhile, said in a Dec. 9 blog that it is “doubling down on affordability and convenience.”
The company said in the blog that
“While we have made real progress working with our retail partners to drive affordability through loyalty integrations, same-as-in-store pricing, and more – retail partners on our platform control their pricing strategies. Some choose to apply online markups to help offset the cost of providing same-day delivery to customers. To ensure customers can make the most informed choices when buying groceries from their favorite retailers, we display every retailer’s pricing policy on their Instacart storefront so customers know when prices may differ from in-store and can easily compare across retailers.
Additionally, just as retailers have long tested prices in physical stores to understand what resonates with customers, a small subset of our retail partners – 10 U.S. retail partners that already choose to apply markups – use Instacart’s Eversight technology to run limited online pricing tests. These short-term, randomized tests help retail partners understand category-level price sensitivity so they can sustainably invest in lower prices where consumers care most. For example, as a result of these tests, some consumers may see slightly lower prices on essentials like milk or bread, and slightly higher prices on items like specialty snacks or craft beverages.”
The company added that “these short-term, randomized tests follow strict guardrails.”
Baseball teams have long searched for a way to study the entire swing without sensors or complex lab setups. Today, a new solution is entering the picture. Theia, an AI biomechanics company, debuted a commercially available video-only system that analyzes bat trajectory and full-body biomechanics together. This new approach works in real baseball environments and needs no reflective body markers, wearables or special equipment.
The system has been field-tested by Driveline Baseball and the San Diego Padres Biomechanics Lab, and the tests show it delivers high-quality results in both cages and on the field.
Sign up for my FREE CyberGuy Report Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
Theia unveils a video-only biomechanics system that tracks a hitter’s full swing without sensors or lab gear.(Photo by Lachlan Cunningham/Getty Images)
A new chapter in baseball biomechanics
Theia’s platform relies on deep-learning models trained on millions of movement data points. It captures the full 3D bat path, attack angle, sequencing and body motion in one workflow that teams can run with standard high-speed video. This makes advanced biomechanics more accessible to coaches and players who train in normal environments.
Dr. Arnel Aguinaldo of the PLNU Biomechanics Lab tested the system with the Padres. He said, “Theia’s markerless technology represents a breakthrough in how we capture and analyze swing mechanics. It removes the barriers of traditional setups, letting us gather quality swing data directly from the field or the cage. That’s a game changer for both research and applied development.”
Independent testing across more than 2,000 swings showed median bat-plane angle differences of less than 3 degrees compared with marker-based systems. As a result, teams can evaluate roster-sized groups in routine cage or field sessions without slowing players down.
Why video-only tracking works in real baseball settings
Many existing tools rely on sensors or suits that can change how an athlete moves. Marcus Brown, CEO of Theia, explained to CyberGuy why video-only tracking matters.
“Using only video means teams get lab-grade biomechanics data that previously required a full lab setup, but without special suits, reflective markers, or hardware mounted to the bat or the player,” he said.
The system runs in the background once cameras are placed and calibrated. Coaches record sessions as usual, and the analysis processes automatically. Because of this, training routines stay the same, and players move naturally.
Brown added, “Until now, full swing analysis meant choosing between bat-only tools or biomechanics labs that couldn’t scale. Our new markerless technology changes that. Teams can now see the complete swing picture for every hitter using one system in an environment that matches their individual needs.”
How AI bat and body tracking improves player performance
A complete swing view gives coaches the chance to link body motion to bat results. Brown described why this matters for player development.
“Theia’s new bat tracking feature helps players improve because it gives coaches a complete and more accurate picture of the swing. Many tools today either measure the bat or the body, and many rely on wearables or sensors that can influence how an athlete moves,” Brown said. “When coaches can connect a player’s sequencing, posture, timing, and rotation to the bat’s path, speed, and contact quality, they can identify the specific movement patterns that drive results. That makes mechanical adjustments more targeted and much easier to track over time, leading to more consistent and meaningful improvements.”
Driveline Baseball and the Padres Biomechanics Lab report strong accuracy from Theia’s markerless tracking tests.(Photo by Matt McClain/The Washington Post via Getty Images)
What players experience when teams use Theia’s system
Players will not need to attach anything to the bat or their bodies. They swing in their regular training spaces without changing behavior. Brown said, “For athletes, the biggest change is the level of precise personalized feedback they get. Coaches can isolate whether an issue is coming from sequencing, posture, timing, or how the hitter is delivering the barrel to the ball. That level of detail helps translate mechanical work in the cage into more consistent, reliable results in the field.”
Independent testing shows consistent bat and body data
Driveline Baseball and the PLNU x Padres Biomechanics Lab tested the system in both professional and collegiate settings. Brown said, “Our work with Driveline and the PLNUxPadres’ Biomechanics Lab showed the system could deliver high-quality bat-and-body data in the same environments where hitters actually train. What those tests demonstrated was consistency: the ability to capture the full swing automatically, link the bat and body with the precision needed for player development, and fit seamlessly into a normal training session.”
Why Theia’s system fits seamlessly into normal cage sessions
Sports tech can create workflow friction, but Theia aims to avoid that. Brown said, “We designed the system so coaches can use it without changing anything about their normal training routine. Once the cameras are in place, coaches simply record the session the same way they normally would, and the analysis happens automatically in the background.”
There are no extra steps, no equipment put on the players, and no training interruptions.
“Player development is ultimately about understanding what drives performance, and this technology gives coaches a far clearer way to see that,” he said. “When you can connect a player’s movement to the result of the swing with objective repeatable data, you can build training plans that are far more individualized and precise.”
He also added: “This work builds on more than a decade of research and over 50 peer-reviewed validation studies focused on highly accurate markerless human motion tracking. It reflects where the field as a whole is headed toward integrated markerless solutions that give athletes and coaches clearer insight with far less friction.”
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.
Theia’s new bat and body tracking system reshapes how baseball teams study movement. It gives coaches deeper clarity, provides athletes with natural training conditions, and removes the hardware hurdles that limited biomechanics in the past. Fans may also see long-term effects. This level of detail can influence how hitters develop power, attack angles and timing. Young players may gain personalized training guides that shape better habits earlier in their careers. As video-driven AI expands across sports, tools like this give teams more ways to understand performance.
If your favorite team had access to this level of swing insight, how do you think it would change their lineup development strategy? Let us know by writing to us at Cyberguy.com.
Sign up for my FREE CyberGuy Report Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
WASHINGTON — As the rest of the world rushes to harness the power of artificial intelligence, militant groups also are experimenting with the technology, even if they aren’t sure exactly what to do with it.
For extremist organizations, AI could be a powerful tool for recruiting new members, churning out realistic deepfake images and refining their cyberattacks, national security experts and spy agencies have warned.
Someone posting on a pro-Islamic State group website last month urged other IS supporters to make AI part of their operations. “One of the best things about AI is how easy it is to use,” the user wrote in English.
“Some intelligence agencies worry that AI will contribute (to) recruiting,” the user continued. “So make their nightmares into reality.”
IS, which had seized territory in Iraq and Syria years ago but is now a decentralized alliance of militant groups that share a violent ideology, realized years ago that social media could be a potent tool for recruitment and disinformation, so it’s not surprising that the group is testing out AI, national security experts say.
For loose-knit, poorly resourced extremist groups — or even an individual bad actor with a web connection — AI can be used to pump out propaganda or deepfakes at scale, widening their reach and expanding their influence.
“For any adversary, AI really makes it much easier to do things,” said John Laliberte, a former vulnerability researcher at the National Security Agency who is now CEO of cybersecurity firm ClearVector. “With AI, even a small group that doesn’t have a lot of money is still able to make an impact.”
Militant groups began using AI as soon as programs like ChatGPT became widely accessible. In the years since, they have increasingly used generative AI programs to create realistic-looking photos and video.
When strapped to social media algorithms, this fake content can help recruit new believers, confuse or frighten enemies and spread propaganda at a scale unimaginable just a few years ago.
Such groups spread fake images two years ago of the Israel-Hamas war depicting bloodied, abandoned babies in bombed-out buildings. The images spurred outrage and polarization while obscuring the war’s actual horrors. Violent groups in the Middle East used the photos to recruit new members, as did antisemitic hate groups in the U.S. and elsewhere.
Something similar happened last year after an attack claimed by an IS affiliate killed nearly 140 people at a concert venue in Russia. In the days after the shooting, AI-crafted propaganda videos circulated widely on discussion boards and social media, seeking new recruits.
IS also has created deepfake audio recordings of its own leaders reciting scripture and used AI to quickly translate messages into multiple languages, according to researchers at SITE Intelligence Group, a firm that tracks extremist activities and has investigated IS’ evolving use of AI.
Such groups lag behind China, Russia or Iran and still view the more sophisticated uses of AI as “aspirational,” according to Marcus Fowler, a former CIA agent who is now CEO at Darktrace Federal, a cybersecurity firm that works with the federal government.
But the risks are too high to ignore and are likely to grow as the use of cheap, powerful AI expands, he said.
Hackers are already using synthetic audio and video for phishing campaigns, in which they try to impersonate a senior business or government leader to gain access to sensitive networks. They also can use AI to write malicious code or automate some aspects of cyberattacks.
More concerning is the possibility that militant groups may try to use AI to help produce biological or chemical weapons, making up for a lack of technical expertise. That risk was included in the Department of Homeland Security’s updated Homeland Threat Assessment, released earlier this year.
“ISIS got on Twitter early and found ways to use social media to their advantage,” Fowler said. “They are always looking for the next thing to add to their arsenal.”
Lawmakers have floated several proposals, saying there’s an urgent need to act.
Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said, for instance, that the U.S. must make it easier for AI developers to share information about how their products are being used by bad actors, whether they are extremists, criminal hackers or foreign spies.
“It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors,” Warner said.
During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI.
Legislation that passed the U.S. House last month would require homeland security officials to assess the AI risks posed by such groups each year.
Guarding against the malicious use of AI is no different from preparing for more conventional attacks, said Rep. August Pfluger, R-Texas, the bill’s sponsor.
“Our policies and capabilities must keep pace with the threats of tomorrow,” he said.
BANGKOK (AP) — Shares fell Monday in Asia as China reported investment fell in November in the latest signal that demand in the world’s second largest economy remains weak. The retreat followed a dismal end to last week, when declines for superstar artificial-intelligence stocks knocked Wall Street off its record heights
Tokyo’s Nikkei 225 index shed 1.5% to 50,092.10, as investors wait to see if the Bank of Japan will raise its benchmark interest rate as expected this week.
The BOJ’s quarterly “tankan” survey of big manufacturers, released Monday, showed a slight improvement in sentiment among such businesses. The measure of those expressing optimism rose to 15 from 14 in the last quarter, the highest level in four years, the central bank said.
The index shows the percentage of companies reporting positive conditions minus the percentage reporting unfavorable ones. While the overall survey showed improvement, forecasts for the next quarter were less positive.
Japan’s economy contracted at a 2.3% annual pace in the July-September quarter, the first such decline in six quarters. An agreement between Japan and the U.S. over the level of President Donald Trump’s higher tariffs, limiting baseline import duties to 15%, has helped to reduce uncertainty for big automakers and electronics companies.
Analysts said the stronger results may sway the BOJ toward pressing ahead with a 0.25 percentage point rate hike that will take the key rate to 0.75%.
The Kospi in South Korea dropped 1.2%, to 4,117.68.
In Hong Kong, the Hang Seng declined 0.7% to 25,786.45. The Shanghai Composite index edged 0.1% higher, however, to 3,892.45.
China reported Monday that investment in fixed assets such as factory equipment and other infrastructure fell 2.6% in November from a year earlier, implying that such investments dropped 11.1% year-on-year in the first 11 months of the year.
Retail sales rose 4% in January-November from a year earlier, while factory output climbed 4.8%, the government said.
The latest data followed a high-level meeting of China’s Communist Party leadership last week that yielded no major policy shifts, and a pledge to continue to try to boost consumer spending and investment needed to drive higher domestic demand.
“Policy support should help drive a partial recovery in the coming months, but this probably won’t prevent China’s growth from remaining weak across 2026 as a whole,” Zichun Huang of Capital Economics said in a commentary.
Elsewhere in the region, Australia’s S&P/ASX 200 slipped 0.7% to 8,640.60 and Taiwan’s benchmark lost 1.1%.
The futures for the S&P 500 and the Dow Jones Industrial Average were up 0.3%.
On Friday, the S&P 500 fell 1.1% from its all-time high for its worst day in three weeks, closing at 6,827.41. The weakness for tech stocks yanked the Nasdaq composite down by a market-leading 1.7%, to 23,195.17.
The Dow gave back 0.5% to 48,458.05.
AI heavyweight Broadcom dragged the market lower and tumbled 11.4% even though the chip company reported a stronger profit for the latest quarter than analysts expected. Analysts called the performance solid, and CEO Hock Tan said strong 74% growth in AI semiconductor revenue helped lead the way.
The drop added to worries about the AI boom that flared a day before, when Oracle plunged nearly 11% despite likewise reporting a bigger profit for the latest quarter than analysts expected.
Chip maker Nvidia fell 3.3%, while Oracle fell another 4.5%.
Stocks of companies that depend on spending by U.S. consumers were relatively strong Friday, as two out of every five stocks within the S&P 500 rose. Oil prices eased this week, which could help ease people’s bills, and
In other dealings early Monday, U.S. benchmark crude oil gained 30 cents to $57.74 per barrel. Brent crude, the international standard, rose 29 cents to $61.41 per barrel.
The U.S. dollar slipped to 155.37 Japanese yen from 155.75 yen late Friday. The euro was unchanged at $1.1739.
Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
Stocks dipped lower on Friday as tech and AI companies came under pressure from President Trump. He signed an executive order on Thursday to stop state regulation of artificial intelligence, arguing that a patchwork set of rules could hold the U.S. back from dominating the competition. CBS News MoneyWatch correspondent Kelly O’Grady has more.
The Pentagon announces the launch of GenAI.mil, a military-focused artificial intelligence platform powered by Google Gemini.(Julia Demaree Nikhinson/AP)
NEWYou can now listen to Fox News articles!
Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.
IN TODAY’S NEWSLETTER:
– Pentagon launches military AI platform powered by Google Gemini for defense operations – Disney CEO defends massive AI deal, says creators won’t be threatened – Trump says every AI plant being built in US will be self-sustaining with their own electricity
WAR WIRED: The Pentagon is announcing the launch of GenAI.mil, a military-focused AI platform powered by Google Gemini. In a video obtained by FOX Business, Secretary of War Pete Hegseth said the platform is designed to give U.S. military personnel direct access to AI tools to help “revolutioniz[e] the way we win.”
TIMES A CHANGING: After Disney announced a $1 billion equity investment in OpenAI, CEO Bob Iger assured creators in an interview Thursday their jobs would not be threatened.
WATT WARS: President Donald Trump clapped back at a report that was just released about the global artificial intelligence arms race, which claimed China has more than double the electrical power-generation capacity of the United States.
President Donald Trump during a roundtable in the Cabinet Room of the White House in Washington, D.C., on Monday, Dec. 8, 2025. ( Yuri Gripas/Abaca/Bloomberg via Getty Images)
TECH OVER TREES: U.S. Energy Secretary Chris Wright was quoted in a piece on Thursday declaring that America’s top scientific priority is AI. While there is robust debate over how artificial intelligence will be regulated going forward and what safeguards will be mandatory, there is broad bipartisan agreement that this technology has the potential to change the way the world operates.
BABY STEPS: ‘Outnumbered’ panelists react to OpenAI CEO Sam Altman’s admission that he ‘cannot imagine’ raising his newborn son without help from ChatGPT.
INFRASTRUCTURE NOW: Former Sen. Kyrsten Sinema, I-Ariz., warned that the U.S. risks ceding global leadership on artificial intelligence to China, calling the AI race a matter of national security that the nation has “got to win.”
AGE OF MACHINES: Time magazine announced “Architects of AI” as its 2025 person of the year on Thursday, rather than picking a singular individual for the honor.
AI ON TRIAL: The heirs of an 83-year-old woman who was killed by her son inside their Connecticut home have filed a wrongful death lawsuit against ChatGPT maker OpenAI and its business partner Microsoft, claiming the AI chatbot amplified his “paranoid delusions.”
‘CUFFING SEASON’: California Gov. Gavin Newsom trolled President Donald Trump’s administration by posting an AI-generated video depicting Trump, War Secretary Pete Hegseth and White House deputy chief of staff for policy and Homeland Security advisor Stephen Miller, in handcuffs.
‘CLEAR GUIDELINES’: A bipartisan pair of House lawmakers introduced a bill on Wednesday to require federal agencies and officials to label any AI-generated content posted using official government channels.
WARTIME FOOTING: The Navy is warning that the United States must treat shipbuilding and weapons production with the urgency of a country preparing for conflict, with Navy Secretary John Phelan declaring that the sea service “cannot afford to stay comfortable” as it confronts submarine delays, supply-chain failures and a shipyard system he says is stuck in another era.
‘HIS OWN EGO’: Senate Minority Leader Chuck Schumer, D-N.Y., accused President Donald Trump on Tuesday of “selling out America” for announcing that the U.S. will allow Nvidia to export its artificial intelligence chips to China and other countries.
Senate Minority Leader Chuck Schumer, D-N.Y., accused President Donald Trump of “selling out America” by allowing Nvidia to export artificial intelligence chips to China.(Kevin Dietsch/Getty Images)
‘ACCELERATE INNOVATION’: White House science and technology advisor Michael Kratsios opened a meeting of G7 tech ministers by urging governments to clear regulatory obstacles to artificial intelligence adoption, warning that sweeping new rule books or outdated oversight frameworks risk slowing the innovation needed to unlock AI-driven productivity.
EASING FEARS: JPMorgan Chase CEO Jamie Dimon offered an optimistic outlook on artificial intelligence (AI), predicting the technology will not “dramatically reduce” jobs over the next year — provided it is properly regulated.
BOTS GONE ROGUE: Artificial intelligence is becoming smarter and more powerful every day. But sometimes, instead of solving problems properly, AI models find shortcuts to succeed.
This behavior is called reward hacking. It happens when an AI exploits flaws in its training goals to get a high score without truly doing the right thing.
Stay up to date on the latest AI technology advancements and learn about the challenges and opportunities AI presents now and for the future with Fox News here.
Mundane and boring products like heating-ventilation-air conditioning systems can sometimes see a surge in demand from an entirely new market and give HVAC stocks a boost. Comfort Systems (FIX) has soared after data centers consuming large amounts of power drove up the demand for its modular cooling systems. Shares of the data center play have surged roughly 140% year to…
President Donald Trump signed an executive order Thursday pressuring states not to regulate artificial intelligence.
Trump and some Republicans argue that the limited regulations already enacted by states, and others that might follow, will dampen innovation and growth for the technology.
Critics from both political parties — as well as civil liberties and consumer rights groups — worry that banning state regulation would amount to a favor for big AI companies, who enjoy little to no oversight and that Trump’s effort oversteps the limits of presidential power.
Here’s what to know about states’ AI regulations and what Trump signed.
Four states — Colorado, California, Utah and Texas — have passed laws that set some rules for AI across the private sector, according to the International Association of Privacy Professionals.
Those laws include limiting the collection of certain personal information and requiring more transparency from companies.
The laws are in response to AI that already pervades everyday life. The technology helps make consequential decisions for Americans, including who gets a job interview, a home loan and even certain medical care. But research has shown that it can make mistakes in those decisions, including by prioritizing a particular gender or race.
“With a human, I can say, ‘Hey, explain, how did you come to that conclusion, what factors did you consider?’” said Calli Schroeder, director of the AI & Human Rights Program at the public interest group EPIC. “With an AI, I can’t ask any of that, and I can’t find that out. And frankly, half the time the programmers of the AI couldn’t answer that question.”
States’ more ambitious AI regulation proposals require private companies to provide transparency and assess the possible risks of discrimination from their AI programs.
Beyond those more sweeping rules, many states have regulated parts of AI: barring the use of deepfakes in elections and to create nonconsensual porn, for example, or putting rules in place around the government’s own use of AI.
The executive order directs federal agencies to identify burdensome state AI regulations and pressure states not to enact them by withholding federal funding, such as for broadband, or challenging the state laws in court.
It would also begin a process to develop a lighter-touch regulatory framework for the whole country that would override state AI laws.
It does not seek to preempt some laws states have adopted, such as AI-related child safety protections and provisions on how state governments can procure and use AI.
Trump argues that the patchwork of regulations across 50 states impedes AI companies’ growth and allows China to catch up to the U.S. in the AI race. The president has also said state regulations are producing “Woke AI.”
Groups that advocate for consumer rights and tech regulation are sounding the alarm on Trump’s executive order, arguing it allows Big Tech “to operate in a vacuum of accountability,” as the nonprofit Issue One put it.
“After spending millions of dollars on lobbying — including massive donations for the new White House ballroom — Big Tech has successfully leveraged those around the president to pass a federal moratorium that aims to wipe out bipartisan AI safeguards passed in both blue and red states,” said Liana Keesing, Issue One’s policy lead for technology reform. AI-driven scams and discriminatory price-fixing are just some of the harms the state laws are trying to prevent, she added.
Children’s advocacy groups also expressed deep concerns for the generations that are growing up in an AI-saturated world.
“A generation of parents watched their kids become the collateral damage of our failure to regulate social media, and now this moratorium threatens to repeat that tragedy with AI,” said Shelby Knox, director of online safety campaigns at ParentsTogether Action.
There’s a good chance it ends up being part of a court battle.
Last month, when the order was in draft form, Colorado Attorney General Phil Weiser sent a letter to congressional leaders warning that the state would sue if the order was signed.
And Thursday, California state Sen. Scott Wiener, who wrote the AI safety bill signed in that state this year, said in a statement: “If the Trump Administration tries to enforce this ridiculous order, we will see them in court.”
In Connecticut, Democratic Senate President Pro Tempore Martin Looney said Friday the state plans to press ahead with broad regulation even after the order.
In May, attorneys general for 40 states and territories — Republicans and Democrats — signed a letter to congressional leaders calling on them not to pass a provision blocking state AI regulation for 10 years.
Shatorah Roberson, a senior policy counsel at the Lawyers’ Committee for Civil Rights Under Law, says that in this case, it’s clear that the president does not have the authority to preempt state laws.
“This is an issue of our democracy and the president through executive order can’t just preempt state laws without going through the democratic process,” she said.
__
Associated Press writers Mead Gruver, Susan Haigh, Geoff Mulvihill, Trân Nguyễn and Barbara Ortutay contributed to this article.
In a windowless room at Denver police headquarters on a recent Thursday afternoon, Officer Chris Velarde activated a police drone to investigate a potential car break-in.
Officer Chris Velarde flies a drone and monitors live footage from its camera from Denver Police Department headquarters on Thursday, Dec. 4, 2025. (Photo by Hyoung Chang/The Denver Post)
Several floors above, the drone launched from the roof and flew itself — essentially on autopilot — to the site of the call, reported as a man breaking into a car with a crowbar near the Santa Fe Arts District.
The drone whizzed along, 200 feet up, in a straight line across blocks, buildings and streets during the roughly mile-long flight from police headquarters at 1331 Cherokee St. Velarde didn’t pick up the Xbox video-game controller that manually pilots the drone until it reached the area of the call. Then he took control and trolled the block for the supposed break-in, watching live video footage transmitted from the drone on his computer monitor as he flew.
After a few moments, Velarde spotted two people jiggering the passenger-side window of a vehicle. He zoomed in on the pair, and on the car’s license plate. He ran the plate to see whether the vehicle was stolen; it was not. The people on the street didn’t look up. They didn’t seem to know a police drone was hovering above them, that they were being recorded and watched a mile away by officers and a reporter.
Two more people joined the pair at the vehicle’s window and Velarde made the call — this didn’t look like a vehicle break-in. More likely, someone had just locked their keys in their car. He cleared the call with 911 dispatchers and told them there was no need to send an officer to the scene. Then he sent the drone back to headquarters; it flew itself to the rooftop dock, landing autonomously on a platform stamped with bright blue-and-yellow QR codes.
The Denver Police Department began testing drones as first responders — that is, sending them out on 911 calls — in mid-October after signing up for two free pilot programs from rival drone companies Skydio and Flock Safety. The effort has raised concerns among privacy advocates, Denver politicians and the city’s police oversight group, particularly regarding the department’s contract with Flock, the company behind the city’s controversial network of automated license-plate readers.
Police see the drones as a way to speed up call-response times and provide more information to officers as they arrive on scene, improving, they say, both public safety and officer safety. If a drone arrives at a scene before officers, and the drone pilot can tell police on the ground that the man with the knife actually put down the weapon before the officers arrived, that helps everyone, police said.
“The more knowledge, information and intelligence that we can provide our officers on the ground, the better methods that they can use to respond to certain situations, which may cause them to not escalate unnecessarily,” said Cmdr. Clifford Barnes, who heads the department’s Cyber Bureau.
Critics say the eyes in the sky raise serious privacy concerns both with how the drones and the data they collect are used now, and with how they might be used in the future as the technology rapidly changes. They worry that the drones could create a citywide surveillance network with few legal guardrails, that the footage they collect will be used to train private companies’ AI algorithms or that police will misuse emerging AI capabilities, like facial recognition.
“When it comes to the decision of, are we going to use this thing that could potentially increase public safety, that will erode privacy rights — no one should get to decide the public is willing to give away our constitutional rights, except the people,” said Anaya Robinson, public policy director at the American Civil Liberties Union of Colorado. “And when law enforcement makes that decision for us, it becomes extremely problematic.”
Almost 300 drone flights in 55 days
So far, only Skydio drones have flown as first responders over Denver.
Denver police signed a zero-dollar contract with Flock — without public announcement — in August for a year-long pilot of drones as first responders, but the company has yet to set up its autonomous aircraft. Skydio, on the other hand, moved quickly to get drones in the air after Denver police in October signed a contract to test up to four of the company’s drones during a free six-month pilot.
Skydio’s drones can reach about a 2-mile radius around the Denver police headquarters. The company advertises a top speed of 45 mph with 40 minutes of flight time; Denver pilots have found the drones average around 28 mph and around 25 minutes of battery life per flight.
From the first flight on Oct. 15 through Tuesday, two Skydio drones flew 297 times, according to data provided by Denver police in response to an open records request. Most of those flights — 199 — were to answer calls for service; another 82 were training flights, according to the data.
Skydio drones also surveilled events — a function police call “event overwatch” — seven times, the police data shows. Overwatch might include flying over a protest to track where the demonstrators are headed and alert officers on the ground for traffic control, Barnes said. (The police data showed that all seven overwatch flights occurred on Oct. 18, the day of Denver’s “No Kings” rally.)
The drones flew to 29 calls about a person with a weapon, 21 disturbances, 20 assaults in progress, a dozen suspicious occurrences and 11 hold-up alarms, according to data from Denver’s 911 dispatch records. The drones also flew to 39 other types of calls, including reports of prowlers, fights, burglaries, domestic violence and suicidal people.
The most common outcome for a call was that the officers were unable to locate an incident or the suspect was gone by the time the drone or police officers arrived, the records show. Across about 200 calls for service that included drone responses, police made 22 arrests and issued one citation, the dispatch data shows.
When responding to calls for service, the drones reached the scene before patrol officers 88% of the time, the police data shows. A drone was the sole police response in 80 of 199 calls for service, or about 40% of the time.
Barnes said answering calls with solely a drone improves police efficiency.
“If an officer on the ground doesn’t need to respond, and the drone pilot is comfortable with cancelling the other officers coming, we can assign those officers to more important, more pressing matters, so call-response times come down,” he said.
That approach raises questions about what the drones (which are equipped with three different cameras and a thermal imager) can and can’t see, and how officers are making decisions about call responses without actually speaking to anyone at the scene, the ACLU’s Robinson said.
“Humans have bias,” he said. Drone pilots might be more inclined to send officers to a potential car break-in in a low-income neighborhood and more likely not to in a higher-income neighborhood, he said. Or they might miss something from above that they could have seen at street level.
Officer Chris Velarde flies a drone and monitors live footage from its camera from Denver Police Department headquarters on Thursday, Dec. 4, 2025. (Photo by Hyoung Chang/The Denver Post)
But minimizing in-person police interactions with residents, particularly in over-policed neighborhoods, can also be a positive, said Julia Richman, chair of Denver’s Citizen Oversight Board, which provides civilian oversight of the police department.
“Where my head goes is the other outcome, where they roll up on those people who are trying to get keys out of the car and then they shoot them,” she said. “Actually, (the drone-only response) seems like a really good outcome.”
The oversight group has talked with Denver police over the last two years about developing its drone program, she said. The department created a seven-page policy to guide their use; the policy aims to ensure “civil rights and reasonable expectations of privacy are a key component of any decision made to deploy” a drone.
But Richman said she was surprised by aspects of the police department’s pilot programs despite the ongoing conversations with department leadership.
“What was never discussed, not once, was the idea of a third party running those drones or those drones being autonomous,” she said, referring to the drone companies. “What has changed with this latest pilot is the key features and key aspects that would create public concern had never been discussed with us.”
Both Flock and Skydio advertise autonomous features powered by artificial intelligence. Skydio uses AI for its autonomous flight paths, obstacle avoidance and tracking people and cars.
Flock, which also offers autonomous flight, advertises its drones as integrating with its automated license-plate readers. The license-plate readers — there are more than 100 around Denver — automatically photograph every car that passes by them. If a license plate is stolen or involved in a crime, the license-plate readers alert police within seconds.
Police Chief Ron Thomas and Mayor Mike Johnston defended the surveillance network as an invaluable crime-solving tool this year against mounting public discontent around how much data the machines collected and how that data was used — particularly around sharing information with the federal government for the purposes of immigration enforcement.
That privacy debate around Flock’s license plate readers unfolded in communities across Colorado and nationwide this year. In Loveland, the police department for a time allowed U.S. Border Patrol agents to access its Flock cameras before blocking that access. In Longmont, councilmembers voted Wednesday to look for alternatives to replace the 20 Flock license plate readers in that city.
When Denver City Council members, some driven by privacy concerns, voted against continuing Flock’s license-plate readers in May, Johnston extended the surveillance anyway through a free five-month contract extension with Flock in October that did not require approval from the council. Against that backdrop, Denver police quietly signed on for Flock’s drone pilot in August.
Barnes said the police department will not use any license-plate reader capabilities available on Flock drones. Such a feature would constitute “random surveillance,” which is prohibited under the department’s drone policy. The drones never fly without an officer’s direct involvement, he added.
The blue 2-mile-radius line seen on a computer screen shows the range of Denver police Skydio drones flown from Denver Police headquarters. (Photo by Hyoung Chang/The Denver Post)
The policy also prohibits drones from filming anywhere a person has a reasonable expectation of privacy unless police have a warrant, and says officers should take “reasonable precautions … to avoid inadvertently recording or transmitting images of areas where there is a reasonable expectation of privacy.”
Denver police do receive search warrants to fly drones for particular operations outside of the drones-as-first-responder program. In October, a Denver police detective sought and received a warrant to fly a drone over a shooting suspect’s home in Cherry Hills Village to check whether a truck involved in the shooting was parked at the wooded property.
The warrant noted that when driving home from anywhere outside Cherry Hills Village, the suspect could not reach his house without passing by Flock license-plate readers, and that photos from those license-plate readers suggested the truck was at the property.
Denver Councilwoman Serena Gonzales-Gutierrez and Councilman Kevin Flynn both told The Post they were not aware of the police department’s Skydio drone pilot before hearing about it from the newspaper, even though they are both on the city’s Surveillance Technology Task Force. The new group began meeting in August largely to consider Flock license-plate readers, as well as other types of surveillance technology, Gonzales-Gutierrez said.
“We haven’t talked about it in the task force, and the charge of our work in the task force is to come up with those guardrails that need to be put in place for these types of technology being utilized by law enforcement,” she said. “I feel like they just keep moving on without us being able to complete our work.”
Police don’t need permission from the City Council to carry out the pilot programs, Gonzales-Gutierrez said, but she was disappointed by the lack of communication and collaboration from the department.
Flynn sees the potential of police drones, particularly in speeding up officer response times, which can sometimes be dismal in the far-flung areas of his southwestern district.
“If a drone can get there to a 911 call and it can help an officer at headquarters assess the scene before a staffed car could get there, I would love that,” he said.
But he wants to be sure they are used in a way that respects residents’ rights. He would not support using the drones for general patrolling or surveillance, he said.
“This pilot is an excellent opportunity to test all of those boundaries and see if there are ways to operate a system that can be very useful for public safety without crossing boundaries,” he said.”…And maybe we don’t keep using them. That is the point of a pilot.”
‘These are flying cops’
The Skydio drones film from the moment they are launched until they drop in to land.
When the drone is on its way to a call — flying at the 200-foot altitude limit set by the Federal Aviation Administration — its cameras remain pointed at the horizon. In Denver’s denser neighborhoods, the Skydio drones at that height flew among buildings, sometimes at eye-level with balconies, offices and apartment windows, according to video of four flights obtained by The Post through an open records request.
“What if someone is in their apartment unit in one of these giant buildings and they’re changing, and they have their window open because they’re way up high and they don’t think anyone is watching them?” Gonzales-Gutierrez said. “That is crazy.”
The drones buzzed over rooftop decks, balconies and elevated apartment complex pools, the videos show. On one trip, a drone flew past the Colorado State Capitol Building, recording three people on a balcony on the tower under the building’s golden dome. Another time, the drone pilot zoomed in on a license plate so tightly that the car’s small, decorative “LOVE” decal was clearly visible.
Flynn noted that a 200-foot altitude would put the drones well above most of the homes in his less-dense district, and that people on their porches or balconies aren’t somewhere private.
“If someone is out on a balcony, sitting there reading a book… generally speaking, if you are out in public there’s no expectation of privacy,” he said.
The Skydio drones recorded about 54 hours of footage in the first eight weeks of their operation, according to data provided by the police department. Police leadership opted to have the drones’ cameras on and recording whenever the drone is in flight to boost transparency about how the drones are being used, Barnes said.
“It makes sense to keep the camera rolling,” Barnes said. “Then, if there’s an allegation, we just make sure that footage is recorded and treated like digital evidence, uploaded to the evidence management platform so it could be reviewed as necessary. We’re just trying to make sure we establish that balance, being as transparent as possible.”
Drone footage unrelated to criminal investigations is automatically deleted after 60 days, he said. While it’s retained, it’s stored in an evidence system that keeps a record of anyone who looks at it. The drone unit’s sergeant, Brent Kohls, also audits the flight reports monthly. (Footage used in criminal investigations will be on the same retention schedule as body-worn camera footage, police said.)
Kohls noted it would be unusual for the drone footage to be viewed only by the pilot. The feed is often displayed on the wall of the police department’s Real-Time Crime Center as it comes in.
ACLU attorney Nathan Freed Wessler, deputy director of the organization’s speech, privacy and technology project, would rather see police keep the recording off while flying a drone to a call, even if the camera is still livestreaming to police headquarters. In that scenario, a drone pilot might still see a woman tanning topless on her rooftop pool deck, he said, but the government wouldn’t then keep a recording of that privacy violation, amplifying it further.
“The thing we are really worried about is police start deploying drones as first responders for the majority of their calls for service and suddenly you have this crisscrossing network of surveillance all over the city,” Freed Wessler said. “You have the potential for a pervasive record of what everyone is doing all the time.”
Kohls said an officer flying a drone who spotted a different crime occurring while en route to another call would stop to report and respond to that secondary crime, just like an officer would on the ground.
“Absolutely, if an officer sees a crime happening, they’re going to get on the radio, alert dispatch to what they’re observing,” Kohls said. “Hopefully, if they have a few minutes of battery time left still, they can extend their time and circle or overwatch on that scene to provide hopefully life-saving radio traffic, whatever information they need to relay to dispatch to get other officers heading, or the fire department heading that way.”
State and federal laws have not yet caught up to how police are using drones, Freed Wessler said. The Fourth Amendment has what’s known as the plain-view exception, which allows police officers who are lawfully in a place to take action if they see evidence of a crime happening in plain sight.
“The problem here is we are not talking about police doing a thing we would normally expect them to do,” Freed Wessler said. “We are talking about police taking advantage of a new technology that gives them a totally new power to fly at virtually no expense over any part of the city at any time of day and see a whole bunch of stuff happening.”
A Denver police drone lands on its docking station on the roof of Denver Police headquarters in Denver, on Thursday, Dec. 4, 2025. (Photo by Hyoung Chang/The Denver Post)
Police have broad leeway to watch suspects without first getting a search warrant — like by peering through a fence or climbing the steps of a nearby building to look into a yard. But that’s different from using a subtle video camera to record a person 24/7 for months, the justices concluded.
So far, that’s the closest ruling in Colorado on the issue of drone surveillance, Freed Wessler said. Robinson, the policy director at the ACLU of Colorado, said lawmakers should act to regulate police drone use — either at the state or local level.
“These are flying cops,” said Beryl Lipton, senior investigative researcher at the Electronic Frontier Foundation, a nonprofit focused on digital privacy. “That is another one of those slippery slopes.”
Aside from the legality of surveillance, another question is how the drone footage and flight data is used by the drone companies, Lipton said.
“We live in a time where all these AI-fueled companies have a real drive to integrate AI into everything, and they’re really hungry for new data,” she said. “And we have law enforcement helping to feed these companies in a way they don’t really understand.”
Under its current agreement with Denver police, Skydio doesn’t use drone footage to train its algorithm or improve its product. Flock spells out in its contract that the company can “collect, analyze and anonymize” drone footage, then use that anonymized footage to train its “machine learning algorithms,” and enhance its services.
Lipton added that technology is moving fast — Axon, a company that powers many police departments’ body-worn cameras — this month started testing facial recognition on its cameras to automatically alert a police officer if a person they’re encountering has a warrant out for their arrest.
Prisons are experimenting with “movement analysis” to automatically flag a person’s movements as potentially aggressive before the person perpetrates violence, she said.
“We are technologically at a place where it would not be hard for a drone to fly over an area and basically serve as a license-plate reader for humans,” Lipton said. “… Some of this analysis is just not being done because it is not publicly palatable yet. But it is not like it is technologically difficult for some of these companies.”
What to know about the $1 billion Disney agreement with OpenAI – CBS News
Watch CBS News
Disney announced Thursday that it would invest $1 billion in OpenAI and license more than 200 of its animated and illustrated characters to use in Sora’s user-generated content. Jo Ling Kent has more.
First, Marjorie Taylor Greene: The 2025 60 Minutes Interview. Then, researchers warn AI chatbots can harm kids. And, why handmade Swiss watches are so expensive.
President Trump signed an executive order Thursday aimed at restricting states from crafting their own regulations for artificial intelligence, saying the burgeoning industry is at risk of being stifled by a patchwork of onerous rules while in a battle with Chinese competitors for supremacy.
Members of Congress from both parties, as well as civil liberties and consumer rights groups, have pushed for more regulations on AI, saying there is not enough oversight for the powerful technology.
But Mr. Trump told reporters in the Oval Office that “there’s only going to be one winner” as nations race to dominate artificial intelligence, and China’s central government gives its companies a single place to go for government approvals.
“We have the big investment coming, but if they had to get 50 different approvals from 50 different states, you can forget it because it’s impossible to do,” the president said.
The executive order directs Attorney General Pam Bondi to create a new task force to challenge state laws, and directs the Department of Commerce to draw up a list of problematic regulations. It also threatens to restrict funding from a broadband deployment program and other grant programs to states with AI laws.
David Sacks, a venture capitalist who is leading Mr. Trump’s policies on cryptocurrency and artificial intelligence, said Thursday the Trump administration would only push back on “the most onerous examples of state regulation” but would not oppose “kid safety” measures.
Four states — Colorado, California, Utah and Texas — have passed laws that set some rules for AI across the private sector, according to the International Association of Privacy Professionals.
Those laws include limiting the collection of certain personal information and requiring more transparency from companies.
The laws are in response to AI that already pervades everyday life. The technology helps make consequential decisions for Americans, including who gets a job interview, an apartment lease, a home loan and even certain medical care. But research has shown that it can make mistakes in those decisions, including by prioritizing a particular gender or race.
States’ more ambitious AI regulation proposals require private companies to provide transparency and assess the possible risks of discrimination from their AI programs.
Beyond those more sweeping rules, many states have regulated parts of AI: barring the use of deepfakes in elections and to create nonconsensual porn, for example, or putting rules in place around the government’s own use of AI.
Those who support regulations that would prevent states from restricting AI — including some GOP lawmakers and advocates like Sacks — argue that forcing tech companies to contend with varied or even contradictory rules would hurt the industry.
“At best, we’ll end up with 50 different AI models for 50 different states – a regulatory morass worse than Europe,” Sacks wrote on X earlier this week. “This will stymie innovation, especially by small startups who can’t afford the compliance burden. Meanwhile, China will race ahead.”
But members of both parties have pushed back. Last month, when congressional Republicans weighed adding restrictions on state AI regulations to a defense bill, Florida Republican Gov. Ron DeSantis called the idea a “subsidy to Big Tech.”
“The rise of AI is the most significant economic and cultural shift occurring at the moment; denying the people the ability to channel these technologies in a productive way via self-government constitutes federal government overreach and lets technology companies run wild,” the governor wrote.
Earlier this week, Democratic Sen. Ed Markey of Massachusetts called Mr. Trump’s plan to restrict AI regulations via an executive order an “early Christmas present for his CEO billionaire buddies.”
World, the biometric ID verification project co-founded by Sam Altman, released the newest version of its app today, debuting several new features, including an encrypted chat integration and an expanded, Venmo-like capability for sending and requesting crypto.
World was created by the startup Tools for Humanity in 2019, and originally launched its app in 2023. The company says that, in a world roiled by AI-generated digital fakery, it hopes to create digital “proof of human” tools that can help separate the humans from the bots.
During a small gathering at World’s headquarters in San Francisco on Thursday, Altman and World’s co-founder and CEO, Alex Blania, briefly introduced the new version of the app (which developers have termed a “super app”) before the product team took over to explain the new features. During his remarks, Altman said that the concept for World grew out of conversations he and Blania had had about the need to create a new kind of economic model. That model, based around web3 principles, is what World has been trying to accomplish through its verification network. “It’s really hard to both identify unique people and do that in a privacy-preserving way,” said Altman.
World Chat, the app’s new messenger, seems designed to do just that. It uses end-to-end encryption to keep users’ conversations safe (this encryption is described as being equivalent to Signal, the privacy-focused messenger), and also leverages color-coded speech bubbles to alert users to whether the person they’re talking to has been verified by World’s system or not, the company said. The idea is to incentivize verification, giving people the power to know whether the person they’re talking to is who they say they are. Chat was originally launched in beta in March.
The other big feature reveal on Thursday was an expanded digital payment system that allows app users to send and receive cryptocurrency. World app has functioned as a digital wallet for some time, but the newest version of the app includes broader capabilities. Using virtual bank accounts, users can also receive paychecks directly into World App and make deposits from their bank accounts, both of which can then be converted into crypto. You don’t have be verified by World’s authentication system to use these features.
Tiago Sada, World’s chief product officer, told TechCrunch that part of the reason chat was added was to create a more interactive experience for users. “What we kept hearing from people is that they wanted a more social World app,” Sada said. World Chat is designed to fill that need, creating what Sada says is a secure way to communicate. “It took a lot of work to make this feature-rich messenger that is similar to a WhatsApp or a Telegram, but with encryption and security of something that is a lot closer to Signal,” Sada said.
World (which was originally called Worldcoin) deploys a unique authentication process: interested humans get their eyes scanned at one of the company’s offices, where the Orb—a large verification device—converts the person’s iris into a unique and encrypted digital code. That code, the verified World ID, can then be used by the person to interact with World’s ecosystem of services, which are available through its app.
Techcrunch event
San Francisco | October 13-15, 2026
The addition of more social-friendly features is clearly meant to drive broader adoption of the app, which makes sense since scaling verification is the company’s main challenge. Altman has said that he would like the project to scan a billion people’s eyes, but Tools for Humanity claims to have scanned less than 20 million people.
Since standing in long lines at a corporate office to have your eyeballs scanned by a giant metallic ball may seem slightly less than enticing to some users, the company has already sought to make its verification process less cumbersome. In April, Tools for Humanity announced its Orb Minis—hand-held, phone-like devices—that allow users to scan their own eyes from the comfort of their homes. Blania previously told TechCrunch that, eventually, the company would like to turn the Orb Minis into a mobile point-of-sale device or sell its ID sensor tech to device manufacturers. If the company takes such steps, it would drop the barrier to verification significantly, potentially inspiring much more widespread adoption.
Time magazine is spotlighting key players in the artificial intelligence revolution for its 2025 Person of the Year, the magazine announced Thursday. “The architects of AI” are the latest recipients of the designation, which for more than a century has been given out on an annual basis to an influential person, group of people or, occasionally, a defining cultural theme or idea.
Previous Person of the Year title-holders have held varying roles in a vast range of occupations, with President Trump taking last year’s cover and Taylor Swift capturing the one before. In 2025,
Time’s 2025 honorific was given to the minds and financiers behind AI’s rise to renown and notoriety, including Nvidia CEO Jensen Huang, Softbank CEO Masayoshi Son and Baidu CEO Robin Li, who spoke directly with the magazine for its feature story.
“Person of the Year is a powerful way to focus the world’s attention on the people that shape our lives,” wrote Sam Jacobs, Time’s editor-in-chief, in an editorial piece about the magazine’s decision. “And this year, no one had a greater impact than the individuals who imagined, designed, and built AI.”
Jacobs described 2025 as “the year when artificial intelligence’s full potential roared into view, and when it became clear that there will be no turning back or opting out,” adding: “Whatever the question was, AI was the answer.”
The magazine prepared two separate covers for the issue. In one, artist Jason Seiler painted an interpretative recreation of the iconic 1932 photograph “Lunch Atop a Skyscraper,” an image that depicted workers seated side-by-side on a steel beam hanging high above New York City during the construction of 30 Rockefeller Plaza, which became a symbol of American resilience during the Great Depression.
A cast of tech industry characters at the forefront of AI development are perched on the beam in Seiler’s recreation. Mark Zuckerberg, of Meta, Lisa Su, of Advanced Micro Devices, Elon Musk, of xAI, Sam Altman, of Open AI, Demis Hassabis, of DeepMind Technologies, Dario Amodei, of Anthropic, and Fei-Fei Li, of Stanford’s Human-Centered AI Institute, are all pictured, along with Huang.
The second cover illustration, by artist Peter Crowther, places the same executives among scaffolding at what looks like a construction site for the giant letters “AI.”
From left, cover art by Jason Seiler and Peter Crowther for TIME’s 2025 Person of the Year magazine spread.
Jason Seiler/TIME; Peter Crowther/TIME
“Every industry needs it, every company uses it, and every nation needs to build it,” Huang said of balancing the pressures to implement AI responsibly and deploy it to the public as quickly as possible. “This is the single most impactful technology of our time.”
Most of the industry figures pictured on Time’s cover did not speak to the magazine for the story, so this year’s spread mainly focuses on the implications — positive, negative and in between — of the companies they have built and the technology they continue forging.
AI often took center stage in 2025 in investigative news reports, economic and academic studies, and in Washington, D.C., as policymakers grappled with how to regulate its evolution while tech giants scrambled to trump their competitors’ inventions, as the use of some of them, like chatbots, grew to be commonplace, at times with tragic consequences.
“For these reasons, we recognize a force that has dominated the year’s headlines, for better or for worse,” Jacobs wrote in his editorial. “For delivering the age of thinking machines, for wowing and worrying humanity, for transforming the present and transcending the possible, the Architects of AI are TIME’s 2025 Person of the Year.”
Doug Burgum may not have the name recognition of President Donald Trump’s other cabinet secretaries, like Homeland Security Secretary Kristi Noem or Health Secretary Robert F. Kennedy Jr., but Burgum, the Interior Secretary and former governor of North Dakota, still pops up on TV from time to time to push the president’s agenda.
And he did just that on Thursday on Fox News to promote both artificial intelligence and fossil fuels. Burgum insisted that any skepticism about AI was unwarranted and that it would “cure cancer.”
The Fox & Friends crew first asked Burgum about reports that data centers across the country were driving up energy costs, an issue that’s been documented frequently in articles from Bloomberg, CNBC, and Pew Research. Bloomberg found that, “electricity now costs as much as 267% more for a single month than it did five years ago in areas located near significant data center activity.” But Burgum called the claim “100% false.”
“If you want to talk about data centers, the highest electric prices in this country are places like Hawaii and Maine, and there’s no data center activity going there,” said Burgum. “Data centers, it’s the first time in history we’ve been able to take a kilowatt of electricity and convert it into intelligence.”
Burgum went on to say that converting electricity to intelligence was “the miracle of AI.”
“We can actually manufacture intelligence. Do you think someone who’s gonna spend $10 billion building an AI factory is gonna put it in a place that has high electric prices today? Of course not,” Burgum insisted.
Burgum went on to compare the rise of AI to the expansion of railroad infrastructure in the 19th century, emphasizing that the U.S. is in an AI arms race with China. And Burgum, like every cabinet secretary who’s placed in front of a microphone, credited President Trump with having the vision to make all of the good things happen. Burgum celebrated Trump’s denunciation of clean energy and the “green new scam,” claiming, without evidence, that green energy was somehow bad for the environment.
It’s true that energy prices in blue states tend to be higher, though solely blaming that on renewable energy doesn’t make much sense. Burgum gave Hawaii as an example of a state without data centers but high energy costs, which isn’t intellectually honest, given Hawaii’s unique geographical characteristics as an island in the middle of the Pacific Ocean. Everything’s more expensive in Hawaii.
West Virginia is a deep red state, with 70% of voters opting for President Trump in 2024. But its energy prices have soared 10.3% since 2018, according to the New York Times. Less than 5% of West Virginia’s energy comes from renewables, according to the state’s Office of Energy.
Burgum’s point doesn’t even make much sense in the context. People aren’t complaining that data centers are being built in areas with high energy costs; they’re pointing out that data centers are increasing energy costs in the areas where they are being built.
The think tank Energy Innovation modeled what’s likely to happen to energy costs since Trump’s so-called Big Beautiful Bill. It found that red states like Kentucky, Missouri, and South Carolina could see the highest jump in household energy prices over the next decade, with the phasing out of a tax credit for wind and solar in favor of fossil fuels like natural gas.
Fox & Friends host Ainsley Earhardt made the reason for Burgum’s appearance on the show more explicit after the Interior Secretary explained that AI is great. Earhardt said that people in Chandler, Arizona, would be voting on Thursday on whether to build a data center.
“We need to stay ahead of China,” Earhardt said. “And if we want to win the AI race against China, we have to build these data centers. So it’s just perception. How do we tell… you coming on Fox & Friends is telling the people in Arizona, ‘vote for this because it doesn’t mean your electric bills are going to go up.’”
Rarely is propaganda and the process behind it ever explained quite so bluntly by the people delivering it.
Burgum received criticism as the Governor of North Dakota that he was too cozy with the oil lobby. At the Interior Department, he’s pushed for ramping up an increase in oil production. The discussion on Fox News got particularly weird when Burgum insisted, without any pushback, that AI would cure cancer.
“First of all, software came upon America and the world in our lifetimes, and it was the greatest extension of human capability. Now we have…AI comes along, and it’s the greatest increase in productivity for humans ever. I mean, this is gonna, it’s not only gonna cure cancer, but it’s gonna eliminate all kinds of drudgery, repetitive jobs,” said Burgum.
“I mean, this drives things forward. So jobs will be different, but if every person in this country can have a free assistant that speaks 30 languages and can code. That’s not a bad thing,” he continued.
Claims that AI has radically increased productivity are highly contested, of course. While you might expect such a grandiose claim about curing cancer from someone who is ignorant of technology, this isn’t some random guy. Burgum made his money back in 2001 when he sold his software company, Great Plains Software, to Microsoft for $1.1 billion. He’s now reportedly worth about $100 million, according to Forbes. In theory, Burgum should have a little skepticism about the wildest claims coming from AI companies, given his tech background.
Burgum never defined any of the terms he was using, including AI, which is used in a wide variety of contexts these days. The AI tech that’s shooting down missiles is not the same AI tech that tells you to put glue on your pizza for a tasty snack. He also didn’t define a specific type of cancer, which is important given the fact that there are so many subtypes of cancer that require different treatments. A universal cancer cure is widely thought to be illogical.
But all of these talking points about AI clearly serve the interests of Big Tech and the people who’ve gotten so close to Trump—all of the same people who were revealed to be on the cover of Time magazine today as “Person of the Year.” Whether they like it or not, they do seem to be a big factor in rising household energy costs.
The Trump regime may have stumbled upon a new strategy for bringing down the cost of energy, even if it’s wildly unethical. On Wednesday, the U.S. seized a Venezuelan oil tanker, leading to questions about the justification for such a move. U.S. officials claim the vessel was violating sanctions.
White House Press Secretary Karoline Leavitt was asked specifically by Fox News’ Peter Doocey whether Trump would use the seized oil to “try to help Americans with affordability here in the United States.”
“As you know, Peter, the vessel will go to a U.S. port and the United States does intend to seize the oil,” Leavitt said. “However, there is a legal process for the seizure of that oil, and that legal process will be followed.”
Reuters reported Thursday that the Department of Justice and Department of Homeland Security had been planning the operation “for months,” and it intends to seize more oil from Venezuela.
Time Magazine has just announced its 2025 Person of the Year, recognizing not just one person, but a group its calling “architects of AI.” The digital article is out now.
SAN SALVADOR, El Salvador — El Salvador President Nayib Bukele said Thursday that his administration is partnering with Elon Musk’s artificial intelligence company xAI to bring artificial intelligence into more than 5,000 public schools.
The millennial leader, who previously made El Salvador the first nation to make bitcoin legal tender in 2021, is betting big on technology again.
In a statement Thursday, xAI said that its Grok chatbot will bring “personalized learning to over one million students” by creating tutoring “that adjusts to each student’s pace, preferences, and mastery level — ensuring every child, from urban centers to rural communities, receives world-class education tailored to their needs.”
Bukele said in the statement that El Salvador would be “pioneering AI-driven education.”
Last month, Bukele announced a partnership with Google to launch a mobile app that would allow Salvadorans to access free virtual medical consultations with doctors that would be assisted by AI.
Earlier this year, xAI said it was taking down “inappropriate posts” made by Grok, which appeared to include antisemitic comments that praised Adolf Hitler. Musk said at the time that the chatbot was improving.
Walt Disney Co. is investing $1 billion in OpenAI under a new commercial partnership with the ChatGPT and Sora developer.
The three-year licensing agreement will allow users of Sora, OpenAI’s artificial intelligence video tool, to create AI videos using more than 200 characters from Disney, Marvel, Pixar and Star Wars, the entertainment giant announced Thursday.
Disney is the first major company to strike a licensing deal with OpenAI on Sora, which uses generative artificial intelligence to create short videos.
“Through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works,” Disney CEO Robert Iger said in a statement.
As part of the deal, Disney said it will deploy ChatGPT for its employees and use OpenAI tech to develop new products. Some user-generated Sora videos will also be made available on the Disney+ streaming service.
The agreement does not include any talent likenesses or voices, Disney said.
AI video generators like Sora have impressed users with their ability to quickly create realistic clips based on simple text prompts. At the same time, concerns over misinformation, deepfakes and copyright have swelled. In the aftermath of the Sora 2 release, clips of copyrighted characters, as well as prominent figures like Martin Luther King Jr., started cropping up on the platform.
Disney did not immediately respond to a request for comment. OpenAI directed CBS News to the press release issued Thursday.