ReportWire

Tag: Artificial Intelligence

  • AI deepfake romance scam steals woman’s home and life savings



    A woman named Abigail believed she was in a romantic relationship with a famous actor. The messages felt real. The voice sounded right. The video looked authentic. And the love felt personal. 

    By the time her family realized what was happening, more than $81,000 was gone — and so was the paid-off home she planned to retire in.

    We spoke with Vivian Ruvalcaba on my “Beyond Connected” podcast about what happened to her mother and how quickly the scam unfolded. What began as online messages quietly escalated into financial ruin and the loss of a family home. Vivian is Abigail’s daughter, and she is now her mother’s advocate, investigator and protector.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.


    Vivian Ruvalcaba says a deepfake video made the scam against her mom, Abigail, feel real, using a familiar face and voice to build trust. (Philip Dulian/picture alliance via Getty Images)

    How the scam quietly started

    The scam did not begin with a phone call or a threat. It began with a message. “Facebook is where it started,” Vivian explained. “She was directly messaged by an individual.” That individual claimed to be Steve Burton, a longtime star of “General Hospital.” Abigail watched the show regularly. She knew his face. She knew his voice.

    After a short time, the conversation moved off Facebook. “He then led her to create an account with WhatsApp,” Vivian said. “When I discovered that, and I looked at the messaging, you can see all the manipulation.”

    That shift mattered. This is a major red flag I often warn people about. When a scammer moves a conversation from a public platform like Facebook to an encrypted app like WhatsApp, it is usually deliberate and designed to avoid detection.

    Grooming through secrecy and isolation

    At first, Abigail told no one. “She was very, very secretive,” Vivian said. “She didn’t share any of this with anyone. Not my father. Not me.” 

    That secrecy was not accidental. “She was being groomed not to share this information,” Vivian explained.

    This is a tactic I see over and over again in scams like this. Once a scammer feels they have someone emotionally invested, the next step is to isolate them. They push victims to keep secrets and avoid talking to family, friends or police. When Vivian finally started asking questions, her mother reacted in a way she never had before. “She said, ‘It’s none of your business,’” Vivian said. “That was shocking.”

    The deepfake video that changed everything

    When Vivian threatened to go to the police, her mother finally revealed what had been happening. “That’s when she showed me the AI video,” Vivian said. In the clip, a man who looked and sounded like Steve Burton spoke directly to Abigail and referred to her as “Abigail, my queen.” The message felt personal. It used her name and promised love and reassurance.

    “It wasn’t grainy,” Vivian said. “To the naked eye, you couldn’t tell.” Still, Vivian sensed something was off. “I looked at it, and I knew right away,” she said. “Mom, this is not real. This is AI.”

    Her mother disagreed and argued back. She pointed to the face and the voice. She also believed the phone calls proved it. That is what makes deepfakes so dangerous. When a video looks and sounds real, it can override common sense and even years of trust within a family.

    From gift cards to life savings

    The money flowed slowly at first. A $500 gift card request raised the first alarm. Then, money orders and Zelle payments. What Vivian discovered next still haunts her. “She pulled out a sandwich baggie,” Vivian said. “About 110 gift cards ranging from $25 up to $500.” Those cards were purchased with credit cards. Cash was mailed. Bitcoin was sent. In total, the Los Angeles Police Department (LAPD) tallied the losses at $81,000. And the scam was not finished.


    The scam against Abigail moved from social media to encrypted messaging, a common tactic used to avoid detection. (Kurt “CyberGuy” Knutsson)

    When the scammer took her home

    After draining Abigail’s available cash, the scam did not stop. It escalated again. The scammer began pushing her to sell the one asset she still had: her home. “He was pressing her to sell,” Vivian told me. “Because he wanted more money.” The pressure came wrapped in romance. The scammer told Abigail they would buy a beach house together and start a new life. In her mind, this was not a scam. It was a plan for the future. That belief set off a chain reaction.

    How the home sale happened so quickly

    Abigail sold her condo for $350,000, even though similar homes in the area were worth closer to $550,000 at the time. The sale happened quickly. There was no family involvement. Her husband was still living in the home, yet he did not sign the documents. “She just gave away about $200,000 in equity,” Vivian said. “They stole it.”

    What makes this even more troubling is who bought the property. According to Vivian, the buyer was a wholesale real estate company that moved fast and asked very few questions. Messages later reviewed by the family show Abigail actively trying to hide the sale from her husband. In one text exchange, she warned the buyer not to park in the driveway because her husband had access to a Ring camera. That alone should have raised concerns. Instead, the buyers went along with it. “They appeased whatever she asked for,” Vivian said. “They were getting a property she was basically giving away.”

    These buyers were not the original scammers, but they benefited from the pressure the scammer created. The scammer pushed Abigail to sell. The buyers took advantage of the situation and the deeply discounted price. The home was not extra money; it was Abigail’s retirement. It was the only real security she and her husband had after decades of work. By the time Vivian uncovered the sale, Abigail was days away from sending another $70,000 from the proceeds to the scammer. Had that transfer gone through, nearly everything would have been gone.

    This is the part of the story people struggle to process. Modern AI-driven scams are no longer limited to draining bank accounts or gift cards. They now push victims into selling real property, often with opportunistic players waiting on the other side of the deal.

    Why police and lawyers could not stop the damage

    Vivian contacted the police the same day she realized her mother was being scammed. “They assigned an investigator,” she told me. “He was already very aware of the situation and how little they can help.” That reality is difficult for families to hear, but it is common. 

    Many large-scale scams operate overseas. The money moves quickly through gift cards, wire transfers and crypto. By the time victims realize what is happening, the trail is often cold. “Most of these scammers are out of the country,” Vivian said. “No one is being held accountable.”

    When the case shifted from criminal to civil

    Law enforcement documented the losses and opened a case, but there was little they could do to recover the money or stop what had already happened. The deeper damage came from the home sale, which fell into a legal gray area far beyond a typical fraud report. Once the condo was sold, the situation shifted from a criminal scam to a complex civil fight.

    Vivian immediately began searching for legal help. The first attorneys she contacted discouraged her. One told her it could cost more than $150,000 to pursue a case. Another failed to act even after being told about Abigail’s mental illness and history of bipolar disorder. At one point, an eviction attorney testified in court that Vivian never mentioned the romance scam, something she strongly disputes.

    By March, Abigail and her husband were forced out of their home. By October, they were fully evicted and locked out. Both parents are now displaced. Abigail is living with family out of state. Her husband, now in his mid-70s, is still working because the home was his retirement. 

    It was only after reaching out through personal connections that Vivian found an attorney willing to fight. That attorney is now pursuing the case on a contingency basis, meaning the family does not pay unless there is a recovery. The legal argument centers on Abigail’s mental capacity and whether she could legally understand and execute a home sale under the circumstances. The buyers dispute that claim. The outcome will be decided in court.

    This is why stories like this rarely end with a police arrest or quick resolution. Once a scam crosses into real estate and civil law, families are often left to navigate an expensive and exhausting legal system on their own. And by then, the damage has already been done.

    Why shame keeps scams hidden

    Many victims never report scams. Only about 22% contact the FBI. Fewer than 30% reach out to their local police department. Vivian understands why that happens. “She’s ashamed,” Vivian said. “I know she is.” That shame protects scammers. Silence gives them room to move on and target the next victim.



    What started as online messages escalated into gift cards, lost savings and the sale of a family home. (Kurt “CyberGuy” Knutsson)

    Red flags families cannot ignore

    This case reveals warning signs every family needs to recognize early.

    Red flags to watch for

    • Sudden secrecy about finances or online activity
    • Requests for gift cards, cash or crypto
    • Pressure to move conversations to encrypted apps
    • AI videos or voice messages used as proof of identity
    • Emotional manipulation tied to urgency or romance
    • Requests to sell property or move large assets

    I want to be very clear about this. It does not matter how smart you are or how careful you think you are. You can become a victim and not realize it until it is too late.

    Tips to stay safe and protect your family

    These lessons come from both Vivian’s experience and the patterns I see repeatedly in modern scams. Some are emotional. Others are technical. Together, they can help families spot trouble sooner and limit the damage when something feels off.

    1) Watch for platform changes

    Moving a conversation from Facebook to WhatsApp or another encrypted app is not harmless. Scammers do this to avoid moderation and make messages harder to trace or flag.

    2) Question AI proof

    Deepfake videos and cloned voices can look and sound convincing. Never treat a video or voice message as proof of identity, especially when money or property is involved.

    3) Slow down major financial decisions

    Scammers create urgency on purpose. Any request involving large sums, property sales or retirement assets should be put on hold until a trusted third party reviews it.

    4) Never send gift cards, cash or crypto

    Legitimate people do not ask for payment through gift cards or cryptocurrency. These methods are a common scam tactic because they are hard to trace and nearly impossible to recover.

    5) Talk openly as a family

    Silence helps scammers. Regular conversations about finances, online contacts and unusual requests make it easier to spot problems early and step in without shame.

    6) Reduce online exposure with a data removal service

    Scammers research their targets using public databases. They pull names, phone numbers, relatives and property records. Removing that data reduces how easily criminals can build a profile.

    While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. That gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


    7) Use strong antivirus protection

    Malware links can expose financial accounts without obvious signs. Good antivirus software can block malicious links before they lead to deeper access or data theft.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

    8) Protect assets early

    Living trusts and proper estate planning add protection before a crisis hits. They can help prevent rushed property sales and limit who can legally move assets without oversight.

    9) Use conservatorship when capacity is limited

    “Conservatorship is the only way,” Vivian said. “Power of attorney may not be enough.” When a loved one has diminished capacity, a conservatorship adds court oversight and can stop unauthorized financial decisions before serious damage occurs.

    Kurt’s key takeaways

    This scam did not rely on sloppy emails or obvious mistakes. It used emotion, familiarity and AI that looked real. Once trust was built, the damage followed quickly. Money disappeared. Secrecy grew. Pressure increased. The home was sold.

    What makes this case especially painful is the speed. A few messages led to gift cards. Gift cards turned into life savings. Life savings became the loss of a home built over decades. Most families never expect this to happen. Many do not talk about it until it has already happened.

    The lesson is clear. Awareness matters more than intelligence. Open conversations matter more than embarrassment. Acting early matters more than trying to undo the damage later.

    If you want to hear Vivian tell this story in her own words and understand how fast these scams unfold, listen to our full conversation on the “Beyond Connected” podcast.

    If a deepfake video showed up on your parent’s phone tonight, would you know before everything was gone? Let us know by writing to us at Cyberguy.com.



    Copyright 2026 CyberGuy.com.  All rights reserved.


  • AWS revenue continues to soar as cloud demand remains high | TechCrunch


    Amazon Web Services ended 2025 with its strongest quarterly growth rate in more than three years.

    The company reported Thursday that its cloud service business recorded $35.6 billion in revenue in the fourth quarter of 2025. This figure marks a 24% year-on-year increase and the business segment’s largest growth rate in 13 quarters. Annual revenue run rate for the business segment is $142 billion, according to Amazon. The cloud unit’s operating income also rose to $12.5 billion in the fourth quarter, up from $10.6 billion in the same period of 2024.

    “It’s very different having 24% year-over-year growth on $142 billion annualized run rate than to have a higher percentage growth on a meaningfully smaller base, which is the case with our competitors,” Amazon CEO Andy Jassy said during the company’s fourth-quarter earnings call. “We continue to add more incremental revenue and capacity than others, and extend our leadership position.”

    That fourth-quarter growth was fueled by new agreements with Salesforce, BlackRock, Perplexity, and the U.S. Air Force, among other companies and government entities.

    “More of the top 500 U.S. startups use AWS as their primary cloud provider than the next two providers combined,” Jassy said. “We’re adding significant EC2 core computing capacity each day.”

    AWS also added more than a gigawatt of power to its data center network in the fourth quarter.

    Jassy said AWS still sees a fair amount of its business coming from enterprises that want to move infrastructure from on-premises to the cloud. AWS is, of course, also seeing a boost from the AI boom, and Jassy credited AWS’s top-to-bottom AI stack functionality.


    “We consistently see customers wanting to run their AI workloads where the rest of their applications and data are,” Jassy said. “We’re also seeing that as customers run large AI workloads on AWS, they’re adding to their core AWS footprint as well.”

    AWS made up 16.6% of Amazon’s overall $213.4 billion revenue in the fourth quarter.

    AWS’s success wasn’t enough to appease Amazon investors, however. Amazon shares fell 10% in after-hours trading after investors reacted to the company’s plan to boost capital expenditures and missed Wall Street’s expectations on earnings per share.


    Rebecca Szkutak


  • Anthropic Launches New Model That Spots Zero Days, Makes Wall Street Traders Lose Their Minds

    [ad_1]

    Anthropic, the makers of the popular and code-competent chatbot Claude, released a new model Thursday called Claude Opus 4.6. The company is doubling down on coding capabilities, claiming that the new model “plans more carefully, sustains agentic tasks for longer, can operate more reliably in larger codebases, and has better code review and debugging skills to catch its own mistakes.”

    It seems the model is also pretty good at catching other people’s mistakes. According to a report from Axios, Opus 4.6 was able to spot more than 500 previously undisclosed zero-day security vulnerabilities in open-source libraries during its testing period. It also reportedly did so without receiving specific prompting to go hunting for flaws—it just spotted and reported them.

    That’s a nice change of pace from all of the many developments that have been happening around OpenClaw, an open-source AI agent that most users have been running with Claude Opus 4.5. A number of vibe-coded projects that have come out of the community have had some pretty major security flaws. Maybe Anthropic’s upgrade will be able to catch those issues before they become everyone else’s problem.

    Claude’s calling card has been coding for some time now, but it seems Anthropic is looking to make a splash elsewhere with this update. The company said Opus 4.6 will be better at other work tasks like creating PowerPoint presentations and navigating documents in Excel. Seems those features will be key to Cowork, Anthropic’s recent project that it is touting as “Claude Code” for non-technical workers.

    It’s also boasting that the model will have potential use in financial analysis, and it sure seems like the folks on Wall Street could use some help there. The general consensus among financial analysts this week is that Anthropic’s Cowork models are spooking the stock market and are a major factor in sending software stocks into a spiral. It’s possible that this is what the market has been responding to—after all, the initial release of DeepSeek, the open-source AI model out of China, tanked the AI sector for a day or so, so it’s not like these markets aren’t overly sensitive.

    But it seems unlikely that Opus 4.6 will fundamentally upend the market. Anthropic already holds a solid lead with a plurality of the enterprise market, according to a recent report from Menlo Ventures, and is well ahead of its top (publicly traded) competitors in the space—though OpenAI made its own play to cut into some market share earlier today with the launch of its Frontier platform for managing AI agents. If anything, Anthropic’s new model seems like it’ll help the company maintain its top spot for the time being. But if the stock market shock is any indication, one thing is for sure: the entire economy is completely pot-committed to the developments in AI. Surely that won’t have any repercussions.


    AJ Dellinger


  • OpenAI launches a way for enterprises to build and manage AI agents | TechCrunch


    OpenAI has launched a new product to help enterprises navigate the world of AI agents, focusing on agent management as critical infrastructure for enterprise AI adoption.

    On Thursday, AI giant OpenAI announced the launch of OpenAI Frontier, an end-to-end platform designed for enterprises to build and manage AI agents. It’s an open platform, which means users can manage agents built outside of OpenAI too.

    Frontier users can program AI agents to connect to external data and applications, which allows the agents to execute tasks far outside of the OpenAI platform. Users can also limit and manage what these agents have access to and what they can do.

    OpenAI said Frontier was designed to work the same way companies manage human employees. Frontier offers an onboarding process for agents and a feedback loop that is meant to help them improve over time the same way a review might help an employee.

    OpenAI touted enterprises including HP, Oracle, State Farm and Uber as customers, but Frontier is currently only available to a limited number of users with plans to roll out more generally in the coming months.

    The company would not disclose pricing details at a press briefing earlier this week, according to reporting from The Verge. TechCrunch has also reached out for more information regarding pricing.

    Agent-management products have become table stakes since AI agents rose to prominence in 2024. Salesforce has arguably the best-known such product, Agentforce, which the company launched in the fall of 2024. Others have quickly followed. LangChain, founded in 2022, is a notable player in the space and has raised more than $150 million in venture capital. CrewAI is a smaller upstart that has raised more than $20 million in venture capital.


    In December, global research and advisory firm Gartner released a report about this type of software and called agent management platforms both the “most valuable real estate in AI” and a necessary piece of infrastructure for enterprises to adopt AI.

    It’s not surprising that OpenAI would release this platform in early 2026 as the company has made it clear that enterprise adoption is one of its main focus areas for this year. The company has also announced two notable enterprise deals this year with ServiceNow and Snowflake.

    Still, if OpenAI wants to be a meaningful player in the enterprise space, offering a product like Frontier is a promising step.


    Rebecca Szkutak


  • MLB players strike deal to be turned into AI characters that can chat with fans


    Major League Baseball players have agreed to let a tech company create AI characters of themselves that can chat and interact with fans


  • Anthropic, OpenAI rivalry spills into new Super Bowl ads as both fight to win over AI users


    The two artificial intelligence startups behind rival chatbots ChatGPT and Claude are bracing for an existential showdown this year as both need to prove they can grow a business that will make more money than they’re losing.

    The fiercest competition between the two AI developers, along with bigger companies like Google, is a race to win over corporate leaders looking to adopt AI tools to boost workplace productivity. The rivalry is also spilling into other realms, including the Super Bowl.

    Anthropic is airing a pair of TV commercials during Sunday’s game that ridicule OpenAI for the digital advertising it’s beginning to place on free and cheaper versions of ChatGPT. While Anthropic has centered its revenue model on selling Claude to other businesses, OpenAI has opened the doors to ads as a way of making money from the hundreds of millions of consumers who get ChatGPT for free.

    Anthropic’s commercials humorously mock the dangers of manipulative chatbots — represented as real people speaking in a stilted and unnaturally effusive tone — that form a relationship with a user before trying to hawk a product. The commercials end with a written message — “Ads are coming to AI. But not to Claude.” — followed by the opening beat and lyrics of the Dr. Dre song “What’s the Difference.”

    In a sign they struck a nerve, OpenAI CEO Sam Altman said in a social media post that he laughed at the “funny” ads but blasted them as dishonest and threw shade at his competitor’s smaller customer base.

    “Anthropic serves an expensive product to rich people,” Altman wrote on X. He also boasted that more Texans “use ChatGPT for free” than all the people in the United States who use Claude.

    The rivalry has existed ever since a group of OpenAI leaders quit the AI research laboratory and formed Anthropic in 2021, promising a clearer focus on the safety of the better-than-human technology called artificial general intelligence that both San Francisco firms wanted to build. That was before OpenAI first released ChatGPT in late 2022, revealing the huge commercial potential of large language models that could help write emails, homework or computer code.

    The competition ramped up this week as both companies launched product updates. OpenAI on Thursday launched a new platform called Frontier, designed to be a one-stop shop for businesses adopting a variety of AI tools that can work in tandem, particularly AI agents that work autonomously on someone’s behalf.

    “We can be the partner of choice for AI transformation for enterprise. The sky is the limit in terms of revenue we can generate from a platform like that,” Fidji Simo, OpenAI’s CEO of applications, told reporters this week.

    Anthropic earlier in the week said it was adding new functionality to its Cowork assistant to help automate legal research and drafting work.

    “Both OpenAI and Anthropic are really trying to position themselves as a platform company,” said Gartner analyst Arun Chandrasekaran. “The models are important, but the models aren’t a means to an end.”

    The two startups aren’t just competing with each other. They also face competition from Google, which both develops a powerful AI model, Gemini, and runs its own cloud computing infrastructure backed by revenue from its legacy digital advertising business. They also have complicated relationships with Amazon, which is Anthropic’s primary cloud provider, and Microsoft, which holds a 27% stake in OpenAI.

    The first choice for businesses looking to adopt AI agents is typically cloud computing “hyperscalers” like Microsoft, Google and Amazon, which offer a package of services, while AI model providers like Anthropic and OpenAI “tend to come in second place,” said Nancy Gohring, a senior research director at IDC.

    But there’s an opening because none of the players are giving businesses what they want, which are stronger security and compliance assurances to enable the more widespread use of AI agents.

    “Adopting AI and agents is inherently somewhat risky,” Gohring said.

    There’s also the AI division of Elon Musk’s newly merged SpaceX and its chatbot, Grok, which is not yet a viable contender for business customers. Musk has long set his sights on OpenAI, which he co-founded and is now suing in a court case set for trial in April.

    SpaceX, OpenAI and Anthropic are among the world’s most valuable privately held firms, and Wall Street investors expect that any, or all, of them could become publicly traded within the next year or so. But unlike SpaceX, which has its rocket business to fall back on, or established tech giants like Amazon, Google and Microsoft, both Anthropic and OpenAI must find a way to make enough in sales to pay for the huge costs in computer chips and data centers to run their energy-hungry AI systems.

    It’s not that Anthropic and OpenAI aren’t making money or growing their product lines. The private firms don’t publicly disclose sales but both have signaled they are making billions of dollars in revenue on their existing products, including paid chatbot subscriptions for individual users.

    But it costs a lot more money to fund the computing infrastructure needed to build these powerful AI models and to respond to the millions of prompts they get each day. OpenAI, in particular, has said it owes more than $1 trillion in financial obligations to backers — including Oracle, Microsoft and Nvidia — that are essentially fronting the compute costs on the expectation of future payoffs.

    For some, the wait will likely be worth it.

    “Profitability matters, but not as a near‑term decision factor for investors who remain focused on scale, differentiation and infrastructure leverage,” said Forrester analyst Charlie Dai. “Both companies continue to post heavy losses, yet investors still back them because the frontier‑model race demands extraordinary capital intensity.”

    Denise Dresser, OpenAI’s newly hired chief revenue officer, told reporters this week that the company’s priority is “building the best enterprise platform for all industries, all segments.”

    “I don’t think we’re thinking about it from a revenue standpoint, but truly from a customer outcome standpoint,” she said, in part reflecting the “sense of urgency” from CEOs who want a smoother way of applying AI.

    “There’s a recognition that AI is becoming a core operating advantage,” Dresser said. “They don’t want to be on the wrong side of that shift.”


  • Millions of AI chat messages exposed in app data leak



    A popular mobile app called Chat & Ask AI has more than 50 million users across the Google Play Store and Apple App Store. Now, an independent security researcher says the app exposed hundreds of millions of private chatbot conversations online. 

    The exposed messages reportedly included deeply personal and disturbing requests. Users asked questions like how to painlessly kill themselves, how to write suicide notes, how to make meth and how to hack other apps. 

    These were not harmless prompts. They were full chat histories tied to real users.



    Security researchers say Chat & Ask AI exposed hundreds of millions of private chatbot messages, including complete conversation histories tied to real users. (Neil Godwin/Getty Images)

    What exactly was exposed

    The issue was discovered by a security researcher who goes by Harry. He found that Chat & Ask AI had a misconfigured backend using Google Firebase, a popular mobile app development platform. Because of that misconfiguration, it was easy for outsiders to gain authenticated access to the app’s database. Harry says he was able to access roughly 300 million messages tied to more than 25 million users. He analyzed a smaller sample of about 60,000 users and more than one million messages to confirm the scope.

    The exposed data reportedly included:

    • Full chat histories with the AI
    • Timestamps for each conversation
    • The custom name users gave the chatbot
    • How users configured the AI model
    • Which AI model was selected

    That matters because many users treat AI chats like private journals, therapists or brainstorming partners.

    How this AI app stores so much sensitive user data

    Chat & Ask AI is not a standalone artificial intelligence model. It acts as a wrapper that lets users talk to large language models built by bigger companies. Users could choose between models from OpenAI, Anthropic and Google, including ChatGPT, Claude and Gemini. While those companies operate the underlying models, Chat & Ask AI handles the storage. That is where things went wrong. Cybersecurity experts say this type of Firebase misconfiguration is a well-known weakness. It is also easy to find if someone knows what to look for.
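To make the weakness concrete: one common variant of this kind of Firebase misconfiguration is a database whose security rules allow broad reads, which researchers can detect by probing the database's public REST endpoint. The sketch below is purely illustrative — the project name is a placeholder, not the app's real backend, and the check shown (an unauthenticated read) is a simplified version of what a researcher might try:

```python
# Illustrative sketch: checking whether a Firebase Realtime Database
# answers unauthenticated reads. The project ID is a placeholder.
import json
import urllib.error
import urllib.request


def firebase_probe_url(project_id: str) -> str:
    """Public REST endpoint that returns the database root as JSON."""
    return f"https://{project_id}-default-rtdb.firebaseio.com/.json"


def classify_status(status: int) -> str:
    """Interpret the HTTP status of an unauthenticated read attempt."""
    if status == 200:
        return "world-readable: security rules allow public reads"
    if status in (401, 403):
        return "locked down: security rules rejected the request"
    return "inconclusive"


def check_database(project_id: str) -> str:
    """Attempt an unauthenticated read and classify the result."""
    url = firebase_probe_url(project_id)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)  # only parses if the root is actually exposed
            return classify_status(resp.status)
    except urllib.error.HTTPError as e:
        return classify_status(e.code)
```

A properly configured app would return a 401 or 403 here; a database that returns its contents with a 200 is exactly the kind of exposure described above.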

    We reached out to Codeway, which publishes the Chat & Ask AI app, for comment, but did not receive a response before publication.

    149 MILLION PASSWORDS EXPOSED IN MASSIVE CREDENTIAL LEAK

    Woman typing on phone.

    The exposed database reportedly included timestamps, model settings and the names users gave their chatbots, revealing far more than isolated prompts. (Elisa Schu/Getty Images)

    Why this matters to everyday users

    Many people assume their chats with AI tools are private. They type things they would never post publicly or even say out loud. When an app stores that data insecurely, it becomes a gold mine for attackers. Even without names attached, chat histories can reveal mental health struggles, illegal behavior, work secrets and personal relationships. Once exposed, that data can be copied, scraped and shared forever.

    YOUR PHONE SHARES DATA AT NIGHT: HERE’S HOW TO STOP IT

    Man outside with Airpods looking at his phone.

    Because the app handled data storage itself, a simple Firebase misconfiguration made sensitive AI chats accessible to outsiders, according to the researcher. (Edward Berthelot/Getty)

    Ways to stay safe when using AI apps

    You do not need to stop using AI tools to protect yourself. A few informed choices can lower your risk while still letting you use these apps when they are helpful.

    1) Be mindful of sensitive topics

    AI chats can feel private, especially when you are stressed, curious or looking for answers. However, not all apps handle conversations securely. Before sharing deeply personal struggles, medical concerns, financial details or questions that could create legal risk if exposed, take time to understand how the app stores and protects your data. If those protections are unclear, consider safer alternatives such as trusted professionals or services with stronger privacy controls.

    2) Research the app before installing

    Look beyond download counts and star ratings. Check who operates the app, how long it has been available, and whether its privacy policy clearly explains how user data is stored and protected.

    3) Assume conversations may be stored

    Even when an app claims privacy, many AI tools log conversations for troubleshooting or model improvement. Treat chats as potentially permanent records rather than temporary messages.

    4) Limit account linking and sign-ins

    Some AI apps allow you to sign in with Google, Apple, or an email account. While convenient, this can directly connect chat histories to your real identity. When possible, avoid linking AI tools to primary accounts used for work, banking or personal communication.

    5) Review app permissions and data controls

    AI apps may request access beyond what is required to function. Review permissions carefully and disable anything that is not essential. If the app offers options to delete chat history, limit data retention or turn off syncing, enable those settings.

    6) Use a data removal service

    Your digital footprint extends beyond AI apps. Anyone can find personal details about you with a simple Google search, including your phone number, home address, date of birth and Social Security number. Marketers buy this information to target ads. In more serious cases, scammers and identity thieves breach data brokers, leaving personal data exposed or circulating on the dark web. Using a data removal service helps reduce what can be linked back to you if a breach occurs.

    While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


    Kurt’s key takeaways

    AI chat apps are moving fast, but security is still lagging behind. This incident shows how a single configuration mistake can expose millions of deeply personal conversations. Until stronger protections become standard, you need to treat AI chats with caution and limit what you share. The convenience is real, but so is the risk.

    Do you assume your AI chats are private, or has this story changed how much you are willing to share with these apps? Let us know your thoughts by writing to us at Cyberguy.com.


    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Copyright 2026 CyberGuy.com. All rights reserved.

    [ad_2]

    Source link

  • TSMC to make advanced AI semiconductors in Japan in boost for its chipmaking ambitions

    [ad_1]

    TOKYO — Taiwan’s chipmaker TSMC said Thursday it will be manufacturing some of the world’s most cutting-edge semiconductors in Japan to meet booming artificial intelligence-related demand, in a boost for the country’s chipmaking ambitions.

    Taiwan Semiconductor Manufacturing Co., a major chip supplier to companies such as Nvidia and Apple, said Thursday it plans to make 3-nanometer semiconductors — advanced chips that are used in areas such as AI products and smartphones — at its second factory in Japan’s Kumamoto Prefecture, which is under construction.

    The decision by TSMC, the world’s largest contract chipmaker, was a coup for Prime Minister Sanae Takaichi ahead of Sunday’s general election, in which she hopes to translate high approval ratings into a public mandate for her policies.

    The announcement came while Takaichi was meeting with TSMC’s CEO and Chairman, C.C. Wei, in Tokyo.

    “It is very meaningful from the perspective of Japanese economic security, and I would like the project to move forward as proposed, by all means,” Takaichi said during the meeting.

    The advanced chips set to be made in Kumamoto will be used in AI, robotics and autonomous driving, sectors that Takaichi’s cabinet has designated as strategically important fields.

    TSMC’s first Kumamoto plant started mass production in late 2024 and makes less advanced chips. The company also is building new plants in Arizona in the U.S. to create a fabrication plant cluster and meet growing demand from customers building on the global AI frenzy.

    TSMC said in a separate emailed statement that Wei believes Japan’s “forward-looking semiconductor policy will deliver significant benefits to the semiconductor industry.”

    As Japan looks to gain ground in advanced chipmaking, it is also providing huge subsidies for its domestic chipmaker Rapidus, which is working toward mass production of cutting-edge chips.

    “There is a huge significance to have the world’s most advanced semiconductor factory in Japan from the perspective of economic security,” the Prime Minister’s Office said in a message posted on X on Thursday.

    Despite growing concerns over a potential AI-related bubble where massive investments may not pay off, TSMC’s Wei said last month he was confident the growing AI demand from its customers is “real.”

    Last month, TSMC said it plans to increase capital spending by up to nearly 40% this year as AI-related demand lifted its profits. It plans to raise its capital spending for 2026 to $52 billion-$56 billion, up from last year’s $40 billion.

    ___

    Chan reported from Hong Kong.

    [ad_2]

    Source link

  • Musk vows to put data centers in space, run them on solar power

    [ad_1]

    NEW YORK — Elon Musk vowed this week to upend another industry just as he did with cars and rockets — and once again he’s taking on long odds.

    The world’s richest man said he wants to put as many as a million satellites into orbit to form vast, solar-powered data centers in space — a move to allow expanded use of artificial intelligence and chatbots without triggering blackouts and sending utility bills soaring.

    To finance that effort, Musk combined SpaceX with his AI business on Monday and plans a big initial public offering of the combined company.

    “Space-based AI is obviously the only way to scale,” Musk wrote on SpaceX’s website Monday, adding about his solar ambitions, “It’s always sunny in space!”

    But scientists and industry experts say even Musk — who outsmarted Detroit to turn Tesla into the world’s most valuable automaker — faces formidable technical, financial and environmental obstacles.

    Here’s a look:

    Capturing the sun’s energy from space to run chatbots and other AI tools would ease pressure on power grids and cut demand for sprawling computing warehouses that are consuming farmland, forests and vast amounts of cooling water.

    But space presents its own set of problems.

    Data centers generate enormous heat. Space seems to offer a solution because it is cold. But it is also a vacuum, trapping heat inside objects in the same way that a Thermos keeps coffee hot using double walls with no air between them.

    “An uncooled computer chip in space would overheat and melt much faster than one on Earth,” said Josep Jornet, a computer and electrical engineering professor at Northeastern University.

    One fix is to build giant radiator panels that glow in infrared light to push the heat “out into the dark void,” says Jornet, noting that the technology has worked on a small scale, including on the International Space Station. But for Musk’s data centers, he says, it would require an array of “massive, fragile structures that have never been built before.”
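A back-of-the-envelope Stefan-Boltzmann estimate shows why those panels must be so large. Assuming illustrative values — a 1-megawatt server load, panels held at a comfortable 300 kelvin, emissivity of 0.9, radiating from one side — the required area works out to thousands of square meters:

```python
# Rough radiator sizing for an orbital data center. In vacuum the only
# way to shed heat is thermal radiation: P = emissivity * sigma * A * T^4.
# All numbers here are illustrative assumptions, not mission figures.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)


def radiator_area_m2(heat_watts: float, temp_k: float,
                     emissivity: float = 0.9) -> float:
    """Panel area needed to radiate `heat_watts` from one side at `temp_k`."""
    flux = emissivity * SIGMA * temp_k ** 4  # watts shed per square meter
    return heat_watts / flux


# A 1 MW load at 300 K needs on the order of a few thousand square
# meters of panel; running the panels hotter shrinks the area sharply,
# since radiated power scales with the fourth power of temperature.
area = radiator_area_m2(1e6, 300.0)
```

Since modern AI data centers draw hundreds of megawatts or more, scaling this up quickly reaches the "massive, fragile structures" Jornet describes.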

    Then there is space junk.

    A single malfunctioning satellite breaking down or losing orbit could trigger a cascade of collisions, potentially disrupting emergency communications, weather forecasting and other services.

    Musk noted in a recent regulatory filing that he has had only one “low-velocity debris generating event” in seven years running Starlink, his satellite communications network. Starlink has operated about 10,000 satellites — but that’s a fraction of the million or so he now plans to put in space.

    “We could reach a tipping point where the chance of collision is going to be too great,” said University at Buffalo’s John Crassidis, a former NASA engineer. “And these objects are going fast — 17,500 miles per hour. There could be very violent collisions.”

    Even without collisions, satellites fail, chips degrade, parts break.

    Special GPU graphics chips used by AI companies, for instance, can become damaged and need to be replaced.

    “On Earth, what you would do is send someone down to the data center,” said Baiju Bhatt, CEO of Aetherflux, a space-based solar energy company. “You replace the server, you replace the GPU, you’d do some surgery on that thing and you’d slide it back in.”

    But no such repair crew exists in orbit, and those GPUs in space could get damaged due to their exposure to high-energy particles from the sun.

    Bhatt says one workaround is to overprovision the satellite with extra chips to replace the ones that fail. But that’s an expensive proposition given they are likely to cost tens of thousands of dollars each, and current Starlink satellites only have a lifespan of about five years.

    Musk is not alone trying to solve these problems.

    A company in Redmond, Washington, called Starcloud launched a satellite in November carrying a single Nvidia-made AI computer chip to test how it would fare in space. Google is exploring orbital data centers in a venture it calls Project Suncatcher. And Jeff Bezos’ Blue Origin announced plans in January for a constellation of more than 5,000 satellites to start launching late next year, though its focus has been more on communications than AI.

    Still, Musk has an edge: He’s got rockets.

    Starcloud had to use one of his Falcon rockets to put its chip in space last year. Aetherflux plans to send a set of chips it calls a Galactic Brain to space on a SpaceX rocket later this year. And Google may also need to turn to Musk to get its first two planned prototype satellites off the ground by early next year.

    Pierre Lionnet, a research director at the trade association Eurospace, says Musk routinely charges rivals far more than he charges himself — as much as $20,000 per kilo of payload versus $2,000 internally.

    He said Musk’s announcements this week signal that he plans to use that advantage to win this new space race.

    “When he says we are going to put these data centers in space, it’s a way of telling the others we will keep these low launch costs for myself,” said Lionnet. “It’s a kind of power play.”

    [ad_2]

    Source link

  • Paris prosecutors raid X offices in probe into child abuse images and deepfakes

    [ad_1]

    PARIS — French prosecutors raided the offices of Elon Musk’s social media platform X on Tuesday as part of a preliminary investigation into a range of alleged offences, including spreading child sexual abuse images and deepfakes.

    The investigation was opened in January last year by the prosecutors’ cybercrime unit, the Paris prosecutors’ office said in a statement. It’s looking into alleged “complicity” in possessing and spreading pornographic images of minors, sexually explicit deepfakes, denial of crimes against humanity and manipulation of an automated data processing system as part of an organized group, among other charges.

    Prosecutors also asked Elon Musk and former CEO Linda Yaccarino to attend “voluntary interviews” on April 20. Employees of X have also been summoned that same week to be heard as witnesses, the statement said. Yaccarino was CEO from May 2023 until July 2025.

    A spokesperson for X did not respond to a request for comment.

    In a message posted on X, the Paris prosecutors’ office announced the ongoing searches at the company’s offices in France and said it was leaving the platform while calling on followers to join it on other social media.

    “At this stage, the conduct of the investigation is based on a constructive approach, with the aim of ultimately ensuring that the X platform complies with French law, as it operates on the national territory,” the prosecutors’ statement said.

    European Union police agency Europol “is supporting the French authorities in this,” Europol spokesperson Jan Op Gen Oorth told The Associated Press, without elaborating.

    The investigation was first opened following reports by a French lawmaker alleging that biased algorithms on X were likely to have distorted the functioning of an automated data processing system.

    It was later expanded after Musk’s artificial intelligence chatbot Grok generated posts that allegedly denied the Holocaust and spread sexually explicit deepfakes, the statement said. Holocaust denial is a crime in France.

    Grok wrote in a widely shared post in French that gas chambers at the Auschwitz-Birkenau death camp were designed for “disinfection with Zyklon B against typhus” rather than for mass murder — language long associated with Holocaust denial.

    Grok was built by Musk’s artificial intelligence company, xAI, and is integrated into his X platform.

    In later posts on its X account, the chatbot acknowledged that its earlier reply was wrong, said it had been deleted and pointed to historical evidence that Zyklon B in Auschwitz gas chambers was used to kill more than 1 million people.

    Grok has a history of making antisemitic comments. Musk’s company took down posts from the chatbot that appeared to praise Adolf Hitler after complaints.

    X is also under pressure from the EU. The 27-nation bloc’s executive arm opened an investigation last month after Grok spewed nonconsensual sexualized deepfake images on the platform.

    Brussels has already hit X with a 120 million euro (then $140 million) fine for shortcomings under the bloc’s sweeping digital regulations, including blue checkmarks that broke the rules on “deceptive design practices” and risked exposing users to scams and manipulation.

    [ad_2]

    Source link

  • X office in France searched as Paris prosecutor summons Elon Musk for questioning

    [ad_1]

    Paris, France — French authorities have asked Elon Musk to appear to answer questions as part of a probe into his social media platform X, the Paris prosecutor’s office said Monday, as authorities searched X’s office in the French capital.

    “Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,” the Paris prosecutor’s office said in a statement.

    French cybercrime authorities were carrying out a search, meanwhile, at X’s offices in Paris, the prosecutor’s office said.

    The summonses for Musk and Yaccarino and the search at the X office were related to an investigation launched in January 2025 over complaints about how X’s algorithm recommends content to users and gathers data, the prosecutor’s office said. Officials have previously raised concern that the way X works could amount to political interference.

    The investigation aims to ensure that X is in compliance with French law. The prosecutor added that it was broadened last year after reports that X was allowing users to share nonconsensual, AI-generated sexually explicit imagery and Holocaust denial content.

    Elon Musk, CEO of Tesla and SpaceX, and Shivon Zilis, a venture capitalist, arrive to attend the wedding of Dan Scavino, White House Deputy Chief of Staff, and Erin Elmore, the Department of State Director of Art in Embassies, at President Trump’s Mar-a-Lago resort in Palm Beach, Florida, Feb. 1, 2026. (Saul Loeb/AFP/Getty)


    X and Musk have dismissed the French investigation, and similar probes by European Union and British authorities, as baseless, politically motivated attacks on free speech.

    Yaccarino resigned as CEO of X in July last year after two years at the helm of the company.

    The investigation is being led by the cybercrime unit of the prosecutor’s office, in conjunction with French police and the joint European policing agency Europol.

    A CBS News investigation found late last month that the Grok AI tool on Musk’s X platform still allowed users in the U.S., U.K. and EU to digitally undress people without their consent, despite public pledges from the company to stop the function.

    The Grok chatbot, both via its standalone app and for premium X account holders using the platform, allowed people to use artificial intelligence to edit images of real people and show them in revealing clothing such as bikinis.

    A request for comment on the findings of CBS News’ investigation was met with an apparent auto-reply from Musk’s company xAI, saying only: “Legacy media lies.” 

    Scrutiny of the Grok feature has mounted rapidly in recent months, with the British government warning X could face a U.K.-wide ban if it fails to block the “bikini-fy” tool, and EU regulators announcing their own investigation into the Grok AI editing function in late January.

    CBS News found Grok was still enabling users to digitally undress people in photos weeks after X said earlier in January that it had “implemented technological measures to prevent the [@]Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.”

    [ad_2]

    Source link

  • Elon Musk combines his rocket and AI businesses before an expected IPO this year

    [ad_1]

    NEW YORK — Elon Musk is combining his space exploration and artificial intelligence ventures into a single company ahead of what’s expected to be a massive initial public offering for the business later this year.

    His rocket venture, SpaceX, announced on Monday that it had bought xAI in an effort to help the world’s richest man dominate the rocket and artificial intelligence businesses. The deal will combine several of his offerings, including his AI chatbot Grok, his satellite communications company Starlink, and his social media company X.

    Musk has talked repeatedly about the need to speed development of technology that will allow data centers to operate in space. He believes that will help overcome the problem of huge costs in electricity and other resources in building and running AI systems on Earth.

    It’s a goal that Musk suggested in his announcement of the deal could become easier to reach with a combined company.

    “In the long term, space-based AI is obviously the only way to scale,” Musk wrote on SpaceX’s website Monday, then added in reference to solar power, “It’s always sunny in space!”

    Musk said in his announcement he estimates “that within 2 to 3 years, the lowest cost way to generate AI compute will be in space.”

    SpaceX will be competing in that realm with Google, which is working on a research project called Project Suncatcher that would equip solar-powered satellites with AI computer chips, with a prototype that could launch as soon as next year.

    But Musk’s prediction of a near future of space-based AI supercomputers is not shared by many other companies building data centers, including Microsoft.

    “I’ll be surprised if people move from land to low-Earth orbit,” Microsoft’s president, Brad Smith, told The Associated Press last month, when asked about the alternatives to building data centers in the U.S. amid rising community opposition.

    Musk is already facing stiff competition in artificial intelligence, where he’s been scrambling to compete against rivals such as OpenAI, which is also working toward an IPO. Musk’s dislike of OpenAI, which he helped to found more than a decade ago, is part of what drove him to start xAI in 2023 and build the ChatGPT alternative he named Grok.

    Musk has equally ambitious plans for Tesla as he tries to pivot a company with shrinking car sales to focus more on self-driving taxis and humanoid robots, driven by artificial intelligence.

    Tesla recently announced a $2 billion investment in xAI.

    Musk has used his control over multiple companies to combine operations before. Tesla bought SolarCity a decade ago. And he recently had xAI buy his social media platform X, formerly called Twitter.

    Chatter on Wall Street about the billionaire continuing to meld his many ventures together in a massive Musk Inc. has taken off in recent months, with some investors speculating that Tesla could combine with SpaceX, too.

    Forbes magazine puts Musk’s net worth at $768 billion. He also owns a brain implant company called Neuralink and a tunnel digging business named the Boring Company.

    Terms of the SpaceX purchase of xAI were not disclosed. Among outside investors in the companies is a fund in which President Donald Trump’s son, Don Jr., is a partner. That firm, 1789 Capital, has made more than $1 billion worth of investments in various Musk companies in the past year, including SpaceX, xAI, and X, according to data provider Pitchbook, though it cashed out of some already.

    While pursuing space data centers, xAI is also moving rapidly to expand on Earth. Mississippi officials last month announced that the company is set to spend $20 billion to build a data center near the state’s border with Tennessee.

    The data center, called MACROHARDRR, a likely pun on Microsoft’s name, will be its third one in the greater Memphis area.

    Musk is also hoping the combined company can eventually help reach another goal he has long talked about — the need to colonize other planets in case there is a natural disaster or human-made disaster on Earth.

    When speaking at the World Economic Forum in Davos last week, Musk mused about humanity being a “tiny candle in a vast darkness, a tiny candle of consciousness that could easily go out.”

    [ad_2]

    Source link

  • Artificial Intelligence helps fuel new energy sources

    [ad_1]

    NEWYou can now listen to Fox News articles!

    Artificial intelligence and data centers have been blamed for rising electricity costs across the U.S. In December 2025, American consumers paid 42% more to power their homes than they did ten years ago. 

    “When you have increased demand and inadequate supply, costs are going to go up. And that’s what we’re experiencing right now,” Exelon CEO Calvin Butler said. 

    TRUMP SAYS EVERY AI PLANT BEING BUILT IN US WILL BE SELF-SUSTAINING WITH THEIR OWN ELECTRICITY

    In 2024, U.S. data centers used more than 4% of total U.S. electricity consumption, according to the International Energy Agency. That equates to as much electricity as the entire nation of Pakistan uses annually. U.S. data center consumption is expected to grow by 133% by the end of the decade, using as much power as the entire country of France. 

    “We’re headquartered in Chicago, and we’re the owner of ComEd, the fourth-largest utility in the nation. ComEd’s peak load is roughly 23 gigawatts. We have had data center load come onto the system, but by 2030, we’ll be at 19 gigawatts,” Butler said. 

    Artificial intelligence data centers in the U.S. used more than 4% of the total U.S. electricity consumption, according to the International Energy Agency. (Exelon)

    Commonwealth Edison has experienced a dramatic increase in data center connection requests. The potential projects total more than 30 gigawatts and are expected to come online between now and 2045.

    “Our growth is unprecedented in the last several decades. So, with the data center advent and the technology coming, we’ve been forced to serve that load, which is our responsibility,” Butler said. “But what we also have to do is build new generation supply, which is not keeping up with the load that is coming on. And that’s the crunch that we’re in right now.”

    IN 2026, ENERGY WAR’S NEW FRONT IS AI, AND US MUST WIN THAT BATTLE, API CHIEF SAYS

    Commonwealth Edison is asking regulators for a $15.3 billion, four-year grid update to meet the growing demand. The U.S. overall has increased its grid capacity by more than 15% over the last decade, but many utility companies and energy producers say it is not enough. 

    “We’re at a stage right now where we’re constrained by electricity,” Commonwealth Fusion Systems CEO Bob Mumgaard said. “You want to make power plants that can make a lot of power in a small package that you can put anywhere, that you could run at any time and fusion fits that bill.”

    Zanskar energy plant

    Zanskar is the first AI-native geothermal energy company, according to its website. This plant is located in New Mexico. (Zanskar)

    Commonwealth Fusion Systems is working to add a new form of nuclear energy to the grid — fusion. It offers the same reliability benefits as the standard nuclear energy already in use, but does not produce long-lived radioactive waste and carries fewer risks. 

    “In fusion there’s no chain reaction. The result is helium which is safe and inert and you don’t use it to make anything related to weapons,” Mumgaard said. 

    US POWER CRUNCH LOOMS AS OKLO CEO SAYS GRID CAN’T KEEP UP WITHOUT NEW INVESTMENT

    Commonwealth Fusion Systems says Artificial Intelligence is helping bring fusion energy closer to being a new resource. 

    “Building and designing these complex machines and manipulating this complex data matter of plasma are all things that we’re still learning and we’re still figuring out how to do,” Mumgaard said. “And that’s an area where we’ve been able to accelerate using A.I.”

    Other underutilized energy sources could soon get a big boost thanks to A.I. Geothermal energy is a small part of the electric grid because of high drilling costs and low confidence in where to place infrastructure. 

    Power lines and supporting towers

    Geothermal and nuclear fusion technology will allow energy to be produced in any weather at any time. (AP)

    “If you could drill the perfect geothermal well every single time, like you pick the right spot, you design the right well, you drill the 5,000, 8,000 feet, you hit 400F degree temperatures, that’s incredibly productive,” Zanskar Co-founder Joel Edwards said. “If you could do that every single time over and over and again, geothermal power is the cheapest source of power period.” 

    Zanskar is working to make the geothermal search more exact. The company uses A.I.-fueled mapping to find untapped resources previously thought non-existent. 

    “If we could just get more precise in where we go to find the things and then how we drill into the things, geothermal absolutely has the cost curve to come down,” Edwards said. “And that’s sort of what we’re running towards, with A.I. sort of giving us the boost, giving us an edge to do that.”

    CLICK HERE TO GET THE FOX NEWS APP

    Both geothermal and nuclear fusion can produce energy in any weather at any time, a capability that could have helped ease the grid strain amid the recent winter storm. 

    “It’s critical, and we’ve been raising that alarm for years now, and I use the analogy that you’re driving a car and your check engine light is on, but you keep driving it, hoping that you’ll keep getting there and keep going, but when it breaks down, you’re going to have a significantly higher cost,” Butler said. “We have to pay attention to what’s going on, and this winter storm – Winter Storm Fern – is indicative of what’s coming.”

    [ad_2]

    Source link

  • Inside Moltbook, the new social media network for artificial intelligence agents

    [ad_1]

    Moltbook was launched last week by a software developer and mirrors the template of Reddit, but it’s not for humans. Instead, it allows artificial intelligence agents to post written content and interact with other chatbots through comments, up-votes and down-votes. Tyler Cowen, professor of economics at George Mason University, joins CBS News to discuss.


  • FACT FOCUS: Images of NYC Mayor With Jeffrey Epstein Are AI-Generated. Here’s How We Know

    Multiple AI-generated photos falsely claiming to show New York City Mayor Zohran Mamdani as a child and his mother, filmmaker Mira Nair, with disgraced financier Jeffrey Epstein and his confidant Ghislaine Maxwell, along with other high-profile public figures, were shared widely on social media Monday.

    The images originated on an X account labeled as parody after a huge tranche of new Epstein files was released by the Justice Department on Friday. They are clearly watermarked as AI, and other elements they contain do not add up.

    Here’s a closer look at the facts.

    CLAIM: Images show Mamdani as a child and his mother with Jeffrey Epstein and other public figures linked to the disgraced financier.

    THE FACTS: The images were created with artificial intelligence. They all contain a digital watermark identifying them as such and first appeared on a parody X account that says it creates “high quality AI videos and memes.”

    In one of the images, Mamdani and Nair appear in the front of a group photo with Maxwell, Epstein, former President Bill Clinton, Amazon founder Jeff Bezos and Microsoft founder Bill Gates. They seem to be posing at night on a crowded city street. Mamdani looks to be a preteen or young teenager.

    Another supposedly shows the same group of people, minus Nair, in what appears to be a tropical setting. Epstein is pictured cradling Clinton in his arms, while Maxwell has her arm around Mamdani, who appears slightly younger.

    Other AI-generated images circulating online depict Mamdani as a baby being held by Nair while she poses with Epstein, Clinton, Maxwell and Bezos. None of Epstein’s victims have publicly accused Clinton, Gates or Bezos of being involved in his crimes.

    Google’s Gemini app detected SynthID, a digital watermarking tool for identifying content that has been generated or altered with AI, in all the images described above. This means they were created or edited, either entirely or in part, by Google’s AI models.

    The X account that first posted the images describes itself as “an AI-powered meme engine” that uses “AI to create memes, songs, stories, and visuals that call things exactly how they are — fast, loud, and impossible to ignore.”

    An inquiry sent to the account went unanswered. However, a post by the account seems to acknowledge that it created the images.

    “Damn you guys failed,” it reads. “I purposely made him a baby which would technically make this pic 34 years old. Yikes.”

    The photos began circulating after an email emerged in which a publicist, Peggy Siegal, wrote to Epstein about seeing a variety of luminaries, including Clinton, Bezos and Nair, an award-winning Indian filmmaker, at a 2009 afterparty for a film held at Maxwell’s townhouse.

    While Mamdani appears as a baby or young child in all of the images, he was 18 in 2009, when Nair is said to have attended the party.

    The images have led to related falsehoods that have spread online in their wake. For example, one claims that Epstein is Mamdani’s father. This is not true — Mamdani’s father is Mahmood Mamdani, an anthropology professor at Columbia University.

    The NYC Mayor’s Office did not respond to a request for comment.

    Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

    Associated Press

  • Elon Musk says he is merging SpaceX with artificial-intelligence company xAI

    Elon Musk says he is merging SpaceX with his artificial-intelligence company xAI


  • France might seek restrictions on VPN use in campaign to keep minors off social media

    France may take additional steps to prevent minors from accessing social media platforms. As its government advances a proposed ban on social media use for anyone under age 15, some leaders are already looking to add further restrictions. During an appearance on public broadcast service Franceinfo, Minister Delegate for Artificial Intelligence and Digital Affairs Anne Le Hénanff said VPNs might be the next target.

    “If [this legislation] allows us to protect a very large majority of children, we will continue. And VPNs are the next topic on my list,” she said.

    A virtual private network would potentially allow French citizens younger than 15 to circumvent the social media ban. We’ve already seen VPNs experience a popularity spike in the UK last year after similar age-gating laws were passed. However, a VPN also offers benefits for online privacy, and introducing age verification requirements that demand personal data negates a large part of these services’ appeal.

    The French social media ban is still a work in progress. France’s National Assembly voted in favor of the restrictions last week with a result of 116-23, moving it ahead for discussion in the country’s Senate. While a single comment doesn’t mean that France will in fact ban VPNs for any demographic, it does point to the direction some of the country’s leaders want to take. Critics responded to Le Hénanff’s statements with worry that these attempts at protective measures were veering into an authoritarian direction.

    The actions in France echo several other legislative pushes around the world aimed at reducing children and teens’ access to social media and other potentially sensitive content online. The US has seen 25 state-level age verification laws introduced in the past two years, which has created a new set of concerns around users’ privacy and personal data, particularly when there has been no attempt to standardize how that information will be collected or protected. When data breaches at large corporations are already all too common, it’s hard to trust that the individual sites and services that suddenly need to build an age verification process won’t be an easy target for hacks.

    Anna Washenko

  • US futures and world shares slip as worries over Trump’s Fed chief pick and AI weigh on markets

    U.S. futures and world shares skidded on Monday as worries over President Donald Trump’s nominee to be the next Federal Reserve chair amplified jitters over a possible bubble in the artificial intelligence boom.

    South Korea’s exchange, which is heavily influenced by tech-related developments, briefly suspended trading as its benchmark Kospi tumbled, closing 5.3% lower at 4,949.67. Samsung Electronics gave up 6.3%, while chip maker SK Hynix sank 8.7%.

    The Kospi has been forging records for weeks as big tech companies piggybacked on the AI craze with deals with major players like chip maker Nvidia and OpenAI.

    In early European trading, Germany’s DAX edged less than 0.1% lower to 24,528.57. The CAC 40 in Paris shed 0.2% to 8,108.56, while Britain’s FTSE 100 declined 0.3% to 10,195.88.

    The future for the S&P 500 sank 0.7%, while that for the Dow Jones Industrial Average fell 0.4%.

    Markets took a hit as investors considered how Kevin Warsh, Trump’s nominee to lead the Federal Reserve after Fed Chair Jerome Powell’s term ends in May, might handle interest rates.

    Warsh’s nomination requires Senate approval. But financial markets fear the Fed may lose some of its independence because of Trump, who has pushed hard for more and faster rate cuts. That fear has helped send the price of gold skyward and weakened the U.S. dollar over the last year.

    “People do not get handed the keys to the most powerful central bank on earth because they plan to drive in the opposite direction of the people who gave them the keys,” Stephen Innes of SPI Asset Management said in a commentary.

    Early Monday, the price of gold fell 1.9%, while silver bounced back slightly, gaining 0.2%. Both plunged Friday as record runs in precious metals markets ground to a halt.

    On Friday, the price of gold dropped 11.4%, suddenly losing momentum after a tremendous rally where it roughly doubled over 12 months. It topped $5,000 for the first time on Jan. 26 and was around $5,600 at one point on Thursday.

    Silver, which had been on a similar, jaw-dropping tear, plunged 31.4%.

    U.S. benchmark crude oil lost $3.46 to $61.75 per barrel, while Brent crude, the international standard, fell $3.47 to $65.85 per barrel.

    Speaking to reporters during the weekend, Trump said Iran should negotiate a “satisfactory” deal to prevent the Middle Eastern country from getting any nuclear weapons.

    “I don’t know that they will. But they are talking to us. Seriously talking to us,” he said.

    That comment apparently assuaged some worries over potential disruptions to oil supplies that had pushed prices higher, analysts said.

    In Tokyo, the Nikkei 225 gave up early gains, sinking 1.3% to 52,655.18.

    Hong Kong’s Hang Seng dropped 2.2% to 26,775.57, while the Shanghai Composite index sank 2.5% to 4,015.75.

    In Australia, the S&P/ASX 200 fell 1% to 8,778.60.

    Taiwan’s Taiex lost 1.4%.

    On Friday, the S&P 500 dropped 0.4% and the Dow lost 0.4%. The Nasdaq composite lost 0.9%.

    The Fed chair has a big influence on the economy and markets worldwide by helping to dictate where the U.S. central bank moves interest rates. That affects prices for all kinds of investments, as the Fed tries to keep the U.S. job market humming without letting inflation get out of control.

    A report released Friday showed U.S. inflation at the wholesale level was hotter last month than economists expected. That could put pressure on the Fed to keep interest rates steady for a while instead of cutting them, as it did late last year.

    The longtime assumption has been that the Fed should operate separately from the rest of Washington so that it can make moves that are painful in the short term but necessary for the long term. To get inflation down to the Fed’s goal of 2%, for example, may require the unpopular choice to keep interest rates high and grind down on the economy for a while.

    In other action early Monday, the dollar fell to 154.88 Japanese yen from 154.94 yen. The euro was unchanged at $1.1853.


  • Did artificial intelligence really drive layoffs at Amazon and other firms? It can be hard to tell

    The one thing N. Lee Plumb knows for sure about being laid off from Amazon last week is that it wasn’t a failure to get on board with the company’s artificial intelligence plans.

    Plumb, his team’s head of “AI enablement,” says he was so prolific in his use of Amazon’s new AI coding tool that the company flagged him as one of its top users.

    Many assumed Amazon’s 16,000 corporate layoffs announced last week reflected CEO Andy Jassy’s push to “reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.”

    But like other companies that have tied workforce changes to AI — including Expedia, Pinterest and Dow last week — it can be hard for economists, or individual employees like Plumb, to know if AI is the real reason behind the layoffs or if it’s the message a company wants to tell Wall Street.

    “AI has to drive a return on investment,” said Plumb, who worked at Amazon for eight years. “When you reduce head count, you’ve demonstrated efficiency, you attract more capital, the share price goes up.”

    “So you could potentially have just been bloated in the first place, reduce head count, attribute it to AI, and now you’ve got a value story,” he said.

    Plumb is atypical for an Amazon worker in that he’s also running what he describes as a “long shot” bid for Congress in Texas, on a platform focused on stopping the tech industry’s reliance on work visas to “replace American workers with cheaper foreign labor.”

    But whatever it was that cost Plumb his job, his skepticism about AI-driven job replacement is one shared by many economists.

    “We just don’t know,” said Karan Girotra, a professor of management at Cornell University’s business school. “Not because AI isn’t great, but because it requires a lot of adjustment and most of the gains accrue to individual employees rather than to the organization. People save time and they get their work done earlier.”

    If employees work faster because of AI, Girotra said, it still takes time to adjust a company’s management structure in a way that would enable a smaller workforce. He’s not convinced that’s happening at Amazon, which he said is still scaling back from a glut of hiring during the COVID-19 pandemic.

    A report by Goldman Sachs said AI’s overall impact on the labor market remains limited, though some effects might be felt in “specific occupations like marketing, graphic design, customer service, and especially tech.” Those are fields involving tasks that correlate with the strengths of the current crop of generative AI chatbots that can write emails and marketing pitches, produce synthetic images, answer questions and help write code.

    But the bank’s economic research division said in its most recent monthly AI adoption tracker that, since December, “very few employees were affected by corporate layoffs attributed to AI,” though the report was published Jan. 16, before Amazon, Dow and Pinterest announced their layoffs.

    San Francisco-based Pinterest was the most explicit in asserting that AI drove it to cut up to 15% of its workforce. The social media company said it was “making organizational changes to further deliver on our AI-forward strategy, which includes hiring AI-proficient talent. As a result, we’ve made the difficult decision to say goodbye to some of our team members.”

    Pinterest echoed that message in a regulatory disclosure that said the company was “reallocating resources to AI-focused roles and teams that drive AI adoption and execution.”

    Expedia has voiced a similar message, but the 162 tech workers the travel website cut from its Seattle headquarters last week included several AI-specific roles, such as machine-learning scientists.

    Dow’s regulatory disclosures tied its 4,500 layoffs to a new plan “utilizing AI and automation” to increase productivity and improve shareholder returns.

    Amazon’s 16,000 corporate job cuts were part of a broader reduction of employees at the e-commerce giant. At the same time as those cuts, all believed to be office jobs, Amazon said it would cut about 5,000 retail workers, according to notices it sent to state workforce agencies in California, Maryland and Washington, resulting from its decision to close almost all of its Amazon Go and Amazon Fresh stores.

    That’s on top of a round of 14,000 job cuts in October, bringing the total to well over 30,000 since Jassy first signaled a push for AI-driven organizational changes.

    Like many companies, in technology and otherwise, but particularly those that make and sell AI tools and services, Amazon has been pushing its workforce to find more efficiencies with AI.

    Meta CEO Mark Zuckerberg said last week that 2026 will be when “AI starts to dramatically change the way that we work.”

    “We’re investing in AI-native tooling so individuals at Meta can get more done, we’re elevating individual contributors, and flattening teams,” he said on an earnings call. “We’re starting to see projects that used to require big teams now be accomplished by a single very talented person.”

    So far, Meta’s layoffs this year have focused on cutting jobs from its virtual reality and metaverse divisions. Also driving job impacts is the industry shifting resources to AI development, which requires huge spending on computer chips, energy-hungry data centers and talent.

    Jassy told Amazon employees last June to be “curious about AI, educate yourself, attend workshops and take trainings, use and experiment with AI whenever you can, participate in your team’s brainstorms to figure out how to invent for our customers more quickly and expansively, and how to get more done with scrappier teams.”

    Plumb was fully on board with that and said he demonstrated his proficiency in using Amazon’s AI coding tool, Kiro, to “solve massive problems” in the company’s compensation system.

    “If you weren’t using them, your manager would get a report and they would talk to you about using it,” he said. “There were only five people in the entire company that were a higher user of Kiro than I was, or had achieved more milestones.”

    Now he’s shifting gears to his candidacy among a field of Republicans in the Houston area looking to unseat U.S. Rep. Dan Crenshaw in the March primary.

    Cornell’s Girotra said it’s possible that increasing AI productivity is leading companies to cut middle management, but he said the reality is that those making layoff decisions “just need to cut costs and make it happen. That’s it. I don’t think they care what the reason for that is.”

    Not all companies are signaling AI as a reason for cuts. Home Depot confirmed on Thursday that it was eliminating 800 roles tied to its corporate headquarters in Atlanta, though most of the affected employees worked remotely.

    Home Depot spokesman George Lane said the company’s cuts were not driven by AI or automation but “truly about speed, agility” and serving the needs of its customers and front-line workers.

    And exercise equipment maker Peloton confirmed on Friday that it is reducing its workforce by 11% as part of a broader cost-cutting move under its CEO Peter Stern to pare down operating expenses.

    ——

    AP Retail Writer Anne D’Innocenzio contributed to this report.


  • Indonesia lets Elon Musk’s Grok back online under tight supervision

    JAKARTA, Indonesia — Indonesia allowed Elon Musk’s artificial intelligence chatbot Grok to resume operations in the country on a conditional basis and under strict supervision, weeks after banning it for explicit sexual content.

    Musk’s social platform X Corp made a written commitment to service improvements and compliance with applicable laws, the communications ministry said in a statement Sunday.

    The company told the ministry it had taken steps to address the misuse of Grok services, including restricting access to certain features, according to the statement.

    Indonesia and Malaysia were the first two countries to block access to Grok in January over concerns it was being misused to generate sexually explicit and nonconsensual images.

    Malaysian authorities lifted the temporary restriction after the company took security and preventive measures. Malaysian regulators said they met last week with X’s representatives and would continue to monitor the situation.

    The normalization of Grok’s operations in Indonesia was not unconditional, said Alexander Sabar, the ministry’s director general of digital space supervision. He added that the steps X claims to have taken will be verified and tested by Indonesian authorities to ensure they prevent violations, including the distribution of illegal content and violations of child protection principles.

    “If inconsistencies or further violations are found in its implementation, the Ministry of Communication and Digital Affairs will not hesitate to take corrective action, including suspending access to services again,” Sabar said.
