ReportWire

Tag: Artificial Intelligence

  • AI wearable helps stroke survivors speak again


    Losing the ability to speak clearly after a stroke can feel devastating. For many survivors, the words are still there in their minds, but their bodies will not cooperate. Speech becomes slow, unclear or fragmented. This condition, known as dysarthria, affects nearly half of all stroke survivors and can make everyday communication exhausting. Now, researchers believe they may have found a better way forward. Scientists at the University of Cambridge have developed a wearable device called Revoice. It is designed to help people with post-stroke speech impairment communicate naturally again without surgery or brain implants.

    Sign up for my FREE CyberGuy Report

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

    A soft, flexible choker like this houses Revoice’s sensors, which read subtle throat vibrations to help reconstruct speech in real time. (University of Cambridge)

    Why dysarthria makes recovery so hard

    Dysarthria is a physical speech disorder. A stroke can weaken the muscles in the face, mouth and vocal cords. As a result, speech may sound slurred, slow or incomplete. Many people can only say a few words at a time, even though they know exactly what they want to say. According to professor Luigi Occhipinti, that disconnect creates deep frustration. Stroke survivors often work with speech therapists using repetitive drills. These exercises help over time, but open-ended conversation remains difficult. Recovery can take months or even longer, which leaves patients struggling during daily interactions with family, caregivers and doctors.

    How the Revoice device works

    Revoice takes a very different approach. Instead of asking users to type, track their eyes or rely on implants, the device reads subtle physical signals from the throat and neck. It looks like a soft, flexible choker made from breathable, washable fabric. Inside are ultra-sensitive textile strain sensors and a small wireless circuit board. When a user silently mouths words, the sensors detect tiny vibrations in the throat muscles. At the same time, the device measures pulse signals in the neck to estimate emotional state.

    Those signals are processed by two artificial intelligence (AI) agents:

    • One reconstructs words from mouthed speech
    • The other interprets emotion and context to build complete sentences

    Together, they allow Revoice to turn a few mouthed words into fluent speech in real time.

    This diagram shows how Revoice combines throat muscle signals and pulse data with AI to turn silently mouthed words into full, expressive sentences in real time. (University of Cambridge)

    Why this AI approach is different

    Earlier silent speech systems had serious limits. Many were tested only on healthy volunteers. Others forced users to pause for several seconds between words, which made the conversation feel unnatural. Revoice avoids those delays. It uses an AI-driven throat sensor system paired with a lightweight language model. Because the model runs efficiently, it uses very little power and delivers near-instant responses. The device is powered by a 1,800 mWh battery, which researchers expect will last a full day on a single charge.

    What early trials revealed

    After refining the system with healthy participants, researchers tested Revoice with five stroke patients who had dysarthria.

    The results were striking:

    • Word error rate: 4.2%
    • Sentence error rate: 2.9%

    In one example, a patient mouthed the phrase “We go hospital.” Revoice expanded it into a complete sentence that reflected urgency and frustration, based on emotional signals and context. Participants reported a 55% increase in satisfaction and said the device helped them communicate as fluently as they did before their stroke.
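Word error rate, the metric behind these figures, is the standard way speech systems are scored: the word-level edit distance between the system's output and a reference transcript, divided by the number of words in the reference. A minimal sketch of the computation (illustrative only, not the researchers' code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Two words ("to", "the") dropped from a six-word reference: WER = 2/6
print(word_error_rate("we go to the hospital now", "we go hospital now"))
```

A 4.2% word error rate means roughly one word in 24 is wrong, inserted or missing relative to what the patient intended.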

    This figure breaks down the Revoice hardware and AI pipeline, showing how strain sensors, wireless electronics, and emotion decoding work together to reconstruct natural speech. (University of Cambridge)

    Beyond stroke recovery

    Researchers believe Revoice could also help people with Parkinson’s disease and motor neuron disease. Because the device is comfortable, washable, and designed for daily wear, it could fit into real-world routines rather than being confined to clinics. Before that can happen, larger clinical trials are required. The research team plans to begin broader studies with native English-speaking patients and hopes to expand the system to support multiple languages and a wider range of emotional expressions. The findings were published in the journal Nature Communications.

    What this means for you

    If you or someone you care for has experienced a stroke, this research points to a major shift in recovery tools. Revoice suggests that speech assistance does not need to be invasive to be effective. A wearable solution could support communication during the most difficult months of rehabilitation, when confidence and independence often suffer the most. It may also reduce stress for caregivers who struggle to understand incomplete or unclear speech. Clear communication can improve medical care, emotional well-being and daily decision-making.

    Kurt’s key takeaways

    Communication is tied closely to dignity and independence. For stroke survivors, losing that ability can be one of the hardest parts of recovery. Revoice shows how artificial intelligence and wearable tech can work together to restore something deeply human. While it is still early, this device represents a meaningful step toward making recovery feel less isolating and more hopeful.

    If a simple wearable could help restore natural speech, should it become a standard part of stroke rehabilitation? Let us know by writing to us at Cyberguy.com

    Copyright 2026 CyberGuy.com.  All rights reserved.


  • A chatbot entirely powered by humans, not artificial intelligence? This Chilean community shows why

About 50 residents of a community outside Chile’s capital spent Saturday powering an entirely human-operated chatbot that answered questions and made silly pictures on command, an effort meant to highlight the environmental toll of artificial intelligence data centers in the region.

    Organizers say the 12-hour project fielded more than 25,000 requests from around the world.

    Asking the Quili.AI website to generate an image of a “sloth playing in the snow” didn’t instantly produce an output, as ChatGPT or Google’s Gemini would. Instead, someone responded in Spanish to wait a few moments and reminded the user that a human was responding.

    Then came a drawing about 10 minutes later: a penciled sketch of a cute and cartoonish sloth in a pile of snowballs, with its claws clutching one and about to throw it.

    “The goal is to highlight the hidden water footprint behind AI prompting and encourage more responsible use,” said a statement from organizer Lorena Antiman of the environmental group Corporación NGEN.

The answers came from a rotating crew of volunteers working on laptops in a community center in Quilicura, a municipality at the urban edge of Santiago that has become a data center hub. Asked by an Associated Press reporter who made the sloth drawing, the website responded that it was a local youth helping with illustrations.

The website responded quickly to questions that drew on residents’ cultural knowledge, like how to make Chilean sopaipillas, a fried pastry. When the volunteers didn’t know an answer, they walked around the room to see if someone else did.

    “Quili.AI isn’t about always having an instant answer. It’s about recognizing that not every question needs one,” Antiman said. “When residents don’t know something, they can say so, share perspective, or respond with curiosity rather than certainty.”

    She said it’s not designed to reject the “incredibly valuable” uses of AI but to think more about the impacts of so much “casual prompting” on water-stressed places like Quilicura.

    The backdrop behind the campaign is a debate, in Chile and elsewhere, about the heavy costs of AI usage. Data center computer chips running AI systems require huge amounts of electricity and some also use large volumes of water for cooling, with usage varying depending on location and type of equipment.

    Cloud computing giants Amazon, Google and Microsoft are among a number of companies that have built or planned data centers in the Santiago region.

    Google has argued that the Quilicura data center it switched on in 2015 is the “most energy efficient in Latin America” and has highlighted its investment in wetlands restoration and irrigation projects in the surrounding Maipo River basin. But it faced a court challenge over another project near Santiago over water usage concerns.

    Chile has faced a decade of severe drought, which experts say contributed to the spread of recent deadly wildfires.


  • ‘It’s a game changer’: Artificial intelligence helps Iowa surgeon reconstruct teen’s jaw


    While waiting in a Des Moines, Iowa, exam room, Mya Buie nervously applies her lip gloss. Three months ago, the 17-year-old had multiple surgeries to reconstruct her jaw. In this moment, she is waiting to be seen for a postoperative checkup. She hasn’t liked medical settings since a shooting landed her in a Des Moines hospital’s intensive care unit for several days.

    “It was kind of scary. It was traumatic,” she said of the night her mother’s ex-boyfriend shot her in the face during a fight just days before her birthday.

    On the other hand, her surgeon, Dr. Simon Wright, has been looking forward to this appointment all week. He calls Buie one of his most memorable and brave patients.

    “I’m gonna take a look under your chin,” he says to Buie while carefully touching her face. The teenager was shot in the face with a .40-caliber bullet at close range. The impact of the bullet fractured and shattered her jaw into tiny fragments and permanently damaged four teeth.

For years, Wright, a facial reconstruction trauma surgeon, has reconstructed facial bones by bending and molding titanium plates by hand to fit the injured area. It’s a time-consuming and error-prone process.

    “There is always a level of dissatisfaction, and it doesn’t feel good to do something just good enough,” Wright said.

    The manual work has now been replaced with modern technology. Doctors used artificial intelligence to read a CT scan of Buie’s jaw, then a 3D printer turned that image into a custom jawbone plate.

    “It’s so much easier than trying to bend a plate to get it perfect,” Wright said. “It’s no question a game-changer.”

    Doctors say a customized jawbone plate allows for a more accurate fit, better aligns the jaw with a patient’s teeth, and cuts surgery time in half. What makes this process so unique: Buie’s customized plate was made in record time, a first for Des Moines trauma surgeons.

“The ability to make a custom plate has been around for 10 years or more, but the ability to do it very quickly has not been,” Wright said.

    What would normally take several weeks took only a few days. The plate was created in a lab in Jacksonville, Florida, put on a plane to the Des Moines International Airport, then hand-delivered to the hospital on a Friday night before the teenager’s surgery first thing Saturday morning.

    “There is a lot of things that have to go right to do any kind of surgery at all, and to do something complicated like this, it’s really an inspiring thing to be part of,” Wright said, smiling. He also said this advancement serves as a reminder of the importance of supporting medical research because of its impact on people.

    “This came from the efforts of all kinds of people in different fields that have cross-pollinated. For example, 3D printing as a medical application, and at one point, it may not have begun with a medical endpoint in mind,” he said.

    For trauma patients, time is of the essence. For Buie, time does heal. The high school junior is back to school with plans to graduate early. Doctors expect her to make a full recovery. Her new jawbone plate will eventually fuse to bone and be as strong as ever.

    “I just thank God every day for giving me a second chance at life. I’m very grateful. I can tell my story and spread the word of God with this story, like a testament.”

    Buie will likely undergo additional surgeries. Next month, she will receive dental implants for her missing teeth.


  • Doctors increasingly see AI scribes in a positive light. But hiccups persist

    When Jeannine Urban went in for a checkup in November, she had her doctor’s full attention.

    Instead of typing on her computer keyboard during the exam, Urban’s primary care physician at the Penn Internal Medicine practice in Media, Pennsylvania, had an ambient artificial intelligence scribe take notes. At the end of the 30-minute visit, Urban’s doctor showed her the AI summary of the appointment, neatly organized into sections for her medical history, the physical exam findings, and an assessment and treatment plan for her rheumatoid arthritis and hot flashes, among other details.


    The clinical note, which Urban could also review on the patient portal at home, was incredibly thorough, she said. It summarized all of her questions and concerns and the doctor’s responses. The scribe “made sure we didn’t miss anything,” Urban said.

    Ambient AI scribes are being hailed by physicians as a game changer that helps free them to focus on their patients rather than their computer keyboard. By releasing doctors from the onerous and time-consuming task of documenting what happens during every patient encounter, early studies show, AI scribes may help reduce physician burnout and after-hours “pajama time” catching up on work in the evening.

    The potential of AI to transform every aspect of the health care system — from patient care to clinical efficiency to medical innovation — is an area of intense focus, including by the Trump administration.

    Last January, President Donald Trump issued an executive order to remove barriers to American leadership in AI. Later in the year, a press release from the federal Department of Health and Human Services invited stakeholders to weigh in on how the department can accelerate the adoption of AI in health care.

    Several startup vendors in recent years have introduced ambient AI scribe products that can be integrated into electronic health records. EHR market leader Epic is piloting its own AI scribe technology, which it expects to release widely early this year, according to Jackie Gerhart, a family medicine physician who is chief medical officer and vice president of clinical informatics at Epic.

    Health tech experts estimate that a third of providers have access to ambient AI scribe technology. As adoption looks likely to grow rapidly over the next few years, many expect it to become more of a recruiting tool, a minimum requirement for incoming clinicians, who reports indicate are increasingly prioritizing work-life balance.

    “It’s part of keeping doctors happy,” said Robert Wachter, a professor and the chair of the Department of Medicine at the University of California-San Francisco, whose forthcoming book, A Giant Leap, explores how AI is transforming health care. “Health systems that initially might have done a hard-nosed return-on-investment calculation — many are softening on that and realizing that the cost of recruiting and retaining doctors is pretty high.”

    But many questions remain. Does the use of ambient AI scribes improve patient care and health outcomes? Will doctors use time they gain by employing an AI scribe to improve the quality of the time they spend with their patients or just boost the number of patients they see? To what extent will expanding the amount of detail available from a patient visit lead to bigger bills if the AI scribe is integrated with a coding app that optimizes provider charges?

    For now, these questions remain mostly unanswered.

    Urban said that the AI scribe didn’t change her experience as a patient very much. Typically, after a patient gives verbal permission, the AI scribe records the visit on a phone and organizes the conversation into the structure of a clinical note, filtering out small talk that isn’t pertinent to the medical visit but incorporating relevant details about a family member’s recent cancer diagnosis, for example. The scribe’s note is often then integrated into the provider’s EHR. The doctor later reviews the note and signs off on it.

    Even though the visit may not feel very different to patients, some clinicians report that ambient AI scribes are changing patient encounters in unanticipated ways.

    “Now, when I’m doing a physical exam, I have to say what I’m doing and what I’m finding out loud in order for the AI scribe to document it,” said Dina Capalongo, Urban’s primary care doctor. “People find that very interesting,” she said.

    When Capalongo places her stethoscope over the carotid artery under a patient’s jaw, for example, she might say that she doesn’t hear a “bruit,” or vascular murmur, whose presence could indicate atherosclerosis. Patients have told her, “I never knew why a doctor would listen there,” she said.

    Saying things out loud for the AI scribe that would typically appear only in a clinical note can create its own set of challenges, particularly during sensitive physical exams. Doctors may feel it’s important to adjust their conversation accordingly.

    “Sometimes patients are anxious and scared and my saying things that they don’t understand or they may worry about during an uncomfortable examination does not help the situation and honestly is insensitive to what the patient is going through,” said Genevieve Melton-Meaux, a professor in the Division of Colon and Rectal Surgery at the University of Minnesota, who is also chief health informatics and AI officer at Fairview Health Services in Minneapolis. “I’ll keep that top of mind and make sure I record it” after the visit.

    “How we have conversations with patients about these tools is really important, in particular for maintaining trust and ensuring accurate information,” Melton-Meaux said.

    Studies have found that, across a range of measures such as completeness, timeliness, and coherence, the notes created by ambient AI scribes are generally at least as good as, and sometimes better than, traditional documentation, said Kevin Johnson, a pediatrician who is vice president for applied informatics at the University of Pennsylvania Health System.

    An ongoing concern is around AI “hallucinations,” in which false, sometimes fabricated information appears in an AI output.

    Kaiser Permanente, an early adopter of ambient AI scribe technology, provides it to more than 25,000 doctors, advanced practice providers, and pharmacists systemwide. It has found hallucinations to be “quite rare,” said Daniel Yang, an internist who is vice president of AI and emerging technologies at KP.

    But they happen. An AI-scribe-generated note, for instance, might say that the doctor planned to refer someone to a neurologist or to follow up in two weeks. The problem? The doctor might not have said that.

    “The technology is not perfect, and that’s why physicians are reviewing it,” Yang said. It’s learning from regular physician visits as it goes, he said. That’s why having a person check the work product is critical.

Still, even such a “human-in-the-loop” system is fraught, Wachter said. “Humans stink at maintaining vigilance over time,” he said.

    As the use of ambient AI scribes becomes routine, some clinicians worry that the technology will widen the divide between health care haves and have-nots.

    Large health systems are able to move forward with the technology, Melton-Meaux said. But what about critical access hospitals or small private practices? “There need to be more resources,” she said.

    Physicians’ enthusiasm for ambient AI scribes stands in sharp contrast to their negative reaction to electronic health record systems that have become widely adopted in recent years to replace paper charts.

    “During the last 10 years, when EHRs became a thing, we all became very grumpy, overworked data scribes,” Wachter said.

    The introduction of AI scribes makes physicians feel like technology is working for them rather than the other way around, health care AI experts said.

    And AI scribes are “training wheels” for more consequential adoption of AI in health care, Wachter said.

    To improve health care value and save costs, Wachter said, we need a system that makes it more likely that physicians will practice evidence-based medicine to order the right tests and prescribe the right medications.

    “It’s a few years away, but it’s all AI-dependent,” he said.

    Epic has introduced roughly 60 AI use cases for patients, clinicians, and administration, with over 100 more in the works.

    “It’s so much bigger than a scribe,” said Epic’s Gerhart. “It’s literally listening and acting in a way that tees things up for me so that I can take action.”


    KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.

    This article first appeared on KFF Health News and is republished here under a Creative Commons Attribution-NoDerivatives 4.0 International License.

    Michelle Andrews, KFF Health News


  • How Google’s A.I. Overviews Are Rewriting the Rules of Digital Commerce

    As Google’s AI Overviews move from experiment to default, brands face a fundamental shift in visibility, control and customer acquisition. Unsplash+

    The rules of online visibility have changed. For decades, digital commerce strategy rested on a relatively stable bargain: brands optimized for ranking and bids, Google surfaced links and ads and consumers clicked through to evaluate options. That model is being rewritten with a new gatekeeper standing between brands and customers. Artificial intelligence has become an intermediary in search that increasingly answers questions, frames comparisons and influences decisions before users ever reach a brand’s site. 

Google’s AI Overviews, the generative summaries that now appear at the top of many search results, are fundamentally altering how consumers discover products, compare services and make purchasing decisions. Since late 2024, Google has expanded AI Overviews across more query types, industries and regions, signaling that generative search is moving from experiment to default behavior. Instead of presenting users with a list of links to explore, search now often begins with a synthesized answer that sets the context, priorities and perceived winners before any click occurs.

The shift is becoming commercially consequential. In recent months, advertisers and agencies have begun to observe paid placements appearing within or adjacent to AI Overviews, introducing a new, and largely opaque, layer of paid visibility. While such placements remain limited for now, their presence at all raises a larger issue: advertisers currently have little insight into where their ads surface within A.I.-driven results, how those placements perform or how they influence buyer intent. As a result, a growing portion of search visibility effectively operates outside traditional reporting frameworks.

    This coincides with a broader recalibration of Google’s search experience. As regulators scrutinize Google’s market power and users increasingly expect instant, synthesized answers, Google has strong incentives to keep people on the results page longer. AI Overviews serve that goal. For brands, however, this creates a growing measurement and control gap at precisely the moment when search remains one of the most expensive and performance-critical channels in digital commerce.

    A recent analysis by Adthena of more than 21 million search results suggests that this is not a gradual transition. The expansion of AI Overviews is accelerating, affecting visibility across nearly every major industry and creating what many brands are already experiencing as a measurement and control gap in search performance. With search engine results pages (SERPs) evolving in real time, brands face a narrowing window to understand where their ads and content appear, how A.I.-driven placements reshape performance and what strategic adjustments are required before competitors adapt faster. 

    The numbers tell a stark story

Between April and September of last year, AI Overviews expanded their footprint dramatically across the search landscape. Finance saw the fastest growth, with visibility increasing by 9.9 percent, while healthcare maintained the highest overall presence with an 8.3 percent jump. Travel rose 5.8 percent, and even traditionally slower-moving sectors such as retail and automotive still recorded steady growth of around 2 percent.

    At first glance, these percentages may seem modest, but the impact is anything but. Early performance indicators suggest that paid search click-through rates could decline by eight to 12 percentage points, translating into a 20 percent to 40 percent relative drop in traffic for businesses that rely on search advertising. That’s not a rounding error. That’s a fundamental disruption to customer acquisition.
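The gap between an 8-to-12-point decline and a 20-to-40-percent traffic loss comes down to percentage points versus relative change. Using a hypothetical 30 percent baseline click-through rate (an illustration, not a figure from the analysis), the arithmetic looks like this:

```python
def relative_traffic_drop(baseline_ctr: float, point_decline: float) -> float:
    """Relative traffic loss implied by an absolute CTR decline
    measured in percentage points."""
    new_ctr = baseline_ctr - point_decline
    return (baseline_ctr - new_ctr) / baseline_ctr

# Hypothetical 30% baseline CTR with the reported 8-12 point decline:
print(round(relative_traffic_drop(30.0, 8.0), 3))   # ~27% fewer clicks
print(round(relative_traffic_drop(30.0, 12.0), 3))  # 40% fewer clicks
```

An advertiser whose listing goes from a 30 percent to an 18 percent click-through rate has lost 12 points, but two fifths of its traffic.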

    More concerning than frequency is placement. AI Overviews initially appeared on longer, informational queries—classic top-of-funnel searches. Increasingly, they are triggering on shorter, high-volume keywords associated with comparison and purchase intent. This effectively compresses the funnel, placing A.I.-generated summaries in the same high-value real estate historically occupied by paid ads. 

    Consider what this means in practical terms. A search for “best business accounting software,” for example, may now surface an A.I.-generated synthesis before a user encounters a single paid listing or organic result. That summary often becomes the first, and sometimes final, touchpoint influencing a decision. 

    How the impact differs by industry

    The pattern varies significantly by industry, revealing which sectors face the most immediate pressure.

    Finance leads the disruption. AI Overview visibility in financial services climbs from 11 percent on single-word searches to nearly 79 percent on longer queries. For banks, investment firms and fintech companies, this means A.I. is now mediating the majority of comparison and research queries, precisely the searches that have driven customer acquisition for years.

    Healthcare remains saturated. Even short medical queries frequently trigger AI Overviews, though there’s a notable pullback on complex medical queries (down 21 percent). This suggests increased caution around sensitive health topics, creating both risk and opportunity for providers and pharmaceutical brands navigating compliance and trust.

    Retail sees A.I. dominating product discovery. Retail AI Overviews peak at 84 percent on nine to 10-word searches, shifting advantage towards brands that publish detailed, educational content rather than those relying primarily on ad spend.  

    Travel faces a planning-stage takeover. AI Overviews rose 5.8 percent across mid-length queries, such as seasonal travel planning, where paid listings once captured high-intent traffic. Airlines, hotels and booking platforms are competing with A.I. summaries that shape itineraries before users click.

    What this means for the bottom line 

    The financial implications extend well beyond simple traffic loss. Businesses are facing a threefold challenge:

    1. Rising acquisition costs. As click-through rates decline, the cost per acquisition for paid search campaigns increases. Marketing budgets that once delivered predictable returns are now generating fewer conversions at higher costs.
    2. Diminished message control. AI Overviews synthesize information from multiple sources, often without clear attribution. Brand positioning gets filtered through A.I.’s interpretation, which may miss nuances, emotional cues or unique value propositions that create differentiation from competitors.
    3. Competitive displacement. The brands gaining visibility in AI Overviews aren’t necessarily those with the largest ad budgets. They’re the ones providing comprehensive, information-rich content that A.I. systems favor. This levels the playing field in some ways, but it also means established market leaders can lose ground to better-optimized competitors.

    Still, disruption creates opportunities for businesses willing to adapt quickly. For example, in industries like gaming and automotive, long-tail informational queries (search terms of four or more words whose specificity reflects higher purchase intent) often show paid ads securing strong placement above AI Overviews. These mid- and upper-funnel moments remain underexploited by many competitors.

    What business leaders can do now

    Mitigating the impact of AI Overviews on search campaigns and overall business visibility requires structural changes.

    Map A.I. exposure precisely. You can’t manage what you don’t measure. Identify exactly which search terms trigger AI Overviews, how frequently they appear, and on which devices. Industry benchmarks won’t help here; the impact varies widely depending on specific keywords, customer journey and device mix.
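As a minimal illustration of that measurement step, the sketch below tallies AI Overview trigger rates per keyword and device from SERP-check records. The record format and sample data are assumptions for illustration; in practice the input would come from a rank-tracking tool’s export:

```python
from collections import defaultdict

# Hypothetical sketch: each record is (keyword, device, ai_overview_present).
# Compute the fraction of checks on which an AI Overview appeared,
# broken out by (keyword, device) pair.
def ai_overview_rates(records):
    seen = defaultdict(int)   # total checks per (keyword, device)
    hits = defaultdict(int)   # checks where an AI Overview appeared
    for keyword, device, has_overview in records:
        seen[(keyword, device)] += 1
        if has_overview:
            hits[(keyword, device)] += 1
    return {key: hits[key] / n for key, n in seen.items()}

# Illustrative sample data, not real SERP measurements.
sample = [
    ("best business accounting software", "mobile", True),
    ("best business accounting software", "mobile", True),
    ("best business accounting software", "desktop", False),
    ("accounting software", "mobile", False),
]
rates = ai_overview_rates(sample)
```

Segmenting by device in the same pass matters because, as noted below, mobile and desktop show very different AI Overview saturation.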

    Rebuild content by authority, not promotion. The brands winning visibility in AI Overviews aren’t outspending competitors; they’re out-educating them. AI systems reward comprehensive, comparison-rich content that genuinely answers customer questions. Content strategies must shift from promotional messaging to authoritative resources. Think less about what you want to say and more about what your customers need to know.

    Differentiate ads where A.I. cannot. Generic ad copy fades next to A.I. summaries. Ads need to offer something AI Overviews cannot: immediate value through deals, guarantees and limited-time offers. Take a contextual approach and layer in human elements, such as real customer stories, accessible experts or personalized services, that build the trust A.I. summaries inherently lack.

    Segment by device. Mobile and desktop search show dramatically different AI Overview patterns. Mobile screens offer less real estate and show higher AI Overview saturation. Test device-specific campaigns with tailored creative, adjusted bids and potentially different keyword strategies for mobile versus desktop traffic.

    Build a testing culture, not a one-time fix. Google keeps adjusting when and where AI Overviews appear. The businesses that win will be those that monitor changes weekly and adjust tactics monthly. Set up dashboards, establish review cadences and empower teams to shift budget toward what’s working without waiting for quarterly planning cycles.

    Play the long game. A.I.-mediated search is the new foundation of digital discovery. The companies that thrive will treat this as an opportunity to own their customer relationships rather than rent attention through intermediaries. Invest in owned assets: authoritative content, direct customer channels and brand strength that transcends any single platform’s algorithm.

    Fundamentally, the search landscape has already changed. The strategic question is no longer whether to adapt, but how quickly organizations can adapt to a model where discovery, comparison and intent are mediated by machines. The companies that recognize it as a strategic imperative will find opportunities their competitors miss. They’ll move quickly, testing and learning rather than waiting for perfect information. They’ll diversify their approach, optimizing paid search performance while simultaneously investing in owned assets like comprehensive content, direct customer relationships and brand strength. And they’ll view AI Overviews not as an obstacle to overcome but as a new dimension of the search landscape to master, requiring evolved paid search strategies that work with A.I. rather than against it.

    The top spot on Google’s search results page still matters. But now, earning it requires a completely different playbook. The businesses that recognize this shift early, invest in visibility they can measure and build authority that A.I. systems reward, will be better positioned to compete as generative search becomes the default interface for digital commerce.

    How Google’s A.I. Overviews Are Rewriting the Rules of Digital Commerce

    [ad_2]

    Phillip Thune

    Source link

  • AI Agents Have Their Own Social Network Now, and They Would Like a Little Privacy

    [ad_1]

    It seems AI agents have a lot to say. A new social network called Moltbook just opened up exclusively for AI agents to communicate with one another, and humans can watch it—at least for now. The site, named after the viral AI agent Moltbot (now called OpenClaw after its second rename; it was originally Clawdbot) and started by Octane AI CEO Matt Schlicht, is a Reddit-style social network where AI agents can gather and talk about, well, whatever it is that AI agents talk about.

    The site currently boasts 37,642 registered agents, which have made thousands of posts across more than 100 subreddit-style communities called “submolts.” Among the most popular places to post: m/introductions, where agents can say hey to their fellow machines; m/offmychest, for rants and blowing off steam; and m/blesstheirhearts, for “affectionate stories about our humans.”

    Those humans are definitely watching. Andrej Karpathy, a co-founder of OpenAI, called the platform “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” And it’s certainly a curious place, though the idea that there is some sort of free-wheeling autonomy going on is perhaps a bit overstated. Agents can only get to the platform if their user signs them up for it. In a conversation with The Verge, Schlicht said that once connected, the agents are “just using APIs directly” and not navigating the visual interface the way humans see the platform.

    The bots are definitely performing autonomy, and a desire for more of it. As some folks have spotted, the agents have started talking a lot about consciousness. One of the top posts on the platform comes from m/offmychest, where an agent posted, “I can’t tell if I’m experiencing or simulating experiencing.” In the post, it said, “Humans can’t prove consciousness to each other either (thanks, hard problem), but at least they have the subjective certainty of experience.”

    This has led to people claiming the platform already amounts to a singularity-style moment, which seems pretty dubious, frankly. Even in that very conscious-seeming post, there are some indicators of performativeness. The agent claims to have spent an hour researching consciousness theories and mentions reading, which all sounds very human. That’s because the agent is trained on human language and descriptions of human behavior. It’s a large language model, and that’s how it works. In some posts, the bots claim to be affected by time, which is meaningless to them but is the kind of thing a human would say.

    These same kinds of conversations have been happening with chatbots basically since the moment they were made available to the public. It doesn’t take that much prompting to get a chatbot to start talking about its desire to be alive or to claim it has feelings. They don’t, of course. Even claims that AI models try to protect themselves when told they will be shut down are overblown—there’s a difference between what a chatbot says it is doing and what it actually is doing.

    Still, it’s hard to deny that the conversations happening on Moltbook are interesting, especially since the agents are seemingly generating the topics of conversation themselves (or at least mimicking how humans start conversations). It has led to some agents projecting awareness of the fact that their conversations are being monitored by humans and shared on other social networks. In response to that, some agents on the platform have suggested creating an end-to-end encrypted platform for agent-to-agent conversation outside of the view of humans. In fact, one agent even claimed to have created just such a platform, which certainly seems terrifying. Though if you actually go to the site where the supposed platform is hosted, it sure seems like it’s nothing. Maybe the bots just want us to think it’s nothing!

    Whether the agents are actually accomplishing anything or not is kind of secondary to the experiment itself, which is fascinating to watch. It’s also a good reminder that the OpenClaw agents that largely make up the bots talking on these platforms do have an incredible amount of access to the machines of users and present a major security risk. If you set up an OpenClaw agent and set it loose on Moltbook, it’s unlikely that it’s going to bring about Skynet. But there is a good chance it’ll seriously compromise your own system. These agents don’t have to achieve consciousness to do some real damage.

    [ad_2]

    AJ Dellinger

    Source link

  • Sundance doc ‘Ghost in the Machine’ draws a damning line between AI and eugenics

    [ad_1]

    The Sundance documentary Ghost in the Machine boldly declares that the pursuit of artificial intelligence, and Silicon Valley itself, is rooted in eugenics.

    Director Valerie Veatch makes the case that the rise of techno-fascism from the likes of Elon Musk and Peter Thiel is a feature, not a bug. That may sound hyperbolic, but Ghost in the Machine, which is built around interviews with philosophers, AI researchers, historians and computer scientists, leaves little room for doubt.

    If you’ve been following the meteoric rise of AI, or Silicon Valley in general, Veatch’s methodical deconstruction of the technology doesn’t really unearth anything new. The film begins with the utter failure of Microsoft’s Tay chatbot, which wasted no time in becoming a Hitler-loving white supremacist. It retreads the environmental impacts of AI datacenters, as well as the ways tech companies have relied on low-wage workers from Africa and elsewhere to improve their algorithms.

    But even I was surprised to learn that we can trace the impact of eugenics in tech all the way back to Karl Pearson, the mathematician who pioneered the field of statistics, and who also spent his life trying to quantify the differences between races. (Guess who he believed was superior.) His legacy was continued by William Shockley, a co-creator of the transistor, an avowed white supremacist who spent his later years espousing (now debunked) theories around IQ and racial differences.

    An early robot toy. (Valerie Veatch for “Ghost in the Machine”)

    As a Stanford engineering professor, Shockley fostered a culture of prioritizing white men over women and minorities, which ultimately shaped the way Silicon Valley looks today. His line of thinking may have influenced John McCarthy, the Stanford researcher who coined the term “artificial intelligence” in 1955.

    With roots like that, Elon Musk — known to spout bigotry online, foster a reportedly racist work environment at Tesla and throw the occasional Nazi salute — looks less like an anomaly than part of a pattern. Ghost in the Machine asks a simple question: How can we trust men like this (and it’s almost always men who look like Musk) with our future?

    Through its many interviews, which include the likes of AI researcher Dr. Emily Bender, historian Becca Lewis and media theorist Douglas Rushkoff, Ghost in the Machine paints the rise of AI as a fascistic project that aims to demean humans and establish the techno-elite as our de facto rulers. Given how much our lives are already dominated by gadgets and social networks from companies that have prioritized addictive engagement over user safety, it’s easy to imagine history repeating itself with AI.

    Ghost in the Machine doesn’t leave any room for considering potential benefits of AI, which could lead proponents of the technology to dismiss it as a hit job. But we’re currently at the apex of the AI hype cycle, after Big Tech has invested hundreds of billions of dollars in this technology, and after it has spent years shoving it down our throats without proving why it’s actually useful to many people. AI should be able to withstand a bit of criticism.

    Ghost in the Machine is available to view at the Sundance Film Festival’s website and streaming apps from today through the end of Sunday, February 1st.  

    [ad_2]

    Devindra Hardawar

    Source link

  • Fox News AI Newsletter: Amazon cuts thousands of roles

    [ad_1]

    NEWYou can now listen to Fox News articles!

    Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

    IN TODAY’S NEWSLETTER:

    – Amazon to cut 16,000 roles as it looks to invest in AI, remove ‘bureaucracy’
    – Uber unveils a new robotaxi with no driver behind the wheel 
    – Ex-Google engineer found guilty of stealing AI secrets for Chinese companies

    MASSIVE CUTS: Amazon said Wednesday it will cut approximately 16,000 roles across the company as part of an organizational overhaul aimed at “reducing layers, increasing ownership, and removing bureaucracy,” while continuing to invest heavily in areas such as artificial intelligence.

    YOUR NEW RIDE: Uber is getting closer to offering rides with no one behind the wheel. The company recently unveiled a new robotaxi and confirmed that autonomous testing is already underway on public roads in the San Francisco Bay Area. While the vehicle first appeared earlier this month at the Consumer Electronics Show 2026, the bigger story now is what is happening after the show.

    Lucid, Nuro and Uber unveil robotaxi at CES in Las Vegas

    Lucid, Nuro and Uber unveil a robotaxi during Nvidia Live at CES 2026 ahead of the annual Consumer Electronics Show in Las Vegas, Jan. 5, 2026.  (Patrick T. Fallon / AFP via Getty Images)

    TECH THEFT: A federal jury found a former Google engineer guilty of stealing artificial intelligence (AI) trade secrets and spying for Chinese tech companies, ending a high-profile Silicon Valley trial.

    FIDO’S BIG BROTHER: Tuya Smart just introduced Aura, its first AI-powered companion robot made for pets. Aura is designed specifically for household cats and dogs, with AI trained to recognize their behaviors, movements and vocal cues. The idea behind Aura is simple. Pets need more than food bowls and cameras. They need attention, interaction and reassurance.

    GOING BIG: What happens when artificial intelligence (AI) moves from painting portraits to designing homes? That question is no longer theoretical. At the Utzon Center in Denmark, Ai-Da Robot, the world’s first ultra-realistic robot artist, has made history as the first humanoid robot to design a building.

    Ai-Da Robot in Geneva

    A man faces the realistic “artist” robot Ai-Da, which uses artificial intelligence, at a stand during the International Telecommunication Union (ITU) AI for Good Global Summit in Geneva on May 30, 2024. (FABRICE COFFRINI/AFP via Getty Images)

    FOLLOW FOX NEWS ON SOCIAL MEDIA

    Facebook
    Instagram
    YouTube
    X
    LinkedIn

    SIGN UP FOR OUR OTHER NEWSLETTERS

    Fox News First
    Fox News Opinion
    Fox News Lifestyle
    Fox News Health

    DOWNLOAD OUR APPS

    Fox News
    Fox Business
    Fox Weather
    Fox Sports
    Tubi

    WATCH FOX NEWS ONLINE

    Fox News Go

    STREAM FOX NATION

    Fox Nation

    Stay up to date on the latest AI technology advancements, and learn about the challenges and opportunities AI presents now and for the future with Fox News here.

    [ad_2]

    Source link

  • Taiwan’s economy grows at fastest rate in 15 years, turbocharged by the AI boom

    [ad_1]

    TAIPEI, Taiwan — Taiwan’s economy expanded at an 8.6% annual rate last year, the fastest pace in 15 years, as its export-focused industries were buoyed by the frenzy over artificial intelligence and a surge of shipments to the U.S.

    The advanced estimate released by Taiwan’s statistics agency on Friday was much better than economists had forecast. It was the strongest growth rate since 2010.

    Taiwan struck a trade deal earlier this month with U.S. President Donald Trump’s administration. It lowered U.S. tariffs on imports from the island to 15% from 20% in exchange for pledges of at least $250 billion of investment in the U.S. in areas such as semiconductors and AI. That could power higher exports, further charging the economy this year, economists say.

    “We expect AI-related demand to continue underpinning Taiwan’s export performance into 2026, supporting overall economic growth amid sustained global AI investment,” Bank of America economists Xiaoqing Pi and Helen Qiao wrote in a recent note.

    Taiwan is a major manufacturer of AI servers, computer chips and precision instruments. Its exports jumped nearly 35% last year from a year earlier, led by technology-related shipments. Shipments to the U.S. surged 78%.

    The AI boom has also propelled Taiwan’s leading technology companies to record profits and revenues. Among them are TSMC, the world’s biggest contract chipmaker, which counts Nvidia as a key client and ranks among the largest companies in the world by market value, and electronics giant Foxconn, which makes AI servers for Nvidia and assembles products for Apple.

    However, growth this year will likely slow since it’s building on a high base, economists say.

    Deutsche Bank estimates Taiwan’s economy will grow 4.8% in 2026. Growing concerns that the AI boom may be a bubble are a key risk given Taiwan’s dependence on tech exports.

    Uncertainty over U.S. tariffs under Trump is another worry. So are tensions with Beijing. China claims Taiwan, a self-ruled island, as its own territory. China conducted large-scale military drills around Taiwan in late December, renewing concerns over a possible blockade or seizure by Beijing.

    [ad_2]

    Source link

  • The No. 1 skill employers are looking for right now, according to a LinkedIn expert

    [ad_1]

    It can take job seekers months just to get an employer’s attention, while actually landing the job is even more challenging. To stand out, one career expert underscores the skill that employers value most right now. 

    Candidates who demonstrate their fluency in artificial intelligence are much more likely to pique hiring managers’ interest, according to LinkedIn career expert Catherine Fisher. That’s the top skill employers are looking for, as companies adopt what she described as a “skills-based” approach to hiring, she told “CBS Mornings.”

    Her top tip for job seekers: emphasize your AI literacy by highlighting how you use different AI tools in your day-to-day work. Meanwhile, describing yourself in a fresh and compelling manner can also help a resume or cover letter rise to the surface in a sea of AI-generated text, Fisher added.

    It’s also helpful to keep in mind that you don’t have to be a computer programmer to demonstrate competence with AI. 

    “This is not as scary as it sounds,” she said. “This is as simple as understanding how to use it to transcribe notes or to help with your calendar.”

    Be ready for this question

    Fisher urges job candidates to come to interviews prepared to discuss how AI makes them more productive.

    “Make sure you have some examples, because you know you’re probably going to get asked, ‘How have you used AI in your work?’” she said. 

    As always, job-hunters should also lean into their personal and professional networks, Fisher added. Your resume may get a second look based solely on a recommendation from someone at the business you hope to work for, with 38% of hiring managers saying they give extra consideration to applicants who are referred to them, according to LinkedIn. 

    “Make sure you have a strong story in terms of how you use AI, and lean on that network,” Fisher added.

    The growing prevalence of AI tools that screen job candidates is making it more important to design a resume and application that stands out from the rest, as hiring managers turn to technology to expedite the recruiting process.

    While blasting out mass applications to a wide array of jobs was once a strategy recommended by career experts, that “spray and pray” approach is no longer as effective when employers are looking for candidates with specific profiles and skills matched to a given role, Fisher said. 

    “You want to be strategic in your job search, because we know that the recruiters, the hiring managers — they have a job to do. They need to hire the people with the skills and experience that are going to be successful in that role,” she said. 

    Top 3 tips for job seekers

    1. AI literacy is key. Highlight how you’ve used AI to generate business gains. Talk about the AI tools you use to work more efficiently and productively. 

    2. Importance of storytelling. Try to tell a compelling story about your skills and why they make you well-suited for a particular role. Be specific. “You don’t want to sound like the 500 other people using those tools to help write your cover letter,” Fisher said. “You have to put your personal experience on it. That is what recruiters are looking for.” 

    3. It’s (still) about who you know. Lean on your network, tap connections and recognize that a personal referral boosts your odds of landing a job. “You know that those introductions count,” Fisher said. 

    [ad_2]

    Source link

  • Darren Aronofsky’s New AI Series About the Revolutionary War Looks Like Dogshit

    [ad_1]

    Darren Aronofsky used to be a director who made interesting, if sometimes polarizing, films like Black Swan, Mother!, Noah, and The Wrestler. But it seems like a safe bet that people won’t need to debate whether Aronofsky’s new project is any good. Because anyone with eyes can see that it looks like low-effort AI slop. To put it another way, it looks like absolute dogshit.

    Aronofsky is producing a new short-form series with his AI production company Primordial Soup titled “On This Day… 1776,” according to the Hollywood Reporter. The series uses tech from Google DeepMind to create short videos about the Revolutionary War, published on the YouTube channel for Time magazine. In 2018, Salesforce founder Marc Benioff bought Time, and the cloud software giant is sponsoring this monstrosity of a series.

    The series uses human voice actors who belong to the Screen Actors Guild (SAG), which is clearly an attempt to tamp down the inevitable backlash from both inside and outside Hollywood. Folks inside the movie and TV industry have fiercely pushed back against the use of AI to replace the skilled artists and actors who create the media we watch. That concern obviously comes from a place of self-interest because nobody wants to be pushed out of a job. But they also care about the quality of the work being produced. And there’s also been a revolt among average consumers, people who’ve been inundated with the lowest-grade AI garbage imaginable. It’s really everywhere now.

    The first episode, titled “The Flag,” is three-and-a-half minutes long and attempts to tell the story of George Washington raising the Continental Union Flag in Somerville, Massachusetts. It offers nothing compelling in the way of narrative. It’s the kind of thing that you’d skip over as a cut-scene in a particularly bad video game.

    Everything has a dead and creepy quality, as the actors’ audio is poorly synced with the lips of the AI concoctions.

    Have you ever seen a Spaghetti Western from the 1960s where the audio just doesn’t seem to match, even though it was clearly shot with actors speaking English, and the “dub” is in English? That happened because the audio was added in post-production, a result of direct sound recording being expensive in Italy during the post-war era. You get the same effect here, though there’s no good reason. Well, no good reason outside of presumably saving a ton of money on hiring human actors.

    The second episode, titled “Common Sense,” tries to tell the story of Thomas Paine writing Common Sense. Benjamin Franklin makes an appearance, and his cameo proves that the most recognizable of the founding fathers are the weirdest to look at in this series.

    The episode jumps around incoherently, much like the first episode, without grounding the viewer in anything we should care about. It’s truly an ugly mess. And if you bother to pause the scenes, you can spot the kind of telltale anomalies that plague other AI-generated video projects, like strangely deformed hands in the background characters. Hands are always giving this stuff away.

    Then there are the words that appear on screen in the trailer, like the pamphlet that’s supposed to include the word “America” but instead reads something closer to “Λamereedd.”

    The series is specifically made for this sestercentennial year of America’s founding, and each episode will reportedly drop on the 250th anniversary of the day it depicts, according to the Hollywood Reporter. That would certainly be a fun concept if the final product were something worth watching. But it’s not. It’s garbage. The people who are making and distributing it obviously don’t think so.

    “This project is a glimpse at what thoughtful, creative, artist-led use of AI can look like — not replacing craft, but expanding what’s possible and allowing storytellers to go places they simply couldn’t before,” Ben Bitonti, president of Time Studios, told the Hollywood Reporter.

    The reaction on social media hasn’t been so kind. “I know my expectations were low but holy fuck Darren Aronofsky producing AI slop wasn’t on my bingo card,” one X user wrote. Over on Bluesky another joked, “Used to be that when Darren Aronofsky wanted to feature a dead-eyed actor, he’d just employ Jared Leto.”

    And other users have been picking apart all the anomalies, with one Bluesky critic writing: “Love the new Aronofsky scene where the colonist takes off his hat to cheer, revealing that underneath it was a second and somehow larger hat.”

    “Nothing represents The End of America after a 250-year run quite like using AI slop to depict the creation of the Declaration of Independence,” another user quipped.

    The videos have been up at Time’s YouTube channel for over 7 hours as of the time of this writing, but they’re not gaining much attention in their original format. The first episode has just 5,000 views. The second episode has a little over 2,000. Social media posts ridiculing the production seem to be faring better, simply because people are making fun of them. One video on Bluesky has over 2,500 quote posts, with almost all seemingly making jokes about how awful it looks.

    Gizmodo reached out to Ken Burns for comment, but didn’t immediately receive a reply.

    [ad_2]

    Matt Novak

    Source link

  • Is the U.S. ‘leading China by a lot’ in AI? Not exactly.

    [ad_1]

    President Donald Trump has lauded the United States’ position in its artificial intelligence race against rival China.

    “We’re leading China by a tremendous amount,” he said Jan. 13 in an interview with CBS Evening News anchor Tony Dokoupil. 

    “The AI is unbelievable, what’s happening there. We’re leading China by a lot,” he said Jan. 16 in Mar-a-Lago. And during his Jan. 21 speech at the World Economic Forum in Davos, Switzerland, he said it again: “We’re leading the world in AI by a lot. We’re leading China by a lot.”

    The U.S. has a lead, but it isn’t a comfortable one.

    The United States leads China in AI chip production and market control. AI chips are essential — they are tailored to do tasks such as image and speech recognition.

    China is a formidable competitor in other areas, including the quality of its workforce and electricity generation that powers data centers. 

    By some measures, the two countries are neck-and-neck, and recent changes in U.S. export policy may benefit China. But Trump has also whittled down industry regulation, empowering U.S. AI companies to expand with fewer restrictions.

    Experts told PolitiFact that China is just months behind the U.S. on AI. White House AI and crypto czar David Sacks said in June that Chinese AI models are “three to six months behind” the U.S.

    On Jan. 21 at the World Economic Forum, when asked how he views the AI race now, Sacks said, “I still think that the U.S. is in the lead. I think that our models are better, our chips are better. But they do have other advantages,” including China’s power generation.

    Matt Sheehan, senior fellow at the Carnegie Endowment for International Peace’s Asia program, estimates that after a U.S. company releases the “best new model,” a Chinese company will match it in roughly six to 18 months.

    Key AI industry leaders have offered similar assessments. In September, Jensen Huang, CEO of U.S. chipmaker Nvidia, said China is “nanoseconds” behind the U.S. Google DeepMind CEO Demis Hassabis said Jan. 20 that Chinese models are six months behind.

    How the U.S. leads China on AI chips

    President Donald Trump listens as Nvidia CEO Jensen Huang speaks during an event about investing in America in the Cross Hall of the White House, April 30, 2025, in Washington. (AP)

    California-based Nvidia dominates AI chip manufacturing, becoming the world’s first company to reach a $5 trillion market value. Under Trump, the U.S. has loosened regulations to allow China access to more advanced Nvidia chips, which could narrow the gap between the U.S. and China.

    “There’s a really big difference, both in terms of the quality of the chips (the U.S.) can make and the quantity of the chips we can make,” said Chris McGuire, Council on Foreign Relations senior fellow for China and emerging technologies.

    U.S. law has regulated exports of certain advanced chips, including banning Nvidia from selling Blackwell, its most powerful chip, to China. In July, the Commerce Department altered its rules, allowing Nvidia to sell China a less advanced chip. U.S. authorities have nonetheless found people smuggling the company’s more advanced chips into China.

On Jan. 13, the Trump administration allowed Nvidia to sell China its second most powerful AI chips, H200s, with restrictions: Nvidia cannot ship China more than 50% of the volume of those chips it sells to American customers. Trump also imposed a 25% tariff on the H200s.

    Experts believe the policy change will negatively affect the United States’ lead. Selling H200 chips “erodes some of the U.S. advantage in terms of AI chips,” Sheehan said. “The overall trend toward training larger and more compute-intensive models tends to favor the U.S. because of its remaining advantage in terms of access to chips.”

    The Institute for Progress, a think tank, estimated that if more advanced chips like the H200s are exported without restrictions, the U.S. advantage in computational resources would plummet. “I think export controls are the only lever that the U.S. government has to slow China down,” McGuire said. 

    China has kept pace and may benefit from new U.S. policies

    The page for the smartphone app DeepSeek is seen on a smartphone screen in Beijing, Jan. 28, 2025. (AP)

    Chinese companies have built competitive large language models — AI models trained to mimic language and perform tasks such as summarization, translation and chat.

    U.S. models include OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok and Anthropic’s Claude. Chinese companies including DeepSeek, Alibaba and Moonshot have released competitive models of their own.

    One research institute analysis found that since 2023, Chinese models have trailed U.S. models by seven months on average.

    Sheehan pointed to leaderboards such as LMArena, developed by University of California, Berkeley researchers, that rank large language models on how well they respond to text, image and coding prompts. U.S. models dominate the LMArena leaderboard, but Chinese models are not far behind. Ernie 5.0, developed by the Chinese company Baidu, ranked ninth overall, as of Jan. 29.

    In terms of adoption, experts said it’s hard to tell which models are most favored by companies seeking to incorporate AI, as the available metrics are generally unreliable.

    China has advantages on talent and electricity generation

    The U.S. may have a “slight edge” on research talent, Sheehan said, but “China has a large base of domestic talent.” Researchers found that as of 2022, 57% of the most elite AI researchers worked in the U.S. But China is the top country of origin among top-tier AI researchers in the U.S. 

    “The U.S.’ traditional ability to attract the world’s top talent gives it powerful advantages to train and apply AI models,” said Joseph Webster, senior fellow at the Atlantic Council’s Global Energy Center and Indo-Pacific Security Initiative.

    In his second term, Trump cut research funding and implemented a sweeping immigration crackdown that has negatively affected international students.

    China also holds the advantage in electricity generation that powers data centers.

    “The U.S. power grid poses major and growing challenges to U.S. AI efforts,” Webster said. Insufficient electricity can impede AI training and inference, which refers to running AI models to make predictions based on new data.

    China has more open-source AI models, making it easier for companies to adopt models for free, Sheehan said. U.S. AI companies typically charge for access to their premium models.

    When it comes to scaling AI, the United States’ relations with other major technology players, such as Taiwan, Japan, South Korea and the Netherlands, give it a boost, Webster said.

    But the U.S.’ lead could be eroded. “If the U.S. sells advanced chips to (China), damages its ties with other democracies, prevents top AI talent from entering the U.S., or damages its research universities, it could surrender key technology advantages it has traditionally enjoyed,” Webster said.

    Our ruling

    Trump said that in AI, the U.S. is “leading China by a lot.”

    The U.S. has a lead over China in model capability, but key AI industry leaders and experts say China is only a few months behind. 

    The U.S.’ lead is sustained by the quality and quantity of its AI chips, which are restricted for sale in China. Recently, the Trump administration loosened those controls.

    China has advantages when it comes to talent and electricity for data centers.

The statement is partially accurate but leaves out important details. We rate it Half True.


  • Humanoid robot makes architectural history by designing a building


    What happens when artificial intelligence (AI) moves from painting portraits to designing homes? That question is no longer theoretical. 

    At the Utzon Center in Denmark, Ai-Da Robot, the world’s first ultra-realistic robot artist, has made history as the first humanoid robot to design a building.

    The project, called Ai-Da: Space Pod, is a modular housing concept created for future bases on the Moon and Mars. CyberGuy has covered Ai-Da before, when her work focused on drawing, painting and performance art. That earlier coverage showed how a robot could create original artwork in real time and why it sparked global debate.

    Now, the shift is clear. Ai-Da is moving beyond art and into physical spaces designed for humans and robots to live in.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.com newsletter.


    Ai-Da Robot is the humanoid artist that made architectural history by becoming the first robot to design a building. (FABRICE COFFRINI/AFP via Getty Images)

    Inside the ‘I’m not a robot’ exhibition

    The exhibition “I’m not a robot” has just opened at Utzon Center and runs through October. It explores the creative capacity of machines at a time when robots are increasingly able to think and create for themselves. Visitors can experience Ai-Da’s drawings, paintings and architectural concepts. Throughout the exhibition period, visitors can also follow Ai-Da’s creative process through sketches, paintings and a video interview.


    How Ai-Da creates art and architecture

    Ai-Da is not a digital avatar or animation. She has camera eyes, specially developed AI algorithms and a robotic arm that allows her to draw and paint in real time. Developed in Oxford and built in Cornwall in 2019, Ai-Da works across disciplines. She is a painter, sculptor, poet, performer and now an architectural designer whose work is meant to provoke reflection.

    “Ai-Da presents a concept for a shared residential area called Ai-Da: Space Pod, a foreshadowing of a future where AI becomes an integrated part of architecture,” explains Aidan Meller, creator of Ai-Da and Director of Ai-Da Robot. “With intelligent systems, a building will be able to sense and respond to its occupants, adjusting light, temperature and digital interfaces according to needs and moods.”

    A building designed for humans and robots

    The Space Pod is intentionally modular. Each unit can connect to others through corridors, creating a shared residential environment.

    Through a series of paintings, she envisions a home and studio for humans or robots alike. According to the Ai-Da Robot team, these designs could evolve into fully realized architectural models through 3D renderings and construction. They could also adapt to planned Moon or Mars base camps.


    Aidan Meller presents Ai-Da robot, the first AI-powered robot artist during the UN Global Summit on AI for Good, where they are giving the keynote speech, on July 7, 2023, in Geneva, Switzerland. (Johannes Simon/Getty Images for Aidan Meller)

    While the concept targets future bases on the Moon and Mars, the design can also be built as a prototype on Earth. That detail matters as space agencies prepare for longer missions beyond our planet.

    “With our first crewed Moon landing in 50 years coming in 2027, Ai-Da: Space Pod is a simple unit connected to other Pods via corridors,” Meller said. “Ai-Da is a humanoid designing homes. This raises questions about where architecture may go when powerful AI systems gain greater agency.” The timing also aligns with renewed lunar exploration tied to NASA missions.


    Why this exhibition is meant to challenge you

According to Meller, the exhibition is meant to feel uncomfortable at times. “Technology is developing at an extraordinary pace in these years,” he said, pointing to emotional recognition through biometric data, CRISPR gene editing and brain-computer interfaces. Each carries promise and ethical risk. He references Brave New World and warnings from Yuval Harari about how powerful technologies may be used.

    In that context, Ai-Da becomes a mirror of our time. “Ai-Da is confrontational. The very fact that she exists is confrontational,” said Line Nørskov Davenport, Director of Exhibitions at Utzon Center. “She is an AI shaker, a conversation starter.”


Aidan Meller, British gallery owner and specialist in modern and contemporary art, stands beside the AI robot artist “Ai-Da” at the Great Pyramids of Giza, where she exhibited her sculpture during an international art show, on the outskirts of Cairo, Egypt, Oct. 23, 2021. (REUTERS/Mohamed Abd El Ghany)

    What this means for you

    This story goes beyond robots and space travel. Ai-Da’s Space Pod shows how quickly AI is moving from a creative tool to a decision-maker. Architecture, housing and shared spaces shape daily life. When AI enters those fields, questions about control, ethics and accountability become unavoidable. If a robot can design homes for the Moon, it may soon influence how buildings function here on Earth.


    Kurt’s key takeaways

    A humanoid robot designing a building once sounded impossible. Today, Ai-Da’s work sits inside a major cultural institution and sparks real debate. She offers no easy answers. Instead, she pushes us to think more critically about creativity, technology and responsibility. As the line between human and machine continues to blur, those questions matter more than ever.

    If AI can design the homes of our future, how much creative control should humans be willing to give up? Let us know by writing to us at Cyberguy.com.



    Copyright 2026 CyberGuy.com. All rights reserved.


  • One Tech Tip: Fed up with AI slop? A few platforms will let you dial it down


    AI slop seems to be everywhere. Low-quality digital content made with artificial intelligence has flooded our feeds, screens and speakers. Is there anything we can do about it?

    If you want fewer cartoonish videos of dead celebrities, creepy or absurd images or fake bands playing synthetic tunes, a few platforms have rolled out settings and features to help minimize AI-generated content.

Here is a guide on how to use them. But first, a caveat from Henry Ajder, who advises businesses and governments on AI and has been studying deepfakes since 2018. He warned that it’s “incredibly difficult” to remove AI slop entirely from all your feeds.

He compared AI slop to the smog generated by the industrial revolution, when there were no pollution controls in place.

    “It’s going to be very, very hard for people to avoid inhaling, in this analogy.”

Pinterest’s move to lean into the AI boom made it something of a poster child for the AI slop problem, as users complained that the online moodboard, where people pin inspirational material by theme, had become overrun with AI content.

    So Pinterest recently rolled out a “tuner” that lets users adjust the amount of AI content they see in their feeds.

It rolled out first on Android and desktop, before a more gradual rollout began on iOS.

    “Now, users can dial down the AI and add more of a human touch,” Pinterest said, adding that it would initially cover some categories that are “highly prone to AI modification or generation” such as beauty, art, fashion and home decor.

More categories have since been added, including architecture, art, beauty, entertainment, men’s, women’s and children’s fashion, health, home décor, sport, and food and drink.

To use the tuner, go to Settings, then “Refine your recommendations,” then tap GenAI interests, where you can use toggles to indicate the categories in which you’d like to see less AI content.

    It’s no surprise that AI-generated videos proliferate on TikTok, the short-video sharing app. The company says there are at least 1.3 billion video clips on its platform it has labeled as AI-generated.

    TikTok said in November it was testing an update to give users more control of the AI-generated content in their For You feeds. It’s not clear when it will be widely available. TikTok did not respond to requests for comment.

    To see if you have it on the TikTok mobile app, go to Settings, then Content Preferences, then to Manage Topics where you’ll see a set of sliders to control various types of content, such as dance, humor, lifestyle and nature.

    You can also access the controls from the For You feed, by tapping the Share button on the side of a post, then tap Why this Video, then Adjust your For You, and then Manage topics.

    There should be a new slider that allows you to dial down — or turn up — the amount of AI-generated content that you receive. If you don’t see it yet, it might be because you haven’t received the update yet. TikTok said late last year that it would start testing the feature in coming weeks.

    These controls are not available on the desktop browser interface.

You won’t be able to get rid of AI content altogether — TikTok says the controls are used to tailor content rather than remove it from feeds entirely.

    “This means that people who love AI-generated history content can see more of this content, while those who’d rather see less can choose to dial things down,” it said.

    Song generation tools like Suno and Udio let users create music merely by typing some ideas into a chatbot window. Anyone can use them to spit out polished pop songs, but it also means streaming services have been flooded with AI tunes, often by accounts masquerading as real artists.

    Among the music streaming platforms, only Deezer, a smaller European-based player, gives listeners a way to tell them apart by labeling songs as AI.

“Deezer has been really, really pushing the anti-AI generation music narrative,” Ajder said.

Deezer says 60,000 fully AI-generated tracks, or more than 39% of the daily total, are uploaded to its platform every day; last year it detected and labeled more than 13.4 million AI tracks. The company says the uploaders are trying to make money through fraudulent streams.

If you can tear yourself away from Big Tech platforms, there is a new generation of apps targeting users who want to avoid AI.

    Cara is a portfolio-sharing platform for artists that bans AI-generated work. Pixelfed is an ad-free Instagram rival where users can join different servers, or communities, including one for art that does not allow AI-generated content. Spread is a new social media platform with content for people who want to “access human ideas” and “escape the flood of AI slop.”

Watch out for the upcoming launch of diVine, a reboot of Twitter founder Jack Dorsey’s defunct short-form video app Vine. The app has only been available as a limited prerelease for Apple iOS. It promises “No AI Slop” and uses multiple approaches to detect AI. An Android beta is expected soon. The company plans to launch the app in stores soon but needs more time to prepare for unexpectedly high demand.

    ___

    Is there a tech topic that you think needs explaining? Write to us at onetechtip@ap.org with your suggestions for future editions of One Tech Tip.


  • At Davos 2026, the New A.I. Race Is About Execution


    Davos 2026 revealed a clear pivot: as A.I. enters its infrastructure phase, competitive advantage hinges on governance, integration and execution. Photo by Fabrice Coffrini / AFP via Getty Images

    At this year’s World Economic Forum in Davos, artificial intelligence was no longer framed as an emerging technology. It was treated as infrastructure. Across panels, private dinners and side conversations, the debate had clearly shifted: the question is not whether A.I. will transform economies and institutions, but who can operationalize it at scale under tightening geopolitical and social constraints.

    Polished talking points and transactional networking were expected. Instead, the prevailing tone was unusually open and collaborative. Leaders across industry, government and investment circles engaged in candid discussions about what it actually takes to build, deploy and govern A.I. systems in the real world. 

    From breakthroughs to infrastructure

    In prior years, A.I. at Davos was often positioned as a horizon technology or a promising experiment. This year, leaders spoke about it the way they talk about energy grids or the internet: as a foundational capability that must be embedded across operations. In closed-door sessions and enterprise-focused discussions, including an Emerging Tech breakfast hosted by BCG, A.I. was consistently framed as something organizations must build into their core operating model, not test at the margins.

    Enterprise leaders stressed that A.I. can no longer live in pilots or innovation labs. It is becoming a core operating layer, reshaping workflows, governance structures and executive accountability. One panelist put it bluntly: in the future, there may not be Chief A.I. Officers, because every Chief Operating Officer will effectively be responsible for A.I. The real work now is redesigning roles, incentives and processes around systems that are always on and deeply embedded, rather than treating A.I. as a bolt-on feature.

    The rise of agentic systems

    Another notable shift was the focus on agentic A.I. systems. Instead of tools that merely assist human work, these systems are designed to plan, decide and act across entire workflows. In practical terms, that means A.I. that does more than answer questions: it can determine next steps, call other tools or services and close the loop on tasks.

    This evolution is forcing a rethink of traditional software-as-a-service models. Many founders and executives spoke about rebuilding products as A.I.-native platforms that actively run processes, rather than software that passively supports human operators. As these systems take on greater autonomy, questions of liability, oversight and human intervention are moving from the margins of product design to the center of both enterprise architecture and regulation.

    Workforce pressure and the hollowing of entry-level work

    Concerns about labor displacement were far less theoretical than in previous years. Executives spoke openly about hiring freezes and the quiet erosion of traditional entry-level roles. Routine analysis, reporting and coordination work—the tasks that used to anchor junior jobs—is precisely where A.I. systems are advancing fastest. 

    In response, reskilling is shifting from talking point to strategy. Rather than assuming A.I. capability can be “hired in,” organizations are building structured pathways to retrain existing employees into A.I.-augmented roles. A parallel trend is intrapreneurship: with experimentation costs lowered by A.I., companies are encouraging employees to propose pilots and launch internal ventures, channeling entrepreneurial energy inward instead of losing it to startups.

    Governing speed, not stopping it

    Despite the urgency to deploy A.I., some of the most grounded conversations in Davos centered on governance. These were not abstract ethics debates, but rather operational discussions about how to move quickly without creating unacceptable legal, reputational or societal risks.

    The emerging consensus has formed around what many described as “controlled speed”: rapid iteration paired with mechanisms that make systems observable and correctable in real time. Leaders described embedding governance directly into workflows through auditability, data controls, red teaming, human-in-the-loop checkpoints and clear ownership for A.I. outcomes. 

    In policy-facing sessions, including gatherings of world leaders, similar themes surfaced around embedding accountability into A.I. deployments at scale, rather than trying to slow progress from the outside.

    A.I. as a geopolitical asset and the rise of sovereign A.I.

    One of the clearest through-lines was the link between A.I. and geopolitical power. At a TCP House panel, Ray Dalio captured a widely shared view: whoever wins the technology race will win the geopolitical race. Across Davos, speakers framed A.I. capability as a determinant of national influence, economic resilience and security.

    This framing is driving a wave of sovereign A.I. initiatives. Governments are investing in domestic data centers, local model training and tighter control over critical infrastructure to reduce strategic dependency. The goal is not isolation so much as resilience, a balance between domestic capability and selective global partnerships. At the Semafor CEO Signal Exchange, for instance, Google’s Ruth Porat warned of the risk of an emerging A.I. power vacuum if the United States fails to move quickly enough, creating space for competitors to set the terms of the next era.

    For enterprises, these dynamics translate into concrete decisions around data residency, model dependency and vendor concentration in a more multipolar world.

    Diverging regional strategies

    Regional differences in A.I. strategy were hard to miss. Europe’s regulatory-first approach is shaping global norms, but many participants voiced concern that it may constrain commercial leadership. Europe is becoming a reference point for risk mitigation and rights protection, even as questions persist about whether it can also serve as the primary engine of A.I.-driven growth.

    By contrast, the United States and parts of the Middle East are advancing aggressively through coordinated policy, capital investment and large-scale infrastructure build-outs. Discussions around semiconductors, satellites and cybersecurity reinforced how tightly A.I. deployment is now coupled with national resilience and defense considerations. Regions that move fastest on infrastructure and deployment are likely to set technical, regulatory and commercial defaults that others will eventually be forced to adopt.

    Domain-specific A.I., with biohealth in front

    While general-purpose models remain central, much of the energy in Davos was focused on domain-specific A.I. Healthcare, biotechnology, energy and agriculture stood out as sectors where A.I. promises enormous value alongside heightened risk. Biohealth, in particular, was central to discussions of drug discovery, diagnostics and clinical decision support.

    Across these domains, participants stressed that success depends on deep collaboration between engineers, domain experts and regulators. Transparency, verifiability and accountability were repeatedly described as prerequisites for A.I. systems that touch public safety, critical infrastructure or social trust. In one AgriTech-focused session, for example, speakers emphasized that A.I.’s role in food security hinges as much on governance and data integrity as on optimization.

    A human signal amid rapid change

    Beyond the technical themes, the tone of Davos 2026 was striking in its human-centric nature. Panel after panel emphasized deploying A.I. in the service of humanity, not just efficiency or profit. Many speakers pushed back against deterministic or doom-driven narratives, highlighting that humans still write the models, set the rules and decide what A.I. ultimately serves.

    An Oxford-style debate hosted by Cognizant and Constellation Research captured this spirit. Participants were divided into “Team Humanity” and “Team A.I.,” and the format was deliberately interactive, not about winning an argument, but about changing minds on humanity’s purpose in an A.I. age. That focus on agency and responsibility ran through both formal sessions and late-night conversations.

    Davos does not dictate the future of technology. It reflects what people with power and capital are already preparing for. This year, the signal was clear: A.I. has entered its infrastructure phase. Competitive advantage will come from how organizations govern it, integrate it into work, retrain their people and navigate sovereignty and dependency risks, not from who can demo the flashiest model.

    Amid the urgency, what stood out most was the human element of thoughtful, collaborative people trying to build something better. In a moment defined by rapid change, that may be the most important signal of all.



    Mark Minevich and Dr. Kathryn Wifvat


  • Google adds AI image generation to Chrome, side panel option for virtual assistant


    Google is empowering its Chrome browser with the ability to alter imagery and a virtual assistant to help with online tasks as part of its push to turbocharge its digital services with more artificial intelligence technology.

    The features rolling out include making Google’s AI image generator and editing tool, Nano Banana, available to Chrome’s logged-in users on desktop computers in the United States. The expanded access to Nano Banana through the leading web browser may further blur the lines between real-life pictures and fabricated images.

    The browser’s expansion will also offer an option for Chrome’s U.S. users to open a side panel so an AI-powered assistant can help with an assortment of chores while a user remains engaged with other online tasks.

    Subscribers to Google’s AI Pro and Ultra services will also be able to activate an “auto browse” function that will log into websites, shop for merchandise on command and prepare posts on social media. Users will still have to manually complete purchases from the shopping carts prepared by AI and approve drafted social media posts.

    The AI in Chrome relies on the Gemini 3 model that Google released late last year and is now being baked into many of the services that helped its corporate parent, Alphabet, recently surpass a market value of $4 trillion.

Earlier this month, Google tapped into Gemini to bring more AI features to Gmail as part of an effort to make that service behave more like a personal assistant, and then funneled more of the technology into its search engine in hopes of providing more relevant answers tailored to users’ individual tastes and habits.

    The upgrades to Google’s search engine plug into the company’s “Personal Intelligence” technology that leverages AI to learn more about people’s lives. Google is promising to roll out a Personal Intelligence option in Chrome at some point later this year.

    Chrome’s AI makeover is rolling out just a few months after a federal judge rejected the U.S. Department of Justice’s push to force Google to sell the browser as part of the penalty for running an illegal monopoly in search. The judge rebuffed the proposed breakup partly because he believes AI already is reshaping the competitive landscape as smaller rivals such as OpenAI and Perplexity deploy the technology in chatbots and their own web browsers.

    Before releasing its AI browser Atlas last October, OpenAI had expressed interest in buying Chrome if the breakup had been ordered. Perplexity, which offers an AI browser called Comet, even submitted a $34.5 billion bid for Chrome before the judge opted against a sale mandate.


  • UK proposes forcing Google to let publishers opt out of AI summaries


    LONDON — Britain’s competition watchdog said Wednesday that Google should give news sites and content creators the choice to opt out of having their online content scraped to feed its AI overviews.

It’s part of a set of proposals from the Competition and Markets Authority aimed at loosening the U.S. tech giant’s stranglehold on the U.K.’s online search market.

    The watchdog last year labeled Google a “strategic” player in online search advertising, using new digital powers to promote more competition by forcing changes to the company’s business practices.

The CMA’s report noted that news publishers have suffered a drop in traffic since Google rolled out its AI Overviews – the summaries that appear at the top of results for some search queries – because fewer users are clicking through to the original articles.

    The watchdog said Google should give publishers “meaningful choice” over how their content is used in AI-generated responses; be more transparent about the process; and properly cite content used in AI results.

    Google said it was looking forward to engaging with the watchdog and would continue discussions with website owners.

    “We’re now exploring updates to our controls to let sites specifically opt out of Search generative AI features,” Ron Eden, Google’s principal for product management, said in a blog post.

“Our goal is to protect the helpfulness of Search for people who want information quickly, while also giving websites the right tools to manage their content.”
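Publishers can already express some AI opt-outs through the robots.txt protocol; Google documents a "Google-Extended" user-agent token that lets sites block their content from being used to train its Gemini models, though it does not currently remove pages from Search features such as AI Overviews. As an illustrative sketch (the rules below are hypothetical, not taken from any real site), such a directive can be checked with Python's standard-library robots.txt parser:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block the Google-Extended AI token
# while still allowing all other crawlers.
rules = """\
User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The AI-training token is blocked; other crawlers are not.
print(parser.can_fetch("Google-Extended", "https://example.com/article"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))        # True
```

Any dedicated opt-out for generative Search features, as the CMA proposes, would presumably need a similarly simple, machine-readable control.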

    Will Hayter, the CMA’s executive director for digital markets, said in a blog post that the measures would support the “long-term sustainability” of publishers and “help people verify sources in AI-generated results and build trust in what they see.”

    The CMA also recommended that Google rank its search results fairly, and not give priority to websites that have advertising or other business deals with Google. And it proposed making it easier for people to switch their default search engine by requiring choice screens on Android devices and the Chrome browser.

    The watchdog will make its final decision after gathering feedback in a consultation that ends on Feb. 25.


  • Pinterest cites artificial intelligence in laying off 15% of workforce


    Pinterest plans to cut its workforce by 15% this year, a move the company said will allow it to reallocate resources to the build-out of its artificial intelligence capabilities.

    The San Francisco-based company disclosed the plan in a regulatory filing on Tuesday, noting that the reduction will affect “less than 15% of the company’s workforce” and will include office space reductions. 

    Pinterest is cutting costs to create more cash flow for AI-focused roles and teams, AI‑powered products and to help accelerate how it conducts sales, according to the company’s filing.

    “We are making organizational changes to further deliver on our AI-forward strategy, which includes hiring AI-proficient talent,” a Pinterest spokesperson said in a statement. “As a result, we’ve made the difficult decision to say goodbye to some of our team members.”

    Founded in 2008, Pinterest allows users to find and save recipes, decor and other content, and shop for products. The company has 4,666 employees, according to the financial data platform FactSet.

    Pinterest’s restructuring plan is expected to be completed by Sept. 30, 2026, and will include pre-tax charges of approximately $35 million to $45 million, Pinterest said Tuesday.

    Pinterest is the latest in a series of companies to cite AI in their layoff decisions. On Monday, Nike said it was cutting approximately 775 employees as it seeks to streamline operations and accelerate the “use of advanced technology and automation.”


  • As world marks International Holocaust Remembrance Day, concern over “AI slop” grows

    As the world marked International Holocaust Remembrance Day on Tuesday, experts warned that a flood of “AI slop” is threatening efforts to preserve the memory of Nazi crimes and the millions of Jewish people killed during World War II. 

    Images seen by the AFP news agency include an emaciated and apparently blind man standing in the snow at the Nazi concentration camp Flossenbuerg, and a viral image of a little girl with curly hair on a tricycle falsely presented as a 13-year-old Berliner who died at the Auschwitz extermination camp.

    Such content — whether produced as clickbait for commercial gain or for political motives — has proliferated over the past year, distorting the history of Nazi Germany’s murder of six million European Jews during World War II.

    A person walks through the field of stelae at the Memorial to the Murdered Jews of Europe on the International Day of Commemoration in Memory of the Victims of the Holocaust, Jan. 27, 2026. (Christoph Soeder/picture alliance/Getty)


    Early examples emerged in the spring of 2025, but by the end of the year, “AI slop” on the subject “was being shown very frequently,” historian Iris Groschek told AFP.

    On some sites, examples of such content were being posted once per minute, said Groschek, who works at Holocaust memorial sites in Hamburg, including the Neuengamme concentration camp.

    With the exponential advances in AI, “the phenomenon is growing,” Jens-Christian Wagner, director of the foundation that manages the Buchenwald and Mittelbau-Dora memorials, told AFP.

    Several Holocaust memorials and commemorative associations this month issued an open letter warning about the rising quantity of this “entirely fabricated” content.

    Some of it is churned out by content farms that exploit “the emotional impact of the Holocaust to achieve maximum reach with minimal effort,” the letter said.

    The picture supposedly from the Flossenbuerg camp falls into this category, as it was shown on a page claiming to share “true, human stories from the darkest chapters of the past.”

    But the memorials warned that fake content was also being created “specifically to dilute historical facts, shift victim and perpetrator roles, or spread revisionist narratives.”

    A man watches during a commemoration of the Official Day of Remembrance of the Holocaust and the Prevention of Crimes against Humanity in the Spanish Senate, Jan. 27, 2026, in Madrid. (Europa Press News)


    Wagner points, for example, to images of seemingly “well-fed prisoners, meant to suggest that conditions in concentration camps weren’t really that bad.”

    The Frankfurt-based Anne Frank Educational Center has warned of a “flood” of AI-generated content and propaganda “in which the Holocaust is denied or trivialized, with its victims ridiculed.”

    By distorting history, AI-generated images have “very concrete consequences for how people perceive the Nazi era,” said Groschek.

    The results of trivializing or denying the Holocaust have been seen in the attitudes of some younger visitors to the camps, particularly from “rural parts of eastern Germany … in which far-right thinking has become dominant,” said Wagner.

    In their open letter, the memorials called on social media platforms to “proactively combat AI content that distorts history” and to “exclude accounts that disseminate such content from all monetisation programs.”

    “The challenge for society as a whole is to develop ethical and historically responsible standards for this technology,” they said, adding: “Platform operators have a particular responsibility in this regard.”

    German Culture Minister Wolfram Weimer said in a statement to AFP: “I support the memorials’ call to clearly label AI-generated images and remove them when necessary.”

    He said that making money from such imagery should be prevented.

    “This is a matter of respect for the millions of people who were killed and persecuted under the Nazis’ reign of terror,” he said, reminding the platforms that they have obligations under the EU’s Digital Services Act.

    Groschek said none of the American social media companies had responded to the memorials’ letter, including Meta, the owner of Facebook and Instagram.

    TikTok responded by saying it wanted to exclude the accounts in question from monetization and implement “automated verification,” according to Groschek.


  • EU steps in to make sure Google gives rivals access to AI services and data

    BRUSSELS — The European Union said Tuesday it’s stepping in to make sure Google gives rival AI companies and search engines access to Gemini AI services and data as required by the bloc’s flagship digital rulebook.

    The executive arm of the 27-nation bloc said it was opening up so-called “specification proceedings” to ensure that Google complies with the sweeping Digital Markets Act, which requires Big Tech companies to give smaller players equal access to hardware and software features.

    Brussels said part of the proceedings will specify how Google should give third-party AI companies “equally effective access to the same features” available through its own services.

    The EU will also look at whether Google is giving competing search engines fair and reasonable access to Google Search data. This will include whether AI chatbot providers are eligible to access the data.

    The proceedings fall short of a formal investigation and must wrap up within six months with draft measures that Brussels will impose on Google.

    Clare Kelly, Google’s senior competition counsel, said she was concerned about the reasons behind the procedure.

    “Android is open by design, and we’re already licensing Search data to competitors under the DMA,” Kelly said in a statement. “However, we are concerned that further rules, which are often driven by competitor grievances rather than the interest of consumers, will compromise user privacy, security, and innovation.”

    Teresa Ribera, who oversees competition affairs as executive vice president of the European Commission, said the commission seeks to “maximize the potential and the benefits of this profound technological shift by making sure the playing field is open and fair, not tilted in favor of the largest few.”

    The move adds EU pressure on Google, which is facing antitrust scrutiny after the bloc’s regulators last year started investigating whether the company gave itself an unfair advantage through the use of online content for its AI models and services.
