ReportWire

Tag: AI

  • Anthropic Tells Pete Hegseth to Take a Hike


    Anthropic is holding the line. At least for now.

    The Pentagon approached Anthropic this week with a demand that it remove the guardrails in its AI model Claude that prohibit mass domestic surveillance and fully autonomous weapons. But Anthropic is refusing to do that, according to a new statement from CEO Dario Amodei, who writes, “we cannot in good conscience accede to their request.”

    There’s a lot of money on the line. And it’s anyone’s guess what happens next.

    Earlier this week, Defense Secretary Pete Hegseth gave Anthropic a deadline of 5:01 p.m. ET on Friday to agree to the removal of all safeguards, threatening to boot Claude from U.S. military systems or designate the company as a “supply chain risk,” a label used for adversaries of the U.S. that’s never been applied to an American company before.

    Hegseth, who refers to the Defense Department as the Department of War, has even threatened to invoke the Defense Production Act, which would theoretically allow the Pentagon to just demand Anthropic do whatever Hegseth wants.

    Amodei pointed out Thursday in a letter posted online: “These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” Experts have called the contradictory messages from Hegseth “incoherent,” a label that might also apply to the Trump regime more broadly.

    Anthropic, which has a $200 million contract with the Department of Defense, told CBS News that the Pentagon’s “best and final offer,” which was sent Wednesday, seemed to have loopholes that would allow the military to disregard the protections put in place.

    “New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will. Despite DOW’s recent public statements, these narrow safeguards have been the crux of our negotiations for months,” Anthropic reportedly said.

    The new letter released by Anthropic on Thursday made sure to point out that the AI company works with the military and intelligence communities and that they “remain ready to continue our work to support the national security of the United States.” But asking to drop all safeguards is just a bridge too far.

    “Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” the company wrote.

    “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”

    The company went on to list the two use cases where it believes safeguards are needed to protect American interests. In the section on mass domestic surveillance, Amodei put the word domestic in italics, as if to warn Americans more broadly about what’s happening right under our noses.

    The letter notes that the government can purchase “detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant,” something that obviously infringes on the rights of Americans. The Pentagon has suggested it doesn’t have a plan for mass surveillance of Americans, telling CNN the conflict with Anthropic has “nothing to do with mass surveillance and autonomous weapons being used.”

    The second section of Amodei’s letter, which covers autonomous weapons, acknowledges that AI-assisted weapons are already being used on battlefields today in places like Ukraine. But it warns, “frontier AI systems are simply not reliable enough to power fully autonomous weapons.” The letter goes on to say, “We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer.”

    Amodei met with Hegseth on Tuesday in a meeting that was described by CNN as “cordial,” but it will obviously be interesting to see where this goes.

    Hegseth is not known as a particularly smart or level-headed guy, so it’s entirely possible that he tries to label Anthropic as both a national security threat and a part of America’s warfighting machine so vital that he’ll essentially draft the company to do what he wants. It sounds like we all get to find out by end of day Friday.


    Matt Novak


  • This isn’t a real image of Puerto Vallarta on fire


    The Mexican military killed Nemesio “El Mencho” Oseguera Cervantes, Mexico’s most wanted cartel boss, during an operation in Tapalpa, a town in the Mexican state of Jalisco, that was aided by U.S. intelligence.

    Violence spread after Oseguera Cervantes’ Feb. 22 killing, with suspected gang members torching buses and businesses while clashing with the authorities in multiple Mexican cities, including Puerto Vallarta in Jalisco. 

    Images of Puerto Vallarta in flames have been widely reported, but one photo shared online is not real. 

    A Feb. 22 TikTok post said it shows an image of Puerto Vallarta with scattered buildings on fire.

    “This is not a scene from a movie, this is the city of Puerto Vallarta, Jalisco in Mexico. Look at all these fires going around the city,” says the man in the TikTok video. “Well, what’s happening is they’re saying that they took down the leader of El Cartel de Jalisco Nueva Generación, AKA El Mencho… and all his people are going around all the city and just burning cars, shooting random people, fighting against the police.”

    Instagram and X users also shared the same image with English and Spanish captions claiming to show the unrest in Puerto Vallarta.

    (Screenshot of the Instagram post.)

    But the image was generated with artificial intelligence.

    The image shows the logo of Gemini, Google’s AI chatbot, at the bottom right corner. 

    PolitiFact uploaded the image to Gemini, which confirmed the image was generated using its generative AI program.

    Visual inconsistencies signal the image is fake. Some of the cars on the streets are indistinguishable, while others appear stacked on top of one another. Some of the buildings look distorted, and the smoke and fire follow unusual patterns: the fire is bright orange and sits on top of the buildings without consuming the structures, and the smoke rises in a single uniform direction, undisturbed by wind.

    (Screenshot of AI-generated image highlighting with red circles visual inconsistencies. At the bottom right is the Google Gemini logo.)

    This image doesn’t show Puerto Vallarta after the killing of Oseguera Cervantes. We rate this claim False. 


  • AI Added ‘Basically Zero’ to US Economic Growth Last Year, Goldman Sachs Says


    Meta, Amazon, Google, OpenAI, and other tech companies spent billions last year investing in AI. They’re expected to spend even more, roughly $700 billion, this year on dozens of new data centers to train and run their advanced models.

    This spending frenzy has kept Wall Street buzzing and fueled a narrative that all this investment is helping prop up and even grow the U.S. economy.

    President Donald Trump has cited that argument as a reason the industry should not face state-level regulations.

    “Investment in AI is helping to make the U.S. Economy the ‘HOTTEST’ in the World — But overregulation by the States is threatening to undermine this Growth Engine,” Trump wrote in a post on Truth Social in November. “We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes.”

    Some prominent economists have also lent credibility to this story with their analysis. Jason Furman, a Harvard economics professor, said in a post on X that investments in information processing equipment and software accounted for 92% of GDP growth in the first half of 2025. Meanwhile, economists at the Federal Reserve Bank of St. Louis similarly estimated that AI-related investments made up 39% of GDP growth in the third quarter of 2025.

    But now some Wall Street analysts are starting to rethink this narrative.

    “It was a very intuitive story,” Joseph Briggs, a Goldman Sachs analyst, told The Washington Post on Monday. “That maybe prevented or limited the need to actually dig deeper into what was happening.”

    Briggs’ colleague, Goldman Sachs Chief Economist Jan Hatzius, said in an interview with the Atlantic Council that AI investment spending contributed “basically zero” to U.S. GDP growth in 2025.

    “We don’t actually view AI investment as strongly growth positive,” said Hatzius. “I think there’s a lot of misreporting, actually, of the impact AI investment had on U.S. GDP growth in 2025, and it’s much smaller than is often perceived.”

    Hatzius said one major reason is that much of the equipment powering AI is imported. While U.S. companies are spending billions, importing chips and hardware offsets those investments in GDP calculations.

    “A lot of the AI investment that we’re seeing in the U.S. adds to Taiwanese GDP, and it adds to Korean GDP but not really that much to U.S. GDP,” he said.
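The accounting Hatzius describes follows from the expenditure definition of GDP, in which imports are subtracted. A minimal sketch with hypothetical round numbers (these are illustrative, not figures from the article):

```python
# Expenditure approach: GDP = C + I + G + (X - M).
# Hypothetical round numbers: a firm books $100B of AI data-center
# hardware as investment (I), but $90B of that equipment is imported,
# which subtracts from net exports (X - M).
investment_boost = 100        # billions added to I
imported_equipment = 90       # billions subtracted via imports M

net_gdp_contribution = investment_boost - imported_equipment
print(net_gdp_contribution)   # 10: only domestic value-add raises GDP
```

The headline investment number looks huge, but the import subtraction cancels most of it, which is why the measured contribution to U.S. growth can be near zero.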

    On top of that, there is currently no reliable way to accurately measure how AI use among businesses and consumers contributes to economic growth.

    So far, many business leaders say AI hasn’t significantly improved productivity.

    A recent survey of nearly 6,000 executives in the U.S., Europe, and Australia found that despite 70% of firms actively using AI, about 80% reported no impact on employment or productivity.


    Bruce Gil


  • Sam Altman: Know What Else Used a Lot of Energy? Human Civilization


    At last week’s India AI Impact Summit in New Delhi, industry leaders convened to discuss the future of artificial intelligence and how best to squeeze it into parts of your life you haven’t even considered. Notably absent was Bill Gates, who dropped out hours before his scheduled keynote over the ongoing scrutiny about his presence in the Epstein Files (though he continues to deny any wrongdoing). While the convention was reportedly a bit chaotic, what with the protests and all, the luminaries from around the tech world present nonetheless kept things upbeat and optimistic, declaring “full steam ahead” on the technological hype train carrying our species and planet off a cliff.

    Also in attendance was OpenAI’s Sam Altman, who earned numerous headlines over the course of the event for his words and antics. His buzz blitzkrieg started on Thursday at a seemingly easy photo-op layup with Indian Prime Minister Narendra Modi and other AI executives, all raising their joined hands in a celebratory display of industry-wide solidarity. Altman and Dario Amodei, his former colleague and now CEO of Anthropic, standing to Altman’s left, notably refused to complete the chain and hold each other’s hands, making for an all-too-poignant moment. Altman would continue to make news throughout the summit for his comments on the industry’s “urgent” need for global regulation and his sneaking suspicion that companies might actually be using AI as a scapegoat to whitewash their layoffs.

    Ever the yapper, Altman has bagged yet another round of earned media for an interview with The Indian Express’ Anant Goenka, during which he posited some controversial rebuttals to concerns about AI’s environmental impact.

    Altman started off by saying the claims about ChatGPT consuming “‘17 gallons of water for each query’ or whatever,” are “completely untrue, totally insane, no connection to reality,” before qualifying that, OK, maybe it was a valid concern when his company “used to do evaporative cooling in data centers.”

    He went on to say that there is “fair” concern about the amount of energy data centers eat to crank out the most soulless slop you’ve ever seen, but suggested the onus of responsibility for dealing with AI’s ravenous appetite falls to the energy sector itself, which Altman feels needs to “move towards nuclear or wind and solar very quickly.”

    Altman then stunned the crowd and firmly re-entered the discourse with a mind-blowing truth bomb for those who still felt AI was consuming too much energy.

    “It also takes a lot of energy to train a human,” Altman rejoined euphorically. “It takes like 20 years of life, and all the food you eat before that time, before you get smart. And not only that, it took like the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever to produce you, and then you took whatever you took.”

    It is true that every person and the sum total of human civilization have consumed a sizable amount of energy (and water) to get to where we are today. While the value comparison of a nascent tech industry and its models to the entirety of civilization and human beings may have elicited adulation at the summit, Altman got an icier reception from the internet. Social media quickly took to roasting the remarks as “dystopian” and “deeply antisocial and antihuman.”

    Perhaps further illuminating the backlash, Altman’s energy comments butt up against the frustrating lack of transparency within the industry our collective futures now hinge upon. There are currently no regulations in place requiring data centers to disclose their water and energy consumption. Furthermore, center employees and business partners are typically muzzled by nondisclosure agreements. This has made the true expenditure levels tricky for reporters and researchers to pin down.

    At least we’ve got Sam to keep us informed while waiting for some clarity about what’s actually going on and being used in those centers.


    Justin Caffier


  • An Unbothered Jimmy Wales Calls Grokipedia a ‘Cartoon Imitation’ of Wikipedia


    In our increasingly enshittified online experience, the last bastions of the Internet’s initial egalitarian promise shine like diamonds. These holdout Golden Era vestiges somehow remain useful and unadulterated by corporate greed, while under constant siege for their recalcitrance. The crown jewel of these stalwarts is Wikipedia. Sustained by a legion of volunteer editors and beg-a-thon donations since 2001, the humble open-source encyclopedia is generally regarded as our best effort yet to amass the sum of all human knowledge. Free, citation-filled, and perpetually self-auditing, it’s no wonder so many consider the online encyclopedia to be one of the few wonders of the digital world.

    Beyond an incalculable benefit to humans, this font of free information has also made model-training a whole lot easier for AI companies. But once Wikipedia-trained models began spitting out facts that comported with reality’s well-known liberal bias and pierced the industry’s echo chamber bubble, some were displeased. Cognitive dissonance now at the wheel, they declared Wikipedia yet another victim of the “woke mind virus” and set out to build their own Library of Alexandria. Leading the charge in this crusade is Elon Musk, who launched an AI-powered competitor, Grokipedia, last October.

    While speaking at India’s AI Impact Summit in New Delhi this week, Wikipedia co-founder and spokesperson Jimmy Wales was asked about the threat the site faced from Grokipedia and its ilk. Unbothered, he dismissed the xAI project as “a cartoon imitation of an encyclopedia.”

    Wales went on to champion the humans behind Wikipedia—and the mastery and due diligence they provide—as key ingredients to the site’s success.

    “Why do I go to Wikipedia? I go to Wikipedia because it’s human-vetted knowledge,” explained Wales. “We would not consider for a second today letting an AI just write Wikipedia articles because we know how bad they can be.”

    Wales described the propensity for AI models to “hallucinate” erroneous, misleading, or tangential information as their primary disqualifying factor. And he’s not wrong. A 2025 OpenAI study showed even its advanced models were still hallucinating at rates as high as 79% in some tests.

    As Wales explained, these sorts of errors become even more common and apparent when AI is asked to delve ever deeper into a subject—one that may already be niche. Where AI models fail here, their human counterparts shine. Wales touted these subject-matter experts—the “obsessives”—as the best guards against inaccuracies and the best providers of an optimal knowledge-seeking experience.

    “That sort of full, rich human context of understanding is actually quite important in terms of really understanding both what does the reader want and what does the reader need,” said Wales.

    If anything, Wales did Grokipedia a kindness by keeping the conversation hallucination-focused. Plenty of journalists and critics have already dug into the many controversies arising from Musk’s white nationalist, navel-gazing facsimile.

    Even with Wikipedia still being the universally agreed-upon ark of earthly info, a larger issue remains. We aren’t arguing over a shared reality anymore. With Grokipedia, a distinctly rival one has been created. And the more who use it, the further we get from ever fusing our two worlds back together.


    Justin Caffier


  • What is Seedance and why does it have Hollywood spooked? – Tech Digest



    Seedance 2.0. Image: https://www.youtube.com/watch?v=KUKpIVaU12A

    In February 2026, the release of Seedance 2.0 marked a significant shift in the generative AI landscape.

    Developed by ByteDance, the model has gained international attention for its ability to generate high-fidelity video content that challenges traditional production methods. Its arrival has prompted immediate reactions from major media organizations and industry bodies regarding copyright and the protection of digital likeness.

    What is it and who developed it?

    Seedance is a generative AI video model developed by the Chinese technology giant ByteDance, the parent company of TikTok. The 2.0 version, launched in early 2026, is an evolution of ByteDance’s “Seed” ecosystem of foundation models. It is currently integrated into ByteDance’s creative suite, including Jianying, the Chinese counterpart to the video-editing app CapCut.

    What are its technical capabilities?

    Seedance 2.0 is capable of generating hyper-realistic video clips up to 15 seconds long. Unlike previous models that relied solely on text-to-video, this model utilizes a multimodal “@ reference system.” This allows creators to provide specific anchors for the AI to follow, including:

    • Face Reference: Users can upload a photo to ensure a character’s face remains consistent across different scenes.

    • Motion Reference: A separate video can be used to dictate specific choreography or physical movements.

    • Audio Integration: The AI can synchronize visual movements with provided audio tracks.

    By using these specific references, the tool solves the “consistency problem” that previously plagued AI video, where characters’ features would often drift or change between frames.

    Why is the film industry concerned?

    The primary concern for the film industry is the precision with which Seedance can replicate the likeness of established actors. Shortly after its launch, a viral video surfaced showing Tom Cruise and Brad Pitt in a cinematic sequence. The realism of these “digital twins” was high enough to spark a swift response from industry unions and advocacy groups.

    Legal and Ethical Issues:

    • Consent and Likeness: Labor union SAG-AFTRA has raised alarms over the ease with which the tool can infringe on an actor’s right of publicity. The union argues that the ability to generate a performance without the actor’s physical presence or consent threatens the livelihood of human performers.

    • Copyright Infringement: The Motion Picture Association (MPA), representing studios like Disney and Paramount, has alleged that ByteDance likely trained the model on vast amounts of copyrighted film and television content without authorization. Legal representatives for Disney and Paramount have reportedly issued cease-and-desist notices to address these training data concerns.

    What is the broader impact?

    The tension surrounding Seedance 2.0 highlights the widening gap between rapid technological advances and existing legal frameworks. While ByteDance has stated it intends to implement safeguards and respect intellectual property, the efficiency of the tool is undeniable.

    Production analysts estimate that while a traditional visual effects shot can cost thousands of dollars, a Seedance-generated clip costs less than a dollar. This economic shift, combined with the technical ability to maintain character consistency, is forcing a fundamental reassessment of how digital content is protected and produced globally.



    For latest tech stories go to TechDigest.tv




    Chris Price


  • Why People Think AI Fight Between Tom Cruise & Brad Pitt Was A Scam


    On February 10, filmmaker Ruairí Robinson made a bold claim on X. “This was a 2 line prompt in seedance 2. If the hollywood is cooked guys are right maybe the hollywood is cooked guys are cooked too idk,” he wrote. The post was accompanied by a video of Tom Cruise and Brad Pitt performing a fight scene together. The clip gained attention for its apparent sophistication; it appeared well-choreographed, competently shot, and appropriately lit, which are all elements that other AI video tools have struggled to convincingly replicate. If Robinson’s claim was true, this was a significant leap forward in AI video technology. The kind of thing AI hype-men have been shilling for years and which has—until, if Robinson is to be believed, right now—turned out to be nothing more than snake oil. There’s just one problem: It’s probably still snake oil.

    Aron Peterson, a writer and software developer who has also worked in film production, post-production, and visual effects, posted a blog on his website, Shokunin Studio, questioning Robinson’s story. “The claims being made immediately rubbed me up the wrong way,” Peterson wrote. “Other demos of the Seedance model had the usual errors we have come to expect from AI video generators [but this one didn’t].” In particular, Peterson explained, “AI video generators are really bad at simulating realistic camera moves, especially handheld shaky cam,” but in the Cruise/Pitt video, “we can see the camera movement.”

    So Peterson started researching Seedance 2.0, the new AI tool from TikTok developer ByteDance that’s already doing large-scale copyright infringement, which Robinson used to create the video. Peterson “hopped over to Seedance’s website and it only took 10 seconds to find green screen footage of two stuntmen performing the same fight choreography we see in the Cruise vs Pitt scene,” he said. He also posted a comparison of the two videos on YouTube.

     

    “Was the input really just a 2 line prompt or was it actually 2 lines, green screen video footage, and face references too?” Peterson asked. “The evidence appears to show that stuntmen were filmed from several angles, that a clip had to be generated for every angle, and then finally all clips were stitched together for marketing.” Peterson’s evidence implies that the Cruise/Pitt fight scene wasn’t entirely AI generated; instead, it was probably just face replacement and background creation laid on top of footage that already existed. As TV writer David Slack put it on Bluesky, “In other words, like most AI hype — it was a con.”


    Jen Lennon


  • Android AI app exposes nearly 2m user files – including private videos – Tech Digest



    Image: Cybernews

    A popular Android AI application has left millions of private user files exposed, allowing anyone with the correct link to view private videos and photos without a password.

    Researchers from Cybernews discovered that “Video AI Art Generator & Maker,” an app designed to transform media using artificial intelligence, suffered from a critical server misconfiguration. The lapse highlights the growing privacy risks associated with the rapid rise of AI-powered creative tools.

    The security failure centered on a misconfigured Google Cloud Storage bucket which lacked any form of authentication. Because the server was left open, every single piece of media uploaded to the app since its launch in June 2023 was accessible to the public.

    In total, the exposed bucket contained approximately 8.27 million media files, creating a massive digital footprint of sensitive user data.
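For readers curious how this class of exposure is found: a world-readable Cloud Storage bucket will answer an unauthenticated request to Google's public GCS JSON listing endpoint. A minimal sketch of such a probe (the function names are ours, and any bucket name you pass in is a placeholder, not the app's actual bucket):

```python
import urllib.error
import urllib.request

# Public GCS JSON API endpoint for listing a bucket's objects.
GCS_LISTING = "https://storage.googleapis.com/storage/v1/b/{bucket}/o"

def anonymous_listing_status(bucket: str) -> int:
    """Request the bucket's object listing with no credentials attached
    and return the HTTP status code."""
    try:
        with urllib.request.urlopen(GCS_LISTING.format(bucket=bucket),
                                    timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

def is_exposed(status: int) -> bool:
    # 200 with no auth header means anyone with the link can enumerate
    # every object in the bucket; 401/403 means access control is on.
    return status == 200
```

Something like `is_exposed(anonymous_listing_status("some-app-bucket"))` is roughly the check a researcher would automate; a properly configured bucket returns 401 or 403 to anonymous callers.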

    Millions of private memories at risk

    The breach is particularly severe because it involves nearly 2 million original, private files uploaded by users from their personal devices. Specifically, the leak includes over 1.57 million private images and more than 385,000 personal videos.

    Beyond these original uploads, the database also spilled millions of AI-generated assets, including 2.87 million generated videos, 2.87 million images, and over 386,000 audio files.

    The app was developed by Codeway Dijital Hizmetler Anonim Sirketi, a firm registered in Turkey. While the developers have since secured the bucket, the exposure affects anyone who has used the application to generate AI art over the past several years.

    The scale of the leak is compounded by the app’s own privacy documentation, which explicitly warns that shared information “cannot be regarded as 100% secure” and may be subject to unauthorized access.

    Legal experts suggest these disclaimers may fall short of strict international privacy standards, such as Europe’s General Data Protection Regulation (GDPR), which mandates that companies provide “material and verifiable” security for user data.

    For the affected users, the primary risks include targeted phishing, identity theft, or the potential for private videos to be repurposed for malicious “deepfake” content.

    Security researchers advise that users of AI editing tools should regularly audit their app permissions and remain cautious about uploading highly personal or identifying content to cloud-based platforms that do not guarantee end-to-end encryption.

    This is not the first time the company’s apps have leaked user data. An independent security researcher reportedly discovered that another app developed by Codeway, Chat & Ask AI, had a misconfigured backend using Google Firebase. According to the researcher, he accessed roughly 300 million messages tied to more than 25 million users.

    For more information, see the full report: https://cybernews.com/security/android-ai-app-photo-video-editor-leak/




    Chris Price


  • Frontline AI in action: How AI-powered tools are reshaping work where it matters most – Microsoft in Business Blogs


    Frontline workers are the foundation of every industry—from retail and healthcare to hospitality and field services. Yet for years, they’ve been asked to increase productivity and deliver more value, faster, often with tools that weren’t designed for the specific realities of frontline work.

    Today, that dynamic is shifting.

    When AI is applied in practical, governed ways, it has the power to transform everyday work—reducing friction in daily workflows, empowering faster and more confident decision-making, and giving workers back time for what matters most: human connection. This shift isn’t theoretical. It’s already unfolding across frontline environments, driven by tools that meet workers where they are—on shared devices, on mobile, and inside the applications they already use.

    Voices from the Frontline: AI in Action is a limited podcast series, hosted by bestselling author and industry influencer Ron Thurston and sponsored by Microsoft. Across the series, frontline leaders and practitioners share how AI is being used today to simplify work, strengthen service, and support people—not replace them.

    Below are the key themes emerging from those conversations.

    Bringing AI into everyday frontline workflows

    For frontline teams, adoption starts with simplicity.

    Rather than introducing entirely new systems, organizations are embedding AI into familiar tools—making it easier to access intelligence without disrupting the flow of work. AI agents are emerging as the next evolution of workplace apps: purpose-built, task-focused assistants that help frontline employees find information, complete routine tasks, and stay organized. Microsoft 365 Copilot is centering agents at the core of frontline digital transformation.

    Because Copilot is embedded across Microsoft tools, frontline workers can access support through a single, intuitive entry point. This reduces context switching and lowers the barrier to adoption—especially in high-paced environments.

    As Abbie Sweeney, a program leader on the Microsoft 365 Copilot team, explained during the podcast series, “the goal isn’t automation for its own sake. It’s removing everyday friction so workers can focus on customers, patients, and guests.”

    Simplifying scheduling, reporting, and communication

    Some of the most immediate impact of AI shows up in the least glamorous tasks.

    Across industries, frontline leaders spend hours each week on scheduling, reporting, and administrative follow-up. AI can help streamline these processes—summarizing emails, generating meeting notes, and answering operational questions in seconds.

    For frontline employees, this means faster access to information like inventory availability, shift details, or process guidance without leaving the floor or logging into multiple systems. These time savings compound quickly, freeing up capacity for higher-value, customer-facing work.

    Sweeney also emphasized that, “making those processes efficient is really what Copilot is about—giving time back to the people who need it most.”

    AI in action on Microsoft’s own frontlines

    Microsoft applies the same tools internally that it brings to customers.

    At the Microsoft Experience Center in New York City, frontline associates use Copilot in Microsoft Teams and Microsoft Dynamics 365 to coordinate work, manage events, and support customers in a live retail environment. From onboarding new hires to managing high volumes of customer interactions, AI helps associates stay informed and responsive—even during peak demand.

    New employees can ask Copilot questions to quickly learn procedures and find answers without digging through long documents. Managers rely on AI to help them keep track of schedules, emails, and event logistics, ensuring teams have what they need to deliver consistent experiences.

    This “customer zero” approach allows Microsoft to learn, iterate, and scale frontline innovation based on real-world use.

    Scaling AI responsibly, with people at the center

    One theme cuts across every conversation in the series: successful AI adoption is people led.

    Rather than imposing new tools from the top down, organizations are seeing stronger results when they empower frontline employees to experiment, provide feedback, and shape how AI fits into their work. With clear governance and responsible AI principles in place, this approach supports organic adoption, faster iteration and sustainable scale—without compromising trust or security.

    The result is not just operational efficiency, but improved customer experiences, greater consistency, and enhanced connection at the frontline.

    The future of frontline work

    Technology alone doesn’t transform work—people do.

    When frontline teams are equipped with AI tools that respect how they work and what they value, the impact is immediate and tangible. Communication becomes clearer. Decisions happen faster. And workers gain more time to focus on the human moments that define great service.

    These aren’t future-state aspirations. They’re happening now, across industries, as organizations rethink how AI can truly support the people on the frontlines.

    Listen to the full series

    Explore Voices from the Frontline: AI in Action, a limited podcast series, hosted by bestselling author and industry influencer Ron Thurston and sponsored by Microsoft.

    🎧 Listen on Apple Podcasts
    🎧 Listen on Spotify
    📺 Watch on YouTube

    [ad_2]

    Microsoft in Business Team

    Source link

  • Elder Scrolls 6 Director Says AI’s Not A Fad But It Can’t Replace Art

    [ad_1]

    Every big game studio is currently trying to figure out if genAI tools are the real deal and how they can be used to help make games more quickly and efficiently. The Elder Scrolls 6 and Fallout 5 director Todd Howard recently pushed back against suggestions that the LLM boom is just a “fad” but said that just where and how AI might be implemented into the game development pipeline is still far from clear.

    “It’s certainly not a fad,” he said in a February 18 interview with Kinda Funny Games. “I think the AI answer now becomes ‘ask me in six months,’ right? It changes so much what you’re seeing out there. For us, we’re being incredibly cautious.” He confirmed that Bethesda is experimenting with AI for data-heavy tasks but keeping it out of the creative department, at least for now.

    “We can’t ignore it, in terms of it’s coming, it’s changing, every few months there’s a new model, particularly on the tech side with code or productivity or other things,” Howard said, adding, “it can help us get better at some big data tasks that just take us a lot of time, that we wish were done now so we can move onto the creative stuff.”

    At the same time, the veteran RPG designer confirmed AI isn’t being used to create anything that goes into Bethesda’s games. He didn’t get into the nitty-gritty about whether it’s being used for concept art references or placeholder text, but did suggest the company doesn’t see the technology as a potential replacement for human-made art.

    “We’re not using it to generate anything,” Howard said. “I think there’s an element of artistic intention that is essential to what we do and what others do. And if you look across things outside of AI, go back a hundred years, this idea of craftsmen, I still think craftsmen, and that handcrafted human intention, is what makes things special, and that’s where we want to be.”

    Speaking in generalities is one way to avoid the hot water some game company leaders have gotten into by embracing AI experimentation. Some evangelists are predicting computer gods and mass unemployment by 2030. My only hope is we’re playing The Elder Scrolls 6 by then.

    [ad_2]

    Ethan Gach

    Source link

  • Amazon halts Blue Jay robotics project after less than six months | TechCrunch

    [ad_1]

    Amazon has hundreds of thousands of robots in its warehouses, but that doesn’t mean all of its robotic initiatives are a success story.

    The e-commerce giant has halted its Blue Jay warehouse robotics project just months after unveiling the tech, as originally reported by Business Insider and confirmed by TechCrunch.

    Blue Jay, a multi-armed robot designed to sort and move packages, was unveiled in October for use in the company’s same-day delivery facilities. At the time, the company was testing the robots at a facility in South Carolina and said it took significantly less time to develop Blue Jay — only about a year — than its other warehouse robots, a speed the company credited to advancements in AI.

    Amazon spokesperson Terrance Clark told TechCrunch that Blue Jay was launched as a prototype — although that was not made clear in the company’s original press release.

    The company plans to use Blue Jay’s core technology in other robotics “manipulation programs,” and employees who worked on Blue Jay are being moved to other projects.

    “We’re always experimenting with new ways to improve the customer experience and make work safer, more efficient, and more engaging for our employees,” Clark told TechCrunch over email. “In this case, we’re actually accelerating the use of the underlying technology developed for Blue Jay, and nearly all of the technologies are being carried over and will continue to support employees across our network.”

    Amazon also unveiled the Vulcan robot last year, which is used in the storage compartments of the company’s warehouses. Vulcan is a two-armed robot, with one arm meant to rearrange and move items in a compartment while the other is equipped with a camera and suction cups to grab goods. The Vulcan can allegedly “feel” the objects that it touches and was trained on data gathered from real-world interactions.


    Amazon has been developing its internal robotics program since 2012, when it purchased Kiva Systems, a robotics company whose warehouse automation technology formed the foundation of Amazon’s fulfillment operations. It surpassed 1 million robots in its warehouses last July.

    [ad_2]

    Rebecca Szkutak

    Source link

  • Bitcoin May Gain If AI Job Losses Trigger Bank Stress, Hayes Says

    [ad_1]

    Arthur Hayes has issued a stark market warning: he sees a growing split between his preferred risk gauge, Bitcoin, and the tech-heavy Nasdaq 100 as a signal that credit stress may be building under the surface.

    Hayes, a co-founder and former CEO of cryptocurrency exchange BitMEX, calls Bitcoin a “fiat liquidity fire alarm” — an asset that reacts quickly when credit conditions change.

    A Warning From Market Signals

    When two assets that often move together start to pull apart, traders take notice. Hayes believes a gap like this deserves investigation because it could point to trouble in bank balance sheets or in the flow of lending.

    He argues the move is not about one stock or one trade; it is about the plumbing of credit and how fast liquidity can dry up when things turn.

    Source: Arthur Hayes

    How AI Job Cuts Could Ripple Through Credit

    Reports note that companies cited AI as a reason for thousands of layoffs in recent years, with an outplacement firm counting roughly 55,000 cuts in 2025 that were tied to AI. Much of that hit was inside tech.

    Hayes sketches a rough scenario: a sizable drop in knowledge-worker employment would weaken mortgage and consumer credit repayment, which could then shave bank equity and tighten lending.

    The numbers he offers are approximate and built on multiple assumptions, but they are intended to show how a shock to white-collar paychecks could cascade into the credit system.
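    The shape of that chain can be sketched as a toy calculation. Every input below is hypothetical and chosen purely for illustration; it mirrors the kind of back-of-the-envelope math Hayes describes, not his actual figures.

    ```python
    # Toy illustration of the layoffs -> credit losses -> lending cascade.
    # All inputs are hypothetical. The key point is that banks lend against
    # a thin equity base, so a credit loss shrinks lending capacity by a
    # multiple of itself.
    layoffs = 500_000           # hypothetical white-collar job losses
    default_rate = 0.10         # share of those borrowers who default
    loss_per_default = 150_000  # avg loss on a defaulted loan, USD
    leverage = 10               # assets supported per dollar of bank equity

    credit_losses = layoffs * default_rate * loss_per_default
    lending_contraction = credit_losses * leverage  # equity hit scaled by leverage

    print(f"credit losses:       ${credit_losses / 1e9:.1f}B")       # $7.5B
    print(f"lending contraction: ${lending_contraction / 1e9:.1f}B")  # $75.0B
    ```

    The leverage multiplier is what turns a modest payroll shock into a much larger pullback in credit, which is the mechanism behind the bank-stress scenario.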

    Source: Arthur Hayes

    Expectations About Central Bank Action

    Hayes expects a policy response if banks start to fail and credit freezes. He argues the Federal Reserve would step in with fresh liquidity, and that more money creation would follow — a move he says would be favorable for Bitcoin’s price outlook.

    That scenario has been a recurring theme in his commentary; past essays and posts have linked anticipated Fed liquidity to sharp rallies in crypto markets.

    BTCUSD currently trading at $67,298. Chart: TradingView

    Altcoin Bets And Fund Positioning

    His fund, Maelstrom, is said to be planning staking or stablecoin deployments into privacy-focused and exchange-native plays once a liquidity policy shift occurs, naming Zcash and Hyperliquid as examples. That kind of tactical stance is meant to profit from a short-term surge in risk assets after a policy pivot.

    A Measured View

    This is a dramatic chain of events: AI job losses lead to credit losses, which cause bank stress, which forces the central bank to expand money supply, which lifts Bitcoin.

    Each link is plausible, but none is guaranteed. Some of Hayes’ figures are rough estimates meant to illustrate risk rather than to act as a precise forecast.

    Market history shows that central banks do sometimes step in, and that policy moves can power asset rallies, but outcomes depend on timing, scale and public confidence — factors that are hard to predict in advance.

    Featured image from Unsplash, chart from TradingView

    [ad_2]

    Christian Encila

    Source link

  • More data centers coming to Illinois as residents complain about noise, electric bills: What to know

    [ad_1]

    AURORA, Ill. (WLS) — Data centers are moving in. They power everything from streaming services to artificial intelligence, but critics say they are noisy and can jack up your electric bills.

    Now, the I-Team and ABC News are finding that more than 3,000 data centers are already operating nationwide, with at least 1,000 more planned. Some are in the Chicago area.


    Companies point to economic benefits, but residents are raising concerns about noise and power usage.

    When David Szala moved into his Aurora home in 2015, he knew he was near a data center.

    “You can hear it as soon as you walk out. Fans, just constant with the noise,” Szala said.

    But in recent years, the CyrusOne data center campus has expanded significantly.

    Szala and his neighbor, Bryan Castro, both say they hear cooling fans all day and night, and sometimes, generators create more noise.

    “You feel it in your bones,” Szala said.

    Castro says the buzzing bounces through his backyard, which looked a lot different when he moved there in 2007.

    “You can feel the vibrations in the house,” Castro said. “This was 25 acres of nothing but forest.”

    Neighbors say CyrusOne put up a sound recorder to monitor noise levels and erected walls, but both residents ABC7 spoke with said the walls do not help much.

    “The noise doesn’t drop down and get stopped. The noise radiates from above,” Castro said.

    CyrusOne told the I-Team the noise issue is unique to their Aurora location, and it apologizes “for the impact this situation has had on our neighbors in Aurora. We take responsibility and are well underway with a three-phase engineering project.” The company says additional rooftop sound walls and other noise reduction equipment are on schedule for completion and “we anticipate continued improvement in sound levels.” The city of Aurora also says these steps should help.

    There is also a concern over the rising cost of electric bills.

    “Our electric bills this past year are probably 50% higher than they’ve been years past,” Castro said.

    CyrusOne says it understands that higher energy bills are a concern and it “pays for all electricity we consume at rates established through Illinois’ regulatory framework,” and that it takes steps with utilities to “protect households from cost volatility” and “moderate costs over time.”

    Illinois watchdog group Citizens Utility Board says the cost of improving the infrastructure for data centers can get passed on to consumers.

    “Some of them use a decent amount and some use massive amounts of electricity,” said Citizens Utility Board Executive Director Sarah Moskowitz. “The way that our power system is regulated, you have to build infrastructure, and then, it takes decades to pay it off.”

    Moskowitz continued, “What if the data centers don’t show up, or what if they are there for only a short period of time? Or, what if they don’t use as much electricity as they said? Then, they’re not going to be able to pay that off. And the rest of the customers, those of us who’ve been here, are left holding the bag.”

    ABC7 has also been covering public meetings over proposed data centers, and there are questions about water use and the environment.

    The I-Team and ABC News studied a private company’s Data Center Map and found that there are at least 4,302 data center projects across the U.S., large and small. Of those, 3,038 are currently operational, with another 1,203 either under construction or planned for construction. Sixty-one have acquired land.

    In Illinois, there are 164 operating data centers, with another 81 planned for construction. The largest project planned in the state is in Yorkville. It would be 2 gigawatts and, according to the ABC7 data team, would use the same energy that would power approximately 1.7 million homes. That’s more than every home in the city of Chicago.
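    That homes comparison holds up to back-of-the-envelope arithmetic. The sketch below assumes the facility draws its full 2 GW continuously and uses roughly 10,700 kWh per year as an average US household’s consumption; both are assumptions for illustration, not figures from the article.

    ```python
    # Rough check of the "2 GW ≈ 1.7 million homes" comparison.
    # Assumes the facility runs at full load year-round, and an average
    # US household using ~10,700 kWh/year (an assumed figure).
    HOURS_PER_YEAR = 8760
    facility_gw = 2.0
    facility_kwh_per_year = facility_gw * 1_000_000 * HOURS_PER_YEAR  # GW -> kW -> kWh
    household_kwh_per_year = 10_700

    homes_equivalent = facility_kwh_per_year / household_kwh_per_year
    print(f"{homes_equivalent / 1e6:.2f} million homes")  # → 1.64 million homes
    ```

    The result lands in the 1.6–1.7 million range, consistent with the figure the ABC7 data team cites.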

    Industry experts say the facilities are needed for modern digital infrastructure and can benefit the economy.

    “So, for poor communities that specifically need a big increase in tax revenue, data centers are really good for that. They’re really not very good for jobs. They create a lot of construction jobs, and then a few additional maintenance jobs. But they create very few jobs relative to the resources that they use,” said Effective Altruism DC Director and artificial intelligence expert Andy Masley.

    The Illinois Pollution Control Board says that there have been no noise enforcement proceedings for data centers in the entire state in 2025, and there are no open cases right now.

    “They have to build these things to support what’s going with computers, but they need to keep them away from neighborhoods,” Castro said.

    Illinois state legislators recently introduced a bill that could require data centers to reveal how much water and energy they are using. The bill could also limit the amount of energy costs passed on to consumers.

    You can watch more on “Data Land USA: AI on overdrive next door” on Tuesday morning on “Good Morning America” and throughout the day on ABC News.

    Copyright © 2026 WLS-TV. All Rights Reserved.

    [ad_2]

    Jason Knowles

    Source link

  • Tech giants accused of greenwashing over AI claims – Tech Digest

    [ad_1]

    Tech giants are facing mounting accusations of “greenwashing” as a new report claims they are misleading the public by conflating traditional machine learning with energy-intensive Generative AI.

    The research, commissioned by nonprofits including Climate Action Against Disinformation, suggests that the industry is using “diversionary tactics” to mask the massive environmental cost of the current AI gold rush.

    According to energy analyst Ketan Joshi, the report’s author, tech companies frequently cite the climate benefits of “old-school” predictive models – which can optimize power grids or track deforestation – to justify the explosive growth of gas-guzzling data centres required for generative tools such as OpenAI’s ChatGPT or Microsoft’s Copilot.

    The analysis of 154 corporate and industry statements failed to find a single example where Generative AI led to a “material, verifiable, and substantial” reduction in global emissions.

    The debate centres on the stark difference in energy profiles. While predictive AI uses relatively modest resources, Generative AI requires massive clusters of high-performance GPUs. Sasha Luccioni, a climate lead at Hugging Face, notes that when the industry discusses AI that is “bad for the planet,” it is almost exclusively referring to large language models and image generators.

    This surge in demand has sparked a critical question: can Generative AI ever be carbon neutral? The hardware fuelling this revolution generates immense heat, requiring sophisticated cooling systems that often consume vast amounts of water and electricity.

    While companies like Google claim their emission reduction estimates are based on robust science, data centres are projected to account for 20% of electricity demand growth in wealthy nations by the end of the decade.

    The report likens these tech claims to fossil fuel companies overstating the potential of carbon capture while their core business continues to drive pollution. As complex functions such as video generation and deep research proliferate, analysts argue that the narrative of AI as a climate saviour is being used to distract from the “preventable harms” of unrestricted data centre expansion.

    Unless transparency regarding the carbon footprint of GPUs and cooling improves, the industry’s green claims will remain under intense scrutiny.

    Via The Guardian


    For latest tech stories go to TechDigest.tv


    Discover more from Tech Digest

    Subscribe to get the latest posts sent to your email.

    [ad_2]

    Chris Price

    Source link

  • Disney threatens ByteDance with legal action over AI tool, Four new astronauts arrive at the ISS – Tech Digest

    [ad_1]

    Disney has the rights to Marvel characters such as Spider-Man (above). Image: Marvel

    Chinese technology giant ByteDance has pledged to curb a controversial artificial intelligence (AI) video-making tool, following threats of legal action from Disney and complaints from other entertainment giants. In the last few days, videos made using the latest version of the app Seedance have proliferated online. Many have been lauded for their realism. But the trend has also sparked alarm from several Hollywood studios that have accused the AI platform’s makers of copyright infringement. On Friday, Disney sent a cease-and-desist letter to ByteDance accusing it of supplying Seedance with a “pirated library” of the studio’s copyrighted characters, including those from Marvel and Star Wars. BBC 

    Google is putting people at risk of harm by downplaying safety warnings that its AI-generated medical advice may be wrong. When answering queries about sensitive topics such as health, the company says its AI Overviews, which appear above search results, prompt users to seek professional help, rather than relying solely on its summaries. “AI Overviews will inform people when it’s important to seek out expert advice or to verify the information presented,” Google has said. But the Guardian found the company does not include any such disclaimers when users are first presented with medical advice. Guardian 

    Andrei Fedyaev, Jack Hathaway, Jessica Meir and France’s Sophie Adenot (left to right front row), with Sergey Kud-Sverchkov, Christopher Will and Sergei Mikayev behind. Pic: NASA

    Four new astronauts have arrived at the International Space Station to replace their colleagues who pulled out early over health concerns. SpaceX delivered the US, French and Russian astronauts to the orbital research laboratory 277 miles (446km) up in space, a day after they launched from Cape Canaveral. The new crew members include NASA’s Jessica Meir and Jack Hathaway, France’s Sophie Adenot and Russia’s Andrei Fedyaev. The last group of astronauts were forced to evacuate after one of them suffered what officials described as a serious health issue. Sky News 

    A stock market crash triggered by fears around artificial intelligence (AI) has derailed the £575m takeover of a British company. Shares in Pinewood AI, which is listed as Pinewood Technologies, fell by 30pc on Monday after private equity firm Apax said it no longer planned to make a bid for the software provider. Apax said it had pulled out of talks owing to “prevailing challenging market conditions”, a reference to the widespread slump in software stocks in recent weeks. Telegraph 

    As a trillion-dollar company with one of the most recognizable brands in the world, I don’t think Apple has a lot to worry about. But when I looked at the results of a recent poll I ran, asking you, dear readers, if you use Apple Intelligence, the results made me grunt an ‘ooph’. That’s because a hefty 96% of respondents selected the ‘Nope, it’s not for me’ option, leaving a mere 4% to select ‘Yes, it’s pretty good’ as a response. TechRadar 

    Apple’s iOS 27 update will prioritize cleaning up the operating system’s internals, with engineers making changes that could result in better battery life, according to Bloomberg‘s Mark Gurman.

    The effort is said to be similar to what Apple did with its Snow Leopard Mac update years ago, and will involve removing old code, rewriting existing features, and subtly upgrading apps to improve their performance. The result should hopefully be a “snappier, more responsive” OS, says Gurman. Apple is also reportedly planning some interface tweaks, but nothing as dramatic as the Liquid Glass overhaul introduced with iOS 26, which will likely comfort some users.



    [ad_2]

    Chris Price

    Source link

  • Paramount Latest Studio To Hit ByteDance With Cease And Desist Letter Over AI Models

    [ad_1]

    Paramount Skydance has joined Disney as the latest Hollywood studio to slam ByteDance over AI models Seedance and Seedream that it says are ripping off intellectual property and must stop.

    “We insist that ByteDance immediately take all necessary steps to (i) prevent violations of our intellectual property rights by ensuring that our content is not used or created by ByteDance or the Seed Platforms going forward, and (ii) remove all infringing instances of Paramount’s content from ByteDance’s platforms and systems,” the David Ellison company’s attorney wrote in a cease and desist letter to Beijing-based ByteDance CEO Liang Rubo.

    The missive was viewed by Deadline.

    “ByteDance markets the Seed Platforms as image and video generation tools that facilitate the creation and dissemination of visual and audiovisual content by their users in response to searches and prompts. However, much of the content that the Seed Platforms produce contains vivid depictions of Paramount’s famous and iconic franchises and characters, which are protected under copyright law, trademark law, and the law of unfair competition (among other doctrines),” Par wrote, ticking off South Park, SpongeBob SquarePants, Star Trek, Teenage Mutant Ninja Turtles, The Godfather, Dora the Explorer, and Avatar: The Last Airbender as just some of the properties that have been repeatedly infringed by the Seed Platforms in images and videos.

    Par also called it self-evident “that our company’s intellectual property was used to train the models that underlie these tools. Such training was also done without our consent and is a violation of the law. To be very clear, Paramount strongly objects to the use of our legally protected works in any of the manners described above—both as inputs trained upon by these types of models and as works that are created by them—without our express authorization.”

    Amid a rising tide of angst over the Seed platforms, especially video generated by the new Seedance 2.0, Disney Friday sent a cease and desist letter for IP infringement of properties from Star Wars to Marvel to Family Guy.

    The Motion Picture Association and the Human Artistry Campaign issued statements last week slamming ByteDance.

    [ad_2]

    jillg366

    Source link

  • Dems Want to Ban Surveillance Pricing at Big Grocery Stores

    [ad_1]

    Sen. Ben Ray Luján, a Democrat from New Mexico, and Sen. Jeff Merkley, a Democrat from Oregon, introduced legislation Thursday that would ban so-called surveillance and surge pricing in grocery stores. Officially known as the Stop Price Gouging in Grocery Stores Act of 2026, the Senate legislation is modeled on a 2025 bill in the House.

    The new bill would require stores to disclose their use of facial recognition technology and would ban electronic shelf labels (ESLs) in large grocery stores. ESLs are controversial because they allow retailers to change the price of a given item remotely, opening up the possibility that they could be tied to algorithms that raise and lower prices based on conditions in the store or who’s trying to buy something.

    Hypothetically, stores can charge different prices at different times of day or rely on different inputs, right down to personalizing the price based on an individual who was looking at a given item, spotted with facial recognition tech. The concern is that factors like race, gender, and income level could be used to determine how much people are charged. A 2025 study found that Instacart was charging customers different prices for the same products, sometimes as much as 23% more. A few weeks after the study received negative press coverage, Instacart announced it was pulling the plug on its AI-powered pricing.
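    To make the mechanism concrete, here is a minimal sketch of the kind of rule an algorithm-driven shelf label could apply. The inputs and multipliers are invented for illustration and do not describe any retailer’s actual system.

    ```python
    # Hypothetical illustration of per-shopper dynamic pricing via an ESL.
    # All inputs and multipliers are invented; no real retailer's logic is shown.
    def shelf_price(base: float, hour: int, shopper_profile: dict) -> float:
        price = base
        if 17 <= hour <= 19:                       # evening-rush surcharge
            price *= 1.10
        if shopper_profile.get("frequent_buyer"):  # item this shopper buys often
            price *= 1.12
        return round(price, 2)

    # The same item, same store, two different prices:
    print(shelf_price(4.00, hour=12, shopper_profile={}))                        # 4.0
    print(shelf_price(4.00, hour=18, shopper_profile={"frequent_buyer": True}))  # 4.93
    ```

    Here the same item rings up roughly 23% higher for one shopper than another, which is the scale of difference the Instacart study reported. A ban on ESLs removes the remote-update channel a rule like this depends on.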

    “In New Mexico and across the country, Americans are struggling to put food on the table,” Sen. Luján said in a statement posted online. “With rising costs driven by President Trump’s trade war and Republican cuts to SNAP, Congress must act to ensure that technologies are being used to improve the lives of Americans, not increase their grocery bills. Our friends, family, and neighbors should be able to shop at their local grocery store without worrying about predatory pricing.”

    At least six states have seen legislation introduced to stop surge and surveillance pricing, according to the United Food and Commercial Workers International Union (UFCW), which has also developed a 30-second ad to spread the word on the threat.

    It’s not clear how many grocery outlets are actually utilizing in-store surveillance pricing, but part of the reason legislators feel like new laws are needed is that they want to get ahead of things before the practice becomes commonplace.

    “This legislation is actually pretty simple: If two people are in the same store buying the same item, they should pay the same price,” Washington State Representative Mary Fosse said in an emailed statement.

    “Large retailers are investing in AI, algorithms, and data systems that can change prices instantly, individually, and secretly,” Fosse continued. “We need to stop the rip-off at the register before these practices become the norm. Technology should serve workers and consumers, not exploit them.”

    The Biden administration launched an investigation into surveillance pricing in 2024 with FTC chair Lina Khan initiating a study on the ways it may harm U.S. consumers. But after President Donald Trump took power in 2025, his administration killed the study.

    Surge pricing for food is extremely unpopular, with one of the most famous cases happening in 2024 when Wendy’s merely discussed the possibility of introducing it in 2025. Within just a couple of days the backlash had gotten so bad the company denied even contemplating the idea, despite pretty clear evidence it was working on surge pricing. The restaurant chain’s CEO had even said it would “begin testing more enhanced features like dynamic pricing” in an earnings call.

    Consumers are extremely price sensitive when it comes to food these days, and it’s no wonder, as people struggle to get by in an economy that prioritizes stock prices and Wall Street.

    “Americans are hurting under the affordability crisis, and UFCW members see the pain in their faces every time they enter the grocery store,” UFCW International President Milton Jones said in a statement to Gizmodo. “Our members also feel it themselves when they shop for their families.”

    “We are starting this national campaign to stop corporations from being able to change prices in front of their eyes just because they live in the wrong zipcode or are a new parent. We are proud to work with elected officials in every part of the country to lead the fight for affordable groceries and good jobs because that is what our members want.”

    [ad_2]

    Matt Novak

    Source link

  • How Ethereum Could Become The Default Network For AI Development, Vitalik Explains

    [ad_1]


    [ad_2]

    Godspower Owie

    Source link

  • The Pentagon Wants to Raw Dog the Latest AI Models on Classified Systems

    [ad_1]

    The Pentagon is looking to expand its use of artificial intelligence across both unclassified and classified networks, but negotiations with major AI companies have hit a sticking point.

    Defense officials want access to the most advanced models without any usage restrictions or heavy guardrails. According to Reuters, military officials argue they should be allowed to deploy AI however they see fit, as long as it complies with U.S. law.

    The push comes as OpenAI announced Monday that it has made a customized version of ChatGPT available through the War Department’s AI platform, GenAI.mil. The platform, which launched in December, is used by roughly 3 million civilian and military personnel and already includes tailored versions of tools from xAI and Google’s Gemini.

    “We are pushing all of our chips in on artificial intelligence as a fighting force. The Department is tapping into America’s commercial genius, and we’re embedding generative AI into our daily battle rhythm,” Secretary of War Pete Hegseth said in a press release about the platform. “AI tools present boundless opportunities to increase efficiency, and we are thrilled to witness AI’s future positive impact across the War Department.”

    OpenAI’s version of ChatGPT on the platform is designed to help with day-to-day tasks like summarizing policy documents, drafting reports, and assisting with research. But Reuters reports that Pentagon officials are pushing to roll out AI systems across all classification levels, potentially opening the door to more sensitive applications like mission planning or weapons targeting.

    An unnamed official told Reuters that the Pentagon is “moving to deploy frontier AI capabilities across all classification levels.”

    Currently, Anthropic’s models are available in select classified settings through third-party providers, but with significant usage restrictions. Reuters reports that Anthropic executives have told military officials they do not want their systems used for autonomous weapons targeting or domestic surveillance.

    Meanwhile, Semafor reports that Anthropic has not agreed to allow its models to be used for “all lawful uses,” and its tools are not currently available on GenAI.mil.

    The negotiations leave AI companies walking a tightrope. On one side are employees who oppose military use of their systems and fear it will hamper future recruiting. On the other is the Pentagon, which represents both a massive customer and a powerful political force. Semafor reported that Anthropic’s stance has “drawn ire from the Pentagon and the White House.”

    At the same time, some OpenAI employees have expressed concerns about giving competitors an advantage by stepping back from defense work, according to Semafor.

    The Pentagon, OpenAI, Anthropic, Google, and xAI did not immediately respond to requests for comment from Gizmodo.


    Bruce Gil


  • AI researchers quit, warning that ‘world is in peril’ – Tech Digest


    A wave of high-profile resignations has hit the AI industry, with leading researchers abandoning prestigious roles and issuing dire warnings about the technology’s direction.

    Mrinank Sharma, a senior safety leader at Anthropic, has quit his position to move back to the UK and pursue a degree in poetry, warning that the “world is in peril.”

    Sharma, who led the Safeguards Research Team at the San Francisco-based firm, announced his departure in a cryptic letter shared on social media.

    He stated that humanity is approaching a critical threshold where “our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”

    The researcher’s concerns extend beyond just AI, citing “interconnected crises” involving bioweapons and a broader societal decline. During his tenure, Sharma’s work focused on preventing AI-assisted bioterrorism and investigating how digital assistants might “make us less human.”

    He admitted that even at a safety-focused firm like Anthropic, employees “constantly face pressures to set aside what matters most.”

    Values v commercial pressure

    The exodus is not limited to Anthropic. At rival firm OpenAI, researcher Zoe Hitzig also resigned this week, specifically citing the company’s decision to introduce advertising into ChatGPT. Hitzig warned that the chatbot has amassed an unprecedented archive of “human candor,” including users’ medical fears and religious beliefs.

    “Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent,” Hitzig wrote in a New York Times essay. She argued that the drive for engagement and revenue creates “strong incentives to override” safety rules, mirroring the early mistakes of social media giants.

    The trend of “technical exits” suggests a growing rift between the developers of AI and the corporate structures that fund them. For Sharma, the solution is a radical retreat from the industry entirely.

    He stated his intention to become “invisible” for a time, seeking “poetic truth” alongside scientific truth as a necessary way of navigating the current global moment.



    Chris Price
