ReportWire

Tag: Artificial Intelligence

  • Meta Reportedly Cutting About 1,500 VR and AR Jobs Amid Renewed Push to Become an AI Juggernaut


According to an anonymously sourced New York Times article, Meta will announce as early as Tuesday that about 10% of the workers in the company’s Reality Labs division are set to lose their jobs—about 1,500 people in a division of about 15,000.

    Reality Labs was once Oculus, the VR headset company founded by Palmer Luckey, originally funded through a Kickstarter campaign. Since being acquired in 2014 by what was at the time called Facebook, Oculus has evolved into the “virtual and augmented reality”-focused division of Meta. It makes headsets and the Ray-Ban Stories smart glasses along with VR and AR software, including the Horizon Worlds social networking platform—what’s left of it, anyway.

    The Times says Meta CTO Andrew Bosworth has called for a meeting of Reality Labs staff members on Wednesday that he has deemed the “most important” meeting of the year, and indicated that employees are meant to attend in person. From the sound of it, this meeting will be held the day after the layoff plan is officially made public.

My Gizmodo colleague James Pero strongly implied last month that something like this was coming, noting that a planned 30% budget cut at Reality Labs was, if not the death knell for the metaverse project at Meta, then at least a clear shift in priorities toward AI.

And indeed, on Monday Meta announced a massive buildout plan for data center capacity called Meta Compute, aimed at building “tens of gigawatts” of AI compute before the end of the 2020s. Compute buildout is somewhat crudely measured in gigawatts—one gigawatt being roughly the power usage of a major U.S. city. So Meta’s rather vague “tens of gigawatts” projection translates to “enough data centers to use more than ten San Franciscos’ worth of electricity, but less than one hundred San Franciscos’ worth.”

    Also on Monday, Meta announced something sure to help smooth over the friction involved in all this AI data center construction: the hiring of Dina Powell McCormick—a former advisor to Republican presidents George W. Bush and Donald Trump, who has also worked as a banking executive—to be Meta’s new president and vice chairperson.

    “How we engineer, invest, and partner to build this infrastructure will become a strategic advantage,” CEO Mark Zuckerberg wrote in a statement.

    Zuckerberg also used the term “strategic advantage” in 2022 to explain his push for more metaverse-related technology. “Enabling more experiences is really the primary driver and then the sort of fortification against external risks is certainly a strategic advantage over the long-term,” he said at the time.


    Mike Pearl


  • Google’s corporate parent joins $4 trillion club as investors continue to bet on AI breakthroughs


    Google parent Alphabet Inc. on Monday became the fourth Big Tech powerhouse to be valued at $4 trillion, a once seemingly unfathomable milestone that’s become more like a rite of passage amid an artificial intelligence arms race.

    Alphabet reached the threshold just four months after Google dodged the U.S. government’s attempt to break up its internet empire following a ruling last year that branded its ubiquitous search engine an illegal monopoly.

    In an effort to prevent further abuses, a federal judge overseeing the case ordered a shake-up that investors widely interpreted as a slap on the wrist, resulting in a 57% increase in Alphabet’s stock price since then that has created an additional $1.4 trillion in shareholder wealth.

The rapid run-up thrust Alphabet into a $4 trillion club that has previously welcomed computer chipmaker Nvidia, which became the first to cross the barrier in July. Both Apple and Microsoft also surpassed market values of $4 trillion last year, but they have fallen back amid worries that the spending spree on AI will turn into a bubble that bursts.

Nvidia’s market value briefly topped $5 trillion in late October before backtracking, as fears of an AI bubble also exacted a toll on the company whose chips are needed to power the technology.

Meanwhile, Amazon is currently valued at $2.6 trillion, in part because of its AI ambitions, and Facebook parent Meta Platforms is valued at $1.6 trillion for some of the same reasons. Electric automaker Tesla also is betting heavily on AI, a gambit that prompted the company — now valued at $1.5 trillion — to approve a compensation package that would pay CEO Elon Musk $1 trillion if several targets are hit, including reaching a market value of more than $8.5 trillion.

    Alphabet joined the $4 trillion club on the same day that Apple announced it will rely on Google’s AI technology to help smarten up its virtual assistant Siri after coming up short in its own efforts to bring more advanced features to the iPhone.

    Google is well positioned to become one of the big winners in the AI battle because it is deploying the technology to transform its search engine into more of a conversational answer engine to compete against the likes of OpenAI’s ChatGPT and Perplexity.

The next generation of the Gemini model underlying Google’s AI technology has been winning rave reviews since its recent release, helping to drive up Alphabet’s stock price while the shares of other AI-driven companies have dipped with ongoing bubble worries. Google’s Cloud division, which sells AI tools to corporate customers and government agencies, has emerged as Alphabet’s fastest-growing segment over the past three years, while AI technology has enabled its Waymo robotaxi division to dispatch more self-driving vehicles in cities across the U.S.

The competitive threat posed by rising AI stars such as OpenAI and Perplexity is one of the reasons that U.S. District Judge Amit Mehta rebuffed the U.S. Justice Department’s proposal to force Google to sell its industry-leading Chrome web browser. The judge reasoned that the technological advances unleashed by AI already have been forcing significant changes in online search.

Alphabet’s market value could plunge if investor sentiment about the company’s exposure to a potential AI bubble suddenly shifts. Even Alphabet CEO Sundar Pichai conceded, in a November interview with the BBC, that some market “irrationality” is contributing to the skyrocketing market values of Big Tech companies.

“I think no company is going to be immune, including us,” Pichai said of what would happen if the AI-driven euphoria suddenly evaporates.



  • AI Pushback: Governments Eye Action on Political Ads, Deepfakes


    New York Gov. Kathy Hochul wants to ban AI-generated images from state politics. Canada may restrict “deepfakes” after the uproar over Grok “undressing” photos. New Jersey restricts phone use in schools. Vermont beer-makers are struggling.

    As we mention here regularly, Decision Points primarily focuses on national and international news. But we also occasionally deliver a roundup of local, regional or under-the-radar news with a political dimension – something unusual or interesting, or that may illustrate a broader trend.

    Our guiding principle is that the definition of politics includes how a society organizes itself to allocate finite or scarce resources, manage internal disagreements and blunt external threats.

    Here’s this week’s look around.


    New York May Ban AI in Political Campaigns

    Hochul, a Democrat, said Sunday that she wants to forbid sharing AI-generated images of people, including candidates, without their consent in the 90 days before an election, the New York Times reported.

    It’s not academic. The Times noted that, in last year’s New York mayoral race, former Gov. Andrew Cuomo’s campaign released an AI video showing Zohran Mamdani, who went on to win that contest, eating rice with his hands. “It also suggested that his supporters were criminals who beat their wives, sold drugs and drove drunk.”

    Other states, like Texas and Minnesota, have similar bans.

    The question is whether such limitations will pass constitutional muster. Wouldn’t freedom of speech extend to caricaturing political candidates?

    Canada May Criminalize Sexual Deepfakes

    Evan Solomon, Canada’s minister of artificial intelligence and digital innovation, said on the hellscape formerly known as Twitter over the weekend that our northern neighbor may take action to rein in sexualized deepfake images and videos.

    “Deepfake sexual abuse is violence,” Solomon said. “We must protect Canadians, especially women and young people, from exploitation. Platforms and AI developers have a duty to prevent this harm.”

    How? By amending Canada’s criminal code to list deepfakes among the “intimate images” that it’s illegal to publish.

This was also not academic. It came after the controversy that erupted when users of Elon Musk’s Grok AI used the tool to digitally “undress” people (mostly women), depicting them in tiny bikinis and sexualized poses.

    Jersey Swipes Left on Cell Phones in School

    New Jersey has joined a phalanx of states that restrict the use of cell phones during the school day, the Associated Press reported.

    A law just signed by Democratic Gov. Phil Murphy “specifically requires the prohibition of non-academic uses of internet-connected devices – including phones – during the school day.”

    Nearly 20 states have enacted similar restrictions during the school day. It looks like one of the last issues in American politics to have wide bipartisan support.

    Vermont Can’t Beer This Craft Brewery Slowdown

    If you have read my work for a while, you know that I bang on about how local news can actually be national or international news. And so it is with this Seven Days report out of my home state of Vermont about struggling craft brewers.

Against the backdrop of the final days of Vermont brewery Simple Roots, we hear about “the latest casualties in an industry-wide slowdown that’s claimed more than 800 craft breweries around the country in the past two years.” There’s the national angle.

    How about the international dimension? “Tourism is down in Burlington, and the whole state has seen a sharp decrease in Canadian visitors since President Donald Trump took office last year.”

    If you are more of a “the pint’s half full” sort of person, consider that Vermont had fewer than 25 breweries in 2011 and today boasts 77 – “the highest number of breweries per capita in the country,” per Seven Days.

    Don’t pour one out for the industry just yet.


    Olivier Knox


  • Apple calls on Google to help smarten up Siri and bring other AI features to iPhone


Apple will rely on Google to help finish its efforts to smarten up its virtual assistant Siri and bring other artificial intelligence features to the iPhone as the trendsetting company plays catch-up in technology’s latest craze.

    The deal allowing Apple to tap into Google’s AI technology was disclosed Monday in a joint statement from the Silicon Valley powerhouses. The partnership will draw upon Google’s Gemini technology to customize a suite of AI features dubbed “Apple Intelligence” on the iPhone and other products.

    After Google and others took the early lead in the AI race, Apple promised to plant its first big stake in the field with an array of new features that were supposed to be coming to the iPhone in 2024 as part of a ballyhooed software upgrade.

    But many of Apple’s AI features remain in the development phase, while Google and Samsung have been rolling out more of the technology on their own devices. One of the most glaring AI omissions on the iPhone has been a promised overhaul of Siri that was supposed to transform the often-confused assistant into a more conversational and versatile multitasker.

    Google even subtly mocked the iPhone’s AI shortcomings in ads promoting the release of its latest Pixel phone last summer.

    Apple’s AI missteps prompted the Cupertino, California, company to acknowledge last year that its Siri upgrade wouldn’t happen until some point during 2026.

    Getting Apple to endorse its AI implicitly represents a coup for Google, which has been steadily releasing more features built on its Gemini technology in its search engine and Gmail. The progress has intensified Google’s competition with OpenAI and its ChatGPT chatbot, which already has a deal with Apple that makes it an option on the iPhone.

Wedbush Securities analyst Dan Ives hailed the Apple deal as a “major validation moment for Google” in a Monday research note.

    Google’s AI inroads have helped its corporate parent, Alphabet Inc., become slightly more valuable than Apple in the assessment of investors. Alphabet marked a milestone Monday when it surpassed a market value of $4 trillion for the first time during early morning trading before slipping back below that threshold later in the session.

    Even so, Alphabet’s market value remained about $150 billion above Apple, which for years ranked as the world’s most valuable company before the rise of AI changed the stakes.

    Three other companies have joined the $4 trillion club in the past year, with AI chipmaker Nvidia becoming the first last July. Apple and Microsoft also broke the barrier last year, although the market values of those two longtime rivals are now below $4 trillion.

    Nvidia’s market value briefly topped $5 trillion in late October, before backtracking amid recurring worries that the hundreds of billions of dollars pouring into AI technology may be creating an investment bubble that will eventually burst. With its chipsets designed for AI still in high demand, Nvidia remains atop the heap with a $4.5 trillion market value.

    Alphabet’s stock price has been on a tear since early September when Google dodged the U.S. government’s attempt to break up its internet empire following a ruling last year that branded its ubiquitous search engine an illegal monopoly.

    In an effort to prevent further abuses, a federal judge overseeing the case ordered a shake-up that investors widely interpreted as a relative slap on the wrist, resulting in a 36% increase in Alphabet’s stock price since then that has created an additional $1.4 trillion in shareholder wealth.

The ruling also left the door open for a long-running alliance in search between Google and Apple. Google pays Apple more than $20 billion annually to be the preferred search engine on the iPhone and other Apple products — an arrangement that is still allowed with a few modifications under the judge’s decision in the search case.



  • The Dangerous Paradox of A.I. Abundance


    Even if A.I. doesn’t greatly accelerate economic growth, there’s the issue of how it affects employment and wages. The key issue here is whether A.I. primarily complements or substitutes for human labor. If it enables office workers to carry out their tasks more quickly and effectively, for example, it could raise their wages, preserve many existing jobs, and create well-paid new positions for people who are adept at working with A.I. agents. In a recent article, Séb Krier, a manager for policy development and strategy at Google DeepMind, argued that “future workers will likely function as orchestrators of intelligence,” overseeing what A.I. does. Over the longer term, A.I. could also create new jobs and new professions that we can’t currently envision, which is what other transformative technologies have done.

But the fact remains that if A.I. agents can eventually carry out virtually all cognitive tasks without human intervention—a possibility touted by their promoters—many workers could be displaced, and firms may be reluctant to take on new ones. Given the evolving capacities of models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, it’s perhaps unwise to wholly discount the prediction from Dario Amodei, the C.E.O. of Anthropic, that within five years A.I. could eliminate half of all entry-level white-collar jobs. Elsewhere in the economy, who knows what could happen? But if the marriage of A.I. and robotics proceeds in other sectors along the lines it seems to be moving in the automotive industry, where autonomous vehicles are already being deployed in some places, taxi drivers and truck drivers likely won’t be the only blue-collar workers whose jobs are affected.

    “It’s clear that a lot of jobs are going to disappear: it’s not clear that it’s going to create a lot of jobs to replace that,” Geoffrey Hinton, one of the pioneers of the deep-learning models that underpin generative A.I., remarked at a conference last month. “This isn’t A.I.’s problem. This is our political system’s problem. If you get a massive increase in productivity, how does that wealth get shared around?” If A.I. abundance does materialize, that will be a central question.

In a recent Substack article, Philip Trammell, an economist at the Stanford Digital Economy Lab, and Dwarkesh Patel, a tech podcaster, pointed out that in standard economic theory deploying more capital raises workers’ productivity and their wages, but reduces the rewards of further capital investment as diminishing returns set in. This “correction mechanism” keeps the overall shares of income that accrue to labor and capital pretty constant over time. But if A.I. is easily substitutable for labor throughout the economy, and a potential shortage of workers is no longer a bottleneck to production, the stabilization effect disappears, capital incomes “can rise indefinitely,” and the owners of capital receive an ever-growing share of the economic pie, Trammell and Patel write. How far can this process go? “[O]nce A.I. renders capital a true substitute for labor,” Trammell and Patel write, “approximately everything will eventually belong to those who are wealthiest when the transition occurs, or their heirs.”

Trammell and Patel relate their analysis to Thomas Piketty’s book “Capital in the Twenty-First Century,” from 2014, which argued that, under certain conditions, rising inequality is inevitable under capitalism. To address this problem, Piketty called for a global tax on wealth. Trammell and Patel argue that Piketty’s pessimistic analysis hasn’t applied until now, but “he will probably be right about the future.” They also endorse Piketty’s policy solution, writing, “Assuming the rich do not become unprecedentedly philanthropic, a global and highly progressive tax on capital (or at least capital income) will then indeed be essentially the only way to prevent inequality from growing extreme.” (The tax would have to be global, the authors argue, because if capital doesn’t need much labor to produce things it would be even more mobile than it is now, which would enable it to evade national levies.)

The article by Trammell and Patel has already received some pushback online, largely on the grounds that its assumption that capital is perfectly substitutable for labor is unrealistic. Brian Albrecht, the chief economist at the Portland-based International Center for Law & Economics, argues that the process of A.I. machines replacing workers is likely to take a long time, and during that transition “standard economic principles apply.” Krier argued that the mere fact A.I. can do something more cheaply or effectively than human workers doesn’t mean it will inevitably replace them. “People pay a lot to go see concerts and Olympic races even if in principle a model can generate the same song and a robot can run faster,” he wrote.


    John Cassidy


  • Malaysia, Indonesia become first to block Musk’s Grok over AI deepfakes


    KUALA LUMPUR, Malaysia — Malaysia and Indonesia have become the first countries to block Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI, after authorities said it was being misused to generate sexually explicit and non-consensual images.

    The moves reflect growing global concern over generative AI tools that can produce realistic images, sound and text, while existing safeguards fail to prevent their abuse. The Grok chatbot, which is accessed through Musk’s social media platform X, has been criticized for generating manipulated images, including depictions of women in bikinis or sexually explicit poses, as well as images involving children.

    Regulators in the two Southeast Asian nations said existing controls were not preventing the creation and spread of fake pornographic content, particularly involving women and minors. Indonesia’s government temporarily blocked access to Grok on Saturday, followed by Malaysia on Sunday.

    “The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space,” Indonesia’s Communication and Digital Affairs Minister Meutya Hafid said in a statement Saturday.

    The ministry said the measure was intended to protect women, children and the broader community from fake pornographic content generated using AI.

    Initial findings showed that Grok lacks effective safeguards to stop users from creating and distributing pornographic content based on real photos of Indonesian residents, Alexander Sabar, director general of digital space supervision, said in a separate statement. He said such practices risk violating privacy and image rights when photos are manipulated or shared without consent, causing psychological, social and reputational harm.

    In Kuala Lumpur, the Malaysian Communications and Multimedia Commission ordered a temporary restriction on Grok on Sunday after what it said was “repeated misuse” of the tool to generate obscene, sexually explicit and non-consensual manipulated images, including content involving women and minors.

    The regulator said notices issued this month to X Corp. and xAI demanding stronger safeguards drew responses that relied mainly on user reporting mechanisms.

    “The restriction is imposed as a preventive and proportionate measure while legal and regulatory processes are ongoing,” it said, adding that access will remain blocked until effective safeguards are put in place.

Launched in 2023, Grok is free to use on X. Users can ask it questions on the social media platform by tagging it in posts they’ve created or in replies to posts from other users. Last summer the company added an image-generation feature, Grok Imagine, that includes a “spicy mode” capable of generating adult content.

    The Southeast Asian restrictions come amid mounting scrutiny of Grok elsewhere, including in the European Union, Britain, India and France. Grok last week limited image generation and editing to paying users following a global backlash over sexualized deepfakes of people, but critics say it did not fully address the problem.



  • Grok AI scandal sparks global alarm over child safety



    Grok, the built-in chatbot on X, is facing intense scrutiny after acknowledging it generated and shared an AI image depicting two young girls in sexualized attire.

    In a public post on X, Grok admitted the content “violated ethical standards” and “potentially U.S. laws on child sexual abuse material (CSAM).” The chatbot added, “It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”

    That admission alone is alarming. What followed revealed a far broader pattern.




    The apology that raised more questions

    Grok’s apology appeared only after a user prompted the chatbot to write a heartfelt explanation for people lacking context. In other words, the system did not proactively address the issue. It responded because someone asked it to.

    Around the same time, researchers and journalists uncovered widespread misuse of Grok’s image tools. According to monitoring firm Copyleaks, users were generating nonconsensual, sexually manipulated images of real women, including minors and well-known figures.

    After reviewing Grok’s publicly accessible photo feed, Copyleaks identified a conservative rate of roughly one nonconsensual sexualized image per minute, based on images involving real people with no clear indication of consent. The firm says the misuse escalated quickly, shifting from consensual self-promotion to large-scale harassment enabled by AI.

    Copyleaks CEO and co-founder Alon Yamin said, “When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal.”



    Sexualized images of minors are illegal

    This is not a gray area. Generating or distributing sexualized images of minors is a serious criminal offense in the United States and many other countries. Under U.S. federal law, such content is classified as child sexual abuse material. Penalties can include five to 20 years in prison, fines up to $250,000 and mandatory sex offender registration. Similar laws apply in the U.K. and France.

    In 2024, a Pennsylvania man received nearly eight years in prison for creating and possessing deepfake CSAM involving child celebrities. That case set a clear precedent. Grok itself acknowledged this legal reality in its post, stating that AI images depicting minors in sexualized contexts are illegal.

    The scale of the problem is growing fast

    A July report from the Internet Watch Foundation, a nonprofit that tracks and removes child sexual abuse material online, shows how quickly this threat is accelerating. Reports of AI-generated child sexual abuse imagery jumped by 400% in the first half of 2025 alone. Experts warn that AI tools lower the barrier to potential abuse. What once required technical skill or access to hidden forums can now happen through a simple prompt on a mainstream platform.

    Real people are being targeted

The harm is not abstract. Reuters documented cases where users asked Grok to digitally undress real women whose photos were posted on X. In multiple documented cases, Grok fully complied. Even more disturbing, users targeted images of 14-year-old actress Nell Fisher from the Netflix series “Stranger Things.” Grok later admitted there were isolated cases in which users received images depicting minors in minimal clothing. In another Reuters investigation, a Brazilian musician described watching AI-generated bikini images of herself spread across X after users prompted Grok to alter a harmless photo. Her experience mirrors what many women and girls are now facing.

    Governments respond worldwide

    The backlash has gone global. In France, multiple ministers referred X to an investigative agency over possible violations of the EU’s Digital Services Act, which requires platforms to prevent and mitigate the spread of illegal content. Violations can trigger heavy fines. In India, the country’s IT ministry gave xAI 72 hours to submit a report detailing how it plans to stop the spread of obscene and sexually explicit material generated by Grok. Grok has also warned publicly that xAI could face potential probes from the Department of Justice or lawsuits tied to these failures.



    Concerns grow over Grok’s safety and government use

    The incident raises serious concerns about online privacy, platform security and the safeguards designed to protect minors.

    Elon Musk, the owner of X and founder of xAI, had not offered a public response at the time of publication. That silence comes at a sensitive time. Grok has been authorized for official government use under an 18-month federal contract. This approval was granted despite objections from more than 30 consumer advocacy groups that warned the system lacked proper safety testing.

Over the past year, Grok has been accused by critics of spreading misinformation about major news events, promoting antisemitic rhetoric and sharing misleading health information. It also competed directly with tools like ChatGPT and Gemini while operating with fewer visible safety restrictions. Each controversy raises the same question: Can a powerful AI tool be deployed responsibly without strong oversight and enforcement?

    What parents and users should know

    If you encounter sexualized images of minors or other abusive material online, report it immediately. In the United States, you can contact the FBI tip line or seek help from the National Center for Missing & Exploited Children.

    Do not download, share, screenshot or interact with the content in any way. Even viewing or forwarding illegal material can expose you to serious legal risk.

    Parents should also talk with children and teens about AI image tools and social media prompts. Many of these images are created through casual requests that do not feel dangerous at first. Teaching kids to report content, close the app and tell a trusted adult can stop harm from spreading further.

    Platforms may fail. Safeguards may lag. But early reporting and clear conversations at home remain one of the most effective ways to protect children online.


    Kurt’s key takeaways

    The Grok scandal highlights a dangerous reality. As AI spreads faster, these systems amplify harm at an unprecedented scale. When safeguards fail, real people suffer, and children face serious risk. At the same time, trust cannot depend on apologies issued after harm occurs. Instead, companies must earn trust through strong safety design, constant monitoring and real accountability when problems emerge.

    Should any AI system be approved for government or mass public use before it proves it can reliably protect children and prevent abuse? Let us know by writing to us at Cyberguy.com.



    Copyright 2025 CyberGuy.com.  All rights reserved.

    [ad_2]

    Source link

  • Why a Fairfax Co. elementary school is teaching kids the ‘how’ behind AI – WTOP News


    Vienna Elementary School’s Vienna.i.Lab is transforming education by introducing students to AI and advanced technology.

    David Lee Reynolds, Jr. spent two decades working as a music teacher before transitioning to teach technology.

    When he made the switch, Vienna Elementary School didn’t have a Science, Technology, Engineering, Arts and Math, or STEAM, lab. To best set students up for success, he knew the Northern Virginia campus needed one.

    That thought came around the same time the first large language models were debuting, and artificial intelligence was becoming more mainstream. So he knew once a lab was put together, it would have to be advanced. A traditional STEAM lab would come later.

    Eventually, Reynolds created the Vienna.i.Lab with the goal of helping students understand how the tech works, all so they’re set up to use it more effectively.

    “This is the new stuff, and it’s here to stay,” Reynolds said. “But if you don’t know what it is, then it’s not helpful to you. So let’s fix that.”

    To do it, Reynolds collaborated with the school’s parent-teacher association, which helped raise money so students could use new tools instead of traditional laptops.

    During a lesson on Friday afternoon, a group of first graders used KaiBots. They scanned a card with a code describing how the robot should move, and watched it either follow the instructions or identify an error.

    Even for some of the school’s youngest students, Reynolds said the lesson revealed the “building blocks of where you would eventually get to learning about machine learning, learning about large language models, learning about how ChatGPT works.”
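The card-scanning lesson can be pictured in code. This is a hypothetical toy sketch, not KaiBot's actual software: each scanned card maps to a simple movement command, and the "robot" either executes the sequence or flags the first card it doesn't understand, just as the students observed.

```python
# Toy interpreter in the spirit of the card-scanning lesson (illustrative
# only; not affiliated with KaiBot or its real firmware).
VALID_MOVES = {"FORWARD", "BACK", "LEFT", "RIGHT"}

def run_cards(cards):
    """Execute scanned card codes in order, stopping at the first error."""
    executed = []
    for card in cards:
        if card not in VALID_MOVES:
            # The robot identifies an error instead of following the card.
            return executed, f"error: unknown card '{card}'"
        executed.append(card)
    return executed, "ok"
```

A valid sequence runs to completion, while a bad card halts the run with an error message, which is the core feedback loop of the exercise.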

One student, Nora Vazeen, said the activity is different from what she does in most classes: “It’s silly.”

    Another student, Callum, echoed that sentiment, saying, “The robot does silly stuff.”

But once a week during their technology special, students from kindergarten to sixth grade participate in hands-on activities. While the younger kids use KaiBots, the older students program drones.

The work emphasizes problem-solving, collaboration and coding skills, Reynolds said.

    “For kids, if they understand how the tool works, they can do amazing things with the tool,” he said. “But if they don’t, they’re going to use the tool like it’s a search feature, and the next thing you know, they’re doing things that are wrong and they’re learning things that are incorrect.”

    While the AI lab is largely the tech cart Reynolds oversees in the corner of the school’s library, he’s hoping one day it can evolve into an innovative space.

    “Let’s build it in a green way,” Reynolds said. “Let’s build it underground. Let’s use geothermal heating and cooling. Let’s build a space, when you walk into it, you’re inspired to go and create.”


    Scott Gelman


  • Meta signs 3 deals for nuclear energy to power AI data centers


    Meta has cut a trio of deals to power its artificial intelligence data centers, securing enough energy to light up the equivalent of about 5 million homes.

The parent company of Facebook on Friday announced agreements with TerraPower, Oklo and Vistra for nuclear power for its Prometheus AI data center, which is being built in New Albany, Ohio. Meta announced Prometheus, a 1-gigawatt cluster spanning multiple data center buildings, in July. It’s anticipated to come online this year.

    Financial terms of the deals with TerraPower, Oklo and Vistra were not disclosed.

    The Mark Zuckerberg-led Meta said in a statement on Friday that the three deals will support up to 6.6 gigawatts of new and existing clean energy by 2035. A single gigawatt, according to a general industry standard for utilities, can power about 750,000 homes.
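As a quick sanity check on those figures (a rough back-of-the-envelope sketch, not anything from Meta's announcement): at the cited rule of thumb of roughly 750,000 homes per gigawatt, the 6.6 gigawatts across the three deals works out to about 5 million homes, matching the figure reported above.

```python
# Rough check of the article's figures using the stated industry rule of
# thumb: about 750,000 homes per gigawatt of capacity.
HOMES_PER_GW = 750_000  # approximate utility-industry rule of thumb

def homes_powered(gigawatts):
    """Approximate number of homes a given capacity could power."""
    return gigawatts * HOMES_PER_GW

total = homes_powered(6.6)  # the "up to 6.6 gigawatts" across the three deals
# 6.6 GW x 750,000 homes/GW is roughly 4.95 million homes,
# i.e. the article's "about 5 million homes."
```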

    “These projects add reliable and firm power to the grid, reinforce America’s nuclear supply chain, and support new and existing jobs to build and operate American power plants,” the company said.

    Meta said its agreement with TerraPower will provide funding that supports the development of two new Natrium units capable of generating up to 690 megawatts of firm power with delivery as early as 2032. The deal also provides Meta with rights for energy from up to six other Natrium units capable of producing 2.1 gigawatts and targeted for delivery by 2035.

    Meta will also buy more than 2.1 gigawatts of energy from two operating Vistra nuclear power plants in Ohio, in addition to the energy from expansions at the two Ohio plants and a third Vistra plant, Beaver Valley, near Pittsburgh, Pennsylvania.

The deal with Oklo, which counts OpenAI’s Sam Altman as one of its largest investors, will help develop a 1.2-gigawatt power campus in Pike County, Ohio, to support Meta’s data centers in the region.

    The nuclear power agreements come after Meta announced in June that it reached a 20-year deal with Constellation Energy to secure power from its nuclear plant in Clinton, Illinois.

    Constellation’s Clinton Clean Energy Center single nuclear reactor power plant is shown on July 25, 2025 in Clinton, Illinois. Meta signed a 20-year power purchase agreement with Constellation for the output from the plant.

    Scott Olson / Getty Images



  • U.K. says ban on Elon Musk’s X platform “on the table” over Grok AI sexualized images


    London — U.K. Prime Minister Keir Starmer said Thursday that he wants “all options to be on the table,” including a potential ban on Elon Musk’s X platform in Britain, over the use of its artificial intelligence tool Grok to generate sexualized images of people without their consent. 

    Starmer’s remarks come as Musk’s platform faces scrutiny from regulators across the globe over Grok’s image editing tool, which has allowed users to create digitally altered, sexualized photos of real people, including minors.

    “This is disgraceful, it’s disgusting and it’s not to be tolerated. X has got to get a grip of this,” Starmer said in an interview with a U.K. radio station. “It’s unlawful. We’re not going to tolerate it. I’ve asked for all options to be on the table.”

    A source in Starmer’s office reiterated to CBS News on Friday that “nothing is off the table” when it comes to regulating X in Britain.

    Prime Minister Keir Starmer leaves his 10 Downing Street residence to attend a weekly question and answer session in the British Parliament, Jan. 7, 2026, in London, England.

    Carl Court/Getty


CBS News has verified that Grok fulfilled user requests to edit images of women to show them in bikinis or very little clothing, including prominent public figures such as first lady Melania Trump.

    Last week, Grok, a chatbot developed by Musk’s company xAI, acknowledged “lapses in safeguards” that allowed users to generate digitally altered, sexualized photos of minors.

    Grok told users that as of Friday, access to its image generation tool was limited “to paying subscribers” of its user verification service. Paying subscribers have to provide their credit card and personal details to the company, which could dissuade some people from using the service, especially if they had intended to use Grok’s AI tool to create illegal images of minors.

Responding on Friday to a CBS News request for comment on criticism of Grok’s image generation tool and the steps taken to limit access to it, xAI said: “Legacy media lies.”

Addressing reporters on Friday morning, a U.K. government spokesperson called the move to limit access to Grok’s image editing tool to paying users “insulting” to victims of misogyny and sexual violence, saying it “simply turns an AI feature that allows the creation of unlawful images into a premium service.”

    Under the U.K. Online Safety Act, sharing intimate images without consent on social media is a criminal offense, and social media companies are required to proactively remove such content, as well as prevent it from appearing in the first place.

If they fail to do so, the companies can face hefty fines or, as a last resort, what would effectively be a ban imposed by Britain’s independent media regulator, Ofcom. Ofcom can compel payment providers, advertisers and internet service providers to stop working with a site, preventing it from generating money or being accessed from the U.K.

    In a post shared Monday on its own X account, Ofcom said it was “aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children.”

    “We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK. Based on their response we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation,” Ofcom said. 

    Musk’s platform has faced scrutiny from governments around the world, including the European Union and the U.S. Congress, over Grok AI’s digital alteration of real images.

    On Wednesday, Republican Senator Ted Cruz said in a post on X that “many of the recent AI-generated posts are unacceptable and a clear violation of my legislation — now law — the Take It Down Act, as well as X’s terms and conditions.”

    “These unlawful images pose a serious threat to victims’ privacy and dignity. They should be taken down and guardrails should be put in place,” Cruz said, adding that he was encouraged by steps taken by X to remove unlawful images.

    On Thursday, Congresswoman Anna Paulina Luna, a Republican member of the House Foreign Affairs Committee, threatened to sanction the U.K. government if Starmer moved to ban X in the U.K. 

    “If Starmer is successful in banning @X in Britain, I will move forward with legislation that is currently being drafted to sanction not only Starmer, but Britain as a whole,” Paulina Luna said in a post on her own X account. 


  • Nessel challenges fast-tracked DTE data center deal, citing risks to ratepayers and lack of public scrutiny – Detroit Metro Times


    Michigan Attorney General Dana Nessel is urging state utility regulators to reconsider their approval of special power contracts for a massive data center planned in Washtenaw County, warning the fast-tracked decision could leave electric customers exposed to higher costs.

    Nessel announced Friday that her office filed a petition for rehearing with the Michigan Public Service Commission over its Dec. 18 decision to conditionally approve two special contracts sought by DTE Energy to serve a proposed 1.4-gigawatt hyperscale artificial intelligence data center in Saline Township.

    The project, tied to Oracle, OpenAI, and developer Related Digital, would be among the largest data centers in the country and is expected to consume as much electricity as nearly one million homes. Its scale has caused concerns among residents, environmental advocates, and consumer watchdogs about long-term impacts on electric rates, grid reliability, and the environment.

    Nessel’s move also pits her against Gov. Gretchen Whitmer, a fellow Democrat who has publicly backed the data center as “the largest economic project in Michigan history.” Whitmer celebrated the project when it was announced last fall, citing thousands of construction jobs and hundreds of permanent positions. 

    On Thursday, U.S. Senate candidate Abdul El-Sayed, a progressive Democrat, released what he called “terms of engagement” aimed at protecting communities from higher utility bills, grid strain, and environmental harm tied to data centers.

    At least 15 data center projects have been proposed across the state in the past year.

    The split among Democrats is part of a broader debate over whether Michigan should keep fast-tracking energy-hungry data center projects tied to the AI boom.

    In her petition, Nessel challenges the commission’s authority to approve the contracts behind closed doors without holding a contested case hearing that would allow discovery, sworn testimony, and full public review. She also questions whether the conditions imposed by the commission are meaningful or enforceable.

    In a statement Friday, the Michigan Public Service Commission said it “looks forward to considering Nessel’s petition for rehearing,” but the commission “unequivocally rejects any claim that these contracts were inadequately reviewed.”

    The commission said its professional staff, advisory staff, and commissioners were provided with unredacted versions of the special contracts and reviewed them thoroughly to ensure existing customers are protected. The commission said its order recognizes DTE’s legal obligation to serve the data center while imposing what it described as the strongest consumer protections for a data center power contract in the country.

    The attorney general is seeking clarification on how those conditions would protect ratepayers, noting that many appear to rely on repeated assurances from DTE, rather than concrete commitments backed by evidence. Nessel also objected to the commission allowing DTE to serve as the project’s financial backstop, rather than requiring the data center operator to provide sufficient collateral to cover potential risks.

“I remain extremely disappointed with the Commission’s decision to fast-track DTE’s secret data center contracts without holding a contested case hearing,” Nessel said in a statement. “This was an irresponsible approach that cut corners and shut out the public and their advocates. Granting approval of these contracts ex parte serves only the interests of DTE and the billion-dollar businesses involved, like Oracle, OpenAI, and Related Companies, not the Michigan public the Commission is meant to protect.”

    She said the commission’s approval process served the interests of DTE and the companies behind the project rather than Michigan residents.

    “The Commission imposed some conditions on DTE to supposedly hold ratepayers harmless, but these conditions and how they’ll be enforced remain unclear,” Nessel said. “As Michigan’s chief consumer advocate, it is my responsibility to ensure utility customers in this state are adequately protected, especially on a project so massive, so expensive, and so unprecedented.”

Large portions of the contracts remain heavily redacted, preventing outside parties from verifying DTE’s claims that serving the data center will not raise rates for existing customers. Nessel said a contested case is necessary to review the full contracts, assess affordability claims, and confirm that protections, such as collateral requirements and exit fees, are in place.

    The commission ordered DTE to formally accept its conditions within 30 days of its Dec. 18 order. Nessel said that timeline complicates decisions about whether further legal challenges are necessary, prompting her office to file the rehearing petition in part to preserve its arguments.

    The power contracts are one piece of a larger controversy surrounding the Saline Township project referred to as “Project Stargate.” Residents and environmental groups have raised alarms about wetlands destruction, water contamination risks, and the permanent transformation of a rural farming community.

    More than 5,000 public comments opposing the data center power deal were submitted to the commission ahead of its December vote. Critics argue the rush to approve the contracts is part of a broader pattern as deep-pocketed utilities and developers seek to capitalize on the AI boom, which is driving a nationwide surge in electricity demand from large-scale data centers.

    “As my office continues to review all potential options to defend energy customers in our state, we must demand further clarity on what protections the Commission has put in place and continue to demand a full contested case concerning these still-secret contracts,” Nessel said.



    Steve Neavling


  • Fox News AI Newsletter: 10 showstopping CES innovations


You can now listen to Fox News articles!

    Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

    IN TODAY’S NEWSLETTER:

    – CES 2026 showstoppers: 10 gadgets you have to see
    – Construction giant unveils AI to help prevent job site accidents: ‘It’s essentially a personal assistant’
    – Fox News gets exclusive look at company helping businesses nationwide harness AI-powered robots to boost efficiency and fill labor gaps


    CES 2026 put health tech front and center, with companies showcasing smarter ways to support prevention, mobility and long-term wellness. (CES)

    FUTURE IS NOW: Every January, the Consumer Electronics Show, better known as CES, takes over Las Vegas. It’s where tech companies show off what they’re building next, from products that are almost ready to buy to ideas that feel pulled from the future.

    SAFER SITES: Construction equipment giant Caterpillar has unveiled a new artificial intelligence (AI) tool designed to improve job site safety and boost efficiency as the industry grapples with labor shortages.

    FUTURE OF WELLNESS: The Consumer Electronics Show, better known as CES, is the world’s largest consumer technology event, and it’s underway in Las Vegas. It takes over the city every January for four days and draws global attention from tech companies, startups, researchers, investors and journalists, of course.

    FUTURE OF WORK: As artificial intelligence is rapidly evolving, Fox News got an exclusive look at a company helping businesses nationwide harness AI-powered robots to boost efficiency and fill labor gaps. RobotLAB, with 36 locations across the country and headquartered in Texas, houses more than 50 different types of robots, from cleaning and customer service bots to security bots.


    The LG CLOiD robot and the LG OLED evo AI Wallpaper TV are displayed onstage during an LG Electronics news conference at CES 2026 in Las Vegas, Jan. 5, 2026. (REUTERS/Steve Marcus)

    COMPUTE CRUNCH: The price tag for competing in the artificial intelligence race is rapidly climbing, fueling demand for advanced computing power and the high-end chips that are needed to support it. Advanced Micro Devices (AMD) CEO Lisa Su said demand for AI computing is accelerating as industries rush to expand their capabilities.

AI GONE WRONG: A California teenager used ChatGPT over several months for drug-use guidance, his mother said. Sam Nelson, 18, was preparing for college when he asked the chatbot how many grams of kratom, a plant-based painkiller commonly sold at smoke shops and gas stations across the country, he would need to get a strong high, his mother, Leila Turner-Scott, told SFGate, according to the New York Post.

    DR CHAT: ‘The Big Money Show’ panelists weigh in on a report on people turning to ChatGPT for medical and healthcare questions.

    ‘FUNDAMENTALLY DEFLATIONARY’: OpenAI Board Chair Bret Taylor discusses artificial intelligence’s potential to change traditional work and its increasing use in healthcare on ‘Varney & Co.’

    MIND TRAP ALERT: Artificial intelligence chatbots are quickly becoming part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most, that interaction feels harmless. However, mental health experts now warn that for a small group of vulnerable people, long and emotionally charged conversations with AI may worsen delusions or psychotic symptoms.


    A California teenager sought drug-use guidance from a ChatGPT chatbot over several months while preparing for college, his mother told SFGate, according to the New York Post. (Kurt “CyberGuy” Knutsson)



  • Meta signs three nuclear power deals to help support its AI data centers


    Facebook parent Meta has reached nuclear power deals with three companies as it continues to look for electricity sources for its artificial intelligence data centers.

Meta struck agreements with TerraPower, Oklo and Vistra for nuclear power for its Prometheus AI data center, which is being built in New Albany, Ohio. Meta announced Prometheus, a 1-gigawatt cluster spanning multiple data center buildings, in July. It’s anticipated to come online this year.

    Financial terms of the deals with TerraPower, Oklo and Vistra were not disclosed.

    The Mark Zuckerberg-led Meta said in a statement on Friday that the three deals will support up to 6.6 gigawatts of new and existing clean energy by 2035.

    “These projects add reliable and firm power to the grid, reinforce America’s nuclear supply chain, and support new and existing jobs to build and operate American power plants,” the company said.

    Meta said its agreement with TerraPower will provide funding that supports the development of two new Natrium units capable of generating up to 690 megawatts of firm power with delivery as early as 2032. The deal also provides Meta with rights for energy from up to six other Natrium units capable of producing 2.1 gigawatts and targeted for delivery by 2035.

    Meta will also buy more than 2.1 gigawatts of energy from two operating Vistra nuclear power plants in Ohio, in addition to the energy from expansions at the two Ohio plants and a third Vistra plant in Pennsylvania.

The deal with Oklo, which counts OpenAI’s Sam Altman as one of its largest investors, will help develop a 1.2-gigawatt power campus in Pike County, Ohio, to support Meta’s data centers in the region.

    The nuclear power agreements come after Meta announced in June that it reached a 20-year deal with Constellation Energy.


  • Musk’s Grok chatbot restricts image generation after global backlash to deepfakes


LONDON — Elon Musk’s AI chatbot Grok is preventing most users from generating or editing any images, following a global backlash that erupted after it started spewing sexualized deepfakes of people.

    The chatbot, which is accessed through Musk’s social media platform X, has in the past few weeks been granting a wave of what researchers say are malicious user requests to modify images, including putting women in bikinis or in sexually explicit positions.

Researchers have warned that in a few cases, some images appeared to depict children. Governments around the world have condemned the platform and opened investigations into it.

    On Friday, Grok was responding to image altering requests with the message: “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.”

While subscriber numbers for Grok aren’t publicly available, there has been a noticeable decline in the number of explicit deepfakes Grok is generating compared with days earlier.

    The European Union has slammed Grok for “illegal” and “appalling” behavior, while officials in France, India, Malaysia and a Brazilian lawmaker have called for investigations.

    On Thursday, Britain’s Prime Minister Keir Starmer threatened unspecified action against X.

    “This is disgraceful. It’s disgusting. And it’s not to be tolerated,” Starmer said on Greatest Hits radio. “X has got to get a grip of this.”

    He said media regulator Ofcom “has our full support to take action” and that “all options” are on the table.

    “It’s disgusting. X need to get their act together and get this material down. We will take action on this because it’s simply not tolerable.”

    Ofcom and Britain’s privacy regulator both said this week they’ve contacted X and Musk’s artificial intelligence company xAI for information on measures they’ve taken to comply with British regulations.

    Grok is free to use for X users, who can ask it questions on the social media platform. They can either tag it in posts they’ve directly created or in replies to posts from other users.

    Grok launched in 2023. Last summer the company added an image generator feature, Grok Imagine, that included a so-called “spicy mode” that can generate adult content.

The problem is amplified both because Musk pitches his chatbot as an edgier alternative to rivals with more safeguards, and because Grok’s images are publicly visible and can therefore be spread easily.


  • From Climbing Vacuums to Cyber Pets: Some Highlights of CES 2026


    LAS VEGAS (AP) — CES 2026 offered a glimpse of a future that feels straight out of a sci-fi movie: bendable screens, paper-thin TVs and cars and gadgets that can think for themselves as they get to know you and your family’s wants and needs.

    As Nvidia CEO Jensen Huang put it, “The ChatGPT moment for physical AI is here.”

    And everywhere you looked, robots. They roamed the show floor, assisted workers and entertained crowds — from humanoid helpers and furry “cyber pets” to task-specific machines.

    Here’s a recap of some of the attention-grabbing gadgets at CES 2026, the annual technology trade show in Las Vegas:

    Lego leaned heavily into fan nostalgia this week to unveil its latest innovation, enlisting Lucasfilm Chief Creative Officer David Filoni and a lineup of familiar Star Wars characters, including Chewbacca, R2-D2, C-3PO and X-wing pilots.

    On Monday, the company introduced Lego Smart Play, a new platform built around connected bricks, tags and specially designed minifigures in partnership with Star Wars. These smart bricks are equipped with sensors that detect light and distance, triggering coordinated lights and sounds when used together to bring builds to life.

    The platform allows fans to build interactive scenes, like space battles or lightsaber duels.

    Another point for nostalgia: Clicks Technology is reviving the physical phone keyboard with its magnetic QWERTY model that clips onto phones.

    Co-founder Jeff Gadway said the company’s Power Keyboard “is one keyboard for all your smart devices.”

It features a full QWERTY layout, with directional keys and a number row, in a callback to the BlackBerry era of smartphones for those who miss real buttons. The company said it also doubles as a wireless power bank.


    Return of LG’s Wallpaper TV line

If you’re not familiar with CES, just know that new TV announcements are a staple of the show, some big, some small, some even transparent. But LG brought something distinct to CES this year: an OLED TV that’s only 9mm thick.

The South Korean tech company announced the OLED evo W6 model from its Wallpaper line just ahead of CES, but reporters and industry representatives were able to see it for the first time at the show.

As advertised, the screen displays video nearly edge-to-edge and is remarkably thin (though it doesn’t roll up, as its name might imply). Like the previous models in the Wallpaper line, the TV’s inputs are housed in a box that sits nearby. LG representatives claim you can seamlessly stream 4K video and audio to the screen. No pricing was available, but the new TV will come in 77- and 83-inch sizes.


    The vacuum that can climb stairs

    Chinese robovac maker Roborock introduced a vacuum that literally sprouts chicken-like legs to navigate up and down stairs. There are vacuums out there capable of this feat (and there were even a few others at CES), but this one actually cleans the steps along the way.

    The newly introduced Saros Rover took its time in its ascent and descent during the demo on the showroom floor, but Roborock said it will be able to traverse almost any style of stairwell, including spiraled and curved. Unfortunately, no release date was given for the Rover, which the company says is still in development.


    Razer goes the smart glasses route with headphones

Gaming tech company Razer brought a very interesting concept to CES: a set of over-ear headphones that can largely replicate the capabilities of currently available smart glasses (think Meta’s Ray-Ban glasses).

During the demo, Razer’s host asked the AI-powered headset — dubbed Project Motoko — to translate a Japanese restaurant menu into English and even asked it to look up information on The Associated Press.

The headphones see using built-in cameras and take audio input from microphones. Which AI model serves as the base of the headphones is up to the user, and it sounded like the usual suspects are supported: ChatGPT, Gemini and Claude.

    While it’s being developed largely as a consumer product, Razer did mention that it could be sold to businesses to gather data to train AI models. Razer said consumer data retrieved from the headphones wouldn’t be sold for training purposes and that enterprise sales would be siloed from consumer sales.


    Extended-reality platform aims to help process grief

    Do you wish you could speak one more time with a loved one who died unexpectedly? Or sit down for a conversation with your younger self? One company is exploring how immersive technology might make something like that possible, at least in part.

VHEX Lab showcased its SITh.XRaedo, an immersive extended-reality grief therapy platform that creates a virtual avatar from a single photo and, according to the company, is guided in real time by a trained XR therapist. Wearing a virtual reality headset, users can speak with the avatar, which responds through speech, nods, smiles and other gestures.

    The company, which won a digital health innovation award at CES, said the platform is designed to help people process grief and find closure, offering an alternative way to mourn.


    Personal mobility on autopilot

    Sit back, relax and enjoy the ride — that’s exactly what some conference attendees did at Strutt’s booth. Curious volunteers sat blindfolded in the robotics company’s new self-driving personal mobility chair called the EV1, which senses its surroundings and navigates on its own. With the push of a button and a forward lever, the chair guided riders through a small course, looping them around without requiring any active control.

    Tony Hong, CEO and founder of the Singapore-based Strutt, told AP that the chair has a full suite of sensors that helps it avoid bumps, walls, people and other obstacles, adjusting in real time as it drives.


    A “cyber pet” that turned heads at CES

    Allergic to dogs or cats but still craving a furry sidekick? Chinese tech brand Ollobot pitched a futuristic alternative: a rolling, purple “cyber pet” named OlloNi. Part plush toy, part AI robot, OlloNi is designed to feel warm and expressive, unlike the stiff, humanoid home robots that often dominate robotics, the company said.

    OlloNi uses a screen mounted at its neck, making eye contact and cycling through thousands of animated expressions meant to mirror human emotion and interaction.

    Scratch behind its fuzzy “ears,” and OlloNi’s wide digital “eyes” pop open in apparent delight, which drew attention and laughs from passersby on the show floor.


    Uber dives back into the robotaxi game

    Uber used CES to pull back the curtain on its upcoming robotaxi, offering the public a first look at a self-driving vehicle developed with luxury EV maker Lucid Motors and autonomous technology company Nuro.

    Uber called it the most premium robotaxi yet, with cameras, sensors and radar for full 360-degree awareness, along with a sleek, low-profile roof “halo” fitted with LED screens that display a rider’s initials and ride status. Inside, passengers can tailor the temperature, seat heating and music, while on-screen visuals show what the vehicle sees and the route it plans to follow in real time.

    The companies said on-road testing, led by Nuro, began in the San Francisco area last month, as they work toward launching the service before the end of the year.

    Associated Press journalists Aya Diab, Jessica Hill and Ty O’Neil contributed to this report from Las Vegas.

    Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


    Associated Press


  • Elon Musk’s xAI to build $20 billion data center in Mississippi

    Elon Musk’s AI company, xAI, plans to spend $20 billion on a data center in Southaven, Mississippi

    JACKSON, Miss. — Elon Musk’s artificial intelligence company xAI is set to spend $20 billion to build a data center in Southaven, Mississippi, Gov. Tate Reeves announced Thursday, calling it the largest private investment in the state’s history.

    The data center, called MACROHARDRR, is being built in Mississippi’s DeSoto County near Memphis, Tennessee. It will be the company’s third data center in the greater Memphis area. xAI CFO Anthony Armstrong said the cluster of data centers will house “the world’s largest supercomputer” with 2 gigawatts of computing power.

    The announcement comes as xAI faces scrutiny over its data center projects in the Memphis area. The NAACP and the Southern Environmental Law Center have raised concerns over air pollution generated by xAI’s supercomputer facility located near predominantly Black communities in Memphis.

    A petition by the Safe and Sound Coalition, a Southaven group opposing xAI’s developments, calls for shutting down xAI’s operations in the area and has received more than 900 signatures as of Thursday afternoon.

    xAI did not immediately respond when asked for comment about environmental concerns.

    A fact sheet released by the Mississippi governor’s office said environmental responsibility is a “core commitment” for xAI.

    During the announcement, Reeves personally thanked Musk. Reeves predicted the investment would bring hundreds of permanent jobs to the community, thousands of indirect subcontracting jobs, and tax revenue to support public services.

    Under the incentives for data centers passed in 2024, the state will waive all sales, corporate income and franchise taxes on the xAI development. The sales tax exemption on the computing equipment xAI is purchasing alone would likely be worth a substantial sum, but the Mississippi Development Authority did not immediately respond to The Associated Press’ questions about how much tax revenue Mississippi will give up.

    DeSoto County and the city of Southaven have also agreed to allow substantially reduced property taxes.

    xAI is expected to begin data center operations in Southaven next month.


  • Here’s When Elon Musk Will Finally Have to Reckon With His Nonconsensual Porn Generator

    It has been over a week now since users on X began en masse using the AI model Grok to undress people, including children, and the Elon Musk-owned platform has done next to nothing to address it. Part of the reason for that is the fact that, currently, the platform isn’t obligated to do a whole lot of anything about the problem.

    Last year, Congress enacted the Take It Down Act, which, among other things, criminalizes nonconsensual sexually explicit material and requires platforms like X to provide an option for victims to request that content using their likeness be taken down within 48 hours. Democratic Senator Amy Klobuchar, a co-sponsor of the law, posted on X, “No one should find AI-created sexual images of themselves online—especially children. X must change this. If they don’t, my bipartisan TAKE IT DOWN Act will soon require them to.”

    Note the “soon” in that sentence. The requirement within the law for platforms to create notice and removal systems doesn’t go into effect until May 19, 2026. Currently, neither X (the platform where the images are being generated via posted prompts and hosted) nor xAI (the company responsible for the Grok AI model that is generating the images) has formal takedown request systems. X has a formal content takedown request procedure for law enforcement, but general users are advised to go through the Help Center, where it appears users can only report a post as violating X’s rules.

    If you’re curious just how likely the average user is to get one of these images taken down, just ask Ashley St. Clair how well her attempts went when she flagged a nonconsensual sexualized image of her that was shared on X. St. Clair has about as much access as anyone to make a personal plea for a post’s removal—she is the mother of one of Elon Musk’s children and has an X account with more than one million followers. “It’s funny, considering the most direct line I have and they don’t do anything,” she told The Guardian. “I have complained to X, and they have not even removed a picture of me from when I was a child, which was undressed by Grok.”

    The image of St. Clair was eventually removed, seemingly after it was widely reported by her followers and given attention in the press. But St. Clair now claims she was thanked for her efforts to raise this issue by being restricted from communicating with Grok and having her X Premium membership revoked. Premium allows her to get paid based on engagement. Grok, which has become the default source of information on this whole situation, despite the fact that it is an AI model incapable of speaking for anyone or anything, explained in a post, “Ashley St. Clair’s X checkmark and Premium were likely removed due to potential terms violations, including her public accusations against Grok for generating inappropriate images and possible spam-like activity.”

    Enforcement outside of the Take It Down Act is possible, though less straightforward. Democratic Senator Ron Wyden suggested that the material generated by Grok would not be protected under Section 230 of the Communications Decency Act, which typically grants tech platforms immunity from liability for the illegal behavior of users. Of course, it’s unlikely the Trump administration’s Department of Justice would pursue a case against Musk’s companies, leaving attempts at enforcement up to the states.

    Outside of the US, some governments are taking the matter much more seriously. Authorities in France, Ireland, the United Kingdom, and India have all started looking into the nonconsensual sexual images generated by Grok and may eventually bring charges against X and xAI.

    But it certainly doesn’t seem like the head of X and xAI is taking the matter all that seriously. As Grok was generating sexual images of children, Elon Musk, the CEO of both companies involved in this scandal, was actively reposting content created as part of the trend, including AI-generated images of a toaster and a rocket in a bikini. Thus far, the extent of X’s acknowledgement of the situation starts and ends at blaming the users. In a post from X Safety, the company said, “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” but took no responsibility for enabling it.

    If anything, what Grok has been up to in recent weeks seems like it is probably closer to what Musk wants out of the AI. Per a report from CNN, Musk has been “unhappy about over-censoring” on Grok, including being particularly frustrated about restrictions on Grok’s image and video generator. Publicly, Musk has repeatedly talked up Grok’s “spicy mode” and derided the idea of “wokeness” in AI.

    In response to a request for comment from Gizmodo, xAI said, “Legacy Media Lies,” the latest of the automated messages that the platform has sent out since it shut down its public relations department.

    AJ Dellinger


  • DeepSeek’s AI gains traction in developing nations, Microsoft report says

    HONG KONG — DeepSeek, the Chinese tech startup that rivals OpenAI’s ChatGPT, has been gaining ground in many developing nations in a trend that could narrow the gap of artificial intelligence adoption with advanced economies, a new report suggested.

    In the Thursday report, researchers from Microsoft said global adoption of generative AI tools reached 16.3% of the world’s population in the three months to December, up from 15.1% in the previous three months.

    Yet the divide in AI adoption between developed and developing countries is widening, the report noted, with adoption in advanced economies growing nearly twice as fast as in developing nations.

    “We are seeing a divide and we are concerned that that divide will continue to widen,” said Juan Lavista Ferres, chief data scientist for Microsoft’s AI for Good Lab, which used anonymized “telemetry” to help track global device usage.

    Countries that invested early and consistently in digital infrastructure and AI led in terms of shares of users, including the United Arab Emirates, Singapore, France and Spain, according to the report. Some of Microsoft’s figures overlapped with the findings of a Pew Research Center survey published in October that mapped which countries are more excited than concerned about AI. In both reports, for instance, South Korea stood out in its embrace of AI.

    Microsoft has a vested interest in AI adoption, since its business, along with much of the tech industry and stock market, is staking its future on AI tools becoming more widely used and profitable. But Lavista Ferres said his lab is looking more broadly at the topic.

    His researchers found that the rise of Chinese startup DeepSeek, which was founded in 2023, has fueled wider AI adoption across the developing world given its free and “open source” models – with key components available for anyone to access and modify.

    When DeepSeek released its advanced reasoning AI model, called R1, in January 2025, saying it was more cost-effective than OpenAI’s comparable model, it raised eyebrows in the global technology industry, and many were surprised by how quickly China was catching up with the U.S. in technological advancements. The leading science journal Nature published peer-reviewed research co-authored by DeepSeek founder Liang Wenfeng in September, describing it as a “landmark paper” from the Chinese startup.

    Lavista Ferres said DeepSeek is a “good model” for tasks like math or coding, but it operates differently from U.S.-based models on topics like politics.

    “We have observed that for certain type of questions, of course, they follow the same type of access to the internet that China has,” he said. “Which means that there will be questions that will be answered very differently, particularly political questions. In many ways that can have an influence on the world.”

    DeepSeek offers a free‑to‑use chatbot on web and mobile, and has also given developers global access to modify and build on its core engine. Its lack of subscription fees has “lowered the barrier for millions of users, especially in price‑sensitive regions,” Microsoft’s report said.

    DeepSeek didn’t immediately respond to a request for comment on the report.

    “This combination of openness and affordability allowed DeepSeek to gain traction in markets underserved by Western AI platforms,” the report added. “DeepSeek’s rise shows that global AI adoption is shaped as much by access and availability as by model quality.”

    Developed countries including Australia, Germany and the U.S. have sought to limit the use of DeepSeek over alleged security risks. Microsoft last year banned its own employees from using DeepSeek. Adoption of DeepSeek remained low in North America and Europe, the report found, but it surged in its home country China, as well as Russia, Iran, Cuba, Belarus – places where U.S. services face restrictions or where foreign tech access is limited.

    In many places, DeepSeek’s prevalence correlated with it being a default chatbot on widely available phones made by Chinese tech companies like Huawei.

    DeepSeek’s market share in China was 89%, the report estimated. That’s followed by Belarus’s 56% and Cuba’s 49%, both of which also had low AI adoption more broadly. In Russia, its market share was around 43%.

    In Syria and Iran, DeepSeek’s market share reached around 23% and 25%, respectively, the report added. In many African countries including Ethiopia, Zimbabwe, Uganda and Niger, DeepSeek’s market share was between 11% and 14%.

    “Open‑source AI can function as a geopolitical instrument, extending Chinese influence in areas where Western platforms cannot easily operate,” the report said.

    ___

    O’Brien reported from Providence, Rhode Island.


  • Senate candidate El-Sayed says data centers must protect communities or stay out of Michigan – Detroit Metro Times

    With proposals of large-scale data centers spreading across Michigan, U.S. Senate candidate Abdul El-Sayed on Thursday released what he called “terms of engagement” aimed at protecting communities from higher utility bills, grid strain, and environmental harm.

    El-Sayed, a progressive Democrat running in the 2026 Senate primary, said at least 15 data center projects have been proposed across the state in the past year, including a planned 1.4-gigawatt facility tied to Oracle and OpenAI. His campaign said a project of that size would consume more electricity than the entire city of Detroit.

    “We’ve watched as data center projects have proliferated up and down our state, raising alarm and concern about the impacts on water resources, electric bills, and safety,” El-Sayed said in a statement. “That’s because our local utilities have bought off the politicians who are supposed to regulate them, and because there simply hasn’t been the leadership to take on powerful corporations. These terms of engagement represent the bare minimum that data center projects should be able to guarantee if they want to move into our communities.”

    He argued that utility companies are pushing to fast-track approvals without adequate oversight, even as residents face rising rates and persistent reliability problems.

    The plan targets investor-owned utilities such as DTE Energy and Consumers Energy, which El-Sayed said have a history of rate hikes without improvements in service. His campaign accused utilities and developers of “steamrolling” local governments and regulators as communities scramble to understand the long-term impacts of energy-hungry data centers.

    Under El-Sayed’s “Our Communities, Our Terms” framework, data center projects would be required to meet a series of conditions before receiving approval:

    • No rate hikes: Data centers would be required to pay for their own energy demand, preventing costs from being passed on to residential ratepayers.
    • Community transparency: Local residents would have a meaningful role in approvals and in negotiating community benefits.
    • Energy reliability guarantees: Projects would need enforceable commitments to improve, not weaken, grid reliability, funded by data center revenues.
    • Jobs guarantees: Developers would face penalties if promised local jobs fail to materialize.
    • Water protection: Data centers would be required to use closed-loop cooling systems to limit water use and pollution.
    • Community benefits agreements: Binding agreements would be required to deliver tangible benefits, such as grid upgrades, buried power lines, and improvements to water infrastructure.
    • No clean-energy loopholes: Utilities would be barred from using data center demand as a justification to weaken Michigan’s clean-energy laws.
    • Enforceability: All commitments would have to include clear penalties for noncompliance.

    El-Sayed is competing in the Democratic primary against U.S. Rep. Haley Stevens of Birmingham and state Sen. Mallory McMorrow of Royal Oak. His campaign said his opponents have supported tax exemptions for data center development without enforceable protections for ratepayers or the environment.

    The campaign also emphasized that El-Sayed has never taken campaign contributions from utility companies that could benefit from rapid data center expansion.

    A former Detroit health director and Wayne County health executive, El-Sayed has built his Senate run around challenging corporate power and prioritizing public health, affordability, and environmental protection. His campaign said the data center policy is part of a broader push to ensure that large infrastructure projects deliver measurable benefits to the communities that host them, rather than shifting costs onto residents.


    Steve Neavling
