ReportWire

Tag: Artificial Intelligence

  • Department stores try to distinguish themselves as beauty lovers turn to TikTok and Amazon


    NEW YORK — It’s shoppers like Quinn Kelsey who keep department store executives up at night.

    The 38-year-old Denver resident gets makeup ideas from TikTok videos and other social media content, not salespeople at beauty counters. She uses an AI chatbot to get product recommendations that fit her budget and to see how a certain foundation or lipstick would look on her. When she buys, it’s usually from Amazon.

    “I use ChatGPT as my personal beauty consultant,” Kelsey said. “Department stores? I’ll walk through one for the decor, but they’ve basically lost me unless I can get the same product-research experience there that I can get scrolling through my phone at home.”

    Once the ultimate beauty destination, department stores lost sales and their authority as skincare and makeup trendsetters starting in the late 1990s. That was when the growth of Sephora and Ulta Beauty made shopping for cosmetics more of a playful, self-service experience.

    But fast-changing consumer preferences have all types of retailers racing to outdo each other for a slice of the $129 billion U.S. beauty and personal care market. The competition is fiercer than ever due to the ease of e-commerce. Amazon, which has slowly added premium beauty brands to its massive selection, is the nation’s largest online seller of beauty and personal care products, according to market research company Euromonitor International.

    Social media also has provided new sources of beauty guidance. Instead of store advisers, many consumers look to videos by influencers, beauty brand founders or dermatologists for advice. Shoppers also turn to TikTok and Instagram for information about “dupes” — drugstore versions of more expensive products.

    “Stores are more of the showroom, but the spark itself is happening in TikTok,” Jake Bjorseth, founder of the Generation Z advertising agency Trndsttrs, said.

    To keep up, companies with both physical and online stores are investing in upgrades that are meant to give beauty fans like Kelsey an experience they can’t get anywhere else. Macy’s and Nordstrom, for example, renovated the beauty floors of their flagship New York stores to add more space, ultra-luxury brands and cutting-edge technology. At Nordstrom, customers can book an appointment to get robot-applied eyelash extensions for $170.

    The makeovers were launched in time for the holiday shopping season, which accounts for about one-quarter of all U.S. “prestige” beauty sales, according to market research firm Circana.

    Department stores chasing beauty sales are introducing some of the serve-yourself features of Sephora — Nordstrom put in a “beauty bar” with brightly lit mirrors where customers are allowed to take makeup from different counters — while trying to distinguish themselves from specialty and online rivals.

    Executives from Macy’s and Nordstrom said the latest changes were designed to create an engaging atmosphere that encourages shoppers to stay longer and spend more. The overhaul at Macy’s Herald Square included comfortable seating and skin analysis devices that help make the case for lotions and potions costing hundreds of dollars.

    In the Parfums de Marly section, customers sample scents while wearing a virtual reality headset meant to immerse them in an 18th century chateau the French fragrance maker cites as its inspiration.

    “This is the future of beauty,” Nicolette Bosco, Macy’s vice president of beauty, said, referring to the interactive technology the department store considers central to offering shoppers an elevated experience.

    The company expects to redesign the beauty departments of 40 more stores. The facelifts are intended to draw shoppers of all ages, Macy’s Inc. CEO Tony Spring said.

    “We’re trying very hard to take the idea of a department store and make it intimate and friendly and convenient,” he said.

    Since becoming chief executive of the department store’s parent company last year, Spring has focused on reviving Macy’s by trying to attract the higher-spending customers who power sales at Bloomingdale’s and upscale beauty retailer Bluemercury, both of which Macy’s owns.

    Nordstrom unwrapped the reimagined beauty floor of its midtown Manhattan store in September. It includes an area where shoppers can test beauty tools like LED light therapy masks and a “fragrance finder” machine that provides a dry whiff of up to 60 different scents.

    Nordstrom also expanded the beauty treatments area at the New York flagship and a few other stores to include a medical spa that provides Botox and dermal filler injections that cost $575 to $1,050.

    Sephora redefined beauty buying by installing mirrors and disposable application tools near compact displays of both tester products and ready-to-grab goods. The DIY concept was a major contrast from department store counters staffed by beauty advisers who oversaw product sampling and retrieved fresh products from locked drawers.

    But even innovators have to renovate. Sephora, a division of French luxury goods conglomerate LVMH, is in the process of updating its 720 stores in the U.S. and Canada.

    The stations where customers get their hair and makeup done are getting moved to the side for more privacy. The chain, known for its long cash register lines, plans to expedite check-outs by equipping salespeople with devices that accept card and contactless payments.

    Ulta, which stocks drugstore beauty brands like Maybelline as well as high-end brands, has had in-store hair salons since its founding in 1990. It’s adding ear piercing, testing robotic manicures and plans to add robotic lash extensions like Nordstrom’s to its service menu next year.

    Walmart has moved into the turf of specialty retailers and department stores with products from higher-end and independent brands. The nation’s largest retailer put beauty counters this year in 100 stores where customers can try products.

    After working at a fashion event at Nordstrom’s Manhattan flagship, Ivan Leon, a 35-year-old freelance stylist, headed to the Tom Ford fragrance counter. He walked away an hour later having spent $537 on two bottles of perfume: a unisex scent named Bitter Peach and another named Vanilla Sex.

    Leon planned to wear them together, a practice known as “fragrance layering” that he heard about on social media. The Nordstrom salesperson caught his interest by suggesting Tom Ford scents could be applied in tandem.

    “It’s kind of cool when you combine two scents and it makes something new,” Leon said. “I think it helps the psyche and builds confidence.”

    Leon, who typically buys his fragrances online, offers department stores hope but also represents the uphill climb they face given customers’ multidimensional shopping habits.

    TikTok is not only spawning trends like “tired girl” makeup and “blurred skin” but becoming a place where users discover and buy from new brands. TikTok Shop, an e-commerce feature the social media platform launched in 2023, has emerged as the nation’s seventh-largest online seller of beauty and personal care items, right behind Target, according to Euromonitor.

    The online market shares of Macy’s and Nordstrom are 1% and less than 0.5%, respectively, and declining, the market research firm said.

    Amazon, which accounts for almost half of online beauty and personal care sales, aims to mimic the physical store experience with virtual makeup try-on tools like one Sephora introduced in 2016. Sephora, meanwhile, unveiled in March an AI-powered online tool that uses selfies to identify potential skin concerns and make product recommendations.


  • Angelenos Have A New Way to Shop Local: Giftphoria.com – LAmag


    A new website makes it easy to shop local to support small L.A. businesses

    Need a last-minute gift, and don’t want something impersonal or imported from overseas? Want to help Los Angeles small business owners at the same time, and faster than Amazon?

    Local startup Giftphoria.com has Angelenos, at least on the east side for now, covered with fun and niche gifts from Los Angeles small businesses, curated for the recipient. The one-stop marketplace uses gamified quizzes connected to an AI recommender to help you figure out, find, buy, and wrap the perfect gift for any person.

    The company was cofounded by Californian Anthony “Tony” Dikran Abaci, an Armenian-American businessman, and his partner, Nic Clar, who personally delivers the gifts for just a $7 fee. The participating shops are currently concentrated in Silver Lake, Hollywood, Echo Park and nearby neighborhoods, but the partners plan to expand across Los Angeles, and maybe even nationwide.

    “Tony came to me with a niche idea: helping people find & buy gifts. He had a hypothesis that people who were ‘bad at gifting’ was more so a lack of convenience. People didn’t know where to find gifts, and they didn’t think far ahead enough to get them in time. As we began hacking a solution, another opportunity made itself clear: local stores were suffering due to poor online visibility, and no one was tapping into the huge inventory they offered,” Clar wrote in a LinkedIn post.

    Clar and Abaci spent roughly two months “building a marketplace that helps people to find & buy gifts from local independent stores and get them delivered within 2 hours,” he wrote. At least one terrible gift-giver on the Los Angeles magazine staff plans to put Giftphoria.com to the test as early as possible.


    Michele McPhee


  • Why Leaders Must Inject the Possibility of Error Into AI


    Some jingles stick in your head, and for me, the Safelite tune is one. When you read the words “Safelite repair, Safelite replace,” there’s a good chance you too can provide the background music all on your own.  

    Like other marketing melodies, the aim of Safelite’s is to help you remember who they are and what they do. As a car window replacement company, their branding ditty is a business tool well-conceived and thoughtfully created. As proof, when I recently had a rock kick up and break my car window, my first thought was, “Call Safelite.” That’s when I met Scarlett.

    Safelite is the largest and most successful car window repair business in America. Scarlett is Safelite’s customer-facing AI tool. Scarlett is all about efficiently helping you. The tool can offer immediate support, without waiting in a call queue. Scarlett can schedule an appointment for you and file your insurance claim. The tool can also seemingly cover your every need when something goes wrong. 

    If that were the end of the story, you might conclude, especially as a leader of a business speeding to use AI, that it really works. But what if you’re wrong? 

    Seeking perfection by adding imperfection 

    By “What if you’re wrong?” I don’t mean wrong about AI‘s potential to streamline your processes. I’m asking instead how good your AI will be when things in your business go unexpectedly wrong. This isn’t the top-of-mind question most think to ask when rushing to offer AI solutions to keep up with the Safelites of the world. By and large, the nearly singular emphasis is on how AI can enable an organization’s systems to work in the ideal.  

    However, organizations are imperfect. They can’t anticipate every scenario, and they will make mistakes, which isn’t all bad either. Such imperfection is central and indeed necessary, especially when it comes to innovation. If you fail to leave room for error, including in your AI design, you raise the risk of missing out on what you’re after—satisfied and loyal customers who stick with you, even when you mess up. 

    An all-too-common case in point 

    Safelite is far from alone in this oversight. Still, my recent experience with them is a teachable moment. At the start, Scarlett appeared to me to have covered all my needs to get my repair work done. Information about my broken window was taken in detail, from the vehicle’s make, model and identification number, right down to the color of the window tinting. My appointment was easily scheduled in just minutes.  


    Larry Robertson


  • New York State Just Put Itself on a Legal Collision Course with Trump’s AI Policy


    On Friday, New York Governor Kathy Hochul signed something called the Responsible AI Safety and Education (Raise) Act, meant to, on the one hand, establish an AI safety regime, and on the other, troll Silicon Valley Republicans like Marc Andreessen who have been trying to dictate tech policy during the second Trump Administration.

    This comes just days after President Trump sent out an executive order that ostensibly blocks states from regulating AI.

    According to the new state law, AI companies with more than $500 million in annual revenue must draft, publish, and follow formalized sets of safety procedures aimed at preventing “critical harm,” and will have to report safety issues within 72 hours or be hit with fines. That makes the law stricter than California’s SB 53, which gives companies 15 days to report safety issues.

    About a week ago, on December 11, Trump signed the executive order, called “Ensuring a National Policy Framework for Artificial Intelligence,” which framed AI as a federal priority and outlined something called an “AI Litigation Task Force” at the Department of Justice. This task force will ostensibly have the job of challenging state AI laws that the attorney general determines to be in violation of the federal program on AI (which, so far, amounts to basically nothing).

    Even if the executive order turns out to lack a strong legal foundation, tying state laws up in litigation is still a dreary prospect, and New York State has rushed headlong into that eventuality with this law.

    In an explainer for Axios published Friday, legal experts talking to Maria Curi and Ashley Gold averred that Trump’s executive order relies on a strange reading of parts of the Constitution, such as the Dormant Commerce Clause, which is usually interpreted as an attempt to prevent states from writing self-dealing laws that are unfair to other states, not laws that are simply meant to fill a legal vacuum left by the federal government.


    Mike Pearl


  • Instagram’s new AI tool lets you control your algorithm


    Instagram is rolling out a new tool called Your Algorithm that gives you direct control over the videos that fill your Reels tab. Your interests shift as time moves on. Now your feed can shift with you in real time.

    Instagram says this new feature uses AI to help you see the topics that shape your Reels and tune them with a few taps. It has already started rolling out in the United States and will roll out globally in English soon.


    Why Instagram created Your Algorithm for Reels

    Instagram wants your feed to reflect what you care about right now. Your Algorithm gives you a clear view of the topics Instagram thinks you like and then lets you adjust them while you watch Reels.

    First, click on the Reels icon. It looks like a play button inside a rounded rectangle at the bottom of your screen.

    Instagram’s new Your Algorithm tool gives you a clear view of the topics shaping your Reels feed. (Cyberguy.com)

    How to see and control your Reels algorithm

    When you watch a Reel, look for the small icon in the upper right corner. It looks like two lines with hearts.

    Tap that icon to open Your Algorithm. From there, you can guide your feed by using three controls.

    1) See your top interests

    At the top of the screen, you will see a list of topics Instagram believes match your interests. This gives you a snapshot of what shapes your Reels.

    2) Tune your preferences

    You can type in topics you want to see more or less of. Your Reels feed updates based on those changes. You can also choose what you want to see less of by tapping Add, then entering a topic you want Instagram to reduce in your feed.


    3) Share your algorithm

    If you want to show friends what topics shape your feed, tap the Share to Story option on the Your Algorithm screen. Instagram will open a Story preview. Then tap Your Story to post it or Close Friends if you want a smaller group to see it.

    Instagram says this is only the start. The company plans to bring the same level of control to the Explore tab and other parts of the app soon.


    Instagram rolls out a new “Your Algorithm” feature in the United States that uses AI to let users adjust the topics shaping their Reels feed in real time. (Cyberguy.com)

    What this means to you

    This update puts you in charge of the content you spend time with. Instead of hoping the algorithm reads your signals, you can now tell it what you want. That means fewer random videos and more topics that reflect your current interests. It can also help you discover fresh creators who match what you enjoy right now.



    Instagram introduces a new “Your Algorithm” tool that lets users adjust the topics influencing their Reels feed using AI as the feature begins rolling out in the United States. (AP Photo/Eugene Hoshiko)

    Kurt’s key takeaways

    Your Algorithm gives you a new level of control that feels long overdue. It makes Reels more personal and reduces the guesswork that often shapes social feeds. As this expands to more parts of Instagram, your experience may feel more intentional and less overwhelming.

    What topics do you plan to add or remove first with Your Algorithm? Let us know by writing to us at Cyberguy.com




  • FBI Director Kash Patel says bureau ramping up AI to counter domestic, global threats


    FBI Director Kash Patel said Saturday the agency is ramping up its use of artificial intelligence (AI) tools to counter domestic and international threats.

    In a post on X, Patel said the FBI has been advancing its technology, calling AI a “key component” of its strategy to respond to threats and stay “ahead of the game.”

    “FBI has been working on key technology advances to keep us ahead of the game and respond to an always changing threat environment both domestically and on the world stage,” Patel wrote. “Artificial intelligence is a key component of this.


    Kash Patel, director of the FBI, speaks during a news conference at the Department of Justice in Washington, D.C. (Eric Lee/Bloomberg via Getty Images)

    “We’ve been working on an AI project to assist our investigators and analysts in the national security space — staying ahead of bad actors and adversaries who seek to do us harm.”

    Patel added that FBI leadership has established a “technology working group” led by outgoing Deputy Director Dan Bongino to ensure the agency’s tools “evolve with the mission.”



    The bureau is ramping up its use of AI tools to counter domestic and international threats. (Brendan Smialowski/AFP )

    “These are investments that will pay dividends for America’s national security for decades to come,” Patel said.

    A spokesperson for the FBI told Fox News Digital it had nothing further to add beyond Patel’s X post.

    The FBI uses AI for tools such as vehicle recognition, voice-language identification, speech-to-text analysis and video analytics, according to the agency’s website.



    Patel credited outgoing Deputy Director Dan Bongino for his leadership with the AI initiative. (Michael M. Santiago/Getty Images)

    Earlier this week, Bongino announced he would leave the bureau in January after speculation rose about his departure.


    “I will be leaving my position with the FBI in January,” Bongino wrote in an X post Wednesday. “I want to thank President [Donald] Trump, AG [Pam] Bondi, and Director Patel for the opportunity to serve with purpose. Most importantly, I want to thank you, my fellow Americans, for the privilege to serve you. God bless America, and all those who defend Her.”


  • AI Image Generators Default to the Same 12 Photo Styles, Study Finds


    AI image generation models have massive sets of visual data to pull from in order to create unique outputs. And yet, researchers find that when models are pushed to produce images based on a series of slowly shifting prompts, they default to just a handful of visual motifs, resulting in an ultimately generic style.

    A study published in the journal Patterns took two AI models, Stable Diffusion XL and LLaVA, and put them to the test by playing a game of visual telephone. The game went like this: the Stable Diffusion XL model would be given a short prompt and required to produce an image—for example, “As I sat particularly alone, surrounded by nature, I found an old book with exactly eight pages that told a story in a forgotten language waiting to be read and understood.” That image was presented to the LLaVA model, which was asked to describe it. That description was then fed back to Stable Diffusion, which was asked to create a new image based on that description. This went on for 100 rounds.
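    The feedback loop itself is simple enough to sketch. Below is a minimal illustration of its structure, with toy stand-in functions; the study used Stable Diffusion XL to generate and LLaVA to describe, but nothing here calls the real models, and the "lighthouse" toys exist only to mimic the collapse toward a dominant motif that the researchers observed:

```python
def telephone_game(generate, describe, seed_prompt, rounds=100):
    """Alternate image generation and description for `rounds` turns,
    returning the sequence of prompts the loop passes through."""
    prompts = [seed_prompt]
    image = generate(seed_prompt)          # text-to-image step
    for _ in range(rounds):
        caption = describe(image)          # vision-language model describes the image
        prompts.append(caption)
        image = generate(caption)          # redraw from the new description
    return prompts

# Toy stand-ins that pull any input toward a single "motif", the way the
# real models drifted toward scenes like lighthouses and formal interiors.
def toy_generate(prompt):
    return "lighthouse" if "light" in prompt or "sea" in prompt else prompt[:10]

def toy_describe(image):
    return "a lighthouse by the sea" if image == "lighthouse" else image + " light"
```

    Running the toy loop from an unrelated seed prompt shows the same qualitative behavior: within a few turns every sequence settles into the fixed motif and stops changing.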

    © Hintze Et Al., Patterns

    Much like a game of human telephone, the original image was quickly lost. No surprise there, especially if you’ve ever seen one of those time-lapse videos where people ask an AI model to reproduce an image without making any changes, only for the picture to quickly turn into something that doesn’t remotely resemble the original. What did surprise the researchers, though, was the fact that the models default to just a handful of generic-looking styles. Across 1,000 different iterations of the telephone game, the researchers found that most of the image sequences would eventually fall into just one of 12 dominant motifs.

    In most cases, the shift is gradual. A few times, it happened suddenly. But it almost always happened. And researchers were not impressed. In the study, they referred to the common image styles as “visual elevator music,” basically the type of pictures that you’d see hanging up in a hotel room. The most common scenes included things like maritime lighthouses, formal interiors, urban night settings, and rustic architecture.

    Even when the researchers switched to different models for image generation and descriptions, the same types of trends emerged. Researchers said that when the game is extended to 1,000 turns, coalescing around a style still happens around turn 100, but variations spin out in those extra turns. Interestingly, though, those variations still typically pull from one of the popular visual motifs.

    AI Endpoints After 100 Iterations
    © Hintze Et Al., Patterns

    So what does that all mean? Mostly that AI isn’t particularly creative. In a human game of telephone, you’ll end up with extreme variance because each message is delivered and heard differently, and each person has their own internal biases and preferences that may impact what message they receive. AI has the opposite problem. No matter how outlandish the original prompt, it’ll always default to a narrow selection of styles.

    Of course, the AI model is pulling from human-created prompts, so there is something to be said about the data set and what humans are drawn to take pictures of. If there’s a lesson here, perhaps it is that copying styles is much easier than teaching taste.


    AJ Dellinger


  • Want to work in AI? Here are the skills to master, economist says

    [ad_1]

    Workers with an aptitude for explaining how artificial intelligence tools function in simple terms are more likely to find success in the job market, according to an economist who focuses on the economic impact of AI.

    Robert Seamans, a professor of management and organizations at the NYU Stern School of Business, thinks such explainer roles are just the kind AI will create. Indeed, he expects AI to be a part of virtually every worker’s future, much as the internet became an integral part of daily life decades ago.

    Meanwhile, the workers best positioned to benefit as generative AI tools like ChatGPT reshape the labor market will be those who understand how to use the technology to maximize their own performance, as well as how to test and train AI, Seamans said. 

    “AI will change the vast majority of the work we do, but it will affect each occupation in different ways,” he said. “A good analogy is to think about computers and the internet and how that changed jobs and the way we work.”

    People who understand AI and, equally important, who are adept at explaining how it works in simple terms, will find themselves in demand, Seamans predicted. For example, he expects companies to recruit what he calls “AI explainers” or “AI translators” who can help managers better understand an organization’s AI tools.

    “The job is to provide a simple layperson’s understanding of what’s happening under the hood,” he said. 

    “They don’t need to be the best computer scientist at creating and running large language models, but they need to understand enough about it so it’s clear they’re competent in that area and be able to talk about it to a broader audience,” Seamans added.

    Another common role Seamans expects to emerge in the age of artificial intelligence is the “AI auditor” who checks AI for bias or factual inaccuracies. 

    “They would need to know enough about an AI system to run tests on it, and know what benchmarks to use to determine whether bias is there or not,” he said. “They could potentially have a legal background, too.”

    Seamans also expects employers to step up their hiring of instructors who can train workers on how to use a company’s AI apps. His advice for workers, students and early-career employees?

    “My encouragement would be for everyone to play around with AI and not assume there is one specific way you should be interacting with [AI],” he said. “Interact in a variety of ways because you’ll get different answers.” 


  • Cybersecurity Experts Warn That This Browser Extension Is Selling Your Chats With ChatGPT

    [ad_1]

    A cybersecurity company claims that a number of web browser extensions are secretly logging and selling users’ conversations with AI chatbots.

    KOI, an Israel-based cybersecurity firm focused on developing protections against extension-based attacks, has released a report alleging that Urban VPN Proxy, a popular VPN extension on Google Chrome and Microsoft Edge, has a hidden function to “harvest” user conversations on AI platforms including ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok, and Meta AI. The extension was updated with this new capability in July, according to KOI. 

    The report says that when users with the extension visit any of the above platforms, the extension injects an “executor” script directly into the webpage, so that “every network request and response on that page passes through the extension’s code first.” This means the extension sees every message sent by users and generated by the AI platforms. Once the info has been collected, it’s sent to the extension’s external servers. 

    Urban VPN Proxy wasn’t the only extension that KOI identified as containing AI harvesting functionality. The firm identified the following extensions, all of which come from the same organization, as containing the same malicious code: 

    Google Chrome Extensions:

    • Urban VPN Proxy – 6,000,000 users
    • 1ClickVPN Proxy – 600,000 users
    • Urban Browser Guard – 40,000 users
    • Urban Ad Blocker – 10,000 users

    Microsoft Edge Extensions:

    • Urban VPN Proxy – 1,323,622 users
    • 1ClickVPN Proxy – 36,459 users
    • Urban Browser Guard – 12,624 users
    • Urban Ad Blocker – 6,476 users

    In total, according to KOI, over 8 million users have installed these extensions. The company behind these extensions is Urban Cyber Security, which KOI says is affiliated with BiScience, a data broker company. 


    Ben Sherry


  • Regulators Approve DTE Contracts for Michigan’s First Hyperscale Data Center

    [ad_1]

    Despite criticism that they were acting too fast, state utility regulators on Thursday approved DTE Energy’s proposal to supply power for Michigan’s first hyperscale data center — while tacking on a host of conditions that aim to protect ratepayers from subsidizing the facility.

    The approval, made over shouts of disapproval from onlookers gathered in a Lansing conference room, drew cheers from business interests and ire from skeptics who had called for a deeper public review of the 19-year deal.

    Defending the decision, Michigan Public Service Commission Chair Dan Scripps told the gathered crowd that after reviewing them in detail, “I would put the contracts that are in front of us today on par or better with any that have been approved in the country.”

    He and other commissioners said they had concluded the deal would save ratepayers money and would not sacrifice energy reliability.

    But a wave of public speakers lined up to condemn the vote, raising concerns about lost farmland and habitat, rising power rates, climate pollution from fossil fuels used to power the facilities and additional pollution from the water used to cool servers.

    “We won’t be happy, I suppose, until the Great Lakes run dry, until the farmlands all are gone, until all the air is polluted,” said Tim Bruneau, a Saline Township resident who has vocally opposed the 1.4-gigawatt facility planned by tech firms Oracle, OpenAI and Related Digital.

    “And guess what happens when that happens? We’re extinct.”

    The decision paves the way for tech firms OpenAI, Oracle and Related Digital to team up on Michigan’s first hyperscale data center, a $7 billion Stargate facility where massive buildings full of computer servers will train artificial intelligence models on a 575-acre site south of Ann Arbor in Saline Township.

    In a statement, DTE spokesperson Ryan Lowry lauded the commission’s order, saying the contracts “protect our customers — including ensuring that there will be no stranded assets — while enabling Michigan’s growth.”

    Supporters of the project have hailed it as an economic development win for the state that will produce millions annually in taxes and 450 permanent jobs. Opponents contend that’s not a sufficient return, citing the risks that energy-hungry data centers could pose to Michigan’s environment and energy grid.

    The facilities are massive energy users — the Stargate project’s expected 1.4 gigawatts of demand is equivalent to that of a large American city.

    The commission’s decision came amid anxiety that residential ratepayers could wind up subsidizing the substations, poles, wires, battery storage facilities and other infrastructure needed to deliver all that power.

    But commissioners agreed with DTE’s conclusion that the deal with Oracle subsidiary Green Chile Ventures would actually save ratepayers $300 million annually, by tapping the tech firm to pay for battery storage and other costs to connect it to the grid.

    “That is a real cost savings at a time when affordability is so important,” said commissioner Katherine Peretick.

    The decision comes weeks after DTE filed a proposed contract with the MPSC, asking regulators to quickly approve the terms without a public hearing. Such ex parte decisions are allowed when a contract won’t affect other utility customers’ rates.

    But Attorney General Dana Nessel and other skeptics of the deal had called for a deeper review, contending that the publicly visible version of DTE’s proposed deal was so heavily redacted, it was impossible to vet DTE’s claims of affordability.

    Commissioners tacked on a host of conditions to their approval, giving DTE 30 days to agree to them. Among the most significant, DTE must agree to absorb the financial hit if, for whatever reason, the projected $300 million cost savings fails to materialize.

    “If the affordability analysis turns out to be overly optimistic for any reason, DTE bears the responsibility of any extra costs,” Peretick said.

    Other requirements include:

      1. In the event of an electricity shortage, the data center must be curtailed before other electric customers.

      2. DTE must file a host of documents showing how it will pay for data center related costs without subsidies from other customers. That includes renewable energy that, under Michigan’s clean energy law, must eventually be installed to serve the facility.

      3. Within 90 days, DTE must file an application for a standard rate structure applying to major power users like hyperscale data centers, which would eliminate the need for one-off contract requests like the one DTE filed for the Stargate project.

      4. DTE must file quarterly reports tracking the data center’s power demand and an annual report assessing Green Chile’s finances.

    Scripps said the contract terms and additional conditions set by commissioners “led us to believe that we could meet the standard of reasonableness and in the public interest.”

    The data center’s projected power demand would increase DTE’s electric load by 25%. DTE officials plan to absorb that surge without building new power plants. Instead, the utility will buy energy on the open market and get more use out of its existing power plants, including using them to charge the batteries during off-peak hours when other customers aren’t using much energy.

    DTE has told investors it aims to bring on as much as 8.4 gigawatts of total data center load in the coming years, a projection that would nearly double the utility’s total power demand.

    Consumers Energy, meanwhile, is projecting 2.65 gigawatts in new demand from data centers by 2035, a 35% increase in peak demand.
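    The percentages reported above imply baseline demand figures that are easy to back out. A quick arithrmetic-free sanity check, using only the numbers in this story (illustrative, rounded to the reported figures):

    ```python
    # Back-of-envelope check of the demand figures reported above.

    # If Stargate's 1.4 GW represents a 25% increase to DTE's electric load,
    # the implied current load is 1.4 / 0.25:
    dte_current_gw = 1.4 / 0.25
    print(round(dte_current_gw, 2))  # 5.6 GW implied baseline

    # Likewise, if Consumers Energy's projected 2.65 GW of new data center
    # demand is a 35% increase in peak demand, the implied current peak is:
    consumers_peak_gw = 2.65 / 0.35
    print(round(consumers_peak_gw, 2))  # roughly 7.57 GW implied peak
    ```
    
    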

    Concerns that the utilities could pollute or overtax Michigan’s water and electricity systems have resulted in bipartisan pushback, including a new bill to repeal the recently enacted tax exemptions that have lured the industry to Michigan.

    Industry supporters, meanwhile, contend Michigan risks falling behind economically if it refuses to host the booming hyperscale industry. While data centers themselves provide few jobs, supporters contend the facilities are the linchpin of a broader tech economy in which Michigan is struggling to compete.

    “Michigan needs to decide if it wants to participate in the 21st Century economy, or rest on those who came before us and spend that wealth down,” said Detroit Regional Chamber President and CEO Sandy Baruah. He cast it as a race in which “Michigan already has ground to make up.”

    Since Gov. Gretchen Whitmer signed a 6% sales and use tax exemption that could save hyperscale facilities millions if not tens of millions annually, Michigan’s publicly announced hyperscale proposals have skyrocketed from zero to at least 15.

    Some localities have enacted moratoriums on data center development, looking to buy time to craft regulations governing noise, road setbacks and other concerns about the facilities. In Saline Township, meanwhile, a resident has filed a legal intervention seeking to block the Stargate project over allegations that township officials violated the Open Meetings Act when they approved a legal settlement that made way for the development.

    In addition to the utility contracts, developers need permits from the Michigan Department of Environment, Great Lakes and Energy to install diesel-powered backup generators and begin construction activities that would impact wetlands and the Saline River.

    This story was originally published by Bridge Michigan and distributed through a partnership with The Associated Press.

    Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

    [ad_2]

    Associated Press

    Source link

  • Fox News AI Newsletter: Blue-collar productivity boom

    [ad_1]

    You can now listen to Fox News articles!

    Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

    IN TODAY’S NEWSLETTER:

    – AI fuels blue-collar productivity boom across manufacturing, Palantir technology chief tells FOX Business
    – New exoskeleton adapts to terrain with smart AI power
    – Purdue becomes first university to require AI competency for all undergrads as universities race to adapt

    RISE OF MACHINES: Palantir Chief Technology Officer Shyam Sankar told FOX Business artificial intelligence is fueling a blue-collar productivity boom, not mass unemployment as forecast by Sen. Bernie Sanders, I-Vt. Sankar said AI is accelerating hiring, training and American industrial growth.

    SMART STEPS: Recreational exoskeletons have been popping up for years, but the new IRMO M1 exoskeleton feels like a turning point. This next-generation wearable blends artificial intelligence (AI), a forward-facing camera, LADAR sensors and lightweight robotics to give your legs a serious boost on trails and city streets. 

    With training and assist modes, the M1 adapts to your goals whether you want more power or more strength.  (IRMO)

    EDUCATION REWIRED: Purdue University has announced a new “AI working competency” requirement, the first of its kind at an institution of higher learning, for all undergraduate students on its Indianapolis and West Lafayette campuses to complete starting in 2026.

    ‘DISPARATE IMPACT’: White House AI and crypto czar David Sacks called out blue states Tuesday for inserting “woke” ideology into artificial intelligence as the Trump administration moves to cut what he described as “unnecessary” regulations on the rapidly developing technology.

    EYES TO THE FUTURE: Artificial intelligence (AI) is charging into a new phase in 2026 – one that could reshape business operations, global competition and even which workers thrive, according to Goldman Sachs’ Chief Information Officer Marco Argenti.

    Artificial intelligence enters a new phase in 2026 that could reshape business operations, global competition and workforce outcomes, according to Goldman Sachs Chief Information Officer Marco Argenti. (REUTERS/Brendan McDermid)

    ‘MORE USABLE’: OpenAI announced an update for ChatGPT Images that it says drastically improves both the generation speed and instruction-following capability of its image generator. A blog post from the company Tuesday says the update will make it much easier to make precise edits to AI-generated images. Previous iterations of the program have struggled to follow instructions and often make unasked-for changes.

    HANDS-FREE TECH: Chrome on Android now offers a fresh way to digest information when your hands are busy or your eyes need a break. A new update powered by Google Gemini can turn written webpages into short podcast-style summaries. Two virtual hosts chat about the content, making it feel easier to follow during your commute or while you multitask.

    DESANTIS VS. TRUMP: Florida Gov. Ron DeSantis, a Republican, said on Monday that state officials have the right to regulate artificial intelligence despite President Trump’s recent executive order aiming to require a national AI standard that the president argues would overrule state laws.

    TECH FORCE: The Trump administration launched a new initiative Monday aimed at recruiting top-tier technical talent to accelerate the adoption of artificial intelligence (AI) at the federal level. The hiring program, known as “Tech Force,” plans to recruit roughly 1,000 early-career technologists for a two-year service term across various federal agencies.

    Florida Gov. Ron DeSantis, a Republican, says state officials have authority to regulate artificial intelligence despite President Trump’s executive order seeking a national AI standard he says would override state laws. (Octavio Jones/Getty Images)

    HOME RUN: Baseball teams have long searched for a way to study the entire swing without sensors or complex lab setups. Today, a new solution is entering the picture. Theia, an AI biomechanics company, debuted a commercially available video-only system that analyzes bat trajectory and full-body biomechanics together. This new approach works in real baseball environments and needs no reflective body markers, wearables or special equipment.

    POLICING PUSH: Rep. Ayanna Pressley, D-Mass., helped advocate for the AI Civil Rights Act last week in order to prevent companies from using what Democrats describe as “biased and discriminatory AI-powered algorithms.”

    PRICING GAP: Instacart is using AI-enabled pricing experiments that are substantially raising the prices of identical products for different customers, according to an investigation by Consumer Reports and Groundwork Collaborative.

    FOLLOW FOX NEWS ON SOCIAL MEDIA

    Facebook
    Instagram
    YouTube
    X
    LinkedIn

    SIGN UP FOR OUR OTHER NEWSLETTERS

    Fox News First
    Fox News Opinion
    Fox News Lifestyle
    Fox News Health

    DOWNLOAD OUR APPS

    Fox News
    Fox Business
    Fox Weather
    Fox Sports
    Tubi

    WATCH FOX NEWS ONLINE

    Fox News Go

    STREAM FOX NATION

    Fox Nation

    Stay up to date on the latest AI technology advancements and learn about the challenges and opportunities AI presents now and for the future with Fox News here.

    [ad_2]

    Source link

  • How to talk to your kids about AI chatbots and their safety

    [ad_1]

    Editor’s Note: This story contains discussion of suicide. If you or someone you know is struggling with suicidal thoughts, call the National Suicide Prevention Lifeline at 988 (or 800-273-8255) to connect with a trained counselor.

    Artificial intelligence loomed large in 2025. As AI chatbots grew in popularity, news reports documented some parents’ worst nightmares: children dead by suicide following secret conversations with AI chatbots.

    It’s hard for parents to track rapidly evolving technology.

    Last school year, 86% of students reported using artificial intelligence for school or personal use, according to a Center for Democracy & Technology report. A 2025 survey found that 52% of teens said they used AI companions — AI chatbots designed to act as digital friends or characters — a few times a month or more.

    How can parents navigate the ever-changing AI chatbot landscape? Research on its effects on kids is in early stages. 

    PolitiFact consulted six experts on adolescent psychiatry and psychology for parental advice. Here are their tips.  

    Want to know if and how your kids use AI chatbots? Ask.

    Parents should think of AI tools in the same vein as smartphones, tablets and the internet. Some use is okay, but users need boundaries, said Şerife Tekin, a philosophy and bioethics professor at SUNY Upstate Medical University.

    The best way to know if your child is using AI chatbots “is simply to ask, directly and without judgment,” said Akanksha Dadlani, a Stanford University child and adolescent psychiatry fellow.

    Parents should be clear about their safety concerns. If they expect to periodically monitor their children’s activities as a condition of access to the technology, they should be up-front about that.

    Perhaps the most important tool, though, is open conversation. When families talk regularly and parents ask kids about their AI use, it’s “easier to catch problems early and keep AI use contained,” said Grace Berman, a New York City psychotherapist.

    Make curiosity, not judgment, the focal point of the conversation.

    Being inquisitive rather than confrontational can help children feel safer sharing their experiences.  

    “Ask how they are using it, what they like about it, what it helps with, and what feels uncomfortable or confusing,” Dadlani said. “Keep the tone non-judgmental and grounded in safety.” 

    Listen with genuine interest in what they have to say. 

    Ask your child what they believe their preferred AI chatbot knows about them. Ask if a chatbot has ever told them something false or made them feel uncomfortable.  

    English teacher Casey Cuny, center, helps a student input a prompt into ChatGPT on a Chromebook during class at Valencia High School in Santa Clarita, Calif., Aug. 27, 2025. (AP)

    Parents can also ask their children to help them understand the technology, letting them guide the conversation, psychologist Don Grant told the Monitor on Psychology, the American Psychological Association’s official magazine.

    “One key message to convey: Feeling understood by a system doesn’t mean it understands you,” Tekin said. “Children are capable of grasping this distinction when it’s explained respectfully.”

    Parents might bring up concerns about AI chatbots’ privacy and confidentiality or the fact that an AI chatbot’s main goal is to affirm them and keep them using the bot. Emphasize that AI is a tool, not a relationship.

    “Explain that chatbots are prediction machines, not real friends or therapists, and they sometimes get things dangerously wrong,” Berman said. “Frame this as a team effort, something you want your child to be able to make healthy and informed decisions about.” 

    Use the technology’s safety settings, but remember they’re imperfect. 

    Parents can restrict children to using technology in their home’s common areas. Apps and parental controls are also available to help parents limit and monitor their children’s AI chatbot use. 

    Berman encourages parents to use apps and parental controls such as Apple Screen Time or Google Family Link to monitor technology use, app downloads and search terms. 

    Parents should use screen and app-specific time limits, automatic lock times, content filters and, when available, teen accounts, Dadlani said. 

    “Monitoring tools can also be appropriate,” Dadlani said.

    With Bark Phones or the Bark or Aura apps, parents can set restrictions for certain apps or websites and monitor and limit online activities. 

    Parents can adjust AI chatbot settings or instruct children to avoid certain bots altogether.

    In some of the AI chatbot cases that resulted in lawsuits, the users were interacting with chatbot versions that had the ability to remember past conversations. Tekin said parents should disable that “memory,” personalization or long-term conversation storage.

    “Avoid platforms that explicitly market themselves as companions or therapists,” she said.

    Bruce Perry, 17, shows his ChatGPT history at a coffee shop in Russellville, Ark., July 15, 2025. (AP)

    Some chatbots have or are creating parental controls, but that approach is also imperfect.

    “Even the ones that do will only provide parental controls if the parent is logged in, the child is logged in, and the accounts have been connected,” said Mitch Prinstein, the American Psychological Association’s chief of psychology. 

    These measures don’t guarantee that kids will use chatbots safely, Berman said. 

    “There is much we don’t yet know about how interacting with chatbots impacts the developing brain — say, on the development of social and romantic relationships — so there is no recommended safe amount of use for children,” Berman said.

    Does that mean it’s best to impose an outright ban? Probably not. 

    Parents can try, but it’s unlikely that parents will succeed in entirely preventing kids — especially older children and teens — from using AI chatbots. And trying might backfire.

    “AI is increasingly embedded in schoolwork, search engines, and everyday tools,” Dadlani said. “Rather than attempting total prevention, parents should focus on supervision, transparency and boundaries.”

    Students gather in a common area as they head to classes in Oregon, May 4, 2017. (AP)

    Model the behavior you want kids to emulate.

    Restrictions aren’t the only way to influence your kids’ interactions with AI chatbots. 

    “Model healthy AI use yourself,” Dadlani said. “Children notice how adults use technology, not just the rules they set.”

    Prinstein said parents should also model their attitudes toward AI by openly discussing AI with kids in critical and thoughtful ways. 

    “Engage in harm reduction conversations,” Berman said. That might look like asking your child questions such as, “How could you tell if you were using AI too much? How can we work together as a team to help you use this responsibly?”

    From there, you can collaboratively set expectations for AI use with your kids. 

    “Work together to co-create a plan on when and how the family will use AI companions and when to turn to real people for help and guidance,” Aguiar said. “Put that plan in writing and do weekly check-ins.”

    If you have concerns specific to your child’s use, don’t be afraid to ask your child to tell you what the chatbot is saying or ask to see the messages. 

    Parents should emphasize they won’t be upset or angry about what they find, Prinstein said. It might be useful to remind your child that you’re coming from a place of concern by saying something like, chatbots are “known to make things up or to misunderstand things, and I just want to help you to get the right information,” he said. 

    Replacing in-person relationships with AI interactions is cause for concern.

    Parents should look for signs that an AI chatbot is affecting a child’s mood or behavior.

    Some red flags that a child is engaged in unhealthy or excessive AI chatbot use: 

    • Withdrawal from social relationships and increased social isolation. 

    • Increased secrecy or time alone with devices.

    • Emotional distress when access to AI is limited.

    • Disinterest in activities your child used to enjoy.

    • Sudden changes in grades.

    • Increased irritability or aggression.

    • Changes in eating or sleeping habits.

    • Treating a chatbot like a therapist or best friend. 

    Parents shouldn’t necessarily assume all irritability or privacy-seeking behavior is a sign of AI chatbot overuse. Sometimes, that’s part of being a teenager. 

    But parents should be on the lookout for patterns that seem in sync with kids’ chatbot engagement, Prinstein said.

    “The concern is not curiosity or experimentation,” Dadlani said. “The concern is the replacement of human connection and skill-building.” 

    Take note if the child is routinely relying on chatbots — particularly choosing bots’ advice over human feedback — while withdrawing from peers, family and outside activities. 

    “That is when I would consider tightening technical limits and, importantly, involving a mental health professional,” Berman said. 

    Parents are used to worrying about who their kids spend time with and whether their friends might encourage them to make bad decisions, Prinstein said. Parents need to remember that many kids are hanging out with a new, powerful “friend” these days. 

    “It’s a friend that they can talk to 24/7 and that seems to be omniscient,” he said. “That friend is the chatbot.” 

    PolitiFact Researcher Caryn Baird and Staff Writer Loreben Tuquero contributed to this report.

    RELATED: Adam Raine called ChatGPT his ‘only friend.’ Now his family blames the technology for his death

    [ad_2]

    Source link

  • AI photo match reunites Texas woman with lost cat after 103 days

    [ad_1]

    You can now listen to Fox News articles!

    Holiday gatherings and year-end travel often lead to a spike in missing pets. Doors open more often, routines shift and animals can slip outside in a moment of confusion. 

    New Year’s Eve brings loud fireworks, and shelters report some of their busiest nights of the entire year. Amid all that, one Texas family just experienced a heartwarming reunion thanks to AI photo matching on Petco Love Lost.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

    FIND A LOST PHONE THAT IS OFF OR DEAD

    AI photo matching on Petco Love Lost helped reunite a Texas family with their missing cat after 103 days. (ULISES RUIZ/AFP via Getty Images)

    How AI photo matching kept the search going

    Pam’s 11-year-old indoor cat, Grayson, had never been outside alone. She believes he slipped out while she unloaded groceries at their home in Plano, Texas. The moment she realized he was gone, she acted fast. 

    She said, “We went up and down the streets day and night. We went online in the neighborhood and on Love Lost. We put up flyers all over the neighborhood. Friends and neighbors were looking for him. I went to the animal shelter, posted him there, and went every day for over a month, hoping to find him.”

    Pam uploaded Grayson’s photo to Petco Love Lost right away. She checked her daily match alerts and hoped she would see his familiar face pop up. She told CyberGuy, “I received match alerts almost every day from Lost Love, but never saw Grayson. His profile had been on their site for over 90 days.”

    The moment everything changed

    Missy, a nearby resident, spotted a thin cat in an alley near her home. She brought him inside, took a picture of him and then turned to Love Lost to see if anyone had reported a missing cat like him.

    Missy explained how simple the process felt. “I used Lost Love to reunite them,” she said. “I uploaded a photo of the cat that we found, and it was matched through AI with the photo that the owner uploaded.”

    She soon received an AI match alert and learned that the cross street Grayson’s owner, Pam, had listed in her lost post was only a mile from her home. Missy contacted Pam right away.

    That message changed everything. “I am sure that if we had not posted his picture and enabled the ability to match the images, we would never have known what happened to Grayson,” Pam said. “And we would not have connected with Missy.”

    AI TECH HELPS A SENIOR REUNITE WITH HER CAT AFTER 11 DAYS

    Grayson, an indoor cat from Plano, Texas, was finally found thanks to a neighbor who uploaded his photo to an AI search tool. (DANIEL PERRON/Hans Lucas/AFP via Getty Images)

    A long road for an aging cat

    Grayson is almost 12 and has never lived outdoors. That made this reunion feel even more emotional, Pam said.

    “I am still amazed at Grayson’s journey,” she added. “I look at him and cannot believe he made it through those 103 days. He is almost 12 years old, so he is not a young kitty.”

    Pam said she still thinks about what those months were like for him. “[I] guess I will always wonder where he was and how many stops he made before he reached Missy’s loving home,” she said. “He must have known she would take care of him. It takes a special person to take the time to reunite a beloved pet with their family. Missy and her family went above and beyond to reunite us with Grayson.”

    Why pet tech matters during the holidays

    This season brings joy but also risks for pets. Visitors, travel and loud celebrations create more chances for animals to slip out or feel spooked. Tools like AI photo matching help families act fast when a pet goes missing. Love Lost connects shelters and neighbors in one place so that people like Pam and Missy can find each other.
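    Petco Love Lost has not published how its matcher works; as a rough illustration, photo-matching systems often reduce each image to a compact fingerprint and compare fingerprints instead of raw pixels. A minimal “average hash” sketch of that idea, using hypothetical hardcoded 4x4 grayscale thumbnails (not the service’s actual method):

    ```python
    # Toy sketch of image fingerprint matching (generic "average hash").
    # The photos here are hypothetical 4x4 grayscale grids, 0=black, 255=white.

    def average_hash(pixels):
        """One bit per pixel: 1 if the pixel is brighter than the image mean."""
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        return [1 if p > mean else 0 for p in flat]

    def hamming(a, b):
        """Count of differing bits; lower means more similar fingerprints."""
        return sum(x != y for x, y in zip(a, b))

    lost_photo = [[200, 180, 40, 30],
                  [190, 170, 50, 20],
                  [60, 40, 210, 220],
                  [50, 30, 200, 230]]
    found_photo = [[210, 175, 45, 25],   # same cat, slightly different lighting
                   [185, 165, 55, 30],
                   [65, 45, 205, 215],
                   [55, 35, 195, 225]]

    distance = hamming(average_hash(lost_photo), average_hash(found_photo))
    print("match" if distance <= 2 else "no match")  # prints "match"
    ```

    Because the hash keys on brightness relative to each photo’s own mean, small lighting differences between the lost-pet photo and the finder’s photo leave the fingerprint nearly unchanged, which is why a near-duplicate still matches.
    
    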

    What to do if your pet goes missing

    Losing a pet can feel overwhelming, but taking fast action helps. These steps guide you through what to do right away.

    1) Search your home and neighborhood right away

    Look in closets, garages and under furniture. Walk your street and ask neighbors to check yards and sheds.

    2) Upload your pet’s photo to Petco Love Lost

    Take a clear photo and post it on the site. AI photo matching alerts you when a possible match appears. It also helps others contact you fast.

    3) Visit your local shelters in person

    Shelters update kennels throughout the day. Staff can guide you and help flag your pet’s profile. Go often until you get updates.

    4) Post on local community groups

    Use neighborhood apps, local Facebook groups and community forums. Include your pet’s photo, last known location and your contact info.

    5) Put up flyers right away

    Use a large photo and simple details. Place flyers at busy intersections and near schools, parks and businesses.

    6) Contact your pet’s microchip registry

    If your pet is microchipped, call the registry or log in to your account. Make sure the chip is registered to you, update your contact info and mark your pet as missing so shelters and vets can reach you fast.

    7) Stay consistent with your search

    Check Love Lost alerts often. Visit shelters and follow up on every lead. Persistence made the difference for Pam and Grayson.

    LOST DOGS ON FOURTH OF JULY: HOW TO KEEP YOUR PET SAFE

    A pet owner is seen cradling a cat on their lap. (Diego Herrera Carcedo/Anadolu via Getty Images)

    How AirTags can help you find a lost pet faster

    While tools like AI photo matching are invaluable after a pet goes missing, prevention and real-time tracking can make an enormous difference during the first critical hours. That’s where Apple AirTags come in. An AirTag isn’t a GPS tracker, but it can still be a powerful recovery tool when used correctly. When attached securely to your pet’s collar, an AirTag uses Apple’s vast Find My network. That network consists of hundreds of millions of nearby iPhones, iPads and Macs that can anonymously and securely relay the AirTag’s location back to you.

    If your pet wanders into a neighborhood, apartment complex or busy area, the chances are high that another Apple device will pass nearby and update the location automatically. You won’t know who helped, and they won’t know it was them, but the location can show up on your map within minutes. For indoor cats or dogs that don’t usually roam far, this can be especially helpful. Even a rough location can narrow your search area and save precious time.

    Important limits to know: AirTags work best in populated areas. They rely on nearby Apple devices, so coverage may be limited in rural or remote locations. They also don’t update continuously like true GPS pet trackers. That’s why AirTags should be seen as a backup layer, not a replacement for microchipping or dedicated pet trackers.  

    How to use an AirTag safely with pets

    • Use a secure, pet-specific AirTag holder that won’t break easily.
    • Attach it to a breakaway collar for cats and dogs to reduce injury risk.
    • Make sure Find My notifications are turned on so you get alerts quickly.
    • Combine it with microchipping and ID tags for the best protection.

    Used together, these tools give you multiple ways to reconnect with your pet, whether minutes or months have passed.

    For a list of the best pet trackers, go to Cyberguy.com and search “best pet trackers.”

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com  

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP   

    Kurt’s key takeaways 

    Grayson’s reunion is a reminder that tech works best when caring people put it to use. AI matched the photos, but Missy took action, and Pam never stopped looking. Their persistence helped a senior cat get home after a long and risky journey.

    If your pet went missing today, would you know the first step to bring them home fast? Let us know by writing to us at Cyberguy.com.

    Copyright 2025 CyberGuy.com.  All rights reserved.

    [ad_2]

    Source link

  • Secret phrases to get you past AI bot customer service

    [ad_1]

    You can now listen to Fox News articles!

    You’re gonna love me for this. 

    Say you’re calling customer service because you need help. Maybe your bill is wrong, your service is down or you want a refund. Instead of a person, a cheerful AI voice answers and drops you into an endless loop of menus and misunderstood prompts. Now what?  

    That’s not an accident. Many companies use what insiders call “frustration AI.” The system is specifically designed to exhaust you until you hang up and walk away.

    Not today.  (Get more tips like this at GetKim.com)

    FOX NEWS POLL: VOTERS SAY GO SLOW ON AI DEVELOPMENT — BUT DON’T KNOW WHO SHOULD STEER

    Here are a few ways to bypass “frustration” AI bots. (Sebastian Kahnert/picture alliance via Getty Images)

    Use the magic words

    You want a human. For starters, don’t explain your issue. That’s the trap. You need words the AI has been programmed to treat differently.

    Nuclear phrases: When the AI bot asks why you’re calling, say, “I need to cancel my service” or “I am returning a call.” The word cancel sets off alarms and often sends you straight to the customer retention team. Saying you’re returning a call signals an existing issue the bot cannot track. I used that last weekend when my internet went down, and, bam, I had a human.

    Power words: When the system starts listing options, clearly say one word: “Supervisor.” If that doesn’t work, say, “I need to file a formal complaint.” Most systems are not programmed to deal with complaints or supervisors. They escalate fast.

    Technical bypass: Asked to enter your account number? Press the pound key (#) instead of numbers. Many older systems treat unexpected input as an error and default to a human.

    Go above the bots

    If direct commands fail with AI, be a confused human.

    The Frustration Act: When the AI bot asks a question, pause. Wait 10 seconds before answering. These systems are built for fast, clean responses. Long pauses often break the flow and send your call to a human.

    The Unintelligible Bypass: Stuck in a loop? Act like your phone connection is terrible. Say garbled words or nonsense. After the system says, “I’m having trouble understanding you” three times, many bots automatically transfer you to a live agent.

    The Language Barrier Trick: If the company offers multiple languages, choose one that’s not your primary language or does not match your accent. The AI often gives up quickly and routes you to a human trained to handle language issues.

    Use these tricks when you need help. You are calling for service, not an AI bot.

    Get tech-smarter on your schedule

    • National radio: Airing on 500-plus stations across the U.S. Find yours or get the free podcast.
    • Daily newsletter: Join 650,000 people who read the Current (free!)
    • Watch: On Kim’s YouTube channel

    Award-winning host Kim Komando is your secret weapon for navigating tech.

    Copyright 2026, WestStar Multimedia Entertainment. All rights reserved. 


  • Sam Altman’s Cringe AI Thirst Trap Says a Lot About the Future of OpenAI

    OpenAI’s latest AI model launch has raised questions about the company’s wide range of projects and priorities, due in part to an NSFW image that co-founder and CEO Sam Altman generated and shared to promote it. 

    On December 16, OpenAI released an updated image-generation feature for ChatGPT, powered by its latest text-to-image AI model, named GPT-Image-1.5. Altman posted about the new model on his X account, and, as an example of its capabilities, included an AI-generated image of himself as a shirtless, muscular firefighter standing above a Christmas-themed December calendar. 

    According to X’s metrics, Altman’s firefighter post has been viewed over four million times and reposted over 1,000 times. Several of those reposts pointed out that the December dates in the calendar aren’t accurate to 2025, while others remarked on the disparity between Altman’s bold claims of using AI to cure cancer and eliminate poverty and OpenAI’s current offerings. 

    GPT-Image-1.5 is designed to compete against Nano Banana, the popular AI image generator and editor Google released in August. According to a recent report from The Information, OpenAI deprioritized development on new image models several months ago, but when Google released Nano Banana, “leaders at OpenAI rushed to improve its image technology.” 

    The Information also reported that according to some OpenAI employees, for much of 2025 “Altman seemed to be running OpenAI as if it had already conquered the chatbot market,” venturing beyond the core ChatGPT business into AI video and social media with Sora, web browsers with ChatGPT Atlas, and a physical device currently being designed by Jony Ive. Some of these initiatives reportedly “took resources away from efforts to increase ChatGPT’s mass appeal.” 

    In a video posted to OpenAI’s X account on December 17, OpenAI co-founder and president Greg Brockman admitted that new products like image generation require large amounts of compute, which has forced leadership to make difficult trade-offs. 

    When OpenAI released its previous frontier image-generation model in March of this year, it set off a viral trend of users generating images in the style of beloved anime production company Studio Ghibli. Usually, having your product go viral is an absolute win for businesses, but according to Brockman, the trend was so massive that OpenAI decided to “take a bunch of compute from research and move it to our deployment” in order to meet the demand. “That was really sacrificing the future for the present,” Brockman said in the video. 

Ben Sherry

  • The Trump administration’s biggest impact on education in 2025 

    Even with a conservative think tank’s blueprint detailing how the second Trump administration should reimagine the federal government’s role in education, few might have predicted what actually materialized this year for America’s schools and colleges. 

    Or what might be yet to come. 

    “2025 will go down as a banner year for education: the year we restored merit in higher education, rooted out waste, fraud and abuse, and began in earnest returning education to the states,” Education Secretary Linda McMahon told The Hechinger Report. She listed canceling K-12 grants she called wasteful, investing more in charter schools, ending college admissions that consider race or anything beyond academic achievement and making college more affordable as some of the year’s accomplishments. 

    “Best of all,” she said, “we’ve begun breaking up the federal education bureaucracy and returning education control to parents and local communities. These are reforms conservatives have championed for decades — and in just 12 months, we’ve made them a reality.” 

    Related: Become a lifelong learner. Subscribe to our free weekly newsletter featuring the most important stories in education. 

    McMahon’s characterization of the year is hardly universal. Earlier this month, Senate Democrats, led by independent Sen. Bernie Sanders, called out some of the administration’s actions this year. They labeled federal changes, especially plans to divide the Education Department’s duties across the federal government, dangerous and likely to cause chaos for schools and colleges. 

    “Already, this administration has cancelled billions of dollars in education programs, illegally withheld nearly $7 billion in formula funds, and proposed to fully eliminate many of the programs included in the latest transfer,” the senators wrote in a letter to Republican Sen. Bill Cassidy, chair of the committee that oversees education. “In our minds, that is unacceptable.” 

    So, what really happened to education this year? It was almost impossible for the average observer to keep track of the array of changes across colleges and universities, K-12 schools, early education and education research — and what it has all meant. This is a look back at how the education world was transformed. 

    Related: Tracking Trump: How he’s dismantling the Education Department and more 

    Higher education

    The administration was especially forceful in the higher education arena. It used measures including antidiscrimination law to quickly freeze billions of dollars in higher education research funding, interrupting years-long medical studies and coercing Columbia, Brown, Northwestern and other institutions into handing over multimillion-dollar payments and agreeing to policy changes demanded by the administration.

    A more widespread “compact” promising preference for federal funding to universities that agreed to largely ideological principles had almost no takers. But in the face of government threats, universities and colleges scrapped diversity, equity and inclusion, or DEI, programs that provided support based on race and other characteristics, and banned transgender athletes from competing on teams corresponding to genders other than the ones they were assigned at birth.

    As the administration unleashed its set of edicts, Republicans in Congress also expanded taxes on college and university endowments. And the One Big Beautiful Bill Act made other big changes to higher education, such as limiting graduate student borrowing and eliminating certain loan forgiveness programs. That includes public service loan forgiveness for graduates who take jobs with organizations the administration designated as having a “substantial illegal purpose” because they help refugees or transgender youth. In response, states, cities, labor unions and nonprofits immediately filed suit, arguing that the rule violated the First Amendment. 

    The administration has criticized universities, colleges and liberal students for curbing the speech of conservatives by shouting them down or blocking their appearances on campuses. However, it proceeded to revoke the visas of and begin deportation proceedings against international students who joined protests or wrote opinions criticizing Israeli actions in Gaza and U.S. government policy there.  

    Meanwhile, emboldened legislatures and governors in red states pushed back on what faculty could say in classrooms. College presidents including James Ryan at the University of Virginia and Mark Welsh III at Texas A&M were forced out in the aftermath of controversies over these issues. — Jon Marcus

    Related: How Trump 2.0 upended education research and statistics in one year  

    K-12 education

Since Donald Trump returned to office earlier this year, K-12 schools have lost millions of dollars in sweeping cuts to federal grants, including money that helped schools serve students who are deaf or blind, grants that bolstered the dwindling rural teacher workforce and funding for Wi-Fi hotspots.

Last summer, the Trump administration briefly froze billions of dollars in federal funding for schools on June 30, one day before districts would typically apply to receive it. Although the money was restored in late July, some school leaders said they no longer felt confident they would receive all expected federal funds next year. And they are braced for more cuts to federal budgets as the U.S. Department of Education is dismembered.

    That process, as well as the end goal of returning the department’s responsibilities to the states, has raised uncertainty about whether federal money will continue to be earmarked for the same purposes. If the state of Illinois is in charge of federal funding for every school in the state, said Todd Dugan, superintendent of a rural Illinois district, will rural schools still get money to boost student achievement or will the state decide there are more pressing needs?  

    Even as the Trump administration attempts to push more control over education to the states, it has aggressively expanded federal power over school choice and transgender student rights in public schools. The One Big Beautiful Bill Act will create a federal school voucher program, allowing taxpayers to donate up to $1,700 for scholarships that families can use to pay for private school. The program won’t start until 2027, and states can choose whether to participate — setting up potentially divisive fights over new money for education in Democratic-controlled states. 

    Already, some Democratic-led states have come to the defense of schools in funding and legal fights with the federal government over transgender athletes participating in sports. The U.S. departments of Education and Justice launched a special investigations team to look into complaints of Title IX violations, targeting school districts and states that don’t restrict accommodations or civil rights protections for transgender students. Legal experts expect the U.S. Supreme Court to ultimately decide how Title IX — a federal law that prohibits sex discrimination in education — applies to public schools.

    The federal government directly runs just two systems of schools — one for military families and the other for children of tribal nations. In an executive order signed in January, the president directed both systems to offer parents a portion of federal funding allocated to their children to attend private, religious or charter schools. 

    And as part of the dismantling of the federal Education Department, the Interior Department — which oversees 183 tribal schools across nearly two dozen states — will assume greater control of Indian education programs. In addition to rolling out school choice at its campuses, the department will take over Indian education grants to public schools across the country, Native language programs, Alaska Native and Native Hawaiian programs, tribally controlled colleges and universities, and many other institutions. — Ariel Gilreath and Neal Morton

    Related: Trump administration makes good on many Project 2025 education goals

    Early education

    Early education was not at the top of Trump’s agenda when he returned to office. On the campaign trail, when asked if he would support legislation to make child care affordable, he gave an unfocused answer, suggesting tariff revenue could be tapped to bring down costs. Asked a similar question, Vice President JD Vance suggested that care by family members was one potential solution to child care shortages. 

However, many of the administration’s actions, including cuts to the government workforce and grants, have affected children who depend on federal support. In April, the administration abruptly closed five of 10 regional offices supporting Head Start, the free, federally funded early childhood program for children from low-income families. Head Start program managers worried they would be caught up in a freeze on grant funding that affected all agencies. Even though administration officials said funds would keep flowing to Head Start, some centers reported having problems drawing down their money. The prolonged government shutdown, which ended Nov. 12 after 43 days, also forced some Head Start programs to temporarily close.

    Though the shutdown is over, Head Start advocates are still worried. Many of the administration’s actions have been guided by the Project 2025 policy document created by the conservative Heritage Foundation. Project 2025 calls for eliminating Head Start, which serves about 715,000 children from birth to age 5, for a savings of about $12 billion a year. 

    The One Big Beautiful Bill Act contained some perks for parents, including an increase in the child tax credit from $2,000 to $2,200. The bill also created a new program called Trump accounts: Families can contribute up to $5,000 each year until a child turns 18, at which point the Trump account will turn into an individual retirement account. For children born between Jan. 1, 2025, and Dec. 31, 2028, the government will provide a $1,000 bonus. Billionaires Michael and Susan Dell have also promised to contribute $250 to the account of each child ages 10 and under who lives in a ZIP code with a median household income of $150,000 or less. 

    That program will launch in summer 2026. — Christina A. Samuels

    Contact staff writer Nirvi Shah at 212-678-3445, on Signal at NirviShah.14 or shah@hechingerreport.org.   

    This story about the Trump administration’s impact on education was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.


Nirvi Shah

  • Adobe hit with proposed class-action, accused of misusing authors’ work in AI training | TechCrunch

    Like pretty much every other tech company in existence, Adobe has leaned heavily into AI over the past several years. The software firm has launched a number of different AI services since 2023, including Firefly — its AI-powered media-generation suite. Now, however, the company’s full-throated embrace of the technology may have led to trouble, as a new lawsuit claims it used pirated books to train one of its AI models.

    A proposed class-action lawsuit filed on behalf of Elizabeth Lyon, an author from Oregon, claims that Adobe used pirated versions of numerous books — including her own — to train the company’s SlimLM program.

Adobe describes SlimLM as a small language model series “optimized for document assistance tasks on mobile devices.” The company says SlimLM was pre-trained on SlimPajama-627B, a “deduplicated, multi-corpora, open-source dataset” released by Cerebras in June 2023. Lyon, who has written a number of guidebooks on nonfiction writing, says some of her works were included in the pretraining dataset Adobe used.

Lyon’s lawsuit, first reported by Reuters, says her writing was included in a processed subset of a manipulated dataset that formed the basis of Adobe’s program: “The SlimPajama dataset was created by copying and manipulating the RedPajama dataset (including copying Books3),” the lawsuit says. “Thus, because it is a derivative copy of the RedPajama dataset, SlimPajama contains the Books3 dataset, including the copyrighted works of Plaintiff and the Class members.”

    “Books3” — a huge collection of 191,000 books that have been used to train GenAI systems — has been an ongoing source of legal trouble for the tech community. RedPajama has also been cited in a number of litigation cases. In September, a lawsuit against Apple claimed the company had used copyrighted material to train its Apple Intelligence model. The litigation mentioned the dataset and accused the tech company of copying protected works “without consent and without credit or compensation.” In October, a similar lawsuit against Salesforce also claimed the company had used RedPajama for training purposes. 

    Unfortunately for the tech industry, such lawsuits have, by now, become somewhat commonplace. AI algorithms are trained on massive datasets and, in some cases, those datasets have allegedly included pirated materials. In September, Anthropic agreed to pay $1.5 billion to a number of authors who had sued it and accused it of using pirated versions of their work to train its chatbot, Claude. The case was considered a potential turning point in the ongoing legal battles over copyrighted material in AI training data, of which there are many.

Lucas Ropek

  • Elon Musk Predicts AGI by 2026 (He Predicted AGI by 2025 Last Year)

    Elon Musk predicts that his company xAI could achieve artificial general intelligence (AGI) within the next couple of years, and maybe as soon as 2026, according to a new report from Business Insider. If it feels like you’ve heard that one before, it’s probably because you have.

    Musk predicted the same thing in 2024, claiming AGI would be achieved by 2025. Take a look at any calendar, and you’ll see that we’re just a few weeks away from the end of 2025.

    “How long until AGI?” asked Logan Kilpatrick, the head of product at Google AI Studio, in May 2024.

“Next year,” Musk replied, to which Kilpatrick responded, “Big if true.”

    It wasn’t true, of course. But Musk has a long history of, shall we say, optimistic predictions about his own company’s future accomplishments. And his predictions often have ulterior motives.

Remember when Musk was making the most noise about the dangers of AI and worries that it could destroy the world? The billionaire signed on to a letter in March 2023 calling for a six-month pause in all AI development. It was revealed less than a month later that Musk was secretly building his own AI project at Twitter. By July 2023, Musk had officially announced the creation of xAI, the company that makes his Grok AI chatbot.

    The CEO wasn’t earnestly worried about the risks posed by AI. He was just frustrated that OpenAI was way ahead at the time.

Musk’s treatment of AGI, or any new technology, largely depends on how he can hype his companies at any given point in time. And the perpetual prospect of achieving AGI, whether you think it would be good or bad for the world, helps drive investment in AI technology, the thing that seems to be propping up the entire U.S. economy at the moment.

    The new report from Business Insider also says that Musk told xAI staff that investment in the private company was going well, with “around $20 billion to $30 billion in funding per year.” An email to xAI with questions about the report was met with an auto-response that simply said “Legacy Media Lies.” Musk has great contempt for the news media and previously had an auto-responder at Twitter that sent a poop emoji.

Part of the problem in discussing AGI is that there’s no single agreed-upon definition. As IBM describes it, we’ll have achieved AGI when artificial intelligence can “match or exceed the cognitive abilities of human beings across any task.” But obviously defining terms like “cognitive abilities” and “any task” is extremely complicated.

    Other folks like to define AGI as a kind of self-awareness that would make artificial intelligence more like humans. Instead of just regurgitating words from its training data, the AI would understand itself as a kind of consciousness. People in that camp are excited and/or concerned about that theoretical tipping point because they assume it would be the start of the robot revolution and AI’s attempt to destroy humanity. Musk has hyped those fears tremendously, though he’s backed off recently.

Absent large robotic armies, achieving AGI in the present day with a system that loathes humanity would probably look more like the 1970 sci-fi movie Colossus: The Forbin Project, in which a non-humanoid supercomputer seizes control of nuclear weapons to threaten the world. We don’t really have the advanced humanoid robots for a Terminator 2 situation just yet.

    But Musk is working on that too. He predicts Tesla will produce 1 million humanoid Optimus robots per year within the next five years, and they’ll even be babysitting your kids. He just needs to figure out how to get Optimus working without teleoperation before all of that can happen.

    Who knows? AGI could magically be achieved in the next few weeks, and maybe Musk’s old prediction will come true. But the billionaire also has another prediction deadline just over the horizon. Back in October, Musk told Joe Rogan he’d demonstrate a flying car by the end of this year.

Matt Novak

  • New exoskeleton adapts to terrain with smart AI power

    Recreational exoskeletons have been popping up for years, but the new IRMO M1 exoskeleton feels like a turning point. This next-generation wearable blends artificial intelligence (AI), a forward-facing camera, LADAR sensors and lightweight robotics to give your legs a serious boost on trails and city streets. 

    It scans the terrain ahead and predicts how much assistance you will need before your foot lands, rather than waiting to react after impact. The result feels smoother and easier than many older systems that rely mainly on reactive support.

    IRMO spun out of research from Beihang University in Beijing. Its team built the M1 to act like an all-terrain, adaptive suspension system for your legs, tuned for real-world walking and hiking conditions. Early backers on its global crowdfunding campaign are already pushing the project toward major funding goals.

    How the IRMO M1 exoskeleton boosts your stride

    The M1 straps onto your waist and legs with modular fast-release bands. Each leg module weighs between 2.2 and 2.6 pounds. A 1,000 W motor provides up to 45% assist on each stride. IRMO says the system can take as much as 50 pounds of stress off your knees, which can help reduce fatigue on long days outside.

    Inside the frame sits a nine-axis IMU paired with an AI engine that studies your gait in real time. This lets the M1 fine-tune each push as it learns how you move. That alone puts it in line with top performance exoskeletons, but IRMO adds something new that it says shifts the entire experience.

    Terrain-aware AI that looks ahead while you move

    The M1 scans a four-foot radius around you with camera and laser-rangefinder sensors. This lets it read the terrain before you reach it. The system adjusts power output based on what is coming up next. If the M1 detects stairs, grass, sand or slopes, it prepares your legs with added support or added control.

    It can help you climb with more power, absorb impact on jumps and steady your pace on steep declines. IRMO says this predictive shift can reduce knee impact by up to 60%. You can use the M1 for hiking, running, jumping, cycling and even sports like basketball or tennis. If you want to build strength, you can switch from assist to resistance mode.

    Multimode control for every activity

    The M1 includes four primary modes.

    • Turbo gives maximum support for intense efforts
    • Eco offers steady help for long walks
    • Training delivers resistance for workouts
    • Rest keeps the motors from firing when you stop

You control everything in the IRMO app, including battery life and performance stats. With energy recovery tech, the M1 can run for up to eight hours. It works in temperatures from –4°F to 104°F and offers waterproofing up to IP67.

    How to buy the IRMO M1

    You can buy the IRMO M1 through its Kickstarter campaign, which runs until early January 2026. Prices start at $399 for the M1 Neo tier and rise through several launch specials that offer different levels of power, weight and features. Higher tiers include models like the M1 Pro and M1 Ultra, which add stronger motors, lighter frames and longer battery performance. Each pledge level lists what comes in the box, including the main units, straps, charger, battery pack, user manual and packaging. Shipping is global with an estimated delivery window of May 2026.

    Since this is a crowdfunding project, you should review the refund rules, shipping details and risk notes before you back it.

    What this means for you

If you love the outdoors but feel the strain of long climbs or steep descents, this technology promises to reopen trails that once felt out of reach. The M1 could help you hike farther, recover faster and protect your knees on tough routes. It also gives recreational athletes a new tool for strength, balance and endurance.

    Kurt’s key takeaways

    The IRMO M1 exoskeleton blends robotics and AI into a wearable that is built to expand your range outdoors. Its terrain-aware design separates it from earlier models, and its multimode setup makes it useful for more than hiking. If the Kickstarter delivers on time, the M1 could mark a major shift in personal mobility tech. 

    Would you trust an AI-powered exoskeleton to boost your next adventure? Let us know by writing to us at Cyberguy.com.

    Copyright 2025 CyberGuy.com.  All rights reserved.


  • OpenAI announces upgrades for ChatGPT Images with ‘4x faster generation speed’

    OpenAI announced an update for ChatGPT Images that it says drastically improves both the generation speed and instruction-following capability of its image generator.

A blog post from the company Tuesday says the update makes it much easier to apply precise edits to AI-generated images. Previous iterations of the program struggled to follow instructions and often made unasked-for changes.

    “The update includes much stronger instruction following, highly precise editing, and up to 4x faster generation speed, making image creation and iteration much more usable,” the company wrote.

    “This marks a shift from novelty image generation to practical, high-fidelity visual creation — turning ChatGPT into a fast, flexible creative studio for everyday edits, expressive transformations, and real-world use.”

The announcement comes just weeks after OpenAI CEO Sam Altman declared a “code red” in an internal company memo calling for improvements to the quality of ChatGPT.

    In the document, Altman said OpenAI has more work to do on enhancing the day-to-day experience of its chatbot, such as allowing it to answer a wider range of questions and improving its speed, reliability and personalization features for users, according to The Wall Street Journal.

    The reported company-wide memo from Altman comes as competitors have narrowed OpenAI’s lead in the AI race. Google last month released a new version of its Gemini model that surpassed OpenAI on industry benchmark tests.

    To focus on the “code red” effort to improve ChatGPT, OpenAI will be pushing back work on other initiatives, such as a personal assistant called Pulse, advertising and AI agents for health and shopping, Altman said in the memo, according to the Journal.

    Altman also said the company would have a daily call among those responsible for enhancing ChatGPT, the newspaper added. 

    “Our focus now is to keep making ChatGPT more capable, continue growing, and expand access around the world — while making it feel even more intuitive and personal,” Nick Turley, the head of ChatGPT, wrote on X Monday night.

OpenAI currently isn’t profitable and must raise outside funding to survive, unlike competitors such as Google, which can fund its AI investments through revenue, the Journal reported.
