ReportWire

Tag: Artificial Intelligence

  • Clair Obscur leads the AP’s list of 2025’s top video games

    It’s been a difficult year for the people who create video games, with layoffs persisting while the tech industry tries to force us to use artificial intelligence for everything. But great games emerged nonetheless — and I can’t imagine AI ever being able to deliver the kind of thrilling, rewarding adventures we’ve seen in 2025.

    The biggest story this year was the release of Nintendo’s new console, the Switch 2. It’s a terrific piece of hardware, but it doesn’t yet have the killer app that makes it essential.

    The second biggest story was the arrival, seemingly out of nowhere, of one marvelous game that left many of us slack-jawed with wonder. It’s as profound an example of interactive storytelling as I’ve ever seen, and an easy choice for game of the year.

    1. Clair Obscur: Expedition 33

    The debut release from French studio Sandfall Interactive pays tribute to classic turn-based role-playing adventures like 1990s Final Fantasy, with a crew of intrepid fighters on a mission to confront a potentially world-destroying entity. But, man, does it take some surprising twists — I can’t remember a game that had me gasping so often, either in horror or delight. The graphics and music are stunning throughout, and it’s all anchored by impeccable voice acting that made me care deeply about every single character. Altogether, a landmark achievement.

    2. The Outer Worlds 2

    Scenes from “The Outer Worlds 2.” (Xbox Game Studios via AP)

    California’s Obsidian Entertainment has become one of the premier studios in the U.S., and this spacefaring romp is its best game yet. It drops you into a galactic feud among three political philosophies: totalitarianism, hypercapitalism and a math-based religion (think of the most annoying techbro you know). There’s plenty of satisfying combat against radioactive mutants and renegade robots, but even the grimmest situations are juiced with healthy doses of satire as you try to navigate the demands of all three would-be overlords.

    3. Silent Hill f

    The latest chapter of Konami’s long-running franchise digs into its J-horror roots, moving the action from America to Japan in the 1960s. Hinako Shimizu, the teenage protagonist, not only has to confront the trauma of high school — she has to fight off the grotesque monsters that have invaded her small town. What makes Silent Hill f fascinating is the way the two nightmares seem to be related. It’s the scariest horror game in years.

    4. Assassin’s Creed Shadows

    Another young Japanese woman takes center stage in this sprawling adventure from Ubisoft. Naoe is a crafty ninja in feudal Japan who’s out to avenge her father’s murder. She’s soon joined by Yasuke, a powerful samurai. The mission variety here is impressive, letting you switch on the fly between Naoe’s stealthy attacks and Yasuke’s brute force. It’s a shining example of Ubisoft’s do-it-your-way approach to the open-world format.

    5. Donkey Kong Bananza

    The best new game on Nintendo’s Switch 2 is ideal for those times when all you want to do is punch something. The big ape’s bananas have been stolen and he has to dive into a vast underworld to retrieve them. Almost all of the environments are destructible, but when you get tired of pounding there are plenty of clever puzzles and minigames that often hark back to DK’s swinging jungle adventures.

    6. The Séance of Blake Manor

    In this haunting mystery from Ireland’s Spooky Doorway, a group of mystics have gathered around Halloween 1897 to commune with the dead. You’re called in to investigate when one of the living humans vanishes. It’s a classic point-and-click puzzle game in which everyone has something to hide. It also digs deep into Irish folklore and history, adding an urgent element of class struggle to a very effective ghost story.

    7. Avowed

    Scenes from the video game "Avowed." (Xbox Game Studios via AP)

    Speaking of class struggle, Obsidian Entertainment’s other big role-playing game of 2025 doesn’t shy away from politics either. You are an emissary sent to investigate a deadly plague in the quasi-medieval Living Lands. Problem is, few of the locals are happy to see you, and they’re too busy fighting each other to help much. Again, Obsidian’s mastery of role-playing action is on full display, this time with swords and spells rather than lasers.

    8. Ghost of Yōtei

    Scenes from the video game "Ghost of Yōtei." (Sony via AP)

    Yet another Japanese woman takes the lead in this revenge drama from Sony’s Sucker Punch Productions. Atsu is a mercenary who returns to rural Japan in the 1600s to hunt down her family’s killers, stirring rumors that an “onryō” — a vengeful ghost — is on the loose. The narrative is tighter than that of AC Shadows, and this is a real treat for fans of classic samurai movies — especially if you play in black-and-white “Kurosawa mode.”

    9. South of Midnight

    This fantasy from Canada’s Compulsion Games is a hypnotic evocation of the mythology of the U.S. Deep South. After a hurricane rips through her neighborhood, a woman named Hazel ventures into the bayou. The creatures she meets — a talking catfish, a massive gator, a blues-playing ghoul — are gorgeously rendered in stop-motion-inspired animation. The gameplay is fairly simple, but the art and music make for a memorable journey.

    10. The Alters

    In this survival adventure from Poland’s 11 Bit Studios, you are a humble engineer left on a hostile planet. Fortunately there’s a movable base nearby — but you can’t run it alone, so you’re going to have to clone yourself. Each clone has different personality tics, and the result is a fascinating metaphysical brainteaser that will have you wondering how long you’d be able to put up with half a dozen versions of you.

    Source link

  • A New Way to Ruin Thanksgiving: Making AI Slop Recipes

    Remember when people started asking AI tools for cooking advice, and it wound up telling them to do things like use glue to get cheese to stick on pizza? Well, people are apparently relying on that same technology to guide them through cooking this year’s Thanksgiving dinner. In fact, so many are doing so that Bloomberg reports it’s putting a real dent in the views of recipe writers who usually see traffic spike this time of year.

    The problem is effectively the same one that led to Google previously recommending that people eat one rock per day: AI Overviews in Search. They provide users with a quick panel that pulls out all of the “relevant information” without requiring them to click through to a website and scroll through the admittedly annoying 2,000-word personal essay that precedes every recipe ever posted online.

    This creates two issues. The first is for the recipe authors, who have put actual work—from their collected knowledge of food to the effort of prep work to the trial and error to get the final product just right—into the recipes they share. They’re getting their traffic siphoned off by the AI Overviews. Creators that Bloomberg spoke with said their traffic was down between 40% and 80% this year from previous Thanksgivings. That’s in line with the experience of other sites, too, which have reported click-through declines of as much as 80% since AI Overviews became more prominent.

    The second problem is for people making the recipes, because there is a very real chance that they are getting bad information. Here’s the thing about AI summaries of anything: the model doesn’t actually understand what it is reading. All it can do is spit back what it thinks is relevant. That’s kind of a big deal for cooking, where little errors can ruin a dish. For instance, Bloomberg talked to one cook who has a popular Christmas cake recipe. The creator’s page suggests baking it at 160°C (that’s 320°F) for an hour and a half. An AI-summarized version of that recipe recommends you bake it for three to four hours—more than twice as long. You don’t have to know a whole lot about baking to know that’s not going to turn out great.
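
    For the curious, the arithmetic here is easy to sanity-check yourself (a throwaway sketch; the numbers come from the anecdote above, nothing else is from the article):

```python
def c_to_f(celsius: float) -> float:
    # Standard Celsius-to-Fahrenheit conversion
    return celsius * 9 / 5 + 32

# The creator's stated oven temperature: 160 C really is 320 F.
print(c_to_f(160))  # 320.0

# The AI-summarized bake time (3-4 hours) vs. the original (1.5 hours):
# even the low end is exactly double the recipe's actual time.
print(3 / 1.5)  # 2.0
```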

    AI-generated recipes have become a whole micro-industry. If you hop on any social platform and go looking for ideas of what to cook, there’s a good chance you’ll land on a page that looks like your standard cooking inspiration fare—but you might notice that the recipes just aren’t quite right. Best-case scenario, you’ll probably end up with a relatively bland but perfectly fine dish. Worst case, you might end up burning down your house because somewhere in the black hole that is a large language model, it decided that you should put your tinfoil-wrapped fish in the microwave on high.

    Maybe grab one of those old cookbooks off the shelf this holiday season just to be safe.

    AJ Dellinger

    Source link

  • WIRED Roundup: Gemini 3 Release, Nvidia Earnings, Epstein Files Fallout

    Zoë Schiffer: Yeah, I think that one thing that everyone can agree on is that Nvidia is undoubtedly one of the companies that has gone all in during this AI acceleration moment. For better or worse, about 90 percent of Nvidia’s sales, which were once dominated by chips for personal gaming computers, now come from its data center business. And it feels like every time one of these partnerships between OpenAI and another company comes up, Nvidia’s in there somewhere. It just feels like it’s attached to everyone else in this industry at this point.

    Max Zeff: Yeah, it’s done a great job of infusing itself with every AI company, but also, I mean, that’s been a major concern. There’s been a lot of talk of these circular deals where Nvidia really depends on a lot of these startups that it’s also funding. It’s a customer, it’s an investor. Nvidia is so wrapped up in this. So I guess in that way, it’s not that surprising that Jensen is defending the AI bubble constantly now.

    Zoë Schiffer: Yeah. It’s also worth saying that one of the fears that people who have the fear of the AI bubble will talk about is the fact that the GPUs are the majority of the cost of building out a data center, and they need to be replaced, what, every three years? Nvidia releases new chips and they’re cutting edge, and companies need to buy them in order to compete. I think the fear is that that renewal cycle isn’t quite factored into the pricing, but as long as people continue to buy chips, what Jensen is saying is, “No, no, we’re insulated right now.”

    Max Zeff: Right. We’ll see if that’s really true though.

    Zoë Schiffer: One more story before we go to break, and to get through this one, we both have to be extra professional. I’m not sure Max, which we always are, but just a little extra. You will see what I mean. WIRED contributor Mattha Busby reported on how two young Mormons created an app to help other men break their porn addiction and gooning habits. I’m going to be real. I had never heard this term before reading this story, and I was shocked. OK, if you’re not familiar with gooning, it’s basically just another word for edging. That is long hours of masturbation without release. This app called Relay was created by 27-year-old Chandler Rogers with the mission of providing his Gen Z peers a way to stop doing this and to generally escape from the clutches of porn. I have some other ideas. I feel like go outside, talk to a human, but I don’t want to be mean, because I do feel like this could be really difficult for people.

    Zoë Schiffer, Maxwell Zeff

    Source link

  • Why Iceland Is Becoming a Model for Renewable-Powered High-Performance Computing

    With abundant renewable energy, efficient cooling and community-first development, Iceland shows how data centers can grow without compromising the planet. Unsplash+

    As the demand for A.I.-ready digital infrastructure skyrockets, data center development has become an urgent and necessary foundation for a wide spectrum of high-performance computing technologies—and for the businesses that are increasingly dependent on them. Unsurprisingly, data center construction has surged globally. Yet as growth accelerates, the roadblocks to building at the required pace and scale have become far more pronounced.

    Arguably, the most critical factor in data center development today is access to power. Alex de Vries-Gao, the founder of tech sustainability website Digiconomist, estimates that by the end of 2025, power demand from A.I. systems could reach 23 gigawatts—twice the total energy consumption of the Netherlands.

    This poses two intertwined challenges. First, many countries simply lack sufficient power or a modern grid capable of supporting these demands. Much of the U.S. and U.K. national grid infrastructure was built between 1950 and 1970 and designed around large coal-fired plants—a post-war regeneration system now decades overdue for modernization. As coal availability waned, nuclear and renewable sources such as wind and solar began to fill the gap. Yet, these types of energy systems take time to develop and rely heavily on robust, upgraded power networks. The sudden increase in power demand resulting from the proliferation of data centers has highlighted the crucial need for investment in power infrastructure globally.

    Second, the demand for such vast power has sharpened scrutiny on the carbon footprint of data centers. As a result, data-intensive businesses are increasingly looking for data center partners that have proven sustainability credentials and can help decarbonize their IT workloads. That often means looking further afield than your local neighborhood data center provider to find a partnership that is environmentally and financially beneficial and sustainable long-term. At atNorth, we are seeing unprecedented demand for environmentally responsible A.I. infrastructure at speed and scale. Power availability simply cannot be allowed to become a limiting factor to growth.

    The Icelandic example

    Data centers located in cooler climates such as the Nordics can leverage highly energy-efficient cooling systems that significantly reduce the energy required to power and cool the hardware they host. The region also benefits from abundant renewable energy and relatively young, resilient power and internet networks. 

    Iceland, in particular, is a global leader in clean energy: 71 percent of its energy is generated by hydropower, and 29 percent from geothermal energy. Icelandic data centers can combine renewable energy with the country’s naturally cool ambient temperatures to achieve exceptional energy efficiency. While the global average Power Usage Effectiveness (PUE)—the metric of data center energy efficiency where the ideal value is 1.0 (representing 100 percent efficiency)—hovers around 1.48, Icelandic facilities average between 1.1 and 1.2, enabling customers to significantly decarbonize their IT workloads. For example, BNP Paribas lowered its total cost of ownership, cut energy use by 50 percent and reduced CO₂ output by 85 percent by relocating a portion of its IT infrastructure to one of atNorth’s Icelandic facilities.
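
    To make the PUE comparison concrete, here is a minimal sketch using hypothetical energy figures (not atNorth or BNP Paribas data): PUE is simply total facility energy divided by the energy the IT equipment itself consumes.

```python
def pue(total_facility_energy: float, it_equipment_energy: float) -> float:
    """Power Usage Effectiveness: total energy entering the data center
    divided by the energy consumed by the IT equipment alone.
    1.0 is the theoretical ideal (every watt goes to compute)."""
    return total_facility_energy / it_equipment_energy

# Hypothetical facility: servers draw 10,000 kWh; cooling and other
# overhead bring the total to 11,500 kWh, giving a PUE of 1.15 —
# inside the 1.1-1.2 range cited for Icelandic facilities.
print(round(pue(11_500, 10_000), 2))  # 1.15
```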

    Temperatures in Iceland typically range from 30°F (-1°C) in winter to 52°F (11°C) in summer, enabling free-air cooling of some IT workloads. As compute density increases to accommodate A.I. and other high-performance applications, more advanced cooling technologies, such as Direct Liquid Cooling (DLC) and direct-to-chip cooling, have become essential; these use water or other coolants to remove heat from the computer equipment far more efficiently thanks to superior heat dissipation. These solutions are widely available in Iceland and across the Nordic countries, which are well known for their environmentally friendly ethos and circular economy principles.

    Moreover, Iceland’s political and economic stability offers another key advantage as geopolitical uncertainty grows across regions. Businesses are now more sensitive to the physical location of their data and the legal frameworks that govern it. As a member of the European Economic Area (EEA), Iceland has adopted the E.U.’s General Data Protection Regulation (GDPR) and reinforced it with national legislation, resulting in robust safeguards for data privacy and security.

    Going beyond carbon reduction

    These factors have driven a surge in Nordic data center development in recent years, positioning the region at the forefront of the industry. While much of the world works to upgrade legacy power networks in order to start building data centers, the Nordic countries are addressing newer challenges associated with more mature data center development. Certainly, at atNorth, we have seen growing demand for a more holistic approach to sustainability and responsible operations. It is not enough to mitigate environmental impact; data center operators must deliver tangible benefits to the local communities in which they operate to support long-term sustainability and economic growth.

    Using the most sustainable materials possible is one factor that can showcase an honest commitment to care for the natural environment. atNorth’s ICE03 data center was constructed using Glulam, a sustainable laminated wood product with lower environmental impact and superior fire resistance compared to steel. Similarly, the site was insulated using sustainable Icelandic rockwool, produced from natural volcanic basalt and known for its durability, fire resistance and low ecological footprint.  

    The process of heat reuse—the recycling of waste heat from the data center cooling systems for use in the local community—is a practice that is common in the Nordic countries and growing in popularity across northern Europe. This is a fundamental part of sustainable data center design, and even in countries like Iceland, where naturally heated geothermal water is abundant, opportunities for further improvement remain. At ICE03, for example, atNorth partnered with the municipality of Akureyri to channel waste heat into a new community-run greenhouse, which will provide a space for schoolchildren to explore ecological farming practices and sustainable food production. These initiatives reduce carbon emissions for both the data center and the receiving organization while addressing specific local needs, such as fresh vegetable production in a country that imports 80 percent of its fresh produce.

    Community engagement is also becoming pivotal to the data center development process as competition over suitable land intensifies. Just as the concept of a “trusted brand” has proven fundamental in the consumer retail market—with some research suggesting that 81 percent of consumers need to trust a brand before considering a purchase—the same principle extends to regional decision-making that directly affects the lives of local people. Therefore, operators that can demonstrate a genuine commitment to good corporate citizenship will undoubtedly find more success.

    To ensure authentic integration with local communities, local hiring is essential. Over 90 percent of the workforce involved in developing atNorth’s ICE03 site came from nearby communities. The company also supports local education, charities and community projects through volunteer support and financial donations—sponsoring a local run in Akureyri, funding Reykjanesbær’s light festival and donating advanced mechatronics equipment to Akureyri University to support training for data center-related careers. 

    Building for the A.I. era—responsibly 

    As digitalization intensifies, so will the demand for high-performance data center capacity. Yet such rapid expansion carries risks that could seriously undermine long-term sustainability. The boom-and-reckoning pattern seen in industries like palm oil—where explosive growth preceded significant deforestation—serves as a warning. 

    The data center industry must learn from history and chart a new path in which digital infrastructure can be technologically advanced, environmentally responsible and locally beneficial. In short: data centers must be developed to meet A.I.-era performance demands while driving responsible growth and long-term value for clients, communities and our planet.

    Erling Freyr Guðmundsson

    Source link

  • CEOs, You Can’t Afford to Delay on AI Any Longer. Here’s How to Embed It Into Your Business

    ChatGPT made its public debut in November 2022. Before then, artificial intelligence was largely a corporate buzzword or big tech slang. A little more than three years later, AI is no longer jargon; it’s ubiquitous. Everyone uses it everywhere, for everything. Looking down the road at 2030, AI is on track to dominate every aspect of business, from internal operations to external execution. Its potential to holistically transform how work gets done is endless. 

    While there is no question that AI will have a significant impact on the future of work, precisely what AI will look like in four years is yet to be determined. Many futurists suggest what’s to come, spanning grim visions of robots replacing humans to more optimistic images of AI improving the employee experience and providing more work/life balance. As always, the reality probably lies somewhere between the two, in a world where jobs look different.  

    People, however, are still the linchpin to organizational success. Either way, AI will impact every line on the P&L—revenues, costs, operations, people, and investments. It will affect every business leader’s ability to provide their product and/or service competitively. It will also impact their customers and competitors. 

    AI strategy 

    According to Vistage research, nearly three of four small- to midsize-business CEOs use an internally developed strategic planning process. However, these legacy frameworks often fail to accommodate new and emerging technologies. Leaders who don’t have a deliberate approach to integrating AI risk being left behind and unprepared for the market and economic realities of an AI-powered 2030.

    Adding AI to strategic planning can be daunting. Its uncharted and quickly evolving nature means there is no playbook or clearly defined destination in place. Add the dynamics of an AI-anxious workforce that is tasked with leveraging tools that they fear will eventually put them out of a job—in effect making people feel as though they are digging their own graves—and it’s no surprise many business leaders are wary about adding AI to their tried-and-true planning processes. However, AI is happening now. CEOs must begin embracing AI rapidly and intentionally to remain competitive—both today and down the road. 

    How to embed AI into your business’s strategic planning 

    Business leaders can begin embedding AI into their strategic planning by focusing on the following key areas: 

    • Market analysis. How is AI reshaping the marketplace, including competitors, pricing, and capabilities? 
    • Competitive advantage. How does it change your unique value proposition that customers will recognize and reward in an environment of rapidly changing customer requirements? 
    • Financial planning. How does it impact your ROI and investment models? 
    • Operational execution. How does it impact your productivity as an organization? How can you leverage employees’ individual productivity gains and how can you automate existing workflows to capitalize on the power of AI? 
    • Skills and tools. What skills will your workforce need to develop? What tools will they need to thrive in the future? 
    • Governance. How can you ensure you have the right security protocols, data protection, and ethical considerations in place? 

    By diving deep into these six areas, CEOs can begin honing their long-term vision and tactical approach to integrating AI into their business. By developing a strong point of view and blueprint for implementing AI, CEOs can position themselves for long-term gains. Overcoming the hesitation to integrate AI is challenging, and taking AI from experimentation to mastery is no small—nor speedy—task.  

    Make no mistake. AI is here, and it is already actively transforming business. Those who take a proactive approach to AI will be primed for success, whether it’s in 2026, 2030, or beyond. 

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

    Joe Galvin

    Source link

  • Those Viral Photos of Elon and Zuck Are AI. But Google Launched a New Way to Check for Fakes

    Photos appearing to show Elon Musk and several other Big Tech CEOs have gone viral in the past week on X and Bluesky. The mundane environments, including humble apartments and McDonald’s parking lots, should have given everyone a hint that they’re fake. But there’s a new way for the average person to check for themselves whether the images were made with AI. And it’s actually really useful.

    Right off the bat, it should be said that the vast majority of AI image detectors are not reliable. Many people think you can use tools that are openly available on the web and figure out if a given image is AI. But they’re not good. For example, people often ask Grok on X whether a photo was created with generative artificial intelligence. And it frequently gets the answer wrong. Sometimes in amusing ways.

    Google developed an AI watermark called SynthID a couple of years ago, but the company didn’t allow the average user to check whether an image had the watermark. That changed just a few days ago. Now anyone can upload an image to Gemini and ask if it has the SynthID watermark, which is invisible to the naked eye.

    The watermark is embedded in the pixels and every image created with Google’s AI creation tools will have it. Checking for the watermark is now easy for anyone who opens up Gemini.

    From Google’s announcement:

    If you see an image and want to confirm it has been made by Google AI, upload it to the Gemini app and ask a question such as: “Was this created with Google AI?” or “Is this AI-generated?”

    Gemini will check for the SynthID watermark and use its own reasoning to return a response that gives you more context about the content you encounter online.

    Obviously Gemini is less equipped to tell you if an image is AI if it wasn’t made with Google tools like Nano Banana Pro. And that’s the entire reason the company appears to be launching SynthID detection in Gemini in this moment. Nano Banana Pro launched last week and it’s allowing users to make incredibly realistic images, including images of Elon Musk and other tech CEOs that look very real.

    Some of those images have recently gone viral, like one that racked up nearly 9 million views on X before migrating to other platforms like Bluesky. The image shows Musk, Nvidia CEO Jensen Huang, Google CEO Sundar Pichai, Apple CEO Tim Cook, Amazon founder Jeff Bezos, Microsoft CEO Satya Nadella, and Meta CEO Mark Zuckerberg all standing together in a small apartment.

     

    Other versions of the image include OpenAI CEO Sam Altman, with the men standing around in a parking lot, pictured at the top of this article. For some reason, Musk is seen smoking a cigar in a couple of them. Another image showed the men in the parking lot from a different angle. And still another had the men eating McDonald’s on the ground with a Cybertruck in the background.

    If you run any of these images through Gemini it confirms they all have the SynthID watermark. If you’re wondering whether an image appears too weird to be true, it’s probably a good idea to check with Gemini.

    Did you see that viral image of President Donald Trump with Bill “Bubba” Clinton in a very compromising position? Running that image through Gemini confirms it was made with Google’s AI image generator. Gemini won’t necessarily be able to ID every AI image with certainty. But if you run an image through Gemini and it tells you the “photo” has the SynthID watermark, you know it’s not real.

    Fake images are still going to be everywhere in the current social media environment. But at least Google has given the average user a new tool to identify at least some of the fakes for themselves. It’s only going to get harder and harder to recognize AI-generated content as the years progress. Sometimes you just need to apply some common sense. For example, do you think Elon Musk and Sam Altman would be hanging out in a parking lot together? Given their very public conflicts, that seems very unlikely.

    Then again, it seemed very unlikely that Musk and President Trump would become friendly again after the Tesla CEO accused Trump of being in the Epstein files. Weirder things have happened when billions of dollars are at stake.

    Matt Novak

    Source link

  • AI is poised to become Santa’s little helper this holiday – MoneySense

    “For years, I always felt stressed out by things like Christmas because I really wanted it to be great and I really wanted to buy great gifts, but it’s always just so much work and time,” said Box, a Vancouver-based gaming executive. “This feels easier and I like it.”

    The way Box is shopping is no anomaly. Consumers are increasingly turning to AI to recommend products, notify them of sales, help them make purchases and arrange deliveries. The holidays are expected to kick those behaviours into overdrive. 

    AI drives smarter holiday spending

    A Shopify-commissioned survey of 18,000 consumers and 7,500 business leaders found that 64% will use AI for at least one shopping task this holiday season. In the coveted Gen Z demographic, which spans ages 18 to 24, a whopping 84% will make use of the tech.

    While many shoppers have been using AI for purchases since ChatGPT’s November 2022 release sparked widespread adoption of the technology, the financial strain the holidays can bring may push new users to give it a try. “The consumer is so price sensitive and it is a really great tool for deal finding and comparisons,” said J.C. Williams Group retail strategist Lisa Hutcheson. “This will be a year that people start to realize that.”

    Shoppers will also be inclined to use AI because they are “overwhelmed by choice,” said Jenna Jacobson, director of the Retail Leadership Institute at Toronto Metropolitan University. There have never been as many ways to shop or as much selection as there are today, but wading through it all takes time and energy that people don’t have on a good day, let alone during the bustling holiday season.

    “The thing with Black Friday and Cyber Monday is that you’re dealing with a very short time period and this is why retailers like it. It creates the pressure of ‘Buy now or the sale is going to be gone,’” Jacobson said. AI helps customers “cut through the noise” because they can use it to track prices, get alerted to new product drops and even uncover coupons or other promotions, she said.

    These habits are reflected in data from consultancy firm Accenture which found 59% of the 630 Canadians it surveyed in August and September planned to use the technology for product comparisons this holiday season. About 54% said they would rely on AI for help finding purchase locations and 47% would use it for gift ideas and inspiration.

    AI guides gift searches, with some gaps

    Jacobson figures most of the people using AI for holiday shopping are treating it as a way to research and get gift recommendations, but savvier consumers are relying on it to help them be strategic or save. 


    Box will be in both camps this Christmas, when he plans to use the technology to wade through Black Friday and Cyber Monday sales to unearth gifts that have a personal feel. He’s confident AI will nail the task because when it was time to buy a birthday gift for his rugby-loving son, ChatGPT didn’t just recommend any ball. It knew the family is Australian and suggested a ball used by the country’s rugby team. Similarly, when Box was shopping for boots, it recommended ones he hadn’t found by himself that wound up being “far more appropriate” for the occasions he had in mind.

    But AI isn’t a godsend in every shopping situation, points out Caitlin Chua. The Toronto-based account manager recently used ChatGPT to generate a list of features and differences between phones she was considering buying. When preparing to travel to Croatia, she also asked the chatbot to find her a place to stay that met her desired specifications, budget and vibe. She wound up happy with what AI produced in those cases but had less success when she asked Dupe.com—an AI tool that helps users find more affordable versions of items—to uncover a copycat pair of Alo pants with a specific cutout that were constantly sold out. 

    The website returned “options that were similar, but … because none of these other options had that cutout, I didn’t end up buying anything,” she said. “This is where there’s limitations to AI.” The lack of results satisfying Chua may mean there were no similar products out there but it’s also possible there were and AI just couldn’t find them. 

    After all, “AI is still in its early days” and has hiccups, Hutcheson said. It’s prone to dredging up outdated and often incorrect information, and experts generally advise people not to treat its output as foolproof. Yet customers and retailers aren’t shying away from it. Chua will likely still use AI for price comparisons this holiday, when brands hope the technology will give them an edge.

    Retailers push AI, but hurdles remain

    Shopify and online marketplace Etsy are so bullish about its potential, they even partnered with OpenAI in September to gradually let ChatGPT present their merchants’ inventory—without links or redirects—for immediate purchase. Jacobson sees it as building on search engine optimization and social media marketing to meet the customer where they want to shop. 

    But not everyone is as advanced as Box. Brick-and-mortar retail still reigns supreme in Canada, and even shoppers willing to try AI don’t always realize they need to share more personal details about themselves or their gift recipient to get better results, or don’t want to.

    “That’s probably going to be the biggest hurdle,” Hutcheson said. “So there may be some education needed, but I don’t think it’s going to happen this holiday season.”


    About The Canadian Press



    The Canadian Press is Canada’s trusted news source and leader in providing real-time stories. We give Canadians an authentic, unbiased source, driven by truth, accuracy and timeliness.

    The Canadian Press

  • The ‘Genesis Mission’: Here’s What’s in Trump’s Most Grandiose AI Executive Order Yet


    The title of the executive order is on the short side for Trump: “Launching the Genesis Mission.”

    It reads in part:

    In this pivotal moment, the challenges we face require a historic national effort, comparable in urgency and ambition to the Manhattan Project that was instrumental to our victory in World War II and was a critical basis for the foundation of the Department of Energy (DOE) and its national laboratories.

    According to Michael Kratsios, the science advisor to the president, that Manhattan Project comparison is just the beginning. The Genesis Mission is also, we’re being told, “the largest marshaling of federal scientific resources since the Apollo program.”

    Then again, the Trump administration says stuff. The president said nuclear weapons tests were going to begin “immediately,” and that was almost a month ago.

    But consulting AI.gov, Trump’s special fan page for showing off his love of AI, I find that the president has nine marquee AI executive orders, stretching back to his previous administration, and they have titles like “Promoting the Export of the American AI Technology Stack,” and “Preventing Woke AI in the Federal Government.”

    None of them sound nearly as hauntingly mysterious as a “Genesis Mission.” What’s this AI-loving president up to now?

    What the “Genesis Mission” is literally supposed to be:

    We’re being promised a sort of AI and automation super-platform for the federal government. Based on my read of the program laid out in this order, the Secretary of Energy—fracking mogul Chris Wright—is supposed to unify all Department of Energy datasets with those of all federal agencies, and use those to create “scientific foundation models.” Presumably that means the government’s own LLMs, or other LXMs used for scientific research.

    Then our federal government is going to use its new AI models to build programs that “automate research workflows, and accelerate scientific breakthroughs.” We’re getting set-it-and-forget-it federal science, in other words. The AI does the research, and a person can just come along and scoop up the breakthroughs like cream from a milk bucket.

    According to Politico, Wright says there will be an “incredible increase in the pace of scientific discovery and innovation.” They’re looking at nuclear fusion, other energy sources, pharmaceuticals, protein folding—all the areas of science and research that pair well with AI hype.

    And what does the plan specifically entail?

    The executive order does, in all fairness, outline what the next year (and beyond) is supposed to look like for this program to some degree.

    By the 60-day mark: A list.

    America gets a document identifying 20 core science “challenges” the Genesis Mission can solve.

    By the 90-day mark: An inventory.

    America is gifted an inventory of computational resources the Genesis Mission can use to build its system.

    By the 120-day mark: A plan.

    By now, the Mission is supposed to have its data optimized and in place to train the models.

    By the 240-day mark: Another inventory.

    Wright is supposed to have figured out where robot-driven, automated science experiments can be done. Since it probably sounds like I’m joking, here’s what the order says exactly:

    “Within 240 days of the date of this order, the Secretary shall review capabilities across the DOE national laboratories and other participating Federal research facilities for robotic laboratories and production facilities with the ability to engage in AI-directed experimentation and manufacturing, including automated and AI-augmented workflows and the related technical and operational standards needed.”

    By the 270-day mark: A demo.

    We get some sort of proof of concept for the Genesis Mission platform, focused on one of the 20 aforementioned challenges.

    Within one year (and then every year from now on): An evaluation.

    Were positive outcomes achieved? Did the Genesis Mission make scientific discoveries? How’s everything going? It’ll all be in the annual report.

    And the Genesis Mission had better work, because the other side of this effort is a bunch of federal funding cuts for science. This administration has sought to cancel federal funding for (and subscriptions to) science journals. It has sought to cut $783 million in funding for health research—cuts that, it appears, really will go into effect. It has sought to cut off funding to no fewer than 100 climate change studies. It has reduced research spending at the National Oceanic and Atmospheric Administration by $100 million, and on, and on.

    The cuts may have had many aims, at least one of which was to curb DEI (remember when people used to talk about DEI?). Another one, it seems, is to shift science into the realm of things you can just automate, well before sufficient AI systems exist to justify anyone’s confidence that such a thing is possible.

    So buckle up for automated, low-cost scientific breakthroughs, everyone! They’ll be here soon, thanks to Chris Wright and the Genesis Mission. Otherwise, some of those funding cuts might start to look a little silly in retrospect.

    Mike Pearl

  • Marc Benioff Joins the Chorus, Says Google Gemini Is Eating ChatGPT’s Lunch


    Despite its excessive spending on data centers with no clear path to revenue generation, OpenAI seemed to have one thing it could count on: audience capture. ChatGPT seemed destined for the brand-verbification treatment, becoming the term people use to refer to AI. Now it seems like that might be slipping away. Since the release of Google’s Gemini 3 model, it’s like all anyone on the AI-obsessed corners of the web can talk about is how much better it is than ChatGPT.

    Marc Benioff, the CEO of Salesforce and longtime ChatGPT fanboy, is perhaps the loudest convert out there. On X, the exec said, “Holy shit. I’ve used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I’m not going back.” He called the improvement of the model over past versions “insane,” claiming that “everything is sharper and faster.”

    He’s not alone in that assessment. Andrej Karpathy, an OpenAI co-founder who has since left the company, called Gemini 3 “clearly a tier 1 LLM” with “very solid daily driver potential.” Stripe CEO Patrick Collison went out of his way to praise Google’s latest release, too, which is noteworthy given Stripe’s partnership with OpenAI to build AI-driven transactions. Apparently, what he saw with Gemini was too hard not to comment on.

    The feedback from the C-suites around the tech world follows weeks of buzz over on AI Twitter that Gemini was going to be a game-changer. It certainly got presented as such right out of the gate, as Google made a point to highlight how its latest model topped just about every benchmarking test that was thrown at it (though your mileage may vary on just how meaningful any of those are).

    But even the folks behind the benchmark measures appear to be impressed. According to The Verge, the cofounder and CTO of AI benchmarking firm LMArena, Wei-Lin Chiang, said that the release of Gemini 3 represents “more than a leaderboard shuffle” and “illustrates that the AI arms race is being shaped by models that can reason more abstractly, generalize more consistently, and deliver dependable results across an increasingly diverse set of real-world evaluations.”

    Google’s resurgence in the AI space could not have come at a worse time for OpenAI, which currently cannot shake questions from skeptics who are unclear on how the company is ever going to make good on its multi-billion-dollar financial commitments. The company has been viewed as a linchpin of the AI industry, and that industry has increasingly received scrutiny for what seems to be some circular investments that may be artificially propping up the entire economy. Now it seems that even its image as the ultimate innovator in that space is in question, and it has a new problem: the fact that Google can definitely outspend it without worrying nearly as much about profitability problems.

    AJ Dellinger

  • Trump Signs Executive Order for AI Project Called Genesis Mission to Boost Scientific Discoveries


    President Donald Trump is directing the federal government to combine efforts with tech companies and universities to convert government data into scientific discoveries, acting on his push to make artificial intelligence the engine of the nation’s economic future.

    Trump unveiled the “Genesis Mission” as part of an executive order he signed Monday that directs the Department of Energy and national labs to build a digital platform to concentrate the nation’s scientific data in one place.

    It solicits private sector and university partners to use their AI capability to help the government solve engineering, energy and national security problems, including streamlining the nation’s electric grid, according to White House officials who spoke to reporters on condition of anonymity to describe the order before it was signed. Officials made no specific mention of seeking medical advances as part of the project.

    “The Genesis Mission will bring together our Nation’s research and development resources — combining the efforts of brilliant American scientists, including those at our national laboratories, with pioneering American businesses; world-renowned universities; and existing research infrastructure, data repositories, production plants, and national security sites — to achieve dramatic acceleration in AI development and utilization,” the executive order says.

    Trump is increasingly counting on the tech sector and the development of AI to power the U.S. economy, made clear last week as he hosted Saudi Arabia’s Crown Prince Mohammed bin Salman. The monarch has committed to investing $1 trillion, largely from the Arab nation’s oil and natural gas reserves, to pivot his nation into becoming an AI data hub.

    For the U.S.’s part, funding was appropriated to the Energy Department as part of the massive tax-break and spending bill signed into law by Trump in July, White House officials said.

    As AI raises concerns that its heavy use of electricity may be contributing to higher utility rates in the near term, which is a political risk for Trump, administration officials argued that rates will come down as the technology develops. They said the increased demand will build capacity in existing transmission lines and bring down costs per unit of electricity.

    Data centers needed to fuel AI accounted for about 1.5% of the world’s electricity consumption last year, and those facilities’ energy consumption is predicted to more than double by 2030, according to the International Energy Agency. That increase could lead to burning more fossil fuels such as coal and natural gas, which release greenhouse gases that contribute to warming temperatures, sea level rise and extreme weather.

    The project will rely on national labs’ supercomputers but will also use supercomputing capacity being developed in the private sector. The project’s use of public data including national security information along with private sector supercomputers prompted officials to issue assurances that there would be controls to respect protected information.

    Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


    Associated Press

  • AI is widespread in higher ed, but is it helping or hurting student learning?


    Last February, Northeastern University student Ella Stapleton was struggling through her organizational behavior class. She began reviewing notes her professor had created outside of class early in the semester, hoping they could guide her through the course content. But there was a problem: Stapleton said the notes were incomprehensible.

    “It was basically like just word vomit,” said Stapleton.

    While scrolling through a document her professor created, Stapleton said she found a ChatGPT inquiry had been accidentally copied and pasted into the document. A section of notes also contained a ChatGPT-generated content disclaimer.

    Stapleton believes her adjunct professor was overworked, teaching too many courses at once, and was therefore forced to sacrifice his quality of teaching with a shortcut from artificial intelligence. 

    “I personally do not blame the professor, I blame the system,” said Stapleton. 


    Ella Stapleton (NBC10 Boston)

    Stapleton said she printed 60 pages worth of AI-generated content she believed her professor utilized for the class and brought it to a Northeastern staff member to lodge a complaint. She also made a bold demand: a refund for her and each of her classmates for the cost of the class.

    “If I buy something for $8,000 and it’s faulty, I should get a refund,” said Stapleton, who has since graduated. “So why doesn’t that logic apply to this?”

    Stapleton’s request made national headlines after she shared her story with The New York Times.

    The moment on Northeastern’s campus encapsulates a larger issue that higher education institutions are grappling with across the country: how much AI use is ethical in the classroom?

    NBC10 Boston collaborated with journalism students at Boston University’s College of Communication who are taking an in-depth reporting class taught by investigative reporter Ryan Kath.

    We took a deep dive into how generative AI is changing the approach of higher education, from how students apply it to their everyday work to how universities are responding with academic programs and institutional studies. 

    With its widespread use, we also explored this question: what is AI doing to students’ critical thinking skills?

    A degree in AI? 

    While driving along a highway in rural New Hampshire, a billboard caught our attention.

    The message advertised a Bachelor of Science degree in artificial intelligence being offered at Rivier University in Nashua. We decided to visit the campus to learn more about the new program.  

    “The mission of Rivier is transforming hearts and minds to serve the world, and that transformation means to change,” said Rivier University President Sister Paula Marie Buley.

    Sister Paula Marie Buley (NBC10 Boston)

    At Rivier University, students pay almost $40,000 for a bachelor’s degree in artificial intelligence, which will prepare them for a field with a median salary of roughly $145,000, according to the institution.

    The aim of Rivier’s undergraduate program in AI is for graduates to leave with professional practices that allow them to keep strengthening their skills in the dynamic field.

    Master’s degree programs in artificial intelligence have begun to pop up at universities across New England, including Northeastern University, Boston University and New England College. The first bachelor’s degree in AI was created in 2018 by Carnegie Mellon University, according to Master’s in AI.

    “We want students to enter the mindset of a software engineer or a programmer and really have an idea of what it feels like to work in a particular industry,” said Buley. “The future is here.”

    In a 2024 survey from EDUCAUSE, a higher education advocacy nonprofit, 73% of higher education professionals said their institutions’ AI-related planning was driven by the growing use of these tools among students.

    At Boston University, students can complete a self-paced, four-hour online course to earn an “AI at BU” student certificate. The course introduces the fundamentals of AI, with modules focused on responsible use, university-wide policies, and practical applications in both academic and professional settings, according to the certificate website.   

    Students are also encouraged to reflect on the ethical boundaries of AI tools and how to critically assess their use in coursework.

    BU student Lauren McLeod said she doesn’t understand the resistance to AI in education. She believes schools should focus on teaching students to use it strategically. In the absence of clear institution-wide policies, AI usage rules differ from professor to professor.

    “Are you using [AI] in a productive way, or using it to cut corners? They just need to change the framework on it and use it as a tool to help you,” said McLeod. “If you don’t use AI, you’re gonna fall behind.”

    Despite rising awareness, colleges have been slow to develop new policies. Only 20% of colleges and universities have published policies regarding AI use, according to Inside Higher Ed.

    AI and critical thinking

    AI is becoming an everyday tool for students in the classroom and on homework assignments, according to Pew Research Center.

    Earlier this month, we stopped students along Commonwealth Avenue on BU’s campus to ask how much AI they use and if they think it’s affecting their brains. 

    BU student Kelsey Keate said she uses AI in her coding classes and knows she relies on it too much.

    Kelsey Keate (NBC10 Boston)

    “I feel like it’s definitely not helped me learn the code as easily, like I take longer to learn code now,” said Keate. 

    That is what worries researchers like Nataliya Kos’myna.

    This June, the MIT Media Lab, an interdisciplinary research laboratory, released a study investigating how students’ critical thinking skills are exercised while writing an essay with or without AI assistance.

    Kos’myna, an author of the study, said humans are standing at a technological crossroads: a point where it’s necessary to understand what exactly AI is doing to people’s brains. Fifty-four students from the Boston area participated in the study, split into three groups.

    MIT researcher Nataliya Kos’myna (NBC10 Boston)

    “This technology had been implemented and I would actually argue pushed in some cases on us, in all of the aspects of our lives, education, workspace, you name it,” said Kos’myna. 

    Tasked with writing an SAT-style essay, one student group had access to AI, one could only use non-AI search engines, and the final group had to use their brain alone, according to the project website. 

    Recording the participants’ brain activity, Kos’myna was able to see how engaged students were with their task and how much effort they put into the thought process.

    The study ultimately concluded the convenience of AI came at a “cognitive cost.” Participants’ ability to critically evaluate the AI answer to their prompt was diminished. All three groups demonstrated different patterns of brain activity, according to the study. 

    Kos’myna found that students in the AI-assisted group didn’t feel much ownership towards their essays and students felt detached from the work they submitted. Graders were able to identify an AI-unique writing structure and noted that the vocabulary and ideas were strikingly similar.

    “What we found are some of the things that were actually pretty concerning,” said Kos’myna. 

    The paper for the study is awaiting peer review but Kos’myna said the findings were important for them to share. She is urging the scientific community to prioritize more research about AI’s effect on human cognition, especially as it becomes a staple of everyday life. 

    After AI discovery, tuition refund rejected 

    In the wake of filing a complaint, Stapleton said Northeastern was silent for months. The school eventually put the adjunct professor “on notice” last May, after Stapleton had graduated.

    “Northeastern embraces the responsible use of artificial intelligence to enhance all aspects of its teaching, research, and operations,” said Renata Nyul, vice president for communications at Northeastern University in response to our request for comment. “We have developed an abundance of resources to ensure that both faculty and students use AI as a support system for teaching and learning, not a replacement.” 

    In addition to the AI-generated content being difficult to understand and learn from, Stapleton said it doesn’t justify the cost of tuition. In her complaint, Stapleton asked that she and all of her classmates be reimbursed a quarter of their tuition for the course.

    Her refund request did not prevail, but Stapleton hopes the attention her story received will provide a teachable moment for colleges around the country.

    “In exchange for tuition, [universities] grant you the transfer of knowledge and good teaching,” said Stapleton. “In this case, that fundamentally wasn’t happening, because the only content that we were being given was all AI-generated.”

    Grace Sferrazza, Megan Amato and Dahye Kim report from the field. (NBC10 Boston)

    The story was written by Amato, Kim and Sferrazza and edited by Kath.

    Ryan Kath, Megan Amato | Boston University, Dahye Kim | Boston University and Grace Sferrazza | Boston University

  • Why Game Engines Are Becoming A.I.’s Most Important Testbeds


    With games teaching models to act, the future of creative technology is being prototyped in virtual worlds. (Unsplash+)

    When Electronic Arts (EA) announced its partnership with Stability AI, it promised more than slicker workflows in game development. The announcement confirmed that video games are evolving into the world’s most dynamic laboratory for artificial intelligence. The truth is, what happens in gaming today often sets the cultural and technical standards for every other creative field tomorrow. For decades, creative revolutions followed their tools. Cameras gave rise to cinema. Synthesizers redefined sound. Game engines turned code into story. Now generative A.I. is the next medium, and the engineers designing its frameworks are shaping how imagination itself gets scaled.

    Why gaming leads the way

    Games bring together physics, narrative and design inside interactive systems that mimic the complexity of real life. They are, in effect, real-time simulations of cause and effect. A.I. needs games as much as games need A.I. A model trained within a game world learns context, decision-making and feedback loops that are far richer than static datasets can offer. Simulated interactive environments have been shown to dramatically accelerate multi-agent coordination, behavioral prediction and synthetic data generation. From DeepMind’s AlphaStar learning strategy inside StarCraft II to the recent wave of experiments in Minecraft-based agent learning, games have already become benchmark environments for reasoning and planning. 

    When EA describes its goal as building “systems that can previsualize entire 3D environments from a few prompts,” it signals more than a productivity upgrade. It frames a new design philosophy. If models can generate, analyze and iterate at scale, developers begin to function less like sketch artists and more like orchestra conductors. Humans define intent; models execute infinite variations.

    The new creative hierarchy

    This shift points to a deeper cultural truth. Influence no longer lies solely with artists or storytellers but increasingly with those who design the systems of creation. A new breed of “meta-creators” emerges: engineers and architects shaping the boundaries within which others build. Their code becomes the stage; their parameters, the palette.

    In gaming, this transformation is visible: the player, the developer and now the model all share authorship. The economic data underlines this shift too. The sector is projected to exceed $4.13 billion in 2029, at a compound annual growth rate (CAGR) of 23.2 percent, a rate rivaling the early mobile-gaming boom.

    But the numbers only tell part of the story. What matters more is the creative literacy being formed inside these ecosystems. Millions of gamers, modders and indie developers are learning to collaborate with algorithms as peers, not just tools.

    From content-economy to framework-economy

    I often frame this transition as the move from a content economy to a framework economy. Historically, value sat in the final output—games, films, assets. However, value no longer resides solely in what’s produced, but in what enables production at scale: engines, toolkits, A.I. pipelines and structured worlds. Unreal Engine’s ascent from a shooter-specific engine to the backbone of architecture, automotive design and Hollywood virtual production is the clearest precedent. The same principle extends to A.I.: whoever builds the scaffolding of imagination—foundation models, simulation layers, constraint systems—shapes the flow of creativity across industries.

    The implications reach far beyond entertainment. Game engines already power architectural visualization, advanced robotics simulations, digital twins for urban planning and surgical training environments. As A.I. models learn inside those interactive systems, they gain an embodied understanding of spatial logic and cause-and-effect. A recent paper, for example, presents a framework that generates action-controllable game videos via open-domain diffusion models, an early step toward agents that can “understand” environments rather than just render them. In other words, games teach machines not just to see, but to act.

    The boundary between play and progress blurs

    The same physics engine that governs a racing game can teach an autonomous vehicle to respond to real-world variables. The same dialogue system that trains NPCs to interact can be repurposed for virtual educators or A.I. companions. Every advance in player immersion is also an advance in machine intuition.

    Yet, a cultural reckoning is unfolding. If frameworks become the new frontier of creation, who governs them? The promise of democratization could just as easily turn into concentration, where a few corporations set the parameters of imagination itself—its physics, its cultural defaults. Without deliberate design, “democratized creativity” could turn into centralized control over the engines of imagination. The task ahead is to keep the sandbox open: design architectures where creativity remains decentralized, auditable and human-aligned.

    Human intent remains vital

    That doesn’t mean resisting automation. It means defining it ethically. Games have always been rule-based systems with feedback loops, essentially laboratories of governance. They show us how to balance structure and freedom, how to create environments that encourage exploration without chaos. These are precisely the principles we need as we integrate A.I. into broader creative and industrial workflows.

    When EA says humans will stay “at the center of storytelling,” it isn’t nostalgic; it’s a necessity. Models can approximate texture, light and tone, but they still can’t dream or empathize. The human imagination remains the compass even as the landscape changes. The creative act is not solitary anymore; it’s a dialogue between cognition and computation.

    What’s striking is how natural this feels to a generation raised inside interactive worlds. For them, co-creation with algorithms isn’t a threat but a mode of play. They already understand the interplay between rules and imagination, constraints and emergent behavior. This is the generation that will design how A.I. creates.

    The rehearsal space for the next creative era

    Through this lens, gaming becomes the rehearsal space for the next century of creativity. Every tool first tested in virtual worlds—procedural generation, emotion-aware agents, adaptive simulations—will migrate into film, architecture, education and governance. Games remain humanity’s most advanced simulation of itself, and now they’re teaching our machines how to imagine, interact and build alongside us.

    So when we talk about the future of A.I., perhaps we shouldn’t look to labs or boardrooms but to game studios, modding forums and virtual worlds where the next breakthroughs are quietly being debugged. That’s where intelligence learns empathy, context and play. And that’s where the next renaissance of creativity is already underway.

    Why Game Engines Are Becoming A.I.’s Most Important Testbeds

    [ad_2]

    Ilman Shazhaev

    Source link

  • Amazon Is Using Specialized AI Agents for Deep Bug Hunting

    [ad_1]

    As generative AI pushes the speed of software development, it is also enhancing the ability of digital attackers to carry out financially motivated or state-backed hacks. This means that security teams at tech companies have more code than ever to review while dealing with even more pressure from bad actors. On Monday, Amazon will publish details for the first time of an internal system known as Autonomous Threat Analysis (ATA), which the company has been using to help its security teams proactively identify weaknesses in its platforms, perform variant analysis to quickly search for other, similar flaws, and then develop remediations and detection capabilities to plug holes before attackers find them.

    ATA was born out of an internal Amazon hackathon in August 2024, and security team members say that it has grown into a crucial tool since then. The key concept underlying ATA is that it isn’t a single AI agent developed to comprehensively conduct security testing and threat analysis. Instead, Amazon developed multiple specialized AI agents that compete against each other in two teams to rapidly investigate real attack techniques and different ways they could be used against Amazon’s systems—and then propose security controls for human review.

    “The initial concept was aimed to address a critical limitation in security testing—limited coverage and the challenge of keeping detection capabilities current in a rapidly evolving threat landscape,” Steve Schmidt, Amazon’s chief security officer, tells WIRED. “Limited coverage means you can’t get through all of the software or you can’t get to all of the applications because you just don’t have enough humans. And then it’s great to do an analysis of a set of software, but if you don’t keep the detection systems themselves up to date with the changes in the threat landscape, you’re missing half of the picture.”

    As part of scaling its use of ATA, Amazon developed special “high-fidelity” testing environments that are deeply realistic reflections of Amazon’s production systems, so ATA can both ingest and produce real telemetry for analysis.

    The company’s security teams also made a point to design ATA so every technique it employs, and detection capability it produces, is validated with real, automatic testing and system data. Red team agents that are working on finding attacks that could be used against Amazon’s systems execute actual commands in ATA’s special test environments that produce verifiable logs. Blue team, or defense-focused agents, use real telemetry to confirm whether the protections they are proposing are effective. And anytime an agent develops a novel technique, it also pulls time-stamped logs to prove that its claims are accurate.

    This verifiability reduces false positives, Schmidt says, and acts as “hallucination management.” Because the system is built to demand certain standards of observable evidence, Schmidt claims that “hallucinations are architecturally impossible.”
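    The evidence-gated design described above can be illustrated with a small sketch. This is not Amazon's actual ATA code—the function names, log format and detection rule are all hypothetical—but it shows the underlying idea: a red-team agent's claimed technique is only accepted once executing it in a test environment produces timestamped telemetry, and a blue-team detection is only accepted if it fires on that same telemetry.

    ```python
    # Illustrative sketch (not Amazon's ATA implementation): claims from
    # competing agents must be backed by observable evidence, never asserted.
    import time

    def run_in_test_env(command):
        """Stand-in for executing a command in an isolated, high-fidelity
        test environment; returns a timestamped log entry as evidence."""
        return {"ts": time.time(), "command": command, "executed": True}

    def validate_red_claim(technique):
        """A red-team technique claim is accepted only with real execution
        logs attached, not on the agent's say-so."""
        log = run_in_test_env(technique["command"])
        technique["evidence"] = log
        return log["executed"]

    def validate_blue_detection(detection_rule, telemetry):
        """A blue-team detection is kept only if it actually matches the
        telemetry the red-team technique produced."""
        return detection_rule(telemetry)

    # Red agent proposes a technique; it is executed, producing a log.
    claim = {"name": "suspicious-copy", "command": "cp /etc/passwd /tmp/x"}
    assert validate_red_claim(claim)

    # Blue agent proposes a detection keyed to the recorded telemetry.
    detect = lambda log: "/etc/passwd" in log["command"]
    assert validate_blue_detection(detect, claim["evidence"])
    print("technique and detection both backed by observable evidence")
    ```

    In this toy form, "hallucination management" amounts to refusing any claim that does not carry a verifiable log entry produced by real execution.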

    [ad_2]

    Lily Hay Newman

    Source link

  • Guillermo del Toro Reveals the 1 Creative Skill AI Can’t Replace

    [ad_1]

    Romance, fairy tales, and gothic horror don’t seem like they belong together, but filmmaker Guillermo del Toro skillfully weaves them into stories unlike anything audiences have seen before. The legendary director has applied his magic touch to a new Netflix adaptation of Mary Shelley’s Frankenstein. The film is an international hit for the streaming service, debuting as the top English-language film in more than 70 countries. 

    The secret ingredient behind del Toro’s success is an approach to storytelling that AI can’t replace. If you build a business presentation the way del Toro constructs a film, your audience will lean in and become emotionally invested in the journey you’re taking them on. 

    Use technology to complement your story 

    You might not be developing an epic two-hour film for the screen, but every pitch or presentation is still, at its core, a story. Your audience doesn’t just want to hear information. They want to feel something. Since AI lacks emotions, feelings, goals, and aspirations, it can’t motivate people to act—only you can. 

    When an NPR reporter asked del Toro for his stance on using generative AI for filmmaking, del Toro said, “I’d rather die.” 

    Del Toro has a strong opinion on AI because he believes that digital tools—especially generative AI—should be used only to enhance a story, not to replace a human’s authentic voice. “Otherwise, why not buy a printer, print the Mona Lisa, and say you made it,” del Toro added. 

    I share a similar message with business communicators: AI-based writing and design tools should complement the story, but the story comes first. Your ideas are the star. 

    Plan presentations in analog 

    When I wrote the first book on how Steve Jobs created and delivered his awe-inspiring presentations, I devoted a chapter to “planning in analog.” Jobs built cutting-edge technology but talked about it like a storyteller. For example, Jobs didn’t begin presentations by opening a slide deck. Instead, he and the team brainstormed ideas, took notes, gathered stories, built props, and sketched scenes on a whiteboard. 

    Del Toro, too, is an advocate of starting in analog. In a video titled “Anatomy of a Scene” for The New York Times, del Toro walks the viewer through a pivotal scene when Dr. Victor Frankenstein, played by Oscar Isaac, is defending his experiments at the Royal College of Medicine. He pulls the drape off a corpse that terrifyingly comes to life. 

    “That’s completely done in analog,” del Toro explained. “There’s no CGI. It’s a puppet, with puppeteers pulling the strings.” 

    The puppeteers are later bluescreened out of the scene—that’s where technology comes into play. However, the technology is used in the service of the story, which must be as authentic as possible in del Toro’s world. 

    Steve Jobs liked to pull drapes off things, too. In January 1984, Jobs kept the audience in suspense as he talked about Apple’s first Macintosh. He started talking about the product without showing it. Then came the big reveal. Jobs walked to the center of the stage, where the Macintosh was sitting on a small table, hidden beneath a black cloth. Like a magician, he lifted the cloth with a flourish, revealing the beige box that would change computing forever. 

    Jobs played the role of storyteller whenever he stepped on stage. 

    It’s hard to imagine that ChatGPT would have come up with the idea for a product launch as theatrical as Jobs did. AI is a tool, not a source for original creative ideas. It doesn’t have a unique personality, experiences, perspectives, or worldviews. Those belong exclusively to you, not to an algorithm. 

    Write the script before building slides 

    A great presentation has engaging visuals, graphics, photos, and animations, but those embellishments should serve the story. Writing down your ideas is a good starting point. Del Toro fills notebooks with sketches and words before he picks up a camera. He once advised content creators to write the stories they want to tell and put them on paper.  

    “I write a biography for the characters that is eight pages long,” he explained. “It has everything about them: what they like, what they eat, what they read, what they listen to, what they don’t like, etc.” 

    Once del Toro has a fully baked idea of who the characters are, he gives the idea to the design department so they can “articulate the biography with visuals and sound.” Once again, technology complements the story, but the story comes first. 

    AI can take what already exists and reproduce, analyze, and remix it. However, that’s not creation. Creation begins with your voice, your imagination, and your unique lived experience that no algorithm can replace. 

    If you want your presentations to stand out and keep your audience glued to their seats, don’t think like a presenter. Think like a movie director. Shape the story you want to tell and let technology play a supporting role. 

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

    The final deadline for the 2026 Inc. Regionals Awards is Friday, December 12, at 11:59 p.m. PT. Apply now.

    [ad_2]

    Carmine Gallo

    Source link

  • How to stop Google AI from scanning your Gmail

    [ad_1]


    Google shared a new update on Nov. 5, confirming that Gemini Deep Research can now use context from your Gmail, Drive and Chat. This allows the AI to pull information from your messages, attachments and stored files to support your research.

    Some people view this as a convenience. They like the idea of faster answers and easier searches. If you feel that way, too, that is completely fine.

    However, many people do not want AI scanning private messages or personal documents. If that sounds like you, there is good news. You can turn these features off with a few quick taps in Gmail.

    Sign up for my FREE CyberGuy Report 
    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.


    Google’s new update allows Gemini to scan Gmail. These steps help you take control of your privacy. (Kurt “CyberGuy” Knutsson)

    Why this update matters

    This feature gives Google permission to scan every email in your Gmail account. That includes personal notes, financial documents, tax files and any sensitive information in your inbox. AI looks for patterns to improve responses, but Google says Gmail content is not used to train the Gemini model and that no user settings were changed automatically.  

    Google also says that Gmail, Docs and Sheets are not used for AI training unless you directly give Gemini that content yourself.

    While Google says the feature improves your experience, some users prefer more control. You may want privacy first and convenience second. If so, you can opt out today.


    How to stop AI from scanning your Gmail

    You can turn this off directly in Gmail settings. Follow these steps:

    • Open Gmail
    • Tap the gear icon in the top right
    • Select “See all settings” to reach the full menu
    • Scroll until you find Smart features and personalization
    • Turn off Smart features, then click “Turn off and reload”
    • Scroll to Google Workspace smart features and click “Manage Workspace smart feature settings”
    • Turn off both checkboxes, then click Save
    • A pop-up will appear in the bottom left-hand corner of the screen that says “Your preferences have been saved”

    Once you switch these off, Gmail stops scanning your messages for smart features or AI enhancements. This returns control to you.

    What happens when you turn it off

    After you disable these settings, features like smart email suggestions may stop working. That includes predictive text, automatic bill reminders and quick booking prompts. You can always turn them back on if you change your mind.

    Turning these off does not break Gmail. Your inbox works the same. You simply gain more privacy while you use it.

    Want a more private inbox?

    If you’d rather keep your email fully separate from AI features, you may want to consider a privacy-focused email service. They don’t scan your messages or use your inbox to train any systems. Everything stays private and encrypted.

    For people who want more control over their digital privacy, these private and secure email providers offer a straightforward way to keep email activity protected. They give you peace of mind knowing your messages aren’t being analyzed in the background.

    For recommendations on private and secure email providers, visit Cyberguy.com.


    Kurt’s key takeaways

    Google’s newest update blends convenience with automation. It can simplify research by tapping into your Gmail, Drive and Chat. Still, many people want a clear boundary between AI tools and personal messages. With a few quick steps, you can keep your inbox private without losing access to core Gmail features. Just keep in mind: Google says Gmail content isn’t used to train Gemini unless you explicitly give that content to the AI.


    Do you think AI tools should have access to your messages by default or should companies ask before scanning anything? Let us know by writing to us at Cyberguy.com.


    Copyright 2025 CyberGuy.com. All rights reserved. 

    [ad_2]

    Source link

  • New AI Club Will Bestow Nuclear-Like Power on the Winners, Russia’s Top AI Executive Says

    [ad_1]

    By Elena Fabrichnaya and Gleb Bryanski

    MOSCOW (Reuters) - Artificial intelligence will bestow vast influence, on a par with nuclear weapons, on those countries able to lead the technology, giving them superiority in the 21st century, one of Russia’s top AI executives told Reuters.

    Alexander Vedyakhin, First Deputy CEO of Sberbank, which has evolved from a traditional lender into a technology conglomerate focused on AI, said it was an achievement that Russia ranks among seven countries with home-grown AI technologies.

    “AI is like a nuclear project. A new ‘nuclear club’ is emerging globally, where either you have your own national large language model (LLM) or you don’t,” Vedyakhin said in an interview at Russia’s flagship annual AI Journey event.

    He said Russia must have at least two or three original AI models, not “retrained foreign models,” for use in sensitive areas such as online public services, healthcare and education.

    “It is impossible to upload confidential information into a foreign model. It is simply prohibited. Doing so would lead to very unpleasant consequences,” Vedyakhin said, adding that only Russian models should handle state data.

    President Vladimir Putin last week said home-grown AI models were vital to preserving Russian sovereignty. Sberbank and technology firm Yandex are leading Russia’s effort to catch up with U.S. and Chinese rivals.

    Vedyakhin acknowledged that Russia would struggle to match leaders in computing power, especially due to Western sanctions limiting access to technology, and said the gap was likely to grow.

    He warned that current energy consumption levels make returns on AI investment “either very distant or not visible at all,” cautioning against “overheated hype” around AI infrastructure spending.

    “We believe that excessive investments in AI infrastructure may indeed fail to pay off, given the rapid pace of technological development,” he said, adding that Russia was immune to an “AI bubble” because its investment was not excessive.

    (Reporting by Gleb Bryanski; editing by Guy Faulconbridge)

    Copyright 2025 Thomson Reuters.


    [ad_2]

    Reuters

    Source link

  • UC registered nurses ratify contract that guarantees a minimum 18.5% increase in pay

    [ad_1]

    Registered nurses who work at 19 University of California facilities have ratified a new contract after voting concluded Saturday.

    The contract will cover some 25,000 registered nurses and includes protections to improve patient safety and nurse retention through Jan. 31, 2029, according to the California Nurses Assn.

    The pact includes a minimum 18.5% increase in pay, caps on healthcare increases, restrictions on UC floating RNs between facilities, improvements to meal and rest breaks and workplace violence-prevention policies, the association said.

    “University of California RNs organized for and won important patient protections at the bargaining table, like curbing the rampant misuse of floating and ensuring safeguards on artificial intelligence,” said Kristan Delmarty, an RN and member of the UC bargaining team.

    “As a result of the commitment of all CNA members, we won a contract that will improve outcomes for nurses and our patients,’’ said Marlene Tucay, an RN at UC Irvine and member of the bargaining team.

    Under the contract, RNs were guaranteed a central role in selecting, designing and validating new technology, including AI systems, the CNA stated.

    [ad_2]

    City News Service

    Source link

  • Video: How OpenAI’s Changes Sent Some Users Spiraling

    [ad_1]

    new video loaded: How OpenAI’s Changes Sent Some Users Spiraling

    OpenAI adjusted ChatGPT’s settings, which left some users spiraling, according to our reporting. Kashmir Hill, who reports on technology and privacy, describes what the company has done about the users’ troubling reports.

    By Kashmir Hill, Alexandra Ostasiewicz, Melanie Bencosme, Joey Sendaydiego and James Surdam

    November 23, 2025

    [ad_2]

    Kashmir Hill, Alexandra Ostasiewicz, Melanie Bencosme, Joey Sendaydiego and James Surdam

    Source link

  • France will investigate Musk’s Grok chatbot after Holocaust denial claims

    [ad_1]

    PARIS (AP) — France’s government is taking action against billionaire Elon Musk’s artificial intelligence chatbot Grok after it generated French-language posts that questioned the use of gas chambers at Auschwitz, officials said.

    Grok, built by Musk’s company xAI and integrated into his social media platform X, wrote in a widely shared post in French that gas chambers at the Auschwitz-Birkenau death camp were designed for “disinfection with Zyklon B against typhus” rather than for mass murder — language long associated with Holocaust denial.

    The Auschwitz Memorial highlighted the exchange on X, saying that the response distorted historical fact and violated the platform’s rules.

    In later posts on its X account, the chatbot acknowledged that its earlier reply to an X user was wrong, said it had been deleted and pointed to historical evidence that Auschwitz’s gas chambers using Zyklon B were used to murder more than 1 million people. The follow-ups were not accompanied by any clarification from X.

    In tests run by The Associated Press on Friday, its responses to questions about Auschwitz appeared to give historically accurate information.

    Grok has a history of making antisemitic comments. Earlier this year, Musk’s company took down posts from the chatbot that appeared to praise Adolf Hitler after complaints about antisemitic content.

    The Paris prosecutor’s office confirmed to The Associated Press on Friday that the Holocaust-denial comments have been added to an existing cybercrime investigation into X. The case was opened earlier this year after French officials raised concerns that the platform’s algorithm could be used for foreign interference.

    Prosecutors said that Grok’s remarks are now part of the investigation, and that “the functioning of the AI will be examined.”

    France has one of Europe’s toughest Holocaust denial laws. Contesting the reality or genocidal nature of Nazi crimes can be prosecuted as a crime, alongside other forms of incitement to racial hatred.

    Several French ministers, including Industry Minister Roland Lescure, have also reported Grok’s posts to the Paris prosecutor under a provision that requires public officials to flag possible crimes. In a government statement, they described the AI-generated content as “manifestly illicit,” saying it could amount to racially motivated defamation and the denial of crimes against humanity.

    French authorities referred the posts to a national police platform for illegal online content and alerted France’s digital regulator over suspected breaches of the European Union’s Digital Services Act.

    The case adds to pressure from Brussels. This week, the European Commission, the EU’s executive branch, said that the bloc is in contact with X about Grok and called some of the chatbot’s output “appalling,” saying it runs against Europe’s fundamental rights and values.

    Two French rights groups, the Ligue des droits de l’Homme and SOS Racisme, have filed a criminal complaint accusing Grok and X of contesting crimes against humanity.

    X and its AI unit, xAI, did not immediately respond to requests for comment.

    [ad_2]

    Source link

  • Gen-Z Seeks Career Advice From AI. Here’s How Your Company Should Handle It

    [ad_1]

    We know that Gen-Z thinks very differently about the world of work in a number of ways, from how they behave in the office to how much on-the-job training they expect. Now a new report shines a light on another surprising Gen-Z phenomenon that may affect these young workers’ future careers. According to a study from Arkansas State University, shared exclusively with Inc., Gen-Z students seeking career advice are turning away from human career experts and their college professors and asking ChatGPT instead. This may have implications for your own company’s recruitment efforts, and it may help shape your expectations for when Gen-Z workers join your staff.

    The headline statistics relating to this habit from the new study are that 60 percent of the students surveyed have used AI to help with “brainstorming major or career options,” and 32 percent said they’d feel confident in making a major academic decision based solely on AI advice. 

    This, you may think, is merely the next step for career advice, which has long relied on digital tools like personality tests to help youngsters find their place in the world of work. But a couple of other statistics from the study show the habit comes with big risks: 41 percent of Gen-Z students surveyed said they’d followed AI advice that later turned out to be incorrect, and fully 66 percent—that’s two in every three—say that an adult never corrected bad advice they’d been given by AI and then acted on. 

    Meanwhile, showing how AI is displacing experts, 22 percent of the respondents said they had skipped meetings with mentors or advisors because “AI already answered it.” As the university’s report points out, not all is lost for human experts (yet) because the way students use AI like this depends on context. While 19 percent said they actually trusted AI more than their own school’s official website if they needed academic or administrative help, the majority—62 percent—say they still rely on their institution’s own sources. But 19 percent are still unsure on this issue, which may indicate an “ongoing shift” in trust that academic leaders should pay attention to.

    This data is, for the most part, related to academic systems that students are interacting with—but it sets a huge precedent in habits and expertise for these young people who will enter the workplace in just a few short months or years, and it should serve as a heads-up for their future employers. We know from reports that Gen-Z is almost completely using AI to “cheat” their way through college, which some experts say may damage their confidence in their own critical thinking skills, which are vital skills that employers look for. And we know that Gen-Z is turning to non-traditional information sources, like TikTok, when it comes to seeking advice on certain workplace benefits. 

    All of this adds up to a picture of a whole generation of people who are placing trust in AI systems above human experts and, in some cases, even over traditional online information sources like Google searches. 

    HR professionals hiring Gen-Z workers would be smart to remember that some of their candidates have sought career advice from an AI, which may influence their expectations and thinking in subtly different ways from older generations.

    Managers stewarding these workers in the office spaces of tomorrow will, if they’re wise, be aware of these habits. They may choose to stress to these employees the importance of trusting colleagues and leaders over AI systems, highlighting to younger workers that AI is fallible and its outputs may frequently be misinformation. The other choice available is to take the lead from new Gen-Z workers and accept that AI has an informal “helper” role for staff as well as all the work task efficiencies that AI boosters say the technology can bring. Teamwork tasks may, for example, include an AI “employee” taking part alongside human workers, simply because younger workers feel more comfortable having AI at their side. This chimes with recent words from Slack’s chief marketing officer Ryan Gavin, who said earlier this year that he envisions a near future where workers chat more with AIs than they do with their human coworkers.


    [ad_2]

    Kit Eaton

    Source link