ReportWire

Tag: AI

  • DeepMind’s New AI Can Read a Million DNA Letters at Once—and Actually Understand Them

    Artificial intelligence has gotten a bad reputation lately, and often for good reason. But a team of scientists at Google’s DeepMind now claims to have found a revolutionary use case for AI: helping humanity unravel the “dark matter” of our genome more effectively than ever before.

    In a study published today in Nature, DeepMind researchers debuted their deep learning model, dubbed AlphaGenome. Compared to existing models, AlphaGenome can predict the function of much longer sequences of DNA while still maintaining a similar level of accuracy, the researchers claim. The team is hopeful its model can become a valuable tool to analyze how subtle variations in human DNA can affect our health and biology, particularly in the vast majority of the genome that works silently in the background.

    “We are thrilled to introduce AlphaGenome: our solution to deciphering the complex regulatory code,” said Pushmeet Kohli, vice president of research at Google DeepMind, in a press briefing held Tuesday.

    A guide to our genetic dark matter

    Our DNA contains the instructions for building and regulating every biological aspect of ourselves. But only a tiny portion of our genome, 2% or so, actually carries the code for the tens to hundreds of thousands of proteins that perform the functions a body needs to survive, such as insulin or collagen. The other 98% of our DNA is made of non-coding regions, more eloquently known as the dark matter of our genome. Scientists once assumed our genetic dark matter was composed of worthless junk DNA, but we now know that it contains sequences vital to regulating our protein-making genes.

    While scientists have mapped out most of the human genome, we still know very little about how many of these genes work, especially those found in non-coding regions; we’re also largely in the dark about how variations in these genes can affect their functioning. Long before AI became a cultural buzzword (and punching bag), scientists had been using deep learning models—trained on lab data—to more efficiently sift through mountains of genomic data and to predict a gene or DNA sequence’s function. But DeepMind researchers say AlphaGenome is the most comprehensive and accurate DNA sequence model to date.

    The DeepMind researchers trained the model on both human and mouse genomes. It can reportedly analyze up to 1 megabase (Mb)—about 1 million DNA letters—at a time, whereas older models could handle at most around 500 kilobases (kb), and even then at some cost. From that sequence, the model is said to “predict thousands of functional genomic tracks.” These tracks don’t just include how a gene or DNA sequence is expressed but also other less visible functions, such as the interactions between coding and non-coding regions of DNA, or the structure of chromatin (the loosely packaged form of genetic material typically found in a cell; chromosomes are the more neatly packaged version).

    In the paper, the researchers also detailed how AlphaGenome matched or outperformed existing AI models in 25 out of 26 tests measuring how well it could predict the effects of a genetic variant. The model isn’t just accurate, though; it can also do more at once, simultaneously predicting nearly 6,000 human genetic signals tied to specific functions, according to the researchers.
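
    That variant-scoring workflow is conceptually simple: run the model on the reference DNA sequence, run it again on the same sequence carrying the mutation, and compare the predicted tracks. Below is a minimal Python sketch of that reference-versus-alternate comparison; the predict_tracks function and its output are hypothetical stand-ins for illustration, not AlphaGenome’s actual API.

    ```python
    import numpy as np

    def predict_tracks(sequence: str) -> np.ndarray:
        """Hypothetical stand-in for a genomic model like AlphaGenome:
        takes a DNA sequence (up to ~1 Mb) and returns an array of
        predicted functional tracks (expression, chromatin state, etc.)."""
        raise NotImplementedError("placeholder for a real model call")

    def score_variant(reference: str, position: int, alt_base: str) -> np.ndarray:
        """Score a single-letter variant by comparing model outputs for the
        reference sequence and the mutated ("alternate") sequence."""
        alternate = reference[:position] + alt_base + reference[position + 1:]
        ref_tracks = predict_tracks(reference)  # predictions without the variant
        alt_tracks = predict_tracks(alternate)  # predictions with the variant
        return alt_tracks - ref_tracks          # per-track effect of the mutation
    ```

    Large differences on a given track would suggest the variant disrupts that regulatory signal, even deep in the genome’s non-coding “dark matter.”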

    The future of AI genomics

    At least some outside scientists have praised the capabilities of AlphaGenome, while noting that it can’t solve every lingering mystery about our genetic code just yet.

    “At the Wellcome Sanger Institute we have tested AlphaGenome using over half a million new experiments and it does indeed perform very well,” Ben Lehner, head of Generative and Synthetic Genomics at the University of Cambridge’s Wellcome Sanger Institute, told the Science Media Center. “However, AlphaGenome is far from perfect and there is still a lot of work to do. AI models are only as good as the data used to train them. Most existing data in biology is not very suitable for AI—the datasets are too small and not well standardized.”

    All that said, the DeepMind researchers—and others in the field—believe AlphaGenome marks a true milestone in AI genomics, one that could help make the technology practical for broader use. They argue that AlphaGenome, or similar models, could now be used to better diagnose rare genetic diseases, identify mutations that drive cancer, or uncover new drug targets.

    Ed Cara


  • UK Government launches ‘Open University for AI’ to upskill 10 million workers – Tech Digest



    The UK government has unveiled a massive free training initiative to prepare the British workforce for the artificial intelligence revolution.

    Billed as the most ambitious national training scheme since the launch of the Open University in 1971, the programme aims to reach 10 million workers by 2030, offering a suite of online courses designed to demystify AI and integrate it into everyday professional life.

    Accessible through an upgraded “AI Skills Hub,” the curriculum focuses on practical applications such as mastering chatbot prompts, automating administrative “drudgery,” and drafting complex documents.

    While some specialized modules are paywalled, a significant portion of the training is free or subsidized for any adult in the UK. Many lessons are designed for busy professionals, with durations ranging from quick 20-minute tutorials to several-hour deep dives.

    The initiative is a collaborative effort with tech giants IBM, Google and Microsoft, with a government-backed “AI foundations badge” offered upon completion.

    Technology Secretary Liz Kendall emphasized that the move is essential for national competitiveness. “We want AI to work for Britain, and that means ensuring Britons can work with AI,” Kendall stated.

    “Change is inevitable, but the consequences of change are not. We will protect people from the risks of AI while ensuring everyone can share in its benefits.”

    However, the programme has faced scrutiny from policy experts. The Institute for Public Policy Research (IPPR) warned that “skills for the age of AI can’t be reduced to short technical courses alone.” Roa Powell, a senior research fellow at the IPPR, noted that workers need support to build “judgement, critical thinking and the confidence to use these tools safely,” rather than just learning how to prompt a chatbot.

    Despite the criticism, early adopters like Tracey Kasongo, founder of 20 MGMT, claim the training has already transformed their operations by creating more efficient workflows.

    With only 21% of UK workers currently feeling confident using AI, ministers argue that meeting the 10-million-worker goal could unlock up to £140 billion in annual economic growth.


    Chris Price

  • This is an AI-manipulated image of Alex Pretti

    Despite video evidence that Minneapolis nurse Alex Pretti was holding his phone before immigration officers shot and killed him, an image spreading on social media appears to show him wielding a handgun.

    The Department of Homeland Security said Border Patrol officers shot the 37-year-old in self-defense after Pretti approached them with “a 9 mm semi-automatic handgun.”

    Retired U.S. Gen. Raymond A. “Tony” Thomas III shared the purported image of Pretti holding a gun on X. The image shows Pretti holding something resembling a handgun in his right hand. The account shared the photo without a caption in response to Jan. 25 statements about the incident from Deputy White House Chief of Staff Stephen Miller and Attorney General Pam Bondi.

    Facebook, Instagram and Threads users also shared the image.

    But it’s AI-generated. 

    (Screenshot of the AI-generated image)

    Video evidence of the shooting shows Pretti holding his phone, not his handgun, before agents tackled him and removed his weapon. Multiple videos show different angles of the incident where Pretti is holding a phone. 

    The AI version is similar to footage showing Pretti held by agents; the manipulated version may have stemmed from a user asking an AI tool to “enhance” a screenshot of the footage. (Users also “enhanced” images after a federal immigration agent shot Renee Good: when users asked X’s artificial intelligence, Grok, to reveal the face of the agent, it created the image of a completely different person.) AI often distorts images in response to user requests to enhance them.

    PolitiFact uploaded the image to Gemini, Google’s AI tool, which found that the image contains the SynthID watermark applied to images created or edited with Google’s AI tools. The watermark isn’t visible to the naked eye, but Google’s detection technology can pick it up.
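
    For readers who want to replicate that kind of check, here is a minimal sketch using Google’s google-genai Python SDK to upload an image and ask Gemini about it. The SDK calls follow its documented upload-and-prompt pattern, but the model name and file name are placeholders, and the reply is the model’s own assessment rather than a formal forensic verdict.

    ```python
    from google import genai

    # Assumes a GOOGLE_API_KEY environment variable; model name is a placeholder.
    client = genai.Client()

    image = client.files.upload(file="suspect_image.jpg")  # image to inspect

    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=[image, "Was this image created or edited with AI? "
                         "Does it carry a SynthID watermark?"],
    )
    print(response.text)  # the model's self-reported assessment
    ```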

    Oren Etzioni, founder of TrueMedia, an organization that focuses on detecting false or manipulated AI content, said the image has many signs of AI manipulation.

    They include:

    • The kneeling officer is missing a head.

    • The hands and fingers of the people in the image are distorted and disproportionate.

    • Knees, arms and torsos appear dislocated.

    • The clothing textures and shadows don’t fully align with the lighting direction.

    • The kneeling officer’s rifle appears partially embedded in the ground.

    • The granular asphalt doesn’t match videos of the scene that show a paved road layered with dirt and snow.

    The New York Times and other news outlets reported that authenticated footage shows an agent removed Pretti’s gun from his belt holster. The Times also said witnesses corroborated the details in the videos. 

    We rate claims that the image shared on X is a real photo of Pretti False.


  • Meta to trial premium subscriptions for AI-powered social features – Tech Digest



    Meta is set to launch a new trial of premium subscription services across its core platforms – Instagram, Facebook, and WhatsApp – marking a significant shift in how the tech giant monetizes its user base.

    While the company confirmed that basic access to its social media services will remain free, the upcoming pilot program will put advanced artificial intelligence capabilities and specialized creative tools behind a paywall.

    A primary driver of this new subscription model is the integration of technology from Manus, a Singapore-based AI firm that Meta reportedly acquired in December for $2 billion.

    Manus specializes in “autonomous agents” – sophisticated AI tools capable of completing complex, multi-step tasks such as planning international travel or building business presentations with minimal human intervention.

    Meta intends to fold these agents into its “Meta AI” ecosystem to offer subscribers a more proactive digital assistant.

    The trial will also include paid access to “Vibes,” a new AI video generation app designed to transform text prompts into high-quality visual content. By offering these high-end AI tools as part of a premium tier, Meta is looking to diversify its revenue streams beyond its traditional advertising model, which has faced increasing regulatory and economic pressure in recent years.

    This move follows Meta’s previous experiments with paid services, such as the “Meta Verified” program launched in 2023, which allowed users to purchase a blue checkmark for a monthly fee. More recently, the company tested limits on the number of external links users could post on Facebook without a subscription, a move described as an effort to understand the “additional value” premium features provide to power users.

    However, the international rollout of these services faces potential hurdles. The acquisition of Manus has drawn the attention of Chinese regulators, who launched an investigation in January to determine if the deal violates national technology export laws. Despite these geopolitical tensions, Meta remains committed to deploying Manus’s “autonomous” technology across its business and consumer products.

    As the trial begins in the coming months, Meta hopes to prove that its massive user base is willing to pay for a smarter, more automated social media experience.


    Chris Price

  • Here’s Why You Shouldn’t Let AI Run Your Social Life

    AI might not have taken your job yet—but it’s already writing your breakup text.

    What began as a productivity tool has quietly become a social one, and people increasingly consult it for their most personal moments: drafting apologies, translating passive-aggressive texts, and, yes, deciding how to end relationships.

    “I wholeheartedly believe that AI is shifting the relational bedrock of society,” says Rachel Wood, a cyberpsychology expert and founder of the AI Mental Health Collective. “People really are using it to run their social life: Instead of the conversations we used to have—with neighbors or at clubs or in our hobbies or our faith communities—those conversations are being rerouted into chatbots.”

    As an entire generation grows up outsourcing social decisions to large language models (LLMs) like ChatGPT, Claude, and Gemini, Wood worries about the implications of turning the emotional work of connection over to a machine. What that means—for how people communicate, argue, date, and make sense of one another—is only beginning to come into focus.

    When AI becomes your social copilot

    It often starts as a second opinion. A quick paste of a text message into an AI chatbot. A question typed casually: “What do you think they meant by this?”

    “People will use it to break down a blow-by-blow account of an argument they had with someone,” Wood says, or to decode ambiguous messages. “Maybe they’re just starting to date, and they put it in there and say, ‘My boyfriend just texted me this. What does it really mean?’” They might also ask: Does the LLM think the person they’re corresponding with is a narcissist? Does he seem checked out? Does she have a pattern of guilt-tripping or shifting blame? 


    Some users are turning to AI as a social rehearsal space, says Dr. Nina Vasan, a clinical assistant professor of psychiatry at Stanford University and the founder and director of Brainstorm: The Stanford Lab for Mental Health Innovation. People gravitate to these tools because they’re “trying to get the words right before they risk the relationship,” she says. That might mean asking their LLM of choice to draft texts to friends, edit emails to their boss, help them figure out what questions to ask on a first date, or navigate tricky group-chat dynamics.

    Vasan has also seen people use AI tools to craft dating-app profiles, respond to passive-aggressive family members, and set boundaries they’ve never before been able to articulate. “Some use it to rehearse difficult conversations before having them,” she says. “Others process social interactions afterward, essentially asking AI, ‘Did I handle that OK?’” ChatGPT and other LLMs, she says, have become a third party in many of our most intimate conversations.

    Meet the new relationship referee

    Consulting AI isn’t always a welcome development. Some young people, in particular, now use LLMs to generate “receipts,” deploying AI-backed answers as proof that they’re right.

    “They use AI to try to create these airtight arguments where they can analyze a friend’s statements or a boyfriend’s statements, or they especially like to use it with their parents,” says Jimmie Manning, a professor of communication studies at the University of Nevada, where he’s also the director of the Relational Communication Research Laboratory. (None of his students have presented him with an AI-generated receipt yet, but it’s probably only a matter of time, he muses.) A teen might copy and paste a text from her mom into ChatGPT, for example, and ask if her parents are being unreasonably strict—and then present them with the evidence that yes, in fact, they are.

    “They’re trying to get affirmation from AI, and you can guess how AI responds to them, because it’s here for you,” Manning says.

    Using LLMs in this way turns relationships into adversarial negotiations, he adds. When people turn to AI for validation, they’re usually not considering their friend or romantic partner or parent’s perspective. Plus, shoving “receipts” in someone’s face can feel like an ambush. Those on the receiving end typically don’t respond well. “People are still wary of the algorithm entering their intimate lives,” Manning says. “There’s this authenticity question that we’re going to face as a culture.” When he asks his students how their friends or partners responded, they usually say: “Oh, he came up with excuses,” or “She just rolled her eyes.”

    “It’s not really helping,” he says. “It’s just going to escalate the situation without any kind of resolution.”

    What’s at stake

    Outsourcing social tasks to AI is “deeply understandable,” Vasan says, “and deeply consequential.” It can support healthier communication, but it can also short-circuit emotional growth. On the more helpful side of things, she’s seen people with social anxiety finally ask someone on a date because Gemini helped them draft the message. Other times, people use it in the middle of an argument—not to prove they’re right, but to consider how the other person might be feeling, and to figure out how to say something in a way that will actually land.

    “Instead of escalating into a fight or shutting down entirely, they’re using AI to step back and ask: ‘What’s really going on here? What does my partner need to hear? How can I express this without being hurtful?’” she says. In those cases, “It’s helping people break out of destructive communication patterns and build healthier dynamics with the people they love most.”

    Yet that doesn’t account for the many potentially harmful ways people are using LLMs. “I see people who’ve become so dependent on AI-generated responses that they describe feeling like strangers in their own relationships,” Vasan says. “AI in our social lives is an amplifier: It can deepen connection, or it can hollow it out.” The same tool that helps someone communicate more thoughtfully, she says, can also help them avoid being emotionally present.

    Plus, when you regularly rely on a chatbot as an arbiter or conversational crutch, it’s possible you’ll erode important skills like patience, listening, and compromise. People who use AI intensely or in a prolonged manner may find that the tool skews their social expectations, because they begin expecting immediate replies and 24/7 availability. “You have something that’s always going to answer you,” Wood says. “The chatbot is never going to cancel on you for going out to dinner. It’s never going to really push back on you, so that friction is gone.” Of course, friction is inevitable in even the healthiest relationships, so when people become used to the alternative, they can lose patience over the slightest inconvenience.

    Then there’s the back-and-forth engagement that makes relationships work. If you grab lunch with a friend, you’ll probably take turns sharing stories and talking about your own lives. “However, the chatbot is never going to be, like, ‘Hey, hang on, Rachel, can I talk about me for a while?’” Wood says. “You don’t have to practice listening skills—that reciprocity is missing.” That imbalance can subtly recalibrate what people expect from real conversations.

    Plus, every relationship requires compromise. When you spend too much time with a bot, that skill begins to atrophy, Wood says, because the interaction is entirely on the user’s terms. “The chatbot is never going to ask you to compromise, because it’s never going to say no to you,” she adds. “And life is full of no’s.”

    The illusion of a second opinion

    Researchers don’t yet have hard data that provides a sense of how outsourcing social tasks to AI affects relationship quality or overall well-being. “We as a field don’t have the science for it, but that doesn’t mean there’s nothing going on. It just means we haven’t measured it yet,” says Dr. Karthik V. Sarma, a health AI scientist and physician at the University of California, San Francisco, where he founded the AI in Mental Health Research Group. “In the absence of that, the old advice remains good for almost any use of almost anything: moderation and patterns are key.”

    Greater AI literacy is essential, too, Sarma says. Many people use LLMs without understanding exactly how and why they respond in certain ways. Say, for example, you’re planning to propose to your partner, but you want to check in with people close to you first to confirm it’s the right move. Your best friend’s opinion will be valuable, Sarma says. But if you ask the bot? Don’t put too much weight on its words. “The chatbot doesn’t have its own positionality at all,” Sarma says. “Because of the way technology works, it’s actually much more likely to become more of a reflection of your own positionality. Once you’ve molded it enough, of course it’s going to agree with you, because it’s kind of like another version of you. It’s more of a mirror.”

    Looking ahead

    When Pat Pataranutaporn thinks about the effects of long-term AI usage, his main question is this: Is it limiting our ability to express ourselves? Or does it help people express themselves better? As founding director of the cyborg psychology research group and co-director of MIT Media Lab’s Advancing Humans with AI research program, Pataranutaporn is interested in ways that people can use AI to promote human flourishing, pro-social interaction, and human-to-human interaction.

    The goal is to use this technology to “help people be better, gain more agency, and feel that they’re in control of their lives,” he says, “rather than having technology constrain them like social media or previous technologies.”


    In part, that means using AI to gain the skills or confidence to talk to people face-to-face, rather than allowing the tool to replace human relationships. You can also use LLMs to help finesse your ideas and take them to the next level, rather than treating them as substitutes for original thought. “The idea or intent needs to be very clear and strong at the beginning,” Pataranutaporn says. “And then maybe AI could help augment or enhance it.” Before asking ChatGPT to compose a Valentine’s Day love letter, he suggests asking yourself: What is your unique perspective that AI can help bring to fruition?

    Of course, individual users are at the mercy of a bigger force: the companies that develop these tools. Exactly how people use AI tools, and whether they bolster or weaken relationships, hinges on tech companies making their platforms healthier, Vasan says. That means intentionally designing tools to strengthen human capacity, rather than quietly replacing it.

    “We shouldn’t design AI to perform relationships for us—we should design it to strengthen our ability to have them,” she says. “The key question isn’t whether AI is involved. It’s whether it’s helping you show up more human or letting you hide. We’re running a massive uncontrolled experiment on human intimacy, and my concern isn’t that AI will make our messages better. It’s that we’ll forget what our own voice sounds like.”

    Angela Haupt


  • AI hitting UK jobs market harder than other economies, Apple to unveil AI-powered Siri next month – Tech Digest



    The UK is losing more jobs than it is creating because of artificial intelligence and is being hit harder than rival large economies, new research suggests. British companies reported that AI had resulted in net job losses over the past 12 months – a net balance of minus 8%, the highest rate among leading economies including the US, Japan, Germany and Australia – according to a study by the investment bank Morgan Stanley. The research, which was shared with Bloomberg, surveyed companies using AI for at least a year across five industries: consumer staples and retail, real estate, transport, healthcare equipment and cars. Guardian

    Apple is planning to unveil its newly revamped Siri assistant at an event next month, according to a report. The latest version of Apple’s digital assistant will be powered by Google’s market-leading Gemini AI model following a recently announced partnership between the two US tech giants. The long-overdue upgrade to Siri, which launched as Apple’s proprietary voice assistant on the iPhone in 2011, will arrive with iOS 26.4, according to Bloomberg. Beta testing is expected to begin in the second half of February before a public rollout in March or April. Independent 

    One of them is an “idiot”. The other is running a “cesspit”. Even for connoisseurs of corporate spats, the war of words that broke out this week between the world’s richest man Elon Musk and Ryanair’s Michael O’Leary has turned into a classic of the genre. The two men have been tearing lumps out of each other for the last few days, and the argument could even turn into a full-scale takeover of the airline. And yet, one point is surely clear. Sure, Musk has plenty to boast about. But so far he is no match for the pugnacious O’Leary – and right now he just looks envious of his wittier rival. Telegraph 

    You may well have noticed issues with the automatic filters and spam scanning in your Gmail inbox over the weekend: these are issues that Google has officially acknowledged, and a fix should now be making its way out to users. As per the Google Workspace Status Dashboard (via Engadget), numerous issues affected users of Google’s email app across the course of Saturday. These issues included “misclassification of emails” via Gmail’s built-in automatic filtering. Tech Radar 

    Chris Price


  • Google Photos’ latest feature lets you meme yourself | TechCrunch

    Google Photos will now let you make memes with your own images. On Thursday, Google introduced a new generative AI-powered feature called “Me Meme,” which will allow you to combine a photo template and an image of yourself to generate a version of the meme featuring you.

    The new feature, which will be first available to U.S.-based users, was originally spotted in development last October by the blog Android Authority. It was formally announced by Google via its Photos Community site on Thursday.

    According to Google, the feature is experimental, so generated images “may not perfectly match the original photo.” It suggests uploading well-lit, focused, and front-facing photos to get the best results.

    The addition is meant to be just a fun way to explore your photos and experiment with Google’s Gemini AI technology, specifically Nano Banana, the popular AI image model that also powers other AI features in the Google Photos app, like the ability to re-create images in new styles, such as cartoons or paintings.

    Though a fairly unserious addition, all things considered, these types of features help remind users to return to the Photos app whenever they want to play around with AI tools, rather than going to a competitor’s product.

    Plus, users tend to gravitate toward features that show themselves in AI edits, as OpenAI found with its successful launch of the Sora app, which lets you make AI videos that can include yourself and your friends.

    “Me Meme” isn’t fully rolled out, so you may not see it in your updated Google Photos app just yet. When available, it will appear under the “Create” tab, Google says. A rep for Google told TechCrunch the feature will reach U.S. iOS and Android users over the “coming weeks.”


    To use the feature, you’ll select a template or upload your own, then tap “add photo” and “Generate.” Google notes that more templates are being added over time. After the AI creates the image, you can save the photo, share it on other platforms, or tap “regenerate” to have it re-imagine the image a second time.

    Sarah Perez


  • Not to be outdone by OpenAI, Apple is reportedly developing an AI wearable | TechCrunch

    Apple may be developing its own AI wearable, according to a report published Wednesday by The Information. The device would be a pin that users can wear on their clothing and that comes equipped with two cameras and three microphones, the report says.

    Should the rumored device come to market, it would mark another sign that the AI hardware market is heating up. This news follows comments made Monday by OpenAI Chief Global Affairs Officer Chris Lehane, who told a Davos crowd that his company will likely announce its highly anticipated first AI hardware device in the second half of this year. Additional reporting suggests that the device may be a pair of earbuds.

    Apple’s device is described as a “thin, flat, circular disc with an aluminum-and-glass shell,” which engineers hope to make the same size as an AirTag, “only slightly thicker.” The pin will also have two cameras (one with a standard lens and another with a wide-angle) for pictures and video, as well as a physical button, a speaker, and a Fitbit-like charging strip on its back, according to the report.

    Apple may even be trying to accelerate development of this product to compete with OpenAI’s. The pin could potentially be released in 2027, with 20 million units at launch, the report notes. TechCrunch reached out to Apple for more information.

    But it remains to be seen whether consumers want this kind of AI device. Two Apple alums previously founded Humane, a startup that also sold an AI pin, one with built-in microphones and a camera. However, it floundered upon release, and the company had to shut down operations and sell its assets to HP within two years of its product launch.

    Lucas Ropek


  • Apple Is Reportedly Making Its Own Wearable AI Pin

    Humane’s Ai Pin might be dead and gone, but its awful legacy may live on thanks to the company you’d least expect. According to a new report from The Information, Apple is currently developing its own crappy AI pin to follow Humane’s now-defunct and bricked crappy Ai Pin. Hooray for the sequel no one asked for?

    The reported AI gadget sounds harrowing, to say the least. According to the report, Apple’s pin is a “thin, flat, circular disc with an aluminum-and-glass shell” and has two cameras, including a standard and a wide-angle one, built into the front. Those cameras are designed to take in the wearer’s surroundings via photos and videos for what I assume would be some kind of computer vision-based feature(s).

    Naturally, the pin also reportedly has microphones to pick up sound, which means it most likely uses a voice assistant and could maybe be used for stuff like translation. Weirdly, the pin is also said to have a speaker and a “physical button along one of its edges” as well as a “magnetic inductive charging interface on its back, similar to the one used on the Apple Watch.” Size-wise, The Information’s sources say they’re aiming to make this thing about the size of an AirTag.

    That’s quite a bit of info, but I still have lots of questions. For one, how does this thing attach? If it’s magnets, I have bad news: the whole magnetic pin thing didn’t really work last time. There were a lot of problems with Humane’s Ai Pin, and the magnetic attachment was definitely one of them. Keeping an expensive AI gadget attached to your clothes is just objectively harder than it sounds, and I’m not sure that Apple has a solution for that.

    Also, does anyone even want an AI pin? If Humane’s expensive failed experiment is any indication, I would wager that answer is no. Sure, maybe Humane just didn’t have the right resources or acumen to make the idea work, or maybe the idea of an AI pin that replaces the smartphone just wasn’t a good idea to begin with. Personally, my imaginary AI-generated money is on the latter.

    Surprisingly, one of the most eyebrow-raising parts of the report isn’t that Apple seems to be retreading the dumpster fire that was Humane; it’s that it seems to be doing all of this to compete with none other than OpenAI. In case you missed it, OpenAI (with the help of ex-Apple exec Jony Ive) also reportedly has several AI gadgets planned for the near-ish future, including what could be a competitor to AirPods and… a pen. The Information says that Apple is expediting the development of its ill-advised AI gadget to make sure it isn’t on the outside looking in at OpenAI’s success.

    The problem with that picture is that I’m not sure there will be any success to look in on. AI gadgets are about as unproven a category as it gets in the tech world, and rushing to get in on that unproven craze feels shortsighted, to say the least. I have my doubts that this thing (if it truly exists) will ever see the light of day, but who knows. Maybe Apple is really that caught up chasing the AI dragon. It’s what the investors want, right?

    James Pero


  • The Agency partners with Rechat – Houston Agent Magazine

    Rechat is now integrated with The Agency and will serve as a centralized operating platform for the brokerage.

    Agents affiliated with The Agency will now have access to Rechat’s CRM, the People Center, as well as a range of tools including a marketing center and an AI agent assistant.

    “The Agency is one of the most respected luxury brands in real estate, and their commitment to thoughtful growth and agent empowerment aligns closely with how we build Rechat,” Shayan Hamidi, CEO of Rechat, said in a press release. “Our team across 18 countries and our platform are designed to help reduce complexity and support scale. This partnership reflects a shared belief that technology should enable great agents, not get in their way.”

    Rechat is also integrated with Follow Up Boss, SkySlope, ChatGPT, Zillow and Loft47.

    “The Agency was built on the belief that collaboration, innovation and world-class service go hand in hand,” said Mauricio Umansky, founder and CEO of The Agency. “Our partnership with Rechat reinforces that commitment, creating a more connected global ecosystem while delivering intuitive, best-in-class technology that drives efficiency, empowers our agents and ultimately elevates the client experience.”

    Emily Marek


  • Consumers spent more on mobile apps than games in 2025, driven by AI app adoption | TechCrunch

    In 2025, consumers spent more money on non-game mobile apps than they did on games for the first time, according to the findings from market intelligence firm Sensor Tower’s annual “State of Mobile” report. While this milestone had been seen in particular markets, like the U.S., or during certain quarters, 2025 marked the first time it occurred globally. Worldwide, consumers spent approximately $85 billion on apps last year, representing a 21% year-over-year increase. The figure was also nearly 2.8x the amount spent just five years ago.


    Generative AI, a defining trend over the past year, led the revenue growth, as in-app purchase revenue in this category more than tripled to top $5 billion in 2025. Downloads of AI apps also grew, doubling year-over-year to reach 3.8 billion.


    The segment’s growth can be attributed to several factors. For one, the popularity of AI assistants among consumers was a large driver, with all of the top 10 apps by downloads being AI assistants. This group was led by OpenAI’s ChatGPT, Google Gemini, and DeepSeek. ChatGPT alone generated $3.4 billion in global in-app purchase (IAP) revenue — a figure that we reported on late last year.


    In 2025, consumers spent 48 billion hours in generative AI apps, or 3.6x the total time spent in 2024 and 10x the level seen in 2023. Session volume, meaning the number of times users opened and used an app, topped one trillion in 2025. Of note, this figure was growing faster than downloads, suggesting that existing users were deepening their engagement faster than the apps were adding new users.
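
    Those multipliers pin down the earlier years’ totals. Here is a quick back-of-the-envelope check in Python, using only the figures quoted above (the trillion-session number is a stated floor, so the sessions-per-download ratio is a lower bound):

    ```python
    # Figures quoted in the Sensor Tower report (generative AI apps).
    hours_2025 = 48e9        # 48 billion hours of usage in 2025
    downloads_2025 = 3.8e9   # 3.8 billion downloads, double the 2024 figure
    sessions_2025 = 1e12     # sessions "topped one trillion" (a floor)

    hours_2024 = hours_2025 / 3.6        # implies ~13.3B hours in 2024
    hours_2023 = hours_2025 / 10         # implies ~4.8B hours in 2023
    downloads_2024 = downloads_2025 / 2  # implies ~1.9B downloads in 2024

    print(f"2024: {hours_2024 / 1e9:.1f}B hours, 2023: {hours_2023 / 1e9:.1f}B hours")
    print(f"Sessions per 2025 download: at least {sessions_2025 / downloads_2025:.0f}")
    ```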


    Another factor driving AI app revenue and adoption is that big tech companies like Google, Microsoft, and X have been heavily investing in their AI assistants to challenge ChatGPT. Over the past year, they’ve been rolling out new capabilities at a rapid pace, improving in areas like coding assistance, content generation, reasoning, task execution, accuracy, and more. The report specifically called out improvements in image and video generation, like ChatGPT’s GPT-4o image generation model released in March, and Google’s Nano Banana.

    Among the top AI publishers, OpenAI and DeepSeek accounted for nearly 50% of global downloads, up from 21% in 2024. Meanwhile, big tech publishers grew their share of the market from 14% to nearly 30% during this same time, crowding out earlier ChatGPT competitors like Nova, Codeway, and Chat Smith.


    The report also highlighted the role that mobile plays in connecting users to generative AI services. Sensor Tower estimates that the total audience for AI assistants topped 200 million in the U.S. by year-end, and more than half (110M) were accessing the assistants exclusively on mobile devices. In 2024, for comparison, only around 13 million users were mobile-only.


    Beyond assistants, other popular AI apps included the AI music generation app Suno; ByteDance’s text-to-video app, Jimeng AI; and AI companion apps like Character.ai and PolyBuzz.

    (Chart: Mobile apps topped games in consumer spending in 2025, driven by AI revenue. Image credit: Sensor Tower)

    However, AI wasn’t the only revenue driver last year, Sensor Tower found. Other apps, including those in categories like social media, video streaming, and productivity, also helped fuel the growth, the report noted. For instance, consumers spent an average of 90 minutes per day on social media apps, totaling nearly 2.5 trillion hours, up 5% year-over-year.

    Sarah Perez


  • Over 4 in 5 AI fraud cases in 2025 involved deepfakes, research claims – Tech Digest



    Deepfakes have emerged as the primary weapon for artificial intelligence-driven crime, accounting for over four in five AI fraud cases recorded last year.

    According to a new report from Cybernews, which analyzed data from the AI Incident Database, 81% of all AI-related fraud incidents in 2025 involved some form of synthetic impersonation.

    The research highlights a significant shift in the cybercrime landscape. Of the 346 total AI incidents documented in 2025, 179 involved deepfakes – ranging from voice cloning to hyper-realistic video manipulation.

    Within the specific category of fraud, 107 out of 132 recorded cases were driven by deepfake technology. These scams have proven exceptionally effective due to their ability to exploit human trust through highly targeted and realistic impersonations of family members, executives, and celebrities.

    The Exploitation of Trust

    The human cost of these digital deceptions is staggering. The Cybernews analysis pointed to several high-profile cases that illustrate the reach of the technology:

    • Romance Scams: A British widow lost £500,000 after falling victim to a scammer using a deepfake of actor Jason Momoa.

    • Family Emergencies: In Florida, a woman was defrauded of $15,000 after hearing an AI-generated clone of her daughter’s voice pleading for financial help.

    • Investment Fraud: High-net-worth individuals and private citizens alike have been targeted by fabricated “live” videos of CEOs such as Elon Musk, leading to individual losses as high as $45,000.


    The Growing Threat of Unsafe Content

    While financial fraud dominated the statistics, the report also warned of “violent and unsafe content” generated by popular AI tools. Though accounting for only 37 cases, these incidents often had more severe, non-financial consequences.

    The research found that some Large Language Models (LLMs) could still be manipulated into providing dangerous self-harm advice or detailed instructions for committing violent crimes when specific guardrails were bypassed.

    Specific AI tools were named in some reports, with ChatGPT appearing most frequently (35 cases), followed by Grok, Claude, and Gemini. However, the Cybernews team noted that the actual figures are likely higher, as many incidents do not specify the exact software used.

    The findings serve as a stark warning for 2026. As AI tools become more accessible, the barrier to entry for sophisticated fraud has collapsed, making verification and scepticism the most vital defences for the public.

    For more information, here’s the full research: https://cybernews.com/ai-news/346-ai-incidents-in-2025-from-deepfakes-and-fraud-to-dangerous-advice/


    Chris Price

  • ChatGPT to show ads, Grandparents hooked on ‘Boomerslop’ – Tech Digest


    Adverts will soon appear at the top of the AI tool ChatGPT for some users, the company OpenAI has announced. The trial will initially take place in the US, and will affect some ChatGPT users on the free service and a new subscription tier, called ChatGPT Go. This cheaper option will be available for all users worldwide, and will cost $8 a month, or the equivalent pricing in other currencies. OpenAI says during the trial, relevant ads will appear after a prompt – for example, asking ChatGPT for places to visit in Mexico could result in holiday ads appearing. BBC

    Doctors and medical experts have warned of the growing evidence of “health harms” from tech and devices on children and young people in the UK. The Academy of Medical Royal Colleges (AoMRC) said frontline clinicians have given personal testimony about “horrific cases they have treated in primary, secondary and community settings throughout the NHS and across most medical specialities”. The body, which represents 23 medical royal colleges and faculties, plans to gather evidence to establish the issues healthcare professionals and specialists are seeing repeatedly that may be attributed to tech and devices. Sky News 


    “What are you even doing in 2025?” says a handsome kid in a denim jacket, somewhere just shy of 18. “Out there it looks like everyone is glued to their phones, chasing nothing.” The AI-generated teenager features in an Instagram video that has more than 600,000 likes from an account dubbed Maximal Nostalgia. The video is one of dozens singing the praises of the 1970s and 1980s. Created with AI, the videos urge viewers to relive their halcyon days. The clips have gone viral across Instagram and Facebook, part of a new type of AI content that has been dubbed “boomerslop”. Telegraph

    More than 60 Labour MPs have written to Keir Starmer urging him to back a social media ban for under-16s, with peers due to vote on the issue this week. The MPs, who include select committee chairs, former frontbenchers, and MPs from the right and left of the party, are seeking to put pressure on the Prime Minister as calls mount for the UK to follow Australia’s precedent. Starmer has said he is open to a ban but members of the House of Lords are looking to force the issue when they vote this week on an amendment to the children, wellbeing and schools bill. Guardian


    Huawei has released a new update for the Watch Ultimate 2 smartwatch, installing new health features, including a heart failure risk assessment. The update comes with HarmonyOS firmware version 6.0.0.209 and is rolling out in batches. The new additions also include a coronary heart disease risk assessment. Users can join a coronary heart disease research project via the Huawei Research app on their smartphone. HuaweiCentral

    Google has just changed Gmail after twenty years. In among countless AI upgrades — including “personalized AI” that gives Gemini access to all your data in Gmail, Photos and more — comes a surprising decision. You can now change your primary Gmail address for the first time ever. But you shouldn’t rush to do so. This new option is good — but it’s not perfect. And per 9to5Google, “Google also notes this can only be done once every 12 months, up to 3 times, so make this one count.” Forbes

    Chris Price


  • Elon Musk backtracks on Grok AI image rules following global backlash – Tech Digest



    In a move that signals a significant retreat for the tech billionaire, Elon Musk’s social media platform, X, has announced it will restrict its Grok AI model from generating “undressed” images of real people.

    The update prevents users from editing photos of real individuals to appear in bikinis, underwear, or revealing attire, but only in territories where such content is illegal.

    The policy shift follows a week of intense international pressure. Governments in Malaysia and Indonesia were the first to ban the tool after reports surfaced of users creating explicit, non-consensual deepfakes.

    Simultaneously, the UK government and California’s top prosecutor launched inquiries into the platform, with UK Prime Minister Sir Keir Starmer calling for immediate safeguards to prevent the spread of sexualized AI imagery.

    The move marks a notable U-turn for Musk. Only days ago, the billionaire dismissed concerns as an “assault on free speech,” even mocking critics by posting AI-generated images of Sir Keir Starmer wearing a bikini. However, facing the threat of heavy fines and regional bans, Musk appears to have softened his absolutist stance.

    Writing on X, Musk clarified that while the platform will “geoblock” certain capabilities to comply with local laws, the tool’s ‘Not Safe For Work’ (NSFW) settings will still allow for “upper body nudity of imaginary adult humans” in regions like the United States. “That is the de facto standard in America,” Musk stated. “This will vary in other regions according to the laws on a country-by-country basis.”

    The UK government claimed “vindication” following the announcement, though regulator Ofcom warned that its investigation into whether X broke online safety laws remains ongoing. To further mitigate abuse, X confirmed that image-editing features will remain restricted to paid subscribers, a move intended to ensure accountability for those who violate the law.

    While the “geofencing” of these features satisfies some legal requirements, critics argue the patchwork approach highlights the ongoing tension between Musk’s “free speech absolutism” and the global demand for AI regulation.


    Chris Price

  • Nvidia Proves It Still Has the Best Software for Better-Looking Games

    Nvidia’s latest version of its Deep Learning Super Sampling technology, aka DLSS, hit the scene early Wednesday. With the latest update in tow comes a slightly redesigned upscaler that is now better than ever, at least for most games. If you were hoping that you would be able to push your frame rates to ludicrous levels, you’ll need to wait.

    DLSS 4.5, which Nvidia announced at CES 2026 last week, incorporates a new version of the existing transformer model upscaler. The original transformer model was a major part of the DLSS 4 update from 2025, which used an AI model trained on gameplay to predict how a frame should look at a higher resolution. Upscalers like DLSS take a frame rendered at a lower resolution and massage it up to the resolution your display supports, improving performance in the process. With AMD and Intel nipping at its heels, Nvidia felt it needed to show up with even more frame generation software for 2026. Instead, the latest update proves that small enhancements make a bigger difference than the oft-touted “fake frames.”
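
    To make the upscaling idea concrete, here is a toy Python sketch of the naive, non-AI version of the trick: render at a low internal resolution, then resample up to display size. DLSS swaps the dumb resampling step for a trained model (and folds in data from previous frames), but the input/output contract is the same. The Pillow calls are standard; the file names are placeholders.

    ```python
    from PIL import Image

    RENDER = (1280, 720)    # cheap internal render resolution
    TARGET = (2560, 1440)   # display ("output") resolution

    # Stand-in for a rendered game frame; any image file works here.
    frame = Image.open("frame.png").resize(RENDER)

    # Naive upscaler: bicubic resampling. DLSS replaces this one step with a
    # transformer model trained on gameplay, which is why it can restore fine
    # detail (sparks, foliage) that plain interpolation smears away.
    upscaled = frame.resize(TARGET, resample=Image.Resampling.BICUBIC)
    upscaled.save("frame_upscaled.png")
    ```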

    The big update for DLSS 4.5 is only perceptible when looking at small environmental details. Previous versions of DLSS had a hard time picking up on minute environmental effects, like sparks from a fire; DLSS 4.5 is supposed to bring those details back. Plus, 4.5 should help sharpen textures and eliminate ghosting around some environmental details, where an image would appear to bleed from frame to frame.

    Small improvements make a big difference

    I tested DLSS 4.5 on a Framework Laptop 16 packed with a GeForce RTX 5070 laptop GPU. This is one of Nvidia’s lower-end graphics cards with only 8GB of VRAM. DLSS makes more of a difference for players running cheaper gaming rigs than for platforms with higher-end specs. I used a 1440p monitor for my testing, as the RTX 5070, especially the laptop version, isn’t going to enable a quality experience at 4K resolution.

    I compared DLSS 4 and DLSS 4.5 in games like Marvel’s Spider-Man 2, Black Myth: Wukong, and The Outer Worlds II. The updated Nvidia app now allows players to override the DLSS model for supported games. The preset “L” and “M” models are both based on DLSS 4.5: “L” is an ultra-performance mode built for trying to hit 4K resolution, while “M” should fit the needs of players who just want better performance below 4K.

    DLSS 4.5 is a big step up. In Black Myth: Wukong, using the model M preset with very high graphics settings and ray tracing set to medium, I saw a bump up to around 50 fps, and even 60 fps in some scenes, compared to DLSS 4, which hovered between 45 and 48 fps at the same settings. Those promised graphical effects, like sparks coming off of fires, are indeed real. Latency with frame generation is marginally better with the update as well. In Marvel’s Spider-Man 2, running at medium settings with ray tracing set to high, I saw few performance improvements, though foliage appeared slightly sharper running DLSS 4.5.

    The one place I saw a drop in performance was in The Outer Worlds II, which took a small hit in the same scene. However, I noticed that ground foliage and distant plants appeared sharper, even while using the same graphics settings. The small performance drop would necessitate some fine-tuning of DLSS settings to get back to a steady frame rate, but I would take the higher fidelity any day of the week.

    Dynamic frame gen won’t be here until later

    Small graphical enhancements are one thing, but Nvidia’s promising to maximize your monitor’s refresh rate with its new 6x frame gen capabilities. That will also spark a new “dynamic” frame gen mode, which will modify the frame gen between 4x and 6x to try and maximize your display’s refresh rate. Currently, you won’t find an override for 6x frame generation in the Nvidia app. In a message sent to Gizmodo, Nvidia said the dynamic frame gen plugin will be available to developers through the DLSS Multi Frame Generation Streamline Plugin this spring. For now, we’re stuck with the current 4x model.

    Dynamic mode makes sense. It pushes the frame rate to what your monitor is technically capable of. The one thing that Nvidia constantly fails to mention is that players actually need playable frame rates before they enable frame gen. You can get by with around 50 fps, but for fewer visual hiccups, you want at or close to 60 fps. There’s a certain point where frame gen is a tradeoff between performance and latency.

    Kyle Barr


  • AI Holograms Are Here. What Does This Mean for AI Companions?

    Gaming peripheral company Razer is betting that people want AI holograms. So much so that it introduced a perplexing new product at CES 2026 that early critics have dubbed a “friend in a bottle.” Project AVA is a small glass cylinder that houses a 5.5-inch animated desk buddy that can interact with you, coach you, or offer gaming advice on demand—all powered by xAI’s Grok.

    Project AVA uses a technology Razer calls “PC Vision Mode” that watches your screen, allowing its 3D animated inhabitant to offer real-time commentary on your gameplay, track your mood, or simply hang out. It attempts to sell the illusion of presence—a companion that isn’t just an app you close, but a physical object that lives in your room.​

    It’s not a bad idea, in theory. Giving AI a face is not just a marketing ploy but a biological inevitability. Yet Project AVA marks a strange new milestone in our march toward AI companions.

    The inevitability of holographic AI

    When OpenAI introduced GPT-4o voice chats in ChatGPT in the summer of 2024, humanity entered a new era of computer interaction. Suddenly, we could interact with AI voices that were smart and natural enough for humans to maintain a conversation. Since then, we have seen other voice AIs like Gemini Live, which introduce pauses, breathing, and other elements that cross the uncanny valley and allow many to suspend disbelief and even form a bond with these assistants.

    Research has shown that for deep emotional venting, users currently prefer voice-only interfaces because they feel safer and less judgmental. Without a face to scrutinize, we avoid the social anxiety of being watched.​ However, some neuroscientists argue that this preference may just be a temporary work-around for bad technology.

    Our brains are evolutionarily hardwired for face-to-face interaction. The “Mirror Neuron System” in our brains—which allows us to feel empathy by watching others—remains largely dormant during voice-only chats. A 2024 study on “Generation WhatsApp” confirmed that neural synchrony between two brains is significantly weaker during audio-only exchanges compared to face-to-face ones. To feel truly “heard,” we need to see the listener.​

    Behavioral science also tells us that up to 93% of communication is nonverbal. Trust is encoded in micro-expressions: a pupil dilating, a rapid blink, an open posture. A voice assistant transmits 0% of these signals, forcing users to operate on blind faith. Humans still find them very engaging because our brains fill in the gaps, imagining faces much as we do when reading a book. Furthermore, according to a 2025 brain scan study, familiar AI voices activate emotional regulation areas, suggesting neural familiarity builds with repeated interaction.

    Fast Company


  • 99% of BNY employees have AI access

    BNY is expanding its use of AI and has integrated the technology into its multiyear transformation plan to drive growth and efficiency. Eliza, the bank’s multi-model agentic AI platform, can act as a copilot for employees and allows them to build individual agentic tools. Eliza is being used by 99% of the bank’s workforce, according to BNY’s fourth-quarter earnings report. Employees can use the […]

    Vaidik Trivedi


  • AI Images Create Confusion as Real Gang of Monkeys Roams St. Louis

    Last Thursday, vervet monkeys were spotted near a park in St. Louis. Nobody knows who owns the monkeys or why they’re roaming around loose. But as police and health officials in the city are trying to keep an eye out for the little guys, one wrinkle of our modern age is complicating things. People are posting AI-generated pictures and videos to social media claiming to have found the monkeys, according to the Associated Press.

    “The Department of Health first became aware of the situation through reports from residents, as well as a sighting reported by a St. Louis Metropolitan Police Department Officer. Currently, the origin of these animals is unknown,” the local health department told First Alert 4.

    “A Department of Health Animal Care and Control Officer was dispatched on Thursday, Jan. 8, to investigate, but was not able to locate the animals. On Friday, Jan. 9, several officers patrolled the area based on continued reports of sightings, but the monkeys have still not been found,” the department’s statement continued.

    St. Louis Department of Health spokesperson Willie Springer told the AP that people have been posting fake images of the monkeys online, even claiming to have captured the monkeys. And it’s hard to tell what’s real.

    “It’s been a lot in regard to AI and what’s genuine and what’s not,” Springer told the AP. “People are just having fun. Like I don’t think anyone means harm.” The Health Department didn’t immediately respond to questions from Gizmodo on Monday afternoon.

    Some of the fake monkey images are pretty transparently fake, like the Instagram reels set to music by the Monkees. Others show the Sora watermark, indicating they were created with OpenAI’s video creation tool. But a large share of the public doesn’t seem to know that a Sora watermark means a video is fake.

    Then there are the AI videos that show the monkeys doing ridiculous things, like stealing cars.

    To top it all off, there are also claims that a random goat is roaming around St. Louis, though photos posted to Facebook could be AI as well. It’s hard to tell in the age of AI, when you literally can’t believe your own eyes anymore.

    Animal control is reportedly talking with experts at the St. Louis Zoo in an effort to find the monkeys. But even if they’re found, the owners are unlikely to come forward, according to First Alert 4. It’s illegal to keep monkeys in the city.

    Anyone in St. Louis who spots monkeys (in real life, not online) is being asked to call Animal Care and Control at 314-657-1500.

    Matt Novak

    Source link

  • SMBC Americas deploys Fenergo AI for KYC, AML compliance

    Banking giant SMBC Americas is deploying AI solutions from fintech Fenergo to streamline know-your-customer (KYC) checks, anti-money-laundering (AML) compliance and client lifecycle management at the $2.1 trillion bank.

    The deployment comes as part of a multiyear transformation aimed at “simplifying the technology infrastructure and removing manual processes,” SMBC Americas Chief Operating Officer Greg Keeley stated in a Jan. 7 release. 

    Financial institutions are seeing up to 80% reductions in manual review times for KYC and AML compliance with the fintech’s AI solution, according to Fenergo.  

    The compliance service provider is also helping banks achieve up to 70% faster client onboarding and 50% fewer KYC remediation cycles by “automating data extraction, client verification and risk scoring,” Fenergo Director of Thought Leadership Tracy Moore told FinAi News.

    “AI-driven insights also enhance risk detection accuracy, helping institutions identify potential AML issues earlier and with greater precision,” she said.  
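
    The article doesn’t spell out how that scoring works, so the sketch below is purely illustrative: a minimal rule-based KYC risk scorer of the general kind “risk scoring” implies, with field names, weights, and thresholds invented for the example rather than taken from Fenergo.

    ```python
    # Purely illustrative rule-based KYC risk scorer; Fenergo has not published
    # its model, and every field, weight, and threshold here is invented.
    from dataclasses import dataclass

    @dataclass
    class ClientProfile:
        country_risk: float      # 0.0 (low) to 1.0 (high), e.g. from a geography list
        pep_match: bool          # politically exposed person screening hit
        adverse_media_hits: int  # count of negative-news matches
        expected_volume_usd: float

    def kyc_risk_score(c: ClientProfile) -> float:
        """Combine weighted risk factors into a 0-100 score."""
        score = 40 * c.country_risk
        score += 30 if c.pep_match else 0
        score += min(c.adverse_media_hits * 5, 20)  # cap the media contribution
        score += 10 if c.expected_volume_usd > 1_000_000 else 0
        return min(score, 100)

    def review_tier(score: float) -> str:
        """Route a client to standard, enhanced, or human review."""
        if score >= 70:
            return "manual-review"
        return "enhanced-due-diligence" if score >= 40 else "standard"
    ```

    The point of automating this step is triage: low-score clients clear automatically, while analyst time concentrates on the high-score tail.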

    Fenergo is not the only fintech addressing growing demand for AI-driven compliance solutions in financial services.  

    • Digital solutions provider HGS today launched AMLens, an AML tool it says can reduce case-analysis time by up to 75%, according to a company release.  
    • Fintech Droit also recently launched a generative AI tool to enhance compliance decision-making.  

    Limiting disruption  

    Fenergo develops its platform internally and works with Amazon Web Services to power its AI tools securely and at scale, “ensuring financial institutions benefit from the latest advancements in cloud and machine learning technology,” Moore said.  

    “Our AI is delivered through Fenergo’s cloud-based SaaS platform, with flexible API integration so institutions can easily connect it to their existing systems.”

    — Tracy Moore, Fenergo

    “Our approach to AI is built with strong governance and transparency, giving financial institutions full oversight and control over how AI-driven insights are used in KYC, onboarding and compliance processes,” she said. 

    It typically takes six to 12 weeks for the compliance tool to be fully integrated into banking operations, though the timeline varies with the size of the institution, Moore said.

    To minimize disruptions during the implementation phase, Fenergo works “hand-in-hand” with each institution to design the right operating model for their specific needs, she said.  

    “We also reduce friction through open APIs, guided onboarding and AI-driven automation, which simplify integration, data migration and process setup,” she said.  
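
    The article doesn’t document any actual endpoints, so as a rough illustration of the “open APIs” integration pattern Moore describes, here is a hypothetical REST call pushing a client record into a cloud onboarding workflow; the base URL, path, payload shape, and response fields are all invented.

    ```python
    # Hypothetical REST integration with a cloud KYC/onboarding platform.
    # The URL, endpoint path, and response fields are invented for illustration;
    # a real deployment would follow the vendor's own API documentation.
    import requests

    BASE_URL = "https://api.kyc-vendor.example.com/v1"  # placeholder URL

    def submit_onboarding_case(api_token: str, client_record: dict) -> dict:
        """Push a new client record into the vendor's onboarding workflow."""
        resp = requests.post(
            f"{BASE_URL}/onboarding/cases",
            json=client_record,
            headers={"Authorization": f"Bearer {api_token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()  # e.g. {"case_id": "...", "status": "screening"}
    ```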

    Gen AI for compliance ops  

    The global market for generative AI in financial services is projected to more than double to $5.1 billion in 2029 from $1.9 billion in 2025, according to the Business Research Company, citing compliance-solutions demand as a key growth driver.  
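
    As a quick sanity check on that projection, growing from $1.9 billion in 2025 to $5.1 billion in 2029 implies a compound annual growth rate of roughly 28% over the four intervening years:

    ```python
    # Implied CAGR for the Business Research Company projection cited above.
    start, end, years = 1.9, 5.1, 4  # $B in 2025, $B in 2029, years elapsed
    cagr = (end / start) ** (1 / years) - 1
    print(f"{cagr:.1%}")  # -> 28.0%
    ```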

    Fenergo uses gen AI in specific areas of its platform to enhance efficiency and decision-making, Moore said.

    “For example, generative AI helps automate document summarization, data extraction and the generation of risk narratives or client due diligence summaries, all underpinned by strong governance and human oversight.”

    — Tracy Moore, Fenergo
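
    The “human oversight” caveat is the load-bearing part of that claim. As a rough sketch of the human-in-the-loop pattern (not Fenergo’s implementation; every name below is invented), the idea is simply that no model-generated summary reaches the case file without an analyst’s sign-off:

    ```python
    # Hypothetical human-in-the-loop wrapper for gen-AI due-diligence summaries.
    # `generate_summary` stands in for whichever model a vendor actually calls.
    from dataclasses import dataclass, field

    @dataclass
    class DraftSummary:
        case_id: str
        text: str
        approved: bool = False
        reviewer_notes: list[str] = field(default_factory=list)

    def generate_summary(case_id: str, documents: list[str]) -> DraftSummary:
        """Placeholder for a gen-AI summarization call."""
        draft_text = " ".join(doc[:80] for doc in documents)  # stand-in output
        return DraftSummary(case_id=case_id, text=draft_text)

    def file_summary(draft: DraftSummary, analyst_approved: bool, note: str = "") -> None:
        """Only an analyst-approved draft is written to the case record."""
        if note:
            draft.reviewer_notes.append(note)
        if not analyst_approved:
            raise ValueError("draft rejected; summary not filed")
        draft.approved = True
        # ...persist `draft` to the case-management system here...
    ```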

    While gen AI presents significant opportunities to bolster KYC workflows, there are several associated risks, according to credit analysis and financial solutions provider Moody’s, including: 

    • Hallucinations in text generation; 
    • Regulatory variance by state or country; and 
    • Algorithmic bias.  

    Thus, financial institutions must use trusted KYC databases for machine learning, integrate global data, maintain human oversight and update systems as needed, according to Moody’s.

    Many financial institutions, including Ally Financial, Grasshopper, University of Michigan Credit Union and TD, are deploying gen AI to fight money laundering and streamline KYC processes, according to FinAi News’ prior reporting.

    Quinn Donoghue

    Source link

  • Data centers will need $3 trillion through 2030, Moody’s says

    At least $3 trillion is set to flow into data-center-related investments over the next five years, capital that multiple corners of the credit markets will be called on to supply, according to Moody’s Ratings.

    Trillions of dollars will need to be invested across servers, computing equipment, data center facilities and new power capacity to support the boom in artificial intelligence and cloud computing, the ratings firm said in a report on Monday.

    Much of that capital will come directly from big tech companies, which are facing rising demand for data centers and the power needed to operate them. Six US hyperscalers — Microsoft Corp., Amazon.com Inc., Alphabet Inc., Oracle Corp., Meta Platforms Inc. and CoreWeave Inc. — are on track to hit $500 billion in data center investments this year, as capacity growth continues, said Moody’s.

    Banks will continue to play a “prominent role” in providing financings, and other institutional investors will increasingly lend alongside banks given the vast amounts of capital required, according to the report.

    Moody’s also estimates that more US data centers will tap into the asset-backed securities, commercial mortgage-backed securities and private credit markets when it comes time to refinance debt. New financings will grow in size and concentration, per the report, after record levels of issuance in 2025.

    In the US ABS market specifically, about $15 billion was issued in 2025, with Moody’s expecting volume to “grow considerably” this year in part due to data center construction loans.

    The vast amounts of debt required to support the AI revolution have raised some concerns that a bubble may be building, one that could eventually harm equity and credit investors if some of the technology underperforms high expectations.

    Demand to construct new data center capacity, however, shows no signs of slowing. Moody’s projects the race to build new capacity is still in its “early stages,” with growth poised to continue globally over the next 12 to 18 months.

    Capacity “will be needed at some point in the next 10 years or so,” said John Medina, senior vice president at Moody’s, adding that the pace of adoption is hard to predict as new technologies continue to emerge. “A ChatGPT that didn’t exist three years ago now uses a lot of compute.”

    Bloomberg News

    Source link