ReportWire

Tag: AI

  • Malaysia and Indonesia ban Musk’s Grok over sexually explicit deepfakes – Tech Digest



    Malaysia and Indonesia have blocked Elon Musk’s AI chatbot. The two countries are the first in the world to ban Grok following reports that the tool is being used to create sexually explicit deepfakes.

    This AI feature, hosted on Musk’s social media platform X, allows users to generate and edit images of real people without their consent. Regulators in both nations expressed deep concern that the technology is being weaponized to produce pornographic content involving women and children.

    Malaysia’s communications ministry stated that it issued multiple warnings to X regarding the “repeated misuse” of the chatbot earlier this year. However, officials claim the platform failed to address the inherent design flaws of the AI and instead focused only on its reporting process.

    Consequently, the service will remain blocked in Malaysia until effective safety safeguards are implemented to protect the public.

    In Indonesia, Digital Affairs Minister Meutya Hafid described the generation of such content as a direct violation of human dignity and online safety. The country has a history of strict digital enforcement, having already banned platforms like OnlyFans and Pornhub for similar reasons.

    Victims in the region have shared stories of finding their personal photos manipulated into revealing outfits, noting that the platform’s reporting tools often fail to remove the images quickly enough.

    The controversy is now spreading to the United Kingdom, where Prime Minister Keir Starmer described the situation as “disgraceful.” Technology Secretary Liz Kendall warned that the government would support regulators if they chose to block access to X entirely for failing to comply with safety laws.

    In response to these growing international restrictions, Elon Musk has accused government officials of attempting to suppress free speech.


    For latest tech stories go to TechDigest.tv



    Chris Price

    Source link

  • Why a Fairfax Co. elementary school is teaching kids the ‘how’ behind AI – WTOP News

    Vienna Elementary School’s Vienna.i.Lab is transforming education by introducing students to AI and advanced technology.

    David Lee Reynolds, Jr. spent two decades working as a music teacher before transitioning to teach technology.

    When he made the switch, Vienna Elementary School didn’t have a Science, Technology, Engineering, Arts and Math, or STEAM, lab. To best set students up for success, he knew the Northern Virginia campus needed one.

    That thought came around the same time the first large language models were debuting, and artificial intelligence was becoming more mainstream. So he knew once a lab was put together, it would have to be advanced. A traditional STEAM lab would come later.

    Eventually, Reynolds created the Vienna.i.Lab with the goal of helping students understand how the tech works, all so they’re set up to use it more effectively.

    “This is the new stuff, and it’s here to stay,” Reynolds said. “But if you don’t know what it is, then it’s not helpful to you. So let’s fix that.”

    To do it, Reynolds collaborated with the school’s parent-teacher association, which helped raise money so students could use new tools instead of traditional laptops.

    During a lesson on Friday afternoon, a group of first graders used KaiBots. They scanned a card with a code describing how the robot should move, and watched it either follow the instructions or identify an error.

    Even for some of the school’s youngest students, Reynolds said the lesson revealed the “building blocks of where you would eventually get to learning about machine learning, learning about large language models, learning about how ChatGPT works.”
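    The card-scanning exercise boils down to a tiny interpreter: read a sequence of movement commands and either execute them or flag the first one that doesn't parse. A minimal sketch of that idea in Python (the command names and grid model are invented for illustration, not the KaiBots' actual firmware):

```python
def run_cards(cards):
    """Interpret a sequence of movement cards for a toy robot on a grid,
    or report the first invalid card: the robot either follows the
    instructions or identifies an error."""
    moves = {"UP": (0, 1), "DOWN": (0, -1), "LEFT": (-1, 0), "RIGHT": (1, 0)}
    x, y = 0, 0
    for i, card in enumerate(cards):
        if card not in moves:
            return f"error: card {i} ({card!r}) is not a known command"
        dx, dy = moves[card]
        x, y = x + dx, y + dy
    return (x, y)  # final grid position

print(run_cards(["UP", "UP", "RIGHT"]))   # (1, 2)
print(run_cards(["UP", "FLY", "RIGHT"]))  # error: card 1 ('FLY') is not a known command
```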

    One student, Nora Vazeen, said the activity is different from what she does in most classes, and “It’s silly.”

    Another student, Callum, echoed that sentiment, saying, “The robot does silly stuff.”

    But, once a week during their technology special, students from kindergarten to sixth grade participate in hands-on activities. While the younger kids use KaiBots, the older students are programming drones.

    The work emphasizes problem-solving, collaboration and coding skills, Reynolds said.

    “For kids, if they understand how the tool works, they can do amazing things with the tool,” he said. “But if they don’t, they’re going to use the tool like it’s a search feature, and the next thing you know, they’re doing things that are wrong and they’re learning things that are incorrect.”

    While the AI lab is largely the tech cart Reynolds oversees in the corner of the school’s library, he’s hoping one day it can evolve into an innovative space.

    “Let’s build it in a green way,” Reynolds said. “Let’s build it underground. Let’s use geothermal heating and cooling. Let’s build a space, when you walk into it, you’re inspired to go and create.”

    Scott Gelman

    Source link

  • FIs deploy AI to fight digital-asset related fraud

    Financial institutions are cautiously deploying AI to crack down on fraud associated with digital assets. Crypto-related scams, ransomware, darknet market transactions and money laundering cost financial institutions $154 billion in 2025, a 162% increase from 2024, according to blockchain company Chainalysis’ Jan. 8 report. Banks are gearing up their infrastructure to help customers transact, store and invest in digital assets, which includes developing better fraud and anti-money laundering processes, Scott Southall, managing director at Citi Services, told FinAi News. “We’ve seen AI tools being […]

    Vaidik Trivedi

    Source link

  • Grok Lies About Locking Its AI Porn Options Behind A Paywall

    A week ago, a Guardian story revealed that Elon Musk’s Grok AI was knowingly and willingly producing images of real-world people in various states of undress, and even more disturbingly, images of near-nude minors, in response to user requests. Further reporting from Wired and Bloomberg demonstrated the situation was on a scale larger than most could imagine, with “thousands” of such images produced per hour. Despite silence or denials from within X, this led to “urgent contact” from various international regulators, and today X has responded by creating the impression that access to Grok’s image generation tools is now for X subscribers only. Another way of phrasing this could be: you now have to pay to use xAI’s tools to make nudes. Except, extraordinarily—despite Grok saying otherwise—it’s not true.

    The story of the last week has in fact been in two parts. The first is Grok’s readiness to create undressed images of real-world people and publish them to X, as well as create far more graphic and sexual videos on the Grok website and app, willingly offering deepfakes of celebrities and members of the public with few restrictions. The second is that Grok has been found to do the same with images of children. Musk and X’s responses so far have been to seemingly celebrate the former, but condemn the latter, while appearing not to do anything about either. It has taken until today, a week since world leaders and international regulatory bodies have been demanding responses from X and xAI, for there to be the appearance of any action at all, and it looks as if even this isn’t what it seems.

    How we got here

    The January 2 story from The Guardian reported that the Grok chatbot posted that lapses in safeguards had led to the generation of “images depicting minors in minimal clothing” in a reply to an X user. The user, on January 1, had responded to a claim made by an account for the documentary An Open Secret stating that Grok was being used to “depict minors on this platform in an extremely inappropriate, sexual fashion.” The allegation was that a user could post a picture of a fully dressed child and then ask Grok to re-render the image but wearing underwear or lingerie, and in sexual poses. The user asked Grok if it was true, and Grok responded that it was. “I’ve reviewed recent interactions,” the bot replied. “There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing.”

    By January 7, Wired published an investigation that revealed Grok was willing to make images of a far more sexual nature when the results weren’t appearing on X. Using Grok’s website and app, Wired discovered it was possible to create “extremely graphic, sometimes violent, sexual imagery of adults that is vastly more explicit than images created by Grok on X.” The site added, “It may also have been used to create sexualized videos of apparent minors.” The generative-AI was willing and able to create videos of recognizable celebrities “engaging in sexual activities,” including a video of the late Diana, Princess of Wales, “having sex with two men on a bed.”

    Bloomberg‘s reporting spoke to experts who talked about how Grok and xAI’s approach to image and video generation is materially different from that being done by other big names in generative-AI, stating that rivals offer a “good-faith effort to mitigate the creation of this content in the first place” and adding, “Obviously xAI is different. It’s more of a free-for-all.” Another expert said that the scale of deepfakes on X is “unprecedented,” noting, “We’ve never had a technology that’s made it so easy to generate new images.”

    Where we are now

    It is now being widely reported that access to Grok’s image and video generation has been restricted to only paying subscribers to X. This is largely because when someone without a subscription asks Grok to make an image, it is responding with “Image generation and editing are currently limited to paying subscribers,” then adding a link so people can pay up for access.

    However, as discovered by The Verge, this isn’t actually true at all. While you cannot currently simply @ Grok to ask it to make an image, absolutely everyone can still click on the “Edit image” button and access the software that way. You can also just visit Grok’s site or app and use it that way.

    This means that the technology is currently lying to users to suggest they need to subscribe to X’s various paid tiers if they wish to generate images of any nature, but still offering the option anyway if the user has the wherewithal to either click a button, or if they’re on the app version of X, to long-press an image and use the pop-up.

    What does Elon Musk have to say?

    Musk, as you might imagine, has truly been posting through it. Moments before the story of the images of minors broke, following days of people discovering Grok’s willingness to render anyone in a bikini, Musk was laughing at images of himself depicted in a two-piece, before a rapid reverse-ferret on January 3 as he made great show of declaring that anyone discovered using Grok for images of children would face consequences, in between endlessly claiming that his Nazi salute was the same as Mamdani doing a gentle wave to crowds. Since then (alongside posting full-on white supremacist content), the X owner’s stance has switched to reposting other people’s use of ChatGPT to demonstrate that it, too, will render adults in bikinis, seemingly forgetting that the core issue was Grok’s willingness to depict children, and declaring that this proves the hypocrisy of the press and world leaders.

    Regarding today’s developments, he has not uttered a peep. Instead his feed is primarily deeply upsetting lies about the murder of Renee Nicole Good and uncontrolled rage at the suggestion from Britain’s Prime Minister, Keir Starmer, that X might be banned in the UK as a consequence of the issues discussed above.

    John Walker

    Source link

  • Allianz, Anthropic partner to expand AI use in insurance

    Global financial services group Allianz SE has partnered with AI solutions provider Anthropic to accelerate responsible AI deployment, with a focus on strengthening insurance decision-making.

    The collaboration centers on three projects: deploying AI to support Allianz software developers, automating labor-intensive insurance workflows such as claims processing, and building AI systems designed to meet compliance requirements by fully documenting decisions and data sources, according to an Allianz release today.

    Anthropic’s Claude AI models will be integrated into Allianz’s internal AI platform for employees, including tools designed to assist thousands of developers globally. The companies are also developing AI agents to automate multi-step processes in areas such as motor and health insurance, while maintaining a “human-in-the-loop” approach for complex or sensitive claims.
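    The combination of fully documented decisions and a “human-in-the-loop” escalation path can be sketched in a few lines. This is an illustrative Python sketch only, not Allianz’s actual pipeline: the classifier, confidence threshold, and record fields are all assumptions.

```python
import datetime
import json

audit_log = []  # compliance trail: one JSON record per decision

def process_claim(claim, classify):
    """Route a claim through an AI classifier, log the decision and its
    data sources for compliance, and escalate complex or sensitive
    claims to a human reviewer instead of deciding automatically."""
    decision, confidence, sources = classify(claim)
    record = {
        "claim_id": claim["id"],
        "decision": decision,
        "confidence": confidence,
        "data_sources": sources,  # documents what the decision relied on
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # Human-in-the-loop: low confidence or sensitive flags go to a person.
    if claim.get("sensitive") or confidence < 0.8:
        record["decision"] = "escalate_to_human"
    audit_log.append(json.dumps(record))
    return record["decision"]

# Toy classifier standing in for a model call.
def toy_classify(claim):
    ok = claim["amount"] < 1000
    return ("approve" if ok else "review", 0.95 if ok else 0.6, ["policy_db"])

print(process_claim({"id": "C1", "amount": 400}, toy_classify))   # approve
print(process_claim({"id": "C2", "amount": 5000}, toy_classify))  # escalate_to_human
```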

    The partnership builds on Allianz’s existing use of AI to improve customer service, including multilingual voice assistants for roadside assistance and automated claims processing that significantly reduces turnaround times.

    Register here by Jan. 16 for early bird pricing for the inaugural FinAi Banking Summit, taking place March 2-3 in Denver. View the full event agenda here. 

    FinAi News, AI-assisted

    Source link

  • AI surveillance stopping 1,400 shoplifting crimes daily across UK retailers – Tech Digest


    Over 1,400 shoplifters are being intercepted by facial recognition cameras every day as Britain’s retail industry turns to high-tech warfare to combat an industrial-scale wave of theft.

    New data reveals that the Facewatch AI system, currently deployed across major chains including Sainsbury’s, Sports Direct, and Home Bargains, issued more than half a million “known thief” alerts to shop staff in 2025. This represents a staggering 1,415 interventions per day, more than doubling the volume of detections recorded just one year prior.

    The technology works by scanning the faces of shoppers as they enter a store and cross-referencing them against a digital watchlist of prolific offenders. Within an average of nine seconds, the system can flag a “subject of interest,” allowing security teams to either monitor the individual or escort them from the premises before goods are taken.
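    Watchlist matching of this kind is typically built on face embeddings compared by similarity. A highly simplified Python sketch of the idea (the embeddings, threshold, and subject IDs here are invented; Facewatch’s actual pipeline is not public):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_watchlist(probe, watchlist, threshold=0.9):
    """Return the best watchlist match above threshold, else None.

    probe: face embedding of the shopper entering the store.
    watchlist: {subject_id: embedding}. Real systems tune the threshold
    carefully, since false positives blacklist innocent shoppers."""
    best_id, best_sim = None, threshold
    for subject_id, emb in watchlist.items():
        sim = cosine(probe, emb)
        if sim > best_sim:
            best_id, best_sim = subject_id, sim
    return best_id

watchlist = {
    "subject-001": (0.9, 0.1, 0.4),
    "subject-002": (0.2, 0.8, 0.5),
}
print(match_watchlist((0.88, 0.12, 0.42), watchlist))  # subject-001
print(match_watchlist((0.5, 0.5, 0.5), watchlist))     # None (below threshold)
```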

    Facewatch CEO Nick Fisher defended the rapid expansion of the network, stating that the figures reflect a reality where retailers must act “faster and smarter” to protect employees and stock.

    While official police records show shoplifting offences hit a record high of 529,994 last year, industry experts believe the true scale of the crisis is closer to 20 million thefts annually, costing businesses £2.6 billion.

    However, the rise of the machines has sparked a fierce backlash from privacy campaigners who warn that innocent shoppers are being caught in a digital dragnet. Groups like Big Brother Watch have highlighted cases of “human error” where shoppers were humiliated and blacklisted from their local stores after being falsely flagged by the AI.

    One victim, a shopper named Jenny, described being blocked by security and accused of theft in front of other customers due to a false match. She warned that technology companies have effectively become “judge, jury and executioner” with no legal due process for those wrongly accused.

    Despite these concerns, retailers are doubling down on the technology. During the week leading up to Christmas Eve alone, the system issued nearly 15,000 alerts, marking the busiest period for AI-driven crime prevention in UK retail history.






    Chris Price

    Source link

  • Nessel challenges fast-tracked DTE data center deal, citing risks to ratepayers and lack of public scrutiny – Detroit Metro Times

    Michigan Attorney General Dana Nessel is urging state utility regulators to reconsider their approval of special power contracts for a massive data center planned in Washtenaw County, warning the fast-tracked decision could leave electric customers exposed to higher costs.

    Nessel announced Friday that her office filed a petition for rehearing with the Michigan Public Service Commission over its Dec. 18 decision to conditionally approve two special contracts sought by DTE Energy to serve a proposed 1.4-gigawatt hyperscale artificial intelligence data center in Saline Township.

    The project, tied to Oracle, OpenAI, and developer Related Digital, would be among the largest data centers in the country and is expected to consume as much electricity as nearly one million homes. Its scale has caused concerns among residents, environmental advocates, and consumer watchdogs about long-term impacts on electric rates, grid reliability, and the environment.

    Nessel’s move also pits her against Gov. Gretchen Whitmer, a fellow Democrat who has publicly backed the data center as “the largest economic project in Michigan history.” Whitmer celebrated the project when it was announced last fall, citing thousands of construction jobs and hundreds of permanent positions. 

    On Thursday, U.S. Senate candidate Abdul El-Sayed, a progressive Democrat, released what he called “terms of engagement” aimed at protecting communities from higher utility bills, grid strain, and environmental harm tied to data centers.

    At least 15 data center projects have been proposed across the state in the past year.

    The split among Democrats is part of a broader debate over whether Michigan should keep fast-tracking energy-hungry data center projects tied to the AI boom.

    In her petition, Nessel challenges the commission’s authority to approve the contracts behind closed doors without holding a contested case hearing that would allow discovery, sworn testimony, and full public review. She also questions whether the conditions imposed by the commission are meaningful or enforceable.

    In a statement Friday, the Michigan Public Service Commission said it “looks forward to considering Nessel’s petition for rehearing,” but the commission “unequivocally rejects any claim that these contracts were inadequately reviewed.”

    The commission said its professional staff, advisory staff, and commissioners were provided with unredacted versions of the special contracts and reviewed them thoroughly to ensure existing customers are protected. The commission said its order recognizes DTE’s legal obligation to serve the data center while imposing what it described as the strongest consumer protections for a data center power contract in the country.

    The attorney general is seeking clarification on how those conditions would protect ratepayers, noting that many appear to rely on repeated assurances from DTE, rather than concrete commitments backed by evidence. Nessel also objected to the commission allowing DTE to serve as the project’s financial backstop, rather than requiring the data center operator to provide sufficient collateral to cover potential risks.

    “I remain extremely disappointed with the Commission’s decision to fast-track DTE’s secret data center contracts without holding a contested case hearing,” Nessel said in a statement. “This was an irresponsible approach that cut corners and shut out the public and their advocates. Granting approval of these contracts ex parte serves only the interests of DTE and the billion-dollar businesses involved, like Oracle, OpenAI, and Related Companies, not the Michigan public the Commission is meant to protect.”

    She said the commission’s approval process served the interests of DTE and the companies behind the project rather than Michigan residents.

    “The Commission imposed some conditions on DTE to supposedly hold ratepayers harmless, but these conditions and how they’ll be enforced remain unclear,” Nessel said. “As Michigan’s chief consumer advocate, it is my responsibility to ensure utility customers in this state are adequately protected, especially on a project so massive, so expensive, and so unprecedented.”

    Large portions of the contracts remain heavily redacted, preventing outside parties from verifying DTE’s claims that serving the data center will not raise rates for existing customers. Nessel said a contested case is necessary to review the full contracts, assess affordability claims, and confirm that protections, such as collateral requirements and exit fees, are in place.

    The commission ordered DTE to formally accept its conditions within 30 days of its Dec. 18 order. Nessel said that timeline complicates decisions about whether further legal challenges are necessary, prompting her office to file the rehearing petition in part to preserve its arguments.

    The power contracts are one piece of a larger controversy surrounding the Saline Township project referred to as “Project Stargate.” Residents and environmental groups have raised alarms about wetlands destruction, water contamination risks, and the permanent transformation of a rural farming community.

    More than 5,000 public comments opposing the data center power deal were submitted to the commission ahead of its December vote. Critics argue the rush to approve the contracts is part of a broader pattern as deep-pocketed utilities and developers seek to capitalize on the AI boom, which is driving a nationwide surge in electricity demand from large-scale data centers.

    “As my office continues to review all potential options to defend energy customers in our state, we must demand further clarity on what protections the Commission has put in place and continue to demand a full contested case concerning these still-secret contracts,” Nessel said.


    Steve Neavling

    Source link

  • AI paves way for equipment lenders to predict residual values

    AI advancements are enabling lenders to better predict residual values, a boon for the equipment finance industry as machines become increasingly tech heavy.  

    The global market for AI in financial services is expected to grow 34.3% annually between 2025 and 2032, reaching $249.5 billion, according to Verified Market Research. The global predictive AI market is projected to hit $88.6 billion by 2032, a more than fourfold increase from 2025, according to research firm Market.us.

    The potential benefits of AI for predicting residuals are especially relevant for equipment lenders as autonomous solutions, telematics systems, GPS systems and other machine technologies enter the market. Lenders have been reluctant to finance new tech-heavy machines due to residual-value uncertainty. The uncertainty is driven by:  

    • Limited historical performance data;  
    • Rapid obsolescence; and  
    • Lack of a resale market.  

    Nearest neighbor  

    Fintechs and lenders can overcome these hurdles by deploying the “nearest-neighbor technique” with machine learning, Timothy Appleget, director of technology services at Tamarack Technology, an AI and data solutions provider, told FinAi News’ sister publication Equipment Finance News.

    The nearest-neighbor method uses proximity to make predictions or classifications about the grouping of an individual data point, according to IBM. The technique helps “fill gaps in data that don’t exist,” Appleget said.

    For example, rather than just gathering scarce residual-value data for autonomous equipment, lenders and fintechs should seek data for the technologies enabling them — or other asset types with similar systems.  

    Data integrity is crucial during this process, Tamarack President Scott Nelson told EFN.

    “If I can find an asset type that’s inside the definition of this more techy thing, then that’s like a nearest neighbor,” he said.  
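    The nearest-neighbor idea can be sketched in a few lines of Python: estimate a residual by averaging the k most similar comparable assets in feature space. All numbers below are illustrative, not market data.

```python
from math import sqrt

def knn_residual_estimate(target, comparables, k=3):
    """Estimate a residual value for an asset with little sales history
    by averaging the k nearest comparable assets.

    Each comparable is (features, residual_pct): features are numbers
    such as age in years, engine hours / 1000, and a 0-1 "tech content"
    score; residual_pct is resale value as a fraction of original cost."""
    def dist(a, b):
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    nearest = sorted(comparables, key=lambda c: dist(target, c[0]))[:k]
    return sum(r for _, r in nearest) / k

# Comparable assets that share subsystems (telematics, GPS) with the
# new machine, standing in for the missing direct resale history.
comps = [
    ((3.0, 2.1, 0.8), 0.55),
    ((5.0, 4.0, 0.7), 0.40),
    ((2.0, 1.0, 0.9), 0.62),
    ((7.0, 6.5, 0.3), 0.28),
]

estimate = knn_residual_estimate((3.5, 2.5, 0.85), comps, k=3)
print(round(estimate, 3))  # 0.523
```

    In practice features should be normalized so no single dimension dominates the distance, and k tuned against whatever resale history does exist.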

    Borrower behavior 

    Borrower behavior is also an important factor to consider when developing AI tools for predicting residuals, Nelson said.  

    “One of the biggest effects on residuals is usage. So, an interesting question would be: Is anybody out there trying to aggregate data about the operators to predict the behavior of the people moving this equipment around?” 

    — Scott Nelson, president, Tamarack Technology

    To achieve this, fintech-lender partners can take advantage of the data collection and transmission capabilities of emerging equipment technologies, such as telematics, Nelson said. Even simple tech, like shock and vibration sensors, can aid this process, he said. 

    “You get two things immediately: You get runtime, because anytime the thing is vibrating, it’s running,” he said. “If you’ve got runtime, you’ve got hours on the engine, which is one of the big factors. The shock sensors tell you whether or not it got into an accident or whether or not it was abused.”

    “That runtime data can also be converted into revenue generation. How often is this thing generating revenue?” 

    — Scott Nelson, president, Tamarack Technology
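    The runtime-and-shock idea Nelson describes reduces to thresholding periodic sensor samples. A minimal Python sketch, where the thresholds, sample interval, and units are assumptions for illustration:

```python
def runtime_hours(readings, threshold=0.5, interval_s=60):
    """Estimate engine runtime from periodic vibration-sensor samples:
    any sample above threshold counts as 'running', following the idea
    that anytime the machine is vibrating, it's running."""
    running_samples = sum(1 for r in readings if r > threshold)
    return running_samples * interval_s / 3600.0

def shock_events(readings, shock_threshold=5.0):
    """Count samples exceeding a shock threshold: a crude proxy for
    impacts or abuse flagged by shock sensors."""
    return sum(1 for r in readings if r > shock_threshold)

# 1-minute samples over two hours: idle, then running, then idle.
samples = [0.1] * 30 + [1.8] * 75 + [0.2] * 15
print(runtime_hours(samples))              # 1.25 (hours of engine time)
print(shock_events([0.1, 6.0, 1.0, 7.5]))  # 2
```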

    Integrating operator-behavior data with predictive AI could help lenders gain a competitive edge because many take a conservative approach when financing relatively new assets, Appleget said. 

    “This additional asset-behavioral data, to me, opens up the potential for having more flexibility in the residual values you set for a specific asset,” he said. “If you have that level of sophistication, you can gain a considerable advantage.” 


    Quinn Donoghue

    Source link

  • Inside Singapore’s AI bootcamp to retrain 35,000 bankers

    Kelvin Chiang knew the five agentic AI models built by his team could in ten minutes do what used to take a private banker an entire day. With that in place, he went to show Singapore’s banking regulator that the safeguards would sufficiently control the risks. Before rolling out the tool that drafts documents for relationship […]

    Bloomberg News

    Source link

  • IWF claims Grok creating ‘criminal imagery’ of girls, Anthropic planning $10bn fundraise – Tech Digest


    The Internet Watch Foundation (IWF) charity says its analysts have discovered “criminal imagery” of girls aged between 11 and 13 which “appears to have been created” using Grok. The AI tool is owned by Elon Musk’s firm xAI. It can be accessed either through its website and app, or through the social media platform X. The IWF said it found “sexualised and topless imagery of girls” on a “dark web forum” in which users claimed they used Grok to create the imagery. The BBC has approached X and xAI for comment. BBC

    Cyber flashing became illegal in 2024. Now, the government is making it a priority offence, putting the pressure on tech companies to do something about it.  Cyber flashing is when someone sends a non-consensual explicit picture – best known as a “dick pic”. It’s most often women on the receiving end and, according to research by dating app Bumble, the adults most likely to receive those images are women between 40 and 45 years old. Sky News 


    Anthropic is planning a $10bn fundraise that would value the Claude chatbot maker at $350bn, according to multiple reports published on Wednesday. The new valuation represents an increase of nearly double from about four months ago, per CNBC, which reported that the company had signed a term sheet that stipulated the $350bn figure. The round could close within weeks, although the size and terms could change. Singapore’s sovereign wealth fund GIC and Coatue Management are planning to lead the financing, the Wall Street Journal reported. The Guardian

    After kicking off its Moto Things accessory line with wireless earbuds, a Bluetooth tracker and a cheap smartwatch in 2024, Motorola is doubling down. At CES 2026, the company is announcing the Moto Tag 2, a sequel to its tracker; the Moto Pen Ultra, a stylus for its new folding phone; and a more premium smartwatch called the Moto Watch. The Moto Watch has a 47mm round face with a stainless steel crown and an aluminum frame. The smartwatch comes with a PANTONE “Volcanic Ash” silicone band, but is designed to support third-party 22mm bands too. Engadget

    The Roborock Saros Rover represents a literal step forward in robot vacuum mobility. On display at CES, the Rover features a pair of leg-like mechanisms designed to mimic human movement. This allows the nimble cleaner to lift itself over obstacles, pivot sharply, hop across gaps, and—most strikingly—climb stairs while continuing to clean. The company hasn’t yet announced pricing or a release date, but the unit I saw at CES was fully operational, signaling that it’s more than a distant concept. PC Mag

    Ring has announced a new line of security sensors, switches, and other smart home devices that use its low-power, long-range Sidewalk connectivity protocol and don’t need a hub — or even Wi-Fi — to connect to your smart home. Sidewalk works across three existing wireless radio technologies — Bluetooth Low Energy (BLE), LoRa, and 900 MHz — and “provides the benefits of a cellular network at the cost of a Wi-Fi one,” says Ring founder Jamie Siminoff. “It’s like a cellular network built for IOT.” The Verge 

    OnePlus has been updating its smartphones to OxygenOS 16 based on Android 16 for quite a while now, and it’s finally reached lower-midrange devices today. The update is now available for the Nord CE4 and the Nord CE4 Lite, which were both released in 2024. The Nord CE4 is seeing the rollout commencing in India with the new software build being labeled CPH2613_16.0.2.400(EX01).

    OnePlus Nord CE4 and CE4 Lite get Android 16

    The Nord CE4 Lite’s new build number is CPH2619_16.0.1.301(EX01). This too is only rolling out in India at the moment, with more territories supposedly to follow in the future. GSM Arena 

    Chris Price

    Source link

  • At CES 2026, iMogul AI pitches a smarter path into Hollywood – WTOP News

    iMogul AI, created by a Rockville startup, is designed to help screenwriters, actors and producers connect — using artificial intelligence not to create content, but to analyze it.

    iMogul CEO Chris LeSchack at CES 2026 in Las Vegas, Nevada. (Courtesy Steve Winter)

    Breaking into Hollywood has never been easy.

    For decades, aspiring screenwriters have faced a familiar cycle: write a script, submit it, wait, follow up, wait some more — and often never hear back. In an industry where who you know is invariably more valuable than what you know, even strong material can die on the vine before it ever reaches the right decision-makers.

    At CES 2026, a Rockville, Maryland-based startup believes it has found a way to disrupt that process.

    Exhibiting this year from Eureka Park at CES, iMogul AI is unveiling a platform designed to help screenwriters, actors and producers connect more efficiently — using artificial intelligence not to create content, but rather to analyze, validate and accelerate the acceptance process, essentially trimming that all-important barrier to entry.

    “The company and the product is called iMogul,” CEO Chris LeSchack said. “As we all know, it’s incredibly hard to get into Hollywood. iMogul is essentially designed for screenwriters who have created screenplays but don’t know where to go with it.”

    LeSchack speaks from personal experience. In 2005, he attempted to pitch a screenplay to Fox Studios. While the studio expressed interest, the project ultimately stalled.

    “They said, ‘Yeah, Jerry Bruckheimer has done this before. Maybe next time,’” LeSchack recalled.

    The experience planted the seed for what would eventually become iMogul AI.

    Rather than acting as another script-hosting site or marketplace, iMogul AI aims to create a feedback-driven ecosystem around each screenplay. Writers upload their scripts to the app, where audiences can read them, vote on elements such as casting, filming locations and creative direction, and provide validation that can be shared with potential investors and producers.

    “What if I had an app and got the demographics or the information from the audience that actually go and read the script, vote on actors, vote on directors and cinematographers?” LeSchack said. “And then I take that information and provide it to friends and family investors or actual real investors who are interested in Hollywood.”

    iMogul AI, LeSchack said, absolutely does not use generative AI to write or alter scripts.

    “I don’t use AI to do anything with the content itself,” he said. “That’s all the screenwriter.”

    Instead, the platform applies AI to market analysis — evaluating potential audiences, identifying tax incentives and shooting locations, and recommending actors who might align with a project’s budget and goals.

    “If the screenwriter is interested in selecting their own talent, they can go and do that,” LeSchack said. “While the higher tier actor or actress a film engages, the higher will be the value of the screenplay; but in many instances, we want to bring in relative unknowns … some B-listers and others … talent that might bring the cost down while also helping the screenwriter pitch it to investors and producers.”

    The AI also analyzes scripts to suggest optimal filming locations. By parsing external and internal scenes, settings and themes, the system can flag regions with favorable tax incentives.

    “We’re using AI really to … deal with flow,” LeSchack said. “Help actors, screenwriters get back to work, producers — in fact, everybody in the film industry.”

    Bypassing traditional gatekeepers

    For emerging creatives, that promise resonates strongly.

    Zsuzsanna Juhasz, an employee of iMogul AI, is also a junior at USC majoring in film studies and production. As she embarks on a career in the entertainment industry, Juhasz is exactly the sort of person iMogul was created for.

    “One of the scariest things about breaking into the industry is not knowing the right people,” Juhasz said. “If you don’t know the right people, maybe your work won’t be recognized or it won’t get out there. And that’s terrifying as you’ve invested four years into your education building your portfolio.”

    She sees iMogul AI as a way to bypass traditional gatekeepers.

    “This app will bridge that connection,” she said. “My work will be in front of audiences. People can read the kind of worlds I’m building, the characters I’m building, and they’ll be interested in that. They can vote for it.”

    The platform’s casting features are also central to its appeal. Actors can read sides, submit reels and audition directly through the app — opening doors for performers without agency representation.

    “It lets you have a sort of control that the industry doesn’t always offer you,” Juhasz said.

    That functionality will soon expand, thanks to a new feature called iMogul Take One, which LeSchack announced at CES.

    “Take One is going to invite actors to come in and read sides … and then pitch it out into the real world,” he said. “So we might be able to find the next up-and-coming actor.”

    The app is currently free to download on Apple’s App Store, with an Android version in the works. While screenwriters may eventually pay a modest monthly fee, LeSchack said the priority is growth.

    “The more screenwriters that put screenplays up there, more audience comes in,” he said.

    As iMogul AI makes its CES debut, the company is positioning itself not as a replacement for Hollywood, but as a smarter on-ramp. For creatives long locked out of the system, that may be the most compelling pitch of all.

    Thomas Robertson

    Source link

  • Droit launches gen AI-powered compliance tool

    Droit, a technology firm focused on computational law and regulation, today announced the launch of Decision Decoder, a generative AI-powered tool designed to explain regulatory compliance decisions. Decision Decoder provides context-aware, plain-language explanations for compliance determinations made by its patented Adept platform, according to the release. The Adept platform is used by major financial institutions […]

    FinAi News, AI-assisted

    Source link

  • Bluevine’s AI chatbot resolves 80% of customer queries

    Bank services provider Bluevine is seeing improved customer experience and efficiency through its deployment of AI. Chief Product Officer Herman Man told FinAi News that Bluevine uses AI to provide its small and medium-sized business clients with fraud monitoring, software development, customer assistance and servicing, and underwriting support. “We’re focused on deploying AI on tasks that are tangible and provide simple solutions for our customers that will make their day-to-day better,” Man said. “For instance, we […]

    Vaidik Trivedi

    Source link

  • Trump ‘leaked’ audio about Epstein, Venezuela isn’t real

    Days after the capture of Venezuelan leader Nicolás Maduro, a viral audio clip appears to capture President Donald Trump yelling at advisers to stop the release of the sex offender Jeffrey Epstein’s files.

    “Leaked Donald Trump audio about the Epstein files and Venezuela,” reads the caption of a Jan. 5 Facebook post sharing the purported recording that drew over 2 million views.

    “(We’re) not releasing the Epstein file, f— Marjorie Taylor Greene, I don’t care what you do, start a f—— war, just don’t let them get out. If I go down, I will bring all of you down,” Trump appears to say. 

    A reporter can then be heard asking Trump if he is all right, to which Trump says, “I feel great, I was shouting at people because they were stupid about something.”

    That part of the recording is authentic. But the first part — about Epstein and Greene — isn’t.

    The fake audio matches the audio in a TikTok video from Nov. 18, 2025, before the U.S. captured Maduro on Jan. 3. Fact-checkers from Lead Stories and Snopes found a similar version of the audio first published Nov. 5, 2025, by the @fresh_florida_air TikTok account, which is no longer available. The archived version of that video shows a watermark from Sora, OpenAI’s video-generation platform. Since the launch of Sora 2 on Sept. 30, 2025, the tool has been able to generate audio-only results.

    The TikTok account, @fresh_florida_air, posted another version of the “leaked” audio that featured a Sora watermark that said @bradbradt31. PolitiFact searched for that username on the Sora app, but that account is also unavailable. 

    The TikTok user, @fresh_florida_air, told Snopes that the videos were AI-generated. “My intent is creative expression, not presenting anything as factual,” the user said. 

    The second part of the audio clip in the Facebook post, which features a reporter asking Trump if he’s OK, is real, but it was taken out of context. On Nov. 17, 2025, a reporter questioned why the president sounded hoarse. A longer version of Trump’s response reveals he said he had been shouting during trade talks with a foreign country. Trump was not being asked about leaked audio or the Epstein files.

    Our ruling

    A viral Facebook post claims to show “leaked Donald Trump audio about the Epstein files and Venezuela.”

    The audio was created with artificial intelligence. 

    PolitiFact found the first part of the clip was generated with OpenAI’s video-generating platform, Sora. 

    The second part of the clip is real, but it’s from November 2025, before Maduro was captured by the U.S. government. At the time, Trump was not being asked about leaked audio or the Epstein files. We rate this claim False.

    Source link

  • An AI ‘Ghost’ That Plays Games For You Is The Inevitable Endpoint

    Whenever my kids get stuck playing a game, they run around the house yelling for me to help them. Doesn’t matter where I am or what I’m doing. Making dinner, taking out the trash, going to the bathroom, nowhere is safe. I patiently try to explain to them that back in my day, there was no grownup to help me beat Snake Man in Mega Man 3 or find Excalibur in Final Fantasy IV. I just had to bash my head against the wall until I figured it out or give up until I got older.

    They never find this paternal wisdom satisfactory, so there I am finding them Zonai devices in The Legend of Zelda: Tears of the Kingdom or turning off damage in the Minecraft settings menu like a personal accessibility assistant. Will they do the same for their children? They might not have to. New AI “ghosts” might be able to do everything in the game for them. The games will, on command, be able to play themselves. Perfect for grinding cryptocurrency in the Roblox mines while the oceans rise. RIP my future grandchildren.

    A Sony patent for these AI ghosts has been making the rounds online today. As reported by VGC, the September 2024 registration documents, which were publicized earlier this week, reveal a technology that would allow people to get AI to help them beat games. These AI “ghost players” would be trained on existing game footage and either demonstrate the solution to an obstacle (“Guide Mode”) or beat it entirely (“Complete Mode”).

    It’s not clear from the patent whether Sony actually plans to move forward with this new AI help tool now or in the future. People have made jokes online about how bad current AI is at hallucinating gameplay, showing you something that looks normal enough before shifting into surreal nightmare fuel just moments later. There are also concerns about how the AI “helper” would be trained, which would seemingly include footage shared on social media and YouTube.

    the point of a movie is not to be done watching it. the point of a song is not to be done listening to it. the gamers’ obsession with completion as the only motivator to play a game has directly led to the medium’s worst traits. “AI gaming ghost” is a reflection of the lack of willingness to engage

    funbil (@funbil.bsky.social) 2026-01-06T14:58:57.706Z

    Gaming has a long history of companies trying to help players overcome the difficulty that they themselves designed into their games. In the past there were hotlines and strategy manuals. More recently, companies have tried to embed guides directly into the games. Game Help on PlayStation 5 shows you videos of how other players have completed a particular section of a game. It’s a neat idea whose implementation is messy and incomplete. Microsoft is trying to go a step further and embed its Copilot AI into games to offer chatbot-style assistance as an overlay like a new version of Clippy.

    Tools like this could be a boon for helping more people enjoy games, or at least get unstuck before bouncing off in a fit of boredom or frustration. But there’s also a Black Mirror version where all of the friction of actually playing a game is offloaded to AI agents entirely. How many games would be improved by adding a skip button that lets you fast-forward your progression by 20 seconds or 20 minutes? How many games would you stop playing entirely if you could offload the drudgery altogether?

    AI ghosts grinding AI optimized battle passes

    Players love to optimize strategies and get one over on the games they’re playing. Sometimes that means doing a lot of work to grind as effectively as possible or craft the most broken build. Other times it means wrapping a rubber-band around an analog stick and going to sleep while the game does all the work for you. What would it be like to play Diablo 4 if those builds you had to look up online were automatically recommended inside the skill tree menu?

    What would be the point if at any time you could put the mouse and keyboard down and let an AI agent, trained on YouTube or even your own play history, take the wheel and grind until all of that hyper-rare loot finally drops? Not everyone would go for it. Maybe some would. We already know what choice Elon Musk would make.

    Experience-based games, the ones you play for the choices or the story, would probably be safe, though even fans of things like Dispatch might be tempted to have someone else handle all of the less engaging mini-games. Multiplayer games have faced an ongoing arms race with cheaters for years. Who wouldn’t be tempted to take credit for a duos Battle Royale win pulled off by their AI counterpart? None of this is in the Sony patent for AI help, but it’s all in the same Pandora’s box.

    In fact, some of the most popular games of the past few years play with automating the player’s role to some degree until they are irrelevant to the outcome. That’s what made people obsessed with Vampire Survivors. It’s what helped Ball X Pit sell over a million copies. It’s what made Megabonk so popular, it ended up being nominated for an award at the Game Awards that the developer had to recuse himself from. Some games call upon us to embrace the moment-to-moment drudgery of simulated work. Others lure us with the siren’s call of participating in a high-score chase where the big reward is seeing our own participation incrementally diminished.

    In 20 years, even that concept might sound as alien to my grandkids as calling something a “button masher.” By then, the computers will no doubt be able to read the inputs directly from their minds. What the AI chooses to do with those, well, that’s anybody’s guess.

    Ethan Gach

    Source link

  • How S&P Global built a multimodel AI platform

    S&P Global has opted for a multimodel AI approach to boost efficiency.  The strategy is comparable to a multicloud approach, Gia Winters, managing director of North America at Google Cloud, told FinAI News.   The financial services company teamed up with Google Cloud in December 2025 to use the tech provider’s Vertex AI platform, allowing S&P to access Google Gemini and other third-party and open-source models, Winters said.  Google Cloud’s Vertex AI platform is used by S&P to provide clients with […]

    Vaidik Trivedi

    Source link

  • AMD unveils new AI PC processors for general use and gaming at CES | TechCrunch

    AMD Chair and CEO Lisa Su kicked off her keynote at CES 2026 with a message about what compute could deliver: AI for everyone.

    As part of that promise, AMD announced a new line of AI processors as the company thinks AI-powered personal computers are the way of the future.

    The semiconductor giant revealed the AMD Ryzen AI 400 Series processor, the latest version of its AI-powered PC chips, at the annual CES conference on Monday. The company says the latest chips in its Ryzen processor series allow for 1.3x faster multitasking than competitors and are 1.7x faster at content creation.

    The new chips feature 12 CPU cores, the individual processing units inside a processor, and 24 threads, independent streams of instructions.

    This is an upgrade to the Ryzen AI 300 Series processor that was announced in 2024. AMD started producing the Ryzen processor series in 2017.

    Rahul Tikoo, senior vice president and general manager of AMD’s client business, said at the company’s recent press briefing that AMD has expanded to over 250 AI PC platforms. That represents 2x growth over the last year, he added.

    “In the years ahead, AI is going to be a multi-layered fabric that gets woven into every level of computing at the personal layer,” Tikoo said. “Our AI PCs and devices will transform how we work, how we play, how we create and how we connect with each other.”

    Techcrunch event

    San Francisco
    |
    October 13-15, 2026

    AMD also announced the release of the AMD Ryzen 7 9850X3D, the latest version of its gaming-focused processor.

    “No matter who you are and how you use technology on a daily basis, AI is reshaping everyday computing,” Tikoo said. “You have thousands of interactions with your PC every day. AI is able to understand, learn context, bring automation, provide deep reasoning and personal customization to every individual.”

    PCs that include either the Ryzen AI 400 Series processor or the AMD Ryzen 7 9850X3D processor will become available in the first quarter of 2026.

    The company also announced the latest version of its Redstone ray tracing technology, which simulates the physical behavior of light to deliver better video game graphics without a performance or speed penalty.

    Follow along with all of TechCrunch’s coverage of the annual CES conference here.

    Rebecca Szkutak

    Source link

  • AI-generated images and clips shared after Maduro’s capture

    After the Trump administration captured Venezuelan leader Nicolás Maduro and his wife, Cilia Flores, images and videos that claimed to show the aftermath went viral on social media. 

    “Venezuelans are crying on their knees thanking Trump and America for freeing them from Nicolas Maduro,” the caption of one Jan. 3 X post read. 

    The arrest unleashed complicated reactions in the U.S. and abroad. But that X post and other images and videos like it were generated with artificial intelligence, clouding social media with an inaccurate record.

    An X user said an image of Maduro’s capture was AI-generated

    Facebook and X users shared an image of Maduro with his hands behind his back, with soldiers in fatigues flanking him and holding his arms. One of the soldiers has the letters DEA — which stands for the Drug Enforcement Administration — on his uniform. The image is timestamped Jan. 3. Conservative activist Benny Johnson shared the image in a Jan. 3 Facebook post that was shared 14,000 times. 

    Tal Hagin, an open source intelligence analyst, found that the image appeared to have been created by X user Ian Weber, who describes himself as an “AI video art enthusiast.” In a Jan. 5 X post, Weber said, “This photo I created with AI went viral worldwide.”

    Hagin also shared an analysis by Gemini, Google’s AI model, that said the image was created with Google AI.

    PolitiFact found uncropped versions of the image and used them to prompt Gemini, which found that the image contains SynthID, the watermark Google embeds in images created with its AI tools. The watermark is invisible to humans but detectable by Google’s technology.

    Trump shared an image on Truth Social on Jan. 3 that he said shows “Maduro on board the USS Iwo Jima.” News outlets also released pictures of Maduro in U.S. custody, in which he is wearing a light blue jacket. In the real image, he is with DEA Administrator Terry Cole, who is not wearing fatigues.

    Images of New York protest, celebration in Venezuela show signs of AI

    A Jan. 4 Facebook post shared two images with the caption, “Right now, Americans are marching in New York chanting… ‘Hands off Venezuela,’ ‘Stop the war,’ ‘Free Venezuela’ …while actual Venezuelans are celebrating in the streets because a real dictator is finally gone.”

    The images show signs of being created with AI. The text on some of the protest signs is illegible, and some of the Venezuelan flags are inaccurate. The real Venezuelan flag has eight stars in an arc, and yellow, blue and red horizontal stripes. One Venezuelan flag in the image has the wrong colors, one has only seven stars, and two show the stars forming a shape other than an arc.

    Protest signs show illegible text. Supposed Venezuelan flags include the wrong colors, or have an inaccurate shape or number of stars. (Screenshots from Facebook)

    A protest did occur in Times Square on Jan. 3, but this image does not show that. 

    Videos of Venezuelans reacting show inconsistencies

    The X account “Wall Street Apes” shared a video with the text, “Venezuelans take to the streets to celebrate Maduro’s downfall,” which got 5.3 million views. 

    The first clip showed an elderly woman kneeling in the street, clutching a flag and crying, while the second and third clips show young men saying in Spanish, “The dictator finally fell.” The fourth clip shows an elderly woman — wearing a shirt similar, but not identical, to the one worn by the kneeling woman — thanking Trump.

    The earliest version of this video that we found was uploaded Jan. 3 by the TikTok account “curiosmindusa.” The account has shared other AI-generated videos, including fake clips of Trump. 

    Some inconsistencies in the videos show they were AI-generated. In the first clip, a girl disappears in the background, and a flag disappears after a man waves it. The second, third and fourth clips showed inaccurate flags: the stars formed the wrong shape or appeared in the wrong number.

    Venezuelan flags show stars that form the wrong shape or appear in the wrong number. (Screenshots from TikTok)

    These images and videos were AI-generated and do not depict real events. We rate them Pants on Fire!

    PolitiFact Staff Writer Maria Briceño contributed to this report. 

    RELATED: Fact-checking Donald Trump following U.S. attacks on Venezuela and capture of Nicolás Maduro

    Source link

  • Bank of America’s AI-driven forecasting tool saved clients 250K hours in 2025

    Bank of America has been deploying AI tech for years and is seeing quantifiable returns, giving the organization a boost in efficiency and client experience. The $2.4 trillion bank deployed its AI-driven CashPro Forecasting tool, which helps businesses predict cash flow while accounting for macroeconomic factors including tariffs and supply chain constraints, in 2022, CashPro Product Executive Jennifer Sanctis told FinAi News. “It is built within the CashPro platform, making it convenient for clients to […]

    Vaidik Trivedi

    Source link

  • Sorry Tamagotchi Fans, It’s AI Time

    When they said, “Nothing in this world is sacred,” they meant Tamagotchis, too, or at least Tamagotchi rip-offs. While you might remember your virtual pets of yore with all the analog goodness that the ’90s had to offer, this is the year of our lord 2026, and everything has to have AI. Yup, everything.

    While the Sweekar, which I saw at CES 2026, isn’t actually a Tamagotchi, it pretty much is in everything but name, and, as you may have already guessed from the words above, it’s centered on AI.

    What exactly is that AI doing? Ya know, just normal stuff that allows it to “feel your touch” and remember “your voice, your stories, and your quirks.” It’s time to go deeper with your virtual pets, people. Clicking a few buttons until they inevitably die from neglect isn’t enough. On a hardware level, there’s some cute stuff happening. The egg one kind of vibrates and shakes and grows, which is a fun tactile experience.

    © James Pero / Gizmodo

    As far as capabilities go, the Sweekar allegedly “needs your love, just like a real pet,” which also means it has moods like happy, angry, sleepy, and something that Takway.Ai, which makes this little toy, is calling “sneaky smile,” which is basically just mischievous? I think? I shudder to think what else it could mean.

    Just like a Tamagotchi, the Sweekar has growth cycles that include an “egg stage,” a “baby stage,” a “teen stage,” and an “adult stage.” At each stage, the pet is supposed to gain certain abilities and continually grow and understand more about you and your personality.

    More than anything, though, the Sweekar is centered around using AI for memory, so it can remember your name and your favorite color and that time you forgot its birthday. This Tamagotchi’s therapy bill is going to be sizable. The people at Takway.Ai tell me that it’s using a combination of Google’s Gemini and ChatGPT to do that, and that everything you tell the Sweekar is private, though I obviously cannot verify the data practices of a company selling an AI Tamagotchi at CES.

    There’s also the whole issue with AI toys having a mind of their own, which means you may want to think twice before you give this little guy to a kid.

    If an AI Tamagotchi is really high on your list of things that you absolutely must have, then you can eventually throw money at Sweekar’s Kickstarter in March. While there’s no official price right now, the makers of this little virtual pet say it’ll likely debut for between $150 and $200.

    Gizmodo is on the ground in Las Vegas all week bringing you everything you need to know about the tech unveiled at CES 2026. You can follow our CES live blog here and find all our coverage here.

    James Pero

    Source link