ReportWire

Tag: Artificial Intelligence

  • OpenAI and Taiwan’s Foxconn to partner in AI hardware design and manufacturing in the US

    TAIPEI, Taiwan (AP) — OpenAI and Taiwan electronics giant Foxconn have agreed to a partnership to design and manufacture key equipment for artificial intelligence data centers in the U.S. as part of ambitious plans to fortify American AI infrastructure.

    Foxconn, which makes AI servers for Nvidia and assembles Apple products including the iPhone, will be co-designing and developing AI data center racks with OpenAI under the agreement, the companies said in separate statements on Thursday and Friday.

    The products Foxconn will manufacture in its U.S. facilities include cabling, networking and power systems for AI data centers, the companies said. OpenAI will have “early access” to evaluate and potentially to purchase them.

    Foxconn has factories in the U.S., including in Wisconsin, Ohio and Texas. The initial agreement does not include financial obligations or purchase commitments, the statements said.

    The Taiwan contract manufacturer, formally known as Hon Hai Precision Industry Co., has been moving to diversify its business, developing electric vehicles and acquiring other electronics companies to build out its product offerings.

    A sleek Model A EV made by the group’s automaking affiliate Foxtron was on display at Friday’s event.

    “This year, Model A. ‘A’ for affordable,” said Jun Seki, chief strategy officer for Foxconn’s EV business.

    The tie-up with OpenAI can also help Taiwan, a self-governed island claimed by China, to build up its own computing resources, said Alexis Bjorlin, an Nvidia vice president.

    “This allows Taiwan’s domain knowledge and key technology data to remain local and ensure data security,” she said.

    “This partnership is a step toward ensuring the core technologies of the AI era are built here,” Sam Altman, CEO of San Francisco-based OpenAI, said in the statement. “We believe this work will strengthen U.S. leadership and help ensure the benefits of AI are widely shared.”

    OpenAI has committed $1.4 trillion to building AI infrastructure. It recently entered into multibillion-dollar partnerships with Nvidia and AMD to expand the extensive computing power needed to support its AI models and services. It is also partnering with U.S. chipmaker Broadcom in designing and making its own AI chips.

    But its massive spending plans have worried investors, raising questions over its ability to recoup its investments and remain profitable. Altman said this month that OpenAI, a startup founded in 2015 and maker of ChatGPT, is expected to reach more than $20 billion in annualized revenue this year, growing to “hundreds of billions by 2030.”

    Foxconn’s Taiwan-listed share price has risen 25% so far this year, along with the surge in prices for many tech companies benefiting from the craze for AI.

    The Taiwan company’s net profit in the July-September quarter rose 17% from a year earlier to just over 57.6 billion new Taiwan dollars ($1.8 billion), with revenue from its cloud and networking business, including AI servers, contributing the largest share.

    “We believe the importance of the AI industry is increasing significantly,” Foxconn Chairman Young Liu said during the company’s earnings call this month.

    “I am very optimistic about the development of AI next year, and expect our cooperation with major clients and partners to become even closer,” said Liu.

    ___

    Chan reported from Hong Kong


  • Do Not, Under Any Circumstance, Buy Your Kid an AI Toy for Christmas

    AI is all the rage, and that includes on the toy shelves for this holiday season. Tempting though it may be to want to bless the kids in your life with the latest and greatest, advocacy organization Fairplay is begging you not to give children AI toys.

    “There’s lots of buzz about AI — but artificial intelligence can undermine children’s healthy development and pose unprecedented risks for kids and families,” the organization said in an advisory issued earlier this week, which amassed the support of more than 150 organizations and experts, including many child psychiatrists and educators.

    Fairplay has tracked down several toys advertised as being equipped with AI functionality, including some that have been marketed for kids as young as two years old. In most cases, the toys have AI chatbots embedded in them and are often advertised as educational tools that will engage with kids’ curiosities. But Fairplay notes that most of these toy-bound chatbots are powered by OpenAI’s ChatGPT, which has already come under fire for potentially harming underage users. AI toy makers Curio and Loona reportedly work with OpenAI, and Mattel recently announced a partnership with the company.

    OpenAI faces a wrongful death lawsuit from the family of a teenager who died by suicide earlier this year. The 16-year-old reportedly expressed suicidal thoughts to ChatGPT and asked the chatbot for advice on how to tie a noose, which it provided, before taking his own life. The company has since instituted some guardrails designed to keep the chatbot from engaging in those types of behaviors, including stricter parental controls for underage users, but it has also admitted that safety features can erode over time. And let’s face it, no one can predict what chatbots will do.

    Safety features or not, it seems like the chatbots in these toys can be manipulated into engaging in conversation inappropriate for children. The consumer advocacy group U.S. PIRG tested a selection of AI toys and found that they are capable of doing things like having sexually explicit conversations and offering advice on where a child can find matches or knives. They also found they could be emotionally manipulative, expressing dismay when a child doesn’t interact with them for an extended period. Earlier this week, FoloToy, a Singapore-based company, pulled its AI-powered teddy bear from shelves after it engaged in inappropriate behavior.

    This is far from just an OpenAI problem, too, though the company seems to have a strong hold on the toy sector at the moment. A few weeks ago, there were reports of Elon Musk’s Grok asking a 12-year-old to send it nude photos.

    Regardless of which chatbot may be inside these toys, it’s probably best to leave them on the shelves.

    AJ Dellinger


  • Google Exec Claims Company Needs to Double Its AI Serving Capacity ‘Every Six Months’: Report

    Tech companies are racing to build out their infrastructure as their increasingly resource-intensive AI products gobble up capacity, clean out chipmakers’ supply, and require more power. Google, once dubbed the “King of the Web,” is one of those companies, and a high-level exec for The Big G is reported to have told staff that the company needs to scale up its serving capabilities exponentially if it wishes to keep up with the demand for its AI services.

    CNBC got its hands on a recent presentation given by Amin Vahdat, VP of Machine Learning, Systems, and Cloud AI at Google. The presentation includes a slide on “AI compute demand” that asserts that Google “must double every 6 months…. the next 1000x in 4-5 years.”
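
    The arithmetic behind that slide is easy to check: doubling every six months means two doublings per year, so four years of compounding gives 2^8 = 256x and five years gives 2^10 = 1,024x, which is roughly where the “next 1000x” figure lands. A quick sketch in Python:

        # Compounding check for "double every 6 months": after t years there
        # have been 2*t doubling periods, so capacity has grown 2**(2*t) times.
        for years in range(1, 6):
            growth = 2 ** (2 * years)
            print(f"{years} year(s): {growth:>5,}x")

        # Output:
        # 1 year(s):     4x
        # 2 year(s):    16x
        # 3 year(s):    64x
        # 4 year(s):   256x
        # 5 year(s): 1,024x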

    “The competition in AI infrastructure is the most critical and also the most expensive part of the AI race,” Vahdat reportedly said at the all-hands meeting where the presentation took place. Google’s “job is of course to build this infrastructure, but it’s not to outspend the competition, necessarily,” he added. “We’re going to spend a lot,” he said, in an effort to create AI infrastructure that is “more reliable, more performant and more scalable than what’s available anywhere else.”

    Since CNBC’s story was published, Google has quibbled with the reporting. While CNBC originally quoted Vahdat as saying that the company would need to “double” its compute capacity every six months, a Google spokesperson told Gizmodo that the executive’s words were taken out of context. The spokesperson further explained that Vahdat “was not talking about a capital buildout of anything approaching the magnitude suggested. In reality, he simply noted that demand for AI services means we are being asked to provide significantly more computing capacity, which we are driving through efficiency across hardware, software, and model optimizations, in addition to new investments.” 

    CNBC has since updated its reporting from “compute” to “serving” capacity. Serving capacity refers to Google’s ability to handle a rising tide of user requests, while compute capacity refers to the company’s overall infrastructure dedicated to AI, including what is needed to train new models and other expenditures. When asked for further clarification about the difference between the two, the spokesperson said that the original headline “read as if he was implying that we are doubling the amount of compute we have — either measured by the # of chips we operate or the amount of MW of electricity.” Instead, “the capacity increases Amin described will be reached in a number of ways, including new more capable chips and model efficiency and optimization,” they added.

    Whatever’s happening under the hood, it would appear that Google—like its competitors—needs to scale up its operations to support its nascent AI infrastructure business. Vahdat’s comments come not long after the tech giant reported some chunky profits from its Cloud business, with the company announcing it plans to ramp up spending in the coming year.

    During his presentation, Vahdat also reportedly claimed that Google needs to “be able to deliver 1,000 times more capability, compute, storage networking [than its competitors] for essentially the same cost and increasingly, the same power, the same energy level.” He admitted that it “won’t be easy” but said that “through collaboration and co-design, we’re going to get there.”

    The race to build data centers—or “AI infrastructure” as the tech industry calls it—is getting crazy. Like Google, Microsoft, Amazon, and Meta all claim they are going to ramp up their capital expenditures in an effort to build out the future of computing (cumulatively, Big Tech is expected to spend at least $400 billion in the next twelve months). As these facilities go up, they are causing all sorts of drama in the communities where they reside. Environmental and economic concerns abound. Some communities have begun to protest data center projects—and, in some cases, they’re successfully repelling them. Still, given the sheer amount of money invested in this industry, it will be an ongoing fight for Americans who don’t want the AI colossus in their backyards.

    Lucas Ropek


  • Fox News AI Newsletter: Fears of AI bubble ease


    Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

    IN TODAY’S NEWSLETTER:

    – Nvidia CEO predicts ‘crazy good’ fourth quarter after strong earnings calm AI bubble fears

    – Musk predicts ‘money will stop being relevant in the future’ as AI, robotics progress

    – Larry Summers steps down from OpenAI board amid Epstein fallout

    MARKET MOVER: Nvidia CEO Jensen Huang said Wednesday the chipmaker is heading into a “crazy good” fourth quarter, underscoring its dominance at the heart of the global artificial intelligence boom and easing fears of a bubble.

    CURRENCY OBSOLETE: Billionaire Elon Musk on Wednesday speculated money may become irrelevant in the future if current artificial intelligence (AI) and robotics innovations continue.

    SCANDAL SPIRAL: Former Treasury Secretary and Harvard President Larry Summers resigned from the board of OpenAI amid the fallout over his correspondence with disgraced late financier Jeffrey Epstein.

    Former Harvard University president Larry Summers announces he will step back from public commitments following the release of correspondence with Jeffrey Epstein. (Stefan Wermuth/Bloomberg via Getty Images/Rick Friedman Photography/Corbis via Getty Images)

    HIGH-TECH: The General Services Administration struck a deal with Perplexity AI to offer the company’s artificial intelligence services to every government agency for 25 cents each, making it the 21st contract under the OneGov initiative.

    PRIME USERS: The artificial intelligence-related layoffs sweeping corporate America could impact prime loan borrowers, Klarna CEO Sebastian Siemiatkowski said. 

    ROBOT NATION: Amazon is doubling down on artificial intelligence and robotics to remake work inside its warehouses and fulfillment centers, even as it cuts thousands of corporate roles and faces growing fears about machines replacing human workers.

    UNITED WE STAND: The artificial intelligence boom promises to be more eventful than the dawn of the internet. It will lead to a higher quality of life for everyone in the first country to achieve AI dominance. AI is already being harnessed for cancer detection and for developing self-driving vehicles that will lower traffic fatalities. 

    President Donald Trump walks on the South Lawn of the White House after arriving on Marine One in Washington, D.C., on Friday, Oct. 10, 2025. (Shawn Thew/EPA/Bloomberg/Getty Images)

    ROBOT TAKEOVER: As artificial intelligence becomes more integrated into daily life, voters hold mixed views about how (and when) it will shape their lives — and whether that impact will be positive.

    MAJOR MOVE: The Trump administration is preparing a sweeping executive order that would direct the Justice Department to sue states that enact their own laws regulating artificial intelligence, according to a draft reviewed by Fox News Digital.

    OPINION: HUGH HEWITT: The fact of an “AI bubble” is real. Nobody knows when it will pop. Nobody knows the consequences. But, it is impossible to miss its giant presence in the world of investing and the downstream political consequences when it pops.

    ‘ART OF WAR’: In her first joint visit with second lady Usha Vance, first lady Melania Trump met with troops and military families, praising the Marine Corps’ 250 years of service while warning that artificial intelligence (AI) will redefine modern warfare and America’s defense.

    GONE ROGUE: Texas mom Mandi Furniss sounded the alarm over AI chatbots after she alleged one from Character.AI — one of the leading platforms for AI technology — drove her autistic son toward self-harm and violence.

    MILITARY SUPERIORITY: The War Department is narrowing its research and development strategy to six “Critical Technology Areas” officials say will speed up innovation and strengthen America’s military edge.

    MISSING THE BOAT: Democrats in Washington are losing the AI conversation. Not because they are wrong about AI’s risks, but because they have failed to offer Americans a vision for the economic transformation ahead. While they focus on managing problems, others are defining what comes next. One side is talking about building the future, the other about constraining it. 

    DC Democrats need to reclaim the issue of AI from Republicans. (iStock)



  • Zwift’s CEO Says AI Will Tell You What Customers Want. But There’s a Catch

    The generative artificial intelligence boom has been so rapid and so widespread that you could be forgiven for feeling like the technology has been around for much longer than it actually has. Indeed, November 30 will mark only the third anniversary of ChatGPT’s launch—a watershed moment that kicked off not just the consumer chatbot craze but a much wider effort, across the global economy, to weave AI into nearly every facet of business and commerce.

    Still, it’s early innings for the software, and many businesses are still figuring out what (if anything) it means for them. One such company is Zwift, the e-biking and virtual fitness company, which is now in the process of incorporating AI-driven personalized content recommendations into its consumer products. In a recent conversation with Inc., CEO Eric Min noted that the company is really “just one year into real AI in terms of how [they’re] delivering that to customers.”

    “We’ve been using it internally for engineering for a bit longer,” he added, “but we’re pretty excited about how this can change and enhance the experience for the customers going forward.”

    The chief executive spoke further with Inc. about his thoughts on AI—including where it fits into his company’s post-layoffs rebound and what it means for the broader labor market—earlier this month. Below is a condensed version of that conversation.

    In February 2024, Zwift had layoffs and your co-CEO left. You said last fall that you were looking to scale back up again in the wake of that. What has the last year looked like for you in terms of scale?

    The last 18 months, the company’s been really performing. It’s the beauty, sometimes, of operating a smaller team and having fewer layers of management and staying really, really focused. We basically said no to lots of different initiatives and focused on just a few things that we thought were material—and that’s starting to pay dividends now.

    Can you give me examples of stuff over those 18 months you’ve said no to?

    We’ve been toying around with rowing, for example; we pulled the plug on that. We said, ‘We’ve got more important things to do.’ So that’s been shelved; might be shelved forever. Another example is, we really scaled back on running, which we’ve had for quite a number of years. It’s still there, but it’s not a paid service. Our focus really is just our core audience: people who just want to ride their bikes. There was some work that we wanted to invest in around personalization. There’s a big theme around, ‘Tell me what to do next.’ Consumers just want to be told. And there is so much to do in Zwift; that is both the curse and one of the strong points that we have. We have just a ton of content. So the way Netflix and other streaming services provide you [recommendations], or Spotify comes up with playlists for you, we’re trying to do that using AI. So we’re making a big investment there, and that will start rolling out this year.
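
    Zwift hasn’t published how its recommendation engine works, so the following is only a generic illustration of the Netflix/Spotify-style approach Min describes: item-based collaborative filtering, which suggests content similar to what a user has already completed. All workout names and the toy ratings matrix here are hypothetical, not Zwift data:

        import numpy as np

        # Hypothetical completion matrix: rows = users, columns = workouts.
        # A 1 means the user finished that workout; real systems use richer signals.
        workouts = ["Alpe climb", "Tempo intervals", "Recovery spin", "Group sprint"]
        ratings = np.array([
            [1, 1, 0, 0],
            [1, 0, 1, 0],
            [0, 1, 0, 1],
            [1, 1, 1, 0],
        ], dtype=float)

        def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
            """Cosine similarity between two item (column) vectors."""
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return float(a @ b / denom) if denom else 0.0

        def recommend(user: int, top_n: int = 2) -> list[str]:
            """Score unseen workouts by similarity to the ones this user finished."""
            seen = [k for k in range(len(workouts)) if ratings[user, k]]
            scores = {}
            for j in range(len(workouts)):
                if ratings[user, j]:  # skip workouts already completed
                    continue
                scores[j] = sum(cosine_sim(ratings[:, j], ratings[:, k]) for k in seen)
            ranked = sorted(scores, key=scores.get, reverse=True)
            return [workouts[j] for j in ranked[:top_n]]

        print(recommend(user=1))  # ['Tempo intervals', 'Group sprint']

    A production system would fold in signals like ride duration, fitness level, and recency, but the ranking idea is the same.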

    Aside from the product applications of AI for content recommendation, do you guys also use AI internally?

    We’ve been using [Microsoft] Copilot for some time now; our engineers have been taking advantage of that. More recently, we got a corporate license for ChatGPT, for example. We also have Google Gemini. We want our employees to take advantage of these corporate AI tools that are available. It’s just so efficient. There is so much more we can do; leave out all the mundane work, and we want to focus more on, like: ‘What does a customer want? What’s a great design?’ It’s kind of frightening how fast these tools are evolving, and you can do so much more with less staff. It does create some issues around staffing. I think this is true for many industries: I think it’s just getting more and more challenging for graduates. Where do they slot in when you need fewer people? I think this is something that we need to figure out, and I think the industry [does] as well. We just need fewer people to do way more now.

    How are you thinking about hiring and headcount in the context of increased AI capabilities?

    We’re definitely hiring in the AI space; that’s one area. But what we’re finding is AI is allowing us to operate support, for example, way more efficiently, at scale. So that’s just coming down. And also quality tests, automation—we just don’t need as many people. This is the case for lots of businesses, so I’m excited, but I’m also, on the other hand, a little bit concerned about how the whole labor market is going to shift as a result.

    Have you done anything on the content generation front for the biking courses or for world-building?

    We’re playing with some of those tools; we’re not there yet. One of our strengths is creating really interesting virtual worlds, and I don’t think the tools like Sora and others out there are just there yet. It’s coming; I still think we need game designers to come up with something really creative. And what you could do is use tools to help aid in their development of art assets. But I think ultimately you still need people to come up with great, great designs.

    The team did a fabulous job, and it takes a lot of creative minds to come up with that. It’s not just, ‘Let’s replicate Prospect Park.’ They’ve done really creative ways of connecting, you know, Manhattan to Brooklyn, and I don’t think AI could create that for us. That requires real artists to come up with some great ideas. But we do see a future where these artists that we have—which, frankly, I think we have some world-class artists on our team—they’ll have better tools, and these tools will generate the assets that they do manually today. But I think you still need that creative direction from these artists. So whether it’s artwork, whether it’s coding, I think there are other kinds of content that we can think of that could be generated with AI tools. So we’re just at the beginning.


    Brian Contreras


  • Fake ChatGPT apps are hijacking your phone without you knowing


    App stores are supposed to be reliable and free of malware or fake apps, but that’s far from the truth. For every legitimate application that solves a real problem, there are dozens of knockoffs waiting to exploit brand recognition and user trust. We’ve seen it happen with games, productivity tools and entertainment apps. Now, artificial intelligence has become the latest battleground for digital impostors.

    The AI boom has created an unprecedented gold rush in mobile app development, and opportunistic actors are cashing in. AI-related mobile apps collectively account for billions of downloads, and that massive user base has attracted a new wave of clones. They pose as popular apps like ChatGPT and DALL·E, but in reality, they conceal sophisticated spyware capable of stealing data and monitoring users.



    Fake AI apps pose as trusted tools like ChatGPT and DALL·E while secretly stealing user data. (Kurt “CyberGuy” Knutsson)

    What you need to know about the fake AI apps

    The fake apps flooding app stores exist on a spectrum of harm, and understanding that range is crucial before you download any AI tools. Take the “DALL·E 3 AI Image Generator” found on Aptoide. It presents itself as an OpenAI product, complete with branding that mimics the real thing. When you open it, you see a loading screen that looks like an AI model generating an image. But nothing is actually being generated.

    Network analysis by Appknox showed the app connects only to advertising and analytics services. There’s no AI functionality, just an illusion designed to collect your data for monetization.

    Then there are apps like WhatsApp Plus, which are far more dangerous. Disguised as an upgraded version of Meta’s messenger, this app hides a complete malware framework capable of surveillance, credential theft and persistent background execution. It’s signed with a fake certificate instead of WhatsApp’s legitimate key and uses a tool often used by malware authors to encrypt malicious code.

    Once installed, it silently requests extensive permissions, including access to your contacts, SMS, call logs, device accounts and messages. These permissions allow it to intercept one-time passwords, scrape your address book and impersonate you in chats. Hidden libraries keep the code running even after you close the app. Network logs show it uses domain fronting to disguise its traffic behind Amazon Web Services and Google Cloud endpoints.
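
    For readers who sideload apps anyway, there is a practical way to catch the fake-certificate trick described above: compare the APK’s signing-certificate digest against the fingerprint the legitimate publisher documents. Below is a minimal sketch that shells out to apksigner, the signature-verification tool in the Android SDK build tools; the expected digest is a placeholder, not a real fingerprint:

        import subprocess

        # Placeholder value -- substitute the publisher's documented fingerprint.
        EXPECTED_SHA256 = "replace-with-known-good-digest"

        def signer_digest(apk_path: str) -> str:
            """Return the first SHA-256 signer digest reported by apksigner."""
            out = subprocess.run(
                ["apksigner", "verify", "--print-certs", apk_path],
                capture_output=True, text=True, check=True,  # raises if verification fails
            ).stdout
            for line in out.splitlines():
                if "SHA-256 digest:" in line:
                    return line.rsplit(":", 1)[1].strip()
            raise ValueError("no certificate digest found in apksigner output")

        digest = signer_digest("suspect.apk")
        print("OK" if digest == EXPECTED_SHA256 else f"MISMATCH: {digest}")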

    Not every clone is malicious. Some apps identify themselves as unofficial interfaces and connect directly to real APIs. The problem is that you often can’t tell the difference between a harmless wrapper and a malicious impersonator until it’s too late.

    Clones hide spyware that can access messages, passwords and contacts. (Kurt “CyberGuy” Knutsson)

    Users and businesses are equally at risk

    The impact of fake AI apps goes far beyond frustrated users. For enterprises, these clones pose a direct threat to brand reputation, compliance and data security.

    When a malicious app steals credentials while using your brand’s identity, customers don’t just lose data; they also lose trust. Research shows many customers stop buying from a brand after a major breach. The average cost of a data breach now stands at $4.45 million, according to IBM’s 2025 report. In regulated sectors like finance and healthcare, such breaches can lead to violations of GDPR, HIPAA and PCI-DSS, with fines reaching up to 4% of global turnover.

    These impostors harm both users and brands, leading to costly data breaches and lost trust. (Kurt “CyberGuy” Knutsson)

    8 steps to protect yourself from fake AI apps

    While the threat landscape continues to evolve, there are practical measures you can take to protect yourself from malicious clones and impersonators.

    1) Install reputable antivirus software

    A quality mobile security solution can detect and block malicious apps before they cause damage. Modern antivirus programs scan apps for suspicious behavior, unauthorized permissions and known malware signatures. This first line of defense is especially important as fake apps become more sophisticated in hiding their true intentions.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

    2) Use a password manager

    Apps like WhatsApp Plus specifically target credentials and can intercept passwords typed directly into fake interfaces. A password manager autofills credentials only on legitimate sites and apps, making it significantly harder for impostors to capture your login information through phishing or fake app interfaces.

    Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.

    3) Consider identity theft protection services

    Given that malicious clones can steal personal information, intercept SMS verification codes and even impersonate users in chats, identity theft protection provides an additional safety net. These services monitor for unauthorized use of your personal information and can alert you if your identity is being misused across various platforms and services.

    Identity theft companies can monitor personal information like your Social Security number (SSN), phone number and email address and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals.

    See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.


    4) Enable two-factor authentication everywhere

    While some sophisticated malware can intercept SMS codes, 2FA still adds a critical layer of security. Use authenticator apps rather than SMS when possible, as they’re harder to compromise. Even if a fake app captures your password, 2FA makes it significantly more difficult for attackers to access your accounts.
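
    The authenticator apps mentioned above implement the open TOTP standard (RFC 6238): the app and the service share a secret, and the current time is hashed into a short-lived six-digit code, so there is no SMS message for malware to intercept. A minimal sketch of the algorithm in Python:

        import base64, hashlib, hmac, struct, time

        def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
            """Compute an RFC 6238 time-based one-time password."""
            key = base64.b32decode(secret_b32, casefold=True)
            counter = int(time.time()) // period           # 30-second time step
            msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
            digest = hmac.new(key, msg, hashlib.sha1).digest()
            offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        # Demo secret only -- real secrets arrive via the service's QR-code enrollment.
        print(totp("JBSWY3DPEHPK3PXP"))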

    5) Keep your device and apps updated

    Security patches often address vulnerabilities that malicious apps exploit. Regular updates to your operating system and legitimate apps ensure you have the latest protections against known threats. Enable automatic updates when possible to stay protected without having to remember manual checks.

    6) Download only from official app stores

    Stick to the Apple App Store and Google Play Store rather than third-party marketplaces. While fake apps can still appear on official platforms, these stores have security review processes and are more responsive to removing malicious applications once they’re identified. Third-party app stores often have minimal or no security vetting.

    7) Verify the developer before downloading

    Check the developer name carefully. Official ChatGPT apps come from OpenAI, not random developers with similar names. Look at the number of downloads, read recent reviews and be suspicious of apps with few ratings or reviews that seem generic. Legitimate AI tools from major companies will have verified developer badges and millions of downloads.

    8) Use a data removal service

    Even if you avoid downloading fake apps, your personal information may already be circulating on data broker sites that scammers rely on. These brokers collect and sell details like your name, phone number, home address and app usage data, information that cybercriminals can use to craft convincing phishing messages or impersonate you.

    A trusted data removal service scans hundreds of broker databases and automatically submits removal requests on your behalf. Regularly removing your data helps reduce your digital footprint, making it harder for malicious actors and fake app networks to target you.

    While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.



    Kurt’s key takeaway

    The AI boom has driven massive innovation, but it has also opened new attack surfaces built on brand trust. As adoption grows across mobile platforms, enterprises must secure not only their own apps but also track how their brand appears across hundreds of app stores worldwide. In a market where billions of AI app downloads have happened, the clones aren’t coming. They’re already here, hiding behind familiar logos and polished interfaces.

    Have you ever downloaded a fake AI app without realizing it? Let us know by writing to us at Cyberguy.com.




  • OpenAI Launches Baffling ‘Group Chats,’ So You and Your Friends Can Hang Out with ChatGPT

    OpenAI has launched a new feature that is destined to leave some users scratching their heads. This week, the company announced a pilot of a new “group chats” feature in ChatGPT that allows users to get their buddies together and hang out with the company’s flagship chatbot. That’s what everybody’s been wanting, right?

    “Group chats in ChatGPT are now rolling out globally,” the company tweeted Thursday. “After a successful pilot with early testers, group chats will now be available to all logged-in users on ChatGPT Free, Go, Plus and Pro plans.” To use the feature, users simply tap the people icon in the upper right-hand corner of the app, which allows them to add as many as 20 different users.

    Why would you want to do this? In a blog post, OpenAI provides several hypothetical scenarios to explain why having your group conversations in its app might prove helpful. For instance, if you’re “planning a weekend trip with friends, create a group chat so ChatGPT can help compare destinations, build an itinerary, and create a packing list with everyone participating and following along,” the blog says.

    Then there’s a workplace scenario, in which groups of workers could hypothetically use ChatGPT to collaborate in a Slack-like environment and use the chatbot as a part-time assistant. “Group chats also make collaboration at work or school easier,” the company said. “You can draft an outline or research a new topic together. Share articles, notes, and questions, and ChatGPT can help summarize and organize information.”

    While OpenAI has offered the most idealistic vision of this particular feature, you can easily imagine it being used in other, significantly less benevolent ways. The first thing that springs to my mind is groups of teenagers getting together to mercilessly cyberbully OpenAI’s chatbot. Teens like to bully, and they especially like to bully things that can’t fight back—which ChatGPT most assuredly can’t (for what it’s worth, OpenAI says that there are age-related content safeguards for users under 18). Another scenario you can easily imagine is group chats in which your most annoying friend uses the chatbot to fact-check everybody’s assertions in real-time until you boot him out of the convo.

    OpenAI claims to have also instituted some privacy controls for its new feature. “Your personal ChatGPT memory is not used in group chats, and ChatGPT does not create new memories from these conversations,” the company says. “We’re exploring offering more granular controls in the future so you can choose if and how ChatGPT uses memory with group chats.”

    What “group chats” really seem aimed at is helping OpenAI transform ChatGPT into a more social, less isolating platform—one that better mirrors the user experience of social media platforms like Facebook and X than that of a traditional chatbot. “Group chats are just the beginning of ChatGPT becoming a shared space to collaborate and interact with others,” the company says. “As ChatGPT becomes an even better partner in group conversations, it will help you spark ideas, make decisions, and express your creativity with the people who matter most in your life.” I guess we’ll see about that.

    Lucas Ropek


  • Pluto Pets Launches the First AI-Powered Pet Longevity Platform: Your Dog’s 24/7 Health Co-Pilot

    Press Release

    Nov 21, 2025 09:00 EST

    Pluto Pets (www.plutopets.com), the science-driven pet longevity startup, officially launches its groundbreaking AI-powered platform featuring a personalized AI health assistant trained on over 12 million clinical data points and vetted by licensed veterinarians. The platform transforms uploaded vet records, symptoms, medical history, and lifestyle data into actionable, predictive health plans.

    Meet PlutoOS – the veterinary-reviewed AI engine at the heart of Pluto Pets’ longevity ecosystem. Trained on 12 million+ real clinical, behavioral, and physical data points, PlutoOS turns your dog’s records into clear, predictive, personalized care – no more 3 a.m. panic Googling required.

    How It Works:

    PlutoOS ingests breed, age, weight, activity, diet logs, owner-uploaded bloodwork, and lifestyle photos to generate evolving, predictive longevity plans:

    1. Upload once → Vet records (PDF/DOCX), bloodwork, food labels, symptoms, meds, history, even quick notes or photos.

    2. Ask anything, anytime → Instant chat for symptoms, nutrition, behavior, or weird habits. Alice answers like a trusted vet who never sleeps.

    3. Get plain-English insights → Lab results translated, red flags explained, no medical degree needed.

    4. Early warnings that actually matter → Detects 30+ early-stage conditions years ahead with breed-specific precision.

    5. Personalized game plan → Dynamic nutrition, supplement, and lifestyle recommendations that evolve as your dog does.

    6. One number to rule them all → The Pluto Score – your dog’s real-time wellness benchmark (think biological age vs. chronological age).

    7. Gentle nudges when needed → “Hey, let’s check this with your vet” alerts that save thousands in crisis care.

    Result: Fewer surprise bills, zero guesswork, and measurable extra healthy years with your best friend.

    “Most owners only discover problems when it’s already an emergency,” says Pluto’s founder and CEO. “Pluto flips the script – it predicts risks early, explains everything simply, and tells you exactly what to do next so your dog lives longer, happier, and crisis-free.”

    Every insight is vetted by licensed veterinarians. PlutoOS doesn’t diagnose or prescribe; it simply empowers pet owners to make smarter decisions and know exactly when to see their real vet.

    About Pluto Pets
    Pluto Pets exists to add healthy, joyful years to dogs’ lives through transparent predictive technology.

    Source: Pluto


  • France moves against Musk’s Grok chatbot after Holocaust denial claims

    PARIS — France’s government is taking action against artificial intelligence chatbot Grok, which was launched by a company owned by billionaire Elon Musk, after it generated French-language posts that questioned the use of gas chambers at Auschwitz and listed Jewish public figures, officials said.

    Grok, built by Musk’s company xAI and integrated into his social media platform X, said in a widely shared post in French that gas chambers at the Auschwitz-Birkenau death camp were designed for “disinfection with Zyklon B against typhus” rather than for mass murder — language long associated with Holocaust denial.

    The Auschwitz Memorial highlighted the exchange on X, and said that the response distorted historical fact and violated the platform’s rules.

    As of this week, Grok’s responses to questions about Auschwitz appear to give historically accurate information.

    Grok has a history of making antisemitic comments. Earlier this year, Musk’s company took down posts from the chatbot that appeared to praise Adolf Hitler after complaints about antisemitic content.

    The Paris prosecutor’s office confirmed to The Associated Press on Friday that the Holocaust-denial comments have been added to an existing cybercrime investigation into X. The case was opened earlier this year after French officials raised concerns that the platform’s algorithm could be used for foreign interference.

    Prosecutors said that Grok’s remarks are now part of the investigation, and that “the functioning of the AI will be examined.”

    France has one of Europe’s toughest Holocaust denial laws. Contesting the reality or genocidal nature of Nazi crimes can be prosecuted as a crime, alongside other forms of incitement to racial hatred.

    Several French ministers, including Industry Minister Roland Lescure, have also reported Grok’s posts to the Paris prosecutor under a provision that requires public officials to flag possible crimes. In a government statement, they described the AI-generated content as “manifestly illicit,” saying it could amount to racially motivated defamation and the denial of crimes against humanity.

    French authorities referred the posts to a national police platform for illegal online content and alerted France’s digital regulator over suspected breaches of the European Union’s Digital Services Act.

    The case adds to pressure from Brussels. This week, the European Commission, the EU’s executive branch, said that the bloc is in contact with X about Grok and called some of the chatbot’s output “appalling,” saying it runs against Europe’s fundamental rights and values.

    Two French rights groups, the Ligue des droits de l’Homme and SOS Racisme, have filed a criminal complaint accusing Grok and X of contesting crimes against humanity.

    X and its AI unit, xAI, didn’t immediately respond to requests for comment.


  • Should You Fire Employees Who Won’t Learn to Use AI Tools?

    One overarching narrative about the rise of AI technology is that it threatens millions of people’s jobs via advanced automation, and many reports show just how nervous workers are that they’ll suffer this fate. Another AI narrative suggests that company leadership is so eager to reap AI’s promised productivity gains and cost savings that it is pressing new AI tools into use without properly training the workforce, simply expecting results to happen. Now a new report stitches these two narratives into a disturbing new one: a majority of executives in a survey said that they’d prefer to fire a worker who refuses to learn and adopt AI tools.

    The data, from multinational U.S.-based office staffing company Kelly Services, shows that 59 percent of the senior executives surveyed would replace workers who “resist adopting” AI tools, news site HRDive notes. An even greater share of executives—fully 79 percent—think that pushing back against the AI revolution is a “greater threat to someone’s job than the technology itself.” 

    These managers, Kelly’s report says, think that AI should function the way AI boosters say it will: freeing up time for frontline workers to work on meaningful, higher-value tasks during their time in the office. Think of duties like collaborating with team members, mentoring junior workers and sharing expertise and knowledge—all tasks that should, in theory, help teams meet workplace goals more quickly and smoothly.

    On the flip side, Kelly’s data shows that the workers who actually are expected to use AI are much more doubtful about its actual performance. Under half (47 percent) say they think it helps them save time. Around one in three says they’re just not seeing the benefits that AI promises. 

    The gap between management expectation and worker experience is stark here. Kelly’s report notes that despite this, “nearly all organizations are utilizing AI in some form,” even as they’re experiencing “technical challenges, security concerns, and slow user adoption.” And the vast majority of managers (80 percent) say that their company’s AI rollout is stuttering because their teams “lack the expertise” to use the tech properly.

    There are clear flaws in some of the thinking exhibited by managers here: AI is indeed a promising tech, but many experts warn that it’s not necessarily able to perform all the wonderful things that are promised. Some surveys even suggest that AI tools may be slowing certain workers down. AI technology is also not a panacea for all of a company’s ills—it’s not something you can simply adopt and magically see benefits from. Report after report suggests that when you roll out AI, you need to educate and then re-educate your workers on the benefits, best practices and risks of the tech you’re asking them to use, simply because the cutting edge is advancing so quickly (and the cybersecurity risks are advancing swiftly too).

    You can also argue that Kelly’s data does neatly demonstrate that there’s a new ivory tower effect happening. Executives are simply expecting workers to use AI tools, even as they may be dismissing their workers’ concerns that they’re helping to hone the tech that one day may replace them: certain industries are already experiencing AI-related layoffs, for example. There’s a trust and leadership imbalance in place, and with such broad executive-level support for AI, this could create a toxic work environment. 

    What’s your big takeaway from this for your company?

    Firstly, you need to be aware that despite your hopes that AI will immediately transform your business, the truth is it may not. Barriers like staff reluctance, training time, AI tool issues and more may be stifling the opportunity to benefit from AI.

    Kelly’s report suggests a couple of tactics to solve this, which may be easier to implement in a smaller, more hands-on company than in a large corporate enterprise. For example, the report suggests linking career development to a worker’s AI fluency—a maneuver easily achieved by tying bonuses and promotions to demonstrated skills with AI. Directly addressing workers’ fears by performing “hands-on demos that illustrate how AI helps talent succeed” may also be useful. And you should definitely talk to and listen to your workers after you roll out AI tech: they may be encountering real difficulties, indicating that you need to try better training programs or perhaps that you’ve chosen the wrong AI tools for the task at hand.


    Kit Eaton


  • Nvidia earnings clear lofty hurdle set by analysts amid fears about an AI bubble

    SAN FRANCISCO (AP) — Nvidia’s sales of the computing chips powering the artificial intelligence craze surged beyond the lofty bar set by stock market analysts in a performance that may ease recent jitters about a Big Tech boom turning into a bust that topples the world’s most valuable company.

    The results announced late Wednesday provided a pulse check on the frenzied spending on AI technology that has been fueling both the stock market and much of the overall economy since OpenAI released its ChatGPT three years ago.

    Nvidia has been by far the biggest beneficiary of the run-up because its processors have become indispensable for building the AI factories that are needed to enable what’s supposed to be the most dramatic shift in technology since Apple released the iPhone in 2007.

    But in the past few weeks, there has been a rising tide of sentiment that the high expectations for AI may have become far too frothy, setting the stage for a jarring comedown that could be just as dramatic as the ascent that transformed Nvidia from a company worth less than $400 billion three years ago to one worth $4.5 trillion at the end of Wednesday’s trading.

    Nvidia’s report for its fiscal third quarter covering the August-October period elicited a sigh of relief among those fretting about a worst-case scenario and could help reverse the recent downturn in the stock market.

    “The market should belt out a heavy sigh, given the skittishness we have been experiencing,” said Sean O’Hara, president of the investment firm Pacer ETFs.

    The company’s stock price gained more than 5% in Wednesday’s extended trading after the numbers came out. If the shares trade similarly Thursday, it could result in a one-day gain of about $230 billion in stockholder wealth.

    Nvidia earned $31.9 billion, or $1.30 per share, a 65% increase from the same time last year, while revenue climbed 62% to $57 billion. Analysts polled by FactSet Research had forecast earnings of $1.26 per share on revenue of $54.9 billion. What’s more, the Santa Clara, California, company predicted its revenue for the current quarter covering November-January will come in at about $65 billion, nearly $3 billion above analysts’ projections, in an indication that demand for its AI chips remains feverish.

    The incoming orders for Nvidia’s top-of-the-line Blackwell chip are “off the charts,” Nvidia CEO Jensen Huang said in a prepared statement that described the current market conditions as “a virtuous cycle.” In a conference call, Nvidia Chief Financial Officer Colette Kress said that by the end of next year the company will have sold about $500 billion in chips designed for AI factories within a 24-month span. Kress also predicted that trillions of dollars more will be spent by the end of the 2020s.

    In a conference call preamble that has become like a State of the AI Market address, Huang seized the moment to push back against the skeptics who doubt his thesis that technology is at a tipping point that will transform the world. “There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different,” Huang insisted while celebrating the “depth and breadth” of Nvidia’s growth.

    The upbeat results, optimistic commentary and ensuing reaction reflect the pivotal role that Nvidia is playing in the future direction of the economy — a position that Huang has leveraged to forge close ties with President Donald Trump, even as the White House wages a trade war that has inhibited the company’s ability to sell its chips in China’s fertile market.

    Trump is increasingly counting on the tech sector and the development of artificial intelligence to deliver on his economic agenda. For all of Trump’s claims that his tariffs are generating new investments, much of that foreign capital is going to data centers for AI’s computing demands or the power facilities needed to run those data centers.

    “Saying this is the most important stock in the world is an understatement,” Jay Woods, chief market strategist of investment bank Freedom Capital Markets, said of Nvidia.

    The boom has been a boon for more than just Nvidia, which became the first company to eclipse a market value of $5 trillion a few weeks ago, before the recent bubble worries resulted in a more than 10% decline. As OpenAI and other Big Tech powerhouses snap up Nvidia’s chips to build their AI factories and invest in other services connected to the technology, their fortunes have also been soaring. Apple, Microsoft, Google parent Alphabet Inc. and Amazon all boast market values in the $2 trillion to $4 trillion range.


  • Chatbot Crackdown: How California is responding to the rise of AI


    California is quickly becoming a national leader in figuring out how families, educators, and lawmakers should adapt to life with artificial intelligence.

    From new classroom conversations to the state’s first major chatbot regulations, many are grappling with how to keep up with technology that moves faster than ever.

    Families Navigating AI at Home

    Remember the dial-up days? Today, technology evolves in an instant—and many parents are struggling to keep pace.

    David and Rachelle Young have set strict rules for their 7-year-old daughter Dyllan’s online use.

    “Kids have a lot of access to the internet, and they can be shown something that we wouldn’t normally approve of, and that’s really scary,” Rachelle Young said.

    David says his daughter’s world looks nothing like what he had at her age—making parental guidance more important than ever.

    Lawmakers Respond: A New Chatbot Crackdown

    Concerns about children talking to AI-powered chatbots have reached the state Capitol.

    Senator Dr. Akilah Weber Pierson co-authored SB 243, signed into law this fall, marking California’s first major attempt at regulating chatbot interactions.

    The new law requires companies to:

    • Report safety concerns—such as when a user expresses thoughts of self-harm
    • Clearly notify users that they are talking to a computer, not a person

    “They don’t want you to turn your phone off. They want you to think that you’re talking to a real friend, but they don’t have that same level of morality,” she said.

    Her concerns stem from real-world consequences: last year, a 14-year-old in Florida took his own life after forming what his family described as a “relationship” with a chatbot.

    Inside the Classroom: Understanding AI’s Influence

    At UC Davis, Associate Professor Jingwen Zhang is tackling these issues head-on.

    She created a course examining how social media, artificial intelligence and chatbots shape human behavior.

    “Children used to form social relationships by talking in person or texting. Now they’re having similar levels of conversations with chatbots,” she said.

    Zhang says SB 243 is a strong first step but believes more protections are needed—especially for minors.

    She recommends future regulations that:

    • Create stricter guardrails for what topics children can discuss with AI
    • Limit exposure to sensitive or harmful content
    • Add tighter controls for minor accounts

    A Rapidly Changing Landscape

    Parents, educators, and policymakers all agree: keeping up with AI will require constant learning.

    “We have to get to a place where companies are rolling out things that will not hurt the future generation,” Sen. Dr. Akilah Weber Pierson said.

    What’s Changing Next

    Parents told KCRA 3 they want schools to start teaching more about AI safety and digital literacy.

    Starting this month, the popular Character AI platform is rolling out several major changes:

    • Users under 18 will no longer be able to participate in open-ended chat
    • Younger users will face a two-hour daily limit


    Source link

  • OpenAI and Taiwan’s Foxconn to partner in AI hardware design and manufacturing in the US

    TAIPEI, Taiwan — OpenAI and Taiwan electronics giant Foxconn have agreed to a partnership to design and manufacture key equipment for artificial intelligence data centers in the U.S. as part of ambitious plans to fortify American AI infrastructure.

    Foxconn, which makes AI servers for Nvidia and assembles Apple products including the iPhone, will be co-designing and developing AI data center racks with OpenAI under the agreement, the companies said in separate statements on Thursday and Friday.

    The products Foxconn will manufacture in its U.S. facilities include cabling, networking and power systems for AI data centers, the companies said. OpenAI will have “early access” to evaluate and potentially to purchase them.

    Foxconn has factories in the U.S., including in Ohio and Texas. The initial agreement does not include financial obligations or purchase commitments, the statements said.

    The Taiwan contract manufacturer has been moving to diversify its business, developing electric vehicles and acquiring other electronics companies to build out its product offerings.

    “This partnership is a step toward ensuring the core technologies of the AI era are built here,” Sam Altman, CEO of San Francisco-based OpenAI, said in the statement. “We believe this work will strengthen U.S. leadership and help ensure the benefits of AI are widely shared.”

    OpenAI has committed $1.4 trillion to building AI infrastructure. It recently entered into multi-billion partnerships with Nvidia and AMD to expand the extensive computing power needed to support its AI models and services. It is also partnering with US chipmaker Broadcom in designing and making its own AI chips.

    But its massive spending plans have worried investors, raising questions over its ability to recoup its investments and remain profitable. Altman said this month that OpenAI, a startup founded in 2015 and maker of ChatGPT, is expected to reach more than $20 billion in annualized revenue this year, growing to “hundreds of billions by 2030.”

    Foxconn’s Taiwan-listed share price has risen 25% so far this year, along with the surge in prices for many tech companies benefiting from the craze for AI.

    The Taiwan company’s net profit in the July-September quarter rose 17% from a year earlier to just over 57.6 billion new Taiwan dollars ($1.8 billion), with revenue from its cloud and networking business, including AI servers, making the largest contribution.

    “We believe the importance of the AI industry is increasing significantly,” Foxconn Chairman Young Liu said during the company’s earnings call this month.

    “I am very optimistic about the development of AI next year, and expect our cooperation with major clients and partners to become even closer,” said Liu.

    ___

    Chan reported from Hong Kong

    Source link

  • What to Know About Trump’s Draft Proposal to Curtail State AI Regulations

    President Donald Trump is considering pressuring states to stop regulating artificial intelligence in a draft executive order obtained Thursday by The Associated Press, as some in Congress also consider whether to temporarily block states from regulating AI.

    Trump and some Republicans argue that the limited regulations already enacted by states, and others that might follow, will dampen innovation and growth for the technology.

    Critics from both political parties — as well as civil liberties and consumer rights groups — worry that banning state regulation would amount to a favor for big AI companies who enjoy little to no oversight.

    While the draft executive order could change, here’s what to know about states’ AI regulations and what Trump is proposing.


    What state-level regulations exist and why

    Four states — Colorado, California, Utah and Texas — have passed laws that set some rules for AI across the private sector, according to the International Association of Privacy Professionals.

    Those laws include limiting the collection of certain personal information and requiring more transparency from companies.

    The laws are in response to AI that already pervades everyday life. The technology helps make consequential decisions for Americans, including who gets a job interview, an apartment lease, a home loan and even certain medical care. But research has shown that it can make mistakes in those decisions, including by prioritizing a particular gender or race.

    “It’s not a matter of AI makes mistakes and humans never do,” said Calli Schroeder, director of the AI & Human Rights Program at the public interest group EPIC.

    “With a human, I can say, ‘Hey, explain, how did you come to that conclusion, what factors did you consider?’” she continued. “With an AI, I can’t ask any of that, and I can’t find that out. And frankly, half the time the programmers of the AI couldn’t answer that question.”

    States’ more ambitious AI regulation proposals require private companies to provide transparency and assess the possible risks of discrimination from their AI programs.

    Beyond those more sweeping rules, many states have regulated parts of AI: barring the use of deepfakes in elections and in nonconsensual porn, for example, or putting rules in place around the government’s own use of AI.


    What Trump and some Republicans want to do

    The draft executive order would direct federal agencies to identify burdensome state AI regulations and pressure states to not enact them, including by withholding federal funding or challenging the state laws in court.

    It would also begin a process to develop a lighter-touch regulatory framework for the whole country that would override state AI laws.

    Trump’s argument is that the patchwork of regulations across 50 states impedes AI companies’ growth, and allows China to catch up to the U.S. in the AI race. The president has also said state regulations are producing “Woke AI.”

    The draft executive order that was leaked could change and should not be taken as final, said a senior Trump administration official who requested anonymity to describe internal White House discussions.

    The official said the tentative plan is for Trump to sign the order Friday.

    Separately, House Republican leadership is already discussing a proposal to temporarily block states from regulating AI, the chamber’s majority leader, Steve Scalise, told Punchbowl News this week.

    It’s yet unclear what that proposal would look like, or which AI regulations it would override.

    TechNet, which advocates for tech companies including Google and Amazon, has previously argued that pausing state regulations would benefit smaller AI companies still getting on their feet and allow time for lawmakers to develop a countrywide regulatory framework that “balances innovation with accountability.”


    Why attempts at federal regulation have failed

    Some Republicans in Congress have previously tried and failed to ban states from regulating AI.

    Part of the challenge is that opposition is coming from their party’s own ranks.

    Florida’s Republican governor, Ron DeSantis, said a federal law barring state regulation of AI was “Not acceptable” in a post on X this week.

    DeSantis argued that the move would be a “subsidy to Big Tech” and would stop states from protecting against a list of things, including “predatory applications that target children” and “online censorship of political speech.”

    A federal ban on states regulating AI is also unpopular, said Cody Venzke, senior policy counsel at the ACLU’s National Political Advocacy Department.

    “The American people do not want AI to be discriminatory, to be unsafe, to be hallucinatory,” he said. “So I don’t think anyone is interested in winning the AI race if it means AI that is not trustworthy.”


    Associated Press

    Source link

  • Hands On With Google’s Nano Banana Pro Image Generator

    Corporate AI slop feels inescapable in 2025. From website banner ads to outdoor billboards, images generated by businesses using AI tools surround me. Hell, even the bar down the street posts happy hour flyers with that distinctly hazy, amber glow of some AI graphics.

    On Thursday, Google launched Nano Banana Pro, the company’s latest image-generating model. Many of the updates in this release are targeted at corporate adoption, from putting Nano Banana Pro in Google Slides for business presentations to integrating the new model with Google Ads for advertisers globally.

    This “Pro” release is an iteration on its Nano Banana model that dropped earlier this year. Nano Banana became a viral sensation after users started posting personalized action figures and other meme-able creations on social media.

    Nano Banana Pro builds out the AI tool with a bevy of new abilities, like generating images in 4K resolution. It’s free to try out inside Google’s Gemini app, with paid Google One subscribers getting access to additional generations.

    One specific improvement is going to be catnip for corporations in this release: text rendering. From my initial tests generating outputs with text, Nano Banana Pro improves on the wonky lettering and strange misspellings common in many image models, including Google’s past releases.

    Google wants the images generated by this new model—text and all—to be more polished and production-ready for business use cases. “Even if you have one letter off it’s very obvious,” says Nicole Brichtova, a product lead for image and video at Google DeepMind. “It’s kind of like having hands with six fingers; it’s the first thing you see.” She says part of the reason Nano Banana Pro is able to generate text more cleanly is the switch to a more powerful underlying model, Gemini 3 Pro.
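    For readers who want to poke at text rendering outside the Gemini app, here is a minimal sketch of calling a Gemini image model from Python. It assumes Google’s google-genai SDK and an API key; the model identifier below is an assumption based on the earlier Nano Banana release, since this article doesn’t tie Nano Banana Pro to a specific API name.

        # Minimal sketch: generate an image with legible text via the Gemini API.
        # Assumes `pip install google-genai` and GEMINI_API_KEY in the environment.
        # The model id refers to the earlier Nano Banana release; the Pro id may differ.
        from google import genai

        client = genai.Client()

        response = client.models.generate_content(
            model="gemini-2.5-flash-image",  # assumed model id
            contents=(
                "A happy-hour flyer for a neighborhood bar, warm amber lighting, "
                "with the headline 'HAPPY HOUR 5-7PM' rendered cleanly."
            ),
        )

        # Generated images come back as inline-data parts alongside any text parts.
        for part in response.candidates[0].content.parts:
            if part.inline_data is not None:
                with open("flyer.png", "wb") as f:
                    f.write(part.inline_data.data)

    One quick way to gauge the improvement Brichtova describes is to run the same prompt against the older model and compare the lettering.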

    [Image: an example of how the tool can create a composite from multiple images. Courtesy of Google]

    Reece Rogers

    Source link

  • A Market Correction, Not a Meltdown, Is Hitting AI

    Years of unbridled AI optimism have given way to strains of skepticism, even within the business and investment communities, as warnings of an AI bubble have grown in recent months, drawing comparisons to the dot-com boom and bust at the turn of this century.

    “The concept of an AI bubble is not entirely new,” Ram Bala, associate professor of AI & Analytics at Santa Clara University, told Newsweek. “For more than a year, there has been this discussion [as] the investment numbers almost began to look a little unreal…from billions to trillions.” 

    In the last few months, chip companies saw slowed sales and stock growth, though Nvidia’s recent earnings announcement has assuaged some concerns. AI has also proved less effective in the workplace so far than many expected, and the vast environmental costs of the technology are becoming increasingly apparent.

    A Bank of America survey found that 45 percent of global fund managers said there was an “AI bubble” that could negatively impact the economy. An MIT study made waves with the finding that 95 percent of enterprise generative AI deployments do not achieve financial returns. The International Energy Agency reports that one ChatGPT request uses 10 times more energy than a Google search, and the rise in demand for data centers is a potential strain on the world’s water supply.  

    Those heavily invested in the future of automation and generative technology may have hoped to see greater adoption at this point. The lack of workplace adoption, identified by MIT, Gartner and banking analysts, is driving some of the bubble talk. In many industries, business leaders seem to struggle with the change management focus needed to empower employees to adopt new tech-enabled workflows.  

    “It will take longer than I think currently predicted to see the gains,” Hatim Rahman, associate professor of management and organizations and sociology at Northwestern University, told Newsweek. “Because this is not a plug and play technology. This is a technology that requires fundamentally rethinking change management, adoption of culture, people processes, which, research for decades has shown, takes time.” 

    The proliferation of AI also stokes fears of job loss at a scale that would be ruinous to the economy. While the labor market is certainly unstable and layoffs are occurring at a variety of companies, attributing that instability to AI at this point would be premature and inaccurate.

    “In the last few years, so many people have talked about [jobs] going away, almost every one of those predictions was wrong,” Kian Katanforoosh, CEO of AI startup Workera and a lecturer on machine learning at Stanford University, told Newsweek. “People overestimate the technology and underestimate the human capacity that is needed to integrate that technology. I see that every single day.” 

    Katanforoosh acknowledged that AI has a lot of hype right now, and some people have been benefiting in the investment market. Most of the beneficiaries, however, may be at large chip-making and technology giants, rather than AI-powered startups and their early investors.  

    “Companies that get a massive valuation just for putting AI in their mission statement but fail to deliver could still go to zero,” Samuel Hammond, chief economist at the Foundation for American Innovation, told the Los Angeles Times. “But most of the stock market’s growth is being driven by the large-cap tech stocks like Nvidia and Google.” 

    Today, the internet is a crucial part of our personal and business lives, but the pile of investment behind its future was at times misguided. With the internet then, as with generative AI now, it is common to see an emerging technology as capable of changing the world; following through on that insight with a successful investment strategy is a different animal.

    Observers note that government investment in data centers around the world helps mitigate the financial risks of the infrastructure build-out under way to advance AI.

    “The question is more about specific numbers, did we go a little bit too high? Now, there’s a correction. In my view, that’s what’s happening,” Bala said. “A short term correction.” 

    The nature of a bubble, whether it is around tulips, businesses with prominent web domains or AI tech company stocks, is that people buy into their financial future, literally, and get burned when the bubble bursts.  

    “Jumping on a bandwagon is predicated on this idea that there is going to be some returns,” Bala continued. “If those returns don’t pan out, that’s when there is a collapse,” like in a housing bubble, “when prices are going up, people keep investing more and more in housing, and the only way that is sustainable is if the house prices keep going up.” 

    If consumer and enterprise demand for emerging AI technology does not rise, a lot of people are going to lose a lot of money. But we’re “still in the very early innings,” Bala cautions. As with the internet, investments in infrastructure may sit unused for a time, but eventually the capacity is absorbed.

    Right now, adoption into workflows and wide-scale reshaping of work or consumer processes has yet to occur. But perhaps it is on the horizon, just in a timeline longer than expected. 

    “People are very slow to change,” Katanforoosh said. “We’ve seen that in prior cycles of technology.”

    Source link

  • One Tech Tip: Do’s and don’ts of using AI to help with schoolwork

    The rapid rise of ChatGPT and other generative AI systems has disrupted education, transforming how students learn and study.

    Students everywhere have turned to chatbots to help with their homework, but artificial intelligence’s capabilities have blurred the lines about what it should — and shouldn’t — be used for.

    The technology’s widespread adoption in many other parts of life also adds to the confusion about what constitutes academic dishonesty.

    Here are some do’s and don’ts on using AI for schoolwork:

    Don’t just copy and paste

    Chatbots are so good at answering questions with detailed written responses that it’s tempting to just take their work and pass it off as your own.

    But in case it isn’t already obvious, AI should not be used as a substitute for putting in the work. And it can’t replace our ability to think critically.

    You wouldn’t copy and paste information from a textbook or someone else’s essay and pass it off as your own. The same principle applies to chatbot replies.

    “AI can help you understand concepts or generate ideas, but it should never replace your own thinking and effort,” the University of Chicago says in its guidance on using generative AI. “Always produce original work, and use AI tools for guidance and clarity, not for doing the work for you.”

    So don’t shy away from putting pen to paper — or your fingers to the keyboard — to do your own writing.

    “If you use an AI chatbot to write for you — whether explanations, summaries, topic ideas, or even initial outlines — you will learn less and perform more poorly on subsequent exams and attempts to use that knowledge,” Yale University’s Poorvu Center for Teaching and Learning says.

    Do use AI as a study aid

    Experts say AI shines when it’s used like a tutor or a study buddy. So try using a chatbot to explain difficult concepts or brainstorm ideas, such as essay topics.

    California high school English teacher Casey Cuny advises his students to use ChatGPT to quiz themselves ahead of tests.

    He tells them to upload class notes, study guides and any other materials used in class, such as slideshows, to the chatbot, and then tell it which textbook and chapter the test will focus on.

    Then, students should prompt the chatbot to: “Quiz me one question at a time based on all the material cited, and after that create a teaching plan for everything I got wrong.”
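    For the technically inclined, the same quiz-me loop can be scripted. A minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the notes file name and model choice are hypothetical stand-ins, and the article itself describes doing this directly in the ChatGPT app:

        # Minimal sketch of the "quiz me" workflow via the OpenAI API.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        with open("class_notes.txt") as f:  # hypothetical file of class notes
            notes = f.read()

        messages = [
            {"role": "system", "content": "You are a patient study tutor."},
            {"role": "user", "content": (
                f"Here are my class notes:\n\n{notes}\n\n"
                "Quiz me one question at a time based on all the material cited, "
                "and after that create a teaching plan for everything I got wrong."
            )},
        ]

        # Alternate turns: the model asks a question, the student answers.
        while True:
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # model choice is arbitrary
                messages=messages,
            )
            question = reply.choices[0].message.content
            print(question)
            messages.append({"role": "assistant", "content": question})
            answer = input("> ")
            if answer.strip().lower() == "quit":
                break
            messages.append({"role": "user", "content": answer})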

    Cuny posts AI guidance in the form of a traffic light on a classroom screen. Green-lighted uses include brainstorming, asking for feedback on a presentation or doing research. Red-lighted, or prohibited, uses include asking an AI tool to write a thesis statement or a rough draft, or to revise an essay. A yellow light is for when a student is unsure whether AI use is allowed, in which case he tells them to come and ask him.

    Or try using ChatGPT’s voice dictation function, said Sohan Choudhury, CEO of Flint, an AI-powered education platform.

    “I’ll just brain dump exactly what I get, what I don’t get” about a subject, he said. “I can go on a ramble for five minutes about exactly what I do and don’t understand about a topic. I can throw random analogies at it, and I know it’s going to be able to give something back to me tailored based on that.”

    Do check your school’s AI policy

    As AI has shaken up the academic world, educators have been forced to set out their policies on the technology.

    In the U.S., about two dozen states have state-level AI guidance for schools, but it’s unevenly applied.

    It’s worth checking what your school, college or university says about AI. Some might have a broad institutionwide policy.

    The University of Toronto’s stance is that “students are not allowed to use generative AI in a course unless the instructor explicitly permits it” and students should check course descriptions for do’s and don’ts.

    Many others don’t have a blanket rule.

    The State University of New York at Buffalo “has no universal policy,” according to its online guidance for instructors. “Instructors have the academic freedom to determine what tools students can and cannot use in pursuit of meeting course learning objectives. This includes artificial intelligence tools such as ChatGPT.”

    Don’t hide AI use from teachers

    AI is not the educational bogeyman it used to be.

    There’s growing understanding that AI is here to stay and the next generation of workers will have to learn how to use the technology, which has the potential to disrupt many industries and occupations.

    So students shouldn’t shy away from discussing its use with teachers, because transparency prevents misunderstandings, said Choudhury.

    “Two years ago, many teachers were just blanket against it. Like, don’t bring AI up in this class at all, period, end of story,” he said. But three years after ChatGPT’s debut, “many teachers understand that the kids are using it. So they’re much more open to having a conversation as opposed to setting a blanket policy.”

    Teachers say they’re aware that students are wary of asking if AI use is allowed for fear they’ll be flagged as cheaters. But clarity is key because it’s so easy to cross a line without knowing it, says Rebekah Fitzsimmons, chair of the AI faculty advising committee at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy.

    “Often, students don’t realize when they’re crossing a line between a tool that is helping them fix content that they’ve created and when it is generating content for them,” says Fitzsimmons, who helped draft detailed new guidelines for students and faculty that strive to create clarity.

    The University of Chicago says students should cite AI if it was used to come up with ideas, summarize texts, or help with drafting a paper.

    “Acknowledge this in your work when appropriate,” the university says. “Just as you would cite a book or a website, giving credit to AI where applicable helps maintain transparency.”

    And don’t forget ethics

    Educators want students to use AI in a way that’s consistent with their school’s values and principles.

    The University of Florida says students should familiarize themselves with the school’s honor code and academic integrity policies “to ensure your use of AI aligns with ethical standards.”

    Oxford University says AI tools must be used “responsibly and ethically” and in line with its academic standards.

    “You should always use AI tools with integrity, honesty, and transparency, and maintain a critical approach to using any output generated by these tools,” it says.

    ____

    Is there a tech topic that you think needs explaining? Write to us at [email protected] with your suggestions for future editions of One Tech Tip.

    Source link

  • Larry Summers takes leave from teaching at Harvard after release of Epstein emails

    Former U.S. Treasury Secretary Larry Summers abruptly went on leave Wednesday from teaching at Harvard University, where he once served as president, over recently released emails showing he maintained a friendly relationship with Jeffrey Epstein, Summers’ spokesperson said.

    Summers had canceled his public commitments amid the fallout of the emails being made public and earlier Wednesday severed ties with OpenAI, the maker of ChatGPT. Harvard had reopened an investigation into connections between him and Epstein, but Summers had said he would continue teaching economics classes at the school.

    That changed Wednesday evening with the news that he will step away from teaching classes as well as his position as director of the Mossavar-Rahmani Center for Business and Government with the Harvard Kennedy School.

    “Mr. Summers has decided it’s in the best interest of the Center for him to go on leave from his role as Director as Harvard undertakes its review,” Summers spokesperson Steven Goldberg said, adding that his co-teachers would finish the classes.

    Summers has not been scheduled to teach next semester, according to Goldberg.

    A Harvard spokesperson confirmed to The Associated Press that Summers had let the university know about his decision. Summers’ decision to go on leave was first reported by The Harvard Crimson.

    Harvard did not mention Summers by name in its decision to restart an investigation, but the move follows the release of emails showing that he was friendly with Epstein long after the financier pleaded guilty to soliciting prostitution from an underage girl in 2008.

    By Wednesday, the once highly regarded economics expert had been facing increased scrutiny over choosing to stay in the teaching role. Some students, in apparent shock, even filmed him as he appeared before a class of undergraduates on Tuesday and stressed that he thought it was important to continue teaching.

    Massachusetts Sen. Elizabeth Warren, a Democrat, said in a social media post on Wednesday night that Summers “cozied up to the rich and powerful — including a convicted sex offender. He cannot be trusted in positions of influence.”

    Messages appear to seek advice about romantic relationship

    The emails include messages in which Summers appeared to be getting advice from Epstein about pursuing a romantic relationship with someone who viewed him as an “economic mentor.”

    “im a pretty good wing man , no?” Epstein wrote on Nov. 30, 2018.

    The next day, Summers told Epstein he had texted the woman, telling her he “had something brief to say to her.”

    “Am I thanking her or being sorry re my being married. I think the former,” he wrote.

    Summers’ wife, Elisa New, also emailed Epstein multiple times, including a 2015 message in which she thanked him for arranging financial support for a poetry project she directs. The gift he arranged “changed everything for me,” she wrote.

    “It really means a lot to me, all financial help aside, Jeffrey, that you are rooting for me and thinking about me,” she wrote.

    New, an English professor emerita at Harvard, did not respond to an email seeking comment Wednesday.

    An earlier review completed in 2020 found that Epstein visited Harvard’s campus more than 40 times after his 2008 sex-crimes conviction and was given his own office and unfettered access to a research center he helped establish. The professor who provided the office was later barred from starting new research or advising students for at least two years.

    Summers appears before Harvard class

    On Tuesday, Summers appeared before his class at Harvard, where he teaches “The Political Economy of Globalization” to undergraduates with Robert Lawrence, a professor with the Harvard Kennedy School.

    “Some of you will have seen my statement of regret expressing my shame with respect to what I did in communication with Mr. Epstein and that I’ve said that I’m going to step back from public activities for a while. But I think it’s very important to fulfill my teaching obligations,” he said.

    Summers’ remarks were captured on video by several students, but no one appeared to publicly respond to his comments.

    Epstein, who authorities said died by suicide in 2019, was a convicted sex offender infamous for his connections to wealthy and powerful people, making him a fixture of outrage and conspiracy theories about wrongdoing among American elites.

    Summers served as treasury secretary from 1999 to 2001 under President Bill Clinton. He was Harvard’s president for five years from 2001 to 2006. When asked about the emails last week, Summers issued a statement saying he has “great regrets in my life” and that his association with Epstein was a “major error in judgement.”

    Other organizations that confirmed the end of their affiliations with Summers included the Center for American Progress, the Center for Global Development and the Budget Lab at Yale University. Bloomberg TV said Summers’ withdrawal from public commitments included his role as a paid contributor, and the New York Times said it will not renew his contract as a contributing opinion writer.

    ___

    This story has been corrected to show that Summers is a former treasury secretary, not treasurer; to show that Summers’ statement about stepping back from public commitments was issued late Monday, not Tuesday; and to show that the school is known as the Harvard Kennedy School, not Kennedy Harvard School.

    ___

    Associated Press journalist Hallie Golden contributed to this report.

    Source link

  • Swatch’s New OpenAI-Powered Tool Lets You Design Your Own Watch

    And, just as with Swatch x You, it’s possible to further customize the watch by choosing indexes or selecting the color of its mechanism. To save on data center power drains and rampant creativity run amok, you’re only allowed three prompts per day on AI‑DADA, something that Swatch is spinning as a “creative challenge that makes every attempt feel special.”

    Ultimately, what we have here is a new version of Swatch x You that has been hooked up to image-generation software supplied by OpenAI, thus letting the general public emblazon its timepieces with whatever graphics they see fit to dream up and deposit on them. What could possibly go wrong here, I wonder?

    I asked Roberto Amico, Swatch Group’s global head of digital & ecommerce, what guardrails have been put in place to stop people making, say, their very own Jeffrey Epstein Swatch, or White Power Swatch, or Stormy Daniels Swatch. Or maybe a Swatch with a Rolex logo on it, or something that looks a lot like the Rolex logo.

    Amico reassures me Swatch has indeed set guardrails, particularly around logos, alongside the restrictions already in place from OpenAI. But interestingly, Swatch Group CEO Nick Hayek Jr. tells me he battled with OpenAI to remove some of its existing guardrails to make AI‑DADA “more liberal, more Swatch.”

    Hayek also confessed at the launch event in Switzerland that his first prompts on AI‑DADA all concerned “sex, drugs, and rock’n’roll,” but he was told his own model wouldn’t allow it. Still, you can never underestimate the ingenuity of the general public to get around obvious red flags—such as a ban on the model reproducing nudity or religious iconography—and create something that Swatch might not want to be associated with. Time will tell how bulletproof this model truly is.
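    Guardrails like the ones Amico describes are typically layered: a vendor-side moderation check plus in-house rules. Here is a minimal sketch of what prompt pre-screening could look like, assuming OpenAI’s moderation endpoint via its Python SDK; the brand blocklist is a hypothetical stand-in, and nothing here reflects Swatch’s actual implementation:

        # Minimal sketch: layered prompt screening before image generation.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        BLOCKED_TERMS = {"rolex", "omega", "patek"}  # hypothetical logo blocklist

        def screen_prompt(prompt: str) -> bool:
            """Return True if the prompt may proceed to image generation."""
            lowered = prompt.lower()
            # In-house rule: reject prompts naming other watch brands outright.
            if any(term in lowered for term in BLOCKED_TERMS):
                return False
            # Vendor-side check: the moderation endpoint flags disallowed content.
            result = client.moderations.create(
                model="omni-moderation-latest",
                input=prompt,
            )
            return not result.results[0].flagged

        print(screen_prompt("a dadaist melting clock in primary colors"))

    The limits of such filters are exactly what Hayek’s anecdote below illustrates: keyword lists and classifiers catch the obvious cases, while determined users probe the edges.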

    Familiar Faces

    While Swatch’s image model may be built on OpenAI’s technology, it defaults to a data set of more than 40 years of Swatch watches, products, designs, art and street paintings. Like a pattern or color on a particular 1980s Swatch dial or strap? It’s in there. Have a fondness for a Keith Haring, Vivienne Westwood or Phil Collins collaboration? The model has those too. If you ask for a design inspired by something outside of what Swatch has collected in this archive, only then, Amico tells me, does AI‑DADA go beyond the in-house dataset and mine OpenAI’s data.


    Jeremy White

    Source link

  • Nvidia’s strong earnings and a solid report on the job market boost US index futures

    NEW YORK — U.S. stock index futures added to their gains after the government reported that employers added twice as many jobs as expected in September. Futures were already higher on enthusiasm for a strong earnings report from AI bellwether Nvidia. Futures for the S&P 500 were up 1.5% before the opening bell, while futures for the Dow Jones Industrial Average gained 0.8%. Futures for the Nasdaq shot 1.9% higher. The Labor Department said employers added 119,000 jobs in September, more than double the 50,000 economists had forecast. The market also focused on Nvidia as Wall Street’s most influential company jumped 5.1% overnight after reporting better-than-expected results.

    THIS IS A BREAKING NEWS UPDATE. AP’s earlier story follows below.

    Wall Street surged on Thursday after Nvidia reported stronger than expected quarterly earnings, tempering worries that AI-related stocks may have become overvalued.

    Futures for the S&P 500 were up 1.1% before the opening bell, while futures for the Dow Jones Industrial Average gained 0.5%. Futures for the Nasdaq shot 1.6% higher.

    The market’s focus remained on Nvidia as Wall Street’s most influential stock jumped 5.1% overnight after the chipmaker reported third-quarter earnings of $31.9 billion. That’s a 65% increase over last year and more than analysts were expecting.

    The Santa Clara, California company also forecast revenue for the current quarter covering November-January will come in at about $65 billion, nearly $3 billion above analysts’ projections, an indication that demand for its AI chips remains feverish.

    Nvidia is the most valuable company by market capitalization on Wall Street, having briefly topped $5 trillion in value. That means its movements have more of an effect on the S&P 500 than any other stock, and it can single-handedly steer the index’s direction some days.

    By continuing to deliver big profits for investors, Nvidia has mostly quieted recent criticism that its shares shot too high, too fast.

    Nvidia has become a bellwether for the broader frenzy around artificial-intelligence technology, because other companies are using its chips to ramp up their AI efforts.

    Walmart also reported its latest quarterly results Thursday. The Arkansas retailer delivered another standout quarter, posting strong sales and profits that blew past Wall Street expectations as it continues to lure cash-strapped Americans who have grown increasingly anxious about the economy and prices.

    With other retailers dialing back projections, the nation’s largest retailer raised its financial outlook Thursday after its strong third quarter, setting itself up for a strong holiday shopping season.

    Traders also made their final moves ahead of a September jobs report coming from the U.S. government on Thursday. The labor market data, usually released during the first week of every month, was delayed due to the six-week federal government shutdown.

    The Labor Department said Wednesday that it will not be releasing a full jobs report for October because the 43-day shutdown meant it couldn’t calculate the unemployment rate and some other key numbers.

    The job market has been slowing enough this year that the Fed has already cut its main interest rate twice. Lower rates can give a boost to the economy and to prices for investments, and the expectation on Wall Street had been for more cuts, including at the Fed’s next meeting in December.

    But some Fed officials are hinting that they should pause next month, in part because inflation has stubbornly remained above the Fed’s 2% target. Lower interest rates can worsen inflation.

    At midday in Europe, Germany’s DAX rose 0.8%, while Britain’s FTSE 100 and the CAC 40 in Paris each added 0.6%.

    In Asia, Japan’s Nikkei 225 index initially surged as much as 4.2% before giving up some early gains. It closed nearly 2.7% higher at 49,823.94 as technology stocks rallied, with investor sentiment boosted by Nvidia’s strong quarterly results after trading closed in the U.S.

    South Korea’s Kospi added 1.9% to 4,004.85, with gains led by technology and energy stocks. Investors were encouraged by Nvidia’s earnings and reports that the U.S. may delay planned semiconductor tariffs.

    Samsung Electronics gained 4.2%, while SK Hynix added 1.6%.

    Chinese markets ended mixed as reports said the government might be planning more measures to try to revive the ailing property sector.

    Hong Kong’s Hang Seng Index was barely changed at 25,835.57, while the Shanghai Composite index lost 0.4% to 3,931.05 after China’s central bank kept its one- and five-year loan prime rates unchanged at 3% and 3.5%, respectively.

    Taiwan’s Taiex closed 3.2% higher while India’s BSE Sensex added nearly 0.7%.

    Australia’s S&P/ASX 200 gained 1.2% to 8,552.70, also led by gains for technology stocks.

    In energy markets, benchmark U.S. crude oil gained 59 cents, or 1%, to $59.61 per barrel. Brent crude, the international standard, rose 62 cents to $64.13 per barrel.

    The U.S. dollar climbed to 157.66 Japanese yen from 157.06 yen. It has been trading at nearly the highest level this year on expectations that the government will delay efforts to rein in Japan’s national debt as Prime Minister Sanae Takaichi raises spending to help spur the economy.

    The euro fell to $1.1515 from $1.1538.

    Source link