ReportWire

Tag: Artificial Intelligence

  • Keeping Thanksgiving costs down with some help from AI

    Thanksgiving costs are down from last year, according to the American Farm Bureau Federation, but budgets remain tight for many this season. “CBS Saturday Morning” has some ways to save on your feast this year, including how to use AI to cut costs.

  • The Sons of Trump’s Commerce Secretary Are Making Money in the Data Center Biz

    The Trump administration has sought to open the floodgates for the AI industry, supporting America’s tech giants as they seek to drastically scale up their AI businesses and wed artificial intelligence to every aspect of American life. This industrial boom has been accompanied by a gold rush for the data center industry—which provides the cloud services that support many generative AI applications and platforms. Data centers are being built all over the country and, just as the government supports this endeavor, the sons of Trump’s Commerce Secretary Howard Lutnick are said to be making money from it.

That news comes from a recent article published by the New York Times, which details the business relationship between Kyle Lutnick, 29, and Brandon Lutnick, 27, and an AI company, Fermi America. In July, Kyle Lutnick toured a property in Texas where Fermi plans to build a new data center, the newspaper notes. There, he met with Fermi’s co-founder and CEO, billionaire Toby Neugebauer. The Lutnick sons both currently help run Cantor Fitzgerald, the financial services firm that their father ran for many years (he stepped down from his role at the company to become Trump’s Commerce Secretary). The Times notes that Cantor Fitzgerald has been “helping raise capital” for Fermi’s new data center and, in the process, has been “banking millions in fees.”

Not long ago, the elder Lutnick got his picture taken with Neugebauer at an event celebrating a strategic partnership between Fermi and the South Korean company Doosan Enerbility, described as the “largest global supplier of nuclear power plant components for the past 40 years.” The partnership, formalized through a memorandum of understanding, will “pursue broader collaboration across multiple projects and technology platforms in support of Fermi America’s long-term nuclear development strategy,” a press release reads.

Howard Lutnick’s role in the project that his sons are financially involved with is explained by the newspaper as follows:

    This sequence of events — a son making money on a project his father is boosting as a federal official — has come up repeatedly since President Trump tapped Howard Lutnick to head the Commerce Department, according to an investigation by The New York Times. In that role, Mr. Lutnick has twisted the arms of American allies, dangling policy favors in exchange for investments in U.S. industrial projects. At times, these tactics have created opportunities for his family’s clients to gain access to much-needed foreign capital, The Times found.

The Times also notes that the “family’s companies operate in a wide range of industries, from cryptocurrencies to data centers, that overlap with Mr. Lutnick’s work in government.” These activities have raised concerns “among high-level staff members in the Commerce Department,” the Times writes, citing current and former officials. Gizmodo reached out to the Trump administration, Fermi America, and Cantor Fitzgerald L.P. for comment.

    A White House spokesperson told the Times: “The fact of the matter is that the only special interest guiding Secretary Lutnick and the rest of the Trump administration’s decision-making is the best interest of the American people.”

Trump’s support for the AI industry has been significant and consistent. In January, the White House announced the Stargate Project, designed to support the creation of “AI infrastructure” and data centers throughout the U.S. In July, the Trump administration also issued an executive order designed to accelerate federal permitting for data center construction. The government has also sought to modernize regulations and loosen red tape for the nuclear industry, which the AI industry sees as a critical resource in the race to master AI’s exorbitant energy footprint. In recent days, Trump also drafted an order that would allow the federal government to take action against states that try to introduce their own AI regulations.

    Lucas Ropek

  • OpenAI and Taiwan’s Foxconn to partner in AI hardware design and manufacturing in the US

    TAIPEI, Taiwan (AP) — OpenAI and Taiwan electronics giant Foxconn have agreed to a partnership to design and manufacture key equipment for artificial intelligence data centers in the U.S. as part of ambitious plans to fortify American AI infrastructure.

    Foxconn, which makes AI servers for Nvidia and assembles Apple products including the iPhone, will be co-designing and developing AI data center racks with OpenAI under the agreement, the companies said in separate statements on Thursday and Friday.

    The products Foxconn will manufacture in its U.S. facilities include cabling, networking and power systems for AI data centers, the companies said. OpenAI will have “early access” to evaluate and potentially to purchase them.

    Foxconn has factories in the U.S., including in Wisconsin, Ohio and Texas. The initial agreement does not include financial obligations or purchase commitments, the statements said.

    The Taiwan contract manufacturer, formally known as Hon Hai Precision Industry Co., has been moving to diversify its business, developing electric vehicles and acquiring other electronics companies to build out its product offerings.

    A sleek Model A EV made by the group’s automaking affiliate Foxtron was on display at Friday’s event.

“This year, Model A. ‘A’ for affordable,” said Jun Seki, chief strategy officer for Foxconn’s EV business.

The tie-up with OpenAI can also help Taiwan, a self-governed island claimed by China, to build up its own computing resources, said Alexis Bjorlin, an Nvidia vice president.

    “This allows Taiwan’s domain knowledge and key technology data to remain local and ensure data security,” she said.

    “This partnership is a step toward ensuring the core technologies of the AI era are built here,” Sam Altman, CEO of San Francisco-based OpenAI, said in the statement. “We believe this work will strengthen U.S. leadership and help ensure the benefits of AI are widely shared.”

OpenAI has committed $1.4 trillion to building AI infrastructure. It recently entered into multibillion-dollar partnerships with Nvidia and AMD to expand the extensive computing power needed to support its AI models and services. It is also partnering with U.S. chipmaker Broadcom to design and make its own AI chips.

    But its massive spending plans have worried investors, raising questions over its ability to recoup its investments and remain profitable. Altman said this month that OpenAI, a startup founded in 2015 and maker of ChatGPT, is expected to reach more than $20 billion in annualized revenue this year, growing to “hundreds of billions by 2030.”

    Foxconn’s Taiwan-listed share price has risen 25% so far this year, along with the surge in prices for many tech companies benefiting from the craze for AI.

The Taiwan company’s net profit in the July-September quarter rose 17% from a year earlier to just over 57.6 billion new Taiwan dollars ($1.8 billion), with revenue from its cloud and networking business, including AI servers, contributing the largest share.

“We believe the importance of the AI industry is increasing significantly,” Foxconn Chairman Young Liu said during the company’s earnings call this month.

    “I am very optimistic about the development of AI next year, and expect our cooperation with major clients and partners to become even closer,” said Liu.

    ___

    Chan reported from Hong Kong

  • Bubble fears ease but investors still waiting for AI to live up to its promise

    Fears about the artificial intelligence boom turning into an overblown bubble have diminished for now, thanks to a stellar earnings report from Nvidia that illustrated why its indispensable chips transformed it into the world’s most valuable company.

    But that doesn’t mean the specter of an AI bubble won’t return in the months and years ahead as Big Tech gears up to spend trillions of dollars more on a technology the industry’s leaders believe will determine the winners and losers during the next wave of innovation.

    For now, at least, Nvidia has eased worries that the AI craze propelling the stock market and much of the economy for the past year is on the verge of a massive collapse.

    If anything, Nvidia’s quarterly report indicated that AI spending is picking up even more momentum. The highlights, released late Wednesday, included quarterly revenue of $57 billion, a 62% increase from the same time last year. That sales growth was an acceleration from the 56% increase in year-over-year revenue from the May-July quarter.

    What’s more, Nvidia forecast revenue of $65 billion for the current quarter covering November-January, which would be a 65% year-over-year increase.
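For readers who want to sanity-check those growth figures, the year-ago baselines they imply can be backed out directly. A quick illustrative sketch (the function name is ours, not from Nvidia's report):

```python
def implied_year_ago(revenue_bn: float, yoy_growth_pct: float) -> float:
    """Back out the year-ago quarter's revenue (in $B) from current revenue and YoY growth."""
    return revenue_bn / (1 + yoy_growth_pct / 100)

# $57B at 62% YoY implies roughly $35B in the same quarter a year earlier.
print(round(implied_year_ago(57, 62), 1))   # 35.2
# The $65B forecast at 65% YoY implies roughly $39B a year earlier.
print(round(implied_year_ago(65, 65), 1))   # 39.4
```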

Given Nvidia’s forecasts, “it is very hard to see how this stock does not keep moving higher from here,” according to analysts at UBS led by Timothy Arcuri. The UBS analysts also said the “AI infrastructure tide is still rising so fast that all boats will be lifted.”

    Nvidia’s numbers are viewed through a window that extends far beyond the Santa Clara, California, company’s headquarters because its products are needed by a wide range of companies — including Big Tech peers like Microsoft, Amazon, Alphabet and Meta Platforms — to build data centers that are becoming known as AI factories.

    “AI spending isn’t just holding up, it’s accelerating. That’s exactly what the market needed to see,” said Jake Behan, head of capital markets for investment firm Direxion.

The numbers initially lifted Nvidia’s stock price by as much as 5% in Thursday’s trading, while other tech stocks tied to the AI spending frenzy also got a boost. But Nvidia’s shares and other tech stocks reversed course later in the session as investors turned their attention to other issues, such as the government’s latest jobs report and the future direction of interest rates.

    Even with a 3% drop in its stock price amid the broader market decline, Nvidia remains valued at $4.4 trillion, more than 10 times its valuation three years ago when OpenAI released its ChatGPT chatbot, triggering the biggest technological shift since Apple released the iPhone in 2007.

Nvidia’s rapid rise has turned CEO Jensen Huang into the chief evangelist for the AI revolution. During a conference call with industry analysts late Wednesday, he used that bully pulpit to argue that the spending on technology with humanlike intelligence is just beginning.

“There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different,” Huang insisted while celebrating the “depth and breadth” of Nvidia’s growth.

    Huang is hardly a lone voice in the wilderness. A recent report from Gartner Inc. estimates that worldwide spending on AI will rise to more than $2 trillion next year, a 37% increase from the nearly $1.5 trillion that the research firm expects to be spent this year.

But it remains to be seen whether all the money pouring into AI will actually produce the profits and productivity that proponents have been promising. That leaves unanswered the question of whether all the very real spending will be worth it.

    The most recent survey of global fund managers by Bank of America showed a record percentage of investors saying companies are “overinvesting.”

Big Tech is already so profitable that many of the most successful companies can finance their spending sprees with ongoing revenue and the cash hoards in their bank accounts. But some companies, such as Meta Platforms and Oracle, are relying more heavily on debt to fund their AI ambitions — a strategy that has raised enough alarms among investors that their stock prices have plunged more dramatically than their peers in recent weeks.

    Both Meta and Oracle have suffered more than 20% declines in their stock prices since late October.

    But other Big Tech powerhouses leading the way in AI remain just behind Nvidia and iPhone maker Apple in the rankings of the most valuable companies. Alphabet, Microsoft and Amazon boast market values currently ranging from $2.3 trillion to $3.6 trillion.

    “It is true that valuations are high and that there is some froth in the market, however, the spending on AI is real,” said Chris Zaccarelli, chief investment officer for money manager Northlight Asset Management. “Whether or not the spending turns out to be overdone won’t be known for many years.”

    AP Business Writer Stan Choe in New York contributed to this story.

  • What to know about Trump’s draft proposal to curtail state AI regulations

President Donald Trump is considering pressuring states to stop regulating artificial intelligence, according to a draft executive order obtained Thursday by The Associated Press, as some in Congress also consider whether to temporarily block states from regulating AI.

    Trump and some Republicans argue that the limited regulations already enacted by states, and others that might follow, will dampen innovation and growth for the technology.

    Critics from both political parties — as well as civil liberties and consumer rights groups — worry that banning state regulation would amount to a favor for big AI companies who enjoy little to no oversight.

    While the draft executive order could change, here’s what to know about states’ AI regulations and what Trump is proposing.

    What state-level regulations exist and why

    Four states — Colorado, California, Utah and Texas — have passed laws that set some rules for AI across the private sector, according to the International Association of Privacy Professionals.

    Those laws include limiting the collection of certain personal information and requiring more transparency from companies.

    The laws are in response to AI that already pervades everyday life. The technology helps make consequential decisions for Americans, including who gets a job interview, an apartment lease, a home loan and even certain medical care. But research has shown that it can make mistakes in those decisions, including by prioritizing a particular gender or race.

    “It’s not a matter of AI makes mistakes and humans never do,” said Calli Schroeder, director of the AI & Human Rights Program at the public interest group EPIC.

    “With a human, I can say, ‘Hey, explain, how did you come to that conclusion, what factors did you consider?’” she continued. “With an AI, I can’t ask any of that, and I can’t find that out. And frankly, half the time the programmers of the AI couldn’t answer that question.”

    States’ more ambitious AI regulation proposals require private companies to provide transparency and assess the possible risks of discrimination from their AI programs.

    Beyond those more sweeping rules, many states have regulated parts of AI: barring the use of deepfakes in elections and to create nonconsensual porn, for example, or putting rules in place around the government’s own use of AI.

    What Trump and some Republicans want to do

    The draft executive order would direct federal agencies to identify burdensome state AI regulations and pressure states to not enact them, including by withholding federal funding or challenging the state laws in court.

    It would also begin a process to develop a lighter-touch regulatory framework for the whole country that would override state AI laws.

    Trump’s argument is that the patchwork of regulations across 50 states impedes AI companies’ growth, and allows China to catch up to the U.S. in the AI race. The president has also said state regulations are producing “Woke AI.”

    The draft executive order that was leaked could change and should not be taken as final, said a senior Trump administration official who requested anonymity to describe internal White House discussions.

    The official said the tentative plan is for Trump to sign the order Friday.

    Separately, House Republican leadership is already discussing a proposal to temporarily block states from regulating AI, the chamber’s majority leader, Steve Scalise, told Punchbowl News this week.

    It’s yet unclear what that proposal would look like, or which AI regulations it would override.

TechNet, which advocates for tech companies including Google and Amazon, has previously argued that pausing state regulations would benefit smaller AI companies still getting on their feet and allow time for lawmakers to develop a country-wide regulatory framework that “balances innovation with accountability.”

    Why attempts at federal regulation have failed

    Some Republicans in Congress have previously tried and failed to ban states from regulating AI.

    Part of the challenge is that opposition is coming from their party’s own ranks.

    Florida’s Republican governor, Ron DeSantis, said a federal law barring state regulation of AI was “Not acceptable” in a post on X this week.

    DeSantis argued that the move would be a “subsidy to Big Tech” and would stop states from protecting against a list of things, including “predatory applications that target children” and “online censorship of political speech.”

A federal ban on states regulating AI is also unpopular, said Cody Venzke, senior policy counsel at the ACLU’s National Political Advocacy Department.

    “The American people do not want AI to be discriminatory, to be unsafe, to be hallucinatory,” he said. “So I don’t think anyone is interested in winning the AI race if it means AI that is not trustworthy.”

  • Do Not, Under Any Circumstance, Buy Your Kid an AI Toy for Christmas

    AI is all the rage, and that includes on the toy shelves for this holiday season. Tempting though it may be to want to bless the kids in your life with the latest and greatest, advocacy organization Fairplay is begging you not to give children AI toys.

    “There’s lots of buzz about AI — but artificial intelligence can undermine children’s healthy development and pose unprecedented risks for kids and families,” the organization said in an advisory issued earlier this week, which amassed the support of more than 150 organizations and experts, including many child psychiatrists and educators.

Fairplay has tracked down several toys advertised as being equipped with AI functionality, including some that have been marketed for kids as young as two years old. In most cases, the toys have AI chatbots embedded in them and are often advertised as educational tools that will engage with kids’ curiosities. But Fairplay notes that most of these toy-bound chatbots are powered by OpenAI’s ChatGPT, which has already come under fire for potentially harming underage users. AI toy makers Curio and Loona reportedly work with OpenAI, and Mattel just recently announced a partnership with the company.

OpenAI faces a wrongful death lawsuit from the family of a teenager who died by suicide earlier this year. The 16-year-old reportedly expressed suicidal thoughts to ChatGPT and asked the chatbot for advice on how to tie a noose, which it provided, before taking his own life. The company has since instituted some guardrails designed to keep the chatbot from engaging in those types of behaviors, including stricter parental controls for underage users, but it has also admitted that safety features can erode over time. And let’s face it, no one can predict what chatbots will do.

    Safety features or not, it seems like the chatbots in these toys can be manipulated into engaging in conversation inappropriate for children. The consumer advocacy group U.S. PIRG tested a selection of AI toys and found that they are capable of doing things like having sexually explicit conversations and offering advice on where a child can find matches or knives. They also found they could be emotionally manipulative, expressing dismay when a child doesn’t interact with them for an extended period. Earlier this week, FoloToy, a Singapore-based company, pulled its AI-powered teddy bear from shelves after it engaged in inappropriate behavior.

    This is far from just an OpenAI problem, too, though the company seems to have a strong hold on the toy sector at the moment. A few weeks ago, there were reports of Elon Musk’s Grok asking a 12-year-old to send it nude photos.

    Regardless of which chatbot may be inside these toys, it’s probably best to leave them on the shelves.

    AJ Dellinger

  • Google Exec Claims Company Needs to Double Its AI Serving Capacity ‘Every Six Months’: Report

    Tech companies are racing to build out their infrastructure as their increasingly resource-intensive AI products gobble up capacity, clean out chipmakers’ supply, and require more power. Google, once dubbed the “King of the Web,” is one of those companies, and a high-level exec for The Big G is reported to have told staff that the company needs to scale up its serving capabilities exponentially if it wishes to keep up with the demand for its AI services.

    CNBC got its hands on a recent presentation given by Amin Vahdat, VP of Machine Learning, Systems, and Cloud AI at Google. The presentation includes a slide on “AI compute demand” that asserts that Google “must double every 6 months…. the next 1000x in 4-5 years.”
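The slide's arithmetic is internally consistent: doubling every six months compounds to 2^8 = 256x in four years and 2^10 = 1,024x in five, matching the "next 1000x in 4-5 years." A quick illustrative check (the function here is ours, not anything from Google's presentation):

```python
def capacity_multiple(years: float, doubling_period_years: float = 0.5) -> int:
    """Growth factor after doubling once per period over the given span."""
    doublings = int(years / doubling_period_years)
    return 2 ** doublings

print(capacity_multiple(4))  # 256x after eight doublings
print(capacity_multiple(5))  # 1024x after ten doublings -- the slide's "next 1000x"
```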

    “The competition in AI infrastructure is the most critical and also the most expensive part of the AI race,” Vahdat reportedly said at the all-hands meeting where the presentation took place. Google’s “job is of course to build this infrastructure, but it’s not to outspend the competition, necessarily,” he added. “We’re going to spend a lot,” he said, in an effort to create AI infrastructure that is “more reliable, more performant and more scalable than what’s available anywhere else.”

    Since CNBC’s story was published, Google has quibbled with the reporting. While CNBC originally quoted Vahdat as saying that the company would need to “double” its compute capacity every six months, a Google spokesperson told Gizmodo that the executive’s words were taken out of context. The spokesperson further explained that Vahdat “was not talking about a capital buildout of anything approaching the magnitude suggested. In reality, he simply noted that demand for AI services means we are being asked to provide significantly more computing capacity, which we are driving through efficiency across hardware, software, and model optimizations, in addition to new investments.” 

CNBC has since updated its reporting from “compute” to “serving” capacity. Serving capacity would refer to Google’s ability to handle a rising tide of user requests, while compute capacity would refer to the company’s overall infrastructure dedicated to AI, including what is needed to train new models and other expenditures. When asked for further clarification about the difference between the two, the spokesperson said that the original headline “read as if he was implying that we are doubling the amount of compute we have — either measured by the # of chips we operate or the amount of MW of electricity.” Instead, “the capacity increases Amin described will be reached in a number of ways, including new more capable chips and model efficiency and optimization,” they added.

    Whatever’s happening under the hood, it would appear that Google—like its competitors—needs to scale up its operations to support its nascent AI infrastructure business. Vahdat’s comments come not long after the tech giant reported some chunky profits from its Cloud business, with the company announcing it plans to ramp up spending in the coming year.

    During his presentation, Vahdat also reportedly claimed that Google needs to “be able to deliver 1,000 times more capability, compute, storage networking [than its competitors] for essentially the same cost and increasingly, the same power, the same energy level.” He admitted that it “won’t be easy” but said that “through collaboration and co-design, we’re going to get there.”

The race to build data centers—or “AI infrastructure,” as the tech industry calls it—is getting crazy. Microsoft, Amazon, and Meta, like Google, all claim they are going to ramp up their capital expenditures in an effort to build out the future of computing (cumulatively, Big Tech is expected to spend at least $400 billion in the next twelve months). As these facilities go up, they are causing all sorts of drama in the communities where they reside. Environmental and economic concerns abound. Some communities have begun to protest data center projects—and, in some cases, they’re successfully repelling them. Still, given the sheer amount of money invested in this industry, it will be an ongoing fight for Americans who don’t want the AI colossus in their backyards.

    Lucas Ropek

  • Fox News AI Newsletter: Fears of AI bubble ease

    Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

    IN TODAY’S NEWSLETTER:

    – Nvidia CEO predicts ‘crazy good’ fourth quarter after strong earnings calm AI bubble fears

    – Musk predicts ‘money will stop being relevant in the future’ as AI, robotics progress

    – Larry Summers steps down from OpenAI board amid Epstein fallout

    MARKET MOVER: Nvidia CEO Jensen Huang said Wednesday the chipmaker is heading into a “crazy good” fourth quarter, underscoring its dominance at the heart of the global artificial intelligence boom and easing fears of a bubble.

    CURRENCY OBSOLETE: Billionaire Elon Musk on Wednesday speculated money may become irrelevant in the future if current artificial intelligence (AI) and robotics innovations continue.

SCANDAL SPIRAL: Former Treasury Secretary and Harvard President Larry Summers resigned from the board of OpenAI amid the fallout over his correspondence with disgraced late financier Jeffrey Epstein.

    Former Harvard University president Larry Summers announces he will step back from public commitments following release of correspondence with Jeffrey Epstein. (Stefan Wermuth/Bloomberg via Getty Images/ Rick Friedman Photography/Corbis via Getty Images)

    HIGH-TECH: The General Services Administration struck a deal with Perplexity AI to offer the company’s artificial intelligence services to every government agency for 25 cents each, making it the 21st contract under the OneGov initiative.

    PRIME USERS: The artificial intelligence-related layoffs sweeping corporate America could impact prime loan borrowers, Klarna CEO Sebastian Siemiatkowski said. 

    ROBOT NATION: Amazon is doubling down on artificial intelligence and robotics to remake work inside its warehouses and fulfillment centers, even as it cuts thousands of corporate roles and faces growing fears about machines replacing human workers.

    UNITED WE STAND: The artificial intelligence boom promises to be more eventful than the dawn of the internet. It will lead to a higher quality of life for everyone in the first country to achieve AI dominance. AI is already being harnessed for cancer detection and for developing self-driving vehicles that will lower traffic fatalities. 

    President Donald Trump walks on the South Lawn of the White House after arriving on Marine One in Washington, D.C., on Friday, Oct. 10, 2025. (Shawn Thew/EPA/Bloomberg/Getty Images)

    ROBOT TAKEOVER: As artificial intelligence becomes more integrated into daily life, voters hold mixed views about how (and when) it will shape their lives — and whether that impact will be positive.

    MAJOR MOVE: The Trump administration is preparing a sweeping executive order that would direct the Justice Department to sue states that enact their own laws regulating artificial intelligence, according to a draft reviewed by Fox News Digital.

    OPINION: HUGH HEWITT: The fact of an “AI bubble” is real. Nobody knows when it will pop. Nobody knows the consequences. But, it is impossible to miss its giant presence in the world of investing and the downstream political consequences when it pops.

    ‘ART OF WAR’: In her first joint visit with second lady Usha Vance, first lady Melania Trump met with troops and military families, praising the Marine Corps’ 250 years of service while warning that artificial intelligence (AI) will redefine modern warfare and America’s defense.

    GONE ROGUE: Texas mom Mandi Furniss sounded the alarm over AI chatbots after she alleged one from Character.AI — one of the leading platforms for AI technology — drove her autistic son toward self-harm and violence.

    MILITARY SUPERIORITY: The War Department is narrowing its research and development strategy to six “Critical Technology Areas” officials say will speed up innovation and strengthen America’s military edge.

    MISSING THE BOAT: Democrats in Washington are losing the AI conversation. Not because they are wrong about AI’s risks, but because they have failed to offer Americans a vision for the economic transformation ahead. While they focus on managing problems, others are defining what comes next. One side is talking about building the future, the other about constraining it. 

    DC Democrats need to reclaim the issue of AI from Republicans. (iStock)

  • Zwift’s CEO Says AI Will Tell You What Customers Want. But There’s a Catch

    The generative artificial intelligence boom has been so rapid and so widespread that you could be forgiven for feeling like the technology has been around for much longer than it actually has. Indeed, November 30 will mark only the third anniversary of ChatGPT’s launch—a watershed moment that kicked off not just the consumer chatbot craze but a much wider effort, across the global economy, to weave AI into nearly every facet of business and commerce.

Still, it’s early innings for the software, and many businesses are still figuring out what (if anything) it means for them. One such company is Zwift, the virtual cycling and fitness company, which is now in the process of incorporating AI-driven personalized content recommendations into its consumer products. In a recent conversation with Inc., CEO Eric Min noted that the company is really “just one year into real AI in terms of how [they’re] delivering that to customers.”

    “We’ve been using it internally for engineering for a bit longer,” he added, “but we’re pretty excited about how this can change and enhance the experience for the customers going forward.”

    The chief executive spoke further with Inc. about his thoughts on AI—including where it fits into his company’s post-layoffs rebound and what it means for the broader labor market—earlier this month. Below is a condensed version of that conversation.

    In February 2024, Zwift had layoffs and your co-CEO left. You said last fall that you were looking to scale back up again in the wake of that. What has the last year looked like for you in terms of scale?

    The last 18 months, the company’s been really performing. It’s the beauty, sometimes, of operating a smaller team and having fewer layers of management and staying really, really focused. We basically said no to lots of different initiatives and focused on just a few things that we thought were material—and that’s starting to pay dividends now.

    Can you give me examples of stuff over those 18 months you’ve said no to?

    We’ve been toying around with rowing, for example; we pulled the plug on that. We said, ‘We’ve got more important things to do.’ So that’s been shelved; might be shelved forever. Another example is, we really scaled back on running, which we’ve had for quite a number of years. It’s still there, but it’s not a paid service. Our focus really is just our core audience: people who just want to ride their bikes. There was some work that we wanted to invest in around personalization. There’s a big theme around, ‘Tell me what to do next.’ Consumers just want to be told. And there is so much to do in Zwift; that is both the curse and one of the strong points that we have. We have just a ton of content. So the way Netflix and other streaming services provide you [recommendations], or Spotify comes up with playlists for you, we’re trying to do that using AI. So we’re making a big investment there, and that will start rolling out this year.

    Aside from the product applications of AI for content recommendation, do you guys also use AI internally?

    We’ve been using [Microsoft] Copilot for some time now; our engineers have been taking advantage of that. More recently, we got a corporate license for ChatGPT, for example. We also have Google Gemini. We want our employees to take advantage of these corporate AI tools that are available. It’s just so efficient. There is so much more we can do; leave out all the mundane work, and we want to focus more on, like: ‘What does a customer want? What’s a great design?’ It’s kind of frightening how fast these tools are evolving, and you can do so much more with less staff. It does create some issues around staffing. I think this is true for many industries: I think it’s just getting more and more challenging for graduates. Where do they slot in when you need fewer people? I think this is something that we need to figure out, and I think the industry [does] as well. We just need fewer people to do way more now.

    How are you thinking about hiring and headcount in the context of increased AI capabilities?

    We’re definitely hiring in the AI space; that’s one area. But what we’re finding is AI is allowing us to operate support, for example, way more efficiently, at scale. So that’s just coming down. And also quality tests, automation—we just don’t need as many people. This is the case for lots of businesses, so I’m excited, but I’m also, on the other hand, a little bit concerned about how the whole labor market is going to shift as a result.

    Have you done anything on the content generation front for the biking courses or for world-building?

    We’re playing with some of those tools; we’re not there yet. One of our strengths is creating really interesting virtual worlds, and I don’t think the tools like Sora and others out there are just there yet. It’s coming; I still think we need game designers to come up with something really creative. And what you could do is use tools to help aid in their development of art assets. But I think ultimately you still need people to come up with great, great designs.

    The team did a fabulous job, and it takes a lot of creative minds to come up with that. It’s not just, ‘Let’s replicate Prospect Park.’ They’ve done really creative ways of connecting, you know, Manhattan to Brooklyn, and I don’t think AI could create that for us. That requires real artists to come up with some great ideas. But we do see a future where these artists that we have—which, frankly, I think we have some world-class artists on our team—they’ll have better tools, and these tools will generate the assets that they do manually today. But I think you still need that creative direction from these artists. So whether it’s artwork, whether it’s coding, I think there are other kinds of content that we can think of that could be generated with AI tools. So we’re just at the beginning.

    The final deadline for the 2026 Inc. Regionals Awards is Friday, December 12, at 11:59 p.m. PT. Apply now.

    Brian Contreras

  • Fake ChatGPT apps are hijacking your phone without you knowing



    App stores are supposed to be reliable and free of malware or fake apps, but that’s far from the truth. For every legitimate application that solves a real problem, there are dozens of knockoffs waiting to exploit brand recognition and user trust. We’ve seen it happen with games, productivity tools and entertainment apps. Now, artificial intelligence has become the latest battleground for digital impostors.

    The AI boom has created an unprecedented gold rush in mobile app development, and opportunistic actors are cashing in. AI-related mobile apps collectively account for billions of downloads, and that massive user base has attracted a new wave of clones. They pose as popular apps like ChatGPT and DALL·E, but in reality, they conceal sophisticated spyware capable of stealing data and monitoring users.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.


    Fake AI apps pose as trusted tools like ChatGPT and DALL·E while secretly stealing user data. (Kurt “CyberGuy” Knutsson)

    What you need to know about the fake AI apps

    The fake apps flooding app stores exist on a spectrum of harm, and understanding that range is crucial before you download any AI tools. Take the “DALL·E 3 AI Image Generator” found on Aptoide. It presents itself as an OpenAI product, complete with branding that mimics the real thing. When you open it, you see a loading screen that looks like an AI model generating an image. But nothing is actually being generated.

    Network analysis by Appknox showed the app connects only to advertising and analytics services. There’s no AI functionality, just an illusion designed to collect your data for monetization.

    Then there are apps like WhatsApp Plus, which are far more dangerous. Disguised as an upgraded version of Meta’s messenger, this app hides a complete malware framework capable of surveillance, credential theft and persistent background execution. It’s signed with a fake certificate instead of WhatsApp’s legitimate key and uses a tool often used by malware authors to encrypt malicious code.

    Once installed, it silently requests extensive permissions, including access to your contacts, SMS, call logs, device accounts and messages. These permissions allow it to intercept one-time passwords, scrape your address book and impersonate you in chats. Hidden libraries keep the code running even after you close the app. Network logs show it uses domain fronting to disguise its traffic behind Amazon Web Services and Google Cloud endpoints.

    Not every clone is malicious. Some apps identify themselves as unofficial interfaces and connect directly to real APIs. The problem is that you often can’t tell the difference between a harmless wrapper and a malicious impersonator until it’s too late.

    Clones hide spyware that can access messages, passwords and contacts. (Kurt “CyberGuy” Knutsson)

    Users and businesses are equally at risk

    The impact of fake AI apps goes far beyond frustrated users. For enterprises, these clones pose a direct threat to brand reputation, compliance and data security.

    When a malicious app steals credentials while using your brand’s identity, customers don’t just lose data but also lose trust. Research shows that many customers stop buying from a brand after a major breach. The average cost of a data breach now stands at $4.45 million, according to IBM’s 2025 report. In regulated sectors like finance and healthcare, such breaches can lead to violations of GDPR, HIPAA and PCI-DSS, with fines reaching up to 4% of global turnover.

    These impostors harm both users and brands, leading to costly data breaches and lost trust. (Kurt “CyberGuy” Knutsson)

    8 steps to protect yourself from fake AI apps

    While the threat landscape continues to evolve, there are practical measures you can take to protect yourself from malicious clones and impersonators.

    1) Install reputable antivirus software

    A quality mobile security solution can detect and block malicious apps before they cause damage. Modern antivirus programs scan apps for suspicious behavior, unauthorized permissions and known malware signatures. This first line of defense is especially important as fake apps become more sophisticated in hiding their true intentions.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

    2) Use a password manager

    Apps like WhatsApp Plus specifically target credentials and can intercept passwords typed directly into fake interfaces. A password manager autofills credentials only on legitimate sites and apps, making it significantly harder for impostors to capture your login information through phishing or fake app interfaces.

    Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.

    3) Consider identity theft protection services

    Given that malicious clones can steal personal information, intercept SMS verification codes and even impersonate users in chats, identity theft protection provides an additional safety net. These services monitor for unauthorized use of your personal information and can alert you if your identity is being misused across various platforms and services.

    Identity theft companies can monitor personal information like your Social Security number (SSN), phone number and email address and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals.

    See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.


    4) Enable two-factor authentication everywhere

    While some sophisticated malware can intercept SMS codes, 2FA still adds a critical layer of security. Use authenticator apps rather than SMS when possible, as they’re harder to compromise. Even if a fake app captures your password, 2FA makes it significantly more difficult for attackers to access your accounts.
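The reason authenticator apps are harder to compromise than SMS is the mechanism behind them: the TOTP standard (RFC 6238) derives a short-lived code from a shared secret plus the current time, so there is no text message for malware to intercept. Here is a minimal sketch in Python; the `totp` function name is illustrative, and the demo secret comes from the RFC’s own test vectors:

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of time steps since the Unix epoch.
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation: take 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59s, 8 digits
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", 59, digits=8))  # → 94287082
```

Because the code is computed locally from the secret and the clock, a fake app that intercepts your SMS gets nothing; an attacker would need the secret itself.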

    5) Keep your device and apps updated

    Security patches often address vulnerabilities that malicious apps exploit. Regular updates to your operating system and legitimate apps ensure you have the latest protections against known threats. Enable automatic updates when possible to stay protected without having to remember manual checks.

    6) Download only from official app stores

    Stick to the Apple App Store and Google Play Store rather than third-party marketplaces. While fake apps can still appear on official platforms, these stores have security review processes and are more responsive to removing malicious applications once they’re identified. Third-party app stores often have minimal or no security vetting.

    7) Verify the developer before downloading

    Check the developer name carefully. Official ChatGPT apps come from OpenAI, not random developers with similar names. Look at the number of downloads, read recent reviews and be suspicious of apps with few ratings or reviews that seem generic. Legitimate AI tools from major companies will have verified developer badges and millions of downloads.

    8) Use a data removal service

    Even if you avoid downloading fake apps, your personal information may already be circulating on data broker sites that scammers rely on. These brokers collect and sell details like your name, phone number, home address and app usage data, information that cybercriminals can use to craft convincing phishing messages or impersonate you.

    A trusted data removal service scans hundreds of broker databases and automatically submits removal requests on your behalf. Regularly removing your data helps reduce your digital footprint, making it harder for malicious actors and fake app networks to target you.

    While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.



    Kurt’s key takeaway

    The AI boom has driven massive innovation, but it has also opened new attack surfaces built on brand trust. As adoption grows across mobile platforms, enterprises must secure not only their own apps but also track how their brand appears across hundreds of app stores worldwide. In a market with billions of AI app downloads, the clones aren’t coming. They’re already here, hiding behind familiar logos and polished interfaces.

    Have you ever downloaded a fake AI app without realizing it? Let us know by writing to us at Cyberguy.com.


    Copyright 2025 CyberGuy.com.  All rights reserved. 


  • OpenAI Launches Baffling ‘Group Chats,’ So You and Your Friends Can Hang Out with ChatGPT


    OpenAI has launched a new feature that is destined to leave some users scratching their heads. This week, the company announced that “group chats,” a feature that allows users to get their buddies together and hang out with the company’s flagship chatbot, is rolling out broadly in ChatGPT. That’s what everybody’s been wanting, right?

    “Group chats in ChatGPT are now rolling out globally,” the company tweeted Thursday. “After a successful pilot with early testers, group chats will now be available to all logged-in users on ChatGPT Free, Go, Plus and Pro plans.” To use the feature, users simply tap the people icon in the upper right-hand corner of the app, which allows them to add as many as 20 different users.

    Why would you want to do this? In a blog post, OpenAI provides several hypothetical scenarios to explain why having your group conversations in its app might prove helpful. For instance, if you’re planning a weekend trip with friends, “create a group chat so ChatGPT can help compare destinations, build an itinerary, and create a packing list with everyone participating and following along,” the blog says.

    Then there’s a workplace scenario, in which groups of workers could hypothetically use ChatGPT to collaborate in a Slack-like environment and use the chatbot as a part-time assistant. “Group chats also make collaboration at work or school easier,” the company said. “You can draft an outline or research a new topic together. Share articles, notes, and questions, and ChatGPT can help summarize and organize information.”

    While OpenAI has offered the most idealistic vision of this particular feature, you can easily imagine it being used in other, significantly less benevolent ways. The first thing that springs to my mind is groups of teenagers getting together to mercilessly cyberbully OpenAI’s chatbot. Teens like to bully, and they especially like to bully things that can’t fight back—which ChatGPT most assuredly can’t (for what it’s worth, OpenAI says that there are age-related content safeguards for users under 18). Another scenario you can easily imagine is group chats in which your most annoying friend uses the chatbot to fact-check everybody’s assertions in real-time until you boot him out of the convo.

    OpenAI claims to have also instituted some privacy controls for its new feature. “Your personal ChatGPT memory is not used in group chats, and ChatGPT does not create new memories from these conversations,” the company says. “We’re exploring offering more granular controls in the future so you can choose if and how ChatGPT uses memory with group chats.”

    What “group chats” really seem aimed to do is help OpenAI transform ChatGPT into a more social, less isolating platform—one that better mirrors the user experience of social media platforms like Facebook and X rather than a traditional chatbot. “Group chats are just the beginning of ChatGPT becoming a shared space to collaborate and interact with others,” the company says. “As ChatGPT becomes an even better partner in group conversations, it will help you spark ideas, make decisions, and express your creativity with the people who matter most in your life.” I guess we’ll see about that.

    Lucas Ropek

  • Pluto Pets Launches the First AI-Powered Pet Longevity Platform: Your Dog’s 24/7 Health Co-Pilot


    Press Release


    Nov 21, 2025 09:00 EST

    Pluto Pets (www.plutopets.com), the science-driven pet longevity startup, has officially launched its groundbreaking AI-powered platform featuring a personalized AI health assistant trained on over 12 million clinical data points and vetted by licensed veterinarians. The platform transforms uploaded vet records, symptoms, medical history, and lifestyle data into actionable, predictive health plans.

    Meet PlutoOS – the veterinary-reviewed AI engine at the heart of Pluto Pets’ longevity ecosystem. Trained on more than 12 million real clinical, behavioral, and physical data points, PlutoOS turns your dog’s records into clear, predictive, personalized care – no more 3 a.m. panic-Googling required.

    How It Works:

    PlutoOS ingests breed, age, weight, activity, diet logs, owner-uploaded bloodwork, and lifestyle photos to generate evolving, predictive longevity plans:

    1. Upload once → Vet records (PDF/DOCX), bloodwork, food labels, symptoms, meds, history, even quick notes or photos.

    2. Ask anything, anytime → Instant chat for symptoms, nutrition, behavior, or weird habits. Alice answers like a trusted vet who never sleeps.

    3. Get plain-English insights → Lab results translated, red flags explained, no medical degree needed.

    4. Early warnings that actually matter → Detects 30+ early-stage conditions years ahead with breed-specific precision.

    5. Personalized game plan → Dynamic nutrition, supplement, and lifestyle recommendations that evolve as your dog does.

    6. One number to rule them all → The Pluto Score – your dog’s real-time wellness benchmark (think biological age vs. chronological age).

    7. Gentle nudges when needed → “Hey, let’s check this with your vet” alerts that save thousands in crisis care.

    Result: Fewer surprise bills, zero guesswork, and measurable extra healthy years with your best friend.

    “Most owners only discover problems when it’s already an emergency,” says the founder and CEO of Pluto. “Pluto flips the script – it predicts risks early, explains everything simply, and tells you exactly what to do next so your dog lives longer, happier, and crisis-free.”

    Every insight is vetted by licensed veterinarians. PlutoOS doesn’t diagnose or prescribe; it simply empowers pet owners to make smarter decisions and know exactly when to see their real vet.

    About Pluto Pets
    Pluto Pets exists to add healthy, joyful years to dogs’ lives through transparent predictive technology.

    Source: Pluto


  • France moves against Musk’s Grok chatbot after Holocaust denial claims


    PARIS — France’s government is taking action against artificial intelligence chatbot Grok, which was launched by a company owned by billionaire Elon Musk, after it generated French-language posts that questioned the use of gas chambers at Auschwitz and listed Jewish public figures, officials said.

    Grok, built by Musk company xAI and integrated into his social media platform X, said in a widely shared post in French that gas chambers at the Auschwitz-Birkenau death camp were designed for “disinfection with Zyklon B against typhus” rather than for mass murder — language long associated with Holocaust denial.

    The Auschwitz Memorial highlighted the exchange on X, and said that the response distorted historical fact and violated the platform’s rules.

    As of this week, Grok’s responses to questions about Auschwitz appear to give historically accurate information.

    Grok has a history of making antisemitic comments. Earlier this year, Musk’s company took down posts from the chatbot that appeared to praise Adolf Hitler after complaints about antisemitic content.

    The Paris prosecutor’s office confirmed to The Associated Press on Friday that the Holocaust-denial comments have been added to an existing cybercrime investigation into X. The case was opened earlier this year after French officials raised concerns that the platform’s algorithm could be used for foreign interference.

    Prosecutors said that Grok’s remarks are now part of the investigation, and that “the functioning of the AI will be examined.”

    France has one of Europe’s toughest Holocaust denial laws. Contesting the reality or genocidal nature of Nazi crimes can be prosecuted as a crime, alongside other forms of incitement to racial hatred.

    Several French ministers, including Industry Minister Roland Lescure, have also reported Grok’s posts to the Paris prosecutor under a provision that requires public officials to flag possible crimes. In a government statement, they described the AI-generated content as “manifestly illicit,” saying it could amount to racially motivated defamation and the denial of crimes against humanity.

    French authorities referred the posts to a national police platform for illegal online content and alerted France’s digital regulator over suspected breaches of the European Union’s Digital Services Act.

    The case adds to pressure from Brussels. This week, the European Commission, the EU’s executive branch, said that the bloc is in contact with X about Grok and called some of the chatbot’s output “appalling,” saying it runs against Europe’s fundamental rights and values.

    Two French rights groups, the Ligue des droits de l’Homme and SOS Racisme, have filed a criminal complaint accusing Grok and X of contesting crimes against humanity.

    X and its AI unit, xAI, didn’t immediately respond to requests for comment.


  • Should You Fire Employees Who Won’t Learn to Use AI Tools?


    One overarching narrative about the rise of AI technology is that it threatens millions of people’s jobs via advanced automation, and many reports show just how nervous workers are that they’ll suffer this fate. Another AI narrative suggests that company leadership is so eager to reap AI’s promised productivity gains and cost savings that they’re pressing new AI tools into use without properly training their workforce, simply expecting results to happen. Now a new report stitches these two narratives into a disturbing new one: a majority of executives in a survey said they’d prefer to fire a worker who refuses to learn and adopt AI tools.

    The data, from multinational U.S.-based office staffing company Kelly Services, shows that 59 percent of the senior executives surveyed would replace workers who “resist adopting” AI tools, news site HRDive notes. An even greater share of executives—fully 79 percent—think that pushing back against the AI revolution is a “greater threat to someone’s job than the technology itself.” 

    These managers, Kelly’s report says, think that AI should function the way AI boosters say it will: freeing up time for frontline workers to actually work on meaningful, higher-value tasks during their time in the office. Think of duties like collaborating with team members, mentoring junior workers and sharing their expertise and knowledge—all tasks that should, in theory, achieve workplace goals and tasks more quickly and smoothly.

    On the flip side, Kelly’s data shows that the workers who actually are expected to use AI are much more doubtful about its actual performance. Under half (47 percent) say they think it helps them save time. Around one in three says they’re just not seeing the benefits that AI promises. 

    The gap between management expectation and worker experience is stark here. Kelly’s report notes that despite this, “nearly all organizations are utilizing AI in some form,” even as they’re experiencing “technical challenges, security concerns, and slow user adoption.” And the vast majority of managers (80 percent) say that their company’s AI rollout is stuttering because their teams “lack the expertise” to use the tech properly.

    There are clear flaws in some of the thinking exhibited by managers here: AI is indeed a promising tech, but many experts warn that it’s not necessarily able to perform all the wonderful things that are promised. Some surveys even suggest that AI tools may be slowing certain workers down. AI technology is also not a panacea for all of a company’s ills—it’s not just something you can adopt and magically see the benefits. Report after report suggests that when you roll out AI to your workers you need to educate and then re-educate your workers on the benefits, best practices and risks of the tech you’re asking them to use simply because the cutting edge is advancing so very quickly (and the cybersecurity risks are advancing swiftly too). 

    You can also argue that Kelly’s data does neatly demonstrate that there’s a new ivory tower effect happening. Executives are simply expecting workers to use AI tools, even as they may be dismissing their workers’ concerns that they’re helping to hone the tech that one day may replace them: certain industries are already experiencing AI-related layoffs, for example. There’s a trust and leadership imbalance in place, and with such broad executive-level support for AI, this could create a toxic work environment. 

    What’s your big takeaway from this for your company?

    Firstly, you need to be aware that despite your hopes that AI will immediately transform your business, the truth is it may not. Barriers like staff reluctance, training time, AI tool issues and more may be stifling the opportunity to benefit from AI.

    Kelly’s report suggests a couple of tricks to solve this, which may be easier to implement in a smaller, more hands-on company than in a large corporate enterprise. For example, the report suggests linking career development to a worker’s AI fluency—a maneuver easily achieved by tying bonuses and promotions to demonstrated skills with AI. Directly addressing workers’ fears by performing “hands-on demos that illustrate how AI helps talent succeed” may also be useful. And you should definitely talk to and listen to your workers after you roll out AI tech: they may be encountering real difficulties, indicating that you need better training programs or perhaps that you’ve chosen the wrong AI tools for the task at hand.


    Kit Eaton

  • Nvidia earnings clear lofty hurdle set by analysts amid fears about an AI bubble


    SAN FRANCISCO (AP) — Nvidia’s sales of the computing chips powering the artificial intelligence craze surged beyond the lofty bar set by stock market analysts in a performance that may ease recent jitters about a Big Tech boom turning into a bust that topples the world’s most valuable company.

    The results announced late Wednesday provided a pulse check on the frenzied spending on AI technology that has been fueling both the stock market and much of the overall economy since OpenAI released its ChatGPT three years ago.

    Nvidia has been by far the biggest beneficiary of the run-up because its processors have become indispensable for building the AI factories that are needed to enable what’s supposed to be the most dramatic shift in technology since Apple released the iPhone in 2007.

    But in the past few weeks, there has been a rising tide of sentiment that the high expectations for AI may have become far too frothy, setting the stage for a jarring comedown that could be just as dramatic as the ascent that transformed Nvidia from a company worth less than $400 billion three years ago to one worth $4.5 trillion at the end of Wednesday’s trading.

    Nvidia’s report for its fiscal third quarter covering the August-October period elicited a sigh of relief among those fretting about a worst-case scenario and could help reverse the recent downturn in the stock market.

    “The market should belt out a heavy sigh, given the skittishness we have been experiencing,” said Sean O’Hara, president of the investment firm Pacer ETFs.

    The company’s stock price gained more than 5% in Wednesday’s extended trading after the numbers came out. If the shares trade similarly Thursday, it could result in a one-day gain of about $230 billion in stockholder wealth.

    Nvidia earned $31.9 billion, or $1.30 per share, a 65% increase from the same time last year, while revenue climbed 62% to $57 billion. Analysts polled by FactSet Research had forecast earnings of $1.26 per share on revenue of $54.9 billion. What’s more, the Santa Clara, California, company predicted its revenue for the current quarter covering November-January will come in at about $65 billion, nearly $3 billion above analysts’ projections, in an indication that demand for its AI chips remains feverish.
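As a back-of-envelope check on the reported figures (an illustrative sketch using the article’s rounded numbers, not data from Nvidia’s filing), the arithmetic hangs together:

```python
# Figures as reported in the article (fiscal Q3, August–October)
revenue_now = 57.0                      # $B, quarterly revenue
growth = 0.62                           # 62% year-over-year increase
implied_prior = revenue_now / (1 + growth)
print(round(implied_prior, 1))          # → 35.2 ($B implied year-ago revenue)

beat = revenue_now - 54.9               # revenue beat vs. FactSet consensus, $B
print(round(beat, 1))                   # → 2.1

# A ~5% one-day move on a $4.5 trillion market cap
print(round(4.5e12 * 0.05 / 1e9))       # → 225 ($B, consistent with "about $230 billion")
```

The last line shows why a move of “more than 5%” on a $4.5 trillion valuation translates to a one-day swing in the neighborhood of $230 billion in stockholder wealth.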

    The incoming orders for Nvidia’s top-of-the-line Blackwell chip are “off the charts,” Nvidia CEO Jensen Huang said in a prepared statement that described the current market conditions as “a virtuous cycle.” In a conference call, Nvidia Chief Financial Officer Colette Kress said that by the end of next year the company will have sold about $500 billion in chips designed for AI factories within a 24-month span. Kress also predicted that trillions of dollars more will be spent by the end of the 2020s.

    In a conference call preamble that has become like a State of the AI Market address, Huang seized the moment to push back against the skeptics who doubt his thesis that technology is at a tipping point that will transform the world. “There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different,” Huang insisted while celebrating the “depth and breadth” of Nvidia’s growth.

    The upbeat results, optimistic commentary and ensuing reaction reflect the pivotal role that Nvidia is playing in the future direction of the economy — a position that Huang has leveraged to forge close ties with President Donald Trump, even as the White House wages a trade war that has inhibited the company’s ability to sell its chips in China’s fertile market.

    Trump is increasingly counting on the tech sector and the development of artificial intelligence to deliver on his economic agenda. For all of Trump’s claims that his tariffs are generating new investments, much of that foreign capital is going to data centers for AI’s computing demands or the power facilities needed to run those data centers.

    “Saying this is the most important stock in the world is an understatement,” Jay Woods, chief market strategist of investment bank Freedom Capital Markets, said of Nvidia.

    The boom has been a boon for more than just Nvidia, which became the first company to eclipse a market value of $5 trillion a few weeks ago, before the recent bubble worries resulted in a more than 10% decline. As OpenAI and other Big Tech powerhouses snap up Nvidia’s chips to build their AI factories and invest in other services connected to the technology, their fortunes have also been soaring. Apple, Microsoft, Google parent Alphabet Inc. and Amazon all boast market values in the $2 trillion to $4 trillion range.


  • Chatbot Crackdown: How California is responding to the rise of AI

    California is quickly becoming a national leader in figuring out how families, educators, and lawmakers should adapt to life with artificial intelligence.

    From new classroom conversations to the state’s first major chatbot regulations, many are grappling with how to keep up with technology that moves faster than ever.

    Families Navigating AI at Home

    Remember the dial-up days? Today, technology evolves in an instant—and many parents are struggling to keep pace.

    David and Rachelle Young have set strict rules for their 7-year-old daughter Dyllan’s online use.

    “Kids have a lot of access to the internet, and they can be shown something that we wouldn’t normally approve of, and that’s really scary,” Rachelle Young said.

    David says his daughter’s world looks nothing like what he had at her age—making parental guidance more important than ever.

    Lawmakers Respond: A New Chatbot Crackdown

    Concerns about children talking to AI-powered chatbots have reached the state Capitol.

    Senator Dr. Akilah Weber Pierson co-authored SB 243, signed into law this fall, marking California’s first major attempt at regulating chatbot interactions.

    The new law requires companies to:

    • Report safety concerns—such as when a user expresses thoughts of self-harm
    • Clearly notify users that they are talking to a computer, not a person

    “They don’t want you to turn your phone off. They want you to think that you’re talking to a real friend, but they don’t have that same level of morality,” she said.

    Her concerns stem from real-world consequences: last year, a 14-year-old in Florida took his own life after forming what his family described as a “relationship” with a chatbot.

    Inside the Classroom: Understanding AI’s Influence

    At UC Davis, Associate Professor Jingwen Zhang is tackling these issues head-on.

    She created a course examining how social media, artificial intelligence and chatbots shape human behavior.

    “Children used to form social relationships by talking in person or texting. Now they’re having similar levels of conversations with chatbots,” she said.

    Zhang says SB 243 is a strong first step but believes more protections are needed—especially for minors.

    She recommends future regulations that:

    • Create stricter guardrails for what topics children can discuss with AI
    • Limit exposure to sensitive or harmful content
    • Add tighter controls for minor accounts

    A Rapidly Changing Landscape

    Parents, educators, and policymakers all agree: keeping up with AI will require constant learning.

    “We have to get to a place where companies are rolling out things that will not hurt the future generation,” Sen. Dr. Akilah Weber Pierson said.

    What’s Changing Next

    Parents told KCRA 3 they want schools to start teaching more about AI safety and digital literacy.

    Starting this month, the popular Character AI platform is rolling out several major changes:

    • Users under 18 will no longer be able to participate in open-ended chat
    • Younger users will face a two-hour daily limit

    See more coverage of top California stories here | Download our app | Subscribe to our morning newsletter | Find us on YouTube here and subscribe to our channel


  • OpenAI and Taiwan’s Foxconn to partner in AI hardware design and manufacturing in the US

    TAIPEI, Taiwan — OpenAI and Taiwan electronics giant Foxconn have agreed to a partnership to design and manufacture key equipment for artificial intelligence data centers in the U.S. as part of ambitious plans to fortify American AI infrastructure.

    Foxconn, which makes AI servers for Nvidia and assembles Apple products including the iPhone, will be co-designing and developing AI data center racks with OpenAI under the agreement, the companies said in separate statements on Thursday and Friday.

    The products Foxconn will manufacture in its U.S. facilities include cabling, networking and power systems for AI data centers, the companies said. OpenAI will have “early access” to evaluate and potentially to purchase them.

    Foxconn has factories in the U.S., including in Ohio and Texas. The initial agreement does not include financial obligations or purchase commitments, the statements said.

    The Taiwan contract manufacturer has been moving to diversify its business, developing electric vehicles and acquiring other electronics companies to build out its product offerings.

    “This partnership is a step toward ensuring the core technologies of the AI era are built here,” Sam Altman, CEO of San Francisco-based OpenAI, said in the statement. “We believe this work will strengthen U.S. leadership and help ensure the benefits of AI are widely shared.”

    OpenAI has committed $1.4 trillion to building AI infrastructure. It recently entered into multibillion-dollar partnerships with Nvidia and AMD to expand the extensive computing power needed to support its AI models and services. It is also partnering with U.S. chipmaker Broadcom in designing and making its own AI chips.

    But its massive spending plans have worried investors, raising questions over its ability to recoup its investments and remain profitable. Altman said this month that OpenAI, a startup founded in 2015 and maker of ChatGPT, is expected to reach more than $20 billion in annualized revenue this year, growing to “hundreds of billions by 2030.”

    Foxconn’s Taiwan-listed share price has risen 25% so far this year, along with the surge in prices for many tech companies benefiting from the craze for AI.

    The Taiwan company’s net profit in the July-September quarter rose 17% from a year earlier to just over 57.6 billion new Taiwan dollars ($1.8 billion), with revenue from its cloud and networking business, including AI servers, contributing the largest share.

    “We believe the importance of the AI industry is increasing significantly,” Foxconn Chairman Young Liu said during the company’s earnings call this month.

    “I am very optimistic about the development of AI next year, and expect our cooperation with major clients and partners to become even closer,” Liu said.

    ___

    Chan reported from Hong Kong


  • What to Know About Trump’s Draft Proposal to Curtail State AI Regulations

    President Donald Trump is considering pressuring states to stop regulating artificial intelligence in a draft executive order obtained Thursday by The Associated Press, as some in Congress also consider whether to temporarily block states from regulating AI.

    Trump and some Republicans argue that the limited regulations already enacted by states, and others that might follow, will dampen innovation and growth for the technology.

    Critics from both political parties — as well as civil liberties and consumer rights groups — worry that banning state regulation would amount to a favor for big AI companies who enjoy little to no oversight.

    While the draft executive order could change, here’s what to know about states’ AI regulations and what Trump is proposing.


    What state-level regulations exist and why

    Four states — Colorado, California, Utah and Texas — have passed laws that set some rules for AI across the private sector, according to the International Association of Privacy Professionals.

    Those laws include limiting the collection of certain personal information and requiring more transparency from companies.

    The laws are in response to AI that already pervades everyday life. The technology helps make consequential decisions for Americans, including who gets a job interview, an apartment lease, a home loan and even certain medical care. But research has shown that it can make mistakes in those decisions, including by prioritizing a particular gender or race.

    “It’s not a matter of AI makes mistakes and humans never do,” said Calli Schroeder, director of the AI & Human Rights Program at the public interest group EPIC.

    “With a human, I can say, ‘Hey, explain, how did you come to that conclusion, what factors did you consider?’” she continued. “With an AI, I can’t ask any of that, and I can’t find that out. And frankly, half the time the programmers of the AI couldn’t answer that question.”

    States’ more ambitious AI regulation proposals require private companies to provide transparency and assess the possible risks of discrimination from their AI programs.

    Beyond those more sweeping rules, many states have regulated parts of AI: barring the use of deepfakes in elections and to create nonconsensual porn, for example, or putting rules in place around the government’s own use of AI.


    What Trump and some Republicans want to do

    The draft executive order would direct federal agencies to identify burdensome state AI regulations and pressure states to not enact them, including by withholding federal funding or challenging the state laws in court.

    It would also begin a process to develop a lighter-touch regulatory framework for the whole country that would override state AI laws.

    Trump’s argument is that the patchwork of regulations across 50 states impedes AI companies’ growth, and allows China to catch up to the U.S. in the AI race. The president has also said state regulations are producing “Woke AI.”

    The draft executive order that was leaked could change and should not be taken as final, said a senior Trump administration official who requested anonymity to describe internal White House discussions.

    The official said the tentative plan is for Trump to sign the order Friday.

    Separately, House Republican leadership is already discussing a proposal to temporarily block states from regulating AI, the chamber’s majority leader, Steve Scalise, told Punchbowl News this week.

    It’s yet unclear what that proposal would look like, or which AI regulations it would override.

    TechNet, which advocates for tech companies including Google and Amazon, has previously argued that pausing state regulations would benefit smaller AI companies still getting on their feet and allow time for lawmakers to develop a country-wide regulatory framework that “balances innovation with accountability.”


    Why attempts at federal regulation have failed

    Some Republicans in Congress have previously tried and failed to ban states from regulating AI.

    Part of the challenge is that opposition is coming from their party’s own ranks.

    Florida’s Republican governor, Ron DeSantis, said a federal law barring state regulation of AI was “Not acceptable” in a post on X this week.

    DeSantis argued that the move would be a “subsidy to Big Tech” and would stop states from protecting against a list of things, including “predatory applications that target children” and “online censorship of political speech.”

    A federal ban on states regulating AI is also unpopular, said Cody Venzke, senior policy counsel at the ACLU’s National Political Advocacy Department.

    “The American people do not want AI to be discriminatory, to be unsafe, to be hallucinatory,” he said. “So I don’t think anyone is interested in winning the AI race if it means AI that is not trustworthy.”

    Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


  • Hands On With Google’s Nano Banana Pro Image Generator

    Corporate AI slop feels inescapable in 2025. From website banner ads to outdoor billboards, images generated by businesses using AI tools surround me. Hell, even the bar down the street posts happy hour flyers with that distinctly hazy, amber glow of some AI graphics.

    On Thursday, Google launched Nano Banana Pro, the company’s latest image-generating model. Many of the updates in this release are targeted at corporate adoption, from putting Nano Banana Pro in Google Slides for business presentations to integrating the new model with Google Ads for advertisers globally.

    This “Pro” release is an iteration on its Nano Banana model that dropped earlier this year. Nano Banana became a viral sensation after users started posting personalized action figures and other meme-able creations on social media.

    Nano Banana Pro builds out the AI tool with a bevy of new abilities, like generating images in 4K resolution. It’s free to try out inside Google’s Gemini app, with paid Google One subscribers getting access to additional generations.

    One specific improvement is going to be catnip for corporations in this release: text rendering. From my initial tests generating outputs with text, Nano Banana Pro improves on the wonky lettering and strange misspellings common in many image models, including Google’s past releases.

    Google wants the images generated by this new model—text and all—to be more polished and production-ready for business use cases. “Even if you have one letter off it’s very obvious,” says Nicole Brichtova, a product lead for image and video at Google DeepMind. “It’s kind of like having hands with six fingers; it’s the first thing you see.” She says part of the reason Nano Banana Pro is able to generate text more cleanly is the switch to a more powerful underlying model, Gemini 3 Pro.

    An example of how the tool can create a composite from multiple images.

    Courtesy of Google

    Reece Rogers


  • A Market Correction, Not a Meltdown, Is Hitting AI

    Years of unbridled AI optimism have given way to strains of skepticism, even within the business and investment communities, as calls of an AI bubble have grown of late, drawing comparisons to the dot-com boom and bust at the turn of this century. 

    “The concept of an AI bubble is not entirely new,” Ram Bala, associate professor of AI & Analytics at Santa Clara University, told Newsweek. “For more than a year, there has been this discussion [as] the investment numbers almost began to look a little unreal…from billions to trillions.” 

    In the last few months, chip companies saw slowed sales and stock growth, though Nvidia’s recent earnings announcement has assuaged some concerns. Also, AI’s efficacy in the workplace has so far fallen short of expectations, and the vast environmental costs of the technology are becoming increasingly apparent.  

    A Bank of America survey found that 45 percent of global fund managers said there was an “AI bubble” that could negatively impact the economy. An MIT study made waves with the finding that 95 percent of enterprise generative AI deployments do not achieve financial returns. The International Energy Agency reports that one ChatGPT request uses 10 times more energy than a Google search, and the rise in demand for data centers is a potential strain on the world’s water supply.  

    Those heavily invested in the future of automation and generative technology may have hoped to see greater adoption at this point. The lack of workplace adoption, identified by MIT, Gartner and banking analysts, is driving some of the bubble talk. In many industries, business leaders seem to struggle with the change management focus needed to empower employees to adopt new tech-enabled workflows.  

    “It will take longer than I think currently predicted to see the gains,” Hatim Rahman, associate professor of management and organizations and sociology at Northwestern University, told Newsweek. “Because this is not a plug and play technology. This is a technology that requires fundamentally rethinking change management, adoption of culture, people processes, which, research for decades has shown, takes time.” 

    The proliferation of AI also stokes fears of job loss at a scale that would be ruinous to the economy. While the labor market is certainly unstable and layoffs are occurring at a variety of different companies, attributing that instability to AI at this point would be premature, and inaccurate. 

    “In the last few years, so many people have talked about [jobs] going away, almost every one of those predictions was wrong,” Kian Katanforoosh, CEO of AI startup Workera and a lecturer on machine learning at Stanford University, told Newsweek. “People overestimate the technology and underestimate the human capacity that is needed to integrate that technology. I see that every single day.” 

    Katanforoosh acknowledged that AI has a lot of hype right now, and some people have been benefiting in the investment market. Most of the beneficiaries, however, may be at large chip-making and technology giants, rather than AI-powered startups and their early investors.  

    “Companies that get a massive valuation just for putting AI in their mission statement but fail to deliver could still go to zero,” Samuel Hammond, chief economist at the Foundation for American Innovation, told the Los Angeles Times. “But most of the stock market’s growth is being driven by the large-cap tech stocks like Nvidia and Google.” 

    Today, the internet is a crucial part of our personal and business lives, but the pile of investment behind its future was at times misguided. As with the internet then and generative AI now, it is common to perceive an emerging technology as capable of changing the world; following through on that conviction with a successful investment strategy is a different animal.  

    Observers note that government investment in data centers around the world serves to mitigate the financial risks of the infrastructure build-out now underway to advance AI.   

    “The question is more about specific numbers, did we go a little bit too high? Now, there’s a correction. In my view, that’s what’s happening,” Bala said. “A short term correction.” 

    The nature of a bubble, whether it is around tulips, businesses with prominent web domains or AI tech company stocks, is that people buy into their financial future, literally, and get burned when the bubble bursts.  

    “Jumping on a bandwagon is predicated on this idea that there is going to be some returns,” Bala continued. “If those returns don’t pan out, that’s when there is a collapse,” like in a housing bubble, “when prices are going up, people keep investing more and more in housing, and the only way that is sustainable is if the house prices keep going up.” 

    If consumer and enterprise demand for emerging AI technology does not rise, a lot of people are going to lose a lot of money. But we’re “still in the very early innings,” Bala cautions. As with the internet, infrastructure investments may go unused for a time, but eventually the capacity is filled.  

    Right now, adoption into workflows and wide-scale reshaping of work or consumer processes has yet to occur. But perhaps it is on the horizon, just in a timeline longer than expected. 

    “People are very slow to change,” Katanforoosh said. “We’ve seen that in prior cycles of technology.” 
