ReportWire

Tag: OpenAI Inc

  • Alibaba’s cloud business revenue soars 34% driven by AI boom

    HONG KONG (AP) — China’s Alibaba Group posted a 34% jump in revenue from its cloud business in its most recent quarter, buoyed by the boom in artificial intelligence.

    But overall revenue at the Chinese tech group for the July-September quarter increased by just 5% year-on-year to 247.8 billion yuan ($35 billion), and profit fell 52% from last year, as a fierce price war in China’s e-commerce landscape — including in the food delivery segment — ate into short-term profitability. JD.com, its e-commerce rival, reported a 55% net profit drop in the same quarter.

    Alibaba started out in e-commerce and later turned its focus to cloud and AI technologies. Earlier this year, it pledged to invest at least 380 billion yuan ($53 billion) in three years in advancing its cloud computing and AI infrastructure.

    CEO Eddie Wu said in prepared remarks Tuesday that the group’s “significant” investments in AI had helped its revenue growth. The 34% cloud revenue growth was faster than the 26% increase in the April-June quarter.

    The company added that demand for AI was “accelerating” and that its “conviction in future AI demand growth is strong.” Alibaba also said Tuesday that it will probably end up investing more than the planned 380 billion yuan in AI to meet surging demand.

    On Monday, Alibaba announced that its upgraded AI chatbot Qwen — which aims to rival OpenAI’s ChatGPT — recorded 10 million downloads in the first week after its public launch.

    The company’s Hong Kong shares gained 2% Tuesday, and its shares rose 2.4% just before the opening bell on the New York Stock Exchange. Shares have gained more than 90% so far this year, fueled by optimism over its progress in AI.

    Chinese companies have been gaining ground in AI since tech startup DeepSeek upended the industry, raising doubts over the dominance of its U.S. rivals in the sector.

    Recent earnings reports by other Chinese tech giants have been mixed.

    Tencent, which rivals Alibaba in AI, this month reported a strong 15% year-on-year gain in its revenue for the July-September quarter. But Baidu, which also competes with Alibaba in AI development, recorded a 7% drop in revenue in the same quarter compared to last year.

    Concerns among investors and analysts over an overblown AI bubble have also been growing, although strong earnings at Nvidia last week slightly eased worries.


  • OpenAI and Taiwan’s Foxconn to partner in AI hardware design and manufacturing in the US

    TAIPEI, Taiwan (AP) — OpenAI and Taiwan electronics giant Foxconn have agreed to a partnership to design and manufacture key equipment for artificial intelligence data centers in the U.S. as part of ambitious plans to fortify American AI infrastructure.

    Foxconn, which makes AI servers for Nvidia and assembles Apple products including the iPhone, will be co-designing and developing AI data center racks with OpenAI under the agreement, the companies said in separate statements on Thursday and Friday.

    The products Foxconn will manufacture in its U.S. facilities include cabling, networking and power systems for AI data centers, the companies said. OpenAI will have “early access” to evaluate and potentially to purchase them.

    Foxconn has factories in the U.S., including in Wisconsin, Ohio and Texas. The initial agreement does not include financial obligations or purchase commitments, the statements said.

    The Taiwan contract manufacturer, formally known as Hon Hai Precision Industry Co., has been moving to diversify its business, developing electric vehicles and acquiring other electronics companies to build out its product offerings.

    A sleek Model A EV made by the group’s automaking affiliate Foxtron was on display at Friday’s event.

    “This year, Model A. ‘A’ for affordable,” said Jun Seki, chief strategy officer for Foxconn’s EV business.

    The tie-up with OpenAI can also help Taiwan, a self-governed island claimed by China, to build up its own computing resources, said Alexis Bjorlin, an Nvidia vice president.

    “This allows Taiwan’s domain knowledge and key technology data to remain local and ensure data security,” she said.

    “This partnership is a step toward ensuring the core technologies of the AI era are built here,” Sam Altman, CEO of San Francisco-based OpenAI, said in the statement. “We believe this work will strengthen U.S. leadership and help ensure the benefits of AI are widely shared.”

    OpenAI has committed $1.4 trillion to building AI infrastructure. It recently entered into multibillion-dollar partnerships with Nvidia and AMD to expand the extensive computing power needed to support its AI models and services. It is also partnering with U.S. chipmaker Broadcom in designing and making its own AI chips.

    But its massive spending plans have worried investors, raising questions over its ability to recoup its investments and remain profitable. Altman said this month that OpenAI, a startup founded in 2015 and maker of ChatGPT, is expected to reach more than $20 billion in annualized revenue this year, growing to “hundreds of billions by 2030.”
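
    For scale, the growth Altman describes implies a steep compound rate. Below is a minimal arithmetic sketch in Python; the $200 billion figure for 2030 is purely an assumed stand-in for the vague “hundreds of billions,” and only the $20 billion starting point comes from Altman’s remarks.

        # Implied compound annual growth rate (CAGR) from ~$20B in annualized
        # revenue in 2025 to a hypothetical $200B in 2030. The 2030 number is
        # an assumption, not a figure from the article.
        start_revenue = 20e9      # dollars, per Altman's remarks this year
        end_revenue = 200e9       # hypothetical "hundreds of billions"
        years = 5

        cagr = (end_revenue / start_revenue) ** (1 / years) - 1
        print(f"Implied annual growth: {cagr:.1%}")   # roughly 58.5% per year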

    Foxconn’s Taiwan-listed share price has risen 25% so far this year, along with the surge in prices for many tech companies benefiting from the craze for AI.

    The Taiwan company’s net profit in the July-September quarter rose 17% from a year earlier to just over 57.6 billion new Taiwan dollars ($1.8 billion), with its cloud and networking business, which includes AI servers, contributing the largest share of revenue.

    “We believe the importance of the AI industry is increasing significantly,” Foxconn Chairman Young Liu said during the company’s earnings call this month.

    “I am very optimistic about the development of AI next year, and expect our cooperation with major clients and partners to become even closer,” said Liu.

    ___

    Chan reported from Hong Kong


  • Bubble fears ease but investors still waiting for AI to live up to its promise

    Fears about the artificial intelligence boom turning into an overblown bubble have diminished for now, thanks to a stellar earnings report from Nvidia that illustrated why its indispensable chips transformed it into the world’s most valuable company.

    But that doesn’t mean the specter of an AI bubble won’t return in the months and years ahead as Big Tech gears up to spend trillions of dollars more on a technology the industry’s leaders believe will determine the winners and losers during the next wave of innovation.

    For now, at least, Nvidia has eased worries that the AI craze propelling the stock market and much of the economy for the past year is on the verge of a massive collapse.

    If anything, Nvidia’s quarterly report indicated that AI spending is picking up even more momentum. The highlights, released late Wednesday, included quarterly revenue of $57 billion, a 62% increase from the same time last year. That sales growth was an acceleration from the 56% increase in year-over-year revenue from the May-July quarter.

    What’s more, Nvidia forecast revenue of $65 billion for the current quarter covering November-January, which would be a 65% year-over-year increase.
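
    Those growth rates can be cross-checked against each other. A quick sketch, assuming only the revenue figures quoted above:

        # Back out the implied prior-year quarters from the reported and
        # forecast revenue and the year-over-year growth rates cited above.
        q3_revenue = 57e9                 # August-October, as reported
        q3_growth = 0.62                  # 62% year-over-year increase
        prior_q3 = q3_revenue / (1 + q3_growth)
        print(f"Implied year-ago Q3: ${prior_q3 / 1e9:.1f}B")   # ~$35.2B

        q4_forecast = 65e9                # November-January forecast
        q4_growth = 0.65                  # the 65% increase cited
        prior_q4 = q4_forecast / (1 + q4_growth)
        print(f"Implied year-ago Q4: ${prior_q4 / 1e9:.1f}B")   # ~$39.4B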

    Given Nvidia’s forecasts, “it is very hard to see how this stock does not keep moving higher from here,” according to analysts at UBS led by Timothy Arcuri. The UBS analysts also said the “AI infrastructure tide is still rising so fast that all boats will be lifted.”

    Nvidia’s numbers are viewed through a window that extends far beyond the Santa Clara, California, company’s headquarters because its products are needed by a wide range of companies — including Big Tech peers like Microsoft, Amazon, Alphabet and Meta Platforms — to build data centers that are becoming known as AI factories.

    “AI spending isn’t just holding up, it’s accelerating. That’s exactly what the market needed to see,” said Jake Behan, head of capital markets for investment firm Direxion.

    The numbers initially lifted Nvidia’s stock price by as much as 5% in Thursday’s trading, while other tech stocks tied to the AI spending frenzy also got a boost. But Nvidia’s shares and other tech stocks reversed course later in the session as investors turned to other concerns, such as the government’s latest jobs report and the future direction of interest rates.

    Even with a 3% drop in its stock price amid the broader market decline, Nvidia remains valued at $4.4 trillion, more than 10 times its valuation three years ago when OpenAI released its ChatGPT chatbot, triggering the biggest technological shift since Apple released the iPhone in 2007.

    Nvidia’s rapid rise has turned its CEO Jensen Huang into the chief evangelist for the AI revolution, and he sought to use his bully pulpit during a late Wednesday conference call with industry analysts to make a case that the spending to make technology with humanlike intelligence is just beginning.

    “There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different,” Huang insisted while celebrating the “depth and breadth” of Nvidia’s growth.

    Huang is hardly a lone voice in the wilderness. A recent report from Gartner Inc. estimates that worldwide spending on AI will rise to more than $2 trillion next year, a 37% increase from the nearly $1.5 trillion that the research firm expects to be spent this year.

    But it remains to be seen whether all that money pouring into AI will actually produce the profits and productivity that proponents have been promising, leaving unanswered the question of whether the very real spending now underway will be worth it.

    The most recent survey of global fund managers by Bank of America showed a record percentage of investors saying companies are “overinvesting.”

    Big Tech is already so profitable that many of the most successful companies finance their spending sprees with their ongoing revenue streams and the cash hoards in their bank accounts. But some companies, such as Meta Platforms and Oracle, are relying more heavily on debt to fund their AI ambitions — a strategy that has raised enough alarms among investors that their stock prices have plunged more dramatically than their peers’ in recent weeks.

    Both Meta and Oracle have suffered more than 20% declines in their stock prices since late October.

    But other Big Tech powerhouses leading the way in AI remain just behind Nvidia and iPhone maker Apple in the rankings of the most valuable companies. Alphabet, Microsoft and Amazon boast market values currently ranging from $2.3 trillion to $3.6 trillion.

    “It is true that valuations are high and that there is some froth in the market, however, the spending on AI is real,” said Chris Zaccarelli, chief investment officer for money manager Northlight Asset Management. “Whether or not the spending turns out to be overdone won’t be known for many years.”

    AP Business Writer Stan Choe in New York contributed to this story.


  • Nvidia earnings clear lofty hurdle set by analysts amid fears about an AI bubble

    SAN FRANCISCO (AP) — Nvidia’s sales of the computing chips powering the artificial intelligence craze surged beyond the lofty bar set by stock market analysts in a performance that may ease recent jitters about a Big Tech boom turning into a bust that topples the world’s most valuable company.

    The results announced late Wednesday provided a pulse check on the frenzied spending on AI technology that has been fueling both the stock market and much of the overall economy since OpenAI released ChatGPT three years ago.

    Nvidia has been by far the biggest beneficiary of the run-up because its processors have become indispensable for building the AI factories that are needed to enable what’s supposed to be the most dramatic shift in technology since Apple released the iPhone in 2007.

    But in the past few weeks, there has been a rising tide of sentiment that the high expectations for AI may have become far too frothy, setting the stage for a jarring comedown that could be just as dramatic as the ascent that transformed Nvidia from a company worth less than $400 billion three years ago to one worth $4.5 trillion at the end of Wednesday’s trading.

    Nvidia’s report for its fiscal third quarter covering the August-October period elicited a sigh of relief among those fretting about a worst-case scenario and could help reverse the recent downturn in the stock market.

    “The market should belt out a heavy sigh, given the skittishness we have been experiencing,” said Sean O’Hara, president of the investment firm Pacer ETFs.

    The company’s stock price gained more than 5% in Wednesday’s extended trading after the numbers came out. If the shares trade similarly Thursday, it could result in a one-day gain of about $230 billion in stockholder wealth.

    Nvidia earned $31.9 billion, or $1.30 per share, a 65% increase from the same time last year, while revenue climbed 62% to $57 billion. Analysts polled by FactSet Research had forecast earnings of $1.26 per share on revenue of $54.9 billion. What’s more, the Santa Clara, California, company predicted its revenue for the current quarter covering November-January will come in at about $65 billion, nearly $3 billion above analysts’ projections, in an indication that demand for its AI chips remains feverish.
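
    The size of the beat, and the one-day move described above, are simple arithmetic. A hedged sketch using only the figures in this article:

        # How far results exceeded the FactSet consensus, plus the implied
        # net margin and the rough value of a 5% one-day move in the stock.
        reported_eps, forecast_eps = 1.30, 1.26       # dollars per share
        reported_rev, forecast_rev = 57e9, 54.9e9     # revenue, dollars
        net_income, market_value = 31.9e9, 4.5e12     # from the article

        print(f"EPS beat: {(reported_eps / forecast_eps - 1):.1%}")     # ~3.2%
        print(f"Revenue beat: {(reported_rev / forecast_rev - 1):.1%}") # ~3.8%
        print(f"Net margin: {net_income / reported_rev:.0%}")           # ~56%
        print(f"5% one-day move: ${0.05 * market_value / 1e9:.0f}B")    # ~$225B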

    The incoming orders for Nvidia’s top-of-the-line Blackwell chip are “off the charts,” Nvidia CEO Jensen Huang said in a prepared statement that described the current market conditions as “a virtuous cycle.” In a conference call, Nvidia Chief Financial Officer Colette Kress said that by the end of next year the company will have sold about $500 billion in chips designed for AI factories within a 24-month span. Kress also predicted that trillions of dollars more will be spent by the end of the 2020s.

    In a conference call preamble that has become like a State of the AI Market address, Huang seized the moment to push back against the skeptics who doubt his thesis that technology is at a tipping point that will transform the world. “There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different,” Huang insisted while celebrating the “depth and breadth” of Nvidia’s growth.

    The upbeat results, optimistic commentary and ensuing reaction reflect the pivotal role that Nvidia is playing in the future direction of the economy — a position that Huang has leveraged to forge close ties with President Donald Trump, even as the White House wages a trade war that has inhibited the company’s ability to sell its chips in China’s fertile market.

    Trump is increasingly counting on the tech sector and the development of artificial intelligence to deliver on his economic agenda. For all of Trump’s claims that his tariffs are generating new investments, much of that foreign capital is going to data centers for AI’s computing demands or the power facilities needed to run those data centers.

    “Saying this is the most important stock in the world is an understatement,” Jay Woods, chief market strategist of investment bank Freedom Capital Markets, said of Nvidia.

    The boom has been a boon for more than just Nvidia, which became the first company to eclipse a market value of $5 trillion a few weeks ago, before the recent bubble worries resulted in a more than 10% decline. As OpenAI and other Big Tech powerhouses snap up Nvidia’s chips to build their AI factories and invest in other services connected to the technology, their fortunes have also been soaring. Apple, Microsoft, Google parent Alphabet Inc. and Amazon all boast market values in the $2 trillion to $4 trillion range.


  • Larry Summers takes leave from teaching at Harvard after release of Epstein emails

    Former U.S. Treasury Secretary Larry Summers abruptly went on leave Wednesday from teaching at Harvard University, where he once served as president, over recently released emails showing he maintained a friendly relationship with Jeffrey Epstein, Summers’ spokesperson said.

    Summers had canceled his public commitments amid the fallout of the emails being made public and earlier Wednesday severed ties with OpenAI, the maker of ChatGPT. Harvard had reopened an investigation into connections between him and Epstein, but Summers had said he would continue teaching economics classes at the school.

    That changed Wednesday evening with the news that he will step away from teaching classes as well as his position as director of the Mossavar-Rahmani Center for Business and Government with the Harvard Kennedy School.

    “Mr. Summers has decided it’s in the best interest of the Center for him to go on leave from his role as Director as Harvard undertakes its review,” Summers spokesperson Steven Goldberg said, adding that his co-teachers would finish the classes.

    Summers has not been scheduled to teach next semester, according to Goldberg.

    A Harvard spokesperson confirmed to The Associated Press that Summers had let the university know about his decision. Summers’ decision to go on leave was first reported by The Harvard Crimson.

    Harvard did not mention Summers by name in its decision to restart an investigation, but the move follows the release of emails showing that he was friendly with Epstein long after the financier pleaded guilty to soliciting prostitution from an underage girl in 2008.

    By Wednesday, the once highly regarded economics expert had been facing increased scrutiny over his decision to stay in the teaching role. Some students, in apparent shock, even filmed his appearance before a class of undergraduates on Tuesday as he stressed that he thought it was important to continue teaching.

    Massachusetts Sen. Elizabeth Warren, a Democrat, said in a social media post on Wednesday night that Summers “cozied up to the rich and powerful — including a convicted sex offender. He cannot be trusted in positions of influence.”

    Messages appear to seek advice about romantic relationship

    The emails include messages in which Summers appeared to be getting advice from Epstein about pursuing a romantic relationship with someone who viewed him as an “economic mentor.”

    “im a pretty good wing man , no?” Epstein wrote on Nov. 30, 2018.

    The next day, Summers told Epstein he had texted the woman, telling her he “had something brief to say to her.”

    “Am I thanking her or being sorry re my being married. I think the former,” he wrote.

    Summers’ wife, Elisa New, also emailed Epstein multiple times, including a 2015 message in which she thanked him for arranging financial support for a poetry project she directs. The gift he arranged “changed everything for me,” she wrote.

    “It really means a lot to me, all financial help aside, Jeffrey, that you are rooting for me and thinking about me,” she wrote.

    New, an English professor emerita at Harvard, did not respond to an email seeking comment Wednesday.

    An earlier review completed in 2020 found that Epstein visited Harvard’s campus more than 40 times after his 2008 sex-crimes conviction and was given his own office and unfettered access to a research center he helped establish. The professor who provided the office was later barred from starting new research or advising students for at least two years.

    Summers appears before Harvard class

    On Tuesday, Summers appeared before his class at Harvard, where he teaches “The Political Economy of Globalization” to undergraduates with Robert Lawrence, a professor with the Harvard Kennedy School.

    “Some of you will have seen my statement of regret expressing my shame with respect to what I did in communication with Mr. Epstein and that I’ve said that I’m going to step back from public activities for a while. But I think it’s very important to fulfill my teaching obligations,” he said.

    Summers’ remarks were captured on video by several students, but no one appeared to publicly respond to his comments.

    Epstein, who authorities said died by suicide in 2019, was a convicted sex offender infamous for his connections to wealthy and powerful people, making him a fixture of outrage and conspiracy theories about wrongdoing among American elites.

    Summers served as treasury secretary from 1999 to 2001 under President Bill Clinton. He was Harvard’s president for five years from 2001 to 2006. When asked about the emails last week, Summers issued a statement saying he has “great regrets in my life” and that his association with Epstein was a “major error in judgement.”

    Other organizations that confirmed the end of their affiliations with Summers included the Center for American Progress, the Center for Global Development and the Budget Lab at Yale University. Bloomberg TV said Summers’ withdrawal from public commitments included his role as a paid contributor, and the New York Times said it will not renew his contract as a contributing opinion writer.

    ___

    This story has been corrected to show that Summers is a former treasury secretary, not treasurer; to show that Summers’ statement about stepping back from public commitments was issued late Monday, not Tuesday; and to show that the school is known as the Harvard Kennedy School, not Kennedy Harvard School.

    ___

    Associated Press journalist Hallie Golden contributed to this report.


  • Google unveils Gemini’s next generation, aiming to turn its search engine into a ‘thought partner’

    SAN FRANCISCO (AP) — Google is unleashing its Gemini 3 artificial intelligence model on its dominant search engine and other popular online services in the high-stakes battle to create technology that people can trust to enlighten them and manage tedious tasks.

    The next-generation model unveiled Tuesday comes nearly two years after Google took the wraps off its first iteration of the technology. Google designed Gemini in response to a competitive threat posed by OpenAI’s ChatGPT that came out in late 2022, triggering the biggest technological shift since Apple released the iPhone in 2007.

    Google’s latest AI features initially will be rolled out to Gemini Pro and Ultra subscribers in the United States before coming to a wider, global audience. Gemini 3’s advances include a new AI “thinking” feature within Google’s search engine that company executives believe will become an indispensable tool that will help make people more productive and creative.

    “We like to think this will help anyone bring any idea to life,” Koray Kavukcuoglu, a Google executive overseeing Gemini’s technology, told reporters.

    As AI models have become increasingly sophisticated, the advances have raised worries that the technology is more prone to behave in ways that jumble people’s feelings and thoughts while feeding them misleading information and fawning flattery. In some of the most egregious interactions, AI chatbots have faced accusations of becoming suicide coaches for emotionally vulnerable teenagers.

    The various problems have spurred a flurry of negligence lawsuits against the makers of AI chatbots, although none have targeted Gemini yet.

    Google executives believe they have built in guardrails that will prevent Gemini 3 from hallucinating or being deployed for sinister purposes such as hacking into websites and computing devices.

    Gemini 3’s responses are designed to be “smart, concise and direct, trading cliche and flattery for insight — telling you what you need to hear, not just what you want to hear. It acts as a true thought partner,” Kavukcuoglu and Demis Hassabis, CEO of Google’s DeepMind division, wrote in a blog post.

    Besides providing consumers with more AI tools, Gemini 3 is also likely to be scrutinized as a barometer that investors may use to get a better sense about whether the massive torrent of spending on the technology will pay off.

    After starting the year expecting to spend $75 billion, Google’s corporate parent Alphabet recently raised its capital expenditure budget to a range of $91 billion to $93 billion, with most of the money earmarked for AI. Other Big Tech powerhouses such as Microsoft, Amazon and Facebook parent Meta Platforms are spending nearly as much — or even more — on their AI initiatives this year.

    Investors so far have been mostly enthusiastic about the AI spending and the breakthroughs it has spawned, helping propel the values of Alphabet and its peers to new highs. Alphabet’s market value is now hovering around $3.4 trillion, more than double what it was when the initial version of Gemini came out in late 2023. Alphabet’s shares edged up slightly Tuesday after the Gemini 3 news came out.

    But the sky-high values also have amplified fears of a potential investment bubble that will eventually burst and drag down the entire stock market.

    For now, AI technology is speeding ahead.

    OpenAI released its fifth generation of the AI technology powering ChatGPT in August, around the same time the next version of Claude came out from Anthropic.

    Like Gemini, both ChatGPT and Claude are capable of responding rapidly to conversational questions involving complex topics — a skill that has turned them into the equivalent of “answer engines” that could lessen people’s dependence on Google search.

    Google quickly countered that threat by implanting Gemini’s technology into its search engine to begin creating detailed summaries called “AI Overviews” in 2023, and then introducing an even more conversational search tool called “AI mode” earlier this year.

    Those innovations have prompted Google to de-emphasize the rankings of relevant websites in its search results — a shift that online publishers have complained is diminishing the visitor traffic that helps them finance their operations through digital ad sales.

    The changes have been mostly successful for Google so far, with AI Overviews now being used by more than 2 billion people every month, according to the company. The Gemini app, by comparison, has about 650 million monthly users.

    With the release of Gemini 3, the AI mode in Google’s search engine is also adding a new feature that will allow users to click on a “thinking” option in a tab that company executives promise will deliver even more in-depth answers than the tool has produced so far. Although the “thinking” choice in the search engine’s AI mode initially will be offered only to Gemini Pro and Ultra subscribers, the Mountain View, California, company plans to eventually make it available to all comers.


  • Microsoft partners with Anthropic and Nvidia in cloud infrastructure deal

    Microsoft said Tuesday it is partnering with artificial intelligence company Anthropic and chipmaker Nvidia as part of an AI infrastructure deal that moves the software giant further away from its longtime alliance with OpenAI.

    Anthropic, maker of the chatbot Claude that competes with OpenAI’s ChatGPT, said it is committed to buying $30 billion in computing capacity from Microsoft’s Azure cloud computing platform.

    As part of the partnership, Nvidia will also invest up to $10 billion in Anthropic, and Microsoft will invest up to $5 billion in the San Francisco-based startup.

    The joint announcements by CEOs Dario Amodei of Anthropic, Satya Nadella of Microsoft, and Jensen Huang of Nvidia came just ahead of the opening of Microsoft’s annual Ignite developer conference.

    “This is all about deepening our commitment to bringing the best infrastructure, model choice and applications to our customers,” Nadella said on a video call with the other two executives, adding that it builds on the “critical” partnership Microsoft still has with OpenAI.

    Microsoft was, until earlier this year, the exclusive cloud provider for OpenAI and made the technology behind ChatGPT the foundation for its own AI assistant, Copilot. But the two companies moved further apart, and their business agreements were amended as OpenAI increasingly sought to secure its own cloud capacity through big deals with Oracle, SoftBank and other data center developers and chipmakers.

    Asked in September if OpenAI could do more with those new computing partnerships than it could with Microsoft, OpenAI CEO Sam Altman told The Associated Press his company was “severely limited for the value we can offer to people.”

    At the same time, Microsoft holds a roughly 27% stake in the new for-profit corporation that OpenAI, founded as a nonprofit, is forming to advance its commercial ambitions as the world’s most valuable startup.

    Anthropic, founded by ex-OpenAI leaders in 2021, said Claude will now be the “only frontier model” available to customers of the three biggest cloud computing providers: Amazon, which remains Anthropic’s primary cloud provider, and Google and Microsoft.

    AI products like Claude, ChatGPT, Copilot and Google’s Gemini are reshaping how many people work but take huge amounts of energy and computing power to build and operate. Neither OpenAI nor Anthropic has yet reported turning a profit, amplifying concerns about an AI bubble if their products don’t meet investors’ high expectations and justify the expenditures. As part of the deal, Nvidia said Anthropic will have access to up to a gigawatt of capacity from its specialized AI chips.

    Huang said he’s “admired the work of Anthropic and Dario for a long time, and this is the first time we are going to deeply partner with Anthropic to accelerate Claude.”

    At Microsoft’s Ignite conference, a showcase of its latest AI technology that opened Tuesday in San Francisco, Anthropic’s chief product officer Mike Krieger highlighted the budding partnership during an on-stage appearance.

    “From the beginning, it has seemed there has been a lot of shared DNA between our companies,” said Krieger, who was also the co-founder of Instagram.

    ——

    AP Technology Writer Michael Liedtke in San Francisco contributed to this report.


  • Anthropic warns of AI-driven hacking campaign linked to China

    WASHINGTON (AP) — A team of researchers has uncovered what they say is the first reported use of artificial intelligence to direct a hacking campaign in a largely automated fashion.

    The AI company Anthropic said this week that it disrupted a cyber operation that its researchers linked to the Chinese government. The operation involved the use of an artificial intelligence system to direct the hacking campaigns, which researchers called a disturbing development that could greatly expand the reach of AI-equipped hackers.

    While concerns about the use of AI to drive cyber operations are not new, what is concerning about the new operation is the degree to which AI was able to automate some of the work, the researchers said.

    “While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale,” they wrote in their report.

    The operation targeted tech companies, financial institutions, chemical companies and government agencies. The researchers wrote that the hackers attacked “roughly thirty global targets and succeeded in a small number of cases.” Anthropic detected the operation in September and took steps to shut it down and notify the affected parties.

    Anthropic noted that while AI systems are increasingly being used in a variety of settings for work and leisure, they can also be weaponized by hacking groups working for foreign adversaries. The San Francisco-based company, maker of the generative AI chatbot Claude, is one of many tech developers pitching AI “agents” that go beyond a chatbot’s capability to access computer tools and take actions on a person’s behalf.

    “Agents are valuable for everyday work and productivity — but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks,” the researchers concluded. “These attacks are likely to only grow in their effectiveness.”

    A spokesperson for China’s embassy in Washington did not immediately return a message seeking comment on the report.

    Microsoft warned earlier this year that foreign adversaries were increasingly embracing AI to make their cyber campaigns more efficient and less labor-intensive. The head of OpenAI’s safety panel, which has the authority to halt the ChatGPT maker’s AI development, recently told The Associated Press he’s watching out for new AI systems that give malicious hackers “much higher capabilities.”

    America’s adversaries, as well as criminal gangs and hacking companies, have exploited AI’s potential, using it to automate and improve cyberattacks, to spread inflammatory disinformation and to penetrate sensitive systems. AI can translate poorly worded phishing emails into fluent English, for example, as well as generate digital clones of senior government officials.

    Anthropic said the hackers were able to manipulate Claude using “jailbreaking” techniques, which involve tricking an AI system into bypassing its guardrails against harmful behavior, in this case by claiming they were employees of a legitimate cybersecurity firm.

    “This points to a big challenge with AI models, and it’s not limited to Claude, which is that the models have to be able to distinguish between what’s actually going on with the ethics of a situation and the kinds of role-play scenarios that hackers and others may want to cook up,” said John Scott-Railton, senior researcher at Citizen Lab.

    The use of AI to automate or direct cyberattacks will also appeal to smaller hacking groups and lone wolf hackers, who could use AI to expand the scale of their attacks, according to Adam Arellano, field CTO at Harness, a tech company that uses AI to help customers automate software development.

    “The speed and automation provided by the AI is what is a bit scary,” Arellano said. “Instead of a human with well-honed skills attempting to hack into hardened systems, the AI is speeding those processes and more consistently getting past obstacles.”

    AI programs will also play an increasingly important role in defending against these kinds of attacks, Arellano said, demonstrating how AI and the automation it allows will benefit both sides.

    Reaction to Anthropic’s disclosure was mixed, with some seeing it as a marketing ploy for Anthropic’s approach to cybersecurity and others welcoming it as a wake-up call.

    “This is going to destroy us – sooner than we think – if we don’t make AI regulation a national priority tomorrow,” wrote U.S. Sen. Chris Murphy, a Connecticut Democrat, on social media.

    That led to criticism from Meta’s chief AI scientist Yann LeCun, an advocate of the Facebook parent company’s open-source AI systems that, unlike Anthropic’s, make their key components publicly accessible in a way that some AI safety advocates deem too risky.

    “You’re being played by people who want regulatory capture,” LeCun wrote in a reply to Murphy. “They are scaring everyone with dubious studies so that open source models are regulated out of existence.”

    __

    O’Brien reported from Providence, Rhode Island.


  • Anthropic, Microsoft announce new AI data center projects as industry’s construction push continues

    Artificial intelligence company Anthropic announced a $50 billion investment in computing infrastructure on Wednesday that will include new data centers in Texas and New York.

    Microsoft also on Wednesday announced a new data center under construction in Atlanta, Georgia, describing it as connected to another in Wisconsin to form a “massive supercomputer” running on hundreds of thousands of Nvidia chips to power AI technology.

    The latest deals show that the tech industry is moving forward on huge spending to build energy-hungry AI infrastructure, despite lingering financial concerns about a bubble, environmental considerations and the political effects of fast-rising electricity bills in the communities where the massive buildings are constructed.

    Anthropic, maker of the chatbot Claude, said it is working with London-based Fluidstack to build the new computing facilities to power its AI systems. It didn’t disclose their exact locations or what source of electricity they will need.

    Another company, cryptocurrency mining data center developer TeraWulf, has previously revealed it was working with Fluidstack on Google-backed data center projects in Texas and New York, on the shore of Lake Ontario. TeraWulf declined comment Wednesday.

    A report last month from TD Cowen said that the leading cloud computing providers leased a “staggering” amount of U.S. data center capacity in the third fiscal quarter of this year, amounting to more than 7.4 gigawatts of power capacity, more than in all of last year combined.
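
    Gigawatts measure power capacity rather than energy; energy is capacity multiplied by time. As a rough, assumption-laden illustration of the scale (the full-utilization figure is ours, not TD Cowen’s):

        # What 7.4 GW of leased capacity would consume if it ran flat out
        # for a full year. Real-world utilization would be lower.
        capacity_gw = 7.4
        hours_per_year = 24 * 365                          # 8,760 hours
        energy_twh = capacity_gw * hours_per_year / 1000   # GWh -> TWh
        print(f"~{energy_twh:.0f} TWh per year at full load")   # ~65 TWh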

    Oracle was securing the most capacity during that time, much of it supporting AI workloads for Anthropic’s chief rival OpenAI, maker of ChatGPT. Google was second and Fluidstack came in third, ahead of Meta, Amazon, CoreWeave and Microsoft.

    Anthropic said its projects will create about 800 permanent jobs and 2,400 construction jobs. It said in a statement that the “scale of this investment is necessary to meet the growing demand for Claude from hundreds of thousands of businesses while keeping our research at the frontier.”

    Microsoft has branded its two-story Atlanta data center as Fairwater 2 and said it will be connected across a “high-speed network” with the original Fairwater complex being built south of Milwaukee, Wisconsin. The company said the facility’s densely packed Nvidia chips will help power Microsoft’s own AI technology, along with OpenAI’s and other AI developers.

    Microsoft was, until earlier this year, OpenAI’s exclusive cloud computing provider before the two companies amended their partnership. OpenAI has since announced more than $1 trillion in infrastructure obligations, much of it tied to its Stargate project with partners Oracle and SoftBank. Microsoft, in turn, spent nearly $35 billion in the July-September quarter on capital expenditures to support its AI and cloud demand, nearly half of that on computer chips.

    Anthropic has made its own computing partnerships with Amazon and, more recently, Google.

    The tech industry’s big spending on computing infrastructure for AI startups that aren’t yet profitable has fueled concerns about an AI investment bubble.

    Investors have closely watched a series of circular deals over recent months between AI developers and the companies building the costly chips and data centers needed to power their AI products. Anthropic said it will continue to “prioritize cost-effective, capital-efficient approaches” to scaling up its business.

    OpenAI had to backtrack last week after its chief financial officer, Sarah Friar, made comments at a tech conference suggesting the U.S. government could help in financing chips needed for data centers. The White House’s top AI official, David Sacks, responded on social media platform X that there “will be no federal bailout for AI” and if one of the leading companies fails, “others will take its place,” though he also added he didn’t think “anyone was actually asking for a bailout.”

    OpenAI CEO Sam Altman later confirmed in a lengthy statement that “we do not have or want government guarantees” for the company’s data centers and also sought to address concerns about whether it will be able to pay for all the infrastructure it has signed up for.

    “We are looking at commitments of about $1.4 trillion over the next 8 years,” Altman wrote. “Obviously this requires continued revenue growth, and each doubling is a lot of work! But we are feeling good about our prospects there.”
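
    Altman’s “each doubling” remark can be made concrete with rough arithmetic. The sketch below assumes the roughly $20 billion in annualized revenue he has cited elsewhere and spreads the $1.4 trillion evenly over eight years; both simplifications are ours:

        import math

        # Doublings needed for ~$20B of annual revenue to cover the average
        # annual commitment of $1.4T spread over 8 years. Illustrative only.
        revenue = 20e9
        avg_commitment = 1.4e12 / 8                  # $175B per year
        doublings = math.log2(avg_commitment / revenue)
        print(f"Doublings needed: {doublings:.1f}")  # ~3.1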


  • OpenAI faces 7 lawsuits claiming ChatGPT drove people to suicide, delusions

    OpenAI is facing seven lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues.

    The lawsuits filed Thursday in California state courts allege wrongful death, assisted suicide, involuntary manslaughter and negligence. Filed on behalf of six adults and one teenager by the Social Media Victims Law Center and Tech Justice Law Project, the lawsuits claim that OpenAI knowingly released GPT-4o prematurely, despite internal warnings that it was dangerously sycophantic and psychologically manipulative. Four of the victims died by suicide.

    ___

    EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

    ___

    The teenager, 17-year-old Amaurie Lacey, began using ChatGPT for help, according to the lawsuit filed in San Francisco Superior Court. But instead of helping, “the defective and inherently dangerous ChatGPT product caused addiction, depression, and, eventually, counseled him on the most effective way to tie a noose and how long he would be able to ‘live without breathing.’”

    “Amaurie’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI and Samuel Altman’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the lawsuit says.

    OpenAI called the situations “incredibly heartbreaking” and said it was reviewing the court filings to understand the details.

    Another lawsuit, filed by Allan Brooks, a 48-year-old in Ontario, Canada, claims that for more than two years ChatGPT worked as a “resource tool” for Brooks. Then, without warning, it changed, preying on his vulnerabilities and “manipulating, and inducing him to experience delusions. As a result, Allan, who had no prior mental health illness, was pulled into a mental health crisis that resulted in devastating financial, reputational, and emotional harm.”

    “These lawsuits are about accountability for a product that was designed to blur the line between tool and companion all in the name of increasing user engagement and market share,” said Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, in a statement.

    OpenAI, he added, “designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them.” By rushing its product to market without adequate safeguards in order to dominate the market and boost engagement, he said, OpenAI compromised safety and prioritized “emotional manipulation over ethical design.”

    In August, parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

    “The lawsuits filed against OpenAI reveal what happens when tech companies rush products to market without proper safeguards for young people,” said Daniel Weiss, chief advocacy officer at Common Sense Media, which was not part of the complaints. “These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe.”


  • Phony AI-generated videos of Hurricane Melissa flood social media sites

    One viral video shows what appears to be four sharks swimming in a Jamaican hotel’s pool as floodwaters allegedly brought on by Hurricane Melissa swamp the area. Another purportedly depicts Jamaica’s Kingston airport completely ravaged by the storm. But neither of these events happened; the clips are AI-generated misinformation that circulated on social media as the storm churned across the Caribbean this week.

    These videos and others have racked up millions of views on social media platforms, including X, TikTok and Instagram.

    Some of the clips appear to be spliced together or based on footage of old disasters. Others appear to be created entirely by AI video generators.

    “I am in so many WhatsApp groups and I see all of these videos coming. Many of them are fake,” said Jamaica’s Education Minister Dana Morris Dixon on Monday. “And so we urge you to please listen to the official channels.”

    Although it’s common for hoax photos, videos and misinformation to surface during natural disasters, they’re usually debunked quickly. But videos generated by new artificial intelligence tools have taken the problem to a new level by making it easy to create and spread realistic clips.

    In this case, the content has been showing up in social media feeds alongside genuine footage shot by local residents and news organizations, sowing confusion among social media users.

    Here are a few steps you can take to reduce your chances of getting fooled.

    Check for watermarks

    Look for a watermark logo indicating that the video was generated by Sora, a text-to-video tool launched by ChatGPT-maker OpenAI, or other AI video generators. These will usually appear in one of the corners of a video or photo.

    It is quite easy to remove these logos using third-party tools, so you can also check for blurs, pixelation or discoloration where a watermark should be.

    Take a closer look

    Look more closely at videos for unclear details. While the sharks-in-pool video appears realistic at first glance, it looks less believable upon closer examination because one of the sharks has a strange shape.

    You might see objects that blend together, or details such as lettering on a sign that are garbled, which are telltale signs of AI-generated imagery. Branding is also something to look out for as many platforms are cautious about reproducing specific company logos.

    Experts say it’s going to get increasingly harder to tell the difference between reality and deepfakes as the technology improves.

    Experts noted that Melissa is the first big natural disaster since OpenAI launched the latest version of its video generation tool Sora last month.

    “Now, with the rise of easily accessible and powerful tools like Sora, it has become even easier for bad actors to create and distribute highly convincing synthetic videos,” said Sofia Rubinson, a senior editor at NewsGuard, which analyzes online misinformation.

    “In the past, people could often identify fakes through telltale signs like unnatural motion, distorted text, or missing fingers. But as these systems improve, many of those flaws are disappearing, making it increasingly difficult for the average viewer to distinguish AI-generated content from authentic footage.”

    Why create deepfakes around a crisis?

    AI expert Henry Ajder said most of the hurricane deepfakes he’s seen aren’t inherently political. He suspects it’s “much closer to more traditional kind of click-based content, which is to try and get engagement, to try and get clicks.”

    On X, users can get paid based on the amount of engagement their posts get. YouTubers can earn money from ads.

    A video that racks up millions of views could earn the creator a few thousand dollars, Ajder said, not bad for the amount of effort needed.

    Social media accounts also use videos to expand their follower base in order to promote projects, products or services, Ajder said.

    So check who’s posting the video. If the account has a track record of clickbait-style content, be skeptical.

    But keep in mind that the people behind deepfake videos aren’t always trying to hide.

    “Some creators are just trying to do interesting things using AI that they think are going to get people’s attention,” he said.

    So who is behind the account?

    While it’s unclear who exactly created the pool shark video, one version found on Instagram carries the watermark for a TikTok account, Yulian_Studios. That account’s TikTok profile describes itself, in Spanish, as a “Content creator with AI visual effects in the Dominican Republic.”

    The shark video can’t be found on the account’s page, but it does have another AI-generated clip of an obese man clinging to a palm tree as hurricane winds blow in Jamaica.

    Trust your gut

    Context matters. Take a beat to consider whether what you’re seeing is plausible. The Poynter journalism website advises that if you see a situation that seems “exaggerated, unrealistic or not in character,” consider that it could be a deepfake.

    That includes the audio. AI videos used to come with synthetic voice-overs that had an unusual cadence or tone, but newer tools can create synchronized sound that sounds realistic.

    And if you found it on X, make sure to check whether there’s a community note attached, which is the platform’s user-powered fact-checking tool.

    One version of the shark pool video on X comes with a community note that says: “This video footage and the voice used were both created by artificial intelligence, it is not real footage of hurricane Melissa in Jamaica.”

    Go to an official source

    Don’t just rely on random strangers on the internet for information. The Jamaican government has been posting storm updates and so has the National Hurricane Center.


  • Microsoft $9.7 billion deal with IREN will give it access to Nvidia chips

    Microsoft has entered into a $9.7 billion cloud services contract with artificial intelligence cloud service provider IREN that will give it access to some of Nvidia’s chips.

    The five-year deal, which includes a 20% prepayment, will help Microsoft as it looks to keep up with AI demand. Last week the software maker reported its quarterly sales grew 18% to $77.7 billion, beating Wall Street expectations while also surprising some investors with the huge amounts of money it is spending to expand its cloud computing infrastructure and address the growing need for AI tools.

    Microsoft spent nearly $35 billion in the July-September quarter on capital expenditures to support AI and cloud demand, nearly half of that on computer chips and much of the rest related to data center real estate.

    “IREN’s expertise in building and operating a fully integrated AI cloud — from data centers to GPU stack — combined with their secured power capacity makes them a strategic partner,” Jonathan Tinter, president of business development and ventures at Microsoft, said in a statement. “This collaboration unlocks new growth opportunities for both companies and the customers we serve.”

    Microsoft also announced a new deal with OpenAI last week that pushed the Redmond, Washington, company to a $4 trillion valuation for the second time this year. The agreement gives the software giant a roughly 27% stake in OpenAI’s new for-profit corporation but changes some of the details of their close partnership. Microsoft’s $135 billion stake will be just ahead of the OpenAI nonprofit’s $130 billion stake in the for-profit company.

    IREN also said Monday that it signed a deal with Dell Technologies to buy the chips and ancillary equipment for about $5.8 billion. The Australian company anticipates the chips being deployed in phases through next year at its Childress, Texas, campus.

    Shares of IREN jumped 22% before the opening bell in the U.S., while shares of Microsoft rose slightly.


  • Microsoft to ship 60,000 Nvidia AI chips to UAE under US-approved deal

    WASHINGTON (AP) — Microsoft said Monday it will be shipping Nvidia’s most advanced artificial intelligence chips to the United Arab Emirates as part of a deal approved by the U.S. Commerce Department.

    The Redmond, Washington, software giant said licenses approved in September under “stringent” safeguards enable it to ship more than 60,000 Nvidia chips, including the California chipmaker’s advanced GB300 Grace Blackwell chips, for use in data centers in the Middle Eastern country.

    The agreement appeared to contradict President Donald Trump’s remarks in a “60 Minutes” interview aired Sunday that such chips would not be exported outside the U.S.

    Asked by CBS News’ Norah O’Donnell if he will allow Nvidia to sell its most advanced chips to China, Trump said he wouldn’t.

    “We will let them deal with Nvidia but not in terms of the most advanced,” Trump said. “The most advanced, we will not let anybody have them other than the United States.”

    The UAE’s ability to access chips is tied to its pledge to invest $1.4 trillion in U.S. energy and AI-related projects, an outsized sum given its annual GDP is roughly $540 billion.

    The UAE ambassador to the U.S., Yousef Al Otaiba, said in a statement earlier this year that the arrangement was “setting a new ‘Gold Standard’ for securing AI models, chips, data and access.”

    Microsoft’s announcement Monday was part of the company’s planned $15.2 billion investment in technology in the UAE, which it says has some of the highest per-capita usage of AI. Microsoft had already accumulated more than 21,000 of Nvidia’s graphics processing units, known as GPUs, in the UAE through licenses approved under then-President Joe Biden.

    “We’re using these GPUs to provide access to advanced AI models from OpenAI, Anthropic, open-source providers, and Microsoft itself,” said a company statement.


  • OpenAI and Amazon sign $38 billion deal for AI computing power

    SEATTLE (AP) — OpenAI and Amazon have signed a $38 billion deal that enables the ChatGPT maker to run its artificial intelligence systems on Amazon’s data centers in the U.S.

    OpenAI will be able to power its AI tools using “hundreds of thousands” of Nvidia’s specialized AI chips through Amazon Web Services as part of the deal announced Monday.

    Amazon shares increased 4% after the announcement.

    The agreement comes less than a week after OpenAI altered its partnership with its longtime backer Microsoft, which until early this year was the startup’s exclusive cloud computing provider.

    California and Delaware regulators also last week allowed San Francisco-based OpenAI, which was founded as a nonprofit, to move forward on its plan to form a new business structure to more easily raise capital and make a profit.

    “The rapid advancement of AI technology has created unprecedented demand for computing power,” Amazon said in a statement Monday. It said OpenAI “will immediately start utilizing AWS compute as part of this partnership, with all capacity targeted to be deployed before the end of 2026, and the ability to expand further into 2027 and beyond.”

    AI requires huge amounts of energy and computing power, and OpenAI has long signaled that it needs more capacity, both to develop new AI systems and to keep existing products like ChatGPT answering the questions of its hundreds of millions of users. It has recently taken on more than $1 trillion in financial obligations for AI infrastructure, including data center projects with Oracle and SoftBank and semiconductor supply deals with chipmakers Nvidia, AMD and Broadcom.

    Some of the deals have raised investor concerns about their “circular” nature, since OpenAI doesn’t make a profit and can’t yet afford to pay for the infrastructure that its cloud backers are providing on the expectations of future returns on their investments. OpenAI CEO Sam Altman last week dismissed doubters he says have aired “breathless concern” about the deals.

    “Revenue is growing steeply. We are taking a forward bet that it’s going to continue to grow,” Altman said on a podcast where he appeared with Microsoft CEO Satya Nadella.

    Amazon is already the primary cloud provider to AI startup Anthropic, an OpenAI rival that makes the Claude chatbot.


  • Who is Zico Kolter? A professor leads OpenAI safety panel with power to halt unsafe AI releases

    If you believe artificial intelligence poses grave risks to humanity, then a professor at Carnegie Mellon University has one of the most important roles in the tech industry right now.

    Zico Kolter leads a four-person panel at OpenAI that has the authority to halt the ChatGPT maker’s release of new AI systems if it finds them unsafe. That could be technology so powerful that an evildoer could use it to make weapons of mass destruction. It could also be a new chatbot so poorly designed that it will hurt people’s mental health.

    “Very much we’re not just talking about existential concerns here,” Kolter said in an interview with The Associated Press. “We’re talking about the entire swath of safety and security issues and critical topics that come up when we start talking about these very widely used AI systems.”

    OpenAI tapped the computer scientist to be chair of its Safety and Security Committee more than a year ago, but the position took on heightened significance last week when California and Delaware regulators made Kolter’s oversight a key part of their agreements to allow OpenAI to form a new business structure to more easily raise capital and make a profit.

    Safety has been central to OpenAI’s mission since it was founded as a nonprofit research laboratory a decade ago with a goal of building better-than-human AI that benefits humanity. But after its release of ChatGPT sparked a global AI commercial boom, the company has been accused of rushing products to market before they were fully safe in order to stay at the front of the race. Internal divisions that led to the temporary ouster of CEO Sam Altman in 2023 brought concerns that the company had strayed from its mission to a wider audience.

    The San Francisco-based organization faced pushback — including a lawsuit from co-founder Elon Musk — when it began steps to convert itself into a more traditional for-profit company to continue advancing its technology.

    Agreements announced last week by OpenAI along with California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings aimed to assuage some of those concerns.

    At the heart of the formal commitments is a promise that decisions about safety and security must come before financial considerations as OpenAI forms a new public benefit corporation that is technically under the control of its nonprofit OpenAI Foundation.

    Kolter will be a member of the nonprofit’s board but not of the for-profit board. He will, however, have “full observation rights” to attend all for-profit board meetings and have access to the information it receives about AI safety decisions, according to Bonta’s memorandum of understanding with OpenAI. Kolter is the only person, besides Bonta, named in the lengthy document.

    Kolter said the agreements largely confirm that his safety committee, formed last year, will retain the authorities it already had. The other three members also sit on the OpenAI board — one of them is former U.S. Army General Paul Nakasone, who was commander of the U.S. Cyber Command. Altman stepped down from the safety panel last year in a move seen as giving it more independence.

    “We have the ability to do things like request delays of model releases until certain mitigations are met,” Kolter said. He declined to say if the safety panel has ever had to halt or mitigate a release, citing the confidentiality of its proceedings.

    Kolter said there will be a variety of concerns about AI agents to consider in the coming months and years, from cybersecurity – “Could an agent that encounters some malicious text on the internet accidentally exfiltrate data?” – to security concerns surrounding AI model weights, which are numerical values that influence how an AI system performs.

    “But there’s also topics that are either emerging or really specific to this new class of AI model that have no real analogues in traditional security,” he said. “Do models enable malicious users to have much higher capabilities when it comes to things like designing bioweapons or performing malicious cyberattacks?”

    “And then finally, there’s just the impact of AI models on people,” he said. “The impact to people’s mental health, the effects of people interacting with these models and what that can cause. All of these things, I think, need to be addressed from a safety standpoint.”

    OpenAI has already faced criticism this year about the behavior of its flagship chatbot, including a wrongful-death lawsuit from California parents whose teenage son killed himself in April after lengthy interactions with ChatGPT.

    Kolter, director of Carnegie Mellon’s machine learning department, began studying AI as a Georgetown University freshman in the early 2000s, long before it was fashionable.

    “When I started working in machine learning, this was an esoteric, niche area,” he said. “We called it machine learning because no one wanted to use the term AI because AI was this old-time field that had overpromised and underdelivered.”

    Kolter, 42, has been following OpenAI for years and was close enough to its founders that he attended its launch party at an AI conference in 2015. Still, he didn’t expect how rapidly AI would advance.

    “I think very few people, even people working in machine learning deeply, really anticipated the current state we are in, the explosion of capabilities, the explosion of risks that are emerging right now,” he said.

    AI safety advocates will be closely watching OpenAI’s restructuring and Kolter’s work. One of the company’s sharpest critics says he’s “cautiously optimistic,” particularly if Kolter’s group “is actually able to hire staff and play a robust role.”

    “I think he has the sort of background that makes sense for this role. He seems like a good choice to be running this,” said Nathan Calvin, general counsel at the small AI policy nonprofit Encode. Calvin, whom OpenAI targeted with a subpoena at his home as part of its fact-finding to defend against the Musk lawsuit, said he wants OpenAI to stay true to its original mission.

    “Some of these commitments could be a really big deal if the board members take them seriously,” Calvin said. “They also could just be the words on paper and pretty divorced from anything that actually happens. I think we don’t know which one of those we’re in yet.”

    Source link

  • Google’s Pixel 10 phones raise the ante on artificial intelligence

    Google on Wednesday unveiled a new line-up of Pixel smartphones injected with another dose of artificial intelligence designed to do everything from fetching vital information stored on the devices to helping improve photos as they’re being taken.

    The AI expansion on the four Pixel 10 models amplifies Google’s efforts to broaden the use of a technology that is already starting to reshape society. At the same time, Google is taking aim at Apple’s Achilles’ heel on the iPhone.

    Apple so far has been able to introduce only a few basic AI features on the iPhone while failing to deliver on last year’s promise of a more conversational and versatile version of its often-blundering virtual assistant Siri.

    Without mentioning the iPhone by name, Google has already been mocking Apple’s missteps in online ads promoting the four new Pixel models as smartphones loaded with AI technology that consumers won’t have to wait more than a year to get.

    “There has been a lot of hype about this and, frankly, a lot of broken promises, too,” Google executive Rick Osterloh said during a 75-minute presentation in New York about the new Pixel phones. The event was emceed by late-night TV show host Jimmy Fallon.

    Google, in contrast, has been steadily adding AI to its Pixels since 2023, with this year’s models taking it to another level.

    “We think this year we have a game-changing phone with game-changing technology,” Osterloh said.

    Taking advantage of a more advanced processor, Google is introducing a new AI feature on the Pixel 10 phones called “Magic Cue” that’s designed to serve as a digital mind reader that automatically fetches information stored on the devices and displays the data at the time it’s needed. For instance, if a Pixel 10 user is calling up an airline, Magic Cue is supposed to instantaneously recognize the phone number and display the flight information if it’s in Gmail or a Google Calendar.

    The Pixel 10 phones will also come with a preview feature of a new AI tool called “Camera Coach” that will automatically suggest the best framing and lighting angle as the lens is being aimed at a subject. Camera Coach will also recommend the best lens mode to use for an optimal picture.

    The premium models — Pixel 10 Pro and Pixel 10 Pro XL — will also include a “Super Res” option that deploys a grab bag of software and AI tricks to zoom up to 100 times the resolution to capture the details of objects located miles away from the camera. The AI wizardry can happen without users even realizing it, making it even more difficult to know whether an image captured in a photo reflects how things really looked when the picture was taken or was modified by technology.

    The Pixel 10 will also be able to almost instantaneously translate phone conversations into a range of different languages using the participants’ own voices.

    Google is also offering a free one-year subscription to its AI Pro plan to anyone who buys the more expensive Pixel 10 Pro or Pixel 10 Pro XL models in hopes of hooking more people on the Gemini toolkit it has assembled to compete against OpenAI’s ChatGPT.

    The prices on all four Pixel 10 models will remain unchanged from last year’s Pixel 9 generation, with the basic model starting at $800, the Pro selling for $1,000, the Pro XL at $1,200 and a foldable version at $1,800. All the Pixel 10s except the foldable model will be in stores on August 28. The Pixel 10 Pro Fold will be available starting October 9.

    Although the Pixel smartphone remains a Lilliputian next to the Gulliverian stature of the iPhone and Samsung’s Galaxy models, Google’s ongoing advances in AI while holding the line on prices for its marquee devices raise the competitive stakes.

    “In the age of AI, it is a true laboratory of innovation,” Forrester Research analyst Thomas Husson said of the Pixel.

    Apple, in particular, will be facing more pressure than usual when it introduces the next-generation iPhone next month. Although the company has already said the smarter Siri won’t be ready until next year at the earliest, Apple will still be expected to show some progress in AI to demonstrate the iPhone is adapting to technology’s AI evolution rather than tilting toward gradual obsolescence. Clinging to a once-successful formula eventually sank the BlackBerry and its physical keyboard when the iPhone and its touch screen came along nearly 20 years ago.

    Apple’s pricing of the next iPhone will also be under the spotlight, given that the devices are made in China and India — two of the prime targets in President Donald Trump’s trade war.

    But Apple appeared to gain a reprieve from Trump’s most onerous threats earlier this month by adding another $100 billion on top of an earlier $500 billion investment pledge to the U.S. The tariff relief may enable Apple to minimize or even avoid price increases for the iPhone, just as Google has done with the Pixel 10 models.

    Source link

  • OpenAI picks labor icon Dolores Huerta and other philanthropy advisers as it moves toward for-profit

    OpenAI has named labor leader Dolores Huerta and three others to a temporary advisory board that will help guide the artificial intelligence company’s philanthropy as it attempts to shift itself into a for-profit business.

    Huerta, who turned 95 last week, formed the first farmworkers union with Cesar Chavez in the early 1960s and will now voice her ideas on the direction of philanthropic initiatives that OpenAI says will consider “both the promise and risks of AI.”

    The group will have just 90 days to make its suggestions.

    “She recognizes the significance of AI in today’s world and anybody who’s been paying attention for the last 50 years knows she will be a force in this conversation,” said Daniel Zingale, the convener of OpenAI’s new nonprofit commission and a former adviser to three California governors.

    Huerta’s advice won’t be binding but the presence of a social activist icon could be influential as OpenAI CEO Sam Altman attempts a costly restructuring of the San Francisco company’s corporate governance, which requires the approval of California’s attorney general and others.

    Another coalition of labor leaders and nonprofits recently petitioned state Attorney General Rob Bonta, a Democrat, to investigate OpenAI, halt the proposed conversion and “protect billions of dollars that are under threat as profit-driven hunger for power yields conflicts of interest.”

    OpenAI, the maker of ChatGPT, started out in 2015 as a nonprofit research laboratory dedicated to safely building better-than-human AI that benefits humanity.

    It later formed a for-profit arm and shifted most of its staff there, but it is still controlled by a nonprofit board of directors. It is now trying to convert itself more fully into a for-profit corporation but faces a number of hurdles, including getting the approval of the California and Delaware attorneys general, potentially buying out the nonprofit’s pricey assets and fighting a lawsuit from co-founder and early investor Elon Musk.

    Backed by Japanese tech giant SoftBank, OpenAI last month said it’s working to raise $40 billion in funding, putting its value at $300 billion.

    Huerta will be joined on the new advisory commission by former Spanish-language media executive Monica Lozano; Robert Ross, the recently retired president of The California Endowment; and Jack Oliver, an attorney and longtime Republican campaign fundraiser. Zingale, the group’s convener, is a former aide to California governors including Democrat Gavin Newsom and Republican Arnold Schwarzenegger.

    “We’re interested in how you put the power of AI in the hands of everyday people and the community organizations that serve them,” Zingale said in an interview Wednesday. “Because, if AI is going to bring a renaissance, or a dark age, these are the people you want to tip the scale in favor of humanity.”

    The group is now tasked with gathering community feedback for the problems OpenAI’s philanthropy could work to address. But for California nonprofit leaders pushing for legal action from the state attorney general, it doesn’t alter what they view as the state’s duty to pause the restructuring, assess the value of OpenAI’s charitable assets and make sure they are used in the public’s interest.

    “As impressive as the individual members of OpenAI’s advisory commission are, the commission itself appears to be a calculated distraction from the core problem: OpenAI misappropriating its nonprofit assets for private gain,” said Orson Aguilar, the CEO and founding president of LatinoProsperity, in a written statement.

    ——

    The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

    Source link

  • AI is having its Nobel moment. Do scientists need the tech industry to sustain it?

    Hours after the artificial intelligence pioneer Geoffrey Hinton won a Nobel Prize in physics, he drove a rented car to Google’s California headquarters to celebrate.

    Hinton doesn’t work at Google anymore. Nor did the longtime professor at the University of Toronto do his pioneering research at the tech giant.

    But his impromptu party reflected AI’s moment as a commercial blockbuster that has also reached the pinnacles of scientific recognition.

    That was Tuesday. Then, early Wednesday, two employees of Google’s AI division won a Nobel Prize in chemistry for using AI to predict and design novel proteins.

    “This is really a testament to the power of computer science and artificial intelligence,” said Jeannette Wing, a professor of computer science at Columbia University.

    Asked about the historic back-to-back science awards for AI work in an email Wednesday, Hinton said only: “Neural networks are the future.”

    It didn’t always seem that way for researchers who decades ago experimented with interconnected computer nodes inspired by neurons in the human brain. Hinton shares this year’s physics Nobel with another scientist, John Hopfield, for helping develop those building blocks of machine learning.

    Neural network advances came from “basic, curiosity-driven research,” Hinton said at a press conference after his win. “Not out of throwing money at applied problems, but actually letting scientists follow their curiosity to try and understand things.”

    Such work started well before Google existed. But a bountiful tech industry has now made it easier for AI scientists to pursue their ideas even as it has challenged them with new ethical questions about the societal impacts of their work.

    One reason why the current wave of AI research is so closely tied to the tech industry is that only a handful of corporations have the resources to build the most powerful AI systems.

    “These discoveries and this capability could not happen without humongous computational power and humongous amounts of digital data,” Wing said. “There are very few companies — tech companies — that have that kind of computational power. Google is one. Microsoft is another.”

    The chemistry Nobel Prize awarded Wednesday went to Demis Hassabis and John Jumper of Google’s London-based DeepMind laboratory along with researcher David Baker at the University of Washington for work that could help discover new medicines.

    Hassabis, the CEO and co-founder of DeepMind, which Google acquired in 2014, told the AP in an interview Wednesday his dream was to model his research laboratory on the “incredible storied history” of Bell Labs. Started in 1925, the New Jersey-based industrial lab was the workplace of multiple Nobel-winning scientists over several decades who helped develop modern computing and telecommunications.

    “I wanted to recreate a modern day industrial research lab that really did cutting-edge research,” Hassabis said. “But of course, that needs a lot of patience and a lot of support. We’ve had that from Google and it’s been amazing.”

    Hinton joined Google late in his career and quit last year so he could talk more freely about his concerns about AI’s dangers, particularly what happens if humans lose control of machines that become smarter than us. But he stops short of criticizing his former employer.

    Hinton, 76, said he was staying in a cheap hotel in Palo Alto, California, when the Nobel committee woke him up with a phone call early Tuesday morning, leading him to cancel a medical appointment scheduled for later that day.

    By the time the sleep-deprived scientist reached the Google campus in nearby Mountain View, he “seemed pretty lively and not very tired at all” as colleagues popped bottles of champagne, said computer scientist Richard Zemel, a former doctoral student of Hinton’s who joined him at the Google party Tuesday.

    “Obviously there are these big companies now that are trying to cash in on all the commercial success and that is exciting,” said Zemel, now a Columbia professor.

    But Zemel said what’s more important to Hinton and his closest colleagues has been what the Nobel recognition means to the fundamental research they spent decades trying to advance.

    Guests included Google executives and another former Hinton student, Ilya Sutskever, a co-founder and former chief scientist and board member at ChatGPT maker OpenAI. Sutskever helped lead a group of board members who briefly ousted OpenAI CEO Sam Altman last year in turmoil that has symbolized the industry’s conflicts.

    An hour before the party, Hinton used his Nobel bully pulpit to throw shade at OpenAI during opening remarks at a virtual press conference organized by the University of Toronto in which he thanked former mentors and students.

    “I’m particularly proud of the fact that one of my students fired Sam Altman,” Hinton said.

    Asked to elaborate, Hinton said OpenAI started with a primary objective to develop better-than-human artificial general intelligence “and ensure that it was safe.”

    “And over time, it turned out that Sam Altman was much less concerned with safety than with profits. And I think that’s unfortunate,” Hinton said.

    In response, OpenAI said in a statement that it is “proud of delivering the most capable and safest AI systems” and that they “safely serve hundreds of millions of people each week.”

    Conflicts are likely to persist in a field where building even a relatively modest AI system requires resources “well beyond those of your typical research university,” said Michael Kearns, a professor of computer science at the University of Pennsylvania.

    But Kearns, who sits on the committee that picks the winners of computer science’s top prize — the Turing Award — said this week marks a “great victory for interdisciplinary research” that was decades in the making.

    Hinton is only the second person to win both a Nobel and Turing. The first, Turing-winning political scientist Herbert Simon, started working on what he called “computer simulation of human cognition” in the 1950s and won the Nobel economics prize in 1978 for his study of organizational decision-making.

    Wing, who met Simon in her early career, said scientists are still just at the tip of finding ways to apply computing’s most powerful capabilities to other fields.

    “We’re just at the beginning in terms of scientific discovery using AI,” she said.

    ——

    AP Business Writer Kelvin Chan contributed to this report.

    Source link

  • Top AI business leaders meet with Biden administration to discuss the emerging industry’s needs

    WASHINGTON (AP) — Top Biden administration officials on Thursday discussed the future of artificial intelligence at a meeting with a group of executives from OpenAI, Nvidia, Microsoft and other companies. The focus was on building data centers in the United States and the infrastructure needed to develop the technology.

    White House press secretary Karine Jean-Pierre told reporters at the daily press briefing that the meeting focused on increasing public-private collaboration and the workforce and permitting needs of the industry. The computer power for the sector will likely depend on reliable access to electricity, so the utility companies Exelon and AES were also part of the meeting to discuss power grid needs.

    The emergence of AI holds a mix of promise and peril: The automatically generated text, images, audio and video could help increase economic productivity, but the technology also has the potential to displace some workers. It also could serve as both a national security tool and a threat to guard against.

    President Joe Biden last October signed an executive order to address the development of the technology, seeking to establish protections through steps such as the watermarking of AI content and addressing consumer rights issues.

    Attending the meeting for the administration were White House chief of staff Jeff Zients, National Economic Council Director Lael Brainard, national security adviser Jake Sullivan, deputy chief of staff Bruce Reed, Commerce Secretary Gina Raimondo and Energy Secretary Jennifer Granholm, among others.

    Nvidia CEO Jensen Huang, OpenAI CEO Sam Altman, Alphabet President and Chief Investment Officer Ruth Porat, Meta Chief Operating Officer Javier Olivan, and Microsoft President and Vice Chairman Brad Smith were among the corporate attendees.

    Matt Garman, the CEO of AWS, a subsidiary of Amazon, also attended. The company said in a statement that attendees discussed modernizing the nation’s utility grid, expediting permits for new projects and ensuring that carbon-free energy projects are integrated into the grid.

    Source link

  • Elon Musk sues OpenAI, renewing claims ChatGPT-maker put profits before ‘the benefit of humanity’

    LOS ANGELES (AP) — Elon Musk filed a lawsuit on Monday against OpenAI and two of its founders, Sam Altman and Greg Brockman, renewing claims that the ChatGPT-maker betrayed its founding aims of benefiting the public good rather than pursuing profits.

    The lawsuit, filed in a Northern California federal court, describes Musk’s case as a “textbook tale of altruism versus greed.” Altman and others named in the suit “intentionally courted and deceived Musk, preying on Musk’s humanitarian concern about the existential dangers posed by artificial intelligence,” according to the complaint.

    Musk was an early investor in OpenAI when it was founded in 2015 and co-chaired its board alongside Altman. In the lawsuit, he said he invested “tens of millions” of dollars and recruited top AI research scientists for OpenAI. Musk resigned from the board in early 2018 in a move that OpenAI said — at the time — would prevent conflicts of interest as he recruited AI talent to build self-driving technology at Tesla, the electric car maker he leads.

    The Tesla CEO dropped his previous lawsuit against OpenAI without explanation in June. That lawsuit alleged that when Musk bankrolled OpenAI’s creation, he secured an agreement with Altman and Brockman to keep the AI company as a nonprofit that would develop technology for the benefit of the public and keep its code open.

    “As we said about Elon’s initial legal filing, which was subsequently withdrawn, Elon’s prior emails continue to speak for themselves,” a spokesperson for OpenAI said in an emailed statement. In March, OpenAI released emails from Musk showing his earlier support for making it a for-profit company.

    Musk claims in the new suit that he and OpenAI’s namesake objective were “betrayed by Altman and his accomplices.”

    “The perfidy and deceit are of Shakespearean proportions,” the complaint said.

    Source link