ReportWire

Tag: financial times

  • 13-hour AWS outage reportedly caused by Amazon’s own AI tools


    A recent Amazon Web Services (AWS) outage that lasted 13 hours was reportedly caused by one of the company’s own AI tools, according to reporting by the Financial Times. The incident happened in December after engineers deployed the Kiro AI coding tool to make certain changes, according to four people familiar with the matter.

    Kiro is an agentic tool, meaning it can take autonomous actions on behalf of users. In this case, the bot reportedly determined that it needed to “delete and recreate the environment.” This is what allegedly led to the lengthy outage that primarily impacted China.

    Amazon says it was merely a “coincidence that AI tools were involved” and that “the same issue could occur with any developer tool or manual action.” The company blamed the outage on “user error, not AI error.” It said that by default the Kiro tool “requests authorization before taking any action” but that the staffer involved in the December incident had “broader permissions than expected — a user access control issue, not an AI autonomy issue.”

    Multiple Amazon employees spoke to Financial Times and noted that this was “at least” the second occasion in recent months in which the company’s AI tools were at the center of a service disruption. “The outages were small but entirely foreseeable,” said one senior AWS employee.

    The company launched Kiro in July and has since pushed employees into using the tool. Leadership set an 80 percent weekly use goal and has been closely tracking adoption rates. Amazon also sells access to the agentic tool for a monthly subscription fee.

    These recent outages follow a more serious event from October, in which a 15-hour AWS outage disrupted services like Alexa, Snapchat, Fortnite and Venmo, among others. The company blamed a bug in its automation software for that one.

    However, Amazon disagrees with the characterization of certain products and services being unavailable as an outage. In response to the Financial Times report, the company shared the following statement, which it also published on its news blog:

    We want to address the inaccuracies in the [Financial Times story] yesterday. The brief service interruption they reported on was the result of user error—specifically misconfigured access controls—not AI as the story claims.

    The disruption was an extremely limited event last December affecting a single service (AWS Cost Explorer—which helps customers visualize, understand, and manage AWS costs and usage over time) in one of our 39 Geographic Regions around the world. It did not impact compute, storage, database, AI technologies, or any other of the hundreds of services that we run. The issue stemmed from a misconfigured role—the same issue that could occur with any developer tool (AI powered or not) or manual action. We did not receive any customer inquiries regarding the interruption. We implemented numerous safeguards to prevent this from happening again—not because the event had a big impact (it didn’t), but because we insist on learning from our operational experience to improve our security and resilience. Additional safeguards include mandatory peer review for production access. While operational incidents involving misconfigured access controls can occur with any developer tool—AI-powered or not—we think it is important to learn from these experiences. The Financial Times’ claim that a second event impacted AWS is entirely false.

    For more than two decades, Amazon has achieved high operational excellence with our Correction of Error (COE) process. We review these together so that we can learn from any incident, irrespective of customer impact, to address issues before their potential impact grows larger.

    Update, February 21 2026, 11:58AM ET: This story has been updated to include Amazon’s full statement in response to the Financial Times report.


    Lawrence Bonk

    Source link

  • The New Patronage: A.I., Algorithms and the Economics of Creativity


    Generative A.I. is cheapening media production while platforms recode payouts, power and provenance. Unsplash+

    The cost of making high-quality media is collapsing. The cost of getting anyone to care about it is not. As generative A.I. turns production into a near-commodity, cultural power is shifting from studios and galleries to the platforms that allocate attention and the algorithms that determine who gets paid. The new patrons are not moguls with checkbooks; they are recommendation systems tuned for engagement and brand safety.

    Production is cheap; distribution is scarce

    Video models now draft storyboards, generate shots and remix audio at consumer scale. Yet the money still follows distribution, not tools. On YouTube, the rules of the YouTube Partner Program, set and revised unilaterally, determine whether a creator receives 55 percent of watch-page ad revenue for long-form content and 45 percent for Shorts. Those headline rates are stable, but the platform’s enforcement posture has shifted: as of July 15, YouTube began tightening monetization against “inauthentic” or mass-produced A.I. content, a clarification aimed at the surge of spammy, low-effort videos. The message is clear: use A.I. to enhance originality, not to flood the feed. 

    The enforcement problem is real. “Cheapfake” celebrity clips—static images, synthetic narration and rage-bait scripts—have racked up views while confusing audiences. YouTube has removed channels and now requires disclosure labels for realistic synthetic media, but detection and policing remain uneven at scale. 

    Platforms are recoding payouts and power

    Spotify’s 2024 royalty overhaul illustrates how platform rule-sets become policy for the creative middle class. Tracks now require at least 1,000 streams in 12 months to pay out; functional “noise” content is throttled; and labels face fees for detected artificial streaming. The goal is to redirect the pool away from bot farms and sub-cent trickles. The effect is a re-concentration of earnings at the head of the curve and a higher bar for the long tail. When platforms change the taps, whole genres feel the drought or the deluge. 

    TikTok’s détente with Universal Music in May 2024 underscored the same power dynamic in short-form video. After months of public sparring over royalties and A.I. clones, a new licensing deal restored UMG’s catalogue to the app, alongside language about improved remuneration and protections against generative knock-offs. When distribution is the choke point, even the largest rights-holders must negotiate on platform terms.

    Data deals: the new studio lots

    If attention is one axis of the new patronage, training data is the other. The most lucrative cultural contracts of the past year were not output commissions but input licences. OpenAI’s run of publisher agreements, including the Associated Press (archives), Axel Springer, the Financial Times and a multi-year global deal with News Corp, reportedly worth more than $250 million, signals a market price for premium corpora. A.I. labs are paying for access, and the beneficiaries are large, well-structured repositories of rights, not individual creators. 

    The legal battles surrounding image training demonstrate the unsettled state of the rules. Getty Images narrowed its U.K. lawsuit against Stability A.I. in June, dropping core copyright claims while pressing trademark-style arguments about reproduced watermarks. The pivot reflects the complexity of proving training-stage infringement across borders, as well as the industry’s search for more predictable routes to compensation.

    Regulation is standardizing transparency and shifting risk

    Rules are arriving, and they read like operating manuals for platformized culture. The E.U.’s A.I. Act phases in obligations for general-purpose models, with guidance for “systemic-risk” providers by 2025 and a Code of Practice outlining requirements for transparency, copyright diligence and safety. In effect, documenting training data, assessing model risks, publishing technical summaries and preparing for audits are all tasks that privilege firms and partners with a strong compliance presence.

    In the U.S., the Copyright Office’s multipart A.I. study is moving from theory to guidance. Part 2 (January 2025) addresses whether and when A.I.-assisted outputs can be copyrighted, while the pre-publication of Part 3 (May 2025) examines training and how to reconcile text-and-data mining with compensation. The studio system, once established, created creative norms through collective bargaining; now, regulators and A.I. vendors are co-authoring the manual.

    Unions are also imposing guardrails. The WGA’s 2023 deal barred studios from treating A.I.-generated material as “source material” and protected writers from being required to use A.I.; SAG-AFTRA’s agreements introduced consent and compensation for digital replicas, with similar provisions in music. These are not abstractions; they are hard-coded constraints on how platforms and producers can deploy synthetic labour.

    Provenance becomes product

    As synthetic media scales, provenance is turning into both a feature and a bargaining chip. TikTok has begun automatically labelling A.I. assets imported from tools that support C2PA Content Credentials. YouTube now requires creators to disclose realistic synthetic edits. Meanwhile, device makers are integrating C2PA into the capture pipeline, with Google’s Pixel 10 embedding credentials in its camera output. OpenAI, for its part, adds C2PA metadata to DALL-E images. Attribution is becoming clickable. 

    The provenance layer will not solve misinformation alone. Metadata can be stripped, and enforcement lags, but it rewires incentives. Platforms can boost authentic, labelled media in feeds, penalize evasions and share “credibility signals” with advertisers. That is algorithmic patronage by another name.

    What shifts next

    Studios and galleries will increasingly resemble platforms. Owning release windows is no longer enough. Expect investments in first-party audiences, data clean rooms and rights bundles that can be licensed to model providers. The historic advantages of taste and talent pipelines must be coupled with distribution levers and data assets. Deals will include not just streaming residuals but “model-weight” royalties and retraining rights, mirroring the structure of today’s publisher licences.

    Creators will face algorithmic wage setting. Eligibility thresholds (1,000 Spotify streams), demonetization triggers (unoriginal Shorts), disclosure requirements (synthetic media labels) and fraud detection fees are becoming the effective tax code of digital culture. The prudent strategy is to diversify revenue streams (ads, direct fan funding and commerce) and to instrument provenance by default, staying on the right side of both algorithms and regulators.

    Policy, too, will reward those who can comply. The E.U. framework, the U.S. copyright study and union clauses collectively nudge the market toward licensed inputs, documented outputs and consent-based replication. Those advantages accrue to larger catalogues and well-capitalized intermediaries. For independent creators, collective licensing pools and guild-run registries may offer the path to negotiating power.

    The arts have seen patronage shift before, from courts to salons to art galleries and museums. This time, the median patron is a ranking function. Where culture is made matters less than where it is surfaced, metered and paid. Those who understand the incentives embedded in platform policy, and can prove provenance at the speed of the feed, will capture the surplus. Everyone else will be producing to spec for someone else’s algorithm.



    Gonçalo Perdigão

    Source link

  • Perplexity’s Clash with News Publishers Continues Despite Revenue-Sharing Efforts


    Perplexity CEO Aravind Srinivas previously worked at OpenAI. Saul Loeb/AFP via Getty Images

    Perplexity AI, a startup that has previously come under fire from online publishers, is attempting to rebuild trust with media players through revenue-sharing agreements. But that effort hasn’t stopped complaints about how the company surfaces content. Its latest challenge comes from Japanese media groups Nikkei and Asahi Shimbun, which today (Aug. 26) filed a joint lawsuit accusing Perplexity of copyright infringement.

    Co-founded in 2022 by CEO Aravind Srinivas, Perplexity has quickly become a leader in A.I.-powered search and is currently valued at $18 billion. Unlike traditional search engines that return links, Perplexity responds to queries by summarizing information found online, accompanied by citations.

    Perplexity did not respond to Observer requests for comment on the lawsuit.

    Nikkei, which owns the eponymous Japanese newspaper and the Financial Times, and Asahi Shimbun claim that Perplexity has been storing and resurfacing their articles since at least June 2024, a practice the publishers describe as “free riding” on journalists’ work. The lawsuit, filed in the Tokyo District Court, demands that the A.I. company delete stored articles, stop reproducing publisher content, and pay each media company 2.2 billion Japanese yen ($15 million) in damages.

    The suit also alleges that Perplexity ignored robots.txt safeguards implemented by the news publishers to block unauthorized crawling and sometimes presented articles alongside incorrect information, a move the publishers argue “severely damages the credibility” of their newspapers.
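    The robots.txt mechanism at issue is simple: a plain-text file at a site’s root tells crawlers which paths they may fetch. A minimal sketch of the kind of directive a publisher might use to opt out of Perplexity’s crawler (assuming the bot identifies itself as PerplexityBot, the user agent Perplexity publicly documents; the site is hypothetical):

    ```
    # https://example-newspaper.jp/robots.txt (hypothetical publisher)

    # Block Perplexity's crawler from the entire site
    User-agent: PerplexityBot
    Disallow: /

    # Default rules for all other crawlers
    User-agent: *
    Allow: /
    ```

    Compliance is voluntary: robots.txt is a convention (standardized as RFC 9309), not an access control, which is why the publishers frame ignoring it as a deliberate choice rather than a technical failure.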

    This is not Perplexity’s first clash with news publishers. Earlier this month, Yomiuri Shimbun, another major Japanese newspaper, filed its own lawsuit against the company. U.S. outlets have also raised challenges.

    Last year, Condé Nast, Forbes and The New York Times all threatened legal action over alleged copyright infringement. Perplexity is currently battling a 2024 lawsuit from Dow Jones and The New York Post—both owned by Rupert Murdoch’s News Corp—claiming that the startup misused content to train A.I. models. A court recently rejected Perplexity’s bid to dismiss that case.

    Perplexity has since tried to ease tensions by launching revenue-sharing programs that give outlets a portion of the ad revenue generated from their material. The program has attracted partners such as Time Magazine, Fortune and the German news site Der Spiegel. Perplexity also recently unveiled plans to give publishers around 80 percent of the sales from Comet Plus, a news service expected to launch later this year.

    For now, the media industry remains divided on how to handle the rise of A.I. Some, like the Associated Press, Vox Media and The Atlantic, have signed licensing deals with OpenAI. Others remain wary. The New York Times is suing OpenAI and Microsoft over unauthorized use of its content, while Canadian startup Cohere was hit with a similar lawsuit this year from more than a dozen news publishers. Thomson Reuters has also accused A.I. platform Ross Intelligence of copyright infringement in a case that dates back to 2020.



    Alexandra Tremayne-Pengelly

    Source link

  • Leaked notes from Chinese health officials estimate 250 million Covid-19 infections in December: reports | CNN




    Hong Kong (CNN) —

    Almost 250 million people in China may have caught Covid-19 in the first 20 days of December, according to an internal estimate from the nation’s top health officials, Bloomberg News and the Financial Times reported Friday.

    If correct, the estimate – which CNN cannot independently confirm – would account for roughly 18% of China’s 1.4 billion people and represent the largest Covid-19 outbreak to date globally.

    The figures cited were presented during an internal meeting of China’s National Health Commission (NHC) on Wednesday, according to both outlets – which cited sources familiar with the matter or involved in the discussions. The NHC summary of Wednesday’s meeting said it delved into the treatment of patients affected by the new outbreak.

    On Friday, a copy of what was purportedly the NHC meeting notes was circulated on Chinese social media and seen by CNN; the authenticity of the document has not been verified and the NHC did not immediately respond to a request for comment.

    Both the Financial Times and Bloomberg laid out in great detail the discussions by authorities over how to handle the outbreak.

    Among the estimates cited in both reports was the revelation that on Tuesday alone, 37 million people were newly infected with Covid-19 across China. That stood in dramatic contrast to the official number of 3,049 new infections reported that day.

    The Financial Times said it was Sun Yang – a deputy director of the Chinese Center for Disease Control and Prevention – who presented the figures to officials during the closed-door briefing, citing two people familiar with the matter.

    Sun explained that the rate of Covid’s spread in China was still rising and estimated that more than half of the population in Beijing and Sichuan were already infected, according to the Financial Times.

    The estimates follow China’s decision at the start of December to abruptly dismantle its strict zero-Covid policy which had been in place for almost three years.

    The figures are in stark contrast to the public data of the NHC, which reported just 62,592 symptomatic Covid cases in the first twenty days of December.

    How the NHC came up with the estimates cited by Bloomberg and the Financial Times is unclear, as China is no longer officially tallying its total number of infections, after authorities shut down their nationwide network of PCR testing booths and said they would stop gathering data on asymptomatic cases.

    People in China are also now using rapid antigen tests to detect infections and are under no obligation to report positive results.

    Officially, China has reported only eight Covid deaths this month – a strikingly low figure given the rapid spread of the virus and the relatively low vaccine booster rates among the elderly.

    Only 42.3% of those aged 80 and over in China have received a third dose of vaccine, according to a CNN calculation of new figures released by the NHC on December 14.

    Facing growing skepticism that it is downplaying Covid deaths, the Chinese government defended the accuracy of its official tally by revealing it had updated its method of counting fatalities caused by the virus.

    According to the latest NHC guidelines, only deaths caused by pneumonia and respiratory failure after contracting the virus are classified as Covid deaths, Wang Guiqiang, a top infectious disease doctor, told a news conference Tuesday.

    The minutes of the Wednesday closed-door NHC meeting made no reference to discussions concerning how many people may have died in China, according to both reports and the document CNN viewed.

    “The numbers look plausible, but I have no other sources of data to compare [them] with. If the estimated infection numbers mentioned here are accurate, it means the nationwide peak will occur within the next week,” Ben Cowling, a professor of epidemiology at the University of Hong Kong, told CNN in an emailed statement when asked about the purported NHC estimates.


    Source link