ReportWire

Tag: artificial intelligence (a.i.)

  • Trump’s Profiteering Hits $4 Billion

    In September, the company floated shares on the stock market, capitalizing in another way on the cachet of the Trump name. American Bitcoin merged with a penny-stock bitcoin miner as a way of going public without the cost—or scrutiny—of an initial public offering. And the stock market, as expected, has put a far higher price on the company, in part because it owns a stockpile of bitcoin. The brothers’ stake now appears to be worth around two hundred million dollars. A caveat: Eric Trump, as a large and active investor in American Bitcoin, must report any sale of shares, and that might trigger a selloff. So it seems excessive to add it all to the Presidential-profit ledger. I will add only the approximate value of Donald Trump, Jr.,’s stake: about a hundred million dollars.

    The number in August: $3.4 billion
    Additional profit: $100 million
    New total: $3.5 billion

    WORLD LIBERTY FINANCIAL, BINANCE, AND PAKISTAN

    The Trumps have made even more money since August through World Liberty Financial, a digital-finance startup heavily linked to the family. Its website lists the President as a “co-founder emeritus” and displays his photograph prominently; Eric, Donald, Jr., and Barron Trump are all listed as co-founders. Steven Witkoff, the President’s old friend and diplomatic envoy, is also listed as a co-founder emeritus, and his son Zach is C.E.O.

    In May, World Liberty began selling a form of crypto known as a stablecoin. Unlike digital currencies such as bitcoin, which rise and fall in price, a stablecoin is supposed to hold a fixed value in dollars. Before July, when President Trump signed the first legislation regulating stablecoins, some of the best-known examples, such as TerraUSD, had turned out to be Ponzi schemes. (In December, a New York court sentenced TerraUSD’s co-founder to fifteen years in prison.) But World Liberty promised that its stablecoin, USD1, would always be worth exactly one dollar. Buyers can transfer USD1 to move money or make payments, and any holder can redeem USD1 for dollars. In between, while USD1s are circulating, World Liberty invests the cash that it is holding in U.S. Treasury bonds, in much the same way a savings bank might invest deposits. At current interest rates, World Liberty can expect to earn more than four per cent annually on the volume of USD1 in circulation.
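
    As a back-of-the-envelope sketch of those economics (a minimal illustration using the article’s figures, not World Liberty’s actual books), the issuer’s interest income scales directly with how much USD1 stays in circulation, and for how long:

    ```python
    # Rough sketch of a stablecoin issuer's "float" income. The volume and
    # yield below are illustrative figures taken from this article, not
    # actual financial disclosures.

    def annual_float_income(circulating_volume: float, treasury_yield: float) -> float:
        """Interest earned on reserves backing coins left in circulation."""
        return circulating_volume * treasury_yield

    # On roughly two billion dollars of USD1, at a ~4% Treasury yield:
    print(f"${annual_float_income(2e9, 0.04):,.0f} per year")  # -> $80,000,000 per year
    ```

    At the two-billion-dollar volume discussed below, a four-per-cent yield works out to roughly eighty million dollars a year, which is why it matters who decides how long those coins stay outstanding.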

    Last spring, a company owned by the rulers of the United Arab Emirates bought two billion dollars’ worth of USD1. The transaction raised alarms about the appearance of a payoff—because the U.A.E. was simultaneously seeking approval from the Trump Administration to acquire sensitive American artificial-intelligence technology. (President Trump soon granted that approval.) The Emiratis immediately used the stablecoin to invest in Binance, the largest crypto exchange, which has its own interest in influencing Trump. In 2023, Binance’s founder, Changpeng Zhao, known as C.Z., pleaded guilty to violating anti-money-laundering laws, served a brief prison sentence, and agreed to stop running the company. At the time of the two-billion-dollar stablecoin payment from the U.A.E., he was petitioning Trump for a pardon. Binance, as the holder of the stablecoin, can determine how long World Liberty continues earning four per cent a year on that two billion dollars. In other words, Binance controls how much profit the Trumps will make from the two-billion-dollar stablecoin sale. In October, Trump granted C.Z.’s request for a pardon. (David Wachsman, a spokesman for World Liberty, told me that Binance cannot “exert control or influence over World Liberty Financial.”)

    Binance is currently seeking to end the federal monitoring that was imposed when the company pleaded guilty to violating anti-money-laundering laws. Now the company is goosing the Trumps’ stablecoin profits in another way. On December 11th, Binance dropped its fees for certain crypto trades if they were conducted in USD1. Then, on December 23rd, Binance began paying users of its platform to hold USD1: it announced that, for the next month, it would give users a bonus equal to about 1.7 per cent on up to fifty thousand dollars’ worth of USD1 holdings. If that return were annualized, it would yield an eye-popping twenty per cent. And, on January 23rd, Binance announced a combination of new giveaways to USD1 holders which roughly extended that offer. Many users leapt at these opportunities. In the months preceding Binance’s maneuvers, the total volume of USD1 in circulation had held steady at about two billion dollars. On December 25th, shortly after Binance announced its first giveaway, World Liberty announced that USD1’s volume had crossed three billion dollars. It has now climbed to roughly five billion, and most of that expansion appears to have taken place on the Binance platform.
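
    To unpack the annualization arithmetic (my own back-of-the-envelope calculation, using the figures reported above):

    ```python
    # Annualizing Binance's one-month USD1 bonus, using the figures in the
    # text (a back-of-the-envelope illustration, not Binance's own math).
    monthly_bonus = 0.017  # ~1.7% for one month, on up to $50,000 of USD1

    simple_annual = monthly_bonus * 12               # ~20.4%: the "twenty per cent" above
    compound_annual = (1 + monthly_bonus) ** 12 - 1  # ~22.4% if the offer rolled over monthly

    print(f"simple: {simple_annual:.1%}, compounded: {compound_annual:.1%}")
    ```

    Either way, the payout dwarfs what an ordinary savings account offers, which helps explain why users piled in.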

    David D. Kirkpatrick

  • The Dangerous Paradox of A.I. Abundance

    Even if A.I. doesn’t greatly accelerate economic growth, there’s the issue of how it affects employment and wages. The key issue here is whether A.I. primarily complements or substitutes for human labor. If it enables office workers to carry out their tasks more quickly and effectively, for example, it could raise their wages, preserve many existing jobs, and create well-paid new positions for people who are adept at working with A.I. agents. In a recent article, Séb Krier, a manager for policy development and strategy at Google DeepMind, argued that “future workers will likely function as orchestrators of intelligence,” overseeing what A.I. does. Over the longer term, A.I. could also create new jobs and new professions that we can’t currently envision, which is what other transformative technologies have done.

    But the fact remains that if A.I. agents can eventually carry out virtually all cognitive tasks without human intervention—a possibility touted by their promoters—many workers could be displaced, and firms may be reluctant to take on new ones. Given the evolving capacities of models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, it’s perhaps unwise to wholly discount the prediction from Dario Amodei, the C.E.O. of Anthropic, that within five years A.I. could eliminate half of all entry-level white-collar jobs. Elsewhere in the economy, who knows what could happen? But if the marriage of A.I. and robotics proceeds in other sectors along the lines it appears to be taking in the automotive industry, where autonomous vehicles are already being deployed in some places, taxi drivers and truck drivers likely won’t be the only blue-collar workers whose jobs are affected.

    “It’s clear that a lot of jobs are going to disappear: it’s not clear that it’s going to create a lot of jobs to replace that,” Geoffrey Hinton, one of the pioneers of the deep-learning models that underpin generative A.I., remarked at a conference last month. “This isn’t A.I.’s problem. This is our political system’s problem. If you get a massive increase in productivity, how does that wealth get shared around?” If A.I. abundance does materialize, that will be a central question.

    In a recent Substack article, Philip Trammell, an economist at the Stanford Digital Economy Lab, and Dwarkesh Patel, a tech podcaster, pointed out that in standard economic theory deploying more capital raises workers’ productivity and their wages, but reduces the rewards of further capital investment as diminishing returns set in. This “correction mechanism” keeps the over-all shares of income that accrue to labor and capital pretty constant over time. But if A.I. is easily substitutable for labor throughout the economy, and a potential shortage of workers is no longer a bottleneck to production, the stabilization effect disappears, capital incomes “can rise indefinitely,” and the owners of capital receive an ever-growing share of the economic pie, Trammell and Patel write. How far can this process go? “[O]nce A.I. renders capital a true substitute for labor,” Trammell and Patel write, “approximately everything will eventually belong to those who are wealthiest when the transition occurs, or their heirs.”
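
    To make the mechanism concrete, here is a minimal textbook sketch (my illustration, with standard functional forms; Trammell and Patel’s argument does not depend on these exact equations). In the standard case, production is something like Cobb-Douglas, and the return to capital falls as capital piles up:

    ```latex
    % Standard case: diminishing returns to capital keep income shares stable.
    Y = A K^{\alpha} L^{1-\alpha}, \qquad
    \frac{\partial Y}{\partial K} = \alpha A \Bigl(\frac{L}{K}\Bigr)^{1-\alpha} \to 0
    \ \text{as}\ K \to \infty, \qquad
    \text{capital's share} = \frac{K \,\partial Y/\partial K}{Y} = \alpha .

    % Perfect-substitutes case: A.I. capital can do everything labor does.
    Y = A\,(K + L), \qquad
    \frac{\partial Y}{\partial K} = A \ \text{(no diminishing returns)}, \qquad
    \text{capital's share} = \frac{K}{K + L} \to 1 \ \text{as}\ K\ \text{accumulates}.
    ```

    In the first case, the scarcity of labor keeps wages up and pins capital’s share of income at a constant; in the second, nothing caps the return to accumulating more capital, which is exactly the “correction mechanism” failing.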

    Trammell and Patel relate their analysis to Thomas Piketty’s book “Capital in the Twenty-First Century,” from 2014, which argued that, under certain conditions, rising inequality is inevitable under capitalism. To address this problem, Piketty called for a global tax on wealth. Trammell and Patel argue that Piketty’s pessimistic analysis hasn’t applied until now, but “he will probably be right about the future.” They also endorse Piketty’s policy solution, writing, “Assuming the rich do not become unprecedentedly philanthropic, a global and highly progressive tax on capital (or at least capital income) will then indeed be essentially the only way to prevent inequality from growing extreme.” (The tax would have to be global, the authors argue, because if capital doesn’t need much labor to produce things it would be even more mobile than it is now, which would enable it to evade national levies.)

    The article by Trammell and Patel has already received some pushback online, largely on the grounds that its assumption that capital is perfectly substitutable for labor is unrealistic. Brian Albrecht, the chief economist at the Portland-based International Center for Law & Economics, argues that the process of A.I. machines replacing workers is likely to take a long time, and that during that transition “standard economic principles apply.” Krier argued that the mere fact that A.I. can do something more cheaply or effectively than human workers doesn’t mean it will inevitably replace them. “People pay a lot to go see concerts and Olympic races even if in principle a model can generate the same song and a robot can run faster,” he wrote.

    John Cassidy

  • Five Things That Changed the Media in 2025

    The true-crime genre has been a cornerstone of the podcast market for years, and we could very well see a proliferation of newsletters about cold cases, wife murders, or gangland rackets. Sadly, this is a form that could easily be mimicked by ChatGPT, which can pull information off Wikipedia and other websites and stitch together stories that feel suspenseful.

    4. An A.I. scammer cons her way into print

    ChatGPT brings me to the next item on my list. In September, Nicholas Hune-Brown, a Toronto-based journalist and editor at The Local, put out an open call for stories about the privatization of health care in Canada. One of the better pitches he received came from Victoria Goldiee, a freelancer who boasted a résumé of publications that would intrigue any editor at a small but prestigious outlet such as The Local. Through some straightforward due diligence, Hune-Brown figured out that Goldiee had fabricated quotes in previous stories—sometimes from people who did not seem to exist—and concluded that she had likely used A.I. to write not only her articles but also her pitch. She did not appear to live in Toronto, as she had claimed when she pitched her story to Hune-Brown, and she had been similarly deceptive in her other work.

    Goldiee seems to have duped a long list of publications; the Guardian, Dwell, and the Journal of the Law Society of Scotland all retracted articles she had written for them. I do not think the editors in these places were naïve, nor do I think they made obvious mistakes that reflect widespread incompetence in the industry. And this does not necessarily augur a flood of A.I.-slop freelancers duping editors around the world—mostly because journalism pays terribly and there are better grifts to pull. But we are approaching a time when it will be hard to tell the difference between a daily feed of news generated by humans and one generated by a large language model. What happens when that line gets crossed?

    Or perhaps an anxious and financially strapped media industry will simply cross that Rubicon itself, deliberately. Last week, the Washington Post launched an audio product called Your Personal Podcast, which will allow users to custom-build a daily summary of the news. According to an internal e-mail, users will be able to pick their own hosts, select their areas of interest, and even “ask questions using our Ask the Post AI technology.” Presumably, these answers will be derived from the paper’s own reporters and stories, but when you replace the names and faces that gather the news with a soothing robot voice, how will readers and listeners begin to think about the news?

    3. Streamers get incentivized to talk about politics

    I’ve written about this plenty already this year, so I’ll keep it short: streaming, like all disaggregated social-media phenomena, is much less democratic and independent than it might seem. The algorithm is the great determiner of success and failure, and the people who are always trying to game its secrets tend, ultimately, to do the same things. This past year, we saw something that I’ll call, in a term coined by the internet, “politicsmaxxing.” Content creators such as Adin Ross and the Nelk Boys, who have only recently demonstrated an interest in politics, began talking about the news—most notably, about Gaza. I imagine that many of these people will stop talking about Palestine and politics the instant the algorithms change; still, given the influence that these new-media forms have on young men specifically, it would not be surprising to see this switch get turned on during every major election cycle.

    2. News traffic continues to decline

    In October, Northwestern University’s Medill School of Journalism released its annual report on the state of local news. During the past four years, according to the report, monthly page views for the hundred largest newspapers in the country dropped by forty-five per cent. The other stats in the report are no better. The number of “news deserts,” defined as areas that don’t have consistent local reporting, continued to grow, as more than a hundred and thirty newspapers shut down in 2025, about the same number that shut down the year before.

    Nobody seems to have much of a plan for what to do about any of this. Certainly, no one seems to know how to fill the need for local news—despite many efforts, which have had varying degrees of success. One possibility is that there is less demand for local news than journalists would like to believe, and that we now live in a world in which what people care most about are updates regarding Donald Trump. But I believe that the public is a bit sick, at this point, of endless Trump coverage, and that people will support local news efforts that try to meet them somewhere in their regular rounds, through the internet.

    Jay Caspian Kang

  • Sora 2 and the Limits of Digital Narcissism

    During the past few weeks, I’ve seen a proliferation of A.I.-generated video in my social-media feeds and group texts. The more impressive—or, at least, more personalized—of these have been the work of Sora 2, the updated version of OpenAI’s video-generation platform, which the company released on an invitation-only basis at the end of September. This iteration of Sora comes with a socially networked app, and it appears to be much better at integrating you and your friends, say, into a stock scene. What this means is that, when you open up Sora 2, you’ll likely see a video of someone you know winning a Nobel Prize, getting drafted into the N.B.A., or flying a bomber plane in the Second World War.

    When I first started seeing these videos, my internal monologue went something like this: Whoa, that really looks like me/my friend. Ha, that’s cool. This is going to be a problem for society.

    This set of thoughts, usually in this order, has been part of the A.I. industry’s pitch to consumers—and it’s worth pausing in this moment and asking if those reactions still hold the same sway as they did when ChatGPT was launched, three years ago, to justifiably breathless reviews and speculation about the future. Sora 2 has been met with relatively little fanfare, at least in comparison to earlier releases from OpenAI. I’ve been impressed by the videos I’ve seen, but I find myself feeling similarly meh. I opened the app, saw a video of my friend Max giving a TED Talk, chuckled, and then went back to watching YouTube. I haven’t opened it up since.

    I think of myself as a cautious A.I. believer—if I were a Christian, I would be the type who goes to church two Sundays a month, knows the hymns, but mostly keeps up his faith as a matter of social norms and the possibility that God actually is angry. I don’t believe artificial general intelligence, A.G.I., will gut the world, but I do think a lot of us will be working in new jobs in the next decade or so. (These changes, I suspect, will mostly be for the worse.) I also have spent a good deal of time working on documentaries, which has driven home for me how much time and money typically goes into producing even a good minute of traditional film. So, what’s changed? Why do these updates feel increasingly like the unremarkable ones that periodically hit my iPhone?

    The most powerful trick that A.I. pulls is to put you, or at least your digitized self, into some new dimension. It does this by, for instance, converting your family photos into Studio Ghibli animations, or writing in your voice, and, now, by grafting your face onto your favorite movie scenes. All of this tickles the vanities of the user—even if he’s the President of the United States, apparently—and creates an accessible and ultimately personal connection to the program. You might not be impressed by watching Claude, the large language model created by the A.I. startup Anthropic, write code for six straight hours because, chances are, you can’t follow what it’s doing, nor, unless you’re a coder, do you likely care too much about the possible ramifications. But, when you see yourself riding a dragon as if you’re in a copyright-gray-zoned version of “Game of Thrones,” you will probably take notice.

    For the most part, we enjoy A.I. because it lets us gaze into a better mirror, at least for a little while. And by giving well-timed teases of what the A.I. future might look like, the companies behind these programs nudge us to ask if the A.I. version of our lives might not be better than the real ones. It’s worth noting that this is more or less how strip clubs work. The customers are sold a fantasy, and they keep throwing money around because they hold out hope, however dim, that the teases will turn into something else. Under the spell of such intense flattery, we all become easy marks.

    The A.I. boom of the past few years has been built in the space between the second thought I had when I first saw Sora 2 videos and the third—between “Ha, that’s cool” and “This is going to be a problem for society.” Many of us have that third thought, but few of us, save for the A.I. doomers proselytizing about the existential threats posed by this technology, have sat with it long. We wonder if these cute, obsequious chatbots will someday try to kill us because that’s what happens in “The Terminator” and “Ex Machina” and “2001: A Space Odyssey.” We don’t actually have a working theory of how Claude or Grok will subjugate the human race, nor, I imagine, do we really believe that will happen.

    Why, once our brains are finished being mildly impressed with the latest step in A.I. technology, do we immediately start sketching out doom scenarios? The people making these predictions often happen to be financially incentivized to make A.I. seem as world-changing and dangerous as possible. There are true believers among the doomers, but I suspect that a good portion of people who work at A.I. companies have no strong opinions about the threats of A.G.I. Some of them, given how engineering talent follows capital in Silicon Valley, may have worked previously at a cryptocurrency startup. And if they spent any amount of time in crypto, especially during the early days of apocalyptic Bitcoin philosophizing, they may recognize the similarities between the rhetoric of Bitcoin maximalists—who preached about the inevitability of deflationary currency, the coming upheaval of the world markets, and the need to use this power for good—and that of the A.I. Cassandras, who say that Skynet is coming for us all. When the iPhone never changes and Bitcoin just becomes an investment vehicle, the only way left to grab people’s attention is to tell them they might all die.

    Jay Caspian Kang

  • The A.I.-Profits Drought and the Lessons of History

    In a 1987 article in the Times Book Review, Robert Solow, a Nobel-winning economist at M.I.T., commented, “You can see the computer age everywhere but in the productivity statistics.” Despite massive increases in computing power and the rising popularity of personal computers, government figures showed that over-all output per worker, a key determinant of wages and living standards, had stagnated for more than a decade. The “productivity paradox,” as it came to be known, persisted into the nineteen-nineties and beyond, generating a huge and inconclusive body of literature. Some economists blamed mismanagement of the new technology; others argued that computers paled in economic importance compared to older inventions such as the steam engine and electricity; still others blamed measurement errors in the data and argued that once these were corrected the paradox disappeared.

    Nearly forty years after Solow’s article, and almost three years since OpenAI released its ChatGPT chatbot, we may be facing a new economic paradox, this one involving generative artificial intelligence. According to a recent survey carried out by economists at Stanford, Clemson, and the World Bank, in June and July of this year, almost half of all workers—45.6 per cent, to be precise—were using A.I. tools. And yet, a new study, from a team of researchers associated with M.I.T.’s Media Lab, reports, “Despite $30–$40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.”

    The study’s authors examined more than three hundred public A.I. initiatives and announcements, and interviewed more than fifty company executives. They defined a successful A.I. investment as one that had been deployed beyond the pilot phase and had generated some measurable financial return or marked gain in productivity after six months. “Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L”—profit-and-loss—“impact,” they wrote.

    The survey interviews elicited a range of responses, some of which were highly skeptical. “The hype on LinkedIn says everything has changed, but in our operations, nothing fundamental has shifted,” the chief operating officer at a midsize manufacturing firm told researchers. “We’re processing some contracts faster, but that’s all that has changed.” Another respondent commented, “We’ve seen dozens of demos this year. Maybe one or two are genuinely useful. The rest are wrappers or science projects.”

    To be sure, the report points out that some firms have made successful A.I. investments. For example, it highlights efficiencies created by customized tools aimed at back-office operations, noting, “These early results suggest that learning-capable systems, when targeted at specific processes, can deliver real value, even without major organizational restructure.” The survey also cites some firms reporting “improved customer retention and sales conversion through automated outreach and intelligent follow-up systems,” which suggests that A.I. systems could be useful for marketing.

    But the idea that many companies are struggling to achieve substantial returns jibed with another recent survey, by Akkodis, a multinational consulting firm. After contacting more than two thousand business executives, the firm found that the percentage of C.E.O.s who are “very confident” in their firm’s A.I.-implementation strategies has fallen from eighty-two per cent in 2024 to forty-nine per cent this year. Confidence had also fallen among corporate chief technology officers, although not by as much. These developments “may reflect disappointing outcomes from previous attempts at digital or AI initiatives, delays or failures in implementation as well as concerns around scalability,” the Akkodis survey said.

    Last week, media accounts of the M.I.T. Media Lab study coincided with a fall in highly valued stocks associated with A.I., including Nvidia, Meta, and Palantir. Correlation isn’t causation, of course, and recent comments by Sam Altman, the chief executive of OpenAI, may have played a bigger role in the sell-off, which was surely inevitable at some point, given recent price increases. At a dinner with reporters, Altman said valuations were “insane” and used the term “bubble” three times in fifteen seconds, CNBC reported.

    Still, the M.I.T. study garnered a lot of attention, and after the initial raft of news stories about the research, a report emerged that the Media Lab, which has ties to many technology companies, was quietly restricting access to it. Messages that I left with the organization’s communications office and two of the report’s authors went unreturned.

    Although the report is more nuanced than some news coverage made out, it certainly raises questions about the grand economic narrative that has underpinned the tech boom since November, 2022, when OpenAI released ChatGPT. The short version of this narrative is that the economy-wide diffusion of generative A.I. would be bad for workers, particularly knowledge workers, but great for companies, and their shareholders, because it would generate a big leap in productivity and, by extension, profits.

    One possible reason this doesn’t seem to have happened yet recalls the suggestion that management failures were constraining the productivity benefits of computers in the nineteen-eighties and early nineties. The Media Lab study found that some of the most successful A.I. investments were made by startups that use highly customized tools in narrow areas of workflow processes. On the other side of the “GenAI Divide,” the study pointed to less successful startups that were “either building generic tools or trying to develop capabilities internally.” More generally, the report said, the divide between success and failure “does not seem to be driven by model quality or regulation, but seems to be determined by approach.”

    Conceivably, the novelty and complexity of generative A.I. may be holding some companies back. A recent study, by the consultancy firm Gartner, found that fewer than half of C.E.O.s are confident that their chief information officers are “AI-savvy.” But there is another possible explanation for the disappointing record highlighted in the Media Lab report: for many established businesses, generative A.I., at least in its current incarnation, simply isn’t all it’s been cracked up to be. “It’s excellent for brainstorming and first drafts, but it doesn’t retain knowledge of client preferences or learn from previous edits,” one respondent to the Media Lab survey said. “It repeats the same mistakes and requires extensive context input for each session. For high-stakes work, I need a system that accumulates knowledge and improves over time.”

    Of course, there are plenty of people who find A.I. useful, and there is academic evidence to back this up: in 2023, two economists at M.I.T. found that exposure to ChatGPT enabled participants in a randomized trial to complete “professional writing tasks” more quickly and improved the quality of their writing. The same year, other research teams identified productivity-enhancing outcomes for computer programmers who used GitHub’s Copilot, and for customer-support agents who were given access to proprietary A.I. tools. The Media Lab researchers found that many workers are using their personal tools, such as ChatGPT or Claude, at their jobs; the report refers to this phenomenon as the “shadow AI economy,” and comments that “it often delivers better ROI” than employer initiatives. But the question remains, and it’s one that senior corporate executives will surely be asking more frequently: Why haven’t more firms seen these types of benefits feeding through to the bottom line?

    Part of the problem may be that generative A.I., remarkable as it is, has limited application in many parts of the economy. Taken together, leisure and hospitality, retail, construction, real estate, and the care sector—child-minding and looking after people who are old or infirm—employ about fifty million Americans, but they don’t look like immediate candidates for an A.I. transformation.

    Another important thing to note is that adoption of A.I. throughout the economy could well be a lengthy process. In Silicon Valley, people like to move fast and break things. But economic history tells us that even the most transformative technologies, which economists refer to as general-purpose technologies, can’t be exploited to maximum effect until infrastructure, skills, and products that can complement them are developed. And this can be a long process. The Scottish inventor James Watt patented his improved steam engine in 1769. Thirty years later, most cotton factories in Great Britain were still powered by water wheels, partly because it was difficult to transport coal for use in steam engines. That didn’t change until the development of steam-powered railways in the early nineteenth century. Electricity also spread slowly and didn’t immediately lead to an economy-wide spurt in productivity growth. As Solow noted, the development of computers followed the same pattern. (From 1996 to 2003, economy-wide productivity growth finally increased, which many economists attributed to the delayed effect of information technology. Subsequently, however, it fell back.)

    John Cassidy
