ReportWire

Tag: OpenAI

  • Disney’s AI Slop Era Is Here

    When Bob Iger eagerly told investors that slop was on the menu at the House of Mouse last month, the Disney CEO mentioned that the studio was in talks with a major generative AI company to power its reckless new era. It’s no longer talks: Disney’s disastrous turn into the AI bubble is here.

    This morning the studio announced a major deal with OpenAI that will allow over 200 Disney characters—including ones from Pixar, Marvel, and Star Wars properties—to be used on OpenAI’s Sora video platform and in imagery generated by ChatGPT, making Disney the first major brand to license its content to the AI company.

    The three-year licensing deal, which remains subject to final negotiations and approval from both Disney’s and OpenAI’s executive boards, does not cover the likenesses of actors or any voice rights. As part of the agreement, Disney will also become a “major customer” of OpenAI, integrating ChatGPT into its workflow as well as using the company’s APIs to develop new products, tools, and experiences.

    “Technological innovation has continually shaped the evolution of entertainment, bringing with it new ways to create and share great stories with the world,” Iger said in a statement shared by OpenAI this morning. “The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works.

    “Bringing together Disney’s iconic stories and characters with OpenAI’s groundbreaking technology puts imagination and creativity directly into the hands of Disney fans in ways we’ve never seen before, giving them richer and more personal ways to connect with the Disney characters and stories they love,” the Disney CEO concluded.

    The news comes after Disney has spent the past few years joining several high-profile lawsuits alongside other Hollywood studios, aggressively pursuing generative AI platforms like Midjourney and MiniMax that allowed users to generate imagery of its characters in breach of Disney’s intellectual property. Indeed, this morning Variety reported that Disney lawyers sent a cease-and-desist letter to Google this week, accusing the company of “infringing Disney’s copyrights on a massive scale” by allowing its properties to be generated and distributed through its AI platforms.

    But even while fighting AI companies publicly, the studio has spent a while internally experimenting with generative AI in its movies—only to have those efforts dashed by concerns over both legal ramifications and potential public backlash.

    Over the summer, the Wall Street Journal reported on two separate instances, related to the production of the live-action Moana remake and Tron: Ares, where Disney floated the use of generative AI. In the former case, the technology reportedly would have been used to mask the use of Dwayne Johnson’s cousin, Tanoai Reed, as a stand-in for the performer on days he was unavailable. In the latter, Disney allegedly experimented with integrating a character powered by generative AI into Tron‘s grid of programs—named “Bit,” and envisioned as a potential companion for Jeff Bridges’ Kevin Flynn.

    In neither case did the plans come to fruition, with Disney wrapped up in legal concerns over the ultimate copyright of AI-assisted work, as well as fears that news of its use would engender further public enmity toward the studio. That fear reached a fever pitch months after the report, when Disney rode a wave of boycott calls and widespread criticism over its decision to temporarily suspend late-night host Jimmy Kimmel for comments he made on-air in the wake of the assassination of right-wing commentator Charlie Kirk, a move seen as the latest in a long line of attempted capitulations by the movie studio to the Trump administration.

    With its deal with OpenAI in place, those copyright concerns are seemingly no longer an issue for the studio. It remains to be seen if public backlash will be.


    James Whitbrook


  • The Double Bind of the AI Bubble Means We’re Screwed Whether the Tech Succeeds or Fails

    The stock market is largely being propped up by a promise of AI that has yet to deliver—setting up what could be a disastrous bubble. But the AI land rush isn’t merely a risky gamble. Wall Street is staking the health of the markets—and that of the broader economy with it—on a bet against the common good.

    It seems to be a lose-lose situation for those outside Wall Street and Silicon Valley: If the frenzied investment in artificial intelligence is overheated, as even OpenAI’s Sam Altman has warned, the markets could be hit hard enough to shake the economy as a whole—making an impact worse than that of the dot-com bust 25 years ago. But if the gamble pays off, it’ll mean a boom for a technology that could bring about widespread unemployment and social disruption.

    “If this works as fast as VC money seems to be hoping, society seems wholly unprepared for the consequences,” as Daniel Barcay, executive director of the Center for Humane Technology, tells Vanity Fair, citing potential mass unemployment and political instability. “If the market is over its skis, then what we’re likely to see is a race to the bottom of these dark monetization patterns. We were promised curing cancer, but we’re getting AI slop.”

    AI has already led to significant layoffs across a number of industries, including at Amazon, which announced it would slash 14,000 white-collar jobs last week. And Americans appear concerned that this is just the beginning, with more than two-thirds of respondents in a recent Reuters/Ipsos poll worrying that swaths of the workforce could be permanently displaced.

    That’s for good reason: A report released by Senator Bernie Sanders last month (which ironically made use of ChatGPT) estimated that close to 100 million American jobs could be eliminated in 10 years by AI. Other reports have reached similar conclusions. And while not all the data is quite so pessimistic, purveyors of the technology themselves have warned about looming job losses: Dario Amodei, CEO of AI company Anthropic, cautioned in an interview with Axios earlier this year that the tech could get rid of half of all entry-level white-collar jobs. As he described the scenario, AI could mean “cancer is cured, the economy grows at 10% a year, the budget is balanced—and 20% of people don’t have jobs.”

    Eric Lutz


  • The Feud Between Grok and ChatGPT Just Got Real

    Elon Musk has been accusing OpenAI and Apple of engaging in some sort of corporate conspiracy to inhibit smaller AI companies (you know, like his own) from flourishing. Now, his company, xAI, is suing both firms, accusing them of having used “anticompetitive” practices to quash their competitors (again, you know, like Musk).

    The lawsuit, which was filed in Texas on Monday, argues that the two companies have colluded to make ChatGPT the “only generative AI chatbot that benefits from billions of user prompts originating from hundreds of millions of iPhones.” The suit makes the case that Apple and OpenAI are “monopolists” that, in an effort to maintain an edge on their competitors, have manipulated the Apple App Store rankings to keep ChatGPT at the top of the charts. The lawsuit was preceded by public complaints from Musk about how Apple was holding back his own chatbot, Grok, from succeeding. Those complaints came after the latest version of Grok was released, with less-than-stellar download results.

    The New York Times writes:

    At first, Mr. Musk posted about how well the chatbot was doing in the App Store rankings. But when it didn’t rise to the No. 1 spot, Mr. Musk said on X that Apple had committed “an unequivocal antitrust violation” and added that Apple was “behaving in a manner that makes it impossible for any A.I. company besides OpenAI to reach #1 in the App Store.”

    xAI’s lawsuit largely repeats these claims: “Apple and OpenAI’s exclusive arrangement has made ChatGPT the only generative AI chatbot integrated into the iPhone,” the suit says. “This means that if iPhone users want to use a generative AI chatbot for key tasks on their devices, they have no choice but to use ChatGPT, even if they would prefer to use more innovative and imaginative products like xAI’s Grok.”

    The litigation seeks a jury trial and the recovery of “billions in damages.” Gizmodo reached out to Apple, OpenAI, and xAI for comment. In a statement shared with Gizmodo, an OpenAI spokesperson said: “This latest filing is consistent with Mr. Musk’s ongoing pattern of harassment.”

    Ongoing is right. Elon’s grudge against Altman might aptly be compared to the eternal flame. He first sued OpenAI last year, accusing the company of betraying its mission to humanity. A few months later, the lawsuit was withdrawn. Musk then sued the company again two months later, accusing it of a breach of contract, and claiming it had “deceived” him. The expanded litigation included OpenAI’s business partner, Microsoft. OpenAI subsequently countersued. Shifting tactics, Musk then attempted to buy OpenAI from Altman earlier this year in an unsolicited bid worth $97.5 billion. Altman turned Musk down with a snide tweet. “No thank you but we will buy twitter for $9.74 billion if you want,” he wrote on Musk’s own platform (Altman’s “offer” was notably $34 billion less than Musk paid for it). Elon responded by calling Altman a “Swindler.”

    During much of this time, Musk and Altman have added to tensions by bickering online. A couple of weeks ago, the two execs got into it on X, with Musk claiming that OpenAI wasn’t playing fair. “They are making it impossible for any other AI company to succeed by relentlessly promoting OpenAI in every way possible!” Musk whined. Altman replied: “This is a remarkable claim given what I have heard alleged that Elon does to manipulate X to benefit himself and his own companies and harm his competitors and people he doesn’t like.”

    Can all of this be traced back to the fact that Musk was once a co-founder and board member of OpenAI but now, having parted ways with the firm, must be forced to watch it flourish, while his own AI company is frequently the butt of jokes? There’s simply no way to tell. Why these guys do anything is a total mystery, but it certainly seems like there are some hurt feelings at stake (not to mention billions of dollars).

    Lucas Ropek


  • Apple’s AI Approach Is a Welcome Break From the Industry Arms Race


    We interact all day with rectangles—in our hands, on our desks, and on our walls. And as we’re typing, listening, watching, scrolling, and clicking away, the makers of these devices are engaged in a never-ending battle for dominance. The shapes may be the same, but the software, that intangible digital essence that flows through them, is not. For the majority of the past decade, software has been one of the most defining differentiating factors between devices, but in the past six months, AI capabilities have begun to matter most. It’s no longer about how you can touch them, but how you can talk to them.

    This was clearly evident this week at Apple’s Worldwide Developers Conference in Cupertino, California, as the company announced dozens of new AI features, all part of an umbrella of products called Apple Intelligence, and played catch-up with its competitors. While Google has made numerous missteps with its AI rollouts, the phone has not been one of them. Google was among the first to launch an AI-focused phone, in October 2023, with the company’s flagship Pixel 8 and Pixel 8 Pro. These devices run Google’s Gemini Nano AI, which enables a range of AI-processing features directly on the phones, including AI-generated summaries of voice recordings, suggested replies in messaging apps, and a slew of object-recognition tools for visual search. Since then, smartphones with built-in AI, like the Samsung Galaxy S24 Ultra and the Xiaomi 14 Ultra, have been unveiled.

    On Monday, Apple showcased a vastly improved Siri (if anyone needed an upgrade and a spa day, it was Siri) that can engage in free-form conversations and handle complex queries with nuance. Additionally, the company announced Writing Tools, which can summarize text and even give you a shortened version of a long group thread (thank you, Lord!). You can also create Genmojis, customized emojis that sound fun but will likely result in a lot of people texting back and asking, “What the hell is that supposed to be?” (Thankfully, Apple’s AI will be able to tell you.) The company also partnered with OpenAI, allowing Siri to tap into ChatGPT for more complex tasks, such as answering questions about photos or documents, or fielding questions Siri doesn’t know the answer to. (More on this partnership later.)

    Clearly, Wall Street investors, who get excited about anything with the term AI in it, were giddy over the latest updates from Apple. For a brief moment on Wednesday following the announcement, Apple surpassed Microsoft to become the most valuable company in the world, a slot that Apple once held but lost when Microsoft got its own AI bump after obsessively integrating artificial intelligence into its products. These days, a hardware tweak isn’t going to generate such enthusiasm from investors—or, likely, consumers.

    Case in point: earlier this month I got to try out the new 12.9-inch Apple iPad. I had planned to write about Apple’s latest rectangle, and to offer some sort of analysis on what it meant for the company. And while it’s always nice to work on a smaller, thinner, lighter, faster iPad, I was frankly underwhelmed by the updates to the hardware, because at the end of the day, it’s how the software integrates with those updates that makes one device better than another. At WWDC, Apple announced several new features under the banner of Apple Intelligence for iPadOS 18. These include a revamped Siri that can engage in more natural conversations and handle complex queries directly on the latest iPads, new writing tools that allow users to rewrite and proofread text, and the new Math Notes feature, which turns the iPad into an interactive blackboard that can solve simple or complex problems with live updates. Not to mention Smart Script in Notes, which lets users scribble out their work in their own handwriting, which is then turned into actual text—though it’s still TBD if Apple’s AI can read my chicken-scratch handwriting.

    Not everyone was happy with Apple’s announcements this week. The internet was filled with plenty of Apple obsessives who complained (yes, there are people on the internet who complain about things they have no involvement with) that the new AI features announced by the company were tepid at best, that the name Apple Intelligence was “cringe,” and that the look and feel of the graphics were “uninspired.” But Apple’s foray into AI is a much-needed counterbalance to the frenetic pace set by Silicon Valley. While other tech giants are in a mad dash, rolling out AI advancements without appearing to fully consider the repercussions (ahem, OpenAI), Apple is taking a much-needed slow and methodical approach. It is being “cringe” and “uninspired” and “tepid,” but honestly, that’s what the industry needs right now: a focus on thoughtful, well-implemented innovations rather than a rush to be first (ahem, Microsoft), even at the risk of causing societal harm (ahem, OpenAI and Microsoft).

    Tony Fadell, who was at Apple for most of his career and was behind the iPhone and iPod, noted that Apple is taking “baby steps” into AI, and that’s exactly what it should be doing. “Today’s AI LLMs [large language models] are mostly glorified demos for the really interesting applications. They are turning into a commodity because they’re overfunded by FOMO-driven VCs who don’t truly understand the technology limitations that drive real application requirements,” Fadell wrote on X. “Hallucinations are a real problem and there is no fundamental way to get rid of them. The expectations of customers are so much higher than what can be delivered today.”

    Nick Bilton


  • Student fights AI cheating allegations for using Grammarly


    (NewsNation) — University junior Marley Stevens faced a startling setback when a paper she worked on received a zero grade, plunging her into academic probation and jeopardizing her scholarship. The twist? She had used Grammarly, a popular writing plugin recommended by her university to refine her work.

    Stevens, recounting her ordeal, expressed initial disbelief upon receiving the email notifying her of the zero grade. “I thought he had sent the email to the wrong person because I worked super hard on my paper,” she said in a Sunday interview on “NewsNation Prime.”

    She didn’t expect that three months later, she would still be entangled in the aftermath, with her scholarship hanging by a thread. Grammarly says 30 million people use this tool to catch spelling errors, typos and grammar issues.

    Grammarly also uses generative AI, and an AI-detection service flagged Stevens’ assignment to her teacher as “unintentionally cheating.”

    “I’m on probation until Feb. 16 of next year. And this started when he sent me the email. It was October. I didn’t think that now in March of 2024 that this would still be a big thing that was going on,” Stevens said.

    Despite Grammarly being recommended on the University of North Georgia’s website, Stevens found herself embroiled in a battle to clear her name. The tool, briefly removed from the school’s website, later resurfaced, adding to the confusion over its acceptable use given the software’s reliance on generative AI.

    “I have a teacher this semester who told me in an email like, ‘Yes, use Grammarly. It’s a great tool.’ And they advertise it,” Stevens said.

    Grammarly’s Jenny Maxwell clarified the company’s stance, emphasizing its role as a partner in enhancing writing experiences while ensuring responsible usage. “Our AI engine inside of it helps people create better writing experiences that are grammatically correct, [with] fewer spelling issues,” she explained.

    Maxwell defended the tool’s integrity, highlighting its 15-year history of aiding students and professionals in crafting grammatically correct content. “We’ve recently added a generative engine within Grammarly,” Maxwell explained, emphasizing responsible usage and transparency in citing its assistance.

    Despite Stevens’ appeal and a subsequent GoFundMe campaign to rectify the situation, her options seem limited. The university’s stance, citing the absence of a suspension or expulsion, has left her in a bureaucratic bind.

    Maxwell, on behalf of Grammarly, extended support, including a $4,000 donation.

    Reflecting on the broader implications, Maxwell urged institutions to adapt their assessment methods in light of evolving technologies like AI.

    “Education is wrestling right now with how they need to evolve the way that they assess writing,” she remarked.

    NewsNation reached out to the university for comment and hasn’t heard back.

    Damita Menezes


  • VC Trae Stephens says he has a bunker (and much more) in talk about Founders Fund and Anduril | TechCrunch


    Last night, for an evening hosted by StrictlyVC, this editor sat down with Trae Stephens, a former government intelligence analyst turned early Palantir employee turned investor at Founders Fund, where Stephens has cofounded two companies of his own. One of these is Anduril, the buzzy defense tech company that is now valued at $8.4 billion by its investors. The other is Sol, which makes a single-purpose, $350 headset that weighs about the same as a pair of sunglasses and that is focused squarely on reading, a bit like a wearable Kindle. (Having put on the pair that Stephens brought to the event, I immediately wanted one of my own, though there’s a 15,000-person waitlist right now, says Stephens.)

    We spent the first half of our chat talking primarily about Founders Fund, kicking off the conversation by talking about how Founders Fund differentiates itself from other firms (board seats are rare, it doesn’t reserve money for follow-on investments, consensus is largely a no-no).

    We also talked about a former colleague who manages to get a lot of press (Stephens rightly ribbed me for talking about him during our own conversation), whether Founders Fund has concerns that Elon Musk is stretching himself too thin (it has stakes in numerous Musk companies), and what happens to another portfolio company, OpenAI, if it loses too much talent, now that it has let its employees sell some percentage of their shares at an $86 billion valuation.

    The second half of our conversation centered on Anduril, and here’s where Stephens really lit up. It’s not surprising. Stephens lives in Costa Mesa, Calif., and spends much of each day overseeing large swaths of the outfit’s operations. Anduril is also very much on the rise right now for obvious reasons.

    If you’d rather watch the talk, you can catch it below. For those of you who prefer reading, what follows is much of that conversation, edited lightly for length.

    Keith Rabois, who recently re-joined Khosla Ventures, was reported to have been “pushed out” of Founders Fund after a falling out with colleagues. Can you talk a bit about what happened?

    At Founders Fund, everyone has their own style. And one of the benefits that really comes down from Peter from the beginning, when we were first founded around 20 years ago, is that everyone should run their own strategy. I do strategy in a different way than [colleague] Brian [Singerman] does venture. It’s different than the way that Napoleon [Ta] — who runs our growth fund — does venture, and that’s good, because we get different looks that we wouldn’t otherwise get by having people executing these different strategies. Keith had a very different strategy. He had a very specific strategy that was very hands-on, very engaged, and I think Khosla is a very good fit for that. . .and I’m really happy that he found a place where he feels like he has a team that can back him up in that execution.


    You’ve talked in the past about Founders Fund not wanting to back founders who need a lot of hand holding . . .

    The ideal case for a VC is you have a founder who is going to be really good at running their own business, and there’s some unique edge that you can provide to help them. The reality is that that’s usually not the case. Usually the investors who think they’re the most value-added are the most annoying and difficult to deal with. The more a VC says ‘I’m going to add value,’ the more you should hear them say, ‘I’m going to annoy the ever-living crap out of you for the rest of the time that I’m on the cap table.’ If we believe that we — Founders Fund — are necessary to make the business work, we should be investing in ourselves, not the founders.

    I find it interesting that so much ink was spilled when Keith moved to Miami, and again when he moved back to the Bay Area in a part-time capacity. People thought Founders Fund had moved to Florida, but you’ve told me the bulk of the firm remains in the Bay Area.

    The vast majority of the team is still in San Francisco. . . Even when I joined Founders Fund 10 years ago, it was really a Bay Area game. Silicon Valley was still the dominant force. I think if you look at fund five, which is the one I entered at Founders Fund, something like 60% to 70% of our investments were Bay Area companies. If you look at fund seven, which is the last vintage, the majority of the companies were not in the Bay Area. So whatever people thought about Founders Fund relocating to Miami, that was never the case. The idea was that if things are geographically distributed, we should have people who are closer to the other things that are interesting.

    Keith said something earlier today at the [nearby] Upfront Summit about founders in the Bay Area being comparatively lazy and not willing to work nine to nine on weekdays or on Saturdays. What do you think about that and also, do you think founders should be working those hours?

    I used to work for the government, where, when you speak publicly, the goal is to say as many words as possible without saying anything . . .it’s just like the teacher from Charlie Brown, rah, rah, rah, rah, rah. Keith is really good at saying things that journalists ask about later. That’s actually good for Keith. He made us talk about him here on stage. He wins. I think the reality is that there aren’t enough people in the world that say things that people remember that are worth talking about later. My goal for the rest of this talk is to find something to say that someone will ask about later today or tomorrow, ‘Can you believe Trae said that?’

    I have a solution to that, but that comes later! OpenAI is a portfolio company; you bought secondary shares. It just oversaw another secondary sale. Its employees have made a lot of money (presumably) from these sales. Does that concern you? Do you have a stance on when is too soon for employees to start selling shares to investors?


    In tech, the competition for talent is really fierce, and companies want their employees to believe that their equity has real monetary value. Obviously it would be bad if you said, ‘You can sell 100% of your vested equity,’ but at a fairly early stage, I think it’s fine to say, ‘You’ve got 100,000 shares vested; maybe you can sell 5% to 10% of that in a company-facilitated tender, so that when you’re being compensated with equity, that’s real and that’s part of your total comp package.’

    But the scale is so different. This is a company with an $86 billion valuation [per these secondary buyers], so 5% to 10% is a lot.

    I think if you start seeing a performance degradation related to people checking out because they have too much liquidity, then yeah, that becomes a pretty serious problem. I haven’t seen that happen at OpenAI. I feel like they are super mission-motivated to get to [artificial general intelligence], and that’s a really meaty mission.

    You’re also an investor in SpaceX. You’re an investor in Neuralink. Are you also an investor in Boring Company?

    We’re an investor in Boring Company.

    Are you an investor in X?

    No. No, no, no, no. [Laughs.]

    But you’re in the business of Elon Musk, as I guess anyone who’s an investor would want to be. Are you worried about him? Are you worried about a breaking point?

    I’m not personally concerned. Elon is one of the most unique and generational talents that I think I’ll see for the rest of my life. There are always trade-offs. You go above a certain IQ point and the trade-offs become quite severe, and Elon has a set of trade-offs. He’s incredibly intense. He will outwork anyone. He’s brilliant. He’s able to organize a lot of stuff in his brain. And there are going to be other parts of life that suffer.

    You are very involved in the day-to-day of Anduril, more than I realized. You’ve built these autonomous vessels and aircraft. You recently introduced the RoadRunner, a VTOL that can handle varying payloads. Can you give us a curtain raiser about what else you’re working on?

    The nature of Anduril and what we’re doing there is that the threat that we’re facing globally is very different than it was in 2000 through 2020, when we were talking about non-state actors: terrorist organizations, insurgent groups, rogue states, things like that. It looks now more like a Cold War conflict against near-peer adversaries. And the way we engaged with great power conflict during the Cold War was by building these really expensive, exquisite systems: nuclear deterrents, aircraft carriers, multi-hundred-million-dollar aircraft missile systems. [But] we find ourselves in these conflicts where our adversaries are showing up with these low-cost attritable systems: things like a $100,000 Iranian Shahed kamikaze drone or a $750,000 Turkish TB2 Bayraktar or simple rockets and DJI drones with grenades attached to them with little gripper claws.

    Our response to that has been historically to shoot a $2.25 million Patriot missile at it, because that’s what we have, that’s what’s in our inventory. But this isn’t a scalable solution for the future. So since we were founded, Anduril has looked at: how can we reduce the cost of engagement, while also removing the human operator, removing them from the threat of loss of life . . .And these capabilities are not hardware capabilities largely; this is about autonomy, which is a software problem . . .so we wanted to build a company that’s software-defined and hardware-enabled, so we’re bringing these systems that are low cost and supplementing the existing capabilities to create a continued deterrent impact so that we avoid global conflict . . .You want to do things in attritable ways that reduce the cost of life and the capital costs of deploying these systems, [yet] that still allow you to demonstrate total technological superiority on the battlefield to the extent that you prevent conflict from ever happening.

    I’d read a story recently where someone from one of the defense ‘primes,’ as they’re called, rolled their eyes and said defense tech upstarts don’t know enough yet about mass production. Is that a concern for you? 

    Startups don’t know how to do mass production. But primes also don’t know how to do mass production. You can look at the Boeing 737 problem if you want some evidence of that. We have no supply of Stingers, Javelins, HIMARS, GMLRS, or Patriot missiles — they can’t make them fast enough. And the reason is they built these supply chains and manufacturing facilities that are more like the manufacturing facilities of the Cold War.

    To look at an analogy, when Tesla went out to build at massive scale, they said, ‘We need to build an autonomous factory from the ground up to actually hit the demand requirements for producing at a low cost and at the scale that we need to grow.’ And GM looked at that and said, ‘That’s ridiculous. This company will never scale.’ And then five years later, it was evident that GM was just getting absolutely smoked. So I think the primes are saying this because it’s the defensive reaction that they would have: to say these upstarts will never get it.

    Anduril is trying to build a Tesla. We’re going to build a modular, autonomous factory that’s going to be able to keep up with the demand that the customer is throwing at us. It’s a big bet, but we hired the guy that did it at Tesla. His name is Keith Flynn. He’s now our Head of Production.


    I’m sure you get asked a lot about the danger of autonomous systems. Sam Altman, at one of these events, told me years ago that it was among his biggest fears when it comes to AI. How do you think about that?

    Throughout the course of human history, we’ve gotten more and more violent. We started with, like, punching each other and then hitting each other with rocks and then eventually we figured out metals and we started making swords and bow and arrows and spears, and then catapults and then eventually we got to the advent of gunpowder. And then we started dropping bombs on each other, and then in the 1940s, we reached the point where we realized we had humanity-destroying capability in nuclear weapons. Then everyone kind of stopped. And we stood around and we said, ‘It would not be good to use nuclear weapons. We can all kind of agree we don’t actually want to do this.’

    If you look at the curve of that violent potential, it started coming down during the Cold War, where you had precision-guided munitions. If you need to take out a target, [the question became] can you shoot a missile through a window and only take out the target that you’re intending to take out? We got much more serious about intelligence operations so we could be more precise and more discriminating in the attacks that we delivered. I think autonomous systems are the far reach of that. It’s saying, ‘We want to prevent the loss of human life. What can we do to eliminate that, to the extent possible to be absolutely sure that when we take lethal action, we’re doing it in the most responsible way possible’ . . .

Am I scared of Terminator? Sure, there’s some potential hypothetical future where the AGI becomes sentient and decides that we will be better off making paper clips. We’re not close to that right now. No one in the DoD or any of our allies and partners is talking about sentient AGI taking over the world and that being the goal of the DoD. But in 2016, Vladimir Putin, in a speech to the Technical University of Moscow, said ‘He who controls AI controls the world,’ and so I think we have to be very serious about recognizing that our adversaries are doing this. They’re going to be building into this future. And their goal is to beat us to that. And if they beat us to it, I’d be much more concerned about that Terminator reality than if we, in a democratic Western society, are the ones that control the edge.

    Speaking of Putin, what is Anduril doing in Ukraine?

We’re deployed all over the world in conflict zones including Ukraine. You go into a conflict with the technology you already have, not with the technology you hope to have in the future. So much of the technology that the United States, the UK, and Germany sent over to Ukraine was Cold War-era technology. We were sending them things that were sitting in warehouses that we needed to get out of our inventory as quickly as possible. Anduril’s goal, aside from supporting those conflicts, is to build the capabilities that we need to build, to ensure that the next time there’s a conflict, we have a big inventory of stuff that we can deploy very quickly to support our allies.

    You’re privy to conversations that we probably can’t imagine. What is in your survival kit? And is it in a bunker?

I do have a bunker, I can confirm. What’s in my survival kit? I don’t think I have any interesting ideas here. It’s like, you want non-perishables. You want a big supply of water. It might not hurt to have some shotguns. I don’t know. Find your own bunker. It turns out you can buy Cold War-era missile silos that make for great bunkers, and there’s one for sale right now in Kansas. I would encourage any of you [in the audience] that are interested to check it out.

    You’re obviously very passionate about this country. You worked in government service. You work with Peter Thiel, who has thrown his resources behind people who’ve been elected to public office, including now, Ohio Senator J.D. Vance. Will we ever see you run for office?

I’m not personally opposed to the idea, but my wife — who I love very much — said she would divorce me if I ever ran for public office. So the answer is a strong no.


    Connie Loizos

    Source link

  • The New York Times Sues Open AI, Microsoft For Copyright Infringement

In what may be a landmark case, The New York Times has sued OpenAI and Microsoft for copyright infringement, saying the publication’s content is being used by the platform to feed automated chatbots, constituting “unlawful copying and use of The Times’s uniquely valuable works.”

It’s the first time a major media organization has sued an AI platform, although there are a handful of pending cases brought by IP owners ranging from Sarah Silverman to John Grisham to Getty Images.

The suit says OpenAI should be responsible for “billions of dollars in statutory and actual damages,” and that chatbot and training models that use copyrighted material from The Times should be destroyed.

Generative AI, a surging and well-funded field led by Microsoft-backed OpenAI, trains chatbots on large data sets. The suit says the platform uses “The Times’s content without payment to create products that substitute for The Times and steal audiences away from it.”

The Authors Guild, John Grisham, George R.R. Martin, Michael Connelly, Jodi Picoult, and a group of other famous fiction writers filed a class action lawsuit against OpenAI, claiming that its technology infringes on their works.

    Pulitzer Prize-winner Michael Chabon and Tony-winning playwright David Henry Hwang are among another group of writers that filed a class action lawsuit against Meta in federal court for having “copied and ingested” their works to train its LLaMA AI platform.

At issue in that case is training data for AI software programs that are designed to produce convincingly natural text in response to user prompts. They are trained, the suit says, “by copying massive amounts of text and extracting expressive information from it. The body of text is referred to as the training dataset.” The plaintiffs hold copyrights for their books and written works “and never consented to their use as training materials for LLaMA” — the AI platform of Facebook parent Meta.

    In July, Sarah Silverman and two other authors sued OpenAI and Meta for copyright infringement.

    More…

    jillg366

    Source link

  • How Jessica Lessin’s The Information Has Survived a Decade of Media Tumult

    The OpenAI saga was, in many ways, a perfect story for The Information. Reporters at the influential tech site spent the week of Thanksgiving obsessively chronicling the chaos inside the company behind ChatGPT, after its board of directors abruptly ousted its CEO Sam Altman. Five days later, Altman, the generative AI poster boy, was reinstated. By then, The Information had published 17 exclusive news articles on the company that had been picked up hundreds of times by other news outlets. “His firing was announced, and then everyone on my team was sending me all these tweets, where people were saying, ‘Oh, if The Information gets the scoop on this, I’ll subscribe,’ or ‘I really hope my Information subscription’s worth the money,’” editor in chief Jessica Lessin recalls. “And so it really felt like game on.” Lessin—who has followed Altman from the start, writing the first extensive profile on him back in 2005—supported her team throughout the week by, among other things, “reporting in bathrooms while serving my friendsgiving” and at the ENT doctor with her four-year-old.

    The small-but-mighty Silicon Valley publication, which turns 10 this week, has spent the past decade rolling out ad-free scoops and analysis to a targeted audience willing to cough up $399 a year for total access. Back in 2013, when Lessin left The Wall Street Journal to start her company, it was generally accepted that “legacy media was where serious journalism was. And then there were a couple of upstarts trying to do new things, but trying to fuel it with venture capital and ad dollars,” she says, adding, “Those businesses have evaporated.” But The Information, fueled by subscriptions, has survived and seemingly paved the way for a new cohort of outlets offering niche industry reporting at a premium price, from Puck to Punchbowl News. Today, more outlets, like Axios and Politico, are also offering B2B subscription products along with their free content.

“There were a number of media start-ups around that moment, and she was very unconventional—that she was doing paid subscriptions and was not that interested in social,” says Ben Smith, a former editor in chief of BuzzFeed News, who last year founded Semafor, one of the start-ups in which Lessin has invested. “It kind of pains me to say it, but obviously, she’s been totally vindicated, and most of her competitors are no longer around.” Those former competitors include BuzzFeed News, the Pulitzer Prize–winning online news site that shut down in April. There was also Recode, a brand Vox retired in March; Quartz, which is still around but has changed hands multiple times over the years, most recently to G/O Media; and Vice, which the Times, while reporting that the company had filed for bankruptcy in May, referred to as a “decayed digital colossus.” Lessin was ahead of her time with the business model she adopted and the story she wanted to own. “She’d come out of The Wall Street Journal, and there was a sense that The Information was applying the kind of East Coast financial reporting rigor to an ecosystem that the East Coast publications didn’t really seem to understand very well,” says Smith. Longtime subscriber Roelof Botha, the head of Sequoia Capital and former CFO of PayPal, agrees, noting that when Lessin started The Information, “The conventional wisdom at the time was, Oh, you’re not going to build a successful subscription-only business at that price point. Who knows if the market is big enough for people who are deeply passionate about technology news of the sorts that they would cover?” He adds, “She was on the right side of history.”

    “There is no CEO of any company of significance that was not paying attention to OpenAI over the past week,” Lessin tells me. “I think that was a fundamental bet we took 10 years ago—that you cannot be ahead or even keep up in business without immersing yourself in what’s happening in these companies and technologies.”

Today, per Lessin, The Information has 475,000 active readers (i.e., paid subscribers and unpaid newsletter subscribers), and the company expects to be profitable this year, growing its overall revenue by 30% year over year in 2023. They’ve been disciplined when it comes to growth, with only 65 full-time employees working across offices in San Francisco, New York, and Hong Kong, as well as remotely. Lessin is focused on growing The Information’s presence in Asia; they currently have three people assigned to the Hong Kong bureau and two hires in the works. Lessin, meanwhile, traveled with US commerce secretary Gina Raimondo to China in August—a trip she later recapped during a special event for subscribers.

    She’s also focused on building out The Information’s finance coverage, especially following their coverage of the Silicon Valley Bank crisis earlier this year. That was a “real eye-opener for me,” says Lessin, both in terms of how they were serving their audience—“a lot of subscribers said we saved them a lot of money,” she notes—and that they could compete on the finance beat, which she says has “led to a host of coverage around the banking sector overall.” Legacy media outlets like the Times, the Journal, and Bloomberg, says Lessin, are “going to be around forever,” but “they’re not as relevant” in “my world, and I think in business,” because of the size of the audience they aim to serve. “That model really limits how indispensable you can be, especially to a certain class of reader,” says Lessin.

    Among that targeted class is Jeff Bezos. “I read it all the time and have been a subscriber for years,” the Amazon founder told me in an email. “Jessica has done a terrific job. Always insightful on tech.” Another longtime subscriber is Netflix cofounder Reed Hastings. “Check it every day,” he tells me, noting that he’s “thrilled from a business-model standpoint that she’s succeeded”—he is, after all, “a subscriber guy”—but “as a reader, what I care about is the thoughtfulness. She curates amazing reporters, and the pieces, from my perspective, are written in-depth, as opposed to clickbaity. Probably subscription is the key to that because then they don’t get paid on clicks,” says Hastings. “People care enough about the stories to continue to renew.”

    Lessin maintains full ownership of the company and says she has no plans to sell. “I’m in this for the long term,” she says, a view that she says has been key to the site’s success. “You need the talent, you need the right business model, and kind of that alignment that we’re not going to go chase the latest fancy revenue thing,” she says. “Over the course of the 10 years, I’ve seen every legacy publication build a Snapchat team, and then a TikTok team, and then a video team. We built none of those teams and instead hired journalists or paid our journalists what they were worth. It’s a different formula, and it takes a lot of patience.”

    It’s worth noting that Lessin used her own money—“less than $1 million,” she previously said—to start The Information. Her father is a partner at the private equity giant TPG, and her husband, the tech entrepreneur Sam Lessin, won big on Facebook stock he received when Harvard pal Mark Zuckerberg bought his start-up in 2010. And there’s a perception that Lessin has worked to distance herself from—that she’s too close to the people she covers. Her personal relationship with Zuckerberg, for one, has come under scrutiny. “You learn to have dinner with people one night and then edit a tough but true piece about them the next day,” Lessin says, when I asked about the dynamic. “That’s what we do time and time again.”

    “Finding the truth and telling people why it matters is a fabulous business. It’s just really hard.” That’s why, she suggests, others haven’t been able to figure it out in the same way. “They don’t want to sit in a closet during Thanksgiving taking source calls,” she tells me.

    Charlotte Klein

    Source link

  • The Sam Altman Soap Opera Reflects Silicon Valley at Its Worst

    In a weekend filled with more twists and turns than Succession, the saga of Sam Altman’s departure from OpenAI unfolded like a rather dramatic and often ridiculous soap opera for the tech world. On Friday, Ilya Sutskever, a cofounder and board member of OpenAI, in a move reminiscent of a high-stakes thriller, informed Altman, the company’s CEO, that he was fired. Altman was told he was “not consistently candid in his communications” with the board of directors and that they had lost confidence in his ability to run the company. In the tech world, this set off so many rumors it was hard to keep track.

By Saturday morning, after sifting through all the speculation, it became clear that Altman was likely pushed out, according to two people I spoke to close to the board, and reporting from other outlets, including The New York Times, because of safety concerns around the speed with which he was ushering the company into the AI future, and what some feared was potentially an AI apocalypse. The board, after all, was not set up to pursue profits for OpenAI, but rather, to ensure the company didn’t destroy humanity. However, the drama didn’t end there. For a few hours, Altman and another cofounder, Greg Brockman, who quit as president after Altman was fired, were in talks with venture firms to start a new AI company. Then the news shifted to note that Altman was in talks to return to OpenAI as CEO. Then the board was called on to resign. Then the board wasn’t going to resign. Then Altman was not coming back as CEO. And in a final twist, Emmett Shear, the former CEO of Twitch, was appointed as interim CEO of OpenAI, and Altman is going off to Microsoft to run a new AI division—or, maybe, returning to OpenAI?

    For those of you following along at home who lost the plot of this bizarre story, I asked ChatGPT to summarize this into a haiku. “AI drama unfolds. CEO’s swift exit. Tech soap opera.” (Though, it should be noted, that ChatGPT still can’t accurately write a haiku, which should be 5, 7, and 5 syllables. This is 6, 6, and 5 syllables.) But beyond the drama and the AI poetry, what happened in Silicon Valley this weekend points to a much bigger problem with Silicon Valley, and the people who continue to populate it, which played out, where else, but on social media, specifically Twitter, or X, or the cesspool of the internet, or whatever it’s called these days.

During the saga, people were constantly tweeting their uninformed viewpoints on what was happening inside the boardroom of OpenAI. The panopticon of Twitter/X was so vividly clear when Altman emerged, tweeting a picture of himself wearing a guest pass at OpenAI’s office, and saying: “first and last time I ever wear one of these.” Then an employee posted a picture of Altman tweeting the picture of himself. Reporters were stationed outside the building reporting what kind of food and drink was being delivered to the company’s headquarters (boba tea and McDonald’s, in case you were wondering). Other employees were tweeting so many different colored heart emojis at each other that I didn’t actually know they came in that many colors. Through all of this, Silicon Valley mainstays like Marissa Mayer, Vinod Khosla, and Brian Chesky laid tweets at the feet of Altman, praising him like a deity and arguing for the board to reinstate him as the rightful CEO. Then, just when you didn’t think it could get more dramatic, 500 of the 770 employees at OpenAI signed a letter threatening to resign if the board did not quit… but wait… you won’t believe whose name was on the first page: Ilya Sutskever, the board member who fired Altman on Friday. Sutskever tweeted that he regretted his role in the firing. Which Altman retweeted with three heart emojis. (By Monday late-morning, over 700 of the 770 OpenAI employees had signed the open letter threatening to resign—I’m assuming the other 70 hadn’t woken up yet.)

    Nick Bilton

    Source link

  • The New York Times Has Had a Summer of AI Anxiety: “They’re Freaking Out”

The nation’s most influential news organization has spent the summer agonizing about artificial intelligence. “Do not put any proprietary information, including published or unpublished Times articles, into generative AI tools, such as ChatGPT, Bing Chat, Bard or others,” New York Times deputy managing editors Sam Dolnick and Steve Duenes, and director of photography Meaghan Looram, wrote in an email to newsroom and opinion staff on June 27. That includes “notes from your reporting…internal financial or audience data, or code from our products or stories,” the management team said. “Do not use generative AI tools in any aspect of our journalism without getting approval, until we further explore the opportunities and the risks they bring,” continued the memo, which noted that “the public terms of use for almost all of these tools also carry significant legal risks for protecting our intellectual property and other rights.” Even before the email went out, I’m told, management had started clamping down internally, warning some desk heads directly about putting any articles or reporting into AI models. “They’re freaking out,” said one Times staffer.

    The Times is not alone: Top executives like News Corp CEO Robert Thomson and IAC’s Barry Diller have been publicly sounding the alarm over AI for months. At this point most media managers have likely sent memos to staff about the developing technology; newsroom unions are contemplating the labor implications; legal and business departments know their IP has probably been used to train the models without compensation. “That’s already done. They’ve already robbed the candy store,” as another Times staffer put it. Still, the broader question—and fear—remains: What will generative AI do to the professional news industry? “I think, correctly, the Times is deathly afraid of what this can mean,” the staffer said. “It’s potentially an existential moment for the Times and for the news industry, so I think leadership is properly taking a very robust look at this. But what are we doing? No one knows.” (The Times declined to comment.)

The Gray Lady’s effort to address AI started back in the spring, with chief product officer Alex Hardiman leading the corporate effort while Dolnick and other senior editors lead the editorial front, according to a source familiar. The Times has dedicated roughly 60 staff in the newsroom to address the threats, and possible benefits, of AI in news. These staffers (most of whom are participating in these AI working groups on top of their regular jobs) are brainstorming, among other things, areas where the technology could be used in the newsroom, as well as ways to ensure the paper’s human-led reporting can remain distinct at the Times—particularly in a world where more news is written by AI, according to another source familiar. (The technology was also top of mind during the Times’s Maker Week, an annual event soliciting ideas from people across their workforce: using AI for chatbots in the Cooking section and for gift finders on Wirecutter were two ideas presented during the event last month.) The working groups are scheduled to convene on August 17 for a meeting, which I’m told has some 80 people on the invite list.

    So far the Times has kept its AI deliberations internal. Semafor recently reported that the paper is not part of a coalition of media organizations hoping to negotiate with tech companies over how artificial intelligence uses their content, an effort IAC is leading. News Corp, as I previously reported, is also not part of the coalition, though Thomson recently said that the company is in active discussion with AI and tech companies “to establish a value for our unique content sets and IP that will play a crucial role in the future of AI.” The Associated Press cut its own deal with OpenAI—a two-year agreement to share access to certain news content and technology that marks one of the first official news-sharing agreements between a major US news company and an AI company. Meanwhile, the Times updated their terms of service with restrictions on data scraping.

    The Times is proceeding cautiously. “Our approach of innovating strategically, rather than chasing the trend of the moment, has served us well and remains a blueprint for how we intentionally complement human expertise with digital tools,” Hardiman and Dolnick wrote in another internal memo this summer. “We’re keenly aware of previous moments of techno-euphoria that barreled past red flags that only later became obvious. In this case, the risks—for society, journalism and our own business—are starkly clear from square one, and we need to balance our enthusiasm with the sober reality that we need to work through the legal, journalistic and business implications of these tools before we can put them into practice.”

    Charlotte Klein

    Source link

  • Christopher Nolan Compares Artificial Intelligence To The Atomic Bomb

After a screening of his new film, Oppenheimer, on Saturday night, director Christopher Nolan suggested that his period piece couldn’t have come at a better time, as we’re in an “Oppenheimer moment,” he said. But this time, the dangerous technology isn’t being created in a New Mexico lab; it’s coming from Silicon Valley.

    Speaking with the BBC, Nolan compared the Manhattan Project, the World War II-era effort to develop the world’s first nuclear weapons, to the current race to develop intelligent algorithms and artificial intelligence. Oppenheimer is “coming at a time when there are a lot of new technologies that people start to worry about the unintended consequences,” Nolan said.

    “When you talk to leaders in the field of AI, as I do from time to time, they see this moment right now as their Oppenheimer moment. They’re looking to his story to say, ‘What are our responsibilities? How can we deal with the potential unintended consequences?’ Sadly, for them, there are no easy answers.”

    Nolan elaborated on those concerns at a panel Saturday night in New York that followed a preview screening of his film, Variety reports. The panel was moderated by Meet the Press anchor Chuck Todd, who asked Nolan if he thought the tech industry was “re-examining Oppenheimer” as they continue to develop AI.

    “They say that they do,” Nolan responded. “It’s helpful that that’s in the conversation, and I hope that that thought process will continue. I am not saying Oppenheimer’s story offers any easy answers to those questions, but it at least can show where some of those responsibilities lie and how people take a breath and think, ‘Okay, what is the accountability?’”

    At present, though, Nolan worries that that question of accountability isn’t being asked enough by people in Hollywood. “People in my business talking about it, they just don’t want to take responsibility for whatever that algorithm does,” he said. “Applied to AI, that’s a terrifying possibility. Terrifying.”

The use of AI is one of the sticking points in the current WGA and SAG-AFTRA strikes, the latter of which prompted the stars of Oppenheimer to walk out of the film’s red carpet premiere last week.

    Matt Damon, Emily Blunt, Cillian Murphy and Florence Pugh pose on the red carpet at the UK premiere of “Oppenheimer.” Photo by HENRY NICHOLLS/AFP via Getty Images

    “With the labor disputes going on in Hollywood right now, a lot of it—when we talk about AI, when we talk about these issues—they’re all ultimately born from the same thing, which is when you innovate with technology, you have to maintain accountability,” Nolan said Saturday. 

Nolan has also voiced his support for the striking actors and writers, and has said that he won’t start work on another film until the strikes conclude. “No, absolutely,” he told the BBC when asked if he’d be writing during the strike period. “It’s very important that everybody understands it is a very key moment in the relationship between working people and Hollywood.”

    “This is about jobbing actors, this is about staff writers on television programs trying to raise a family, trying to keep food on the table,” he said. “This is not about me, this is not about the stars of my film.”

    Eve Batey

    Source link

  • “Don’t Get Screwed Again”: News Publishers Are Banding Together in the Face of AI Threat

    Publishers, according to Diller, need to band together and declare: “You cannot scrape our content, you cannot take it, you cannot take it transformatively…you cannot take it and use it in real time to actually cannibalize everything.”

    This notion of banding together also entered the bloodstream during Jessica Lessin’s annual gathering of news leaders a few weekends ago, hosted at the Information CEO’s rustic-chic home in Jackson Hole, Wyoming. It was an off-the-record who’s who of Gen X and elder-millennial media luminaries, who lounged on beanbag chairs, high-top stools, and a large cozy sectional: Ben Smith, Lydia Polgreen, Jesse Angelo, S. Mitra Kalita, Nicholas Carlson, Brian Stelter, Kevin Delaney, Sam Jacobs, Rebecca Blumenstein, Noah Shachtman, and so on, along with Uber CEO Dara Khosrowshahi, Netflix co-CEO Greg Peters, Quora CEO Adam D’Angelo, and venture capitalist Vinod Khosla. Talk of AI was heavy in the air, I’m told, and during one freewheeling session, New York Times executive editor Joe Kahn caused some of his fellow attendees to prick up their ears when he speculated about a group effort among publishers to “make sure they don’t get screwed again,” as one person who was present summarized Kahn’s remarks. (Another attendee noted that “Joe doesn’t talk a lot in these things, so when he does, you kind of listen.”)

    The Times didn’t have anything to add for this story, but a spokesman shared a memo that went out to employees on June 7 and described a “cross-company effort” related to AI, which will sort through questions including: “How do we ensure that companies that use generative AI respect our intellectual property, brands, reader relationships and investments?” The trade group Digital Content Next—whose members include the Times, NBCUniversal News Group, The Washington Post, BBC News, Axel Springer, Bloomberg, Condé Nast (the parent company of Vanity Fair), and numerous other major media organizations—recently issued a set of “principles for development and governance of generative AI.” Among these: “Publishers are entitled to negotiate for and receive fair compensation for use of their IP,” and “copyright laws protect content creators from the unlicensed use of their content.”

    To get a better sense of some of the legal nuances at play, I spoke to a pair of lawyers currently working on this issue. They said publishers are looking at three ways in which AI is harvesting their journalism: the training of large language models, the surfacing of content in response to search queries, and the synthesizing of content to create summaries for the user. “There are imbalances in terms of our abilities to negotiate,” one of the lawyers said, “because historically, some of these platforms didn’t feel the need to come to the table because they’re so big. Do we have bargaining power to get them to pay us for the use of our content in training, surfacing, and synthesizing, or not? Do they care? There’s some issues there—copyright issues, competition issues. Section 230 will be a big thing.” 

    “I don’t think any publisher would say that they produce content because they want to get paid for AI training,” the other lawyer added. “But economically, there’s a big problem here, because copyrighted content is private property, and it’s being used by commercial entities to create something that wasn’t licensed. In no other industry would we accept that private property is fair play as long as you build something else with it.” 

    Another issue, of course, is that AI queries could theoretically supplant Google Search, thereby starving publishers of monetizable clicks. This would be particularly problematic for ad-supported mass-traffic publications like, say, Jimmy Finkelstein’s The Messenger, which is chasing an ambitious goal of 100 million unique monthly visitors.

    I interviewed Finkelstein last month, and this next part didn’t make it into the published version of our chat, but here’s something that he floated: “I just took an airplane from Palm Beach. In the old days, you would google, ‘Is such and such a safe plane?’ Four publications would arise and you would decide which is the most credible, and you’d click on it and read the article and get an answer.” When using an AI chatbot instead of Google Search, he continued, “there’s no place to click! So that’s obviously an issue.” (See also: one of John Herrman’s latest Screen Time columns, “Will Google’s AI Plans Destroy the Media?”)

    As discussions heat up between publishers, on the one hand, and AI platforms/AI developers, on the other, negotiations will presumably be informed by the hard-fought content skirmishes of yore. The lawyers I consulted said there’s now an “overarching view” and “fundamental optimism” that publishers will ultimately be able to prove the need for payments and create a structure to make that happen.

    For what it’s worth, Sam Altman, the CEO of ChatGPT’s parent company, OpenAI, has embarked on a charm offensive with lawmakers, even urging regulation of the AI technology he’s helping to shepherd into the world. He also appears to have cultivated relationships with media executives. During Diller’s Semafor appearance, Diller called Altman a “close friend” and said, “I think he’s sympathetic [to publishers], but also I think he realizes the dragon that he’s got.”

    Thomson apparently has rubbed elbows with Altman as well. During the Q&A that followed Thomson’s prepared remarks at the World Congress of News Media in May, he mentioned that he’d met with Altman a couple of months earlier. Over a meal, Thomson recalled, they’d talked about AI’s “potential.”

    A few beats later in the Q&A, Thomson emphasized the importance of comprehending not only the potential of AI but “the real dangers.” “I don’t think there’s going to be regulation of AI anytime soon,” he said. “I mean, AI at the moment is more ambiguity than anything else, and just—my experience of Washington, and I’ve been down there a lot in the last 15 years, is that there won’t be any coherent, cogent response in a regulatory way. So it’s going to be up to us, and certainly our journalists, to write about it, to explain it, and, as media companies, to advocate where appropriate.”

    Joe Pompeo


  • ChatGPT Made Me Question What It Means to Be a Creative Human


    I don’t say this lightly, but this tech is one of the most astonishing, and terrifying, technologies I’ve ever seen, and I’ve been writing about technology for almost two decades. Not only because of what it is capable of today, with its ability to output truly “creative” text (or at least text that appears creative), but because of what this technology will be capable of in the next year or two—and the number of jobs it could (or should I say “will”) replace when it gets there. For example, when I asked ChatGPT to list 50 jobs that could be replaced by ChatGPT, it spit out this list in less than a second: customer service representative, technical support specialist, sales representative, receptionist, data entry clerk, call center agent, transcriptionist, legal secretary, medical secretary, executive assistant, personal assistant, journalist, novelist, travel agent, insurance agent, retail salesperson, bookkeeper, court reporter, marketing manager, public relations manager, advertising manager, and on and on and on.

    While there are already examples of crude AI writing simple articles for news outlets today—some basic stock reports, sports updates, and weather-related stories are written by robots—the advent of ChatGPT, and the coming iterations of this tech, illustrate that in the coming year or so, my editor (if he or she still has a job) might not ask me or another journalist to write a story with an analysis of what Elon Musk will do to Twitter, or a detailed look at how people voted in Georgia to determine how they may vote in 2024; instead, they could simply type a prompt into an app like ChatGPT. The same is true with art and design and illustration, as we’ve seen a spate of other new AI products released in recent months that are threatening all areas of the arts and creative careers. There are the text varieties, like GPT-3, which is the basis for ChatGPT and is capable of reading and writing like a human. And then there are the astonishing image-generation abilities of computers, like DALL·E 2 and Stable Diffusion, which can draw or paint anything in mere seconds, in any style you want, based on a single command. 

    Already, I’m hearing anecdotal reports from friends with kids in high school and college that some professors and teachers who have learned about the technology are in a panic after seeing ChatGPT and what it’s capable of, with some proclaiming the impending death of the high school and college essay. ChatGPT is already being used to automatically generate essays based on a prompt or topic, which threatens to make the traditional process of brainstorming, researching, and writing essays obsolete. Why waste your time doing all that when you can just put your homework assignment as a prompt into ChatGPT and receive a complete essay in a matter of seconds? You might think a professor or teacher could decipher the difference between something written by an AI and something written by a human, but in my experience that is effectively impossible; even an AI can’t reliably tell the difference. One of the things you can do with ChatGPT is give it a paragraph or sentence and have it continue writing the rest of the essay. I did this with a made-up science fiction story and asked people to tell me which parts of the essay were written by me and which were written by AI. No one could tell the difference; it felt a little like the Pepsi Challenge. Then I fed the same text back to the AI and asked it to tell me which parts were written by a computer and which were written by a human, and ChatGPT guessed incorrectly.

    In 2017, a research paper titled “Attention Is All You Need” landed on the internet to little fanfare outside of the esoteric tech circles of people interested in the cutting edge of natural language processing and artificial intelligence. The paper talked about “dominant sequence transduction models,” an idea called the “Transformer,” and “recurrent neural networks,” and for 99.999999% of society, trying to read the theories in this 11-page report would be akin to trying to read a book written in a language you’ve never heard of while wearing a blindfold. But the paper, written by a team of researchers at Google Brain, the company’s AI research team, proposed a new approach to natural language processing—the branch of artificial intelligence concerned with giving computers the ability to understand human language in much the same way human beings can—that has arguably changed the field forever. 

    The paper essentially reimagined how to model information processing. The researchers argued that traditional models—which worked like a librarian who carefully sorts each book into its proper place on the shelves, making sure that everything is organized and easy to find—were inefficient. Instead, they proposed an “attention-based” model. It works like this: when the model is looking for something, it scans all the books and focuses its attention on the ones that contain the information it needs, without worrying about organizing all the books on the shelves.
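    The library analogy above maps onto a short calculation. What follows is a minimal, illustrative sketch (not code from the paper itself) of the scaled dot-product attention the “Attention Is All You Need” authors proposed: each query is scored against every key, the scores are normalized, and the output is a weighted blend of the values—the “books” the model focuses on most. The variable names `Q`, `K`, and `V` follow the paper’s query/key/value terminology; the toy sizes are made up for the example.

    ```python
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Simplified single-head attention.

        Scores each query against every key, softmax-normalizes the
        scores, and returns the score-weighted sum of the values.
        """
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # how relevant each key is to each query
        # Softmax over the keys (subtract the max for numerical stability).
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V  # blend of values, weighted by attention

    # Toy example: 2 queries attending over 3 key/value pairs of dimension 4.
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(2, 4))
    K = rng.normal(size=(3, 4))
    V = rng.normal(size=(3, 4))
    out = scaled_dot_product_attention(Q, K, V)
    print(out.shape)  # (2, 4): one blended output vector per query
    ```

    The point of the sketch is the shape of the idea, not the numbers: nothing here sorts or indexes the “library” in advance—relevance is computed on the fly for whatever the query happens to be.
    
    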

    These programs have since been fed millions of examples of human writing and art and music and creativity, and the machines have since learned how to replicate these styles. All of this has made me ask myself: What does it mean to be human in a future where robots can potentially be more creative than us? Can the next iteration of AI (or the one after that) have better ideas than humans? Or will these things just become tools that help us? 

    Members of the pro-AI tech set concede that this technology has the potential to automate many tasks that today require human creativity, but they point out that machines are not truly capable of understanding or appreciating art in the same way that humans are. Machines do not have consciousness; a computer can’t feel what it’s like to fall in love or lose a loved one or be tormented to a point that you have to chop off your ear. The argument goes: Machines can mimic our creations, but they cannot truly understand the emotions and experiences that inspire us to create. But, to me, if machines are able to imitate art with emotion and depth because they are learning from things humans have created over hundreds of years, then the machines are, in turn, an extension of those human emotions. A machine does not have to be conscious or capable of experiencing emotions to create art that is meaningful to us. The value and significance of the art lie not in the machine’s ability to feel, but in the ability of the viewer to appreciate it. 

    Nick Bilton
