ReportWire

Tag: openai

  • Inside OpenAI’s 9-Person Safety Committee Led by All-Powerful Sam Altman

    Sam Altman will have a key role in OpenAI’s new safety committee. Justin Sullivan/Getty Images

    Following the dissolution of an OpenAI team focused on artificial intelligence safety, the company has formed a new safety and security committee that will be led by CEO Sam Altman and other board members to guide its safety recommendations going forward, as revealed by the startup in a blog post yesterday (May 28). The announcement also noted that OpenAI has begun training a new A.I. model to succeed GPT-4, the one currently powering its ChatGPT chatbot.

    The committee’s formation comes shortly after OpenAI’s “Superalignment” team, which worked on preparations regarding the long-term risks of A.I., was disbanded, with members dispersed across different areas of the company. Key employees overseeing the safety team left OpenAI earlier this month, with some citing concerns about the company’s current trajectory.

    The “Superalignment” team was led by Ilya Sutskever, OpenAI’s co-founder and former chief scientist who played a lead role in the unsuccessful ousting of Altman last November. Sutskever announced his resignation on May 14, ending his almost decade-long tenure at the company. Jan Leike, who co-ran the Superalignment team alongside Sutskever, left the startup shortly afterwards and in an X post claimed that “safety culture and processes have taken a backseat to shiny products” at OpenAI. He recently joined Anthropic, a rival A.I. startup founded by former OpenAI employees Dario and Daniela Amodei.

    “It’s pretty clear that there were these different camps within OpenAI that were leading to friction,” Sarah Kreps, a professor of government and director of the Tech Policy Institute at Cornell University, told Observer. “It seems that the people who were not aligned with Sam Altman’s vision have off-ramped either forcibly or by their own volition, and what’s left now is that they’re all speaking with one voice and that voice is Sam Altman.”

    Members of the new safety and security committee will be responsible for advising OpenAI’s board on recommendations regarding company projects and operations. But with its CEO leading the group, “I would not anticipate that these other committee members would have anywhere close to an equal voice in any decisions,” said Kreps. In addition to Altman, it will be headed by OpenAI chairman and former Salesforce co-CEO Bret Taylor alongside board members Nicole Seligman, a former Sony Entertainment executive, and Adam D’Angelo, a co-founder of Quora. D’Angelo notably was the only member of the original OpenAI board to stay on as a director after its failed firing of Altman.

    Meanwhile, former board members Helen Toner and Tasha McCauley recently urged for increased A.I. regulation in an Economist article that described Altman as having “undermined the board’s oversight of key decisions and internal safety protocols.”

    The new committee is filled with OpenAI insiders

    OpenAI’s technical and policy experts who have previously expressed their support for Altman will make up the rest of the committee. These include Jakub Pachocki, who recently filled Sutskever’s role as chief scientist, and Aleksander Madry, who oversees OpenAI’s preparedness team. Both researchers publicly resigned amid Altman’s brief removal last year and returned following his reinstatement. The committee is rounded out by Lilian Weng, John Schulman and Matt Knight, who respectively oversee the safety systems, alignment science and security teams at OpenAI and in November were among the more than 700 employees who signed a letter threatening to quit unless Altman was reinstated.

    OpenAI also revealed plans to consult cybersecurity officials like John Carlin, a former Justice Department official, and Rob Joyce, previously a cybersecurity director for the National Security Agency. “Happy to be able to support the important security and safety efforts of OpenAI!” said Joyce in an X post announcing the news. The company’s newly formed committee will spend the next 90 days developing processes and safeguards, which will be subsequently given to the board and shared in a public update describing adopted recommendations.

    While OpenAI didn’t provide a timeline for its new A.I. model, its blog post described it as one that will “bring us to the next level of capabilities” on its path to artificial general intelligence, or A.G.I., a term used for A.I. systems matching the capabilities of humans. Earlier this month, the company unveiled an updated version of ChatGPT based on a new A.I. model known as GPT-4o that showcased enhanced capabilities across audio, image and video.

    “We’ve seen in the last several months and last few days more indications that OpenAI is going in an accelerated direction toward artificial general intelligence,” said Kreps, adding that the company “seems to be signaling that there’s less interest in the safety and alignment principles that had been part of its focus earlier.”

    Alexandra Tremayne-Pengelly

  • Did OpenAI steal Scarlett Johansson’s voice? 5 Critical Lessons for Entrepreneurs in The AI Era | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    Did OpenAI steal Scarlett Johansson’s voice? OpenAI has since paused the “Sky” voice feature, but Johansson argues that this is no coincidence. In response, Johansson delivers a masterclass for entrepreneurs on navigating the AI era successfully.

    In today’s discussion, we delve into what this controversy means for business owners, highlighting five critical AI strategies they must deploy. We also explore essential methods to protect your intellectual property and leverage AI for a competitive edge—insights vital for keeping your venture ahead in the AI revolution.

    Take the AI skills quiz here (available for a limited time) and equip yourself with practical knowledge by grabbing a copy of my new book, ‘The Wolf is at the Door – How to Survive and Thrive in an AI-Driven World.’

    Ben Angel

  • How Microsoft Plans to Squeeze Cash Out of AI

    Photo-Illustration: Intelligencer; Photo: Microsoft

    Microsoft has invested $13 billion in OpenAI, providing the money-losing start-up with the huge amounts of capital and computing power necessary for its continued operations. In exchange, Microsoft gets access to OpenAI’s technology for use in its own products as well as a real and reputational stake in the AI boom. “We are below them, above them, around them,” Satya Nadella said in March about OpenAI.

    Microsoft’s investment has been cast as a strategic masterstroke, through which stodgy old Microsoft sent Google into a panic and established itself as a player in tech’s Next Big Thing. Good for it! And the company’s stock price. But Microsoft’s long-term plans to make money with OpenAI’s technology — and to monetize AI in general — are still taking shape. There was a Bing relaunch, which promised clean, simple responses to questions, which Nadella suggested was existentially threatening to Google’s core business; one year later, it was still error-prone, its interface was cluttered, and its market share was still hovering in the low single digits. Last year, the company started charging for subscriptions to Copilot, its AI assistant software, in hopes that enterprise customers would shell out for the promise of increased employee productivity in Microsoft Office and Github; according to the company, uptake has been solid.

    AI subscriptions are, so far, the tech industry’s favorite idea for making money from AI. This is conceptually simple — your customers are paying you for access to a new product. The problem is that compute-heavy cloud services like ChatGPT and Copilot remain extremely expensive to run, meaning that in some cases even paying customers might cost the companies money. Computing costs are likely to fall, and AI-model efficiency could improve, but, much like the basic assumption that there’s a huge market for these things just waiting to be tapped, these are bets, and not particularly safe ones.

    This week, Microsoft announced that it would be integrating AI more deeply into even more of its products, including Windows, which, among many other chatbot-shaped things, is set to get a feature called Recall, described by the company as “an explorable timeline of your PC’s past.” This feature, which will be turned on by default for Windows users, records and “recalls” everything you do on your computer by taking near-constant screenshots, processing them with AI, and making them available for future browsing through a conversational interface. You can watch Nadella try to explain the feature and its appeal to the Wall Street Journal’s Joanna Stern:

    Like smartphones, personal computers already collect and produce vast amounts of data about their users, but this is a big step in the direction of surveillance — constant, open-ended, and mostly unredacted — offered in exchange for a strange feature that Microsoft’s CEO is quite insistent its users will enjoy. Nadella attempts to preempt any concerns by pointing out that the AI models powering Recall run locally — that is, on the user’s device, not in the cloud. This is, at best, a partial solution to a problem of Microsoft’s own creation — a problem Windows users didn’t know they had until this week.

    On-device AI processing is interesting to Microsoft for other reasons, too. In a world where AI services are expensive to run, installing them in every popular Microsoft product represents a real risk. In a world where the processing necessary to run chatbots, generate images, or surveil your own computer usage to the maximum possible extent occurs on users’ devices, the cost of deploying AI is vastly lower.

    For Microsoft, that is. If customers expect to fully utilize these new features, which are becoming increasingly integral to the core Windows product, they will have to buy new machines, some of which Microsoft also showed off this week. According to The Verge:

    “All of Microsoft’s major laptop partners will offer Copilot Plus PCs, Microsoft CEO Satya Nadella said at an event at the company’s headquarters on Monday. That includes Dell, Lenovo, Samsung, HP, Acer, and Asus; Microsoft is also introducing two of its own as part of the Surface line. And while Microsoft is also making a big push to bring Arm chips to Windows laptops today, Nadella said that laptops with Intel and AMD chips will offer these AI features, too.”

    These PCs will come with a “neural processor,” roughly akin to a graphics card: a separate piece of hardware that can handle AI-related processing tasks more quickly and with lower power use than existing CPUs and GPUs. In conjunction with Microsoft’s shift to a more efficient mobile processor architecture for laptops and desktops — something Apple committed to years ago, selling huge numbers of laptops in the process — the company is using AI to make the case to its customers that this is the next stage of the upgrade cycle. It’s time to get a new PC, says the company that makes the software that powers most PCs and that sells PCs of its own.

    Microsoft, like many other tech giants, says it’s all in on AI, but its approach includes hedges against AI deflation, too. Maybe customers flock to new AI features, in which case Microsoft will have shifted computing expenses back to its billions of customers, improving margins on subscription products and selling lots of Windows licenses in the process. If they don’t, though — if people keep using their Windows machines in approximately the same way they have for decades — Microsoft makes money anyway and leaves its cloud computing capacity free to sell to other firms that want to try their luck building AI tools. Multi-market sector domination with considerable leverage over different but overlapping groups of customers: Not such a bad deal!


    John Herrman

  • Meta and Google want to make AI deals with Hollywood studios

    Meta and Google are offering Hollywood studios millions of dollars in hopes of striking licensing deals that could improve their models for AI-generated video, according to a report in Bloomberg. The companies have reportedly offered “tens of millions of dollars,” though it’s unclear what will come of the talks.

    According to the report, Netflix and Disney “aren’t willing to license their content” but have “expressed interest in other types of collaborations.” Warner Bros. Discovery has reportedly indicated “a willingness to license some of its programs.”

    A spokesperson for Meta declined to comment, and Google didn’t immediately respond to a request for comment. The companies, it seems, are hoping such deals would help advance their video generation tools. Google recently showed off a text-to-video model and tapped Donald Glover to promote its capabilities, and Meta is also working on AI-generated video.

    There’s been something of an arms race among AI firms to forge licensing deals with media companies. OpenAI and NewsCorp announced a multi-year deal to bring news content to ChatGPT earlier this week, and Meta is also considering paying publishers for access to “news, photo and video content” to train its AI models, Business Insider reported.

    But, as Bloomberg points out, Hollywood studios may have some reservations about such deals. Though AI editing tools may be appealing, there has been widespread concern in the entertainment industry about how AI companies might use creative work. That tension burst into full view this week when Scarlett Johansson accused OpenAI of copying her voice for its “Sky” assistant in ChatGPT after she declined to partner with the company herself. OpenAI has denied that it tried to mimic her voice, though the company has yet to explain the one-word “her” post Sam Altman made after the GPT-4o demo.

    Karissa Bell

  • OpenAI didn’t intend to copy Scarlett Johansson’s voice, ‘The Washington Post’ reports

    OpenAI cast the actor behind Sky’s voice months before Sam Altman contacted Scarlett Johansson, and it had no intention of finding someone who sounded like her, according to The Washington Post. The publication said the flier OpenAI issued last year sought actors with “warm, engaging [and] charismatic” voices who were between 25 and 45 years old and non-union, but it reportedly didn’t specify that the company was looking for a Scarlett Johansson sound-alike. If you’ll recall, Johansson accused the company of copying her likeness without permission for its Sky voice assistant.

    The agent of Sky’s voice told The Post that the company never talked about Johansson or the movie Her with their talent. OpenAI apparently didn’t tweak the actor’s recordings to sound like Johansson either, because her natural voice sounded like Sky’s, based on the clips of her initial voice test that The Post had listened to. OpenAI product manager Joanne Jang told the publication that the company selected actors who were eager to work on AI. She said that Mira Murati, the company’s Chief Technology Officer, made all the decisions about the AI voices project and that Altman was not intimately involved in the process.

    Jang also told the publication that to her, Sky sounded nothing like Johansson. Sky’s actress told The Post through her agent that she just used her natural voice and that she has never been compared to Johansson by the people who know her closely. But in a statement Johansson’s team shared with Engadget, she said that she was shocked OpenAI pursued a voice that “sounded so eerily similar” to hers that her “closest friends and news outlets could not tell the difference” after she turned down Altman’s offer to voice ChatGPT.

    Johansson said that Altman first contacted her in September 2023 with the offer and then reached out again just two days before the company introduced GPT-4o to ask her to reconsider. Sky has been one of ChatGPT’s voices since September, but GPT-4o gave it the power to have more human-like conversations with users. That made its similarities to Johansson’s voice more apparent — Altman tweeting “her” after OpenAI demonstrated the new large language model didn’t help with the situation and invited more comparisons to the AI virtual assistant Johansson voiced in the movie. OpenAI has paused using Sky’s voice “out of respect” for Johansson’s concerns, it wrote in a blog post. The actor said, however, that the company only stopped using Sky after she hired legal counsel who wrote Altman and the company to ask for an explanation.

    If you’re wondering if Sky truly does sound like Johansson, we embedded a video below so you can judge for yourself. It’s a recording of Johansson’s statement as read by the Sky voice assistant, posted by Victor Mochere on YouTube. Opinions in the comment section are divided, with some saying that it does sound like her if she were robotic, while others say that the voice sounds more like Rashida Jones.

    Mariella Moon

  • The Low-Paid Humans Behind AI’s Smarts Ask Biden to Free Them From ‘Modern Day Slavery’

    AI projects like OpenAI’s ChatGPT get part of their savvy from some of the lowest-paid workers in the tech industry—contractors often in poor countries paid small sums to correct chatbots and label images. On Wednesday, 97 African workers who do AI training work or online content moderation for companies like Meta and OpenAI published an open letter to President Biden, demanding that US tech companies stop “systemically abusing and exploiting African workers.”

    Most of the letter’s signatories are from Kenya, a hub for tech outsourcing, whose president, William Ruto, is visiting the US this week. The workers allege that the practices of companies like Meta, OpenAI, and data provider Scale AI “amount to modern day slavery.” The companies did not immediately respond to a request for comment.

    A typical workday for African tech contractors, the letter says, involves “watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day.” Pay is often less than $2 per hour, it says, and workers frequently end up with post-traumatic stress disorder, a well-documented issue among content moderators around the world.

    The letter’s signatories say their work includes reviewing content on platforms like Facebook, TikTok, and Instagram, as well as labeling images and training chatbot responses for companies like OpenAI that are developing generative-AI technology. The workers are affiliated with the African Content Moderators Union, the first content moderators union on the continent, and a group founded by laid-off workers who previously trained AI technology for companies such as Scale AI, which sells datasets and data-labeling services to clients including OpenAI, Meta, and the US military. The letter was published on the site of the UK-based activist group Foxglove, which promotes tech-worker unions and equitable tech.

    In March, the letter and news reports say, Scale AI abruptly banned people based in Kenya, Nigeria, and Pakistan from working on Remotasks, Scale AI’s platform for contract work. The letter says that these workers were cut off without notice and are “owed significant sums of unpaid wages.”

    “When Remotasks shut down, it took our livelihoods out of our hands, the food out of our kitchens,” says Joan Kinyua, a member of the group of former Remotasks workers, in a statement to WIRED. “But Scale AI, the big company that ran the platform, gets away with it, because it’s based in San Francisco.”

    Though the Biden administration has frequently described its approach to labor policy as “worker-centered,” the African workers’ letter argues that this approach has not extended to them, saying “we are treated as disposable.”

    “You have the power to stop our exploitation by US companies, clean up this work and give us dignity and fair working conditions,” the letter says. “You can make sure there are good jobs for Kenyans too, not just Americans.”

    Tech contractors in Kenya have filed lawsuits in recent years alleging that tech-outsourcing companies and their US clients such as Meta have treated workers illegally. Wednesday’s letter demands that Biden make sure that US tech companies engage with overseas tech workers, comply with local laws, and stop union-busting practices. It also suggests that tech companies “be held accountable in the US courts for their unlawful operations abroad, in particular for their human rights and labor violations.”

    The letter comes just over a year after 150 workers formed the African Content Moderators Union. Meta promptly laid off all of its nearly 300 Kenya-based content moderators, workers say, effectively busting the fledgling union. The company is currently facing three lawsuits from more than 180 Kenyan workers, demanding more humane working conditions, freedom to organize, and payment of unpaid wages.

    “Everyone wants to see more jobs in Kenya,” Kauna Malgwi, a member of the African Content Moderators Union steering committee, says. “But not at any cost. All we are asking for is dignified, fairly paid work that is safe and secure.”

    Caroline Haskins

  • Scarlett Johansson demands answers after OpenAI releases voice

    Scarlett Johansson demands answers after OpenAI releases voice “eerily similar” to hers – CBS News


    Scarlett Johansson is demanding answers from OpenAI and its CEO, Sam Altman, after it released a ChatGPT voice that she says sounds “eerily similar” to her own. Johansson claims she declined Altman’s offer for her to voice the product. Jo Ling Kent has the details.

  • Scarlett Johansson Lawyers Up Over ChatGPT Possibly STEALING Her Voice! – Perez Hilton

    Scarlett Johansson does NOT want to be turned into an AI chatbot… And she ain’t about to let ChatGPT use a knockoff version of her voice for theirs!

    In a lengthy statement released by her rep on Tuesday, the Avengers actress accused OpenAI CEO Sam Altman of pursuing a voice for ChatGPT’s latest system that sounds similar to hers… a little too similar. She began by explaining how, months ago, she declined his offer to be his robot voice! She wrote:

    “Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people.

    Related: Sorry, That One Terrible Harrison Butker Quote Is FAKE!

    But she wasn’t interested:

    “After much consideration and for personal reasons, I declined the offer. Nine months later, my friends, family and the general public all noted how much the newest system named ‘Sky’ sounded like me.”

    If you haven’t heard it, take a listen for yourselves (below):

    That sounds SO much like her, right?! The 39-year-old continued:

    “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference.”

    And shockingly, the AI CEO even seemed to hint that the voice’s similarity to Scarlett’s in 2013’s Her, in which she voiced an AI chatbot, was intentional. Scarlett wrote:

    “Mr. Altman even insinuated that the similarity was intentional, tweeting a single word “her” — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.”

    It’s true! As of this writing, the tweet is still up! And remember, that wasn’t before asking her. That was months after the rejection! Seems like he knew what he was doing — and made it clear to everyone else, too!

    WTF!!!

    And apparently, he came knocking on her door AGAIN just before the ChatGPT system dropped:

    “Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there.”

    Wait, TWO DAYS?? Surely he didn’t plan to have her record everything that quickly… So is the supposition here that he might have used her voice already? The same way AI steals everything from the internet? Kinda seems that way!

    The Lucy star explained her legal team was getting to the bottom of it:

    “As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr. Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the ‘Sky’ voice. Consequently, OpenAI reluctantly agreed to take down the ‘Sky’ voice.”

    All she asked was they “detail the exact process” by which they made it, and instead of doing that they DELETED IT?? Man, that doesn’t look guilty at all, huh?

    ScarJo concluded her statement:

    “In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

    This is so WILD!!! And SO creepy!

    Thoughts?? Let us know down in the comments!

    [Images via Warner Bros. Pictures/YouTube & MEGA/WENN]

    Perez Hilton

  • Scarlett Johansson says OpenAI used her likeness without permission for its ‘Sky’ voice assistant

    Actor Scarlett Johansson has accused OpenAI of copying her voice for one of the voice assistants in ChatGPT despite denying the company permission to do so. Johansson’s statement on Monday came hours after OpenAI said that the company would no longer use the voice in ChatGPT but did not provide a reason why.

    “Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system,” Johansson wrote in the statement that was first shared with NPR. “He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people.” Johansson added that she declined the offer after “much consideration and for personal reasons,” but when OpenAI demoed GPT-4o, the company’s latest large language model last week, “my friends, family, and the general public all noted how much the newest system named ’Sky’ sounded like me.”

    When Johansson saw OpenAI’s newest demo, she said she was “shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference.” She also revealed that Altman had contacted her agent just two days before the company revealed GPT-4o and asked her to reconsider, but released the system anyway before she had a chance to respond.

    “The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” an OpenAI spokesperson said in a statement sent to Engadget that the company attributed to Altman, OpenAI’s co-founder and CEO. “We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”

    Even though “Sky” has been one of the voice assistants in ChatGPT since September 2023, GPT-4o, which the company announced last week, takes things a step further. The company said that the new model is closer to “much more natural human-computer interaction” and demoed its executives having nearly human-like conversations with the voice assistant in ChatGPT. This invited comparisons to Samantha, the virtual voice assistant played by Johansson in the 2013 movie Her who has an intimate relationship with a human being. Shortly after the event, Altman tweeted a single word — “her” — in an apparent reference to the film.

    On Monday, OpenAI said that it was pausing the use of “Sky” in ChatGPT and released a lengthy post revealing how the company hired professional voice actors to create its own virtual assistants, and denying any similarities with Johansson’s voice.

    “We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice — Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” OpenAI wrote and added that each of its performers, who it declined to name for privacy reasons, was paid “above top-of-market rates, and this will continue for as long as their voices are used in our products.”

    This move, Johansson said in her statement, only came after she hired legal counsel who wrote two letters to Altman and OpenAI asking for an explanation. “In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” Johansson wrote. “I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

    Update, May 20 2024, 9:09 PM ET: This story has been updated to include a statement from OpenAI.

    Pranav Dixit

  • Scarlett Johansson Says OpenAI Ripped Off Her Voice for ChatGPT

    Last week OpenAI revealed a new conversational interface for ChatGPT with an expressive, synthetic voice strikingly similar to that of the AI assistant played by Scarlett Johansson in the sci-fi movie Her—only to suddenly disable the new voice over the weekend.

    On Monday, Johansson issued a statement claiming to have forced that reversal, after her lawyers demanded OpenAI clarify how the new voice was created.

    Johansson’s statement, relayed to WIRED by her publicist, claims that OpenAI CEO Sam Altman asked her last September to provide ChatGPT’s new voice but that she declined. She describes being astounded to see the company demo a new voice for ChatGPT last week that sounded like her anyway.

    “When I heard the release demo I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” the statement reads. It notes that Altman appeared to encourage the world to connect the demo with Johansson’s performance by tweeting out “her,” in reference to the movie, on May 13.

    Johansson’s statement says her agent was contacted by Altman two days before last week’s demo asking that she reconsider her decision not to work with OpenAI. After seeing the demo, she says she hired legal counsel to write to OpenAI asking for details of how it made the new voice.

    The statement claims that this led to OpenAI’s announcement Sunday in a post on X that it had decided to “pause the use of Sky,” the company’s name for the synthetic voice. The company also published a blog post outlining the process used to create the voice. “Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the post said.

    Sky is one of several synthetic voices that OpenAI gave ChatGPT last September, but at last week’s event it displayed a much more lifelike intonation with emotional cues. The demo saw a version of ChatGPT powered by a new AI model called GPT-4o appear to flirt with an OpenAI engineer in a way that many viewers found reminiscent of Johansson’s performance in Her.

    “The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” Sam Altman said in a statement provided by OpenAI. He claimed the voice actor behind Sky’s voice was hired before the company contacted Johansson. “Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”

    The conflict with Johansson adds to OpenAI’s existing battles with artists, writers, and other creatives. The company is already defending a number of lawsuits alleging it inappropriately used copyrighted content to train its algorithms, including suits from The New York Times and authors including George R.R. Martin.

    Generative AI has made it much easier to create realistic synthetic voices, creating new opportunities and threats. In January, voters in New Hampshire were bombarded with robocalls featuring a deepfaked voice message from Joe Biden. In March, OpenAI said that it had developed a technology that could clone someone’s voice from a 15-second clip, but the company said it would not release the technology because of how it might be misused.

    Updated 5-20-2024, 9 pm EDT: This article has been updated with comment from OpenAI CEO Sam Altman.

    Will Knight

  • A ScarJo-Sounding AI Voice Is No Longer Her(e)

    A ScarJo-Sounding AI Voice Is No Longer Her(e)

    Just want her back.
    Photo: Warner Bros./Everett Collection

    ChatGPT better keep Colin Jost’s wife’s voice out of its mouth. The OpenAI voice “Sky,” which sounded eerily similar to Scarlett Johansson’s in the movie Her, was suspended on May 20. “We’ve heard questions about how we chose the voices in ChatGPT, especially Sky,” OpenAI posted on Twitter. “We are working to pause the use of Sky while we address them.” In Her, Johansson played an AI named Samantha that Joaquin Phoenix’s Theodore falls in love with. In a blog post further explaining the situation, the company said that “Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice.” Given that OpenAI CEO Sam Altman tweeted solely the word “Her” on May 13, that does not seem super-feasible, but hey, whatever you say, guys.

    Later that same day, Johansson released a statement revealing that Altman reached out twice, asking her to voice the AI herself, but she declined. She explained in the statement, “He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI… Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there.” Maybe next time, they should make HAL 9000.

    Jason P. Frank

  • Brand New GPT-4o Revealed: 3 Mind Blowing Updates and 3 Unexpected Challenges for Entrepreneurs | Entrepreneur

    Brand New GPT-4o Revealed: 3 Mind Blowing Updates and 3 Unexpected Challenges for Entrepreneurs | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    Unveiling OpenAI’s GPT-4o: The latest AI with vision, auditory, and emotional intelligence abilities is revolutionizing industries. How will it affect your business?

    In today’s in-depth discussion, I uncover three astonishing updates in GPT-4o’s technology poised to redefine customer interaction, marketing strategies, and operational efficiency. We also confront three critical challenges this AI evolution brings, including ethical considerations, market disruptions, and the competitive landscape—essential insights to keep your venture at the forefront of innovation.

    Take the AI skills quiz here (available for a limited time) and equip yourself with practical knowledge by grabbing a copy of my new book, The Wolf is at the Door – How to Survive and Thrive in an AI-Driven World.

    Ben Angel

  • It’s Time to Believe the AI Hype

    It’s Time to Believe the AI Hype

    Folks, when dogs talk, we’re talking Biblical disruption. Do you think that future models will do worse on the law exams?

    If nothing else, this week proves that the rate of AI progress isn’t slowing at all. Just ask the people building these models. “A lot of things have happened—internet, mobile,” says Demis Hassabis, cofounder of DeepMind and now Google’s AI czar, in a post-keynote chat at I/O. “AI is going maybe three or four times faster than those other revolutions. We’re in a period of 25 or 30 years of massive change.” When I asked Google search VP Liz Reid to name a big challenge, she didn’t say it was to keep the innovation going—instead, she cited the difficulty of absorbing the pace of change. “As the technology is early, the biggest challenge is about even what’s possible,” she says. “It’s understanding what the models are great at today, and what they are not great at but will be great at in three months or six months. The technology is changing so fast that you can get two researchers in the room who are working on the same project, and they’ll have totally different views when something is possible.”

    There’s universal agreement in the tech world that AI is the biggest thing since the internet, and maybe bigger. And when non-techies see the products for themselves, they most often become believers too. (Including Joe Biden, after a March 2023 demo of ChatGPT.) That’s why Microsoft is well along on a total AI reinvention, why Mark Zuckerberg is now refocusing Meta to create artificial general intelligence, why Amazon and Apple are desperately trying to keep up, and why countless startups are focusing on AI. And because all of these companies are trying to get an edge, the competitive fervor is ramping up new innovations at a frantic pace. Do you think it was a coincidence that OpenAI made its announcement a day before Google I/O?

    Skeptics might try to claim that this is an industry-wide delusion, fueled by the prospect of massive profits. But the demos aren’t lying. We will eventually become acclimated to the AI marvels unveiled this week. The smartphone once seemed exotic; now it’s an appendage no less critical to our daily life than an arm or a leg. At a certain point AI’s feats, too, may not seem magical any more. But the AI revolution will change our lives, and change us, for better or worse. And we haven’t even seen GPT-5 yet.

    Time Travel

    Sure, I could be wrong about AI. But consider the last time I made such a call. In 1995, I joined Newsweek—the same organ where Clifford Stoll had just dismissed the internet as a hoax—and at the end of the year argued of this new digital medium, “This Changes Everything.” Some of my colleagues thought I’d bought into overblown hype. Actually, reality exceeded my hyperbole.

    In 1995, the Internet ruled. You talk about a revolution? For once, the shoe fits. “In the long run it’s hard to exaggerate the importance of the Internet,” says Paul Maritz, a Microsoft VP. “It really is about opening communications to the masses.” And 1995 was the year that the masses started coming. “If you look at the numbers they’re quoting, with the Web doubling every 53 days, that’s biological growth, like a red tide or population of lemmings,” says Kevin Kelly, executive editor of WIRED. “I don’t know if we’ve ever seen technology exhibit that sort of growth.” In fact, there’s a raging controversy over exactly how many people regularly use the Net. A recent Nielsen survey pegged the number at an impressive 24 million North Americans. During the course of the year the discussion of the Internet ranged from sex to stock prices to software standards. But the most significant aspect of the Internet has nothing to do with money or technology, really. It’s us.

    Steven Levy

  • OpenAI strikes deal to put Reddit posts in ChatGPT

    OpenAI strikes deal to put Reddit posts in ChatGPT

    OpenAI and Reddit announced a partnership on Thursday that will allow OpenAI to surface Reddit discussions in ChatGPT and for Reddit to bring AI-powered features to its users. The partnership will “enable OpenAI’s tools to better understand and showcase Reddit content, especially on recent topics,” both companies said in a joint announcement. As part of the agreement, OpenAI will also become an advertising partner on Reddit, which means that it will run ads on the platform.

    The deal is similar to the one that Reddit struck with Google in February, which is reportedly worth $60 million. A Reddit spokesperson declined to disclose the terms of the OpenAI deal to Engadget and OpenAI did not respond to a request for comment.

    OpenAI has been increasingly striking partnerships with publishers to get data to continue training its AI models. In the last few weeks alone, the company has signed deals with the Financial Times and Dotdash Meredith. Last year, it also partnered with German publisher Axel Springer to train its models on news from Politico and Business Insider in the US and Bild and Die Welt in Germany.

    Under the new arrangement, OpenAI will get access to Reddit’s Data API, which, the company said, will provide it with “real time, structured, and unique content from Reddit.” It’s not clear what AI-powered features Reddit will build into its platform as a result of the partnership. A Reddit spokesperson declined to comment.

    Last year, getting access to Reddit’s data, a rich source of real-time, human-generated, and often high-quality information, became a contentious issue after the company announced that it would start charging developers to use its API. As a result, dozens of third-party Reddit clients were forced to shut down and thousands of subreddits went dark in protest. At the time, Reddit stood its ground and argued that large AI companies were scraping its data with no payment. Since then, Reddit has been monetizing its data by striking such deals with Google and OpenAI, whose progress in training their AI models depends on having access to it.

    Pranav Dixit

  • Prepare to Get Manipulated by Emotionally Expressive Chatbots

    Prepare to Get Manipulated by Emotionally Expressive Chatbots

    The emotional mimicry of OpenAI’s new version of ChatGPT could lead AI assistants in some strange—even dangerous—directions.

    Will Knight

  • OpenAI’s Chief AI Wizard, Ilya Sutskever, Is Leaving the Company

    OpenAI’s Chief AI Wizard, Ilya Sutskever, Is Leaving the Company

    Ilya Sutskever, cofounder and chief scientist at OpenAI, has left the company. The former Google AI researcher was one of the four board members who voted in November to fire OpenAI CEO Sam Altman, triggering days of chaos that saw staff threaten to quit en masse and Altman ultimately restored.

    Altman confirmed Sutskever’s departure Tuesday in a post on the social platform X. In the months after Altman’s return to OpenAI, Sutskever had rarely made public appearances for the company. On Monday, OpenAI showed off a new version of ChatGPT capable of rapid-fire, emotionally tinged conversation. Sutskever was conspicuously absent from the event, streamed from the company’s San Francisco offices.

    “OpenAI would not be what it is without him,” Altman wrote in his post on Sutskever’s departure. “I am happy that for so long I got to be close to such [a] genuinely remarkable genius, and someone so focused on getting to the best future for humanity.”

    Altman’s post announced that Jakub Pachocki, OpenAI’s research director, would be the company’s new chief scientist. Pachocki has been with OpenAI since 2017.

    In his own post on X, Sutskever acknowledged his departure and hinted at future plans. “After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial” under its current leadership team, he wrote. “I am excited for what comes next—a project that is very personally meaningful to me about which I will share details in due time.”

    Sutskever has not spoken publicly in detail about his role in the ejection of Altman last year, but after the CEO was restored he expressed regrets. “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI,” he posted on X in November. Sutskever has often spoken publicly of his belief that OpenAI was working towards developing so-called artificial general intelligence, or AGI, and of the need to do so safely.

    Sutskever blazed a trail in machine learning from an early age, becoming a protégé of deep-learning pioneer Geoffrey Hinton at the University of Toronto. With Hinton and fellow grad student Alex Krizhevsky he cocreated an image-recognition system called AlexNet that stunned the world of AI with its accuracy and helped set off a flurry of investment in the then unfashionable technique of artificial neural networks.

    Sutskever later worked on AI research at Google, where he helped establish the modern era of neural-network-based AI. In 2015 Altman invited him to dinner with Elon Musk and Greg Brockman to talk about the idea of starting a new AI lab to challenge corporate dominance of the technology. Sutskever, Musk, Brockman, and Altman became key founders of OpenAI, which was announced in December 2015. It later pivoted its model, creating a for-profit arm and taking huge investment from Microsoft and other backers. Musk left OpenAI in 2018 after disagreeing with the company’s strategy. The entrepreneur filed a lawsuit against the company in March this year claiming it had abandoned its founding mission of developing super-powerful AI to “benefit humanity,” and was instead enriching Microsoft.

    Sutskever’s departure leaves just one of the four OpenAI board members who voted for Altman’s ouster with a role at the company. Adam D’Angelo, an early Facebook employee and CEO of Q&A site Quora, was the only existing member of the board to remain as a director when Altman returned as CEO.

    Reece Rogers, Tom Simonite

  • Ilya Sutskever Quits OpenAI

    Ilya Sutskever Quits OpenAI

    Ilya Sutskever, OpenAI’s co-founder and chief scientist, announced he was leaving the company on Tuesday. OpenAI confirmed the departure in a press release. Sutskever’s official exit comes nearly six months after he helped lead an effort with other board members to fire CEO Sam Altman, a move that backfired days later.

    “After almost a decade, I have made the decision to leave OpenAI,” said Sutskever via a tweet on Tuesday afternoon. “I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.”

    “Ilya and OpenAI are going to part ways,” said Altman in a tweet shortly after. “This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend.”

    Altman went on to say that Jakub Pachocki, a senior researcher on Sutskever’s team, would be replacing him as OpenAI’s chief scientist. Sutskever, for his part, alluded to an undisclosed project that is very “meaningful” to him moving forward. It’s unclear at this time what that project is.

    Jan Leike, another OpenAI executive who worked with Sutskever on safeguarding future AI, also resigned on Tuesday, according to The Information. Leike and Sutskever led OpenAI’s superalignment team, charged with the grandiose task of making sure the company’s super-powerful AI does not turn against humans.

    For the last six months, Sutskever’s status at OpenAI has been unclear. When Altman returned to the company in late November 2023, he said this about Sutskever: “we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.” Sutskever was the only member of OpenAI left in limbo at the time—neither fired nor rehired.

    Since then, Altman has refused to answer questions about Sutskever’s status at the company in multiple interviews, and we barely heard from Sutskever himself during this period. Tuesday’s post was his first tweet in over five months, and OpenAI’s chief scientist was missing from major announcements such as Sora and this week’s GPT-4 Omni.

    Earlier this year, founding OpenAI member Andrej Karpathy left the company. Karpathy likewise did not provide a particular reason for his exit, and later said he would work on personal projects.

    Sutskever posted a photo with OpenAI leaders Altman, Mira Murati, Greg Brockman, and Jakub Pachocki shortly after announcing his exit. Several of those featured in the photo posted kind messages about Sutskever’s tenure at OpenAI, praising the renowned scientist for his contributions to the artificial intelligence world.

    Maxwell Zeff

  • With OpenAI’s Release of GPT-4o, Is ChatGPT Plus Still Worth It?

    With OpenAI’s Release of GPT-4o, Is ChatGPT Plus Still Worth It?

    Barret Zoph, a research lead at OpenAI, was recently demonstrating the new GPT-4o model and its ability to detect human emotions through a smartphone camera when ChatGPT misidentified his face as a wooden table. After a quick laugh, Zoph assured GPT-4o that he’s not a table and asked the AI tool to take a fresh look at the app’s live video rather than a photo he shared earlier. “Ah, that makes more sense,” said ChatGPT’s AI voice, before describing his facial expression and potential emotions.

    On Monday, OpenAI launched a new model for ChatGPT that can process text, audio, and images. In a surprising turn, the company announced that this model, GPT-4o, would be available for free, no subscription required. It’s a departure from the company’s previous rollout of GPT-4, which was released in March 2023 for those who pay OpenAI’s $20-per-month subscription to ChatGPT Plus. In this current release, many of the features that were previously gated off to paying subscribers, like memory and web browsing, are now rolling out to free users as well.

    Last year, when I tested a nascent version of ChatGPT’s web browsing capability, it had flaws but was powerful enough to make the subscription seem worthwhile for early adopters looking to experiment with the latest technology. Now that the freshest AI model from OpenAI, as well as previously gated features, is available without a subscription, you may be wondering if that $20 a month is still worthwhile. Here’s a quick breakdown to help you understand what’s available with OpenAI’s free version versus what you get with ChatGPT Plus.

    What’s Available With Free ChatGPT?

    To reiterate, you don’t need any kind of special subscription to start using the OpenAI GPT-4o model today. Just know that you’re rate-limited to fewer prompts per hour than paid users, so be thoughtful about the questions you pose to the chatbot or you’ll quickly burn through your allotment of prompts.

    In addition to limited GPT-4o access, nonpaying users received a major upgrade to their overall user experience, with multiple features that were previously just for paying customers. The GPT Store, where anyone can release a version of ChatGPT with custom instructions, is now widely available. Free users can also use ChatGPT’s web-browsing tool and memory features and can upload photos and files for the chatbot to analyze.

    What’s Still Gated to ChatGPT Plus?

    While GPT-4o is available without a subscription, you may want to keep ChatGPT Plus for two reasons: access to more prompts and newer features. “You can use the model significantly more on Plus,” Zoph tells WIRED. “There’s a lot of other exciting, future things to come as well.” Compared to nonsubscribers, ChatGPT Plus subscribers are allowed to send GPT-4o five times as many prompts before having to wait or switch to a less powerful model. So, if you want to spend a decent amount of time messaging back and forth with OpenAI’s most powerful option, a subscription is necessary.

    Although some of the previously exclusive features for ChatGPT Plus are rolling out to nonpaying users, the splashiest of updates are still offered first behind OpenAI’s paywall. The impressive voice mode that Zoph demonstrated on stage is arriving sometime over the next couple of weeks for ChatGPT Plus subscribers.

    In OpenAI’s demo videos, the bubbly AI voice sounds more playful than previous iterations and is able to answer questions in response to a live video feed. “I honestly think the ways people are going to discover use cases around this is gonna be incredibly creative,” says Zoph. During the presentation, he also showed how the voice mode could be used to translate between English and Italian. After the presentation, the company released another video showing speech translation working in real time.

    Reece Rogers

  • GPT-4o Is OpenAI’s Plan to Win Friends and Influence People

    GPT-4o Is OpenAI’s Plan to Win Friends and Influence People

    Photo-Illustration: Intelligencer; Photo: OpenAI

    OpenAI on Monday introduced a new model called GPT-4o (as in omni) that the company says “reasons across voice, text, and vision.” In practice, this means ChatGPT now responds more quickly to a wider range of input — text, image, voice — provided in more natural ways. You can talk to it, and it talks back; you can show it things, and it tells you what it sees.

    OpenAI’s “Spring Update” event was a brisk affair that, due to runaway speculation by AI influencers, necessitated a few disclaimers. This wasn’t going to be a search engine, CEO Sam Altman warned, nor would it be the long-rumored GPT-5. Instead, he teased some “new stuff,” some of which “feels like magic” to him.

    For industry watchers, it was an interesting event in a few ways. For one, OpenAI is releasing GPT-4o to all users, breaking with its current strategy of reserving its most capable models for paid subscribers (who will now get higher usage limits among other, smaller benefits). AI enthusiasts had hypothesized for weeks that a pair of chatbots that had quietly appeared on a testing platform — and that seemed better by some measures than GPT-4 — were actually upcoming OpenAI models, and it turns out they were. What wasn’t apparent from those leaks, which let people prod a text-based chatbot, was what OpenAI spent most of its presentation showing off. ChatGPT is now a lot better at talking:

    You’ll probably notice a few strange things about the chatbot’s presentation, and you’re meant to. OpenAI says its new voice functionality — it had one before, but it was essentially voice-to-text and text-to-voice features built on top of a chatbot — is responsive enough that it can be interrupted. It can also interpret and express a range of “emotive styles,” meaning that, as with text-based chatbots, ChatGPT will now attempt to assess and choose appropriate spoken tones. The company staged a live demonstration where a parade of nervous, camera-shy executives spoke to the chatbot, which responded with — at least at first listen — substantially more confidence than its human interlocutors had. It was alternately impressive and strange — here it is singing “Happy Birthday” after seeing a piece of cake with a candle in it:

    OpenAI is showing off something technologically new here, and we can assume we’ll see similar demos from its competitors, possibly as soon as this week and perhaps from Google. The release also suggests, at minimum, an upgrade to the style of voice assistant currently epitomized by Siri and Alexa, which had promised big things before being demoted to kitchen timers and light switches. It’s also obviously evocative of representations of AI in science fiction, such as the movie Her, in which the lead character falls in love with a piece of software. This thing flatters, giggles, and does voices. It doesn’t exactly respond to being cut off as a person would, but it doesn’t just keep going or drop the conversation. It will perform whatever tone you ask it to but appears to default to an energetic, positive, supportive persona — a helpful co-worker, someone trying to be your friend, or, if you’re feeling suspicious, someone trying to get something from you.

    Months of speculation about a new core model from OpenAI and endless hints at the possibility of “artificial general intelligence” from its executives and boosters have set incredibly high expectations for the company’s forthcoming products. What OpenAI presented was instead primarily a step forward in its products’ ability to perform the part of an intelligent machine. There are risks to doubling down on the personification of AI — if people are made to feel as though they’re talking to a person, their expectations will be both impossibly diverse and very high — but there are benefits, too, which OpenAI knows well.

    ChatGPT was initially released as a public tech demo; it went viral because of its capabilities but also because it spoke more convincingly and freely than chatbots had before it. It wrote with confidence in a tone that suggested it was eager to help. It was highly responsive to requests even when it couldn’t fulfill them, though it would often try to anyway. There was (and remains) an enormous gap between what the interface suggested (that you were talking to a real person) and what you were actually doing (prompting a machine). With user expectations where they were, this interplay turned out to be hugely powerful. ChatGPT’s persona invited users to make generous assumptions about the underlying technology and, just as important, about where it would, or at least could, one day go.

    Such personification is by definition misleading; whether you think that’s a problem depends a bit on what you think OpenAI and other AI firms are up to and how much potential their projects have. The optimistic outlook is that voice, like chat, is simply a specific, unusually natural interface for computers and that the better the illusion is, the easier it will be to tap into the full productive potential of AI. But OpenAI’s sudden emphasis on ChatGPT’s performance over, well, its performance is worth thinking about in critical terms, too. The new voice features aren’t widely available yet, but what the company showed off was powerfully strange: a chatbot that laughs at its own jokes, uses filler words, and is unapologetically ingratiating. To borrow Altman’s language, the fact that Monday’s demo “feels like magic” could be read as a warning or an admission: ChatGPT is now better than ever at pretending it’s something that it’s not.


    John Herrman

  • Why Apple’s ‘Crush’ ad is so misguided | TechCrunch

    Why Apple’s ‘Crush’ ad is so misguided | TechCrunch

    Welcome to Week in Review: TechCrunch’s newsletter recapping the week’s biggest news.

    This week Apple unveiled new iPad models at its Let Loose event, including a new 13-inch display for the iPad Air, as well as Tandem OLED and a new M4 chip for the iPad Pro. But its ad for the new iPad Pro made the biggest waves from the event, for all the wrong reasons. For Apple completionists, we compiled all of the new announcements in case you missed them.

    In the world of EVs, the embattled Fisker Ocean is facing yet another federal safety probe. The National Highway Traffic Safety Administration (NHTSA) has opened a fourth investigation into the SUV surrounding claims of “inadvertent Automatic Emergency Braking.”

    AI deepfakes took center stage at this year’s Met Gala. AI-generated images of Katy Perry and Rihanna, neither of whom attended the event, went viral on X. A good reminder that we can’t believe everything we see online.

    It’s been a big week. Let’s get into it.

    News

    New updates from OpenAI: On Monday at 10 a.m. PT, OpenAI will demo new features for both ChatGPT and GPT-4. CEO Sam Altman denied reports that the company was preparing to announce a rival search engine product ahead of Google’s I/O conference. Read more

    Say hello to the portal: A new, always-on video portal lets people in New York City and Dublin interact in real time. Portals.org, the organization behind the project, wants it to encourage people to interact with one another “above borders and prejudices.” Read More

    OpenAI explores allowing AI porn: The company released a new NSFW policy intended to start a conversation about how it might allow explicit images and text in its AI products. But can we trust OpenAI — or any generative AI vendor — to do it right? Read More

    Cops can’t use Microsoft’s AI tool: Microsoft has reaffirmed its ban on U.S. police departments using generative AI for facial recognition through its enterprise-focused Azure OpenAI Service. Read More

    Dorsey says bye to Bluesky: Ex-Twitter CEO Jack Dorsey revealed on X that he is no longer on Bluesky’s board. In a statement, the company thanked Dorsey for his help funding Bluesky and said it is actively looking to fill his seat. Read More

    Is Gen Z ditching Tinder for Hinge?: Match Group released its first-quarter earnings report, which shows a steady decline of Tinder’s paying user base. But Hinge is on track to become a “$1 billion revenue business,” in part due to its à la carte offerings for price-conscious Gen Zers. Read More

    Spotify paywalls lyrics: The music streamer quietly confirmed that it has started moving its lyrics feature behind a paywall in an attempt to entice more users to migrate to its Premium subscription service, invoking the ire of users. Read More

    Analysis

    Apple’s “Crush” ad is disgusting: Devin Coldewey says Apple’s latest ad that crushes analog creative tools into an iPad Pro missed the mark. Apple has since apologized and canceled plans to televise it. Read More

    How Newchip’s bankruptcy threatened thousands of startups: Mary Ann Azevedo and Christine Hall report on startup accelerator Newchip’s fall from grace and the ripple effects on its founders — including those who lost their companies in the fallout. Read More

    Is rabbit’s R1 really that bad?: Much has been said about rabbit’s ambitious R1 AI assistant and its failure to live up to its promises. Devin argues that even though it likely shipped too early, an experimental device like this is a fun look at a possible future. Read More

    Carrie Andrews