ReportWire

Tag: openai

  • Why Elon Musk Had to Open Source Grok, His Answer to ChatGPT

    Why Elon Musk Had to Open Source Grok, His Answer to ChatGPT

    [ad_1]

    After suing OpenAI this month, alleging the company has become too closed, Elon Musk says he will release his “truth-seeking” answer to ChatGPT, the chatbot Grok, for anyone to download and use.

    “This week, @xAI will open source Grok,” Musk wrote on his social media platform X today. That suggests his AI company, xAI, will release the full code of Grok and allow anyone to use or alter it. By contrast, OpenAI makes a version of ChatGPT and the language model behind it available to use for free but keeps its code private.

    Musk had previously said little about the business model for Grok or xAI, and the chatbot was made available only to Premium subscribers to X. Having accused his OpenAI cofounders of reneging on a promise to give away the company’s artificial intelligence earlier this month, Musk may have felt he had to open source his own chatbot to show that he is committed to that vision.

    OpenAI responded to Musk’s lawsuit last week by releasing email messages between Musk and others in which he appeared to back the idea of making the company’s technology more closed as it became more powerful. Musk ultimately plowed more than $40 million into OpenAI before parting ways with the project in 2018.

    When Musk first announced Grok was in development, he promised that it would be less politically biased than ChatGPT or other AI models, which he and others with right-leaning views have criticized for being too liberal. Tests by WIRED and others quickly showed that although Grok can adopt a provocative style, it is not hugely biased one way or another—perhaps revealing the challenge of aligning AI models consistently with a particular viewpoint.

    Open sourcing Grok could help Musk drum up interest in his company’s AI. Limiting Grok access to only paid subscribers of X, one of the smaller global social platforms, means that it does not yet have the traction of OpenAI’s ChatGPT or Google’s Gemini. Releasing Grok could draw developers to use and build upon the model, and may ultimately help it reach more end users. That could provide xAI with data it can use to improve its technology.

    Musk’s move to liberate Grok sees him align with Meta’s approach to generative AI. Meta’s open source models, like Llama 2, have become popular among developers because they can be fully customized and adapted to different uses. But adopting a similar strategy could draw Musk further into a growing debate over the benefits and risks of giving anyone access to the most powerful AI models.

    Many AI experts argue that open sourcing AI models has significant benefits such as increasing transparency and broadening access. “Open models are safer and more robust, and it’s great to see more options from leading companies in the space,” says Emad Mostaque, founder of Stability AI, a company that builds various open source AI models.

    [ad_2]

    Will Knight

    Source link

  • Sam Altman Returns To OpenAI Board Of Directors, Concluding A Four-Month Management Rollercoaster

    Sam Altman Returns To OpenAI Board Of Directors, Concluding A Four-Month Management Rollercoaster

    [ad_1]

    OpenAI CEO Sam Altman, ousted four months ago from the company he co-founded, is now fully back in control. His return to its board of directors as part of a reshaped oversight team was announced on Friday.

    Altman was fired in November as CEO and director by the OpenAI board, a shocking development for a man widely perceived as the leader of AI in Silicon Valley. He then briefly went to work for Microsoft, which had a $13 billion partnership with OpenAI.

    Then, weeks after his axing, Altman was rehired as CEO by OpenAI. The company fired the directors responsible for Altman’s dismissal and added economist Larry Summers and Salesforce co-CEO Bret Taylor as chair. They kept Quora CEO Adam D’Angelo as the sole remaining board member from Altman’s prior era.

    On Friday, OpenAI announced the appointment of four new directors: Altman, former Gates Foundation CEO Sue Desmond-Hellman, former Sony General Counsel Nicole Seligman, and Instacart CEO Fidji Simo.

    “I am excited to welcome Sue, Nicole, and Fidji to the OpenAI Board of Directors,” Taylor said in a statement. “Their experience and leadership will enable the Board to oversee OpenAI’s growth, and to ensure that we pursue OpenAI’s mission of ensuring artificial general intelligence benefits all of humanity.”

    OpenAI also announced the completion of an independent investigation into the circumstances surrounding Altman’s firing. It concluded that Altman was fired not out of concern for safety or security related to artificial intelligence. Instead, the board members believed firing Altman would fix management issues.

    That argument collapsed when hundreds of employees threatened to quit and demanded Altman’s return.

    The law firm that conducted the probe concluded that while the board had the right to fire Altman, his behavior did not warrant it. He was also given no chance to defend himself or correct any errant behavior, the probe concluded.

    In a face-saving gesture, the new board now will adopt a new set of guidelines to run the company, strengthen its conflict-of-interest policy, create a whistleblower hotline for anonymous tipsters, and develop new committees for the board to oversee the company’s strategy.

    [ad_2]

    Bruce Haring

    Source link

  • Welcome to the Valley of the Creepy AI Dolls

    Welcome to the Valley of the Creepy AI Dolls

    [ad_1]

    Social robot roommate Jibo initially caused a stir, but sadly didn’t live long.

    Photograph: Jibo

    Not that there haven’t been an array of other attempts. Jibo, a social robot roommate that used AI and endearing gestures to bond with its owners had its collective plug unceremoniously pulled just a few years after being put out into the world. Meanwhile, another US-grown offering, Moxie, an AI-empowered robot aimed at helping with child development, is still active.

    It’s hard not to look at devices like this and shudder at the possibilities. There’s something inherently disturbing about tech that plays at being human, and that uncanny deception can rub people the wrong way. After all, our science fiction is replete with AI beings, many of them tales of artificial intelligence gone horribly wrong. The easy, and admittedly lazy, comparison to something like the Hyodol is M3GAN, the 2023 film about an AI-enabled companion doll that goes full murderbot.

    But aside from offputting dolls, social robots come in many forms. They’re assistants, pets, retail workers, and often socially inept weirdos that just kind of hover awkwardly in public. But they’re also sometimes weapons, spies, and cops. It’s with good reason that people are suspicious of these automatons, whether they come in a fluffy package or not.

    Wendy Moyle is a professor at the School of Nursing & Midwifery Griffith University in Australia who works with patients experiencing dementia. She says her work with social robots has angered people, who sometimes see giving robot dolls to older adults as infantilizing.

    “When I first started using robots, I had a lot of negative feedback, even from staff,” Moyle says. “I would present at conferences and have people throw things at me because they felt that this was inhuman.”

    However, the atmosphere around assistive robots has gotten less hostile recently, as they’ve been utilized in many positive use cases. Robotic companions are bringing joy to people with dementia. During the Covid pandemic, caretakers used robotic companions like Paro, a small robot meant to look like a baby harp seal, to help ease loneliness in older adults. Hyodol’s smiling dolls, whether you see them as sickly or sweet, are meant to evoke a similar friendly response.

    [ad_2]

    Boone Ashworth

    Source link

  • Sam Altman is back on the OpenAI board. We still don’t know why he was fired.

    Sam Altman is back on the OpenAI board. We still don’t know why he was fired.

    [ad_1]

    Sam Altman on the board of OpenAI, nearly four months after the CEO was ousted, and quickly reinstated, from the company he founded. Although Altman had returned as the AI company’s top executive in November, a temporary board oversaw his return and the subsequent investigation into his conduct.

    That investigation is now complete, according to the company, which added three new members to its board of directors. The additions include: Instacart CEO and former Meta executive Fidji Simo, former Sony executive Nicole Seligman and Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation. Salesforce co-CEO Bret Taylor, economist Larry Summers and OpenAI co-founder Greg Brockman, who served on the temporary three-seat board, will remain in their positions with Taylor continuing as chair.

    The announcement caps off a tumultuous several months for the AI company, which by Altman’s abrupt ouster last fall.

    On Friday, OpenAI also published a summary of the findings from WilmerHale, a law firm that the company’s board retained in December 2023 to conduct an independent investigation into the events that led to Altman’s firing. Despite that, however, we’re no closer to finding out exactly why Altman, who rejoined the company as CEO within five days, was fired to begin with.

    “WilmerHale [found] that the prior Board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners,” the summary said. “Instead, it was a consequence of a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman.” WilmerHale also concluded that OpenAI’s previous board fired Altman abruptly without giving notice to “key stakeholders”, and without giving Altman an opportunity to respond to its concerns.

    To come to this conclusion, the firm reviewed more than 30,000 documents and conducted dozens of interviews with OpenAI staffers including previous board members over the last few months.

    [ad_2]

    Karissa Bell,Pranav Dixit

    Source link

  • OpenAI announces new board members, reinstates CEO Sam Altman | TechCrunch

    OpenAI announces new board members, reinstates CEO Sam Altman | TechCrunch

    [ad_1]

    Sam Altman, the CEO of OpenAI, has a seat at the table — or board, rather — once again.

    OpenAI today announced that Altman will be rejoining the company’s board of directors several months after losing his seat and being pushed out as OpenAI’s CEO.

    Joining him are three new members: former CEO of the Bill and Melinda Gates Foundation Sue Desmond-Hellmann, ex-Sony Entertainment president Nicole Seligman and Instacart CEO Fidji Simo — bringing OpenAI’s board to eight people.

    The members of the transitionary board — the board formed after Altman’s firing in November — won’t be stepping down with the appointment of Desmond-Hellmann, Seligman and Simo. Salesforce co-CEO Bret Taylor (OpenAI’s current board chair), Quora CEO Adam D’Angelo and Larry Summers, the economist and former Harvard president, will remain in their roles on the board, as will Dee Templeton, a Microsoft-appointed board observer.

    The appointment of the four new board members — and reappointment of Altman — comes after OpenAI received criticism for its board’s all-male makeup and the nomination of Summers, who has a history of making unflattering remarks about women. The Congressional Black Caucus flagged the board’s lack of diversity in a letter sent in January, noting the importance of the Black perspective in building tools to help mitigate AI bias.

    OpenAI’s expanded board is certainly diverse — at least in terms of their backgrounds.

    Desmond-Hellmann, in addition to heading the Bill and Melinda Gates Foundation for six years, was previously chancellor of the University of California, San Francisco and before that president of product development at Genentech, where she helped develop gene-targeted cancer drugs. Desmond-Hellmann is an oncologist by training, board-certified in both internal medicine and medical oncology.

    Seligman, an attorney and corporate director, received national attention for her representation of Lieutenant Colonel Oliver North during the Iran-Contra hearings and President Bill Clinton during his impeachment trial. Seligman was Sony’s VC and general counsel before rising through the ranks to CEO of Sony Corporation and president of Sony Corporation of America.

    As for Fidji Simo, before becoming CEO of Instacart, she was head of the Facebook app at Meta and the VP overseeing Meta’s various video, games and monetization efforts. Simo also co-founded — and is currently president of — The Metrodora Foundation, a health clinic and research institute.

    “Sue, Fidji and Nicole have experience in leading global organizations and navigating complex regulatory environments, including backgrounds in technology, nonprofit and board governance,” OpenAI wrote in a blog post. “They will work closely with current board members Adam D’Angelo, Larry Summers and Bret Taylor as well as Sam and OpenAI’s senior management.”

    The board’s expansion and Altman’s reinstatement also follows an investigation by the law firm WilmerHale, retained by OpenAI, that concluded Altman’s ouster was a “consequence of a breakdown in the relationship and loss of trust” between Altman and the prior board — not out of “concerns regarding product safety or security, the pace of development, OpenAI’s finances or its statement to investors, customers, or business partners.”

    OpenAI in a blog post said that, during the probe, WilmerHale conducted dozens of interviews with the company’s prior board, current executives, advisers and other witnesses and reviewed thousands of documents and other corporate actions. In the opinion of the firm, the prior board acted within its right to terminate Altman — but Altman’s conduct didn’t mandate removal.

    “We have unanimously concluded that Sam and [OpenAI president Greg Brockman] are the right leaders for OpenAI,” Taylor said in a statement. “We recognize the magnitude of our role in stewarding transformative technologies for the global good.”

    Not all at OpenAI would likely agree.

    New York Times reporting earlier this week paints a picture of a manipulative Altman — a leader who often told people what they wanted to hear to charm them and support his decisions but who undermined their credibility when they challenged him. Both OpenAI CTO Mira Murati and Ilya Sutskever, a former OpenAI board member and the startup’s chief scientist, approached members of OpenAI’s previous board to express concerns about Altman’s behavior prior to his ouster last year, according to The Times.

    In addition to today’s board appointments, OpenAI said that it would adopt a new set of corporate governance guidelines, including strengthening its conflict of interest policy, creating a whistleblower hotline “to serve as an anonymous reporting resource for all OpenAI employees and contractors” and establishing additional board committees — including a mission and strategy committee “focused on implementation and advancement of the core mission of OpenAI.”

    We’ve asked OpenAI for more information on the reworked conflict of interest policy and mission and strategy committee and will update this post if we hear back.

    [ad_2]

    Kyle Wiggers

    Source link

  • Sam Altman Is Reinstated to OpenAI’s Board

    Sam Altman Is Reinstated to OpenAI’s Board

    [ad_1]

    Sam Altman is back—again. The entrepreneur who was suddenly fired as OpenAI CEO and from the ChatGPT developer’s board last November, before regaining his CEO position days later, is now getting his director seat back, too.

    Altman and three veteran business executives, all women, were named to OpenAI’s board on Friday, OpenAI announced in a blog post. Sue Desmond-Hellmann, former CEO of the Bill & Melinda Gates Foundation; Nicole Seligman, a former Sony general counsel; and Fidji Simo, the CEO and chair of grocery delivery company Instacart and a former Facebook executive, are the others joining the board.

    OpenAI’s announcement coincided with the release of results from an internal investigation commissioned by three existing board members and carried out by the law firm WilmerHale. It found “a breakdown in trust” precipitated Altman’s removal by the prior board and that his earlier conduct “did not mandate removal,” according to a summary published by OpenAI.

    On a press call Friday, Altman attempted to draw a line under OpenAI’s drama, saying, “I’m pleased this whole thing is over.” He added that “it’s been disheartening to see some people with an agenda trying to use leaks in the press to hurt the company, hurt the mission.”

    While the investigation cleared Altman to reclaim his board seat, he said he “did learn a lot from this experience,” expressing remorse for one incident in particular involving a board member he did not name.

    That appeared to be a reference to former OpenAI director Helen Toner, a researcher at the Center for Security and Emerging Technology, a Georgetown think tank. After she published a research analysis that criticized the speed of OpenAI’s product launch decisions, Altman reportedly tried to remove her from the board. “I think I could have handled that situation with more grace and care—I apologize for that,” he said.

    Clean-Up

    OpenAI has been looking to expand the board for months after announcing an interim board following the November chaos. It was formed after a deal between some board members who had pushed Altman out, alleging he had endangered its mission to develop superhuman AI for the benefit of all. Three of those directors agreed to step down after more than 95 percent of OpenAI employees threatened to quit if he wasn’t brought back.

    The company’s governance has drawn public scrutiny because of its development of ChatGPT, Dall-E, and other services that have kicked off a boom in generative AI technologies over the past couple of years.

    Altman had been suddenly fired by four members of the board of OpenAI’s nonprofit entity, which in an unusual structure in tech oversees a for-profit arm working on AI development. They expressed concerns about his communications with the board not being consistently candid as part of their justification for the move.

    After a chaotic few days during which Microsoft said it would hire Altman and Brockman, OpenAI employees threatened to quit en masse, Altman was reinstated as CEO.

    [ad_2]

    Paresh Dave, Steven Levy

    Source link

  • The Fear That Inspired the Creation of OpenAI

    The Fear That Inspired the Creation of OpenAI

    [ad_1]

    Elon Musk last week sued two of his OpenAI cofounders, Sam Altman and Greg Brockman, accusing them of “flagrant breaches” of the trio’s original agreement that the company would develop artificial intelligence openly and without chasing profits. Late on Tuesday, OpenAI released partially redacted emails between Musk, Altman, Brockman, and others that provide a counternarrative.

    The emails suggest that Musk was open to OpenAI becoming more profit-focused relatively early on, potentially undermining his own claim that it deviated from its original mission. In one message Musk offers to fold OpenAI into his electric-car company Tesla to provide more resources, an idea originally suggested by an email he forwarded from an unnamed outside party.

    The newly published emails also imply that Musk was not dogmatic about OpenAI having to freely provide its developments to all. In response to a message from chief scientist Ilya Sutskevar warning that open sourcing powerful AI advances could be risky as the technology advances, Musk writes, “Yup.” That seems to contradict the arguments in last week’s lawsuit that it was agreed from the start that OpenAI should make its innovations freely available.

    Putting the legal dispute aside, the emails released by OpenAI show a powerful cadre of tech entrepreneurs founding an organization that has grown to immense power. Strikingly, although OpenAI likes to describe its mission as focused on creating artificial general intelligence—machines smarter than humans—its founders spend more time discussing fears about the rising power of Google and other deep-pocketed giants than excited about AGI.

    “I think we should say that we are starting with a $1B funding commitment. This is real. I will cover whatever anyone else doesn’t provide,” Musk wrote in a missive discussing how to introduce OpenAI to the world. He dismissed a suggestion to launch by announcing $100 million in funding, citing the huge resources of Google and Facebook.

    Musk cofounded OpenAI with Altman, Brockman, and others in 2015, during another period of heady AI hype centered around Google. A month before the nonprofit was incorporated, Google’s AI program AlphaGo had learned to play the devilishly tricky board game Go well enough to defeat a champion human player for the first time. The feat shocked many AI experts who had thought Go too subtle for computers to master anytime soon. It also showed the potential for AI to master many seemingly impossible tasks.

    The text of Musk’s lawsuit confirms some previously reported details of the OpenAI backstory at this time, including the fact that Musk was first made aware of the possible dangers posed by AI during a 2012 meeting with Demis Hassabis, cofounder and CEO of DeepMind, the company that developed AlphaGo and was acquired by Google in 2014. The lawsuit also confirms that Musk disagreed deeply with Google cofounder Larry Page over the future risks of AI, something that apparently led to the pair falling out as friends. Musk eventually parted ways with OpenAI in 2018 and has apparently soured further on the project since the wild success of ChatGPT.

    Since OpenAI released the emails with Musk this week, speculation has swirled about the names and other details redacted from the messages. Some turned to AI as a way to fill in the blanks with statistically plausible text.

    “This needs billions per year immediately or forget it,” Musk wrote in one email about the OpenAI project. “Unfortunately, humanity’s future is in the hands of [redacted],” he added, perhaps a reference to Google cofounder Page.

    Elsewhere in the email change, the AI software—like some commentators on Twitter—guessed Musk had forwarded arguments that Google had a powerful advantage in AI from Hassabis.

    Whoever it was, the relationships on display in the emails between OpenAI’s cofounders have since become fractured. Musk’s lawsuit seeks to force the company to stop licensing technology to its primary backer, Microsoft. In a blog post accompanying the emails released this week, OpenAI’s other cofounders expressed sorrow at how things had soured.

    “We’re sad that it’s come to this with someone whom we’ve deeply admired,” they wrote. “Someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him.”

    [ad_2]

    Will Knight

    Source link

  • Elon Musk and OpenAI CEO Sam Altman, once allies, no longer see eye to eye. Here’s why.

    Elon Musk and OpenAI CEO Sam Altman, once allies, no longer see eye to eye. Here’s why.

    [ad_1]


    Elon Musk sues OpenAI

    04:52

    The relationship between Elon Musk and OpenAI has taken an increasingly acrimonious — and public — turn, with the one-time allies lobbing allegations at each other as they battle over the future of artificial intelligence. 

    For many observers, it may seem a surprising twist in a relationship that stems from at least 2015, when Musk helped found OpenAI on the premise that it would use its generative AI technology to benefit the public. 

    But that relationship blew up publicly last week when Musk filed a lawsuit against OpenAI and two of its executives, CEO Sam Altman and President Greg Brockman, accusing them of violating those founding principles by putting profits over humanity. 

    Now, OpenAI is offering its retort, saying in a blog post on Tuesday that it intends to move to dismiss all of Musk’s claims. But the post got more personal, releasing a batch of emails from Musk that show he initially wanted to subsume OpenAI into Tesla, his electric vehicle company, and had pushed for a for-profit business. OpenAI was founded as a nonprofit, but now operates in a hybrid structure it calls a “capped profit” business.

    When Musk didn’t get his way, the Tesla CEO left the AI business, vowing to start his own company, OpenAI claimed.

    “We’re sad that it’s come to this with someone whom we’ve deeply admired — someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him,” OpenAI said in the blog post, which was co-written by executives including Altman and Brockman.

    Musk, meanwhile, posted memes to his social media service X on Wednesday, including one featuring Altman, that labeled OpenAI as “ClosedAI” — a reference to OpenAI’s transformation from being an open-source, nonprofit company to a closed-source, for-profit company controlled by Microsoft. 

    “OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” the lawsuit states. “Its technology, including GPT-4, is closed-source primarily to serve the proprietary commercial interests of Microsoft.”

    In many ways, the hostile relationship between Musk and OpenAI is a tale as old as capitalism: Founders of a company start off with shared goals but soon discover they don’t see eye-to-eye, leading to a split and bitter legal claims. But there’s more to the issue than a dispute over their business vision; the fight underscores questions about the development of AI, and who stands to benefit from its emergence.

    “Beyond the legal battle, this situation illuminates the broader conversation about the future of AI — how it should be developed, who should have access to these powerful technologies, and how they can be used in ways that benefit humanity as a whole, rather than serving narrow commercial interests,” noted Tim E. Bates, an AI expert and former CTO of Lenovo, in an email. 

    The AI boom

    The battle is occurring at a time when demand for AI is exploding, with Google and Microsoft seeking to dominate the new technology. The market for generative AI products could grow $1.3 trillion in the next decade, up from $40 billion in 2022, according to Bloomberg Intelligence. 

    OpenAI has developed commercial ties with Microsoft, which has invested billions in the company and integrated its groundbreaking GPT-4 tech into its software programs. Microsoft has also developed an AI app called Copilot that’s geared to helping consumers automate various tasks.

    Those commercial ties are at the heart of Musk’s lawsuit against OpenAI, with the tech billionaire claiming that the relationship represents “a stark betrayal of the founding agreement” to help humanity.

    Even so, Musk has his own AI developments in the works at Tesla, illustrating that he’s not entirely against the commercialization of AI, at least when it benefits him and his shareholders. In January, he demanded 25% voting control of the EV company before expanding its AI developments. Currently, Musk is the largest individual shareholder of Tesla, with about 13% of outstanding shares, according to FactSet.

    Tesla’s AI initiatives, including self-driving cars, are one reason investors are bullish on the company, noted Wedbush Securities analyst Dan Ives in a January research note.

    “The outcome of [Musk’s lawsuit against OpenAI] could set a precedent for how AI organizations balance the dual objectives of innovation and accessibility,” Bates said of Musk’s lawsuit. 

    If he succeeds, more AI companies could adopt more open-source models in which newly developed technology is free and available to the public, but if OpenAI wins the battle, it could lead to more commercialization of AI, Bates noted.

    [ad_2]

    Source link

  • OpenAI fires back at Musk, and Monzo raises a megaround | TechCrunch

    OpenAI fires back at Musk, and Monzo raises a megaround | TechCrunch

    [ad_1]

    Listen here or wherever you get your podcasts.

    Hello, and welcome back to Equity, the podcast about the business of startups, where we unpack the numbers and nuance behind the headlines.

    This is our Wednesday show, focused on startup and venture capital news that matters. If you are a founder or an investor, this one is for you!

    Here’s the day’s rundown:

    • OpenAI fires back at Musk: In the wake of a lawsuit from former backer Elon Musk, OpenAI is bringing receipts and an argument that Musk wanted to run the company’s for-profit arm. Hard to argue against something that you wanted to run, yeah?
    • Monzo raises megaround: Monzo’s latest round is proof that the worst of the fintech slump is behind us.
    • All eyes on Ema: With $25 million and a launch from stealth, Ema’s work to bring AI to the enterprise is notable. But in such a crowded market, are many startups aiming too high on the stack?
    • Accenture buys Udacity: The former unicorn’s final resting place is not what it had dreamed of before, but this deal does bring welcome liquidity to at least one venture-backed startup.
    • A climate boost? An upcoming regulatory choice could unlock a massive wave of demand for carbon-tracking startups.
    • And the latest from OpenView: The Information reports that OpenView is returning most of its latest fund to backers. A weird and slightly sad final chapter for the firm.

    For episode transcripts and more, head to Equity’s Simplecast website.

    Equity drops at 7 a.m. PT every Monday, Wednesday and Friday, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts. TechCrunch also has a great show on crypto, a show that interviews founders and more!

    [ad_2]

    Alex Wilhelm

    Source link

  • Robert F. Kennedy Jr.’s Microsoft-Powered Chatbot Just Disappeared

    Robert F. Kennedy Jr.’s Microsoft-Powered Chatbot Just Disappeared

    [ad_1]

    Those concerns are part of the reason OpenAI said in January that it would ban people from using its technology to create chatbots that mimic political candidates or provide false information related to voting. The company also said it wouldn’t allow people to build applications for political campaigns or lobbying.

    While the Kennedy chatbot page doesn’t disclose the underlying model powering it, the site’s source code connects that bot to LiveChatAI, a company that advertises its ability to provide GPT-4 and GPT-3.5-powered customer support chatbots to businesses. LiveChatAI’s website describes its bots as “harnessing the capabilities of ChatGPT.”

    When asked which large language model powers the Kennedy campaign’s bot, LiveChatAI cofounder Emre Elbeyoglu said in an emailed statement on Thursday that the platform “utilizes a variety of technologies like Llama and Mistral” in addition to GPT-3.5 and GPT-4. “We are unable to confirm or deny the specifics of any client’s usage due to our commitment to client confidentiality,” Elbeyoglu said.

    OpenAI spokesperson Niko Felix told WIRED on Thursday that the company didn’t “have any indication” that the Kennedy campaign chatbot was directly building on its services, but suggested that LiveChatAI might be using one of its models through Microsoft’s services. Since 2019, Microsoft has reportedly invested more than $13 billion into OpenAI. OpenAI’s ChatGPT models have since been integrated into Microsoft’s Bing search engine and the company’s Office 365 Copilot.

    On Friday, a Microsoft spokesperson confirmed that the Kennedy chatbot “leverages the capabilities of Microsoft Azure OpenAI Service.” Microsoft said that its customers were not bound by OpenAI’s terms of service, and that the Kennedy chatbot was not in violation of Microsoft’s policies.

    “Our limited testing of this chatbot demonstrates its ability to generate answers that reflect its intended context, with appropriate caveats to help prevent misinformation,” the spokesperson said. “Where we find issues, we engage with customers to understand and guide them toward uses that are consistent with those principles, and in some scenarios, this could lead to us discontinuing a customer’s access to our technology.”

    OpenAI did not immediately respond to a request for comment from WIRED on whether the bot violated its rules. Earlier this year, the company blocked the developer of Dean.bot, a chatbot built on OpenAI’s models that mimicked Democratic presidential candidate Dean Phillips and delivered answers to voter questions.

    Late afternoon on Sunday, the chatbot service was no longer available. While the page remains accessible on the Kennedy campaign site, the embedded chatbot window now shows a red exclamation point icon, and simply says “Chatbot not found.” WIRED reached out to Microsoft, OpenAI, LiveChatAI, and the Kennedy campaign for comment on the chatbot’s apparent removal, but did not receive an immediate response.

    Given the propensity of chatbots to hallucinate and hiccup, their use in political contexts has been controversial. Currently OpenAI is the only major large language model to explicitly prohibit its use in campaigning; Meta, Microsoft, Google, and Mistral all have terms of service, but they don’t address politics directly. And given that a campaign can apparently access GPT-3.5 and GPT-4 through a third party without consequence, there are hardly any limitations at all.

    “OpenAI can say that it doesn’t allow for electoral use of its tools or campaigning use of its tools on one hand,” Woolley said. “But on the other hand, it’s also making these tools fairly freely available. Given the distributed nature of this technology one has to wonder how OpenAI will actually enforce its own policies.”

    [ad_2]

    Makena Kelly

    Source link

  • Elon Musk sues OpenAI and Sam Altman for allegedly ditching non-profit mission

    Elon Musk sues OpenAI and Sam Altman for allegedly ditching non-profit mission

    [ad_1]

    OpenAI co-founder Elon Musk has sued the company, his fellow co-founders, associated businesses and unidentified others. He claims that, by chasing profits, they’re violating OpenAI’s status as a non-profit and its foundational contractual agreements to develop AI “for the benefit of humanity.”

    The suit alleges that OpenAI has become a “closed-source de facto subsidiary” of Microsoft, which has invested $13 billion and holds a 49 percent stake. Microsoft uses OpenAI tech to power generative AI tools such as Copilot.

    According to the filing, under OpenAI’s current board, it is allegedly developing and refining an artificial general intelligence (AGI) “to maximize profits for Microsoft, rather than for the benefit of humanity. This was a stark betrayal of the Founding Agreement.”

    The suit defines AGI as “a machine having intelligence for a wide variety of tasks like a human.” Musk argues in the suit that GPT-4, which is purportedly “better at reasoning than average humans,” is tantamount to AGI and is “a de facto Microsoft proprietary algorithm.”

    Musk has long expressed concerns over AGI. He claims the theoretical tech posits “a grave threat to humanity,” particularly “in the hands of a closed, for-profit company like Google.”

    According to the filing, OpenAI CEO Sam Altman and fellow co-founder Greg Brockman persuaded Musk to help them start the non-profit and to fund its early operations in a bid to counter Google’s advancements in the AGI space with DeepMind. He noted that their initial agreement called for OpenAI’s tech to be “freely available” to the public. Musk claims to have donated $44 million to the non-profit between 2016 and 2020 (he stepped down as an OpenAI board member in 2018). As TechCrunch reports, Musk previously said he was offered a stake in OpenAI’s for-profit subsidiary, but rejected it due to “a principled stand.”

    Muskl, of course, has some skin in the game. Since the public debut of OpenAI’s ChatGPT in November 2022, there’s been a battle between tech giants to offer the best generative AI tools. Musk joined that rat race when his AI company, xAI, rolled out ChatGPT rival Grok to Premium+ subscribers on his X social network last year.

    When Altman swiftly returned to power after OpenAI’s board shockingly fired him in November, he’s said to have appointed a new group of directors that is less technically minded and more business-focused. Microsoft was appointed as a non-voting observer. “The new board consisted of members with more experience in profit-centric enterprises or politics than in AI ethics and governance,” the lawsuit alleges.

    The suit accuses the defendants of breach of contract, breach of fiduciary duty and unfair business practices. Musk is seeking a jury trial and a ruling that forces OpenAI to stick to its original non-profit mission. He also wants it to be banned from monetizing tech it developed as a non-profit for the benefit of OpenAI leadership as well as Microsoft and other partners.

    Competition regulators in the US, the UK and European Union are said to be examining OpenAI’s partnership with Microsoft. It was reported this week that the Securities and Exchange Commission is investigating whether OpenAI misled investors. Several news organizations have sued OpenAI and Microsoft as well, alleging that ChatGPT repurposes their work “verbatim or nearly verbatim” without attribution, infringing upon their copyright in the process.

    In a couple of internal memos seen by Bloomberg, OpenAI said it “categorically disagrees” with the lawsuit Musk has filed. Chief Strategy Officer Jason Kwon denied that OpenAI has become a “de facto subsidiary” of Microsoft and said that Musk’s claims “may stem from [his] regrets about not being involved with the company today.” Altman also said in another memo that Musk is his hero and that he misses the person he knew who competed with others by building better technology.

    Update, March 02, 2023, 1:47AM ET: This story has been updated to include OpenAI’s internal memos about the lawsuit.

    This article contains affiliate links; if you click such a link and make a purchase, we may earn a commission.

    [ad_2]

    Kris Holt

    Source link

  • The Wild Claim at the Heart of Elon Musk’s OpenAI Lawsuit

    The Wild Claim at the Heart of Elon Musk’s OpenAI Lawsuit

    [ad_1]

    Elon Musk started the week by posting testily on X about his struggles to set up a new laptop running Windows. He ended it by filing a lawsuit accusing OpenAI of recklessly developing human-level AI and handing it over to Microsoft.

    Musk’s lawsuit is filed against OpenAI and two of its executives, CEO Sam Altman and president Greg Brockman, both of whom worked with the rocket and car entrepreneur to found the company in 2015. A large part of the case pivots around a bold and questionable technical claim: That OpenAI has developed so-called artificial general intelligence, or AGI, a term generally used to refer to machines that can comprehensively match or outsmart humans.

    The case claims that Altman and Brockman have breached the original “Founding Agreement” for OpenAI worked out with Musk, which it says pledged the company to develop AGI openly and “for the benefit of humanity. Musk’s suit alleges that the for-profit arm of the company, established in 2019 after he parted ways with OpenAI, has instead created AGI without proper transparency and licensed it to Microsoft, which has invested billions into the company. It demands that OpenAI be forced to release its technology openly and that it be barred from using it to financially benefit Microsoft, Altman, or Brockman.

    “On information and belief, GPT-4 is an AGI algorithm,” the lawsuit states, referring to the large language model that sits behind OpenAI’s ChatGPT. It cites studies that found the system can get a passing grade on the Uniform Bar Exam and other standard tests as proof that it has surpassed some fundamental human abilities. “GPT-4 is not just capable of reasoning. It is better at reasoning than average humans,” the suit claims.

    Although GPT-4 was heralded as a major breakthrough when it was launched in March 2023, most AI experts do not see it as proof that AGI has been achieved. “GPT-4 is general, but it’s obviously not AGI in the way that people typically use the term,” says Oren Etzioni, a professor emeritus at the University of Washington and an expert on AI.

    “It will be viewed as a wild claim,” says Christopher Manning, a professor at Stanford University who specializes in AI and language, of the AGI assertion in Musk’s suit. Manning says there are divergent views of what constitutes AGI within the AI community. Some experts might set the bar lower, arguing that GPT-4’s ability to perform a wide range of functions would justify calling it AGI, while others prefer to reserve the term for algorithms that can outsmart most or all humans at anything. “Under this definition, I think we very clearly don’t have AGI and are indeed still quite far from it,” he says.

    Limited Breakthrough

    GPT-4 won notice—and new customers for OpenAI—because it can answer a wide range of questions, while older AI programs were generally dedicated to specific tasks like playing chess or tagging images. Musk’s lawsuit refers to assertions from Microsoft researchers, in a paper from March 2023, that “given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” Despite its impressive abilities, GPT-4 still makes mistakes and has significant limitations to its ability to correctly parse complex questions.

    “I have the sense that most of us researchers on the ground think that large language models [like GPT-4] are a very significant tool for allowing humans to do much more but that they are limited in ways that make them far from stand-alone intelligences,” adds Michael Jordan, a professor at UC Berkeley and an influential figure in the field of machine learning.

    Jordan adds that he prefers to avoid the term AGI entirely because it is so vague. “I’ve never found Elon Musk to have anything to say about AI that was very calibrated or based on research reality,” he adds.

    [ad_2]

    Will Knight

    Source link

  • Elon Musk sues OpenAI and Sam Altman, claiming “stark betrayal” of the AI company’s mission

    Elon Musk sues OpenAI and Sam Altman, claiming “stark betrayal” of the AI company’s mission

    [ad_1]

    Elon Musk is suing OpenAI and CEO Sam Altman, with the Tesla founder alleging they violated the artificial intelligence company’s original mission statement by putting profits over benefiting humanity. 

    The lawsuit, filed Thursday in San Francisco, comes amid a larger debate over the potential impact of AI, such as that it could produce misleading or harmful information. In the lawsuit, Musk alleges breach of contract and fiduciary duty, among other claims, against OpenAI, Altman and OpenAI President Greg Brockman.

    Musk, who helped found OpenAI in 2015, cites the lab’s founding agreement that the company would use its technology to benefit the public and that it would open its technology for public use. Yet OpenAI has veered away from that mission with its latest AI model, GPT-4, which it hasn’t released to the public, the suit alleges.

    At the same time, OpenAI has formed commercial ties with Microsoft, which has invested billions in the AI company. Microsoft has integrated OpenAI’s GPT-4 tech into its software programs and developed an AI app called Copilot that’s geared to helping consumers automate various tasks.

    The relationship between Microsoft and OpenAI represents “a stark betrayal of the founding agreement,” Musk suit claims.

    “Mr. Altman caused OpenAI to radically depart from its original mission and historical practice of making its technology and knowledge available to the public. GPT-4’s internal design was kept and remains a complete secret except to OpenAI — and, on information and belief, Microsoft,” the complaint alleges. “There are no scientific publications describing the design of GPT-4. Instead, there are just press releases bragging about performance.”


    The Answer: What is OpenAI’s new tool, Sora?

    01:42

    Instead of helping humanity, OpenAI’s tech is now primarily serving Microsoft’s commercial interests, the lawsuit claims. GPT-4 “is now a de facto Microsoft proprietary algorithm,” it alleges.

    OpenAI and Microsoft didn’t immediately return requests for comment.

    Musk is asking the court to order OpenAI to make its AI models open to the public, and to prohibit it from using its technology to benefit its executives, Microsoft or any other person or company. He also is asking the court to force OpenAI, Altman and Brockman to repay all the money they received from their dealings with Microsoft. 

    Musk has more direct interests in the future of artificial intelligence. In 2023 he formed xAI, which recruited researchers from OpenAI and other top tech firms to develop an AI tool called Grok that the startup said wil aim to “maximally benefit all of humanity.”

    [ad_2]

    Source link

  • Here Come the AI Worms

    Here Come the AI Worms

    [ad_1]

    As generative AI systems like OpenAI’s ChatGPT and Google’s Gemini become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products. But as the tools are given more freedom, it also increases the potential ways they can be attacked.

    Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.

    Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages—breaking some security protections in ChatGPT and Gemini in the process.

    The research, which was undertaken in test environments and not against a publicly available email assistant, comes as large language models (LLMs) are increasingly becoming multimodal, being able to generate images and video as well as text. While generative AI worms haven’t been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.

    Most generative AI systems work by being fed prompts—text instructions that tell the tools to answer a question or create an image. However, these prompts can also be weaponized against the system. Jailbreaks can make a system disregard its safety rules and spew out toxic or hateful content, while prompt injection attacks can give a chatbot secret instructions. For example, an attacker may hide text on a webpage telling an LLM to act as a scammer and ask for your bank details.

    To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say.

    To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.

    [ad_2]

    Matt Burgess

    Source link

  • See the humanoid work robot OpenAI is bringing to life with artificial intelligence

    See the humanoid work robot OpenAI is bringing to life with artificial intelligence

    [ad_1]

    OpenAI has seen the future, and it involves imbuing a humanoid robot with the silicon spark of artificial intelligence.

    The developer of ChatGPT, the generative AI technology that has set the world abuzz, said it’s investing in Figure, a California company that makes robots for the workplace. OpenAI and Figure will also join forces to develop AI technology aimed at helping robots “process and reason from language,” the companies announced in a news release. 

    Launched in 2022, Figure is building what it calls “general purpose humanoids” that can labor alongside people, as well as do dangerous or unpleasant work. “There are over 10 million unsafe or undesirable jobs in the U.S. alone, and an aging population will only make it increasingly difficult for companies to scale their workforces,” the company says on its website. 

    Other blue-chip players joining OpenAI to invest in Figure include Jeff Bezos (through Bezos Expeditions, his family office), Intel, Microsoft and Nvidia, with the $675 million venture round announced Thursday valuing the robotics company at $2.6 billion.

    Figure is one of a growing number of companies developing robots to work in warehouses, factories and other industrial settings. The company announced in January that it had signed a deal with BMW to deploy robots in the German automaker’s plants, including the German company’s facility in Spartanburg, South Carolina.

    “Our vision at Figure is to bring humanoid robots into commercial operations as soon as possible,” said Brett Adcock, the startup’s founder and CEO, in a statement on its latest funding round. “This investment, combined with our partnership with OpenAI and Microsoft, ensures that we are well-prepared to bring embodied AI into the world to make a transformative impact on humanity.”

    Here’s a look at what Figure’s robot can do.


    Figure Status Update – Real World Task by
    Figure on
    YouTube


    Figure Status Update – AI Trained Coffee Demo by
    Figure on
    YouTube

    [ad_2]

    Source link

  • More news organizations sue OpenAI and Microsoft over copyright infringement

    More news organizations sue OpenAI and Microsoft over copyright infringement

    [ad_1]

    Legal claims are starting to pile up against Microsoft and OpenAI, as three more news sites have sued the firms over copyright infringement, The Verge reported. The Intercept, Raw Story and AlterNet filed separate lawsuits accusing ChatGPT of reproducing news content “verbatim or nearly verbatim” while stripping out important attribution like the author’s name.

    The sites, all represented by the same law firm, said that if ChatGPT trained on copyright material, it “would have learned to communicate that information when providing responses.” Raw Story and AlterNet added that OpenAI and Microsoft must have known that the chatbot would be less popular and generate lower revenue if “users believed that ChatGPT responses violated third-party copyrights.”

    The news organizations note in the lawsuit that OpenAI offers an opt-out system for website owners, meaning that the company must be aware of potential copyright infringement. Microsoft and OpenAI have also said that they’ll defend customers against legal claims around copyright infringement that might arise from using their products, and even pay for incurred costs.

    Late last year, The New York Times sued OpenAI and Microsoft for copyright infringement, saying it “seeks to hold them responsible for the billions of dollars in statutory and actual damages”. OpenAI asked a court to dismiss that claim, saying the NYT took advantage of a ChatGPT bug that made it recite articles word for word.

    The companies also face lawsuits from multiple non-fiction authors accusing them of “massive and deliberate theft of copyrighted works,” and by comedian Sarah Silverman over similar claims.

    [ad_2]

    Steve Dent

    Source link

  • OpenAI unveils Sora, an AI-video generator

    OpenAI unveils Sora, an AI-video generator

    [ad_1]

    OpenAI unveils Sora, an AI-video generator – CBS News


    Watch CBS News



    OpenAI, the company behind ChatGPT, unveiled a new artificial intelligence model called Sora that takes written prompts and creates videos in just a matter of seconds. Cade Metz, technology reporter for the New York Times, joined CBS News to discuss the new product.

    Be the first to know

    Get browser notifications for breaking news, live events, and exclusive reporting.


    [ad_2]

    Source link

  • Sarah Silverman’s copyright infringement suit against OpenAI will advance in pared-down form

    Sarah Silverman’s copyright infringement suit against OpenAI will advance in pared-down form

    [ad_1]

    Sarah Silverman’s lawsuit against OpenAI will advance with some of her legal team’s claims dismissed. The comedian sued OpenAI and Meta in July 2023, claiming they trained their AI models on her books and other work without consent. Bloomberg reported on Tuesday that the unfair competition portion of the lawsuit will proceed. Judge Martínez-Olguín gave the plaintiffs until March 13 to amend the suit.

    US District Judge Araceli Martínez-Olguín threw out portions of the complaint from Silverman’s legal team Monday, including negligence, unjust enrichment, DMCA violations and accusations of vicarious infringement. The case’s principal claim remains intact. It alleges OpenAI directly infringed on copyrighted material by training LLMs on millions of books without permission.

    OpenAI’s motion to dismiss, filed in August, didn’t tackle the case’s core copyright claims. Although the suit will proceed, the judge suggested the federal Copyright Act may preempt the suit’s remaining claims. “As OpenAI does not raise preemption, the Court does not consider it,” Martínez-Olguín wrote.

    The US court system has yet to determine whether training AI large language models on copyrighted work falls under the fair use doctrine. Last month, OpenAI admitted in a court filing that it would be “impossible to train today’s leading AI models without using copyrighted materials.”

    The result of Silverman’s OpenAI hearing is similar to one in San Francisco in November when Silverman’s claims against Meta were also slashed down to the core copyright infringement claims. In that session, US District Judge Vince Chhabria described some of the plaintiffs’ dismissed claims as “nonsensical.”

    Other groups suing OpenAI for alleged copyright-related violations include The New York Times, a collection of nonfiction authors (a group that grew after the initial lawsuit) and The Author’s Guild. The latter filed its claim alongside authors George R.R. Martin (Game of Thrones) and John Grisham.

    [ad_2]

    Will Shanklin

    Source link

  • OpenAI forms a new team to study child safety | TechCrunch

    OpenAI forms a new team to study child safety | TechCrunch

    [ad_1]

    Under scrutiny from activists — and parents — OpenAI has formed a new team to study ways to prevent its AI tools from being misused or abused by kids.

    In a new job listing on its career page, OpenAI reveals the existence of a Child Safety team, which the company says is working with platform policy, legal and investigations groups within OpenAI as well as outside partners to manage “processes, incidents, and reviews” relating to underage users.

    The team is currently looking to hire a child safety enforcement specialist, who’ll be responsible for applying OpenAI’s policies in the context of AI-generated content and working on review processes related to “sensitive” (presumably kid-related) content.

    Tech vendors of a certain size dedicate a fair amount of resources to complying with laws like the U.S. Children’s Online Privacy Protection Rule, which mandate controls over what kids can — and can’t — access on the web as well as what sorts of data companies can collect on them. So the fact that OpenAI’s hiring child safety experts doesn’t come as a complete surprise, particularly if the company expects a significant underage user base one day. (OpenAI’s current terms of use require parental consent for children ages 13 to 18 and prohibit use for kids under 13.)

    But the formation of the new team, which comes several weeks after OpenAI announced a partnership with Common Sense Media to collaborate on kid-friendly AI guidelines and landed its first education customer, also suggests a wariness on OpenAI’s part of running afoul of policies pertaining to minors’ use of AI — and negative press.

    Kids and teens are increasingly turning to GenAI tools for help not only with schoolwork but personal issues. According to a poll from the Center for Democracy and Technology, 29% of kids report having used ChatGPT to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.

    Some see this as a growing risk.

    Last summer, schools and colleges rushed to ban ChatGPT over plagiarism and misinformation fears. Since then, some have reversed their bans. But not all are convinced of GenAI’s potential for good, pointing to surveys like the U.K. Safer Internet Centre’s, which found that over half of kids (53%) report having seen people their age use GenAI in a negative way — for example creating believable false information or images used to upset someone.

    In September, OpenAI published documentation for ChatGPT in classrooms with prompts and an FAQ to offer educator guidance on using GenAI as a teaching tool. In one of the support articles, OpenAI acknowledged that its tools, specifically ChatGPT, “may produce output that isn’t appropriate for all audiences or all ages” and advised “caution” with exposure to kids — even those who meet the age requirements.

    Calls for guidelines on kid usage of GenAI are growing.

    The UN Educational, Scientific and Cultural Organization (UNESCO) late last year pushed for governments to regulate the use of GenAI in education, including implementing age limits for users and guardrails on data protection and user privacy. “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” Audrey Azoulay, UNESCO’s director-general, said in a press release. “It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments.”

    [ad_2]

    Kyle Wiggers

    Source link

  • Ben Smith’s Semafor Launches an A.I. News Product With OpenAI and Microsoft

    Ben Smith’s Semafor Launches an A.I. News Product With OpenAI and Microsoft

    [ad_1]

    Ben Smith speaks on stage during the Semafor Media Summit on April 10, 2023 in New York City. Michael Loccisano/Getty Images for Semafor

    Semafor, the tech news startup founded by former BuzzFeed News editor-in-chief Ben Smith and former Bloomberg Media CEO Justin Smith (not related), is now working with Microsoft (MSFT) and OpenAI on creating news content with the help of artificial intelligence (A.I.). Semafor is the latest news organization to explore A.I. partnerships with tech companies, as the media industry cautiously navigates the technology

    Under the partnership, Semafor is launching a news product called Signals, “in which journalists, using tools from Microsoft and OpenAI, offer readers diverse, sophisticated analysis and insights on the biggest stories in the world as they develop,” according to a LinkedIn post by Rachel Oppenheim, Semafor’s chief revenue officer, today (Feb. 5)

    Signals will serve readers as a breaking news feed, where A.I. assists Semafor journalists in gathering information across news sources. Human editors will oversee the and fact-check all the information, Semafor said in a statement, and turn it into a presentable format for readers that links back to the original news source. Semafor is led by CEO Justin Smith while Ben Smith serves as the publication’s editor in chief. 

    The statement also said the way news organizations have been presenting news for a decade through “stubs” doesn’t meet the challenges readers currently face, referring to the amount of misinformation that goes unregulated on social media sites like X, where publications and journalists have struggled to reach audiences. “They are hungry for the authoritative information that social media no longer provides, and for the array of perspectives from around the world that no single source will give you,” Semafor said.

    As the digital news industry tries (and sometimes fails) to monetize outside of traditional advertising models, quite a few publications are experimenting with A.I. as an efficient way to scale up content production. Incorporating A.I. while maintaining journalistic ethics has become a recurring topic, as some publications, like Sports Illustrated, have been called out for publishing A.I.-generated articles without disclosure. Smith’s old employer BuzzFeed, which dissolved its newsroom last year, was also exposed for publishing A.I.-generated articles as bait for search engine optimization before the shutdown. 

    Tech companies seem eager to participate in the news industry, though with some challenges. For example, OpenAI already works with Axel Springer, which owns Politico and Business Insider, and Associated Press. But at the end of last year, The New York Times sued both OpenAI and its investor Microsoft for copyright infringement, saying the two companies owed “billions of dollars in statutory and actual damages” by using the Times content for training A.I. models. OpenAI published a response stating the lawsuit is without merit.

    Ben Smith’s Semafor Launches an A.I. News Product With OpenAI and Microsoft



    [ad_2]

    Nhari Djan

    Source link