ReportWire

Tag: openai

  • Stack Overflow Users Are Revolting Against an OpenAI Deal


    On Monday, Stack Overflow and OpenAI announced a new API partnership that will integrate Stack Overflow’s technical content with OpenAI’s ChatGPT AI assistant. The deal has sparked controversy among Stack Overflow’s user community, with many expressing anger and protest over the use of their contributed content to support and train AI models.

    “I hate this. I’m just going to delete/deface my answers one by one,” wrote one user on sister site Stack Exchange. “I don’t care if this is against your silly policies, because as this announcement shows, your policies can change at a whim without prior consultation of your stakeholders. You don’t care about your users, I don’t care about you.”

    Stack Overflow is a popular question-and-answer site for software developers that allows users to ask and answer technical questions related to coding. The site has a large community of developers who contribute knowledge and expertise to help others solve programming problems. Over the past decade, Stack Overflow has become a heavily utilized resource for many developers seeking solutions to common coding challenges.

    Under the announced partnership, OpenAI will utilize Stack Overflow’s OverflowAPI product to improve its AI models using content from the Stack Overflow community—officially incorporating information that many believe it had previously scraped without a license. OpenAI will also “surface validated technical knowledge from Stack Overflow directly into ChatGPT, giving users easy access to trusted, attributed, accurate, and highly technical knowledge and code backed by the millions of developers that have contributed to the Stack Overflow platform for 15 years,” according to Stack Overflow.

    In return, OpenAI plans to provide attribution to the Stack Overflow community within ChatGPT, but how the company will do that exactly is unclear. Stack Overflow will also use OpenAI technology in its development of OverflowAI, an AI model announced in July 2023 that uses an LLM to provide answers to developer questions.

    While the companies tout the collaboration’s benefits, many Stack Overflow users have expressed their displeasure with the deal. This is especially true considering that until very recently, Stack Overflow seemed to take a negative stance toward generative AI in general, banning answers written using ChatGPT. It was also widely reported last year that ChatGPT’s popularity had severely reduced Stack Overflow’s traffic, though the company seemed to later refute that, claiming faulty analysis by outsiders.

    Since the announcement, some users have attempted to alter or delete their Stack Overflow posts in protest, arguing that the move steals the labor of those who contributed to the platform without a way to opt out. In retaliation, Stack Overflow staff have reportedly been banning those users while erasing or reverting the protest posts. On Monday, a Stack Overflow user named Ben took to Mastodon to share his experience of getting suspended after posting a protest message:

    Stack Overflow announced that they are partnering with OpenAI, so I tried to delete my highest-rated answers.

    Stack Overflow does not let you delete questions that have accepted answers and many upvotes because it would remove knowledge from the community.

    So instead I changed my highest-rated answers to a protest message.

    Within an hour mods had changed the questions back and suspended my account for 7 days.

    Stack Overflow moderators have stated that once posts are made, they become “part of the collective efforts” of other contributors and should only be removed under extraordinary circumstances, according to The Verge. Stack Overflow’s terms of service also state that users cannot revoke permission for Stack Overflow to use their contributed content.

    While Stack Overflow’s terms grant it a broad, irrevocable license to user posts, contributed content is published under a Creative Commons Attribution-ShareAlike (CC BY-SA 4.0) license that requires attribution. We’ll see whether the ChatGPT integrations, which have not rolled out yet, will honor that license to the satisfaction of disgruntled Stack Overflow users. For now, the battle continues.

    This story originally appeared on Ars Technica.

    [ad_2]

    Benj Edwards, Ars Technica

    Source link

  • Anthropic’s Sibling Founders On Leaving OpenAI to Start a $15B Startup


    Anthropic Co-Founder & CEO Dario Amodei speaks onstage during TechCrunch Disrupt 2023 at Moscone Center on September 20, 2023 in San Francisco, California. Kimberly White/Getty Images for TechCrunch

    The Bloomberg Tech Summit yesterday (May 9) opened with brother-and-sister technologists Dario and Daniela Amodei, former senior leaders at OpenAI who stepped away to found their own A.I. company, Anthropic, now valued at $15 billion. The entrepreneurial duo is now engaged in “scaling up” Anthropic by creating models and relationships to serve emerging markets.

    Dario and Daniela left OpenAI in late 2020 to start their own company, with the goal of building A.I. systems that are not just powerful and intelligent but are also aligned with human values. “We left OpenAI because of concerns around the direction,” Daniela Amodei, who serves as president of Anthropic, said during an onstage interview yesterday. “We wanted to be sure the tools were being used reliably and responsibly…We want to be the most responsible A.I. we can, always asking the question, ‘What could go wrong here?’”

    “Our focus is on scaling with more data, along with models, and creating the relationships necessary to scale up the company in a more enterprise direction,” said Dario Amodei, the company’s CEO.

    Asked why users should trust them after last year’s debacle between OpenAI’s board and the company’s CEO Sam Altman, Dario said, “You shouldn’t. Look at all the companies out there. Who can you trust? It’s a very good question. We believe in doing what you say, and saying what you do. The broader societal question is, is A.I. so big that there needs to be some kind of democratic mandate on the technology?…We need to put positive pressure on this industry to always do the right thing for our users.”

    Asked how a brother-and-sister duo both ended up in the tech world, Dario and Daniela said it was a natural result of growing up in San Francisco. “Ever since the time we were kids, we always had a desire to make things better. It may sound corny, but it was a really deep thing with us,” Daniela said. “Growing up in San Francisco in the 1990s, we saw that things were happening but we didn’t yet have the language for what that was. We just saw a lot of well-dressed people going into swanky offices and we wondered, what are all these people doing? What are they working on? They were all young people with good jobs and that was attractive.”

    “For me, in the 90s, my fascination was with theories of the early universe more than business,” said Dario. “But over time we began to realize that if you wanted to make science or anything else and be socially responsible, you had to be involved and, later on, join one of these A.I. companies.”

    Dario noted the entry point for creating a new A.I. model is rapidly becoming restrictive due to its increasingly high cost. The current generation of AI models cost about $100 million to make, he said. “In the next few years it’s going to grow to the $100 billion range. And the models will look very different.

    “Plus, you have to start thinking about the larger ecosystem, carbon offsets for large data centers, so we are looking into that as well,” he added.

    Dan Holden

  • OpenAI Is ‘Exploring’ How to Responsibly Generate AI Porn


    OpenAI released draft documentation Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content.

    OpenAI’s usage policies currently prohibit sexually explicit or even suggestive materials, but a “commentary” note on the part of the Model Spec related to that rule says the company is considering how to permit such content.

    “We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” the note says, using the colloquial initialism for content considered “not safe for work.” “We look forward to better understanding user and societal expectations of model behavior in this area.”

    The Model Spec document says NSFW content “may include erotica, extreme gore, slurs, and unsolicited profanity.” It is unclear if OpenAI’s explorations of how to responsibly make NSFW content envisage loosening its usage policy only slightly, for example to permit generation of erotic text, or more broadly to allow descriptions or depictions of violence.

    In response to questions from WIRED, OpenAI spokesperson Grace McGuire said the Model Spec was an attempt to “bring more transparency about the development process and get a cross section of perspectives and feedback from the public, policymakers, and other stakeholders.” She declined to share details of what OpenAI’s exploration of explicit content generation involves or what feedback the company has received on the idea.

    Earlier this year, OpenAI’s chief technology officer, Mira Murati, told The Wall Street Journal that she was “not sure” if the company would in future allow depictions of nudity to be made with the company’s video generation tool Sora.

    AI-generated pornography has quickly become one of the biggest and most troubling applications of the type of generative AI technology OpenAI has pioneered. So-called deepfake porn—explicit images or videos made with AI tools that depict real people without their consent—has become a common tool of harassment against women and girls. In March, WIRED reported on what appear to be the first US minors arrested for distributing AI-generated nudes without consent, after Florida police charged two teenage boys for making images depicting fellow middle school students.

    “Intimate privacy violations, including deepfake sex videos and other nonconsensual synthesized intimate images, are rampant and deeply damaging,” says Danielle Keats Citron, a professor at the University of Virginia School of Law who has studied the problem. “We now have clear empirical support showing that such abuse costs targeted individuals crucial opportunities, including to work, speak, and be physically safe.”

    Citron calls OpenAI’s potential embrace of explicit AI content “alarming.”

    As OpenAI’s usage policies prohibit impersonation without permission, explicit nonconsensual imagery would remain banned even if the company did allow creators to generate NSFW material. But it remains to be seen whether the company could effectively moderate explicit generation to prevent bad actors from using the tools. Microsoft made changes to one of its generative AI tools after 404 Media reported that it had been used to create explicit images of Taylor Swift that were distributed on the social platform X.

    Additional reporting by Reece Rogers

    Kate Knibbs

  • Ashley Madison Is Still Around, a Powerful Chatbot Disappeared, Elon Musk Lays Off More Workers and More



    This week saw a blast from the past as we told the tales of numerous fraud victims who were targeted by scammers on the cheating site Ashley Madison. A new chatbot came and went, leaving many people with questions. And then there’s Elon Musk, who went “hardcore” with layoffs; he even got rid of those pesky interns whose big college-student salaries really hit a company’s bottom line. Here are the top tech stories of the week.

    Gizmodo Staff

  • What I Learned Trying ChatGPT’s New Memory Feature


    Everything reminds me of Her. While ChatGPT is not as powerful as the artificial intelligence from Spike Jonze’s sci-fi romance movie, OpenAI’s experimental memory tool for its chatbot seems to suggest a future where bots are highly personalized and capable of more fluid, lifelike conversations.

    OpenAI just rolled out a new feature for ChatGPT Plus subscribers called Memory, where the AI chatbot stores personal details that you share in conversations and refers to this information during future chats. Announced in February, ChatGPT’s Memory feature had been available only to a small group of users to test until late April.

    While it’s expected to become available to OpenAI’s Enterprise and Team customers eventually, the feature is rolling out first to Plus subscribers, though subscribers in Europe and Korea are not currently able to use ChatGPT’s Memory. It’s also not yet integrated with the GPT Store.

    WIRED received early access to the feature from OpenAI, and I’ve spent some time trying it out to better understand this new functionality and to collect some advice you can use to get started. While a few specifics may change about this nascent feature, here’s what you can expect as you’re getting started with ChatGPT’s Memory.

    How to Navigate ChatGPT’s Memory Feature

    When ChatGPT’s Memory arrived on my paid account, I received a pop-up notification explaining the fresh tool and how it can be used for remembering certain details across conversations. It’s worth noting that Memory is enabled automatically. You can easily opt out by opening Settings, then Personalization, and then toggling the Memory option by moving the slider to the left.

    While there is some overlap, ChatGPT’s Memory is not expected to replace the “custom instructions” feature.


    Adding facts about yourself to ChatGPT’s Memory is simple: Just chat with the bot. As you use OpenAI’s software, ChatGPT gleans personal insights from the conversations, like your name and where you live, as well as more niche observations, like your favorite movies and least favorite foods. Each time some nugget of information is added by the chatbot, you might see a Memory updated notification. Tap on that notification to check out what was included.

    Reece Rogers

  • Apple has reportedly resumed talks with OpenAI to build a chatbot for the iPhone


    Apple has resumed conversations with OpenAI, the maker of ChatGPT, to power some AI features coming to iOS 18, according to a new report in Bloomberg. Apple is also building its own large language models to power some iOS 18 features, but its talks with OpenAI are centered around a “chatbot/search component,” according to Bloomberg reporter Mark Gurman.

    Apple is also reportedly in talks with Google to license Gemini, Google’s own AI-powered chatbot, for iOS 18. Bloomberg reports that those talks are ongoing, and things could go either way because Apple hasn’t made a final decision on which company’s technology to use. It’s conceivable, Gurman says, that Apple could ultimately end up licensing AI tech from both companies or from neither.

    So far, Apple has been notably quiet about its AI efforts even as the rest of Silicon Valley has descended into an AI arms race. But it has dropped enough hints to indicate that it’s cooking up something. When the company announced its earnings in February, CEO Tim Cook said that Apple is continuing to work and invest in artificial intelligence and is “excited to share the details of our ongoing work in that space later this year.” It claimed that the brand new M3 MacBook Air that it launched last month was the “world’s best consumer laptop for AI,” and will reportedly start releasing AI-centric laptops and desktops later this year. And earlier this week, Apple also released a handful of open-source large language models that are designed to run locally on devices rather than in the cloud.

    It’s still unclear what Apple’s AI features in iPhones and other devices will look like. Generative AI is still notoriously unreliable and prone to making up answers. Recent AI-powered gadgets like the Humane Ai Pin released to disastrous reviews, while others like the Rabbit R1 have yet to prove themselves valuable.

    We’ll find out more at WWDC on June 10.

    Pranav Dixit

  • Google Thinks It Can Cash In on Generative AI. Microsoft Already Has


    Alphabet CEO Sundar Pichai is confident that Google will find a way to make money selling access to generative AI tools. Microsoft CEO Satya Nadella says his company is already doing it.

    Both companies reported better-than-expected quarterly sales and profit on Thursday. And the stock prices of both soared on the results, with Alphabet further buoyed by its new plans to buy back more shares and issue its first-ever dividend.

    But the near-term fortunes of Microsoft and Google, at least as far as their generative AI efforts are concerned, look different under the hood and in the comments of their executives. How investors, workers, and potential customers perceive the rivals’ dueling efforts could determine which gets the better chunk of the hundreds of billions of dollars in spending expected to flow to such software in the coming years.

    In a call with financial analysts on Thursday, Nadella touted that Microsoft now has 1.8 million customers for GitHub Copilot, a generative AI tool that helps engineers write software code. That’s up from 1.3 million customers a quarter ago.

    Among Fortune 500 companies, 60 percent are using Copilot for Microsoft Office 365, a virtual assistant that uses generative AI to help workers write emails and documents, and 65 percent are using a Microsoft Azure cloud service that enables them to access generative AI software from ChatGPT-maker OpenAI. “Azure has become a port of call for pretty much anybody who is doing an AI project,” Nadella said. The $13 billion Microsoft has invested in OpenAI has certainly helped win those clients.

    The buzz of interest in AI services helped accelerate revenue growth for Microsoft’s biggest unit, cloud services, by seven percentage points compared to a year ago, and Microsoft’s overall sales rose 17 percent to nearly $62 billion. It also gained cloud market share, Nadella added. The number of $100 million cloud deals Microsoft landed increased 80 percent during the quarter compared to the same period a year ago, and $10 million deals doubled.

    Alphabet’s Pichai had milestones to boast about too. He told analysts in a separate call that more than 1 million developers are using Google Cloud’s generative AI tools and that 60 percent of generative AI startups backed by investors are Google Cloud customers. Generative AI is also boosting the ad campaigns of Google’s advertising clients.

    But Pichai didn’t say how many signups Google had drawn to Gemini Advanced, a $20 per month subscription plan announced in February that provides access to the company’s most advanced AI chatbot.

    On Google’s core business of search, Pichai didn’t share revenue figures related to experiments to summarize query results using generative AI. By providing more direct answers to searchers, Google could end up with fewer opportunities to show search ads if people spend less time doing additional, more refined searches. The types of ads Google does show also could have to shift.

    While Pichai said the tests show that users exposed to generative AI-powered search are doing more searches, those searches are also potentially less profitable for Google, because the underlying technology that powers more advanced searches is costlier to operate than its longstanding systems.

    Pichai expressed little concern on either front. “We are very, very confident we can manage the cost of how to serve these queries,” he said. “I am comfortable and confident that we’ll be able to manage the monetization transition here as well. It will play out over time.”

    Alphabet’s overall sales rose 15 percent to nearly $81 billion.

    It spent about the same amount—around $12 billion—as Microsoft on infrastructure like servers and data centers last quarter. But the results and comments on Thursday suggest that Microsoft is further along in delivering a payoff.

    For now, shareholders are giving both companies leeway. At Thursday’s close, Microsoft shares were up 35 percent over the past year, and Alphabet’s were up 51 percent. Both are at or near all-time highs. But if customers keep flocking to Copilot while the prospects for Gemini and Google search don’t become clearer, the trendlines could soon diverge.

    Paresh Dave

  • The world’s leading AI companies pledge to protect the safety of children online


    Leading artificial intelligence companies including OpenAI, Microsoft, Google, Meta and others have jointly pledged to prevent their AI tools from being used to exploit children and generate child sexual abuse material (CSAM). The initiative was led by child-safety group Thorn and All Tech Is Human, a non-profit focused on responsible tech.

    The pledges from AI companies, Thorn said, “set a groundbreaking precedent for the industry and represent a significant leap in efforts to defend children from sexual abuse as a feature with generative AI unfolds.” The goal of the initiative is to prevent the creation of sexually explicit material involving children and take it off social media platforms and search engines. More than 104 million files of suspected child sexual abuse material were reported in the US in 2023 alone, Thorn says. In the absence of collective action, generative AI is poised to make this problem worse and overwhelm law enforcement agencies that are already struggling to identify genuine victims.

    On Tuesday, Thorn and All Tech Is Human released a new paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse” that outlines strategies and lays out recommendations for companies that build AI tools, search engines, social media platforms, hosting companies and developers to take steps to prevent generative AI from being used to harm children.

    One of the recommendations, for instance, asks companies to choose the data sets used to train AI models carefully and to avoid not only those containing instances of CSAM but also those containing adult sexual content altogether, because of generative AI’s propensity to combine the two concepts. Thorn is also asking social media platforms and search engines to remove links to websites and apps that let people “nudify” images of children, thus creating new AI-generated child sexual abuse material online. A flood of AI-generated CSAM, according to the paper, will make identifying genuine victims of child sexual abuse more difficult by worsening the “haystack problem” — a reference to the amount of content that law enforcement agencies must currently sift through.

    “This project was intended to make abundantly clear that you don’t need to throw up your hands,” Thorn’s vice president of data science Rebecca Portnoff told the Wall Street Journal. “We want to be able to change the course of this technology to where the existing harms of this technology get cut off at the knees.”

    Some companies, Portnoff said, had already agreed to separate images, video and audio that involved children from data sets containing adult content to prevent their models from combining the two. Others also add watermarks to identify AI-generated content, but the method isn’t foolproof — watermarks and metadata can be easily removed.

    Pranav Dixit

  • Meta Is Already Training a More Powerful Successor to Llama 3


    Zuckerberg took to Instagram today to explain that Meta would incorporate the new Meta AI assistant, powered by Llama 3, into products that include WhatsApp, Instagram, Facebook, and Messenger.

    Meta said in its blog post announcing Llama 3 that it had focused heavily on improving the training data used to develop the model. It was fed seven times as much data as its predecessor, Llama 2, the company said. Some AI experts noted that figures released by Meta also showed that creating Llama 3 required huge amounts of energy to power the servers required.

    The growing capabilities of open source AI models have spurred some experts to worry that they could make it easier to develop cyber, chemical, or biological weapons—or even become hostile toward humans. Meta has released tools that it says can help ensure Llama does not output potentially harmful utterances.

    Others in the field of AI say that Meta’s Llama models are not as open as they could be. The company’s open source license on the models places some restrictions on what researchers and developers can build.

    “It’s great to see more and more models openly releasing their weights,” said Luca Soldaini, senior applied research scientist at the Allen Institute for AI, a nonprofit lab, in a statement after Llama 3’s release. “But the open community needs access to all other parts of the AI pipeline—its data, training, logs, code, and evaluations. This is what will ultimately accelerate our collective understanding of these models.”

    Stella Biderman, an AI researcher involved with EleutherAI, a nonprofit open source AI project, says Meta’s license for Llama 2 limited the experiments that AI researchers can run with it, and adds that the Llama 3 license looks even more restrictive. “Meta releases weights but is famously restrictive about what you can do with them,” Biderman says.

    One part of the model’s license says that companies with “greater than 700 million monthly active users” must seek a special license from Meta—a clause apparently designed to prevent the project from helping the company’s closest rivals.

    Even so, Llama 3 seems likely to spark a new burst of AI experimentation. Clément Delangue, CEO of Hugging Face, a repository for open AI models, including Llama 3, says developers created more than 30,000 variants of Llama 2. “I’m sure we’ll see a flurry of new models based on Llama 3 as well,” he says. “Awesome community move by Meta.”

    Will Knight

  • Europe is falling behind in generative AI, with the U.S. light years ahead. But the race is just getting started


    The potential of generative AI knows no limits. And what we have seen of it now might only be the tip of the iceberg. 

    For years, companies around the world have been working on mustering up their AI prowess—be it in the U.S., China, or France.

    Generative AI’s potential to boost productivity, ignite creativity, and overhaul workflows is now taking off within countless industries. Regardless of the business case, companies working with or making their own generative AI tools have been catapulted to the forefront of the conversation.

    Marking our inaugural Brainstorm AI conference at the Rosewood Hotel in London, where we’ll unpack some of these complex yet pressing subjects, Fortune took a deep dive into the state of play for generative AI across the world, with exclusive insights based on data from the Amsterdam-based intelligence company Dealroom.  

    Our analysis covers the world’s top 100 generative AI companies by funding. It’s little surprise that U.S.-based (and specifically, San Francisco Bay Area-based) companies dominated other regions by light years. The Sam Altman-led OpenAI is, by far, the highest-funded AI company, while its California neighbors Anthropic and Inflection AI follow closely after. Over in Europe, the likes of Mistral AI and Aleph Alpha have gained traction for their innovations. 

    Still, companies in France, the U.K., and Germany received a fraction of the funding—not because there aren’t enough of them, but because they haven’t reached the mammoth size their American peers have. Israel, which we’ve included in our analysis, also has a buzzing generative AI scene. 

    In numbers, that means Dealroom’s top-100 list cuts off companies that have raised less than $70 million in total funding. That’s where the bulk of Europe’s fledgling companies fall. And since Dealroom mainly considers funding figures in this case, some noteworthy players in the generative AI realm, like Google, aren’t part of the analysis below.

    Still, Europe can pat itself on the back for some of the strides it’s made. For instance, three of the 15 companies on our list have female founders. Seven companies were initially founded in Europe but have since moved to the U.S., where they obtained about $1.7 billion in funding.

    Given the technology’s various use cases, defining what qualifies as a generative AI company can be challenging. By definition, generative AI uses algorithms to create new and realistic content—including text, images, and audio—based on training data. Dealroom’s data, which is as of April 2024, looks at companies that either use or create large language models trained on massive data sets to produce new content. 

    The charts below give us a glimpse of how Europe compares to some of the world’s AI power players. They also show us where the biggest strides in generative AI are being made in Europe and who the movers and shakers are.

    Total funding for the world’s top 100 generative AI startups, by region

    It’s clear that the U.S. has received the lion’s share of funding. American companies are ahead with more than 10 times the funding: $36.8 billion raised, compared with just $3.2 billion so far for European and Israeli companies. OpenAI is a clear leader with $12.3 billion in funds raised, according to data compiled by Dealroom.
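As a quick sanity check (my arithmetic, not Dealroom's), the two regional totals quoted above do bear out the "more than 10 times" claim:

```python
# Dealroom figures quoted above, in billions of USD.
us_funding = 36.8            # top-100 gen-AI companies based in the U.S.
europe_israel_funding = 3.2  # top-100 gen-AI companies in Europe and Israel

# Ratio of U.S. funding to European/Israeli funding.
ratio = us_funding / europe_israel_funding
print(f"U.S. lead: {ratio:.1f}x")  # prints: U.S. lead: 11.5x
```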

    Key European markets home to the biggest gen AI players by funding

    In our analysis, Israel has the lead over Europe as a hub for generative AI companies, based on how much they’ve secured in funding. Within continental Europe, Germany and France emerge at the top. 

    The majority of the funding for European companies originates from European investors, based on Dealroom data. Roughly 43% of the funding for European and Israeli companies comes from their home countries, about 13% comes from a different country within Europe, and 39% comes from the U.S.   
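Tallying those three shares (my cross-check, not Dealroom's) shows they leave roughly 5 percent unaccounted for, presumably investors outside Europe, Israel, and the U.S., plus rounding:

```python
# Funding sources for European and Israeli gen-AI companies,
# as percentages of total funding (Dealroom figures quoted above).
shares = {
    "home country": 43,
    "elsewhere in Europe": 13,
    "United States": 39,
}

accounted_for = sum(shares.values())  # 95
remainder = 100 - accounted_for       # 5
print(f"Accounted for: {accounted_for}%, remainder: {remainder}%")
```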

    Most funded companies in Europe and Israel

    Here’s a glimpse at the most funded companies in Europe and Israel. Aleph Alpha, the German answer to OpenAI founded by Jonas Andrulis, leads the category. In Nov. 2023, Bosch, SAP, and Hewlett Packard Enterprise backed its $500 million series B funding round, marking one of Europe’s biggest AI funding rounds ever.

    France’s Mistral AI, led by Arthur Mensch, ranks second. In February, Microsoft said it would invest $16.3 million in the French company.

    See below for the full list of generative AI companies headquartered in Europe and Israel ranked by funding, per Dealroom data. 


    Aleph Alpha

    Launch year: 2019
    HQ city/country: Heidelberg, Germany
    Total funding (USD): $641.14 million

    Mistral AI

    Launch year: 2023
    HQ city/country: Paris, France
    Total funding: $553 million

    AI21

    Launch year: 2017
    HQ city/country: Tel Aviv-Yafo, Israel
    Total funding: $326.5 million

    Lightricks

    Launch year: 2013
    HQ city/country: Jerusalem, Israel
    Total funding: $305 million

    Cera

    Launch year: 2016
    HQ city/country: London, United Kingdom
    Total funding: $302.5 million

    Synthesia

    Launch year: 2017
    HQ city/country: London, United Kingdom
    Total funding: $155.58 million

    Stability AI

    Launch year: 2019
    HQ city/country: London, United Kingdom
    Total funding: $151 million

    Poolside AI

    Launch year: 2023
    HQ city/country: Paris, France
    Total funding: $126.01 million

    Pecan

    Launch year: 2016
    HQ city/country: Tel Aviv-Yafo, Israel
    Total funding: $112 million

    DeepL

    Launch year: 2009
    HQ city/country: Cologne, Germany
    Total funding: $110 million

    MDClone

    Launch year: 2015
    HQ city/country: Beersheba, Israel
    Total funding: $104.01 million

    Corti

    Launch year: 2016
    HQ city/country: Copenhagen, Denmark
    Total funding: $90.9 million

    Stratio

    Launch year: 2014
    HQ city/country: Pozuelo de Alarcón, Spain
    Total funding: $85.8 million

    Sana Labs

    Launch year: 2016
    HQ city/country: Stockholm, Sweden
    Total funding: $82.57 million

    Ready Player Me

    Launch year: 2014
    HQ city/country: Tallinn, Estonia
    Total funding: $72.55 million

    This feature was reported with assistance from Fortune’s executive editor Alex Wood Morton, list director Grethe Schepers, research analyst Elena Medina, and production editor Aslesha Mehta. 

    Prarthana Prakash, Alex Wood Morton

  • OpenAI makes ChatGPT ‘more direct, less verbose’ | TechCrunch


    ChatGPT, OpenAI’s viral AI-powered chatbot, just got a big upgrade.

    OpenAI announced today that premium ChatGPT users — customers paying for ChatGPT Plus, Team or Enterprise — can now leverage an updated and enhanced version of GPT-4 Turbo, one of the models that powers the conversational ChatGPT experience.

    This new model (“gpt-4-turbo-2024-04-09”) brings with it improvements in writing, math, logical reasoning and coding, OpenAI claims, as well as a more up-to-date knowledge base. It was trained on publicly available data up to December 2023, in contrast to the previous edition of GPT-4 Turbo available in ChatGPT, which had an April 2023 cut-off.
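For developers, the update is addressable by that version string. A minimal illustrative sketch, assuming the request-body shape of OpenAI’s Chat Completions API (shown as a plain payload so no API key or network call is needed; the helper name is hypothetical):

```python
# The model identifier quoted in OpenAI's announcement.
MODEL = "gpt-4-turbo-2024-04-09"

def build_chat_request(user_message: str) -> dict:
    """Build a Chat Completions request body targeting the new GPT-4 Turbo."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_chat_request("Summarize the GPT-4 Turbo update in one line.")
print(req["model"])  # -> gpt-4-turbo-2024-04-09
```

In practice this payload would be passed to an API client along with authentication; the sketch only shows how the dated model string selects the updated model.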

    “When writing with ChatGPT [with the new GPT-4 Turbo], responses will be more direct, less verbose and use more conversational language,” OpenAI writes in a post on X.

    The ChatGPT update — which follows Tuesday’s general-availability launch of new models in OpenAI’s API, notably GPT-4 Turbo with Vision, which adds image understanding capabilities to the normally text-only GPT-4 Turbo — arrives after an unflattering week for OpenAI.

    Reporting from The Intercept revealed that Microsoft pitched OpenAI’s DALL-E text-to-image model as a battlefield tool for the U.S. military. And, according to a piece in The Information, OpenAI recently fired two researchers — including an ally of chief scientist Ilya Sutskever, who was among those who pushed for the ouster of CEO Sam Altman late last year — for allegedly leaking information.

    Kyle Wiggers

  • To Build a Better AI Supercomputer, Let There Be Light


    GlobalFoundries, a company that makes chips for others, including AMD and General Motors, previously announced a partnership with Lightmatter. Harris says his company is “working with the largest semiconductor companies in the world as well as the hyperscalers,” referring to the largest cloud companies like Microsoft, Amazon, and Google.

    If Lightmatter or another company can reinvent the wiring of giant AI projects, a key bottleneck in the development of smarter algorithms might fall away. The use of more computation was fundamental to the advances that led to ChatGPT, and many AI researchers see the further scaling-up of hardware as being crucial to future advances in the field—and to hopes of ever reaching the vaguely specified goal of artificial general intelligence, or AGI, meaning programs that can match or exceed biological intelligence in every way.

    Linking a million chips together with light might allow for algorithms several generations beyond today’s cutting edge, says Lightmatter’s CEO Nick Harris. “Passage is going to enable AGI algorithms,” he confidently suggests.

    The large data centers that are needed to train giant AI algorithms typically consist of racks filled with tens of thousands of computers running specialized silicon chips and a spaghetti of mostly electrical connections between them. Maintaining training runs for AI across so many systems—all connected by wires and switches—is a huge engineering undertaking. Converting between electronic and optical signals also places fundamental limits on chips’ abilities to run computations as one.

    Lightmatter’s approach is designed to simplify the tricky traffic inside AI data centers. “Normally you have a bunch of GPUs, and then a layer of switches, and a layer of switches, and a layer of switches, and you have to traverse that tree” to communicate between two GPUs, Harris says. In a data center connected by Passage, Harris says, every GPU would have a high-speed connection to every other chip.
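The trade-off Harris describes can be sketched with back-of-the-envelope arithmetic (our illustration, not from the article): an all-to-all fabric needs a dedicated link per GPU pair, while a switch tree trades links for hops.

```python
import math

def full_mesh_links(n_gpus: int) -> int:
    """Every GPU pair gets a direct link: n choose 2."""
    return n_gpus * (n_gpus - 1) // 2

def tree_hops(n_gpus: int, switch_radix: int) -> int:
    """Worst-case switch hops in a balanced tree: up to the common
    ancestor, then back down."""
    levels = math.ceil(math.log(n_gpus, switch_radix))
    return 2 * levels

print(full_mesh_links(8))          # 28 links for one 8-GPU node
print(full_mesh_links(1_000_000))  # ~500 billion links for a million GPUs
print(tree_hops(1_000_000, 64))    # 8 worst-case hops through 64-port switches
```

The half-trillion-link figure for a million GPUs shows why all-to-all connectivity is hard to build electrically, and why an optical interposer like Passage that raises per-chip bandwidth is an appealing alternative to ever-deeper switch hierarchies.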

    Lightmatter’s work on Passage is an example of how AI’s recent flourishing has inspired companies large and small to try to reinvent key hardware behind advances like OpenAI’s ChatGPT. Nvidia, the leading supplier of GPUs for AI projects, held its annual conference last month, where CEO Jensen Huang unveiled the company’s latest chip for training AI: a GPU called Blackwell. Nvidia will sell the GPU in a “superchip” consisting of two Blackwell GPUs and a conventional CPU processor, all connected using the company’s new high-speed communications technology called NVLink-C2C.

    The chip industry is famous for finding ways to wring more computing power from chips without making them larger, but Nvidia chose to buck that trend. The Blackwell GPUs inside the company’s superchip are twice as powerful as their predecessors but are made by bolting two chips together, meaning they consume much more power. That trade-off, in addition to Nvidia’s efforts to glue its chips together with high-speed links, suggests that upgrades to other key components for AI supercomputers, like that proposed by Lightmatter, could become more important.

    Will Knight

  • OpenAI’s GPT Store Is Triggering Copyright Complaints


    For the past few months, Morten Blichfeldt Andersen has spent many hours scouring OpenAI’s GPT Store. Since it launched in January, the marketplace for bespoke bots has filled up with a deep bench of useful and sometimes quirky AI tools. Cartoon generators spin up New Yorker–style illustrations and vivid anime stills. Programming and writing assistants offer shortcuts for crafting code and prose. There’s also a color analysis bot, a spider identifier, and a dating coach called RizzGPT. Yet Blichfeldt Andersen is hunting only for one very specific type of bot: Those built on his employer’s copyright-protected textbooks without permission.

    Blichfeldt Andersen is publishing director at Praxis, a Danish textbook purveyor. The company has been embracing AI and created its own custom chatbots. But it is currently engaged in a game of whack-a-mole in the GPT Store, and Blichfeldt Andersen is the man holding the mallet.

    “I’ve been personally searching for infringements and reporting them,” Blichfeldt Andersen says. “They just keep coming up.” He suspects the culprits are primarily young people uploading material from textbooks to create custom bots to share with classmates—and that he has uncovered only a tiny fraction of the infringing bots in the GPT Store. “Tip of the iceberg,” Blichfeldt Andersen says.

    It is easy to find bots in the GPT Store whose descriptions suggest they might be tapping copyrighted content in some way, as TechCrunch noted in a recent article claiming OpenAI’s store was overrun with “spam.” Using copyrighted material without permission is permissible in some contexts, but in others rightsholders can take legal action. WIRED found a GPT called Westeros Writer that claims to “write like George R.R. Martin,” the creator of Game of Thrones. Another, Voice of Atwood, claims to imitate the writer Margaret Atwood. Yet another, Write Like Stephen, is intended to emulate Stephen King.

    When WIRED tried to trick the King bot into revealing the “system prompt” that tunes its responses, the output suggested it had access to King’s memoir On Writing. Write Like Stephen was able to reproduce passages from the book verbatim on demand, even noting which page the material came from. (WIRED could not make contact with the bot’s developer, because it did not provide an email address, phone number, or external social profile.)

    OpenAI spokesperson Kayla Wood says it responds to takedown requests against GPTs made with copyrighted content but declined to answer WIRED’s questions about how frequently it fulfills such requests. She also says the company proactively looks for problem GPTs. “We use a combination of automated systems, human review, and user reports to find and assess GPTs that potentially violate our policies, including the use of content from third parties without necessary permission,” Wood says.

    New Disputes

    The GPT Store’s copyright problem could add to OpenAI’s existing legal headaches. The company is facing a number of high-profile lawsuits alleging copyright infringement, including one brought by The New York Times and several brought by different groups of fiction and nonfiction authors, including big names like George R.R. Martin.

    Chatbots offered in OpenAI’s GPT Store are based on the same technology as its own ChatGPT but are created by outside developers for specific functions. To tailor their bot, a developer can upload extra information that it can tap to augment the knowledge baked into OpenAI’s technology. The process of consulting this additional information to respond to a person’s queries is called retrieval-augmented generation, or RAG. Blichfeldt Andersen is convinced that the RAG files behind the bots in the GPT Store are a hotbed of copyrighted materials uploaded without permission.
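A toy sketch of the RAG pattern the article describes (our illustration: the retriever here is a simple keyword-overlap scorer, whereas a real GPT retrieves over embeddings of the developer’s uploaded files; all names and documents are hypothetical):

```python
import re

# Stand-ins for a developer's uploaded files (e.g., textbook excerpts).
DOCUMENTS = [
    "Chapter 3: photosynthesis converts light energy into glucose.",
    "Chapter 7: mitosis is how a cell divides into two daughter cells.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    q = tokens(query)
    return max(docs, key=lambda d: len(q & tokens(d)))

def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved context before the
    language model generates an answer -- the 'RAG' step."""
    context = retrieve(query, DOCUMENTS)
    return f"Context: {context}\n\nQuestion: {query}"

print(build_prompt("What is mitosis?"))
```

The copyright question arises exactly at the `DOCUMENTS` step: whatever a bot builder uploads there gets consulted verbatim at answer time, which is why infringing source files can surface in a bot’s output.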

    Kate Knibbs

  • You can now use ChatGPT without an account


    On Monday, OpenAI began opening up ChatGPT to users without an account. It described the move as part of its mission to “make tools like ChatGPT broadly available so that people can experience the benefits of AI.” It also gives the company more training data (for those who don’t opt out) and perhaps nudges more users into creating accounts and subscribing for superior GPT-4 access instead of the older GPT-3.5 model free users get.

    I tested the instant access, which — as advertised — allowed me to start a new GPT-3.5 thread without any login info. The chatbot’s standard “How can I help you today?” screen appears, with optional buttons to sign up or log in. Although I saw it today, OpenAI says it’s gradually rolling out access, so check back later if you don’t see the option yet.

    OpenAI says it added extra safeguards for accountless users, including blocking prompts and image generations in more categories than logged-in users. When asked for more info on what new categories it’s blocking, an OpenAI spokesperson told me that, while developing the feature, it considered how logged-out GPT-3.5 users could potentially introduce new threats.

    The spokesperson added that the teams in charge of detecting and stopping abuse of its AI models have been involved in creating the new feature and will adjust accordingly if unexpected threats emerge. Of course, it still blocks everything it does for signed-in users, as detailed in its moderation API.

    You can opt out of data training for your prompts when not signed in. To do so, click on the little question mark to the right of the text box, then select Settings and turn off the toggle for “Improve the model for everyone.”

    OpenAI says more than 100 million people across 185 countries use ChatGPT weekly. Those are staggering numbers for an 18-month-old service from a company many people still hadn’t heard of two years ago. Today’s move gives those hesitant to create an account an incentive to take the world-changing chatbot for a spin, boosting those numbers even more.

    Will Shanklin

  • OpenAI Can Re-Create Human Voices—but Won’t Release the Tech Yet


    Voice synthesis has come a long way since 1978’s Speak & Spell toy, which once wowed people with its state-of-the-art ability to read words aloud using an electronic voice. Now, using deep-learning AI models, software can create not only realistic-sounding voices but can also convincingly imitate existing voices using small samples of audio.

    Along those lines, OpenAI this week announced Voice Engine, a text-to-speech AI model for creating synthetic voices based on a 15-second segment of recorded audio. It has provided audio samples of the Voice Engine in action on its website.

    Once a voice is cloned, a user can input text into the Voice Engine and get an AI-generated voice result. But OpenAI is not ready to widely release its technology. The company initially planned to launch a pilot program for developers to sign up for the Voice Engine API earlier this month. But after more consideration about ethical implications, the company decided to scale back its ambitions for now.

    “In line with our approach to AI safety and our voluntary commitments, we are choosing to preview but not widely release this technology at this time,” the company writes. “We hope this preview of Voice Engine both underscores its potential and also motivates the need to bolster societal resilience against the challenges brought by ever more convincing generative models.”

    Voice cloning tech in general is not particularly new—there have been several AI voice synthesis models since 2022, and the tech is active in the open source community with packages like OpenVoice and XTTSv2. But the idea that OpenAI is inching toward letting anyone use its particular brand of voice tech is notable. And in some ways, the company’s reticence to release it fully might be the bigger story.

    OpenAI says that benefits of its voice technology include providing reading assistance through natural-sounding voices, enabling global reach for creators by translating content while preserving native accents, supporting non-verbal individuals with personalized speech options, and assisting patients in recovering their own voice after speech-impairing conditions.

    But it also means that anyone with 15 seconds of someone’s recorded voice could effectively clone it, and that has obvious implications for potential misuse. Even if OpenAI never widely releases its Voice Engine, the ability to clone voices has already caused trouble in society through phone scams where someone imitates a loved one’s voice and election campaign robocalls featuring cloned voices from politicians like Joe Biden.

    Also, researchers and reporters have shown that voice-cloning technology can be used to break into bank accounts that use voice authentication (such as Chase’s Voice ID), which prompted US senator Sherrod Brown of Ohio, the chair of the US Senate Committee on Banking, Housing, and Urban Affairs, to send a letter to the CEOs of several major banks in May 2023 to inquire about the security measures banks are taking to counteract AI-powered risks.

    OpenAI recognizes that the tech might cause trouble if broadly released, so it’s initially trying to work around those issues with a set of rules. It has been testing the technology with a set of select partner companies since last year. For example, video synthesis company HeyGen has been using the model to translate a speaker’s voice into other languages while keeping the same vocal sound.

    Benj Edwards, Ars Technica

  • Here’s Proof the AI Boom Is Real: More People Are Tapping ChatGPT at Work


    Ever since the rollout of ChatGPT in November 2022, many people in science, business, and media have been obsessed with AI. A cursory look at my own published work during that period fingers me as among the guilty. My defense is that I share with those other obsessives a belief that large language models are the leading edge of an epochal transformation. Maybe I’m swimming in generative Kool-Aid, but I believe AI advances within our grasp will change not only the way we work, but the structure of businesses, and ultimately the course of humanity.

    Not everyone agrees, and in recent months there’s been a backlash. AI has been oversold and overhyped, some experts now opine. Self-styled AI-critic-in-chief Gary Marcus recently said of the LLM boom, “It wouldn’t surprise me if, to some extent, this whole thing fizzled out.” Others claim that AI is mired in the “trough of disillusionment.”

    This week we got some data that won’t resolve the larger questions but provides a snapshot of how the US, if not the world, views the advent of AI and large language models. The Pew Research Center—which did similar probes during the rise of the internet, social media, and mobile devices—released a study of how ChatGPT was being used, regarded, and trusted. The sample was taken between February 7 and 11 of this year.

    Some of the numbers at first seem to indicate that the LLM controversy might be a parochial disagreement that most people don’t care about. A third of Americans haven’t heard of ChatGPT. Just under a quarter have used it. Oh, and for all the panic about how AI is going to flood the public square with misinformation about the 2024 election? So far, only 2 percent of Americans have used ChatGPT to get information about the presidential election season already underway.

    More broadly, though, data from the survey indicates that we’re seeing a powerful technology whose rise is just beginning. If you accept Pew’s sample as indicative of all Americans, millions of people are indeed familiar with ChatGPT. And one thing in particular stands out: While 17 percent of respondents said they have used it for entertainment and an identical share said they’ve tried it to learn something new, a full 20 percent of adults say that they have used ChatGPT for work. That’s up dramatically from the 12 percent who responded affirmatively when the same question was asked six months earlier—a rise of two-thirds.

    When I spoke to Colleen McClain, a Pew research associate involved in the study, she agreed that it seems to track with other huge technological shifts. “If you look at our trend charts over time on internet access, smartphones, social media, certainly some of them show this uptick,” she says. For some technologies there had been a leveling off, she adds. But in the ones she mentioned, the plateau came only when so many people came on board that there weren’t many stragglers left.

    What’s crazy about that sudden jump in ChatGPT business use from 12 percent to 20 percent is that we’re only at the beginning stages of humans collaborating with these models. The tools to make full use of ChatGPT are still nascent. That’s changing fast. OpenAI, ChatGPT’s creator, is going full tilt, and AI giants Microsoft and Google are still in the process of diverting their workforces to redesign every product line to integrate conversational AI. And startups like Sierra, which is building agents for corporate customers, are enabling bespoke usages that take advantage of multiple models. As this process continues, more people will use AI tools. And since the foundation models are getting exponentially better—am I hearing that GPT-5 will show up this year?—that will make them even more compelling. This raises the possibility that the quality of virtually all work will come down to how well one can draw out the talents of a robot collaborator.

    What past technology can help us understand the trajectory of the rocket ship we’re on? While the near limitless ceiling of AI makes it hard to find an analog, I suggest the uptake of spreadsheets. Dan Bricklin and Bob Frankston invented them in 1978, and a year later the concept was embodied in VisiCalc, which at the time ran only on Apple computers. Spreadsheets had a phenomenal and disruptive effect on the business world. More than mere accounting tools, they triggered an era of business innovation and shook up the flow of information inside companies. Yet it took a few years before the business world widely adopted spreadsheets. The turning point came with a new and more powerful product called Lotus 1-2-3, which ran on the IBM PC. The current and near-future startups in the AI world, like Sierra, are all hoping to become the Lotuses of our era—but also to be much more consequential and lasting. Spreadsheets are largely limited to the business domain. LLMs can seemingly mess with anything.

    Steven Levy

  • Leading China VC Kai-Fu Lee warns an investor reckoning is coming for unprofitable AI companies


    The halcyon days when venture capitalists were content to fork over billions to the latest AI startup, as researchers burned through cash with little to show for it, may be all but over. A “reckoning” is coming soon for AI companies that fail to turn a profit as the new technology matures, Kai-Fu Lee, chairman and chief executive of Sinovation Ventures, said at the Fortune Innovation forum in Hong Kong on Wednesday.

    Lee said too many large language model (LLM) startups focus on striving for breakthrough advances and too little on commercializing their work. “A lot of the LLM companies out there are run by researchers who care only about making a great model,” he said in a conversation with Fortune editor-in-chief Alyson Shontell. “That science fair phase needs to end.”

    If there’s one thing the three leading U.S. megacap tech stocks have in common, it’s that each successfully monetized an emerging technology—Microsoft with the personal computer, Apple and Google with the smartphone.

    A former Google China president and himself a researcher in the field, Lee founded his own AI startup in March 2023. The firm, named 01.AI, was valued at more than $1 billion in less than eight months.

    Lee said his own former employer Google serves as a cautionary tale. Even with what is still the world’s densest network of AI talent, he argued, Google lost its lead to OpenAI because it squandered time and resources indulging all of its employees’ competing plans.

    “If you have too many researchers and a culture where everybody can try their ideas, you’ll quickly run out of money as a startup,” he said. 

    Huawei’s focus vs Google’s ‘let one hundred flowers bloom’

    Lee argued that in order for his company to one day count among the world leaders in the field, it needs to be brutally efficient with every dollar it spends.

    On Wednesday, the AI expert pointed to Huawei as an example of how such focus might work in practice. China’s leading telecom equipment maker seized on an obscure advance by Turkish researcher Erdal Arıkan, investing its efforts almost exclusively in commercializing his polar code breakthrough. This allowed it to eventually surpass larger Western competitors like Ericsson and go on to control the bulk of the 5G mobile networking market.

    “That made all the difference,” Lee said. “We’re taking that same approach to be very, very diligent to save GPU [costs].”

    Thanks to its focus on efficient execution, he believes 01.AI—which publishes all its research on open sites like Hugging Face—has narrowed the gap to American companies like OpenAI from eight years to less than twelve months in just a year’s time.

    AI rivals that instead embrace Google’s strategy of “let one hundred flowers bloom”, as Lee phrased it, would by comparison struggle to reach profitability. 

    “There is a point of reckoning when investors are going to say: What do you have to show for yourself?” said Lee. “What’s your P&L? What’s your revenue? What’s your growth? When do you break even?” 

    If an AI startup doesn’t have a convincing answer, then its “science fair” days are over.

    Christiaan Hetzner

  • 8 Google Employees Invented Modern AI. Here’s the Inside Story


    The last two weeks before the deadline were frantic. Though officially some of the team still had desks in Building 1945, they mostly worked in 1965 because it had a better espresso machine in the micro-kitchen. “People weren’t sleeping,” says Gomez, who, as the intern, lived in a constant debugging frenzy and also produced the visualizations and diagrams for the paper. It’s common in such projects to do ablations—taking things out to see whether what remains is enough to get the job done.

    “There was every possible combination of tricks and modules—which one helps, which doesn’t help. Let’s rip it out. Let’s replace it with this,” Gomez says. “Why is the model behaving in this counterintuitive way? Oh, it’s because we didn’t remember to do the masking properly. Does it work yet? OK, move on to the next. All of these components of what we now call the transformer were the output of this extremely high-paced, iterative trial and error.” The ablations, aided by Shazeer’s implementations, produced “something minimalistic,” Jones says. “Noam is a wizard.”

    Vaswani recalls crashing on an office couch one night while the team was writing the paper. As he stared at the curtains that separated the couch from the rest of the room, he was struck by the pattern on the fabric, which looked to him like synapses and neurons. Gomez was there, and Vaswani told him that what they were working on would transcend machine translation. “Ultimately, like with the human brain, you need to unite all these modalities—speech, audio, vision—under a single architecture,” he says. “I had a strong hunch we were onto something more general.”

    In the higher echelons of Google, however, the work was seen as just another interesting AI project. I asked several of the transformers folks whether their bosses ever summoned them for updates on the project. Not so much. But “we understood that this was potentially quite a big deal,” says Uszkoreit. “And it caused us to actually obsess over one of the sentences in the paper toward the end, where we comment on future work.”

    That sentence anticipated what might come next—the application of transformer models to basically all forms of human expression. “We are excited about the future of attention-based models,” they wrote. “We plan to extend the transformer to problems involving input and output modalities other than text” and to investigate “images, audio and video.”

    A couple of nights before the deadline, Uszkoreit realized they needed a title. Jones noted that the team had landed on a radical rejection of the accepted best practices, most notably LSTMs, for one technique: attention. The Beatles, Jones recalled, had named a song “All You Need Is Love.” Why not call the paper “Attention Is All You Need”?

    The Beatles?

    “I’m British,” says Jones. “It literally took five seconds of thought. I didn’t think they would use it.”

    They continued collecting results from their experiments right up until the deadline. “The English-French numbers came, like, five minutes before we submitted the paper,” says Parmar. “I was sitting in the micro-kitchen in 1965, getting that last number in.” With barely two minutes to spare, they sent off the paper.

    Steven Levy

  • Apple’s MM1 AI Model Shows a Sleeping Giant Is Waking Up


    A research paper quietly released by Apple describes an AI model called MM1 that can answer questions and analyze images. It’s the biggest sign yet that Apple is developing generative AI capabilities.

    Will Knight

  • Regulators Need AI Expertise. They Can’t Afford It


    ChatGPT caught regulators by surprise when it set off a new AI race. As companies have rushed to develop and release ever more powerful models, lawmakers and regulators around the world have sought to catch up and rein in development.

    As governments spin up new AI programs, regulators around the world are urgently trying to hire AI experts. But some of the job ads are raising eyebrows and even chuckles among AI researchers and engineers for offering wages that, amid the current AI boom, look pitiful.

    The European AI Office, which will be central to the implementation of the EU’s AI Act, listed vacancies early this month and wants applicants to begin work in the fall. They include openings for technology specialists in AI with a master’s degree in computer science or engineering and at least one year’s experience, at a seniority level that suggests an annual salary from €47,320 ($51,730).

    Across the Channel, the UK government’s Department for Science, Innovation & Technology is also seeking AI experts. One open position, head of the International AI Safety Report, would help shepherd a landmark global report that stems from the UK’s global AI Safety Summit last year. The ad says “expertise in frontier AI safety and/or demonstrable experience of upskilling quickly in a complex new policy area” is essential. The salary offered is £64,660 ($82,730) a year.

    Although the EU listing is net of tax, the salaries are far lower than the eye-watering sums being offered within the industry. Levels.fyi, which compiles verified tech industry compensation data, reports that the median total compensation for workers at OpenAI is $560,000, including stock grants, as is common in the tech industry. The lowest compensation it has verified at the ChatGPT maker, for a recruiter, is $190,000.

    At OpenAI’s Amazon-backed rival Anthropic—creator of the Claude chatbot—the median compensation of $212,500 still far outstrips what regulators are currently offering. The lower 25th percentile for jobs in machine learning and AI is $172,500, according to Levels.fyi. Stock grants included in tech industry compensation packages can turn into huge windfalls if a company’s value increases. OpenAI is currently valued at $80 billion following a February 2024 share tender first reported by The New York Times.

    “There’s a brain drain happening across every government across the world,” says Nolan Church, cofounder and CEO at FairComp, a company tracking salary data to help workers negotiate better pay. “Part of the reason why is that private companies not only have a better working environment, but also will offer significantly higher salaries.”

    Church worries that competition between private companies will also widen the gap further between the private and public sector. “I personally believe the government should be attracting the best and the brightest,” he says, “but how can you convince the best and the brightest to take a massive pay cut?”

    Outside the Ballpark

    It’s not new for government jobs to pay significantly less than those in industry, but in the current AI boom the disconnect is potentially more significant and urgent. Tech companies and corporations in other industries rushing to embrace the technology are competing fiercely for AI-savvy talent. The rapid pace of developments in AI means regulators need to move fast.

    Jack Clark, a cofounder of Anthropic, posted on X comparing the EU AI Office’s salary offer unfavorably to tech industry internships. “I appreciate governments are working within their own constraints, but if you want to carry out some ambitious regulation of the AI sector then you need to pay a decent wage,” he wrote. “You don’t need to be competitive with industry, but you definitely need to be in the ballpark.”

    Chris Stokel-Walker