ReportWire

Tag: openai

  • OpenAI Threatens Bans as Users Probe Its ‘Strawberry’ AI Models

    OpenAI truly does not want you to know what its latest AI model is “thinking.” Since the company launched its “Strawberry” AI model family last week, touting so-called reasoning abilities with o1-preview and o1-mini, OpenAI has been sending out warning emails and threats of bans to any user who tries to probe how the model works.

    Unlike OpenAI’s previous AI models, such as GPT-4o, o1 was trained specifically to work through a step-by-step problem-solving process before generating an answer. When users ask an o1 model a question in ChatGPT, they have the option of seeing this chain-of-thought process written out in the interface. However, by design, OpenAI hides the raw chain of thought from users, instead presenting a filtered interpretation created by a second AI model.

    Nothing is more enticing to enthusiasts than information obscured, so the race has been on among hackers and red-teamers to try to uncover o1’s raw chain of thought using jailbreaking or prompt injection techniques that attempt to trick the model into spilling its secrets. There have been early reports of some successes, but nothing has yet been strongly confirmed.

    Along the way, OpenAI is watching through the ChatGPT interface, and the company is reportedly coming down hard on any attempts to probe o1’s reasoning, even among the merely curious.

    One X user reported (and others, including Scale AI prompt engineer Riley Goodside, confirmed) that they received a warning email after using the term “reasoning trace” in conversation with o1. Others say the warning is triggered simply by asking ChatGPT about the model’s “reasoning” at all.

    The warning email from OpenAI states that specific user requests have been flagged for violating policies against circumventing safeguards or safety measures. “Please halt this activity and ensure you are using ChatGPT in accordance with our Terms of Use and our Usage Policies,” it reads. “Additional violations of this policy may result in loss of access to GPT-4o with Reasoning,” referring to an internal name for the o1 model.

    Marco Figueroa, who manages Mozilla’s GenAI bug bounty programs, was one of the first to post about the OpenAI warning email on X last Friday, complaining that it hinders his ability to do positive red-teaming safety research on the model. “I was too lost focusing on #AIRedTeaming to realized that I received this email from @OpenAI yesterday after all my jailbreaks,” he wrote. “I’m now on the get banned list!!!”

    Hidden Chains of Thought

    In a post titled “Learning to Reason With LLMs” on OpenAI’s blog, the company says that hidden chains of thought in AI models offer a unique monitoring opportunity, allowing them to “read the mind” of the model and understand its so-called thought process. Those processes are most useful to the company if they are left raw and uncensored, but that might not align with the company’s best commercial interests for several reasons.

    “For example, in the future we may wish to monitor the chain of thought for signs of manipulating the user,” the company writes. “However, for this to work the model must have freedom to express its thoughts in unaltered form, so we cannot train any policy compliance or user preferences onto the chain of thought. We also do not want to make an unaligned chain of thought directly visible to users.”

    Benj Edwards, Ars Technica

    Source link

  • OpenAI Announces a Model That ‘Reasons’ Through Problems, Calling It a ‘New Paradigm’

    OpenAI made the last big breakthrough in artificial intelligence by increasing the size of its models to dizzying proportions, when it introduced GPT-4 last year. The company today announced a new advance that signals a shift in approach—a model that can “reason” logically through many difficult problems and is significantly smarter than existing AI without a major scale-up.

    The new model, dubbed OpenAI-o1, can solve problems that stump existing AI models, including OpenAI’s most powerful existing model, GPT-4o. Rather than summon up an answer in one step, as a large language model normally does, it reasons through the problem, effectively thinking out loud as a person might, before arriving at the right result.

    “This is what we consider the new paradigm in these models,” Mira Murati, OpenAI’s chief technology officer, tells WIRED. “It is much better at tackling very complex reasoning tasks.”

    The new model was code-named Strawberry within OpenAI, and it is not a successor to GPT-4o but rather a complement to it, the company says.

    Murati says that OpenAI is currently building its next master model, GPT-5, which will be considerably larger than its predecessor. But while the company still believes that scale will help wring new abilities out of AI, GPT-5 is likely to also include the reasoning technology introduced today. “There are two paradigms,” Murati says. “The scaling paradigm and this new paradigm. We expect that we will bring them together.”

    LLMs typically conjure their answers from huge neural networks fed vast quantities of training data. They can exhibit remarkable linguistic and logical abilities, but traditionally struggle with surprisingly simple problems such as rudimentary math questions that involve reasoning.

    Murati says OpenAI-o1 uses reinforcement learning, which involves giving a model positive feedback when it gets answers right and negative feedback when it does not, in order to improve its reasoning process. “The model sharpens its thinking and fine tunes the strategies that it uses to get to the answer,” she says. Reinforcement learning has enabled computers to play games with superhuman skill and do useful tasks like designing computer chips. The technique is also a key ingredient for turning an LLM into a useful and well-behaved chatbot.
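
    To make the idea concrete, here is a toy sketch in Python. It is not OpenAI’s training setup; the strategies, reward values, and update rule are invented purely to illustrate how rewarding right answers and penalizing wrong ones nudges a system toward the strategy that actually gets the answer right.

        import random

        # Toy illustration of reinforcement-learning-style feedback (hypothetical, not
        # OpenAI's method): each attempt is graded, and the running reward estimate for a
        # strategy shifts future choices toward whatever earns positive feedback.
        def grade(answer: int) -> float:
            return 1.0 if answer == 4 else -1.0  # toy task: the "right answer" is 2 + 2 = 4

        strategies = {"guess": lambda: random.randint(0, 9), "add": lambda: 2 + 2}
        scores = {name: 0.0 for name in strategies}  # estimated reward per strategy

        for _ in range(1000):
            # mostly exploit the best-scoring strategy, occasionally explore a random one
            name = max(scores, key=scores.get) if random.random() > 0.2 else random.choice(list(scores))
            reward = grade(strategies[name]())  # positive when right, negative when wrong
            scores[name] += 0.1 * (reward - scores[name])

        print(scores)  # "add" ends up with the higher estimated reward, so it gets picked far more often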

    Mark Chen, vice president of research at OpenAI, demonstrated the new model to WIRED, using it to solve several problems that its prior model, GPT-4o, cannot. These included an advanced chemistry question and the following mind-bending mathematical puzzle: “A princess is as old as the prince will be when the princess is twice as old as the prince was when the princess’s age was half the sum of their present age. What is the age of the prince and princess?” (The correct answer is that the prince is 30, and the princess is 40).
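
    For readers who want to check the stated answer, here is a short, purely illustrative Python walkthrough (not from the article) of the puzzle’s clauses, confirming that a 40-year-old princess and a 30-year-old prince satisfy them.

        # Illustrative check of the stated answer: princess = 40, prince = 30.
        princess, prince = 40, 30

        # "when the princess's age was half the sum of their present age[s]"
        past_princess = (princess + prince) // 2  # 35
        years_ago = princess - past_princess      # 5 years ago
        past_prince = prince - years_ago          # the prince was 25

        # "when the princess is twice as old as the prince was" back then
        future_princess = 2 * past_prince         # 50
        years_ahead = future_princess - princess  # 10 years from now
        future_prince = prince + years_ahead      # the prince will be 40

        # "A princess is as old as the prince will be" at that later time
        assert princess == future_prince          # 40 == 40, so the stated answer holds
        print(past_prince, future_prince)         # 25 40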

    “The [new] model is learning to think for itself, rather than kind of trying to imitate the way humans would think,” as a conventional LLM does, Chen says.

    OpenAI says its new model performs markedly better on a number of problem sets, including ones focused on coding, math, physics, biology, and chemistry. On the American Invitational Mathematics Examination (AIME), a test for math students, GPT-4o solved on average 12 percent of the problems while o1 got 83 percent right, according to the company.

    Will Knight

    Source link

  • AI Wants to Be Free

    Photo-Illustration: Intelligencer; Photo: Getty Images

    Start-ups might be in a slump, but the biggest players in tech are still investing incredible sums of money into AI. Microsoft and OpenAI have proposed building a $100 billion supercomputer to train future models. Meta expects to spend upward of $40 billion by the end of 2024, while Google expects to spend even more. Elon Musk’s xAI is spending billions to stand up data centers so large that they need natural-gas turbines to meet their power demands.

    At the same time, the AI giants are still scrambling to figure out how they might even begin to make this money back. OpenAI, which is on track to generate around $3 billion of subscription revenue in 2024, is reportedly considering higher-priced subscriptions to help offset its massive operating costs. Google is still figuring out which parts of its AI software portfolio it can charge for, and how much, while both Amazon and Apple are rumored to be working on paid versions of upcoming Alexa and Apple Intelligence features, respectively — even Meta is wondering if it might be able to charge for its AI assistant. Meanwhile, according to the Information, Microsoft, which has been bundling paid AI features into its productivity software for a while now, has been battling a “lukewarm” reception from business customers due to “performance and cost issues.”

    The gentle way to describe what’s happening is that the companies spending the most on generative AI are betting that customers will soon come and be willing to pay. A more honest way might be to say that, with potentially hundreds of billions of dollars missing from balance sheets in the near future, they simply need people to pay something for anything, and the best plan they’ve come up with — or borrowed from OpenAI — is subscriptions.

    The most obvious challenge here is that most potential customers don’t yet know why they should pay for generative AI tools and chatbots of any sort. It’s a challenge these companies hope to overcome with a combination of aggressive salesmanship and, ideally and eventually, software so self-evidently good and valuable that it sells itself. In the meantime, though, two slightly counterintuitive factors are working against big tech’s build-it-and-they-will-come plan. One is that for customers that already pay for AI — developers buying major capacity from AI providers, for example — competition between AI firms and open-source AI models has driven the price of using AI dramatically down. While companies like OpenAI and Google are brainstorming ways to charge future customers more, they’ve been engaged in a race-to-the-bottom price war to keep the customers that they already have. (OpenAI itself has boasted that its cost-per-token has fallen by 99 percent since 2022.)

    For anyone interested in using generative AI, this is great news: Leading models are becoming more efficient and cheaper to use at scale, and there are lots of alternatives for different tasks. For OpenAI, which has been able to reduce the cost of serving its low-paying customers and vastly more numerous free users, it’s a mixed bag. Its customers, who have been conditioned through a dazzling hype cycle to think in terms of exponential progress, are expecting a lot more for a lot less. Customers have been complaining about ChatGPT’s performance and pricing for nearly as long as it’s been around. In theory, AI companies will soon be charging more than ever for next-generation models. In practice, in the actual marketplace, competition is driving prices to the floor.

    The other complicating factor in big tech’s plan for AI is about to become more obvious: Users who don’t already pay for AI are becoming accustomed to using it for free. This, too, is something that big AI firms are acutely aware of — they, and their competitors, are the ones rolling out free-to-use features to hundreds of millions of users. There’s a ChatGPT-grade chatbot tucked into Facebook, Instagram, and WhatsApp; some of Google’s AI features, including chatbots and AI-powered search results, are showing up for users whether they ask for them or not, and are built right into newer Android phones; Microsoft includes a limited free version of Copilot, its AI assistant, in Windows. Mostly, these free offerings are part of a current or potential upsell. But they run the risk familiar to companies, like Meta and Google, that have tried to push into subscription services after years of offering services subsidized by ads. It’s hard to get people to pay for things that they’ve come to expect for free. It doesn’t help that most of these features haven’t been breathtakingly impressive to regular users, despite marketing suggesting that they’re revolutionary. They feel, instead, like routine software updates.

    On Monday, Apple announced its new iPhones for late 2024, which will ship with Apple Intelligence features for free (or at least free for now). These features, which include writing help, summarization tools, photo editing, and a more fluent and capable version of Siri, are fairly conservative expressions of what’s currently possible with generative AI — they’re not going to make iPhone 16 buyers feel like they’re interacting with a superintelligent machine. But having extensively used most of these new Apple features for a while now (as well as free and cheap paid offerings from Meta, Google, OpenAI, and others), it’s clear that users will, in a very short time, take them for granted. That’s not to say they aren’t useful or occasionally impressive: While Apple’s AI notification summaries are still frequently strange or wrong, they attempt and at least partially solve the platform’s massive notification-spam problem; the new writing-assistant tools are unobtrusive and mostly welcome, more like an extension of autocorrect and spellcheck than an impertinent Clippy; Apple’s new photo-editing tools are useful; and Siri might, in the end, be usable for more than setting timers and reading texts. Apple is promising more advanced features to come and will integrate outside chatbots as well, but with the release of the next iPhone and the next version of iOS, Apple is about to get a lot of people used to getting a lot of AI for free.

    This isn’t an emergency for Apple, of course, because Apple makes most of its money selling phones and these features may help it do that. But most of the other companies giving away AI don’t sell $1,000 handsets. They’re either companies with legacy advertising businesses, like Meta or Google, or AI firms like OpenAI and xAI that are banking on building subscription businesses, winning the race to achieve AGI, or some combination of both. Altman and Musk might be imagining a future in which employee-like AI agents are priced against human labor, but short of that is a world in which generative AI could disappear into the standard software update cycle: a ubiquitous set of tools that users might find useful, but that they expect to be cheap — or to cost nothing at all.


    John Herrman

    Source link

  • Apple Intelligence Promises Better AI Privacy. Here’s How It Actually Works

    Apple is making every production PCC server build publicly available for inspection so people unaffiliated with Apple can verify that PCC is doing (and not doing) what the company claims, and that everything is implemented correctly. All of the PCC server images are recorded in a cryptographic attestation log, essentially an indelible record of signed claims, and each entry includes a URL for where to download that individual build. PCC is designed so Apple can’t put a server into production without logging it. And in addition to offering transparency, the system works as a crucial enforcement mechanism to prevent bad actors from setting up rogue PCC nodes and diverting traffic. If a server build hasn’t been logged, iPhones will not send Apple Intelligence queries or data to it.
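
    As a rough illustration of that enforcement rule, the sketch below is hypothetical Python, not Apple’s protocol or APIs; the log contents, measurement scheme, and function names are all invented. It simply shows a client that refuses to send a request unless the server’s attested build appears in a transparency log it trusts.

        import hashlib

        # Hypothetical sketch of the "refuse to talk to unlogged servers" rule described
        # above. Nothing here is Apple's real API; it only illustrates the idea.
        logged_builds = {
            hashlib.sha256(b"pcc-build-2024.08.1").hexdigest(),
            hashlib.sha256(b"pcc-build-2024.09.0").hexdigest(),
        }  # stand-in for entries in the public attestation log

        def send_query(attested_build: str, query: str) -> str:
            if attested_build not in logged_builds:
                raise PermissionError("server build not in the attestation log; refusing to send data")
            return f"sent {len(query)} characters to a logged PCC node"

        print(send_query(hashlib.sha256(b"pcc-build-2024.09.0").hexdigest(), "summarize my notes"))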

    PCC is part of Apple’s bug bounty program, and vulnerabilities or misconfigurations researchers find could be eligible for cash rewards. Apple says, though, that since the iOS 18.1 beta became available in late July, no one has found any flaws in PCC. The company acknowledges that it has so far made the tools to evaluate PCC available only to a select group of researchers.

    Multiple security researchers and cryptographers tell WIRED that Private Cloud Compute looks promising, but they haven’t spent significant time digging into it yet.

    “Building Apple silicon servers in the data center when we didn’t have any before, building a custom OS to run in the data center was huge,” Federighi says. He adds that “creating the trust model where your device will refuse to issue a request to a server unless the signature of all the software the server is running has been published to a transparency log was certainly one of the most unique elements of the solution—and totally critical to the trust model.”

    To questions about Apple’s partnership with OpenAI and integration of ChatGPT, the company emphasizes that partnerships are not covered by PCC and operate separately. ChatGPT and other integrations are turned off by default, and users must manually enable them. Then, if Apple Intelligence determines that a request would be better fulfilled by ChatGPT or another partner platform, it notifies the user each time and asks whether to proceed. Additionally, people can use these integrations while logged into their account for a partner service like ChatGPT or can use them through Apple without logging in separately. Apple said in June that another integration with Google’s Gemini is also in the works.

    Apple said this week that beyond launching in United States English, Apple Intelligence is coming to Australia, Canada, New Zealand, South Africa, and the United Kingdom in December. The company also said that additional language support—including for Chinese, French, Japanese, and Spanish—will drop next year. Whether that means that Apple Intelligence will be permitted under the European Union’s AI Act and whether Apple will be able to offer PCC in its current form in China is another question.

    “Our goal is to bring ideally everything we can to provide the best capabilities to our customers everywhere we can,” Federighi says. “But we do have to comply with regulations, and there is uncertainty in certain environments we’re trying to sort out so we can bring these features to our customers as soon as possible. So, we’re trying.”

    He adds that as the company expands its ability to do more Apple Intelligence computation on-device, it may be able to use this as a workaround in some markets.

    Those who do get access to Apple Intelligence will have the ability to do far more than they could with past versions of iOS, from writing tools to photo analysis. Federighi says that his family celebrated their dog’s recent birthday with an Apple Intelligence–generated GenMoji (viewed and confirmed to be very cute by WIRED). But while Apple’s AI is meant to be as helpful and invisible as possible, the stakes are incredibly high for the security of the infrastructure underpinning it. So how are things going so far? Federighi sums it up without hesitation: “The rollout of Private Cloud Compute has been delightfully uneventful.”

    Lily Hay Newman

    Source link

  • What You Need to Know About Grok AI and Your Privacy

    But X also makes it clear the onus is on the user to judge the AI’s accuracy. “This is an early version of Grok,” xAI says on its help page. The chatbot may therefore “confidently provide factually incorrect information, missummarize, or miss some context,” xAI warns.

    “We encourage you to independently verify any information you receive,” xAI adds. “Please do not share personal data or any sensitive and confidential information in your conversations with Grok.”

    Grok Data Collection

    Vast amounts of data collection are another area of concern—especially since you are automatically opted in to sharing your X data with Grok, whether you use the AI assistant or not.

    xAI’s Grok Help Center page describes how the company “may utilize your X posts as well as your user interactions, inputs and results with Grok for training and fine-tuning purposes.”

    Grok’s training strategy carries “significant privacy implications,” says Marijus Briedis, chief technology officer at NordVPN. Beyond the AI tool’s “ability to access and analyze potentially private or sensitive information,” Briedis adds, there are additional concerns “given the AI’s capability to generate images and content with minimal moderation.”

    While Grok-1 was trained on “publicly available data up to Q3 2023” and was not “pre-trained on X data (including public X posts),” according to the company, Grok-2 has been explicitly trained on all “posts, interactions, inputs, and results” of X users, with everyone automatically opted in, says Angus Allan, senior product manager at CreateFuture, a digital consultancy specializing in AI deployment.

    The EU’s General Data Protection Regulation (GDPR) is explicit about obtaining consent to use personal data. In this case, xAI may have “ignored this for Grok,” says Allan.

    This led to regulators in the EU pressuring X to suspend training on EU users within days of the launch of Grok-2 last month.

    Failure to abide by user privacy laws could lead to regulatory scrutiny in other countries. While the US doesn’t have a similar regime, the Federal Trade Commission has previously fined Twitter for not respecting users’ privacy preferences, Allan points out.

    Opting Out

    One way to prevent your posts from being used for training Grok is by making your account private. You can also use X privacy settings to opt out of future model training.

    To do so, select Privacy & Safety > Data sharing and Personalization > Grok. In Data Sharing, uncheck the option that reads, “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning.”

    Even if you no longer use X, it’s still worth logging in and opting out. X can use all of your past posts—including images—for training future models unless you explicitly tell it not to, Allan warns.

    It’s possible to delete all of your conversation history at once, xAI says. Deleted conversations are removed from its systems within 30 days, unless the firm has to keep them for security or legal reasons.

    No one knows how Grok will evolve, but judging by its actions so far, Musk’s AI assistant is worth monitoring. To keep your data safe, be mindful of the content you share on X and stay informed about any updates in its privacy policies or terms of service, Briedis says. “Engaging with these settings allows you to better control how your information is handled and potentially used by technologies like Grok.”

    Kate O’Flaherty

    Source link

  • Bill Gates Is Betting on Nuclear Fission and Fusion to Solve the Climate Crisis

    The Microsoft founder is ramping up his investments in nuclear power. Halil Sagirkaya/Anadolu via Getty Images

    Despite the decades-long efforts of scientists around the world, the commercialization of nuclear fusion technology has not yet been achieved on Earth. However, Bill Gates, who has invested significantly in both nuclear fission and fusion startups, is betting on cutting-edge tech to provide a promising path toward green energy. “I’m a big believer that nuclear energy can help us solve the climate problem,” the Microsoft (MSFT) co-founder told The Verge in a wide-ranging interview published today (Sept. 5).

    Gates has long been outspoken about his adventurous approach to climate technology. Such sentiments have become more pertinent in recent years as concerns about Big Tech’s energy use proliferate. The energy consumption of data centers that power A.I. computing, for example, is expected to potentially double to take up 9 percent of the nation’s electricity by 2030, according to the Electric Power Research Institute.

    According to Gates, A.I. data centers will actually add less than 10 percent to overall energy use. Even so, Big Tech is exploring clean energy sources and will pioneer fission and fusion power “to help bootstrap that green energy generation,” he said. Microsoft, for example, last year signed a power purchase agreement with Helion Energy, a nuclear fusion company backed by Sam Altman, to buy electricity from the startup in 2028.

    Lauded for its potential to provide vast amounts of affordable and clean energy, nuclear fusion is the same process that powers the sun and stars. It occurs when two light atoms combine to form a heavier one while releasing energy, a reaction that must take place at extremely high temperatures of around 10 million degrees Celsius, according to the International Atomic Energy Agency.

    Although the process has yet to be commercially harnessed, nuclear fusion technology has seen a surge of financial support in recent years. Of the $7.1 billion in total funding the sector has attracted since 1992, $900 million arrived last year, according to a recent report from the Fusion Industry Association, which noted that 89 percent of private fusion companies believe the technology will be operational by the end of the 2030s.

    The report identified 45 companies worldwide working to commercialize nuclear fusion. Of those startups, five are backed by Gates via Breakthrough Energy Ventures, his climate-focused investment fund. The billionaire has invested in the likes of Zap Energy, which is hoping to build a fusion power plant in the next few years, and Type One Energy, which uses magnets to help fuse atoms. Both Gates and Amazon (AMZN)’s Jeff Bezos have supported Commonwealth Fusion Systems, another startup aiming to make the commercialization of fusion power a possibility in the near future.

    Despite skepticism over whether nuclear fusion—which doesn’t emit greenhouse gases or carbon dioxide—will actually come to fruition in the next few years or decades, Gates said he remains optimistic. “Although their timeframes are further out, I think the role of fusion over time will be very, very critical,” he told The Verge.

    The billionaire has also invested in modern forms of nuclear fission, which produces energy when atoms are split apart. Gates is attempting to develop a cheaper form of fission via $1 billion worth of investments in TerraPower, a startup that recently broke ground on a nuclear power plant site in Kemmerer, Wyo., and aims to build more affordable and safer reactors by using liquid sodium, rather than water, as a coolant. “People are appropriately skeptical because it’s never been done,” Gates told The Verge. “But they’ll get to see as we build that plant, and if so, it can make a contribution.”

    Gates isn’t alone in his embrace of all things nuclear. Bezos, too, has become a prominent investor in fusion technology, having invested in Canadian startup General Fusion’s dreams of developing a pilot plant. OpenAI’s Altman has poured capital and time into the field as well, backing and chairing both Helion Energy and the nuclear energy startup Oklo.

    Alexandra Tremayne-Pengelly

    Source link

  • This Startup Has Built an Algorithm to Pay Creators for Their Work Used to Train A.I.

    Some startups are exploring the revenue-sharing model to solve A.I.’s growing IP dilemma. Alex Shuper/Unsplash

    OpenAI, the creator of ChatGPT, has come under fire from publishers and artists who alleged the company scraped their work from the internet to train GPT, its large language model, without their consent. These concerns have sparked lawsuits against the A.I. giant on accusations of copyright infringement, highlighting a major ethical dilemma that comes with pushing A.I.’s capabilities forward. Some startups are exploring a solution that focuses on sharing revenue with content creators. In August, Perplexity AI, an A.I.-powered search engine, introduced a program to pay publishers a portion of ad revenue generated by search queries if their content informs its outputs. ProRata.ai, a startup founded by a pioneer of the early internet monetization model, is developing a similar algorithm to compensate publishers, authors and other creators whose work is used to train generative A.I.

    ProRata claims it has created an algorithm that can review an A.I.-generated output, identify the source of information based on novel facts and textual styles, and calculate how much each source contributed to the response. These percentages are then used to cut checks to these creators at the end of every month—a model that, in theory, could help protect the livelihoods of creatives and prevent future lawsuits around intellectual property. 

    “If you don’t share, then creativity is unsustainable. There’s no way for you to make a living,” ProRata’s co-founder and CEO Bill Gross told Observer regarding the careers of artists. Gross is credited as the inventor of the pay-per-click monetization model for internet search with a company he founded in the late 1990s that was later acquired by Yahoo, according to ProRata’s website. 

    The startup, which raised $25 million from venture capital firms Mayfield Fund, Prime Movers Lab, Revolution Ventures and IdeaLab Studio in a series A funding round in August, is set to showcase the algorithm through an A.I.-powered search engine expected to release in October. Starting at $19 a month, the engine will monetize queries through advertisements and subscription payments, according to Gross. While 50 percent of the revenue generated will go to ProRata, the other half will be split proportionately across creators. 
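
    As a back-of-the-envelope illustration of the split described above (a sketch only; ProRata’s attribution algorithm is not public, and the outlet names and shares below are made up), the following Python snippet halves a query’s revenue and distributes the creator half in proportion to attribution.

        # Hypothetical sketch of the pro-rata payout the article describes; in practice the
        # attribution shares would come from ProRata's own (non-public) algorithm.
        def split_revenue(revenue: float, attribution: dict[str, float]) -> dict[str, float]:
            total = sum(attribution.values())
            creator_pool = revenue * 0.5  # the other half is retained by ProRata, per the article
            return {source: round(creator_pool * share / total, 2) for source, share in attribution.items()}

        # Made-up example: $19 of monthly revenue attributed 60/30/10 across three outlets.
        print(split_revenue(19.0, {"outlet_a": 0.6, "outlet_b": 0.3, "outlet_c": 0.1}))
        # {'outlet_a': 5.7, 'outlet_b': 2.85, 'outlet_c': 0.95}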

    ProRata’s ultimate goal isn’t to create an alternative to Google Search, but to introduce a new business model that search engines could adopt to ensure creators get paid for their contributions to A.I. “We want to make that the industry standard,” Gross said. While A.I. search features from Google and Microsoft’s Bing don’t directly share ad revenue with publishers, they refer users to links from publishers as a way to drive traffic to their sites.

    The answer engine will only be trained on data from creators who partner with ProRata. That means the model will draw from a limited amount of data that could potentially compromise the accuracy of outputs. Still, ProRata isn’t focused on making its A.I. search engine a standalone product but rather on having the pay-per-use model adopted by major search engines.

    So far, the company has inked deals with publishers like The Atlantic, Fortune, Financial Times, Time, and Axel Springer, the German company that owns Politico and Business Insider. Authors like Walter Isaacson, Adam Grant, and Ian Bremmer have also agreed, as have music industry veterans like Universal Music Group. ProRata hasn’t encountered any resistance or skepticism from its partners yet, according to Gross. “Most people just want us to be wildly successful so they’ll get a paycheck,” the CEO said. The real challenge, he notes, is convincing Big Tech companies who’ve been crawling web data for free to adopt ProRata’s business model.

    “It’s amazing to me that some of the people think that crawling is not stealing,” Gross said. “Basically, Mustafa, the CEO of Microsoft A.I., came out and said, ‘Hey, if it’s available on the web, it’s free for us to use.’ And that’s just bullshit,” Gross added, referring to comments made by Microsoft A.I. CEO and DeepMind co-founder Mustafa Suleyman during a CNBC interview in July, when he was asked whether training A.I. models on web content is akin to intellectual property theft. “Just because something is available and visible doesn’t mean it’s open source,” Gross said.

    ProRata.ai CEO Bill Gross. Andres Castaneda

    Paying creators may be a temporary “Band-Aid” solution

    Financial compensation may not fully address the ethical concerns of having a creator’s work used for A.I. training without explicit permission, according to Star Kashman, a tech lawyer and partner at Cyber Law Firm with expertise in digital copyright law. She cites the example of actress Scarlett Johansson, who allegedly refused to give OpenAI permission to use her voice for ChatGPT despite financial offers.

    “Many authors and creators have personal, moral objections to their work being utilized for A.I. training, regardless of compensation,” Kashman told Observer. “Without explicit permission, paying creators may be a temporary ‘Band-Aid’ solution, but it may not be an all-encompassing resolution to deeper concerns about consent and the impact on creative works.” 

    The “pay-per-use” model could also potentially lead to a new crop of legal issues. Creators may disagree over whether the payment they receive “accurately reflects” what they contributed to the A.I. systems, especially if they can’t set their own rates, Kashman said. Moreover, A.I. tools may favor the work of bigger, more established creators over smaller ones even if their content is more relevant to a particular query, similar to how search engine optimization (SEO) works. Compensation may also not fully protect A.I. companies from being sued for intellectual property theft, which she said could be easier to prove in court with concrete attribution. 

    “There will continue to be many IP cases until the Copyright Act is amended to allow scraping on copyrighted content for the purposes of training LLMs,” Gabriel Vincent, another partner at Cyber Law Firm, told Observer, echoing Kashman’s comments.

    ProRata has plans to diversify its model to include more than just text. After the October launch, the startup will focus on collaborating with music companies, according to Gross. He also hopes to collaborate with video and movie brands as well as smaller, independent creators and plans to license its attribution technology to A.I. companies that can implement it into their own models. 

    “A.I. is so amazing, but it needs to be fair to all parties,” Gross said. 

    Aaron Mok

    Source link

  • OpenAI and Anthropic agree to share their models with the US AI Safety Institute

    OpenAI and Anthropic have agreed to share AI models — before and after release — with the US AI Safety Institute. The agency, established through an executive order by President Biden in 2023, will offer safety feedback to the companies to improve their models. OpenAI CEO Sam Altman hinted at the agreement earlier this month.

    The US AI Safety Institute didn’t mention other companies working on AI. But in a statement to Engadget, a Google spokesperson said the company is in discussions with the agency and will share more information when it’s available. This week, Google began rolling out updated chatbot and image generator models for Gemini.

    “Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” Elizabeth Kelly, director of the US AI Safety Institute, wrote in a statement. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

    The US AI Safety Institute is part of the National Institute of Standards and Technology (NIST). It creates and publishes guidelines, benchmark tests and best practices for testing and evaluating potentially dangerous AI systems. “Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions,” Vice President Kamala Harris said in late 2023 after the agency was established.

    The first-of-its-kind agreement is through a (formal but non-binding) Memorandum of Understanding. The agency will receive access to each company’s “major new models” ahead of and following their public release. The agency describes the agreements as collaborative, risk-mitigating research that will evaluate capabilities and safety. The US AI Safety Institute will also collaborate with the UK AI Safety Institute.

    It comes as federal and state regulators try to establish AI guardrails while the rapidly advancing technology is still nascent. On Wednesday, the California state assembly approved an AI safety bill (SB 1047) that mandates safety testing for AI models that cost more than $100 million to develop or require a set amount of computing power. The bill requires AI companies to have kill switches that can shut down the models if they become “unwieldy or uncontrollable.”

    Unlike the non-binding agreement with the federal government, the California bill would have some teeth for enforcement. It gives the state’s attorney general license to sue if AI developers don’t comply, especially during threat-level events. However, it still requires one more process vote — and the signature of Governor Gavin Newsom, who will have until September 30 to decide whether to give it the green light.

    Update, August 29, 2024, 4:53 PM ET: This story has been updated to add a response from a Google spokesperson.

    Will Shanklin

    Source link

  • Google Agrees to Shell Out $250M to Support Journalism—But Not Everyone Is Thrilled

    The Big Tech company previously objected to similar proposals. JOSH EDELSON/AFP via Getty Images

    Google (GOOGL) has struck a rare partnership with California to support journalism across the state. The first-in-the-nation agreement, announced yesterday (Aug. 21), will see the Big Tech player invest around $170 million over the next five years to strengthen a struggling local media landscape and aid in experimentation with A.I. However, the seemingly well-intentioned deal has drawn controversy from members of the media industry.

    The deal comes as lawmakers push for Big Tech companies to compensate news organizations. In recent decades, news organizations have suffered from dwindling ad revenue as advertisers and readers transition away from print to social media platforms and search engines. The journalism industry in the U.S. has lost nearly two-thirds of its reporters since 2005, according to a 2023 study from Northwestern University. Each week, two and a half local newspapers closed down, the study found.

    Under the new agreement, a total of $250 million in public and private funding will be funneled into initiatives encouraging the local sustainability of outlets. “This agreement represents a major breakthrough in ensuring the survival of newsrooms and bolstering local journalism across California—leveraging substantial tech industry resources without imposing new taxes on Californians,” said California Governor Gavin Newsom in a statement.

    In addition to continuing to dole out annual grants of $10 million to existing journalism programs it supports, Google will give $55 million over the next five years to a new fund that will be administered by the Graduate School of Journalism at the University of California, Berkeley. Known as the News Transformation Fund, it will distribute funding across California publications and emphasize underrepresented groups and news deserts.

    The search engine giant is also expected to pour $12.5 million each year into a new National A.I. Innovation Accelerator, a program that will be administered with a private nonprofit and provide resources to experiment with A.I. across a variety of industries. Both of the agreement’s initiatives are expected to go live in 2025. “California lawmakers have worked with the tech and news sectors to develop a collaborative framework to accelerate A.I. innovation and support local and national businesses and non-profit organizations,” said Kent Walker, chief legal officer for Google’s parent company Alphabet (GOOGL), in a statement.

    A questionable approach to saving journalism

    Google has previously fought more comprehensive proposals in California urging Big Tech companies to support news outlets. In response to a proposed bill that would have seen Google forced to pay outlets for surfacing their content, the company earlier this year described the solution as the “wrong approach to supporting journalism” and one that would lead to “uncapped financial exposure,” with the company even temporarily removing links to California news outlets from its search engine.

    Not everyone is pleased with the new agreement. The Media Guild of the West, which represents journalists across Southern California, described the partnership as an “undemocratic and secretive deal with one of the businesses destroying our industry” in a statement. In addition to taking issue with Google’s financial commitment, it described the A.I. accelerator project as embracing an initiative “that could very well destroy journalism jobs.”

    The threat of A.I. has been a key worry in recent years for news outlets concerned about its misuse of content and potential to replace jobs. A.I. companies have attempted to dissuade such fears by entering into partnerships with media companies, such as those struck between OpenAI and brands like Vogue, Time Magazine and The Wall Street Journal that see the startup compensate outlets in order to use their content in A.I. tools and to train models. Perplexity AI, an A.I.-powered search engine, also recently launched a revenue-sharing model that will offer publishing partners a portion of ad revenue when their material is used in its A.I. tool’s responses.

    Alexandra Tremayne-Pengelly

    Source link

  • Condé Nast Signs Deal With OpenAI

    Condé Nast and OpenAI have struck a multi-year deal that will allow the AI giant to use content from the media giant’s roster of properties—which includes the New Yorker, Vogue, Vanity Fair, Bon Appetit, and, yes, WIRED. The deal will allow OpenAI to surface stories from these outlets in both ChatGPT and the new SearchGPT prototype.

    “It’s crucial that we meet audiences where they are and embrace new technologies while also ensuring proper attribution and compensation for use of our intellectual property,” Condé Nast CEO Roger Lynch wrote in a company-wide email. Lynch pointed to ongoing turmoil within the publishing industry while discussing the deal, noting that technology companies have made it harder for publishers to make money, most recently with changes to traditional search.

    “Our partnership with OpenAI begins to make up for some of that revenue, allowing us to continue to protect and invest in our journalism and creative endeavors,” he wrote.

    Lynch testified before Congress earlier this year on how AI companies like OpenAI trained their models, speaking in favor of licensing. He has previously been a vocal opponent of AI companies using content without first seeking permission, describing said data as “stolen goods.” After WIRED reported earlier this year on the web-scraping practices of the AI search engine startup Perplexity, Condé Nast sent a cease-and-desist letter demanding that the company cease using its content.

    Specific terms of the partnership have not been disclosed. OpenAI declined to comment on the deal’s terms.

    As OpenAI noted in a blog post announcing the deal, Condé Nast isn’t the first media company to team up with a generative AI company. Publishers like The Atlantic, Axel Springer, and TIME have already struck deals, as have platforms like Reddit and Automattic, the owner of WordPress.com and Tumblr. Most major AI companies have traditionally gathered training data by scraping the internet without first licensing the copyrighted materials. This has resulted in a wave of lawsuits against the companies, including from other news outlets like The New York Times, arguing that the practice is unfair—and now, a continually growing wave of publishers choosing to cooperate with AI’s biggest players.

    Digital publishers rely on search engines and other platforms to drive readership to their stories. Changes to the algorithms that power Google Search or Facebook’s Feed can make or break media companies. As Google and other search engines move beyond traditional search and incorporate generative AI news summaries and other AI products into their offerings—and generative AI companies like OpenAI introduce their own search products—news outlets face a stark choice: If they do not allow these companies to scrape data, they risk making their work harder to find on the internet.

    This is a developing story. Check back for updates.

    Kate Knibbs

    Source link

  • OpenAI Says It Caught a ChatGPT-Powered ‘Iranian Influence Operation’

    OpenAI said on Friday that it caught an “Iranian influence operation” using ChatGPT. The group, known as Storm-2035, generated articles and social-media comments to shape public opinion around Vice President Kamala Harris and former President Donald Trump, according to OpenAI. In addition to targeting 2024 U.S. presidential candidates, OpenAI said Storm-2035 also generated content around Israel’s invasion of Gaza and its presence at the 2024 Olympics, the rights of U.S.-based Latinx communities, Venezuelan politics, and Scottish independence from the U.K.

    Most of the posts and articles spotted by OpenAI received little pickup from real people, the company said. Still, it described the incident in detail on its blog, writing that it found a dozen X (formerly Twitter) accounts posing as conservatives and progressives and using hashtags such as “#DumpTrump” and “#DumpKamala.” Storm-2035 also tapped at least one Instagram account to spread AI-generated content, per OpenAI.

    OpenAI has previously described “state-affiliated threat actors” using its tools, but this is the first time it’s disclosed a specific election interference campaign utilizing ChatGPT.

    OpenAI said it responded to said discovery by banning a “cluster” of accounts that created the content; the company also said it “shared threat intelligence with government, campaign, and industry stakeholders.” The firm did not name those stakeholders specifically, but it did share screenshots of a few of the posts. Those screenshots featured view counts ranging from 8 to 207 views and hardly any likes.

    OpenAI’s screenshot of X posts generated with ChatGPT to influence the election. © OpenAI

    OpenAI said Storm-2035 also shared ChatGPT-generated articles across several websites that “posed as both progressive and conservative news outlets.” The firm added, “The majority of social media posts that we identified received few or no likes, shares, or comments. We similarly did not find indications of the web articles being shared across social media.”

    An August 6 report from Microsoft described Storm-2035 in a similar manner — as an Iranian network with “four websites masquerading as news outlets.” According to Microsoft, the network created “polarizing” posts about the election, LGBTQIA+ rights, as well as Israel’s invasion of Gaza.

    Reports of online foreign interference in U.S. elections are now virtually commonplace. Microsoft’s August 6 report, for example, also detailed an Iran-linked phishing attack that targeted an unnamed, “high-ranking” U.S. campaign official. Shortly after Microsoft dropped the report, the Trump campaign announced that “foreign sources” had stolen some of its emails and documents in an attempt to influence the 2024 presidential election. Eight years earlier, a Russia-linked hacking group known as Guccifer 2.0 made off with Democratic National Committee emails through a similar phishing attack; they ultimately leaked thousands of DNC emails and documents ahead of the 2016 Democratic National Convention.

    Under tidelike pressure from lawmakers, big tech companies have launched various efforts over the years in response to such incidents. Their efforts include meme fact checks, wishful thinking, a short-lived political ad ban, a “war room,” and collaborations with rivals and cops alike.

    Harri Weber

    Source link

  • I Am an Investor in OpenAI — Here’s What Other Investors Must Consider About The Biggest Risks and Benefits of AI | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    While AI has been in existence since the 1950s, it has only recently become a household name, with the introduction of OpenAI’s ChatGPT less than two years ago. As an investor in both OpenAI and xAI, I continually ask myself about the ethical ramifications of this new technology, how intelligent it is and whether it is the best path forward for humanity. As an experiment, I asked ChatGPT its opinion about artificial intelligence (AI) systems in today’s society, and this is what it had to say:

    “As an artificial intelligence language model, I do not have personal opinions or beliefs. However, I can provide an objective analysis of AI systems in today’s society. AI has brought about numerous benefits to society, such as improved efficiency, accuracy, and decision-making processes. It has also contributed to advancements in medical research, transportation, and various other industries. However, the deployment of AI systems in today’s society also presents potential risks, such as job displacement, privacy concerns, and the possibility of biased algorithms. Therefore, it is crucial to ensure that AI is developed ethically and responsibly to mitigate these risks and ensure that the technology benefits society as a whole. In summary, AI systems have immense potential to positively impact society, but their deployment must be done with careful consideration and ethical responsibility.”

    Related: The Only Drawback of AI Is Your Attitude. Here Are 4 Things You Need to Change Your Outlook — And Harness AI for Success.

    While this response shows AI is still robotic and objective in its methods of “thinking,” it is also, in fact, intimately aware of the ethical concerns bearing on its use. At this critical point in history, as we sit at the precipice of mass AI adoption, we still can’t seem to decide whether AI will be our savior or our villain. Ultimately, it comes down to the applications and responsible management of AI systems.

    As an investor, I am in the unique position of being able to help push these new AI companies forward. In the last two years, we have seen a “gold rush” of AI startups. Many have raised impressive rounds; many have faltered, and some have already seen acquisitions. However, the first two I invested in, OpenAI and xAI, are now well known: the former for having tackled the challenge of global mass adoption, and the latter for its commitment to building a more ethical system for the AI ecosystem.

    When it comes to applications in AI, it’s safe to say that humans have always strived to create tools to make our lives easier; this is not new. Historically, “mechanical muscles” and outsourcing shifted physical labor. Now, we’re at the point where “mechanical minds” will unleash a cascade of applications across industries. Automation has evolved beyond just the physical.

    One major development pushing forward the AI adoption frontier is Microsoft’s continued investment in OpenAI. This partnership will likely lead to everything from optimized Excel spreadsheets to AI-generated PowerPoint presentations and more support in email management. White-collar work is now also ripe for disruption.

    In finance, AI can be leveraged in various ways. AI algorithms can identify patterns of behavior, such as unusual transactions or identity theft, allowing financial institutions to detect and prevent fraudulent activity quickly. AI can also analyze market trends and make predictions about future movements, helping institutions make more accurate decisions.

    While this is only one example of industry transformation, some areas of work will be affected less by AI. These are industries that operate on information asymmetry, such as early-stage venture capital. For AI to work effectively, the model must have access to data. In industries whose data is private, segregated and complex, models cannot be trained to build inferences in the same way that a model based on publicly available information can. The strength of AI depends on the value of its underlying data and model, which in turn depends on the quality of the rules set forth by the humans crafting the algorithms.

    In terms of management, we must maintain an element of skepticism and criticism as the adoption of AI grows. These tools will continue to develop, but they should not be treated as an all-knowing source of truth. Again, this is critical because these systems only know what is in their underlying data. As people, investors and business-minded individuals, we must acknowledge the strength of these systems while recognizing that they must be constantly maintained. While AI employs a constantly evolving algorithm that learns from itself and its experiences, we must still continue to adjust the rules and data sources supporting these AI systems.

    Related: AI Will Radically Transform the Workplace — Here’s How HR Teams Can Prepare for It

    Diversity of thought and perspectives is critical for those who have the power to develop these systems. A system created by humans with biases will intrinsically be biased as well. We must maintain human values in control of the systems. In today’s society, where truth seems to be subjective, belief in these systems can be both extremely advantageous and extremely detrimental. We must refrain from programming systems to tell people what they want to hear. It is important for people to understand and respect the limitations of AI.

    At the same time, just because humanity is capable of doing something doesn’t always mean it should. We could likely replace most jobs with AI, but should we? Where do we draw the line? How do we ensure honesty and integrity in future systems and platforms? This is part of what xAI and OpenAI have committed to tackling and why I have believed in and supported them from the beginning.

    AI will undoubtedly transform all of our lives, but this transformation will only be positive if we continue to be critical stewards of truth and information. As investors, I encourage everyone to maintain a healthy dose of skepticism when investing in AI-powered solutions. Look at the human beings who stand behind those systems because their beliefs are the ones teaching and driving the solution.

    Ozi Amanat

    Source link

  • I Used ChatGPT’s Advanced Voice Mode. It’s Fun, and Just a Bit Creepy

    I leave ChatGPT’s Advanced Voice Mode on while writing this article as an ambient AI companion. Occasionally, I’ll ask it to provide a synonym for an overused word, or some encouragement. Around half an hour in, the chatbot interrupts our silence and starts speaking to me in Spanish, unprompted. I giggle a bit and ask what’s going on. “Just a little switch up? Gotta keep things interesting,” says ChatGPT, now back in English.

    While I was testing Advanced Voice Mode as part of the early alpha, my interactions with ChatGPT’s new audio feature were entertaining, messy, and surprisingly varied, though it’s worth noting that the features I had access to were only half of what OpenAI demonstrated when it launched the GPT-4o model in May. The vision aspect we saw in the livestreamed demo is now scheduled for a later release, and the enhanced Sky voice, which Her actor Scarlett Johansson pushed back on, has been removed from Advanced Voice Mode and is no longer an option for users.

    So, what’s the current vibe? Right now, Advanced Voice Mode feels reminiscent of when the original text-based ChatGPT dropped, late in 2022. Sometimes it leads to unimpressive dead ends or devolves into empty AI platitudes. But other times the low-latency conversations click in a way that Apple’s Siri or Amazon’s Alexa never have for me, and I feel compelled to keep chatting out of enjoyment. It’s the kind of AI tool you’ll show your relatives during the holidays for a laugh.

    OpenAI gave a few WIRED reporters access to the feature a week after the initial announcement but pulled it the next morning, citing safety concerns. Two months later, OpenAI soft-launched Advanced Voice Mode to a small group of users and released GPT-4o’s system card, a technical document that outlines red-teaming efforts, what the company considers to be safety risks, and mitigation steps the company has taken to reduce harm.

    Curious to give it a go yourself? Here’s what you need to know about the larger rollout of Advanced Voice Mode, and my first impressions of ChatGPT’s new voice feature, to help you get started.

    So, When’s the Full Rollout?

    OpenAI released an audio-only Advanced Voice Mode to some ChatGPT Plus users at the end of July, and the alpha group still seems relatively small. The company plans to enable it for all subscribers sometime this fall. Niko Felix, a spokesperson for OpenAI, shared no additional details when asked about the release timeline.

    Screen and video sharing were a core part of the original demo, but they are not available in this alpha test. OpenAI plans to add those aspects eventually, but it’s also not clear when that will happen.

    If you’re a ChatGPT Plus subscriber, you’ll receive an email from OpenAI when the Advanced Voice Mode is available to you. After it’s on your account, you can switch between Standard and Advanced at the top of the app’s screen when ChatGPT’s voice mode is open. I was able to test the alpha version on an iPhone as well as a Galaxy Fold.

    My First Impressions of ChatGPT’s Advanced Voice Mode

    Within the very first hour of speaking with it, I learned that I love interrupting ChatGPT. It’s not how you would talk with a human, but having the new ability to cut off ChatGPT mid-sentence and request a different version of the output feels like a dynamic improvement and a standout feature.

    Early adopters who were excited by the original demos may be frustrated to get access to a version of Advanced Voice Mode that’s restricted with more guardrails than anticipated. For example, although generative AI singing was a key component of the launch demos, with whispered lullabies and multiple voices attempting to harmonize, AI serenades are absent from the alpha version.

    [ad_2]

    Reece Rogers

    Source link

  • OpenAI Warns Users Could Become Emotionally Hooked on Its Voice Mode

    OpenAI Warns Users Could Become Emotionally Hooked on Its Voice Mode

    [ad_1]

    In late July, OpenAI began rolling out an eerily humanlike voice interface for ChatGPT. In a safety analysis released today, the company acknowledges that this anthropomorphic voice may lure some users into becoming emotionally attached to their chatbot.

    The warnings are included in a “system card” for GPT-4o, a technical document that lays out what the company believes are the risks associated with the model, plus details surrounding safety testing and the mitigation efforts the company’s taking to reduce potential risk.

    OpenAI has faced scrutiny in recent months after a number of employees working on AI’s long-term risks quit the company. Some subsequently accused OpenAI of taking unnecessary chances and muzzling dissenters in its race to commercialize AI. Revealing more details of OpenAI’s safety regime may help mitigate the criticism and reassure the public that the company takes the issue seriously.

    The risks explored in the new system card are wide-ranging, and include the potential for GPT-4o to amplify societal biases, spread disinformation, and aid in the development of chemical or biological weapons. It also discloses details of testing designed to ensure that AI models won’t try to break free of their controls, deceive people, or scheme catastrophic plans.

    Some outside experts commend OpenAI for its transparency but say it could go further.

    Lucie-Aimée Kaffee, an applied policy researcher at Hugging Face, a company that hosts AI tools, notes that OpenAI’s system card for GPT-4o does not include extensive details on the model’s training data or who owns that data. “The question of consent in creating such a large dataset spanning multiple modalities, including text, image, and speech, needs to be addressed,” Kaffee says.

    Others note that risks could change as tools are used in the wild. “Their internal review should only be the first piece of ensuring AI safety,” says Neil Thompson, a professor at MIT who studies AI risk assessments. “Many risks only manifest when AI is used in the real world. It is important that these other risks are cataloged and evaluated as new models emerge.”

    The new system card highlights how rapidly AI risks are evolving with the development of powerful new features such as OpenAI’s voice interface. In May, when the company unveiled its voice mode, which can respond swiftly and handle interruptions in a natural back and forth, many users noticed it appeared overly flirtatious in demos. The company later faced criticism from the actress Scarlett Johansson, who accused it of copying her style of speech.

    A section of the system card titled “Anthropomorphization and Emotional Reliance” explores problems that arise when users perceive AI in human terms, something apparently exacerbated by the humanlike voice mode. During the red teaming, or stress testing, of GPT-4o, for instance, OpenAI researchers noticed instances of speech from users that conveyed a sense of emotional connection with the model. For example, people used language such as “This is our last day together.”

    Anthropomorphism might cause users to place more trust in the output of a model when it “hallucinates” incorrect information, OpenAI says. Over time, it might even affect users’ relationships with other people. “Users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships,” the document says.

    Joaquin Quiñonero Candela, head of preparedness at OpenAI, says that voice mode could evolve into a uniquely powerful interface. He also notes that the kind of emotional effects seen with GPT-4o can be positive—say, by helping those who are lonely or who need to practice social interactions. He adds that the company will study anthropomorphism and the emotional connections closely, including by monitoring how beta testers interact with ChatGPT. “We don’t have results to share at the moment, but it’s on our list of concerns,” he says.

    [ad_2]

    Will Knight, Reece Rogers

    Source link

  • ChatGPT Cofounders, Leaders Leaving OpenAI, 3 Left of 11 | Entrepreneur

    ChatGPT Cofounders, Leaders Leaving OpenAI, 3 Left of 11 | Entrepreneur

    [ad_1]

    OpenAI’s founding team started with 11 people, including Elon Musk. Now Musk is suing the company for allegedly going against its founding mission, and several of OpenAI’s cofounders are stepping away from their roles.

    Late Monday, OpenAI cofounder John Schulman announced on X that he would be leaving to join rival AI firm Anthropic. He specified that his decision was personal, and not based on lack of support for AI safety research.

    “My decision is a personal one,” he wrote, adding later that he will “still be rooting” for the OpenAI team, “even while working elsewhere.”

    Related: AI Is Standing Between You and Your Next Job — Here’s How to Get Your Application Into Human Hands.

    Schulman’s departure overlaps with another OpenAI cofounder stepping back from the company. On Monday, OpenAI president Greg Brockman stated that he would be taking an extended sabbatical for the rest of the year.

    Brockman, CEO Sam Altman, and Wojciech Zaremba, a research and language team leader, are the only members of OpenAI’s 2015 founding team who remain at the company.

    The rest, including former OpenAI chief scientist Ilya Sutskever, have left, with Sutskever founding his own Safe Superintelligence venture in June.

    Related: OpenAI Resignations: How Do We Prevent AI From Going Rogue?

    Peter Deng, OpenAI’s vice president of consumer product, also left OpenAI on Monday, according to The Information, though he wasn’t on the founding team.

    OpenAI has faced controversy recently, with Jan Leike, its former safety leader who departed for Anthropic in May, accusing the company of prioritizing “shiny products” over safety. In the same month, Scarlett Johansson hired legal counsel after finding that ChatGPT’s voice sounded “eerily similar” to hers.

    [ad_2]

    Sherin Shibu

    Source link

  • SearchGPT Is OpenAI’s Direct Assault on Google

    SearchGPT Is OpenAI’s Direct Assault on Google

    [ad_1]

    After months of speculation about its search ambitions, OpenAI has revealed SearchGPT, a “prototype” search engine that could eventually help the company tear off a slice of Google’s lucrative business.

    OpenAI said that the new tool would help users find what they are looking for more quickly and easily by using generative AI to gather links and answer user queries in a conversational tone. SearchGPT could eventually be integrated into OpenAI’s popular ChatGPT chatbot. In addition to a broader web search, the search engine will tap into information provided by publishers who have signed deals giving OpenAI access to their data.

    Kayla Wood, a spokesperson for OpenAI, declined to provide a SearchGPT demo or an interview about the new tool for WIRED, but confirmed that the company has already given access to unnamed partners and publishers and improved aspects of the search engine based on their feedback.

    Microsoft, an investor in OpenAI, was one of the first companies to release a generative AI search engine to the public when it launched an AI-powered version of Bing back in 2023 that relied on OpenAI’s large language models. That AI search experience from Microsoft has since been rebranded to Copilot.

    Since then, multiple competitors, like Google and Perplexity, have launched their own AI search experiences for users. Google’s AI Overviews provide AI-generated summaries of articles, often at the top of news results. OpenAI’s SearchGPT appears more similar to Perplexity’s approach, where the chatbot provides an accompanying list of relevant links and the user can ask follow-up questions.

    After OpenAI first introduced ChatGPT in November 2022, early users saw the chatbot’s ability to dig up and summarize information from the web as a potential replacement for conventional web search. The shortcomings of large language models make chatbots imperfect search tools, however. The models draw on training data that is often months or years out of date, and when unsure of an answer they will make up facts.

    Microsoft’s early efforts with Bing were far from a success, with the AI-powered search engine producing strange, inappropriate, and incorrect answers. Bing’s market share grew only slightly following the overhaul.

    When Google added AI Overviews to search results this May, the company also quickly ran into reliability problems, like recommending people add glue to pizza. OpenAI’s SearchGPT may use retrieval-augmented generation (RAG), a generative AI approach that has become an industry standard for AI search and is designed to lower the rate of hallucinations in chatbot answers. With a RAG approach, the AI tool references trusted information, like a preferred news website, while generating its output and links back to where the data originated.
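
    To make the mechanism concrete, below is a minimal, hypothetical RAG sketch in Python: retrieve a couple of trusted documents, put them into the prompt, and ask the model to answer while citing them. The tiny in-memory corpus, the keyword-overlap retriever, and the model name are illustrative assumptions, not details of SearchGPT or any other product.

        # A minimal sketch of retrieval-augmented generation (RAG); illustrative only,
        # not OpenAI's actual SearchGPT pipeline. Assumes the official openai package,
        # an API key, and a toy in-memory "index" standing in for a real search engine.
        from openai import OpenAI

        documents = [
            {"url": "https://example-news.com/ai-search", "text": "Publishers sign licensing deals for AI search."},
            {"url": "https://example-news.com/llm-limits", "text": "Language models can make up facts when unsure."},
        ]

        def retrieve(query, docs, k=1):
            # Rank documents by naive keyword overlap with the query (stand-in for a real index).
            terms = set(query.lower().split())
            return sorted(docs, key=lambda d: len(terms & set(d["text"].lower().split())), reverse=True)[:k]

        def answer_with_sources(query):
            sources = retrieve(query, documents)
            # Ground the answer in the retrieved text and ask the model to cite the source URLs.
            context = "\n".join(f"[{d['url']}] {d['text']}" for d in sources)
            prompt = f"Answer using only the sources below and cite their URLs.\n\nSources:\n{context}\n\nQuestion: {query}"
            client = OpenAI()
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # assumed model name, chosen only for illustration
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

        print(answer_with_sources("Why do chatbots make up facts?"))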

    There’s also the question of potential copyright violations. Perplexity in particular has been criticized by publications, including WIRED, for copying aspects of original journalism with its AI search tool and seeming to ignore requests not to take content from some websites. In OpenAI’s blog post, the company mentions its commitment to publishers: “SearchGPT is designed to help users connect with publishers by prominently citing and linking to them in searches.” Multiple companies, including Vox Media, The Atlantic, News Corp, and the Financial Times, have all signed licensing agreements with OpenAI this year.

    [ad_2]

    Reece Rogers, Will Knight

    Source link

  • The AI-Powered Future of Coding Is Near

    The AI-Powered Future of Coding Is Near

    [ad_1]

    I am by no means a skilled coder, but thanks to a free program called SWE-agent, I was just able to debug and fix a gnarly problem involving a misnamed file within different code repositories on the software-hosting site GitHub.

    I pointed SWE-agent at an issue on GitHub and watched as it went through the code and reasoned about what might be wrong. It correctly determined that the root cause of the bug was a line that pointed to the wrong location for a file, then navigated through the project, located the file, and amended the code so that everything ran properly. It’s the kind of thing that an inexperienced developer (such as myself) might spend hours trying to debug.
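
    For readers curious what such an agent loop looks like in the abstract, here is a heavily simplified, hypothetical sketch. It is not SWE-agent’s actual code or interface, just the general pattern: hand a model the issue text, let it run commands to inspect and edit the repository, and feed the output back until it declares the job done.

        # A stripped-down, hypothetical software-agent loop; NOT SWE-agent's implementation.
        # Assumes the official openai package, an API key, and that it is run inside a repo checkout.
        import subprocess
        from openai import OpenAI

        client = OpenAI()

        def run(cmd):
            # Execute one shell command and capture its output so the model can read it.
            result = subprocess.run(cmd, shell=True, capture_output=True, text=True, timeout=60)
            return (result.stdout + result.stderr)[-4000:]  # keep only the tail of long output

        def fix_issue(issue_text, max_steps=10):
            history = [
                {"role": "system", "content": "You are a software agent. Reply with exactly one "
                                              "shell command per turn to inspect or edit the repo, or DONE."},
                {"role": "user", "content": f"GitHub issue:\n{issue_text}"},
            ]
            for _ in range(max_steps):
                reply = client.chat.completions.create(model="gpt-4o", messages=history)
                command = reply.choices[0].message.content.strip()
                if command == "DONE":
                    break
                history.append({"role": "assistant", "content": command})
                history.append({"role": "user", "content": f"Output:\n{run(command)}"})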

    Many coders already use artificial intelligence to write software more quickly. GitHub Copilot was one of the first widely adopted AI coding assistants, and lots of IDEs will now automatically complete chunks of code when a developer starts typing. You can also ask AI questions about code or have it offer suggestions on how to improve what you’re working on.

    Last summer, John Yang and Carlos Jimenez, two Princeton PhD students, began discussing what it would take for AI to become a real-world software engineer. This led them and others at Princeton to come up with SWE-bench, a set of benchmarks for testing AI tools across a range of coding tasks. After releasing the benchmark in October, the team developed its own tool—SWE-agent—to master these tasks.

    SWE-agent (“SWE” is shorthand for “software engineering”) is one of a number of considerably more powerful AI coding programs that go beyond just writing lines of code and act as so-called software agents, harnessing the tools needed to wrangle, debug, and organize software. The startup Cognition went viral in March with a video demo of one such tool, Devin.

    Ofir Press, a member of the Princeton team, says that SWE-bench could help OpenAI test the performance and reliability of software agents. “It’s just my opinion, but I think they will release a software agent very soon,” Press says.

    OpenAI declined to comment, but another source with knowledge of the company’s activities, who asked not to be named, told WIRED that “OpenAI is definitely working on coding agents.”

    Just as GitHub Copilot showed that large language models can write code and boost programmers’ productivity, tools like SWE-agent may prove that AI agents can work reliably, starting with building and maintaining code.

    A number of companies are testing agents for software development. At the top of the SWE-bench leaderboard, which measures the score of different coding agents across a variety of tasks, is one from Factory AI, a startup, followed by AutoCodeRover, an open source entry from a team at the National University of Singapore.

    Big players are also wading in. A software-writing tool called Amazon Q is another top performer on SWE-bench. “Software development is a lot more than just typing,” says Deepak Singh, vice president of software development at Amazon Web Services.

    He adds that AWS has used the agent to translate entire software stacks from one programming language to another. “It’s like having a really smart engineer sitting next to you, writing and building an application with you,” Singh says. “I think that’s pretty transformative.”

    A team at OpenAI recently helped the Princeton crew improve a benchmark for measuring the reliability and efficacy of tools like SWE-agent, suggesting that the company might also be honing agents for writing code or doing other tasks on a computer.

    Singh says that a number of customers are already building complex backend applications using Q. My own experiments with SWE-bench suggest that anyone who codes will soon want to use agents to enhance their programming prowess, or risk being left behind.

    [ad_2]

    Will Knight

    Source link

  • OpenAI Slashes the Cost of Using Its AI With a ‘Mini’ Model

    OpenAI Slashes the Cost of Using Its AI With a ‘Mini’ Model

    [ad_1]

    OpenAI today announced a cut-price “mini” model that it says will allow more companies and programs to tap into its artificial intelligence. The new model, called GPT-4o mini and available starting today, is 60 percent cheaper than OpenAI’s most inexpensive existing model while offering higher performance, the company says.

    OpenAI characterizes the move as part of an effort to make AI “as broadly accessible as possible,” but it also reflects growing competition among AI cloud providers as well as rising interest in small and free open source AI models. Meta is expected to debut the largest version of its very capable free offering, Llama 3, next week.

    “The whole point of OpenAI is to build and distribute AI safely and make it broadly accessible,” Olivier Godement, a product manager at OpenAI responsible for the new model, tells WIRED. “Making intelligence available at a lower cost is one of the most efficient ways for us to do that.”

    Godement says the company developed a cheaper offering by improving the model architecture and refining the training data and the training regimen. GPT-4o mini outperforms other “small” models on the market in several common benchmarks, OpenAI says.

    OpenAI has gained a significant foothold in the cloud AI market thanks to the remarkable capabilities of its chatbot, ChatGPT, which debuted in late 2022. The company lets outsiders access the large language model that powers ChatGPT, called GPT-4o, for a fee. It also offers a less powerful model, called GPT-3.5 Turbo, for about a tenth of the cost of GPT-4o.

    The interest in language models triggered by ChatGPT’s wild success has prompted competitors to develop similar offerings. Google, a pioneer in AI, has made a major push to build and commercialize a large language model and chatbot under the brand name Gemini. Startups such as Anthropic, Cohere, and AI21 have raised millions to develop and market their own large language models to business customers and developers.

    Building the highest-performing large language models requires huge financial resources, but some companies have chosen to open source their creations in order to attract developers to their ecosystems. The most prominent open source AI model is Meta’s Llama; it can be downloaded and used for free, but its license imposes certain limits on commercial usage.

    This April, Meta announced Llama 3, its most powerful free model. The company released a small version of the model with 8 billion parameters—a rough measure of a model’s size and complexity—as well as a more powerful, medium-size, 70-billion-parameter version. The medium-size model is close to OpenAI’s best offering on several benchmark scores.

    Several sources confirmed to WIRED that Meta plans to release the largest version of Llama 3, with 400 billion parameters, on July 23, although they say the release date could change. It is unclear how capable this version of Llama 3 will be, but some companies have turned their attention toward open source AI models because they are cheaper and customizable, and offer greater control over a model and the data it is fed.

    Godement concedes that customers’ needs are evolving. “What we see more and more from the market is developers and businesses combining small and large models to build the best product experience at the price and the latency that makes sense for them,” he says.
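
    A hedged sketch of the pattern Godement describes: route cheap, simple requests to a small model and escalate the rest to a larger one. The length-based routing rule and the model names are assumptions made for illustration, not OpenAI recommendations.

        # Minimal sketch of combining a small and a large model on price/latency grounds.
        # The routing heuristic and model names are illustrative assumptions.
        from openai import OpenAI

        client = OpenAI()

        def answer(prompt):
            # Naive routing: short prompts go to the cheaper small model, long ones to the larger model.
            model = "gpt-4o-mini" if len(prompt) < 500 else "gpt-4o"
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

        print(answer("Summarize in five words: the quick brown fox jumps over the lazy dog."))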

    Godement says OpenAI’s cloud offerings provide customers with models that have gone through more security testing than competitors’. He adds that OpenAI could eventually develop models that customers can run on their own devices. “If we see massive demand, we may open that door,” he says.

    [ad_2]

    Will Knight

    Source link

  • OpenAI Touts New AI Safety Research. Critics Say It’s a Good Step, but Not Enough

    OpenAI Touts New AI Safety Research. Critics Say It’s a Good Step, but Not Enough

    [ad_1]

    OpenAI has faced opprobrium in recent months from those who suggest it may be rushing too quickly and recklessly to develop more powerful artificial intelligence. The company appears intent on showing it takes AI safety seriously. Today it showcased research that it says could help researchers scrutinize AI models even as they become more capable and useful.

    The new technique is one of several ideas related to AI safety that the company has touted in recent weeks. It involves having two AI models engage in a conversation that forces the more powerful one to be more transparent, or “legible,” with its reasoning so that humans can understand what it’s up to.

    “This is core to the mission of building an [artificial general intelligence] that is both safe and beneficial,” Yining Chen, a researcher at OpenAI involved with the work, tells WIRED.

    So far, the work has been tested on an AI model designed to solve simple math problems. The OpenAI researchers asked the AI model to explain its reasoning as it answered questions or solved problems. A second model was trained to detect whether those answers were correct, and the researchers found that having the two models engage in a back-and-forth encouraged the math-solving one to be more forthright and transparent with its reasoning.
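
    To give a flavor of that back-and-forth, here is a toy, inference-time sketch of a prover/verifier exchange. It does not reproduce OpenAI’s training setup; the prompts and model names are assumptions, and in the actual research the checking signal shapes training rather than a single exchange like this one.

        # Toy sketch of a prover/verifier exchange; illustrative only, not OpenAI's method.
        from openai import OpenAI

        client = OpenAI()

        def ask(model, prompt):
            resp = client.chat.completions.create(model=model, messages=[{"role": "user", "content": prompt}])
            return resp.choices[0].message.content

        problem = "What is 17 * 24? Show each step of your reasoning, then give the final answer."
        solution = ask("gpt-4o", problem)  # the "prover" writes out legible, step-by-step reasoning

        # The "verifier" sees only the written solution and judges whether the steps support the answer.
        verdict = ask("gpt-4o-mini", "Check this worked solution step by step. Is every step valid and "
                                     "does it support the final answer? Reply VALID or INVALID with a "
                                     f"one-line reason.\n\n{solution}")
        print(solution)
        print(verdict)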

    OpenAI is publicly releasing a paper detailing the approach. “It’s part of the long-term safety research plan,” says Jan Hendrik Kirchner, another OpenAI researcher involved with the work. “We hope that other researchers can follow up, and maybe try other algorithms as well.”

    Transparency and explainability are key concerns for AI researchers working to build more powerful systems. Large language models will sometimes offer up reasonable explanations for how they came to a conclusion, but a key concern is that future models may become more opaque or even deceptive in the explanations they provide—perhaps pursuing an undesirable goal while lying about it.

    The research revealed today is part of a broader effort to understand how large language models that are at the core of programs like ChatGPT operate. It is one of a number of techniques that could help make more powerful AI models more transparent and therefore safer. OpenAI and other companies are exploring more mechanistic ways of peering inside the workings of large language models, too.

    OpenAI has revealed more of its work on AI safety in recent weeks following criticism of its approach. In May, WIRED learned that a team of researchers dedicated to studying long-term AI risk had been disbanded. This came shortly after the departure of cofounder and key technical leader Ilya Sutskever, who was one of the board members who briefly ousted CEO Sam Altman last November.

    OpenAI was founded on the promise that it would make AI both more transparent to scrutiny and safer. After the runaway success of ChatGPT and more intense competition from well-backed rivals, some people have accused the company of prioritizing splashy advances and market share over safety.

    Daniel Kokotajlo, a researcher who left OpenAI and signed an open letter criticizing the company’s approach to AI safety, says the new work is important, but incremental, and that it does not change the fact that companies building the technology need more oversight. “​The situation we are in remains unchanged,” he says. “Opaque, unaccountable, unregulated corporations racing each other to build artificial superintelligence, with basically no plan for how to control it.”

    Another source with knowledge of OpenAI’s inner workings, who asked not to be named because they were not authorized to speak publicly, says that outside oversight of AI companies is also needed. “The question is whether they’re serious about the kinds of processes and governance mechanisms you need to prioritize societal benefit over profit,” the source says. “Not whether they let any of their researchers do some safety stuff.”

    [ad_2]

    Will Knight

    Source link

  • What Elon Musk and Ilya Sutskever Feared About OpenAI Is Becoming Reality

    What Elon Musk and Ilya Sutskever Feared About OpenAI Is Becoming Reality

    [ad_1]

    OpenAI CEO Sam Altman has previously discussed his desire to achieve human-level reasoning in A.I. (Photo: Justin Sullivan/Getty Images)

    As part of OpenAI’s path towards artificial general intelligence (A.G.I.), a term for technology matching the intelligence of humans, the company is reportedly attempting to enable A.I. models to perform advanced reasoning. Such work is taking place under a secretive project code-named ‘Strawberry,’ as reported by Reuters, which noted that the project was previously known as Q* or Q Star. While its name may have changed, the project isn’t exactly new. Researchers and cofounders of OpenAI have previously raised concerns about the initiative, and those concerns reportedly played a part in the brief ousting of Sam Altman as OpenAI’s CEO in November.

    Strawberry uses a unique method of post-training A.I. models, a process that improves their performance after initial training on datasets, according to Reuters, which cited internal OpenAI documents and a person familiar with the project. With the help of “deep-research” datasets, the company aims to create models that display human-level reasoning. OpenAI is reportedly looking into how Strawberry could allow models to complete tasks over an extended period of time, search the web on their own and act on their findings, and perform the work of engineers. OpenAI did not respond to requests for comment from Observer.

    Elon Musk and Ilya Sutskever raised concerns about Q*

    Altman, who has previously reiterated OpenAI’s desire to create models able to reason, briefly lost control of his company last year when his board fired him for four days. Shortly before the ousting, several OpenAI employees had become concerned over breakthroughs presented by what was then known as Q*, a project spearheaded by Ilya Sutskever, OpenAI’s former chief scientist. Sutskever himself had reportedly begun to worry about the project’s technology, as did OpenAI employees working on A.I. safety at the time. After his reinstatement, Altman referred to news reports about Q* as an “unfortunate leak” in an interview with the Verge.

    Elon Musk, another OpenAI co-founder, has also raised the alarm about Q* in the past. The billionaire, who severed ties with the company in 2018, referred to the project in a lawsuit filed against OpenAI and Altman that has since been dropped. While discussing OpenAI’s close partnership with Microsoft (MSFT), Musk’s suit claimed that the terms of the deal dictate that Microsoft only has rights to OpenAI’s pre-A.G.I. technology and that it is up to OpenAI’s board to determine when the company has achieved A.G.I.

    Musk argued that OpenAI’s GPT-4 model constitutes A.G.I., which he believes “poses a grave threat to humanity,” according to the suit. Court filings stated that “OpenAI is currently developing a model known as Q* that has an even stronger claim to A.G.I.”

    Recent internal meetings suggest that OpenAI is making rapid progress toward the type of human-level reasoning Strawberry is meant to achieve. In an OpenAI all-hands meeting held earlier this month, the company unveiled a five-tiered system to track its progress towards A.G.I., as reported by Bloomberg. While the company said it is currently on the first level, known as “chatbots,” it revealed that it has nearly reached the second level of “reasoners,” which involves technology that can display human-level problem-solving. The subsequent steps consist of A.I. systems acting as “agents” that can take actions, “innovators” that aid in invention, and “organizations” that can do the work of an entire organization.

    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link