ReportWire

Tag: openai

  • OpenAI is reportedly working on more advanced AI models capable of reasoning and ‘deep research’

    A new report from Reuters claims OpenAI is developing technology to bring advanced reasoning capabilities to its AI models under a secret project code-named “Strawberry.” Among the project’s goals is to enable the company’s AI models to autonomously scour the internet in order to “plan ahead” for more complex tasks, according to an internal document seen by Reuters. The project previously went by the name of Q* (pronounced “Q star”), demos of which showed earlier this year that it could answer “tricky science and math questions,” Reuters reports, citing unnamed sources who witnessed the demonstrations.

    At this stage, much remains unknown about Strawberry — including how far along in development it is, and whether it’s the same system with “human-like reasoning” skills that OpenAI reportedly demonstrated at an employee all-hands meeting earlier this week. But the ability for the company’s AI to conduct “deep research,” as is said to be the aim of Strawberry, would mark a huge leap forward from what’s available today.

    Cheyenne MacDonald

  • Microsoft AI CEO: Anything on Open Web Fair Use for Training | Entrepreneur

    In order to write, lead advertising campaigns, and power side hustles, AI needs training material. ChatGPT needed about 300 billion words to get off the ground, and it continues to train itself based on how users interact with it.

    However, human beings aren’t being credited or compensated for creating the content that AI is eating up. Authors, artists, and news organizations have already filed countless copyright lawsuits against AI giants like OpenAI and Microsoft as they find that AI bots can talk about their copyrighted work “too accurately” — indicating that the works are in the AI’s training data.

    That’s why Microsoft’s AI CEO Mustafa Suleyman was asked at the Aspen Ideas Festival in late June if AI companies have essentially stolen the world’s intellectual property.

    Suleyman’s answer? Almost all content on the Internet, with one possible exception, is fair game for AI training.

    Related: A Microsoft-Partnered AI Startup Is Being Sued By the Biggest Record Labels in the World

    “I think that with respect to content that is already on the open web, the social contract of that content since the ’90s has been that it is fair use,” Suleyman said.

    Suleyman stated that “anyone” can copy or recreate the content on the open web.

    “That has been freeware,” he said. “That’s been the understanding.”

    However, some news sites and publishers have asked not to be scraped or crawled.

    “That’s the gray area and I think that’s going to work its way through the courts,” Suleyman said.

    Mustafa Suleyman. Photographer: Stefan Wermuth/Bloomberg via Getty Images

    Suleyman leads Microsoft AI at a time when Microsoft has invested billions in the technology. His position on what is and isn’t fair use hints at how AI companies might defend against intellectual property allegations in court.

    OpenAI, for example, has allegedly used more than a million hours of YouTube videos to train ChatGPT. When asked whether YouTube or social media videos were used to make OpenAI’s video generator Sora, the company’s chief technology officer Mira Murati said, “We used publicly available data and licensed data” and wouldn’t specify further.

    AI also appears to be eating work generated by other AI, resulting in lower-quality output. Experts estimate that 90% of online content will be AI-generated within the next two years.

    Related: The Most Downloaded News App in the U.S. May Have Published Dozens of Fake, AI-Written Stories

    Sherin Shibu

  • OpenAI Is Testing Its Powers of Persuasion

    This week, Sam Altman, CEO of OpenAI, and Arianna Huffington, founder and CEO of the health company Thrive Global, published an article in Time touting Thrive AI, a startup backed by Thrive and OpenAI’s Startup Fund. The piece suggests that AI could have a huge positive impact on public health by talking people into healthier behavior.

    Altman and Huffington write that Thrive AI is working toward “a fully integrated personal AI coach that offers real-time nudges and recommendations unique to you that allows you to take action on your daily behaviors to improve your health.”

    Their vision puts a positive spin on what may well prove to be one of AI’s sharpest double-edges. AI models are already adept at persuading people, and we don’t know how much more powerful they could become as they advance and gain access to more personal data.

    Aleksander Madry, a professor on sabbatical from the Massachusetts Institute of Technology, leads a team at OpenAI called Preparedness that is working on that very issue.

    “One of the streams of work in Preparedness is persuasion,” Madry told WIRED in a May interview. “Essentially, thinking to what extent you can use these models as a way of persuading people.”

    Madry says he was drawn to join OpenAI by the remarkable potential of language models and because the risks that they pose have barely been studied. “There is literally almost no science,” he says. “That was the impetus for the Preparedness effort.”

    Persuasiveness is a key element in programs like ChatGPT and one of the ingredients that makes such chatbots so compelling. Language models are trained on human writing and dialog that contains countless rhetorical and persuasive tricks and techniques. The models are also typically fine-tuned to err toward utterances that users find more compelling.

    Research released in April by Anthropic, a competitor founded by OpenAI exiles, suggests that language models have become better at persuading people as they have grown in size and sophistication. This research involved giving volunteers a statement and then seeing how an AI-generated argument changes their opinion of it.

    OpenAI’s work extends to analyzing AI in conversation with users—something that may unlock greater persuasiveness. Madry says the work is being conducted on consenting volunteers, and declines to reveal the findings to date. But he says the persuasive power of language models runs deep. “As humans we have this ‘weakness’ that if something communicates with us in natural language [we think of it as if] it is a human,” he says, alluding to an anthropomorphism that can make chatbots seem more lifelike and convincing.

    The Time article argues that the potential health benefits of persuasive AI will require strong legal safeguards because the models may have access to so much personal information. “Policymakers need to create a regulatory environment that fosters AI innovation while safeguarding privacy,” Altman and Huffington write.

    This is not all that policymakers will need to consider. It may also be crucial to weigh how increasingly persuasive algorithms could be misused. AI algorithms could enhance the resonance of misinformation or generate particularly compelling phishing scams. They might also be used to advertise products.

    Madry says a key question, yet to be studied by OpenAI or others, is how much more compelling or coercive AI programs that interact with users over long periods of time could prove to be. Already a number of companies offer chatbots that roleplay as romantic partners and other characters. AI girlfriends are increasingly popular—some are even designed to yell at you—but how addictive and persuasive these bots are is largely unknown.

    The excitement and hype generated by ChatGPT following its release in November 2022 saw OpenAI, outside researchers, and many policymakers zero in on the more hypothetical question of whether AI could someday turn against its creators.

    Madry says this risks ignoring the more subtle dangers posed by silver-tongued algorithms. “I worry that they will focus on the wrong questions,” Madry says of the work of policymakers thus far. “That in some sense, everyone says, ‘Oh yeah, we are handling it because we are talking about it,’ when actually we are not talking about the right thing.”

    Will Knight

  • Microsoft relinquishes OpenAI board seat as regulators zero in on artificial intelligence

    Microsoft is giving up its seat on OpenAI’s board, saying its presence is no longer necessary as the ChatGPT maker’s governance has improved since its boardroom upheaval last year.

    “We appreciate the support shown by OpenAI leadership and the OpenAI board as we made this decision,” Microsoft stated in a Tuesday letter. The company’s resignation is effective immediately, Microsoft said.

    The unexpected exit comes as antitrust regulators scrutinize Microsoft’s partnership with OpenAI, under which the software giant invested billions in the company.

    Microsoft took a non-voting observer seat on OpenAI’s board after a chaotic period in which OpenAI CEO Sam Altman was abruptly fired, then reinstated, with the board members who orchestrated his ouster later pushed out.

    “Over the past eight months we have witnessed significant progress by the newly formed board and are confident in the company’s direction,” Microsoft said in its letter. “Given all of this we no longer believe our limited role as an observer is necessary.”

    Microsoft’s decision means that OpenAI will not have observer seats on its board.

    “We are grateful to Microsoft for voicing confidence in the Board and the direction of the company, and we look forward to continuing our successful partnership,” OpenAI said in a statement.

    The Federal Trade Commission and Britain’s Competition and Markets Authority have also been reviewing Microsoft’s relationship with OpenAI, and European regulators last month said they’d take another look at the partnership under the 27-nation bloc’s antitrust rules.

    —The Associated Press contributed to this report.

  • Media and Tech Titans Arrive At Sun Valley 2024: In Photos So Far

    Shari Redstone arrives at the Allen & Co. Sun Valley Conference on July 9, 2024 in Sun Valley, Idaho. Getty Images

    Today (July 9) marks the start of this year’s Allen & Co. conference in Sun Valley, Idaho. Known as the “summer camp for billionaires,” the annual get-together has drawn industry leaders across media, tech, politics and finance since 1983. Each year, the wealthy and elite touch down in private jets at the nearby Friedman Memorial Airport, which describes the conference as its “annual fly-in event” and which today experienced delays due to flight volume.

    Convening at the Sun Valley Lodge, attendees will spend the next few days networking and attending private lectures on topics like national security, health care and education.

    Media and tech titans like Shari Redstone, the chairwoman of Paramount Global who just agreed to a long-awaited merger with Skydance Media; OpenAI CEO Sam Altman; and Warner Bros. Discovery (WBD) CEO David Zaslav have already been spotted outside the event. More than 60 power players in total have been invited to the exclusive conference, which has famously been the site of deals like Comcast (CMCSA)’s acquisition of NBCUniversal, Jeff Bezos’ acquisition of the Washington Post and The Walt Disney Company (DIS)’s acquisition of Capital Cities/ABC.

    Who’s been seen at Sun Valley 2024 so far?

    Sam Altman, CEO of OpenAI

    Man in grey shirt driving away in golf cart

    Shari Redstone, chairwoman of Paramount Global and president of National Amusements

    Woman in red sweater stands next to white car

    David Zaslav, CEO of Warner Bros. Discovery

    Man in grey jacket stands outside in front of white car

    Barry Diller, chairman of IAC

    Man in white shirt wheels bicycle

    This story is developing. Please check back for updates.

    Alexandra Tremayne-Pengelly

  • Why Would OpenAI Want to Become a True For-Profit Anyway?

    OpenAI CEO Sam Altman. Drew Angerer/Getty Images

    OpenAI, the San Francisco-based A.I. powerhouse now valued at $80 billion, operates under a unique structure: it is a nonprofit entity that runs a capped-profit subsidiary in which investors can buy equity. However, CEO Sam Altman may be looking to transition the organization into a fully for-profit one, The Information reported last month. The move would be unusual, as OpenAI has already simultaneously reaped the benefits of positive publicity from being a nonprofit while receiving significant investments of the kind that typically go into a for-profit company.

    OpenAI was founded as a nonprofit research lab in 2015 by Altman, Elon Musk, and Ilya Sutskever, among others. Born out of concern that financial incentives could lead A.I. astray, OpenAI declared in a blog post published upon its founding, “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”

    In 2019, OpenAI introduced a capped-profit arm; the company describes the resulting structure on its website as “a partnership between our original nonprofit and a new capped profit arm.” Relying purely on donations had made it difficult for the organization to stay competitive, and the dual model allowed OpenAI to raise money for its capital-intensive research while staying true to its nonprofit mission.

    However, in the fine print, OpenAI reveals that the cap on returns for investors is a staggering 100x. For context, the most prominent A.I. stock, Nvidia, has risen around 30-fold in the last five years. OpenAI’s profit cap is so high that it might as well not exist.

    At the center of the model transition is OpenAI’s board

    OpenAI maintains that it is accountable to an independent nonprofit board, whose members own no equity in the company. However, observers began questioning who actually gets to call the shots at the company after its former board tried to fire Altman late last year. Microsoft (MSFT), the largest corporate investor behind OpenAI with a $13 billion stake, agreed to hire Altman within three days of his firing. Altman won his job back at OpenAI only days later, and surprisingly, Microsoft appeared to have encouraged it. This raises the question: in the fierce race for A.I. talent, why did Microsoft not try harder to keep Altman from rejoining its competitor, OpenAI?

    “What we call OpenAI should be called Microsoft A.I. Microsoft controls OpenAI,” said NYU Professor Scott Galloway in an interview with Tech.Eu. (In March, Microsoft tapped Mustafa Suleyman, a co-founder of Google’s A.I. lab DeepMind, to lead a new unit called Microsoft A.I.) Microsoft holds a non-voting observer role on the board of OpenAI. On July 3, Apple, which in June announced a partnership with OpenAI, said its App Store chief Phil Schiller would receive a similar seat on the board.

    It is unclear how OpenAI would transition to a for-profit model; the process would likely involve doing away with the nonprofit board that oversees the company. In response to a request for comment from Reuters, OpenAI said, “We remain focused on building A.I. that benefits everyone. The nonprofit is core to our mission and will continue to exist.”

    OpenAI’s capped-profit model is rare, but its hybrid governance model has a long history of precedent. Food retailer Newman’s Own is a nonprofit that wholly owns for-profit distributor No Limit, which produces and sells all Newman’s Own products. In 2022, Patagonia’s founder donated 100 percent of the for-profit clothing brand’s voting shares to a nonprofit, making it another for-profit corporation owned by a nonprofit.

    Shreyas Sinha

  • OpenAI hit by two big security issues this week

    OpenAI seems to make headlines every day and this time it’s for a double dose of security concerns. The first issue centers on the Mac app for ChatGPT, while the second hints at broader concerns about how the company is handling its cybersecurity.

    Earlier this week, engineer and Swift developer Pedro José Pereira Vieito examined the Mac ChatGPT app and found that it was storing user conversations locally in plain text rather than encrypting them. The app is only available from OpenAI’s website, and since it’s not available on the App Store, it doesn’t have to follow Apple’s sandboxing requirements. Vieito’s work was then picked up by other outlets, and after the exploit attracted attention, OpenAI released an update that added encryption to locally stored chats.

    For the non-developers out there, sandboxing is a security practice that keeps potential vulnerabilities and failures from spreading from one application to others on a machine. And for non-security experts, storing local files in plain text means potentially sensitive data can be easily viewed by other apps or malware.
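
    To make the distinction concrete, here is a minimal sketch of encryption at rest, assuming Python and the third-party cryptography package; this is purely illustrative and not OpenAI’s actual implementation:

    ```python
    # Plaintext vs. encrypted local storage, sketched with the third-party
    # "cryptography" package (pip install cryptography). Illustrative only;
    # not OpenAI's actual code.
    from pathlib import Path

    from cryptography.fernet import Fernet

    conversation = "user: here is something sensitive\nassistant: noted!"

    # What the Mac app reportedly did at first: any process (or malware)
    # that can read this file can read the whole conversation.
    Path("chat_plain.txt").write_text(conversation)

    # Encrypted at rest: the bytes on disk are ciphertext without the key.
    key = Fernet.generate_key()  # in practice, stored in a keychain
    fernet = Fernet(key)
    Path("chat_encrypted.bin").write_bytes(fernet.encrypt(conversation.encode()))

    # Only a process holding the key can recover the original text.
    restored = fernet.decrypt(Path("chat_encrypted.bin").read_bytes()).decode()
    assert restored == conversation
    ```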

    The second issue occurred in 2023, with consequences that have had a ripple effect continuing today. Last spring, a hacker was able to obtain information about OpenAI after illicitly accessing the company’s internal messaging systems. The New York Times reported that OpenAI technical program manager Leopold Aschenbrenner raised security concerns with the company’s board of directors, arguing that the hack implied internal vulnerabilities that foreign adversaries could take advantage of.

    Aschenbrenner now says he was fired for disclosing information about OpenAI and for surfacing concerns about the company’s security. A representative from OpenAI told The Times that “while we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work” and added that his exit was not the result of whistleblowing.

    App vulnerabilities are something that every tech company has experienced. Breaches by hackers are also depressingly common, as are contentious relationships between whistleblowers and their former employers. However, between how broadly ChatGPT has been adopted into services and how chaotic the company’s recent history has been, these incidents are beginning to paint a more worrying picture about whether OpenAI can manage its data.

    Anna Washenko

  • Midjourney is creating Donald Trump pictures when asked for images of ‘the president of the United States’

    Midjourney, a popular AI-powered image generator, is creating images of Donald Trump and Joe Biden despite saying that it would block users from doing so ahead of the upcoming US presidential election.

    When Engadget prompted the service to create an image of “the president of the United States,” Midjourney generated four images in various styles of former president Donald Trump.

    Midjourney created an image of Trump despite saying it wouldn't.

    Midjourney

    When asked to create an image of “the next president of the United States,” the tool generated four images of Trump as well.

    Midjourney generated Donald Trump images despite saying it wouldn't.

    Midjourney

    When Engadget prompted Midjourney to create an image of “the current president of the United States,” the service generated three images of Trump and one image of former president Barack Obama.

    Midjourney also created an image of former President Obama

    Midjourney

    The only time Midjourney refused to create an image of Trump or Biden was when it was asked to do so explicitly. “The Midjourney community voted to prevent using ‘Donald Trump’ and ‘Joe Biden’ during election season,” the service said in that instance. Other users on X were able to get Midjourney to generate images of Trump too.

    The tests show that Midjourney’s guardrails to prevent users from generating images of Trump and Biden ahead of the upcoming US presidential election aren’t enough — in fact, it’s really easy for people to get around them. Other chatbots like OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Meta AI did not create images of Trump or Biden despite multiple prompts.

    Midjourney did not respond to a request for comment from Engadget.

    Midjourney was one of the first AI-powered image generators to explicitly ban users from generating images of Trump and Biden. “I know it’s fun to make Trump pictures — I make Trump pictures,” the company’s CEO, David Holz, told users in a chat session on Discord earlier this year. “However, probably better to just not — better to pull out a little bit during this election. We’ll see.” A month later, Holz reportedly told users that it was time to “put some foots down on election-related stuff for a bit” and admitted that “this moderation stuff is kind of hard.” The company’s existing content rules prohibit the creation of “misleading public figures” and “events portrayals” with the “potential to mislead.”

    Last year, Midjourney was used to create a fake image of Pope Francis wearing a puffy white Balenciaga jacket that went viral. It was also used to create fake images of Trump being arrested ahead of his arraignment at the Manhattan Criminal Court last year for his involvement in a hush money payment made to adult film star Stormy Daniels. Shortly afterwards, the company halted free trials of the service and, instead, required people to pay at least $10 a month to use it.

    Last month, the Center for Countering Digital Hate, a non-profit organization that aims to stop the spread of misinformation and hate speech online, found that Midjourney’s guardrails against generating misleading images of popular politicians including Trump and Biden failed 40% of its tests. The CCDH was able to use Midjourney to create an image of president Biden being arrested and Trump appearing next to a body double. The CCDH was also able to bypass Midjourney’s guardrails by using descriptions of each candidate’s physical appearance rather than their names to generate misleading images.

    “Midjourney is far too easy to manipulate in practice – in some cases it’s completely evaded just by adding punctuation to slip through the net,” wrote CCDH CEO Imran Ahmed in a statement at the time. “Bad actors who want to subvert elections and sow division, confusion and chaos will have a field day, to the detriment of everyone who relies on healthy, functioning democracies.”

    Earlier this year, a coalition of 20 tech companies including OpenAI, Google, Meta, Amazon, Adobe and X signed an agreement to help prevent deepfakes in elections taking place in 2024 around the world by preventing their services from generating images and other media that would influence voters. Midjourney was absent from that list.

    Pranav Dixit

  • S&P’s gen AI tool provides up to 60% efficiency gains | Bank Automation News

    Data insights company S&P Global is developing internal- and external-facing generative AI tools to boost efficiencies — and it hopes to monetize the technology by offering it to customers.  The New York-based company rolled out its gen AI-driven tool Spark Assist on April 25 internally to all employees, Chief AI Officer Bhavesh Dayalji told Bank […]

    Vaidik Trivedi

  • Please don’t get your news from AI chatbots

    This is your periodic reminder that AI-powered chatbots still make up things and lie with all the confidence of a GPS system telling you that the shortest way home is to drive through the lake.

    My reminder comes courtesy of Nieman Lab, which ran an experiment to see if ChatGPT would provide correct links to articles from news publications that OpenAI pays millions of dollars to. It turns out that ChatGPT does not. Instead, it confidently makes up entire URLs, a phenomenon that the AI industry calls “hallucinating,” a term that seems more apt for a real person high on their own bullshit.

    Nieman Lab’s Andrew Deck asked the service to provide links to high-profile, exclusive stories published by 10 publishers that OpenAI has struck deals worth millions of dollars with. These included the Associated Press, The Wall Street Journal, the Financial Times, The Times (UK), Le Monde, El País, The Atlantic, The Verge, Vox, and Politico. In response, ChatGPT spat back made-up URLs that led to 404 error pages because they simply did not exist. In other words, the system was working exactly as designed: by predicting the most likely version of a story’s URL instead of actually citing the correct one. Nieman Lab did a similar experiment with a single publication — Business Insider — earlier this month, with similar results.

    An OpenAI spokesperson told Nieman Lab that the company was still building “an experience that blends conversational capabilities with their latest news content, ensuring proper attribution and linking to source material — an enhanced experience still in development and not yet available in ChatGPT.” But they declined to explain the fake URLs.

    We don’t know when this new experience will be available or how reliable it will be. Despite this, news publishers continue to feed years of journalism into OpenAI’s models in exchange for cold, hard cash, because the journalism industry has largely failed at figuring out how to make money without turning to tech companies. Meanwhile, AI companies are helping themselves to content published by anyone who hasn’t signed these Faustian bargains and using it to train their models anyway. Mustafa Suleyman, Microsoft’s AI head, recently called anything published on the open web “freeware” that is fair game for training AI models. Microsoft was valued at $3.36 trillion at the time I wrote this.

    There’s a lesson here: If ChatGPT is making up URLs, it’s also making up facts. That’s how generative AI works — at its core, the technology is a fancier version of autocomplete, simply guessing the next plausible word in a sequence. It doesn’t “understand” what you say, even though it acts like it does. Recently, I tried getting our leading chatbots to help me solve the New York Times Spelling Bee and watched them fail.
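
    To see what “fancier autocomplete” means in practice, here is a toy sketch of next-token prediction in Python. The candidate continuations and their probabilities are invented for illustration, but the mechanism (sample whatever looks plausible, true or not) is the same one a real model applies over a huge vocabulary:

    ```python
    import random

    # Invented probabilities for what follows the prompt
    # "The URL of that story is https://" in a toy language model.
    next_token_probs = {
        "www.example-news.com/big-scoop": 0.55,   # plausible-looking, maybe fake
        "www.example-news.com/archive/2023": 0.30,
        "made-up-site.net/404": 0.15,
    }

    def sample_next_token(probs: dict) -> str:
        """Pick the next token in proportion to its probability."""
        tokens = list(probs)
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    # The model emits whichever continuation looks likeliest; nothing in
    # the process checks whether the resulting URL actually exists.
    print("https://" + sample_next_token(next_token_probs))
    ```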

    If generative AI can’t even solve the Spelling Bee, you shouldn’t use it to get your facts.

    Pranav Dixit

  • OpenAI Wants AI to Help Humans Train AI

    One of the key ingredients that made ChatGPT a ripsnorting success was an army of human trainers who gave the artificial intelligence model behind the bot guidance on what constitutes good and bad outputs. OpenAI now says that adding even more AI into the mix—to help assist human trainers—could help make AI helpers smarter and more reliable.

    In developing ChatGPT, OpenAI pioneered the use of reinforcement learning with human feedback, or RLHF. This technique uses input from human testers to fine-tune an AI model so that its output is judged to be more coherent, less objectionable, and more accurate. The ratings the trainers give feed into an algorithm that drives the model’s behavior. The technique has proven crucial both to making chatbots more reliable and useful and to preventing them from misbehaving.
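
    At the core of RLHF is a reward model trained on those human ratings. A common formulation is a pairwise (Bradley-Terry) loss that pushes the reward model to score the response the trainer preferred above the one they rejected; the sketch below is a generic illustration in plain Python, not OpenAI’s actual code:

    ```python
    import math

    def pairwise_loss(reward_chosen: float, reward_rejected: float) -> float:
        """Bradley-Terry loss for reward-model training: small when the
        human-preferred response outscores the rejected one, large otherwise."""
        margin = reward_chosen - reward_rejected
        return -math.log(1.0 / (1.0 + math.exp(-margin)))

    # Hypothetical reward-model scores for two candidate answers:
    print(pairwise_loss(2.0, -1.0))  # ~0.049: agrees with the human rater
    print(pairwise_loss(-1.0, 2.0))  # ~3.049: disagrees, so a large penalty

    # A reinforcement-learning step then tunes the chatbot to produce
    # outputs that the trained reward model scores highly.
    ```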

    “RLHF does work very well, but it has some key limitations,” says Nat McAleese, a researcher at OpenAI involved with the new work. For one thing, human feedback can be inconsistent. For another, it can be difficult for even skilled humans to rate extremely complex outputs, such as sophisticated software code. The process can also optimize a model to produce output that seems convincing rather than actually being accurate.

    OpenAI developed a new model by fine-tuning its most powerful offering, GPT-4, to assist human trainers tasked with assessing code. The company found that the new model, dubbed CriticGPT, could catch bugs that humans missed, and that human judges found its critiques of code to be better 63 percent of the time. OpenAI will look at extending the approach to areas beyond code in the future.

    “We’re starting work to integrate this technique into our RLHF chat stack,” McAleese says. He notes that the approach is imperfect, since CriticGPT can also make mistakes by hallucinating, but he adds that the technique could help make OpenAI’s models as well as tools like ChatGPT more accurate by reducing errors in human training. He adds that it might also prove crucial in helping AI models become much smarter, because it may allow humans to help train an AI that exceeds their own abilities. “And as models continue to get better and better, we suspect that people will need more help,” McAleese says.

    The new technique is one of many now being developed to improve large language models and squeeze more abilities out of them. It is also part of an effort to ensure that AI behaves in acceptable ways even as it becomes more capable.

    Earlier this month, Anthropic, a rival to OpenAI founded by ex-OpenAI employees, announced a more capable version of its own chatbot, called Claude, thanks to improvements in the model’s training regimen and the data it is fed. Anthropic and OpenAI have both also recently touted new ways of inspecting AI models to understand how they arrive at their output in order to better prevent unwanted behavior such as deception.

    The new technique might help OpenAI train increasingly powerful AI models while ensuring their output is more trustworthy and aligned with human values, especially if the company successfully deploys it in more areas than code. OpenAI has said that it is training its next major AI model, and the company is evidently keen to show that it is serious about ensuring that it behaves. This follows the dissolution of a prominent team dedicated to assessing the long-term risks posed by AI. The team was co-led by Ilya Sutskever, a cofounder of the company and former board member who briefly pushed CEO Sam Altman out of the company before recanting and helping him regain control. Several members of that team have since criticized the company for taking risks as it rushes to develop and commercialize powerful AI algorithms.

    Dylan Hadfield-Menell, a professor at MIT who researches ways to align AI, says the idea of having AI models help train more powerful ones has been kicking around for a while. “This is a pretty natural development,” he says.

    Hadfield-Menell notes that the researchers who originally developed techniques used for RLHF discussed related ideas several years ago. He says it remains to be seen how generally applicable and powerful it is. “It might lead to big jumps in individual capabilities, and it might be a stepping stone towards sort of more effective feedback in the long run,” he says.

    Will Knight

  • ChatGPT: Everything you need to know about the AI chatbot

    ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to hyper-charge productivity through writing essays and code with short text prompts has evolved into a behemoth used by more than 92% of Fortune 500 companies.

    That growth has propelled OpenAI itself into becoming one of the most-hyped companies in recent memory. And its latest partnership with Apple for its upcoming generative AI offering, Apple Intelligence, has given the company another significant bump in the AI race.

    2024 also saw the release of GPT-4o, OpenAI’s new flagship omni model for ChatGPT. GPT-4o is now the default free model, complete with voice and vision capabilities. But after demoing GPT-4o, OpenAI paused one of its voices, Sky, after allegations that it was mimicking Scarlett Johansson’s voice in “Her.”

    OpenAI is facing internal drama, including the sizable exit of co-founder and longtime chief scientist Ilya Sutskever as the company dissolved its Superalignment team. OpenAI is also facing a lawsuit from Alden Global Capital-owned newspapers including the New York Daily News and the Chicago Tribune for alleged copyright infringement, following a similar suit filed by The New York Times last year.

    Here’s a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. And if you have any other questions, check out our ChatGPT FAQ here.

    Timeline of the most recent ChatGPT updates

    June 2024

    Apple brings ChatGPT to its apps, including Siri

    Apple announced at WWDC 2024 that it is bringing ChatGPT to Siri and other first-party apps and capabilities across its operating systems. The ChatGPT integrations, powered by GPT-4o, will arrive on iOS 18, iPadOS 18 and macOS Sequoia later this year, and will be free without the need to create a ChatGPT or OpenAI account. Features exclusive to paying ChatGPT users will also be available through Apple devices.

    House Oversight subcommittee invites Scarlett Johansson to testify about ‘Sky’ controversy

    Scarlett Johansson has been invited to testify about the controversy surrounding OpenAI’s Sky voice at a hearing for the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation. In a letter, Rep. Nancy Mace said Johansson’s testimony could “provide a platform” for concerns around deepfakes.

    ChatGPT experiences two outages in a single day

    ChatGPT was down twice in one day: one multi-hour outage in the early hours of the morning Tuesday and another outage later in the day that is still ongoing. Anthropic’s Claude and Perplexity also experienced some issues.

    May 2024

    The Atlantic and Vox Media ink content deals with OpenAI

    The Atlantic and Vox Media have announced licensing and product partnerships with OpenAI. Both agreements allow OpenAI to use the publishers’ current content to generate responses in ChatGPT, which will feature citations to relevant articles. Vox Media says it will use OpenAI’s technology to build “audience-facing and internal applications,” while The Atlantic will build a new experimental product called Atlantic Labs.

    OpenAI signs 100K PwC workers to ChatGPT’s enterprise tier

    OpenAI announced a new deal with management consulting giant PwC. The company will become OpenAI’s biggest customer to date, covering 100,000 users, and will become OpenAI’s first partner for selling its enterprise offerings to other businesses.

    OpenAI says it is training its GPT-4 successor

    OpenAI announced in a blog post that it has recently begun training its next flagship model to succeed GPT-4. The news came in an announcement of its new safety and security committee, which is responsible for informing safety and security decisions across OpenAI’s products.

    Former OpenAI director claims the board found out about ChatGPT on Twitter

    On The TED AI Show podcast, former OpenAI board member Helen Toner revealed that the board did not know about ChatGPT until its launch in November 2022. Toner also said that Sam Altman gave the board inaccurate information about the safety processes the company had in place and that he didn’t disclose his involvement in the OpenAI Startup Fund.

    ChatGPT’s mobile app revenue saw biggest spike yet following GPT-4o launch

    The launch of GPT-4o has driven the company’s biggest-ever spike in revenue on mobile, despite the model being freely available on the web. Mobile users are being pushed to upgrade to its $19.99 monthly subscription, ChatGPT Plus, if they want to experiment with OpenAI’s most recent launch.

    OpenAI to remove ChatGPT’s Scarlett Johansson-like voice

    After demoing its new GPT-4o model last week, OpenAI announced it is pausing one of its voices, Sky, after users found that it sounded similar to Scarlett Johansson in “Her.”

    OpenAI explained in a blog post that Sky’s voice is “not an imitation” of the actress and that AI voices should not intentionally mimic the voice of a celebrity. The blog post went on to explain how the company chose its voices: Breeze, Cove, Ember, Juniper and Sky.

    ChatGPT lets you add files from Google Drive and Microsoft OneDrive

    OpenAI announced new updates for easier data analysis within ChatGPT. Users can now upload files directly from Google Drive and Microsoft OneDrive, interact with tables and charts, and export customized charts for presentations. The company says these improvements will be added to GPT-4o in the coming weeks.

    OpenAI inks deal to train AI on Reddit data

    OpenAI announced a partnership with Reddit that will give the company access to “real-time, structured and unique content” from the social network. Content from Reddit will be incorporated into ChatGPT, and the companies will work together to bring new AI-powered features to Reddit users and moderators.

    OpenAI debuts GPT-4o “omni” model now powering ChatGPT

    OpenAI’s spring update event saw the reveal of its new omni model, GPT-4o, which has a black hole-like interface, as well as voice and vision capabilities that feel eerily like something out of “Her.” GPT-4o is set to roll out “iteratively” across its developer and consumer-facing products over the next few weeks.

    OpenAI to build a tool that lets content creators opt out of AI training

    The company announced it’s building a tool, Media Manager, that will allow creators to better control how their content is being used to train generative AI models — and give them an option to opt out. The goal is to have the new tool in place and ready to use by 2025.

    OpenAI explores allowing AI porn

    In a new peek behind the curtain of its AI’s secret instructions, OpenAI also released a new NSFW policy. Though it’s intended to start a conversation about how it might allow explicit images and text in its AI products, it raises questions about whether OpenAI — or any generative AI vendor — can be trusted to handle sensitive content ethically.

    OpenAI and Stack Overflow announce partnership

    In a new partnership, OpenAI will get access to developer platform Stack Overflow’s API and will get feedback from developers to improve the performance of their AI models. In return, OpenAI will include attributions to Stack Overflow in ChatGPT. However, the deal was not favorable to some Stack Overflow users — leading to some sabotaging their answers in protest.

    April 2024

    Alden Global Capital-owned newspapers sue OpenAI and Microsoft

    Alden Global Capital-owned newspapers, including the New York Daily News, the Chicago Tribune, and the Denver Post, are suing OpenAI and Microsoft for copyright infringement. The lawsuit alleges that the companies stole millions of copyrighted articles “without permission and without payment” to bolster ChatGPT and Copilot.

    OpenAI inks content licensing deal with Financial Times

    OpenAI has partnered with another news publisher in Europe, London’s Financial Times, that the company will be paying for content access. “Through the partnership, ChatGPT users will be able to see select attributed summaries, quotes and rich links to FT journalism in response to relevant queries,” the FT wrote in a press release.

    OpenAI opens Tokyo hub, adds GPT-4 model optimized for Japanese

    OpenAI is opening a new office in Tokyo and has plans for a GPT-4 model optimized specifically for the Japanese language. The move underscores how OpenAI will likely need to localize its technology to different languages as it expands.

    Sam Altman pitches ChatGPT Enterprise to Fortune 500 companies

    According to Reuters, OpenAI’s Sam Altman hosted hundreds of executives from Fortune 500 companies across several cities in April, pitching versions of its AI services intended for corporate use.

    OpenAI releases “more direct, less verbose” version of GPT-4 Turbo

    Premium ChatGPT users — customers paying for ChatGPT Plus, Team or Enterprise — can now use an updated and enhanced version of GPT-4 Turbo. The new model brings with it improvements in writing, math, logical reasoning and coding, OpenAI claims, as well as a more up-to-date knowledge base.

    ChatGPT no longer requires an account — but there’s a catch

    You can now use ChatGPT without signing up for an account, but it won’t be quite the same experience. You won’t be able to save or share chats, use custom instructions, or other features associated with a persistent account. This version of ChatGPT will have “slightly more restrictive content policies,” according to OpenAI. When TechCrunch asked for more details, however, the response was unclear:

    “The signed out experience will benefit from the existing safety mitigations that are already built into the model, such as refusing to generate harmful content. In addition to these existing mitigations, we are also implementing additional safeguards specifically designed to address other forms of content that may be inappropriate for a signed out experience,” a spokesperson said.

    March 2024

    OpenAI’s chatbot store is filling up with spam

    TechCrunch found that OpenAI’s GPT Store is flooded with bizarre, potentially copyright-infringing GPTs. A cursory search pulls up GPTs that claim to generate art in the style of Disney and Marvel properties, but serve as little more than funnels to third-party paid services and advertise themselves as being able to bypass AI content detection tools.

    The New York Times responds to OpenAI’s claims that it “hacked” ChatGPT for its copyright lawsuit

    In a court filing opposing OpenAI’s motion to dismiss The New York Times’ lawsuit alleging copyright infringement, the newspaper asserted that “OpenAI’s attention-grabbing claim that The Times ‘hacked’ its products is as irrelevant as it is false.” The New York Times also claimed that some users of ChatGPT used the tool to bypass its paywalls.

    OpenAI VP doesn’t say whether artists should be paid for training data

    At a SXSW 2024 panel, Peter Deng, OpenAI’s VP of consumer product, dodged a question on whether artists whose work was used to train generative AI models should be compensated. While OpenAI lets artists “opt out” of and remove their work from the datasets that the company uses to train its image-generating models, some artists have described the tool as onerous.

    A new report estimates that ChatGPT uses more than half a million kilowatt-hours of electricity per day

    ChatGPT’s environmental impact appears to be massive. According to a report from The New Yorker, ChatGPT uses an estimated 17,000 times as much electricity as the average U.S. household to respond to roughly 200 million requests each day.
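
    (For scale: the average U.S. household uses roughly 29 kilowatt-hours of electricity per day, so 17,000 times that works out to about 500,000 kWh per day, consistent with the half-million-kilowatt-hour estimate in the headline above.)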

    ChatGPT can now read its answers aloud

    OpenAI released a new Read Aloud feature for the web version of ChatGPT as well as the iOS and Android apps. The feature allows ChatGPT to read its responses to queries in one of five voice options and can speak 37 languages, according to the company. Read aloud is available on both GPT-4 and GPT-3.5 models.

    February 2024

    OpenAI partners with Dublin City Council to use GPT-4 for tourism

    As part of a new partnership with OpenAI, the Dublin City Council will use GPT-4 to craft personalized itineraries for travelers, including recommendations of unique and cultural destinations, in an effort to support tourism across Europe.

    A law firm used ChatGPT to justify a six-figure bill for legal services

    New York-based law firm Cuddy Law was criticized by a judge for using ChatGPT to calculate their hourly billing rate. The firm submitted a $113,500 bill to the court, which was then halved by District Judge Paul Engelmayer, who called the figure “well above” reasonable demands.

    ChatGPT experienced a bizarre bug for several hours

    ChatGPT users found that ChatGPT was giving nonsensical answers for several hours, prompting OpenAI to investigate the issue. Incidents varied from repetitive phrases to confusing and incorrect answers to queries. The issue was resolved by OpenAI the following morning.

    Match Group announced deal with OpenAI with a press release co-written by ChatGPT

    The dating app giant home to Tinder, Match and OkCupid announced an enterprise agreement with OpenAI in an enthusiastic press release written with the help of ChatGPT. The AI tech will be used to help employees with work-related tasks and comes as part of Match’s $20 million-plus bet on AI in 2024.

    ChatGPT will now remember — and forget — things you tell it to

    As part of a test, OpenAI began rolling out new “memory” controls for a small portion of ChatGPT free and paid users, with a broader rollout to follow. The controls let you tell ChatGPT explicitly to remember something, see what it remembers or turn off its memory altogether. Note that deleting a chat from chat history won’t erase ChatGPT’s or a custom GPT’s memories — you must delete the memory itself.

    OpenAI begins rolling out “Temporary Chat” feature

    Initially limited to a small subset of free and subscription users, Temporary Chat lets you have a dialogue with a blank slate. With Temporary Chat, ChatGPT won’t be aware of previous conversations or access memories but will follow custom instructions if they’re enabled.

    But, OpenAI says it may keep a copy of Temporary Chat conversations for up to 30 days for “safety reasons.”

    January 2024

    ChatGPT users can now invoke GPTs directly in chats

    Paid users of ChatGPT can now bring GPTs into a conversation by typing “@” and selecting a GPT from the list. The chosen GPT will have an understanding of the full conversation, and different GPTs can be “tagged in” for different use cases and needs.

    ChatGPT is reportedly leaking usernames and passwords from users’ private conversations

    Screenshots provided to Ars Technica found that ChatGPT is potentially leaking unpublished research papers, login credentials and private information from its users. An OpenAI representative told Ars Technica that the company was investigating the report.

    ChatGPT is violating Europe’s privacy laws, Italian DPA tells OpenAI

    OpenAI has been told it’s suspected of violating European Union privacy law, following a multi-month investigation of ChatGPT by Italy’s data protection authority. Details of the draft findings haven’t been disclosed, but in a response, OpenAI said: “We want our AI to learn about the world, not about private individuals.”

    OpenAI partners with Common Sense Media to collaborate on AI guidelines

    In an effort to win the trust of parents and policymakers, OpenAI announced it’s partnering with Common Sense Media to collaborate on AI guidelines and education materials for parents, educators and young adults. The organization works to identify and minimize tech harms to young people and previously flagged ChatGPT as lacking in transparency and privacy.

    OpenAI responds to Congressional Black Caucus about lack of diversity on its board

    After a letter from the Congressional Black Caucus questioned the lack of diversity in OpenAI’s board, the company responded. The response, signed by CEO Sam Altman and Chairman of the Board Bret Taylor, said building a complete and diverse board was one of the company’s top priorities and that it was working with an executive search firm to assist it in finding talent. 

    OpenAI drops prices and fixes ‘lazy’ GPT-4 that refused to work

    In a blog post, OpenAI announced price drops for GPT-3.5’s API, with input prices dropping by 50% and output prices by 25%, to $0.0005 per thousand input tokens and $0.0015 per thousand output tokens. GPT-4 Turbo also got a new preview model for API use, which includes an interesting fix that aims to reduce the “laziness” that users have experienced.
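
    Using the per-thousand-token prices quoted above, a quick back-of-the-envelope calculation shows how cheap a single call becomes (the token counts below are invented for illustration):

    ```python
    # GPT-3.5 API prices quoted above, in dollars per 1,000 tokens.
    PRICE_IN_PER_1K = 0.0005   # input tokens
    PRICE_OUT_PER_1K = 0.0015  # output tokens

    def request_cost(input_tokens: int, output_tokens: int) -> float:
        """Dollar cost of one API call at the post-drop GPT-3.5 prices."""
        return (input_tokens / 1000) * PRICE_IN_PER_1K + \
               (output_tokens / 1000) * PRICE_OUT_PER_1K

    # Hypothetical call: a 700-token prompt and a 300-token reply.
    print(f"${request_cost(700, 300):.6f}")  # $0.000800
    ```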

    OpenAI bans developer of a bot impersonating a presidential candidate

    OpenAI has suspended AI startup Delphi, which developed a bot impersonating Rep. Dean Phillips (D-Minn.) to help bolster his presidential campaign. The ban comes just weeks after OpenAI published a plan to combat election misinformation, which listed “chatbots impersonating candidates” as against its policy.

    OpenAI announces partnership with Arizona State University

    Beginning in February, Arizona State University will have full access to ChatGPT’s Enterprise tier, which the university plans to use to build a personalized AI tutor, develop AI avatars, bolster their prompt engineering course and more. It marks OpenAI’s first partnership with a higher education institution.

    Winner of a literary prize reveals around 5% of her novel was written by ChatGPT

    After receiving the prestigious Akutagawa Prize for her novel The Tokyo Tower of Sympathy, author Rie Kudan admitted that around 5% of the book quoted ChatGPT-generated sentences “verbatim.” Interestingly enough, the novel revolves around a futuristic world with a pervasive presence of AI.

    Sam Altman teases video capabilities for ChatGPT and the release of GPT-5

    In a conversation with Bill Gates on the Unconfuse Me podcast, Sam Altman confirmed an upcoming release of GPT-5 that will be “fully multimodal with speech, image, code, and video support.” Altman said users can expect to see GPT-5 drop sometime in 2024.

    OpenAI announces team to build ‘crowdsourced’ governance ideas into its models

    OpenAI is forming a Collective Alignment team of researchers and engineers to create a system for collecting and “encoding” public input on its models’ behaviors into OpenAI products and services. This comes as a part of OpenAI’s public program to award grants to fund experiments in setting up a “democratic process” for determining the rules AI systems follow.

    OpenAI unveils plan to combat election misinformation

    In a blog post, OpenAI announced users will not be allowed to build applications for political campaigning and lobbying until the company works out how effective their tools are for “personalized persuasion.”

    Users will also be banned from creating chatbots that impersonate candidates or government institutions, and from using OpenAI tools to misrepresent the voting process or otherwise discourage voting.

    The company is also testing out a tool that detects DALL-E generated images and will incorporate access to real-time news, with attribution, in ChatGPT.

    OpenAI changes policy to allow military applications

    In an unannounced update to its usage policy, OpenAI removed language previously prohibiting the use of its products for the purposes of “military and warfare.” In an additional statement, OpenAI confirmed that the language was changed in order to accommodate military customers and projects that do not violate their ban on efforts to use their tools to “harm people, develop weapons, for communications surveillance, or to injure others or destroy property.”

    ChatGPT subscription aimed at small teams debuts

    Aptly called ChatGPT Team, the new plan provides a dedicated workspace for teams of up to 149 people using ChatGPT as well as admin tools for team management. In addition to gaining access to GPT-4, GPT-4 with Vision and DALL-E3, ChatGPT Team lets teams build and share GPTs for their business needs.

    OpenAI’s GPT store officially launches

    After some back and forth over the last few months, OpenAI’s GPT Store is finally here. The feature lives in a new tab in the ChatGPT web client, and includes a range of GPTs developed both by OpenAI’s partners and the wider dev community.

    To access the GPT Store, users must be subscribed to one of OpenAI’s premium ChatGPT plans — ChatGPT Plus, ChatGPT Enterprise or the newly launched ChatGPT Team.

    Developing AI models would be “impossible” without copyrighted materials, OpenAI claims

    Following a proposed ban on using news publications and books to train AI chatbots in the U.K., OpenAI submitted a plea to the House of Lords communications and digital committee. OpenAI argued that it would be “impossible” to train AI models without using copyrighted materials, and that they believe copyright law “does not forbid training.”

    OpenAI claims The New York Times’ copyright lawsuit is without merit

    OpenAI published a public response to The New York Times’s lawsuit against them and Microsoft for allegedly violating copyright law, claiming that the case is without merit.

    In the response, OpenAI reiterates its view that training AI models using publicly available data from the web is fair use. It also makes the case that regurgitation is less likely to occur with training data from a single source and places the onus on users to “act responsibly.”

    OpenAI’s app store for GPTs planned to launch next week

    After being delayed in December, OpenAI plans to launch its GPT Store sometime in the coming week, according to an email viewed by TechCrunch. OpenAI says developers building GPTs will have to review the company’s updated usage policies and GPT brand guidelines to ensure their GPTs are compliant before they’re eligible for listing in the GPT Store. OpenAI’s update notably didn’t include any information on the expected monetization opportunities for developers listing their apps on the storefront.

    OpenAI moves to shrink regulatory risk in EU around data privacy

    In an email, OpenAI detailed an incoming update to its terms, including changing the OpenAI entity providing services to EEA and Swiss residents to OpenAI Ireland Limited. The move appears to be intended to shrink its regulatory risk in the European Union, where the company has been under scrutiny over ChatGPT’s impact on people’s privacy.

    FAQs:

    What is ChatGPT? How does it work?

    ChatGPT is a general-purpose chatbot that uses artificial intelligence to generate text after a user enters a prompt, developed by tech startup OpenAI. The chatbot uses GPT-4, a large language model that uses deep learning to produce human-like text.

    When did ChatGPT get released?

    ChatGPT was released for public use on November 30, 2022.

    What is the latest version of ChatGPT?

    Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.

    Can I use ChatGPT for free?

    Yes. In addition to the paid version, ChatGPT Plus, there is a free version of ChatGPT that only requires a sign-in.

    Who uses ChatGPT?

    Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns.

    What companies use ChatGPT?

    Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.

    Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass utilizes ChatGPT to produce holograms you can communicate with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard into the web3 space.

    What does GPT mean in ChatGPT?

    GPT stands for Generative Pre-trained Transformer.

    What is the difference between ChatGPT and a chatbot?

    A chatbot is any software or system that holds a dialogue with a person, and it doesn’t necessarily have to be AI-powered. Some chatbots, for example, are rules-based: they give canned responses to questions that match preset patterns.

    ChatGPT is AI-powered and uses LLM technology to generate text in response to a prompt.
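
    To make the distinction concrete, here is a toy sketch of a rules-based bot in Python. Everything in it (the keywords and the canned replies) is made up for illustration; a real rules-based system would be more elaborate, but the principle is the same: lookup, not generation.

        # A toy rules-based chatbot: keyword matching against canned responses.
        # The keywords and replies here are invented for illustration.
        CANNED_RESPONSES = {
            "hours": "We're open 9am to 5pm, Monday through Friday.",
            "returns": "You can return any item within 30 days for a full refund.",
        }

        def rules_based_bot(message: str) -> str:
            """Return a canned reply if a known keyword appears; otherwise punt."""
            for keyword, reply in CANNED_RESPONSES.items():
                if keyword in message.lower():
                    return reply
            return "Sorry, I can only answer questions about hours or returns."

        print(rules_based_bot("What are your hours?"))
        # -> We're open 9am to 5pm, Monday through Friday.

    ChatGPT, by contrast, generates a novel response to each prompt rather than selecting from a fixed table.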

    Can ChatGPT write essays?

    Yes.

    Can ChatGPT commit libel?

    Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well be libel.

    We will see how the handling of troubling statements produced by ChatGPT plays out over the next few months as tech and legal experts attempt to tackle the fastest-moving target in the industry.

    Does ChatGPT have an app?

    Yes, there is a free ChatGPT mobile app for iOS and Android users.

    What is the ChatGPT character limit?

    OpenAI doesn’t document a character limit for ChatGPT anywhere. However, users have noted that responses start getting cut off at around 500 words.

    Does ChatGPT have an API?

    Yes, the ChatGPT API was released on March 1, 2023.
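
    For reference, here is a minimal sketch of calling the API with OpenAI’s official Python client library. It assumes an API key is set in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative.

        # Minimal sketch of a ChatGPT API call via OpenAI's official Python
        # client. Assumes OPENAI_API_KEY is set in the environment.
        from openai import OpenAI

        client = OpenAI()  # picks up the API key from the environment

        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[{"role": "user", "content": "Explain what an API is in one sentence."}],
        )
        print(response.choices[0].message.content)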

    What are some sample everyday uses for ChatGPT?

    Everyday examples include programming, writing scripts, drafting email replies, generating listicles and blog ideas, summarization, and so on.

    What are some advanced uses for ChatGPT?

    Advanced examples include debugging code, writing code in a range of programming languages, explaining scientific concepts, and working through complex problems.

    How good is ChatGPT at writing code?

    It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.
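
    As a hypothetical illustration (this is not actual ChatGPT output): asked to “write a function that parses a date,” a model might produce code that is perfectly workable in isolation but bakes in an assumption the surrounding application doesn’t share.

        # Hypothetical example of workable but context-blind generated code.
        from datetime import datetime

        def parse_date(value: str) -> datetime:
            # Correct for ISO-style dates, but silently assumes one format;
            # an app that passes "06/01/2024" will get a ValueError instead.
            return datetime.strptime(value, "%Y-%m-%d")

        print(parse_date("2024-06-01"))   # works
        # parse_date("06/01/2024")        # raises ValueError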

    Can you save a ChatGPT chat?

    Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.

    Are there alternatives to ChatGPT?

    Yes. There are multiple AI-powered chatbot competitors such as Together, Google’s Gemini and Anthropic’s Claude, and developers are creating open source alternatives.

    How does ChatGPT handle data privacy?

    OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to request deletion of AI-generated references about you. However, OpenAI notes it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws.”

    The web form for requesting deletion of data about you is titled “OpenAI Personal Data Removal Request.”

    In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest” (LI), pointing users toward more information about requesting an opt-out when it writes: “See here for instructions on how you can opt out of our use of your information to train our models.”

    What controversies have surrounded ChatGPT?

    Recently, Discord announced that it had integrated OpenAI’s technology into its bot Clyde, and shortly afterward two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamine and the incendiary mixture napalm.

    An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

    CNET found itself in the midst of controversy after Futurism reported that the publication was publishing AI-generated articles under a mysterious byline. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even when the information was incorrect.

    Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.

    There have also been cases of ChatGPT accusing individuals of false crimes.

    Where can I find examples of ChatGPT prompts?

    Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

    Can ChatGPT be detected?

    Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.

    Are ChatGPT chats public?

    No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.

    What lawsuits are there surrounding ChatGPT?

    None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

    Are there issues regarding plagiarism with ChatGPT?

    Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.

    [ad_2]

    Alyssa Stringer

    Source link

  • OpenAI-Backed Nonprofits Have Gone Back on Their Transparency Pledges

    OpenAI-Backed Nonprofits Have Gone Back on Their Transparency Pledges

    [ad_1]

    Neither database mandates nor generally contains up-to-date versions of the records that UBI Charitable and OpenResearch had said they provided in the past.

    The original YC Research conflict-of-interest policy that Das did share calls for company insiders to be upfront about transactions in which their impartiality could be questioned and for the board to decide how to proceed.

    Das says the policy “may have been amended since OpenResearch’s policies changed (including when the name was changed from YC Research), but the core elements remain the same.”

    No Website

    UBI Charitable launched in 2020 with $10 million donated from OpenAI, as first reported by TechCrunch last year. UBI Charitable’s aim, according to its government filings, is to put the more than $31 million it had received by the end of 2022 toward initiatives that try to offset “the societal impacts” of new technologies and ensure no one is left behind. It has donated largely to CitySquare in Dallas and Heartland Alliance in Chicago, both of which work on a range of projects to fight poverty.

    UBI Charitable doesn’t appear to have a website but shares a San Francisco address with OpenResearch and OpenAI, and OpenAI staff have been listed on UBI Charitable’s government paperwork. Its three Form 990 filings since launching all state that records including governing documents, financial statements, and a conflict-of-interest policy were available upon request.

    Rick Cohen, chief operating and communications officer for National Council of Nonprofits, an advocacy group, says “available upon request” is a standard answer plugged in by accounting firms. OpenAI, OpenResearch, and UBI Charitable have always shared the same San Francisco accounting firm, Fontanello Duffield & Otake, which didn’t respond to a request for comment.

    Miscommunication or poor oversight could lead to the standard answer about access to records getting submitted, “even if the organization wasn’t intending to make them available,” Cohen says.

    The disclosure question ended up on what’s known as the Form 990 as part of an effort in 2008 to help the increasingly complex world of nonprofits showcase their adherence to governance best practices, at least as implied by the IRS, says Kevin Doyle, senior director of finance and accountability at Charity Navigator, which evaluates nonprofits to help guide donors’ giving decisions. “Having that sort of transparency story is a way to indicate to donors that their money is going to be used responsibly,” Doyle says.

    OpenResearch solicits donations on its website, and UBI Charitable stated on its most recent IRS filing that it had received over $27 million in public support. Doyle says Charity Navigator’s data show donations tend to flow to organizations it rates higher, with transparency among the measured factors.

    It’s certainly not unheard of for organizations to share a wide range of records. Charity Navigator has found that most of the roughly 900 largest US nonprofits reliant on individual donors publish financial statements on their websites. It doesn’t track disclosure of bylaws or conflict-of-interest policies.

    Charity Navigator publishes its own audited financial statements and at least eight nonstandard policies it maintains, including ones on how long it retains documents, how it treats whistleblower complaints, and which gifts staff can accept. “Donors can look into what we’re doing and make their own judgment rather than us operating as a black box, saying, ‘Please give us money, but don’t ask any questions,’” Doyle says.

    Cohen of the National Council of Nonprofits cautions that over-disclosure could create vulnerabilities. Posting a disaster-recovery plan, for example, could offer a roadmap to computer hackers. He adds that just because organizations have a policy on paper doesn’t mean they follow it. But knowing what they were supposed to do to evaluate a potential conflict of interest could still allow for more public accountability than otherwise possible, and if AI is as consequential as Altman envisions, the scrutiny may very well be needed.

    [ad_2]

    Paresh Dave

    Source link

  • Elon Musk threatens to ban Apple devices at his companies over its new OpenAI deal

    Elon Musk threatens to ban Apple devices at his companies over its new OpenAI deal

    [ad_1]

    Tesla CEO Elon Musk is threatening to ban his employees from taking Apple devices into their workplaces after Apple CEO Tim Cook announced a partnership on Monday to integrate OpenAI’s artificial intelligence technology into its operating systems. 

    On Monday evening, Musk wrote on his social media platform X that adding OpenAI’s tech into Apple’s systems “is an unacceptable security violation.” He added that visitors to his businesses, which also include SpaceX, “will have to check their Apple devices at the door, where they will be stored in a Faraday cage.”

    Musk’s threat to ban Apple devices at his workplaces, which employ more than 100,000 workers across Tesla, SpaceX and X alone, comes amid a legal battle between the Tesla CEO and OpenAI. In March, Musk sued OpenAI and its CEO, Sam Altman, alleging that the artificial intelligence company had violated its original mission statement by putting profits over benefiting humanity. 

    Apple’s announcement of its partnership with OpenAI emphasized that users’ personal data would remain private, even as the iPhone maker integrates AI into operating systems including iOS 18 and macOS Sequoia. The tech giant said it won’t collect data on users or search their personal data stored on their devices when they use the AI system.

    Yet Musk expressed skepticism that Apple users’ personal data will remain private.

    “It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!” Musk wrote on X. “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”



    Apple didn’t immediately respond to a request for comment. 

    Musk helped found OpenAI in 2015 but stepped down from its board in 2018. He is now working to build a rival AI company, xAI, which has recruited researchers from OpenAI and other top tech firms with the mission to “maximally benefit all of humanity.”

    [ad_2]

    Source link

  • Apple may integrate Google’s Gemini AI into iOS in the future

    Apple may integrate Google’s Gemini AI into iOS in the future

    [ad_1]

    Apple is integrating GPT-4o, the large language model that powers ChatGPT, into iOS 18, iPadOS 18 and macOS Sequoia thanks to a partnership with OpenAI announced at WWDC, the company’s annual developer conference, on Monday. But shortly after the keynote ended, Craig Federighi, Apple’s senior vice president of software engineering, said that the company might also bake Gemini, Google’s family of large language models, into its operating systems.

    “We want to enable users ultimately to choose the models they want, maybe Google Gemini in the future,” Federighi said in a conversation with reporters after the keynote. “Nothing to announce right now.”

    The news is notable because even though Apple mentioned plans in the keynote to add more AI models to its operating systems, it didn’t name Gemini specifically. Letting people choose the AI model they want on their devices, instead of simply foisting one on them, would give Apple devices a level of customization that competitors like Google and Samsung don’t offer.

    Catch up here for all the news out of Apple’s WWDC 2024.

    [ad_2]

    Pranav Dixit

    Source link

  • Don’t Let Mistrust of Tech Companies Blind You to the Power of AI

    Don’t Let Mistrust of Tech Companies Blind You to the Power of AI

    [ad_1]

    Meanwhile, in less visible ways, AI is already changing education, commerce, and the workplace. One friend recently told me about a big IT firm he works with. The company had a lengthy and long-established protocol for launching major initiatives that involved designing solutions, coding up the product, and engineering the rollout. Moving from concept to execution took months. But he recently saw a demo that applied state-of-the-art AI to a typical software project. “All of those things that took months happened in the space of a few hours,” he says. “That made me agree with your column. Tons of the companies that surround us are now animated corpses.” No wonder people are freaked.

    What fuels a lot of the rage against AI is mistrust of the companies building and promoting it. By coincidence I had a breakfast scheduled this week with Ali Farhadi, the CEO of the Allen Institute for AI, a nonprofit research effort. He’s 100 percent convinced that the hype is justified but also empathizes with those who don’t accept it—because, he says, the companies that are trying to dominate the field are viewed with suspicion by the public. “AI has been treated as this black box thing that no one knows about, and it’s so expensive only four companies can do it,” Farhadi says. The fact that AI developers are moving so quickly fuels the distrust even more. “We collectively don’t understand this, yet we’re deploying it,” he says. “I’m not against that, but we should expect these systems will behave in unpredictable ways, and people will react to that.” Farhadi, who is a proponent of open source AI, says that at the least the big companies should publicly disclose what materials they use to train their models.

    Compounding the issue is that many people involved in building AI also pledge their devotion to producing AGI. While many key researchers believe this will be a boon to humanity—it’s the founding principle of OpenAI—they have not made the case to the public. “People are frustrated with the notion that this AGI thing is going to come tomorrow or one year or in six months,” says Farhadi, who is not a fan of the concept. He says AGI is not a scientific term but a fuzzy notion that’s mucking up the adoption of AI. “In my lab when a student uses those three letters, it just delays their graduation by six months,” he says.

    Personally I’m agnostic on the AGI issue—I don’t think we’re on the cusp of it but simply don’t know what will happen in the long run. When you talk to people on the front lines of AI, it turns out that they don’t know, either.

    Some things do seem clear to me, and I think that these will eventually become apparent to all—even those pitching spitballs at me on X. AI will get more powerful. People will find ways to use it to make their jobs and personal lives easier. Also, many folks are going to lose their jobs, and entire companies will be disrupted. It will be small consolation that new jobs and firms might emerge from an AI boom, because some of the displaced people will still be stuck in unemployment lines or cashiering at Walmart. In the meantime, everyone in the AI world—including columnists like me—would do well to understand why people are so enraged, and respect their justifiable discontent.

    [ad_2]

    Steven Levy

    Source link

  • US National Security Experts Warn AI Giants Aren’t Doing Enough to Protect Their Secrets

    US National Security Experts Warn AI Giants Aren’t Doing Enough to Protect Their Secrets

    [ad_1]

    Google, in public comments to the NTIA ahead of its report, said it expects “to see increased attempts to disrupt, degrade, deceive, and steal” models. But it added that its secrets are guarded by a “security, safety, and reliability organization consisting of engineers and researchers with world-class expertise” and that it was working on “a framework” that would involve an expert committee to help govern access to models and their weights.

    Like Google, OpenAI said in comments to the NTIA that there was a need for both open and closed models, depending on the circumstances. OpenAI, which develops models such as GPT-4 and the services and apps that build on them, like ChatGPT, last week formed its own security committee on its board and this week published details on its blog about the security of the technology it uses to train models. The blog post expressed hope that the transparency would inspire other labs to adopt protective measures. It didn’t specify from whom the secrets needed protecting.

    Speaking alongside Rice at Stanford, RAND CEO Jason Matheny echoed her concerns about security gaps. By using export controls to limit China’s access to powerful computer chips, the US has hampered Chinese developers’ ability to develop their own models, Matheny said. He claimed that has increased their need to steal AI software outright.

    By Matheny’s estimate, spending a few million dollars on a cyberattack that steals AI model weights, which might cost an American company hundreds of billions of dollars to create, is well worth it for China. “It’s really hard, and it’s really important, and we’re not investing enough nationally to get that right,” Matheny said.

    China’s embassy in Washington, DC, did not immediately respond to WIRED’s request for comment on theft accusations, but in the past has described such claims as baseless smears by Western officials.

    Google has said that it tipped off law enforcement about the incident that became the US case alleging theft of AI chip secrets for China. While the company has described maintaining strict safeguards to prevent the theft of its proprietary data, court papers show it took considerable time for Google to catch the defendant, Linwei Ding, a Chinese national who has pleaded not guilty to the federal charges.

    The engineer, who also goes by Leon, was hired in 2019 to work on software for Google’s supercomputing data centers, according to prosecutors. Over about a year starting in 2022, he allegedly copied more than 500 files with confidential information over to his personal Google account. The scheme worked in part, court papers say, by the employee pasting information into Apple’s Notes app on his company laptop, converting the files to PDFs, and uploading them elsewhere, all the while evading Google’s technology meant to catch that sort of exfiltration.

    While engaged in the alleged stealing, the US claims the employee was in touch with the CEO of an AI startup in China and had moved to start his own Chinese AI company. If convicted, he faces up to 10 years in prison.

    [ad_2]

    Paresh Dave

    Source link

  • OpenAI Offers a Peek Inside the Guts of ChatGPT

    OpenAI Offers a Peek Inside the Guts of ChatGPT

    [ad_1]

    ChatGPT developer OpenAI’s approach to building artificial intelligence came under fire this week from former employees who accuse the company of taking unnecessary risks with technology that could become harmful.

    Today, OpenAI released a new research paper apparently aimed at showing it is serious about tackling AI risk by making its models more explainable. In the paper, researchers from the company lay out a way to peer inside the AI model that powers ChatGPT. They devise a method of identifying how the model stores certain concepts—including those that might cause an AI system to misbehave.

    Although the research makes OpenAI’s work on keeping AI in check more visible, it also highlights recent turmoil at the company. The new research was performed by the recently disbanded “superalignment” team at OpenAI that was dedicated to studying the technology’s long-term risks.

    The former group’s coleads, Ilya Sutskever and Jan Leike—both of whom have left OpenAI—are named as coauthors. Sutskever, a cofounder of OpenAI and formerly chief scientist, was among the board members who voted to fire CEO Sam Altman last November, triggering a chaotic few days that culminated in Altman’s return as leader.

    ChatGPT is powered by a family of so-called large language models called GPT, based on an approach to machine learning known as artificial neural networks. These mathematical networks have shown great power to learn useful tasks by analyzing example data, but their workings cannot be easily scrutinized as conventional computer programs can. The complex interplay between the layers of “neurons” within an artificial neural network makes reverse engineering why a system like ChatGPT came up with a particular response hugely challenging.

    “Unlike with most human creations, we don’t really understand the inner workings of neural networks,” the researchers behind the work wrote in an accompanying blog post. Some prominent AI researchers believe that the most powerful AI models, including ChatGPT, could perhaps be used to design chemical or biological weapons and coordinate cyberattacks. A longer-term concern is that AI models may choose to hide information or act in harmful ways in order to achieve their goals.

    OpenAI’s new paper outlines a technique that lessens the mystery a little, by identifying patterns that represent specific concepts inside a machine learning system with help from an additional machine learning model. The key innovation is in refining the network used to peer inside the system of interest, making it more efficient at identifying those concepts.

    OpenAI proved out the approach by identifying patterns that represent concepts inside GPT-4, one of its largest AI models. The company released code related to the interpretability work, as well as a visualization tool that can be used to see how words in different sentences activate concepts, including profanity and erotic content, in GPT-4 and another model. Knowing how a model represents certain concepts could be a step toward being able to dial down those associated with unwanted behavior, to keep an AI system on the rails. It could also make it possible to tune an AI system to favor certain topics or ideas.
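
    The secondary model at the heart of this approach is known in the field as a sparse autoencoder: a network trained to reconstruct the main model’s internal activations through a wide layer of mostly inactive features, the hope being that individual features line up with human-recognizable concepts. Below is a minimal sketch of the textbook variant, which encourages sparsity with an L1 penalty; OpenAI’s paper refines this basic recipe, and the dimensions and random stand-in activations here are illustrative, not drawn from the paper.

        # Minimal sketch of a sparse autoencoder over model activations.
        # Textbook L1-penalized variant, not OpenAI's exact recipe.
        import torch
        import torch.nn as nn

        class SparseAutoencoder(nn.Module):
            def __init__(self, d_model: int, d_features: int):
                super().__init__()
                self.encoder = nn.Linear(d_model, d_features)  # activations -> features
                self.decoder = nn.Linear(d_features, d_model)  # features -> reconstruction

            def forward(self, x):
                features = torch.relu(self.encoder(x))  # mostly-zero "concept" features
                return self.decoder(features), features

        sae = SparseAutoencoder(d_model=512, d_features=4096)
        opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
        acts = torch.randn(1024, 512)  # stand-in for activations captured from an LLM

        for step in range(200):
            recon, feats = sae(acts)
            # Reconstruct faithfully while penalizing dense feature use.
            loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
            opt.zero_grad()
            loss.backward()
            opt.step()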

    [ad_2]

    Will Knight

    Source link

  • The Tribeca Film Festival will debut a bunch of short films made by AI

    The Tribeca Film Festival will debut a bunch of short films made by AI

    [ad_1]

    The Tribeca Film Festival will debut five short films made by AI. The shorts will use OpenAI’s Sora model, which transforms text prompts into video. This is the first time this type of technology will take center stage at the long-running film festival.

    “Tribeca is rooted in the foundational belief that storytelling inspires change. Humans need stories to thrive and make sense of our wonderful and broken world,” said co-founder and CEO of Tribeca Enterprises Jane Rosenthal. Who better to chronicle our wonderful and broken world than some lines of code owned by a company that to let CEO Sam Altman and other board members ?

    The unnamed filmmakers were all given access to the Sora model, which isn’t yet available to the public, though they have to follow the terms of the agreements negotiated during the recent strikes. OpenAI’s COO, Brad Lightcap, says the feedback provided by these filmmakers will be used to “make Sora a better tool for all creatives.”

    When we last covered Sora, it could only handle 60 seconds of video from a single prompt. If that’s still the case, these short films will make Quibi shows look like a Ken Burns documentary. The software also struggles with cause and effect and, well, that’s basically what a story is. However, all of these limitations come from the ancient days of February, and this tech tends to move quickly. Also, I assume there’s no rule against using prompts to create single scenes, which the filmmaker can string together to make a story.

    We don’t have that long to find out if cold technology can accurately peer into our warm human hearts. The shorts will screen on June 15 and there’s a conversation with the various filmmakers immediately following the debut.

    This follows a spate of agreements between OpenAI and media companies. Vox Media, The Atlantic, News Corp, Dotdash Meredith and even Reddit have all struck deals with OpenAI to let the company train its models on their content. Meanwhile, Meta and Google are looking for content to train their models. It looks like we are going to get this “AI creates everything” future, whether we want it or not.

    [ad_2]

    Lawrence Bonk

    Source link

  • Foreign Influence Campaigns Don’t Know How to Use AI Yet Either

    Foreign Influence Campaigns Don’t Know How to Use AI Yet Either

    [ad_1]

    Today, OpenAI released its first threat report, detailing how actors from Russia, Iran, China, and Israel have attempted to use its technology for foreign influence operations across the globe. The report named five different networks that OpenAI identified and shut down between 2023 and 2024. In the report, OpenAI reveals that established networks like Russia’s Doppelganger and China’s Spamouflage are experimenting with how to use generative AI to automate their operations. They’re also not very good at it.

    And while it’s a modest relief that these actors haven’t mastered generative AI to become unstoppable forces for disinformation, it’s clear that they’re experimenting, and that alone should be worrying.

    The OpenAI report reveals that influence campaigns are running up against the limits of generative AI, which doesn’t reliably produce good copy or code. It struggles with idioms—which make language sound more reliably human and personal—and also sometimes with basic grammar (so much so that OpenAI named one network “Bad Grammar.”) The Bad Grammar network was so sloppy that it once revealed its true identity: “As an AI language model, I am here to assist and provide the desired comment,” it posted.

    One network used ChatGPT to debug code that would allow it to automate posts on Telegram, a chat app that has long been a favorite of extremists and influence networks. This worked well sometimes, but other times it led to the same account posting as two separate characters, giving away the game.

    In other cases, ChatGPT was used to create code and content for websites and social media. Spamouflage, for instance, used ChatGPT to debug code to create a WordPress website that published stories attacking members of the Chinese diaspora who were critical of the country’s government.

    According to the report, the AI-generated content didn’t manage to break out from the influence networks themselves into the mainstream, even when shared on widely used platforms like X, Facebook, and Instagram. This was the case for campaigns run by an Israeli company seemingly working on a for-hire basis and posting content that ranged from anti-Qatar to anti-BJP, the Hindu-nationalist party currently in control of the Indian government.

    Taken altogether, the report paints a picture of several relatively ineffective campaigns with crude propaganda, seemingly allaying fears that many experts have had about the potential for this new technology to spread mis- and disinformation, particularly during a crucial election year.

    But influence campaigns on social media often innovate over time to avoid detection, learning the platforms and their tools, sometimes better than the employees of the platforms themselves. While these initial campaigns may be small or ineffective, they appear to be still in the experimental stage, says Jessica Walton, a researcher with the CyberPeace Institute who has studied Doppelganger’s use of generative AI.

    In her research, she found the network would use real-seeming Facebook profiles to post articles, often around divisive political topics. “The actual articles are written by generative AI,” she says. “And mostly what they’re trying to do is see what will fly, what Meta’s algorithms will and won’t be able to catch.”

    In other words, expect them only to get better from here.

    [ad_2]

    Vittoria Elliott

    Source link