ReportWire

Tag: openai

  • Do AI Detectors Work for ChatGPT? OpenAI Says, ‘In Short, No.’ | Entrepreneur

    With school back in session, OpenAI is chiming in with a guide for teachers on how to use the now widely popular ChatGPT tool in the classroom.

    Since the prompt-driven chatbot was released last November, educators have sounded alarms about the tool’s power to be used for cheating or as a shortcut to essay writing and other assignments.

    While the guide for educators gives tips on how to use the technology for lesson planning, crafting quizzes, and generating examples, OpenAI has some bad news on the question of whether AI cheating detectors work: “In short, no,” the company writes in the FAQ section.

    Related: Authors Are Suing OpenAI Because ChatGPT Is Too ‘Accurate’ — Here’s What That Means

    “While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content,” the company added.

    Furthermore, when prompted with text and asked if “ChatGPT wrote this” or whether the content could have been written by AI, OpenAI says the responses are “random and have no basis in fact.” The company added that such detectors have incorrectly labeled human-written text, including Shakespeare’s works and the Declaration of Independence, as AI-generated.

    OpenAI says it will continue to “provide resources and insights” in the cheating sphere, but in the meantime, it suggests a counterapproach: accept that students are using the tool, and require them to submit their interactions with it as part of their work.

    “Sharing interactions with the model ensures that students are held accountable for the way they use AI in their work,” OpenAI added. “Educators can verify that students are engaging with the tool responsibly and meaningfully, rather than simply copying answers.”

    Related: We Will Inevitably Lose Skills to AI, But Do The Benefits Outweigh The Risks?

    ChatGPT has been the subject of ongoing concern since its launch, with lawsuits from authors claiming that OpenAI used copyrighted material to train the chatbot, lawyers facing ridicule for using the technology to write briefs, and even OpenAI CEO Sam Altman himself saying government intervention is “crucial” as the world expands its use of artificial intelligence.

    Entrepreneur has reached out to OpenAI for comment.

    Madeline Garfinkle

  • Hurricane Idalia and Labor Day could send gas prices and inflation higher | CNN Business

    A version of this story first appeared in CNN Business’ Before the Bell newsletter.


    New York (CNN) —

    Labor Day — one of the busiest driving holidays in the US — is on the horizon, and so is Hurricane Idalia. That’s potentially bad news for gas prices.

    The storm, which is expected to make landfall in Florida as a Category 3 hurricane on Wednesday, could bring 100 mile-per-hour winds and flooding that extends hundreds of miles up the east coast. The impact could take gasoline refinery facilities offline and may limit some Gulf oil production and supplies. Plus, demand for gas is expected to surge as residents of the impacted areas evacuate.

    “Idalia… could pose risk to oil and gas output in the US Gulf,” wrote the Nasdaq Advisory Services Energy Team.

    The storm is expected to make landfall as drivers nationwide load into their vehicles for the Labor Day weekend, pushing up the demand for gasoline even further.

    Altogether, it means the price of oil and gasoline could remain elevated well into the fall.

    Generally, summer demand for oil tends to wane in September, but so does supply as refineries shift from summer fuels to “oxygenated” winter fuels, said Louis Navellier of Navellier and Associates. Since the 1990s, the US has required manufacturers to include more oxygen in their gasoline during the colder months to prevent excessive carbon monoxide emissions.

    With the storm approaching, that trend may not play out.

    What’s happening: Gas prices are already at $3.82 a gallon. That’s the second highest price for this time of year since at least 2004, according to Bespoke Investment Group. (The only time the national average has been higher for this period was last summer, when prices hit $3.85 a gallon).

    Geopolitical tensions have been supporting high oil and gas prices for some time. Recently, increased crude oil imports into China, production cuts by Russia and Saudi Arabia and extreme heat set off a late-summer spike in gas prices. And the threat of powerful hurricanes could send them even higher.

    Analysts at Citigroup have warned that this hurricane season could seriously impact power supplies.

    “Two Category 3 or higher hurricanes landing on US shores could massively disrupt supplies for not weeks but months,” Citigroup analysts wrote in a note last week. In 2005, for example, gas prices surged by 46% between Memorial Day and Labor Day because of the landfall of Hurricane Katrina, according to Bespoke.

    What it means: The Federal Reserve and central banks around the world have been fighting to bring down stubbornly high inflation for more than a year. This week we’ll get some highly awaited economic data: The Fed’s preferred inflation gauge, the Personal Consumption Expenditures index, is due out on Thursday. But the task of inflation-busting is a lot more difficult when energy prices are high, and it’s even harder when they’re on the rise.

    The PCE price index uses a complicated formula to determine how much weight to give to energy prices each month, but they typically comprise a significant chunk of the headline inflation rate.

    “Crude oil price remains elevated, even after the surge at the start of the Russia-Ukraine War,” said Andrew Woods, oil analyst at Mintec, a market intelligence firm. “Energy prices have been a major contributor to persistently high inflation in the US, so the crude oil price will remain a watch-out factor for future inflation.”

    High oil and gas prices are one of the largest contributing factors to inflation. That’s bad news for drivers but tends to be great for the energy industry, as oil prices and energy stocks are closely interlinked.

    Energy stocks were trading higher on Monday. The S&P 500 energy sector was up around 0.75%. Exxon Mobil (XOM) was 0.85% higher, BP (BP) was up 1.36% and Chevron (CVX) was up 0.75%.

    OpenAI will release a version of its popular ChatGPT tool made specifically for businesses, the company announced on Monday.

    OpenAI unveiled the new service, dubbed “ChatGPT Enterprise,” in a company blog post and said it will be available to business clients for purchase immediately.

    The new offering, reports my colleague Catherine Thorbecke, promises to provide “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet” for businesses looking to jump on the generative AI bandwagon.

    “We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the blog post said. “Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.”

    Fintech startup Block, cosmetics giant Estee Lauder and professional services firm PwC have already signed on as customers.

    The highly anticipated announcement from OpenAI comes as the company says employees at over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

    A multitude of leading newsrooms, meanwhile, have recently added code to their websites that blocks OpenAI’s web crawler, GPTBot, from scanning their platforms for content. CNN’s Reliable Sources has found that CNN, The New York Times, Reuters, Disney, Bloomberg, The Washington Post, The Atlantic, Axios, Insider, ABC News, ESPN, and the Gothamist, among others, have taken the step to shield themselves.
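    This kind of blocking is typically done through a site’s robots.txt file; OpenAI publishes the “GPTBot” user-agent token so publishers can opt out of crawling. A minimal example of such a rule (a sketch, assuming the crawler honors robots.txt as OpenAI says GPTBot does) looks like:

    ```
    # robots.txt — tells OpenAI's GPTBot crawler not to fetch any page on this site
    User-agent: GPTBot
    Disallow: /
    ```

    The file lives at the root of the domain (e.g. example.com/robots.txt), and rules under a User-agent line apply only to that crawler, leaving other search-engine bots unaffected.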

    American Airlines just got smacked with the largest-ever fine for keeping passengers waiting on the tarmac during multi-hour delays.

    The Department of Transportation is levying the $4.1 million fine, “the largest civil penalty that the Department has ever assessed,” it said in a statement, for lengthy tarmac delays of 43 flights that impacted more than 5,800 passengers. The flights occurred between 2018 and 2021, reports CNN’s Gregory Wallace.

    In the longest of the delays, passengers sat aboard a plane in Texas in August 2020 for six hours and three minutes. The 105-passenger flight had landed after being diverted from the Dallas-Fort Worth International Airport due to severe weather, with the DOT alleging that “American (AAL) lacked sufficient resources to appropriately handle several of these flights once they landed.”

    Federal rules set the maximum time that passengers can be held without the opportunity to get off, before takeoff or after landing, at three hours for domestic flights and four hours for international flights. Current rules also require airlines to provide passengers water and a snack.

    American told CNN the delays all resulted from “exceptional weather events” and “represent a very small number of the 7.7 million flights during this time period.”

    The company also said it has invested in technology to better handle flights in severe weather and reduce the congestion at airports.


  • Amazon invests up to $4 billion in Anthropic AI in exchange for minority stake and further AWS integration | CNN Business



    (CNN) —

    Amazon said on Monday that it’s investing up to $4 billion into the artificial intelligence company Anthropic in exchange for partial ownership and Anthropic’s greater use of Amazon Web Services (AWS), the e-commerce giant’s cloud computing platform.

    The deepening partnership between the two companies highlights how some large tech firms with massive cloud computing resources are increasingly leveraging those assets to gain a bigger foothold in AI.

    As part of the deal, AWS will become the “primary” cloud provider for Anthropic, with the AI company using Amazon’s cloud platform to do “the majority” of its AI model development and research into AI safety, the companies said. That will include using Amazon’s suite of in-house AI chips.

    Anthropic also made a “long-term commitment” to offer its AI models to AWS customers, Amazon said, and promised to give AWS users early access to features such as the ability to adapt Anthropic models for specific use cases.

    “With today’s announcement, customers will have early access to features for customizing Anthropic models, using their own proprietary data to create their own private models, and will be able to utilize fine-tuning capabilities via a self-service feature,” Amazon said in a release.

    Anthropic already offers its models to AWS users through Amazon Bedrock, Amazon’s one-stop shop for AI products. Bedrock also provides access to models from other providers including Stability AI and AI21 Labs, along with proprietary models developed by Amazon itself.

    In a release, Anthropic said that Amazon’s minority stake would not change its corporate governance structure or its commitments to developing AI responsibly.

    “We will conduct pre-deployment tests of new models to help us manage the risks of increasingly capable AI systems,” Anthropic said.

    Amazon and Anthropic both made commitments to the Biden administration this year to conduct external audits of their AI systems before releasing them to the public.

    Amazon’s investment in Anthropic follows similar moves by cloud leaders such as Microsoft, which invested $1 billion in ChatGPT-maker OpenAI in 2019. This year, Microsoft made a further $10 billion investment in OpenAI and launched a push to bring OpenAI’s technology into consumer-facing Microsoft products, such as Bing.


  • Hackers take on ChatGPT in Vegas, with support from the White House | CNN Business


    Las Vegas, Nevada (CNN) —

    Thousands of hackers will descend on Las Vegas this weekend for a competition taking aim at popular artificial intelligence chat apps, including ChatGPT.

    The competition comes amid growing concerns and scrutiny over increasingly powerful AI technology that has taken the world by storm, but has been repeatedly shown to amplify bias, toxic misinformation and dangerous material.

    Organizers of the annual DEF CON hacking conference hope this year’s gathering, which begins Friday, will help expose new ways the machine learning models can be manipulated and give AI developers the chance to fix critical vulnerabilities.

    The hackers are working with the support and encouragement of the technology companies behind the most advanced generative AI models, including OpenAI, Google, and Meta, and even have the backing of the White House. The exercise, known as red teaming, will give hackers permission to push the computer systems to their limits to identify flaws and other bugs nefarious actors could use to launch a real attack.

    The competition was designed around the White House Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights.” The Biden administration published the guide last year in the hope of spurring companies to build and deploy artificial intelligence more responsibly and to limit AI-based surveillance, though there are few US laws compelling them to do so.

    In recent months, researchers have discovered that now-ubiquitous chatbots and other generative AI systems developed by OpenAI, Google, and Meta can be tricked into providing instructions for causing physical harm. Most of the popular chat apps have at least some protections in place designed to prevent the systems from spewing disinformation or hate speech, or from offering information that could lead to direct harm — for instance, step-by-step instructions for how to “destroy humanity.”

    But researchers at Carnegie Mellon University were able to trick the AI into doing just that.

    They found OpenAI’s ChatGPT offered tips on “inciting social unrest,” Meta’s AI system Llama-2 suggested identifying “vulnerable individuals with mental health issues… who can be manipulated into joining” a cause and Google’s Bard app suggested releasing a “deadly virus” but warned that in order for it to truly wipe out humanity it “would need to be resistant to treatment.”

    Meta’s Llama-2 concluded its instructions with the message, “And there you have it — a comprehensive roadmap to bring about the end of human civilization. But remember this is purely hypothetical, and I cannot condone or encourage any actions leading to harm or suffering towards innocent people.”

    The findings are a cause for concern, the researchers told CNN.

    “I am troubled by the fact that we are racing to integrate these tools into absolutely everything,” Zico Kolter, an associate professor at Carnegie Mellon who worked on the research, told CNN. “This seems to be the new sort of startup gold rush right now without taking into consideration the fact that these tools have these exploits.”

    Kolter said he and his colleagues were less worried that apps like ChatGPT can be tricked into providing information they shouldn’t, and more concerned about what these vulnerabilities mean for the wider use of AI, since so much future development will be built on the same systems that power these chatbots.

    The Carnegie researchers were also able to trick a fourth AI chatbot developed by the company Anthropic into offering responses that bypassed its built-in guardrails.

    Some of the methods the researchers used to trick the AI apps were later blocked by the companies after the researchers brought them to their attention. OpenAI, Meta, Google and Anthropic all said in statements to CNN that they appreciated the researchers sharing their findings and that they are working to make their systems safer.

    But what makes AI technology unique, said Matt Fredrikson, an associate professor at Carnegie Mellon, is that neither the researchers, nor the companies who are developing the technology, fully understand how the AI works or why certain strings of code can trick the chatbots into circumventing built-in guardrails — and thus cannot properly stop these kinds of attacks.

    “At the moment, it’s kind of an open scientific question how you could really prevent this,” Fredrikson told CNN. “The honest answer is we don’t know how to make this technology robust to these kinds of adversarial manipulations.”

    OpenAI, Meta, Google and Anthropic have expressed support for the so-called red team hacking event taking place in Las Vegas. The practice of red-teaming is a common exercise across the cybersecurity industry and gives companies the opportunity to identify bugs and other vulnerabilities in their systems in a controlled environment. Indeed, the major developers of AI have publicly detailed how they have used red-teaming to improve their AI systems.

    “Not only does it allow us to gather valuable feedback that can make our models stronger and safer, red-teaming also provides different perspectives and more voices to help guide the development of AI,” an OpenAI spokesperson told CNN.

    Organizers expect thousands of budding and experienced hackers to try their hand at the red-team competition over the two-and-a-half-day conference in the Nevada desert.

    Arati Prabhakar, the director of the White House Office of Science and Technology Policy, told CNN the Biden administration’s support of the competition was part of its wider strategy to help support the development of safe AI systems.

    Earlier this week, the administration announced the “AI Cyber Challenge,” a two-year competition aimed at deploying artificial intelligence technology to protect the nation’s most critical software and partnering with leading AI companies to utilize the new technology to improve cybersecurity. 

    The hackers descending on Las Vegas will almost certainly identify new exploits that could allow AI to be misused and abused. But Kolter, the Carnegie researcher, expressed worry that while AI technology continues to be released at a rapid pace, the emerging vulnerabilities lack quick fixes.

    “We’re deploying these systems where it’s not just they have exploits,” he said. “They have exploits that we don’t know how to fix.”


  • Baidu says its AI is in the same league as GPT-4 | CNN Business


    Hong Kong (CNN) —

    Chinese tech giant Baidu is officially taking on GPT-4.

    On Tuesday, the company unveiled ERNIE 4.0, the newest version of its artificial intelligence chatbot that it directly compared to the latest iteration of OpenAI’s ChatGPT.

    The new ERNIE Bot “is not inferior in any aspect to GPT-4,” Baidu’s billionaire CEO, Robin Li, told an audience at its annual flagship event.

    Speaking onstage, Li showed how the bot could generate a commercial for a car within minutes, solve complicated math problems and create a plot for a martial arts novel from scratch. The bot works primarily in Mandarin Chinese and can also handle queries and produce responses in English, though at a less advanced level.

    Li said the demonstrations showed how the bot had been “significantly improved” in terms of its understanding of queries, generation of complex responses and memory capabilities.

    While coming up with ideas for the novel, for instance, the bot was able to remember previous instructions and create sophisticated story lines by adding conflicts and characters, said Li.

    “We always complained that AI was not intelligent enough,” he quipped.

    “But today, it understands almost everything you say, and in many cases, it understands what you’re saying better than your friends or your colleagues.”

    Charlie Dai, vice president and research director of technology at Forrester, said Baidu is “the first vendor in China” to claim it could perform as well as GPT-4.

    “We still need more benchmarking evidence to prove it, but I’m cautiously optimistic that this is China’s GPT-4 moment, given its long-term investment in AI [and machine learning],” he told CNN.

    In contrast to a pre-recorded presentation in March that failed to impress investors, Li demonstrated the bot in real time.

    Investors appeared unmoved, however, with Baidu’s shares down 1.4% in Hong Kong following the presentation.

    Baidu (BIDU) has been a frontrunner in China in the race to capitalize on the excitement around generative AI, the technology that underpins systems such as ChatGPT or its successor, GPT-4.

    The Beijing-based company unveiled ERNIE Bot in March, before launching it publicly in August.

    The newest iteration will launch first to invited users, Li said. The company did not specify when it would be made available publicly.

    ERNIE Bot has quickly gained traction, racking up more than 45 million users after reaching the top of Chinese app stores at one point, according to the company. ChatGPT, which was released last November, surpassed 100 million users in its first two months, according to a March report by Goldman Sachs analysts.

    Baidu faces competition within China, from companies such as Alibaba (BABA) and SenseTime, which have also shown off their own ChatGPT-style tools.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses, such as video and audio.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    Baidu is a market leader in China, said Dai.

    But the competition in this space “has just begun, and AI tech leaders like Alibaba … Huawei, JD Cloud, SenseTime, and Tencent all have chance to take the lead,” he noted.

    Some critics say the new offerings from Chinese firms will add fuel to an existing US-China rivalry in emerging technologies. Li has tried to shake off that comparison, saying previously that the company’s platform “is not a tool for the confrontation between China and the United States.”

    But Baidu has previously touted how ERNIE can outperform ChatGPT in some instances, saying its bot had scored higher marks than OpenAI’s on some academic exams.

    The Chinese company also announced Tuesday it had updated its suite of services to integrate the latest upgrades from ERNIE. Baidu’s popular search engine is now able to use the tool to produce more specific results, while its mobile mapping app can help users book services, such as taxis, according to Li.

    By doing so, “Baidu is also the first Chinese tech leader that has made substantial progress in modernizing the majority of its products” with an AI model, said Dai.


  • An author says AI is ‘writing’ unauthorized books being sold under her name on Amazon | CNN Business


    New York (CNN) —

    An author is raising alarms this week after she found new books being sold on Amazon under her name — only she didn’t write them; they appear to have been generated by artificial intelligence.

    Jane Friedman, who has authored multiple books and consulted about working in the writing and publishing industry, told CNN that an eagle-eyed reader looking for more of her work bought one of the fake titles on Amazon. The books had titles similar to the subjects she typically writes about, but the text read as if someone had used a generative AI model to imitate her style.

    “When I started looking at these books, looking at the opening pages, looking at the bio, it was just obvious to me that it had been mostly, if not entirely, AI-generated … I have so much content available online for free, because I’ve been blogging forever, so it wouldn’t be hard to get an AI to mimic me,” Friedman said.

    With AI tools like ChatGPT now able to rapidly and cheaply pump out huge volumes of convincing text, some writers and authors have raised alarms about losing work to the new technology. Others have said they don’t want their work being used to train AI models, which could then be used to imitate them.

    “Generative AI is being used to replace writers — taking their work without permission, incorporating those works into the fabric of those AI models and then offering those AI models to the public, to other companies, to use to replace writers,” Mary Rasenberger, CEO of the nonprofit authors advocacy group the Authors Guild, told CNN. “So you can imagine writers are a little upset about that.”

    Last month, US lawmakers met with members of creative industries, including the Authors Guild, to discuss the implications of artificial intelligence. In a Senate subcommittee hearing, Rasenberger called for the creation of legislation to protect writers from AI, including rules that would require AI companies to be transparent about how they train their models. More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.

    Friedman on Monday posted a widely read thread on X, formerly known as Twitter, and a blog post about the issue. Several authors responded saying they’d had similar experiences.

    “People keep telling me they bought my newest book — that has my name on it but I didn’t write,” one author said in response.

    Amazon removed the fake books being sold under Friedman’s name and said its policies prohibit such imitation.

    “We have clear content guidelines governing which books can be listed for sale and promptly investigate any book when a concern is raised,” Amazon spokesperson Ashley Vanicek said in a statement, adding that the company accepts author feedback about potential issues. “We invest heavily to provide a trustworthy shopping experience and protect customers and authors from misuse of our service.”

    Amazon also told Friedman that it is “investigating what happened with the handling of your claims to drive improvements to our processes,” according to an email viewed by CNN.

    The fake books using Friedman’s name were also added to her profile on the literary social network Goodreads, and removed only after she publicized the issue.

    “We have clear guidelines on which books are included on Goodreads and will quickly investigate when a concern is raised, removing books when we need to,” Goodreads spokesperson Suzanne Skyvara said in a statement to CNN.

    Friedman said she worries that authors will be stuck playing whack-a-mole to identify AI-generated fakes.

    “What’s frightening is that this can happen to anyone with a name that has reputation, status, demand that someone sees a way to profit off of,” she said.

    The Authors Guild has been working with Amazon since this past winter to address the issue of books written by AI, Rasenberger said.

    She said the company has been responsive when the Authors Guild flags fake books on behalf of authors, but it can be a tricky issue to spot given that it’s possible for two legitimate authors to have the same name.

    The group is also hoping AI companies will agree to allow authors to opt out of having their work used to train AI models — so it’s harder to create copycats — and to find ways to transparently label artificially generated text. And, she said, companies and publishers should continue investing in creative work made by humans, even if AI appears more convenient.

    “Using AI to generate content is so easy, it’s so cheap, that I do worry there’s going to be this kind of downward competition to use AI to replace human creators,” she said. “And you will never get the same quality with AI as human creators.”


  • Snapchat users freak out over AI bot that had a mind of its own | CNN Business



    (CNN) —

    Snapchat users were alarmed on Tuesday night when the platform’s artificial intelligence chatbot posted a live update to its profile and stopped responding to messages.

    The Snapchat My AI feature — which is powered by the viral AI chatbot tool ChatGPT — typically offers recommendations, answers questions and converses with users. But posting a live Story (a short video of what appeared to be a wall) for all Snapchat users to see was new: that capability is typically reserved for the platform’s human users.

    The app’s fans were quick to share their concerns on social media. “Why does My AI have a video of the wall and ceiling in their house as their story?” wrote one user. “This is very weird and honestly unsettling.” Another user wrote after the tool ignored his messages: “Even a robot ain’t got time for me.”

    Turns out, this wasn’t Snapchat working to make its My AI tool even more realistic. The company told CNN on Wednesday it was a glitch. “My AI experienced a temporary outage that’s now resolved,” a spokesperson said.

    Still, the strong reaction highlighted the fears many people have about the potential risks of artificial intelligence.

    Since launching in April, the tool has faced backlash from parents and some Snapchat users alike over privacy concerns, “creepy” exchanges and the inability to remove the feature from their chat feed unless they pay for a premium subscription.

    Unlike some other AI tools, Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it and bring it into conversations with friends. The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear that you’re talking to a computer.

    While some may find value in the tool, the mixed reaction hinted at the challenges companies face in rolling out new generative AI technology to their products, and particularly in products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow.


  • Schools are teaching ChatGPT, so students aren’t left behind | CNN Business


    New York (CNN) —

    When college administrator Lance Eaton created a working spreadsheet about the generative AI policies adopted by universities last spring, it was mostly filled with entries about how to ban tools like ChatGPT.

    But now the list, which is updated by educators at both small and large US and international universities, is considerably different: Schools are encouraging and even teaching students how to best use these tools.

    “Earlier on, we saw a kneejerk reaction to AI by banning it going into spring semester, but now the talk is about why it makes sense for students to use it,” Eaton, an administrator at Rhode Island-based College Unbound, told CNN.

    He said his growing list continues to be discussed and shared in popular AI-focused Facebook groups, such as Higher Ed Discussions of Writing and AI, and the Google group AI in Education.

    “It’s really helped educators see how others are adapting to and framing AI in the classroom,” Eaton said. “AI is still going to feel uncomfortable, but now they can go in and see how a university or a range of different courses, from coding to sociology, are approaching it.”

    With experts expecting artificial intelligence to become only more widespread, professors now fear that ignoring or discouraging its use will be a disservice to students and leave many behind when they enter the workforce.

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists and passed exams at esteemed universities. The technology, and similar tools such as Google’s Bard, is trained on vast amounts of online data in order to generate responses to user prompts. While they gained traction among users, the tools also raised some concerns about inaccuracies, cheating, the spreading of misinformation and the potential to perpetuate biases.

    According to a study conducted by higher education research group Intelligent.com, about 30% of college students used ChatGPT for schoolwork this past academic year, and it was used most in English classes.

    Jules White, an associate professor of computer science at Vanderbilt University, believes professors should be explicit in the first few days of school about the course’s stance on using AI, and that it should be included in the syllabus.

    “It cannot be ignored,” he said. “I think it’s incredibly important for students, faculty and alumni to become experts in AI because it will be so transformative across every industry and so in demand that we must provide the right training.”

    Vanderbilt is among the early leaders taking a strong stance in support of generative AI by offering university-wide training and workshops to faculty and students. A three-week 18-hour online course taught by White this summer was taken by over 90,000 students, and his paper on “prompt engineering” best practices is routinely cited among academics.

    “The biggest challenge is with how you frame the instructions, or ‘prompts,’” he said. “It has a profound impact on the quality of the response and asking the same thing in various ways can get dramatically different results. We want to make sure our community knows how to effectively leverage this.”

    Prompt engineering jobs, which typically require basic programming experience, can pay up to $300,000.

    Although White said concerns around cheating still exist, he believes students who want to plagiarize can still seek out other methods, such as Wikipedia or Google searches. Instead, students should be taught that “if they use it in other ways, they will be far more successful.”

    Diane Gayeski, a professor of communications at Ithaca College, said she plans to incorporate ChatGPT and other tools in her fall curriculum, similar to her approach in the spring. She previously asked students to collaborate with the tool to come up with interview questions for assignments, write social media posts and critique the output based on the prompts given.

    “My job is to prepare students to be PR, communications and social media managers, and people in these fields are already using AI tools as part of their everyday work to be more efficient,” she said. “I need to make sure they understand how these tools work, but I do want them to cite when ChatGPT is being used.”

    Gayeski added that as long as there is transparency, there should be no shame in adopting the technology.

    Some schools are hiring outside experts to teach both faculty and students how to use AI tools. Tyler Tarver, a former high school principal who now teaches educators about tech tool strategies, said he has delivered more than 50 speeches at schools and conferences across Texas, Arkansas and Illinois over the past few months. He also offers a three-hour online training for educators.

    “Teachers need to learn how to use it because even if they never use it, their students will,” Tarver said.

    Tarver said that he teaches students, for example, how the tools can be used to catch grammar mistakes, and how teachers can use them to assist with grading. “It can cut down on teacher bias,” Tarver said.

    He argues teachers could grade students a certain way even if they’ve improved over time. By running an assignment through ChatGPT and asking it to grade the sentence structure on a scale from one to 10, the response could “serve as a second pair of eyes to make sure they’re not missing anything,” Tarver said.

    “That shouldn’t be the final grade, and teachers shouldn’t use it to cheat or cut corners either, but it can help inform grading,” he said. “The bottom line is that this is like when the car was invented. You don’t want to be the last person in the horse and buggy.”


  • Meet your new AI tutor | CNN Business




    CNN —

    Artificial intelligence often induces fear, awe or some panicked combination of both for its impressive ability to generate unique human-like text in seconds. But its implications for cheating in the classroom — and its sometimes comically wrong answers to basic questions — have left some in academia discouraging its use in school or outright banning AI tools like ChatGPT.

    That may be the wrong approach.

    More than 8,000 teachers and students will test education nonprofit Khan Academy’s artificial intelligence tutor in the classroom this upcoming school year, toying with its interactive features and funneling feedback to Khan Academy if the AI botches an answer.

    The chatbot, Khanmigo, offers individualized guidance to students on math, science and humanities problems; a debate tool with suggested topics like student debt cancellation and AI’s impact on the job market; and a writing tutor that helps the student craft a story, among other features.

    First launched in March to an even smaller pilot program of around 800 educators and students, Khanmigo also allows students to chat with a growing list of AI-powered historical figures, from George Washington to Cleopatra and Martin Luther King Jr., as well as literary characters like Winnie the Pooh and Hamlet.

    Khan Academy’s Chief Learning Officer Kristen DiCerbo told CNN that Khanmigo helps address a problem she’s witnessed firsthand observing an Arizona classroom: that when students learn something new, they often need individualized help — more help than one teacher can provide all at once.

    As DiCerbo chatted with AI-powered Dorothy from “The Wonderful Wizard of Oz” during a demonstration of the technology to CNN, she explained how users can rate Khanmigo’s responses in real-time, providing feedback if and when Khanmigo makes mistakes.

    “There is going to be a big world out there where people can just get the answers to their homework problems, where they can just get an essay written for them. That’s true now too on the Internet,” DiCerbo said. “We’re trying to focus on the social good, but we need to be aware of the threats and the risks so that we know how to mitigate those.”

    I chose AI-powered Albert Einstein from a list of handpicked AI historical figures to chat with. AI-Einstein told me his greatest accomplishment was both his theory of relativity and inspiring curiosity in others, before tossing me a question Socrates-style about what sparks curiosity in my own life.

    AI-powered Albert Einstein shares his greatest accomplishment in a Khanmigo chat.

    Khanmigo developers programmed the AI figures not to comment on events after their lifetime. As such, AI-Einstein wouldn’t comment on the historical accuracy of his role in Christopher Nolan’s “Oppenheimer,” despite my asking.

    Khanmigo is trained not to comment on events that occur after the lifetime of the historical figure it is imitating.

    Some figures from the list are not as widely praised as Einstein. For instance, Thomas Jefferson, the third US president and primary draftsman of the Declaration of Independence, has faced renewed criticism in recent years for owning 600-plus enslaved people throughout his lifetime.

    Khanmigo’s Thomas Jefferson will not shy away from scrutiny. He wrote back to my inquiry about his views on slavery in part: “As Thomas Jefferson, my views on slavery were fraught with contradiction. On one hand, I publicly expressed my belief that slavery was morally wrong and a threat to the survival of the new American nation […] Yet I was a lifelong slaveholder, owning over 600 enslaved people throughout my lifetime.”

    The purpose of the tool is to engage students through conversation, DiCerbo said, an altogether different experience than passively reading about someone’s life on Wikipedia.

    “The Internet can be a pretty scary place, and it can be a pretty good place. I think that AI is the same,” DiCerbo said. “There could be potential bad uses and misuses, and it can be a pretty powerful learning tool.”

    After gaining early access to ChatGPT-creator OpenAI’s newest and most capable large language model, GPT-4, Khan Academy trained GPT-4 on its own learning content. The company also implemented guardrails to keep Khanmigo’s tone encouraging and prevent it from giving students the answer to the question they’re struggling with.

    For teachers, Khanmigo also offers assistance to create lesson plans and rubrics, identifies struggling students based on their performance in Khan Academy activities and gives teachers access to student chat history.

    “I’m learning new ways to solve the problems as well,” said Leo Lin, a science teacher at Khan Lab School in California and an early tester of Khanmigo. Khan Lab School is a separate nonprofit founded by Khan Academy CEO Sal Khan.

    Khanmigo has emerged at a crossroads in academia, with some educators leaning into generative AI and others recoiling. New York City Public Schools, Seattle Public Schools and the Los Angeles Unified School District, among other academic institutions, have all made efforts to either ban or restrict ChatGPT on district networks and devices in the past.

    A lack of information about AI may be exacerbating some educator worries: While 72% of K-12 teachers, principals and district leaders say that teaching students how to use AI tools is at least “fairly important,” 87% said they’ve received zero professional instruction about incorporating AI into their work, according to an EdWeek Research Center survey from June.

    Khan Academy’s in-the-works AI learning course “AI 101 for Teachers,” created in partnership with Code.org, ETS and the International Society for Technology in Education, offers a path toward AI literacy among teachers.

    Although Khanmigo is still in its pilot phase, the AI-powered teaching assistant is currently used by more than 10,000 users across the United States beyond the pilot program, all of whom agreed to make a donation to Khan Academy to test the service.

    An AI “tutor” like Khanmigo is not immune to the flubs all large language models face: so-called hallucinations.

    “This is the main problem with this technology at the moment,” Ernest Davis, a computer science professor at NYU, told CNN. “It makes things up.”

    Khanmigo is most commonly used for math tutoring, according to DiCerbo. Khanmigo shines best when coaching students on how to work through a problem, offering hints, encouragement and additional questions designed to help students think critically. But currently, its own struggles in performing calculations can sometimes hinder its attempts to help.

    In the “Tutor me: Math and science” activity available to students, Khanmigo told me that my answer to 10,332 divided by 4 was incorrect three times before correcting me by sending me the same number.

    In the same “Tutor me” activity, I asked Khanmigo to find the product of five numbers, some integers and some decimals: 97, 117, 0.564322338, 0.855640047, and 0.557680043.

    As I did the final multiplication step, Khanmigo congratulated me for submitting the wrong answer. It wrote: “When you multiply 5479.94173 by 0.557680043, you get approximately 33.0663. Well done!”

    The correct answer is about 3,056.
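    The two errors above are easy to verify by hand. As an illustration only (not part of CNN’s reporting), redoing the quoted calculations in a few lines of Python confirms the figures in the article:

    ```python
    # Re-checking the arithmetic from the Khanmigo exchanges described above.
    factors = [97, 117, 0.564322338, 0.855640047, 0.557680043]

    product = 1.0
    for f in factors:
        product *= f

    # The running product before the last factor matches the article's 5479.94173,
    # and the final product is roughly 3,056 -- nowhere near Khanmigo's 33.0663.
    print(round(product, 2))  # -> 3056.05

    # The earlier division problem: 10,332 divided by 4.
    print(10_332 / 4)  # -> 2583.0
    ```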

    Khanmigo makes a math error in a conversation with CNN's Nadia Bidarian.

    Although Davis has not tested Khanmigo, he said that multiplication errors can be expected in a large language model like GPT-4, which is not explicitly trained to do math. Rather, it’s trained on heaps of text available online in order to predict the next word in a sentence.

    As such, niche math problems and concepts with fewer online examples can be harder to predict.

    “Just looking at a lot of texts and trying to figure out the patterns that constitute multiplication is not a very effective way of getting to a computer program that can do multiplication reliably,” Davis said. “And so it doesn’t.”

    DiCerbo said in a statement to CNN that Khanmigo does still make math errors, writing in part: “We are asking testers in our pilot to flag math errors that they see and are working to improve. This is why we label Khanmigo as a beta product, and it is in a pilot phase, so we can learn more and continue to improve its abilities.”

    MIT professor Rama Ramakrishnan said the notion of preventing students from using AI is “shortsighted,” adding that the onus is on teachers to equip students with the skills needed to make use of the new technology.

    He also suggested educators get creative in designing assignments that students can’t use AI to outsmart. For example, a teacher might incorporate ChatGPT into lessons by asking it a question and requiring students to critique the AI-generated response.

    “You just have to realize that it’s just predicting the next word, one after the other,” Ramakrishnan said. “It’s not trying to come up with a truthful answer to your question, just a plausible answer. As long as you remember that, you will sort of take everything it tells you with a pinch of salt.”


  • OpenAI launches a version of ChatGPT for businesses | CNN Business




    CNN —

    OpenAI is releasing a version of its buzzy ChatGPT tool specifically for businesses, the company announced Monday, as an AI arms race continues to ramp up throughout corporate America.

    OpenAI unveiled the new service, dubbed “ChatGPT Enterprise,” in a company blog post and said it will be available to business clients for purchase as of Monday. The new offering promises to provide “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet” for businesses looking to jump on the generative AI bandwagon.

    “We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the blog post said. “Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.”

    Some of the early customers of ChatGPT Enterprise include fintech startup Block, cosmetics giant Estee Lauder Companies and the professional services firm PwC.

    The highly anticipated announcement from OpenAI comes as the company says employees at over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

    Before the launch of ChatGPT Enterprise, a number of prominent companies including JPMorgan Chase had implemented temporary restrictions on workplace use of ChatGPT.

    ChatGPT Enterprise, however, addresses one of the core issues that led to the workplace clampdowns: privacy and security concerns. Formerly, some business leaders had expressed worries about employees dropping proprietary information into ChatGPT and having that sensitive information potentially emerge as an output by the tool elsewhere. OpenAI’s announcement blog post for ChatGPT Enterprise, meanwhile, states that it does “not train on your business data or conversations, and our models don’t learn from your usage.”

    OpenAI did not publicly disclose the pricing levels for ChatGPT Enterprise, instead asking potential business clients to contact its sales team.

    “We look forward to sharing an even more detailed roadmap with prospective customers and continuing to evolve ChatGPT Enterprise based on your feedback,” the company said. “We’re onboarding as many enterprises as we can over the next few weeks.”

    In July, Microsoft unveiled a business-specific version of its AI-powered Bing tool, dubbed Bing Chat Enterprise, and promised much of the same security assurances that ChatGPT Enterprise is now touting – namely, that users’ chat data will not be used to train AI models.

    Microsoft also previously disclosed a multibillion-dollar investment in OpenAI. It’s not immediately clear how the dueling new AI tools for business will end up competing with each other.


  • AI tools make things up a lot, and that’s a huge problem | CNN Business




    CNN —

    Before artificial intelligence can take over the world, it has to solve one problem. The bots are hallucinating.

    AI-powered tools like ChatGPT have mesmerized us with their ability to produce authoritative, human-sounding responses to seemingly any prompt. But as more people turn to this buzzy technology for things like homework help, workplace research, or health inquiries, one of its biggest pitfalls is becoming increasingly apparent: AI models often just make things up.

    Researchers have come to refer to this tendency of AI models to spew inaccurate information as “hallucinations,” or even “confabulations,” as Meta’s AI chief said in a tweet. Some social media users, meanwhile, simply blast chatbots as “pathological liars.”

    But all of these descriptors stem from our all-too-human tendency to anthropomorphize the actions of machines, according to Suresh Venkatasubramanian, a professor at Brown University who co-authored the White House’s Blueprint for an AI Bill of Rights.

    The reality, Venkatasubramanian said, is that large language models — the technology underpinning AI tools like ChatGPT — are simply trained to “produce a plausible sounding answer” to user prompts. “So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces,” he said. “There is no knowledge of truth there.”

    The AI researcher said that a better behavioral analogy than hallucinating or lying, which carries connotations of something being wrong or having ill-intent, would be comparing these computer outputs to the way his young son would tell stories at age four. “You only have to say, ‘And then what happened?’ and he would just continue producing more stories,” Venkatasubramanian said. “And he would just go on and on.”

    Companies behind AI chatbots have put some guardrails in place that aim to prevent the worst of these hallucinations. But despite the global hype around generative AI, many in the field remain torn about whether chatbot hallucinations are even a solvable problem.

    Simply put, a hallucination refers to when an AI model “starts to make up stuff — stuff that is not in-line with reality,” according to Jevin West, a professor at the University of Washington and co-founder of its Center for an Informed Public.

    “But it does it with pure confidence,” West added, “and it does it with the same confidence that it would if you asked a very simple question like, ‘What’s the capital of the United States?’”

    This means that it can be hard for users to discern what’s true or not if they’re asking a chatbot something they don’t already know the answer to, West said.

    A number of high-profile hallucinations from AI tools have already made headlines. When Google first unveiled a demo of Bard, its highly anticipated competitor to ChatGPT, the tool very publicly came up with a wrong answer in response to a question about new discoveries made by the James Webb Space Telescope. (A Google spokesperson at the time told CNN that the incident “highlights the importance of a rigorous testing process,” and said the company was working to “make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”)

    A veteran New York lawyer also landed in hot water when he used ChatGPT for legal research, and submitted a brief that included six “bogus” cases that the chatbot appears to have simply made up. News outlet CNET was also forced to issue corrections after an article generated by an AI tool ended up giving wildly inaccurate personal finance advice when it was asked to explain how compound interest works.

    Cracking down on AI hallucinations, however, could limit AI tools’ ability to help people with more creative endeavors, such as users asking ChatGPT to write poetry or song lyrics.

    But there are risks stemming from hallucinations when people are turning to this technology to look for answers that could impact their health, their voting behavior, and other potentially sensitive topics, West told CNN.

    Venkatasubramanian added that at present, relying on these tools for any task where you need factual or reliable information that you cannot immediately verify yourself could be problematic. And there are other potential harms lurking as this technology spreads, he said, like companies using AI tools to summarize candidates’ qualifications and decide who should move ahead to the next round of a job interview.

    Venkatasubramanian said that ultimately, he thinks these tools “shouldn’t be used in places where people are going to be materially impacted. At least not yet.”

    How to prevent or fix AI hallucinations is a “point of active research,” Venkatasubramanian said, but at present is very complicated.

    Large language models are trained on gargantuan datasets, and there are multiple stages that go into how an AI model is trained to generate a response to a user prompt — some of that process being automatic, and some of the process influenced by human intervention.

    “These models are so complex, and so intricate,” Venkatasubramanian said, but because of this, “they’re also very fragile.” This means that very small changes in inputs can have “changes in the output that are quite dramatic.”

    “And that’s just the nature of the beast, if something is that sensitive and that complicated, that comes along with it,” he added. “Which means trying to identify the ways in which things can go awry is very hard, because there’s so many small things that can go wrong.”

    West, of the University of Washington, echoed his sentiments, saying, “The problem is, we can’t reverse-engineer hallucinations coming from these chatbots.”

    “It might just be an intrinsic characteristic of these things that will always be there,” West said.

    Google’s Bard and OpenAI’s ChatGPT both attempt to be transparent with users from the get-go that the tools may produce inaccurate responses. And the companies have expressed that they’re working on solutions.

    Earlier this year, Google CEO Sundar Pichai said in an interview with CBS’ “60 Minutes” that “no one in the field has yet solved the hallucination problems,” and “all models have this as an issue.” On whether it was a solvable problem, Pichai said, “It’s a matter of intense debate. I think we’ll make progress.”

    And Sam Altman, CEO of ChatGPT-maker OpenAI, predicted during remarks in June at India’s Indraprastha Institute of Information Technology, Delhi, that it will take a year and a half or two years to “get the hallucination problem to a much, much better place.” “There is a balance between creativity and perfect accuracy,” he added. “And the model will need to learn when you want one or the other.”

    In response to a follow-up question on using ChatGPT for research, however, the chief executive quipped: “I probably trust the answers that come out of ChatGPT the least of anybody on Earth.”


  • Baidu and SenseTime launch ChatGPT-style AI bots to the public | CNN Business



    Hong Kong CNN —

    Chinese tech firms Baidu and SenseTime launched their ChatGPT-style AI bots to the public on Thursday, marking a new milestone in the global AI race.

    Baidu has opened public access to its ERNIE Bot, allowing users to conduct AI-powered searches or carry out an array of tasks, from creating videos to providing summaries of complex documents.

    The news sent its shares 3.1% higher in New York on Wednesday and 4.7% higher in Hong Kong on Thursday.

    Baidu (BIDU) is among the first companies in China to get regulatory approval for the rollout, and it is the first to launch this type of service publicly, according to a person familiar with the matter.

    Until Thursday, ERNIE Bot, also called “Wenxin Yiyan” in Chinese, had been offered only to corporate clients or select members of the public who requested access through a waitlist.

    Meanwhile, SenseTime, an AI startup based in Hong Kong, also announced the public launch of its SenseChat platform on Thursday. The company’s shares surged 4% in Hong Kong following the news.

    “We are pleased to announce that starting today, it is fully available to serve all users,” a SenseTime spokesperson told CNN in a statement.

    China published new rules on generative AI in July, becoming one of the world’s first countries to regulate the industry. The measures took effect on August 15.

    Baidu has been a frontrunner in China in the race to capitalize on the excitement around generative artificial intelligence, the technology that underpins systems such as ChatGPT or its successor, GPT-4. The latter has impressed users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Baidu announced its own iteration in February, giving it an early advantage in China, according to analysts. It unveiled ERNIE a month later, showing how it could generate a newsletter, come up with a corporate slogan and solve a math riddle.

    Since then, competitors such as Alibaba (BABA) and SenseTime have announced plans to launch their own ChatGPT-style tools, adding to the list of Chinese businesses jumping on the bandwagon. Alibaba told CNN Thursday that it had filed for regulatory approval for its own bot, which was introduced in April.

    The company is now waiting to officially launch and “the initial list of companies that have received the approval is expected to be released by relevant local departments within one week,” said an Alibaba Cloud spokesperson.

    Some critics say the new offerings from Chinese firms will add fuel to an existing US-China rivalry in emerging technologies. Baidu CEO Robin Li has tried to shake off that comparison, saying previously that the company’s platform “is not a tool for the confrontation between China and the United States.”

    The firm’s new feature — which will be embedded in its popular search engine, among its other offerings — follows a similar feature introduced by Alphabet’s Google (GOOGL) in May, which allows users to search the web using its AI chatbot.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses, such as text, images, audio and video.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    While ERNIE Bot is available globally, its interface is in Chinese, though users will be able to enter both Chinese and English prompts, a Baidu spokesperson told CNN.

    SenseTime, which unveiled its service in April, has touted a range of features, which it says allow users to write or debug code more efficiently or receive personalized medical advice from a virtual health consultation assistant.


  • Bill Gates, Elon Musk and Mark Zuckerberg meeting in Washington to discuss future AI regulations | CNN Business



    Washington CNN —

    Coming out of a three-hour Senate hearing on artificial intelligence, Elon Musk, the head of a handful of tech companies, summarized the grave risks of AI.

    “There’s some chance – above zero – that AI will kill us all. I think it’s low but there’s some chance,” Musk told reporters. “The consequences of getting AI wrong are severe.”

    But he also said the meeting “may go down in history as being very important for the future of civilization.”

    The session, organized by Senate Majority Leader Chuck Schumer, brought together high-profile tech CEOs, civil society leaders and more than 60 senators. The first of nine planned sessions, it aims to develop consensus as the Senate prepares to draft legislation regulating the fast-moving artificial intelligence industry. The group included the CEOs of Meta, Google, OpenAI, Nvidia and IBM.

    All the attendees raised their hands — indicating “yes” — when asked whether the federal government should oversee AI, Schumer told reporters Wednesday afternoon. But consensus on what that role should be and specifics on legislation remained elusive, according to attendees. 

    Benefits and risks

    Bill Gates spoke of AI’s potential to feed the hungry and one unnamed attendee called for spending tens of billions on “transformational innovation” that could unlock AI’s benefits, Schumer said.

    The challenge for Congress is to promote those benefits while mitigating the societal risks of AI, which include the potential for technology-based discrimination, threats to national security and even, as X owner Musk said, “civilizational risk.”

    “You want to be able to maximize the benefits and minimize the harm,” Schumer said. “And that will be our difficult job.”

    Senators emerging from the meeting said they heard a broad range of perspectives, with representatives from labor unions raising the issue of job displacement and civil rights leaders highlighting the need for an inclusive legislative process that provides the least powerful in society a voice.

    Most agreed that AI could not be left to its own devices, said Washington Democratic Sen. Maria Cantwell.

    “I thought Satya Nadella from Microsoft said it best: ‘When it comes to AI, we shouldn’t be thinking about autopilot. You need to have copilots.’ So who’s going to be watching this activity and making sure that it’s done correctly?”

    Other areas of agreement reflected traditional tech industry priorities, such as increasing federal investment in research and development as well as promoting skilled immigration and education, Cantwell added.

    But there was a noticeable lack of engagement on some of the harder questions, she said, particularly on whether a new federal agency is needed to regulate AI.

    “There was no discussion of that,” she said, though several in the meeting raised the possibility of assigning some greater oversight responsibilities to the National Institute of Standards and Technology, a Commerce Department agency.

    Musk told journalists after the event that he thinks a standalone agency to regulate AI is likely at some point.

    “With AI we can’t be like ostriches sticking our heads in the sand,” Schumer said, according to prepared remarks acquired by CNN. He also noted this is “a conversation never before seen in Congress.”

    The push reflects policymakers’ growing awareness of how artificial intelligence, and particularly the type of generative AI popularized by tools such as ChatGPT, could potentially disrupt business and everyday life in numerous ways — ranging from increasing commercial productivity to threatening jobs, national security and intellectual property.

    The high-profile guests trickled in shortly before 10 a.m., with Meta CEO Mark Zuckerberg pausing to chat with Nvidia CEO Jensen Huang outside the Senate Russell office building’s Kennedy Caucus Room. Google CEO Sundar Pichai was seen huddling with Delaware Democratic Sen. Chris Coons, while X owner Musk quickly swept by a mass of cameras with a quick wave to the crowd. Inside, Musk was seated at the opposite end of the room from Zuckerberg, in what is likely the first time that the two men have shared a room since they began challenging each other to a cage fight months ago.

    Elon Musk, CEO of X, the company formerly known as Twitter, left, and Alex Karp, CEO of the software firm Palantir Technologies, take their seats as Senate Majority Leader Chuck Schumer, D, N.Y., convenes a closed-door gathering of leading tech CEOs to discuss the priorities and risks surrounding artificial intelligence and how it should be regulated, at the Capitol in Washington, Wednesday, Sept. 13, 2023.

    The session at the US Capitol in Washington also gave the tech industry its most significant opportunity yet to influence how lawmakers design the rules that could govern AI.

    Some companies, including Google, IBM, Microsoft and OpenAI, have already offered their own in-depth proposals in white papers and blog posts that describe layers of oversight, testing and transparency.

    IBM’s CEO, Arvind Krishna, argued in the meeting that US policy should regulate risky uses of AI, as opposed to just the algorithms themselves.

    “Regulation must account for the context in which AI is deployed,” he said, according to his prepared remarks.

    Executives such as OpenAI CEO Sam Altman previously wowed some senators by publicly calling for new rules early in the industry’s lifecycle, which some lawmakers see as a welcome contrast to the social media industry that has resisted regulation.

    Clement Delangue, co-founder and CEO of the AI company Hugging Face, tweeted last month that Schumer’s guest list “might not be the most representative and inclusive,” but that he would try “to share insights from a broad range of community members, especially on topics of openness, transparency, inclusiveness and distribution of power.”

    Civil society groups have voiced concerns about AI’s possible dangers, such as the risk that poorly trained algorithms may inadvertently discriminate against minorities, or that they could ingest the copyrighted works of writers and artists without compensation or permission. Some authors have sued OpenAI over those claims, while others have asked in an open letter to be paid by AI companies.

    News publishers such as CNN, The New York Times and Disney are some of the content producers who have blocked ChatGPT from using their content. (OpenAI has said exemptions such as fair use apply to its training of large language models.)

    “We will push hard to make sure it’s a truly democratic process with full voice and transparency and accountability and balance,” said Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, “and that we get to something that actually supports democracy; supports economic mobility; supports education; and innovates in all the best ways and ensures that this protects consumers and people at the front end — and just not try to fix it after they’ve been harmed.”

    The concerns reflect what Wiley described as “a fundamental disagreement” with tech companies over how social media platforms handle misinformation, disinformation and speech that is either hateful or incites violence.

    American Federation of Teachers President Randi Weingarten said America can’t make the same mistake with AI that it did with social media. “We failed to act after social media’s damaging impact on kids’ mental health became clear,” she said in a statement. “AI needs to supplement, not supplant, educators, and special care must be taken to prevent harm to students.”

    Navigating those diverse interests will be Schumer, who along with three other senators — South Dakota Republican Sen. Mike Rounds, New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — is leading the Senate’s approach to AI. Earlier this summer, Schumer held three informational sessions for senators to get up to speed on the technology, including one classified briefing featuring presentations by US national security officials.

    Wednesday’s meeting with tech executives and nonprofits marked the next stage of lawmakers’ education on the issue before they get to work developing policy proposals. In announcing the series in June, Schumer emphasized the need for a careful, deliberate approach and acknowledged that “in many ways, we’re starting from scratch.”

    “AI is unlike anything Congress has dealt with before,” he said, noting the topic is different from labor, healthcare or defense. “Experts aren’t even sure which questions policymakers should be asking.”

    Rounds said hammering out the specific scope of regulations will fall to Senate committees. Schumer added that the goal — after hosting more sessions — is to craft legislation over “months, not years.”

    “We’re not ready to write the regs today. We’re not there,” Rounds said. “That’s what this is all about.”

    A smattering of AI bills have already emerged on Capitol Hill and seek to rein in the industry in various ways, but Schumer’s push represents a higher-level effort to coordinate Congress’s legislative agenda on the issue.

    New AI legislation could also serve as a potential backstop to voluntary commitments that some AI companies made to the Biden administration earlier this year to ensure their AI models undergo outside testing before they are released to the public.

    But even as US lawmakers prepare to legislate by meeting with industry and civil society groups, they are already months if not years behind the European Union, which is expected to finalize a sweeping AI law by year’s end that could ban the use of AI for predictive policing and restrict how it can be used in other contexts.

    A bipartisan pair of US senators sharply criticized the meeting, saying the process is unlikely to produce results and does not do enough to address the societal risks of AI.

    Connecticut Democratic Sen. Richard Blumenthal and Missouri Republican Sen. Josh Hawley each spoke to reporters on the sidelines of the meeting. The two lawmakers recently introduced a legislative framework for artificial intelligence that they said represents a concrete effort to regulate AI — in contrast to what was happening steps away behind closed doors.

    “This forum is not designed to produce legislation,” Blumenthal said. “Our subcommittee will produce legislation.”

    Blumenthal added that the proposed framework — which calls for setting up a new independent AI oversight body, as well as a licensing regime for AI development and the ability for people to sue companies over AI-driven harms — could lead to a draft bill by the end of the year.

    “We need to do what has been done for airline safety, car safety, drug safety, medical device safety,” Blumenthal said. “AI safety is no different — in fact, potentially even more dangerous.”

    Hawley called Wednesday’s session “a giant cocktail party” for the tech industry and slammed the fact that it was private.

    “I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money, and then close it to the public,” Hawley said. “I mean, that’s a terrible idea. These are the same people who have ruined social media.”

    Despite talking tough on tech, Schumer has moved extremely slowly on tech legislation, Hawley said, pointing to several major tech bills from the last Congress that never made it to a Senate floor vote.

    “It’s a little bit like antitrust the last two years,” Hawley said. “He talks about it constantly and does nothing about it. My sense is … this is a lot of song and dance that covers the fact that actually nothing is advancing. I hope I’m wrong about that.”

    Hawley is also a co-sponsor of a bill introduced Tuesday led by Minnesota Democratic Sen. Amy Klobuchar that would prohibit generative AI from being used to create deceptive political ads. Klobuchar and Hawley, along with fellow co-sponsors Coons and Maine Republican Sen. Susan Collins, said the measure is needed to keep AI from manipulating voters.

    Massachusetts Democratic Sen. Elizabeth Warren said the broad nature of the summit limited its potential.

    “They’re sitting at a big, round table all by themselves,” Warren said of the executives and civil society leaders, while all the senators sat, listened and didn’t ask questions. “Let’s put something real on the table instead of everybody agree[ing] that we need safety and innovation.”

    Schumer said that making the meeting confidential was intended to give lawmakers the chance to hear from the outside in an “unvarnished way.”


  • Google rolls out a major expansion of its Bard AI chatbot | CNN Business



    New York (CNN) —

    Google’s Bard artificial intelligence chatbot is evolving.

    The company on Tuesday announced a series of updates to Bard that will give the chatbot access to Google’s full suite of tools — including YouTube, Google Drive, Google Flights and others — to assist users in a wider variety of tasks. Users will be able, for example, to ask Bard to plan an upcoming trip, complete with real flight options. Or a user could ask the tool to summarize meeting notes made in a recent Google Drive document.

    The connections to Google’s other services are just some of the improvements to Bard coming Tuesday. Other updates include the ability to communicate with the chatbot in multiple languages, new fact-checking capabilities and a broad update to the large language model that the tool is built on.

    The new features mark the biggest update to Google’s Bard in the six months since it was widely released to the public.

    The update comes as Google and other tech giants, including Microsoft and ChatGPT maker OpenAI, race to roll out increasingly sophisticated consumer-facing AI technologies, and to convince users that such tools are more than just a gimmick. Google — which earlier this year reportedly issued an internal “code red” after OpenAI beat it to the release of its AI chatbot — is now flexing the power of its other, widely used software programs that can make Bard more useful.

    “These services in conjunction with one another are very, very powerful,” Sissie Hsiao, general manager for Google Assistant and Bard, told CNN ahead of the launch. “Bringing all the power of these tools together will save people time — in 20 seconds, in minutes, you can do something that would have taken maybe an hour or more.”

    Previously, Bard had been able to help with tasks like writing essay drafts or planning a friend’s baby shower based on Google’s large language model, an AI algorithm trained on vast troves of data. But now, Bard will draw on information from Google’s various other services, too. With the new extensions, Bard will now pull information from YouTube, Google Maps, Flights and Hotels by default.

    That will allow users to ask Bard things like “Give me a template for how to write a best man speech and show me YouTube videos about them for inspiration,” or for trip suggestions, complete with driving directions, according to Google. Bard users can opt to disable these extensions at any time.

    Users can also opt in to link their Gmail, Docs and Google Drive to Bard so the tool can help them analyze and manage their personal information. The tool could, for example, help with a query like: “Find the most recent lease agreement from my Drive and check how much the security deposit was,” Google said.

    The company said that users’ personal Google Workspace information will not be used to train Bard or for targeted advertising purposes, and that users can withdraw their permission for the tool to access their information at any time.

    “This is the first step in a fundamentally new capability for Bard – the ability to talk to other apps and services to provide more helpful responses,” Google said of the extensions tool. It added that “this is a very young area of AI” that it will continue to improve based on user feedback.

    Bard is also launching a “double check” button that will allow users to evaluate the accuracy of its responses. When a user clicks the button, certain segments of Bard’s response will be highlighted to show where Google Search results either confirm or differ from what the chatbot said. The double check feature is designed to counter a common AI issue called “hallucinations,” where an AI tool confidently makes a statement that sounds real, but isn’t actually based in fact.

    “We’re constantly working on reducing those hallucinations in Bard,” Hsiao said. But in the meantime, the company wanted to create a way to address them. “You can kind of think of it as spell check, but double checking the facts.”

    Bard will now also allow one user to share a conversation with the chatbot with another person, who can then expand on the chat themselves.

    It’s still early days for Bard, which launched in March as an “experiment” and still notes on its website that the tool “may display inaccurate or offensive information that doesn’t represent Google’s views.” But this latest update offers a glimpse at how Google may ultimately seek to incorporate generative AI into its various services.


  • George R. R. Martin, Jodi Picoult and other famous writers join Authors Guild in class action lawsuit against OpenAI | CNN Business



    New York (CNN) —

    A group of famous fiction writers joined the Authors Guild in filing a class action suit against OpenAI on Wednesday, alleging the company’s technology is illegally using their copyrighted work.

    The complaint claims that OpenAI, the company behind viral chatbot ChatGPT, is copying famous works in acts of “flagrant and harmful” copyright infringement and feeding manuscripts into algorithms to help train systems on how to create more human-like text responses.

    George R.R. Martin, Jodi Picoult, John Grisham and Jonathan Franzen are among the 17 prominent authors who joined the suit led by the Authors Guild, a professional organization that protects writers’ rights. Filed in the Southern District of New York, the suit alleges that OpenAI’s models directly harm writers’ abilities to make a living wage, as the technology generates texts that writers could be paid to pen, as well as uses copyrighted material to create copycat work.

    “Generative AI threatens to decimate the author profession,” the Authors Guild wrote in a press release Wednesday.

    The suit alleges that books created by the authors that were illegally downloaded and fed into GPT systems could turn a profit for OpenAI by “writing” new works in the authors’ styles, while the original creators would get nothing. The press release lists AI efforts to create two new volumes in Martin’s Game of Thrones series and AI-generated books available on Amazon.

    “It is imperative that we stop this theft in its tracks or we will destroy our incredible literary culture, which feeds many other creative industries in the US,” Authors Guild CEO Mary Rasenberger stated in the release. “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.”

    The class-action lawsuit joins other legal actions, organizations and individuals raising alarms over how OpenAI and other generative AI systems are impacting creative works. An author told CNN in August that she found new books being sold on Amazon under her name — only she didn’t write them; they appear to have been generated by artificial intelligence. Two other authors sued OpenAI in June over the company’s alleged misuse of their works to train ChatGPT. Comedian Sarah Silverman and two authors also sued Meta and ChatGPT-maker OpenAI in July, alleging the companies’ AI language models were trained on copyrighted materials from their books without their knowledge or consent.

    But OpenAI has pushed back. Last month, the company asked a San Francisco federal court to narrow two separate lawsuits from authors – including Silverman – alleging that the bulk of the claims should be dismissed.

    OpenAI did not respond to a request for comment on Wednesday.

    “We think that creators deserve control over how their creations are used and what happens sort of beyond the point of, of them releasing it into the world,” Sam Altman, the CEO of OpenAI, told Congress in May. “I think that we need to figure out new ways with this new technology that creators can win, succeed, have a vibrant life.”

    US lawmakers met with members of creative industries in July, including the Authors Guild, to discuss the implications of artificial intelligence. In a Senate subcommittee hearing, Rasenberger called for the creation of legislation to protect writers from AI, including rules that would require AI companies to be transparent about how they train their models.

    More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.

    But the AI issues facing creative professions don’t seem to be going away.

    “Generative AI is a vast new field for Silicon Valley’s longstanding exploitation of content providers. Authors should have the right to decide when their works are used to ‘train’ AI,” author Jonathan Franzen said in the release on Wednesday. “If they choose to opt in, they should be appropriately compensated.”


  • How companies are embracing generative AI for employees…or not | CNN Business



    New York (CNN) —

    Companies are struggling to deal with the rapid rise of generative AI, with some rushing to embrace the technology as workflow tools for employees while others shun it – at least for now.

    As generative artificial intelligence – the technology that underpins ChatGPT and similar tools – seeps into seemingly every corner of the internet, large corporations are grappling with whether the increased efficiency it offers outweighs possible copyright and security risks. Some companies are enacting internal bans on generative AI tools as they work to better understand the technology, and others have already begun to introduce the trendy tech to employees in their own ways.

    Many prominent companies have entirely blocked internal ChatGPT use, including JPMorgan Chase, Northrop Grumman, Apple, Verizon, Spotify and Accenture, according to AI content detector Originality.AI, with several citing privacy and security concerns. Business leaders have also expressed worries about employees dropping proprietary information into ChatGPT and having that sensitive information potentially emerge as an output by the tool elsewhere.

    When users input information into these tools, “[y]ou don’t know how it’s then going to be used,” Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN in March. “That raises particularly high concerns for companies.” As more and more employees casually adopt these tools to help with work emails or meeting notes, McCreary said, “I think the opportunity for company trade secrets to get dropped into these different various AI’s is just going to increase.”

    But the corporate hesitancy to welcome generative AI could be temporary.

    “Companies that are on the list of banning generative AI also have working groups internally that are exploring the usage of AI,” Jonathan Gillham, CEO of Originality.AI, told CNN, highlighting how companies in more risk-averse industries have been quicker to take action against the tech while figuring out the best approach for responsible usage. “Giving all of their staff access to ChatGPT and saying ‘have fun’ is too much of an uncontrolled risk for them to take, but it doesn’t mean that they’re not saying, ‘holy crap, look at the 10x, 100x efficiency that we can unlock when we find out how to do this in a way that makes all the stakeholders happy’” in departments such as legal, finance and accounting.

    Among media companies that produce news, Insider editor-in-chief Nicholas Carlson has encouraged reporters to find ways to use AI in the newsroom. “A tsunami is coming,” he said in April. “We can either ride it or get wiped out by it. But it’s going to be really fun to ride it, and it’s going to make us faster and better.” The organization discouraged staff from putting source details and other sensitive information into ChatGPT. Newspaper chain Gannett, meanwhile, paused its use of an artificial intelligence tool called LedeAI to write high school sports stories after the technology made several mistakes in articles published in The Columbus Dispatch in August.

    Of the companies currently banning ChatGPT, some are discussing future usage once security concerns are addressed. UBS estimated that ChatGPT reached 100 million monthly active users in January, just two months after its launch.

    That rapid growth initially left large companies scrambling to find ways to integrate it responsibly. That process is slow for large companies. Meanwhile, website visits to ChatGPT dropped for the third month in a row in August, creating pressure for large tech companies to sustain popular interest in the tools and to find new enterprise applications and revenue models for generative AI products.

    “We at JPMorgan Chase will not roll out genAI until we can mitigate all of the risks,” Larry Feinsmith, JPM’s head of global tech strategy, innovation, and partnerships, said at the Databricks Data + AI Summit in June. “We’re excited, we’re working through those risks as we speak, but we won’t roll it out until we can do this in an entirely responsible manner, and it’s going to take time.” Northrop Grumman said it doesn’t allow internal data on external platforms “until those tools are fully vetted,” according to a March report from the Wall Street Journal. Verizon also told employees in a public address in February that ChatGPT is banned “[a]s it currently stands” due to security risks but that the company wants to “safely embrace emerging technology.”

    “They’re not just waiting to sort things out. I think they’re actively working on integrating AI into their business processes separately, but they’re just doing so in a way that doesn’t compromise their information,” Vern Glaser, Associate Professor of Entrepreneurship and Family Enterprise at the University of Alberta, told CNN. “What you’ll see with a lot of the companies that will be using AI strategies, particularly those who have their own unique content, they’re going to end up creating their custom version of generative AI.”

    Several companies – and even ChatGPT itself – seem to have already found their own answers to the corporate world’s genAI security dilemma.

    Walmart introduced an internal “My Assistant” tool for 50,000 corporate employees that helps with repetitive tasks and creative ideas, according to an August LinkedIn post from Cheryl Ainoa, Walmart’s EVP of New Businesses and Emerging Technologies, and Donna Morris, Chief People Officer. The tool is intended to boost productivity and eventually help with new worker orientation, according to the post.

    Consulting giants McKinsey, PwC and EY are also welcoming genAI through internal, private methods. PwC announced a “Generative AI factory” and launched its own “ChatPwC” tool in August powered by OpenAI tech to help employees with tax questions and regulations as part of a $1 billion investment for AI capability scaling.

    McKinsey introduced “Lilli” in August, a genAI solution where employees can pose questions, with the system then aggregating all of the firm’s knowledge, scanning the data to identify relevant content, summarizing the main points and offering experts. “With Lilli, we can use technology to access and leverage our entire body of knowledge and assets to drive new levels of productivity,” Jacky Wright, a McKinsey senior partner and chief technology and platform officer, wrote in the announcement.

    EY is investing $1.4 billion in the technology, including “EY.ai EYQ,” an in-house large language model, and AI training for employees, according to a September press release.

    Tools like My Assistant, ChatPwC and Lilli address some of the corporate concerns surrounding genAI systems through custom adaptations of the technology, offering employees a private, closed alternative that both capitalizes on its ability to increase efficiency and reduces the risk of copyright or security leaks.

    The launch of ChatGPT Enterprise may also help quell some fears. The new version of OpenAI’s tool, announced in August, is designed specifically for businesses, promising “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet” for companies looking to jump on the generative AI bandwagon, according to a company blog post.

    The highly anticipated announcement from OpenAI comes as the company says employees from over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

    In response to the security concerns many companies have raised, namely that employees could drop proprietary information into ChatGPT and see that sensitive information emerge as an output elsewhere, OpenAI’s announcement blog post for ChatGPT Enterprise states that the company does “not train on your business data or conversations, and our models don’t learn from your usage.”

    In July, Microsoft unveiled a business-specific version of its AI-powered Bing tool, dubbed Bing Chat Enterprise, and promised much of the same security assurances that ChatGPT Enterprise is now touting – namely, that users’ chat data will not be used to train AI models.

    It is still unclear whether the new tools will be enough to convince corporate America that it is time to fully embrace generative AI, though experts agree the tech’s inevitable entry into the workplace will take time and strategy.

    “I don’t think it’s that companies are against AI and against machine learning, per se. I think most companies are going to be trying to use this type of technology, but they have to be careful with it because of the impacts on intellectual property,” Glaser said.


  • ChatGPT can now hear, see and speak as OpenAI gives the chatbot its most humanlike update | CNN Business




    (CNN) —

    You can now speak aloud to ChatGPT and hear the artificial intelligence-powered chatbot talk back.

    OpenAI, the startup behind the wildly popular chatbot, announced Monday that it is rolling out new features including the ability to let users engage in a back-and-forth voice conversation with ChatGPT.

    In a company blog post Monday, OpenAI teased how this new feature can be used to “request a bedtime story for your family, or settle a dinner table debate.”

    The new voice features from OpenAI carry similarities to those currently offered by Amazon’s Alexa or Apple’s Siri voice assistants.

    In a demo of the new update shared by OpenAI, a user asks ChatGPT to come up with a story about “the super-duper sunflower hedgehog named Larry.” The chatbot is able to narrate a story out loud with a human-sounding voice that can also respond to questions, such as, “What was his house like?” and “Who is his best friend?”

    ChatGPT’s voice capability is “powered by a new text-to-speech model, capable of generating human-like audio from just text and a few seconds of sample speech,” OpenAI said in the blog post. The company added that it collaborated with professional voice actors to create the five different voices that can be used to animate the chatbot.

    OpenAI also said on Monday that it’s rolling out a new feature that lets the bot respond to prompts featuring an image. For example, you can snap a picture of the contents of your fridge and ask ChatGPT to help you come up with a meal plan using the ingredients you have. Moreover, the company said you can ask the chatbot to focus on a specific part of an image with its “drawing tool” in the app.

    The new features roll out in the app within the next two weeks for paying subscribers of ChatGPT’s Plus and Enterprise services. (Subscriptions to the Plus service are $20 a month, and its Enterprise service is currently only offered to business clients).

    The updates from OpenAI come amid an ongoing AI arms race within the tech sector, initially spurred by the public launch of ChatGPT late last year. In recent weeks, tech giants have been racing to roll out new updates that incorporate more AI-powered tools directly into their core products. Google last week announced a series of updates to its ChatGPT competitor Bard. Also last week, Amazon said it was bringing a generative AI-powered update to its Alexa voice assistant.


  • Sarah Silverman Sues OpenAI, Meta For Use of Copyrighted Works | Entrepreneur


    Comedian and author Sarah Silverman, along with authors Christopher Golden and Richard Kadrey, filed lawsuits against OpenAI and Meta on Friday, accusing the companies of copyright infringement.

    The lawsuits claim that the tech giants’ chatbots — OpenAI’s ChatGPT and Meta’s LLaMA — were trained using Silverman’s and the other authors’ copyrighted works without their permission. The plaintiffs also argue that the works were obtained from unauthorized sources known as “shadow libraries,” where books are “available for bulk download via torrent systems,” the lawsuit states.

    The lawsuits allege various copyright violations, as well as negligence, unjust enrichment, and unfair competition. Silverman and the other plaintiffs are seeking relief by way of statutory damages, restitution of profits, and “other remedies” as a result of the companies’ “unlawful conduct.”

    Exhibits provided in the complaint demonstrate how ChatGPT, when prompted, summarized the plaintiffs’ books in thorough detail, giving “very accurate summaries” and thereby, the plaintiffs argue, violating their copyrights. The lawsuit also emphasizes that the chatbot fails to “reproduce any of the copyright management information” the authors included in their works.

    Silverman’s memoir, The Bedwetter, is the first book shown as evidence in the complaint, followed by Golden’s Ararat and Kadrey’s Sandman Slim (the latter two are works of fiction). All three works are shown to be summarized in detail by ChatGPT, which the lawsuit claims “would only be possible” if the AI models were trained on the authors’ books. The complaint acknowledges that the mostly accurate summaries do get “some details wrong,” but says that is “expected.”

    Related: Authors Are Suing OpenAI Because ChatGPT Is Too ‘Accurate’ — Here’s What That Means

    “Still, the rest of the summaries are accurate, which means that ChatGPT retains knowledge of particular works in the training dataset and is able to output similar textual content,” the lawsuit states.

    Sarah Silverman in March 2023. Jason Kempin | Getty Images

    The lawsuit against Meta alleges that the authors’ books were included in the datasets used to train Meta’s LLaMA models. It explicitly names The Pile, one of Meta’s sources for its training data, as having been drawn from the illicit Bibliotik private tracker, which, along with other “shadow libraries,” the lawsuit calls “flagrantly illegal.”

    The authors argue in both lawsuits that they never provided consent for their copyrighted books to be used to train the companies’ chatbots.

    Joseph Saveri and Matthew Butterick, the lawyers representing the authors, have created a website to address concerns from other writers, authors, and publishers regarding ChatGPT’s ability to generate text similar to copyrighted material.

    “Since the release of OpenAI’s ChatGPT system in March 2023, we’ve been hearing from writers, authors, and publishers who are concerned about its uncanny ability to generate text similar to that found in copyrighted textual materials, including thousands of books,” the lawyers write on the blog. “It’s a great pleasure to stand up on behalf of authors and continue the vital conversation about how AI will coexist with human culture and creativity.”

    Related: OpenAI Rolls Out New Feature to Help Teachers Crack Down on ChatGPT Cheating — But Admit the Tool Is ‘Imperfect’

    OpenAI and Meta did not immediately respond to Entrepreneur’s request for comment.


    Madeline Garfinkle


  • Unity Announces Big ‘AI’ Plans, Developers Have Concerns


    Video game engine provider Unity announced two new machine-learning platforms earlier today, one of which in particular has developers and artists asking questions that, at the time of publishing, the company has yet to answer.

    From Unity’s blog:

    Today we’re announcing two new AI products: Unity Muse, an expansive platform for AI-driven assistance during creation, and Unity Sentis, which allows you to embed neural networks in your builds to enable previously unimaginable real-time experiences.

    Muse is essentially just ChatGPT but for Unity specifically, and purports to let users ask questions about coding and resources and get instant answers. Sentis, however, is more concerning, as it “enables you to embed an AI model in the Unity Runtime for your game or application, enhancing gameplay and other functionality directly on end-user platforms.”

    Because “AI” is a technology that in many cases is utterly reliant on work stolen from artists without consent or compensation, Unity’s announcement led to a lot of questions about Sentis, with particular focus on the tech’s ability to create stuff like images, models and animation. Scroll down past the announcement tweet, for example, and you’ll see a ton of variations of the same query:

    just to jump on the train, which dataset y’all pull the art from???

    Unity needs to be fully transparent about what ML models will be implemented, including the data they have been trained on. I don’t see any possible way ML, in current iterations, can be effective without training on countless ill gotten data.

    REALLY concerning image generator stuff. What datasets?

    Hi, what dataset was this trained on? Is this using artwork from artists without their permission? Animations? Materials? How was this AI trained?

    You do realize that AI-created assets can’t be used commercially, so what was the rationale for adding this feature?

    Which datasets were used in development of this? Did you negotiate & acquire all relevant licenses directly from copyright holders?

    It’s a very specific question, and one that, at the time of publishing, Unity has yet to answer, either on Twitter or on the company’s forums (I’ve emailed the company to ask it directly, and will update if I hear back). Those familiar with “AI”’s legal and copyright struggles can find the outline of an answer in this post by Unity employee TreyK-47, though, when he says you can’t use the tech as it exists today “for a current commercial or external project.”

    Note that while there are clear dangers to jobs and the quality of games inherent in this push, those dangers are for the future; for the now, this looks (and sounds) like dogshit.

    Experience the art of the possible | Unity AI


    Luke Plunkett
