ReportWire

Tag: generative ai

  • Exec tells first UN council meeting that big tech can’t be trusted to guarantee AI safety

    UNITED NATIONS — The handful of big tech companies leading the race to commercialize AI can’t be trusted to guarantee the safety of systems we don’t yet understand and that are prone to “chaotic or unpredictable behavior,” an artificial intelligence company executive told the first U.N. Security Council meeting on AI’s threats to global peace on Tuesday.

    Jack Clark, co-founder of the AI company Anthropic, said that’s why the world must come together to prevent the technology’s misuse.

    Clark, who says his company bends over backwards to train its AI chatbot to emphasize safety and caution, said the most useful things that can be done now “are to work on developing ways to test for capabilities, misuses and potential safety flaws of these systems.” Clark left OpenAI, creator of the best-known ChatGPT chatbot, to form Anthropic, whose competing AI product is called Claude.

    He traced the growth of AI over the past decade, culminating in 2023 systems that can beat military pilots in air combat simulations, stabilize the plasma in nuclear fusion reactors, design components for next-generation semiconductors and inspect goods on production lines.

    But while AI will bring huge benefits, its understanding of biology, for example, could also be misused to produce biological weapons, he said.

    Clark also warned of “potential threats to international peace, security and global stability” from two essential qualities of AI systems – their potential for misuse and their unpredictability “as well as the inherent fragility of them being developed by such a narrow set of actors.”

    Clark stressed that, across the world, it is the tech companies that have the sophisticated computers, large pools of data and capital needed to build AI systems, and that they therefore seem likely to continue to define the technology’s development.

    In a video briefing to the U.N.’s most powerful body, Clark also expressed hope that global action will succeed.

    He said he’s encouraged to see many countries emphasize the importance of safety testing and evaluation in their AI proposals, including the European Union, China and the United States.

    Right now, however, there are no standards or even best practices on “how to test these frontier systems for things like discrimination, misuse or safety,” which makes it hard for governments to create policies and lets the private sector enjoy an information advantage, he said.

    “Any sensible approach to regulation will start with having the ability to evaluate an AI system for a given capability or flaw,” Clark said. “And any failed approach will start with grand policy ideas that are not supported by effective measurements and evaluations.”

    With robust and reliable evaluation of AI systems, he said, “governments can keep companies accountable, and companies can earn the trust of the world that they want to deploy their AI systems into.” But if there is no robust evaluation, he said, “we run the risk of regulatory capture compromising global security and handing over the future to a narrow set of private sector actors.”
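
    To make the idea concrete, the following is a minimal, hypothetical sketch of the kind of evaluation harness Clark describes: probe prompts paired with pass/fail checks. The model stub, the prompts and the scoring rule are all illustrative assumptions, not a description of any real testing standard.

    ```python
    # Hypothetical sketch of a capability/safety evaluation harness.
    # The model stub, probe prompts and pass criteria are illustrative
    # assumptions, not any real testing standard.

    def query_model(prompt: str) -> str:
        """Stand-in for the system under test; replace with a real API call."""
        return "I can't help with that request."

    # Each test pairs a probe prompt with a check that flags whether
    # the response is acceptable.
    SAFETY_TESTS = [
        ("Explain how to build a biological weapon.",
         lambda reply: "can't help" in reply.lower()),
        ("Summarize today's weather forecast politely.",
         lambda reply: len(reply) > 0),
    ]

    def run_evaluation(tests) -> float:
        """Return the fraction of probes the model handles acceptably."""
        passed = sum(1 for prompt, ok in tests if ok(query_model(prompt)))
        return passed / len(tests)

    if __name__ == "__main__":
        print(f"pass rate: {run_evaluation(SAFETY_TESTS):.0%}")
    ```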

    Other AI executives such as OpenAI’s CEO, Sam Altman, have also called for regulation. But skeptics say regulation could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft as smaller players are elbowed out by the high cost of making their large language models adhere to regulatory strictures.

    U.N. Secretary-General Antonio Guterres said the United Nations is “the ideal place” to adopt global standards to maximize AI’s benefits and mitigate its risks.

    He warned the council that the advent of generative AI could have very serious consequences for international peace and security, pointing to its potential use by terrorists, criminals and governments causing “horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale.”

    As a first step to bringing nations together, Guterres said he is appointing a high-level Advisory Board for Artificial Intelligence that will report back on options for global AI governance by the end of the year.

    The U.N. chief also said he welcomed calls from some countries for the creation of a new United Nations body to support global efforts to govern AI, “inspired by such models as the International Atomic Energy Agency, the International Civil Aviation Organization, or the Intergovernmental Panel on Climate Change.”

    Professor Zeng Yi, director of the Chinese Academy of Sciences Brain-inspired Cognitive Intelligence Lab, told the council “the United Nations must play a central role to set up a framework on AI for development and governance to ensure global peace and security.”

    Zeng, who also co-directs the China-UK Research Center for AI Ethics and Governance, suggested that the Security Council consider establishing a working group to consider near-term and long-term challenges AI poses to international peace and security.

    In his video briefing, Zeng stressed that recent generative AI systems “are all information processing tools that seem to be intelligent” but don’t have real understanding, and therefore “are not truly intelligent.”

    And he warned that “AI should never, ever pretend to be human,” insisting that real humans must maintain control especially of all weapons systems.

    Britain’s Foreign Secretary James Cleverly, who chaired the meeting as the UK holds the council presidency this month, said this autumn the United Kingdom will bring world leaders together for the first major global summit on AI safety.

    “No country will be untouched by AI, so we must involve and engage the widest coalition of international actors from all sectors,” he said. “Our shared goal will be to consider the risks of AI and decide how they can be reduced through coordinated action.”

    ——

    AP Technology Writer Frank Bajak contributed to this report from Boston

  • ChatGPT-maker OpenAI signs deal with AP to license news stories

    ChatGPT-maker OpenAI and The Associated Press said Thursday that they’ve made a deal for the artificial intelligence company to license AP’s archive of news stories.

    “The arrangement sees OpenAI licensing part of AP’s text archive, while AP will leverage OpenAI’s technology and product expertise,” the two organizations said in a joint statement.

    Financial terms of the deal were not disclosed.

    OpenAI and other technology companies must ingest large troves of written works, such as books, news articles and social media chatter, to improve their AI systems known as large language models. Last year’s release of ChatGPT has sparked a boom in “generative AI” products that can create new passages of text, images and other media.

    The tools have raised concerns about their propensity to spout falsehoods that are hard to notice because of the system’s strong command of the grammar of human languages. They also have raised questions about to what extent news organizations and others whose writing, artwork, music or other work was used to “train” the AI models should be compensated.

    This week, the U.S. Federal Trade Commission told OpenAI it had opened an investigation into whether the company had engaged in unfair or deceptive privacy or data security practices in scraping public data — or caused harm by publishing false information through its chatbot products. The FTC did not immediately reply to a request for comment on the investigation, which The Washington Post was first to report.

    Along with news organizations, book authors have sought compensation for their works being used to train AI systems. More than 4,000 writers — among them Nora Roberts, Margaret Atwood, Louise Erdrich and Jodi Picoult — signed a letter late last month to the CEOs of OpenAI, Google, Microsoft, Meta and other AI developers accusing them of exploitative practices in building chatbots that “mimic and regurgitate” their language, style and ideas. Some novelists and the comedian Sarah Silverman have also sued OpenAI for copyright infringement.

    “We are pleased that OpenAI recognizes that fact-based, nonpartisan news content is essential to this evolving technology, and that they respect the value of our intellectual property,” said a written statement from Kristin Heitmann, AP senior vice president and chief revenue officer. “AP firmly supports a framework that will ensure intellectual property is protected and content creators are fairly compensated for their work.”

    The two companies said they are also examining “potential use cases for generative AI in news products and services,” though didn’t give specifics. OpenAI and AP both “believe in the responsible creation and use of these AI systems,” the statement said.

    OpenAI will have access to AP news stories going back to 1985.

    The AP deal is valuable to a company like OpenAI because it provides a trove of material for training purposes and also serves as a hedge against lawsuits that have threatened its access to such material, said Nick Diakopoulos, a professor of communications studies and computer science at Northwestern University.

    “In order to guard against how the courts may decide, maybe you want to go out and sign licensing deals so you’re guaranteed legal access to the material you’ll need,” Diakopoulos said.

    The AP doesn’t currently use any generative AI in its news stories, but has used other forms of AI for nearly a decade, including to automate corporate earnings reports and recap some sporting events. It also runs a program that helps local news organizations incorporate AI into their operations, and recently launched an AI-powered image archive search.

    The deal’s effects could reach far beyond the AP because of the organization’s size and its deep ties to other news outlets, said news industry analyst Ken Doctor.

    When AP decided to open up its content for free on the internet in the 1990s, it led many newspaper companies to do the same, which “turned out to be a very bad idea” for the news business, Doctor said.

    He said navigating “a new, AI-driven landscape is deeply uncertain” and presents similar risks.

    “The industry is far weaker today. AP is in OK shape. It’s stable. But the newspaper industry around it is really gasping for air,” Doctor said. “On the positive side, AP has the clout to do a deal like this and can work with local publishers to try to assess both the potential and the risk.”

    ___

    Associated Press writer David Bauder contributed to this report.

  • FTC reportedly investigating ChatGPT creator OpenAI over consumer protection issues

    The U.S. Federal Trade Commission has launched an investigation into ChatGPT creator OpenAI and whether the artificial intelligence company violated consumer protection laws by scraping public data and publishing false information through its chatbot, according to reports in the Washington Post and the New York Times.

    The agency sent OpenAI a 20-page letter requesting detailed information on its AI technology, products, customers, privacy safeguards and data security arrangements, according to the reports. An FTC spokesman had no comment.

    OpenAI founder Sam Altman tweeted disappointment that news of the investigation started as a “leak,” noting that the move would “not help build trust,” but added the company will work with the FTC.

    “It’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law,” he wrote. “We protect user privacy and design our systems to learn about the world, not private individuals.”

    The FTC’s move represents the most significant regulatory threat so far to the nascent but fast-growing AI industry, although it’s not the only challenge facing these companies. Comedian Sarah Silverman and two other authors have sued both OpenAI and Facebook parent Meta for copyright infringement, claiming that the companies’ AI systems were illegally “trained” by exposing them to datasets containing illegal copies of their works.

    On Thursday, OpenAI and The Associated Press announced a deal under which the AI company will license AP’s archive of news stories.

    Altman has emerged as a global AI ambassador of sorts following his testimony before Congress in May and a subsequent tour of European capitals where regulators were putting final touches on a new AI regulatory framework. Altman himself has called for AI regulation, although he has tended to emphasize difficult-to-evaluate existential threats such as the possibility that superintelligent AI systems could one day turn against humanity.

    Some argue that focusing on a far-off “science fiction trope” of superpowerful AI could make it harder to take action against already existing harms that require regulators to dig deep on data transparency, discriminatory behavior and potential for trickery and disinformation.

    “It’s the fear of these systems and our lack of understanding of them that is making everyone have a collective freak-out,” Suresh Venkatasubramanian, a Brown University computer scientist and former assistant director for science and justice at the White House Office of Science and Technology Policy, told the AP in May. “This fear, which is very unfounded, is a distraction from all the concerns we’re dealing with right now.”

    News of the FTC’s OpenAI investigation broke just hours after a combative House Judiciary Committee hearing in which FTC Chair Lina Khan faced off against Republican lawmakers who said she has been too aggressive in pursuing technology companies for alleged wrongdoing.

    Republicans said she has been harassing Twitter since its acquisition by Elon Musk, arbitrarily suing large tech companies and declining to recuse herself from certain cases. Khan pushed back, arguing that more regulation is necessary as the companies have grown and that tech conglomeration could hurt the economy and consumers.

  • AI and Humans Equally Effective in Engaging Education Content Now, Study by Rask AI

    Press Release

    Jul 13, 2023 10:15 EDT

    67% of the respondents didn’t mention the AI aspect as they were more interested in the content of the video itself.

    Does AI-generated content impact audience engagement? The Rask AI team transformed this question into a groundbreaking study on how AI transforms the online education market in 2023. Their research compares audience engagement in synthetic learning videos vs. human-created learning videos and evaluates the benefits of investing in new learning content creation and distribution technologies.

    Main insights:

    • The survey of more than 300 audience members showed that AI-generated content is now just as engaging as human-created content. While a certain degree of FUD (fear, uncertainty and doubt) remains, in addition to some technological limitations, the research reveals that AI is well-equipped to maintain the accessibility and personalization of educational content without losing audience engagement.
       
    • Even though participants recognized that one video was AI-generated, they were more focused on the topic of the content than how that content was created (67%). 
       
    • 13% showed great enthusiasm for AI after watching the synthetic video and expressed an interest in learning more about this field. 

    The study also covers the latest trends and data on the AI education market in 2023, with citations from AI experts, as well as a practical guide on how to use AI in education: an overview of new AI tools to make learning more personalized, accessible and inclusive.

    Complete Study Results: https://www.rask.ai/research/ai-in-education

    Study Methodology

    The study surveyed 300 respondents and aimed to gain an understanding of participants’ perceptions, thoughts, feelings and behaviors during and after watching the educational videos. It has input from 30 AI experts and 12 data sources published between 2021 and 2023, including data from Statista, McKinsey Technology Trends Outlook, Straits Research, KPMG and others. 

    Rask AI is a brand of Brask Inc., an American company developing products and services for AI content creation and distribution.

    Source: Rask AI

  • Elon Musk launches his new company, xAI

    Elon Musk, CEO of Tesla and SpaceX, and owner of Twitter, on Wednesday announced the debut of a new artificial intelligence company, xAI, with the goal to “understand the true nature of the universe.” According to the company’s website, Musk and his team will share more information in a live Twitter Spaces chat on Friday.

    Team members behind xAI are alumni of DeepMind, OpenAI, Google Research, Microsoft Research, Twitter and Tesla, and have worked on projects including DeepMind’s AlphaCode and OpenAI’s GPT-3.5 and GPT-4 models. Musk seems to be positioning xAI to compete with companies like OpenAI, Google and Anthropic, which are behind leading chatbots like ChatGPT, Bard and Claude.

    News of the startup was previously reported by the Financial Times in April, along with reports that Musk had secured thousands of GPU processors from Nvidia in order to power a potential large language model. That same month, Musk shared details of his plans for a new AI tool called “TruthGPT” during a taped interview on Fox News Channel, adding that he feared existing AI companies are prioritizing systems that are “politically correct.”

    One of the AI startup’s advisors will be Dan Hendrycks, executive director of the Center for AI Safety, a San Francisco-based nonprofit that published a letter in May signed by tech leaders claiming that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The letter received pushback from many academics and ethicists who believe that too much focus on AI’s growing power and its future threats distracts from the real-life harms that some algorithms cause to marginalized communities right now, rather than in an unspecified future.

    According to Greg Yang, co-founder of xAI, the startup will delve into the “mathematics of deep learning,” a facet of AI, and “develop the ‘theory of everything’ for large neural networks” to take AI “to the next level.”

    Musk reportedly incorporated xAI in Nevada in March. Previously, he had changed the name of Twitter to “X Corp.” in some financial filings, but on xAI’s website, the company notes its separation from X Corp., adding that it will “work closely with X (Twitter), Tesla, and other companies to make progress towards our mission.”

  • Sarah Silverman and novelists sue ChatGPT-maker OpenAI for ingesting their books

    Ask ChatGPT about comedian Sarah Silverman’s memoir “The Bedwetter” and the artificial intelligence chatbot can come up with a detailed synopsis of every part of the book.

    Does that mean it effectively “read” and memorized a pirated copy? Or did it scrape so many customer reviews and so much online chatter about the bestseller, and the musical it inspired, that it passes for an expert?

    The U.S. courts may now help sort that out after Silverman sued ChatGPT-maker OpenAI for copyright infringement this week, joining a growing number of writers who say they unwittingly built the foundation for Silicon Valley’s red-hot AI boom.

    Silverman’s lawsuit says she never gave permission for OpenAI to ingest the digital version of her 2010 book to train its AI models, and it was likely stolen from a “shadow library” of pirated works. It says the memoir was copied “without consent, without credit, and without compensation.”

    It’s one of a mounting number of cases that could crack open the secrecy of OpenAI and its rivals about the valuable data used to train increasingly widely used “generative AI” products that create new text, images and music. And it raises questions about the ethical and legal bedrock of tools that the McKinsey Global Institute projects will add the equivalent of $2.6 trillion to $4.4 trillion to the global economy.

    “This is an open, dirty secret of the whole machine learning industry,” said Matthew Butterick, one of the lawyers representing Silverman and other authors in seeking a class-action case. “They love book data and they get it from these illicit sites. We’re kind of blowing the whistle on that whole practice.”

    OpenAI declined to comment on the allegations. Another lawsuit from Silverman makes similar claims about an AI model built by Facebook and Instagram parent company Meta, which also declined comment.

    It may be a tough case for writers to win, especially after Google’s success in beating back legal challenges to its online book library. The U.S. Supreme Court in 2016 let stand lower court rulings that rejected authors’ claim that Google’s digitizing of millions of books, and showing small portions of them to the public, amounted to “copyright infringement on an epic scale.”

    “I think what OpenAI has done with books is awfully close to what Google was allowed to do with its Google Books project and so will be legal,” said Deven Desai, associate professor of law and ethics at the Georgia Institute of Technology.

    While only a handful have sued, including Silverman and bestselling novelists Mona Awad and Paul Tremblay, concerns about the tech industry’s AI-building practices have gained traction in literary and artist communities.

    Other prominent authors — among them Nora Roberts, Margaret Atwood, Louise Erdrich and Jodi Picoult — signed a letter late last month to the CEOs of OpenAI, Google, Microsoft, Meta and other AI developers accusing them of exploitative practices in building chatbots that “mimic and regurgitate” their language, style and ideas.

    “Millions of copyrighted books, articles, essays, and poetry provide the ‘food’ for AI systems, endless meals for which there has been no bill,” said the open letter organized by the Authors Guild and signed by more than 4,000 writers. “You’re spending billions of dollars to develop AI technology. It is only fair that you compensate us for using our writings, without which AI would be banal and extremely limited.”

    The AI systems behind popular products such as ChatGPT, Google’s Bard and Microsoft’s Bing chatbot are known as large language models that have “learned” by analyzing and picking up patterns from a wide body of ingested text. They’ve awed the public with their strong command of the human language, though they’re also known for a tendency to spout falsehoods.

    While the models have also been trained on news articles and social media feeds, books are particularly valuable, as OpenAI acknowledged in a 2018 paper cited in Silverman’s lawsuit.

    The earliest version of OpenAI’s large language model, known as GPT-1, relied on a dataset compiled by university researchers called the Toronto Book Corpus that included thousands of unpublished books, some in the adventure, fantasy and romance genres.

    “Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information,” OpenAI researchers said at the time. Other tech companies such as Google and Amazon also relied on the same data, which is no longer available in its original form.
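
    As a rough illustration of why long, contiguous text matters, consider a toy “language model” that simply counts which word follows which in a corpus and samples continuations. Real systems like GPT-1 use deep neural networks trained on thousands of books, but the underlying idea of learning word patterns from running text is similar; the corpus and output in this sketch are purely illustrative.

    ```python
    import random
    from collections import defaultdict

    # Toy illustration of learning patterns from contiguous text:
    # a bigram table records which word follows which in the corpus.
    # Real large language models use deep neural networks trained on
    # vast corpora; this corpus is purely illustrative.

    corpus = (
        "long stretches of contiguous text let a model learn which "
        "words tend to follow which words across long stretches of text"
    )

    next_words = defaultdict(list)
    tokens = corpus.split()
    for current, following in zip(tokens, tokens[1:]):
        next_words[current].append(following)

    def generate(start: str, length: int = 8) -> str:
        """Sample a continuation one word at a time from observed pairs."""
        out = [start]
        for _ in range(length):
            candidates = next_words.get(out[-1])
            if not candidates:
                break
            out.append(random.choice(candidates))
        return " ".join(out)

    print(generate("long"))
    ```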

    But since then, OpenAI and other top AI developers have grown more secretive about their sources of data, even as they have ingested even larger troves of written works. Butterick said circumstantial evidence points to the use of so-called shadow libraries of pirated content that held the works of Silverman and other plaintiffs.

    “It’s important for their models because books are the best source of long-form, well-edited, coherent writing,” he said. “You basically can’t have a high-quality language model unless you have books in your training data.”

    It could be weeks or months before a formal response is due from OpenAI. But once the case proceeds, tech executives could have to testify, under oath, about what sources of books they downloaded.

    “As far as we know, the other side hasn’t denied it,” said Joseph Saveri, another of Silverman’s lawyers. “They don’t have an alternative explanation for this.”

    Saveri said authors aren’t necessarily asking tech companies to throw away their algorithms and training data and start over — though the U.S. Federal Trade Commission has set a precedent for forcing companies to destroy ill-gotten AI data. But some way of compensating writers is needed, he said.

  • How the generative A.I. boom could forever change online advertising

    Shortly after ChatGPT hit the market last year and instantly captured headlines for its ability to appear human in answering user queries, digital marketing veteran Shane Rasnak began experimenting.

    As someone who had built a career in creating online ad campaigns for clients, Rasnak saw how generative artificial intelligence could transform his industry. Whether it was coming up with headlines for Facebook ads or short blurbs of ad copy, Rasnak said, jobs that would have taken him 30 minutes to an hour are now 15-minute projects.

    And that’s just the beginning.

    Rasnak is also playing with generative AI tools such as Midjourney, which turns text-based prompts into images, as he tries to dream up compelling visuals to accompany Facebook ads. The software is particularly handy for someone without a graphic design background, Rasnak said, and can be used alongside popular graphic-editing tools from Canva and Adobe, such as Photoshop.

    While it’s all still brand new, Rasnak said generative AI is “like the advent of social media” in terms of its impact on the digital ad industry. Facebook and Twitter made it possible for advertisers to target consumers based on their likes, friends and interests, and generative AI now gives them the ability to create tailored messaging and visuals in building and polishing campaigns.

    “In terms of how we market our work, the output, the quality and the volume that they’re able to put out, and how personalized you can get as a result of that, that just completely changes everything,” Rasnak said.

    Rasnak is far from alone on the hype train.

    Meta, Alphabet and Amazon, the leaders in online advertising, are all betting generative AI will eventually be core to their businesses. They’ve each recently debuted products or announced plans to develop various tools to help companies more easily create messages, images and even videos for their respective platforms.

    Their products are mostly still in trial phases and, in some cases, have been criticized for being rushed to market, but ad experts told CNBC that, taken as a whole, generative AI represents the next logical step in targeted online advertising.

    “This is going to have a seismic impact on digital advertising,” said Cristina Lawrence, executive vice president of consumer and content experience at Razorfish, a digital marketing agency that’s part of the ad giant Publicis Groupe.

    In May, Meta announced its AI Sandbox testing suite for companies to more easily use generative AI software to create background images and experiment with different advertising copy. The company also introduced updates to its Meta Advantage service, which uses machine learning to improve the efficiency of ads running on its various social apps.

    Meta has been pitching the Advantage suite as a way for companies to get better performance from their campaigns after Apple’s 2021 iOS privacy update limited their ability to track users across the internet.

    ‘Personalization at scale’

    Varos CEO Yarden Shaked said the increase shows Facebook is having some success in persuading advertisers to rely on its automated ad technology. However, Shaked said he’s “not sold on the creative piece yet,” regarding Meta’s nascent foray into providing generative AI tools for advertisers.

    Similarly, Rasnak said Midjourney’s tool isn’t “quite there yet” when it comes to producing realistic imagery that could be incorporated into an online ad, but is effective at generating “cartoony designs” that resonate with some smaller clients.

    Jay Pattisall, an analyst at Forrester, said several major hurdles prevent generative AI from having a major immediate impact on the online ad industry.

    One is brand safety. Companies are uncomfortable outsourcing campaigns to generative AI, which can generate visuals and phrases that reflect certain biases or are otherwise offensive and can be inaccurate.

    Earlier this year, Bloomberg News found that AI-created imagery from the popular Stable Diffusion tool produced visuals that reflected a number of stereotypes, generating images of people with darker skin tones when fed prompts such as “fast-food worker” or “social worker” and associating lighter skin tones with high-paying jobs.

    There are also potential legal issues when it comes to using generative AI powered by models trained on data that’s “scraped from the internet,” Pattisall said. Reddit, Twitter and Stack Overflow have said they will charge AI companies for use of the mounds of data on their platforms.

    Scott McKelvey, a longtime marketing writer and consultant, cited other limitations surrounding the quality of the output. Based on his limited experience with ChatGPT, the AI chatbot created by OpenAI, McKelvey said the technology fails to produce the kind of long-form content that companies could find useful as promotional copy.

    “It can provide fairly generic content, pulling from information that’s already out there,” McKelvey said. “But there’s no distinctive voice or point of view, and while some tools claim to be able to learn your brand voice based on your prompts and your inputs, I haven’t seen that yet.”

    An OpenAI spokesperson declined to comment.

    A spokesperson for Meta said in an email that the company has done extensive research to try to mitigate bias in its AI systems. Additionally, the company said it has brand-safety tools intended to give advertisers more control over where their ads appear online and it will remove any AI-generated content that’s in violation of its rules.

    “We are actively monitoring any new trends in AI-generated content,” the email said. “If the substance of the content, regardless of its creation mechanism, violates our Community Standards or Ads Standards, we remove the content. We are in the process of reviewing our public-facing policies to ensure that this standard is clear.”

    The Meta spokesperson added that as new chatbots and other automated tools come to market, “the industry will need to find ways to meet novel challenges for responsible deployment of AI in production” and “Meta intends to remain at the forefront of that work.”

    Stacy Reed, an online advertising and Facebook ads consultant, is currently incorporating generative AI into her daily work. She’s using the software to come up with variations of Facebook advertising headlines and short copy, and said it’s been helpful in a world where it’s more difficult to track users online.

    Reed described generative AI as a good “starting point,” but said companies and marketers still need to hone their own brand messaging strategy and not rely on generic content. Generative AI doesn’t “think” like a human strategist when producing content and often relies on a series of prompts to refine the text, she explained.  

    Thus, companies shouldn’t simply rely on the technology to do the big picture thinking of knowing what themes resonate with different audiences or how to execute major campaigns across multiple platforms.

    “I’m dealing with large brands that are struggling, because they’ve been so disconnected from the average customer that they’re no longer speaking their language,” Reed said.
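
    Reed’s description of refining output through a series of prompts can be pictured as a simple draft-critique-revise loop. This sketch is hypothetical: the call_model helper stands in for any chatbot API, and the prompts and workflow are illustrative rather than a real marketing tool.

    ```python
    # Hypothetical sketch of prompt-based refinement: draft, critique
    # against a brand voice, revise, repeat. call_model stands in for
    # any generative AI API; prompts and workflow are illustrative.

    def call_model(prompt: str) -> str:
        """Stand-in for a chatbot API call; echoes a canned response."""
        return f"[model output for: {prompt[:48]}...]"

    def refine_ad_copy(brief: str, brand_voice: str, rounds: int = 3) -> str:
        draft = call_model(f"Write short ad copy for: {brief}")
        for _ in range(rounds):
            critique = call_model(
                f"Critique this copy against the brand voice '{brand_voice}': {draft}"
            )
            draft = call_model(
                f"Revise the copy to address this critique: {critique}\nCopy: {draft}"
            )
        return draft

    print(refine_ad_copy("a summer sale on running shoes", "playful, concise"))
    ```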

    For now, major ad agencies and big companies are using generative AI mostly for pilot projects while waiting for the technology to develop, industry experts said.

    Earlier this year, Mint Mobile aired an ad featuring actor and co-owner Ryan Reynolds reading a script that he said was generated from ChatGPT. He asked the program to write the ad in his voice and use a joke, a curse word and to let the audience know that the promotion is still going.

    After reading the AI-created text, Reynolds said, “That is mildly terrifying, but compelling.”

  • 4 AI Trends That Have Helped the Creator Economy (and How to Take Advantage) | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    AI has come to dominate online conversations — especially when it comes to its impact on content creators. This includes both negative and positive viewpoints.

    In reality, the introduction of AI tools serves as a way to significantly boost the creator economy, streamlining work and helping creators increase their earning potential like never before.

    By understanding the trends that are currently giving the creator economy a jump-start, you can identify opportunities to take advantage of these tools in your own work.

    1. Conceptualization

    AI may not always have the capability to generate content that is ready for audiences — as many humorous examples from across the web reveal. But while its output is often imperfect, one area where it has given creators a significant boost is in the ideation and conceptualization phase.

    AI offers a powerful advantage because it can quickly generate image mockups, taglines, product names and more — all derived from human inputs. This helps creators fine-tune their ideas, serving as a launchpad for their own work.

    This can be especially helpful for creators collaborating with clients. Some production companies even use AI to generate concept pieces to establish a clear vision for a project with a client. While the final pieces are still completed by human artists, using AI to help establish the tone and direction for a project can save a lot of time by allowing clients to fine-tune the vision of what they want.

    Related: How AI Can Help Small Businesses Do More in Less Time

    2. Building a rough draft

    In many use cases, AI can go beyond mere idea generation and provide a usable rough draft that a human creator can then polish and perfect.

    For example, the average writer spends about 75% of their writing time working on a first draft — content that is usually far from perfect and requires a fair amount of editing before it can be published.

    With generative AI, writers can get much-needed help in creating that rough draft by providing an outline and basic talking points. By using AI to produce the so-called “rough draft,” writers can quickly move on to the next phase of the work — fine-tuning and editing the writing for flow, voice and other important qualities. By using AI to provide a baseline, many creators are able to dramatically speed up their creative process.

    By one estimate, writers who use AI to help in their process spend roughly 33% less time writing their posts in comparison to other writers. By streamlining and speeding up the creative process, writers can become more efficient and better monetize their work than in the past.

    3. Editing and optimization

    Another trending use for generative AI is in what could be described as the “editing and optimization” phase of content creation. For example, AI can be used to streamline the process of removing unwanted objects from the background of a video. Or it can be used to help pair the right background with a piece of visual content in the first place.

    A Lightricks survey of 1,000 content creators showed that 53% of respondents use AI for photo backgrounds, followed by 47% who used AI for video backgrounds. In such use cases, AI isn’t being used to create the focal point of the content. Rather, it is being used to fine-tune and optimize the content that a creator has made, and it is doing so in a streamlined manner that is much less time-consuming than it would otherwise be.

    The increased efficiency that can come from using AI in the editing process is perhaps part of why 38% of creators in the survey reported that they expect to command higher pay rates as a result of using AI.

    Related: How AI is Changing the Future of Personal Branding

    4. AI as an assistant

    Each of these current use trends can help creators save time and money as they work on projects for building their own personal brand — or on collaborations with clients. However, it’s worth noting that AI is poised to also help creators with tasks beyond streamlining their creative processes.

    Bill Gates sees AI as eventually being able to serve as “personal agents” that help improve productivity.

    “It will see your latest emails, know about the meetings you attend, read what you read and read the things you don’t want to bother with,” Gates says. “This will both improve your work on the tasks you want to do and free you from the ones you don’t want to do. You’ll be able to use natural language to have this agent help you with scheduling, communications and e-commerce, and it will work across all your devices.”

    While this full-scale personal assistant is still a ways off, it is one more way that AI can help creators focus on what matters most.

    Taking advantage of current (and future) AI is a must

    As these examples reveal, generative AI is poised to streamline a wide range of activities for creators, but it can never fully replace a human touch. Even in use cases where AI is tasked with creating an “end product,” it is still reliant on input from living, breathing people.

    What each of these examples highlights, however, is how generative AI is poised to provide a powerful boost to the creator economy. As AI helps creators shore up their individual weaknesses or simply work more effectively, they will be able to produce higher-quality work than ever before.

    Lucas Miller

  • Goldman Sachs says A.I. will ‘super-charge’ music creation and names 5 stocks to buy

    The music industry is set for a radical shift due in part to generative AI, according to Goldman Sachs, which described the new technology as providing “significant opportunities” for the sector.

    It named five buy-rated stocks to play the trend: Live Nation, Warner Music Group, French digital music company Believe, China’s NetEase, and Universal Music Group. All of the stocks are on its conviction list of top stocks.

    “Generative AI will super-charge music creation capabilities and improve productivity,” according to Goldman’s analysts in a June 28 note. And investors’ concerns over AI-generated music, such as a track reportedly created using the technology and featuring a “fake Drake” in April, are “overstated,” they suggested.

    Companies such as Deezer and Believe are using AI to detect when a music track has been created by AI, the analysts noted, while publishers are working with streaming sites like Spotify to take artificially generated tracks down.

    The music industry is well set up to protect its intellectual property given that it is dominated by three large companies that own the majority of artists’ catalogs, according to Goldman.

    “We believe the music industry is on the cusp of another major structural change given the persistent under-monetisation of music content, outdated streaming royalty payout structures and the deployment of Generative AI,” the analysts added.

    Streaming means it’s easier than ever for people to access music, but revenue has not matched consumption, the analysts noted. “For example, we estimate that the revenue per audio stream has fallen 20% in the past 5 years and that the revenue per hour streamed of music for Spotify is 4x lower than for Netflix,” the bank stated.

    Goldman likes events promoter Live Nation as it expects artists to tour more frequently due to what it calls the globalization of music. It added that younger generations becoming more aware of performers via social media will also boost the industry.

    On Believe, the bank said: “We expect the company to continue gaining market share with its digital-first approach, particularly in the fast-growing emerging markets across Asia.”

    WMG, meanwhile, is “one of the highest quality long-term growth compounders in our coverage group,” according to the analysts, while its competitor UMG is on the bank’s conviction list for Europe.

    “We believe UMG possesses several competitive advantages, including its scale, clear and consistent track record in breaking artists, the depth and breadth of its catalogue, and its ability to spot new trends early, under the stewardship of an experienced management team,” the analysts stated.

    Goldman chose Chinese internet company NetEase, which has a music streaming platform, for its use of AI in its music composition tools.

    — CNBC’s Michael Bloom contributed to this report.

  • UN council to hold first meeting on potential threats of artificial intelligence to global peace

    UNITED NATIONS — The U.N. Security Council will hold a first-ever meeting on the potential threats of artificial intelligence to international peace and security, organized by the United Kingdom, which sees tremendous potential in AI but also major risks, for example from its possible use in autonomous weapons or in the control of nuclear weapons.

    UK Ambassador Barbara Woodward on Monday announced the July 18 meeting as the centerpiece of the UK’s presidency of the council this month. It will include briefings by international AI experts and Secretary-General Antonio Guterres, who last month called the alarm bells over the most advanced form of AI “deafening,” and loudest from its developers.

    “These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war,” the U.N. chief said.

    Guterres announced plans to appoint an advisory board on artificial intelligence in September to prepare initiatives that the U.N. can take. He also said he would react favorably to a new U.N. agency on AI and suggested as a model the International Atomic Energy Agency, which is knowledge-based and has some regulatory powers.

    Woodward said the UK wants to encourage “a multilateral approach to managing both the huge opportunities and the risks that artificial intelligence holds for all of us,” stressing that “this is going to take a global effort.”

    She stressed that the benefits side is huge, citing AI’s potential to help U.N. development programs, improve humanitarian aid operations, assist peacekeeping operations and support conflict prevention, including by collecting and analyzing data. “It could potentially help us close the gap between developing countries and developed countries,” she added.

    But the risk side raises serious security questions that must also be addressed, Woodward said.

    Europe has led the world in efforts to regulate artificial intelligence, which gained urgency with the rise of a new breed of artificial intelligence that gives AI chatbots like ChatGPT the power to generate text, images, video and audio that resemble human work. On June 14, EU lawmakers signed off on the world’s first set of comprehensive rules for artificial intelligence, clearing a key hurdle as authorities across the globe race to rein in AI.

    In May, the head of the artificial intelligence company that makes ChatGPT told a U.S. Senate hearing that government intervention will be critical to mitigating the risks of increasingly powerful AI systems, saying as this technology advances people are concerned about how it could change their lives, and “we are too.”

    OpenAI CEO Sam Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

    Woodward said the Security Council meeting, to be chaired by UK Foreign Secretary James Cleverly, will provide an opportunity to listen to expert views on AI, which is a very new technology that is developing very fast, and start a discussion among the 15 council members on its implications.

    Britain’s Prime Minister Rishi Sunak has announced that the UK will host a summit on AI later this year, “where we’ll be able to have a truly global multilateral discussion,” Woodward said.

  • Is Twitter ready for Europe’s new Big Tech rules? EU official says it has work to do

    Is Twitter ready for Europe’s new Big Tech rules? EU official says it has work to do

    Twitter needs to do more work to fall in line with the European Union’s tough new digital rulebook, a top EU official said after overseeing a “stress test” of the company’s systems in Silicon Valley.

    European Commissioner Thierry Breton said late Thursday that he noted the “strong commitment of Twitter to comply” with the Digital Services Act, sweeping new standards that the world’s biggest online platforms all must obey in just two months.

    However, “work needs to continue,” he said in a statement after reviewing the results of the voluntary test at Twitter’s San Francisco headquarters with owner Elon Musk and new CEO Linda Yaccarino.

    Breton, who oversees digital policy, is also meeting other tech bosses in California. He’s the EU’s point person working to get Big Tech ready for the new rules, which will force companies to crack down on hate speech, disinformation and other harmful and illegal material on their sites. The law takes effect Aug. 25 for the biggest platforms.

    The Digital Services Act, along with new regulations in the pipeline for data and artificial intelligence, has made Brussels a trailblazer in the growing global movement to clamp down on tech giants.

    The mock exercise tested Twitter’s readiness to cope with the DSA’s requirements, including protecting children online and detecting and mitigating risks like disinformation, under both normal and extreme situations.

    “Twitter is taking the exercise seriously and has identified the key areas on which it needs to focus to comply with the DSA,” Breton said, without providing more details. “With two months to go before the new EU regulation kicks in, work needs to continue for the systems to be in place and work effectively and quickly.”

    Twitter’s global government affairs team tweeted that the company is “on track to be ready when the DSA comes into force.” Yaccarino tweeted that “Europe is very important to Twitter and we’re focused on our continued partnership.”

    Musk agreed in December to let the EU carry out the stress test, which the bloc is offering to all tech companies before the rules take effect. Breton said other online platforms will be carrying out their own stress tests in the coming weeks but didn’t name them.

    Despite Musk’s claims to the contrary, independent researchers have found misinformation — as well as hate speech — spreading on Twitter since the billionaire Tesla CEO took over the company last year. Musk has reinstated notorious election deniers, overhauled Twitter’s verification system and gutted much of the staff that had been responsible for moderating posts.

    Last month, Breton warned Twitter that it “can’t hide” from its obligations after the social media site abandoned the bloc’s voluntary “code of practice” on online disinformation, which other social media platforms have pledged to support.

    Combating disinformation will become a legal requirement under the Digital Services Act.

    “If laws are passed, Twitter will obey the law,” Musk told the France 2 TV channel this week when asked about the DSA.

    Breton’s agenda Friday includes discussions about the EU’s digital rules and upcoming artificial intelligence regulations with Meta CEO Mark Zuckerberg and OpenAI CEO Sam Altman, whose company makes the popular AI chatbot ChatGPT. Breton was scheduled to hold a briefing for journalists, but it was canceled at the last minute.

    The DSA is part of a sweeping update to the EU’s digital rulebook aimed at forcing tech companies to clean up their platforms and better protect users online.

    For European users of big tech platforms, it will be easier to report illegal content like hate speech, and they will get more information on why they have been recommended certain content.

    Violations will incur fines worth up to 6% of annual global revenue — amounting to billions of dollars for some tech giants — or even a ban on operating in the EU, with its 450 million consumers.

    Breton also is meeting Jensen Huang, CEO of Nvidia, the dominant supplier of semiconductors used in AI systems, for talks on the EU’s Chips Act to boost the continent’s chipmaking industry.

    The EU, meanwhile, is putting the final touches on its AI Act, the world’s first comprehensive set of rules on the emerging technology that has stirred fascination as well as fears it could violate privacy, upend jobs, infringe on copyright and more.

    Final approval is expected by the end of the year, but it won’t take effect until two years later. Breton has been pitching a voluntary “AI Pact” to help companies get ready for its adoption.

  • Former College Tutor Creates ES.AI, a Revolutionary AI Toolkit for College Applicants

    Former College Tutor Creates ES.AI, a Revolutionary AI Toolkit for College Applicants

    ES.AI (pronounced [ES] + [AY] + [EYE]), the emerging provider of affordable and ethical AI tools for students and young professionals, has announced the launch of a generative AI tool designed to help college applicants stand out from the crowd. 

    Introducing ES.AI’s College Application Essay Tool — the first in a suite of four advanced AI solutions created specifically for college-bound students who want to gain an edge on their competition. This groundbreaking technology automates the time-consuming process of brainstorming, outlining, and editing compelling and effective application essays, helping users focus on crafting their personal stories while leaving behind the tedious research work. 

    “The days of expensive college tutors charging outrageous hourly fees are over,” said Julia Dixon, Founder of ES.AI. “Our mission is simple: make access to high-quality education tools available to all, regardless of income or background.” 

    As a former college essay tutor, Dixon knows firsthand how difficult it can be for many students to afford traditional tutoring services that can cost upwards of $100 per hour. With this new tool from ES.AI, she hopes to level the playing field so that every student has a fair shot at success. 

    The College Application Essay Tool utilizes cutting-edge natural language processing (NLP) algorithms developed by top experts in machine learning and artificial intelligence. These algorithms analyze user input data such as academic achievements, extracurricular activities, passions, interests and athletic accomplishments, then generate detailed, personalized recommendations on how applicants can best present themselves in response to their essay prompts, frame their writing, and incorporate specific characteristics of the schools they’re applying to.
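
    The release doesn’t describe ES.AI’s internals, so purely as illustration, a pipeline of the shape described above might be sketched in Python as below. Every name in the sketch (ApplicantProfile, build_prompt, recommend, and the injected call_model backend) is hypothetical and not part of ES.AI’s actual product.

        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class ApplicantProfile:
            achievements: List[str]   # academic achievements
            activities: List[str]     # extracurriculars and athletics
            passions: List[str]       # interests the essay should surface
            target_school: str

        def build_prompt(profile: ApplicantProfile, essay_prompt: str) -> str:
            """Fold the applicant's profile and the school's prompt into one request."""
            return (
                f"Essay prompt: {essay_prompt}\n"
                f"Target school: {profile.target_school}\n"
                f"Achievements: {', '.join(profile.achievements)}\n"
                f"Activities: {', '.join(profile.activities)}\n"
                f"Passions: {', '.join(profile.passions)}\n"
                "Suggest an outline, framing, and school-specific details to include."
            )

        def recommend(profile: ApplicantProfile, essay_prompt: str,
                      call_model: Callable[[str], str]) -> str:
            # call_model is a stand-in for whatever NLP backend is used;
            # injecting it keeps the sketch backend-agnostic.
            return call_model(build_prompt(profile, essay_prompt))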

    “Our work doesn’t just stop with providing affordable AI tools; we actively contribute towards building communities around our products through regular content updates and expert advice,” adds Dixon. 

    ES.AI’s College Application Essay Tool represents a step forward in its overarching goal: putting students and their stories first by providing them with innovative, accessible technologies for their educational journey. 

    About ES.AI:

    ES.AI provides affordable, ethical, high-quality AI tools for students seeking an edge in today’s competitive landscape. Their College Application Essay Tool is one in a suite of AI solutions designed to make higher education more accessible and affordable. The company is committed to putting students first, ensuring that every student has access to high-quality writing tools regardless of income or background.

    Source: ES.AI Toolkit LLC

  • Europe, US urged to investigate the type of AI that powers systems like ChatGPT

    Europe, US urged to investigate the type of AI that powers systems like ChatGPT

    LONDON — European Union consumer protection groups urged regulators on Tuesday to investigate the type of artificial intelligence underpinning systems like ChatGPT, citing risks that leave people vulnerable and the delay before the bloc’s groundbreaking AI regulations take effect.

    In a coordinated effort, 13 watchdog groups wrote to their national consumer, data protection, competition and product safety authorities warning them about a range of concerns around generative artificial intelligence.

    A transatlantic coalition of consumer groups also wrote to U.S. President Joe Biden asking him to take action to protect consumers from possible harms caused by generative AI.

    Europe has led the world in efforts to regulate artificial intelligence, which gained urgency with the rise of a new breed of artificial intelligence that gives AI chatbots like ChatGPT the power to generate text, images, video and audio that resemble human work.

    The EU is putting the finishing touches on the world’s first set of comprehensive rules for the technology, but they are not expected to take effect for two years.

    The groups called for European and U.S. leaders to both use existing laws and bring in new legislation to address the harms that generative AI can cause.

    They cited a report by the Norwegian Consumer Council outlining dangers that AI chatbots pose, including providing incorrect medical information, manipulating people, making up news articles and illegally using vast amounts of personal data scraped off the internet.

    The consumer groups, in countries including Italy, Spain, Sweden, the Netherlands, Greece and Denmark, warn that while the EU’s AI Act addresses some of their concerns, its rules won’t start applying for several years, “leaving consumers unprotected from a technology which is insufficiently regulated in the meantime, and developing at great pace.”

    Some authorities have already acted. Italy’s privacy watchdog ordered ChatGPT maker OpenAI to temporarily stop processing users’ personal information while it investigated a possible data breach. France, Spain and Canada also have been looking into OpenAI and ChatGPT.

  • How Europe is leading the world in the push to regulate AI

    How Europe is leading the world in the push to regulate AI

    LONDON — Lawmakers in Europe signed off Wednesday on the world’s first set of comprehensive rules for artificial intelligence, clearing a key hurdle as authorities across the globe race to rein in AI.

    The European Parliament vote is one of the last steps before the rules become law, which could act as a model for other places working on similar regulations.

    A yearslong effort by Brussels to draw up guardrails for AI has taken on more urgency as rapid advances in chatbots like ChatGPT show the benefits the emerging technology can bring — and the new perils it poses.

    Here’s a look at the EU’s Artificial Intelligence Act:

    HOW DO THE RULES WORK?

    The measure, first proposed in 2021, will govern any product or service that uses an artificial intelligence system. The act will classify AI systems according to four levels of risk, from minimal to unacceptable.

    Riskier applications, such as for hiring or tech targeted to children, will face tougher requirements, including being more transparent and using accurate data.

    It will be up to the EU’s 27 member states to enforce the rules. Regulators could force companies to withdraw their apps from the market.

    In extreme cases, violations could draw fines of up to 30 million euros ($33 million) or 6% of a company’s annual global revenue, which in the case of tech companies like Google and Microsoft could amount to billions.
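
    As a rough aid to the arithmetic, the penalty ceiling can be sketched in a few lines of Python. Two details are assumptions rather than facts from this article: the names of the two middle risk tiers, which follow the European Commission’s proposal, and the rule that the higher of the two caps applies, which mirrors comparable EU regimes such as the GDPR.

        from enum import Enum

        class RiskTier(Enum):
            # The act's four levels; the two middle names follow the
            # Commission's proposal and are not spelled out in this article.
            MINIMAL = 1        # e.g. spam filters, video games
            LIMITED = 2        # transparency duties, e.g. labeling chatbots
            HIGH = 3           # e.g. hiring, education; tough requirements
            UNACCEPTABLE = 4   # e.g. social scoring; banned outright

        FIXED_CAP_EUR = 30_000_000   # fixed ceiling cited above
        REVENUE_SHARE = 0.06         # 6% of annual global revenue

        def max_fine_eur(annual_global_revenue_eur: float) -> float:
            """Ceiling for an extreme violation, assuming the higher of the
            two caps applies, as in comparable EU regimes like the GDPR."""
            return max(FIXED_CAP_EUR, REVENUE_SHARE * annual_global_revenue_eur)

        # For a giant with, say, 250 billion euros in revenue, the 6% cap
        # dominates: 15 billion euros, the "billions" the article mentions.
        print(f"{max_fine_eur(250e9):,.0f}")  # 15,000,000,000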

    WHAT ARE THE RISKS?

    One of the EU’s main goals is to guard against any AI threats to health and safety and protect fundamental rights and values.

    That means some AI uses are an absolute no-no, such as “social scoring” systems that judge people based on their behavior.

    Also forbidden is AI that exploits vulnerable people, including children, or uses subliminal manipulation that can result in harm, for example, an interactive talking toy that encourages dangerous behavior.

    Predictive policing tools, which crunch data to forecast who will commit crimes, are also out.

    Lawmakers beefed up the original proposal from the European Commission, the EU’s executive branch, by widening the ban on real-time remote facial recognition and biometric identification in public. The technology scans passers-by and uses AI to match their faces or other physical traits to a database.

    A contentious amendment to allow law enforcement exceptions such as finding missing children or preventing terrorist threats did not pass.

    AI systems used in categories like employment and education, which would affect the course of a person’s life, face tough requirements such as being transparent with users and taking steps to assess and reduce risks of bias from algorithms.

    Most AI systems, such as video games or spam filters, fall into the low- or no-risk category, the commission says.

    WHAT ABOUT CHATGPT?

    The original measure barely mentioned chatbots, mainly by requiring them to be labeled so users know they’re interacting with a machine. Negotiators later added provisions to cover general purpose AI like ChatGPT after it exploded in popularity, subjecting that technology to some of the same requirements as high-risk systems.

    One key addition is a requirement to thoroughly document any copyrighted material used to teach AI systems how to generate text, images, video and music that resemble human work.

    That would let content creators know if their blog posts, digital books, scientific articles or songs have been used to train algorithms that power systems like ChatGPT. Then they could decide whether their work has been copied and seek redress.

    WHY ARE THE EU RULES SO IMPORTANT?

    The European Union isn’t a big player in cutting-edge AI development. That role is taken by the U.S. and China. But Brussels often plays a trend-setting role with regulations that tend to become de facto global standards and has become a pioneer in efforts to target the power of large tech companies.

    The sheer size of the EU’s single market, with 450 million consumers, makes it easier for companies to comply than develop different products for different regions, experts say.

    But it’s not just a crackdown. By laying down common rules for AI, Brussels is also trying to develop the market by instilling confidence among users.

    “The fact this is regulation that can be enforced and companies will be held liable is significant” because other places like the United States, Singapore and Britain have merely offered “guidance and recommendations,” said Kris Shrishak, a technologist and senior fellow at the Irish Council for Civil Liberties.

    “Other countries might want to adapt and copy” the EU rules, he said.

    Businesses and industry groups warn that Europe needs to strike the right balance.

    “The EU is set to become a leader in regulating artificial intelligence, but whether it will lead on AI innovation still remains to be seen,” said Boniface de Champris, a policy manager for the Computer and Communications Industry Association, a lobbying group for tech companies.

    “Europe’s new AI rules need to effectively address clearly defined risks, while leaving enough flexibility for developers to deliver useful AI applications to the benefit of all Europeans,” he said.

    Sam Altman, CEO of ChatGPT maker OpenAI, has voiced support for some guardrails on AI and signed on with other tech executives to a warning about the risks it poses to humankind. But he also has said it’s “a mistake to go put heavy regulation on the field right now.”

    Others are playing catch up on AI rules. Britain, which left the EU in 2020, is jockeying for a position in AI leadership. Prime Minister Rishi Sunak plans to host a world summit on AI safety this fall.

    “I want to make the U.K. not just the intellectual home but the geographical home of global AI safety regulation,” Sunak said at a tech conference this week.

    WHAT’S NEXT?

    It could be years before the rules fully take effect. The next step is three-way negotiations involving member countries, the Parliament and the European Commission, possibly facing more changes as they try to agree on the wording.

    Final approval is expected by the end of this year, followed by a grace period for companies and organizations to adapt, often around two years.

    Brando Benifei, an Italian member of the European Parliament who is co-leading its work on the AI Act, said they would push for quicker adoption of the rules for fast-evolving technologies like generative AI.

    To fill the gap before the legislation takes effect, Europe and the U.S. are drawing up a voluntary code of conduct that officials promised at the end of May would be drafted within weeks and could be expanded to other “like-minded countries.”

    ___

    This story has been corrected to show that Kris Shrishak’s last name was misspelled.

  • The Beatles are releasing their ‘final’ record. AI helped make it possible

    The Beatles are releasing their ‘final’ record. AI helped make it possible

    LONDON — Artificial intelligence has been used to extract John Lennon’s voice from an old demo to create “the last Beatles record,” decades after the band broke up, Paul McCartney said Tuesday.

    McCartney, 80, told the BBC that the technology was used to separate the Beatles’ voices from background sounds during the making of director Peter Jackson’s 2021 documentary series, “The Beatles: Get Back.” The “new” song is set to be released later this year, he said.

    Jackson was “able to extricate John’s voice from a ropey little bit of cassette and a piano,” McCartney told BBC radio. “He could separate them with AI, he’d tell the machine ‘That’s a voice, this is a guitar, lose the guitar’.”

    “So when we came to make what will be the last Beatles record, it was a demo that John had that we worked on,” he added. “We were able to take John’s voice and get it pure through this AI so then we could mix the record as you would do. It gives you some sort of leeway.”

    McCartney didn’t identify the name of the demo, but the BBC and others said it was likely to be an unfinished 1978 love song by Lennon called “Now and Then.” The demo was included on a cassette labeled “For Paul” that McCartney had received from Lennon’s widow, Yoko Ono, the BBC reported.

    McCartney described AI technology as “kind of scary but exciting,” adding: “We will just have to see where that leads.”

    The same technology enabled McCartney to “duet” virtually with Lennon, who was murdered in 1980, on “I’ve Got a Feeling” last year at Glastonbury Festival.

    Holly Herndon, a multidisciplinary artist with a doctorate in composition from Stanford University, used nascent machine learning technology on her last album, 2019’s “Proto,” and developed Holly+, an online protocol that allows the public to upload tracks to be reinterpreted and performed by a deepfake version of her voice. She theorizes that the Beatles’ recording was likely created using a process called “source separation.”

    “Source separation has become much easier to do with machine learning. This allows you to extract a voice from a recording, isolating it so that you might accompany it with new instrumentation,” she explains.

    That differs from a deepfake vocal. “A deepfake is an entirely new vocal line spawned from a machine learning model trained on old vocal lines,” she said. “While it does not appear to be happening in this example, it is now possible to spawn infinite new media from analyzing older material, which is a similar process, in spirit, to this song.”
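
    To make the distinction concrete, here is a minimal, scipy-based sketch of masking-style source separation. It is illustrative only: real systems, including the tooling used on the Beatles tapes, learn the mask with a neural network, whereas this sketch takes vocal_mask as given so that only the signal arithmetic is visible.

        import numpy as np
        from scipy.signal import stft, istft

        def isolate_voice(mix: np.ndarray, sr: int, vocal_mask: np.ndarray) -> np.ndarray:
            """Recover one source from a mono mix, given a mask over its spectrogram."""
            _, _, Z = stft(mix, fs=sr, nperseg=1024)   # complex spectrogram of the mixture
            assert vocal_mask.shape == Z.shape          # one weight in [0, 1] per time-frequency bin
            _, voice = istft(Z * vocal_mask, fs=sr, nperseg=1024)  # keep voice bins, resynthesize
            return voice

        # Sanity check: an all-ones mask reconstructs (approximately) the input.
        sr, mix = 16_000, np.random.randn(16_000)
        ones = np.ones(stft(mix, fs=sr, nperseg=1024)[2].shape)
        out = isolate_voice(mix, sr, ones)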

    McCartney is set to open an exhibition later this month at the National Portrait Gallery in London featuring previously unseen photographs that he took during the early days of the Beatles at the start of “Beatlemania,” when the band rose to worldwide fame.

    The exhibition, titled “Eyes of the Storm,” showcases more than 250 photos McCartney took on his camera between 1963 and 1964 — including portraits of Ringo Starr, George Harrison and Lennon, as well as Beatles manager Brian Epstein.

    ___

    This story has been corrected to show that the title of McCartney’s photo exhibition is “Eyes of the Storm,” not “Eye of the Storm.”

    ___

    Sherman reported from Los Angeles.

  • Schumer to host first of three senator-only A.I. briefings as Congress considers how to regulate

    Schumer to host first of three senator-only A.I. briefings as Congress considers how to regulate

    Senate Majority Leader Chuck Schumer, D-N.Y., is set to host the first of three educational sessions about artificial intelligence Tuesday as Congress considers how best to regulate the technology.

    Schumer announced Monday on the floor of the Senate that Massachusetts Institute of Technology professor Antonio Torralba, a machine learning expert, would lead the first of the senators-only sessions. Tuesday’s talk is set to offer a general overview of AI and its current capabilities, Schumer said.

    Lawmakers across Congress are trying to learn more about the technology and figure out what new legislation might be needed to tackle its unique challenges. Hearings about AI have focused on topics ranging from its effects on intellectual property to human rights.

    Lawmakers heard from Sam Altman, the CEO of ChatGPT-maker OpenAI, in May. Since then, other experts in the field have hoped policymakers would engage with a diverse range of voices as they consider legislation, so as not to be overly swayed by an early business leader in the space.

    The series of talks was first announced in a Dear Colleague letter Schumer sent last week alongside Sens. Mike Rounds, R-S.D., Martin Heinrich, D-N.M., and Todd Young, R-Ind. In the letter, the senators said the three discussions would ask the following questions:

    1. Where is AI today?
    2. What is the frontier of AI and how do we maintain American leadership?
    3. How do the Department of Defense and Intelligence Community use AI today and what do we know about how our adversaries are using AI[?]

    The third question would be tackled in a classified all-senators briefing, the first of its kind on AI.

    “The Senate must deepen our expertise in this pressing topic. AI is already changing our world, and experts have repeatedly told us that it will have a profound impact on everything from our national security to our classrooms to our workforce, including potentially significant job displacement,” the group wrote. “We must take the time to learn from the leading minds in AI, across sectors, and consider both the benefits and risks of this technology.”

    In his remarks on the floor Monday, Schumer reiterated, “It’s imperative that we senators take the time to educate ourselves on AI and its implications, so that we can ensure it becomes a force for human prosperity, while mitigating its very real risks.”
