ReportWire

Tag: openai

  • ChatGPT creator pulls AI detection tool due to ‘low rate of accuracy’ | CNN Business

    CNN —

    Less than six months after ChatGPT-creator OpenAI unveiled an AI detection tool with the potential to help teachers and other professionals detect AI-generated work, the company has pulled the feature.

    OpenAI quietly shut down the tool last week citing a “low rate of accuracy,” according to an update to the original company blog post announcing the feature.

    “We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” the company wrote in the update. OpenAI said it is also committed to helping “users to understand if audio or visual content is AI-generated.”

    The news may renew concerns about whether the companies behind a new crop of generative AI tools are equipped to build safeguards. It also comes as educators prepare for the first full school year with tools like ChatGPT publicly available.

    The sudden rise of ChatGPT quickly raised alarms among some educators late last year over the possibility that it could make it easier than ever for students to cheat on written work. Public schools in New York City and Seattle banned students and teachers from using ChatGPT on their districts’ networks and devices. Some educators moved with remarkable speed to rethink their assignments in response to ChatGPT, even as it remained unclear how widespread use of the tool was among students and how harmful it could really be to learning.

    Against that backdrop, OpenAI announced the AI detection tool in February to allow users to check if an essay was written by a human or AI. The feature, which worked on English AI-generated text, was powered by a machine learning system that classifies input text into one of several categories. After pasting a body of text such as a school essay into the new tool, it gave one of five possible outcomes, ranging from “likely generated by AI” to “very unlikely.”

    But even on its launch day, OpenAI admitted the tool was “imperfect” and results should be “taken with a grain of salt.”

    “We really don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times – much like using AI for any kind of assessment purposes,” Lama Ahmad, policy research director at OpenAI, told CNN at the time.

    While the tool might provide one more reference point, alongside comparisons with past examples of a student’s work and writing style, Ahmad said “teachers need to be really careful in how they include it in academic dishonesty decisions.”

    Although OpenAI may be shelving its tool for now, there are some alternatives on the market.

    Other companies, such as Turnitin, have also rolled out AI writing detection tools that could help teachers identify when assignments are written by AI. Meanwhile, Princeton student Edward Tian introduced a similar AI detection tool, called GPTZero.


  • 300 million jobs could be affected by latest wave of AI, says Goldman Sachs | CNN Business

    Hong Kong CNN —

    As many as 300 million full-time jobs around the world could be automated in some way by the newest wave of artificial intelligence that has spawned platforms like ChatGPT, according to Goldman Sachs economists.

    They predicted in a report Sunday that 18% of work globally could be computerized, with the effects felt more deeply in advanced economies than emerging markets.

    That’s partly because white-collar workers are seen to be more at risk than manual laborers. Administrative workers and lawyers are expected to be most affected, the economists said, compared to the “little effect” seen on physically demanding or outdoor occupations, such as construction and repair work.

    In the United States and Europe, approximately two-thirds of current jobs “are exposed to some degree of AI automation,” and up to a quarter of all work could be done by AI completely, the bank estimates.

    If generative artificial intelligence “delivers on its promised capabilities, the labor market could face significant disruption,” the economists wrote. The term refers to the technology behind ChatGPT, the chatbot sensation that has taken the world by storm.

    ChatGPT, which can answer prompts and write essays, has already prompted many businesses to rethink how people should work every day.

    This month, its developer unveiled the latest version of the software behind the bot, GPT-4. The platform has quickly impressed early users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Further use of such AI will likely lead to job losses, the Goldman Sachs economists wrote. But they noted that technological innovation that initially displaces workers has historically also created employment growth over the long haul.

    While workplaces may shift, widespread adoption of AI could ultimately increase labor productivity — and boost global GDP by 7% annually over a 10-year period, according to Goldman Sachs.

    “Although the impact of AI on the labor market is likely to be significant, most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI,” the economists added.

    “Most workers are employed in occupations that are partially exposed to AI automation and, following AI adoption, will likely apply at least some of their freed-up capacity toward productive activities that increase output.”

    For US workers expected to be affected, for instance, 25% to 50% of their workload “can be replaced,” the researchers added.

    “The combination of significant labor cost savings, new job creation, and a productivity boost for non-displaced workers raises the possibility of a labor productivity boom like those that followed the emergence of earlier general-purpose technologies like the electric motor and personal computer.”

    — CNN’s Nicole Goodkind contributed to this report.


  • White House unveils an AI plan ahead of meeting with tech CEOs | CNN Business

    CNN —

    The White House on Thursday announced a series of measures to address the challenges of artificial intelligence, driven by the sudden popularity of tools such as ChatGPT and amid rising concerns about the technology’s potential risks for discrimination, misinformation and privacy.

    The US government plans to introduce policies that shape how federal agencies procure and use AI systems, the White House said. The step could significantly influence the market for AI products and control how Americans interact with AI on government websites, at security checkpoints and in other settings.

    The National Science Foundation will also spend $140 million to promote research and development in AI, the White House added. The funds will be used to create research centers that seek to apply AI to issues such as climate change, agriculture and public health, according to the administration.

    The plan comes the same day that Vice President Kamala Harris and other administration officials are expected to meet with the CEOs of Google, Microsoft, ChatGPT-creator OpenAI and Anthropic to emphasize the importance of ethical and responsible AI development. And it coincides with a UK government inquiry launched Thursday into the risks and benefits of AI.

    “Tech companies have a fundamental responsibility to make sure their products are safe and secure, and that they protect people’s rights before they’re deployed or made public,” a senior Biden administration official told reporters on a conference call.

    Officials cited a range of risks the public faces in the widespread adoption of AI tools, including the possible use of AI-created deepfakes and misinformation that could undermine the democratic process. Job losses linked to rising automation, biased algorithmic decision-making, physical dangers arising from autonomous vehicles and the threat of AI-powered malicious hackers are also on the White House’s list of concerns.

    It’s just the latest example of the federal government acknowledging concerns from the rapid development and deployment of new AI tools, and trying to find ways to address some of the risks.

    Testifying before Congress, members of the Federal Trade Commission have argued AI could “turbocharge” fraud and scams. Its chair, Lina Khan, wrote in a New York Times op-ed this week that the US government has ample existing legal authority to regulate AI by leaning on its mandate to protect consumers and competition.

    Last year, the Biden administration unveiled a proposal for an AI Bill of Rights calling for developers to respect the principles of privacy, safety and equal rights as they create new AI tools.

    Earlier this year, the Commerce Department released voluntary risk management guidelines for AI that it said could help organizations and businesses “govern, map, measure and manage” the potential dangers in each part of the development cycle. In April, the Department also said it is seeking public input on the best policies for regulating AI, including through audits and industry self-regulation.

    The US government isn’t alone in seeking to shape AI development. European officials anticipate hammering out AI legislation as soon as this year that could have major implications for AI companies around the world.


  • Forget about the AI apocalypse. The real dangers are already here | CNN Business

    CNN —

    Two weeks after members of Congress questioned OpenAI CEO Sam Altman about the potential for artificial intelligence tools to spread misinformation, disrupt elections and displace jobs, he and others in the industry went public with a much more frightening possibility: an AI apocalypse.

    Altman, whose company is behind the viral chatbot tool ChatGPT, joined Google DeepMind CEO Demis Hassabis, Microsoft’s CTO Kevin Scott and dozens of other AI researchers and business leaders in signing a one-sentence letter last month stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The stark warning was widely covered in the press, with some suggesting it showed the need to take such apocalyptic scenarios more seriously. But it also highlights an important dynamic in Silicon Valley right now: Top executives at some of the biggest tech companies are simultaneously telling the public that AI has the potential to bring about human extinction while also racing to invest in and deploy this technology into products that reach billions of people.

    The dynamic has played out elsewhere recently, too. Tesla CEO Elon Musk, for example, said in a TV interview in April that AI could lead to “civilization destruction.” But he still remains deeply involved in the technology through investments across his sprawling business empire and has said he wants to create a rival to the AI offerings by Microsoft and Google.

    Some AI industry experts say that focusing attention on far-off scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities, including spreading misinformation, perpetuating biases and enabling discrimination in various services.

    “Motives seemed to be mixed,” Gary Marcus, an AI researcher and New York University professor emeritus who testified before lawmakers alongside Altman last month, told CNN. Some of the execs are likely “genuinely worried about what they have unleashed,” he said, but others may be trying to focus attention on “abstract possibilities to detract from the more immediate possibilities.”

    Representatives for Google and OpenAI did not immediately respond to a request for comment. In a statement, a Microsoft spokesperson said: “We are optimistic about the future of AI, and we think AI advances will solve many more challenges than they present, but we have also been consistent in our belief that when you create technologies that can change the world, you must also ensure that the technology is used responsibly.”

    For Marcus, a self-described critic of AI hype, “the biggest immediate threat from AI is the threat to democracy from the wholesale production of compelling misinformation.”

    Generative AI tools like OpenAI’s ChatGPT and Dall-E are trained on vast troves of data online to create compelling written work and images in response to user prompts. With these tools, for example, one could quickly mimic the style or likeness of public figures in an attempt to create disinformation campaigns.

    In his testimony before Congress, Altman also said the potential for AI to be used to manipulate voters and target disinformation were among “my areas of greatest concern.”

    Even in more ordinary use cases, however, there are concerns. The same tools have been called out for offering wrong answers to user prompts, outright “hallucinating” responses and potentially perpetuating racial and gender biases.

    Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, told CNN that some companies may want to divert attention from the bias baked into their data and also from concerning claims about how their systems are trained.

    Bender cited intellectual property concerns with some of the data these systems are trained on as well as allegations of companies outsourcing the work of going through some of the worst parts of the training data to low-paid workers abroad.

    “If the public and the regulators can be focused on these imaginary science fiction scenarios, then maybe these companies can get away with the data theft and exploitative practices for longer,” Bender told CNN.

    Regulators may be the real intended audience for the tech industry’s doomsday messaging.

    As Bender puts it, execs are essentially saying: “‘This stuff is very, very dangerous, and we’re the only ones who understand how to rein it in.’”

    Judging from Altman’s appearance before Congress, this strategy might work. Altman appeared to win over Washington by echoing lawmakers’ concerns about AI — a technology that many in Congress are still trying to understand — and offering suggestions for how to address it.

    This approach to regulation would be “hugely problematic,” Bender said. It could give the industry influence over the regulators tasked with holding it accountable and also leave out the voices and input of other people and communities experiencing negative impacts of this technology.

    “If the regulators kind of orient towards the people who are building and selling the technology as the only ones who could possibly understand this, and therefore can possibly inform how regulation should work, we’re really going to miss out,” Bender said.

    Bender said she tries, at every opportunity, to tell people “these things seem much smarter than they are.” As she put it, this is because “we are as smart as we are” and the way that we make sense of language, including responses from AI, “is actually by imagining a mind behind it.”

    Ultimately, Bender put forward a simple question for the tech industry on AI: “If they honestly believe that this could be bringing about human extinction, then why not just stop?”


  • Google, Microsoft, OpenAI and Anthropic announce industry group to promote safe AI development | CNN Business

    CNN —

    Some of the world’s top artificial intelligence companies are launching a new industry body to work together — and with policymakers and researchers — on ways to regulate the development of bleeding-edge AI.

    The new organization, known as the Frontier Model Forum, was announced Wednesday by Google, Microsoft, OpenAI and Anthropic. The companies said the forum’s mission would be to develop best practices for AI safety, promote research into AI risks, and to publicly share information with governments and civil society.

    Wednesday’s announcement reflects how AI developers are coalescing around voluntary guardrails for the technology ahead of an expected push this fall by US and European Union lawmakers to craft binding legislation for the industry.

    News of the forum comes after the four AI firms, along with several others including Amazon and Meta, pledged to the Biden administration to subject their AI systems to third-party testing before releasing them to the public and to clearly label AI-generated content.

    The industry-led forum, which is open to other companies designing the most advanced AI models, plans to make its technical evaluations and benchmarks available through a publicly accessible library, the companies said in a joint statement.

    “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft president Brad Smith. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

    The announcement comes a day after AI experts such as Anthropic CEO Dario Amodei and AI pioneer Yoshua Bengio warned lawmakers of potentially serious, even “catastrophic” societal risks stemming from unrestrained AI development.

    “In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology,” Amodei said in his written testimony.

    Within two to three years, Amodei said, AI could become powerful enough to help malicious actors build functional biological weapons, where today those actors may lack the specialized knowledge needed to complete the process.

    The best way to prevent major harms, Bengio told a Senate panel, is to restrict access to AI systems; develop standard and effective testing regimes to ensure those systems reflect shared societal values; limit how much of the world any single AI system can truly understand; and constrain the impact that AI systems can have on the real world.

    The European Union is moving toward legislation that could be finalized as early as this year that would ban the use of AI for predictive policing and limit its use in lower-risk scenarios.

    US lawmakers are much further behind. While a number of AI-related bills have already been introduced in Congress, much of the driving force for a comprehensive AI bill rests with Senate Majority Leader Chuck Schumer, who has prioritized getting members up to speed on the basics of the industry through a series of briefings this summer.

    Starting in September, Schumer has said, the Senate will hold a series of nine additional panels for members to learn about how AI could affect jobs, national security and intellectual property.


  • Welcome to the era of viral AI generated ‘news’ images | CNN Business

    New York CNN —

    Pope Francis wearing a massive, white puffer coat. Elon Musk walking hand-in-hand with rival GM CEO Mary Barra. Former President Donald Trump being detained by police in dramatic fashion.

    None of these things actually happened, but AI-generated images depicting them did go viral online over the past week.

    The images ranged from obviously fake to, in some cases, compellingly real, and they fooled some social media users. Model and TV personality Chrissy Teigen, for example, tweeted that she thought the pope’s puffer coat was real, saying, “didn’t give it a second thought. no way am I surviving the future of technology.” The images also sparked a slew of headlines, as news organizations rushed to debunk the false images, especially those of Trump, who was ultimately indicted by a Manhattan grand jury on Thursday but has not been arrested.

    The situation demonstrates a new online reality: the rise of a new crop of buzzy artificial intelligence tools has made it cheaper and easier than ever to create realistic images, as well as audio and videos. And these images are likely to pop up with increasing frequency on social media.

    While these AI tools may enable new means of expressing creativity, the spread of computer-generated media also threatens to further pollute the information ecosystem. That risks adding to the challenges for users, news organizations and social media platforms to vet what’s real, after years of grappling with online misinformation featuring far less sophisticated visuals. There are also concerns that AI-generated images could be used for harassment, or to further drive divided internet users apart.

    “I worry that it will sort of get to a point where there will be so much fake, highly realistic content online that most people will just go with their tribal instincts as a guide to what they think is real, more than actually informed opinions based on verified evidence,” said Henry Ajder, a synthetic media expert who works as an advisor to companies and government agencies, including Meta Reality Labs’ European Advisory Council.

    Images, compared to the AI-generated text that has also recently proliferated thanks to tools like ChatGPT, can be especially powerful in provoking emotions when people view them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI, a nonprofit industry group. That can make it harder for people to slow down and evaluate whether what they’re looking at is real or fake.

    What’s more, coordinated bad actors could eventually attempt to create fake content in bulk — or suggest that real content is computer-generated — in order to confuse internet users and provoke certain behaviors.

    “The paranoia of an impending Trump … potential arrest created a really useful case study in understanding what the potential implications are, and I think we’re very lucky that things did not go south,” said Ben Decker, CEO of threat intelligence group Memetica. “Because if more people had had that idea en masse, in a coordinated fashion, I think there’s a universe where we could start to see the online to offline effects.”

    Computer-generated image technology has improved rapidly in recent years, from the photoshopped image of a shark swimming through a flooded highway that has been repeatedly shared during natural disasters to the websites that four years ago began churning out mostly unconvincing fake photos of non-existent people.

    Many of the recent viral AI-generated images were created by a tool called Midjourney, a platform less than a year old that allows users to create images based on short text prompts. On its website, Midjourney describes itself as “a small self-funded team,” with just 11 full-time staff members.

    A cursory glance at a Facebook page popular among Midjourney users reveals AI-generated images of a seemingly inebriated Pope Francis, elderly versions of Elvis and Kurt Cobain, Musk in a robotic Tesla bodysuit and many creepy animal creations. And that’s just from the past few days.

    Midjourney has emerged as a popular tool for users to create AI-generated images.

    The latest version of Midjourney is only available to a select number of paid users, Midjourney CEO David Holz told CNN in an email Friday. Midjourney this week paused access to the free trial of its earlier versions due to “extraordinary demand and trial abuse,” according to a Discord post from Holz, but he told CNN it was unrelated to the viral images. The creator of the Trump arrest images also claimed he was banned from the site.

    The rules page on the company’s Discord site asks users: “Don’t use our tools to make images that could inflame, upset, or cause drama. That includes gore and adult content.”

    “Moderation is hard and we’ll be shipping improved systems soon,” Holz told CNN. “We’re taking lots of feedback and ideas from experts and the community and are trying to be really thoughtful.”

    In most cases, the creators of the recent viral images don’t appear to have been acting malevolently. The Trump arrest images were created by the founder of the online investigative journalism outlet Bellingcat, who clearly labeled them as his fabrications, even if other social media users weren’t as discerning.

    There are efforts by platforms, AI technology companies and industry groups to improve the transparency around when a piece of content is generated by a computer.

    Platforms including Meta’s Facebook and Instagram, Twitter and YouTube have policies restricting or prohibiting the sharing of manipulated media that could mislead users. But as use of AI generation tools grows, even such policies could threaten to undermine user trust. If, for example, a fake image accidentally slipped through a platform’s detection system, “it could give people false confidence,” Ajder said. “They’ll say, ‘there’s a detection system that says it’s real, so it must be real.’”

    Work is also underway on technical solutions that would, for example, watermark an AI-generated image or include a transparent label in an image’s metadata, so anyone viewing it across the internet would know it was created by a computer. The Partnership on AI has developed a set of standard, responsible practices for synthetic media along with partners like ChatGPT-creator OpenAI, TikTok, Adobe, Bumble and the BBC, which includes recommendations such as how to disclose an image was AI-generated and how companies can share data around such images.

    “The idea is that these institutions are all committed to disclosure, consent and transparency,” Leibowicz said.

    A group of tech leaders, including Musk and Apple co-founder Steve Wozniak, this week wrote an open letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” Still, it’s not clear whether any labs will take such a step. And as the technology rapidly improves and becomes accessible beyond a relatively small group of corporations committed to responsible practices, lawmakers may need to get involved, Ajder said.

    “This new age of AI can’t be held in the hands of a few massive companies getting rich off of these tools, we need to democratize this technology,” he said. “At the same time, there are also very real and legitimate concerns of having a radical open approach where you just open source a tool or have very minimal restrictions on its use is going to lead to a massive scaling of harm … and I think legislation will probably play a role in reining in some of the more radically open models.”


  • Microsoft opens up its AI-powered Bing to all users | CNN Business

    CNN —

    Microsoft is rolling out the new AI-powered version of its Bing search engine to anyone who wants to use it.

    Nearly three months after the company debuted a limited preview version of its new Bing, powered by the same technology behind the viral AI chatbot ChatGPT, Microsoft is opening it up to all users without a waitlist – as long as they’re signed into the search engine via Microsoft’s Edge browser.

    The move highlights Microsoft’s commitment to move forward with the product even as the AI technology behind it has sparked concerns around inaccuracies and tone. In some cases, people who baited the new Bing were subject to some emotionally reactive and aggressive responses.

    “We’re getting better at speed, we’re getting better at accuracy … but we are on a never-ending quest to make things better and better,” Yusuf Mehdi, a VP at Microsoft overseeing its AI initiatives, told CNN on Wednesday.

    Bing now has more than 100 million daily active users, a significant uptick in the past few months, according to Mehdi. Google, which has long dominated the market, is also adding similar AI features to its search engine.

    In February, Microsoft showed off how its revamped search engine could write summaries of search results, chat with users to answer additional questions about a query and write emails or other compositions based on the results.

    At a press event in New York City on Wednesday, the company shared an early look at some updates, including the ability to ask questions with pictures, access chat history so the chatbot remembers its rapport with users, and export responses to Microsoft Word. Users can also personalize the tone and style of the chatbot’s responses, ranging from a lengthier, creative reply to something shorter and to the point.

    The wave of attention in recent months around ChatGPT, developed by OpenAI with financial backing from Microsoft, helped renew an arms race among tech companies to deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups are also developing AI writing assistants and image generators.

    Beyond adding AI features to search, Microsoft has said it plans to bring ChatGPT technology to its core productivity tools, including Word, Excel and Outlook, with the potential to change the way we work. The decision to add generative AI features to Bing could be particularly risky, however, given how much people rely on search engines for accurate and reliable information.

    Microsoft’s moves also come amid heightened scrutiny on the rapid pace of advancement in AI technology. In March, some of the biggest names in tech, including Elon Musk and Apple co-founder Steve Wozniak, called for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Mehdi said he doesn’t believe the AI industry is moving too fast and suggested the calls for a pause aren’t particularly helpful.

    “Some people think we should pause development for six months but I’m not sure that fixes anything or improves or moves things along,” he said. “But I understand where it’s coming from concern wise.”

    He added: “The only way to really build this technology well is to do it out in the open in the public so we can have conversations about it.”


  • Everything you need to know about AI but were too afraid to ask | CNN Business

    CNN —

    Business executives keep talking about it. Teachers are struggling with what to do about it. And artists like Drake seem angry about it.

    Love it or hate it, everyone is paying attention to artificial intelligence right now. Almost overnight, a new crop of AI tools has found its way into products used by billions of people, changing the way we work, shop, create and communicate with each other.

    AI advocates tout the technology’s potential to supercharge our productivity, creating a new era of better jobs, better education and better treatments for diseases. AI skeptics have raised concerns about the technology’s potential to disrupt jobs, mislead people and possibly bring about the end of humanity as we know it. Confusingly, some execs in Silicon Valley seem to hold both sets of views at once.

    What’s clear, however, is that AI is not going away, but it is changing very fast. Here’s everything you need to know to keep up.

    In the public consciousness, “artificial intelligence” may conjure up images of murderous machines eager to overtake humans, and capable of doing so. But in the tech industry, it’s a broad term that refers to different tools that are trained to perform a wide range of complex tasks that might previously have required some input from an actual person.

    If you use the internet, then you almost certainly use services that rely on AI to sort data, filter content and make suggestions, among other tasks.

    It’s the technology that allows Netflix to recommend movies and that helps remove spam, hate speech and other inappropriate content from your social media feeds. It helps power everything from autocorrect features and Google Translate to facial recognition services, the last of which uses AI that, in Microsoft’s words, “mimics a human capability to recognize human faces.”

    AI has also proven successful at solving a wide range of real-world problems, such as adjusting traffic signals in real time to manage congestion or helping medical professionals analyze images to make a diagnosis. AI is also central to developing self-driving cars, which process tremendous amounts of visual data to understand their surroundings.

    The short answer: ChatGPT.

    For years, AI has largely operated in the background of services we use every day. That changed following the November launch of ChatGPT, a viral chatbot that put the power of AI front and center.

    People have already used ChatGPT, a tool created by OpenAI, to draft lawsuits, write song lyrics and create research paper abstracts so good they’ve even fooled some scientists. The tool has even passed standardized exams. And ChatGPT has sparked an intense competition among tech companies to develop and deploy similar tools.

    Microsoft and Google have each introduced features powered by generative AI, the technology underpinning ChatGPT, into their most widely used productivity tools. Meta, Amazon and Alibaba have said they’re working on generative AI tools, too. And numerous other businesses also want in on the action.

    It’s rare to see a cutting-edge technology become so ubiquitous almost overnight. Now businesses, educators and lawmakers are all racing to adapt.

    Generative AI enables tools to create written work, images and even audio in response to prompts from users.

    To get those responses, several Big Tech companies have developed their own large language models trained on vast amounts of online data. The scope and purpose of these data sets can vary. For example, the version of ChatGPT that went public last year was only trained on data up until 2021 (it’s now more up to date).

    These models work through a method called deep learning, which learns patterns and relationships between words, so it can make predictive responses and generate relevant outputs to user prompts.

    As impressive as some generative AI services may seem, they essentially just do pattern matching. These tools can mimic the writing of others or make predictions about what words might be relevant in their responses based on all the data they’ve previously been trained on.
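    The predict-the-next-word idea described above can be sketched with a toy bigram model. This is a hypothetical illustration only: real large language models use deep neural networks trained on vast corpora, not simple word counts.

    ```python
    from collections import Counter, defaultdict

    # Toy illustration of "pattern matching": count which word follows which
    # in a tiny training corpus, then predict the most frequent follower.
    corpus = "the cat sat on the mat and the cat slept".split()

    follower_counts = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follower_counts[current_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the corpus."""
        followers = follower_counts.get(word)
        return followers.most_common(1)[0][0] if followers else None

    print(predict_next("the"))  # "cat" (follows "the" twice, vs. "mat" once)
    ```

    Scaled up by many orders of magnitude, with neural networks in place of counts, this is the intuition behind how chatbots generate relevant-sounding text from patterns in their training data.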

    AGI, on the other hand, promises something more ambitious — and scary.

    AGI — short for artificial general intelligence — refers to technology that can perform intelligent tasks such as learning, reasoning and adapting to new situations in the way that humans do. OpenAI CEO Sam Altman has teased the possibility of a superintelligent AGI that could go on to change the world or perhaps backfire and end humanity.

    For the moment, however, AGI remains purely a hypothetical, so don’t worry too much about it.

    Anytime there’s an excess of buzz around a technology, it’s good to be skeptical — and there is certainly a lot of that here. Investor fascination with AI has helped push Wall Street back into a bull market, despite lingering economic uncertainty.

    Not all AI tools are equally useful and many companies will certainly tout AI features and strategies simply to tap into the current hype cycle. But even in just the past six months, AI has already shown potential to change how people do numerous everyday tasks.

    One of the biggest selling points around AI chatbots, for example, is their ability to make people more productive. Earlier this year, some real estate agents told CNN that ChatGPT saved them hours of work not only by writing listings for homes for sale but also by looking up the permitted uses for certain land and calculating what mortgage payments or the return on investment might be for a client, tasks that typically involve formulas and mortgage calculators.

    Artificial intelligence is also much broader than ChatGPT and other generative AI tools. Even if you think AI chatbots are annoying or might be a fad, the underlying technology will continue to power meaningful advances in products and services for years to come.

    The fear is AI will eliminate millions of jobs. The hope is it will help improve how millions do their jobs. The current reality is somewhere in between.

    Companies will likely need new workers to help them implement and manage AI tools. Employment of data analysts and scientists, machine learning specialists and cybersecurity experts is forecast to grow 30% on average by 2027, according to one recent estimate from the World Economic Forum.

    But the proliferation of AI will also likely put many roles at risk eventually. There could be 26 million fewer record-keeping and administrative jobs by 2027, the WEF predicted. Data entry clerks and executive secretaries are expected to see the steepest losses.

    For now, there are clearly limits to how well AI can do the job of a human on its own. When CNET, a media outlet, experimented with using AI to write articles, it came under scrutiny for publishing pieces with factual errors. Likewise, a lawyer made headlines in May for citing false court cases, provided to him by ChatGPT, to a judge. In an affidavit, the lawyer said he had never used ChatGPT as a legal research tool before and “was unaware of the possibility that its content could be false.”

    Alphabet CEO Sundar Pichai, left, and OpenAI CEO Sam Altman arrive at the White House for a meeting with Vice President Kamala Harris on artificial intelligence, Thursday, May 4, 2023, in Washington.

    Top AI executives have warned that AI could potentially bring about human extinction. But these same executives are also racing to deploy the technology into their products.

    Some experts say that focusing on far-off doomsday scenarios may distract from the more immediate harms that AI can cause, such as spreading misinformation, perpetuating biases that exist in training data, and enabling discrimination.

    For example, generative AI could be used to create deepfakes to spread propaganda during an election or enable a frightening new era of scams. Some AI models have also been criticized for what the industry calls “hallucinations,” or making up information.

    Even before the rise of ChatGPT, there were concerns about AI acting as a gatekeeper that can determine who does and does not move forward in a hiring process, for example. AI-powered facial recognition systems have also resulted in some wrongful arrests, and research has shown these systems are drastically more prone to error when trying to match the faces of darker-skinned people.

    The more AI tools are incorporated into core parts of society, the more potential there is for unintended consequences.

    Regulators in the United States and Europe are pushing for legislation to help put guardrails in place for AI, which could ultimately impact how the technology develops. But it’s unclear if lawmakers can keep pace with the rapid advances in AI.

    Experts believe that in the months ahead, generative AI will go on to create even more realistic images, videos and audio that could further disrupt media, entertainment, tech and other industries. The technology will likely become increasingly conversational and personalized.

    In March, OpenAI unveiled GPT-4, the next-generation version of the technology that powers ChatGPT. According to the company and early tests, GPT-4 is able to provide more detailed and accurate written responses, pass academic tests with high marks and build a working website from a hand-drawn sketch. (Altman has previously said OpenAI is not yet training GPT-5.)

    AI will almost certainly be infused into many more products and services in the coming months. That means we’ll all have to learn how to live with it.

    As ChatGPT put it in response to a prompt from CNN, “AI has the potential to transform our lives … but it’s crucial for companies and individuals to be mindful of the accompanying risks and responsibly address concerns.”


  • Italy blocks ChatGPT over privacy concerns | CNN Business



    London
    CNN
     — 

    Regulators in Italy issued a temporary ban on ChatGPT Friday, effective immediately, due to privacy concerns and said they had opened an investigation into how OpenAI, the US company behind the popular chatbot, uses data.

    Italy’s data protection agency said users lacked information about the collection of their data and that a breach at ChatGPT had been reported on March 20.

    “There appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” the agency said.

    The Italian regulator also expressed concerns over the lack of age verification for ChatGPT users. It argued that this “exposes children to receiving responses that are absolutely inappropriate to their age and awareness.” The platform is supposed to be for users older than 13, it noted.

    The data protection agency said OpenAI would be barred from processing the data of Italian users until it “respects the privacy regulation.”

    OpenAI has been given 20 days to communicate the measures it will take to comply with Italy’s data rules. Otherwise, it could face a penalty of up to €20 million ($21.8 million), or up to 4% of its annual global turnover.

    Since its public release four months ago, ChatGPT has become a global phenomenon, amassing millions of users impressed with its ability to craft convincing written content, including academic essays, business plans and short stories.

    But concerns have also emerged about its rapid spread and what large-scale uptake of such tools could mean for society, putting pressure on regulators around the world to act.

    The European Union is finalizing rules on the use of artificial intelligence in the bloc. In the meantime, EU companies must comply with the General Data Protection Regulation, or GDPR, as well as the Digital Services Act and Digital Markets Act, which apply to tech platforms.

    Meanwhile, so-called “generative AI” tools available to the public are proliferating.

    Earlier this month, OpenAI released GPT-4, a new version of the technology underpinning ChatGPT that is even more powerful. The company said the updated technology passed a simulated law school bar exam with a score around the top 10% of test takers; by contrast, the prior version, GPT-3.5, scored around the bottom 10%.

    This week, some of the biggest names in tech, including Elon Musk, called for AI labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    — Julia Horowitz contributed reporting.


  • TV and film writers are fighting to save their jobs from AI. They won’t be the last | CNN Business




    CNN
     — 

    By any standard, John August is a successful screenwriter. He’s written such films as “Big Fish,” “Charlie’s Angels” and “Go.” But even he is concerned about the impact AI could have on his work.

    A powerful new crop of AI tools, trained on vast troves of data online, can now generate essays, song lyrics and other written work in response to user prompts. While there are clearly limits for how well AI tools can produce compelling creative stories, these tools are only getting more advanced, putting writers like August on guard.

    “Screenwriters are concerned about our scripts being the feeder material that is going into these systems to generate other scripts, treatments, and write story ideas,” August, a Writers Guild of America (WGA) committee member, told CNN. “The work that we do can’t be replaced by these systems.”

    August is one of the more than 11,000 members of the WGA who went on strike Tuesday morning, bringing an immediate halt to the production of some television shows and possibly delaying the start of new seasons of others later this year.

    WGA is demanding a host of changes from the Alliance of Motion Picture and Television Producers (AMPTP), from an increase in pay to receiving clear guidelines around working with streaming services. But as part of their demands, the WGA is also fighting to protect their livelihoods from AI.

    In a proposal published on WGA’s website this week, the labor union said AI should be regulated so it “can’t write or rewrite literary material, can’t be used as source material” and that writers’ work “can’t be used to train AI.”

    August said the AI demand “was one of the last things” added to the WGA list, but that it’s “clearly an issue writers are concerned about” and need to address now rather than when their contract is up again in three years. By then, he said, “it may be too late.”

    WGA said the proposal was rejected by AMPTP, which countered by offering annual meetings to discuss advancements in the technology. August said AMPTP’s response shows they want to keep their options open.

    In a document sent to CNN responding to some of WGA’s asks, AMPTP said it values the work of creatives and “the best stories are original, insightful and often come from people’s own experiences.”

    “AI raises hard, important creative and legal questions for everyone,” it wrote. “Writers want to be able to use this technology as part of their creative process, without changing how credits are determined, which is complicated given AI material can’t be copyrighted. So it’s something that requires a lot more discussion, which we’ve committed to doing.”

    It added that the current WGA agreement defines a “writer” as a “person,” and said “AI-generated material would not be eligible for writing credit.”

    The writers’ attempt at bargaining over AI is perhaps the most high-profile labor battle yet to address concerns about the cutting-edge technology that has captivated the world’s attention in the six months since the public release of ChatGPT.

    Goldman Sachs economists estimate that as many as 300 million full-time jobs globally could be automated in some way by the newest wave of AI. White-collar workers, including those in administrative and legal roles, are expected to be the most affected. And the impact may hit sooner than some think: IBM’s CEO recently suggested AI could eliminate the need for thousands of jobs at his company alone in the next five years.

    David Gunkel, a professor in the department of communications at Northern Illinois University who tracks AI in media and entertainment, said screenwriters want clear guidelines around AI because “they can see the writing on the wall.”

    “AI is already displacing human labor in many other areas of content creation—copywriting, journalism, SEO writing, and so on,” he said. “The WGA is simply trying to get out-in-front of and to protect their members against … ‘technological unemployment.’”

    While film and TV writers in Hollywood may currently be leading the charge, professionals in other industries will almost certainly be paying attention.

    “There’s certainly other industries that need to be paying close attention to this space,” said Rowan Curran, an analyst at Forrester Research who focuses on AI. He noted that digital artists, musicians, engineers, real estate professionals and customer service workers will all feel the impact of generative AI.

    “Watch this #WGA strike carefully,” Justine Bateman, a writer, director and former actress, wrote in a tweet shortly after the strike kicked off. “Understand that our fight is the same fight that is coming to your professional sector next: it’s the devaluing of human effort, skill, and talent in favor of automation and profits.”

    AI has had a place in Hollywood for years. In Marvel’s 2018 film “Avengers: Infinity War,” the face of Thanos – a character played by actor Josh Brolin – was created in part with the technology.

    Crowd and battle scenes in films including the “Lord of the Rings” trilogy and “The Meg” have utilized AI, and the most recent “Indiana Jones” film used it to make Harrison Ford’s character appear younger. It has also been used for color correction, for finding footage more quickly during post-production and for improvements such as removing scratches and dust from footage.

    But AI in screenwriting is in its infancy. In March, a “South Park” episode called “Deep Learning” was co-written by ChatGPT, and the tool featured prominently in the plot (the characters use ChatGPT to talk to girls and write school papers).

    August said writers are largely willing to play ball with tools, as long as they’re used as launching pads or for research and writers are still credited and utilized throughout the production process.

    “Screenwriters are not luddites, and we’ve been quick to use new technologies to help us tell our stories,” August said. “We went from typewriters to word processors happily and it increased productivity. … But we don’t need a magical typewriter that types scripts all by itself.”

    Because large language models are trained on text that humans have written before, and find patterns in words and sentences to create responses to prompts, concerns around intellectual property exist, too. “It is entirely possible for a [chatbot] to generate a script in the style of a particular kind of filmmaker or scriptwriter without prior consent of the original artist or the Hollywood studio that holds the IP for that material,” Gunkel said.

    For example, one could prompt ChatGPT to generate a zombie apocalypse drama in the style of David Mamet. “Who should get credited for that?” August said. “What happens if we allow a producer or studio executive to come up with a treatment or pitch or something that looks like a screenplay that no writer has touched?”

    For now, the legal landscape remains very much unsettled on the matter, with regulations lagging behind the rapid pace of AI development. In early April, the Biden administration said it is seeking public comments on how to hold artificial intelligence systems like ChatGPT accountable.

    “We can’t protect studios from their own bad choices,” August said. “We can only protect writers from abuses.”

    The strike, and the demands around AI specifically, come at a time when both the writers and the studios are feeling financial pain.

    Many of the businesses represented by AMPTP have seen drops in their stock price, prompting deep cost cutting, including layoffs. The need to manage costs, combined with addressing the fallout from the strike, might only make the companies feel more pressure to turn to AI for scriptwriting.

    “In the short term, this could be an effective way to circumvent the WGA strike, mainly because [large language models], which are considered property and not personnel, can be employed for this task without violating the picket line,” Gunkel said. Such an “experiment” could also show production studios whether it’s possible “to get by with less humans involved,” he said.

    But Joshua Glick, a visiting professor of film and electronic arts at Bard College, believes such a move would be ill-advised.

    “It would be a pretty aggressive and antagonistic move for studios to move forward with AI-generated scripts in terms of getting writers to come to the negotiating table because AI is such a crucial sticking point in the negotiations,” said Glick, who also co-created Deepfake: Unstable Evidence on Screen, an exhibition at the Museum of the Moving Image in New York.

    “At the same time, I think the result of those scripts would be pretty mediocre at best,” he said.

    However the studios react, the issue is unlikely to go away in Hollywood. Film and TV actors’ contracts are up in June, and many are worried about how their faces, bodies and voices will be impacted by AI, August said.

    “As writers, we don’t want tools to replace us but actors have the same concerns with AI, as do directors, editors and everyone else who does creative work in this industry,” he added.


  • Chinese police detain man for allegedly using ChatGPT to spread rumors online | CNN Business



    Hong Kong
    CNN
     — 

    Police in China have detained a man they say used ChatGPT to create fake news and spread it online, in what state media has called the country’s first criminal case related to the AI chatbot.

    According to a statement from police in the northwest province of Gansu, the suspect allegedly used ChatGPT to generate a bogus report about a train crash, which he then posted online for profit. The article received about 15,000 views, the police said in Sunday’s statement.

    ChatGPT, developed by Microsoft (MSFT)-backed OpenAI, is banned in China, though internet users can use virtual private networks (VPNs) to access it.

    Train crashes have been a sensitive issue in China since 2011, when authorities faced pressure to explain why state media had failed to provide timely updates on a bullet train collision in the city of Wenzhou that resulted in 40 deaths.

    Gansu authorities said the suspect, surnamed Hong, was questioned in the city of Dongguan in southern Guangdong province on May 5.

    “Hong used modern technology to fabricate false information, spreading it on the internet, which was widely disseminated,” the Gansu police said in the statement.

    “His behavior amounted to picking quarrels and provoking trouble,” they added, explaining the offense that Hong was accused of committing.

    Police said the arrest was the first in Gansu since China’s Cyberspace Administration enacted new regulations in January to rein in the use of deepfakes. State broadcaster CGTN says it was the country’s first arrest of a person accused of using ChatGPT to fabricate and spread fake news.

    Formally known as deep synthesis, deepfakes are highly realistic textual and visual content generated by artificial intelligence.

    The new legislation bars users from generating deepfake content on topics already prohibited by existing laws on China’s heavily censored internet. It also outlines takedown procedures for content considered false or harmful.

    The arrest also came amid a 100-day campaign launched by the internet branch of the Ministry of Public Security in March to crack down on the spread of internet rumors.

    Since the beginning of the year, Chinese internet giants such as Baidu (BIDU) and Alibaba (BABA) have sought to catch up with OpenAI, launching their own versions of the ChatGPT service.

    Baidu unveiled “Wenxin Yiyan,” or “ERNIE Bot,” in March. Two months later, Alibaba launched “Tongyi Qianwen,” which roughly translates as “seeking truth by asking a thousand questions.”

    In draft guidelines issued last month to solicit public feedback, China’s cyberspace regulator said generative AI services would be required to undergo security reviews before they can operate.

    Service providers will also be required to verify users’ real identities and provide details about the scale and type of data they use, their basic algorithms and other technical information.


  • OpenAI, maker of ChatGPT, hit with proposed class action lawsuit alleging it stole people’s data | CNN Business




    CNN
     — 

    OpenAI, the company behind the viral ChatGPT tool, has been hit with a lawsuit alleging the company stole and misappropriated vast swaths of peoples’ data from the internet to train its AI tools.

    The proposed class action lawsuit, filed Wednesday in a California federal court, claims that OpenAI secretly scraped “massive amounts of personal data from the internet.” The nearly 160-page complaint alleges that this personal data, including “essentially every piece of data exchanged on the internet it could take,” was seized by the company without notice, consent or “just compensation.”

    Moreover, this data scraping occurred at an “unprecedented scale,” the suit claims.

    OpenAI did not immediately respond to CNN’s request for comment Wednesday. Microsoft, a major investor in OpenAI, was also named as a defendant in the suit and did not immediately respond to a request for comment.

    “By collecting previously obscure personal data of millions and misappropriating it to develop a volatile, untested technology, OpenAI put everyone in a zone of risk that is incalculable – but unacceptable by any measure of responsible data protection and use,” Timothy K. Giordano, a partner at Clarkson, the law firm behind the suit, said in a statement to CNN Wednesday.

    The complaint also claims that OpenAI products “use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”

    The lawsuit seeks injunctive relief in the form of a temporary freeze on further commercial use of OpenAI’s products. It also seeks payments of “data dividends” as financial compensation to people whose information was used to develop and train OpenAI’s tools.

    OpenAI publicly launched ChatGPT late last year, and the tool immediately went viral for its ability to generate compelling, human-sounding responses to user prompts. The success of ChatGPT spurred an apparent AI arms race in the tech world, as companies big and small are now racing to develop and deploy AI tools into as many products as possible.


  • OpenAI CEO Sam Altman to testify before Congress | CNN Business



    Washington
    CNN
     — 

    OpenAI CEO Sam Altman will testify before Congress next Tuesday as lawmakers increasingly scrutinize the risks and benefits of artificial intelligence, according to a Senate Judiciary subcommittee.

    During Tuesday’s hearing, lawmakers will question Altman for the first time since OpenAI’s chatbot, ChatGPT, took the world by storm late last year.

    The groundbreaking generative AI tool has led to a wave of new investment in AI, prompting a scramble among US policymakers who have called for guardrails and regulation amid fears of AI’s misuse.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”


  • ‘Serious concerns’: Top companies raise alarm over Europe’s proposed AI law | CNN Business



    Dortmund, Germany
    CNN
     — 

    Dozens of Europe’s top business leaders have pushed back on the European Union’s proposed legislation on artificial intelligence, warning that it could hurt the bloc’s competitiveness and spur an exodus of investment.

    In an open letter sent to EU lawmakers Friday, C-suite executives from companies including Siemens (SIEGY), Carrefour (CRERF), Renault (RNLSY) and Airbus (EADSF) raised “serious concerns” about the EU AI Act, the world’s first comprehensive AI rules.

    Other prominent signatories include big names in tech, such as Yann LeCun, chief AI scientist of Meta (FB), and Hermann Hauser, founder of British chipmaker ARM.

    “In our assessment, the draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” the group of more than 160 executives said in the letter.

    They argue that the draft rules go too far, especially in regulating generative AI and foundation models, the technology behind popular platforms such as ChatGPT.

    Since the craze over generative AI began this year, technologists have warned of the potential dark side of systems that allow people to use machines to write college essays, take academic tests and build websites. Last month, hundreds of top experts warned about the risk of human extinction from AI, saying mitigating that possibility “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The EU proposal applies a broad brush to such software “regardless of [its] use cases,” and could push innovative companies and investors out of Europe because they would face high compliance costs and “disproportionate liability risks,” according to the executives.

    “Such regulation could lead to highly innovative companies moving their activities abroad” and investors withdrawing their capital from European AI, the group wrote.

    “The result would be a critical productivity gap between the two sides of the Atlantic.”

    The executives are calling for policymakers to revise the terms of the bill, which was agreed upon by European Parliament lawmakers earlier this month and is now being negotiated with EU member states.

    “In a context where we know very little about the real risks, the business model, or the applications of generative AI, European law should confine itself to stating broad principles in a risk-based approach,” the group wrote.

    The business leaders called for a regulatory board of experts to oversee these principles and ensure they can be continuously adapted to changes in the fast-moving technology.

    The group also urged lawmakers to work with their US counterparts, noting that regulatory proposals had also been made in the United States. EU lawmakers should try to “create a legally binding level playing field,” the executives wrote.

    If such action isn’t taken and Europe is constrained by regulatory demands, it could hurt the region’s international standing, the group suggested.

    “Like the invention of the Internet or the breakthrough of silicon chips, generative AI is the kind of technology that will be decisive for the performance capacity and therefore the significance of different regions,” it said.

Tech experts have increasingly called for greater regulation of AI as it becomes more widely used. In recent months, the United States and China have also laid out plans to regulate the technology. Sam Altman, CEO of ChatGPT maker OpenAI, has used high-profile trips around the world in recent weeks to call for coordinated international regulation of AI.

    The EU rules are the world’s “first ever attempt to enact” legally binding rules that apply to different areas of AI, according to the European Parliament.

    Negotiators of the AI Act hope to reach an agreement before the end of the year, and once the final rules are adopted by the European Parliament and EU member states, the act will become law.

    As they stand now, the rules would ban AI systems deemed to be harmful, including real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those in China.

    The Act also outlines transparency requirements for AI systems. For instance, systems such as ChatGPT would have to disclose that their content was AI-generated and provide safeguards against the generation of illegal content.

    Engaging in prohibited AI practices could lead to hefty fines: up to €40 million ($43 million) or an amount equal to up to 7% of a company’s worldwide annual turnover, whichever is higher.
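The "whichever is higher" rule is simple arithmetic. A minimal sketch using the figures cited above (the function name is illustrative, not anything from the Act itself):

```python
def max_eu_ai_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited AI practices under the
    draft rules: EUR 40 million or 7% of worldwide annual turnover,
    whichever is higher."""
    return max(40_000_000, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 2 billion in turnover: 7% (EUR 140M) exceeds the EUR 40M floor.
print(max_eu_ai_fine(2_000_000_000))  # 140000000.0
```

For smaller firms the flat €40 million floor dominates, which is why the "proportionate" language noted below matters for startups.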

    But penalties would be “proportionate” and consider the market position of small-scale providers, suggesting there could be some leniency for startups.

Not everyone has pushed back on the legislation so far. Earlier this month, Digital Europe, a trade association that counts SAP and Ericsson among its members, called the rules “a text we can work with.”

    “However, there remain some areas which can be improved to ensure Europe becomes a competitive hub for AI innovation,” the group said in a statement.

Dragos Tudorache, a Romanian member of the European Parliament who led the bill’s drafting, said he was convinced that those who signed the new letter “have not read the text but have rather reacted on the stimulus of a few.”

    “The only concrete suggestions made are in fact what the [draft] text now contains: an industry-led process for defining standards, governance with industry at the table, and a light regulatory regime that asks for transparency. Nothing else,” he said in a statement.

    “It is a pity that the aggressive lobby of a few is capturing other serious companies in the net, which unfortunately undermines the undeniable lead that Europe has taken.”

Brando Benifei, an Italian member of the European Parliament who also led the drafting of the legislation, told CNN “we will listen to all concerns and stakeholders when dealing with AI regulation, but we have a firm commitment to deliver clear and enforceable rules.”

    “Our work could positively affect the global conversation and direction when dealing with artificial intelligence and its impact on fundamental rights, without hindering the necessary pursuit of innovation,” he said.

    [ad_2]

    Source link

  • Alibaba unveils its ChatGPT-style service | CNN Business



Hong Kong (CNN) —

    Alibaba showed off its answer to the ChatGPT craze on Tuesday, demonstrating new software that it plans to eventually roll out across all its platforms.

    The Chinese tech giant unveiled Tongyi Qianwen, a large language model that will be embedded in its Tmall Genie smart speakers and workplace messaging platform DingTalk. It was trained on vast troves of data in order to generate compelling responses to users’ prompts.
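At its core, being “trained on vast troves of data” means the model learns which token tends to follow which. A toy word-level counter — purely illustrative; real models like Tongyi Qianwen use neural networks with billions of parameters, not frequency tables — captures the idea:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which word most often follows each word -- a toy stand-in
    for the next-token statistics a real large language model learns
    from vastly more data."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation seen in training."""
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # cat
```

Generating a “compelling response” is then repeated next-token prediction, with far richer context than a single preceding word.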

The technology will initially be integrated into those two products and eventually added to all Alibaba applications, from e-commerce to mapping services, according to the company.

    Group CEO Daniel Zhang, who also oversees Alibaba’s cloud division, presented the new AI-powered service at a conference in Beijing, where the company demonstrated how it will allow users to transcribe meeting notes, craft business pitches and tell children’s stories.

    The company has opened up Tongyi Qianwen — which roughly translates as “seeking truth by asking a thousand questions” — to enterprise customers for testing before making it available to more users.

    “We are at a technological watershed moment, driven by generative AI and cloud computing,” Zhang said.

    Generative AI refers to the technology that underpins platforms like ChatGPT. The service has exploded in popularity in recent months, and Chinese tech companies have been racing to release their own versions, prompting some critics to predict that the trend will add fuel to an existing US-China rivalry in emerging technologies.

    Alibaba, which has a large cloud computing business, will also allow clients of that division to use the new technology to build their own customized large language models, the firm said in a statement.

The debut comes after that of Baidu, which launched its own ChatGPT-style service last month. During a similar presentation, Baidu showed how its chatbot, called ERNIE, could generate a company newsletter, come up with a corporate slogan and solve a math riddle.

    On Monday, SenseTime, one of China’s most prominent AI companies, launched a suite of new services, including a chatbot called SenseChat.

    China will be setting rules to govern the operation of such services. In draft guidelines issued Tuesday to solicit public feedback, the country’s cyberspace regulator said generative AI services would be required to undergo security reviews before they can operate.

    Service providers will also be required to verify users’ real identities. In addition, they must provide information about the scale and type of data they use, their basic algorithms and other technical information.

    Alibaba’s shares in Hong Kong ticked up 1.6% following its demonstration.

    The company announced last month that it planned to split its business into six units. Most of those units, including its cloud services business that oversees AI projects, will be authorized to raise capital and pursue public listings.

    — Juliana Liu contributed to this report.


  • The man behind ChatGPT is about to have his moment on Capitol Hill | CNN Business



New York (CNN) —

    For a few months in 2017, there were rumors that Sam Altman was planning to run for governor of California. Instead, he kept his day job as one of Silicon Valley’s most influential investors and entrepreneurs.

    But now, Altman is about to make a different kind of political debut.

    Altman, the CEO and co-founder of OpenAI, the artificial intelligence company behind viral chatbot ChatGPT and image generator Dall-E, is set to testify before Congress on Tuesday. His appearance is part of a Senate subcommittee hearing on the risks artificial intelligence poses for society, and what safeguards are needed for the technology.

    House lawmakers on both sides of the aisle are also expected to hold a dinner with Altman on Monday night, according to multiple reports. Dozens of lawmakers are said to be planning to attend, with one Republican lawmaker describing it as part of the process for Congress to assess “the extraordinary potential and unprecedented threat that artificial intelligence presents to humanity.”

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    The hearing and meetings come as ChatGPT has sparked a new arms race over AI. A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts. This week’s hearing may only cement his stature as a central player in AI’s rapid growth – and also add to scrutiny of him and his company.

    Those who know Altman have described him as a brilliant thinker, someone who makes prescient bets and has even been called “a startup Yoda.” In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    “If anyone knows where this is going, it’s Sam,” Brian Chesky, the CEO of Airbnb, wrote in a post about Altman for the latter’s inclusion this year on Time’s list of the 100 most influential people. “But Sam also knows that he doesn’t have all the answers. He often says, ‘What do you think? Maybe I’m wrong?’ Thank God someone with so much power has so much humility.”

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    OpenAI declined to make anyone available for an interview for this story.

    The success of ChatGPT may have brought Altman greater public attention, but he has been a well-known figure in Silicon Valley for years.

    Prior to cofounding OpenAI with Musk in 2015, Altman, a Missouri native, studied computer science at Stanford University, only to drop out to launch Loopt, an app that helped users share their locations with friends and get coupons for nearby businesses.

    In 2005, Loopt was part of the first batch of companies at Y Combinator, a prestigious tech accelerator. Paul Graham, who co-founded Y Combinator, later described Altman as “a very unusual guy.”

    “Within about three minutes of meeting him, I remember thinking ‘Ah, so this is what Bill Gates must have been like when he was 19,’” Graham wrote in a post in 2006.

Loopt was acquired in 2012 for about $43 million. Two years later, Altman took over from Graham as president of Y Combinator, a position that connected him with numerous powerful figures in the tech industry. He remained at the helm of the accelerator until 2019.

    Margaret O’Mara, a tech historian and professor at the University of Washington, told CNN that Altman “has long been admired as a thoughtful, significant guy and in the remarkably small number of powerful people who are kind of at the top of tech and have a lot of sway.”

    During the Trump administration, Altman gained new attention as a vocal critic of the president. It was against that backdrop that he was rumored to be considering a run for California governor.

Rather than running, however, Altman looked to back candidates who aligned with his values, which include a lower cost of living, clean energy and redirecting 10% of the defense budget to research and development of future technology.

    Altman continues to push for some of these goals through his work in the private sector. He invested in Helion, a fusion research company that inked a deal with Microsoft last week to sell clean energy to the tech giant by 2028.

    Altman has also been a proponent of the idea of a universal basic income and has suggested that AI could one day help fulfill that goal by generating so much wealth it could be redistributed back to the public.

    As Graham told The New Yorker about Altman in 2016, “I think his goal is to make the whole future.”

    When launching OpenAI, Musk and Altman’s original mission was to get ahead of the fear that AI could harm people and society.

    “We discussed what is the best thing we can do to ensure the future is good?” Musk told the New York Times about a conversation with Altman and others before launching the company. “We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing A.I. in a way that is safe and is beneficial to humanity.”

    In an interview at the launch of OpenAI, Altman explained the company as his way of trying to steer the path of AI technology. “I sleep better knowing I can have some influence now,” he said.

    If there’s one thing AI enthusiasts and critics can agree on right now, it may be that Altman clearly has succeeded in having some influence over the rapidly evolving technology.

Less than six months after the release of ChatGPT, it has become a household name, almost synonymous with AI itself. CEOs are using it to draft emails. Realtors are using it to write listings and draft legal documents. The tool has passed exams from law and business schools – and been used to help some students cheat. And OpenAI recently released a more powerful version of the technology underpinning ChatGPT.

    Tech giants like Google and Facebook are now racing to catch up. Similar generative AI technology is quickly finding its way into productivity and search tools used by billions of people.

    A future that once seemed very far off now feels right around the corner, whether society is ready for it or not. Altman himself has professed not to be sure about how it will turn out.

    O’Mara said she believes Altman fits into “the techno-optimist school of thought that has been dominant in the Valley for a very long time,” which she describes as “the idea that we can devise technology that can indeed make the world a better place.”

    While Altman’s cautious remarks about AI may sound at odds with that way of thinking, O’Mara argues it may be an “extension” of it. In essence, she said, it’s related to “the idea that technology is transformative and can be transformative in a positive way but also has so much capacity to do so much that it actually could be dangerous.”

    And if AI should somehow help bring about the end of society as we know it, Altman may be more prepared than most to adapt.

    “I prep for survival,” he said in a 2016 profile of him in the New Yorker, noting several possible disaster scenarios, including “A.I. that attacks us.”

    “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”


  • Who says romance is dead? Couples are using ChatGPT to write their wedding vows | CNN Business




(CNN) —

    When Elyse Nguyen was nearing her wedding date in February and still hadn’t started writing her vows, a friend suggested she try a new source of inspiration: ChatGPT.

    The AI chatbot, which was released publicly in late November, can generate compelling written responses to user prompts and offers the promise of helping people get over writer’s block, whether it be for an essay, an email, or an emotional speech.

    “At first we inputted the prompt as a joke and the output was pretty cheesy with personal references to me and my husband,” said Nguyen, a financial analyst at Qualcomm. “But the essence of what vows should incorporate was there – our promises to each other and structure.”

    She made edits, changed the prompts to add humor and details about her partner’s interests, and added some personal touches. Nguyen ultimately ended up using a good portion of ChatGPT’s suggestions and said her husband was on board with it.

    “It helped alleviate some stress because I had no prior experience with wedding vows nor did I know what should be included,” Nguyen said. “Plus, ChatGPT is a genius with alliteration, analogies and metaphors. Having something like, ‘I promise to be your partner in life with the enthusiasm of a golfer’s first hole in one’ in my back pocket was comical.”

    Nearly five months after ChatGPT went viral and ignited a new AI arms race in Silicon Valley, more couples are looking to it for help with wedding planning, including writing vows and speeches, drafting religious marriage contracts, and setting up websites for the special day.

    Ellen Le recently created some of her wedding website through a new Writer’s Block Assistant tool on online wedding planning service Joy, which was one of the first third-party platforms to incorporate ChatGPT’s technology. (Last month, OpenAI, the company behind ChatGPT, opened up access to the chatbot, paving the way for it to be integrated into numerous apps and services.)

    Le, a product manager at a startup, said she used the feature to draft an “about us” page and write directions from San Francisco to her Napa Valley wedding. The Writer’s Block Assistant tool helps users write vows, best man and maid of honor speeches, thank you cards and wedding website “about us” pages. It also lets users highlight personal stories and select the style or tone before pulling it into a speech.
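Tools like this typically assemble the user’s selections into a structured prompt before calling the model. A minimal sketch, assuming the OpenAI chat API’s role/content message format (the function name and instruction wording are invented for illustration; Joy’s actual implementation is not public):

```python
def build_vow_prompt(partner_name, stories, tone):
    """Turn a user's selections into chat-style messages. The
    role/content structure matches the OpenAI chat API; everything
    else here is hypothetical."""
    details = "; ".join(stories)
    return [
        {"role": "system",
         "content": f"You are a wedding writing assistant. Write in a {tone} tone."},
        {"role": "user",
         "content": f"Draft wedding vows for my partner {partner_name}, "
                    f"working in these personal stories: {details}."},
    ]

messages = build_vow_prompt("Alex", ["how we met hiking", "our first road trip"],
                            "lighthearted")
```

The resulting message list would then be sent to the chat endpoint; the app only has to collect names, stories and a tone from the user.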

    “I started drafting my vows and when I typed in how we met, it produced this very delightful story,” Le said. “Some of it was inaccurate, making up certain details, but it gave me a helping hand and something to react to, rather than just spending 10 hours thinking about how to get started.”

Le said her fiancé, who often uses ChatGPT for work, is considering using AI to help with his vows too.

    Joy co-founder and CEO Vishal Joshi, who studied artificial intelligence and electrical engineering at NIT Rourkela in India, said the company launched Writer’s Block Assistant in March after it conducted an internal study that found most of its users were somewhat overwhelmed with getting started on writing vows and speeches, and wished they had help. He said the company has already seen thousands of submissions since launching the tool.

    “Almost two decades ago, AI enthusiasts like myself and my research peers had only dreamt of mass market adoption we are seeing today, and we know this is just the true beginning,” Joshi said. “Just like smartphones, if applied well, the positive impact of AI on our lives can far outshine the negatives. We’re working on responsibly innovating using AI to advance the wedding and event industry as a whole.”

    Michael Grinn and Kate Gardiner used viral AI tool ChatGPT to write the Ketubah, a Jewish wedding contract, for their June wedding.

    ChatGPT has sparked concerns in recent months about its potential to perpetuate biases, spread misinformation and upend certain livelihoods. Now, as it finds its way into marriage ceremonies, it could raise more nuanced questions about whether people risk losing something by injecting technology into what is supposed to be a deeply personal and, for many, spiritual moment in life.

    Michael Grinn, an anesthesiologist with practices in Miami and New York, was experimenting with ChatGPT when he asked it to produce a traditional Ketubah – a Jewish marriage contract – for his upcoming June wedding.

Grinn and his fiancée Kate Gardiner, the founder and CEO of a public relations firm, then requested it make some language changes around gender equality and intimacy. “At the end, we both looked at each other and were like, we can’t disagree with the result,” he said.

    Editing took about an hour, but it still shaved hours off what otherwise could have been a lengthy process, he said. Still, Grinn plans to write his own vows. “I want them to be less refined and something no one else helped me with.”

    He does, however, plan to use ChatGPT for inspiration for officiating his best man’s wedding. “It mostly comes down to time because I’ve been working so much,” he said, “and this is so efficient.”


  • Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman set to testify before Congress | CNN Business




(CNN) —

    OpenAI CEO Sam Altman is set to testify before a Senate Judiciary subcommittee on Tuesday after the viral success of ChatGPT, his company’s chatbot tool, renewed an arms race over artificial intelligence and sparked concerns from some lawmakers about the risks posed by the technology.

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”

    A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    Montgomery is expected to urge Congress to adopt a “precision regulation” approach for AI based on specific use cases, and to suggest that lawmakers push companies to test how their systems handle bias and other concerns – and disclose those results.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts.

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    – CNN’s Jennifer Korn contributed to this report.


  • Amazon is ‘investing heavily’ in the technology behind ChatGPT | CNN Business




(CNN) —

    Amazon wants investors to know it won’t be left behind in the latest Big Tech arms race over artificial intelligence.

In a letter to shareholders Thursday, Amazon CEO Andy Jassy said the company is “investing heavily” in large language models (LLMs) and generative AI, the same technology that underpins ChatGPT and other similar AI chatbots.

    “We have been working on our own LLMs for a while now, believe it will transform and improve virtually every customer experience, and will continue to invest substantially in these models across all of our consumer, seller, brand, and creator experiences,” Jassy wrote in his letter to shareholders.

The remarks, which were part of Jassy’s second annual letter to shareholders since taking over as CEO, hint at the pressure that many tech companies feel to explain how they can tap into the rapidly evolving marketplace for AI products. Since ChatGPT was released to the public in late November, Google, Facebook and Microsoft have all talked up their growing focus on generative AI technology, which can create compelling essays, stories and visuals in response to user prompts.

    Amazon’s goal, according to Jassy, is to offer less costly machine learning chips so that “small and large companies can afford to train and run their LLMs in production.” Large language models are trained on vast troves of data in order to generate responses to user prompts.

    “Most companies want to use these large language models, but the really good ones take billions of dollars to train and many years, most companies don’t want to go through that,” Jassy said in an interview with CNBC on Thursday morning.

    “What they want to do is they want to work off of a foundational model that’s big and great already, and then have the ability to customize it for their own purposes,” Jassy told CNBC.

    With that in mind, Amazon on Thursday unveiled a new service called Bedrock. It essentially makes foundation models (large models that are pre-trained on vast amounts of data) from AI21 Labs, Anthropic, Stability AI and Amazon accessible to clients via an API, Amazon said in a blog post.
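Amazon says Bedrock exposes these foundation models “via an API.” Since the service had not yet launched publicly, the request shape below is invented for illustration — the field names (`modelId`, `prompt`, `maxTokens`) are assumptions, not Bedrock’s documented format:

```python
import json

def build_invoke_request(model_id, prompt, max_tokens=256):
    """Sketch of a hosted foundation-model 'invoke' call: the caller
    names a pre-trained model and sends a prompt, rather than training
    anything themselves. Field names are hypothetical."""
    return {
        "modelId": model_id,
        "body": json.dumps({"prompt": prompt, "maxTokens": max_tokens}),
    }

req = build_invoke_request("ai21.j2-mid", "Summarize this meeting transcript:")
```

The point of such a service is visible in the shape of the call: the expensive part (the pre-trained model) lives server-side, and the client supplies only a prompt and a few parameters.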

    Jassy told CNBC he thinks Bedrock “will change the game for people.”

    In his letter to shareholders, Jassy also touted AWS’s CodeWhisperer, another AI-powered tool which he said “revolutionizes developer productivity by generating code suggestions in real time.”

    “I could write an entire letter on LLMs and Generative AI as I think they will be that transformative, but I’ll leave that for a future letter,” Jassy wrote. “Let’s just say that LLMs and Generative AI are going to be a big deal for customers, our shareholders, and Amazon.”

    In the letter, Jassy also reflected on leading Amazon through “one of the harder macroeconomic years in recent memory,” as the e-commerce giant cut some 27,000 jobs as part of a major bid to rein in costs in recent months.

    “There were an unusual number of simultaneous challenges this past year,” Jassy said in the letter, before outlining steps Amazon took to rethink certain free shipping options, abandon some of its physical store concepts and significantly reduce overall headcount.

Amazon disclosed in a securities filing Thursday that Jassy’s pay package last year was valued at some $1.3 million, and that the CEO did not receive any new stock awards in 2022. (When Jassy took over as CEO in 2021, he was awarded a pay package composed mostly of stock awards that valued his total compensation at some $212 million.)

    Despite the challenges at Amazon, however, Jassy said in his letter that he finds himself “optimistic and energized by what lies ahead.” Jassy added: “I strongly believe that our best days are in front of us.”


  • Google hit with lawsuit alleging it stole data from millions of users to train its AI tools | CNN Business




(CNN) —

    Google was hit with a wide-ranging lawsuit on Tuesday alleging the tech giant scraped data from millions of users without their consent and violated copyright laws in order to train and develop its artificial intelligence products.

The proposed class action suit against Google, its parent company Alphabet, and Google’s AI subsidiary DeepMind was filed in a federal court in California on Tuesday by Clarkson Law Firm. The firm previously filed a similar suit against ChatGPT-maker OpenAI last month. (OpenAI did not respond to an earlier request for comment on that suit.)

    The complaint alleges that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans” and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken “virtually the entirety of our digital footprint,” including “creative and copywritten works” to build its AI products.

    Halimah DeLaine Prado, Google’s general counsel, called the claims in the suit “baseless” in a statement to CNN. “We’ve been clear for years that we use data from public sources — like information published to the open web and public datasets — to train the AI models behind services like Google Translate, responsibly and in line with our AI Principles,” DeLaine Prado said.

    “American law supports using public information to create new beneficial uses, and we look forward to refuting these baseless claims,” the statement added.

    Alphabet and DeepMind did not immediately respond to a request for comment.

    The complaint points to a recent update to Google’s privacy policy that explicitly states the company may use publicly accessible information to train its AI models and tools such as Bard.

    In response to an earlier Verge report on the update, the company said its policy “has long been transparent” about this practice and “this latest update simply clarifies that newer services like Bard are also included.”

    The lawsuit comes as a new crop of AI tools have gained tremendous attention in recent months for their ability to generate written work and images in response to user prompts. The large language models underpinning this new technology are able to do this by training on vast troves of online data.

    In the process, however, companies are also drawing mounting legal scrutiny over copyright issues from works swept up in these data sets, as well as their apparent use of personal and possibly sensitive data from everyday users, including data from children, according to the Google lawsuit.

    “Google needs to understand that ‘publicly available’ has never meant free to use for any purpose,” Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. “Our personal information and our data is our property, and it’s valuable, and nobody has the right to just take it and use it for any purpose.”

    The suit is seeking injunctive relief in the form of a temporary freeze on commercial access to and commercial development of Google’s generative AI tools like Bard. It is also seeking unspecified damages and payments as financial compensation to people whose data was allegedly misappropriated by Google. The firm says it has lined up eight plaintiffs, including a minor.

    Giordano contrasted the benefits and alleged harms of how Google typically indexes online data to support its core search engine with the new allegations of it scraping data to train AI tools.

    With its search engine, he said, Google can “serve up an attributed link to your work that can actually drive somebody to purchase it or engage with it.” Data scraping to train AI tools, however, is creating “an alternative version of the work that radically alters the incentives for anybody to need to purchase the work,” Giordano added.

    While some internet users may have grown accustomed to their digital data being collected and used for search results or targeted advertising, the same may not be true for AI training. “People could not have imagined their information would be used this way,” Giordano said.

    Ryan Clarkson, a partner at the law firm, said Google needs to “create an opportunity for folks to opt out” of having their data used for training AI while still maintaining their ability to use the internet for their everyday needs.
