ReportWire

Tag: iab-artificial intelligence

  • One news publication had an AI tool write articles. It didn’t go well | CNN Business

New York (CNN) — News outlet CNET said Wednesday it has issued corrections on a number of articles, including some that it described as “substantial,” after using an artificial intelligence-powered tool to help write dozens of stories.

    The outlet has since hit pause on using the AI tool to generate stories, CNET’s editor-in-chief Connie Guglielmo said in an editorial on Wednesday.

    The disclosure comes after CNET was previously called out publicly for quietly using AI to write articles and later for errors. While using AI to automate news stories is not new – the Associated Press began doing so nearly a decade ago – the issue has gained new attention amid the rise of ChatGPT, a viral new AI chatbot tool that can quickly generate essays, stories and song lyrics in response to user prompts.

    Guglielmo said CNET used an “internally designed AI engine,” not ChatGPT, to help write 77 published stories since November. She said this amounted to about 1% of the total content published on CNET during the same period, and was done as part of a “test” project for the CNET Money team “to help editors create a set of basic explainers around financial services topics.”

    Some headlines from stories written using the AI tool include, “Does a Home Equity Loan Affect Private Mortgage Insurance?” and “How to Close A Bank Account.”

    “Editors generated the outlines for the stories first, then expanded, added to and edited the AI drafts before publishing,” Guglielmo wrote. “After one of the AI-assisted stories was cited, rightly, for factual errors, the CNET Money editorial team did a full audit.”

    The result of the audit, she said, was that CNET identified additional stories that required correction, “with a small number requiring substantial correction.” CNET also identified several other stories with “minor issues such as incomplete company names, transposed numbers, or language that our senior editors viewed as vague.”

    One correction, which was added to the end of an article titled “What Is Compound Interest?” states that the story initially gave some wildly inaccurate personal finance advice. “An earlier version of this article suggested a saver would earn $10,300 after a year by depositing $10,000 into a savings account that earns 3% interest compounding annually. The article has been corrected to clarify that the saver would earn $300 on top of their $10,000 principal amount,” the correction states.
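
The arithmetic behind that correction is easy to verify. As a quick illustration (ours, not CNET’s), the short Python sketch below reproduces the calculation: the AI draft evidently reported the ending balance, $10,300, as the amount “earned,” while the interest actually earned after one year is $300.

# A minimal sketch (ours, not CNET's) checking the corrected figure:
# $10,000 at 3% interest, compounded annually, held for one year.
principal = 10_000
rate = 0.03

balance_after_one_year = principal * (1 + rate)
interest_earned = balance_after_one_year - principal

print(balance_after_one_year)  # 10300.0 -- the balance the AI draft misreported as earnings
print(interest_earned)         # 300.0 -- the interest the saver actually earns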

    Another correction suggests the AI tool plagiarized. “We’ve replaced phrases that were not entirely original,” according to the correction added to an article on how to close a bank account.

    Guglielmo did not state how many of the 77 published stories required corrections, nor did she break down how many required “substantial” fixes versus more “minor issues.” Guglielmo said the stories that have been corrected include an editors’ note explaining what was changed.

    CNET did not immediately respond to CNN’s request for comment.

    Despite the issues, Guglielmo left the door open to resuming use of the AI tool. “We’ve paused and will restart using the AI tool when we feel confident the tool and our editorial processes will prevent both human and AI errors,” she said.

Guglielmo also said that CNET has more clearly disclosed to readers which stories were compiled using the AI engine. The outlet took some heat from critics on social media for not making it overtly clear to its audience that “By CNET Money Staff” meant a story was written using AI tools. The new byline is just: “By CNET Money.”


  • Microsoft quarterly profit falls 12% but cloud computing business shows strength | CNN Business

(CNN) — Microsoft on Tuesday posted weaker-than-expected revenue and a double-digit percentage drop in profit for the final three months of last year amid broader economic uncertainty and reduced demand for personal computers and software.

    The tech giant reported revenue of $52.7 billion for the quarter, a modest 2% increase from the year prior but slightly less than analysts had expected. It reported net income of $16.4 billion, a 12% decline from the year prior.

    The earnings results come at a turbulent moment for Microsoft, and the tech industry as a whole. Microsoft said last week that it plans to lay off 10,000 employees as part of broader cost-cutting measures. In his explanation of the cuts, CEO Satya Nadella pointed to changing demand for digital services years into the pandemic as well as looming recession fears.

    Demand for personal computers, and the Microsoft operating systems that power them, has pulled back after experiencing a boom early in the pandemic. Consulting firm Gartner said earlier this month that worldwide PC shipments fell more than 28% in the fourth quarter of 2022 compared to the same period the prior year. This marked the largest quarterly shipment decline since Gartner began tracking the PC market in the mid-90s.

    On Tuesday, Microsoft reported revenue declines from its Windows OEM operations and from its Xbox content and services lines. Microsoft also said it would incur $800 million in severance expenses from the layoffs announced this month, as well as charges from “changes to our hardware portfolio, and costs related to lease consolidation activities.”

    But the earnings report had some bright spots. Revenue from its cloud computing division, a key area of focus for Microsoft in recent years, increased 22% from the prior year. An analyst at Evercore described the results as “a sigh of relief.”

    Shares of Microsoft rose 4% in after-hours trading Tuesday on the news.

    “The next major wave of computing is being born, as the Microsoft Cloud turns the world’s most advanced AI models into a new computing platform,” CEO Satya Nadella said in a statement accompanying the results. “We are committed to helping our customers use our platforms and tools to do more with less today and innovate for the future in the new era of AI.”

    Earlier this week, Microsoft confirmed it is making a “multibillion dollar” investment into OpenAI, the company behind the viral AI-powered chatbot tool ChatGPT. The deepening partnership between the two companies – Microsoft was an early investor in OpenAI – could help catapult Microsoft as an AI leader and pave the way for the company to incorporate elements of ChatGPT into some of its hallmark applications, such as Outlook and Word.

    In his memo to staffers announcing the job cuts, Nadella said the company will continue to invest in “strategic areas for our future” and pointed to advances in AI as “the next major wave” of computing.


  • Microsoft confirms it’s investing billions in ChatGPT creator OpenAI | CNN Business

(CNN) — Microsoft on Monday confirmed it is making a “multibillion dollar” investment in OpenAI, the company behind the viral new chatbot tool called ChatGPT.

    Microsoft, an early investor in OpenAI, said it plans to expand its existing partnership with the company as part of a greater effort to add more artificial intelligence to its suite of products. In a separate blog post, OpenAI said the multi-year investment will be used to “develop AI that is increasingly safe, useful, and powerful.”

    In late November, OpenAI opened up access to ChatGPT, an AI-powered chatbot that can provide lengthy, thoughtful and thorough responses to user prompts and questions. Its responses, while sometimes inaccurate, have stunned users, including academics and some in the tech industry.

    The investment comes days after Microsoft announced plans to lay off 10,000 employees as part of broader cost-cutting measures, making it the latest tech company to reduce staff because of growing economic uncertainty.

    Microsoft CEO Satya Nadella said that the company was not immune to a weaker global economy, but he also said the company will continue to invest in “strategic areas for our future” and pointed to advances in AI as “the next major wave” of computing.

    The investment in OpenAI could catapult Microsoft as an AI leader and ultimately pave the way for the company to incorporate ChatGPT into some of its hallmark applications, such as Word, PowerPoint and Outlook.

As a result of its existing exclusive deal with OpenAI, Microsoft recently said it would soon add ChatGPT features to its cloud computing service, Azure. If ChatGPT becomes available on that service, businesses could use the tool directly within their own apps and services, too.

    Ahead of Monday’s announcement, David Lobina, an artificial intelligence analyst at ABI Research, told CNN there are big benefits of a further Microsoft investment for OpenAI, too.

    “OpenAI is looking to monetize their systems, considering the huge compute costs of creating these models, and their partnership with Microsoft can be an easy way to do so,” he said.


  • Asia’s richest man Gautam Adani is addicted to ChatGPT | CNN Business

New Delhi (CNN) — Asia’s richest man Gautam Adani says he is addicted to ChatGPT, the powerful new AI tool that interacts with users in an eerily convincing and conversational way.

In a LinkedIn post last week, the 60-year-old Indian tycoon said that the release of ChatGPT was a “transformational moment in the democratization of AI given its astounding capabilities as well as comical failures.”

The billionaire admitted to “some addiction” to ChatGPT since he started using it.

    The tool, which artificial intelligence research company OpenAI made available to the general public late last year, has sparked conversations about how “generative AI” services — which can turn prompts into original essays, stories, songs and images after training on massive online datasets — could radically transform how we live and work.

    Some claim it will put artists, tutors, coders, and writers out of a job. Others are more optimistic, postulating that it will allow employees to tackle to-do lists with greater efficiency.

    “But there can be no doubt that generative AI will have massive ramifications,” Adani wrote in his post, adding that generative AI holds the “same potential and danger” as silicon chips.

    “Nearly five decades ago, the pioneering of chip design and large-scale chip production put the US ahead of rest of the world and led to the rise of many partner countries and tech behemoths like Intel, Qualcomm, TSMC, etc,” Adani, who has businesses in sectors ranging from ports to power stations, wrote.

    “It also paved the way for precision and guided weapons used in modern warfare with more chips mounted than ever before,” he added. The race in the field of generative AI will quickly get as “complex and as entangled as the ongoing silicon chip war,” he said.

    Chipmaking has emerged recently as a new flashpoint in US-China tensions, with Washington blocking sales of advanced computer chips and chip-making equipment to Chinese companies. Some Chinese investments in European chipmaking have also been blocked.

    The Indian infrastructure magnate believes that China has an edge over the United States in the AI race because Chinese researchers published twice as many academic papers on the subject as their American counterparts in 2021, he wrote in the post published on Friday after attending the World Economic Forum in Davos.

    Back home, Adani is also considering taking five new businesses to the stock market in the next five years, according to his conglomerate’s chief financial officer Jugeshinder Singh.

Speaking to reporters on Saturday in the western Indian city of Ahmedabad — where the Adani empire is headquartered — Singh said the group’s metals and mining, energy, data center, airports, and roads businesses will likely be spun off between 2025 and 2028.

Adani Enterprises, the conglomerate’s flagship company, functions as an incubator for Adani’s businesses. Once they have matured, they are often given their independence via a stock market listing. Many of Adani’s companies have become leading players in their respective sectors.

    Later this month, Adani Enterprises is also raising 200 billion rupees ($2.5 billion) by issuing new shares. It would be India’s biggest ever follow-on public share offering.

    A college dropout and a self-made industrialist, Adani is worth over $120 billion, making him the world’s third richest man, ahead of Jeff Bezos and Bill Gates.

    Shares of Adani’s seven listed companies — in sectors ranging from ports to power stations — have seen turbocharged growth in the last few years. But some analysts fear that this growth comes at a huge risk as Adani’s $206 billion juggernaut has been fueled by a $30 billion borrowing binge, making his business one of the most indebted in the country.


  • Teachers are adapting to concerns about a powerful new AI tool | CNN Business

(CNN) — When Kristen Asplin heard about a powerful new AI chatbot tool called ChatGPT going viral online recently with its ability to write frighteningly good essays in seconds, she worried about how her students could use it to cheat.

    Asplin, a professor at University of Pittsburgh at Greensburg, soon joined a new Facebook group for teachers like herself to swap concerns and suggestions on how to restructure their lessons and assignments in response to ChatGPT. The tool, which launched in late November, can create detailed responses to simple prompts like “Who was the 25th president of the United States?” as well as answers to more complex questions like “What political developments led to the fall of the Roman Empire?”

    Asplin eventually decided to tweak her approach to written assignments. Instead of focusing just on the final product, which could potentially be spit out easily by ChatGPT, she’s now asking students to hand in their papers at various stages of the writing process.

    “I am emphasizing and being more vigilant about the early steps in the writing process so I can see their progress,” Asplin said about her new approach to class assignments. “This will give students more confidence in the process of writing so they are less likely to be desperate enough to cheat. It will also show me their work along the way so they can’t just type a prompt in the program and have the computer do their work for them.”

In the weeks since the artificial intelligence research group OpenAI launched ChatGPT, which is trained on a massive trove of information online to create its responses, the tool has been used to write articles (with more than a couple of factual inaccuracies) for at least one news publication, pen lyrics in the style of various artists (one of whom later responded, “this song sucks”) and draft research paper abstracts that fooled some scientists.

    But while many may view the tool as a novelty with unknown long-term consequences, a growing number of schools and teachers are concerned about its immediate impact on students and their ability to cheat on assignments. The Facebook group that Asplin joined, for example, has added more than 800 members in just the few weeks since it was created.

Some educators are now moving with remarkable speed to rethink their assignments in response to ChatGPT, even as it remains unclear how widespread the tool’s use is among students and how harmful it could really be to learning. In interviews with CNN, some college instructors said they are shifting back to in-classroom essays for the first time in years, and others are requiring more personalized essays. Some teachers said they’ve also heard of students being required to film short videos that elaborate on their thought process. Public schools in New York City and Seattle, meanwhile, have already banned students and teachers from using ChatGPT on their districts’ networks and devices.

While there have been some anecdotes of cheating cases circulating on the internet and stirring fears of more to come, some teachers are urging their peers not to overreact to a new technology.

    “There’s been a mass hysteria response to ChatGPT potentially ruining writing, while other people think it’s actually a good thing,” said Alan Reid, an associate professor of English at Coastal Carolina University. “We have to try to straddle the two sides and recognize the drawbacks alongside the positives.”

    In recent weeks, Kevin Pittle, an associate professor at Biola University in California, has found himself thinking about what ChatGPT knows.

“Before assigning materials, I thoroughly interrogate ChatGPT to see what it does or does not ‘know’ about the material or have access to,” he said. With that in mind, he said he’s now requiring his students to cite specific sources that are unavailable to ChatGPT, including textbooks, articles behind paywalls, and materials produced after 2021, the cutoff of the internet data ChatGPT was trained on.

    And he’s not stopping there.

    “ChatGPT doesn’t ‘have soul’ – its fictional reflections are generally pretty lifeless – so in one course I am requiring much more ‘soul-searching’ and reflective journaling than ChatGPT seems able to fake,” he said.

    OpenAI previously told CNN it made ChatGPT available as a preview to learn from real world use. A spokesperson called that step a “critical part of developing and deploying capable, safe AI systems.”

    “We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system,” the spokesperson said. “We look forward to working with educators on useful solutions, and other ways to help teachers and students benefit from artificial intelligence.”

Some companies such as Turnitin are already actively working on ChatGPT plagiarism detection tools that could help teachers identify when assignments are written by the tool. (Turnitin’s existing plagiarism detection tools are already used by 16,000 schools, publishers and corporations.) Princeton student Edward Tian told CNN more than 95,000 people have already tried the beta version of his own ChatGPT detection tool, called GPTZero, noting there has been “incredible demand among teachers” so far.

    The concern extends beyond the United States. Alex Steel, the director of teaching strategy and a professor of law at the University of New South Wales, said a number of universities across Australia have announced a move back to closed book exams.

    “There is an increasing number of academics concerned that they will not be able to detect AI-written answers,” he told CNN. “Partly the concerns are driven by a lack of understanding from teachers of what sort of questions might be susceptible … so staff may push for return to exams until [these issues] can be addressed.”

    Not all teachers are looking for ways to crack down on ChatGPT. Reid, the professor at Coastal Carolina University, believes teachers should work with ChatGPT and teach best practices in the classroom.

    Reid said teachers could encourage students to plug an assignment question into the tool and have them compare that result to what they personally wrote. “This could also allow a teaching opportunity for students to see what they missed, analyze the various approaches they could have taken or use it as a starting point to help with an outline,” Reid said.

    He argued there will always be ways for students to cheat online, so teaching them how ChatGPT may improve their own writing could be a practical step forward.

    “The burden falls onto the educators – and many don’t want to be police in the classroom,” he said. “The way to handle it is for teachers to examine their own practices and think about how it can be used positively. If they ignore this thing and don’t know anything about it, that leaves the door open for students to use it to cheat and get away with it.”


    Leslie Layne, an English and linguistics professor at the University of Lynchburg in Virginia, agrees. She now plans to teach students how ChatGPT could improve their writing.

    “ChatGPT can give students a running start, so they’re not starting on a blank page. But it doesn’t come close to a finished product,” she said. “We want students to include more sourcing and evidence, so it could be used as something to build on.”

    She likened ChatGPT to the outcry around calculators when they first came out. “People were very concerned we would lose the ability to do basic math,” she said. “Now we carry one wherever we go with our phones, and it is so helpful.”

    Layne said teachers could consider having students critique how ChatGPT handled an assignment question, teach students how to find the best prompt for the best response, and have ChatGPT argue one side of a topic and a student argue the other side.

    “Like with other new technologies, this could be a tool instructors use to help students express their ideas,” she said. “Students just have to learn how to improve its writing and adapt it to their own voice.”


  • CEOs at Davos are using ChatGPT to write work emails | CNN Business

Davos, Switzerland (CNN) — Jeff Maggioncalda, the CEO of online learning provider Coursera, said that when he first tried ChatGPT, he was “dumbstruck.” Now, it’s part of his daily routine.

    He uses the powerful new AI chatbot tool to bang out emails. He uses it to craft speeches “in a friendly, upbeat, authoritative tone with mixed cadence.” He even uses it to help break down big strategic questions — such as how Coursera should approach incorporating artificial intelligence tools like ChatGPT into its platform.

    “I use it as a writing assistant and as a thought partner,” Maggioncalda told CNN.

    Maggioncalda is one of thousands of business leaders, politicians and academics gathered in Davos, Switzerland this week for the World Economic Forum. On the agenda is an array of pressing issues weighing on the global economy, from the energy crisis to the war in Ukraine and the transformation of trade. But what many can’t stop talking about is ChatGPT.

    The tool, which artificial intelligence research company OpenAI made available to the general public late last year, has sparked conversations about how “generative AI” services — which can turn prompts into original essays, stories, songs and images after training on massive online datasets — could radically transform how we live and work.

    Some claim it will put artists, tutors, coders, and writers (yes, even journalists) out of a job. Others are more optimistic, postulating that it will allow employees to tackle to-do lists with greater efficiency or focus on higher-level tasks.

    It’s a debate that’s captivated many C-suite leaders, often after they tested the tool themselves.

    Christian Lanng, CEO of digital supply chain platform Tradeshift, said he was blown away by the capabilities displayed by ChatGPT, even after years of exposure to Silicon Valley hype.

    He’s also used the platform to write emails and claims no one has noticed the difference. He even had it perform some accounting work, a service for which Tradeshift currently employs an expensive professional services firm.

    To date, ChatGPT has mostly been treated as a curiosity and a harbinger of what’s to come. It relies on OpenAI’s GPT-3.5 language model, which is already out of date; the more advanced GPT-4 version is in the works and could be released this year.

    Critics — of which there are many — are quick to point out that it makes mistakes, is painfully neutral and displays a clear lack of human empathy. One tech news publication, for example, was forced to issue several significant corrections for an article written by ChatGPT. And New York City public schools have banned students and teachers from using it.

    Yet the software, or similar programs from competitors, could soon take the business world by storm.

Microsoft (MSFT), an investor in OpenAI, announced this week that the company’s tools — including GPT-3.5, programming assistant Codex and image generator DALL-E 2 — are now generally available to business clients in a package called Azure OpenAI Service. ChatGPT is being added soon.

    “I see these technologies acting as a copilot, helping people do more with less,” Microsoft CEO Satya Nadella told an audience in Davos this week.

    Maggioncalda has a similar perspective. He wants to integrate generative AI into Coursera’s offering this year, seeing an opportunity to make learning more interactive for students who don’t have access to in-person classroom instruction or one-on-one time with subject matter experts.

He acknowledges that challenges such as preventing cheating and ensuring accuracy need to be addressed. And he’s worried that increasing use of generative AI may not be wholly good for society — people may become less agile thinkers, for example, since the act of writing can be helpful to process complex ideas and hone takeaways.

    Still, he sees the need to move quickly.

    “Anybody who doesn’t use this will shortly be at a severe disadvantage. Like, shortly. Like, very soon,” Maggioncalda said. “I’m just thinking about my cognitive ability with this tool. Versus before, it’s a lot higher, and my efficiency and productivity is way higher.”


  • Getty Images suing the makers of popular AI art tool for allegedly stealing photos | CNN Business

New York (CNN) — Getty Images announced a lawsuit against Stability AI, the company behind popular AI art tool Stable Diffusion, alleging the tech company committed copyright infringement.

    The stock image giant accused Stability AI of copying and processing millions of its images without obtaining the proper licensing, according to a press release issued Tuesday. London-based Stability AI announced it had raised $101 million in funding for open-source AI tech in October and released version 2.1 of its Stable Diffusion tool in December.

    “Getty Images believes artificial intelligence has the potential to stimulate creative endeavors. Accordingly, Getty Images provided licenses to leading technology innovators for purposes related to training artificial intelligence systems in a manner that respects personal and intellectual property rights,” Getty wrote in the statement. “Stability AI did not seek any such license from Getty Images and instead, we believe, chose to ignore viable licensing options and long standing legal protections in pursuit of their stand-alone commercial interests.”

    Getty declined to comment further on the suit to CNN, but said that it requested a response from the AI firm before taking action. Stability AI did not respond to CNN’s request for comment.

AI art and traditional media suppliers have struggled to coexist in recent months as computer-generated images, which use human-created images and art as training data, grow more available and advanced.

    Once available only to a select group of tech insiders, text-to-image AI systems are becoming increasingly popular and powerful. These systems include Stable Diffusion and DALL-E, from OpenAI.

    Shutterstock, a Getty Images competitor and fellow stock image platform, announced plans in October to expand its partnership with OpenAI, the company behind DALL-E and viral AI chat bot ChatGPT, and enhance AI-generated content while launching a fund to compensate artists for their contributions.

    These tools, which typically offer some free credits before charging, can create all kinds of images with just a few words, including those that are clearly evocative of the works of many, many artists, if not seemingly created by them. Users can invoke those artists with words such as “in the style of” or “by” along with a specific name. Current uses for these tools can range from personal amusement and hobbies to more commercial cases.

    In just months, millions of people have flocked to text-to-image AI systems which are already being used to create experimental films, magazine covers and images to illustrate news stories. An image generated with an AI system called Midjourney recently won an art competition at the Colorado State Fair, creating an uproar among artists, who are concerned that their art can be stolen by these systems without due credit.

    “I don’t want to participate at all in the machine that’s going to cheapen what I do,” Daniel Danger, an illustrator and print maker who learned a number of his works were used to train Stable Diffusion, told CNN in October.

    Stability AI founder and CEO Emad Mostaque told CNN Business in October via email that art is a tiny fraction of the LAION training data behind Stable Diffusion. “Art makes up much less than 0.1% of the dataset and is only created when deliberately called by the user,” he said.


  • From color-changing cars to self-driving strollers, here’s some of the coolest tech from CES 2023 | CNN Business

(CNN) — A long list of companies once again showed off an assortment of cutting-edge technology and oddball gadgets at the Consumer Electronics Show in Las Vegas last week.

    There were new twists on foldable devices, cars that changed colors and smart ovens that live streamed dinners. There was a self-driving stroller, a pillow that pulsates to reduce anxiety and a locker from LG that claims to deodorize smelly sneakers in less than 40 minutes. At the event, some people gathered in groups, sitting in silence, to test out the latest virtual reality products.

    While some of these devices may never find their way into households, the products on display offer a glimpse at some of the biggest tech trends companies are anticipating this year and in the years ahead.

    Here’s a look at some of the buzziest products announced last week:

    BMW unveiled a wild color-changing concept car with 260 e-panels that can change up to 32 colors. During a demo, different parts of the car, including the wheel covers, flashed in varying hues and swirls of colors. The technology, which relies on panels that receive electrical impulses, isn’t ready for production. (Breaks between panels and what looked like wiring could be seen on the outside of the car.) But just imagine being able to drive a sporty red car on the weekends and then a conservative gray model when you go to work.

    If you think snapping photos of your meal for Instagram is overdone, now you can livestream your dinner as it cooks in real time and post it to your social feeds. Samsung’s new AI Wall oven features an internal camera that can capture footage of your baking food or allow you to keep tabs on it without ever leaving the couch. The oven, which uses an algorithm to recognize dishes and suggest cooking times and temperatures, also pushes notifications to your phone to prevent you from burning meals. The oven will launch in North America later this year; a price has not yet been announced.


Canada-based baby gear startup Gluxkind showed off its Ella AI Powered Smart Stroller. It offers much of the same tech seen in autonomous cars and delivery robots, including a dual-motor system for uphill walks and automatic downhill brake assist. It’s meant to serve as an “extra pairs of eyes and an extra set of hands,” according to the company’s website – not a replacement for a caregiver. The Ella stroller is able to drive itself for hands-free strolling – but only when a child is not inside.


    No gadget at CES this year was as striking as the Mutalk mouthpiece from startup Shiftall. The device, which looks like a muzzle, features a soundproof Bluetooth microphone that makes it difficult for others in the room to hear your voice when you’re on calls. The company thinks the $200 gadget will come in handy for everything from voice chats and playing online games to shouting in VR when you don’t want to disturb anyone else nearby. Instead of hearing you, they will simply see your new mouthpiece; you can decide which is worse.

If you ever wanted to hit 15 miles per hour on roller skates, this electric pair from French startup AtmosGear promises to help get you there. With a battery pack that holds an hour’s charge and the ability to travel over 12 miles, the skates can clip onto any existing roller skates, turning them into motor-propelled footwear. The skates are currently available for pre-order for $525.


You’ve probably heard of smartphones that come with headphones, but what about headphones that come with a screen? The JBL Tour Pro 2 earbuds add a touchscreen to the case, bringing smartwatch-like capabilities by allowing users to control settings, answer calls, set alarms, manage music and check battery life. No launch date has been announced, but the new buds will cost $250 when they eventually go on sale.


Some companies offered a new twist on the foldable phone concept. For example, Samsung Display’s Flex Hybrid prototype features a foldable and slidable display (the right side slides out to offer more screen space). Meanwhile, the $3,500 Asus Zenbook 17 Fold OLED – the world’s first foldable 17-inch laptop – picked up significant buzz on the show floor, acting almost like a large tablet that can be folded in half when on the go.

    Dubbed “the world’s first awareable,” the $500 Nowatch is a watch… with no clock. The Amsterdam-based startup of the same name launched the device to help users monitor stress, body temperature, heart rate, movement and sleep. But unlike other smartwatches, there’s no watchface – instead, a gemstone sits where the touchscreen display typically goes. “We’ve replaced the traditional watch face with ancient stones, celebrating the belief that time is NOW,” the company said on its website.


Honda and Sony have joined forces to create tech-filled electric cars that, they say, will be both fun to drive and filled with the latest entertainment innovation. According to the CEO of Sony Honda Mobility, its cars will recognize your moods and be highly communicative and sensitive to your needs. The car will have screens on the outside so it can “express itself” and share information, and will be able to “detect and understand people and society by utilizing sensing and [artificial intelligence] technologies,” according to the company. That’s why the company named its first joint car brand Afeela: it just has to “feel” right. But it’s unclear if we’re afeeling that name.


While it typically requires a blood panel and a visit to the doctor’s office to learn more about vitamin deficiencies, Withings says its new $500 U-Scan device can tell you similar information right from the comfort of your own toilet. The device attaches to existing toilets and collects data from your urine stream to detect vitamin deficiencies, check hydration and monitor metabolism, according to the company. An additional device called the U-Scan Cycle Sync tracks periods and ovulation cycles.

    Schlage’s new smart lock is one of the first to work with Apple’s Home Key functionality, which allows users to upload their keys to their Apple Wallet and unlock their deadbolted front door directly from their phone or Apple Watch. The lock also works with Amazon Alexa and Google Assistant for voice controlled, hands-free locking. Available in two finishes, the deadbolt can manage access codes, view lock history and handle multiple locks at once. The lock, which will cost $300, will be available for purchase late this spring, according to a company press release.

    – CNN’s Peter Valdes-Depena contributed to this report


  • New York City public schools ban access to AI tool that could help students cheat | CNN Business

New York (CNN) — New York City public schools will ban students and teachers from using ChatGPT, a powerful new AI chatbot tool, on the district’s networks and devices, an official confirmed to CNN on Thursday.

    The move comes amid growing concerns that the tool, which generates eerily convincing responses and even essays in response to user prompts, could make it easier for students to cheat on assignments. Some also worry that ChatGPT could be used to spread inaccurate information.

    “Due to concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content, access to ChatGPT is restricted on New York City Public Schools’ networks and devices,” Jenna Lyle, the deputy press secretary for the New York public schools, said in a statement. “While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.”

    Although the chatbot is restricted under the new policy, New York City public schools can request to gain specific access to the tool for AI and tech-related educational purposes.

    Education publication ChalkBeat first reported the news.

    New York City appears to be one of the first major school districts to crack down on ChatGPT, barely a month after the tool first launched. Last month, the Los Angeles Unified School District moved to preemptively block the site on all networks and devices in their system “to protect academic honesty while a risk/benefit assessment is conducted,” a spokesperson for the district told CNN this week.

    While there are genuine concerns about how ChatGPT could be used, it’s unclear how widely adopted it is among students. Other districts, meanwhile, appear to be moving more slowly.

    Peter Feng, the public information officer for the South San Francisco Unified School District, said the district is aware of the potential for its students to use ChatGPT but it has “not yet instituted an outright ban.” Meanwhile, a spokesperson for the School District of Philadelphia said it has “no knowledge of students using the ChatGPT nor have we received any complaints from principals or teachers.”

    In a statement shared with CNN after publication, a spokesperson for OpenAI, the artificial intelligence research lab behind the tool, said it made ChatGPT available as a research preview to learn from real-world use. The spokesperson called that step a “critical part of developing and deploying capable, safe AI systems.”

    “We are constantly incorporating feedback and lessons learned,” the spokesperson added.

    The company said it aims to work with educators on ways to help teachers and students benefit from artificial intelligence. “We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system,” the spokesperson said.

OpenAI opened up access to ChatGPT in late November. It is able to provide lengthy, thoughtful and thorough responses to questions and prompts, ranging from factual questions like “Who was the president of the United States in 1955?” to more open-ended questions such as “What’s the meaning of life?”

    The tool stunned users, including academics and some in the tech industry. ChatGPT is a large language model trained on a massive trove of information online to create its responses. It comes from the same company behind DALL-E, which generates a seemingly limitless range of images in response to prompts from users.

ChatGPT went viral just days after its launch. OpenAI co-founder Sam Altman, a prominent Silicon Valley investor, said on Twitter in early December that ChatGPT had topped one million users.

But many educators fear students will use the tool to cheat on assignments. One user, for example, fed ChatGPT an AP English exam question; it responded with a five-paragraph essay about Wuthering Heights. Another user asked the chatbot four times to write an essay about the life of William Shakespeare; he received a unique version from the same prompt each time.

    Darren Hicks, assistant professor of philosophy at Furman University, previously told CNN it will be harder to prove when a student misuses ChatGPT than with other forms of cheating.

    “In more traditional forms of plagiarism – cheating off the internet, copy pasting stuff – I can go and find additional proof, evidence that I can then bring into a board hearing,” he said. “In this case, there’s nothing out there that I can point to and say, ‘Here’s the material they took.’”

    “It’s really a new form of an old problem where students would pay somebody or get somebody to write their paper for them – say an essay farm or a friend that has taken a course before,” Hicks added. “This is like that only it’s instantaneous and free.”

    Feng, from the South San Francisco Unified School District, told CNN that “some teachers have responded to the rise of AI text generators by using tools of their own to check whether work submitted by students has been plagiarized or generated via AI.”

Some companies such as Turnitin – whose detection tools thousands of school districts use to scan the internet for signs of plagiarism – are now looking into how their software could detect the use of AI-generated text in student submissions.

    Hicks said teachers will need to rethink assignments so they couldn’t be easily written by the tool. “The bigger issue,” Hicks added, “is going to be administrations who have to figure out how they’re going to adjudicate these kinds of cases.”

    – CNN’s Abby Phillip contributed to this report.


  • Video: ‘Swifties’ take on Ticketmaster, new AI chatbot coming for your job and Apple sued for AirTag stalking on CNN Nightcap | CNN Business

    The AI chatbot coming for your job, ‘Swifties’ take on Ticketmaster, and Apple sued for AirTag stalking

Nightcap’s Jon Sarlin talks to futurist Amy Webb about the implications of ChatGPT, the next-gen AI tool that’s blowing everyone’s minds. Plus, Morgan Harper of the American Economic Liberties Project on whether Ticketmaster has met its match in Taylor Swift and her legion of devoted fans. And CNN’s Sam Kelly on the lawsuit filed against Apple by two women alleging their exes used AirTags to stalk them. To get the day’s business headlines sent directly to your inbox, sign up for the Nightcap newsletter.




  • FTC chair Lina Khan warns AI could ‘turbocharge’ fraud and scams | CNN Business

Washington (CNN) — Artificial intelligence tools such as ChatGPT could lead to a “turbocharging” of consumer harms including fraud and scams, and the US government has substantial authority to crack down on AI-driven consumer harms under existing law, members of the Federal Trade Commission said Tuesday.

    Addressing House lawmakers, FTC chair Lina Khan said the “turbocharging of fraud and scams that could be enabled by these tools are a serious concern.”

In recent months, a new crop of AI tools has gained attention for their ability to generate convincing emails, stories and essays as well as images, audio and videos. While these tools have the potential to change the way people work and create, some have also raised concerns about how they could be used to deceive by impersonating individuals.

    Even as policymakers across the federal government debate how to promote specific AI rules, citing concerns about possible algorithmic discrimination and privacy issues, companies could still face FTC investigations today under a range of statutes that have been on the books for years, Khan and her fellow commissioners said.

“Throughout the FTC’s history we have had to adapt our enforcement to changing technology,” said FTC Commissioner Rebecca Slaughter. “Our obligation is to do what we’ve always done, which is to apply the tools we have to these changing technologies … [and] not be scared off by this idea that this is a new, revolutionary technology.”

    FTC Commissioner Alvaro Bedoya said companies cannot escape liability simply by claiming that their algorithms are a black box.

    “Our staff has been consistently saying our unfair and deceptive practices authority applies, our civil rights laws, fair credit, Equal Credit Opportunity Act, those apply,” said Bedoya. “There is law, and companies will need to abide by it.”

    The FTC has previously issued extensive public guidance to AI companies, and the agency last month received a request to investigate OpenAI over claims that the company behind ChatGPT has misled consumers about the tool’s capabilities and limitations.


  • Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman set to testify before Congress | CNN Business

(CNN) — OpenAI CEO Sam Altman is set to testify before a Senate Judiciary subcommittee on Tuesday after the viral success of ChatGPT, his company’s chatbot tool, renewed an arms race over artificial intelligence and sparked concerns from some lawmakers about the risks posed by the technology.

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”

    A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    Montgomery is expected to urge Congress to adopt a “precision regulation” approach for AI based on specific use cases, and to suggest that lawmakers push companies to test how their systems handle bias and other concerns – and disclose those results.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts.

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    – CNN’s Jennifer Korn contributed to this report.


  • Bill Gates says AI risks are real but nothing we can’t handle | CNN Business

(CNN) — Bill Gates sounds less worried than some other executives in Silicon Valley about the risks of artificial intelligence.

    In a blog post on Tuesday, the Microsoft co-founder outlined some of the biggest areas of concern with artificial intelligence, including the potential for spreading misinformation and displacing jobs. But he stressed that these risks are “manageable.”

    “This is not the first time a major innovation has introduced new threats that had to be controlled,” Gates wrote. “We’ve done it before.”

    Gates likened AI to previous “transformative” changes in society, such as the introduction of the car, which then required the public to adopt seat belts, speed limits, driver’s licenses and other safety standards. Innovation, he said, can create “a lot of turbulence” in the beginning, but society can “come out better off in the end.”

    Microsoft is one of the leaders in the race to develop and deploy a new crop of generative AI tools into popular products with the promise of helping people be more productive and creative. But a number of prominent figures in the industry have also publicly raised doomsday scenarios about the rapidly evolving technology.

    In late May, tech leaders including Microsoft’s CTO Kevin Scott joined dozens of AI researchers and some celebrities in signing a one-sentence letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    Gates has previously said people should not “panic” about apocalyptic AI scenarios. In a blog post earlier this year, Gates wrote: “Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us? Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months.”

In his blog post this week, Gates said he believes one of the biggest areas of concern for AI is the potential for deepfakes and AI-generated misinformation to undermine elections and democracy. Gates said he is “hopeful” that “AI can help identify deepfakes as well as create them.” He also said laws need to be clear about deepfake usage and labeling “so everyone understands when something they’re seeing or hearing is not genuine.”

    Gates also expressed concern over how AI could make it easier for hackers and even countries to launch cyberattacks on people and governments. Gates urged the development of related cybersecurity measures and for governments to consider creating a global body for AI similar to the International Atomic Energy Agency.

Gates ticked through other concerns, too, including how AI could take away people’s jobs, perpetuate biases baked into the data on which it’s trained, and even disrupt the way kids learn to write.

    “It reminds me of the time when electronic calculators became widespread in the 1970s and 1980s,” Gates wrote. “Some math teachers worried that students would stop learning how to do basic arithmetic, but others embraced the new technology and focused on the thinking skills behind the arithmetic.”

    Gates said “it’s natural to feel unsettled” during a transition period, but added he is optimistic about the future and how “history shows that it’s possible to solve the challenges created by new technologies.”

    “It’s the most transformative innovation any of us will see in our lifetimes,” he wrote, “and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks.”


  • The viral new ‘Drake’ and ‘Weeknd’ song is not what it seems | CNN Business

(CNN) — One of the buzziest songs recently circulating on TikTok and climbing the Spotify charts featured the familiar voices of best-selling artists Drake and the Weeknd. But there’s a twist: Drake and the Weeknd appear to have had nothing to do with it.

    The viral track, “Heart on my Sleeve,” comes from an anonymous TikTok user named Ghostwriter977, who claims to have used artificial intelligence to generate the voices of Drake and the Weeknd for the track.

    “I was a ghostwriter for years and got paid close to nothing just for major labels to profit,” Ghostwriter977 wrote in the video comments. “The future is here.”

    “Heart on my Sleeve” racked up more than 11 million views across several videos in just a few days and was streamed on Spotify hundreds of thousands of times. The original TikTok video has seemingly been taken down, and the song has since been removed from streaming services including YouTube, Apple Music and Spotify. (TikTok, YouTube, Apple and Spotify did not respond to a request for comment.)

    The exact origin of the song remains unclear, and some have suggested it could be a publicity stunt. But the stunning traction for “Heart on my Sleeve” may only add to the anxiety inside the music industry as it goes on offense against the possible threat posed by a new crop of increasingly powerful AI tools on the market.

Universal Music Group, the music label that represents Drake, The Weeknd and numerous other superstars, sent urgent letters in April to streaming platforms, including Spotify and Apple Music, asking them to block AI platforms from training on the melodies and lyrics of their copyrighted songs.

    “The training of generative AI using our artists’ music — which represents both a breach of our agreements and a violation of copyright law as well as the availability of infringing content created with generative AI on digital service providers – begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation,” the company said in a statement this week to CNN.

    The record label said platforms have “a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.”

    But attempting to crack down on AI-generated music may pose a unique challenge. The legal landscape for AI work remains unclear, the tools to create it are widely accessible and social media makes it easier than ever to distribute it.

    AI-generated music is not new. Taryn Southern’s debut song “Break Free,” which was composed and produced with AI, hit the Top 100 radio charts back in 2018, and VAVA, an AI music artist (i.e. not a human), currently has a single out in Thailand.

    But a new crop of AI tools has made it easier than ever to quickly generate convincing images, audio, video and written work. Some services, such as Boomy, specifically leverage generative AI to make music creation more accessible.

    Little is known about who is behind the Ghostwriter977 account, or which tools the creator used to make the track. The user did not respond to a CNN request for comment.

    In the bio section of the user’s TikTok account, a link directs users to a page on Laylo, a website where fans can sign up to get notifications from artists when new songs drop or when merchandise and tickets become available. The company told CNN the account holder likely registered to build up a fan base and brought in “tens of thousands” of signups in the past few days.

    Laylo CEO Alec Ellin denied that the company was behind the viral track as some have speculated, but Ellin told CNN whoever did make it was “clearly a really savvy creator” and called it “a perfect example of the power of using Laylo to own your audience.”

    Michael Inouye, an analyst at ABI Research, said “Heart on my Sleeve” could have been made in several ways depending on the sophistication of the AI and level of musical talent.

    “If music artists were involved, they could create the background music and the lyrics, and then the AI model could be trained with content from Drake and The Weeknd to replicate their voices and singing styles,” he said. “AI could also have generated most of the song, lyrics and replicated the artists again based on the training data set and any prompts given to direct the AI model.”

    He added that part of the fascination with the song, and its virality, comes from “just how good AI has gotten at creating content, which includes replicating famous people.”
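    Neither Inouye nor the anonymous creator has disclosed the actual toolchain, so the following is only a minimal sketch of the two-step workflow Inouye describes: produce a demo track, then swap the vocal timbre using a model trained on the target artist’s voice. The `VoiceModel` class is a hypothetical stand-in, not a real library.

```python
from dataclasses import dataclass

@dataclass
class Track:
    vocals: bytes        # isolated vocal stem
    instrumental: bytes  # backing music

class VoiceModel:
    """Hypothetical stand-in for a voice-conversion model; real systems
    are fit on hours of a target artist's isolated vocal audio."""
    def __init__(self, reference_vocals: list[bytes]):
        self.reference_vocals = reference_vocals

    @classmethod
    def train(cls, reference_vocals: list[bytes]) -> "VoiceModel":
        return cls(reference_vocals)

    def convert(self, vocals: bytes) -> bytes:
        # A real model would re-synthesize the input in the target timbre;
        # this stub simply passes the audio through unchanged.
        return vocals

def make_soundalike(reference_vocals: list[bytes], demo: Track) -> Track:
    # Step 1: fit a voice model on the target artist's vocals.
    voice_model = VoiceModel.train(reference_vocals)
    # Step 2: convert only the demo's vocals; the instrumental is untouched.
    return Track(vocals=voice_model.convert(demo.vocals),
                 instrumental=demo.instrumental)
```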

    Roberto Nickson, who is building an AI platform to help boost productivity and workflow, recently posted a video on Twitter showing how easy it is to record a verse and train an AI model to replace his vocals. He used the artist formerly known as Kanye West as an example.

    “The results will blow your mind,” he said. “You’re going to be listening to songs by your favorite artist that are completely indistinguishable and you’re not going to know if it’s them or not.”

    Although the entertainment industry has seen these issues coming, regulations are lagging behind the rapid pace of AI development.

    Audrey Benoualid, an entertainment lawyer based in Los Angeles, said one could argue “Heart On My Sleeve” does not infringe copyright as it appears to be an “original” composition.

    “Ghostwriter also publicized that Drake and The Weeknd were not involved in the making of the song, which could protect them from a ‘passing off’ claim, where profits are generated as consumers are misled into believing the song is actually a Drake-Weeknd collaboration,” she said in an email to CNN.

    However, Benoualid added, machine learning and generative AI programs may also be found to infringe copyright in existing works, either by making copies of those works to train the AI or by generating outputs that are substantially similar to those existing works. “Major labels would undoubtedly, and have already begun to, argue that their copyrights (and their artists’ intellectual property rights) are being infringed,” she said.

    Michael Nash, an executive VP at Universal Music Group, recently wrote in an op-ed that AI music is “diluting the market, making original creations harder to find, and violating artists’ legal rights to compensation from their work.”

    No regulations currently dictate what data AI can and cannot train on. But last month, in response to individuals seeking copyright protection for AI-generated works, the US Copyright Office released new guidance on how to register literary, musical, and artistic works made with AI.

    Copyright will be determined on a case-by-case basis, the guidance says, based on how the AI tool operates and how it was used to create the final piece or work. The US Copyright Office also announced that it will seek public input on how the law should apply to copyrighted works that AI trains on, and how the office should treat those works.

    “AI and copyright law and the rights of musicians and labels have crashed into one another (once again), and it will take time for the dust to settle,” Benoualid said. “The landscape is anything but clear at the moment.”

    Inouye said that if AI-generated content becomes associated with famous individuals in a negative way, that could be grounds for a lawsuit, not only to take the content down but also to force its creators to cease operations and potentially to seek damages.

    “On the flip side, if the content were to be popular and the creator were to make revenue off of the artists’ image or likeness then again the artists could similarly request the content to be taken down and potentially sue for any monetary gains,” he said.

    But for now, concerned parties may be forced to play whack-a-mole. While services like Spotify pulled “Heart on my Sleeve,” versions of it appeared to continue circulating as of Tuesday on other online platforms.

    Even a song made with artificial intelligence may find real staying power online.

    – CNN’s Vanessa Yurkevich contributed to this report.


  • US senator introduces bill to create a federal agency to regulate AI | CNN Business



    Washington
    CNN
     — 

    Days after OpenAI CEO Sam Altman testified in front of Congress and proposed creating a new federal agency to regulate artificial intelligence, a US senator has introduced a bill to do just that.

    On Thursday, Colorado Democratic Sen. Michael Bennet unveiled an updated version of legislation he introduced last year that would establish a Federal Digital Platform Commission.

    The updated bill, which was reviewed by CNN, makes numerous changes to more explicitly cover AI products, including by amending the definition of a digital platform to include companies that offer “content primarily generated by algorithmic processes.”

    “There’s no reason that the biggest tech companies on Earth should face less regulation than Colorado’s small businesses – especially as we see technology corrode our democracy and harm our kids’ mental health with virtually no oversight,” Bennet said in a statement. “Technology is moving quicker than Congress could ever hope to keep up with. We need an expert federal agency that can stand up for the American people and ensure AI tools and digital platforms operate in the public interest.”

    The revised bill expands on the definition of an algorithmic process, clarifying that the proposed commission would have jurisdiction over the use of personal data to generate content or to make a decision — two key applications associated with generative AI, the technology behind popular tools such as OpenAI’s viral chatbot, ChatGPT.

    And for the most significant platforms — companies the bill calls “systemically important” — the bill would create requirements for algorithmic audits and public risk assessments of the harms their tools could cause.

    The bill retains existing language mandating that the commission ensure platform algorithms are “fair, transparent, and safe.” And under the bill, the commission would continue to have broad oversight authority over social media sites, search engines and other online platforms.

    But the added emphasis on AI highlights how Congress is rapidly gearing up for policymaking on a cutting-edge technology it is scrambling to understand. The debate over whether the US government should establish a separate federal agency to police AI tools may become a significant focus of those efforts following Altman’s testimony this week.

    Altman suggested in a Senate hearing on Tuesday that such an agency could restrict how AI is developed through licenses or credentialing for AI companies. Some lawmakers appeared receptive to the idea, with Louisiana Republican Sen. John Kennedy even asking Altman whether he would be open to serving as its chair.

    “I love my current job,” Altman demurred, to laughter from the audience.

    Thursday’s bill does not explicitly provide for such a licensing program, though it directs the would-be commission to design rules appropriate for overseeing the industry, according to a Bennet aide. Bennet’s office did not consult with OpenAI on either the original bill or Thursday’s revised version.

    But even as some lawmakers have embraced the concept of a specialized regulator for internet companies — which could conflict with existing cops on the beat at agencies including the Justice Department and the Federal Trade Commission — others have warned of the potential risks of creating a whole new bureaucracy.

    Gary Marcus, a New York University professor and self-described critic of AI “hype,” told lawmakers at Tuesday’s hearing that a separate agency could fall victim to “regulatory capture,” a term that describes when industries gain dominating influence over the government agencies created to hold them accountable.

    Connecticut Democratic Sen. Richard Blumenthal, a former state attorney general who has prosecuted consumer protection cases, said no agency can be effective without proper support.

    “I’ve been doing this stuff for a while,” Blumenthal said. “You can create 10 new agencies, but if you don’t give them the resources — and I’m not just talking about dollars, I’m talking about scientific expertise — [industry] will run circles around them.”


  • Europe is leading the race to regulate AI. Here’s what you need to know | CNN Business



    London
    CNN
     — 

    The European Union took a major step Wednesday toward setting rules — the first in the world — on how companies can use artificial intelligence.

    It’s a bold move that Brussels hopes will pave the way for global standards for a technology used in everything from chatbots such as OpenAI’s ChatGPT to surgical procedures and fraud detection at banks.

    “We have made history today,” Brando Benifei, a member of the European Parliament working on the EU AI Act, told journalists.

    Lawmakers have agreed a draft version of the Act, which will now be negotiated with the Council of the European Union and EU member states before becoming law.

    “While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” Benifei added.

    Hundreds of top AI scientists and researchers warned last month that the technology posed an extinction risk to humanity, and several prominent figures — including Microsoft President Brad Smith and OpenAI CEO Sam Altman — have called for greater regulation of the technology.

    At the Yale CEO Summit this week, more than 40% of business leaders — including Walmart chief Doug McMillon and Coca-Cola (KO) CEO James Quincey — said AI had the potential to destroy humanity five to 10 years from now.

    Against that backdrop, the EU AI Act seeks to “promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects.”

    Here are the key takeaways.

    Once approved, the Act will apply to anyone who develops and deploys AI systems in the EU, including companies located outside the bloc.

    The extent of regulation depends on the risks created by a particular application, from minimal to “unacceptable.”

    Systems that fall into the latter category are banned outright. These include real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those in China, which assign people a “health score” based on their behavior.

    The legislation also sets tight restrictions on “high-risk” AI applications, which are those that threaten “significant harm to people’s health, safety, fundamental rights or the environment.”

    These include systems used to influence voters in an election, as well as social media platforms with more than 45 million users that recommend content to their users — a list that would include Facebook, Twitter and Instagram.

    The Act also outlines transparency requirements for AI systems.

    For instance, systems such as ChatGPT would have to disclose that their content was AI-generated, distinguish deep-fake images from real ones and provide safeguards against the generation of illegal content.

    Detailed summaries of the copyrighted data used to train these AI systems would also have to be published.

    AI systems with minimal or no risk, such as spam filters, fall largely outside of the rules.
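    To make the tiering concrete, here is a toy sketch of the risk-based structure described above. The category lists paraphrase this article’s examples rather than the legal text, and real classification under the Act would be far more nuanced.

```python
# Toy model of the draft EU AI Act's risk tiers as summarized above.
RISK_TIERS = {
    "unacceptable": {            # banned outright
        "real-time public facial recognition",
        "predictive policing",
        "social scoring",
    },
    "high": {                    # tight restrictions, audits, risk assessments
        "election voter influence",
        "large-platform content recommendation",
    },
    "limited": {                 # transparency duties, e.g. labeling AI output
        "general-purpose generative chatbot",
    },
    "minimal": {                 # largely outside the rules
        "spam filter",
    },
}

def tier_for(use_case: str) -> str:
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "unclassified"

print(tier_for("spam filter"))     # -> minimal
print(tier_for("social scoring"))  # -> unacceptable
```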

    Most AI systems will likely fall into the high-risk or prohibited categories, leaving their owners exposed to potentially enormous fines if they fall foul of the regulations, according to Racheal Muldoon, a barrister (litigator) at London law firm Maitland Chambers.

    Engaging in prohibited AI practices could lead to a fine of up to €40 million ($43 million) or an amount equal to up to 7% of a company’s worldwide annual turnover, whichever is higher.

    That goes much further than Europe’s signature data privacy law, the General Data Protection Regulation, under which Meta was hit with a €1.2 billion ($1.3 billion) fine last month. GDPR’s standard tier sets fines of up to €10 million ($10.8 million), or up to 2% of a firm’s global turnover, rising to €20 million or 4% for the most serious violations.
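    As a worked example of the “whichever is higher” rule cited above (figures in euros; turnover meaning annual worldwide revenue):

```python
def max_ai_act_fine(turnover: float) -> float:
    """Ceiling for prohibited-practice fines under the draft Act:
    EUR 40M or 7% of worldwide annual turnover, whichever is higher."""
    return max(40_000_000, 0.07 * turnover)

# For a firm with EUR 2 billion in turnover, 7% (EUR 140M) exceeds the 40M floor:
print(f"EUR {max_ai_act_fine(2_000_000_000):,.0f}")  # -> EUR 140,000,000
```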

    Fines under the AI Act serve as a “war cry from the legislators to say, ‘take this seriously’,” Muldoon said.

    At the same time, penalties would be “proportionate” and consider the market position of small-scale providers, suggesting there could be some leniency for start-ups.

    The Act also requires EU member states to establish at least one regulatory “sandbox” to test AI systems before they are deployed.

    “The one thing that we wanted to achieve with this text is balance,” Dragoș Tudorache, a member of the European Parliament, told journalists. The Act protects citizens while also “promoting innovation, not hindering creativity, and deployment and development of AI in Europe,” he added.

    The Act gives citizens the right to file complaints against providers of AI systems and makes a provision for an EU AI Office to monitor enforcement of the legislation. It also requires member states to designate national supervisory authorities for AI.

    Microsoft (MSFT) — which, together with Google, is at the forefront of AI development globally — welcomed progress on the Act but said it looked forward to “further refinement.”

    “We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI,” a Microsoft spokesperson said in a statement.

    IBM (IBM), meanwhile, called on EU policymakers to take a “risk-based approach” and suggested four “key improvements” to the draft Act, including further clarity around high-risk AI “so that only truly high-risk use cases are captured.”

    The Act may not come into force until 2026, according to Muldoon, who said revisions were likely, given how rapidly AI was advancing. The legislation has already gone through several updates since drafting began in 2021.

    “The law will expand in scope as the technology develops,” Muldoon said.


  • With the rise of AI, social media platforms could face perfect storm of misinformation in 2024 | CNN Business



    New York
    CNN
     — 

    Last month, a video posted to Twitter by Florida Gov. Ron DeSantis’ presidential campaign used images that appeared to be generated by artificial intelligence showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s top infectious disease specialist, were tricky to spot: they were shown alongside real images of the pair and with a text overlay saying, “real life Trump.”

    As the images began spreading, fact-checking organizations and sharp-eyed users quickly flagged them as fake. But Twitter, which has slashed much of its staff in recent months under new ownership, did not remove the video. Instead, it eventually added a community note — a contributor-led feature to highlight misinformation on the social media platform — to the post, alerting the site’s users that in the video “3 still shots showing Trump embracing Fauci are AI generated images.”

    Experts in digital information integrity say it’s just the start of AI-generated content being used ahead of the 2024 US Presidential election in ways that could confuse or mislead voters.

    A new crop of AI tools offers the ability to generate compelling text and realistic images — and, increasingly, video and audio. Experts, and even some executives overseeing AI companies, say these tools risk spreading false information to mislead voters, including ahead of the 2024 US election.

    “The campaigns are starting to ramp up, the elections are coming fast and the technology is improving fast,” said Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public. “We’ve already seen evidence of the impact that AI can have.”

    Social media companies bear significant responsibility for addressing such risks, experts say, as the platforms where billions of people go for information and where bad actors often go to spread false claims. But they now face a perfect storm of factors that could make it harder than ever to keep up with the next wave of election misinformation.

    Several major social networks have pulled back on their enforcement of some election-related misinformation and undergone significant layoffs over the past six months, which in some cases hit election integrity, safety and responsible AI teams. Current and former US officials have also raised alarms that a federal judge’s decision earlier this month to limit how some US agencies communicate with social media companies could have a “chilling effect” on how the federal government and states address election-related disinformation. (On Friday, an appeals court temporarily blocked the order.)

    Meanwhile, AI is evolving at a rapid pace. And despite calls from industry players and others, US lawmakers and regulators have yet to implement real guardrails for AI technologies.

    “I’m not confident in even their ability to deal with the old types of threats,” said David Evan Harris, an AI researcher and ethics adviser to the Psychology of Technology Institute, who previously worked on responsible AI at Facebook-parent Meta. “And now there are new threats.”

    The major platforms told CNN they have existing policies and practices in place related to misinformation and, in some cases, specifically targeting “synthetic” or computer-generated content, that they say will help them identify and address any AI-generated misinformation. None of the companies agreed to make anyone working on generative AI detection efforts available for an interview.

    The platforms “haven’t been ready in the past, and there’s absolutely no reason for us to believe that they’re going to be ready now,” Bhaskar Chakravorti, dean of global business at The Fletcher School at Tufts University, told CNN.

    Misleading content, especially related to elections, is nothing new. But with the help of artificial intelligence, it’s now possible for anyone to quickly, easily and cheaply create huge quantities of fake content.

    And given AI technology’s rapid improvement over the past year, fake images, text, audio and videos are likely to be even harder to discern by the time the US election rolls around next year.

    “We’ve still got more than a year to go until the election. These tools are going to get better and, in the hands of sophisticated users, they can be very powerful,” said Harris. He added that the kinds of misinformation and election meddling that took place on social media in 2016 and 2020 will likely only be exacerbated by AI.

    The various forms of AI-generated content could be used together to make false information more believable — for example, an AI-written fake article accompanied by an AI-generated photo purporting to show what happened in the report, said Margaret Mitchell, researcher and chief ethics scientist at open-source AI firm Hugging Face.

    AI tools could be useful for anyone wanting to mislead, but especially for organized groups and foreign adversaries incentivized to meddle in US elections. Massive foreign troll farms have been hired to attempt to influence previous elections in the United States and elsewhere, but “now, one person could be in charge of deploying thousands and thousands of generative AI bots” that work to pump out content across social media to mislead voters, said Mitchell, who previously worked at Google.

    OpenAI, the maker of the popular AI chatbot ChatGPT, issued a stark warning about the risk of AI-generated misinformation in a recent research paper. An abundance of false information from AI systems, whether intentional or created by biases or “hallucinations” from the systems, has “the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction,” it said.

    Examples of AI-generated misinformation have already begun to crop up. In May, several Twitter accounts, including some that had paid for a blue “verification” checkmark, shared fake images purporting to show an explosion near the Pentagon. While the images were quickly debunked, their circulation was briefly followed by a dip in the stock market. Twitter suspended at least one of the accounts responsible for spreading the images. Facebook labeled posts about the images as “false information,” along with a fact check.

    A month earlier, the Republican National Committee released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington D.C. to whom CNN showed the video did not spot it on their first watch.

    Dozens of Democratic lawmakers last week sent a letter calling on the Federal Election Commission to consider cracking down on the use of artificial intelligence technology in political advertisements, warning that deceptive ads could harm the integrity of next year’s elections.

    Ahead of 2024, many of the platforms have said that they will be rolling out plans to protect the election’s integrity, including from the threat of AI-generated content.

    TikTok earlier this year rolled out a policy stipulating that “synthetic” or manipulated media created by AI must be clearly labeled. That requirement sits alongside its civic integrity policy, which prohibits misleading information about electoral processes, and its general misinformation policy, which prohibits false or misleading claims that could cause “significant harm” to individuals or society.

    YouTube has a manipulated media policy that prohibits content that has been “manipulated or doctored” in a way that could mislead users and “may pose a serious risk of egregious harm.” The platform also has policies against content that could mislead users about how and when to vote, false claims that could discourage voting and content that “encourages others to interfere with democratic processes.” YouTube also says it prominently surfaces reliable news and information about elections on its platform, and that its election-focused team includes members of its trust and safety, product and “Intelligence Desk” teams.

    “Technically manipulated content, including election content, that misleads users and may pose a serious risk of egregious harm is not allowed on YouTube,” YouTube spokesperson Ivy Choi said in a statement. “We enforce our manipulated content policy using machine learning and human review, and continue to improve on this work to stay ahead of potential threats.”

    A Meta spokesperson told CNN that the company’s policies apply to all content on its platforms, including AI-generated content. That includes its misinformation policy, which stipulates that the platform removes false claims that could “directly contribute to interference with the functioning of political processes and certain highly deceptive manipulated media,” and may reduce the spread of other misleading claims. Meta also prohibits ads featuring content that has been debunked by its network of third-party fact checkers.

    TikTok and Meta have also joined a group of tech industry partners coordinated by the non-profit Partnership on AI dedicated to developing a framework for responsible use of synthetic media.

    Asked for comment on this story, Twitter responded with an auto-reply of a poop emoji.

    Twitter has rolled back much of its content moderation in the months since billionaire Elon Musk took over the platform, and instead has leaned more heavily on its “Community Notes” feature which allows users to critique the accuracy of and add context to other people’s posts. On its website, Twitter also says it has a “synthetic media” policy under which it may label or remove “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

    Still, as is often the case with social media, the challenge is likely to be less a matter of having the policies in place than enforcing them. The platforms largely use a mix of human and automated review to identify misinformation and manipulated media. The companies declined to provide additional details about their AI detection processes, including how many staffers are involved in such efforts.

    But AI experts say they’re worried that the platforms’ detection systems for computer-generated content may have a hard time keeping up with the technology’s advancements. Even some of the companies developing new generative AI tools have struggled to build services that can accurately detect when something is AI-generated.
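    The companies have not described how those detection systems work, so the snippet below shows only the generic triage pattern alluded to above: automated scoring first, with uncertain cases escalated to human moderators. The detector, thresholds and action names are placeholders, not any platform’s real pipeline.

```python
from typing import Callable

def triage(post_text: str,
           score_fn: Callable[[str], float],
           block_at: float = 0.95,
           review_at: float = 0.60) -> str:
    """Route a post by a model's confidence (0.0-1.0) that it is
    AI-generated or misleading."""
    score = score_fn(post_text)
    if score >= block_at:
        return "auto-label-or-remove"   # high confidence: act automatically
    if score >= review_at:
        return "human-review-queue"     # uncertain: escalate to moderators
    return "no-action"

# Example with a trivial stand-in detector:
print(triage("totally real photo of an explosion", lambda text: 0.72))
# -> human-review-queue
```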

    Some experts are urging all the social platforms to implement policies requiring that AI-generated or manipulated content be clearly labeled, and calling on regulators and lawmakers to establish guardrails around AI and hold tech companies accountable for the spread of false claims.

    One thing is clear: the stakes for success are high. Experts say that not only does AI-generated content create the risk of internet users being misled by false information; it could also make it harder for them to trust real information about everything from voting to crisis situations.

    “We know that we’re going into a very scary situation where it’s going to be very unclear what has happened and what has not actually happened,” said Mitchell. “It completely destroys the foundation of reality when it’s a question whether or not the content you’re seeing is real.”


  • Snapchat rolls out chatbot powered by ChatGPT to all users | CNN Business




    CNN
     — 

    Snapchat is about to give new meaning to the “chat” part of its name.

    Snap, the company behind Snapchat, announced on Wednesday that its customizable My AI chatbot is now accessible to all users within the app. The feature, which is powered by the viral AI chatbot ChatGPT, was previously only available to paying Snapchat+ subscribers.

    The tool offers recommendations, answers questions, helps users make plans and can write a haiku in seconds, according to the company. It can be brought into a conversation with friends by mentioning “@MyAI.” Users can also give it a name and design a custom Bitmoji avatar for it, further personalizing the experience.

    The move comes more than a month after ChatGPT creator OpenAI opened up access to its chatbot to third-party businesses. Snap, Instacart and tutor app Quizlet were among the early partners experimenting with adding ChatGPT.

    Since its public release in November 2022, ChatGPT has stunned many users with its impressive ability to generate original essays, stories and song lyrics in response to user prompts. The initial wave of attention on the tool helped renew an arms race among tech companies to develop and deploy similar AI tools in their products.

    The initial batch of companies tapping into ChatGPT’s functionality each have slightly different visions for how to incorporate it. Taken together, however, these services may test just how useful AI chatbots can really be in our everyday lives and how much people want to interact with them for customer service and other uses across their favorite apps.

    Adding ChatGPT features also may come with some risks. The tool, which is trained on vast troves of data online, can spread inaccurate information and has the potential to respond to users in ways they might find inappropriate.

    In a blog post on Wednesday, Snap acknowledged “My AI is far from perfect but we’ve made a lot of progress.”

    It said, for example, that about 99.5% of My AI responses conform to its community guidelines. Snap said it has made changes to “help protect against responses that could be inappropriate or harmful.” The company also said it has added moderation technology and brought the new feature into its in-app parental tools.
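    Snap has not published its implementation, but the general pattern for a ChatGPT-backed in-app assistant with a moderation pre-check, using OpenAI’s public Python SDK, might look like the sketch below. The model choice, persona prompt and refusal message are illustrative assumptions, not Snap’s actual configuration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def assistant_reply(user_message: str) -> str:
    # Screen the incoming message with OpenAI's moderation endpoint first.
    moderation = client.moderations.create(input=user_message)
    if moderation.results[0].flagged:
        return "Sorry, I can't help with that."  # illustrative refusal text
    # Then answer with a persona-style system prompt.
    chat = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; Snap hasn't named a model
        messages=[
            {"role": "system", "content": "You are a friendly in-app assistant."},
            {"role": "user", "content": user_message},
        ],
    )
    return chat.choices[0].message.content
```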

    “We will continue to use these early learnings to make AI a more safe, fun, and useful experience, and we’re eager to hear your thoughts,” the company said.


  • How the CEO behind ChatGPT won over Congress | CNN Business



    Washington
    CNN
     — 

    OpenAI CEO Sam Altman seems to have achieved in a matter of hours what other tech execs have been struggling to do for years: He charmed the socks off Congress.

    Despite wide-ranging concerns that artificial intelligence tools like OpenAI’s ChatGPT could disrupt democracy, national security, and the economy, Altman’s appearance Tuesday before a Senate subcommittee went so smoothly that viewers could have been forgiven for thinking the year was closer to 2013 than 2023.

    It was a pivotal moment for the AI industry. Altman’s testimony on Tuesday alongside Christina Montgomery, IBM’s chief privacy officer, promised to set the tone for how Washington regulates a technology that many fear could eliminate jobs or destabilize elections.

    But where lawmakers could have followed a familiar pattern, blasting the tech industry with hostile questioning and leveling withering allegations of reckless innovation, members of the Senate Judiciary Committee instead heaped praise on the companies — and often, on Altman in particular.

    The difference seemed to come down to OpenAI calling for proactive government regulation — and persuading lawmakers it was serious. Unlike the long list of social media hearings in recent years, this AI hearing came earlier in OpenAI’s lifecycle and, crucially, before the company or its technology had suffered any high-profile mishaps.

    Altman, more than any other figure in tech, has emerged as the face of a new crop of powerful and disruptive AI tools that can generate compelling written work and images in response to user prompts. Much of the federal government is now racing to figure out how to regulate the cutting-edge technology.

    But after his performance on Tuesday, the CEO whose company helped spark the new AI arms race may have maneuvered himself into a privileged position of influence over the rules that may soon govern the tools he’s developing.

    Altman’s easy-going, plain-spoken demeanor helped disarm skeptical lawmakers and appeared to win over Democrats and Republicans alike. His approach contrasted with the wooden, lawyerly performances that have afflicted some other tech CEOs in the past during their time in the hotseat.

    “I sense there is a willingness to participate here that is genuine and authentic,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the committee’s technology panel.

    New Jersey Democratic Sen. Cory Booker, adopting an unusual level of familiarity with a witness, found himself repeatedly addressing Altman as “Sam,” even as he referred to other panelists by their last names.

    Even Altman’s fellow witnesses couldn’t resist gushing about his style.

    “His sincerity in talking about those [AI] fears is very apparent, physically, in a way that just doesn’t communicate on the television screen,” Gary Marcus, a former New York University professor and a self-described critic of AI “hype,” told lawmakers.

    With a relaxed yet serious tone, Altman did not deflect or shy away from lawmakers’ concerns. He agreed that large-scale manipulation and deception using AI tools are among the technology’s biggest potential flaws. And he validated fears about AI’s impact on workers, acknowledging that it may “entirely automate away some jobs.”

    “If this technology goes wrong, it can go quite wrong, and we want to be vocal about that,” Altman said. “We want to work with the government to prevent that from happening.”

    Altman’s candor and openness have captivated many in Washington.

    On Monday evening, Altman spoke to a dinner audience of roughly 60 House lawmakers from both parties. One person in the room, speaking on condition of anonymity to discuss a closed-door meeting, described members of Congress as “riveted” by the conversation, which also saw Altman demonstrating ChatGPT’s capabilities “to much amusement” from the audience.

    Lawmakers have spent years railing against social media companies, attacking them for everything from their content moderation decisions to their economic dominance. On Tuesday, they seemed ready — or even relieved — to be dealing with another area of the technology industry.

    Whether this time is truly different remains unclear, though. The AI industry’s biggest players and aspirants include some of the same tech giants Congress has sharply criticized, including Google and Meta. OpenAI is receiving billions of dollars of investment from Microsoft in a multi-year partnership. And with his remarks on Tuesday, Altman appeared to draw from a familiar playbook for Silicon Valley: Referring to technology as merely a neutral tool, acknowledging his industry’s imperfections and inviting regulation.

    Some AI ethicists and experts questioned the value of asking a leading industry spokesperson how he would like to be regulated. Marcus, the New York University professor, cautioned that creating a new federal agency to police AI could lead to “regulatory capture” by the tech industry, but the warning could have applied just as easily to Congress itself.

    “It seems very very bad that ahead of a hearing meant to inform how this sector gets regulated, the CEO of one of the corporations that would be subject to that regulation gets to present a magic show to the regulators,” Emily Bender, a professor of computational linguistics at the University of Washington, said of Altman’s dinner with House lawmakers.

    She added: “Politicians, like journalists, must resist the urge to be impressed.”

    After years of fidgety evasiveness from other tech CEOs, however, lawmakers this week seemed easily wowed by Altman and his seemingly straight-shooting answers.

    Louisiana Republican Sen. John Kennedy, after expressing frustration with IBM’s Montgomery for providing a nuanced answer he couldn’t comprehend, visibly brightened when Altman quickly and smoothly outlined his regulatory proposals in a bulleted list. Kennedy began joking with Altman and even asked whether Altman might consider heading up a hypothetical federal agency charged with regulating the AI industry.

    “I love my current job,” Altman deadpanned, to audience laughter, before offering to send Kennedy’s office some potential candidates.

    Compounding lawmakers’ attraction to Altman is a belief on Capitol Hill that Congress erred in extending broad liability protections to online platforms at the dawn of the internet. That decision, which allowed for an explosion of blogs, e-commerce sites, streaming media and more, has become an object of regret for many lawmakers in the face of alleged mental health harms stemming from social media.

    “I don’t want to repeat that mistake again,” said Judiciary Committee Chairman Dick Durbin.

    Here too, Altman deftly seized an opportunity to curry favor with lawmakers by emphasizing distinctions between his industry and the social media industry.

    “We try to design systems that do not maximize for engagement,” Altman said, alluding to the common criticism that social media algorithms tend to prioritize outrage and negativity to boost usage. “We’re not an advertising-based model; we’re not trying to get people to use it more and more, and I think that’s a different shape than ad-supported social media.”

    In providing simple-sounding solutions with a smile, Altman is doing much more than shaping policy: He is offering members of Congress a shot at redemption, one they seem grateful to accept. Despite the many pitfalls of AI they identified on Tuesday, lawmakers appeared to thoroughly welcome Altman as a partner, not a potential adversary needing oversight and scrutiny.

    “We need to be mindful,” Blumenthal said, “of ways that rules can enable the big guys to get bigger and exclude innovation, and competition, and responsible good guys such as our representative in this industry right now.”


  • Forget about the AI apocalypse. The real dangers are already here | CNN Business




    CNN
     — 

    Two weeks after members of Congress questioned OpenAI CEO Sam Altman about the potential for artificial intelligence tools to spread misinformation, disrupt elections and displace jobs, he and others in the industry went public with a much more frightening possibility: an AI apocalypse.

    Altman, whose company is behind the viral chatbot tool ChatGPT, joined Google DeepMind CEO Demis Hassabis, Microsoft’s CTO Kevin Scott and dozens of other AI researchers and business leaders in signing a one-sentence letter last month stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The stark warning was widely covered in the press, with some suggesting it showed the need to take such apocalyptic scenarios more seriously. But it also highlights an important dynamic in Silicon Valley right now: Top executives at some of the biggest tech companies are simultaneously telling the public that AI has the potential to bring about human extinction while also racing to invest in and deploy this technology into products that reach billions of people.

    The dynamic has played out elsewhere recently, too. Tesla CEO Elon Musk, for example, said in a TV interview in April that AI could lead to “civilization destruction.” But he still remains deeply involved in the technology through investments across his sprawling business empire and has said he wants to create a rival to the AI offerings by Microsoft and Google.

    Some AI industry experts say that focusing attention on far-off scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities, including spreading misinformation, perpetuating biases and enabling discrimination in various services.

    “Motives seemed to be mixed,” Gary Marcus, an AI researcher and New York University professor emeritus who testified before lawmakers alongside Altman last month, told CNN. Some of the execs are likely “genuinely worried about what they have unleashed,” he said, but others may be trying to focus attention on “abstract possibilities to detract from the more immediate possibilities.”

    Representatives for Google and OpenAI did not immediately respond to a request for comment. In a statement, a Microsoft spokesperson said: “We are optimistic about the future of AI, and we think AI advances will solve many more challenges than they present, but we have also been consistent in our belief that when you create technologies that can change the world, you must also ensure that the technology is used responsibly.”

    For Marcus, a self-described critic of AI hype, “the biggest immediate threat from AI is the threat to democracy from the wholesale production of compelling misinformation.”

    Generative AI tools like OpenAI’s ChatGPT and Dall-E are trained on vast troves of data online to create compelling written work and images in response to user prompts. With these tools, for example, one could quickly mimic the style or likeness of public figures in an attempt to create disinformation campaigns.

    In his testimony before Congress, Altman also said the potential for AI to be used to manipulate voters and to target disinformation was among “my areas of greatest concern.”

    Even in more ordinary use cases, however, there are concerns. The same tools have been called out for offering wrong answers to user prompts, outright “hallucinating” responses and potentially perpetuating racial and gender biases.

    [Photo: Gary Marcus, professor emeritus at New York University (right), listens as OpenAI CEO and co-founder Sam Altman speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, on May 16, 2023.]

    Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, told CNN that some companies may want to divert attention from the bias baked into their data and also from concerning claims about how their systems are trained.

    Bender cited intellectual property concerns with some of the data these systems are trained on as well as allegations of companies outsourcing the work of going through some of the worst parts of the training data to low-paid workers abroad.

    “If the public and the regulators can be focused on these imaginary science fiction scenarios, then maybe these companies can get away with the data theft and exploitative practices for longer,” Bender told CNN.

    Regulators may be the real intended audience for the tech industry’s doomsday messaging.

    As Bender puts it, execs are essentially saying: “‘This stuff is very, very dangerous, and we’re the only ones who understand how to rein it in.’”

    Judging from Altman’s appearance before Congress, this strategy might work. Altman appeared to win over Washington by echoing lawmakers’ concerns about AI — a technology that many in Congress are still trying to understand — and offering suggestions for how to address it.

    This approach to regulation would be “hugely problematic,” Bender said. It could give the industry influence over the regulators tasked with holding it accountable and also leave out the voices and input of other people and communities experiencing negative impacts of this technology.

    “If the regulators kind of orient towards the people who are building and selling the technology as the only ones who could possibly understand this, and therefore can possibly inform how regulation should work, we’re really going to miss out,” Bender said.

    Bender said she tries, at every opportunity, to tell people “these things seem much smarter than they are.” As she put it, this is because “we are as smart as we are” and the way that we make sense of language, including responses from AI, “is actually by imagining a mind behind it.”

    Ultimately, Bender put forward a simple question for the tech industry on AI: “If they honestly believe that this could be bringing about human extinction, then why not just stop?”
