ReportWire

Tag: Artificial Intelligence

  • Universal Music Group calls AI music a ‘fraud,’ wants it banned from streaming platforms. Experts say it’s not that easy | CNN Business

    New York (CNN) —

    Universal Music Group — the music company representing superstars including Sting, The Weeknd, Nicki Minaj and Ariana Grande — has a new Goliath to contend with: artificial intelligence.

    The music group sent urgent letters in April to streaming platforms, including Spotify (SPOT) and Apple Music, asking them to block artificial intelligence platforms from training on the melodies and lyrics of its copyrighted songs.

    The company has “a moral and commercial responsibility to our artists to work to prevent the unauthorized use of their music and to stop platforms from ingesting content that violates the rights of artists and other creators,” a spokesperson from Universal Music Group, or UMG, told CNN. “We expect our platform partners will want to prevent their services from being used in ways that harm artists.”

    The move by UMG, first reported by the Financial Times, aims to stop artificial intelligence from becoming an existential threat to the industry.

    Artificial intelligence, and specifically AI music, learns either by training on existing works found on the internet or from a library of music supplied to the AI by humans.

    UMG says it is not against the technology itself, but rather AI so advanced it can recreate melodies and even musicians’ voices in seconds. That could threaten UMG’s deep library of music and artists, which generates billions of dollars in revenue.

    “UMG’s success has been, in part, due to embracing new technology and putting it to work for our artists — as we have been doing with our own innovation around AI for some time already,” UMG said in a statement Monday. “However, the training of generative AI using our artists’ music … begs the question as to which side of history all stakeholders in the music ecosystem want to be on.”

    The company said AI that uses artists’ music violates UMG’s agreements and copyright law. UMG has been sending requests to streamers asking them to take down AI-generated songs.

    “I understand the intent behind the move, but I’m not sure how effective this will be as AI services will likely still be able to access the copyrighted material one way or another,” said Karl Fowlkes, an entertainment and business attorney at The Fowlkes Firm.

    No regulations currently dictate what AI can and cannot train on. But last month, in response to individuals seeking copyright for AI-generated works, the US Copyright Office released new guidance on how to register literary, musical, and artistic works made with AI.

    “In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of ‘mechanical reproduction’ or instead of an author’s ‘own original mental conception, to which [the author] gave visible form,’” the new guidance says.

    The copyright will be determined on a case-by-case basis, the guidance continued, based on how the AI tool operates and how it was used to create the final piece or work.

    The US Copyright Office announced it will also seek public input on how the law should apply to copyrighted works that AI trains on, and how the office should treat those works.

    “AI companies using copyrighted works to train their models to create similar works is exactly the type of behavior the copyright office and courts should explicitly ban. Original art is meant to be protected by law, not works created by machines that used the original art to create new work,” said Fowlkes.

    But according to AI experts, it’s not that simple.

    “You can flag your site not to be searched. But that’s a request — you can’t prevent it. You can just request that someone not do it,” said Shelly Palmer, Professor of Advanced Media at Syracuse University.

    For example, a website can apply a robots.txt file, which works like a guardrail to control which URLs search engine crawlers can access on a given site, according to Google. But it is not a full-stop, keep-out option.
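    As an illustration of why experts call robots.txt a request rather than a barrier, here is a minimal sketch of how a compliant crawler checks such a rule before fetching a page, using Python’s standard-library urllib.robotparser. The site domain and crawler names are hypothetical.

        # Minimal sketch of the voluntary robots.txt mechanism described above.
        # The domain and bot names are invented for illustration;
        # urllib.robotparser is part of Python's standard library.
        from urllib.robotparser import RobotFileParser

        # Rules a site might publish at https://example-label.com/robots.txt
        rules = [
            "User-agent: MusicTrainingBot",
            "Disallow: /catalog/",
        ]

        parser = RobotFileParser()
        parser.parse(rules)

        # A compliant crawler asks before fetching; nothing technically stops
        # a non-compliant crawler from ignoring the file altogether.
        print(parser.can_fetch("MusicTrainingBot", "https://example-label.com/catalog/track1"))  # False
        print(parser.can_fetch("SomeOtherBot", "https://example-label.com/catalog/track1"))      # True

    The second check returns True because the rule names only one crawler; blocking everything would require a wildcard User-agent entry, and even then compliance remains voluntary.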

    Grammy-winning DJ and producer David Guetta proved in February just how easy it is to create new music using AI. Using ChatGPT for lyrics and Uberduck for vocals, Guetta was able to create a new song in an hour.

    The result was a rap with a voice that sounded exactly like Eminem. He played the song at one of his shows in February, but said he would never release it commercially.

    “What I think is very interesting about AI is that it’s raising a question of what is it to be an artist,” Guetta told CNN last month.

    Guetta believes AI is going to have a significant impact on the music industry, so he’s embracing it instead of fighting it. But he admits there are still questions about copyright.

    “That is an ethical problem that needs to be addressed because it sounds crazy to me that today I can type lyrics and it’s going to sound like Drake is rapping it, or Eminem,” he said.

    And that is exactly what UMG wants to avoid. The music group likens AI music to “deep fakes, fraud, and denying artists their due compensation.”

    “These instances demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists,” the UMG statement said.

    Music streamers Spotify, Apple Music and Pandora did not return requests for comment.

  • Snapchat’s new AI chatbot is already raising alarms among teens and parents | CNN Business

    (CNN) —

    Within hours of Snapchat rolling out its My AI chatbot to all users last week, Lyndsi Lee, a mother from East Prairie, Missouri, told her 13-year-old daughter to stay away from the feature.

    “It’s a temporary solution until I know more about it and can set some healthy boundaries and guidelines,” said Lee, who works at a software company. She worries about how My AI presents itself to young users like her daughter on Snapchat.

    The feature is powered by the viral AI chatbot tool ChatGPT – and like ChatGPT, it can offer recommendations, answer questions and converse with users. But Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it, and bring it into conversations with friends.

    The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear you’re talking to a computer.

    “I don’t think I’m prepared to know how to teach my kid how to emotionally separate humans and machines when they essentially look the same from her point of view,” Lee said. “I just think there is a really clear line [Snapchat] is crossing.”

    The new tool is facing backlash not only from parents but also from some Snapchat users who are bombarding the app with bad reviews in the app store and criticisms on social media over privacy concerns, “creepy” exchanges and an inability to remove the feature from their chat feed unless they pay for a premium subscription.

    While some may find value in the tool, the mixed reactions hint at the risks companies face in rolling out new generative AI technology to their products, and particularly in products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow. Almost overnight, Snapchat has forced some families and lawmakers to reckon with questions that may have seemed theoretical only months ago.

    In a letter to the CEOs of Snap and other tech companies last month, weeks after My AI was released to Snap’s subscription customers, Democratic Sen. Michael Bennet raised concerns about the interactions the chatbot was having with younger users. In particular, he cited reports that it can provide kids with suggestions for how to lie to their parents.

    “These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 percent of American teenagers use,” Bennet wrote. “Although Snap concedes My AI is ‘experimental,’ it has nevertheless rushed to enroll American kids and adolescents in its social experiment.”

    In a blog post last week, the company said: “My AI is far from perfect but we’ve made a lot of progress.”

    In the days since its formal launch, Snapchat users have been vocal about their concerns. One user called his interaction “terrifying” after the chatbot, he said, lied about not knowing his location. After the user lightened the conversation, he said, the chatbot accurately revealed he lived in Colorado.

    In another TikTok video with more than 1.5 million views, a user named Ariel recorded a song with an intro, chorus and piano chords written by My AI about what it’s like to be a chatbot. When she sent the recorded song back, she said the chatbot denied its involvement with the reply: “I’m sorry, but as an AI language model, I don’t write songs.” Ariel called the exchange “creepy.”

    Other users shared concerns about how the tool understands, interacts with and collects information from photos. “I snapped a picture … and it said ‘nice shoes’ and asked who the people [were] in the photo,” a Snapchat user wrote on Facebook.

    Snapchat told CNN it continues to improve My AI based on community feedback and is working to establish more guardrails to keep its users safe. The company also said that similar to its other tools, users don’t have to interact with My AI if they don’t want to.

    It’s not possible to remove My AI from chat feeds, however, unless a user subscribes to its monthly premium service, Snapchat+. Some teens say they have opted to pay the $3.99 Snapchat+ fee to turn off the tool before promptly canceling the service.

    But not all users dislike the feature.

    One user wrote on Facebook that she’s been asking My AI for homework help. “It gets all of the questions right.” Another noted she’s leaned on it for comfort and advice. “I love my little pocket, bestie!” she wrote. “You can change the Bitmoji [avatar] for it and surprisingly it offers really great advice to some real life situations. … I love the support it gives.”

    ChatGPT, which is trained on vast troves of data online, has previously come under fire for spreading inaccurate information, responding to users in ways they might find inappropriate and enabling students to cheat. But Snapchat’s integration of the tool risks heightening some of these issues, and adding new ones.

    Alexandra Hamlet, a clinical psychologist in New York City, said the parents of some of her patients have expressed concern about how their teenager could interact with Snapchat’s tool. There is also concern about chatbots giving out advice on mental health, because AI tools can reinforce someone’s confirmation bias, making it easier for users to seek out interactions that confirm their unhelpful beliefs.

    “If a teen is in a negative mood and does not have the awareness or desire to feel better, they may seek out a conversation with a chatbot that they know will make them feel worse,” she said. “Over time, having interactions like these can erode a teen’s sense of worth, despite their knowing that they are really talking to a bot. In an emotional state of mind, it becomes less possible for an individual to consider this type of logic.”

    For now, the onus is on parents to start meaningful conversations with their teens about best practices for communicating with AI, especially as the tools start to show up in more popular apps and services.

    Sinead Bovell, the founder of WAYE, a startup that helps prepare youth for a future with advanced technologies, said parents need to make it very clear that “chatbots are not your friend.”

    “They’re also not your therapists or a trusted adviser, and anyone interacting with them needs to be very cautious, especially teenagers who may be more susceptible to believing what they say,” she said.

    “Parents should be talking to their kids now about how they shouldn’t share anything personal with a chatbot that they would with a friend – even though from a user design perspective, the chatbot exists in the same corner of Snapchat.”

    She added that federal regulation requiring companies to abide by specific protocols is also needed to keep up with the rapid pace of AI advancement.

  • US senator introduces bill to create a federal agency to regulate AI | CNN Business

    Washington (CNN) —

    Days after OpenAI CEO Sam Altman testified in front of Congress and proposed creating a new federal agency to regulate artificial intelligence, a US senator has introduced a bill to do just that.

    On Thursday, Colorado Democratic Sen. Michael Bennet unveiled an updated version of legislation he introduced last year that would establish a Federal Digital Platform Commission.

    The updated bill, which was reviewed by CNN, makes numerous changes to more explicitly cover AI products, including by amending the definition of a digital platform to include companies that offer “content primarily generated by algorithmic processes.”

    “There’s no reason that the biggest tech companies on Earth should face less regulation than Colorado’s small businesses – especially as we see technology corrode our democracy and harm our kids’ mental health with virtually no oversight,” Bennet said in a statement. “Technology is moving quicker than Congress could ever hope to keep up with. We need an expert federal agency that can stand up for the American people and ensure AI tools and digital platforms operate in the public interest.”

    The revised bill expands on the definition of an algorithmic process, clarifying that the proposed commission would have jurisdiction over the use of personal data to generate content or to make a decision — two key applications associated with generative AI, the technology behind popular tools such as OpenAI’s viral chatbot, ChatGPT.

    And for the most significant platforms — companies the bill calls “systemically important” — the bill would create requirements for algorithmic audits and public risk assessments of the harms their tools could cause.

    The bill retains existing language mandating that the commission ensure platform algorithms are “fair, transparent, and safe.” And under the bill, the commission would continue to have broad oversight authority over social media sites, search engines and other online platforms.

    But the added emphasis on AI highlights how Congress is rapidly gearing up for policymaking on a cutting-edge technology it is scrambling to understand. The debate over whether the US government should establish a separate federal agency to police AI tools may become a significant focus of those efforts following Altman’s testimony this week.

    Altman suggested in a Senate hearing on Tuesday that such an agency could restrict how AI is developed through licenses or credentialing for AI companies. Some lawmakers appeared receptive to the idea, with Louisiana Republican Sen. John Kennedy even asking Altman whether he would be open to serving as its chair.

    “I love my current job,” Altman demurred, to laughter from the audience.

    Thursday’s bill does not explicitly provide for such a licensing program, though it directs the would-be commission to design rules appropriate for overseeing the industry, according to a Bennet aide. Bennet’s office did not consult with OpenAI on either the original bill or Thursday’s revised version.

    But even as some lawmakers have embraced the concept of a specialized regulator for internet companies — which could conflict with existing cops on the beat at agencies including the Justice Department and the Federal Trade Commission — others have warned of the potential risks of creating a whole new bureaucracy.

    Gary Marcus, a New York University professor and self-described critic of AI “hype,” told lawmakers at Tuesday’s hearing that a separate agency could fall victim to “regulatory capture,” a term that describes when industries gain dominating influence over the government agencies created to hold them accountable.

    Connecticut Democratic Sen. Richard Blumenthal, a former state attorney general who has prosecuted consumer protection cases, said no agency can be effective without proper support.

    “I’ve been doing this stuff for a while,” Blumenthal said. “You can create 10 new agencies, but if you don’t give them the resources — and I’m not just talking about dollars, I’m talking about scientific expertise — [industry] will run circles around them.”

  • Amazon looks to adapt Alexa to the rise of ChatGPT | CNN Business

    (CNN) —

    For years, Alexa has been synonymous with virtual assistants that can interact with users and do tasks on their behalf.

    Now Amazon is trying to keep pace with a new wave of conversational AI tools that have accelerated the artificial intelligence arms race in the tech industry and rapidly reshaped what consumers may expect from their tech products.

    Amazon’s goal is to use AI “to create this great personal assistant,” said Dave Limp, senior VP of devices and services, in a recent interview with CNN. “We’ve been using all forms of AI for a long time, but now that we see this emergence of generative AI, we can accelerate that vision even faster.”

    Generative AI refers to a type of AI that can create new content, such as text and images, in response to user prompts. Limp did not elaborate on how generative AI could be used in Alexa products, but there are clear possibilities.

    In theory, this technology could one day help Alexa have more natural conversations with users, answer more complex questions, and be more creative by telling stories or making up song lyrics in seconds. It could also enable more personalized interactions, allowing the assistant to learn the device owner’s interests and preferences and better tailor its responses to each person.

    “We’re not done and won’t be done until Alexa is as good or better than the ‘Star Trek’ computer,” Limp said. “And to be able to do that, it has to be conversational. It has to know all. It has to be the true source of knowledge for everything.”

    Alexa launched nearly a decade ago and, along with Siri, Cortana and other voice assistants, seemed poised to change the way people interacted with technology. But the viral success of ChatGPT has arguably accomplished that faster and across a wider range of everyday products.

    The effort to continue updating the technology that powers Alexa comes at a difficult moment for Amazon. Like other Big Tech companies, Amazon is now slashing staff and shelving products in an urgent effort to cut costs amid broader economic uncertainty. The Alexa division has not escaped unscathed.

    Amazon confirmed plans in January to lay off more than 18,000 employees as the global economic outlook continued to worsen. In March, the company said about 9,000 more jobs would be impacted. Limp said his division lost about 2,000 people, about half of whom were from the Alexa team.

    Amazon also shut down some of the products it spun up earlier in the pandemic, such as its wearable fitness brand Halo, which allowed users to ask Alexa questions about their health and wellness. Limp said the company also shelved some “more risky” projects. “I wouldn’t doubt we’ll dust them off at some point and bring them back,” he said. “We’re still taking a lot of risks in this organization.”

    But Limp said Alexa remains a “North Star” for his division. “To give you a sense, there’s still thousands and thousands of people working on Alexa,” he said.

    Amazon is indeed still investing in Alexa and its related Echo smart speaker lineup. Last week, the company unveiled several new products, including the $39.99 Echo Pop and the $89.99 Echo Show 5, its smart speaker with a screen. While the products feature incremental updates, Limp said Amazon’s current lineup contains hints of what’s to come with its AI efforts, beyond generative AI.

    For example, if Alexa is enabled on an Echo Show, where it can rotate and follow users around the room, “you’ll see glimmers of where it’s going over the next months and years,” Limp said.

    But generative AI remains a key focus for the company. Amazon CEO Andy Jassy said in a letter to shareholders in April that the company is focused on “investing heavily” in the technology “across all of our consumer, seller, brand, and creator experiences.”

    The company is reportedly working on adding ChatGPT-like search capabilities for its e-commerce store. Amazon is also rumored to be planning to use generative AI to bring conversational language to a home robot.

    While Limp didn’t comment on the report, he said the end goal has long been for Alexa to communicate with users in a fluid, natural way, whether it’s through an Echo device or other products such as its robotic dog, Astro.

    The concept remains a “hard technical challenge,” he said, but one that is “more tractable” with generative AI. “There’s still some hard corner cases and things to work out,” he said.

  • AI chip boom sends Nvidia’s stock surging after whopper of a quarter | CNN Business

    New York (CNN) —

    The AI boom is here, and Nvidia is reaping all the benefits.

    Shares of Nvidia (NVDA) exploded 28% higher Thursday after the company reported earnings and sales that surged well above Wall Street’s already lofty expectations. That was enough to make investors temporarily forget about America’s dangerous debt ceiling standoff, sending the broader stock market higher — even after credit rating agency Fitch warned late Wednesday that America could soon lose its sterling AAA debt rating.

    Nvidia makes chips that power generative AI, a type of artificial intelligence that can create new content, such as text and images, in response to user prompts. That’s the kind of AI underlying ChatGPT, Google’s Bard, Dall-E and many of the other new AI technologies.

    “The computer industry is going through two simultaneous transitions — accelerated computing and generative AI,” said Jensen Huang, Nvidia’s CEO, in a statement. “A trillion dollars of installed global data center infrastructure will transition from general purpose to accelerated computing as companies race to apply generative AI into every product, service and business process.”

    Huang said Nvidia is increasing supply of its entire suite of data center products to meet “surging demand” for them.

    Last quarter, Nvidia’s profit surged 26% to $2 billion, and sales rose 19% to $7.2 billion, each easily surpassing Wall Street analysts’ forecasts. Nvidia’s outlook for the current quarter was also significantly — about 50% — higher than analysts’ predictions.

    Nvidia’s stock is up nearly 110% this year.

    “There is not one better indicator around underlying AI demand going on … than the foundational Nvidia story,” said Dan Ives, analyst at Wedbush. “We view Nvidia as the core heart and lungs of the AI revolution.”

  • Microsoft unveils more secure AI-powered Bing Chat for businesses to ensure ‘data doesn’t leak’ | CNN Business

    (CNN) —

    Microsoft on Tuesday announced a more secure version of its AI-powered Bing specifically for businesses and designed to assure professionals they can safely share potentially sensitive information with a chatbot.

    With Bing Chat Enterprise, the user’s chat data will not be saved, sent to Microsoft’s servers or used to train the AI models, according to the company.

    “What this [update] means is your data doesn’t leak outside the organization,” Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer, told CNN in an interview. “We don’t co-mingle your data with web data, and we don’t save it without your permission. So no data gets saved on the servers, and we don’t use any of your data chats to train the AI models.”

    Since ChatGPT launched late last year, a new crop of powerful AI tools has offered the promise of making workers more productive. But in recent months, some businesses, such as JPMorgan Chase, have banned the use of ChatGPT among their employees, citing security and privacy concerns. Other large companies have reportedly taken similar steps over concerns around sharing confidential information with AI chatbots.

    In April, regulators in Italy issued a temporary ban on ChatGPT in the country after OpenAI disclosed a bug that allowed some users to see the subject lines from other users’ chat histories. The same bug, now fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post at the time.

    Like other tech companies, Microsoft is racing to develop and deploy a range of AI-powered tools for consumers and professionals amid widespread investor enthusiasm for the new technology. Microsoft also said Tuesday that it will add visual searches to its existing AI-powered Bing Chat tool. And the company said Microsoft 365 Copilot, its previously announced AI-powered tool that helps edit, summarize, create and compare documents across its various products, will cost $30 a month for each user.

    Bing Chat Enterprise will be free for all of its 160 million Microsoft 365 subscribers starting on Tuesday, if a company’s IT department manually turns on the tool. After 30 days, however, Microsoft will roll out access to all users by default; subscribed businesses can disable the tool if they so choose.

    Current conversational AI tools, such as the consumer version of Bing Chat, send data from personal chats to their servers to train and improve their AI models.

    Microsoft’s new enterprise option is identical to the consumer version of Bing, but it will not recall conversations with users, so they’ll need to go back and start from scratch each time. (Bing recently started to enable saved chats on its consumer chat model.)

    With these changes, Microsoft, which uses OpenAI’s technology to power its Bing chat tool, said workers can have “complete confidence” their data “won’t be leaked outside of the organization.”

    To access the tool, a user will sign into the Bing browser with their work credentials, and the system will automatically detect the account and put it into a protected mode, according to Microsoft. Above the “ask me anything” bar, a notice reads: “Your personal and company data are protected in this chat.”

    In a demo video shown to CNN ahead of its launch, Microsoft showed how a user could type confidential details into Bing Chat Enterprise, such as someone sharing financial information as part of preparing a bid to buy a building. With the new tool, the user could ask Bing Chat to create a table comparing the property to other neighboring buildings and to write an analysis highlighting the strengths and weaknesses of their bid relative to other local bids.

    In addition to trying to ease privacy and security concerns around AI in the workplace, Mehdi also addressed the problem of factual errors. To reduce the possibility of inaccuracies, or “hallucinations,” as some in the industry call them, he suggested users write clearer, better prompts and check the included citations.

  • Leading AI companies commit to outside testing of AI systems and other safety commitments | CNN Politics

    (CNN) —

    Microsoft, Google and other leading artificial intelligence companies committed Friday to put new AI systems through outside testing before they are publicly released and to clearly label AI-generated content, the White House announced.

    The pledges are part of a series of voluntary commitments agreed to by the White House and seven leading AI companies – which also include Amazon, Meta, OpenAI, Anthropic and Inflection – aimed at making AI systems and products safer and more trustworthy while Congress and the White House develop more comprehensive regulations to govern the rapidly growing industry. President Joe Biden met with top executives from all seven companies at the White House on Friday.

    In a speech Friday, Biden called the companies’ commitments “real and concrete,” adding that they will help fulfill their “fundamental obligations to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and our shared values.”

    “We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years. That has been an astounding revelation,” Biden said.

    White House officials acknowledge that some of the companies have already enacted some of the commitments but argue they will as a whole raise “the standards for safety, security and trust of AI” and will serve as a “bridge to regulation.”

    “It’s a first step, it’s a bridge to where we need to go,” White House deputy chief of staff Bruce Reed, who has been managing the AI policy process, said in an interview. “It will help industry and government develop the capacities to make sure that AI is safe and secure. And we pushed to move so quickly because this technology is moving farther and faster than anything we’ve seen before.”

    While most of the companies already conduct internal “red-teaming” exercises, the commitments will mark the first time they have all committed to allow outside experts to test their systems before they are released to the public. A red team exercise is designed to simulate what could go wrong with a given technology – such as a cyberattack or its potential to be used by malicious actors – and allows companies to proactively identify shortcomings and prevent negative outcomes.

    Reed said the external red-teaming “will help pave the way for government oversight and regulation,” potentially laying the groundwork for that outside testing to be carried out by a government regulator or licenser.

    The commitments could also lead to widespread watermarking of AI-generated audio and visual content with the aim of combating fraud and misinformation.

    The companies also committed to investing in cybersecurity and “insider threat safeguards,” in particular to protect AI model weights, which are essentially the knowledge base upon which AI systems rely; creating a robust mechanism for third parties to report system vulnerabilities; prioritizing research on the societal risks of AI; and developing and deploying AI systems “to help address society’s greatest challenges,” according to the White House.

    Asked by CNN’s Jake Tapper Friday about worries he has when it comes to AI, Microsoft Vice Chair and President Brad Smith pointed to “what people, bad actors, individuals or countries will do” with the technology.

    “That they’ll use it to undermine our elections, that they will use it to seek to break into our computer networks. You know, that they’ll use it in ways that will undermine the security of our jobs,” he said.

    But, Smith argued, “the best way to solve these problems is to focus on them, to understand them, to bring people together, and to solve them. And the interesting thing about AI, in my opinion, is that when we do that, and we are determined to do that, we can use AI to defend against these problems far more effectively than we can today.”

    Pressed by Tapper about AI and compensation concerns listed in a recent letter signed by thousands of authors, Smith said: “I don’t want it to undermine anybody’s ability to make a living by creating, by writing. That is the balance that we should all want to strike.”

    All of the commitments are voluntary and White House officials acknowledged that there is no enforcement mechanism to ensure the companies stick to the commitments, some of which also lack specificity.

    Common Sense Media, a child internet-safety organization, commended the White House for taking steps to establish AI guardrails, but warned that “history would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”

    “If we’ve learned anything from the last decade and the complete mismanagement of social media governance, it’s that many companies offer a lot of lip service,” Common Sense Media CEO James Steyer said in a statement. “And then they prioritize their profits to such an extent that they will not hold themselves accountable for how their products impact the American people, particularly children and families.”

    The federal government’s failure to regulate social media companies at their inception – and the resistance from those companies – has loomed large for White House officials as they have begun crafting potential AI regulations and executive actions in recent months.

    “The main thing we stressed throughout the discussions with the companies was that we should make this as robust as possible,” Reed said. “The tech industry made a mistake in warding off any kind of oversight, legislation and regulation a decade ago and I think that AI is progressing even more rapidly than that and it’s important for this bridge to regulation to be a sturdy one.”

    The commitments were crafted during a monthslong back-and-forth between the AI companies and the White House that began in May when a group of AI executives came to the White House to meet with Biden, Vice President Kamala Harris and White House officials. The White House also sought input from non-industry AI safety and ethics experts.

    White House officials are working to move beyond voluntary commitments, readying a series of executive actions, the first of which is expected to be unveiled later this summer. Officials are also working closely with lawmakers on Capitol Hill to develop more comprehensive legislation to regulate AI.

    “This is a serious responsibility. We have to get it right. There’s an enormous, enormous potential upside as well,” Biden said.

    In the meantime, White House officials say the companies will “immediately” begin implementing the voluntary commitments and hope other companies sign on in the future.

    “We expect that other companies will see how they also have an obligation to live up to the standards of safety, security and trust. And they may choose – and we would welcome them choosing – joining these commitments,” a White House official said.

    This story has been updated with additional details.

  • Google is building an AI tool for journalists | CNN Business

    (CNN) —

    Google is developing an artificial intelligence tool for news publishers that can generate article text and headlines, the company said, highlighting how the technology may soon transform the journalism industry.

    The tech giant said in a statement that it is looking to partner with news outlets on the AI tool’s use in newsrooms.

    “Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity,” a Google spokesperson said, “just like we’re making assistive tools available for people in Gmail and in Google Docs.”

    The effort was first reported by The New York Times, which said the project is referred to internally as “Genesis” and has been pitched to The Times, The Washington Post and News Corp, which owns The Wall Street Journal.

    Google’s statement did not name those media companies but said the company is particularly focusing on “smaller publishers.” It added that the project is not aimed at replacing journalists nor their “essential role … in reporting, creating, and fact-checking their articles.”

    The new tool comes as tech companies, including Google, race to develop and deploy a new crop of generative AI features into applications used in the workplace, with the promise of streamlining tasks and making employees more productive.

    But these tools, which are trained on information online, have also raised concerns because of their potential to get facts wrong or “hallucinate” responses.

    News outlet CNET had to issue “substantial” corrections earlier this year after experimenting with using an AI tool to write stories. And what was supposed to be a simple AI-written story on “Star Wars” published by Gizmodo earlier this month similarly required a correction. But both outlets have said they will still move forward with using the technology.

  • ‘It’s an especially bad time’: Tech layoffs are hitting ethics and safety teams | CNN Business

    New York (CNN) —

    In the wake of the 2016 presidential election, as online platforms began facing greater scrutiny for their impacts on users, elections and society, many tech firms started investing in safeguards.

    Big Tech companies brought on employees focused on election safety, misinformation and online extremism. Some also formed ethical AI teams and invested in oversight groups. These teams helped guide new safety features and policies. But over the past few months, large tech companies have slashed tens of thousands of jobs, and some of those same teams are seeing staff reductions.

    Twitter eliminated teams focused on security, public policy and human rights issues when Elon Musk took over last year. More recently, Twitch, a livestreaming platform owned by Amazon, laid off some employees focused on responsible AI and other trust and safety work, according to former employees and public social media posts. Microsoft cut a key team focused on ethical AI product development. And Facebook-parent Meta suggested that it might cut staff working in non-technical roles as part of its latest round of layoffs.

    Meta, according to CEO Mark Zuckerberg, hired “many leading experts in areas outside engineering.” Now, he said, the company will aim to return “to a more optimal ratio of engineers to other roles,” as part of cuts set to take place in the coming months.

    The wave of cuts has raised questions among some inside and outside the industry about Silicon Valley’s commitment to providing extensive guardrails and user protections at a time when content moderation and misinformation remain challenging problems to solve. Some point to Musk’s draconian cuts at Twitter as a pivot point for the industry.

    “Twitter making the first move provided cover for them,” said Katie Paul, director of the online safety research group the Tech Transparency Project. (Twitter, which also cut much of its public relations team, did not respond to a request for comment.)

    To complicate matters, these cuts come as tech giants are rapidly rolling out transformative new technologies like artificial intelligence and virtual reality — both of which have sparked concerns about their potential impacts on users.

    “They’re in a super, super tight race to the top for AI and I think they probably don’t want teams slowing them down,” said Jevin West, associate professor in the Information School at the University of Washington. But “it’s an especially bad time to be getting rid of these teams when we’re on the cusp of some pretty transformative, kind of scary technologies.”

    “If you had the ability to go back and place these teams at the advent of social media, we’d probably be a little bit better off,” West said. “We’re at a similar moment right now with generative AI and these chatbots.”

    When Musk laid off thousands of Twitter employees following his takeover last fall, it included staffers focused on everything from security and site reliability to public policy and human rights issues. Since then, former employees, including ex-head of site integrity Yoel Roth — not to mention users and outside experts — have expressed concerns that Twitter’s cuts could undermine its ability to handle content moderation.

    Months after Musk’s initial moves, some former employees at Twitch, another popular social platform, are now worried about the impacts recent layoffs there could have on its ability to combat hate speech and harassment and to address emerging concerns from AI.

    One former Twitch employee affected by the layoffs and who previously worked on safety issues said the company had recently boosted its outsourcing capacity for addressing reports of violative content.

    “With that outsourcing, I feel like they had this comfort level that they could cut some of the trust and safety team, but Twitch is very unique,” the former employee said. “It is truly live streaming, there is no post-production on uploads, so there is a ton of community engagement that needs to happen in real time.”

    Such outsourced teams, as well as automated technology that helps platforms enforce their rules, also aren’t as useful for proactive thinking about what a company’s safety policies should be.

    “You’re never going to stop having to be reactive to things, but we had started to really plan, move away from the reactive and really be much more proactive, and changing our policies out, making sure that they read better to our community,” the employee told CNN, citing efforts like the launch of Twitch’s online safety center and its Safety Advisory Council.

    Another former Twitch employee, who like the first spoke on condition of anonymity for fear of putting their severance at risk, told CNN that cutting back on responsible AI work, despite the fact that it wasn’t a direct revenue driver, could be bad for business in the long run.

    “Problems are going to come up, especially now that AI is becoming part of the mainstream conversation,” they said. “Safety, security and ethical issues are going to become more prevalent, so this is actually high time that companies should invest.”

    Twitch declined to comment for this story beyond its blog post announcing layoffs. In that post, Twitch noted that users rely on the company to “give you the tools you need to build your communities, stream your passions safely, and make money doing what you love” and that “we take this responsibility incredibly seriously.”

    Microsoft also raised some alarms earlier this month when it reportedly cut a key team focused on ethical AI product development as part of its mass layoffs. Former employees of the Microsoft team told The Verge that the Ethics and Society AI team was responsible for helping to translate the company’s responsible AI principles for employees developing products.

    In a statement to CNN, Microsoft said the team “played a key role” in developing its responsible AI policies and practices, adding that its efforts have been ongoing since 2017. The company stressed that even with the cuts, “we have hundreds of people working on these issues across the company, including net new, dedicated responsible AI teams that have since been established and grown significantly during this time.”

    Meta, maybe more than any other company, embodied the post-2016 shift toward greater safety measures and more thoughtful policies. It invested heavily in content moderation, public policy and an oversight board to weigh in on tricky content issues to address rising concerns about its platform.

    But Zuckerberg’s recent announcement that Meta will undergo a second round of layoffs is raising questions about the fate of some of that work. Zuckerberg hinted that non-technical roles would take a hit and said non-engineering experts help “build better products, but with many new teams it takes intentional focus to make sure our company remains primarily technologists.”

    Many of the cuts have yet to take place, meaning their impact, if any, may not be felt for months. And Zuckerberg said in his blog post announcing the layoffs that Meta “will make sure we continue to meet all our critical and legal obligations as we find ways to operate more efficiently.”

    Still, “if it’s claiming that they’re going to focus on technology, it would be great if they would be more transparent about what teams they are letting go of,” Paul said. “I suspect that there’s a lack of transparency, because it’s teams that deal with safety and security.”

    Meta declined to comment for this story or answer questions about the details of its cuts beyond pointing CNN to Zuckerberg’s blog post.

    Paul said Meta’s emphasis on technology won’t necessarily solve its ongoing issues. Research from the Tech Transparency Project last year found that Facebook’s technology created dozens of pages for terrorist groups like ISIS and Al Qaeda. According to the organization’s report, when a user listed a terrorist group on their profile or “checked in” to a terrorist group, a page for the group was automatically generated, although Facebook says it bans content from designated terrorist groups.

    “The technology that’s supposed to be removing this content is actually creating it,” Paul said.

    At the time the Tech Transparency Project report was published in September, Meta said in a comment that, “When these kinds of shell pages are auto-generated there is no owner or admin, and limited activity. As we said at the end of last year, we addressed an issue that auto-generated shell pages and we’re continuing to review.”

    In some cases, tech firms may feel emboldened to rethink investments in these teams by a lack of new laws. In the United States, lawmakers have imposed few new regulations, despite what West described as “a lot of political theater” in repeatedly calling out companies’ safety failures.

    Tech leaders may also be grappling with the fact that even as they built up their trust and safety teams in recent years, their reputation problems haven’t really abated.

    “All they keep getting is criticized,” said Katie Harbath, former director of public policy at Facebook who now runs tech consulting firm Anchor Change. “I’m not saying they should get a pat on the back … but there comes a point in time where I think Mark [Zuckerberg] and other CEOs are like, is this worth the investment?”

    While tech companies must balance their growth with the current economic conditions, Harbath said, “sometimes technologists think that they know the right things to do, they want to disrupt things, and aren’t always as open to hearing from outside voices who aren’t technologists.”

    “You need that right balance to make sure you’re not stifling innovation, but making sure that you’re aware of the implications of what it is that you’re building,” she said. “We won’t know until we see how things continue to operate moving forward, but my hope is that they at least continue to think about that.”

  • The man behind ChatGPT is about to have his moment on Capitol Hill | CNN Business

    New York (CNN) —

    For a few months in 2017, there were rumors that Sam Altman was planning to run for governor of California. Instead, he kept his day job as one of Silicon Valley’s most influential investors and entrepreneurs.

    But now, Altman is about to make a different kind of political debut.

    Altman, the CEO and co-founder of OpenAI, the artificial intelligence company behind viral chatbot ChatGPT and image generator Dall-E, is set to testify before Congress on Tuesday. His appearance is part of a Senate subcommittee hearing on the risks artificial intelligence poses for society, and what safeguards are needed for the technology.

    House lawmakers on both sides of the aisle are also expected to hold a dinner with Altman on Monday night, according to multiple reports. Dozens of lawmakers are said to be planning to attend, with one Republican lawmaker describing it as part of the process for Congress to assess “the extraordinary potential and unprecedented threat that artificial intelligence presents to humanity.”

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    The hearing and meetings come as ChatGPT has sparked a new arms race over AI. A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts. This week’s hearing may only cement his stature as a central player in AI’s rapid growth – and also add to scrutiny of him and his company.

    Those who know Altman have described him as a brilliant thinker, someone who makes prescient bets and has even been called “a startup Yoda.” In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    “If anyone knows where this is going, it’s Sam,” Brian Chesky, the CEO of Airbnb, wrote in a post about Altman for the latter’s inclusion this year on Time’s list of the 100 most influential people. “But Sam also knows that he doesn’t have all the answers. He often says, ‘What do you think? Maybe I’m wrong?’ Thank God someone with so much power has so much humility.”

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    OpenAI declined to make anyone available for an interview for this story.

    The success of ChatGPT may have brought Altman greater public attention, but he has been a well-known figure in Silicon Valley for years.

    Prior to cofounding OpenAI with Musk in 2015, Altman, a Missouri native, studied computer science at Stanford University, only to drop out to launch Loopt, an app that helped users share their locations with friends and get coupons for nearby businesses.

    In 2005, Loopt was part of the first batch of companies at Y Combinator, a prestigious tech accelerator. Paul Graham, who co-founded Y Combinator, later described Altman as “a very unusual guy.”

    “Within about three minutes of meeting him, I remember thinking ‘Ah, so this is what Bill Gates must have been like when he was 19,’” Graham wrote in a post in 2006.

    Loopt was acquired in 2012 for about $43 million. Two years later, Altman took over from Graham as president of Y Combinator. The position allowed Altman to connect with numerous powerful figures in the tech industry. He remained at the helm of the accelerator until 2019.

    Margaret O’Mara, a tech historian and professor at the University of Washington, told CNN that Altman “has long been admired as a thoughtful, significant guy and in the remarkably small number of powerful people who are kind of at the top of tech and have a lot of sway.”

    During the Trump administration, Altman gained new attention as a vocal critic of the president. It was against that backdrop that he was rumored to be considering a run for California governor.

    Rather than running, however, Altman instead looked to back candidates who aligned with his values, which include a lower cost of living, clean energy, and redirecting 10% of the defense budget to research and development of future technology.

    Altman continues to push for some of these goals through his work in the private sector. He invested in Helion, a fusion research company that inked a deal with Microsoft last week to sell clean energy to the tech giant by 2028.

    Altman has also been a proponent of the idea of a universal basic income and has suggested that AI could one day help fulfill that goal by generating so much wealth it could be redistributed back to the public.

    As Graham told The New Yorker about Altman in 2016, “I think his goal is to make the whole future.”

    When launching OpenAI, Musk and Altman’s original mission was to get ahead of the fear that AI could harm people and society.

    “We discussed what is the best thing we can do to ensure the future is good?” Musk told the New York Times about a conversation with Altman and others before launching the company. “We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing A.I. in a way that is safe and is beneficial to humanity.”

    In an interview at the launch of OpenAI, Altman explained the company as his way of trying to steer the path of AI technology. “I sleep better knowing I can have some influence now,” he said.

    If there’s one thing AI enthusiasts and critics can agree on right now, it may be that Altman clearly has succeeded in having some influence over the rapidly evolving technology.

    Less than six months after the release of ChatGPT, it has become a household name, almost synonymous with AI itself. CEOs are using it to draft emails. Realtors are using it to write listings and draft legal documents. The tool has passed exams from law and business schools – and been used to help some students cheat. And OpenAI recently released a more powerful version of the technology underpinning ChatGPT.

    Tech giants like Google and Facebook are now racing to catch up. Similar generative AI technology is quickly finding its way into productivity and search tools used by billions of people.

    A future that once seemed very far off now feels right around the corner, whether society is ready for it or not. Altman himself has professed not to be sure about how it will turn out.

    O’Mara said she believes Altman fits into “the techno-optimist school of thought that has been dominant in the Valley for a very long time,” which she describes as “the idea that we can devise technology that can indeed make the world a better place.”

    While Altman’s cautious remarks about AI may sound at odds with that way of thinking, O’Mara argues it may be an “extension” of it. In essence, she said, it’s related to “the idea that technology is transformative and can be transformative in a positive way but also has so much capacity to do so much that it actually could be dangerous.”

    And if AI should somehow help bring about the end of society as we know it, Altman may be more prepared than most to adapt.

    “I prep for survival,” he said in a 2016 profile of him in the New Yorker, noting several possible disaster scenarios, including “A.I. that attacks us.”

    “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”


  • The US Senate is working to get up to speed on AI basics ahead of any legislation | CNN Business

    Washington
    CNN
     — 

    The US Senate is inching forward on a plan to regulate artificial intelligence, after months of seeing how ChatGPT and similar tools stand to supercharge — or disrupt — wide swaths of society.

    But despite outlining broad contours of the plan, senators are still likely months away from introducing a comprehensive bill setting guardrails for the industry, let alone passing legislation and getting it signed into law. The deliberate pace of progress contrasts with the blistering speed with which companies and organizations have embraced generative AI, and the flood of investment into the industry.

    The Senate’s plan calls for briefing lawmakers on the basic facts of artificial intelligence over the summer, before beginning to consider legislation in the following months, even as some senators have begun to pitch proposals.

    The efforts reflect how, despite urgent calls by civil society groups and industry for guardrails on the technology, many lawmakers are still getting up to speed.

    To help educate members, Senate Majority Leader Chuck Schumer on Tuesday announced a series of three senators-only information sessions to take place in the coming weeks.

    The closed-door briefings will cover topics ranging from AI’s current capabilities and competition in AI development to how US national security and defense agencies are already putting the technology to use. The latter session, Schumer said, will be the first-ever classified senators’ briefing on AI.

    “The Senate must deepen our expertise in this pressing topic,” Schumer wrote in a letter to colleagues announcing the briefings. “AI is already changing our world, and experts have repeatedly told us that it will have a profound impact on everything from our national security to our classrooms to our workforce, including potentially significant job displacement.”

    Schumer had earlier kicked off a high-level push for AI legislation in April, when he proposed shaping any eventual bill around four principles promoting transparency and democratic values.

    The information sessions are expected to wrap up by the time Congress breaks for August recess, according to South Dakota Republican Sen. Mike Rounds, one of three other senators Schumer has tapped to lead on a comprehensive AI bill.

    By that point, Rounds told reporters Wednesday on the sidelines of a Washington conference, there may be “lots of different ideas floating” but not necessarily a bill to speak of.

    Schumer, Rounds and the other leading lawmakers on the AI working group — New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — haven’t settled on how to coordinate various legislative proposals yet.

    Options include forming a select committee to craft a comprehensive AI bill, or “splitting out and having lots of different committees come up with different pieces of legislation,” Rounds said.

    The AI hype has produced high-profile hearings and scattershot policy proposals. Last month, OpenAI CEO Sam Altman testified before a Senate Judiciary subcommittee, wowing lawmakers by asking for regulation, a day after giving a technical demonstration to enthralled members of the House.

    Sen. Michael Bennet, for example, has introduced legislation to create a new federal agency with authority to regulate AI. And on Wednesday, Sen. Josh Hawley unveiled his own framework for AI legislation that called for letting Americans sue companies for harms caused by AI models.

    Rounds told reporters Schumer has not set a timeframe for coming up with AI legislation, adding that the current goal is to allow ideas to “melt for a while.”

    But he predicted that, given AI’s expected impact on many agencies and industries, a wide-ranging and open legislative process reflecting input from many sources is all but inevitable, akin to how the Senate crafts the annual defense policy bill known as the National Defense Authorization Act.

    “You bring in all of these ideas, and then you very quietly start to meld this bill together, kind of behind the scenes in a way,” he said. “You go through a committee process in which you deliver a bill that says this could pass, and then you allow other members to come in and offer their amendments to it as well. That has worked well year-in and year-out for the NDAA.”


  • ChatGPT creator pulls AI detection tool due to ‘low rate of accuracy’ | CNN Business

    CNN
     — 

    Less than six months after ChatGPT creator OpenAI unveiled an AI detection tool with the potential to help teachers and other professionals detect AI-generated work, the company has pulled the feature.

    OpenAI quietly shut down the tool last week citing a “low rate of accuracy,” according to an update to the original company blog post announcing the feature.

    “We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” the company wrote in the update. OpenAI said it is also committed to helping “users to understand if audio or visual content is AI-generated.”

    The news may renew concerns about whether the companies behind a new crop of generative AI tools are equipped to build safeguards. It also comes as educators prepare for the first full school year with tools like ChatGPT publicly available.

    The sudden rise of ChatGPT quickly raised alarms among some educators late last year over the possibility that it could make it easier than ever for students to cheat on written work. Public schools in New York City and Seattle banned students and teachers from using ChatGPT on the district’s networks and devices. Some educators moved with remarkable speed to rethink their assignments in response to ChatGPT, even as it remained unclear how widespread use of the tool was among students and how harmful it could really be to learning.

    Against that backdrop, OpenAI announced the AI detection tool in February to allow users to check if an essay was written by a human or AI. The feature, which worked on English AI-generated text, was powered by a machine learning system that takes an input and assigns it to one of several categories. After a user pasted in a body of text, such as a school essay, the tool returned one of five possible verdicts, ranging from “likely generated by AI” to “very unlikely.”
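
    As an illustration of that five-outcome design, here is a toy sketch of how such a detector might bucket an underlying classifier’s score into coarse verdicts. The function name and threshold values are hypothetical, chosen for illustration only; they are not OpenAI’s actual cutoffs.

    ```python
    # Toy sketch: a detector's underlying classifier outputs a probability
    # that a text is AI-generated; the tool then buckets that score into
    # one of five coarse verdicts. Thresholds here are hypothetical.

    def verdict(ai_probability: float) -> str:
        """Map an AI-likelihood score (0.0-1.0) to one of five labels."""
        if not 0.0 <= ai_probability <= 1.0:
            raise ValueError("probability must be between 0 and 1")
        if ai_probability < 0.10:
            return "very unlikely to be AI-generated"
        if ai_probability < 0.45:
            return "unlikely to be AI-generated"
        if ai_probability < 0.65:
            return "unclear if it is AI-generated"
        if ai_probability < 0.90:
            return "possibly AI-generated"
        return "likely generated by AI"

    print(verdict(0.05))  # very unlikely to be AI-generated
    print(verdict(0.97))  # likely generated by AI
    ```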

    But even on its launch day, OpenAI admitted the tool was “imperfect” and results should be “taken with a grain of salt.”

    “We really don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times – much like using AI for any kind of assessment purposes,” Lama Ahmad, policy research director at OpenAI, told CNN at the time.

    While the tool might provide one reference point among others, such as comparing a submission against past examples of a student’s work and writing style, Ahmad said “teachers need to be really careful in how they include it in academic dishonesty decisions.”

    Although OpenAI may be shelving its tool for now, there are some alternatives on the market.

    Other companies such as Turnitin have also rolled out AI plagiarism detection tools that could help teachers identify when assignments are written by AI. Meanwhile, Princeton student Edward Tian introduced a similar AI detection tool called GPTZero.


  • Amazon is ‘investing heavily’ in the technology behind ChatGPT | CNN Business

    CNN
     — 

    Amazon wants investors to know it won’t be left behind in the latest Big Tech arms race over artificial intelligence.

    In a letter to shareholders Thursday, Amazon (AMZN) CEO Andy Jassy said the company is “investing heavily” in large language models (LLMs) and generative AI, the same technology that underpins ChatGPT and other similar AI chatbots.

    “We have been working on our own LLMs for a while now, believe it will transform and improve virtually every customer experience, and will continue to invest substantially in these models across all of our consumer, seller, brand, and creator experiences,” Jassy wrote in his letter to shareholders.

    The remarks, which were part of Jassy’s second annual letter to shareholders since taking over as CEO, hint at the pressure that many tech companies feel to explain how they can tap into the rapidly evolving marketplace for AI products. Since ChatGPT was released to the public in late November, Google (GOOG), Facebook (FB) and Microsoft (MSFT) have all talked up their growing focus on generative AI technology, which can create compelling essays, stories and visuals in response to user prompts.

    Amazon’s goal, according to Jassy, is to offer less costly machine learning chips so that “small and large companies can afford to train and run their LLMs in production.” Large language models are trained on vast troves of data in order to generate responses to user prompts.

    “Most companies want to use these large language models, but the really good ones take billions of dollars to train and many years, most companies don’t want to go through that,” Jassy said in an interview with CNBC on Thursday morning.

    “What they want to do is they want to work off of a foundational model that’s big and great already, and then have the ability to customize it for their own purposes,” Jassy told CNBC.

    With that in mind, Amazon on Thursday unveiled a new service called Bedrock. It essentially makes foundation models (large models that are pre-trained on vast amounts of data) from AI21 Labs, Anthropic, Stability AI and Amazon accessible to clients via an API, Amazon said in a blog post.
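
    To make that concrete, here is a minimal sketch of what calling a Bedrock-hosted foundation model might look like using AWS’s Python SDK, boto3. The region, model ID and request fields are illustrative assumptions, and the exact request schema varies by model provider.

    ```python
    # Minimal sketch of invoking a foundation model via Amazon Bedrock with
    # boto3. Assumes AWS credentials with Bedrock access; the model ID and
    # request body below are illustrative and vary by model provider.
    import json

    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    request = json.dumps({
        "prompt": "\n\nHuman: In one sentence, what is Amazon Bedrock?\n\nAssistant:",
        "max_tokens_to_sample": 100,  # hypothetical cap on response length
    })

    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # one of several hosted providers
        contentType="application/json",
        accept="application/json",
        body=request,
    )

    print(json.loads(response["body"].read())["completion"])
    ```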

    Jassy told CNBC he thinks Bedrock “will change the game for people.”

    In his letter to shareholders, Jassy also touted AWS’s CodeWhisperer, another AI-powered tool, which he said “revolutionizes developer productivity by generating code suggestions in real time.”

    “I could write an entire letter on LLMs and Generative AI as I think they will be that transformative, but I’ll leave that for a future letter,” Jassy wrote. “Let’s just say that LLMs and Generative AI are going to be a big deal for customers, our shareholders, and Amazon.”

    In the letter, Jassy also reflected on leading Amazon through “one of the harder macroeconomic years in recent memory,” as the e-commerce giant cut some 27,000 jobs as part of a major bid to rein in costs in recent months.

    “There were an unusual number of simultaneous challenges this past year,” Jassy said in the letter, before outlining steps Amazon took to rethink certain free shipping options, abandon some of its physical store concepts and significantly reduce overall headcount.

    Amazon disclosed in a securities filing Thursday that Jassy’s pay package last year was valued at some $1.3 million, and that the CEO did not receive any new stock awards in 2022. (When Jassy took over as CEO in 2021, he was awarded a pay package mostly comprised of stock awards that valued his total compensation package at some $212 million.)

    Despite the challenges at Amazon, however, Jassy said in his letter that he finds himself “optimistic and energized by what lies ahead.” Jassy added: “I strongly believe that our best days are in front of us.”


  • Microsoft opens up its AI-powered Bing to all users | CNN Business

    CNN
     — 

    Microsoft is rolling out the new AI-powered version of its Bing search engine to anyone who wants to use it.

    Nearly three months after the company debuted a limited preview version of its new Bing, powered by the viral AI chatbot ChatGPT, Microsoft is opening it up to all users without a waitlist – as long as they’re signed into the search engine via Microsoft’s Edge browser.

    The move highlights Microsoft’s commitment to move forward with the product even as the AI technology behind it has sparked concerns around inaccuracies and tone. In some cases, people who baited the new Bing received emotionally reactive and aggressive responses.

    “We’re getting better at speed, we’re getting better at accuracy … but we are on a never-ending quest to make things better and better,” Yusuf Mehdi, a VP at Microsoft overseeing its AI initiatives, told CNN on Wednesday.

    Bing now has more than 100 million daily active users, a significant uptick in the past few months, according to Mehdi. Google, which has long dominated the search market, is also adding similar AI features to its search engine.

    In February, Microsoft showed off how its revamped search engine could write summaries of search results, chat with users to answer additional questions about a query and write emails or other compositions based on the results.

    At a press event in New York City on Wednesday, the company shared an early look at some updates, including the ability to ask questions with pictures, access chat history so the chatbot remembers its rapport with users, and export responses to Microsoft Word. Users can also personalize the tone and style of the chatbot’s responses, selecting anything from a lengthier, creative reply to something shorter and to the point.

    The wave of attention in recent months around ChatGPT, developed by OpenAI with financial backing from Microsoft, helped renew an arms race among tech companies to deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups is also developing AI writing assistants and image generators.

    Beyond adding AI features to search, Microsoft has said it plans to bring ChatGPT technology to its core productivity tools, including Word, Excel and Outlook, with the potential to change the way we work. The decision to add generative AI features to Bing could be particularly risky, however, given how much people rely on search engines for accurate and reliable information.

    Microsoft’s moves also come amid heightened scrutiny on the rapid pace of advancement in AI technology. In March, some of the biggest names in tech, including Elon Musk and Apple co-founder Steve Wozniak, called for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Mehdi said he doesn’t believe the AI industry is moving too fast and suggested the calls for a pause aren’t particularly helpful.

    “Some people think we should pause development for six months but I’m not sure that fixes anything or improves or moves things along,” he said. “But I understand where it’s coming from concern wise.”

    He added: “The only way to really build this technology well is to do it out in the open in the public so we can have conversations about it.”


  • Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman set to testify before Congress | CNN Business

    CNN
     — 

    OpenAI CEO Sam Altman is set to testify before a Senate Judiciary subcommittee on Tuesday after the viral success of ChatGPT, his company’s chatbot tool, renewed an arms race over artificial intelligence and sparked concerns from some lawmakers about the risks posed by the technology.

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”

    A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    Montgomery is expected to urge Congress to adopt a “precision regulation” approach for AI based on specific use cases, and to suggest that lawmakers push companies to test how their systems handle bias and other concerns – and disclose those results.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts.

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    – CNN’s Jennifer Korn contributed to this report.


  • How the CEO behind ChatGPT won over Congress | CNN Business

    Washington
    CNN
     — 

    OpenAI CEO Sam Altman seems to have achieved in a matter of hours what other tech execs have been struggling to do for years: He charmed the socks off Congress.

    Despite wide-ranging concerns that artificial intelligence tools like OpenAI’s ChatGPT could disrupt democracy, national security, and the economy, Altman’s appearance Tuesday before a Senate subcommittee went so smoothly that viewers could have been forgiven for thinking the year was closer to 2013 than 2023.

    It was a pivotal moment for the AI industry. Altman’s testimony on Tuesday alongside Christina Montgomery, IBM’s chief privacy officer, promised to set the tone for how Washington regulates a technology that many fear could eliminate jobs or destabilize elections.

    But where lawmakers could have followed a familiar pattern, blasting the tech industry with hostile questioning and leveling withering allegations of reckless innovation, members of the Senate Judiciary Committee instead heaped praise on the companies — and often, on Altman in particular.

    The difference seemed to come down to OpenAI calling for proactive government regulation — and persuading lawmakers it was serious. Unlike the long list of social media hearings in recent years, this AI hearing came earlier in OpenAI’s lifecycle and, crucially, before the company or its technology had suffered any high-profile mishaps.

    Altman, more than any other figure in tech, has emerged as the face of a new crop of powerful and disruptive AI tools that can generate compelling written work and images in response to user prompts. Much of the federal government is now racing to figure out how to regulate the cutting-edge technology.

    But after his performance on Tuesday, the CEO whose company helped spark the new AI arms race may have maneuvered himself into a privileged position of influence over the rules that may soon govern the tools he’s developing.

    Altman’s easy-going, plain-spoken demeanor helped disarm skeptical lawmakers and appeared to win over Democrats and Republicans alike. His approach contrasted with the wooden, lawyerly performances of some other tech CEOs during their time in the hot seat.

    “I sense there is a willingness to participate here that is genuine and authentic,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the committee’s technology panel.

    New Jersey Democratic Sen. Cory Booker, adopting an unusual level of familiarity with a witness, found himself repeatedly addressing Altman as “Sam,” even as he referred to other panelists by their last names.

    Even Altman’s fellow witnesses couldn’t resist gushing about his style.

    “His sincerity in talking about those [AI] fears is very apparent, physically, in a way that just doesn’t communicate on the television screen,” Gary Marcus, a former New York University professor and a self-described critic of AI “hype,” told lawmakers.

    With a relaxed yet serious tone, Altman did not deflect or shy away from lawmakers’ concerns. He agreed that large-scale manipulation and deception using AI tools are among the technology’s biggest potential flaws. And he validated fears about AI’s impact on workers, acknowledging that it may “entirely automate away some jobs.”

    “If this technology goes wrong, it can go quite wrong, and we want to be vocal about that,” Altman said. “We want to work with the government to prevent that from happening.”

    Altman’s candor and openness have captivated many in Washington.

    On Monday evening, Altman spoke to a dinner audience of roughly 60 House lawmakers from both parties. One person in the room, speaking on condition of anonymity to discuss a closed-door meeting, described members of Congress as “riveted” by the conversation, which also saw Altman demonstrating ChatGPT’s capabilities “to much amusement” from the audience.

    Lawmakers have spent years railing against social media companies, attacking them for everything from their content moderation decisions to their economic dominance. On Tuesday, they seemed ready — or even relieved — to be dealing with another area of the technology industry.

    Whether this time is truly different remains unclear, though. The AI industry’s biggest players and aspirants include some of the same tech giants Congress has sharply criticized, including Google and Meta. OpenAI is receiving billions of dollars of investment from Microsoft in a multi-year partnership. And with his remarks on Tuesday, Altman appeared to draw from a familiar playbook for Silicon Valley: Referring to technology as merely a neutral tool, acknowledging his industry’s imperfections and inviting regulation.

    Some AI ethicists and experts questioned the value of asking a leading industry spokesperson how he would like to be regulated. Marcus, the former New York University professor, cautioned that creating a new federal agency to police AI could lead to “regulatory capture” by the tech industry, but the warning could have applied just as easily to Congress itself.

    “It seems very very bad that ahead of a hearing meant to inform how this sector gets regulated, the CEO of one of the corporations that would be subject to that regulation gets to present a magic show to the regulators,” Emily Bender, a professor of computational linguistics at the University of Washington, said of Altman’s dinner with House lawmakers.

    She added: “Politicians, like journalists, must resist the urge to be impressed.”

    After years of fidgety evasiveness from other tech CEOs, however, lawmakers this week seemed easily wowed by Altman and his seemingly straight-shooting answers.

    Louisiana Republican Sen. John Kennedy, after expressing frustration with IBM’s Montgomery for providing a nuanced answer he couldn’t comprehend, visibly brightened when Altman quickly and smoothly outlined his regulatory proposals in a bulleted list. Kennedy began joking with Altman and even asked whether Altman might consider heading up a hypothetical federal agency charged with regulating the AI industry.

    “I love my current job,” Altman deadpanned, to audience laughter, before offering to send Kennedy’s office some potential candidates.

    Compounding lawmakers’ attraction to Altman is a belief on Capitol Hill that Congress erred in extending broad liability protections to online platforms at the dawn of the internet. That decision, which allowed for an explosion of blogs, e-commerce sites, streaming media and more, has become an object of regret for many lawmakers in the face of alleged mental health harms stemming from social media.

    “I don’t want to repeat that mistake again,” said Judiciary Committee Chairman Dick Durbin.

    Here too, Altman deftly seized an opportunity to curry favor with lawmakers by emphasizing distinctions between his industry and the social media industry.

    “We try to design systems that do not maximize for engagement,” Altman said, alluding to the common criticism that social media algorithms tend to prioritize outrage and negativity to boost usage. “We’re not an advertising-based model; we’re not trying to get people to use it more and more, and I think that’s a different shape than ad-supported social media.”

    In providing simple-sounding solutions with a smile, Altman is doing much more than shaping policy: He is offering members of Congress a shot at redemption, one they seem grateful to accept. Despite the many pitfalls of AI they identified on Tuesday, lawmakers appeared to thoroughly welcome Altman as a partner, not a potential adversary needing oversight and scrutiny.

    “We need to be mindful,” Blumenthal said, “of ways that rules can enable the big guys to get bigger and exclude innovation, and competition, and responsible good guys such as our representative in this industry right now.”


  • This could be Apple’s biggest product launch since the Apple Watch | CNN Business

    CNN
     — 

    Apple may be just one day away from unveiling its most ambitious new hardware product in years.

    At its Worldwide Developers Conference, which kicks off Monday at its Cupertino, California, campus, Apple (AAPL) is widely expected to introduce a “mixed reality” headset that offers both virtual reality and augmented reality, a technology that overlays virtual images on live video of the real world.

    The highly anticipated release of an AR/VR headset would be Apple’s biggest hardware product launch since the debut of the Apple Watch in 2015. It could signal a new era for the company and potentially revolutionize how millions interact with computers and the world around them.

    But the headset is just one of many announcements expected at the developers event. Apple will also show off a long list of software updates that will shape how people use its most popular devices, including the iPhone and Apple Watch.

    Apple may also tease how it plans to incorporate AI into more of its products and services, and keep pace with a renewed arms race over the technology in Silicon Valley.

    The event will be livestreamed on Apple’s website and YouTube. It is set to start at 10:00 a.m. PT/1:00 p.m. ET.

    Here’s a closer look at what to expect:

    For years, Apple CEO Tim Cook has expressed interest in augmented reality. Now Apple finally appears ready to show off what it’s been working on.

    According to Bloomberg, the new headset, which could be called Reality One or Reality Pro, will have an iOS-like interface, display immersive video and include cameras and sensors to allow users to control it via their hands, eye movements and with Siri. The device is also rumored to have an outward-facing display that will show eye movements and facial expressions, allowing onlookers to interact with the person wearing the headset without feeling as though they’re talking to a robot.

    Apple’s new headset is expected to pack apps for gaming, fitness and meditation, and offer access to iOS apps such as Messages, FaceTime and Safari, according to Bloomberg. With the FaceTime option, for example, the headset will “render a user’s face and full body in virtual reality,” to create the feeling that both are “in the same room.”

    The decision to unveil it at WWDC suggests Apple wants to encourage developers to build apps and experiences for the product in order to make it more compelling for customers and worth the hefty price tag.

    The company is reportedly considering a $3,000 price tag for the device, far more than most of its products cost and a price that would test potential buyers at a time of lingering uncertainty in the global economy. Other tech companies have struggled to find mainstream traction for headsets. And in the years that Apple has been rumored to be working on the product, the tech community has shifted its focus from VR to another buzzy technology: artificial intelligence.

    But if any company can prove skeptics wrong, it’s Apple. The company’s entry into the market combined with its vast customer base has the potential to breathe new life into the world of headsets.

    A mixed reality headset may not be the only piece of hardware to get stage time this year.

    Apple is expected to launch a new 15-inch MacBook Air packing the company’s M2 processor. The current size of the MacBook Air is 13 inches.

    Previously, users who wanted a larger-sized Apple laptop would need to buy a higher-end MacBook Pro.

    Considering WWDC is traditionally a software event, Apple executives will likely spend much of the time highlighting the changes and upgrades coming to its next-generation mobile operating systems, iOS 17 and iPadOS 17.

    While last year’s updates included a major design overhaul of the lock screen and iMessage, only minor changes are expected this year.

    With iOS 17, Apple is expected to double down on its efforts around health tracking by adding the ability to monitor everything from a user’s mood to how their vision may change over time. According to the Wall Street Journal, Apple will also launch a journaling app as a way for users to log not only their thoughts but also their activity levels, which can then be analyzed to reveal how much time someone spends at home or out of the house.

    The new iOS 17 is also said to get a lock screen refresh: When positioned in horizontal mode, the display will highlight widgets tied to the calendar, weather and other apps, serving as a digital hub. (iPadOS 17 is also expected to get some of the same lock screen capabilities and health features.)

    Other anticipated upgrades include an Apple Watch OS update that would focus on quick glances at widgets, and more details about its next-generation CarPlay platform, which it initially teased last year.

    While much of the focus of the event may be on VR, Apple may also attempt to show how it’s keeping pace with Silicon Valley’s current obsession: artificial intelligence.

    Apple reportedly plans to preview an AI-powered digital coaching service, which will encourage people to exercise and improve their sleeping and eating habits. It’s unclear how it could work, but the effort comes at a time when Big Tech companies are racing to introduce AI-powered technologies in the wake of ChatGPT’s viral success.

    Apple may also demo and expand on some of its recently teased accessibility tools for the iPhone and iPad, including a feature that promises to replicate a user’s voice for phone calls after only 15 minutes of training.

    Most of the other Big Tech companies have recently outlined their AI strategies. This event may be Apple’s chance to do the same.


  • Everything you need to know about AI but were too afraid to ask | CNN Business

    CNN
     — 

    Business executives keep talking about it. Teachers are struggling with what to do about it. And artists like Drake seem angry about it.

    Love it or hate it, everyone is paying attention to artificial intelligence right now. Almost overnight, a new crop of AI tools has found its way into products used by billions of people, changing the way we work, shop, create and communicate with each other.

    AI advocates tout the technology’s potential to supercharge our productivity, creating a new era of better jobs, better education and better treatments for diseases. AI skeptics have raised concerns about the technology’s potential to disrupt jobs, mislead people and possibly bring about the end of humanity as we know it. Confusingly, some execs in Silicon Valley seem to hold both sets of views at once.

    What’s clear, however, is that AI is not going away, but it is changing very fast. Here’s everything you need to know to keep up.

    In the public consciousness, “artificial intelligence” may conjure up images of murderous machines eager to overtake humans, and capable of doing so. But in the tech industry, it’s a broad term that refers to different tools that are trained to perform a wide range of complex tasks that might previously have required some input from an actual person.

    If you use the internet, then you almost certainly use services that rely on AI to sort data, filter content and make suggestions, among other tasks.

    It’s the technology that allows Netflix to recommend movies and that helps remove spam, hate speech and other inappropriate content from your social media feeds. It helps power everything from autocorrect features and Google Translate to facial recognition services, the last of which uses AI that, in Microsoft’s words, “mimics a human capability to recognize human faces.”

    AI has also proved useful for solving a wide range of real-world problems, such as adjusting traffic signals in real time to manage congestion or helping medical professionals analyze images to make a diagnosis. And AI is central to developing self-driving cars, which process tremendous amounts of visual data so the vehicles can understand their surroundings.

    The short answer: ChatGPT.

    For years, AI has largely operated in the background of services we use every day. That changed following the November launch of ChatGPT, a viral chatbot that put the power of AI front and center.

    People have already used ChatGPT, a tool created by OpenAI, to draft lawsuits, write song lyrics and create research paper abstracts so good they’ve even fooled some scientists. The tool has even passed standardized exams. And ChatGPT has sparked an intense competition among tech companies to develop and deploy similar tools.

    Microsoft and Google have each introduced features powered by generative AI, the technology underpinning ChatGPT, into their most widely used productivity tools. Meta, Amazon and Alibaba have said they’re working on generative AI tools, too. And numerous other businesses also want in on the action.

    It’s rare to see a cutting-edge technology become so ubiquitous almost overnight. Now businesses, educators and lawmakers are all racing to adapt.

    Generative AI enables tools to create written work, images and even audio in response to prompts from users.

    To get those responses, several Big Tech companies have developed their own large language models trained on vast amounts of online data. The scope and purpose of these data sets can vary. For example, the version of ChatGPT that went public last year was only trained on data up until 2021 (it’s now more up to date).

    These models work through a method called deep learning: the model learns patterns and relationships between words so it can predict what is likely to come next and generate relevant responses to user prompts.

    As impressive as some generative AI services may seem, they essentially just do pattern matching. These tools can mimic the writing of others or make predictions about what words might be relevant in their responses based on all the data they’ve previously been trained on.
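
    A toy example can make that “pattern matching” point concrete. The bigram model below, a deliberately crude sketch, simply counts which word follows which in a tiny made-up corpus and predicts the most frequent follower. Large language models do something vastly more sophisticated, but the underlying idea of predicting the next word from patterns in training data is the same.

    ```python
    # A crude "next-word predictor": count which word follows which in a
    # small corpus, then predict the most frequent follower. The corpus
    # here is invented purely for illustration.
    from collections import Counter, defaultdict

    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    # Map each word to a tally of the words that follow it.
    followers = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        followers[current_word][next_word] += 1

    def predict_next(word: str) -> str:
        """Return the most frequent word seen after `word` in the corpus."""
        if word not in followers:
            return "<unknown>"
        return followers[word].most_common(1)[0][0]

    print(predict_next("the"))  # 'cat' (tied with 'dog'; first seen wins)
    print(predict_next("sat"))  # 'on'
    ```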

    AGI, on the other hand, promises something more ambitious — and scary.

    AGI — short for artificial general intelligence — refers to technology that can perform intelligent tasks such as learning, reasoning and adapting to new situations in the way that humans do. OpenAI CEO Sam Altman has teased the possibility of a superintelligent AGI that could go on to change the world or perhaps backfire and end humanity.

    For the moment, however, AGI remains purely hypothetical, so don’t worry too much about it.

    Anytime there’s an excess of buzz around a technology, it’s good to be skeptical — and there is certainly a lot of that here. Investor fascination with AI has helped push Wall Street back into a bull market, despite lingering economic uncertainty.

    Not all AI tools are equally useful and many companies will certainly tout AI features and strategies simply to tap into the current hype cycle. But even in just the past six months, AI has already shown potential to change how people do numerous everyday tasks.

    One of the biggest selling points around AI chatbots, for example, is their ability to make people more productive. Earlier this year, some real estate agents told CNN that ChatGPT saved them hours of work, not only by writing listings for homes for sale but also by looking up the permitted uses for certain land and calculating what mortgage payments or the return on investment might be for a client, tasks that typically involve formulas and mortgage calculators.

    Artificial intelligence is also much broader than ChatGPT and other generative AI tools. Even if you think AI chatbots are annoying or might be a fad, the underlying technology will continue to power meaningful advances in products and services for years to come.

    The fear is AI will eliminate millions of jobs. The hope is it will help improve how millions do their jobs. The current reality is somewhere in between.

    Companies will likely need new workers to help them implement and manage AI tools. Employment of data analysts and scientists, machine learning specialists and cybersecurity experts is forecast to grow 30% on average by 2027, according to one recent estimate from the World Economic Forum.

    But the proliferation of AI will also likely put many roles at risk eventually. There could be 26 million fewer record-keeping and administrative jobs by 2027, the WEF predicted. Data entry clerks and executive secretaries are expected to see the steepest losses.

    For now, there are clearly limits to how well AI can do the job of a human on its own. When CNET, a media outlet, experimented with using AI to write articles, it came under scrutiny for publishing pieces with factual errors. Likewise, a lawyer made headlines in May for citing nonexistent court cases, supplied to him by ChatGPT, in a filing to a judge. In an affidavit, the lawyer said he had never used ChatGPT as a legal research tool before and “was unaware of the possibility that its content could be false.”

    Photo: Alphabet CEO Sundar Pichai, left, and OpenAI CEO Sam Altman arrive at the White House for a meeting with Vice President Kamala Harris on artificial intelligence, Thursday, May 4, 2023, in Washington.

    Top AI executives have warned that AI could potentially bring about human extinction. But these same executives are also racing to deploy the technology into their products.

    Some experts say that focusing on far-off doomsday scenarios may distract from the more immediate harms that AI can cause, such as spreading misinformation, perpetuating biases that exist in training data, and enabling discrimination.

    For example, generative AI could be used to create deepfakes to spread propaganda during an election or enable a frightening new era of scams. Some AI models have also been criticized for what the industry calls “hallucinations,” or making up information.

    Even before the rise of ChatGPT, there were concerns about AI acting as a gatekeeper that can determine who does and does not move forward in a hiring process, for example. AI-powered facial recognition systems have also resulted in some wrongful arrests, and research has shown these systems are drastically more prone to error when trying to match the faces of darker-skinned people.

    The more AI tools are incorporated into core parts of society, the more potential there is for unintended consequences.

    Regulators in the United States and Europe are pushing for legislation to help put guardrails in place for AI, which could ultimately impact how the technology develops. But it’s unclear if lawmakers can keep pace with the rapid advances in AI.

    Experts believe that in the months ahead, generative AI will go on to create even more realistic images, videos, and audio that could further disrupt media, entertainment, tech and other industries. The technology will likely become increasingly conversational and personalized.

    In March, OpenAI unveiled GPT-4, the next-generation version of the technology that powers ChatGPT. According to the company and early tests, GPT-4 is able to provide more detailed and accurate written responses, pass academic tests with high marks and build a working website from a hand-drawn sketch. (Altman has previously said OpenAI is not yet training GPT-5.)

    AI will almost certainly be infused into many more products and services in the coming months. That means we’ll all have to learn how to live with it.

    As ChatGPT put it in response to a prompt from CNN, “AI has the potential to transform our lives … but it’s crucial for companies and individuals to be mindful of the accompanying risks and responsibly address concerns.”


  • OpenAI, maker of ChatGPT, hit with proposed class action lawsuit alleging it stole people’s data | CNN Business

    CNN
     — 

    OpenAI, the company behind the viral ChatGPT tool, has been hit with a lawsuit alleging the company stole and misappropriated vast swaths of peoples’ data from the internet to train its AI tools.

    The proposed class action, filed Wednesday in a California federal court, claims that OpenAI secretly scraped “massive amounts of personal data from the internet.” The nearly 160-page complaint alleges that this personal data, including “essentially every piece of data exchanged on the internet it could take,” was seized by the company without notice, consent or “just compensation.”

    Moreover, this data scraping occurred at an “unprecedented scale,” the suit claims.

    OpenAI did not immediately respond to CNN’s request for comment Wednesday. Microsoft, a major investor in OpenAI, was also named as a defendant in the suit and did not immediately respond to a request for comment.

    “By collecting previously obscure personal data of millions and misappropriating it to develop a volatile, untested technology, OpenAI put everyone in a zone of risk that is incalculable – but unacceptable by any measure of responsible data protection and use,” Timothy K. Giordano, a partner at Clarkson, the law firm behind the suit, said in a statement to CNN Wednesday.

    The complaint also claims that OpenAI products “use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”

    The lawsuit seeks injunctive relief in the form of a temporary freeze on further commercial use of OpenAI’s products. It also seeks payments of “data dividends” as financial compensation to people whose information was used to develop and train OpenAI’s tools.

    OpenAI publicly launched ChatGPT late last year, and the tool immediately went viral for its ability to generate compelling, human-sounding responses to user prompts. The success of ChatGPT spurred an apparent AI arms race in the tech world, as companies big and small are now racing to develop and deploy AI tools into as many products as possible.


  • Keeleg to Launch and Offer Free Access to Its AI-Powered Legal Tool Until May 1 in Response to COVID-19

    Press Release



    updated: Apr 8, 2020

    Keeleg is pleased to announce the public launch and offer of free access to its AI-powered, self-help immigration legal tool in response to the COVID-19 outbreak.

    Every year, millions of people apply for legal permanent resident status, apply for naturalization to become U.S. citizens, or petition for relatives to permanently immigrate to the United States of America. Beyond filing fees, the process can be expensive and complex. With the help of Keeleg’s cloud-based AI web application – dubbed Cato – users can determine whether they qualify and prepare their own immigration applications, or get connected to the best immigration attorneys around them.

    Cato combines artificial intelligence with the knowledge of award-winning attorneys to help users prepare their immigration applications or find the best immigration attorneys in their area. Cato’s knowledge currently covers hundreds of different immigration case scenarios, and new skills are being developed and added to cover other specialties and use cases.

    “After years spent in the industry, we noticed that something was broken and that there was and is a lack of access to legal knowledge and protection for all that need it. By leveraging cutting-edge technology like machine learning, natural language processing, natural language understanding, and natural language generation to power Cato, we aim to bridge this gap between supply and demand and make access to legal help and knowledge easy, affordable, reliable, and time-saving for all,” said Anthony Remo Luna, CEO and co-founder of Keeleg.

    Continuing its mission to make legal access easy and to foster its commitment to people who seek legal knowledge, the company will release a suite of other legal products dedicated to Chapter 7 individual bankruptcy filings and wills and trusts by summer 2020. Users can sign up for its early access program to try the tools and prepare their applications for free.

    About the company:

    Keeleg is dedicated to leveraging technology to make legal knowledge and help easy and available to everyone. The San Francisco-based company includes award-winning attorneys with over 30 years of experience in the private practice of law, software engineers, and tech enthusiasts, and has created a new platform that leverages artificial intelligence to offer affordable, reliable and time-saving legal solutions.

    Contact:

    Anthony Remo Luna, CEO and Co-Founder

    social@keeleg.com

    Stay in the know: follow Keeleg on Facebook, Twitter, and LinkedIn.

    Source: Keeleg
