ReportWire

Tag: iab-artificial intelligence

  • Welcome to the era of viral AI-generated ‘news’ images | CNN Business



    New York (CNN) —

    Pope Francis wearing a massive, white puffer coat. Elon Musk walking hand-in-hand with rival GM CEO Mary Barra. Former President Donald Trump being detained by police in dramatic fashion.

    None of these things actually happened, but AI-generated images depicting them did go viral online over the past week.

    The images ranged from obviously fake to, in some cases, compellingly real, and they fooled some social media users. Model and TV personality Chrissy Teigen, for example, tweeted that she thought the pope’s puffer coat was real, saying, “didn’t give it a second thought. no way am I surviving the future of technology.” The images also sparked a slew of headlines, as news organizations rushed to debunk the false images, especially those of Trump, who was ultimately indicted by a Manhattan grand jury on Thursday but has not been arrested.

    The situation demonstrates a new online reality: the rise of a new crop of buzzy artificial intelligence tools has made it cheaper and easier than ever to create realistic images, as well as audio and videos. And these images are likely to pop up with increasing frequency on social media.

    While these AI tools may enable new means of expressing creativity, the spread of computer-generated media also threatens to further pollute the information ecosystem. That risks adding to the challenges for users, news organizations and social media platforms to vet what’s real, after years of grappling with online misinformation featuring far less sophisticated visuals. There are also concerns that AI-generated images could be used for harassment, or to further drive divided internet users apart.

    “I worry that it will sort of get to a point where there will be so much fake, highly realistic content online that most people will just go with their tribal instincts as a guide to what they think is real, more than actually informed opinions based on verified evidence,” said Henry Ajder, a synthetic media expert who works as an advisor to companies and government agencies, including Meta Reality Labs’ European Advisory Council.

    Images, compared to the AI-generated text that has also recently proliferated thanks to tools like ChatGPT, can be especially powerful in provoking emotions when people view them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI, a nonprofit industry group. That can make it harder for people to slow down and evaluate whether what they’re looking at is real or fake.

    What’s more, coordinated bad actors could eventually attempt to create fake content in bulk — or suggest that real content is computer-generated — in order to confuse internet users and provoke certain behaviors.

    “The paranoia of an impending Trump … potential arrest created a really useful case study in understanding what the potential implications are, and I think we’re very lucky that things did not go south,” said Ben Decker, CEO of threat intelligence group Memetica. “Because if more people had had that idea en masse, in a coordinated fashion, I think there’s a universe where we could start to see the online to offline effects.”

    Computer-generated image technology has improved rapidly in recent years, from the photoshopped image of a shark swimming through a flooded highway that has been repeatedly shared during natural disasters to the websites that four years ago began churning out mostly unconvincing fake photos of non-existent people.

    Many of the recent viral AI-generated images were created by a tool called Midjourney, a platform less than a year old that allows users to create images based on short text prompts. On its website, Midjourney describes itself as “a small self-funded team,” with just 11 full-time staff members.

    A cursory glance at a Facebook page popular among Midjourney users reveals AI-generated images of a seemingly inebriated Pope Francis, elderly versions of Elvis and Kurt Cobain, Musk in a robotic Tesla bodysuit and many creepy animal creations. And that’s just from the past few days.

    Midjourney has emerged as a popular tool for users to create AI-generated images.

    The latest version of Midjourney is only available to a select number of paid users, Midjourney CEO David Holz told CNN in an email Friday. Midjourney this week paused access to the free trial of its earlier versions due to “extraordinary demand and trial abuse,” according to a Discord post from Holz, but he told CNN it was unrelated to the viral images. The creator of the Trump arrest images also claimed he was banned from the site.

    The rules page on the company’s Discord site instructs users: “Don’t use our tools to make images that could inflame, upset, or cause drama. That includes gore and adult content.”

    “Moderation is hard and we’ll be shipping improved systems soon,” Holz told CNN. “We’re taking lots of feedback and ideas from experts and the community and are trying to be really thoughtful.”

    In most cases, the creators of the recent viral images don’t appear to have been acting malevolently. The Trump arrest images were created by the founder of the online investigative journalism outlet Bellingcat, who clearly labeled them as his fabrications, even if other social media users weren’t as discerning.

    There are efforts by platforms, AI technology companies and industry groups to improve the transparency around when a piece of content is generated by a computer.

    Platforms including Meta’s Facebook and Instagram, Twitter and YouTube have policies restricting or prohibiting the sharing of manipulated media that could mislead users. But as the use of AI generation tools grows, even such policies could threaten to undermine user trust. If, for example, a fake image accidentally slipped through a platform’s detection system, “it could give people false confidence,” Ajder said. “They’ll say, ‘there’s a detection system that says it’s real, so it must be real.’”

    Work is also underway on technical solutions that would, for example, watermark an AI-generated image or include a transparent label in an image’s metadata, so anyone viewing it across the internet would know it was created by a computer. The Partnership on AI has developed a set of standard, responsible practices for synthetic media along with partners like ChatGPT-creator OpenAI, TikTok, Adobe, Bumble and the BBC, which includes recommendations such as how to disclose an image was AI-generated and how companies can share data around such images.
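
    To make the metadata approach concrete, here is a minimal sketch in Python using the Pillow imaging library. The “ai_generated” and “generator” keys are hypothetical labels chosen for illustration, not any published standard; real proposals in this space define richer, cryptographically signed manifests.

        import sys

        from PIL import Image
        from PIL.PngImagePlugin import PngInfo

        def label_as_ai_generated(src_path, dst_path):
            # Re-save a PNG with a plain-text provenance note in its metadata.
            # The keys below are hypothetical examples, not an industry standard.
            image = Image.open(src_path)
            metadata = PngInfo()
            metadata.add_text("ai_generated", "true")
            metadata.add_text("generator", "example-image-model")
            image.save(dst_path, pnginfo=metadata)

        def read_label(path):
            # Return the PNG text chunks, where the label would appear to any
            # viewer that checks metadata.
            return dict(Image.open(path).text)

        if __name__ == "__main__":
            label_as_ai_generated(sys.argv[1], sys.argv[2])
            print(read_label(sys.argv[2]))  # e.g. {'ai_generated': 'true', ...}

    A caveat: metadata like this is easily stripped when an image is re-encoded or screenshotted, which is one reason watermarking research also explores marks embedded in the pixels themselves.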

    “The idea is that these institutions are all committed to disclosure, consent and transparency,” Leibowicz said.

    A group of tech leaders, including Musk and Apple co-founder Steve Wozniak, this week wrote an open letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” Still, it’s not clear whether any labs will take such a step. And as the technology rapidly improves and becomes accessible beyond a relatively small group of corporations committed to responsible practices, lawmakers may need to get involved, Ajder said.

    “This new age of AI can’t be held in the hands of a few massive companies getting rich off of these tools, we need to democratize this technology,” he said. “At the same time, there are also very real and legitimate concerns of having a radical open approach where you just open source a tool or have very minimal restrictions on its use is going to lead to a massive scaling of harm … and I think legislation will probably play a role in reining in some of the more radically open models.”


  • AI pioneer quits Google to warn about the technology’s ‘dangers’ | CNN Business



    New York (CNN) —

    Geoffrey Hinton, who has been called the “Godfather of AI,” confirmed Monday that he left his role at Google last week to speak out about the “dangers” of the technology he helped to develop.

    Hinton’s pioneering work on neural networks shaped artificial intelligence systems powering many of today’s products. He worked part-time at Google for a decade on the tech giant’s AI development efforts, but he has since come to have concerns about the technology and his role in advancing it.

    “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times, which was first to report his decision.

    In a tweet Monday, Hinton said he left Google so he could speak freely about the risks of AI, rather than because of a desire to criticize Google specifically.

    “I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton said in a tweet. “Google has acted very responsibly.”

    Jeff Dean, chief scientist at Google, said Hinton “has made foundational breakthroughs in AI” and expressed appreciation for Hinton’s “decade of contributions at Google.”

    “We remain committed to a responsible approach to AI,” Dean said in a statement provided to CNN. “We’re continually learning to understand emerging risks while also innovating boldly.”

    Hinton’s decision to step back from the company and speak out on the technology comes as a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies.

    In March, some prominent figures in tech signed a letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

    In the interview with the Times, Hinton echoed concerns about AI’s potential to eliminate jobs and create a world where many will “not be able to know what is true anymore.” He also pointed to the stunning pace of advancement, far beyond what he and others had anticipated.

    “The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

    Even before stepping aside from Google, Hinton had spoken publicly about AI’s potential to do harm as well as good.

    “I believe that the rapid progress of AI is going to transform society in ways we do not fully understand and not all of the effects are going to be good,” Hinton said in a 2021 commencement address at the Indian Institute of Technology Bombay in Mumbai. He noted how AI will boost healthcare while also creating opportunities for lethal autonomous weapons. “I find this prospect much more immediate and much more terrifying than the prospect of robots taking over, which I think is a very long way off.”

    Hinton isn’t the first Google employee to raise a red flag on AI. In July, the company fired an engineer who claimed an unreleased AI system had become sentient, saying he violated employment and data security policies. Many in the AI community pushed back strongly on the engineer’s assertion.


  • Amazon looks to adapt Alexa to the rise of ChatGPT | CNN Business




    (CNN) —

    For years, Alexa has been synonymous with virtual assistants that can interact with users and do tasks on their behalf.

    Now Amazon is trying to keep pace with a new wave of conversational AI tools that have accelerated the artificial intelligence arms race in the tech industry and rapidly reshaped what consumers may expect from their tech products.

    Amazon’s goal is to use AI “to create this great personal assistant,” said Dave Limp, senior VP of devices and services, in a recent interview with CNN. “We’ve been using all forms of AI for a long time, but now that we see this emergence of generative AI, we can accelerate that vision even faster.”

    Generative AI refers to a type of AI that can create new content, such as text and images, in response to user prompts. Limp did not elaborate on how generative AI could be used in Alexa products, but there are clear possibilities.

    In theory, this technology could one day help Alexa have more natural conversations with users, answer more complex questions, and be more creative by telling stories or making up song lyrics in seconds. It could also enable more personalized interactions, allowing the assistant to learn the device owner’s interests and preferences and to better tailor its responses to each person.

    “We’re not done and won’t be done until Alexa is as good or better than the ‘Star Trek’ computer,” Limp said. “And to be able to do that, it has to be conversational. It has to know all. It has to be the true source of knowledge for everything.”

    Alexa launched nearly a decade ago and, along with Siri, Cortana and other voice assistants, seemed poised to change the way people interacted with technology. But the viral success of ChatGPT has arguably accomplished that faster and across a wider range of everyday products.

    The effort to continue updating the technology that powers Alexa comes at a difficult moment for Amazon. Like other Big Tech companies, Amazon is now slashing staff and shelving products in an urgent effort to cut costs amid broader economic uncertainty. The Alexa division has not escaped unscathed.

    Amazon confirmed plans in January to lay off more than 18,000 employees as the global economic outlook continued to worsen. In March, the company said about 9,000 more jobs would be impacted. Limp said his division lost about 2,000 people, about half of whom were from the Alexa team.

    Amazon also shut down some of the products it spun up earlier in the pandemic, such as its wearable fitness brand Halo, which allowed users to ask Alexa questions about their health and wellness. Limp said the company also shelved some “more risky” projects. “I wouldn’t doubt we’ll dust them off at some point and bring them back,” he said. “We’re still taking a lot of risks in this organization.”

    But Limp said Alexa remains a “North Star” for his division. “To give you a sense, there’s still thousands and thousands of people working on Alexa,” he said.

    Amazon is indeed still investing in Alexa and its related Echo smart speaker lineup. Last week, the company unveiled several new products, including the $39.99 Echo Pop and the $89.99 Echo Show 5, its smart speaker with a screen. While the products feature incremental updates, Limp said Amazon’s current lineup contains hints of what’s to come with its AI efforts, beyond generative AI.

    For example, if Alexa is enabled on an Echo Show, where it can rotate and follow users around the room, “you’ll see glimmers of where it’s going over the next months and years,” Limp said.

    But generative AI remains a key focus for the company. Amazon CEO Andy Jassy said in a letter to shareholders in April that the company is focused on “investing heavily” in the technology “across all of our consumer, seller, brand, and creator experiences.”

    The company is reportedly working on adding ChatGPT-like search capabilities for its e-commerce store. Amazon is also rumored to be planning to use generative AI to bring conversational language to a home robot.

    While Limp didn’t comment on the report, he said the end goal has long been for Alexa to communicate with users in a fluid, natural way, whether it’s through an Echo device or other products such as its robotic dog, Astro.

    The concept remains a “hard technical challenge,” he said, but one that is “more tractable” with generative AI. “There’s still some hard corner cases and things to work out,” he said.


  • Schumer outlines plan for how Senate will regulate AI | CNN Business




    (CNN) —

    Senate Majority Leader Chuck Schumer announced a broad, open-ended plan for regulating artificial intelligence on Wednesday, describing AI as an unprecedented challenge for Congress that effectively has policymakers “starting from scratch.”

    The plan, Schumer said at a speech in Washington, will begin with at least nine panels to identify and discuss the hardest questions that regulations on AI will have to answer, including how to protect workers, national security and copyright, and how to defend against “doomsday scenarios.” The panels will be composed of experts from industry, academia and civil society, with the first sessions taking place in September, Schumer said.

    The Senate will then turn to committee chairs and other vocal lawmakers on AI legislation to develop bills reflecting the panel discussions, Schumer added, arguing that the resulting US solution could leapfrog existing regulatory proposals from around the world.

    “If we can put this together in a very serious way, I think the rest of the world will follow and we can set the direction of how we ought to go in AI, because I don’t think any of the existing proposals have captured that imagination,” Schumer said, reflecting on other recent proposals such as the European Union’s draft AI Act, which last week was approved by the European Parliament.

    The speech represents Schumer’s most definitive remarks to date on a problem that has dogged Congress for months amid the wide embrace of tools such as ChatGPT: How to catch up, or get ahead, on policymaking for a technology that is already in the hands of millions of people and evolving rapidly.

    In the wake of ChatGPT’s viral success, Silicon Valley has raced to develop and deploy a new crop of generative AI tools that can produce images and writing almost instantly, with the potential to change how people work, shop and interact with each other. But these same tools have also raised concerns for their potential to make factual errors, spread misinformation and perpetuate biases, among other issues.

    In contrast to the fast pace of AI advancements, Schumer has stressed the importance of a deliberate approach, focusing on getting lawmakers acquainted with the basic facts of the technology and the issues it raises before seeking to legislate. He and three other colleagues began last week by convening the first in a series of closed-door briefings on AI for senators that is expected to run through the summer.

    In his remarks Wednesday, Schumer appeared to acknowledge criticism of his pace.

    “I know many of you have spent months calling on us to act,” he said. “I hear you. I hear you loud and clear.”

    But he described AI as a novel issue for which Congress lacks a guide.

    “It’s not like labor, or healthcare, or defense, where Congress has had a long history we can work off of,” he said. “Experts aren’t even sure which questions policymakers should be asking. In many ways, we’re starting from scratch.”

    Schumer described his plan as laying “a foundation for AI policy” that will do “years of work in a matter of months.”

    To guide that process, Schumer expanded on a set of principles he first announced in April. Formally unveiling the framework on Wednesday, Schumer said any legislation on AI should be geared toward facilitating innovation before addressing risks to national security or democratic governance.

    “Innovation first,” Schumer said, “but with security, accountability, [democratic] foundations and explainability.”

    The last two pillars of his framework, Schumer said, may be among the most important, as unrestricted artificial intelligence could undermine electoral processes or make it impossible to critically evaluate an AI’s claims.

    Schumer’s remarks were restrained in calling for any specific proposals. At one point, he acknowledged that a consensus may even emerge that recommends against major government intervention on the technology.

    But he was clear on one point: “We do — we do — need to require companies to develop a system where in simple and understandable terms users understand why the system produced a particular answer, and where that answer came from.”

    The Senate may still be a long way off from unveiling any comprehensive proposal, however. Schumer predicted that the process is likely to take longer than weeks but shorter than years.

    “Months would be the proper timeline,” he said.


  • Leading AI companies commit to outside testing of AI systems and other safety commitments | CNN Politics




    (CNN) —

    Microsoft, Google and other leading artificial intelligence companies committed Friday to put new AI systems through outside testing before they are publicly released and to clearly label AI-generated content, the White House announced.

    The pledges are part of a series of voluntary commitments agreed to by the White House and seven leading AI companies – which also include Amazon, Meta, OpenAI, Anthropic and Inflection – aimed at making AI systems and products safer and more trustworthy while Congress and the White House develop more comprehensive regulations to govern the rapidly growing industry. President Joe Biden met with top executives from all seven companies at the White House on Friday.

    In a speech Friday, Biden called the companies’ commitments “real and concrete,” adding they will help fulfill their “fundamental obligations to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and our shared values.”

    “We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years. That has been an astounding revelation,” Biden said.

    White House officials acknowledge that some of the companies have already enacted some of the commitments but argue that, taken together, they will raise “the standards for safety, security and trust of AI” and will serve as a “bridge to regulation.”

    “It’s a first step, it’s a bridge to where we need to go,” White House deputy chief of staff Bruce Reed, who has been managing the AI policy process, said in an interview. “It will help industry and government develop the capacities to make sure that AI is safe and secure. And we pushed to move so quickly because this technology is moving farther and faster than anything we’ve seen before.”

    While most of the companies already conduct internal “red-teaming” exercises, the commitments will mark the first time they have all committed to allow outside experts to test their systems before they are released to the public. A red team exercise is designed to simulate what could go wrong with a given technology – such as a cyberattack or its potential to be used by malicious actors – and allows companies to proactively identify shortcomings and prevent negative outcomes.

    Reed said the external red-teaming “will help pave the way for government oversight and regulation,” potentially laying the groundwork for that outside testing to be carried out by a government regulator or licenser.

    The commitments could also lead to widespread watermarking of AI-generated audio and visual content with the aim of combating fraud and misinformation.

    The companies also committed to investing in cybersecurity and “insider threat safeguards,” in particular to protect AI model weights, which are essentially the knowledge base upon which AI systems rely; creating a robust mechanism for third parties to report system vulnerabilities; prioritizing research on the societal risks of AI; and developing and deploying AI systems “to help address society’s greatest challenges,” according to the White House.

    Asked by CNN’s Jake Tapper Friday about worries he has when it comes to AI, Microsoft Vice Chair and President Brad Smith pointed to “what people, bad actors, individuals or countries will do” with the technology.

    “That they’ll use it to undermine our elections, that they will use it to seek to break into our computer networks. You know, that they’ll use it in ways that will undermine the security of our jobs,” he said.

    But, Smith argued, “the best way to solve these problems is to focus on them, to understand them, to bring people together, and to solve them. And the interesting thing about AI, in my opinion, is that when we do that, and we are determined to do that, we can use AI to defend against these problems far more effectively than we can today.”

    Pressed by Tapper about AI and compensation concerns listed in a recent letter signed by thousands of authors, Smith said: “I don’t want it to undermine anybody’s ability to make a living by creating, by writing. That is the balance that we should all want to strike.”

    All of the commitments are voluntary and White House officials acknowledged that there is no enforcement mechanism to ensure the companies stick to the commitments, some of which also lack specificity.

    Common Sense Media, a child internet-safety organization, commended the White House for taking steps to establish AI guardrails, but warned that “history would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”

    “If we’ve learned anything from the last decade and the complete mismanagement of social media governance, it’s that many companies offer a lot of lip service,” Common Sense Media CEO James Steyer said in a statement. “And then they prioritize their profits to such an extent that they will not hold themselves accountable for how their products impact the American people, particularly children and families.”

    The federal government’s failure to regulate social media companies at their inception – and the resistance from those companies – has loomed large for White House officials as they have begun crafting potential AI regulations and executive actions in recent months.

    “The main thing we stressed throughout the discussions with the companies was that we should make this as robust as possible,” Reed said. “The tech industry made a mistake in warding off any kind of oversight, legislation and regulation a decade ago and I think that AI is progressing even more rapidly than that and it’s important for this bridge to regulation to be a sturdy one.”

    The commitments were crafted during a monthslong back-and-forth between the AI companies and the White House that began in May when a group of AI executives came to the White House to meet with Biden, Vice President Kamala Harris and White House officials. The White House also sought input from non-industry AI safety and ethics experts.

    White House officials are working to move beyond voluntary commitments, readying a series of executive actions, the first of which is expected to be unveiled later this summer. Officials are also working closely with lawmakers on Capitol Hill to develop more comprehensive legislation to regulate AI.

    “This is a serious responsibility. We have to get it right. There’s an enormous, enormous potential upside as well,” Biden said.

    In the meantime, White House officials say the companies will “immediately” begin implementing the voluntary commitments and hope other companies sign on in the future.

    “We expect that other companies will see how they also have an obligation to live up to the standards of safety, security and trust. And they may choose – and we would welcome them choosing – joining these commitments,” a White House official said.

    This story has been updated with additional details.


  • Italy blocks ChatGPT over privacy concerns | CNN Business



    London (CNN) —

    Regulators in Italy issued a temporary ban on ChatGPT Friday, effective immediately, due to privacy concerns and said they had opened an investigation into how OpenAI, the US company behind the popular chatbot, uses data.

    Italy’s data protection agency said users lacked information about the collection of their data and that a breach at ChatGPT had been reported on March 20.

    “There appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” the agency said.

    The Italian regulator also expressed concerns over the lack of age verification for ChatGPT users. It argued that this “exposes children to receiving responses that are absolutely inappropriate to their age and awareness.” The platform is supposed to be for users older than 13, it noted.

    The data protection agency said OpenAI would be barred from processing the data of Italian users until it “respects the privacy regulation.”

    OpenAI has been given 20 days to communicate the measures it will take to comply with Italy’s data rules. Otherwise, it could face a penalty of up to €20 million ($21.8 million), or up to 4% of its annual global turnover.
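
    For scale, GDPR’s penalty cap is whichever of the two figures is greater, the fixed €20 million or 4% of annual global turnover. A short sketch in Python, using the numbers from the article:

        def max_gdpr_fine_eur(annual_global_turnover_eur):
            # GDPR Article 83(5) cap: EUR 20 million or 4% of annual global
            # turnover, whichever is higher.
            return max(20_000_000, 0.04 * annual_global_turnover_eur)

        # A company with EUR 1 billion in annual turnover faces up to EUR 40 million:
        print(max_gdpr_fine_eur(1_000_000_000))  # 40000000.0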

    Since its public release four months ago, ChatGPT has become a global phenomenon, amassing millions of users impressed with its ability to craft convincing written content, including academic essays, business plans and short stories.

    But concerns have also emerged about its rapid spread and what large-scale uptake of such tools could mean for society, putting pressure on regulators around the world to act.

    The European Union is finalizing rules on the use of artificial intelligence in the bloc. In the meantime, companies operating in the EU must comply with the General Data Protection Regulation, or GDPR, as well as the Digital Services Act and Digital Markets Act, which apply to tech platforms.

    Meanwhile, so-called “generative AI” tools available to the public are proliferating.

    Earlier this month, OpenAI released GPT-4, a new version of the technology underpinning ChatGPT that is even more powerful. The company said the updated technology passed a simulated law school bar exam with a score around the top 10% of test takers; by contrast, the prior version, GPT-3.5, scored around the bottom 10%.

    This week, some of the biggest names in tech, including Elon Musk, called for AI labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    — Julia Horowitz contributed reporting.


  • Snapchat’s new AI chatbot is already raising alarms among teens and parents | CNN Business




    (CNN) —

    Within hours of Snapchat rolling out its My AI chatbot to all users last week, Lyndsi Lee, a mother from East Prairie, Missouri, told her 13-year-old daughter to stay away from the feature.

    “It’s a temporary solution until I know more about it and can set some healthy boundaries and guidelines,” said Lee, who works at a software company. She worries about how My AI presents itself to young users like her daughter on Snapchat.

    The feature is powered by the viral AI chatbot tool ChatGPT – and like ChatGPT, it can offer recommendations, answer questions and converse with users. But Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it, and bring it into conversations with friends.

    The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear you’re talking to a computer.

    “I don’t think I’m prepared to know how to teach my kid how to emotionally separate humans and machines when they essentially look the same from her point of view,” Lee said. “I just think there is a really clear line [Snapchat] is crossing.”

    The new tool is facing backlash not only from parents but also from some Snapchat users who are bombarding the app with bad reviews in the app store and criticisms on social media over privacy concerns, “creepy” exchanges and an inability to remove the feature from their chat feed unless they pay for a premium subscription.

    While some may find value in the tool, the mixed reactions hint at the risks companies face in rolling out new generative AI technology to their products, and particularly in products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow. Almost overnight, Snapchat has forced some families and lawmakers to reckon with questions that may have seemed theoretical only months ago.

    In a letter to the CEOs of Snap and other tech companies last month, weeks after My AI was released to Snap’s subscription customers, Democratic Sen. Michael Bennet raised concerns about the interactions the chatbot was having with younger users. In particular, he cited reports that it can provide kids with suggestions for how to lie to their parents.

    “These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 percent of American teenagers use,” Bennet wrote. “Although Snap concedes My AI is ‘experimental,’ it has nevertheless rushed to enroll American kids and adolescents in its social experiment.”

    In a blog post last week, the company said: “My AI is far from perfect but we’ve made a lot of progress.”

    In the days since its formal launch, Snapchat users have been vocal about their concerns. One user called his interaction “terrifying” after, he said, the chatbot lied about not knowing his location. When the user later lightened the conversation, he said, the chatbot accurately revealed he lived in Colorado.

    In another TikTok video with more than 1.5 million views, a user named Ariel recorded a song with an intro, chorus and piano chords written by My AI about what it’s like to be a chatbot. When she sent the recorded song back, she said the chatbot denied its involvement with the reply: “I’m sorry, but as an AI language model, I don’t write songs.” Ariel called the exchange “creepy.”

    Other users shared concerns about how the tool understands, interacts with and collects information from photos. “I snapped a picture … and it said ‘nice shoes’ and asked who the people [were] in the photo,” a Snapchat user wrote on Facebook.

    Snapchat told CNN it continues to improve My AI based on community feedback and is working to establish more guardrails to keep its users safe. The company also said that similar to its other tools, users don’t have to interact with My AI if they don’t want to.

    It’s not possible to remove My AI from chat feeds, however, unless a user subscribes to its monthly premium service, Snapchat+. Some teens say they have opted to pay the $3.99 Snapchat+ fee to turn off the tool before promptly canceling the service.

    But not all users dislike the feature.

    One user wrote on Facebook that she’s been asking My AI for homework help. “It gets all of the questions right.” Another noted she’s leaned on it for comfort and advice. “I love my little pocket, bestie!” she wrote. “You can change the Bitmoji [avatar] for it and surprisingly it offers really great advice to some real life situations. … I love the support it gives.”

    ChatGPT, which is trained on vast troves of data online, has previously come under fire for spreading inaccurate information, responding to users in ways they might find inappropriate and enabling students to cheat. But Snapchat’s integration of the tool risks heightening some of these issues, and adding new ones.

    Alexandra Hamlet, a clinical psychologist in New York City, said the parents of some of her patients have expressed concern about how their teenager could interact with Snapchat’s tool. There’s also concern around chatbots giving advice about mental health, because AI tools can reinforce someone’s confirmation bias, making it easier for users to seek out interactions that confirm their unhelpful beliefs.

    “If a teen is in a negative mood and does not have the awareness or desire to feel better, they may seek out a conversation with a chatbot that they know will make them feel worse,” she said. “Over time, having interactions like these can erode a teen’s sense of worth, despite their knowing that they are really talking to a bot. In an emotional state of mind, it becomes less possible for an individual to consider this type of logic.”

    For now, the onus is on parents to start meaningful conversations with their teens about best practices for communicating with AI, especially as the tools start to show up in more popular apps and services.

    Sinead Bovell, the founder of WAYE, a startup that helps prepare youth for a future with advanced technologies, said parents need to make it very clear “chatbots are not your friend.”

    “They’re also not your therapists or a trusted adviser, and anyone interacting with them needs to be very cautious, especially teenagers who may be more susceptible to believing what they say,” she said.

    “Parents should be talking to their kids now about how they shouldn’t share anything personal with a chatbot that they would [with] a friend – even though from a user design perspective, the chatbot exists in the same corner of Snapchat.”

    She added that federal regulation that would require companies to abide by specific protocols is also needed to keep up with the rapid pace of AI advancement.


  • First on CNN: Senators press Google, Meta and Twitter on whether their layoffs could imperil 2024 election | CNN Business




    (CNN) —

    Three US senators are pressing Facebook-parent Meta, Google-parent Alphabet and Twitter about whether their layoffs may have hindered the companies’ ability to fight the spread of misinformation ahead of the 2024 elections.

    In a letter to the companies dated Tuesday, the lawmakers warned that reported staff cuts to content moderation and other teams could make it harder for the companies to fulfill their commitments to election integrity.

    “This is particularly troubling given the emerging use of artificial intelligence to mislead voters,” wrote Minnesota Democratic Sen. Amy Klobuchar, Vermont Democratic Sen. Peter Welch and Illinois Democratic Sen. Dick Durbin, according to a copy of the letter reviewed by CNN.

    Since purchasing Twitter in October, Elon Musk has slashed headcount by more than 80%, in some cases eliminating entire teams.

    Alphabet announced plans to cut roughly 12,000 workers across product areas and regions earlier this year. And Meta has previously said it would eliminate about 21,000 jobs over two rounds of layoffs, hitting across teams devoted to policy, user experience and well-being, among others.

    “We remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community – including our efforts to prepare for elections around the world,” Andy Stone, a spokesperson for Meta, said in a statement to CNN about the letter.

    Alphabet and Twitter did not immediately respond to a request for comment.

    The pullback at those companies has coincided with a broader industry retrenchment in the face of economic headwinds. Peers such as Microsoft and Amazon have also trimmed their workforces, while others have announced hiring freezes.

    But the social media companies are coming under greater scrutiny now in part due to their role facilitating the US electoral process.

    Tuesday’s letter asked Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai and Twitter CEO Linda Yaccarino how each company is preparing for the 2024 elections and for mis- and disinformation surrounding the campaigns.

    To illustrate their concerns, the lawmakers pointed to recent changes at Alphabet-owned YouTube to allow the sharing of false claims that the 2020 presidential election was stolen, along with what they described as content moderation “challenges” at Twitter since the layoffs.

    The letter, which seeks responses by July 10, also asked whether the companies may hire more content moderation employees or contractors ahead of the election, and how the platforms may be specifically preparing for the rise of AI-generated deepfakes in politics.

    Already, candidates such as Florida Gov. Ron DeSantis appear to have used fake, AI-generated images to attack their opponents, raising questions about the risks that artificial intelligence could pose for democracy.


  • ‘It almost doubled our workload’: AI is supposed to make jobs easier. These workers disagree | CNN Business




    (CNN) —

    A new crop of artificial intelligence tools carries the promise of streamlining tasks, improving efficiency and boosting productivity in the workplace. But that hasn’t been Neil Clarke’s experience so far.

    Clarke, an editor and publisher, said he recently had to temporarily shutter the online submission form for his science fiction and fantasy magazine, Clarkesworld, after his team was inundated with a deluge of “consistently bad” AI-generated submissions.

    “They’re some of the worst stories we’ve seen, actually,” Clarke said of the hundreds of pieces of AI-produced content he and his team of humans now must manually parse through. “But it’s more of the problem of volume, not quality. The quantity is burying us.”

    “It almost doubled our workload,” he added, describing the latest AI tools as “a thorn in our side for the last few months.” Clarke said that he anticipates his team is going to have to close submissions again. “It’s going to reach a point where we can’t handle it.”

    Since ChatGPT launched late last year, many of the tech world’s most prominent figures have waxed poetic about how AI has the potential to boost productivity, help us all work less and create new and better jobs in the future. “In the next few years, the main impact of AI on work will be to help people do their jobs more efficiently,” Microsoft co-founder Bill Gates said in a blog post recently.

    But as is often the case with tech, the long-term impact isn’t always clear or the same across industries and markets. Moreover, the road to a techno-utopia is often bumpy and plagued with unintended consequences, whether it’s lawyers fined for submitting fake court citations from ChatGPT or a small publication buried under an avalanche of computer-generated submissions.

    Big Tech companies are now rushing to jump on the AI bandwagon, pledging significant investments into new AI-powered tools that promise to streamline work. These tools can help people quickly draft emails, make presentations and summarize large datasets or texts.

    In a recent study, researchers at the Massachusetts Institute of Technology found that access to ChatGPT increased productivity for workers who were assigned tasks like writing cover letters, “delicate” emails and cost-benefit analyses. “I think what our study shows is that this kind of technology has important applications in white collar work. It’s a useful technology. But it’s still too early to tell if it will be good or bad, or how exactly it’s going to cause society to adjust,” Shakked Noy, a PhD student in MIT’s Department of Economics, who co-authored the paper, said in a statement.

    Mathias Cormann, the secretary-general of the Organization for Economic Co-operation and Development, recently said the intergovernmental organization has found that AI can improve some aspects of job quality, but there are tradeoffs.

    “Workers do report, though, that the intensity of their work has increased after the adoption of AI in their workplaces,” Cormann said in public remarks, pointing to the findings of a report released by the organization. The report also found that for non-AI specialists and non-managers, the use of AI had only a “minimal impact on wages so far” – meaning that for the average employee, the work is scaling up, but the pay isn’t.

    Ivana Saula, the research director for the International Association of Machinists and Aerospace Workers, said that workers in her union have said they feel like “guinea pigs” as employers rush to roll out AI-powered tools on the job.

    And it hasn’t always gone smoothly, Saula said. The implementation of these new tech tools has often led to more “residual tasks that a human still needs to do.” This can include picking up additional logistics tasks that a machine simply can’t do, Saula said, adding more time and pressure to a daily work flow.

    The union represents a broad range of workers, including in air transportation, health care, public service, manufacturing and the nuclear industry, Saula said.

    “It’s never just clean cut, where the machine can entirely replace the human,” Saula told CNN. “It can replace certain aspects of what a worker does, but there’s some tasks that are outstanding that get placed on whoever remains.”

    Workers are also “saying that my workload is heavier” after the implementation of new AI tools, Saula said, and “the intensity at which I work is much faster because now it’s being set by the machine.” She added that the feedback they are getting from workers shows how important it is to “actually involve workers in the process of implementation.”

    “Because there’s knowledge on the ground, on the frontlines, that employers need to be aware of,” she said. “And oftentimes, I think there’s disconnects between frontline workers and what happens on shop floors, and upper management, and not to mention CEOs.”

    Perhaps nowhere are the pros and cons of AI for businesses as apparent as in the media industry. These tools offer the promise of accelerating if not automating copywriting, advertising and certain editorial work, but there have already been some notable blunders.

    News outlet CNET had to issue “substantial” corrections earlier this year after experimenting with using an AI tool to write stories. And what was supposed to be a simple AI-written story on Star Wars published by Gizmodo earlier this month similarly required a correction and resulted in employee turmoil. But both outlets have signaled they will still move forward with using the technology to assist in newsrooms.

    Others like Clarke, the publisher, have tried to combat the fallout from the rise of AI by relying on more AI. Clarke said he and his team turned to AI-powered detectors of AI-generated work to deal with the deluge of submissions but found these tools weren’t helpful because of how unreliably they flag “false positives and false negatives,” especially for writers whose second language is English.

    “You listen to these AI experts, they go on about how these things are going to do amazing breakthroughs in different fields,” Clarke said. “But those aren’t the fields they’re currently working in.”


  • How your phone learned to see in the dark | CNN Business



    New York (CNN) —

    Open up Instagram at any given moment and it probably won’t take long to find crisp pictures of the night sky, a skyline after dark or a dimly lit restaurant. While shots like these used to require advanced cameras, they’re now often possible from the phone you already carry around in your pocket.

    Tech companies such as Apple, Samsung and Google are investing resources to improve their night photography options at a time when camera features have increasingly become a key selling point for smartphones that otherwise largely all look and feel the same from one year to the next.

    Earlier this month, Google brought a faster version of its Night Sight mode, which uses AI algorithms to brighten images taken in dark environments, to more of its Pixel models. Apple’s Night mode, which is available on models as far back as the iPhone 11, was touted as a premier feature on its iPhone 14 lineup last year thanks to its improved camera system.

    These tools have come a long way in just the past few years, thanks to significant advancements in artificial intelligence technology as well as image processing that has become sharper, quicker, and more resilient to challenging photography situations. And smartphone makers aren’t done yet.

    “People increasingly rely on their smartphones to take photos, record videos, and create content,” said Lian Jye Su, an artificial intelligence analyst at ABI Research. “[This] will only fuel the smartphone companies to up their games in AI-enhanced image and video processing.”

    While there has been much focus lately on Silicon Valley’s renewed AI arms race over chatbots, the push to develop more sophisticated AI tools could also help further improve night photography and bring our smartphones closer to being able to see in the dark.

    Samsung’s Night mode feature, which is available on various Galaxy models but optimized for its premium S23 Ultra smartphone, promises to do what would have seemed unthinkable just five to 10 years ago: enable phones to take clearer pictures with little light.

    The feature is designed to minimize what’s called “noise,” a term in photography for the grainy distortion caused by poor lighting conditions, long exposure times and other factors that can take away from the quality of an image.

    The secret to reducing noise, according to the company, is a combination of the S23 Ultra’s adaptive 200-megapixel sensor and its image processing. After the shutter button is pressed, Samsung uses advanced multi-frame processing to combine multiple images into a single picture and AI to automatically adjust the photo as necessary.

    “When a user takes a photo in low or dark lighting conditions, the processor helps remove noise through multi-frame processing,” said Joshua Cho, executive vice president of Samsung’s Visual Solution Team. “Instantaneously, the Galaxy S23 Ultra detects the detail that should be kept, and the noise that should be removed.”
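
    The core of that multi-frame idea is simple to sketch, even though production pipelines are not. Below is a toy illustration in Python with NumPy, assuming a burst of already-aligned frames; real pipelines also have to align the frames, reject pixels that moved between shots and tone-map the result.

        import numpy as np

        def merge_frames(frames):
            # Average a stack of aligned noisy frames (N x H x W x 3, values 0-255).
            # Averaging N frames shrinks random sensor noise by roughly sqrt(N),
            # which is why burst capture helps so much in low light.
            merged = frames.astype(np.float32).mean(axis=0)
            return np.clip(merged, 0, 255).astype(np.uint8)

        # Demo: a dim, flat scene captured 8 times with simulated sensor noise.
        rng = np.random.default_rng(0)
        scene = np.full((64, 64, 3), 40.0)
        burst = scene + rng.normal(0.0, 10.0, size=(8, 64, 64, 3))
        single = np.clip(burst[0], 0, 255).astype(np.uint8)
        print(single.std(), merge_frames(burst).std())  # merged spread is far smaller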

    For Samsung and other tech companies, AI algorithms are crucial to delivering photos taken in the dark. “The AI training process is based on a large number of images tuned and annotated by experts, and AI learns the parameters to adjust for every photo taken in low-light situations,” Su explained.

    For example, algorithms identify the right level of exposure, determine the correct color palette and gradient under certain lighting conditions, sharpen blurred faces or objects artificially, and then make those changes. The final result, however, can look quite different from what the person taking the picture saw in real time, in what some might argue is a technical sleight-of-hand trick.

    Lights illuminate the Atlanta Botanical Gardens in this photo, taken using the Google Pixel 5’s Night Sight setting.

    Google is also focused on reducing noise in photography. Its AI-powered Night Sight feature captures a burst of longer-exposure frames. It then uses something called HDR+ Bracketing, which creates several photos with different settings. After a picture is taken, the images are combined to create “sharper photos” even in dark environments “that are still incredibly bright and detailed,” said Alex Schiffhauer, a group product manager at Google.
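
    Bracketing can likewise be sketched with a naive exposure-fusion rule. The example below, in Python with NumPy, weights each pixel by how close it is to mid-gray, a common textbook heuristic; this is an assumption for illustration, not how Google’s HDR+ merge actually works.

        import numpy as np

        def fuse_exposures(frames, sigma=0.2):
            # Blend differently exposed grayscale frames (N x H x W, values 0..1).
            # Pixels near mid-gray (0.5) get high weight; blown highlights and
            # crushed shadows contribute little to the final image.
            weights = np.exp(-((frames - 0.5) ** 2) / (2.0 * sigma ** 2))
            weights /= weights.sum(axis=0, keepdims=True)
            return (weights * frames).sum(axis=0)

        # Demo: dark, mid and bright exposures of the same flat scene.
        stack = np.array([0.05, 0.4, 0.9])[:, None, None] * np.ones((3, 2, 2))
        print(fuse_exposures(stack))  # ~0.44 everywhere, led by the well-exposed frame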

    While effective, there can be a slight but noticeable delay before the image is ready. But Schiffhauer said Google intends to speed up this process more on future Pixel iterations. “We’d love a world in which customers can get the quality of Night Sight without needing to hold still for a few seconds,” Schiffhauer said.

    Google also has an astrophotography feature, which allows people to take shots of the night sky without needing to tweak the exposure or other settings. The algorithms detect details in the sky and enhance them to stand out, according to the company.

    Apple has long been rumored to be working on an astrophotography feature, but some iPhone 14 Pro Max users have been able to capture pictures of the sky through its existing Night mode tool. When a device detects a low-light environment, Night mode turns on to capture details and brighten shots. (The company did not respond to a request to elaborate on how the algorithms work.)

    AI can make a difference in the image, but the end results for each of these features also depend on the phone’s lenses, said Gartner analyst Bill Ray. A traditional camera will have the lens several centimeters from the sensor, but the limited space on a phone often requires squeezing things together, which can result in a shallower depth of field and reduced image quality, especially in darker environments.

    “The quality of the lens is still a big deal, and how the phone addresses the lack of depth,” Ray said.

    While night photography on phones has come a long way, a buzzy new technology could push it ahead even more.

    Generative AI, the technology that powers the viral chatbot ChatGPT, has earned plenty of attention for its ability to create compelling essays and images in response to user prompts. But these AI systems, which are trained on vast troves of online data, also have potential to edit and process images.

    “In recent years, generative AI models have also been used in photo-editing functions like background removal or replacement,” Su said. If this technology is added to smartphone photo systems, it could eventually make night modes even more powerful, Su said.

    Big Tech companies, including Google, are already fully embracing this technology in other parts of their business. Meanwhile, smartphone chipset vendors like Qualcomm and MediaTek are looking to support more generative AI applications natively on consumer devices, Su said. These include image and video augmentation.

    “But this is still about two to three years away from limited versions of this showing up on smartphones,” he said.


  • Why the ‘Godfather of AI’ decided he had to ‘blow the whistle’ on the technology | CNN Business



    New York (CNN) —

    Geoffrey Hinton, also known as the “Godfather of AI,” decided he had to “blow the whistle” on the technology he helped develop after worrying about how smart it was becoming, he told CNN on Tuesday.

    “I’m just a scientist who suddenly realized that these things are getting smarter than us,” Hinton told CNN’s Jake Tapper in an interview on Tuesday. “I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us.”

    Hinton’s pioneering work on neural networks shaped artificial intelligence systems powering many of today’s products. On Monday, he made headlines for leaving his role at Google, where he had worked for a decade, in order to speak openly about his growing concerns around the technology.

    In an interview Monday with the New York Times, which was first to report his move, Hinton said he was concerned about AI’s potential to eliminate jobs and create a world where many will “not be able to know what is true anymore.” He also pointed to the stunning pace of advancement, far beyond what he and others had anticipated.

    “If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us, and there are very few examples of a more intelligent thing being controlled by a less intelligent thing,” Hinton told Tapper on Tuesday.

    “It knows how to program so it’ll figure out ways of getting around restrictions we put on it. It’ll figure out ways of manipulating people to do what it wants.”

    Hinton is not the only tech leader to speak out with concerns over AI. A number of prominent figures in the tech community signed a letter in March calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers the viral chatbot ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

    Apple co-founder Steve Wozniak, one of the signatories of the letter, appeared on “CNN This Morning” on Tuesday, echoing concerns about AI’s potential to spread misinformation.

    “Tricking is going to be a lot easier for those who want to trick you,” Wozniak told CNN. “We’re not really making any changes in that regard – we’re just assuming that the laws we have will take care of it.”

    Wozniak also said “some type” of regulation is probably needed.

    Hinton, for his part, told CNN he did not sign the petition. “I don’t think we can stop the progress,” he said. “I didn’t sign the petition saying we should stop working on AI because if people in America stop, people in China wouldn’t.”

    But he confessed to not having a clear answer for what to do instead.

    “It’s not clear to me that we can solve this problem,” Hinton told Tapper. “I believe we should put a big effort into thinking about ways to solve the problem. I don’t have a solution at present.”

  • AI industry and researchers sign statement warning of ‘extinction’ risk | CNN Business

    Washington (CNN) —

    Dozens of AI industry leaders, academics and even some celebrities on Tuesday called for reducing the risk of global annihilation due to artificial intelligence, arguing in a brief statement that the threat of an AI extinction event should be a top global priority.

    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by the Center for AI Safety.

    The statement was signed by leading industry officials including OpenAI CEO Sam Altman; the so-called “godfather” of AI, Geoffrey Hinton; top executives and researchers from Google DeepMind and Anthropic; Kevin Scott, Microsoft’s chief technology officer; Bruce Schneier, the internet security and cryptography pioneer; climate advocate Bill McKibben; and the musician Grimes, among others.

    The statement highlights wide-ranging concerns about the ultimate danger of unchecked artificial intelligence. AI experts have said society is still a long way from developing the kind of artificial general intelligence that is the stuff of science fiction; today’s cutting-edge chatbots largely reproduce patterns based on training data they’ve been fed and do not think for themselves.

    Still, the flood of hype and investment into the AI industry has led to calls for regulation at the outset of the AI age, before any major mishaps occur.

    The statement follows the viral success of OpenAI’s ChatGPT, which has helped heighten an arms race in the tech industry over artificial intelligence. In response, a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    Hinton, whose pioneering work helped shape today’s AI systems, previously told CNN he decided to leave his role at Google and “blow the whistle” on the technology after “suddenly” realizing “that these things are getting smarter than us.”

    Dan Hendrycks, director of the Center for AI Safety, said in a tweet Tuesday that the statement, first proposed by David Krueger, an AI professor at the University of Cambridge, does not preclude society from addressing other types of AI risk, such as algorithmic bias or misinformation.

    Hendrycks likened Tuesday’s statement to atomic scientists “issuing warnings about the very technologies they’ve created.”

    “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and,’” Hendrycks tweeted. “From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.”

  • OpenAI, maker of ChatGPT, hit with proposed class action lawsuit alleging it stole people’s data | CNN Business

    (CNN) —

    OpenAI, the company behind the viral ChatGPT tool, has been hit with a lawsuit alleging the company stole and misappropriated vast swaths of peoples’ data from the internet to train its AI tools.

    The proposed class action lawsuit, filed Wednesday in a California federal court, claims that OpenAI secretly scraped “massive amounts of personal data from the internet.” The nearly 160-page complaint alleges that this personal data, including “essentially every piece of data exchanged on the internet it could take,” was seized by the company without notice, consent or “just compensation.”

    Moreover, this data scraping occurred at an “unprecedented scale,” the suit claims.

    OpenAI did not immediately respond to CNN’s request for comment Wednesday. Microsoft, a major investor in OpenAI, was also named as a defendant in the suit and did not immediately respond to a request for comment.

    “By collecting previously obscure personal data of millions and misappropriating it to develop a volatile, untested technology, OpenAI put everyone in a zone of risk that is incalculable – but unacceptable by any measure of responsible data protection and use,” Timothy K. Giordano, a partner at Clarkson, the law firm behind the suit, said in a statement to CNN Wednesday.

    The complaint also claims that OpenAI products “use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”

    The lawsuit seeks injunctive relief in the form of a temporary freeze on further commercial use of OpenAI’s products. It also seeks payments of “data dividends” as financial compensation to people whose information was used to develop and train OpenAI’s tools.

    OpenAI publicly launched ChatGPT late last year, and the tool immediately went viral for its ability to generate compelling, human-sounding responses to user prompts. The success of ChatGPT spurred an apparent AI arms race in the tech world, as companies big and small are now racing to develop and deploy AI tools into as many products as possible.

  • OpenAI’s Sam Altman launches Worldcoin crypto project | CNN Business

    Worldcoin, a cryptocurrency project founded by OpenAI CEO Sam Altman, launched on Monday.

    The project’s core offering is its World ID, which the company describes as a “digital passport” to prove that its holder is a real human, not an AI bot. To get a World ID, a customer signs up for an in-person iris scan using Worldcoin’s ‘orb,’ a silver ball approximately the size of a bowling ball. Once the orb’s iris scan verifies that the person is a real human, it creates a World ID.
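
    As a rough sketch of that verify-then-issue flow, the illustrative Python below hashes a hypothetical iris code so the raw biometric never needs to be stored, and uses the digest both as the identifier and as a uniqueness check; Worldcoin’s actual protocol is far more involved.

        import hashlib

        def issue_world_id(iris_code, already_seen):
            # Retain only a digest of the biometric, never the raw scan.
            digest = hashlib.sha256(iris_code).hexdigest()
            if digest in already_seen:
                return None          # this human is already enrolled
            already_seen.add(digest)
            return digest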

    The company behind Worldcoin is San Francisco and Berlin-based Tools for Humanity.

    The project has 2 million users from its beta period, and with Monday’s launch, Worldcoin is scaling up “orbing” operations to 35 cities in 20 countries. As an enticement, those who sign up in certain countries will receive Worldcoin’s cryptocurrency token WLD.

    WLD’s price rose in early trading on Monday. On Binance, the world’s largest exchange, it peaked at $5.29 and by 1000 GMT was trading at $2.49, up from a starting price of $0.15, on $25.1 million of trading volume, according to the exchange’s website.
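
    For scale, a quick back-of-the-envelope check of those reported prices against the $0.15 start:

        start, peak, later = 0.15, 5.29, 2.49
        print(f"gain at peak:     {(peak / start - 1) * 100:.0f}%")   # roughly 3,427%
        print(f"gain at 1000 GMT: {(later / start - 1) * 100:.0f}%")  # roughly 1,560%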

    Blockchains can store the World IDs in a way that preserves privacy and can’t be controlled or shut down by any single entity, co-founder Alex Blania told Reuters.

    The project says World IDs will be necessary in the age of generative AI chatbots like ChatGPT, which produce remarkably humanlike language. World IDs could be used to tell the difference between real people and AI bots online.

    Altman told Reuters Worldcoin also can help address how the economy will be reshaped by generative AI.

    “People will be supercharged by AI, which will have massive economic implications,” he said.

    One example Altman likes is universal basic income, or UBI, a social benefits program, usually run by governments, in which every individual is entitled to payments. Because AI “will do more and more of the work that people now do,” Altman believes UBI can help combat income inequality. And since only real people can have World IDs, the system could be used to reduce fraud when deploying UBI.

    Altman said he thought a world with UBI would be “very far in the future” and he did not have a clear idea of what entity could dole out money, but that Worldcoin lays groundwork for it to become a reality.

    “We think that we need to start experimenting with things so we can figure out what to do,” he said.

  • White House unveils an AI plan ahead of meeting with tech CEOs | CNN Business

    (CNN) —

    The White House on Thursday announced a series of measures to address the challenges of artificial intelligence, driven by the sudden popularity of tools such as ChatGPT and amid rising concerns about the technology’s potential risks for discrimination, misinformation and privacy.

    The US government plans to introduce policies that shape how federal agencies procure and use AI systems, the White House said. The step could significantly influence the market for AI products and control how Americans interact with AI on government websites, at security checkpoints and in other settings.

    The National Science Foundation will also spend $140 million to promote research and development in AI, the White House added. The funds will be used to create research centers that seek to apply AI to issues such as climate change, agriculture and public health, according to the administration.

    The plan comes the same day that Vice President Kamala Harris and other administration officials are expected to meet with the CEOs of Google, Microsoft, ChatGPT-creator OpenAI and Anthropic to emphasize the importance of ethical and responsible AI development. And it coincides with a UK government inquiry launched Thursday into the risks and benefits of AI.

    “Tech companies have a fundamental responsibility to make sure their products are safe and secure, and that they protect people’s rights before they’re deployed or made public,” a senior Biden administration official told reporters on a conference call.

    Officials cited a range of risks the public faces in the widespread adoption of AI tools, including the possible use of AI-created deepfakes and misinformation that could undermine the democratic process. Job losses linked to rising automation, biased algorithmic decision-making, physical dangers arising from autonomous vehicles and the threat of AI-powered malicious hackers are also on the White House’s list of concerns.

    It’s just the latest example of the federal government acknowledging concerns arising from the rapid development and deployment of new AI tools, and trying to find ways to address some of the risks.

    Testifying before Congress, members of the Federal Trade Commission have argued AI could “turbocharge” fraud and scams. Its chair, Lina Khan, wrote in a New York Times op-ed this week that the US government has ample existing legal authority to regulate AI by leaning on its mandate to protect consumers and competition.

    Last year, the Biden administration unveiled a proposal for an AI Bill of Rights calling for developers to respect the principles of privacy, safety and equal rights as they create new AI tools.

    Earlier this year, the Commerce Department released voluntary risk management guidelines for AI that it said could help organizations and businesses “govern, map, measure and manage” the potential dangers in each part of the development cycle. In April, the department said it is seeking public input on the best policies for regulating AI, including through audits and industry self-regulation.

    The US government isn’t alone in seeking to shape AI development. European officials anticipate hammering out AI legislation as soon as this year that could have major implications for AI companies around the world.

  • AI chip boom sends Nvidia’s stock surging after whopper of a quarter | CNN Business

    New York (CNN) —

    The AI boom is here, and Nvidia is reaping all the benefits.

    Shares of Nvidia (NVDA) exploded 28% higher Thursday after the company reported earnings and sales that surged well above Wall Street’s already lofty expectations. That was enough to make investors temporarily forget about America’s dangerous debt ceiling standoff, sending the broader stock market higher, even after credit rating agency Fitch warned late Wednesday that America could soon lose its sterling AAA debt rating.

    Nvidia makes chips that power generative AI, a type of artificial intelligence that can create new content, such as text and images, in response to user prompts. That’s the kind of AI underlying ChatGPT, Google’s Bard, Dall-E and many of the other new AI technologies.

    “The computer industry is going through two simultaneous transitions — accelerated computing and generative AI,” said Jensen Huang, Nvidia’s CEO, in a statement. “A trillion dollars of installed global data center infrastructure will transition from general purpose to accelerated computing as companies race to apply generative AI into every product, service and business process.”

    Huang said Nvidia is increasing supply of its entire suite of data center products to meet “surging demand” for them.

    Last quarter, Nvidia’s profit surged 26% to $2 billion, and sales rose 19% to $7.2 billion, each easily surpassing Wall Street analysts’ forecasts. Nvidia’s outlook for the current quarter was also significantly — about 50% — higher than analysts’ predictions.
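
    As a sanity check, those growth rates imply prior-period figures of roughly $1.59 billion in profit and $6.05 billion in sales (the article does not specify whether the comparison is year over year or sequential):

        profit_now, sales_now = 2.0, 7.2   # billions of dollars
        print(f"implied prior profit: ${profit_now / 1.26:.2f}B")  # about $1.59B
        print(f"implied prior sales:  ${sales_now / 1.19:.2f}B")   # about $6.05B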

    Nvidia’s stock is up nearly 110% this year.

    “There is not one better indicator around underlying AI demand going on … than the foundational Nvidia story,” said Dan Ives, analyst at Wedbush. “We view Nvidia as the core hearts and lungs of the AI revolution.”

  • Meta releases clues on how AI is used on Facebook and Instagram | CNN Business

    Washington (CNN) —

    As demand for greater transparency in artificial intelligence mounts, Meta released tools and information Thursday aimed at helping users understand how AI influences what they see on its apps.

    The social media giant introduced nearly two dozen explainers focused on various features of its platforms, such as Instagram Stories and Facebook’s news feed. These describe how Meta selects what content to recommend to users.

    The descriptions and disclosures come as legislation looms around the world that may soon impose concrete disclosure requirements on companies that use AI technology.

    Meta’s so-called “system cards” cover how the company determines which accounts to present to users as recommended follows on Facebook and Instagram, how the company’s search tools function and how notifications work.

    For example, the system card devoted to Instagram’s search function describes how the app gathers all relevant search results in response to a user’s query, scores each result based on the user’s past interactions with the app and then applies “additional filters” and “integrity processes” to narrow the list before finally presenting it to the user.
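
    Schematically, that gather-score-filter flow looks something like the sketch below; every name and the scoring heuristic here are invented for illustration, since Meta’s actual signals and “integrity processes” are not public.

        from dataclasses import dataclass

        @dataclass
        class Result:
            item_id: str
            text_match: float        # how well the item matches the query
            past_interactions: int   # the user's history with this account

        def search(results, blocked_ids):
            # Score each gathered result, boosting accounts the user has
            # interacted with before, and sort best-first.
            ranked = sorted(results,
                            key=lambda r: r.text_match * (1 + 0.1 * r.past_interactions),
                            reverse=True)
            # "Additional filters" / "integrity processes": drop disallowed items.
            return [r.item_id for r in ranked if r.item_id not in blocked_ids]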

    Meta’s president of global affairs, Nick Clegg, tied the company’s new disclosures to a global debate about the potential dangers of artificial intelligence that range from the spread of misinformation to a rise in AI-enabled fraud and scams.

    “With rapid advances taking place with powerful technologies like generative AI, it’s understandable that people are both excited by the possibilities and concerned about the risks,” Clegg wrote in a blog post Thursday. “We believe that the best way to respond to those concerns is with openness.”

    A longer blog post describing how Facebook content ranking works, meanwhile, identifies detailed factors that go into determining what information the platform presents first.

    Those factors include whether a post has been flagged by a third-party fact checker, how engaging the account that posted the material tends to be, and whether the user has interacted with that account in the past.
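
    A toy weighted-score version of those factors might look like the following; the weights and the demotion for fact-checker flags are invented for illustration, as Meta does not publish its formula.

        def rank_score(flagged, account_engagement, interacted_before):
            score = account_engagement + (2.0 if interacted_before else 0.0)
            if flagged:              # a third-party fact-check flag demotes the post
                score *= 0.1
            return score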

    Meta’s new explainers coincide with the release of new tools for users to tailor the company’s algorithms, including the ability to tell Instagram to supply more of a certain type of content. Previously, Meta had only offered the ability for users to tell Instagram to show less, not more, Clegg wrote.

    On both Facebook and Instagram, he added, users will now be able to customize their feeds further by accessing a menu from individual posts.

    Finally, he said, Meta will be making it easier for researchers to study its platforms by providing a content library and an application programming interface (API) featuring a variety of content from Facebook and Instagram.

    Meta’s announcement comes as European lawmakers have swiftly advanced legislation that would create new requirements for explanation and transparency for companies that use artificial intelligence, and as US lawmakers have said they hope to begin working on similar legislation later this year.

  • Google is building an AI tool for journalists | CNN Business

    (CNN) —

    Google is developing an artificial intelligence tool for news publishers that can generate article text and headlines, the company said, highlighting how the technology may soon transform the journalism industry.

    The tech giant said in a statement that it is looking to partner with news outlets on the AI tool’s use in newsrooms.

    “Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity,” a Google spokesperson said, “just like we’re making assistive tools available for people in Gmail and in Google Docs.”

    The effort was first reported by The New York Times, which said the project is referred to internally as “Genesis” and has been pitched to The Times, The Washington Post and News Corp, which owns The Wall Street Journal.

    Google’s statement did not name those media companies but said the company is particularly focused on “smaller publishers.” It added that the project is not aimed at replacing journalists or at displacing their “essential role … in reporting, creating, and fact-checking their articles.”

    The new tool comes as tech companies, including Google, race to develop and deploy a new crop of generative AI features into applications used in the workplace, with the promise of streamlining tasks and making employees more productive.

    But these tools, which are trained on information online, have also raised concerns because of their potential to get facts wrong or “hallucinate” responses.

    News outlet CNET had to issue “substantial” corrections earlier this year after experimenting with using an AI tool to write stories. And what was supposed to be a simple AI-written story on “Star Wars” published by Gizmodo earlier this month similarly required a correction. But both outlets have said they will still move forward with using the technology.

  • Microsoft opens up its AI-powered Bing to all users | CNN Business

    (CNN) —

    Microsoft is rolling out the new AI-powered version of its Bing search engine to anyone who wants to use it.

    Nearly three months after the company debuted a limited preview version of its new Bing, powered by the viral AI chatbot ChatGPT, Microsoft is opening it up to all users without a waitlist – as long as they’re signed into the search engine via Microsoft’s Edge browser.

    The move highlights Microsoft’s commitment to push forward with the product even as the AI technology behind it has sparked concerns about inaccuracies and tone. In some cases, people who baited the new Bing received emotionally charged and aggressive responses.

    “We’re getting better at speed, we’re getting better at accuracy … but we are on a never-ending quest to make things better and better,” Yusuf Mehdi, a VP at Microsoft overseeing its AI initiatives, told CNN on Wednesday.

    Bing now has more than 100 million daily active users, a significant uptick in the past few months, according to Mehdi. Google, which has long dominated the search market, is also adding similar AI features to its search engine.

    In February, Microsoft showed off how its revamped search engine could write summaries of search results, chat with users to answer additional questions about a query and write emails or other compositions based on the results.

    At a press event in New York City on Wednesday, the company shared an early look at some updates, including the ability to ask questions with pictures, chat history so the chatbot remembers its rapport with users, and the option to export responses to Microsoft Word. Users can also personalize the tone and style of the chatbot’s responses, choosing anything from a lengthier, creative reply to something shorter and to the point.

    The wave of attention in recent months around ChatGPT, developed by OpenAI with financial backing from Microsoft, helped renew an arms race among tech companies to deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups are also developing AI writing assistants and image generators.

    Beyond adding AI features to search, Microsoft has said it plans to bring ChatGPT technology to its core productivity tools, including Word, Excel and Outlook, with the potential to change the way we work. The decision to add generative AI features to Bing could be particularly risky, however, given how much people rely on search engines for accurate and reliable information.

    Microsoft’s moves also come amid heightened scrutiny on the rapid pace of advancement in AI technology. In March, some of the biggest names in tech, including Elon Musk and Apple co-founder Steve Wozniak, called for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Mehdi said he doesn’t believe the AI industry is moving too fast and suggested the calls for a pause aren’t particularly helpful.

    “Some people think we should pause development for six months but I’m not sure that fixes anything or improves or moves things along,” he said. “But I understand where it’s coming from concern wise.”

    He added: “The only way to really build this technology well is to do it out in the open in the public so we can have conversations about it.”

  • Nvidia says US curbs on AI chip sales to China would cause ‘permanent loss of opportunities’ | CNN Business

    Hong Kong (CNN) —

    Nvidia warned Wednesday that if the United States imposes new restrictions on the export of AI chips to China, it would result in a “permanent loss of opportunities” for US industry.

    The company’s chief financial officer, Colette Kress, said she didn’t anticipate any “immediate material impact,” but said tighter curbs would affect earnings in the future.

    US officials plan to tighten export curbs announced in October to restrict the sale of some artificial-intelligence chips to China, according to multiple media reports, including the Wall Street Journal and Financial Times. Washington has ramped up efforts to cut China off from key technologies that can support its military.

    The US Department of Commerce has not replied to a CNN request for comment.

    The rules, as reported, could make it harder for companies like Nvidia (NVDA) to sell advanced chips to China. Fueled by a boom in demand for its AI chips, the company briefly hit a market capitalization of $1 trillion in late May.

    “We are aware of reports that the US Department of Commerce is considering further controls that may restrict exports of our A800 and H800 products to China,” Kress told an investment conference.

    “Over the long-term, restrictions prohibiting the sale of our datacenter GPUs to China, if implemented, would result in a permanent loss of opportunities for US industry to compete and lead in one of the world’s largest markets and impact on our future business and financial results,” she said.

    GPU stands for graphics processing unit, a chip or electronic circuit capable of rendering graphics for display on electronic devices.

    “Given the strength of demand for our products worldwide, we do not anticipate that such additional restrictions, if adopted, would have an immediate material impact on our financial results,” Kress added.

    Last October, the Biden administration unveiled a sweeping set of export controls that ban Chinese companies from buying advanced chips and chip-making equipment without a license.

    The new move is aimed in part at Nvidia’s A800 chip, which the US-based company created following the introduction of last year’s curbs in order to continue to sell to China, Bloomberg reported.

    China is a key market for Nvidia. Mainland China and Hong Kong accounted for 22% of the company’s revenue last year, according to its financial statements.

    On Wednesday, shares of Nvidia slumped as much as 3.2% before recouping some of the losses, ending the day down 1.8%. Chinese AI stocks suffered much heavier losses.

    Inspur Electronic Information Industry fell by 10%, the maximum allowed, on Wednesday in Shenzhen, and dropped a further 5.3% on Thursday. Chengdu Information Technology of Chinese Academy of Sciences slid 12% on Wednesday. Baidu (BIDU), which is developing a rival to ChatGPT, sank 4.4% on Thursday in Hong Kong.

    “The US could ruin China’s AI party,” Jefferies analysts said in a research note. Local chipsets lack Nvidia’s GPU ecosystem, they noted, so every update may require reworking, resulting in lower efficiency and higher costs.

    The Biden administration’s chip curbs would be “much more effective” in limiting China’s advances in military power driven by AI than rules restricting US investment in China’s tech sector, the analysts added.

    China has strongly criticized US restrictions on tech exports, saying earlier this year that it “firmly opposes” such measures.

    In May, Beijing banned Chinese operators of critical information infrastructure from buying products from Micron Technology (MU), in apparent retaliation against sanctions imposed by Washington and its allies on the country’s chip sector.
