ReportWire

Tag: iab-computing

  • ChatGPT can now hear, see and speak as OpenAI gives the chatbot its most humanlike update | CNN Business




    CNN —

    You can now speak aloud to ChatGPT and hear the artificial intelligence-powered chatbot talk back.

    OpenAI, the startup behind the wildly popular chatbot, announced Monday that it is rolling out new features, including the ability for users to engage in a back-and-forth voice conversation with ChatGPT.

    In a company blog post Monday, OpenAI teased how this new feature can be used to “request a bedtime story for your family, or settle a dinner table debate.”

    The new voice features from OpenAI carry similarities to those currently offered by Amazon’s Alexa or Apple’s Siri voice assistants.

    In a demo of the new update shared by OpenAI, a user asks ChatGPT to come up with a story about “the super-duper sunflower hedgehog named Larry.” The chatbot is able to narrate a story out loud with a human-sounding voice that can also respond to questions, such as, “What was his house like?” and “Who is his best friend?”

    ChatGPT’s voice capability is “powered by a new text-to-speech model, capable of generating human-like audio from just text and a few seconds of sample speech,” OpenAI said in the blog post. The company added that it collaborated with professional voice actors to create the five different voices that can be used to animate the chatbot.

    OpenAI also said on Monday that it’s rolling out a new feature that lets the bot respond to prompts featuring an image. For example, you can snap a picture of the contents of your fridge and ask ChatGPT to help you come up with a meal plan using the ingredients you have. Moreover, the company said you can ask the chatbot to focus on a specific part of an image with its “drawing tool” in the app.

    The new features roll out in the app within the next two weeks for paying subscribers of ChatGPT’s Plus and Enterprise services. (Subscriptions to the Plus service are $20 a month, and its Enterprise service is currently only offered to business clients).

    The updates from OpenAI come amid an ongoing AI arms race within the tech sector, initially spurred by the public launch of ChatGPT late last year. In recent weeks, tech giants have been racing to roll out new updates that incorporate more AI-powered tools directly into their core products. Google last week announced a series of updates to its ChatGPT competitor Bard. Also last week, Amazon said it was bringing a generative AI-powered update to its Alexa voice assistant.


  • Microsoft CEO warns of ‘nightmare’ future for AI if Google’s search dominance continues | CNN Business




    CNN —

    Microsoft CEO Satya Nadella warned on Monday of a “nightmare” scenario for the internet if Google’s dominance in online search is allowed to continue, a situation, he said, that starts with searches on desktop and mobile but extends to the emerging battleground of artificial intelligence.

    Nadella testified on Monday as part of the US government’s sweeping antitrust trial against Google, now into its 14th day. He is the most senior tech executive yet to testify during the trial that focuses on the power of Google as the default search engine on mobile devices and browsers around the globe.

    Taking the stand in a charcoal suit and tie, Nadella painted Google as a technology giant that has blocked off ways for consumers to access rival search engines. His testimony reflected the frustrations of a long-running rivalry between Microsoft and Google whose tensions have permeated the weeks-long trial. (Google didn’t immediately respond to a request for comment.)

    Central to Google’s strategy has been its agreements with companies such as Apple that have made Google the default search engine for millions of internet users.

    “You get up in the morning, you brush your teeth, you search on Google,” Nadella said.

    Nadella testified that every year he has been Microsoft’s CEO, he has unsuccessfully sought to persuade Apple to switch away from Google as its default search partner. Nadella added that Microsoft has been willing to spend close to $15 billion a year for the privilege. (A senior Apple executive, Eddy Cue, testified last week that Apple has always considered Google the best search product for its users, a claim echoed by Google itself throughout the trial.)

    However, even more worrisome, Nadella argued, is that the enormous amount of search data that is provided to Google through its default agreements can help Google train its AI models to be better than anyone else’s — threatening to give Google an unassailable advantage in generative AI that would further entrench its power.

    “This is going to become even harder to compete in the AI age with someone who has that core… advantage,” Nadella testified.

    Bing is profitable, and Microsoft has invested some $100 billion in the search engine over the past 20 years, yet it holds only a single-digit market share in mobile search and only slightly more, into the teens, in desktop search, Nadella said, adding that one of his dreams has been to see Bing account for at least 20% of the market in both segments.

    Bing has struggled to grow its market share in part because being the default search provider for billions of devices means Google receives enormous amounts of data through search queries that helps Google understand at scale what users are likely to be interested in, Nadella noted. And for years, that “dynamic data” has enabled Google to stay ahead of Bing, he added.

    “Every misspelling of a new movie, every local restaurant whose name you mistype,” Nadella explained, “…is a very critical asset to have your search quality get better.” And because the physical world is constantly changing, capturing shifts in search trends is essential to keeping a search engine relevant as historical data ages. Nadella previously led Microsoft’s cloud computing business and before that spent several years overseeing the engineering team responsible for search and advertising at the company, making him well-versed in Bing’s various challenges.

    Now, Nadella has said that the same data advantage could create “even more of a nightmare” as large language models compete on the basis of the data they are trained on.

    “What is concerning is, it reminds me of what happened with distribution deals [in search],” he testified.

    Under questioning by a Google attorney, Nadella admitted that in some cases, defaults are not the sole determinant of success: Google was able to overcome Microsoft’s own Internet Explorer defaults on Windows PCs to become the market-leading desktop web browser.

    But Nadella attributed Google’s success to the relative openness of the Windows platform, arguing that on more tightly controlled mobile operating systems, and in search, default status plays a much larger role than in competition for desktop web browsers.

    In addition to training its models on search queries, Google has also been moving to secure agreements with content publishers to ensure that it has exclusive access to their material for AI training purposes, according to the Microsoft CEO. In Nadella’s own meetings with publishers, he said that he now hears that Google “wants … to write this check and we want you to match it.” (Google didn’t immediately respond to questions about those deals.)

    The requests highlight concerns that “what is publicly available today [may not be] publicly available tomorrow” for AI training, according to the testimony.

    While Microsoft and Apple have their own defaults — for example, by making Apple Maps the default maps app on iOS devices — Google goes much further than other tech companies in using “carrots and sticks” to keep people using its products by default, Nadella claimed. He cited Google’s licensing requirements that make Google’s Play Store a required installed app as a condition of using the Android operating system — another topic of dispute in the trial. The equivalent would be if Microsoft threatened to withhold Microsoft Office if Bing were not the default search engine, Nadella said, a move he claimed would not be in Microsoft’s business interests.

    Acknowledging that Google would not be in its dominant position without Microsoft’s own antitrust battles with the US government in the 1990s, Nadella said the situation involving Google today is vastly different: internet search, particularly on mobile devices, is the single largest software business opportunity in the world.

    Google’s dominance in search is reinforced when websites and publishers optimize for Google’s search algorithm and not Bing’s, when advertisers flock to Google and when users stick to what’s familiar, Nadella argued.

    In his fruitless negotiations with Apple, Nadella said he has tried to argue that Bing’s current role is little more than as a useful tool for Apple to “bid up the price” of hosting Google as the default search provider — but that Bing provides an important counterweight to Google and that Apple should consider investing in the Microsoft alternative for competition’s sake. Nadella has also proposed running Bing on Apple devices as a kind of “public utility,” he said.

    “Let’s say Bing exited the market,” Nadella said. “You think Google would keep paying [Apple]?”


  • NY officials announce legislation aimed at protecting kids on social media | CNN Business




    CNN —

    Two new bills meant to protect children’s mental health online by changing the way they are served content on social media and by limiting companies’ use of their data will be introduced in the New York state legislature, state and city leaders said Wednesday.

    New York Gov. Kathy Hochul and New York Attorney General Letitia James made the announcement at the headquarters of the United Federation of Teachers Manhattan, joined by UFT President Michael Mulgrew, State Senator Andrew Gounardes, Assemblywoman Nily Rozic and community advocates.

    “Our children are in crisis, and it is up to us to save them,” Hochul said, comparing social media algorithms to cigarettes and alcohol. “The data around the negative effects of social media on these young minds is irrefutable, and knowing how dangerous the algorithms are, I will not accept that we are powerless to do anything about it.”

    The “Stop Addictive Feeds Exploitation (SAFE) for Kids Act” would limit what New York officials say are the harmful and addictive features of social media for children. The act would allow users under 18 and their parents to opt out of receiving feeds driven by algorithms designed to harness users’ personal data to keep them on the platforms for as long as possible. Those who opt out would receive chronological feeds instead, like in the early days of social media.

    The bill would also allow users and parents who opt in to receiving algorithmically generated content feeds to block access to social media platforms between 12am and 6am or to limit the total number of hours per day a minor can spend on a platform.

    “This is a major issue that we all feel strongly about and that must be addressed,” James said. “Nationwide, children and teens are struggling with significantly high rates of depression, anxiety, suicidal thoughts and other mental health issues, largely because of social media.”

    The bill targets platforms like Facebook, Instagram, TikTok, Twitter and YouTube, where feeds are composed of user-generated content along with other material the platform suggests to users based on their personal data. Tech platforms have designed and promoted voluntary tools aimed at parents to help them control what content their kids can see, arguing that the decision about what boundaries to set should be up to individual families. But that hasn’t stopped critics from calling on platforms to do more — or from threatening further regulation.

    “Our children deserve a safer and more secure environment online, free from addictive algorithms and exploitation,” said Gounardes. “Algorithms are the new tobacco. Simple as that.”

    The New York legislation comes amid a raft of similar bills across the country that purport to safeguard young users by imposing tough new rules on platforms.

    States including Arkansas, Louisiana and Utah have passed bills requiring tech platforms to obtain a parent’s consent before creating accounts for teens. Federal lawmakers have introduced a similar bill that would ban kids under 13 from using social media altogether. And numerous lawsuits against social media platforms have accused the companies of harming users’ mental health. The latest of these suits came on Tuesday, when Utah’s attorney general sued TikTok for allegedly misleading consumers about the app’s safety.

    Mulgrew called the New York legislation necessary in part due to a lack of action by the federal government to protect kids.

    “The last time, first and only time that the United States government passed a bill to protect children in social media was 1998,” Mulgrew said, referring to the Children’s Online Privacy Protection Act (COPPA), a federal law that prohibits the collection of personal data from Americans under the age of 13 without parental consent. In July, the US Senate commerce committee voted to advance a bill that would expand COPPA’s protections to teens for the first time.

    New York officials on Wednesday also highlighted risks to children’s privacy online, including the chance their location or other personal data could fall into the hands of human traffickers and others who might prey on youth.

    “While other states and countries have enacted laws to limit the personal data that online platforms can collect from minors, no such restrictions currently exist in New York,” a press release from earlier Wednesday stated. “The two pieces of legislation introduced today will add critical protections for children and young adults online.”

    The New York Child Data Protection Act would protect children’s data online by prohibiting all online sites from collecting, using, sharing or selling the personal data of anyone under 18 for the purposes of advertising, without informed consent or unless doing so is strictly necessary for the purpose of the website. For users under 13, this informed consent must come from a parent or guardian.

    Both bills would authorize the attorney general to bring an action to enjoin or seek damages or civil penalties of up to $5,000 per violation and would allow parents or guardians of minors to sue for damages of up to $5,000 per user incident or for actual damages, whichever is greater.

    The US Department of Health and Human Services says that while social media provides some benefits, it also presents “a meaningful risk of harm to youth.” The Surgeon General’s Social Media and Youth Mental Health Advisory released in May said children and adolescents who spend more than three hours a day on social media face double the risk of mental health problems like depression and anxiety, a finding the report called “concerning” given a recent survey that showed teens spend an average of 3.5 hours a day on social media.


  • X appears to slow load times for links to several news outlets and rival platforms | CNN Business



    New York CNN —

    Loading times for links posted on X, the social media platform formerly known as Twitter, that pointed to some of its competitors and to news media sites appeared to be delayed or throttled for much of Tuesday.

    Links posted to X that directed to sites including the New York Times, Reuters, Facebook, Substack and X competitors Bluesky and Threads took around five seconds to load — a notable slowdown from the typically near-instantaneous loading times, according to observations by CNN reporters. Many other sites, such as NBA.com, CNN and retailer Target, did not appear to be affected by the issue.

    The delays were first reported by users of the technology forum Hacker News.

    The reason for the delays in loading links to some sites was not clear, and X did not respond to multiple requests for comment from CNN. The site has been plagued by technical issues since Musk bought it last year and laid off the majority of the staff, and the delays appeared to have been resolved for some users by Tuesday afternoon.

    However, the delays affected the sites for rival platforms, as well as news outlets that Twitter owner Elon Musk has previously criticized. Musk earlier this year feuded with the New York Times over its unwillingness to pay for his platform’s new paid verification program, and he has separately called for the outlet to be “cancelled.”

    The apparent delay in visiting links to the New York Times was easy to verify with simple commands on a computer. Will Dormann, a cybersecurity researcher, timed a fetch of the New York Times website with a basic command-line tool on his Mac and compared its loading time with that of a dummy website. The load time for the New York Times site was about 4.5 seconds longer, Dormann told CNN Tuesday.

    X, like other platforms, uses a link-shortener service to collect information on users who click on links shared on the platform. When a link for a New York Times article plugged into X’s link-shortener takes far longer to load than other websites using the same link-shortening service, “this is the clear indicator that there are server-side [at the X-operated shortener] shenanigans going on,” Dormann told CNN.
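
    Dormann-style comparisons like the one described above can be reproduced with a few lines of scripting. Below is a minimal Python sketch (not the tool Dormann used) that times a single fetch of a URL; the URLs in the commented example are illustrative placeholders, not links from the actual tests.

```python
import time
import urllib.request

def fetch_seconds(url, timeout=10):
    """Return the wall-clock seconds taken to fetch `url` once, following redirects."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # drain the body so the full transfer is timed
    return time.monotonic() - start

# Example comparison (requires network access). A consistently large gap between
# a shortened link and a direct fetch of the same page, measured repeatedly,
# would point to a server-side delay at the shortener rather than at the site.
# gap = fetch_seconds("https://t.co/<some-id>") - fetch_seconds("https://www.nytimes.com")
```

    A single measurement is noisy, so in practice each URL would be fetched several times and the median compared, which is roughly what made the reported ~4.5-second gap stand out against normal network jitter.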

    The New York Times said in a statement to CNN that it had observed the delay, but, “We have not received any explanation from the platform about this move.”

    “While we don’t know the rationale behind the application of this time delay, we would be concerned by targeted pressure applied to any news organization for unclear reasons,” it said in the statement. “The mission of The New York Times is to report the news impartially without fear or favor, and we’ll continue to do so, undeterred by any attempts to hinder this.”

    Meta, the parent company of Facebook and Threads, did not respond to a request for comment on the delay. But CEO Mark Zuckerberg responded to a post about the issue on Threads with a thinking face emoji.

    Musk and Zuckerberg have in recent weeks been making plans to take one another on in a cage fight, although Zuckerberg this week signaled that the fight may be off because he believes Musk “isn’t serious.” “Elon won’t confirm a date, then says he needs surgery, and now asks to do a practice round in my backyard instead,” Zuckerberg wrote on Threads Sunday. Musk on Monday appeared to respond by suggesting in a series of tweets that he might show up at Zuckerberg’s home to fight anyway.

    Substack cofounders Chris Best, Hamish McKenzie and Jairaj Sethi said in a statement to CNN that they hoped X would reverse the delay but that “Substack was created in direct response to this kind of behavior by social media companies.”

    “Writers cannot build sustainable businesses if their connection to their audience depends on unreliable platforms that have proven they are willing to make changes that are hostile to the people who use them,” the Substack cofounders said.

    Reuters said in a statement that it was aware of reports “of a delay in opening links to Reuters stories on X. We are looking into the matter.”

    Bluesky did not immediately respond to a request for comment about the link delay.

    X briefly sparked backlash in December over a decision to ban links to rival social media services, including Facebook, Instagram and Twitter alternatives like Mastodon, which was later reversed. The platform has also faced a series of outages and technical issues in recent months that have affected users’ ability to read tweets, view photos and click through links after Musk slashed the company’s staff and cut back on infrastructure spending.

    -CNN’s Jon Passantino and Oliver Darcy contributed to this report.


  • Here’s what Donald Trump’s return to X could mean for the platform’s business | CNN Business



    New York CNN —

    Nine months after Elon Musk reinstated Donald Trump’s account on the social network previously known as Twitter, the former president has returned to what was once his platform of choice for communicating with the country.

    The return of Trump – who used to be one of the site’s most prominent, if controversial, users – could mark a turning point for the company now called X after months of turbulence. Trump, who has nearly 87 million followers, could attract a wide set of viewers, especially in the lead-up to the 2024 presidential election, in which he is the front-runner for the Republican nomination. But it could also present a new set of challenges for the social network, including for its effort to revive its ad business, if Trump decides to resume regularly posting on the platform at all.

    Trump on Thursday night posted on the platform for the first time since January 2021, when he was suspended for violating Twitter’s rules against glorification of violence in the wake of the January 6, 2021, attack on the US Capitol. On Thursday, he posted a photo of his mug shot – the first such photo of a US president in history – after his surrender in Georgia on more than a dozen charges stemming from his efforts to reverse the 2020 election results there. He also added a link to a fundraiser.

    Trump’s return appeared to be welcomed by X owner Musk, who has been encouraging politicians and public figures to post on the site in a bid to improve user numbers. He shared Trump’s X post saying, “Next-level.” Later, appearing to reference the former president without explicitly naming him, Musk posted that “the speed at which your message on this platform can reach a vast number of people is mind-blowing.”

    X declined to comment for this story.

    If Trump decides to return to regularly posting on X, it could be a major boon to the platform’s effort to attract an audience as it faces increased competition. In the wake of controversial policy decisions by Musk, a slew of Twitter copycats have popped up as users seek alternative platforms, including Meta’s Threads, which rolled out a key update this week. The week of July 17, traffic to then-Twitter was down more than 9% compared to the same period in the prior year, according to the most recent public report from web traffic intelligence firm Similarweb.

    Musk’s changes at the company have also irked some advertisers, weighing on X’s core business.

    When he was president, Trump’s posts on what was then Twitter often moved the markets, set the news cycle and drove the agenda in Washington – a fact that benefited the company in the form of countless hours of user engagement and almost certainly could again. And while Trump has remained mostly on his own platform, Truth Social, since he was suspended from many mainstream social networks in early 2021, X would give him a larger reach as he vies for the 2024 Republican nomination.

    Trump’s return “should have a positive impact on [X’s] engagement at a time when it needs it,” D.A. Davidson analyst Tom Forte told CNN in an email Friday.

    (It’s not clear how Musk – who has often been X’s main character since his takeover, thanks in some cases to his own policy decisions – would feel about sharing the spotlight.)

    That engagement could be a selling point for X in its quest to lure advertisers back to the platform. But Trump’s return could also raise fresh concerns for advertisers, some of whom have pulled back their spending on the platform over fears that their ads could run next to controversial or potentially objectionable content as Musk has reduced content moderation on the site.

    Musk said last month that the company still had negative cash flow because of a 50% decline in revenue from its core ad business, although CEO Linda Yaccarino said weeks later the company is now “close to break-even.”

    And while X’s leadership has said advertisers are returning thanks to new brand safety controls, at least two brands recently paused their spending on the platform after their ads were run alongside an account celebrating the Nazi party. (X suspended the account after it was flagged and said ad impressions on the page were minimal.)

    Trump frequently pushed boundaries when he was active on Twitter. For years, the platform took a light-touch approach to moderating his account, arguing at times that as a public official, the then-president must be given wide latitude to speak. Now, if Trump returns to his old habits – the former president has, for example, continued to falsely claim in posts on Truth Social that the 2020 election was stolen – Musk could be forced to decide whether to risk alienating additional advertisers or compromise his stated commitment to “free speech.”

    Forte said he will be closely watching the impact of Trump’s return on Twitter’s advertising business. “The increased engagement should be favorable, but there is a risk that heightened controversy could hamper ad sales,” he said.

    And it’s not yet clear whether Trump will actually return to being active on X beyond Thursday’s post, which was essentially a fundraising appeal, and similar to what he posted on Truth Social. After Facebook restored Trump’s account earlier this year, many of his posts on that platform have been aimed at directing users to donate or volunteer for his campaign.

    What’s more, after making his return to X, Trump appeared to try to clarify where his loyalty lies. “I LOVE TRUTH SOCIAL. IT IS MY HOME!!” Trump posted on the X competitor platform.


  • South Korea’s Hynix is looking into how its chips got into Huawei’s controversial smartphone | CNN Business



    Hong Kong/Seoul CNN —

    SK Hynix, a South Korean chipmaker, is investigating how two of its memory chips mysteriously ended up inside the Mate 60 Pro, a controversial smartphone launched by Huawei last week.

    Shares in Hynix fell more than 4% on Friday after it emerged that two of its products, a 12 gigabyte (GB) LPDDR5 chip and 512 GB NAND flash memory chip, were found inside the Huawei handset by TechInsights, a research organization based in Canada specializing in semiconductors, which took the phone apart for analysis.

    “The significance of the development is that there are restrictions on what SK Hynix can ship to China,” G Dan Hutcheson, vice chair of TechInsights, told CNN. “Where do these chips come from? The big question is whether any laws were violated.”

    A Hynix spokesperson told CNN Friday that it was aware of its chips being used in the Huawei phone and had started investigating the issue.

    The company “no longer does business with Huawei since the introduction of the US restrictions against the company,” it said in a statement.

    “SK Hynix is strictly abiding by the US government’s export restrictions,” the company said.

    Industry insiders said it was possible that Huawei had purchased the memory chips from the secondary market and not directly from the manufacturer. It’s also possible Huawei may have had a stockpile of components accumulated before the US export curbs kicked in fully.

    TechInsights had previously revealed that the “brains” of the phone were powered by a 5G Kirin 9000s chip made by China’s top chipmaker Semiconductor Manufacturing International Corporation, better known as SMIC.

    It is still examining the Mate 60 Pro and does not rule out the possibility of finding more components made by companies subject to US trade sanctions. So far, it has found that most of the phone’s components were provided by Chinese suppliers.

    Analysts have said the smartphone is a major breakthrough for China as it clashes with the United States over access to advanced technology.

    The development prompted two US congressmen, Mike Gallagher and Michael McCaul, to call on the White House – which is seeking more information about the phone – to further restrict technology export sales to Chinese companies.

    Huawei and SMIC have not replied to requests for comment.

    In 2019, the US government banned American companies from selling software and equipment to Huawei. It also restricted international chipmakers using US-made technology from working with the company.

    That is why, four years later, last week’s launch of the Mate 60 Pro shocked industry experts who didn’t understand how Huawei, which is headquartered in Shenzhen, would have the ability to manufacture such an advanced smartphone following sweeping efforts by the United States to restrict China’s access to foreign chip technology.


  • Bill Gates, Elon Musk and Mark Zuckerberg meeting in Washington to discuss future AI regulations | CNN Business



    Washington CNN —

    Coming out of a three-hour Senate hearing on artificial intelligence, Elon Musk, the head of a handful of tech companies, summarized the grave risks of AI.

    “There’s some chance – above zero – that AI will kill us all. I think it’s low but there’s some chance,” Musk told reporters. “The consequences of getting AI wrong are severe.”

    But he also said the meeting “may go down in history as being very important for the future of civilization.”

    The session, organized by Senate Majority Leader Chuck Schumer, brought together high-profile tech CEOs, civil society leaders and more than 60 senators. The first of nine planned sessions, it aims to build consensus as the Senate prepares to draft legislation to regulate the fast-moving artificial intelligence industry. The group included the CEOs of Meta, Google, OpenAI, Nvidia and IBM.

    All the attendees raised their hands — indicating “yes” — when asked whether the federal government should oversee AI, Schumer told reporters Wednesday afternoon. But consensus on what that role should be and specifics on legislation remained elusive, according to attendees. 

    Benefits and risks

    Bill Gates spoke of AI’s potential to feed the hungry and one unnamed attendee called for spending tens of billions on “transformational innovation” that could unlock AI’s benefits, Schumer said.

    The challenge for Congress is to promote those benefits while mitigating the societal risks of AI, which include the potential for technology-based discrimination, threats to national security and even, as X owner Musk said, “civilizational risk.”

    “You want to be able to maximize the benefits and minimize the harm,” Schumer said. “And that will be our difficult job.”

    Senators emerging from the meeting said they heard a broad range of perspectives, with representatives from labor unions raising the issue of job displacement and civil rights leaders highlighting the need for an inclusive legislative process that provides the least powerful in society a voice.

    Most agreed that AI could not be left to its own devices, said Washington Democratic Sen. Maria Cantwell.

    “I thought Satya Nadella from Microsoft said it best: ‘When it comes to AI, we shouldn’t be thinking about autopilot. You need to have copilots.’ So who’s going to be watching this activity and making sure that it’s done correctly?”

    Other areas of agreement reflected traditional tech industry priorities, such as increasing federal investment in research and development as well as promoting skilled immigration and education, Cantwell added.

    But there was a noticeable lack of engagement on some of the harder questions, she said, particularly on whether a new federal agency is needed to regulate AI.

    “There was no discussion of that,” she said, though several in the meeting raised the possibility of assigning some greater oversight responsibilities to the National Institute of Standards and Technology, a Commerce Department agency.

    Musk told journalists after the event that he thinks a standalone agency to regulate AI is likely at some point.

    “With AI we can’t be like ostriches sticking our heads in the sand,” Schumer said, according to prepared remarks obtained by CNN. He also noted this is “a conversation never before seen in Congress.”

    The push reflects policymakers’ growing awareness of how artificial intelligence, and particularly the type of generative AI popularized by tools such as ChatGPT, could potentially disrupt business and everyday life in numerous ways — ranging from increasing commercial productivity to threatening jobs, national security and intellectual property.

    The high-profile guests trickled in shortly before 10 a.m., with Meta CEO Mark Zuckerberg pausing to chat with Nvidia CEO Jensen Huang outside the Russell Senate Office Building’s Kennedy Caucus Room. Google CEO Sundar Pichai was seen huddling with Delaware Democratic Sen. Chris Coons, while X owner Musk swept past a mass of cameras with a quick wave to the crowd. Inside, Musk was seated at the opposite end of the room from Zuckerberg, in what is likely the first time the two men have shared a room since they began challenging each other to a cage fight months ago.

    Elon Musk, CEO of X, the company formerly known as Twitter, left, and Alex Karp, CEO of the software firm Palantir Technologies, take their seats as Senate Majority Leader Chuck Schumer, D-N.Y., convenes a closed-door gathering of leading tech CEOs to discuss the priorities and risks surrounding artificial intelligence and how it should be regulated, at the Capitol in Washington, Wednesday, Sept. 13, 2023.

    The session at the US Capitol in Washington also gave the tech industry its most significant opportunity yet to influence how lawmakers design the rules that could govern AI.

    Some companies, including Google, IBM, Microsoft and OpenAI, have already offered their own in-depth proposals in white papers and blog posts that describe layers of oversight, testing and transparency.

    IBM’s CEO, Arvind Krishna, argued in the meeting that US policy should regulate risky uses of AI, as opposed to just the algorithms themselves.

    “Regulation must account for the context in which AI is deployed,” he said, according to his prepared remarks.

    Executives such as OpenAI CEO Sam Altman previously wowed some senators by publicly calling for new rules early in the industry’s lifecycle, which some lawmakers see as a welcome contrast to the social media industry that has resisted regulation.

    Clement Delangue, co-founder and CEO of the AI company Hugging Face, tweeted last month that Schumer’s guest list “might not be the most representative and inclusive,” but that he would try “to share insights from a broad range of community members, especially on topics of openness, transparency, inclusiveness and distribution of power.”

    Civil society groups have voiced concerns about AI’s possible dangers, such as the risk that poorly trained algorithms may inadvertently discriminate against minorities, or that they could ingest the copyrighted works of writers and artists without compensation or permission. Some authors have sued OpenAI over those claims, while others have asked in an open letter to be paid by AI companies.

    News publishers such as CNN, The New York Times and Disney are some of the content producers who have blocked ChatGPT from using their content. (OpenAI has said exemptions such as fair use apply to its training of large language models.)

    “We will push hard to make sure it’s a truly democratic process with full voice and transparency and accountability and balance,” said Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, “and that we get to something that actually supports democracy; supports economic mobility; supports education; and innovates in all the best ways and ensures that this protects consumers and people at the front end — and just not try to fix it after they’ve been harmed.”

    The concerns reflect what Wiley described as “a fundamental disagreement” with tech companies over how social media platforms handle misinformation, disinformation and speech that is either hateful or incites violence.

    American Federation of Teachers President Randi Weingarten said America can’t make the same mistake with AI that it did with social media. “We failed to act after social media’s damaging impact on kids’ mental health became clear,” she said in a statement. “AI needs to supplement, not supplant, educators, and special care must be taken to prevent harm to students.”

    Navigating those diverse interests will be Schumer, who along with three other senators — South Dakota Republican Sen. Mike Rounds, New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — is leading the Senate’s approach to AI. Earlier this summer, Schumer held three informational sessions for senators to get up to speed on the technology, including one classified briefing featuring presentations by US national security officials.

    Wednesday’s meeting with tech executives and nonprofits marked the next stage of lawmakers’ education on the issue before they get to work developing policy proposals. In announcing the series in June, Schumer emphasized the need for a careful, deliberate approach and acknowledged that “in many ways, we’re starting from scratch.”

    “AI is unlike anything Congress has dealt with before,” he said, noting the topic is different from labor, healthcare or defense. “Experts aren’t even sure which questions policymakers should be asking.”

    Rounds said hammering out the specific scope of regulations will fall to Senate committees. Schumer added that the goal — after hosting more sessions — is to craft legislation over “months, not years.”

    “We’re not ready to write the regs today. We’re not there,” Rounds said. “That’s what this is all about.”

    A smattering of AI bills have already emerged on Capitol Hill and seek to rein in the industry in various ways, but Schumer’s push represents a higher-level effort to coordinate Congress’s legislative agenda on the issue.

    New AI legislation could also serve as a potential backstop to voluntary commitments that some AI companies made to the Biden administration earlier this year to ensure their AI models undergo outside testing before they are released to the public.

    But even as US lawmakers prepare to legislate by meeting with industry and civil society groups, they are already months if not years behind the European Union, which is expected to finalize a sweeping AI law by year’s end that could ban the use of AI for predictive policing and restrict how it can be used in other contexts.

    A bipartisan pair of US senators sharply criticized the meeting, saying the process is unlikely to produce results and does not do enough to address the societal risks of AI.

    Connecticut Democratic Sen. Richard Blumenthal and Missouri Republican Sen. Josh Hawley each spoke to reporters on the sidelines of the meeting. The two lawmakers recently introduced a legislative framework for artificial intelligence that they said represents a concrete effort to regulate AI — in contrast to what was happening steps away behind closed doors.

    “This forum is not designed to produce legislation,” Blumenthal said. “Our subcommittee will produce legislation.”

    Blumenthal added that the proposed framework — which calls for setting up a new independent AI oversight body, as well as a licensing regime for AI development and the ability for people to sue companies over AI-driven harms — could lead to a draft bill by the end of the year.

    “We need to do what has been done for airline safety, car safety, drug safety, medical device safety,” Blumenthal said. “AI safety is no different — in fact, potentially even more dangerous.”

    Hawley called Wednesday’s sessions “a giant cocktail party” for the tech industry and slammed the fact that it was private.

    “I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money, and then close it to the public,” Hawley said. “I mean, that’s a terrible idea. These are the same people who have ruined social media.”

    Despite talking tough on tech, Schumer has moved extremely slowly on tech legislation, Hawley said, pointing to several major tech bills from the last Congress that never made it to a Senate floor vote.

    “It’s a little bit like antitrust the last two years,” Hawley said. “He talks about it constantly and does nothing about it. My sense is … this is a lot of song and dance that covers the fact that actually nothing is advancing. I hope I’m wrong about that.”

    Hawley is also a co-sponsor of a bill introduced Tuesday led by Minnesota Democratic Sen. Amy Klobuchar that would prohibit generative AI from being used to create deceptive political ads. Klobuchar and Hawley, along with fellow co-sponsors Coons and Maine Republican Sen. Susan Collins, said the measure is needed to keep AI from manipulating voters.

    Massachusetts Democratic Sen. Elizabeth Warren said the broad nature of the summit limited its potential.

    “They’re sitting at a big, round table all by themselves,” Warren said of the executives and civil society leaders, while all the senators sat, listened and didn’t ask questions. “Let’s put something real on the table instead of everybody agree[ing] that we need safety and innovation.”

    Schumer said that making the meeting confidential was intended to give lawmakers the chance to hear from the outside in an “unvarnished way.”


  • Amazon invests up to $4 billion in Anthropic AI in exchange for minority stake and further AWS integration | CNN Business





    CNN —

    Amazon said on Monday that it’s investing up to $4 billion into the artificial intelligence company Anthropic in exchange for partial ownership and Anthropic’s greater use of Amazon Web Services (AWS), the e-commerce giant’s cloud computing platform.

    The deepening partnership between the two companies highlights how some large tech firms with massive cloud computing resources are increasingly leveraging those assets to gain a bigger foothold in AI.

    As part of the deal, AWS will become the “primary” cloud provider for Anthropic, with the AI company using Amazon’s cloud platform to do “the majority” of its AI model development and research into AI safety, the companies said. That will include using Amazon’s suite of in-house AI chips.

    Anthropic also made a “long-term commitment” to offer its AI models to AWS customers, Amazon said, and promised to give AWS users early access to features such as the ability to adapt Anthropic models for specific use cases.

    “With today’s announcement, customers will have early access to features for customizing Anthropic models, using their own proprietary data to create their own private models, and will be able to utilize fine-tuning capabilities via a self-service feature,” Amazon said in a release.

    Anthropic already offers its models to AWS users through Amazon Bedrock, Amazon’s one-stop shop for AI products. Bedrock also provides access to models from other providers including Stability AI and AI21 Labs, along with proprietary models developed by Amazon itself.

    In a release, Anthropic said that Amazon’s minority stake would not change its corporate governance structure nor its commitments to developing AI responsibly.

    “We will conduct pre-deployment tests of new models to help us manage the risks of increasingly capable AI systems,” Anthropic said.

    Amazon and Anthropic both made commitments to the Biden administration this year to conduct external audits of their AI systems before releasing them to the public.

    Amazon’s investment in Anthropic follows similar moves by cloud leaders such as Microsoft, which invested $1 billion in ChatGPT-maker OpenAI in 2019 and made a further $10 billion investment this year, launching a push to bring OpenAI’s technology into consumer-facing Microsoft products such as Bing.

    [ad_2]

    Source link

  • Federal appeals court extends limits on Biden administration communications with social media companies to top US cybersecurity agency | CNN Business




    Washington CNN —

    A federal appeals court has expanded the scope of a ruling that limits the Biden administration’s communications with social media companies, saying it now also applies to a top US cybersecurity agency.

    The ruling last month from the conservative 5th Circuit US Court of Appeals severely limits the ability of the White House, the surgeon general, the Centers for Disease Control and Prevention and the FBI to communicate with social media companies about content related to Covid-19 and elections that the government views as misinformation.

    The preliminary injunction had been on pause, and a recent procedural snafu over the plaintiffs’ request to broaden its scope led the court on Tuesday to withdraw its earlier opinion and issue a new one that now includes the US Cybersecurity and Infrastructure Security Agency. That agency is charged with protecting non-military networks from hacking and other homeland security threats.

    Similar to the ruling last month, in which the appeals court said the federal government had “likely violated the First Amendment” when it leaned on platforms to moderate some content, the new ruling says CISA likely violated the Constitution as well.

    “CISA used its frequent interactions with social media platforms to push them to adopt more restrictive policies on censoring election-related speech,” the three-judge panel wrote.

    “The platforms’ censorship decisions were made under policies that CISA has pressured them into adopting and based on CISA’s determination of the veracity of the flagged information,” they continued. “Thus, CISA likely significantly encouraged the platforms’ content-moderation decisions and thereby violated the First Amendment.”

    The plaintiffs in the suit, which include Missouri and Louisiana’s attorneys general, as well as several individual plaintiffs, had also asked the court to expand the scope in other ways, including by making it apply to some State Department officials. But the court’s new ruling was only modified to add CISA as an enjoined entity.

    The judges said they were pausing their new injunction for 10 days, and the Biden administration has the option of asking the Supreme Court to issue a more lasting pause on the modified ruling.


  • The Israel-Hamas war reveals how social media sells you the illusion of reality | CNN Business




    New York CNN —

    As the Israel-Hamas war reaches the end of its first week, millions have turned to platforms including TikTok and Instagram in hopes of comprehending the brutal conflict in real time. Trending search terms on TikTok in recent days illustrate the hunger for frontline perspectives: From “graphic Israel footage” to “live stream in Israel right now,” internet users are seeking out raw, unfiltered accounts of a crisis they are desperate to understand.

    For the most part, they are succeeding, discovering videos of tearful Israeli children wrestling with the permanence of death alongside images of dazed Gazans sitting in the rubble of their former homes. But that same demand for an intimate view of the war has created ample openings for disinformation peddlers, conspiracy theorists and propaganda artists — malign influences that regulators and researchers now warn pose a dangerous threat to public debates about the war.

    One recent TikTok video, seen by more than 300,000 users and reviewed by CNN, promoted conspiracy theories about the origins of the Hamas attacks, including false claims that they were orchestrated by the media. Another, viewed more than 100,000 times, shows a clip from the video game “Arma 3” with the caption, “The war of Israel.” (Some users in the comments of that video noted they had seen the footage circulating before — when Russia invaded Ukraine.)

    TikTok is hardly alone. One post on X, formerly Twitter, viewed more than 20,000 times, was flagged as misleading by London-based social media watchdog Reset for purporting to show Israelis staging civilian deaths for cameras. Another X post the group flagged, viewed 55,000 times, was an antisemitic meme featuring Pepe the Frog, a cartoon that has been appropriated by far-right white supremacists. On Instagram, a widely shared video of parachutists dropping in on a crowd, captioned “imagine attending a music festival when Hamas parachutes in,” was debunked over the weekend; it in fact showed unrelated parachute jumpers in Egypt. (Instagram later labeled the video as false.)

    This week, European Union officials sent warnings to TikTok, Facebook and Instagram-parent Meta, YouTube and X, highlighting reports of misleading or illegal content about the war on their platforms and reminding the social media companies they could face billions of dollars in fines if an investigation later determines they violated EU content moderation laws. US and UK lawmakers have also called on those platforms to ensure they are enforcing their rules against hateful and illegal content.

    Since the violence in Israel began, Imran Ahmed, founder and CEO of the social media watchdog group Center for Countering Digital Hate, told CNN his group has tracked a spike in efforts to pollute the information ecosystem surrounding the conflict.

    “Getting information from social media is likely to lead to you being severely disinformed,” said Ahmed.

    Everyone from US foreign adversaries to domestic extremists to internet trolls and “engagement farmers” has been exploiting the war on social media for their own personal or political gain, he added.

    “Bad actors surrounding us have been manipulating, confusing and trying to create deception on social media platforms,” Dan Brahmy, CEO of the Israeli social media threat intelligence firm Cyabra, said Thursday in a video posted to LinkedIn. “If you are not sure of the trustworthiness [of content] … do not share,” he said.

    ‘Upticks in Islamophobic and antisemitic narratives’

    Graham Brookie, senior director of the Digital Forensic Research Lab at the Atlantic Council in Washington, DC, told CNN his team has witnessed a similar phenomenon. The trend includes a wave of first-party terrorist propaganda, content depicting graphic violence, misleading and outright false claims, and hate speech – particularly “upticks in specific and general Islamophobic and antisemitic narratives.”

    Much of the most extreme content, he said, has been circulating on Telegram, the messaging app with few content moderation controls and a format that facilitates quick and efficient distribution of propaganda or graphic material to a large, dedicated audience. But in much the same way that TikTok videos are frequently copied and rebroadcast on other platforms, content shared on Telegram and other more fringe sites can easily find a pipeline onto mainstream social media or draw in curious users from major sites. (Telegram didn’t respond to a request for comment.)

    Schools in Israel, the United Kingdom and the United States this week urged parents to delete their children’s social media apps over concerns that Hamas will broadcast or disseminate disturbing videos of hostages who have been seized in recent days. Photos of dead or bloodied bodies, including those of children, have already spread across Facebook, Instagram, TikTok and X this week.

    And tech watchdog group Campaign for Accountability on Thursday released a report identifying several accounts on X sharing apparent propaganda videos with Hamas iconography or linking to official Hamas websites. Earlier in the week, X faced criticism for videos unrelated to the war being presented as on-the-ground footage and for a post from owner Elon Musk directing users to follow accounts that previously shared misinformation (Musk’s post was later deleted, and the videos were labeled using X’s “community notes” feature.)

    Some platforms are in a better position to combat these threats than others. Widespread layoffs across the tech industry, including at some social media companies’ ethics and safety teams, risk leaving the platforms less prepared at a critical moment, misinformation experts say. Much of the content related to the war is also spreading in Arabic and Hebrew, testing the platforms’ capacity to moderate non-English content, where enforcement has historically been less robust than in English-language content.

    “Of course, platforms have improved over the years. Communication & info sharing mechanisms exist that did not in years past. But they have also never been tested like this,” Brian Fishman, the co-founder of trust and safety platform Cinder who formerly led Facebook’s counterterrorism efforts, said Wednesday in a post on Threads. “Platforms that kept strong teams in place will be pushed to the limit; platforms that did not will be pushed past it.”

    Linda Yaccarino, the CEO of X, said in a letter Wednesday to the European Commission that the platform has “identified and removed hundreds of Hamas-related accounts” and is working with several third-party groups to prevent terrorist content from spreading. “We’ve diligently taken proactive actions to remove content that violates our policies, including: violent speech, manipulated media and graphic media,” she said. The European Commission on Thursday formally opened an investigation into X following its earlier warning about disinformation and illegal content linked to the war.

    Meta spokesperson Andy Stone said that since Hamas’ initial attacks, the company has established “a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation. Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact checkers in the region to limit the spread of misinformation. We’ll continue this work as this conflict unfolds.”

    YouTube, for its part, says its teams have removed thousands of videos since the attack began, and continue to monitor for hate speech, extremism, graphic imagery and other content that violates its policies. In searches related to the war, the platform is also surfacing videos almost entirely from mainstream news organizations.

    Snapchat told CNN that its misinformation team is closely watching content coming out of the region, making sure it falls within the platform’s community guidelines, which prohibit misinformation, hate speech, terrorism, graphic violence and extremism.

    TikTok did not respond to a request for comment on this story.

    Large tech platforms are now subject to content-related regulation under a new EU law called the Digital Services Act, which requires them to prevent the spread of mis- and disinformation, address rabbit holes of algorithmically recommended content and avoid possible harms to user mental health. But in such a contentious moment, platforms that take too heavy a hand in moderation could risk backlash and accusations of bias from users.

    Platforms’ algorithms and business models — which generally rely on the promotion of content most likely to garner significant engagement — can aid bad actors who design content to capitalize on that structure, Ahmed said. Other product choices, such as X’s moves to allow any user to pay for a subscription for a blue “verification” checkmark that grants an algorithmic boost to post visibility, and to remove the headlines from links to news articles, can further manipulate how users perceive a news event.

    “It’s time to break the emergency glass,” Ahmed said, calling on platforms to “switch off the engagement-driven algorithms.” He added: “Disinformation factories are going to cause geopolitical instability and put Jews and Muslims at harm in the coming weeks.”

    Even as social media companies work to hide the absolute worst content from their users — whether out of a commitment to regulation, advertisers’ brand safety concerns, or their own editorial judgments — users’ continued appetite for gritty, close-up dispatches from Israelis and Palestinians on the ground is forcing platforms to walk a fine line.

    “Platforms are caught in this demand dynamic where users want the latest and the most granular, or the most ‘real’ content or information about events, including terrorist attacks,” Brookie said.

    The dynamic simultaneously highlights the business models of social media and the role the companies play in carefully calibrating their users’ experiences. The very algorithms that are widely criticized elsewhere for serving up the most outrageous, polarizing and inflammatory content are now the same ones that, in this situation, appear to be giving users exactly what they want.

    But closeness to a situation is not the same thing as authenticity or objectivity, Ahmed and Brookie said, and the wave of misinformation flooding social media right now underscores the dangers of conflating them.

    Despite giving the impression of reality and truthfulness, Brookie said, individual stories and combat footage conveyed through social media often lack the broader perspective and context that journalists, research organizations and even social media moderation teams apply to a situation to help achieve a fuller understanding of it.

    “It’s my opinion that users can interact with the world as it is — and understand the latest, most accurate information from any given event — without having to wade through, on an individual basis, all of the worst possible content about that event,” Brookie said.

    Potentially exacerbating the messy information ecosystem is a culture on social media platforms that often encourages users to bear witness to and share information about the crisis as a way of signaling their personal stance, whether or not they are deeply informed. That can lead even well-intentioned users to unwittingly share misleading information or highly emotional content created with the intention of collecting views or monetizing highly engaging content.

    “Be very cautious about sharing in the middle of a major world event,” Ahmed said. “There are people trying to get you to share bullsh*t, lies, which are designed to inculcate you to hate or to misinform you. And so sharing stuff that you’re not sure about is not helping people, it’s actually really harming them and it contributes to an overall sense that no one can trust what they’re seeing.”


  • Dozens of states sue Instagram-parent Meta over ‘addictive’ features and youth mental health harms | CNN Business





    CNN —

    Dozens of states sued Instagram-parent Meta on Tuesday, accusing the social media giant of harming young users’ mental health through allegedly addictive features such as infinite news feeds and frequent notifications that demand users’ constant attention.

    In a federal lawsuit filed in California by 33 attorneys general, the states allege that Meta’s products have harmed minors and contributed to a mental health crisis in the United States.

    “Meta has profited from children’s pain by intentionally designing its platforms with manipulative features that make children addicted to their platforms while lowering their self-esteem,” said Letitia James, the attorney general for New York, one of the states involved in the federal suit. “Social media companies, including Meta, have contributed to a national youth mental health crisis and they must be held accountable.”

    Eight additional attorneys general sued Meta on Tuesday in various state courts around the country, making similar claims as the massive multi-state federal lawsuit.

    And the state of Florida sued Meta in its own separate federal lawsuit, alleging that Meta misled users about potential health risks of its products.

    Tuesday’s multistate federal suit — filed in the US District Court for the Northern District of California — accuses Meta of violating a range of state-based consumer protection statutes, as well as a federal children’s privacy law known as COPPA that prohibits companies from collecting the personal information of children under 13 without a parent’s consent.

    “Meta’s design choices and practices take advantage of and contribute to young users’ susceptibility to addiction,” the complaint reads. “They exploit psychological vulnerabilities of young users through the false promise that meaningful social connection lies in the next story, image, or video and that ignoring the next piece of social content could lead to social isolation.”

    The federal complaint calls for court orders prohibiting Meta from violating the law and, in the case of many states, unspecified financial penalties.

    “We share the attorneys generals’ commitment to providing teens with safe, positive experiences online, and have already introduced over 30 tools to support teens and their families,” Meta said in a statement. “We’re disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path.”

    The wave of lawsuits is the result of a bipartisan, multistate investigation dating back to 2021, when Facebook whistleblower Frances Haugen came forward with tens of thousands of internal company documents that she said showed the company knew its products could have negative impacts on young people’s mental health, Colorado Attorney General Phil Weiser said at a press conference Tuesday.

    “We know that there were decisions made, a series of decisions to make the product more and more addictive,” Tennessee Attorney General Jonathan Skrmetti told reporters. “And what we want is for the company to undo that, to make sure that they are not exploiting these vulnerabilities in children, that they are not doing all the little, sophisticated, tricky things that we might not pick up on that drive engagement higher and higher and higher that allowed them to keep taking more and more time and data from our young people.”

    Tuesday’s multipronged legal assault also marks the newest attempt by states to rein in large tech platforms over fears that social media companies are fueling a spike in youth depression and suicidal ideation.

    “There’s a mountain of growing evidence that social media has a negative impact on our children,” said California Attorney General Rob Bonta, “evidence that more time on social media tends to be correlated with depression, with anxiety, body image issues, susceptibility to addiction and interference with daily life, including learning.”

    The suits follow a raft of legislation in states ranging from Arkansas to Louisiana that clamps down on social media by establishing new requirements for online platforms that wish to serve teens and children, such as mandating that they obtain a parent’s consent before creating an account for a minor, or that they verify users’ ages.

    In some cases, the tech industry has challenged those laws in court — for example, by claiming that Arkansas’ social media law violates residents’ First Amendment rights to access information.

    New Hampshire Attorney General John Formella said the states expect Meta to mount a similar defense but that the company will not succeed because the multistate suit targets Meta’s conduct, not speech.

    Formella added that in addition to consumer protection claims, New Hampshire is also bringing negligence and product liability claims as part of the federal suit.

    The complaints filed in state courts allege violations of various state-specific laws. For example, the complaint from District of Columbia Attorney General Brian Schwalb accuses Meta of violating the district’s consumer protection statute by misleading the public about the safety of company platforms.

    Tuesday’s lawsuits come days before a federal judge in California is set to consider a slew of similar allegations against the wider tech industry. In a hearing Friday morning, District Judge Yvonne Gonzalez Rogers is expected to hear arguments by Google, Meta, Snap and TikTok urging her to dismiss nearly 200 complaints from private plaintiffs who have accused the companies of addicting or harming their users.

    It is possible that Tuesday’s multistate suit could be merged with the consumers’ cases, said Weiser, adding that the main difference of the multistate case is that it could lead to nationwide relief.

    “The coordination that we bring across the AG community, we believe is invaluable to this,” Weiser said.

    Participating in Tuesday’s multistate federal suit are California, Colorado, Connecticut, Delaware, Georgia, Hawaii, Idaho, Illinois, Indiana, Kansas, Kentucky, Louisiana, Maine, Maryland, Michigan, Minnesota, Missouri, Nebraska, New Jersey, New York, North Carolina, North Dakota, Ohio, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Virginia, Washington, West Virginia and Wisconsin.

    The additional suits filed in state courts were brought by the District of Columbia, Massachusetts, Mississippi, New Hampshire, Oklahoma, Tennessee, Utah and Vermont.


  • iPhone users will soon have to adjust to this small but significant change | CNN Business




    CNN
     — 

    Get your thumb ready for next month. Apple (AAPL) is making a subtle change to the iPhone’s software that will likely mess with your muscle memory: The big red “end call” button is moving.

    The iPhone’s phone app will get a series of updates in iOS 17, including a redesign that moves the hang-up button to the bottom right of the screen, next to other functions. The button currently sits on its own at the bottom middle of the phone app, underneath the buttons to mute, access the keypad or add a call.

    The new call screen, which is already available for download in a beta version for developers, sparked some strong reactions among iOS users on social media: “iOS 17 has the FaceTime button where the end call button used to be,” tweeted one user. “Muscle memory be damned.”

    The change is likely intended to streamline the look of the phone app and put all functions in one place. Apple did not respond to a request for comment.

    At its annual Worldwide Developer Conference in May, the company showed off a slew of new tools coming to iOS 17 that make calling and messaging others more personalized and customized. iPhone users, for example, will be able to design contact “posters,” a custom image to appear when they call someone or receive their call.

    Meanwhile, a new feature called Live Voicemail will transcribe a caller’s message in real time, so users can decide whether to ignore or take the call, and a tool called NameDrop will let users share their contact information by holding two iPhones close together. In addition, FaceTime will support the ability to leave video messages when someone isn’t available to chat.

    Other changes coming to iOS 17 include a more accurate autocorrect, improved dictation in iMessage, and a more responsive Siri. Apple typically launches its latest mobile operating system in September, following its annual iPhone event.


  • Maui conspiracy theories are spreading on social media. Why this always happens after a disaster | CNN Business




    CNN
     — 

    A slew of viral conspiracy videos on social media have made baseless claims that the Maui wildfires were started intentionally as part of a land grab, highlighting how quickly misinformation spreads after a disaster.

    While the cause of the fires hasn’t been determined, Hawaiian Electric — the major power company on Maui — is under scrutiny for not shutting down power lines when high winds created dangerous fire conditions. (Hawaiian Electric previously said both the company and the state are conducting investigations into what happened). Maui experienced high winds from Hurricane Dora in the south while it was also grappling with a drought. Wildfires across the region have long been a concern.

    Still, conspiracy theories continue to circulate while nearly 400 people remain unaccounted for.

    It’s not uncommon for conspiracy theories to make the rounds after a national crisis. According to Renee DiResta, a research manager at Stanford University who studies misinformation, people often look for a way to make sense of the world when they are anxious or have a feeling of powerlessness.

    “Theories that attribute the cause of a crisis to a specific bad actor offer a villain to blame, someone to potentially hold responsible,” DiResta said. “The conspiracy theories that are the most effective and plausible are usually based on some grain of truth and connect to some existing set of beliefs about the world.”

    For example, someone who distrusts the government may be more inclined to believe someone who posts negatively about a government agency.

    Conspiracy theorists on varying platforms claim the fires, which killed at least 114 people earlier this month, were planned as part of a strategic effort to weed out less wealthy residents on Maui and make room for multi-million dollar developments.

    In one video, a user claims a friend sent him a video of a laser beam “coming out of the sky, directly targeting the city.” “This was a direct energy weapon assault,” he said. The video remains posted but now includes a label from Instagram listing it as “false information.” The imagery appears to be from a previous SpaceX launch in California.

    Related far-fetched theories say the alleged “laser beams” were programmed not to hit anything blue, explaining why so many blue beach umbrellas were left unscathed by the fires.

    Other social media users allege elite Maui residents were behind the fires so they could buy the destroyed land at a discounted price and potentially rebuild it as a “smart city.”

    “You’re telling me that these cheaper lower middle class houses burnt down directly across the street and all of the mansions are still standing?” one YouTube user posted, referencing aerial imagery taken of the destruction.

    One tweet about a celebrity purchasing hundreds of acres across Maui over the past few years has received more than 12 million views on X, the platform formerly known as Twitter.

    When a conspiracy theory gains traction online, others may chime in and offer explanations for details not discussed in the original post. Social media algorithms can amplify these theories based on user attention and interactions.

    “Social media is incredibly valuable in crisis events as people on the ground can report the facts directly, but that usefulness is tempered, and can be dangerous, if misleading claims proliferate particularly in the immediate aftermath,” DiResta said.

    Social media platforms like Instagram, TikTok and YouTube have taken steps to curb the spread of conspiracy theories and misinformation, but some videos can slip through the cracks. Many platforms use a mix of tech monitoring tools and human reviewers to enforce their community guidelines.

    Ahead of the publication of this article, TikTok removed several conspiracy theory videos flagged by CNN for violating its community guidelines, which prohibit “inaccurate, misleading, or false content that may cause significant harm to individuals or society, regardless of intent” on the platform. A company spokesperson said more than 40,000 trust and safety professionals around the world review and moderate content at all hours of the day.

    Meanwhile, in a statement provided to CNN, YouTube spokesperson Elena Hernandez said the platform uses different sections, such as top news, developing news and a fact-check panel, to provide users with as much context and background information as possible on certain trending topics, and will remove content when necessary.

    “During major news events, such as the horrific fires in Hawaii, our systems are designed to raise up content from authoritative sources in search results and recommendations,” Hernandez said.

    Instagram also employs third-party fact-checkers who contact sources, check public data and work to verify images and videos in questionable content. They then rate and label the content in question, such as “false,” “altered” or “missing context,” to encourage viewers to think critically about what they’re about to see.

    As a result, those posts show up far less often in users’ feeds and repeat offenders can face varying risks, such as losing monetization on their pages.

    Social media platform X did not immediately respond to a request for comment.

    Michael Inouye, a principal analyst at market research firm ABI Research, said social media companies are in a challenging spot because they want to uphold freedom of speech, but do so in an environment where posts that receive the most shares and likes often rise to the top of user feeds. That means posts sharing conspiracy theories that spark fear and emotion may perform better in a crisis than those sharing straightforward, accurate information.

    “Ultimately, social media will have to decide if it wants to be a better news organization or remain this ‘open’ platform for expression that can run counter to the ethics and standards that is required by news reporting,” Inouye said. “The problem is, even if something isn’t labeled as ‘news,’ some will still interpret personal opinion as truth, which puts us back in the same position.”


  • iOS 17 release: See what’s new in iPhone features | CNN Business




    CNN
     — 

    iPhone users: Today’s the day to update to Apple’s latest operating system, iOS 17, and unlock a slew of new features that promise to make the iPhone experience more personal and intuitive.

    Apple first teased iOS 17 at its annual Worldwide Developer Conference in early June, but you may have missed some of the details, as the tech giant also unveiled its much-anticipated mixed-reality Vision Pro headset that same day.

    iPhone users can update to iOS 17 starting Monday through the Software Update section in the phone’s Settings app. Of course, many users have gotten in the habit of backing up important photos or files before downloading the latest software update – or waiting until the second version rolls out (likely in the coming weeks) if they’re worried about bugs in the first release of a next-generation mobile operating system.

    Here are some of the buzziest and most-anticipated new features that iPhone users can expect from iOS 17.

    Live Voicemail and FaceTime video messages are here

    One of the buzziest new features, dubbed Live Voicemail, will transcribe a caller’s message in real time, letting iPhone users decide whether to ignore the call or pick up while the other person is still on the line leaving their message.

    Unknown numbers will go directly to Live Voicemail when you have the “Silence Unknown Callers” setting turned on.

    FaceTime will also now give users the ability to leave video messages if someone doesn’t pick up a video call.

    With iOS 17, FaceTime calls will also get more expressive – with reactions such as hearts, balloons, fireworks and other effects that can be activated through simple gestures.

    Another update that may require some getting used to is saying just “Siri” to activate Apple’s voice assistant, instead of “Hey Siri.”

    Dropping “Hey” from Siri’s launch phrase is meant to make activating the assistant feel more natural. Siri will also be able to better process back-to-back requests once activated.

    For example, instead of asking “Hey Siri, how tall is Shaquille O’Neal?” and then “Hey Siri, how old is Shaquille O’Neal?” you should be able to just say, “Siri, how tall is Shaquille O’Neal?” followed by, “How old is he?”

    The new NameDrop feature in iOS 17 makes it easier than ever to exchange contact information with a new friend. iPhone users can simply bring their iPhones close to each other, as they would when AirDropping something, to share names and Contact Posters.

    The Contact Poster update is another new feature iPhone users have been getting hyped about: it allows iPhone users to design a custom image that will show up when they make calls. The update lets users choose their own caller ID image and will give iPhone users a more consistent look no matter who they’re calling, Apple has said.

    iPhone users will also be able to personalize their contact card “poster” with a photo or memoji of choice.

    Autocorrect is also getting a comprehensive update, Apple said, with a transformer language model — or “a state-of-the-art on-device machine learning language model for word prediction,” according to the company.

    This refreshed design better supports typing and offers sentence-level autocorrections that can fix more types of grammatical mistakes. iPhone users will also now receive predictive text recommendations in-line as they type, making adding entire words or completing sentences as easy as tapping the space bar.

    The new iOS keyboard will also learn your habits over time, such as fixing words that you frequently misspell and leaving words alone that you intentionally thumbed in. As Craig Federighi, Apple’s head of software, put it in June: “In those moments where you just want to type a ducking word, well, the keyboard will learn it, too.”

    New StandBy mode, Journal app and much more

    iOS 17 also introduces StandBy, a new full-screen experience with glanceable information designed to be viewed from a distance when the iPhone is on its side and charging. For example, when charging your iPhone at your nightstand or desk, you can personalize the display to feature a clock, favorite photos, or your most-used widgets.

    Apple’s new Journal app, which aims to help users reflect and practice gratitude through the daily practice of journaling, will also be available in a software update later this year.

    And there’s a whole lot more: Check out Apple’s handy 17-page guide on all of the newest features coming to iOS 17.


  • Zuckerberg unveils Quest 3 as Meta tries to stay ahead in the mixed reality headset game | CNN Business



    New York
    CNN
     — 

    Meta is moving forward in its efforts to dominate the mixed-reality world with the new and improved Meta Quest 3.

    Unveiled by CEO Mark Zuckerberg at the company’s virtual Meta Connect event Wednesday, the headset starts at $500 and is a complete redesign of earlier models. The Quest 3, first announced in June, offers improved performance, immersive new mixed-reality features and a sleeker, more comfortable design.

    With a much stronger processor, higher-resolution display, revamped Touch Plus controllers and a 40% slimmer physique, the Quest 3 is a big step up from its predecessors. The Meta Quest 2 allows for strictly virtual reality, while the Meta Quest Pro has advanced passthrough cameras for seeing your actual surroundings, but it costs a whopping $1,000.

    Most importantly, the Quest 3 has support for Meta Reality, allowing users to enjoy mixed-reality experiences that blend the real world with the virtual one — for example, you can play a virtual piano on your real-life coffee table.

    “If you pick up a digital ball and throw it at the physical wall, it’ll bounce off it,” Zuckerberg said at Meta Connect Wednesday. “If someone’s shooting at you and you want to duck the fire, you just get behind your physical couch.”

    The full Meta Quest virtual library is accessible with the Quest 3 – a library that now features a VR-friendly Roblox, released Wednesday, and is set to add Xbox cloud gaming in December, giving gamers the chance to play titles like Halo and Minecraft on a large screen anywhere.

    The headset is available for preorder now and officially hits stores on Oct. 10, in two storage options (128GB and 512GB).

    Zuckerberg explains features of the new Quest 3 headset on September 27, 2023.

    Meta’s newest headset comes three years after the Quest 2, under a year after the Quest Pro and under four months after the Apple Vision Pro.

    Dubbed by Zuckerberg the “first mainstream mixed reality headset,” the Quest 3 is part of an ongoing arms race between two of tech’s biggest players to command the headset space – and central to Zuckerberg’s personal vision for a next-generation internet where users interact with each other in virtual spaces resembling real life. It also comes in at a much cheaper price than the Apple alternative (which will cost you $3,499, to be exact), though it is still mainly a VR headset with mixed-reality options, while Apple’s product is a dedicated mixed-reality experience.

    To get ahead of Apple’s June unveiling of the Vision Pro, Zuckerberg teased the Meta Quest 3 just days before its rival’s big announcement. But the two companies had a tense relationship even before Apple’s entry into the market. They have competed over news and messaging features, and their CEOs have traded jabs over data privacy and app store policies. Last February, Meta said it expected to take a $10 billion hit in 2022 from Apple’s move to limit how apps like Facebook collect data for targeted ads.

    Meta has until now been the dominant player in the headset market, but it has struggled to attract a mainstream audience for its VR headset products. The Wall Street Journal reported last year that Meta had just 200,000 active users in Horizon Worlds, its app for socializing in VR. And IDC estimates just 10.1 million AR/VR headsets will ship globally across the entire market in 2023, far below the tens of millions of iPhones Apple sells each quarter.

    Morgan Stanley analysts called Apple’s Vision Pro a “moonshot” effort following its June announcement, saying the product “has the potential to become Apple’s next compute platform,” but that the company has “much to prove” before the headset’s launch next year.

    The biggest fight may not be between tech giants, but for the general public’s acceptance. Many analysts say the biggest hurdle to consumer adoption of mixed reality headsets is ensuring a wide range of potential use cases and experiences available on the devices. While Meta has introduced features that let users play games, explore virtual worlds, watch YouTube videos, workout, chat with friends and more, it has yet to convince most consumers that the device is worthwhile.


  • ADL says it will resume advertising on X following feud with Elon Musk | CNN Business



    New York
    CNN
     — 

    The Anti-Defamation League on Wednesday said it plans to resume advertising on X, the platform formerly known as Twitter, following a spat with owner Elon Musk.

    Musk last month threatened to sue the ADL for defamation, claiming that the nonprofit organization’s statements about rising hate speech on the social media platform had hurt X’s advertising revenue. ADL CEO Jonathan Greenblatt pushed back on the claims, saying that while the ADL was part of a coalition of groups that called on companies to pause advertising on the platform immediately following Musk’s acquisition last year, it had not been engaged in such calls in recent months.

    Musk’s statements about the group also amplified a campaign of antisemitic hate against the organization that had begun prior to Musk’s legal threat, leading to a surge of threats directed at the ADL, Greenblatt told CNN last month.

    The rights group reiterated in a statement Wednesday that “any allegation that ADL has somehow orchestrated a boycott of X or caused billions of dollars of losses to the company or is ‘pulling the strings’ for other advertisers is false.”

    “Indeed, we ourselves were advertising on the platform until the anti-ADL attacks began a few weeks ago,” the group said. “We now are preparing to do so again to bring our important message on fighting hate to X and its users.”

    Musk responded to the ADL’s statement in a post Wednesday saying, “Thank you for clarifying that you support advertising on X.”

    The statement appears to mark a resolution — for now — to weekslong tension between Musk and the ADL, which has coincided with incidents of antisemitism rising across the United States. But the group says it will continue to monitor for antisemitic content on X.

    “As we have noted in our research over the past several years, X – along with other social media platforms — has a serious issue with antisemites and other extremists using these platforms to push their hateful ideas and, in some cases, bully Jewish and other users,” it said. “A better, healthier, and safer X would be a win for the world … As we do with all platforms, we will credit X as it moves in that direction, and we also will call it out when it has not.”

    The ADL and other similar organizations, including the Center for Countering Digital Hate, have said in reports that the volume of hate speech on the website has grown dramatically under Musk’s stewardship. (Musk has criticized the findings.)

    Two brands in August paused their ad spending on X after their advertisements ran alongside an account promoting Nazism. X suspended the account after the issue was flagged and said ad impressions on the page were minimal.

    X has emphasized its new “freedom of speech, not freedom of reach” policy that aims to limit the reach of so-called lawful but awful content on the platform and to protect brands from having their ads appear alongside such content. CEO Linda Yaccarino has also promoted additional brand safety controls for advertisers, including the ability to avoid having their ads show next to “targeted hate speech, sexual content, gratuitous gore, excessive profanity, obscenity, spam, [and] drugs.”

    Asked about Musk’s threats to sue the ADL in an interview last week, Yaccarino said, “I wish that would be different … We’re looking into that.” She added that the ADL should acknowledge X’s progress on addressing antisemitism.

    It appears the platform may have more work to do. A search on Wednesday for Greenblatt’s name immediately surfaced multiple hateful and antisemitic tweets about the ADL leader.


  • Parents urged to delete their kids’ social media accounts ahead of possible Israeli hostage videos | CNN Business



    New York
    CNN
     — 

    Schools in Israel, the UK and the US are advising parents to delete their children’s social media apps over concerns that Hamas militants will broadcast or disseminate disturbing videos of hostages who have been seized in recent days.

    A Tel Aviv school’s parents’ association said it expects videos of hostages “begging for their lives” to surface on social media. In a message to parents, shared with CNN by a mother of children at a high school in Tel Aviv, the association asked parents to remove apps such as TikTok from their children’s phones.

    “We cannot allow our kids to watch this stuff. It is also difficult, furthermore – impossible – to contain all this content on social media,” according to the parents’ association. “Thank you for your understanding and cooperation.”

    Hamas has warned that it will post murders of hostages on social media if Israel targets people in Gaza without warning.

    There are additional concerns that terrorists will exploit social media algorithms to specifically target such videos to followers of Jewish or Israeli influencers in an effort to wage psychological warfare on Israelis and Jews and their supporters globally.

    During the onslaught on Saturday, armed Hamas militants poured across the heavily fortified border into Israel and took as many as 150 hostages, including Israeli army officers, back to Gaza. The surprise attacks killed at least 1,200 people, according to the Israel Defense Forces, and injured thousands more.

    Since Israel began airstrikes on the Palestinian enclave Saturday, at least 1,055 people have been killed in Gaza, including hundreds of children, women, and entire families, according to the Palestinian health ministry. It said a further 5,184 have been injured, as of Wednesday.

    As the war rages on, some Jewish schools in the US are also asking parents not to share related videos or photos that may surface, and to prevent children – and themselves – from watching them. The schools are also advising community members to delete their social media apps during this time.

    “Together with other Jewish day schools, we are warning parents to disable social media apps such as Instagram, X, and Tiktok from their children’s phones,” the head of a school in New Jersey wrote in an email. “Graphic and often misleading information is flowing freely, augmenting the fears of our students. … Parents should discuss the dangers of these platforms and ask their children on a daily basis about what they are seeing, even if they have deleted the most unfiltered apps from their phones.”

    Another school in the UK said it asked students to delete their social media apps during a safety assembly.

    TikTok, Instagram and X – formerly known as Twitter – did not immediately respond to requests for comment on how they are combating the increase in such videos being posted online, or on schools asking parents to delete these apps.

    But X said on its platform that it has experienced an increase in daily active users in the conflict area and that its escalation teams have “actioned tens of thousands of posts for sharing graphic media, violent speech, and hateful conduct.” It did not respond to a request to comment further or to define “actioned.”

    “We’re also continuing to proactively monitor for antisemitic speech as part of all our efforts,” X’s safety team said. “Plus we’ve taken action to remove several hundred accounts attempting to manipulate trending topics.”

    The company added it remains “laser focused” on enforcing the site’s rules and reminded users they can limit sensitive media they may encounter by visiting the “Content you see” option in Settings.

    Still, misinformation continues to run rampant on social media platforms, including X.

    A post viewed more than 500,000 times – featuring the hashtag #PalestineUnderAttack – claimed to show an airplane being shot down. But the clip was from the video game Arma 3, as was later noted in a “community note” appended to the post.

    Another video, purported to show Israeli generals captured by Hamas fighters, was viewed more than 1.7 million times by Monday. The video instead shows the detention of separatists in Azerbaijan.

    On Tuesday, the European Union warned Elon Musk of “penalties” for disinformation circulating on X amid the Israel-Hamas war.

    The EU also informed Meta CEO Mark Zuckerberg on Wednesday of a surge in disinformation on its platforms – which include Facebook – and demanded the company respond within 24 hours with how it plans to combat the issue.

    In an Instagram story on Tuesday, Zuckerberg called the attack “pure evil” and said his focus “remains on the safety of our employees and their families in Israel and the region.”


  • Justice Kagan order: Apple doesn’t have to change app store terms while battling Epic in court | CNN Business



    Washington
    CNN
     — 

    A judicial order forcing Apple to change some of its app store terms will not need to take immediate effect while litigation over the decision plays out, Supreme Court Justice Elena Kagan said on Wednesday, handing a temporary defeat to opponents of the company.

    The order is a setback for “Fortnite”-maker Epic Games as Apple appeals a lower-court ruling that found the iPhone-maker had violated California competition law.

    Epic Games declined to comment on Kagan’s decision, which came through the Supreme Court’s so-called “shadow docket” and was not referred to the full court.

    Apple didn’t immediately respond to a request for comment.

    Apple had previously been ordered not to interfere with efforts by iOS app developers to inform their users within their apps about alternatives to Apple’s in-app payment system, which allows Apple to take a commission.

    In April, a federal appeals court upheld the order that, if allowed to take effect, would prevent Apple from intervening when developers include “buttons, external links or other calls to action that direct customers to purchasing mechanisms” apart from Apple’s own channels.

    The appeals court temporarily paused enforcement of the injunction while Apple appeals the ruling to the Supreme Court. But last month, Epic Games filed an emergency request to the court calling for the order to be put into effect immediately, saying the public would otherwise be harmed by Apple’s practices.


  • Illinois passes a law that requires parents to compensate child influencers | CNN Business




CNN — 

    When 16-year-old Shreya Nallamothu from Normal, Illinois, scrolled through social media platforms to pass time during the pandemic, she became increasingly frustrated with the number of children she saw featured in family vlogs.

    She recalled the many home videos her parents filmed of herself and her sister over the years: taking their first steps, going to school and other “embarrassing stuff.”

    “I’m so glad those videos stayed in the family,” she said. “It made me realize family vlogging is putting very private and intimate moments onto the internet.”

    She said reminders and lectures from her parents about how everything is permanent online intensified her reaction to the videos she saw of kid influencers. “The fact that these kids are either too young to grasp that or weren’t given the chance to grasp that is really sad.”

Nallamothu wrote a letter last year to her state senator, Democrat Dave Koehler, urging him to consider legislation to protect young influencers. Last week, her home state became the first to pass a law that establishes safeguards for minors featured in online videos, including how they’re compensated.

Illinois Gov. J.B. Pritzker on Friday signed a bill, inspired by Nallamothu’s letter, amending the state’s Child Labor Law. Once they turn 18, people who were featured as minors in monetized social media videos and not properly compensated will be able to take legal action against their parents, similar to the rights held by child actors.

Starting July 1, 2024, parents in Illinois will be required to set aside up to 50% of a piece of content’s earnings in a blocked trust fund for the child, scaled by the percentage of the video in which the child is featured. For example, if a child appears in 50% of a video, they receive 25% of the earnings; if they appear in 100%, they get the full 50%. The requirement applies only when the child appears on screen in more than 30% of the creator’s vlogs over a 12-month period.
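That set-aside rule is simple arithmetic: the 50% ceiling scaled by the child’s share of screen time. A minimal sketch of the calculation, assuming a hypothetical helper name and inputs (this is illustrative, not the statute’s text):

```python
def required_trust_setaside(video_earnings: float, featured_fraction: float) -> float:
    """Dollars that must go into the child's blocked trust for one video.

    The 50% cap is scaled by the fraction of the video in which the
    child appears: a child in half the video gets 25% of earnings,
    a child in all of it gets the full 50%.
    """
    if not 0.0 <= featured_fraction <= 1.0:
        raise ValueError("featured_fraction must be between 0 and 1")
    return video_earnings * 0.50 * featured_fraction


# The two examples from the law's explanation:
print(required_trust_setaside(1000.0, 0.5))   # child in 50% of the video -> 250.0
print(required_trust_setaside(1000.0, 1.0))   # child in 100% of the video -> 500.0
```

Note that this per-video calculation only kicks in once the 30%-of-vlogs threshold over a 12-month period is met, a condition that would have to be checked separately across the creator’s catalog.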

    “We understand that parents should receive compensation too because they have equity in this, but we don’t want to forget about the child,” Koehler told CNN.

    Many YouTube parent vloggers or social media influencers post multiple videos each month or weekly, sharing intimate details about their lives, ranging from family financial troubles and the birth of a new baby to opening new toys or going through a child’s phone or report card. Although children are predominantly featured in these monetized videos, parents have had no legal obligation to give them any portion of the earnings.

    Meanwhile, kid influencer accounts, which can at times earn $20,000 or more for sponsored posts, are typically run by parents and not often set up in the child’s name due to age restrictions on social media platforms.

    “We often see with emerging technology and trends that legislation is always a reaction to that,” Koehler said. “But we know with the explosion of social media that parents are using it to monetize kids being on videos. If money is being made and nothing is set up for the children, it’s the same thing as a child actor.”

The new law is modeled on California’s 1936 Coogan Law, named for Jackie Coogan, the Hollywood silent-film child actor discovered by Charlie Chaplin whose parents swindled him out of his earnings. That law required parents to set aside 15% of a child actor’s earnings in a blocked trust account the child could access after turning 18.

Similar bills have been proposed in California and Washington, and Jessica Maddox – an assistant professor at The University of Alabama who studies the social media influencer community – said she’s hopeful other states will follow in Illinois’ footsteps.

    “Even though Illinois is the first state to pass such a law, this legislation is a long time coming,” Maddox said. “Social media labor and careers are becoming increasingly common and viable forms of income, and it’s important that the law catches up with technology to ensure minors aren’t being exploited.”

    Maddox said it also breathes new life into the long-simmering debate over what is appropriate for parents to document online and whether a child can really consent to participating.

    “I’ve seen organic conversations start to emerge between individuals who had been featured heavily in their parents’ social media content but are now of age to tell their stories and admit that had they really understood what was going on, they would have never consented for their lives to be broadcast for everyone.”

    Chris McCarty — the 19-year-old founder of Quit Clicking Kids, an advocacy and education site to combat the monetization of children on social media, who is helping to develop child influencer legislation in Washington State — believes that as the kids featured in family vlogs grow up and share their stories, there will be an increase in public pressure to provide more privacy protections.

“When children are slightly older, often the narratives get increasingly personal; for example, detailing trouble with bullies, first periods, doctor’s visits, and mental health issues,” McCarty said. “A lot of consumers assume that children working in a family vlog and child actors have the same experiences. This is not the case. As difficult as it is to be a child actor, child actors are still playing a part rather than having their intimate personal details shared for entertainment and monetary purposes.”

    Nallamothu agrees that the next step is for legislation to evolve over time to include more regulations around consent.

    “I know this bill isn’t going to be perfect off the bat but I don’t want perfection to get in the way of progress because regulations have only started coming up,” she said. “I’m glad it’s getting there.”


  • OpenAI launches a version of ChatGPT for businesses | CNN Business

CNN — 

    OpenAI is releasing a version of its buzzy ChatGPT tool specifically for businesses, the company announced Monday, as an AI arms race continues to ramp up throughout corporate America.

    OpenAI unveiled the new service, dubbed “ChatGPT Enterprise,” in a company blog post and said it will be available to business clients for purchase as of Monday. The new offering promises to provide “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet” for businesses looking to jump on the generative AI bandwagon.

    “We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the blog post said. “Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.”

    Some of the early customers of ChatGPT Enterprise include fintech startup Block, cosmetics giant Estee Lauder Companies and the professional services firm PwC.

The highly anticipated announcement from OpenAI comes as the company says employees from over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

    Before the launch of ChatGPT Enterprise, a number of prominent companies including JPMorgan Chase had implemented temporary restrictions on workplace use of ChatGPT.

ChatGPT Enterprise, however, addresses one of the core issues that led to the workplace clampdowns: privacy and security concerns. Previously, some business leaders had expressed worries about employees dropping proprietary information into ChatGPT and having that sensitive information potentially emerge as an output of the tool elsewhere. OpenAI’s announcement blog post for ChatGPT Enterprise, meanwhile, states that it does “not train on your business data or conversations, and our models don’t learn from your usage.”

    OpenAI did not publicly disclose the pricing levels for ChatGPT Enterprise, instead asking potential business clients to contact its sales team.

    “We look forward to sharing an even more detailed roadmap with prospective customers and continuing to evolve ChatGPT Enterprise based on your feedback,” the company said. “We’re onboarding as many enterprises as we can over the next few weeks.”

    In July, Microsoft unveiled a business-specific version of its AI-powered Bing tool, dubbed Bing Chat Enterprise, and promised much of the same security assurances that ChatGPT Enterprise is now touting – namely, that users’ chat data will not be used to train AI models.

Microsoft also previously disclosed a multibillion-dollar investment in OpenAI. It’s not immediately clear how the dueling new AI tools for business will end up competing with each other.
