ReportWire

Tag: computer science and information technology

  • How your phone learned to see in the dark | CNN Business



    New York (CNN) —

    Open up Instagram at any given moment and it probably won’t take long to find crisp pictures of the night sky, a skyline after dark or a dimly lit restaurant. While shots like these used to require advanced cameras, they’re now often possible from the phone you already carry around in your pocket.

    Tech companies such as Apple, Samsung and Google are investing resources to improve their night photography options at a time when camera features have increasingly become a key selling point for smartphones that otherwise largely all look and feel the same from one year to the next.

    Earlier this month, Google brought a faster version of its Night Sight mode, which uses AI algorithms to brighten images captured in dark environments, to more of its Pixel models. Apple’s Night mode, which is available on models as far back as the iPhone 11, was touted as a premier feature on its iPhone 14 lineup last year thanks to its improved camera system.

    These tools have come a long way in just the past few years, thanks to significant advancements in artificial intelligence technology as well as image processing that has become sharper, quicker, and more resilient to challenging photography situations. And smartphone makers aren’t done yet.

    “People increasingly rely on their smartphones to take photos, record videos, and create content,” said Lian Jye Su, an artificial intelligence analyst at ABI Research. “[This] will only fuel the smartphone companies to up their games in AI-enhanced image and video processing.”

    While there has been much focus lately on Silicon Valley’s renewed AI arms race over chatbots, the push to develop more sophisticated AI tools could also help further improve night photography and bring our smartphones closer to being able to see in the dark.

    Samsung’s Night mode feature, which is available on various Galaxy models but optimized for its premium S23 Ultra smartphone, promises to do what would have seemed unthinkable just five to 10 years ago: enable phones to take clearer pictures with little light.

    The feature is designed to minimize what’s called “noise,” a photography term for the grainy visual distortion that degrades image quality, typically caused by poor lighting conditions, long exposure times, and other challenging shooting elements.

    The secret to reducing noise, according to the company, is a combination of the S23 Ultra’s adaptive 200-megapixel sensor and its image processing software. After the shutter button is pressed, Samsung uses advanced multi-frame processing to combine multiple images into a single picture, and AI automatically adjusts the photo as necessary.

    “When a user takes a photo in low or dark lighting conditions, the processor helps remove noise through multi-frame processing,” said Joshua Cho, executive vice president of Samsung’s Visual Solution Team. “Instantaneously, the Galaxy S23 Ultra detects the detail that should be kept, and the noise that should be removed.”
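    Stripped of vendor-specific tuning, the statistics behind multi-frame noise reduction can be sketched in a few lines. The following is a minimal illustration of the general idea, not Samsung’s actual pipeline; the scene, noise level, and frame count are invented for the example, and it assumes the frames are already perfectly aligned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth low-light scene: a dim gradient (0..0.2 of full scale).
scene = np.tile(np.linspace(0.0, 0.2, 64), (64, 1))

def capture(scene, noise_sigma=0.05):
    """Simulate one exposure: the scene plus Gaussian sensor noise.
    (Real sensors also clip; omitted here to keep the statistics clean.)"""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

def merge_frames(frames):
    """Naive multi-frame merge: per-pixel mean of already-aligned frames.
    Averaging N independent frames cuts noise std by roughly 1/sqrt(N)."""
    return np.mean(np.stack(frames), axis=0)

frames = [capture(scene) for _ in range(16)]
merged = merge_frames(frames)

single_error = np.std(frames[0] - scene)  # noise in one frame (~0.05)
merged_error = np.std(merged - scene)     # noise after merging 16 frames (~0.0125)
```

    A real pipeline must first align the frames to compensate for hand shake and subject motion, and must weight or reject frames to avoid ghosting; a plain per-pixel mean is only the idealized core of the technique.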

    For Samsung and other tech companies, AI algorithms are crucial to delivering photos taken in the dark. “The AI training process is based on a large number of images tuned and annotated by experts, and AI learns the parameters to adjust for every photo taken in low-light situations,” Su explained.

    For example, algorithms identify the right level of exposure, determine the correct color palette and gradient under certain lighting conditions, artificially sharpen blurred faces or objects, and then make those changes. The final result, however, can look quite different from what the person taking the picture saw in real time, in what some might argue is a technical sleight-of-hand trick.
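    As a toy example of one such adjustment, an exposure correction can be derived from the image’s own statistics: pick a gamma curve that lifts the average brightness toward a chosen target. This is a generic textbook-style heuristic, not any vendor’s algorithm, and the target_mean parameter is invented for the illustration:

```python
import numpy as np

def auto_brighten(img, target_mean=0.45):
    """Choose a gamma so the output's average brightness lands near target_mean.
    Since mean(img) ** gamma == target_mean when gamma = log(target) / log(mean),
    this lifts dark images strongly and leaves well-exposed ones mostly alone."""
    mean = float(np.mean(img))
    gamma = np.log(target_mean) / np.log(mean)  # mean < 1, so log(mean) < 0
    return np.power(img, gamma)

# A synthetic underexposed image: every pixel in the bottom 20% of the range.
dark = np.random.default_rng(1).uniform(0.01, 0.2, size=(32, 32))
bright = auto_brighten(dark)  # noticeably brighter, still within [0, 1]
```

    Because the correction is monotonic, it brightens without reordering pixel intensities, which is one reason gamma-style curves are a common building block in tone mapping.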

    Lights illuminate the Atlanta Botanical Gardens, in this photo taken using Google Pixel 5 Night Sight setting.

    Google is also focused on reducing noise in photography. Its AI-powered Night Sight feature captures a burst of longer-exposure frames, then uses a technique called HDR+ Bracketing, which captures several frames at different settings. After a picture is taken, the frames are combined to create “sharper photos” even in dark environments “that are still incredibly bright and detailed,” said Alex Schiffhauer, a group product manager at Google.
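    The bracketing idea itself can be sketched independently of Google’s implementation. Below is a hypothetical one-dimensional example that assumes a sensor clipping at 1.0 and ignores noise and alignment; the exposure times and scene values are invented:

```python
import numpy as np

# Hypothetical scene radiance: mostly dim, with one highlight that would
# overwhelm the sensor at a long exposure.
radiance = np.full(100, 0.02)
radiance[40:50] = 0.5

def expose(radiance, exposure_time):
    """Simulate a capture: signal scales with exposure time; sensor clips at 1.0."""
    return np.minimum(radiance * exposure_time, 1.0)

long_frame = expose(radiance, exposure_time=8.0)   # lifts shadows, clips highlight
short_frame = expose(radiance, exposure_time=1.0)  # keeps highlight, dark shadows

# Bracketed merge: trust the long exposure where it did not clip, and fill
# clipped pixels from the short exposure, rescaled to the long exposure's level.
clipped = long_frame >= 1.0
merged = np.where(clipped, short_frame * 8.0, long_frame)
recovered = merged / 8.0  # back to scene radiance units: highlight and shadows both kept
```

    In practice the pipeline also merges many short, noisy frames to control noise and clipping at once; this sketch shows only how differently exposed captures extend the dynamic range.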

    While effective, there can be a slight but noticeable delay before the image is ready. But Schiffhauer said Google intends to further speed up this process on future Pixel iterations. “We’d love a world in which customers can get the quality of Night Sight without needing to hold still for a few seconds,” Schiffhauer said.

    Google also has an astrophotography feature that allows people to take shots of the night sky without needing to tweak the exposure or other settings. The algorithms detect details in the sky and enhance them so they stand out, according to the company.

    Apple has long been rumored to be working on an astrophotography feature, but some iPhone 14 Pro Max users have successfully been able to capture pictures of the sky through its existing Night mode tool. When a device detects a low-light environment, Night mode turns on to capture details and brighten shots. (The company did not respond to a request to elaborate on how the algorithms work.)

    AI can make a difference in the image, but the end results for each of these features also depend on the phone’s lenses, said Gartner analyst Bill Ray. A traditional camera will have the lens several centimeters from the sensor, but the limited space on a phone often requires squeezing things together, which can result in a shallower depth of field and reduced image quality, especially in darker environments.

    “The quality of the lens is still a big deal, and how the phone addresses the lack of depth,” Ray said.

    While night photography on phones has come a long way, a buzzy new technology could push it ahead even more.

    Generative AI, the technology that powers the viral chatbot ChatGPT, has earned plenty of attention for its ability to create compelling essays and images in response to user prompts. But these AI systems, which are trained on vast troves of online data, also have potential to edit and process images.

    “In recent years, generative AI models have also been used in photo-editing functions like background removal or replacement,” Su said. If this technology is added to smartphone photo systems, it could eventually make night modes even more powerful, Su said.

    Big Tech companies, including Google, are already fully embracing this technology in other parts of their business. Meanwhile, smartphone chipset vendors like Qualcomm and MediaTek are looking to support more generative AI applications natively on consumer devices, Su said. These include image and video augmentation.

    “But this is still about two to three years away from limited versions of this showing up on smartphones,” he said.


  • Google-parent stock drops on fears it could lose search market share to AI-powered rivals | CNN Business




    (CNN) —

    Shares of Google-parent Alphabet fell more than 3% in early trading Monday after a report sparked concerns that its core search engine could lose market share to AI-powered rivals, including Microsoft’s Bing.

    Last month, Google employees learned that Samsung was weighing making Bing the default search engine on its devices instead of Google’s search engine, prompting a “panic” inside the company, according to a report from the New York Times, citing internal messages and documents. (CNN has not reviewed the material.)

    In an effort to address the heightened competition, Google is said to be developing a new AI-powered search engine called Project “Magi,” according to the Times. The company, which reportedly has about 160 people working on the project, aims to change the way results appear in Google Search and will include an AI chat tool available to answer questions. The project is expected to be unveiled to the public next month, according to the report.

    In a statement sent to CNN, Google spokesperson Lara Levin said the company has been using AI for years to “improve the quality of our results” and “offer entirely new ways to search,” including with a feature rolled out last year that lets users search by combining images and words.

    “We’ve done so in a responsible and helpful way that maintains the high bar we set for delivering quality information,” Levin said. “Not every brainstorm deck or product idea leads to a launch, but as we’ve said before, we’re excited about bringing new AI-powered features to Search, and will share more details soon.”

    Samsung did not immediately respond to a request for comment.

    Google’s search engine has dominated the market for two decades. But the viral success of ChatGPT, which can generate compelling written responses to user prompts, appeared to put Google on defense for the first time in years.

    In March, Google began opening up access to Bard, its new AI chatbot tool that directly competes with ChatGPT and promises to help users outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    At an event in February, a Google executive also said the company will bring “the magic of generative AI” directly into its core search product and use artificial intelligence to pave the way for the “next frontier of our information products.”

    Microsoft, meanwhile, has invested in and partnered with OpenAI, the company behind ChatGPT, to deploy similar technology in Bing and other productivity tools. Other tech companies, including Meta, Baidu and IBM, as well as a slew of startups, are racing to develop and deploy AI-powered tools.

    But tech companies face risks in embracing this technology, which is known to make mistakes and “hallucinate” responses. That’s particularly true when it comes to search engines, a product that many use to find accurate and reliable information.

    Google was called out after a demo of Bard provided an inaccurate response to a question about a telescope. Shares of Google’s parent company Alphabet fell 7.7% that day, wiping $100 billion off its market value.

    Microsoft’s Bing AI demo was also called out for several errors, including an apparent failure to differentiate between types of vacuums, as well as made-up information about certain products.

    In an interview with 60 Minutes that aired on Sunday, Google and Alphabet CEO Sundar Pichai stressed the need for companies to “be responsible in each step along the way” as they build and release AI tools.

    For Google, he said, that means allowing time for “user feedback” and making sure the company “can develop more robust safety layers before we build, before we deploy more capable models.”

    He also expressed his belief that these AI tools will ultimately have broad impacts on businesses, professions and society.

    “This is going to impact every product across every company and so that’s, that’s why I think it’s a very, very profound technology,” he said. “And so, we are just in early days.”


  • TikTok is testing a new option to create AI-generated avatars for profile pictures | CNN Business



    New York (CNN) —

    TikTok is testing a new option to let users create AI-generated avatars for their profile pictures, the company confirmed to CNN on Wednesday, in a move with the potential to put recent advances in artificial intelligence technology front and center for millions of users.

    The new feature appears to create a stylized, illustrated image of the user based on an uploaded picture, according to a post from social media consultant Matt Navarra, who was first to spot the option.

    The feature is still in the early stages of testing and not widely available to TikTok users, according to the company, and there is currently no timeline for when the feature might roll out.

    “We’re always thinking about new ways to add value to the community and enrich the TikTok experience, as we continue to build a safe place that entertains, inspires creativity, and drives culture,” a TikTok spokesperson said in a statement provided to CNN. “In a few select regions, we’re experimenting with a new way to create and share profile pictures with the TikTok community.”

    AI-generated images have taken over the internet in recent months, but some tools have also raised concerns among privacy experts, digital artists, and users who have noticed the potential to sexualize images, make skin paler and make bodies thinner.


  • Ex-ByteDance employee claims China had ‘supreme access’ to all data | CNN Business



    Hong Kong (CNN) —

    China’s Communist Party had “supreme access” to all data held by TikTok’s parent company ByteDance, including data stored on servers in the United States, a former employee who is bringing a wrongful termination lawsuit has alleged.

    The allegations in the lawsuit – which ByteDance denies and has vowed to contest – come at a time of intense scrutiny within the US and other Western nations over what level of control, if any, Beijing is able to exert over TikTok and the social media app’s wildly popular content.

    Yintao “Roger” Yu filed a wrongful termination lawsuit against ByteDance in Superior Court in San Francisco earlier this month. He says he worked at the company from August 2017 to November 2018 as head of engineering for US operations.

    In a new complaint filed on Friday, Yu claimed that the Chinese Communist Party (CCP) had a special office in the company, sometimes referred to as the “Committee,” which monitored ByteDance and “guided how it advanced core Communist values.”

    “The Committee maintained supreme access to all the company data, even data stored in the United States,” the complaint obtained by CNN read.

    Yu’s lawsuit alleges that the company made user data accessible to China’s Communist Party via a backdoor channel, no matter where the data was located.

    Yu also claimed that he had observed ByteDance being “responsive to the CCP’s requests” to share, elevate or even remove content, describing ByteDance as a “useful propaganda tool” for Beijing’s leaders.

    A ByteDance spokesperson has denied Yu’s allegations, saying he worked on an app called Flipagram while at the company, which was discontinued for business reasons.

    “We plan to vigorously oppose what we believe are baseless claims and allegations in this complaint,” the spokesperson said to CNN.

    “Mr. Yu worked for ByteDance Inc. for less than a year and his employment ended in July 2018,” the spokesperson added, a timeline Yu disputed in his complaint.

    Earlier reporting on Yu’s lawsuit detailed how, shortly after he began his job, he realized that ByteDance had for years engaged in what he called a “worldwide scheme” to steal and profit from the content of others.

    The scheme allegedly involved software designed to “systematically” strip user content from competitors’ websites, chiefly Instagram and Snapchat, and populate ByteDance’s own video services with it without asking for permission.

    The former employee alleged he was “troubled by ByteDance’s efforts to skirt legal and ethical lines.”

    Yu is seeking compensatory damages such as lost earnings, injunctive relief and liquidated and punitive damages.

    In a statement to CNN, a ByteDance spokesperson said the company is “committed to respecting the intellectual property of other companies, and we acquire data in accordance with industry practices and our global policy.”

    The latest allegations come as the hugely popular TikTok app is at risk of being banned by US lawmakers for national security concerns.

    The Biden administration has threatened TikTok with a nationwide ban unless its Chinese owners sell their stakes in the company, spelling out an increasingly tense relationship between the two countries. Last month, Montana became the first US state to pass legislation banning TikTok on all personal devices.

    At issue is who owns the keys to TikTok’s algorithms and the vast troves of data collected from the 150 million people in the United States who use the app each month.

    US officials have widely expressed fears the Chinese government could potentially gain access to TikTok user data through its links to its parent company and that such information could be used to benefit Chinese intelligence or propaganda campaigns.

    However, security experts say there is still no public evidence the Chinese government has actually spied on people through TikTok, which doesn’t operate in China.

    In March, TikTok’s chief executive Shou Chew testified before Congress, saying that he had “seen no evidence that the Chinese government has access to that [US user] data; they have never asked us, we have not provided it.”

    “Our commitment is to move their data into the United States, to be stored on American soil by an American company, overseen by American personnel. So the risk would be similar to any government going to an American company, asking for data,” Chew said at the hearing.

    China has responded to the Biden administration’s demand, saying that it would “firmly” oppose a forced sale of TikTok.

    The Chinese government considers some advanced technology, including content recommendation algorithms, to be critical to its national interest. In December, Chinese officials proposed tightening the rules that govern the sale of that technology to foreign buyers.

    A sale or divestiture of TikTok would involve the export of technology, so it would need to obtain a license and approval from the Chinese government, according to a commerce ministry spokeswoman in March.


  • AI industry and researchers sign statement warning of ‘extinction’ risk | CNN Business



    Washington (CNN) —

    Dozens of AI industry leaders, academics and even some celebrities on Tuesday called for reducing the risk of global annihilation due to artificial intelligence, arguing in a brief statement that the threat of an AI extinction event should be a top global priority.

    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by the Center for AI Safety.

    The statement was signed by leading industry officials including OpenAI CEO Sam Altman; the so-called “godfather” of AI, Geoffrey Hinton; top executives and researchers from Google DeepMind and Anthropic; Kevin Scott, Microsoft’s chief technology officer; Bruce Schneier, the internet security and cryptography pioneer; climate advocate Bill McKibben; and the musician Grimes, among others.

    The statement highlights wide-ranging concerns about the ultimate danger of unchecked artificial intelligence. AI experts have said society is still a long way from developing the kind of artificial general intelligence that is the stuff of science fiction; today’s cutting-edge chatbots largely reproduce patterns based on training data they’ve been fed and do not think for themselves.

    Still, the flood of hype and investment into the AI industry has led to calls for regulation at the outset of the AI age, before any major mishaps occur.

    The statement follows the viral success of OpenAI’s ChatGPT, which has helped heighten an arms race in the tech industry over artificial intelligence. In response, a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    Hinton, whose pioneering work helped shape today’s AI systems, previously told CNN he decided to leave his role at Google and “blow the whistle” on the technology after “suddenly” realizing “that these things are getting smarter than us.”

    Dan Hendrycks, director of the Center for AI Safety, said in a tweet Tuesday that the statement, first proposed by David Krueger, an AI professor at the University of Cambridge, does not preclude society from addressing other types of AI risk, such as algorithmic bias or misinformation.

    Hendrycks compared Tuesday’s statement to atomic scientists “issuing warnings about the very technologies they’ve created.”

    “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and,’” Hendrycks tweeted. “From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.”


  • US judge temporarily blocks Microsoft acquisition of Activision | CNN Business


    A US judge late on Tuesday granted the Federal Trade Commission’s (FTC) request to temporarily block Microsoft Corp’s acquisition of video game maker Activision Blizzard and set a hearing next week.

    US District Judge Edward Davila scheduled a two-day evidentiary hearing on the FTC’s request for a preliminary injunction for June 22-23 in San Francisco. Without a court order, Microsoft could have closed on the $69 billion deal as early as Friday.

    The FTC, which enforces antitrust law, asked an administrative judge to block the transaction in early December. An evidentiary hearing in the administrative proceeding is set to begin Aug. 2.

    Based on the late-June hearing, the federal court will decide whether a preliminary injunction — which would last during the administrative review of the case — is necessary. The FTC sought the temporary block on Monday.

    Davila said the temporary restraining order issued on Tuesday “is necessary to maintain the status quo while the complaint is pending (and) preserve this court’s ability to order effective relief in the event it determines a preliminary injunction is warranted and preserve the FTC’s ability to obtain an effective permanent remedy in the event that it prevails in its pending administrative proceeding.”

    Microsoft (MSFT) and Activision (ATVI) must submit legal arguments opposing a preliminary injunction by June 16; the FTC must reply on June 20.

    Activision, which said Monday the FTC decision to seek a federal court order was “a welcome update and one that accelerates the legal process,” declined to comment Tuesday.

    Microsoft said Tuesday “accelerating the legal process in the U.S. will ultimately bring more choice and competition to the gaming market. A temporary restraining order makes sense until we can receive a decision from the court, which is moving swiftly.”

    The FTC declined to comment.

    Davila said the bar on closing will remain in place until at least five days after the court rules on the preliminary injunction request.

    The FTC has argued the transaction would give Microsoft’s video game console Xbox exclusive access to Activision games, leaving Nintendo consoles and Sony Group Corp’s PlayStation out in the cold.

    Microsoft’s bid to acquire the “Call of Duty” video game maker was approved by the EU in May, but British competition authorities blocked the takeover in April.

    Microsoft has said the deal would benefit gamers and gaming companies alike, and has offered to sign a legally binding consent decree with the FTC to provide “Call of Duty” games to rivals including Sony for a decade.

    The case reflects the muscular approach to antitrust enforcement taken by the administration of US President Joe Biden.


  • OpenAI, maker of ChatGPT, hit with proposed class action lawsuit alleging it stole people’s data | CNN Business




    (CNN) —

    OpenAI, the company behind the viral ChatGPT tool, has been hit with a lawsuit alleging the company stole and misappropriated vast swaths of peoples’ data from the internet to train its AI tools.

    The proposed class action lawsuit, filed Wednesday in a California federal court, claims that OpenAI secretly scraped “massive amounts of personal data from the internet,” according to the complaint. The nearly 160-page complaint alleges that this personal data, including “essentially every piece of data exchanged on the internet it could take,” was also seized by the company without notice, consent or “just compensation.”

    Moreover, this data scraping occurred at an “unprecedented scale,” the suit claims.

    OpenAI did not immediately respond to CNN’s request for comment Wednesday. Microsoft, a major investor in OpenAI, was also named as a defendant in the suit and did not immediately respond to a request for comment.

    “By collecting previously obscure personal data of millions and misappropriating it to develop a volatile, untested technology, OpenAI put everyone in a zone of risk that is incalculable – but unacceptable by any measure of responsible data protection and use,” Timothy K. Giordano, a partner at Clarkson, the law firm behind the suit, said in a statement to CNN Wednesday.

    The complaint also claims that OpenAI products “use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”

    The lawsuit seeks injunctive relief in the form of a temporary freeze on further commercial use of OpenAI’s products. It also seeks payments of “data dividends” as financial compensation to people whose information was used to develop and train OpenAI’s tools.

    OpenAI publicly launched ChatGPT late last year, and the tool immediately went viral for its ability to generate compelling, human-sounding responses to user prompts. The success of ChatGPT spurred an apparent AI arms race in the tech world, as companies big and small are now racing to develop and deploy AI tools into as many products as possible.


  • Twitter threatens to sue Meta after rival app Threads gains traction | CNN Business




    (CNN) —

    Twitter is threatening Meta with a lawsuit after the blockbuster launch of Meta’s new Twitter rival, Threads — in perhaps the clearest sign yet that Twitter views the app as a competitive threat.

    On Wednesday, an attorney representing Twitter sent Meta CEO Mark Zuckerberg a letter that accused the company of trade secret theft through the hiring of former Twitter employees.

    The letter was first reported by Semafor. A person familiar with the matter confirmed the letter’s authenticity to CNN.

    The letter by Alex Spiro, an outside lawyer for Twitter owner Elon Musk, alleged that Meta had engaged in “systematic, willful, and unlawful misappropriation of Twitter’s trade secrets and other intellectual property.”

    In response to reports on the letter, Musk tweeted: “Competition is fine, cheating is not.”

    The letter goes on to say that Meta hired former Twitter employees who “have improperly retained Twitter documents and electronic devices” and that Meta “deliberately” involved these employees in developing Threads.

    “Twitter intends to strictly enforce its intellectual property rights,” Spiro continued, “and demands that Meta take immediate steps to stop using any Twitter trade secrets or other highly confidential information.”

    Meta spokesperson Andy Stone flatly dismissed the letter. “No one on the Threads engineering team is a former Twitter employee — that’s just not a thing,” he said on Threads.

    In the months since Musk acquired Twitter for $44 billion, the social network has been challenged by a growing number of smaller microblogging platforms, such as the decentralized social network Mastodon and Bluesky, an alternative backed by former Twitter CEO Jack Dorsey. But Twitter has not threatened either with litigation.

    Unlike some Twitter rivals, Threads has experienced rapid growth, with Zuckerberg reporting 30 million user sign-ups in the app’s first day. As of Thursday afternoon, Threads was the number-one free app on the iOS App Store.

    The legal threat may not necessarily lead to litigation but it could be part of a strategy to slow down Meta, said Carl Tobias, a law professor at the University of Richmond.

    “Sometimes lawyers, they threaten but don’t follow through. Or they see how far they can go. That may be the case, but I don’t know that for sure,” Tobias told CNN. He added: “There may be some value to tying it up in litigation and complicating life for Meta.”


  • Meta’s Threads app rolls out first big batch of updates | CNN Business



    New York (CNN) —

    Meta’s Twitter rival app Threads on Tuesday rolled out its first major batch of updates since its launch two weeks ago as it works to maintain momentum.

    The new features include a translation button and a tab on users’ activity feed dedicated to showing who’s followed them, according to a post from Cameron Roth, a software engineer working on Threads.

    All new features should be available to iOS Threads users by the end of Tuesday, Roth said.

    Threads users have been clamoring for updates since its launch. The new app attracted over 100 million user sign-ups in less than a week, but it still lacks many of the features popular on Twitter and other platforms, including direct messaging and a robust search function.

    User engagement on Threads has dipped since its first week, according to web traffic analysis firm Similarweb. And Meta executives have teased plans to improve the app in hopes of getting users to keep coming back.

    “Early growth was off the charts, but more importantly 10s of millions of people now come back daily … The focus for the rest of the year is improving the basics and retention,” Meta CEO Mark Zuckerberg said in a Threads post Monday.

    Tuesday’s updates also include the ability to subscribe and receive notifications from accounts a user doesn’t follow and a “+” button that lets users follow new accounts from the replies on a post, as well as bug fixes and other improvements.

    Instagram head Adam Mosseri, who is overseeing Threads, has also hinted at plans to introduce a desktop version of the app as well as a feed of only accounts a user follows and an edit button.


  • Twitter says portions of source code leaked online | CNN Business




    (CNN) —

    Twitter said parts of its proprietary code were posted online and had been exposed until Friday, when the company had the material removed from the web and filed for a court order to hunt down the source of the leak.

    The leak saw excerpts of Twitter’s source code — the programming that powers the Twitter platform and its internal tools — posted to the online software repository GitHub, according to a court filing Friday by a Twitter attorney. The files were posted by a pseudonymous GitHub user, identified only by the handle FreeSpeechEnthusiast. The account was created on Jan. 3 and does not appear to have posted any other material besides the Twitter code.

    The code leak represents the latest mishap for Twitter as CEO Elon Musk has sought to reverse a sharp decline in revenues through substantial layoffs and other cost cutting measures that some experts had already said risked making the platform less safe. Leaked source code can not only provide insight into how a company designs its product but can also give criminals the chance to find or exploit security flaws and vulnerabilities.

    Twitter has launched an effort to identify the person or group behind the FreeSpeechEnthusiast GitHub account, as well as anyone who may have interacted with the leaked code. On Friday, Twitter filed for a subpoena at the US District Court for the Northern District of California, which Twitter hopes will compel GitHub to hand over IP addresses, contact information, and access logs associated with the incident.

    “The purpose for which Twitter’s DMCA Subpoena is sought is to obtain the identity of an alleged infringer or infringers, and such information will only be used for the purpose of protecting Twitter’s rights,” Twitter wrote in its filing to the court.

    GitHub removed the content on Friday after Twitter submitted a copyright claim to the company. GitHub declined to comment on the matter but said it publicly posts all copyright takedown requests and referred CNN to Twitter’s request. Twitter, which has cut much of its public relations team under Musk, automatically responded to a request for comment with an email containing a poop emoji.

    The leak was first reported by The New York Times.

    The leak comes as Musk has sought to place more of his own imprint on the social media platform he purchased last year. The acquisition prompted a wave of advertisers to flee the platform over fears the deal would lead to a rise in hate speech and an increase in reputational risks for brands. Musk has blamed the advertiser revolt for steep losses at the company, and has aggressively pushed the company’s subscription service, Twitter Blue, as an alternative revenue stream. He has also said Twitter will charge fees for other software applications to access Twitter’s platform.

    On Saturday, reports on an internal memo by Musk outlining employee stock awards suggested that Twitter was valued at about $20 billion, or less than half of the $44 billion Musk paid for the company. (CNN has not independently confirmed the memo’s existence or its contents.) In the memo, Musk reportedly defended the changes he has made at the company and claimed that Twitter’s valuation could someday exceed $250 billion.

    The same day, Musk tweeted that prior to the changes he made, Twitter only had $1 billion in cash, which he said represented about four months’ worth of expenses and an “extremely dire situation.” But, he added, things are looking up.

    “Now that advertisers are returning, it looks like we will break even in Q2,” he said.

  • Universal Music Group calls AI music a ‘fraud,’ wants it banned from streaming platforms. Experts say it’s not that easy | CNN Business


    New York
    CNN
     — 

    Universal Music Group — the music company representing superstars including Sting, The Weeknd, Nicki Minaj and Ariana Grande — has a new Goliath to contend with: artificial intelligence.

    The music group sent urgent letters in April to streaming platforms, including Spotify (SPOT) and Apple Music, asking them to block artificial intelligence platforms from training on the melodies and lyrics of their copyrighted songs.

    The company has “a moral and commercial responsibility to our artists to work to prevent the unauthorized use of their music and to stop platforms from ingesting content that violates the rights of artists and other creators,” a spokesperson from Universal Music Group, or UMG, told CNN. “We expect our platform partners will want to prevent their services from being used in ways that harm artists.”

    The move by UMG, first reported by the Financial Times, aims to stop artificial intelligence from creating an existential threat to the industry.

    Artificial intelligence, and specifically AI music, learns either by training on existing works on the internet or from a library of music supplied to the AI by humans.

    UMG says it is not against the technology itself, but rather AI that is so advanced it can recreate melodies and even musicians’ voices in seconds. That could threaten UMG’s deep library of music and artists that generate billions of dollars in revenue.

    “UMG’s success has been, in part, due to embracing new technology and putting it to work for our artists — as we have been doing with our own innovation around AI for some time already,” UMG said in a statement Monday. “However, the training of generative AI using our artists’ music … begs the question as to which side of history all stakeholders in the music ecosystem want to be on.”

    The company said AI that uses artists’ music violates UMG’s agreements and copyright law. UMG has been sending requests to streamers asking them to take down AI-generated songs.

    “I understand the intent behind the move, but I’m not sure how effective this will be as AI services will likely still be able to access the copyrighted material one way or another,” said Karl Fowlkes, an entertainment and business attorney at The Fowlkes Firm.

    No regulations currently dictate what material AI can and cannot train on. But last month, in response to individuals seeking copyright for AI-generated works, the US Copyright Office released new guidance on how to register literary, musical, and artistic works made with AI.

    “In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of ‘mechanical reproduction’ or instead of an author’s ‘own original mental conception, to which [the author] gave visible form,’” the new guidance says.

    The copyright will be determined on a case-by-case basis, the guidance continued, based on how the AI tool operates and how it was used to create the final piece or work.

    The US Copyright Office announced it will also seek public input on how the law should apply to copyrighted works that AI trains on, and how the office should treat those works.

    “AI companies using copyrighted works to train their models to create similar works is exactly the type of behavior the copyright office and courts should explicitly ban. Original art is meant to be protected by law, not works created by machines that used the original art to create new work,” said Fowlkes.

    But according to AI experts, it’s not that simple.

    “You can flag your site not to be searched. But that’s a request — you can’t prevent it. You can just request that someone not do it,” said Shelly Palmer, Professor of Advanced Media at Syracuse University.

    For example, a website can apply a robots.txt file that works like a guardrail to control which URLs search engine crawlers can access on a given site, according to Google. But it is not a full-stop, keep-out option.
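    A robots.txt file is just a plain-text file served from a site’s root (e.g. example.com/robots.txt). The fragment below is a minimal illustration of the kind of directives involved; the “GPTBot” token (used by OpenAI’s web crawler) is included only as an example of an AI-focused crawler a site might single out:

    ```
    # Ask all crawlers to stay out of one directory
    User-agent: *
    Disallow: /private/

    # Ask a specific AI training crawler to skip the entire site
    User-agent: GPTBot
    Disallow: /
    ```

    As Palmer notes above, compliance is voluntary: a crawler that ignores these directives faces no technical barrier.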

    Grammy-winning DJ and producer David Guetta proved in February just how easy it is to create new music using AI. Using ChatGPT for lyrics and Uberduck for vocals, Guetta was able to create a new song in an hour.

    The result was a rap with a voice that sounded exactly like Eminem. He played the song at one of his shows in February, but said he would never release it commercially.

    “What I think is very interesting about AI is that it’s raising a question of what is it to be an artist,” Guetta told CNN last month.

    Guetta believes AI is going to have a significant impact on the music industry, so he’s embracing it instead of fighting it. But he admits there are still questions about copyright.

    “That is an ethical problem that needs to be addressed because it sounds crazy to me that today I can type lyrics and it’s going to sound like Drake is rapping it, or Eminem,” he said.

    And that is exactly what UMG wants to avoid. The music group likens AI music to “deep fakes, fraud, and denying artists their due compensation.”

    “These instances demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists,” the UMG statement said.

    Music streamers Spotify, Apple Music and Pandora did not return requests for comment.

  • Twitter’s former CEO has a new app that looks a lot like Twitter | CNN Business



    CNN
     — 

    The buzzy new social media app of the moment looks so much like Twitter it’s almost hard to distinguish the two. The profiles, timelines and colors are nearly identical. Even the creator is the same.

    But under the hood, Bluesky, developed by Twitter co-founder and former CEO Jack Dorsey, is vastly different.

    The app, which launched in a closed beta on iOS in February and on Android this month, runs on a decentralized network which provides users with more control over how the service is run, data is stored, and content is moderated.

    In recent days, it’s gained traction among journalists, politicians and celebrities, from Democratic Rep. Alexandria Ocasio-Cortez to model Chrissy Teigen and the ’90s band Eve 6.

    Here’s what you should know:

    Bluesky calls itself “a new social network for microblogging.” With the app, users can post and follow short updates on a timeline, just as they would on Twitter, though with some differences. There are currently no hashtags – a central feature on Twitter – and no direct messages.

    Bluesky was formed independently of Twitter while Dorsey was serving as CEO but it was funded by the company until it became an independent organization in February 2022. In a tweet introducing the idea in 2019, Dorsey said it also plans to “build an open community around it, inclusive of companies & organizations, researchers, civil society leaders,” but warned “this isn’t going to happen overnight.”

    In a tweet last year, Dorsey said the “biggest issue and my biggest regret is that [Twitter] became a company.” He later clarified that if a service was a protocol it “can’t be owned by a state, or company.”

    If the idea of a decentralized social network sounds familiar, it’s likely because of Mastodon, another Twitter alternative that also gained attention late last year.

    Like Mastodon, Bluesky appeals to a number of Twitter users who are frustrated with the direction of the platform under owner Elon Musk. In the six months since Musk took over Twitter, he has made a number of controversial changes to its features and policies, including the removal of blue check marks from prominent users.

    Some of the same high-profile users now testing out Bluesky have also been openly critical of Musk’s moves at Twitter.

    According to data.ai, the company formerly known as App Annie, Bluesky has been downloaded more than 375,000 times from the Apple App Store and the waitlist continues to be flooded with signup requests. On the Google Play Store, Bluesky is described as having been downloaded more than 100,000 times. (By comparison, Twitter reported having more than 200 million monetizable daily active users last year before Musk completed his acquisition.)

    Bluesky did not immediately respond to a request for comment.

    It’s unclear if Bluesky has staying power or will lose steam as Mastodon did. But Mark Bartholomew, a professor at the University at Buffalo School of Law who writes about online privacy, said the early shift toward Bluesky is a positive one, as it gives social media users more choice over where they spend their time.

    “Competition might actually help users find the product features they want, like greater privacy protection, portability, and more significant content moderation,” he said. “Social media platforms have features that users dislike but they still feel like they must accept them to just be in the online space where everyone else is.”

    All it took, he said, was Musk taking steps “to sabotage his own platform.”

    For now, Bluesky is invite-only as it ramps up support for its network. Existing users get one invite code to share with someone for every two weeks they’re on the app. Not surprisingly, the sense of exclusivity has only added to the excitement of joining Bluesky.

    As Eve 6 wrote on Twitter: “Bluesky invite codes are the new blue check.”

  • EU approves Microsoft’s deal to buy Activision Blizzard | CNN Business



    CNN
     — 

    European regulators have approved Microsoft’s $69 billion acquisition of Activision Blizzard, handing the technology giant a victory at a time when the deal is being challenged in other countries.

    While the merger could harm competition in some respects, particularly in the fast-growing market for cloud gaming services, concessions by Microsoft were enough to mitigate antitrust concerns stemming from the deal, the European Commission said in a statement.

    Among Microsoft’s concessions was a 10-year commitment letting European consumers play Activision titles on any cloud gaming service. Microsoft also committed that it would not downgrade the quality or content of its games made available on rival streaming platforms.

    “These commitments fully address the competition concerns identified by the Commission and represent a significant improvement for cloud game streaming compared to the current situation,” the Commission said.

    The Microsoft deal, which would make the company the third largest game publisher in the world after Tencent and Sony, is being challenged in the United States and the UK.

    In a statement, Microsoft said its commitment on game streaming would go beyond the European Union.

    “The European Commission has required Microsoft to license popular Activision Blizzard games automatically to competing cloud gaming services,” said Microsoft President Brad Smith. “This will apply globally and will empower millions of consumers worldwide to play these games on any device they choose.”

    Activision CEO Bobby Kotick called the requirements “stringent” and pledged to expand investments in EU workers.

    “Our talented teams in Sweden, Spain, Germany, Romania, Poland and many other European countries have the skills, ambition, and government support needed to compete effectively on a global scale,” Kotick said in a statement. “We expect these teams to grow and prosper given their governments’ firm but pragmatic approach to gaming.”

  • AI chip boom sends Nvidia’s stock surging after whopper of a quarter | CNN Business


    New York
    CNN
     — 

    The AI boom is here, and Nvidia is reaping all the benefits.

    Shares of Nvidia (NVDA) exploded 28% higher Thursday after the company reported earnings and sales that surged well above Wall Street’s already lofty expectations. That was enough to make investors temporarily forget about America’s dangerous debt ceiling standoff, sending the broader stock market higher — even after credit rating agency Fitch warned late Wednesday that America could soon lose its sterling AAA debt rating.

    Nvidia makes chips that power generative AI, a type of artificial intelligence that can create new content, such as text and images, in response to user prompts. That’s the kind of AI underlying ChatGPT, Google’s Bard, Dall-E and many of the other new AI technologies.

    “The computer industry is going through two simultaneous transitions — accelerated computing and generative AI,” said Jensen Huang, Nvidia’s CEO, in a statement. “A trillion dollars of installed global data center infrastructure will transition from general purpose to accelerated computing as companies race to apply generative AI into every product, service and business process.”

    Huang said Nvidia is increasing supply of its entire suite of data center products to meet “surging demand” for them.

    Last quarter, Nvidia’s profit surged 26% to $2 billion, and sales rose 19% to $7.2 billion, each easily surpassing Wall Street analysts’ forecasts. Nvidia’s outlook for the current quarter was also significantly — about 50% — higher than analysts’ predictions.

    Nvidia’s stock is up nearly 110% this year.

    “There is not one better indicator around underlying AI demand going on … than the foundational Nvidia story,” said Dan Ives, analyst at Wedbush. “We view Nvidia at the core hearts and lungs of the AI revolution.”

  • The Reddit blackout shows no signs of stopping | CNN Business



    CNN
     — 

    A widespread Reddit blackout affecting some of the site’s largest communities has continued into its third day with no signs of stopping, as a number of groups on the site vowed to remain closed off indefinitely to protest changes to the platform’s data policies.

    As of Wednesday morning, more than 6,000 subreddits remained inaccessible and in private mode after what began as a two-day voluntary shutdown. The blackout includes popular forums such as r/aww, r/videos and r/music, each of which claims more than 25 million subscribers on the platform.

    The extended protest highlights the commitment of some users, moderators and developers to a long-term standoff with Reddit’s management over a decision to begin charging steep fees for third-party data access to its platform.

    Reddit didn’t immediately respond to a request for comment.

    The coming fees have provoked broad outrage because of their expected impact on independent apps and moderator tools that have grown up around Reddit and that many users view as a critical resource. Some of the largest third-party apps, such as Apollo and RIF, have said they cannot afford the fees and must shut down, effectively driving users to Reddit’s native app that has been widely panned as slow, buggy and inferior, particularly for users with disabilities.

    In recent days, Reddit has said it would exempt some accessibility apps from the price changes and allow some third-party tools to continue operating through its application programming interface (API). But many moderators have called the announcements little more than a “microscopic” concession.

    In response to allegations that Reddit is imposing the fees and forcing developers to shut down in a “profit-driven” move, Reddit co-founder and CEO Steve Huffman said in a recent Q&A with users that Reddit will “continue to be profit-driven until profits arrive.”

    “Unlike some of the [third-party] apps, we are not profitable,” Huffman said.

    The tensions echo how Twitter, under its new owner Elon Musk, has prompted criticism with plans for its own paywall for data in a bid to develop new revenue sources and to shore up the company’s struggling finances. For Reddit, the stakes are also high to grow revenue, as the company reportedly looks to go public later this year.

    Huffman reportedly dismissed the blackout in a leaked internal memo obtained by The Verge. According to the memo, Huffman described the protest as “among the noisiest we’ve seen” but insisted that “like all blowups on Reddit, this one will pass as well.”

    “We absolutely must ship what we said we would,” Huffman reportedly wrote in the memo, in an apparent reference to the API changes. Huffman also reportedly predicted that some subreddits would end their protest after the initially scheduled two days.

    As of Wednesday morning, many groups participating in the blackout had lifted their self-imposed restrictions. But even as some groups went public once more, others joined the protest.

  • Meta releases clues on how AI is used on Facebook and Instagram | CNN Business


    Washington
    CNN
     — 

    As demand for greater transparency in artificial intelligence mounts, Meta released tools and information Thursday aimed at helping users understand how AI influences what they see on its apps.

    The social media giant introduced nearly two dozen explainers focused on various features of its platforms, such as Instagram Stories and Facebook’s news feed. These describe how Meta selects what content to recommend to users.

    The description and disclosures came in the face of looming legislation around the world that may soon impose concrete disclosure requirements on companies that use AI technology.

    Meta’s so-called “system cards” cover how the company determines which accounts to present to users as recommended follows on Facebook and Instagram, how the company’s search tools function and how notifications work.

    For example, the system card devoted to Instagram’s search function describes how the app gathers all relevant search results in response to a user’s query, scores each result based on the user’s past interactions with the app and then applies “additional filters” and “integrity processes” to narrow the list before finally presenting it to the user.
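    The retrieve-score-filter pipeline described in that system card can be sketched in a few lines. This is a hedged illustration only — the function name, the substring matching, the scoring field and the blocklist are all assumptions for the sake of example, not Meta’s actual implementation:

    ```python
    def search(query, results, past_interactions, blocked_ids):
        """Toy version of a retrieve -> score -> filter -> rank search pipeline."""
        # 1. Gather all results relevant to the query.
        candidates = [r for r in results if query.lower() in r["name"].lower()]
        # 2. Score each result using the user's past interactions with the app.
        for r in candidates:
            r["score"] = past_interactions.get(r["id"], 0)
        # 3. Apply additional filters / integrity checks to narrow the list.
        candidates = [r for r in candidates if r["id"] not in blocked_ids]
        # 4. Present the remaining results, best-scoring first.
        return sorted(candidates, key=lambda r: r["score"], reverse=True)
    ```

    The point of the structure is that ranking and integrity checks are separate stages: a result can be relevant and well-scored yet still be removed before the list reaches the user.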

    Meta’s president of global affairs, Nick Clegg, tied the company’s new disclosures to a global debate about the potential dangers of artificial intelligence that range from the spread of misinformation to a rise in AI-enabled fraud and scams.

    “With rapid advances taking place with powerful technologies like generative AI, it’s understandable that people are both excited by the possibilities and concerned about the risks,” Clegg wrote in a blog post Thursday. “We believe that the best way to respond to those concerns is with openness.”

    A longer blog post describing how Facebook content ranking works, meanwhile, identifies detailed factors that go into determining what information the platform presents first.

    Those factors include whether a post has been flagged by a third-party fact checker, how engaging the account that posted the material may be, and whether you may have interacted with the account in the past.

    Meta’s new explainers coincide with the release of new tools for users to tailor the company’s algorithms, including the ability to tell Instagram to supply more of a certain type of content. Previously, Meta had only offered the ability for users to tell Instagram to show less, not more, Clegg wrote.

    On both Facebook and Instagram, he added, users will now be able to customize their feeds further by accessing a menu from individual posts.

    Finally, he said, Meta will be making it easier for researchers to study its platforms by providing a content library and an application programming interface (API) featuring a variety of content from Facebook and Instagram.

    Meta’s announcement comes as European lawmakers have swiftly advanced legislation that would create new requirements for explanation and transparency for companies that use artificial intelligence, and as US lawmakers have said they hope to begin working on similar legislation later this year.

  • Twitter’s future is in doubt as Threads tops 100 million users | CNN Business


    Washington
    CNN
     — 

    Twitter has weathered months, if not years, of mismanagement as well as mass layoffs, frequent service disruptions and an exodus of top advertisers, but the launch of a rival app from Meta could prove to be the final straw.

    Threads surpassed 100 million users this weekend, less than a week after it launched, Meta CEO Mark Zuckerberg announced Monday, marking a staggering feat for any social network and one that puts it on pace to rapidly pass Twitter’s audience size.

    Meanwhile, multiple internet traffic analysts reported noticeable declines in Twitter usage in just the past few days. The results underscore the risk Meta poses to Twitter’s business and raise questions about how, or if, Twitter can stem its losses.

    Twitter traffic had already been trending downward for months, according to data from the internet infrastructure company Cloudflare and the web analytics firm Similarweb. But the pace of decline appears to have accelerated in recent days, both companies said, likely reflecting strong interest in Threads and a mass migration from the platform owned by Elon Musk to the one run by Zuckerberg.

    Twitter didn’t immediately respond to a request for comment.

    On Sunday, Cloudflare CEO Matthew Prince shared a chart showing Twitter’s popularity relative to other websites it tracks. “Twitter traffic tanking,” Prince said as he posted the chart.

    The chart showed that in January, Twitter was ranked 32nd on the list; the next month, it had fallen to 34th. For much of the spring, Twitter fluctuated between 35th place and 37th. But the beginning of July showed a rapid falloff in popularity, as Twitter plunged to 40th place. (Cloudflare defines popularity as the “size of a population of users that look up a domain per unit of time.”)

    Similarweb told CNN Monday it has witnessed comparable trends in Twitter traffic.

    “In the first two full days that Threads was generally available, [last] Thursday and Friday, web traffic to twitter.com was down 5% compared with the same days of the previous week and down 11% compared with July 6 and 7, 2022,” said David Carr, a senior insights manager at Similarweb. “We’ve been reporting for a while that Twitter is down compared with last year – June traffic was down 4% – but Threads seems to be taking a bigger bite out of it.”

    Bolstering the traffic reports were the anecdotal experiences of some Threads users. Alex Stamos, director of the Stanford Internet Observatory, said Saturday he ran an “unscientific test” of how the same post he shared on Twitter, Threads and Mastodon, another rival, performed with his audience over a 23-hour period.

    The identical content Stamos created on each platform saw significantly more engagement on Threads than on Twitter as measured by likes and replies — despite having a fraction of his usual reach on the newer platform, he said.

    Stamos, who has more than 100,000 followers on Twitter but only a tenth of that number on Threads, added that strong Threads engagement with his posts describing the “research” also supported the original findings. The quality of the replies to his posts were also much higher on non-Twitter platforms, he observed.

    “From my perspective, Twitter is done as a platform for serious tech conversations,” said Stamos, who previously served as chief security officer at Facebook.

    Fueling Threads’ rapid growth has been Meta’s use of Instagram as a springboard to sign up new users, along with what many Threads users have identified as a dissatisfaction with Twitter.

    Threads started out with a number of celebrity accounts prepopulating its platform but has since gained additional high-profile users including Kim Kardashian and Jeff Bezos. An account that had been banned from Twitter that tracks the movements of Musk’s private jet has also joined the new platform.

    More than 100 US lawmakers have signed up as well, Axios reported last week, though few world leaders appear to be on Threads at the moment.

    Zuckerberg and Instagram head Adam Mosseri have emphasized that Threads is about more than replacing Twitter and that the app seeks to tap audiences outside of Twitter’s traditional user base. That means Threads will not actively elevate news or political content, Mosseri said, describing those topics as “not at all worth the scrutiny, negativity (let’s be honest), or integrity risks that come along with them.”

    Over the weekend, Mosseri’s stance on news and politics triggered a debate over Threads’ approach to those topics. Some users praised it as a way to make the platform more accessible to average users, who may never have embraced Twitter before. Others argued that many of the topics Mosseri characterized as non-political, including music, fashion and entertainment, are their own source of news and can be inherently political.

    Even as Meta’s executives look to put some daylight between Threads and Twitter, the rapid rise of Threads only appears to have deepened Musk’s longtime feud with Zuckerberg. The app’s launch prompted threats of litigation as Twitter has accused Meta of trade secret theft, not to mention talk of a physical cage fight between Musk and Zuckerberg.

    On Sunday, Musk, who is known for erratic behavior and incendiary remarks, made it even more personal as he lobbed a sexual insult at Zuckerberg and proposed comparing the size of their respective genitalia.

    Zuckerberg has not directly responded to the insult. But after a Threads user pointed out that the new app was not featured in Twitter’s trending topics tab, Zuckerberg replied “Concerning” with a crying-laughter emoji. And he used the same emoji to reply to a post by the fast-food brand Wendy’s, which had suggested Zuckerberg should “go to space just to really make him mad lol.”

  • OpenAI’s Sam Altman launches Worldcoin crypto project | CNN Business

    Worldcoin, a cryptocurrency project founded by OpenAI CEO Sam Altman, launched on Monday.

    The project’s core offering is its World ID, which the company describes as a “digital passport” to prove that its holder is a real human, not an AI bot. To get a World ID, a customer signs up to do an in-person iris scan using Worldcoin’s ‘orb’, a silver ball approximately the size of a bowling ball. Once the orb’s iris scan verifies the person is a real human, it creates a World ID.

    The company behind Worldcoin is San Francisco and Berlin-based Tools for Humanity.

    The project has 2 million users from its beta period, and with Monday’s launch, Worldcoin is scaling up “orbing” operations to 35 cities in 20 countries. As an enticement, those who sign up in certain countries will receive Worldcoin’s cryptocurrency token WLD.

    WLD’s price rose in early trading on Monday. On the world’s largest exchange, Binance, it hit a peak of $5.29 and at 1000 GMT was at $2.49 from a starting price of $0.15, having seen $25.1 million of trading volume, according to Binance’s website.

    Blockchains can store the World IDs in a way that preserves privacy and can’t be controlled or shut down by any single entity, co-founder Alex Blania told Reuters.

    The project says World IDs will be necessary in the age of generative AI chatbots like ChatGPT, which produce remarkably humanlike language. World IDs could be used to tell the difference between real people and AI bots online.

    Altman told Reuters Worldcoin also can help address how the economy will be reshaped by generative AI.

    “People will be supercharged by AI, which will have massive economic implications,” he said.

    One example Altman likes is universal basic income, or UBI, a social benefits program usually run by governments where every individual is entitled to payments. Because AI “will do more and more of the work that people now do,” Altman believes UBI can help to combat income inequality. Since only real people can have World IDs, it could be used to reduce fraud when deploying UBI.

    Altman said he thought a world with UBI would be “very far in the future” and he did not have a clear idea of what entity could dole out money, but that Worldcoin lays groundwork for it to become a reality.

    “We think that we need to start experimenting with things so we can figure out what to do,” he said.


  • Opinion: Utah’s startling new rules for kids and social media | CNN

    Editor’s Note: Kara Alaimo, an associate professor of communication at Fairleigh Dickinson University, writes about issues affecting women and social media. Her book, “Over the Influence: Why Social Media Is Toxic for Women and Girls — And How We Can Reclaim It,” will be published by Alcove Press in 2024. The opinions expressed in this commentary are her own. Read more opinion on CNN.



    CNN
     — 

    Utah’s Republican governor, Spencer Cox, recently signed two bills into law that sharply restrict children’s use of social media platforms. Under the legislation, which takes effect next year, social media companies have to verify the ages of all users in the state, and children under age 18 have to get permission from their parents to have accounts.

    Parents will also be able to access their kids’ accounts, apps won’t be allowed to show children ads, and kids’ accounts won’t be usable between 10:30 p.m. and 6:30 a.m. without parental permission.

    It’s about time. Social networks in the United States have become deeply dangerous for children, and parents can no longer protect our kids without the tools and safeguards this law provides. While Cox is correct that these measures won’t be “foolproof,” and how they will be implemented remains an open question, one thing is clear: Congress should follow Utah’s lead and enact a similar law to protect every child in this country.

    One of the most important parts of Utah’s law is the requirement for social networks to verify the ages of users. Right now, most apps ask users their ages without requiring proof. Children can lie and say they’re older to avoid some of the features social media companies have created to protect kids — like TikTok’s new setting that asks 13- to 17-year-olds to enter their passwords after they’ve been online for an hour, as a prompt for them to consider whether they want to spend so much time on the app.

    While critics argue that age verification allows tech companies to collect even more data about users, let’s be real: These companies already have a terrifying amount of intimate information about us. To solve this problem, we need a separate (and comprehensive) data privacy law. But until that happens, this concern shouldn’t stop us from protecting kids.

    One of the key components of this legislation is allowing parents access to their kids’ accounts. By doing this, the law begins to help address one of the biggest dangers kids face online: toxic content. I’m talking about things like the 2,100 pieces of content about suicide, self-harm and depression that 14-year-old Molly Russell in the UK saved, shared or liked in the six months before she killed herself last year.

    I’m also talking about things like the blackout challenge — also called the pass-out or choking challenge — that has gone around social networks. In 2021, four children 12 or younger in four different states all died after trying it.

    “Check out their phones,” urged the father of one of these young victims. “It’s not about privacy — this is their lives.”

    Of course, there are legitimate privacy concerns to worry about here, and just as kids’ use of social media can be deadly, social apps can also be used in healthy ways. LGBTQ children who aren’t accepted in their families or communities, for example, can turn online for support that is good for their mental health. Now, their parents will potentially be able to see this content on their accounts.

    I hope groups that serve children who are questioning their gender and sexual identities and those that work with other vulnerable youth will adapt their online presences to try to serve as resources for educating parents about inclusivity and tolerance, too. This is also a reminder that vulnerable children need better access to mental health services like therapy — they’re way too young to be left to their own devices to seek out the support they need online.

    But, despite these very real privacy concerns, it’s simply too dangerous for parents not to know what our kids are seeing on social media. Just as parents and caregivers supervise our children offline and don’t allow them to go to bars or strip clubs, we have to ensure they don’t end up in unsafe spaces on social media.

    The other huge challenge the Utah law helps parents overcome is the amount of time kids are spending online. A 2022 survey by Common Sense Media found that the average 8- to 12-year-old spends 5 hours and 33 minutes per day on screen media, while the average 13- to 18-year-old spends 8 hours and 39 minutes every day. That’s more time than a full-time job.

    The American Academy of Pediatrics warns that lack of sleep is associated with serious harms in children — everything from injuries to depression, obesity and diabetes. So parents in the US need a way to make sure their kids aren’t up on TikTok all night. (Parents in China don’t have to worry about this because the Chinese version of TikTok doesn’t allow kids to stay on for more than 40 minutes and isn’t usable overnight.)

    Of course, Utah isn’t an authoritarian state like China, so it can’t simply turn off kids’ phones. That’s where the new law comes in, requiring social networks to implement these settings themselves. The tougher provision for tech companies will be the requirement that social apps ensure they’re not designed to addict kids.

    Social networks are arguably addictive by nature, since they feed on our desires for connection and validation. But hopefully the threat of being sued by children who say they’ve been addicted or otherwise harmed by social networks — an outcome for which this law provides an avenue — will force tech companies to think carefully about how they build their algorithms and features like bottomless feeds that seem practically designed to keep users glued to their screens.

    TikTok and Snap didn’t respond to requests for comment from CNN about Utah’s law, while a representative for Meta, Facebook’s parent company, said the company shares the goal to keep Facebook safe for kids but also wants it to be accessible.

    Of course, if social networks had been more responsible, it probably wouldn’t have come to this. But in the US, tech companies have taken advantage of a lack of rules to build platforms that can be dangerous for our kids.

    States are finally saying no more. In addition to Utah’s measures, California passed a sweeping online safety law last year. Connecticut, Ohio and Arkansas are also considering laws to protect kids by regulating social media. A bill introduced in Texas wouldn’t allow kids to use social media at all.

    There’s nothing innocent about the experiences many kids are having on social media. This law will help Utah’s parents protect their kids. Parents in other states need the same support. Now, it’s time for the federal government to step up and ensure children throughout the country have the same protections as Utah kids.

    Suicide & Crisis Lifeline: Call or text 988. The Lifeline provides 24/7, free and confidential support for people in distress, prevention and crisis resources for you and your loved ones, and best practices for professionals in the United States. En Español: Línea de Prevención del Suicidio y Crisis: 1-888-628-9454.


  • FTC chair Lina Khan warns AI could ‘turbocharge’ fraud and scams | CNN Business

    Washington
    CNN
     — 

    Artificial intelligence tools such as ChatGPT could lead to a “turbocharging” of consumer harms including fraud and scams, and the US government has substantial authority to crack down on AI-driven consumer harms under existing law, members of the Federal Trade Commission said Tuesday.

    Addressing House lawmakers, FTC chair Lina Khan said the “turbocharging of fraud and scams that could be enabled by these tools are a serious concern.”

    In recent months, a new crop of AI tools has gained attention for generating convincing emails, stories and essays as well as images, audio and video. While these tools have the potential to change the way people work and create, some have also raised concerns about how they could be used to deceive by impersonating individuals.

    Even as policymakers across the federal government debate new AI-specific rules, citing concerns about possible algorithmic discrimination and privacy issues, companies could still face FTC investigations today under a range of statutes that have been on the books for years, Khan and her fellow commissioners said.

    “Throughout the FTC’s history we have had to adapt our enforcement to changing technology,” said FTC Commissioner Rebecca Slaughter. “Our obligation is to do what we’ve always done, which is to apply the tools we have to these changing technologies … [and] not be scared off by this idea that this is a new, revolutionary technology.”

    FTC Commissioner Alvaro Bedoya said companies cannot escape liability simply by claiming that their algorithms are a black box.

    “Our staff has been consistently saying our unfair and deceptive practices authority applies, our civil rights laws, fair credit, Equal Credit Opportunity Act, those apply,” said Bedoya. “There is law, and companies will need to abide by it.”

    The FTC has previously issued extensive public guidance to AI companies, and the agency last month received a request to investigate OpenAI over claims that the company behind ChatGPT has misled consumers about the tool’s capabilities and limitations.

    [ad_2]

    Source link