ReportWire

Tag: iab-technology & computing

  • Opinion: Utah’s startling new rules for kids and social media | CNN

    Editor’s Note: Kara Alaimo, an associate professor of communication at Fairleigh Dickinson University, writes about issues affecting women and social media. Her book, “Over the Influence: Why Social Media Is Toxic for Women and Girls — And How We Can Reclaim It,” will be published by Alcove Press in 2024. The opinions expressed in this commentary are her own. Read more opinion on CNN.



    CNN —

    Utah’s Republican governor, Spencer Cox, recently signed two bills into law that sharply restrict children’s use of social media platforms. Under the legislation, which takes effect next year, social media companies have to verify the ages of all users in the state, and children under age 18 have to get permission from their parents to have accounts.

    Parents will also be able to access their kids’ accounts, apps won’t be allowed to show children ads, and accounts for kids won’t be able to be used between 10:30 p.m. and 6:30 a.m. without parental permission.

    It’s about time. Social networks in the United States have become potentially incredibly dangerous for children, and parents can no longer protect our kids without the tools and safeguards this law provides. While Cox is correct that these measures won’t be “foolproof,” and what implementing them actually looks like remains an open question, one thing is clear: Congress should follow Utah’s lead and enact a similar law to protect every child in this country.

    One of the most important parts of Utah’s law is the requirement for social networks to verify the ages of users. Right now, most apps ask users their ages without requiring proof. Children can lie and say they’re older to avoid some of the features social media companies have created to protect kids — like TikTok’s new setting that asks 13- to 17-year-olds to enter their passwords after they’ve been online for an hour, as a prompt for them to consider whether they want to spend so much time on the app.

    While critics argue that age verification allows tech companies to collect even more data about users, let’s be real: These companies already have a terrifying amount of intimate information about us. To solve this problem, we need a separate (and comprehensive) data privacy law. But until that happens, this concern shouldn’t stop us from protecting kids.

    One of the key components of this legislation is allowing parents access to their kids’ accounts. By doing this, the law begins to help address one of the biggest dangers kids face online: toxic content. I’m talking about things like the 2,100 pieces of content about suicide, self-harm and depression that 14-year-old Molly Russell in the UK saved, shared or liked in the six months before she killed herself last year.

    I’m also talking about things like the blackout challenge — also called the pass-out or choking challenge — that has gone around social networks. In 2021, four children 12 or younger in four different states all died after trying it.

    “Check out their phones,” urged the father of one of these young victims. “It’s not about privacy — this is their lives.”

    Of course, there are legitimate privacy concerns to worry about here, and just as kids’ use of social media can be deadly, social apps can also be used in healthy ways. LGBTQ children who aren’t accepted in their families or communities, for example, can turn online for support that is good for their mental health. Now, their parents will potentially be able to see this content on their accounts.

    I hope groups that serve children who are questioning their gender and sexual identities and those that work with other vulnerable youth will adapt their online presences to try to serve as resources for educating parents about inclusivity and tolerance, too. This is also a reminder that vulnerable children need better access to mental health services like therapy — they’re way too young to be left to their own devices to seek out the support they need online.

    But, despite these very real privacy concerns, it’s simply too dangerous for parents not to know what our kids are seeing on social media. Just as parents and caregivers supervise our children offline and don’t allow them to go to bars or strip clubs, we have to ensure they don’t end up in unsafe spaces on social media.

    The other huge challenge the Utah law helps parents overcome is the amount of time kids are spending on social media. A 2022 survey by Common Sense Media found that the average 8- to 12-year-old is on social media for 5 hours and 33 minutes per day, while the average 13- to 18-year-old spends 8 hours and 39 minutes every day. That’s more time than a full-time job.

    The American Academy of Pediatrics warns that lack of sleep is associated with serious harms in children — everything from injuries to depression, obesity and diabetes. So parents in the US need a way to make sure their kids aren’t up on TikTok all night (parents in China don’t have to worry about this because the Chinese version of TikTok doesn’t allow kids to stay on for more than 40 minutes and isn’t usable overnight).

    Of course, Utah isn’t an authoritarian state like China, so it can’t just turn off kids’ phones. That’s where the new law comes in, requiring social networks to implement these settings. The tougher part of Utah’s law for tech companies will be a provision requiring social apps to ensure they’re not designed to addict kids.

    Social networks are arguably addictive by nature, since they feed on our desires for connection and validation. But hopefully the threat of being sued by children who say they’ve been addicted or otherwise harmed by social networks — an outcome for which this law provides an avenue — will force tech companies to think carefully about how they build their algorithms and features like bottomless feeds that seem practically designed to keep users glued to their screens.

    TikTok and Snap didn’t respond to requests for comment from CNN about Utah’s law, while a representative for Meta, Facebook’s parent company, said the company shares the goal to keep Facebook safe for kids but also wants it to be accessible.

    Of course, if social networks had been more responsible, it probably wouldn’t have come to this. But in the US, tech companies have taken advantage of a lack of rules to build platforms that can be dangerous for our kids.

    States are finally saying no more. In addition to Utah’s measures, California passed a sweeping online safety law last year. Connecticut, Ohio and Arkansas are also considering laws to protect kids by regulating social media. A bill introduced in Texas wouldn’t allow kids to use social media at all.

    There’s nothing innocent about the experiences many kids are having on social media. This law will help Utah’s parents protect their kids. Parents in other states need the same support. Now, it’s time for the federal government to step up and ensure children throughout the country have the same protections as Utah kids.

    Suicide & Crisis Lifeline: Call or text 988. The Lifeline provides 24/7, free and confidential support for people in distress, prevention and crisis resources for you and your loved ones, and best practices for professionals in the United States. En Español: Línea de Prevención del Suicidio y Crisis: 1-888-628-9454.


  • Micron Technology: China probes US chip maker for cybersecurity risks as tech tension escalates | CNN Business

    Hong Kong CNN —

    China has launched a cybersecurity probe into Micron Technology, one of America’s largest memory chip makers, in apparent retaliation after US allies in Asia and Europe announced new restrictions on the sale of key technology to Beijing.

    The Cyberspace Administration of China (CAC) will review products sold by Micron in the country, according to a statement by the watchdog late on Friday.

    The move is aimed at “ensuring the security of key information infrastructure supply chains, preventing cybersecurity risks caused by hidden product problems, and maintaining national security,” it noted.

    It came on the same day that Japan, a US ally, said it would restrict the export of advanced chip manufacturing equipment to countries including China, following similar moves by the United States and the Netherlands.

    Washington and its allies have announced curbs on China’s semiconductor industry, which strike at the heart of Beijing’s bid to become a tech superpower.

    Last month, the Netherlands also unveiled new restrictions on overseas sales of semiconductor technology, citing the need to protect national security. In October, the United States banned Chinese companies from buying advanced chips and chipmaking equipment without a license.

    Micron told CNN it was aware of the review.

    “We are in communication with the CAC and are cooperating fully,” it said, adding that it stands by the security of its products.

    Shares in Micron sank 4.4% on Wall Street Friday following the news, the biggest drop in more than three months. Micron derives more than 10% of its revenue from China.

    In an earlier filing, the Idaho-based company had warned of such risks.

    “The Chinese government may restrict us from participating in the China market or may prevent us from competing effectively with Chinese companies,” it said last week.

    China has strongly criticized restrictions on tech exports, saying last month it “firmly opposes” such measures.

    In efforts to boost growth and job creation, Beijing is seeking to woo foreign investments as it grapples with mounting economic challenges. The newly minted premier Li Qiang and several top economic officials have been rolling out the welcome wagon for global CEOs and promising they would “provide a good environment and services.”

    But Beijing has also exerted growing pressure on foreign companies to bring them into line with its agenda.

    Last month, authorities closed the Beijing office of Mintz Group, a US corporate intelligence firm, and detained five local staff.

    Days earlier, they suspended Deloitte’s operations in Beijing for three months and imposed a fine of $31 million over alleged lapses in its work auditing a state-owned distressed debt manager.


  • Academic researchers blast Twitter’s data paywall as ‘outrageously expensive’ | CNN Business

    Washington CNN —

    After Twitter announced in February it would begin charging third parties to access its platform data, academic researchers warned that the vaguely worded plan could threaten important studies about how misinformation, harassment and other malicious activity spreads online.

    Now, as Twitter has released more pricing information, many of those same academics say their fears were well-founded. They complain that Twitter’s new tiered paywall not only charges “outrageously expensive” prices but also restricts the amount of accessible data so heavily that what little researchers can see, even on the most expensive tiers, is not useful for rigorous study.

    Twitter, which has cut much of its public relations team under CEO Elon Musk, automatically responded to a request for comment with an email containing a poop emoji.

    In an open letter this week, the Coalition for Independent Technology Research — a group representing dozens of researchers and civil society organizations — said free and open access to Twitter data has historically enabled systematic, large-scale research on social media’s role in public health initiatives, foreign propaganda, political discourse, and even the bots and spam that Musk has blamed for ruining Twitter.

    But Twitter’s new tiered access system undercuts all of that, the researchers said. The company’s pricing that launched last week, starting at $100 per month for a “basic” amount of data, does not provide nearly enough volume for users at the low end, while the high end “ranges from $42,000 to $210,000 per month [and] is unaffordable for researchers,” the letter said.

    The new basic tier limits users to reading just 10,000 tweets per month. That represents 0.3% of what researchers used to be able to collect in a single day, the letter said.

    Even under the most expensive “enterprise” tier costing upwards of $2.5 million a year, Twitter is offering only a fraction of the tweets it used to, the letter continued. Before the change, researchers could pay about $500 a month for the ability to access up to 10% of the roughly 1 billion tweets a month that flow across Twitter’s platform.

    Now, though, “the most expensive Enterprise tier would cut that by 80% at about 400 times the price,” the researchers’ letter said.

    Asking researchers to pay orders of magnitude more for a fifth of the access they once had represents a barrier to accountability and transparency, the letter added.
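    The letter’s figures cross-check against one another. As a back-of-envelope sketch (assuming, per the article, roughly 1 billion tweets flowing through the platform per month, an old ~$500/month tier covering up to 10% of that volume, a ~$2.5 million/year enterprise tier, and a 30-day month), both the “0.3% of a single day” and the “about 400 times the price” comparisons fall out of the arithmetic:

```python
# Back-of-envelope check of the researchers' pricing comparison.
# Assumed inputs are taken from the figures quoted in this article.

MONTHLY_TWEETS = 1_000_000_000   # ~1 billion tweets per month
OLD_PRICE = 500                  # old academic tier, USD per month
OLD_SHARE = 0.10                 # old tier: up to 10% of the firehose

old_monthly_volume = MONTHLY_TWEETS * OLD_SHARE   # 100M tweets/month
old_daily_volume = old_monthly_volume / 30        # ~3.3M tweets/day

# New basic tier: $100/month to read just 10,000 tweets per month.
basic_share_of_old_day = 10_000 / old_daily_volume   # fraction of one old day

# New enterprise tier: ~$2.5M/year, i.e. ~$208k/month, for roughly
# 80% less volume than the old tier could provide.
enterprise_monthly_price = 2_500_000 / 12
price_multiple = enterprise_monthly_price / OLD_PRICE

print(f"{basic_share_of_old_day:.1%}")   # 0.3% — matches the letter
print(f"{price_multiple:.0f}x")          # 417x — "about 400 times the price"
```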

    “Under the new pricing plans, studying the communications and interactions of even a small population—such as the 535 Members of the U.S. Congress or the 705 Members of the European Parliament—will be unfeasible,” the letter said. “The new pricing plans will also end at least 76 long-term efforts, including dashboards, tools, or code packages that support other researchers, journalists, first-responders, educators, and Twitter users.”


  • FTC chair Lina Khan warns AI could ‘turbocharge’ fraud and scams | CNN Business

    Washington CNN —

    Artificial intelligence tools such as ChatGPT could lead to a “turbocharging” of consumer harms including fraud and scams, and the US government has substantial authority to crack down on AI-driven consumer harms under existing law, members of the Federal Trade Commission said Tuesday.

    Addressing House lawmakers, FTC chair Lina Khan said the “turbocharging of fraud and scams that could be enabled by these tools are a serious concern.”

    In recent months, a new crop of AI tools has gained attention for the ability to generate convincing emails, stories and essays as well as images, audio and video. While these tools have the potential to change the way people work and create, some have also raised concerns about how they could be used to deceive by impersonating individuals.

    Even as policymakers across the federal government debate how to promote specific AI rules, citing concerns about possible algorithmic discrimination and privacy issues, companies could still face FTC investigations today under a range of statutes that have been on the books for years, Khan and her fellow commissioners said.

    “Throughout the FTC’s history we have had to adapt our enforcement to changing technology,” said FTC Commissioner Rebecca Slaughter. “Our obligation is to do what we’ve always done, which is to apply the tools we have to these changing technologies … [and] not be scared off by this idea that this is a new, revolutionary technology.”

    FTC Commissioner Alvaro Bedoya said companies cannot escape liability simply by claiming that their algorithms are a black box.

    “Our staff has been consistently saying our unfair and deceptive practices authority applies, our civil rights laws, fair credit, Equal Credit Opportunity Act, those apply,” said Bedoya. “There is law, and companies will need to abide by it.”

    The FTC has previously issued extensive public guidance to AI companies, and the agency last month received a request to investigate OpenAI over claims that the company behind ChatGPT has misled consumers about the tool’s capabilities and limitations.


  • Snapchat rolls out chatbot powered by ChatGPT to all users | CNN Business

    CNN —

    Snapchat is about to give new meaning to the “chat” part of its name.

    Snap, the company behind Snapchat, announced on Wednesday that its customizable My AI chatbot is now accessible to all users within the app. The feature, which is powered by the viral AI chatbot ChatGPT, was previously only available to paying Snapchat+ subscribers.

    The tool offers recommendations, answers questions, helps users make plans and can write a haiku in seconds, according to the company. It can be brought into conversation with friends when it’s mentioned with “@MyAI.” Users can also give it a name and design a custom Bitmoji avatar for it to personalize it more.

    The move comes more than a month after ChatGPT creator OpenAI opened up access to its chatbot to third-party businesses. Snap, Instacart and tutor app Quizlet were among the early partners experimenting with adding ChatGPT.

    Since its public release in November 2022, ChatGPT has stunned many users with its impressive ability to generate original essays, stories and song lyrics in response to user prompts. The initial wave of attention on the tool helped renew an arms race among tech companies to develop and deploy similar AI tools in their products.

    The initial batch of companies tapping into ChatGPT’s functionality each have slightly different visions for how to incorporate it. Taken together, however, these services may test just how useful AI chatbots can really be in our everyday life and how much people want to interact with them for customer service and other uses across their favorite apps.

    Adding ChatGPT features also may come with some risks. The tool, which is trained on vast troves of data online, can spread inaccurate information and has the potential to respond to users in ways they might find inappropriate.

    In a blog post on Wednesday, Snap acknowledged “My AI is far from perfect but we’ve made a lot of progress.”

    It said, for example, that about 99.5% of My AI responses conform to its community guidelines. Snap said it has made changes to “help protect against responses that could be inappropriate or harmful.” The company also said it has added moderation technology and included the new feature in its in-app parental tools.

    “We will continue to use these early learnings to make AI a more safe, fun, and useful experience, and we’re eager to hear your thoughts,” the company said.


  • Snapchat’s new AI chatbot is already raising alarms among teens and parents | CNN Business

    CNN —

    Just hours after Snapchat rolled out its My AI chatbot to all users last week, Lyndsi Lee, a mother from East Prairie, Missouri, told her 13-year-old daughter to stay away from the feature.

    “It’s a temporary solution until I know more about it and can set some healthy boundaries and guidelines,” said Lee, who works at a software company. She worries about how My AI presents itself to young users like her daughter on Snapchat.

    The feature is powered by the viral AI chatbot tool ChatGPT – and like ChatGPT, it can offer recommendations, answer questions and converse with users. But Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it, and bring it into conversations with friends.

    The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear you’re talking to a computer.

    “I don’t think I’m prepared to know how to teach my kid how to emotionally separate humans and machines when they essentially look the same from her point of view,” Lee said. “I just think there is a really clear line [Snapchat] is crossing.”

    The new tool is facing backlash not only from parents but also from some Snapchat users who are bombarding the app with bad reviews in the app store and criticisms on social media over privacy concerns, “creepy” exchanges and an inability to remove the feature from their chat feed unless they pay for a premium subscription.

    While some may find value in the tool, the mixed reactions hint at the risks companies face in rolling out new generative AI technology to their products, and particularly in products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow. Almost overnight, Snapchat has forced some families and lawmakers to reckon with questions that may have seemed theoretical only months ago.

    In a letter to the CEOs of Snap and other tech companies last month, weeks after My AI was released to Snap’s subscription customers, Democratic Sen. Michael Bennet raised concerns about the interactions the chatbot was having with younger users. In particular, he cited reports that it can provide kids with suggestions for how to lie to their parents.

    “These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 percent of American teenagers use,” Bennet wrote. “Although Snap concedes My AI is ‘experimental,’ it has nevertheless rushed to enroll American kids and adolescents in its social experiment.”

    In a blog post last week, the company said: “My AI is far from perfect but we’ve made a lot of progress.”

    In the days since its formal launch, Snapchat users have been vocal about their concerns. One user called his interaction “terrifying” after he said the chatbot lied about not knowing where he was located; when he later lightened the conversation, he said the chatbot accurately revealed that he lived in Colorado.

    In another TikTok video with more than 1.5 million views, a user named Ariel recorded a song with an intro, chorus and piano chords written by My AI about what it’s like to be a chatbot. When she sent the recorded song back, she said the chatbot denied its involvement with the reply: “I’m sorry, but as an AI language model, I don’t write songs.” Ariel called the exchange “creepy.”

    Other users shared concerns about how the tool understands, interacts with and collects information from photos. “I snapped a picture … and it said ‘nice shoes’ and asked who the people [were] in the photo,” a Snapchat user wrote on Facebook.

    Snapchat told CNN it continues to improve My AI based on community feedback and is working to establish more guardrails to keep its users safe. The company also said that similar to its other tools, users don’t have to interact with My AI if they don’t want to.

    It’s not possible to remove My AI from chat feeds, however, unless a user subscribes to its monthly premium service, Snapchat+. Some teens say they have opted to pay the $3.99 Snapchat+ fee to turn off the tool before promptly canceling the service.

    But not all users dislike the feature.

    One user wrote on Facebook that she’s been asking My AI for homework help. “It gets all of the questions right.” Another noted she’s leaned on it for comfort and advice. “I love my little pocket, bestie!” she wrote. “You can change the Bitmoji [avatar] for it and surprisingly it offers really great advice to some real life situations. … I love the support it gives.”

    ChatGPT, which is trained on vast troves of data online, has previously come under fire for spreading inaccurate information, responding to users in ways they might find inappropriate and enabling students to cheat. But Snapchat’s integration of the tool risks heightening some of these issues, and adding new ones.

    Alexandra Hamlet, a clinical psychologist in New York City, said the parents of some of her patients have expressed concern about how their teenagers could interact with Snapchat’s tool. There is also concern about chatbots dispensing advice on mental health, because AI tools can reinforce someone’s confirmation bias, making it easier for users to seek out interactions that confirm their unhelpful beliefs.

    “If a teen is in a negative mood and does not have the awareness or desire to feel better, they may seek out a conversation with a chatbot that they know will make them feel worse,” she said. “Over time, having interactions like these can erode a teen’s sense of worth, despite their knowing that they are really talking to a bot. In an emotional state of mind, it becomes less possible for an individual to consider this type of logic.”

    For now, the onus is on parents to start meaningful conversations with their teens about best practices for communicating with AI, especially as the tools start to show up in more popular apps and services.

    Sinead Bovell, the founder of WAYE, a startup that helps prepare youth for a future with advanced technologies, said parents need to make it very clear that “chatbots are not your friend.”

    “They’re also not your therapists or a trusted adviser, and anyone interacting with them needs to be very cautious, especially teenagers who may be more susceptible to believing what they say,” she said.

    “Parents should be talking to their kids now about how they shouldn’t share anything personal with a chatbot that they would with a friend – even though from a user design perspective, the chatbot exists in the same corner of Snapchat.”

    She added that federal regulation requiring companies to abide by specific protocols is also needed to keep up with the rapid pace of AI advancement.


  • UK citizen extradited to US pleads guilty to 2020 Twitter hack | CNN Business

    Reuters —

    A citizen of the United Kingdom who was extradited to New York from Spain last month has pleaded guilty to cyberstalking and computer hacking schemes, including the 2020 hack of the social media site Twitter, the U.S. Justice Department said on Tuesday.

    Joseph James O’Connor, 23, was charged in both North Dakota and New York. The North Dakota case was transferred to the U.S. District Court for the Southern District of New York.

    O’Connor pleaded guilty to charges including conspiring to commit computer intrusions, to commit wire fraud and to commit money laundering.

    O’Connor, who was extradited to the U.S. on April 26, will also forfeit more than $794,000 and pay restitution to victims, prosecutors said. He faces a maximum of 77 years in prison at sentencing on June 23.

    “O’Connor’s criminal activities were flagrant and malicious, and his conduct impacted multiple people’s lives. He harassed, threatened, and extorted his victims, causing substantial emotional harm,” Assistant Attorney General Kenneth Polite said in a statement.

    Prosecutors said the schemes included gaining unauthorized access to social media accounts on Twitter in July 2020 as well as a TikTok account in August 2020. Along with his co-conspirators, O’Connor stole at least $794,000 worth of cryptocurrency.

    The July 2020 Twitter attack hijacked a variety of verified accounts, including those of then-Democratic presidential candidate Joe Biden and Tesla CEO Elon Musk, who now owns Twitter.

    The accounts of former President Barack Obama, reality TV star Kim Kardashian, Bill Gates, Warren Buffett, Benjamin Netanyahu, Jeff Bezos, Michael Bloomberg and Kanye West were also hit.

    The alleged hacker used the accounts to solicit digital currency, prompting Twitter to prevent some verified accounts from publishing messages for several hours until security could be restored.


  • Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman set to testify before Congress | CNN Business

    CNN —

    OpenAI CEO Sam Altman is set to testify before a Senate Judiciary subcommittee on Tuesday after the viral success of ChatGPT, his company’s chatbot tool, renewed an arms race over artificial intelligence and sparked concerns from some lawmakers about the risks posed by the technology.

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”

    A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    Montgomery is expected to urge Congress to adopt a “precision regulation” approach for AI based on specific use cases, and to suggest that lawmakers push companies to test how their systems handle bias and other concerns – and disclose those results.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts.

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    – CNN’s Jennifer Korn contributed to this report.


  • Biden picks Air Force general to lead NSA and Cyber Command | CNN Politics

    CNN —

    President Joe Biden has nominated an Air Force general to head the nation’s powerful electronic spying agency and the US military command that conducts offensive cyber operations – a crucial position as the US continues to battle Russia, China and other foes in cyberspace.

    Lt. Gen. Timothy Haugh, who has served for years in senior US military cyber positions, is Biden’s choice to replace outgoing Army Gen. Paul Nakasone as head of the National Security Agency and US Cyber Command, an Air Force official confirmed to CNN.

    Politico first reported on Haugh’s nomination.

    The White House did not respond to a request for comment.

    Haugh’s nomination could face a roadblock in the Senate after Republican Sen. Tommy Tuberville of Alabama put a hold on senior military nominations because he objects to the Defense Department’s abortion travel policy.

    Haugh is currently the deputy commander of US Cyber Command, a command of thousands of US military personnel who conduct offensive and defensive cyber operations to protect US critical infrastructure. Officials from the command traveled to Ukraine in late 2021 to prepare Kyiv for the onslaught of Russian cyberattacks that accompanied the full-scale Russian invasion.

    The command and NSA also have taken an increasingly active role in helping defend American elections from foreign interference under Nakasone’s leadership over the last five years.

    During the 2020 election, Iranian hackers accessed a US municipal website for reporting unofficial election results and Cyber Command kicked the hackers off the network out of concern that they might post fake results on the website, a senior US military official revealed last month.

    Haugh’s nomination signals a continued emphasis on election security work at Fort Meade, the sprawling military base in Maryland where the NSA and Cyber Command are housed. As a senior US military cyber official, Haugh has been involved in election security discussions in recent midterm and general elections.


  • Russian-speaking cyber gang claims credit for hack of BBC and British Airways employee data | CNN Business




    CNN
     — 

    A group of Russian-speaking cyber criminals has claimed credit for a sweeping hack that has compromised employee data at the BBC and British Airways and left US and UK cybersecurity officials scrambling to respond.

    The hackers, known as the CLOP ransomware gang, say they have “information on hundreds of companies.” They’ve given victims until June 14 to discuss a ransom before they start publishing data from companies they claim to have hacked, according to a dark web posting seen by CNN.

    The extortion threat adds urgency to an already high-stakes security incident that has forced responses from tech firms, corporations and government agencies from the US to Canada and the UK.

    The compromise of employee data at the BBC and British Airways came via a breach of a human resources firm, Zellis, that both organizations use.

    “We are aware of a data breach at our third-party supplier, Zellis, and are working closely with them as they urgently investigate the extent of the breach,” a BBC spokesperson told CNN Wednesday. The spokesperson declined to comment on the hackers’ extortion threat.

    A British Airways spokesperson said the company had “notified those colleagues whose personal information has been compromised to provide support and advice.”

    The hackers — a well-known group whose favored malware emerged in 2019 — last week began exploiting a new flaw in a widely used file-transfer software known as MOVEit, appearing to target as many exposed organizations as they could. The opportunistic nature of the hack left a broad swath of organizations vulnerable to extortion.

    Numerous US state government agencies use the MOVEit software, but it’s unclear how many agencies, if any, have been compromised.

    The US Cybersecurity and Infrastructure Security Agency has ordered all federal civilian agencies to update the MOVEit software in light of the hack. No federal agencies have been confirmed as victims, a CISA spokesperson told CNN.

    Together with the Federal Bureau of Investigation, CISA also released advice on dealing with the CLOP hack. Progress Software, the US firm that owns the MOVEit software, has also urged victims to update their software packages and has issued security advice.

    CISA Executive Director for Cybersecurity Eric Goldstein said in a statement: “CISA remains in close contact with Progress Software and our partners at the FBI to understand prevalence within federal agencies and critical infrastructure.”

    But the effort to respond to the cyber attack is very much ongoing.

    The CLOP hackers are “overwhelmed with the number of victims,” according to Charles Carmakal, chief technology officer at Mandiant Consulting, a Google-owned firm that has investigated the hack. “Instead of directly reaching out to victims over email or telephone calls like in prior campaigns, they are asking victims to reach out to them via email,” he said on LinkedIn Tuesday night.

    Allan Liska, a ransomware expert at cybersecurity firm Recorded Future, also told CNN: “Unfortunately, the sensitive nature of the data often stored on MOVEit servers means there will likely be real consequences stemming from the [data theft] but it will be months before we understand the full fallout from this attack.”


  • The largest newspaper publisher in the US sues Google, alleging online ad monopoly | CNN Business




    CNN
     — 

    Gannett, the largest newspaper publisher in the United States, is suing Google, alleging the tech giant holds a monopoly over the digital ad market.

    The publisher of USA Today and more than 200 local publications filed the lawsuit in a New York federal court on Tuesday and is seeking unspecified damages. Gannett argues in court documents that Google and its parent company, Alphabet, control how publishers buy and sell ads online.

    “The result is dramatically less revenue for publishers and Google’s ad-tech rivals, while Google enjoys exorbitant monopoly profits,” the lawsuit states.

    Google controls about a quarter of the US digital advertising market, with Meta, Amazon and TikTok combining for another third, according to eMarketer. News publishers and other websites combine for the other roughly 40%. Big Tech’s share of the market is beginning to erode slightly, but Google remains by far the largest individual player.

    That means publishers often rely at least in part on Google’s advertising technology to support their operations: Gannett says Google controls 90% of the ad market for publishers.

    Michael Reed, Gannett’s chairman and CEO, said in a statement Tuesday that Google’s dominance in the online advertising industry has come “at the expense of publishers, readers and everyone else.”

    “Digital advertising is the lifeblood of the online economy,” Reed added. “Without free and fair competition for digital ad space, publishers cannot invest in their newsrooms.”

    Dan Taylor, Google’s vice president of global ads, told CNN that the claims in the suit “are simply wrong.”

    “Publishers have many options to choose from when it comes to using advertising technology to monetize – in fact, Gannett uses dozens of competing ad services, including Google Ad Manager,” Taylor said in a statement Tuesday. “And when publishers choose to use Google tools, they keep the vast majority of revenue.”

    He continued: “We’ll show the court how our advertising products benefit publishers and help them fund their content online.”

    The legal action from Gannett comes as Google faces a growing number of antitrust complaints in the United States and the European Union over its advertising business, which remains its central moneymaker.

    EU officials said last week that Google’s advertising business should be broken up, alleging that the tech giant’s involvement in multiple parts of the digital advertising supply chain creates “inherent conflicts of interest” that risk harming competition.

    Earlier this year, the Justice Department and eight states sued Google, accusing the company of harming competition with its dominance in the online advertising market and similarly calling for it to be broken up.


  • Dylan Mulvaney says Bud Light’s backlash response was ‘worse than not hiring a trans person at all’ | CNN Business



    New York
    CNN
     — 

    Dylan Mulvaney on Thursday broke her silence about the fallout that occurred after the trans influencer made two Instagram posts sponsored by Bud Light earlier this year.

    Bud Light’s sponsorship of an April 1 Instagram post by Mulvaney set off a firestorm of anti-trans backlash and calls for a boycott. Mulvaney herself also faced a wave of hate and violent threats.

    Now, in a video posted to Instagram Thursday, Mulvaney is calling on Bud Light and other companies not only to work with trans and other queer influencers, but to support them through the process, even as trans rights are under fire across the country and corporations face anti-LGBTQ+ campaigns.

    Mulvaney said she has “been scared to leave my house, and I have been ridiculed in public, I have been followed,” and she criticized Bud Light for not standing by her and the partnership. She said the company never reached out to her in the wake of the backlash.

    “For a company to hire a trans person and then not publicly stand by them is worse in my opinion than not hiring a trans person at all because it gives customers permission to be as transphobic and hateful as they want,” Mulvaney said. “And the hate doesn’t end with me, it has serious and grave consequences for the rest of our community.”

    When the backlash ignited in April, Bud Light first responded with a straightforward explanation of its relationship with social media influencers like Mulvaney. But later it released a vague statement from the CEO that failed to offer support for Mulvaney or the trans community. Bud Light sales dropped in the ensuing weeks, the company lost its top rating from a major LGBTQ+ nonprofit and it placed two marketing executives on leave.

    The controversy over the sponsored posts came as trans rights are under attack. Over 400 anti-LGBTQ+ bills were introduced in state legislatures this year through April 3, according to the American Civil Liberties Union, including ones restricting access to gender-affirming care for trans youth. Transgender people are more than four times as likely as cisgender people to be victims of violent crime, according to a study from the UCLA School of Law.

    The Bud Light backlash also coincided with anti-LGBTQ+ campaigns against other big brands, including Target.

    Mulvaney’s statement followed a Wednesday appearance by Brendan Whitworth, CEO of Bud Light owner Anheuser-Busch, on CBS Mornings, in which he repeated the company’s recent statements about wanting to “focus on what we do best, which is brewing great beer for everyone,” and did not directly answer a question about whether the campaign was a mistake.

    “I think the conversation surrounding Bud Light has moved away from beer, and the conversation has become divisive, and Bud Light really does not belong there, Bud Light should be about bringing people together,” Whitworth said.

    In her video, Mulvaney appeared to address that sentiment, saying, “supporting trans people, it shouldn’t be political.”

    “There should be nothing controversial or divisive about working with us, and I know it’s possible because I’ve worked with some fantastic companies who care,” Mulvaney said. “But caring about the LGBTQ+ community requires a lot more than just a donation somewhere during Pride month.”

    She added: “We’re customers, too, I know a lot of trans and queer people who love beer.”

    In a statement responding to Mulvaney’s video, an Anheuser-Busch spokesperson told CNN on Thursday that, “we remain committed to the programs and partnerships we have forged over decades with organizations across a number of communities, including those in the LGBTQ+ community. The privacy and safety of our employees and our partners is always our top priority. As we move forward, we will focus on what we do best — brewing great beer for everyone and earning our place in moments that matter to our consumers.”

    –CNN’s Danielle Wiener-Bronner contributed to this report.


  • Meta officially launches Twitter rival Threads | CNN Business




    CNN
     — 

    Facebook has tried to compete with Twitter in numerous ways over the years, including copying signature Twitter features such as hashtags and trending topics. But now Facebook’s parent company is taking perhaps its biggest swipe at Twitter yet.

    Meta on Wednesday officially launched a new app called Threads, which is intended to offer a space for real-time conversations online, a function that has long been Twitter’s core selling point.

    The app appears to have many similarities to Twitter, from the layout to the product description. The listing, which first appeared earlier this week as a teaser, emphasizes its potential to build a following and connect with like-minded people.

    “The vision for Threads is to create an open and friendly public space for conversation,” Meta CEO Mark Zuckerberg said in a Threads post following the launch. “We hope to take what Instagram does best and create a new experience around text, ideas, and discussing what’s on your mind.”

    Zuckerberg said on his verified Threads account that the app passed 2 million sign-ups in the first two hours. Later on Wednesday, he wrote that Threads “passed 5 million sign ups in the first four hours.”

    He also responded to posts and shared his thoughts on whether Threads will ever be bigger than Twitter.

    “It’ll take some time, but I think there should be a public conversations app with 1 billion+ people on it. Twitter has had the opportunity to do this but hasn’t nailed it,” Zuckerberg wrote on Threads. “Hopefully we will.”

    The app’s listing describes it as a place where communities can come together to discuss everything from the topics they care about today to what’s trending.

    “Whatever it is you’re interested in, you can follow and connect directly with your favorite creators and others who love the same things — or build a loyal following of your own to share your ideas, opinions and creativity with the world,” it reads.

    Meta said messages posted to Threads will have a 500-character limit. The company said it was bringing the app to 100 countries via Apple’s iOS and Android.

    After downloading the app, users are asked to link their Instagram page, customize their profile and follow the same accounts they already follow on Instagram. The look is similar to Twitter, with a familiar layout, a text-based feed and the ability to repost and quote other Threads posts. But it also blends Instagram’s existing aesthetic and offers the ability to share posts from Threads directly to Instagram Stories. Verified Instagram accounts are automatically verified on Threads, and accounts can be set to public or private.

    The new app joins a growing list of Twitter rivals and could pose the biggest threat to Twitter of the bunch, given Meta’s vast resources and its massive audience.

    It also comes amid heightened turmoil at Twitter, which experienced an outage over the weekend, followed by an announcement that the site had imposed temporary limits on how many tweets its users are able to read while using the app.

    Photo illustration: the Threads app from Meta displayed on a mobile phone. Threads, launched July 6, 2023, is a direct rival to Twitter, which has faced a number of issues since its controversial takeover by Elon Musk.

    Twitter owner Elon Musk said these restrictions had been applied “to address extreme levels of data scraping and system manipulation.” Commenting on the launch of Threads Monday, he tweeted: “Thank goodness they’re so sanely run,” parroting reported comments by Meta executives that appeared to take a jab at Musk’s erratic behavior.

    Since acquiring Twitter in October, Musk has turned the social media platform on its head, alienating advertisers and some of its highest-profile users. He is now looking for ways to return the platform to growth. Twitter announced Monday that users would soon need to pay for TweetDeck, a tool that allows people to organize and easily monitor the accounts they follow.

    Twitter is also attempting to encroach on Meta’s domain. In May, Twitter added encrypted messaging and said calls would follow, developments that could allow the platform to compete with Facebook Messenger and WhatsApp, also owned by Meta.

    The escalating rivalry between the two companies only appears to have intensified the personal feud between Musk and Meta CEO Mark Zuckerberg.

    In response to a tweet last month from a user about Threads, Musk wrote: “I’m sure Earth can’t wait to be exclusively under Zuck’s thumb with no other options.” In a followup tweet, Musk teased the idea of a cage match with Zuckerberg.

    Zuckerberg fired back in an Instagram story by posting a screenshot of Musk’s tweet overlaid with the caption: “Send Me Location.”

    And after the Threads app debuted, Zuckerberg tweeted an image of two cartoon Spider-Men pointing at each other.

    – CNN’s Hanna Ziady contributed to this report.


  • Thousands of authors demand payment from AI companies for use of copyrighted works | CNN Business



    Washington
    CNN
     — 

    Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual property critique to target AI development.

    The list of more than 8,000 authors includes some of the world’s most celebrated writers, including Margaret Atwood, Dan Brown, Michael Chabon, Jonathan Franzen, James Patterson, Jodi Picoult and Philip Pullman, among others.

    In an open letter they signed, posted by the Authors Guild Tuesday, the writers accused AI companies of unfairly profiting from their work.

    “Millions of copyrighted books, articles, essays, and poetry provide the ‘food’ for AI systems, endless meals for which there has been no bill,” the letter said. “You’re spending billions of dollars to develop AI technology. It is only fair that you compensate us for using our writings, without which AI would be banal and extremely limited.”

    Tuesday’s letter was addressed to the CEOs of ChatGPT-maker OpenAI, Facebook-parent Meta, Google, Stability AI, IBM and Microsoft. Most of the companies didn’t immediately respond to a request for comment. Meta, Microsoft and Stability AI declined to comment.

    Much of the tech industry is now working to develop AI tools that can generate compelling images and written work in response to user prompts. These tools are built on large language models, which are trained on vast troves of information online. But recently, there has been growing pressure on tech companies over alleged intellectual property violations with this training process.

    This month, comedian Sarah Silverman and two authors filed a copyright lawsuit against OpenAI and Meta, while a proposed class-action suit accused Google of “stealing everything ever created and shared on the internet by hundreds of millions of Americans,” including copyrighted content. Google has called the lawsuit “baseless,” saying it has been upfront for years that it uses public data to train its algorithms. OpenAI did not previously respond to a request for comment on the suit.

    In addition to demanding compensation “for the past and ongoing use of our works in your generative AI programs,” the thousands of authors who signed the letter this week called on AI companies to seek permission before using the copyrighted material. They also urged the companies to pay writers when their work is featured in the results of generative AI, “whether or not the outputs are infringing under current law.”

    The letter also cites this year’s Supreme Court ruling in Warhol v. Goldsmith, which found that the late artist Andy Warhol infringed on a photographer’s copyright when he created a series of silk screens based on a photograph of the late singer Prince. The court ruled that Warhol did not sufficiently “transform” the underlying photograph so as to avoid copyright infringement.

    “The high commerciality of your use argues against fair use,” the authors wrote to the AI companies.

    In May, OpenAI CEO Sam Altman appeared to acknowledge that more needs to be done to address concerns from creators about how AI systems use their works.

    “We’re trying to work on new models where if an AI system is using your content, or if it’s using your style, you get paid for that,” he said at an event.

    – CNN’s Catherine Thorbecke contributed to this report.


  • OpenAI’s Sam Altman launches Worldcoin crypto project | CNN Business


    Worldcoin, a cryptocurrency project founded by OpenAI CEO Sam Altman, launched on Monday.

    The project’s core offering is its World ID, which the company describes as a “digital passport” to prove that its holder is a real human, not an AI bot. To get a World ID, a customer signs up to do an in-person iris scan using Worldcoin’s ‘orb’, a silver ball approximately the size of a bowling ball. Once the orb’s iris scan verifies the person is a real human, it creates a World ID.

    The company behind Worldcoin is San Francisco and Berlin-based Tools for Humanity.

    The project has 2 million users from its beta period, and with Monday’s launch, Worldcoin is scaling up “orbing” operations to 35 cities in 20 countries. As an enticement, those who sign up in certain countries will receive Worldcoin’s cryptocurrency token WLD.

    WLD’s price rose in early trading on Monday. On Binance, the world’s largest crypto exchange, it hit a peak of $5.29 and was at $2.49 as of 1000 GMT, up from a starting price of $0.15, with $25.1 million in trading volume, according to Binance’s website.

    Blockchains can store the World IDs in a way that preserves privacy and can’t be controlled or shut down by any single entity, co-founder Alex Blania told Reuters.

    The project says World IDs will be necessary in the age of generative AI chatbots like ChatGPT, which produce remarkably humanlike language. World IDs could be used to tell the difference between real people and AI bots online.

    Altman told Reuters Worldcoin also can help address how the economy will be reshaped by generative AI.

    “People will be supercharged by AI, which will have massive economic implications,” he said.

    One example Altman likes is universal basic income, or UBI, a social benefits program usually run by governments where every individual is entitled to payments. Because AI “will do more and more of the work that people now do,” Altman believes UBI can help to combat income inequality. Since only real people can have World IDs, it could be used to reduce fraud when deploying UBI.

    Altman said he thought a world with UBI would be “very far in the future” and he did not have a clear idea of what entity could dole out money, but that Worldcoin lays groundwork for it to become a reality.

    “We think that we need to start experimenting with things so we can figure out what to do,” he said.


  • ‘X’ removed after being installed atop company headquarters following Twitter’s rebrand | CNN Business




    CNN
     — 

    Officials from the San Francisco Department of Building Inspection on Monday morning observed that the new “X” on top of the building formerly known as Twitter’s headquarters was being dismantled, according to Patrick Hannan, the department’s spokesman.

    The news comes after the company was issued a notice of violation (NOV) Friday for work without a permit for the new sign, which flashes at night, that adorns the building.

    “Over the weekend, the Department of Building Inspection and City Planning received 24 complaints about the unpermitted structure, including concerns about its structural safety and illumination. This morning, building inspectors observed the structure being dismantled. A building permit is required to remove the structure but, due to safety concerns, the permit can be secured after the structure is taken down,” Hannan said in an email to CNN.

    “The property owner will be assessed fees for the unpermitted installation of the illuminated structure. The fees will be for building permits for the installation and removal of the structure, and to cover the cost of the Department of Building Inspection and the Planning Department’s investigation,” he added.

    CNN has reached out to the company formerly known as Twitter for comment.

    – CNN’s Ramishah Maruf contributed to this report.


  • Apple launches buy now, pay later service | CNN Business



    New York
    CNN
     — 

    Apple on Tuesday launched an option in its digital wallet allowing customers to pay for online purchases in installments, making it the latest company to embrace the buy now, pay later trend.

    The new feature, called Apple Pay Later, lets customers split payments for purchases into four installments over six weeks, with the first installment due at the time of purchase. Apple users can also apply for a loan within the Wallet app, ranging from $50 to $1,000, with no interest or fees, to make online or in-app purchases.
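    Apple hasn’t published its exact repayment math, but four equal installments spread over six weeks, with the first due at purchase, implies payments due every two weeks. A minimal sketch of that schedule (hypothetical helper, not Apple’s API):

    ```python
    from datetime import date, timedelta

    def installment_schedule(total_cents: int, purchase_date: date,
                             n: int = 4, interval_days: int = 14):
        """Split a purchase into n equal installments: the first is due at
        purchase, the rest every `interval_days` days (4 payments over 6 weeks)."""
        base, remainder = divmod(total_cents, n)
        schedule = []
        for i in range(n):
            # any leftover cents go on the first payment
            amount = base + (remainder if i == 0 else 0)
            schedule.append((purchase_date + timedelta(days=i * interval_days), amount))
        return schedule

    # A $200 purchase becomes four $50.00 payments, due biweekly
    for due, cents in installment_schedule(20_000, date(2023, 6, 6)):
        print(due, f"${cents / 100:.2f}")
    ```

    The last installment falls 42 days (six weeks) after the purchase date, matching the window Apple describes.
    
    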

    The payment option is rolling out to select users in the United States now, with plans to offer it to all eligible customers over the next several months, according to a company release. Apple first teased the feature last year.

    Apple’s move comes as a growing number of consumers have turned to buy now, pay later services to stretch their budgets at a time of high inflation and broader economic uncertainty. Other popular services that offer the same payment option include Affirm, Klarna and Afterpay.

    But some economists and consumer advocates have raised concerns that these services could cause shoppers to take on more debt.

    The installment process makes it seem like someone is paying practically nothing for the goods or service they’re acquiring, Terri R. Bradford, a research specialist in payment systems for the Kansas City Federal Reserve, previously told CNN. “So the possibility is that you could, in your mind, think of everything that you’re buying in those four installments and, as a result, take on more debt than you would if you had to pay for them in full each and every time.”

    But Apple says the new feature is “designed with users’ financial health in mind.”

    “There’s no one-size-fits-all approach when it comes to how people manage their finances,” said Jennifer Bailey, Apple’s vice president of Apple Pay and Apple Wallet, in Tuesday’s release. “Many people are looking for flexible payment options, which is why we’re excited to provide our users with Apple Pay Later.”

    Apple users will be able to track and manage upcoming loan payments in the Wallet app. Any loan application can also be done in the app with no impact on credit, according to the company.

    Apple’s Pay Later option is enabled through the Mastercard Installments program.


  • The city without TikTok offers a window to America’s potential future | CNN Business



    Hong Kong
    CNN
     — 

    Across the United States, more than 150 million people face the possibility of a new reality: life without TikTok.

    The wildly popular short-form video app has been at the center of an ongoing battle, with lawmakers calling for an outright ban, and the company portraying itself as a critical community space, educational platform and just plain fun.

    In Hong Kong, there’s no need to imagine that reality: TikTok discontinued its services there in 2020.

    Its abrupt departure was met with mixed reactions: disappointment from some users and content creators, but also relief from others who say life is better without the app’s infinite scroll.

    At the time of its exit, TikTok had a relatively modest presence in the city and was not ubiquitous like it is in the US today.

    But the varied reactions to its departure, and the way users have pivoted to other platforms or even real-life offline communities, offer Americans a glimpse into their potential TikTok-less future.

    TikTok announced its exit from Hong Kong in July 2020, a week after China imposed a controversial national security law in the city. The decision came as the app tried to distance itself from China and its Beijing-based parent company ByteDance, in the face of growing pressure in the US under the Trump administration.

    But it meant a jarring halt for creators like Shivani Dukhande, who had roughly 45,000 followers at the time the app left Hong Kong.

    Dukhande, 25, saw her account take off in early 2020 during the pandemic, with lifestyle content such as cooking and wellness videos flourishing on the platform.

    “There were a lot of new creators emerging,” she said. “We used to all collaborate together, we had a chat where we would all speak and share ideas and it created a community.”

    Momentum began to build. Companies started reaching out to Dukhande, paying for sponsored content and collaborating on ad campaigns. Brands began partnering with creators on trending “challenges” in a bid to attract young new consumers.

    “More people were joining and it was becoming such a fun thing to do,” she said. “Then, it just kind of went away one morning.”

    “If it continued, then I probably could have made enough to have quit my 9 to 5,” she said. “If I had the chance to grow, it could have been a potential career path.”

    This is one of the main arguments TikTok has made in recent weeks in the US. In March, as the company’s CEO prepared to testify before Congress, TikTok produced a docuseries highlighting American small business owners who rely on the platform for their livelihoods.

    The platform is used by nearly five million businesses in the US, TikTok said in March. And it’s set to surpass rivals: London-based research firm Omdia projected in November that TikTok’s advertising revenues will exceed the combined video ad revenues of Meta – home of Facebook and Instagram – and YouTube by 2027.

    This is partly because people are spending more time on TikTok. In the second quarter of 2022, TikTok users globally spent an average of 95 minutes per day on the app, according to data analytics firm Sensor Tower – nearly twice as much time as users spent on Facebook and Instagram.

    Photo: Shivani Dukhande created videos about wellness, lifestyle, food and Hong Kong on her TikTok account.

    But in Hong Kong, other platforms have jumped in to fill the gap. Reels, Instagram’s short-form video product, which offers TikTok-like features such as an endless scroll, is growing quickly – and Dukhande has gotten on board.

    She had to rebuild her audience from scratch, and now has 12,500 Instagram followers, but she feels optimistic about its growth. Still, the loss of TikTok was a “missed opportunity,” she said, and the burgeoning community of creators has largely faded from sight.

    “The amount of jobs, the amount of content creation, the amount of marketing opportunities that were there with TikTok – we sort of missed out on that whole chunk of it.”

    But for some people, TikTok’s departure was a welcome change.

    Poppy Anderson, 16, has been using TikTok since its launch in 2018. And, like many others in her generation, she would spend hours “scrolling and scrolling” – even when feeling unfulfilled.

    “It was very easy to kind of find exactly what you like on there, because the [algorithm-run] For You page kept you there,” she said. “And it’s entertaining, but you don’t really get anything from it.”

    She described TikTok as often being a toxic environment that breeds narrow thinking, herd mentality, a misguided “cancel culture” and inappropriate online behavior such as critiquing the bodies of girls and women. Even people she knew in real life began acting differently after joining the app, which strained friendships, she said.

    Martin Poon, 15, also grew weary of TikTok, but it was hard to quit.

    “Everyone was using it, so I feel like there was a sense that you have to use it, you have to be on top of things, you have to know what’s going on. And I think that was stressful to me,” he said.

    Misinformation and misogyny ran rampant on TikTok, with accounts like those of Andrew Tate, the self-styled “alpha male” recently detained in Romania on allegations of human trafficking and rape, gaining popularity among boys at Poon’s school.

    “It’s just concerning how [these accounts] have so much impact on the youth, and it has so much grip on what we think and how it affects our behavior,” said Poon – though he added that misinformation is a major problem on all social media platforms, not just TikTok.

    Experts have long worried about the impact of TikTok on young people’s mental health, with one study claiming the app may surface potentially harmful content related to suicide and eating disorders to teenagers within minutes of them creating an account.

    In response to growing pressure, TikTok recently announced a one-hour daily screen time limit for users under 18, though users will be able to turn off this default setting.

    Anderson acknowledged some positives about TikTok, like open conversations about mental health. Still, she was glad when the app became inaccessible. Falling asleep became easier without the lure of TikTok. “I didn’t have the self control to get off it on my own,” she said.

    For Poon and his friend Ava Chan, also 15, TikTok’s disappearance sparked new beginnings.

    When the app left in 2020, they were doing online classes, isolated from friends and bored at home. At the time, Instagram Reels and YouTube Shorts had yet to arrive in Hong Kong.

    “We had to figure out how to use our time other than being on TikTok,” said Chan. “For us, that was exploring our passions more.”

    For both, that meant advocating for the neurodiverse community. They launched a club at school that promotes education and awareness about neurodiversity and participates in volunteer activities with neurodiverse people.

    Both said it lent them a sense of purpose, and as time went on, they saw other benefits.

    Their friends, who would previously spend time filming and watching TikToks together, began having more face-to-face conversations. They noticed peers begin exercising outdoors more, which was made easier as Covid restrictions lifted. Their mental health improved.

    Of course, being teenagers, they’re not off social media entirely and use it as a tool to promote their club – but it’s far from the previous hours of scrolling. And while they occasionally wonder what’s happening on TikTok outside Hong Kong, the allure of it is lost when nobody else around them uses it either.

    “A lot of people, they’ve just kind of forgotten about it,” said Anderson. “People move to different platforms – or just move on.”


  • Arkansas governor signs sweeping bill imposing a minimum age limit for social media usage | CNN Business



    Washington
    CNN
     — 

    Arkansas Gov. Sarah Huckabee Sanders has signed a sweeping bill imposing a minimum age limit for social media usage, in the latest example of states taking more aggressive steps intended to protect teens online.

    But even as Sanders signed the bill into law on Wednesday afternoon, the legislation appeared to contain vast loopholes and exemptions benefiting companies that lobbied on the bill, raising questions about how much of the industry it truly covers.

    The legislation, known as the Social Media Safety Act, takes effect in September and is aimed at giving parents more control over their kids’ social media usage, according to lawmakers. It defines social media companies as any online forum that lets users create public profiles and interact with each other through digital content.

    It requires companies that operate those services to verify the ages of all new users and, if the users are under 18 years old, to obtain a parent’s consent before allowing them to create an account. To perform the age checks, the law relies on third-party companies to verify users’ personal information, such as a driver’s license or photo ID.

    “While social media can be a great tool and a wonderful resource, it can have a massive negative impact on our kids,” Sanders said at a press conference before signing the bill.

    Utah finalized a similar law last month, raising concerns among some users and advocacy groups that the legislation could make user data less secure and internet access less private, and infringe upon younger users’ basic rights.

    The push by states to legislate on social media comes after years of mounting scrutiny of the industry and claims that it has harmed users’ well-being and mental health, particularly among teens.

    Despite its seemingly universal scope, however, the new law, also known as SB396, includes numerous carveouts for certain types of digital services and, in some cases, individual companies. And although its sponsors have said the law is specifically meant to apply to certain platforms, including TikTok, parts of the legislative language appear to result in the exact opposite effect.

    In the final days of negotiation over the bill, Arkansas lawmakers approved an amendment that created several categorical exemptions from the age verification requirements. Media companies that “exclusively” offer subscription content; social media platforms that permit users to “generate short video clips of dancing, voice overs, or other acts of entertainment”; and companies that “exclusively offer” video gaming-focused social networking features were exempted.

    Another amendment carved out companies that sell cloud storage services, business cybersecurity services or educational technology and that simultaneously derive less than 25% of their total revenue from running a social media platform.

    Sen. Tyler Dees, a lead co-sponsor of the legislation, explained in remarks on the Arkansas senate floor on April 6 that the exemptions and tweaks to the bill, some of which he said were made in consultation with Apple, Meta and Google, were intended to shield non-social media services from the bill’s age requirements and to focus attention on new accounts created by children, not existing adult accounts.

    “There’s other services that Google offers … like cloud storage, et cetera,” Dees said. “So that’s really the intent of carving out — like LinkedIn, that is a social – I’m sorry, that is a business networking site, and so that’s the intent of those bills.”

    Microsoft-owned LinkedIn is apparently exempt from SB396 under a provision that carves out companies that provide “career development opportunities, including professional networking, job skills, learning certifications, and job posting and application services.”

    Other lawmakers have questioned whether the legislation — which has now become law — exempts a giant of the social media industry: YouTube, whose auto-play features and algorithmic recommendation engine have been accused of promoting extremism and radicalizing viewers.

    The confusion over YouTube appears to stem from the carveout for businesses that offer cloud storage and that make less than 25% of their revenue from social media.

    What is unclear is whether YouTube is subject to SB396 because it is a distinct company within Google whose revenue comes almost entirely from operating a social media platform, or whether it is not covered because YouTube is a part of Google and Google is exempt because it derives only a small share of its revenues from YouTube.
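    The carveout amounts to a two-part threshold test, and the ambiguity comes from which corporate entity the test is applied to. It can be sketched in a few lines of code (a purely illustrative sketch; the function name and revenue figures are hypothetical assumptions, not drawn from the statute or from company filings):

```python
def exempt_under_carveout(sells_cloud_storage: bool,
                          social_media_revenue: float,
                          total_revenue: float) -> bool:
    """Illustrative sketch of the SB396 carveout: a company is exempt if it
    sells cloud storage (or similar services) AND derives less than 25% of
    its total revenue from running a social media platform."""
    return sells_cloud_storage and social_media_revenue / total_revenue < 0.25

# Applied to a parent company whose video platform is a small slice of
# total revenue, the test passes and the parent is exempt (hypothetical figures):
print(exempt_under_carveout(True, 30.0, 280.0))  # True

# Applied to the video unit on its own, nearly all revenue comes from the
# platform, so the test fails and the unit would be covered by the law:
print(exempt_under_carveout(True, 30.0, 31.0))   # False
```

    Which of those two readings applies is exactly the question the law leaves open.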

    In response to questions by CNN, Dees said SB396 targets platforms including Facebook, Instagram and TikTok, but omitted any mention of Google and declined to answer whether YouTube specifically would be covered by the law.

    “The purpose of this bill was to empower parents and protect kids from social media platforms, like Facebook, Instagram, TikTok and Snapchat,” Dees said in a statement. “We worked with stakeholders to ensure that email, text messaging, video streaming, and networking websites were not covered by the bill.”

    In remarks at Wednesday’s bill signing, Sanders told reporters that Google and Amazon are exempted from the law, implying that YouTube will not be subject to the age verification requirements imposed on other major social media sites.

    Meanwhile, Dees’ statement appeared to contradict the language in SB396 that purports to exempt any company that “allows a user to generate short video clips of dancing, voice overs, or other acts of entertainment in which the primary purpose is not educational or informative” — content that can be commonly found on TikTok, Snapchat and the other social media platforms Dees named.

    According to a Meta spokesperson, “We want teens to be safe online. We’ve developed more than 30 tools to support teens and families, including tools that let parents and teens work together to limit the amount of time teens spend on Instagram, and age-verification technology that helps teens have age-appropriate experiences.”

    Meta “automatically set teens’ accounts to private when they join Instagram, we’ve further restricted the options advertisers have to reach teens, as well as the information we use to show ads to teens… and we don’t allow content that promotes suicide, self-harm or eating disorders,” according to the spokesperson, who added: “We’ll continue to work closely with experts, policymakers and parents on these important issues.”

    Spokespeople for Snapchat, TikTok and YouTube didn’t immediately respond to a request for comment.


  • A foldable phone, new tablet and lots of AI: What Google unveiled at its big developer event | CNN Business




    CNN
     — 

    Google on Wednesday unveiled its latest lineup of hardware products, including its first foldable phone and a new tablet, as well as plans to roll out new AI features to its search engine and productivity tools.

    The updates, announced at its annual Google I/O developer conference, come as the company is simultaneously trying to push beyond its core advertising business with new devices while also racing to defend its search engine from the threat posed by a wave of new AI-powered tools.

    In a sign of where Google’s focus currently lies, the company spent more than 90 minutes teasing a long list of new AI features before mentioning hardware updates.

    Here’s what Google announced at the event.

    Google became the latest tech company to unveil a foldable smartphone. Like other foldables, the $1,799 Pixel Fold features a vertical hinge that can be opened to reveal a tablet-like display. But Google calls the Fold the thinnest foldable on the market.

    “It took some clever engineering work redesigning components like our speakers, our battery and haptics,” said George Hwang, a product manager at Google, on a call ahead of the announcement. The company packed a Pixel phone into a body less than 6 mm thick – about two-thirds the thickness of its other Pixel phones.

    The Pixel Fold is very much a phone first: when unfolded, it opens into a 7.6-inch screen on Google’s custom-built 180-degree hinge. The hinge mechanism has been moved entirely out from under the display to improve dust resistance and reduce the device’s overall thickness, according to the company.

    The Pixel Fold includes features you’d find on other Pixels, such as long exposure, unblur and magic eraser, which lets users remove unwanted or distracting objects. It also has Fold-specific tools such as dual-screen live translate, which lets a user communicate in another language with the help of fast audio and text translations on the outer screen.

    Google said it optimized its top apps to take advantage of the larger screen but “there’s still work to be done” because “optimizing for a new foldable form factor takes time,” Hwang said. “It’s a process that we’re committed to and it requires steep investment with our developer partners across Android,” Hwang added.

    Google is far from the first to embrace foldables, but it’s possible it waited to launch its own version until the technology became more advanced. Early versions of the Samsung Galaxy Z Fold, for example, had issues with the screen and most apps were not well optimized for the design.

    But even now, the future for foldables remains uncertain. Most apps are still not optimized for foldable devices; prices remain very high; and Google’s chief rival, Apple, has yet to embrace the option.

    Despite great consumer interest in foldable phones — and a resurgence in ’90s-style flip phones among celebrities and TikTok influencers — the foldable market is relatively small, with Samsung dominating the category, followed by others including Motorola, Lenovo, Oppo and Huawei. According to ABI Research, foldable and flexible displays made up about 0.7% of the smartphone market in 2021 and were expected to fall just shy of 2% in 2022.

    The Pixel Fold will be available in the US, UK, Germany and Japan. The company said the device will start shipping next month.

    A look at Google's Pixel 7a lineup

    On the surface, the 7a looks similar to the Pixel 7 and 7 Pro, with the same Pixel camera bar along the back. It comes with the typical advancements you’d expect from any smartphone upgrade – a better display, an advanced camera and a longer-lasting battery. But the 7a now boasts a Tensor G2 processor and a Titan M2 security chip, which bring advanced processing and new artificial intelligence features. It also offers wireless charging for the first time on an A model.

    The Pixel lineup has long been known for its cameras, and the 7a is no exception. It’s packed with upgrades, including a 64-megapixel main camera – the largest sensor on a Pixel A series to date, which will help with improved image quality, low light performance and other features. It also offers a new 13-megapixel ultra-wide camera for capturing even wider shots and a new 13-megapixel front camera. For the first time, each camera enables 4K video.

    The 7a also supports many significant Pixel features, including unblur, magic eraser and an improved Night Sight that’s two times faster and sharper than its predecessor. It also allows users to capture long exposure and enhanced zoom.

    The Pixel 7a comes in several colors, including charcoal, snow, sea and coral, and starts at $499 via the Google Store on May 10.

    The Pixel A series has long been aimed at cost-conscious buyers who want good features at a reasonable price, but its reach is limited. Google sells between eight and 10 million Pixel devices each year, according to ABI Research.

    “Generally, the smartphones were really meant for Google to showcase how software, and now AI capabilities, could be effectively optimized on hardware and improve the Android user experience,” said David McQueen, an analyst at ABI Research. “Google has purposely kept volume sales limited as it also has to be mindful of its relationship with other smartphone manufacturers that use the Android OS.”

    The Google Pixel Tablet

    While phones were a key focus at the event, Google also refreshed other parts of its hardware lineup.

    Google introduced the Pixel Tablet, which is intended for use around the house, from turning off the lights to setting the thermostat without getting off the couch.

    The tablet, which has rounded edges and corners, comes in three colors: porcelain, hazel and rose, and starts at $499. It will be available on June 20.

    Under the hood, the 11-inch tablet is powered by Google’s Tensor G2 chips, which bring long-lasting battery life and AI features to the device. It also offers a front-facing camera, an 8-megapixel rear camera, and a charging dock.

    Google is also moving forward with plans to bring AI chat features to its core search engine amid a renewed arms race over the technology in Silicon Valley.

    The company said it is introducing the next evolution of Google Search, which will use an AI-powered chatbot to answer questions “you never thought Search could answer” and to help get users the information they want quicker than ever.

    With the update, the look and feel of Google Search results will be noticeably different. When users type a query into the main search bar, they will automatically see an AI-generated response in addition to the traditional results.

    Users can now sign up for the new Google Search, which will first launch in the United States, via the Google app or Chrome’s desktop browser. A limited number of users will have access to it in the weeks ahead, according to the company, before it scales upward.

    Google is expanding access to its existing chatbot Bard, which operates outside the search engine and can help users do tasks such as outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    The tool, which was previously available to early users via a waitlist only in the US, will soon be available for all users in 120 countries and 40 languages.

    Google is also launching extensions for Bard from its own services, such as Gmail, Sheets and Docs, allowing users to ask questions and collaborate with the chatbot within the apps they’re using.

    Google also announced PaLM 2, its latest large language model to rival ChatGPT-creator OpenAI’s GPT-4.

    The model marks a big step forward for the technology that powers the company’s AI products and, Google says, is better at logic, common sense reasoning and mathematics. It can also generate specialized code in different programming languages.
