ReportWire


  • Beware deepfake reality as Trump dominates headlines | CNN Politics


    A version of this story appeared in CNN’s What Matters newsletter. To get it in your inbox, sign up for free here.



(CNN) —

After incorrectly predicting his own arrest earlier this week, former President Donald Trump veered into the more sinister business of predicting violence and catastrophe if he’s arrested.

    Whether the prediction turns into reality is another thing entirely.

    Trump’s reemergence into the headlines, as both a third-time presidential candidate and a potential defendant, is threatening to pull the country back into his reality. Trump has not been formally charged with any crime and denies all wrongdoing.

Compare the lived reality, where people interact mostly in peace and go about their lives, with the Trump-centered fake world available on social media.

    In the real world, Trump hasn’t been charged with anything. On Twitter, fake photos of his arrest generated by artificial intelligence have been viewed millions of times.

    In the real world, prosecutors have to form a methodical criminal case before they indict a defendant. On social media, Trump says everything is part of a plot against him.

    Positing the idea of violent retribution into the echo chamber of his Truth Social platform early Friday, Trump said it is “known that potential death & destruction” that would be “catastrophic for our Country” would result if a charge is brought against him.

    In a post Thursday, Trump went into all caps – the typographical equivalent of screaming – to declare his innocence and add, “OUR COUNTRY IS BEING DESTROYED, AS THEY TELL US TO BE PEACEFUL.”

    The veiled threats place a new form of pressure on Manhattan District Attorney Alvin Bragg, who has already been threatened by Republicans in Congress with an investigation. Without naming Bragg in the Friday post, Trump said anyone who would charge him with a crime is “a degenerate psychopath that truely (sic) hates the USA!”

    CNN’s Brynn Gingras and Kara Scannell reported Friday that Bragg’s office received a package containing a white powder substance and a threatening note. They added that while authorities determined there was no dangerous substance, the package capped off a week where law enforcement has seen continual threats against the court, including several bomb threats, all of which turned out to be unfounded.

    Meanwhile, rather than condemn Trump’s latest post, top Republicans in Washington like House Speaker Kevin McCarthy refused to answer questions about it.

    The photos of Trump being arrested were created in jest by Eliot Higgins, founder of the investigative journalism group Bellingcat, who asked an AI art generator to make a photo of “Donald Trump falling down while being arrested,” according to The Washington Post.

    “I was just mucking about,” Higgins told the Post. “I thought maybe five people would retweet it.”

    Bellingcat, ironically, uses social media posts and other digital data to prove facts, uncovering crimes and investigating atrocities. CNN worked with Bellingcat, for instance, to uncover the Russian operatives who apparently tried to poison the now-jailed dissident leader Alexey Navalny. The group has also used social media to track down apparent war crimes in Ukraine.

    The fake photos, while requiring a double take, were clearly not real. But it is that first impression that can be misleading – and lasting. They fed Trump’s narrative of persecution, a visual manifestation of the drama he puts into his posts.

    There’s more and more of this online, and it’s getting harder and harder to tell fiction from reality.

    Earlier this month, CNN’s Donie O’Sullivan had an incredible video report on the power of AI-generated audio. In addition to magically mimicking Anderson Cooper, he used an AI generator to call his parents. The computer sounded like his voice, but it was not O’Sullivan talking. While his mother later said O’Sullivan’s Irish accent felt off during the conversation, she did not catch it in real time.

    “When we enter this world where anything can be fake – any image, any audio, any video, any piece of text, nothing has to be real – we have what’s called the liar’s dividend, which is anybody can deny reality,” Hany Farid, a professor at the University of California, Berkeley’s School of Information, told O’Sullivan.

There are many examples of deepfake photos and videos that, even if they don’t trick people, certainly cause harm – such as women whose faces have been deepfaked, without their consent, onto pornography.

    When something is repeated enough online or when a fake narrative takes hold, it can influence the real world. That’s certainly what happened on January 6, 2021, when conspiracy theories that blossomed online turned into an attack on the Capitol.

    “There is no online and offline world; there’s one world, and it’s fully integrated,” Farid told O’Sullivan with regard to the potential for AI to create a false reality online that bleeds into the real world.

    “When things happen on the internet, they have real implications for individuals, for communities, for societies, for democracies, and I don’t think we as a field have fully come to grips with our responsibility here,” he said.

    It’s something to be very careful of as we look at what could be a historic period in which a former president, current candidate, serial conspiracy theorist and master of social media potentially faces criminal charges.


  • Samsung to cut chip production after posting lowest profit in 14 years | CNN Business



Seoul (Reuters) —

    Samsung Electronics said on Friday it would make a “meaningful” cut to chip production after flagging a worse-than-expected 96% plunge in quarterly operating profit, as a sharp downturn in the global semiconductor market worsens.

    Shares in the world’s largest memory chip and TV maker rose 3% in early trading, while rival SK Hynix shares surged 5% as investors welcomed plans to cut production to help preserve pricing power.

Samsung (SSNLF) estimated its operating profit fell to 600 billion won ($455.5 million) in January-March, from 14.12 trillion won a year earlier, in a short preliminary earnings statement. It was the lowest profit for any quarter in 14 years.

    “Memory demand dropped sharply … due to the macroeconomic situation and slowing customer purchasing sentiment, as many customers continue to adjust their inventories for financial purposes,” it said in the statement.

    “We are lowering the production of memory chips by a meaningful level, especially that of products with supply secured,” it added, in a reference to those with sufficient inventories.

    The production cut signal is unusually strong for Samsung, which previously said it would make small adjustments like pauses for refurbishing production lines but not a full-blown cut.

    It did not disclose the size of the planned cut.

The first-quarter profit fell short of an 873 billion won Refinitiv SmartEstimate, which is weighted toward analysts who are more consistently accurate. Multiple estimates were revised down earlier this week.

    It was the lowest since a 590 billion won profit in the first quarter of 2009, according to company data.

    With consumer demand for tech devices sluggish due to rising inflation, semiconductor buyers including data center operators and smartphone and personal computer makers are refraining from new chip purchases and using up inventories.

    Analysts estimated the chip division sustained quarterly losses of more than 4 trillion won ($3.03 billion) as memory chip prices fell and its inventory values were slashed.

    This would be the chip business’ first quarterly loss since the first quarter of 2009, a major divergence for what is normally a cash cow that generates about half of Samsung’s profits in better years.

    Revenue likely fell 19% from the same period a year earlier to 63 trillion won, Samsung said.

    The company is due to release detailed earnings, including divisional breakdowns, later this month.


  • Arkansas governor signs sweeping bill imposing a minimum age limit for social media usage | CNN Business



Washington (CNN) —

    Arkansas Gov. Sarah Huckabee Sanders has signed a sweeping bill imposing a minimum age limit for social media usage, in the latest example of states taking more aggressive steps intended to protect teens online.

    But even as Sanders signed the bill into law on Wednesday afternoon, the legislation appeared to contain vast loopholes and exemptions benefiting companies that lobbied on the bill and raising questions about how much of the industry it truly covers.

The legislation, known as the Social Media Safety Act, takes effect in September and is aimed at giving parents more control over their kids’ social media usage, according to lawmakers. It defines social media companies as any online forum that lets users create public profiles and interact with each other through digital content.

    It requires companies that operate those services to verify the ages of all new users and, if the users are under 18 years old, to obtain a parent’s consent before allowing them to create an account. To perform the age checks, the law relies on third-party companies to verify users’ personal information, such as a driver’s license or photo ID.

    “While social media can be a great tool and a wonderful resource, it can have a massive negative impact on our kids,” Sanders said at a press conference before signing the bill.

    Utah finalized a similar law last month, raising concerns among some users and advocacy groups that the legislation could make user data less secure, internet access less private and infringe upon younger users’ basic rights.

    The push by states to legislate on social media comes after years of mounting scrutiny of the industry and claims that it has harmed users’ well-being and mental health, particularly among teens.

    Despite its seemingly universal scope, however, the new law, also known as SB396, includes numerous carveouts for certain types of digital services and, in some cases, individual companies. And although its sponsors have said the law is specifically meant to apply to certain platforms, including TikTok, parts of the legislative language appear to result in the exact opposite effect.

    In the final days of negotiation over the bill, Arkansas lawmakers approved an amendment that created several categorical exemptions from the age verification requirements. Media companies that “exclusively” offer subscription content; social media platforms that permit users to “generate short video clips of dancing, voice overs, or other acts of entertainment”; and companies that “exclusively offer” video gaming-focused social networking features were exempted.

    Another amendment carved out companies that sell cloud storage services, business cybersecurity services or educational technology and that simultaneously derive less than 25% of their total revenue from running a social media platform.

    Sen. Tyler Dees, a lead co-sponsor of the legislation, explained in remarks on the Arkansas senate floor on April 6 that the exemptions and tweaks to the bill, some of which he said were made in consultation with Apple, Meta and Google, were intended to shield non-social media services from the bill’s age requirements and to focus attention on new accounts created by children, not existing adult accounts.

    “There’s other services that Google offers … like cloud storage, et cetera,” Dees said. “So that’s really the intent of carving out — like LinkedIn, that is a social – I’m sorry, that is a business networking site, and so that’s the intent of those bills.”

    Microsoft-owned LinkedIn is apparently exempt from SB396 under a provision that carves out companies that provide “career development opportunities, including professional networking, job skills, learning certifications, and job posting and application services.”

    Other lawmakers have questioned whether the legislation — which has now become law — exempts a giant of the social media industry: YouTube, whose auto-play features and algorithmic recommendation engine have been accused of promoting extremism and radicalizing viewers.

    The confusion over YouTube appears to stem from the carveout for businesses that offer cloud storage and that make less than 25% of their revenue from social media.

    What is unclear is whether YouTube is subject to SB396 because it is a distinct company within Google whose revenue comes almost entirely from operating a social media platform, or whether it is not covered because YouTube is a part of Google and Google is exempt because it derives only a small share of its revenues from YouTube.

    In response to questions by CNN, Dees said SB396 targets platforms including Facebook, Instagram and TikTok, but omitted any mention of Google and declined to answer whether YouTube specifically would be covered by the law.

    “The purpose of this bill was to empower parents and protect kids from social media platforms, like Facebook, Instagram, TikTok and Snapchat,” Dees said in a statement. “We worked with stakeholders to ensure that email, text messaging, video streaming, and networking websites were not covered by the bill.”

    In remarks at Wednesday’s bill signing, Sanders told reporters that Google and Amazon are exempted from the law, implying that YouTube will not be subject to the age verification requirements imposed on other major social media sites.

Meanwhile, Dees’ statement appeared to contradict the language in SB396 that purports to exempt any company that “allows a user to generate short video clips of dancing, voice overs, or other acts of entertainment in which the primary purpose is not educational or informative” — content that can be commonly found on TikTok, Snapchat and the other social media platforms Dees named.

According to a Meta spokesperson: “We want teens to be safe online. We’ve developed more than 30 tools to support teens and families, including tools that let parents and teens work together to limit the amount of time teens spend on Instagram, and age-verification technology that helps teens have age-appropriate experiences.”

    Meta “automatically set teens’ accounts to private when they join Instagram, we’ve further restricted the options advertisers have to reach teens, as well as the information we use to show ads to teens… and we don’t allow content that promotes suicide, self-harm or eating disorders,” according to the spokesperson, who added: “We’ll continue to work closely with experts, policymakers and parents on these important issues.”

    Spokespeople for Snapchat, TikTok and YouTube didn’t immediately respond to a request for comment.


  • Federal appeals court tosses state antitrust suit seeking to break up Meta | CNN Business




(CNN) —

    A group of states that sued to break up Facebook-parent Meta in 2020 were years too late to file their challenge and failed to make a persuasive case that the company’s data policies harmed competition, a federal appeals court ruled Thursday in a sweeping victory for the tech giant.

    In siding with Meta, the decision by a three-judge panel of the US Court of Appeals for the DC Circuit upheld a lower-court decision tossing out the suit initially filed by New York and dozens of other states.

    The decision is a blow to regulators who have cited Meta as a prime example of the way tech giants have allegedly abused their dominance. And it casts a shadow over a parallel antitrust case against Meta that was brought by the Federal Trade Commission at around the same time.

    The states’ original complaint had sought to unwind Meta’s past acquisitions of Instagram and WhatsApp, accusing the company of a “buy-or-bury” approach that violated antitrust laws.

    In 2021, a federal judge dismissed the complaint, saying that the lawsuit came long after the acquisitions had been completed in 2012 and 2014. Thursday’s appellate decision agreed.

    “An injunction breaking up Facebook, ordering it to divest itself of Instagram and WhatsApp under court supervision, would have severe consequences, consequences that would not have existed if the States had timely brought their suit and prevailed,” wrote Senior Circuit Judge Raymond Randolph.

    In addition, Randolph wrote, state allegations claiming that Meta’s — then Facebook’s — policies placing restrictions on app developers were anticompetitive didn’t hold up.

    The policies in question, Randolph wrote, simply told app developers they could not use Facebook’s platform “to duplicate Facebook’s core products,” and did not rise to the level of an antitrust violation under federal law.

    Although the states argued that Facebook’s policies at the time — which have since been removed — discouraged innovation by the company’s rivals, the complaint failed to establish how widely the policies affected Facebook’s third-party developers.

    “The States thus have not adequately alleged that this policy substantially foreclosed Facebook’s competitors, giving us an additional reason to reject their exclusive dealing theory,” the court held.

    A spokesperson for New York Attorney General Letitia James didn’t immediately respond to a request for comment.

In a statement, Meta said the states’ case reflected a mischaracterization of “the vibrant competitive ecosystem in which we operate.”

“In affirming the dismissal of this case, the court noted that this enforcement action was ‘odd’ because we compete in an industry that is experiencing ‘rapid growth and innovation with no end in sight,’” Meta said. “Moving forward, Meta will defend itself vigorously against the FTC’s distortion of antitrust laws and attacks on an American success story that are contrary to the interests of people and businesses who value our services.”

    In spite of Thursday’s decision, Meta must still face a similar lawsuit by the FTC, which also seeks to break up the company in connection with its Instagram and WhatsApp acquisitions.

    Last year, the same federal judge who dismissed the state suit, James Boasberg, allowed the federal suit to proceed. Boasberg had tossed out the FTC suit as well in 2021, saying the agency had failed to make an initial showing that Meta holds a monopoly in personal social networking. But he permitted the FTC to re-file its complaint with changes.


  • A key safety executive at TikTok is leaving as lawmakers keep pressure on the app | CNN Business



New York (CNN) —

    TikTok is about to lose a key safety executive as the app faces growing pressure from lawmakers and threats of a ban in the United States.

    TikTok’s Head of US Data Security Trust and Safety Eric Han is set to leave the company next week. His departure was confirmed to CNN by TikTok spokesperson Maureen Shanahan. The news was first reported Tuesday by The Verge.

    In the role, which he has held since 2019, Han led policy decisions such as those aimed at reducing the spread of dangerous challenges and cracking down on paid political posts by influencers. The position will be temporarily filled by Andy Bonillo, TikTok’s interim general manager of US data security, until a permanent replacement is found, Shanahan said.

    With the move, TikTok will lose a key safety leader at a difficult moment for the platform. US lawmakers in recent months have ramped up calls for a nationwide ban of the app over concerns that its parent company ByteDance’s connections to China could pose a national security risk to the United States.

    TikTok confirmed in March that federal officials have demanded that the app’s Chinese owners sell their stake in the social media platform, or risk facing a US ban of the app. And last month, Montana lawmakers approved legislation to ban TikTok on personal devices, which would make it the first state to do so, assuming the bill is signed by the state’s governor.

    TikTok CEO Shou Chew testified before Congress in March and attempted to reassure lawmakers about the safety of the app and the security of US users’ data.

    TikTok did not respond to a question about the reason for Han’s departure.


  • OpenAI CEO Sam Altman to testify before Congress | CNN Business



Washington (CNN) —

    OpenAI CEO Sam Altman will testify before Congress next Tuesday as lawmakers increasingly scrutinize the risks and benefits of artificial intelligence, according to a Senate Judiciary subcommittee.

    During Tuesday’s hearing, lawmakers will question Altman for the first time since OpenAI’s chatbot, ChatGPT, took the world by storm late last year.

    The groundbreaking generative AI tool has led to a wave of new investment in AI, prompting a scramble among US policymakers who have called for guardrails and regulation amid fears of AI’s misuse.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”


  • TikTok sues Montana over new law banning the app | CNN Business



New York (CNN) —

    TikTok on Monday filed a suit against Montana over a bill that would ban the popular short-form video app in the state starting early next year.

    TikTok alleges that the ban violates the US Constitution, including the First Amendment, as well as other federal laws, according to a complaint filed in Montana District Court. The company also claims concerns that the Chinese government could access the data of US TikTok users – which are a key motivation behind the ban – are “unfounded.”

    The bill was signed by Montana Gov. Greg Gianforte last week, and would impose a fine of $10,000 per day on TikTok or app stores for making the app available to personal devices in the state starting on January 1, 2024.

    “We are challenging Montana’s unconstitutional TikTok ban to protect our business and the hundreds of thousands of TikTok users in Montana,” TikTok spokesperson Brooke Oberwetter said in a statement. “We believe our legal challenge will prevail based on an exceedingly strong set of precedents and facts.”

    Emily Flower, a spokesperson for Montana’s Attorney General, told CNN: “We expected a legal challenge and are fully prepared to defend the law.”

    The Montana law stems from growing criticism of TikTok over its ties to China through its parent company, ByteDance. Many US officials have expressed fears that the Chinese government could potentially access US data via TikTok for spying purposes, though there is no evidence that the Chinese government has ever done so. Some federal lawmakers have also called for a ban.

    Montana’s ban went a step beyond other states that have restricted TikTok from government devices. But legal and technology experts say there are challenges for Montana, or any state, to enforce such a ban. Even if the law is allowed to stand, the practicalities of the internet may make it impossible to keep TikTok out of the hands of users.

    TikTok said in the complaint that the app is used by “hundreds of thousands” of people in Montana to “communicate with each other and others around the world on an endless variety of topics, from business to politics to the arts.”

    “This unprecedented and extreme step of banning a major platform for First Amendment speech, based on unfounded speculation about potential foreign government access to user data and the content of the speech, is flatly inconsistent with the constitution,” TikTok said in the complaint.

TikTok is asking the court to invalidate the law and permanently enjoin Montana from enforcing the ban.

    The legal challenge by TikTok is an indicator of the hurdles that Montana and other lawmakers could face in attempting to restrict the platform in the United States. A group of TikTok creators also sued Montana last week over the state’s ban, saying it violates their First Amendment rights.

    CNN’s Brian Fung contributed to this report.


  • AI chip boom sends Nvidia’s stock surging after whopper of a quarter | CNN Business



New York (CNN) —

    The AI boom is here, and Nvidia is reaping all the benefits.

Shares of Nvidia (NVDA) exploded 28% higher Thursday after the company reported earnings and sales that surged well above Wall Street’s already lofty expectations. That was enough to make investors temporarily forget about America’s dangerous debt ceiling standoff, sending the broader stock market higher — even after credit rating agency Fitch warned late Wednesday that America could soon lose its sterling AAA debt rating.

    Nvidia makes chips that power generative AI, a type of artificial intelligence that can create new content, such as text and images, in response to user prompts. That’s the kind of AI underlying ChatGPT, Google’s Bard, Dall-E and many of the other new AI technologies.

    “The computer industry is going through two simultaneous transitions — accelerated computing and generative AI,” said Jensen Huang, Nvidia’s CEO, in a statement. “A trillion dollars of installed global data center infrastructure will transition from general purpose to accelerated computing as companies race to apply generative AI into every product, service and business process.”

    Huang said Nvidia is increasing supply of its entire suite of data center products to meet “surging demand” for them.

    Last quarter, Nvidia’s profit surged 26% to $2 billion, and sales rose 19% to $7.2 billion, each easily surpassing Wall Street analysts’ forecasts. Nvidia’s outlook for the current quarter was also significantly — about 50% — higher than analysts’ predictions.

    Nvidia’s stock is up nearly 110% this year.

    “There is not one better indicator around underlying AI demand going on … than the foundational Nvidia story,” said Dan Ives, analyst at Wedbush. “We view Nvidia at the core hearts and lungs of the AI revolution.”


  • Microsoft to pay $20 million to settle Xbox Live privacy allegations | CNN Business



Washington (CNN) —

    Microsoft will pay $20 million to settle US government allegations that the tech giant violated children’s privacy by illegally collecting their personal information through its Xbox Live gaming service.

    According to the Federal Trade Commission, Microsoft broke the law by failing to tell parents about the full breadth of information it gathered from kids under the age of 13.

    That information, the FTC said in a lawsuit filed Monday, included the fact that children may share images of themselves in their account profiles, as well as video and audio recordings of themselves, their real names and logs of their activity on the platform.

    Microsoft also allegedly kept for years the personal information of millions of people, including children, who started creating accounts with Xbox Live but who never completed the sign-up process.

    “Even when a user indicated that they were under 13, they were also asked, until late 2021, to provide additional personal information including a phone number and to agree to Microsoft’s service agreement and advertising policy, which until 2019 included a pre-checked box allowing Microsoft to send promotional messages and to share user data with advertisers,” the FTC said in a release.

    In a statement, Microsoft said: “We recently entered into a settlement with the U.S. Federal Trade Commission (FTC) to update our account creation process and resolve a data retention glitch found in our system. We are committed to complying with the order.”

    Parental settings give adults some control over what their children’s accounts show to other users. For example, Xbox Live’s default settings restrict who children can interact with on the service, the FTC said. But other default settings, the agency alleged, allow kids to access third-party games and apps with minimal friction.

    Microsoft failed to sufficiently disclose to parents what information the company was collecting from kids and how it was being used, the FTC said, alleging violations of the Children’s Online Privacy Protection Act (COPPA).

    In agreeing to settle the claims, Microsoft committed to several additional measures beyond the financial penalty.

    Microsoft agreed to delete any personal information it collects from kids if they don’t complete the account registration process. It also agreed to tell third-party game publishers when a user may be a child, effectively putting the third-party publishers on notice to comply with COPPA in handling the user’s information.

    The settlement comes as the FTC has challenged Microsoft’s $69 billion acquisition of video game giant Activision-Blizzard, a proposed deal that would turn Microsoft into the world’s third-largest game publisher and give it control over popular franchises such as “Call of Duty” and “World of Warcraft.”

    US and UK officials have alleged that Microsoft’s acquisition could give it anti-competitive control over the games industry by being able to withhold titles from rival platforms, particularly in the nascent cloud gaming sector. To address the concerns, Microsoft has struck licensing deals with other companies to ensure their customers continue to have access to Activision games following the deal’s close.

    Those concessions have convinced the European Union to approve the deal, but litigation to block the deal involving US and UK regulators remains ongoing.


  • Europe is leading the race to regulate AI. Here’s what you need to know | CNN Business



London (CNN) —

    The European Union took a major step Wednesday toward setting rules — the first in the world — on how companies can use artificial intelligence.

    It’s a bold move that Brussels hopes will pave the way for global standards for a technology used in everything from chatbots such as OpenAI’s ChatGPT to surgical procedures and fraud detection at banks.

    “We have made history today,” Brando Benifei, a member of the European Parliament working on the EU AI Act, told journalists.

    Lawmakers have agreed on a draft version of the Act, which will now be negotiated with the Council of the European Union and EU member states before becoming law.

    “While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” Benifei added.

    Hundreds of top AI scientists and researchers warned last month that the technology posed an extinction risk to humanity, and several prominent figures — including Microsoft President Brad Smith and OpenAI CEO Sam Altman — have called for greater regulation of the technology.

    At the Yale CEO Summit this week, more than 40% of business leaders — including Walmart chief Doug McMillon and Coca-Cola (KO) CEO James Quincey — said AI had the potential to destroy humanity five to 10 years from now.

    Against that backdrop, the EU AI Act seeks to “promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects.”

    Here are the key takeaways.

    Once approved, the Act will apply to anyone who develops and deploys AI systems in the EU, including companies located outside the bloc.

    The extent of regulation depends on the risks created by a particular application, from minimal to “unacceptable.”

    Systems that fall into the latter category are banned outright. These include real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those in China, which assign people a “health score” based on their behavior.

    The legislation also sets tight restrictions on “high-risk” AI applications, which are those that threaten “significant harm to people’s health, safety, fundamental rights or the environment.”

    These include systems used to influence voters in an election, as well as social media platforms with more than 45 million users that recommend content to their users — a list that would include Facebook, Twitter and Instagram.

    The Act also outlines transparency requirements for AI systems.

    For instance, systems such as ChatGPT would have to disclose that their content was AI-generated, distinguish deep-fake images from real ones and provide safeguards against the generation of illegal content.

    Detailed summaries of the copyrighted data used to train these AI systems would also have to be published.

    AI systems with minimal or no risk, such as spam filters, fall largely outside of the rules.

    Most AI systems will likely fall into the high-risk or prohibited categories, leaving their owners exposed to potentially enormous fines if they fall foul of the regulations, according to Racheal Muldoon, a barrister (litigator) at London law firm Maitland Chambers.

    Engaging in prohibited AI practices could lead to a fine of up to €40 million ($43 million) or an amount equal to up to 7% of a company’s worldwide annual turnover, whichever is higher.

    That goes much further than Europe’s signature data privacy law, the General Data Protection Regulation, under which Meta was hit with a €1.2 billion ($1.3 billion) fine last month. GDPR sets fines of up to €10 million ($10.8 million), or up to 2% of a firm’s global turnover.

    Fines under the AI Act serve as a “war cry from the legislators to say, ‘take this seriously’,” Muldoon said.

    At the same time, penalties would be “proportionate” and consider the market position of small-scale providers, suggesting there could be some leniency for start-ups.

    The Act also requires EU member states to establish at least one regulatory “sandbox” to test AI systems before they are deployed.

    “The one thing that we wanted to achieve with this text is balance,” Dragoș Tudorache, a member of the European Parliament, told journalists. The Act protects citizens while also “promoting innovation, not hindering creativity, and deployment and development of AI in Europe,” he added.

    The Act gives citizens the right to file complaints against providers of AI systems and makes a provision for an EU AI Office to monitor enforcement of the legislation. It also requires member states to designate national supervisory authorities for AI.

    Microsoft (MSFT) — which, together with Google, is at the forefront of AI development globally — welcomed progress on the Act but said it looked forward to “further refinement.”

    “We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI,” a Microsoft spokesperson said in a statement.

    IBM (IBM), meanwhile, called on EU policymakers to take a “risk-based approach” and suggested four “key improvements” to the draft Act, including further clarity around high-risk AI “so that only truly high-risk use cases are captured.”

    The Act may not come into force until 2026, according to Muldoon, who said revisions were likely, given how rapidly AI was advancing. The legislation has already gone through several updates since drafting began in 2021.

    “The law will expand in scope as the technology develops,” Muldoon said.


  • YouTube removed video of Robert F. Kennedy, Jr. for violating vaccine misinformation policy | CNN Business



    New York
    CNN
     — 

    YouTube said on Monday that it had removed a video of presidential hopeful Robert F. Kennedy, Jr. being interviewed by podcast host Jordan Peterson for violating its policy prohibiting vaccine misinformation.

    A YouTube spokesperson told CNN that the platform removed the video from Peterson’s channel because it does not allow “content that alleges that vaccines cause chronic side effects, outside of rare side effects that are recognized by health authorities.”

    The platform’s latest move comes as Kennedy, an environmental lawyer and anti-vaccine activist, has gained more mainstream attention with his views and recently had his account reinstated on Instagram as a result of his long-shot presidential campaign.

    YouTube began cracking down broadly on vaccine misinformation in 2021, following an earlier policy preventing false or misleading claims about Covid-19. At the time, YouTube said it would remove the channels of “several well-known vaccine misinformation spreaders,” including one belonging to the Children’s Health Defense, a group affiliated with Kennedy. (The YouTube channel for Kennedy’s presidential campaign remains active.)

    Under its policy, YouTube removes false claims about currently administered vaccines that the World Health Organization and local authorities have approved and confirmed to be safe.

    Although YouTube removed the video, it remains available on Twitter, showing the fractured approach to vaccine misinformation across the internet as his campaign gets underway.

    In a tweet on Sunday, Kennedy noted YouTube’s removal of the video, asking, “What do you think … Should social media platforms censor presidential candidates?”

    Kennedy also gained attention for his anti-vaccine views on a different podcast this week.

    On Monday, prominent vaccine scientist Peter Hotez said he was accosted outside of his home after a Twitter exchange with podcaster Joe Rogan, who challenged Hotez to debate Kennedy over the weekend.

    Hotez had tweeted in support of a Vice article criticizing Spotify’s handling of vaccine misinformation in an interview with Kennedy on Rogan’s show. After Twitter owner Elon Musk and hedge fund manager Bill Ackman weighed in, Hotez said he was “stalked in front of my home by a couple of antivaxxers.”

    Kennedy suggested to Hotez that they have a “respectful, congenial, informative debate.” Hotez said he would go on Rogan’s podcast but would not debate Kennedy.


  • TSMC confirms supplier data breach following ransom demand by Russian-speaking cybercriminal group | CNN Business




    CNN
     — 

    Taiwanese semiconductor giant TSMC confirmed Friday that one of its hardware suppliers was hacked and had data stolen from it, but said the incident had no impact on business operations.

    Confirmation of the breach came after Russian-speaking cybercriminals claimed TSMC as a victim on Thursday and demanded an extraordinary $70 million ransom from the semiconductor firm.

    There were no signs that TSMC or the hardware supplier, Taiwanese firm Kinmax, had any plans to pay the hackers (representatives from both companies didn’t respond to CNN’s questions about any ransom).

    TSMC — one of the world’s largest chipmakers and a key supplier to Apple (AAPL) — was quick to assure investors and the public that the hack had no impact on its operations and that it did not compromise its customers’ data.

    “After the incident, TSMC has immediately terminated its data exchange with this concerned supplier in accordance with the Company’s security protocols and standard operating procedures,” TSMC said in a statement to CNN.

    The hackers accessed Kinmax’s internal “testing environment” for the technology it prepares to deliver to customers, Kinmax said in a statement distributed by TSMC.

    “The leaked content mainly consisted of system installation preparation that the Company provided to our customers as default configurations,” Kinmax said. The company apologized to customers whose names may show up in the leaked data.

    Ransomware groups are known to exaggerate the value of the data they steal and make outlandish demands that are never met.

    LockBit is both the name of the group claiming responsibility for the hack of the TSMC supplier and the name of the ransomware it deploys. LockBit ransomware was the most deployed ransomware around the world in 2022, according to US cybersecurity officials.

    Jon DiMaggio, an executive at security firm Analyst1 who has studied LockBit extensively, said the hackers will likely publish the stolen data or sell it if TSMC refuses to negotiate a ransom.

    For years, American officials and Taiwanese cybersecurity experts have looked to fortify the island’s infrastructure in the face of hacking threats.

    Taiwan’s chip industry is critical to the global hardware supply chain, making any potentially impactful cyberattacks on it a concern for government officials and business executives around the world.

    While the TSMC-related hacking incident doesn’t appear to have been impactful, a separate ransomware attack in 2020 on Taiwan’s state-run energy company temporarily disrupted some customers’ ability to pay for gas with company cards, according to local media reports at the time.


  • Twitter threatens to sue Meta after rival app Threads gains traction | CNN Business




    CNN
     — 

    Twitter is threatening Meta with a lawsuit after the blockbuster launch of Meta’s new Twitter rival, Threads — in perhaps the clearest sign yet that Twitter views the app as a competitive threat.

    On Wednesday, an attorney representing Twitter sent Meta CEO Mark Zuckerberg a letter that accused the company of trade secret theft through the hiring of former Twitter employees.

    The letter was first reported by Semafor. A person familiar with the matter confirmed the letter’s authenticity to CNN.

    The letter by Alex Spiro, an outside lawyer for Twitter owner Elon Musk, alleged that Meta had engaged in “systematic, willful, and unlawful misappropriation of Twitter’s trade secrets and other intellectual property.”

    In response to reports on the letter, Musk tweeted: “Competition is fine, cheating is not.”

    The letter goes on to say that Meta hired former Twitter employees who “have improperly retained Twitter documents and electronic devices” and that Meta “deliberately” involved these employees in developing Threads.

    “Twitter intends to strictly enforce its intellectual property rights,” Spiro continued, “and demands that Meta take immediate steps to stop using any Twitter trade secrets or other highly confidential information.”

    Meta spokesperson Andy Stone flatly dismissed the letter. “No one on the Threads engineering team is a former Twitter employee — that’s just not a thing,” he said on Threads.

    In the months since Musk acquired Twitter for $44 billion, the social network has been challenged by a growing number of smaller microblogging platforms, such as the decentralized social network Mastodon and Bluesky, an alternative backed by former Twitter CEO Jack Dorsey. But Twitter has not threatened either with litigation.

    Unlike some Twitter rivals, Threads has experienced rapid growth, with Zuckerberg reporting 30 million user sign-ups in the app’s first day. As of Thursday afternoon, Threads was the number-one free app on the iOS App Store.

    The legal threat may not necessarily lead to litigation but it could be part of a strategy to slow down Meta, said Carl Tobias, a law professor at the University of Richmond.

    “Sometimes lawyers, they threaten but don’t follow through. Or they see how far they can go. That may be the case, but I don’t know that for sure,” Tobias told CNN. He added: “There may be some value to tying it up in litigation and complicating life for Meta.”


  • Elon Musk rebrands Twitter as X | CNN Business



    New York
    CNN
     — 

    In a radical rebranding, Twitter owner Elon Musk has replaced Twitter’s iconic bird logo with X.

    Musk made the shock announcement of his plans early Sunday. By Monday morning US time, he tweeted that X.com now points to Twitter.com.

    “Interim X logo goes live later today,” he wrote, shortly before sharing a photo of Twitter’s headquarters lit up by a giant new X.

    The Twitter website now features the same logo, while the familiar blue bird is gone.

    Previously, Musk said he was bidding “adieu to the twitter brand and, gradually, all the birds.”

    Twitter (TWTR), founded in 2006, has used its vivid, globally recognized blue bird emblem for more than a decade.

    The renaming could be seen as something of a brand overhaul “Hail Mary” for the company: Musk in recent months has repeatedly warned that Twitter, facing steep losses in ad revenue, was on the edge of bankruptcy.

    Increasing the pressure, earlier this month rival social media platform Threads launched from Facebook (FB) parent Meta. It surpassed 100 million user sign-ups in its first week.

    Twitter had 238 million active users prior to being taken private by Musk in October 2022.

    One of the world’s richest men, Musk was once best known for his innovative efforts through companies SpaceX and Tesla (TSLA) to launch rockets and build electric cars.

    Now, many of the headlines he makes are for his eccentric remarks on his personal Twitter account – often sharing conspiracy theories and getting into public spats on the social media platform.

    Musk overhauled the site after acquiring it for $44 billion in late October, a takeover followed by mass layoffs, disputes over millions of dollars allegedly owed in severance and a note to employees that remaining at the company would mean “working long hours at high intensity.” He wrote: “Only exceptional performance will constitute a passing grade.”

    The upheaval prompted organizations, including the Anti-Defamation League, Free Press and GLAAD, to pressure brands to rethink advertising on Twitter.

    The groups pointed to the mass layoffs as a key factor in their thinking, citing fears that Musk’s cuts would make Twitter’s election-integrity policies effectively unenforceable, even if they technically remain active.

    Musk also began overseeing controversial policy changes which led to frequent service disruptions at Twitter and upended his own reputation in the process.

    In June, Musk named Linda Yaccarino, a former NBCUniversal marketing executive, CEO of the company.

    She commented on the name change on Twitter Sunday afternoon: “It’s an exceptionally rare thing – in life or in business – that you get a second chance to make another big impression. Twitter made one massive impression and changed the way we communicate. Now, X will go further, transforming the global town square.”

    As the new venture begins, it faces challenges. Musk recently disclosed that the platform still has a negative cash flow due to a 50% drop in advertising revenue and heavy debt loads.

    Criticizing the exit, or pause, of such Twitter advertisers as General Mills (GIS), Macy’s (M) and some car companies that compete with Tesla, Musk has called himself a “free speech absolutist” and said he wanted to buy Twitter to bolster users’ ability to speak freely on the platform.

    Musk explained his approach to free speech by saying: “Is someone you don’t like allowed to say something you don’t like? And if that is the case, then we have free speech.”

    He added that Twitter would “be very reluctant to delete things” and that the platform would aim to allow all legal speech. Many users have worried that could mean a rise in hate speech.

    Meanwhile, the initial frenzy around rival Threads appears to have come back to earth, especially as it has been plagued with spam and lacks several user-friendly features that Twitter, now X, offers.

    Adam Mosseri, who is overseeing the Threads launch for Meta, has hinted at plans to add features such as a desktop version of the app, a feed of only accounts a user follows and an edit button.

    Its ability to draw advertising support is, as yet, unproven.


  • Google, Microsoft, OpenAI and Anthropic announce industry group to promote safe AI development | CNN Business




    CNN
     — 

    Some of the world’s top artificial intelligence companies are launching a new industry body to work together — and with policymakers and researchers — on ways to regulate the development of bleeding-edge AI.

    The new organization, known as the Frontier Model Forum, was announced Wednesday by Google, Microsoft, OpenAI and Anthropic. The companies said the forum’s mission would be to develop best practices for AI safety, promote research into AI risks and publicly share information with governments and civil society.

    Wednesday’s announcement reflects how AI developers are coalescing around voluntary guardrails for the technology ahead of an expected push this fall by US and European Union lawmakers to craft binding legislation for the industry.

    News of the forum comes after the four AI firms, along with several others including Amazon and Meta, pledged to the Biden administration to subject their AI systems to third-party testing before releasing them to the public and to clearly label AI-generated content.

    The industry-led forum, which is open to other companies designing the most advanced AI models, plans to make its technical evaluations and benchmarks available through a publicly accessible library, the companies said in a joint statement.

    “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft president Brad Smith. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

    The announcement comes a day after AI experts such as Anthropic CEO Dario Amodei and AI pioneer Yoshua Bengio warned lawmakers of potentially serious, even “catastrophic” societal risks stemming from unrestrained AI development.

    “In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology,” Amodei said in his written testimony.

    Within two to three years, Amodei said, AI could become powerful enough to help malicious actors build functional biological weapons, where today those actors may lack the specialized knowledge needed to complete the process.

    The best way to prevent major harms, Bengio told a Senate panel, is to restrict access to AI systems; develop standard and effective testing regimes to ensure those systems reflect shared societal values; limit how much of the world any single AI system can truly understand; and constrain the impact that AI systems can have on the real world.

    The European Union is moving toward legislation that could be finalized as early as this year that would ban the use of AI for predictive policing and limit its use in lower-risk scenarios.

    US lawmakers are much further behind. While a number of AI-related bills have already been introduced in Congress, much of the driving force for a comprehensive AI bill rests with Senate Majority Leader Chuck Schumer, who has prioritized getting members up to speed on the basics of the industry through a series of briefings this summer.

    Starting in September, Schumer has said, the Senate will hold a series of nine additional panels for members to learn about how AI could affect jobs, national security and intellectual property.


  • 300 million jobs could be affected by latest wave of AI, says Goldman Sachs | CNN Business



    Hong Kong
    CNN
     — 

    As many as 300 million full-time jobs around the world could be automated in some way by the newest wave of artificial intelligence that has spawned platforms like ChatGPT, according to Goldman Sachs economists.

    They predicted in a report Sunday that 18% of work globally could be computerized, with the effects felt more deeply in advanced economies than emerging markets.

    That’s partly because white-collar workers are seen to be more at risk than manual laborers. Administrative workers and lawyers are expected to be most affected, the economists said, compared to the “little effect” seen on physically demanding or outdoor occupations, such as construction and repair work.

    In the United States and Europe, approximately two-thirds of current jobs “are exposed to some degree of AI automation,” and up to a quarter of all work could be done by AI completely, the bank estimates.

    If generative artificial intelligence “delivers on its promised capabilities, the labor market could face significant disruption,” the economists wrote. The term refers to the technology behind ChatGPT, the chatbot sensation that has taken the world by storm.

    ChatGPT, which can answer prompts and write essays, has already prompted many businesses to rethink how people should work every day.

    This month, its developer unveiled the latest version of the software behind the bot, GPT-4. The platform has quickly impressed early users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Further use of such AI will likely lead to job losses, the Goldman Sachs economists wrote. But they noted that technological innovation that initially displaces workers has historically also created employment growth over the long haul.

    While workplaces may shift, widespread adoption of AI could ultimately increase labor productivity — and boost global GDP by 7% annually over a 10-year period, according to Goldman Sachs.

    “Although the impact of AI on the labor market is likely to be significant, most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI,” the economists added.

    “Most workers are employed in occupations that are partially exposed to AI automation and, following AI adoption, will likely apply at least some of their freed-up capacity toward productive activities that increase output.”

    For US workers expected to be affected, for instance, 25% to 50% of their workload “can be replaced,” the researchers added.

    “The combination of significant labor cost savings, new job creation, and a productivity boost for non-displaced workers raises the possibility of a labor productivity boom like those that followed the emergence of earlier general-purpose technologies like the electric motor and personal computer.”

    — CNN’s Nicole Goodkind contributed to this report.


  • Apple’s Weather app briefly went down and rained on everyone’s morning | CNN Business



    New York
    CNN
     — 

    Anyone using their iPhone to check the weather on Tuesday may have had better luck just looking out the window.

    Apple’s default Weather app briefly went down for many users on Tuesday morning, showing blank screens with no data. The result: many users felt clueless about what was happening outside.

    “The Apple Weather app has been down all morning and I never imagined how much disruption that would cause,” wrote one Twitter user. Another tweeted an apparent “Top Gun” reference: “Biggest storm of the season is about to hit Fargo and the Apple weather app is down. I’m flying blind, Goose.”

    There are numerous other sources one could use to determine the weather, including various apps, websites, local news reports and, of course, one’s own eyes. But the apparent disruption from the outage highlights how reliant some have grown on certain popular applications.

    Apple confirmed the outage in a Twitter reply to a frustrated user, noting that some app users may be experiencing a “temporary outage.” The company’s System Status page also flagged the Weather app as facing an ongoing issue.

    Apple did not immediately respond to CNN’s request for comment.

    One CNN reporter saw only a handful of cities on the Weather app home screen load with full data, while most cities remained completely blank. The app usually displays information including hourly forecast, 10-day forecast, air quality index, precipitation, UV index and more.

    The app was revamped as part of the iOS 16 release in September after Apple bought popular weather service Dark Sky in 2020 and fully integrated its features into the newest operating system.


  • Cash App founder Bob Lee knew the suspect in his stabbing death, police say | CNN Business




    CNN
     — 

    San Francisco Police have arrested Nima Momeni in connection with the murder of Cash App founder Bob Lee, San Francisco Police Chief Bill Scott said during a news conference on Thursday.

    Scott described Momeni as a 38-year-old man from Emeryville, California. Scott said Momeni and Lee knew one another, but he didn’t provide further details about their connection.

    California Secretary of State Records indicate that Momeni has been the owner of an IT business, which, according to its website, provides services like technical support.

    Momeni was taken into custody without incident, according to Scott, and taken to the San Francisco County jail where he was booked on one charge of murder.

    Lee was stabbed to death in the Rincon Hill neighborhood of San Francisco early in the morning of April 4th. The moments following the stabbing attack were captured on surveillance video and in a 911 call to authorities, according to a local Bay Area news portal.

    The surveillance footage, reviewed by the online news site The San Francisco Standard, shows Lee walking alone on Main Street, “gripping his side with one hand and his cellphone in the other, leaving a trail of blood behind him.”

    Many in the tech world and beyond responded to news of Lee’s death with an outpouring of shock and grief. Some, including Elon Musk, also said the incident highlighted the fact that “violent crime in SF is horrific.”

    But on Thursday, San Francisco District Attorney Brooke Jenkins criticized Musk’s statement as “reckless and irresponsible.” Jenkins said Musk’s remark “assumed incorrect circumstances” about the death and effectively “spreads misinformation” while police were actively working to solve the case.

    Lee was the former chief technology officer of Square who helped launch Cash App. He later joined MobileCoin, a cryptocurrency and digital payments startup, in 2021 as its chief product officer.

    Josh Goldbard, the CEO of MobileCoin, previously told CNN: “Bob was a dynamo, a force of nature. Bob was the genuine article. He was made for the world that is being born right now, he was a child of dreams, and whatever he imagined, no matter how crazy, he made real.”

    Earlier Thursday, San Francisco Board of Supervisors member Matt Dorsey expressed his gratitude to the police department’s homicide detail for “their tireless work to bring Bob Lee’s killer to justice and for their arrest of a suspect this morning.”


  • Meta opens up its Horizon Worlds VR app to teens for the first time, prompting outcries from US lawmakers | CNN Business



    Washington
    CNN
     — 

    Meta is forging ahead with plans to let teenagers onto its virtual reality app, Horizon Worlds, despite objections from lawmakers and civil society groups that the technology could have possible unintended consequences for mental health.

    On Tuesday, the social media giant said children as young as 13 in Canada and the United States will gain access to Horizon Worlds for the first time in the coming weeks.

    The app, which is already available to users above the age of 17, represents Meta CEO Mark Zuckerberg’s vision for a next-generation internet, where users can physically interact with each other in virtual spaces resembling real life.

    “Now, teens will be able to explore immersive worlds, play games like Arena Clash and Giant Mini Paddle Golf, enjoy concerts and live comedy events, connect with others from around the world, and express themselves as they create their own virtual experiences,” Meta said in a blog post.

    Zuckerberg has pushed to spend billions developing VR hardware and software, even as Meta has scaled back significantly in other parts of its business. Last year alone, the company spent nearly $16 billion in its Reality Labs segment and warned investors not to expect profitability from that unit anytime soon.

    Tuesday’s expansion reflects Meta’s attempt to capture early adopters in a key demographic. But it immediately triggered criticism from lawmakers who had pleaded with the company to postpone its plan.

    “Meta is despicably attempting to lure young teens to Horizon Worlds in an attempt to boost its failing platform,” said Connecticut Democratic Sen. Richard Blumenthal, who last month, along with Massachusetts Democratic Sen. Ed Markey, urged Zuckerberg to reconsider letting teens use the app.

    Lawmakers have previously raised alarms about the impact of some of Meta’s other products, including Instagram, on younger users.

    “Meta has a record of abject failure to protect children and teens, and yet again, this company has chosen to put young users at risk so that it can make more money,” Markey said, accusing Meta of “inviting digital disaster.”

    “I’m calling on the company to reverse course and immediately abandon this policy change,” Markey added.

    Those calls were echoed earlier this month by dozens of civil society groups who wrote in an open letter that Meta’s VR offerings could expose users to new privacy risks through the collection of biometric and other data; new forms of unfair and deceptive marketing; and abuse or bullying.

    Meta said in its announcement that in opening up Horizon Worlds to teens, the company would provide protective guardrails, such as by using default settings to make teenage users’ profiles and activity less visible to other users and by applying content ratings to potentially mature virtual spaces. Meta added that its safety controls were developed with input from parents and online safety experts.

    “I hope no one is assuming there is any inclination on our part to simply open the floodgates,” Nick Clegg, Meta’s president of global affairs, told CNN during a recent tech demonstration at the company’s Washington offices. “Clearly we can’t do that. We have to build experiences which are tailored to the unique vulnerabilities of teens.”

    Meta’s announcement Tuesday came as other US government officials said they were beefing up scrutiny of social media’s potential effects on mental health.

    The Federal Trade Commission is “actively working” on hiring in-house psychologists to address concerns linking social media use to teen mental health harms, said Alvaro Bedoya, an FTC commissioner.

    In recent weeks, members of the FTC have been consulting with public health officials and medical professionals to understand the available scientific evidence on the matter, Bedoya told lawmakers on a House Energy and Commerce subcommittee.

    “There is evidence that some uses of social media do, in fact, hurt certain groups of teenagers and children,” Bedoya said, though he cautioned that there were important nuances and caveats in the research. “This is not some moral panic. There is a ‘there’ there.”


  • North America’s largest transportation network suspends use of Twitter for service alerts | CNN Business



    New York
    CNN
     — 

    North America’s largest transportation network suspended the use of Twitter for service alerts Thursday, saying the “reliability of the platform can no longer be guaranteed.”

    The Metropolitan Transportation Authority, which serves 15.3 million passengers across a 5,000-square-mile area surrounding New York City, Long Island, New York State and Connecticut, also said its access to Twitter through the platform’s Application Programming Interface (API) was involuntarily interrupted twice over the past two weeks.

    “The MTA does not pay tech platforms to publish service information and has built redundant tools that provide service alerts in real time,” MTA’s Acting Chief Customer Officer Shanifah Rieara said in a statement. “Those include the MYmta and TrainTime apps, the MTA’s homepage at MTA.info, email alerts and text messages.”

    “Service alerts are also available on thousands of screens in stations, on trains and in buses,” Rieara said. “The MTA has terminated posting service information to Twitter, effective immediately, as the reliability of the platform can no longer be guaranteed.”

    The @MTA account will remain active, and customers will still be able to tweet at MTA accounts, including @nyct_subway, and get responses, according to the MTA.

    – CNN’s Julian Cummings contributed to this report
