ReportWire

Category: Technology

Technology News | ReportWire publishes the latest breaking U.S. and world news, trending topics and developing stories from around the globe.

  • Lyft stock plunges nearly 15% on weaker than expected revenue forecast | CNN Business

    CNN —

    Lyft may have a bumpy road ahead to recovery.

    The ride-hailing company reported revenue of $1 billion for the quarter ending in March, a 14% year-over-year increase that beat Wall Street estimates. But the company forecast weaker-than-expected revenue for the current quarter, which was enough to rattle investors.

    Shares of Lyft plunged nearly 15% in after-hours trading Thursday following the earnings results.

    The latest earnings report comes on the heels of Lyft shaking up its C-suite and announcing plans to cut 26% of its employees as it fights for market share and profitability.

    David Risher, who previously worked at Amazon and Microsoft, recently took over as CEO of Lyft and the company’s two co-founders stepped down from their management positions at the company. Risher has been a member of the Lyft board since 2021.

    On a conference call with analysts on Thursday to discuss the results, Risher said Lyft is currently at “an inflection point” as people return to pre-pandemic social habits.

    “I am very aware that our current levels of growth and profitability are not acceptable,” Risher said on the call, his first as CEO. “I am committed to growing Lyft into a large, durable, profitable business that our riders, drivers and shareholders love, and I look forward to keeping you informed on our progress.”

    Compared with its chief rival Uber, Lyft has so far struggled to bounce back from the pandemic’s hit to its business. While Uber diversified its business beyond ride-hailing by delivering meals and grocery items during the health crisis, Lyft never did. Uber was also better able than Lyft to attract drivers back to its platform as pandemic restrictions eased in the U.S.

    Earlier this week, Uber said in its quarterly earnings report that revenue was up 29%, as demand for its rideshare and delivery services held firm despite lingering recession fears.


  • Uber will now let teens ride in cars alone | CNN Business

    CNN —

    Uber is rolling out new features to make it easier for people of all ages to access its ride-hailing service, including an option that will let teens under the age of 18 ride alone for the first time.

    At its annual product event on Wednesday, Uber unveiled a new teen accounts feature, which allows teens ages 13 to 17 to hail rides and travel in the car on their own. Their parents and guardians can also monitor them remotely through the app.

    The new option rolls out on May 22 in more than a dozen metro areas in the United States and Canada – including New York City, Atlanta, Dallas and Houston – with plans to launch in more cities in the coming weeks and months.

    Previously, those under the age of 18 were not allowed to use Uber without being accompanied in the car by an adult.

    Uber’s move comes at a time when tech companies, and social media firms in particular, are increasingly under scrutiny for the impact their products can have on teens.

    At Wednesday’s event, Uber CEO Dara Khosrowshahi framed the option as helping families “manage the craziness” of juggling getting their kids around and stressed the company’s safety features to ensure that “parents can have peace of mind.”

    The new accounts include a unique PIN that teens must give to their driver before embarking, as well as in-app audio recording of the ride. A live trip-tracking feature also lets a parent follow the trip’s progress via the Uber app. And parents can contact the driver directly during the trip as well as reach Uber’s support team.

    Khosrowshahi also said that “only experienced and highly-rated drivers will be eligible to complete trips with teens.”

    Uber said it consulted Safe Kids Worldwide, a nonprofit organization dedicated to protecting children, on the development of the teen accounts offering.

    Uber also said Wednesday that it is launching a nationwide phone number for anyone without the app to be able to use its service, a move likely aimed at helping older Americans who might not be used to navigating a smartphone.

    Starting on Wednesday, US-based customers can now dial 1-833-USE-UBER (1-833-873-8237), a toll-free number, to speak with an agent in English or Spanish and request a ride on demand or reserve one for a future trip.

    Uber has reported strong growth in recent quarters, defying a slump that has hit much of the tech sector. Uber’s business, which diversified into meal delivery ahead of the pandemic, has also so far fared better than its chief US rival, Lyft, at bouncing back from the health crisis.


  • Twitter loses its top content moderation official at a key moment | CNN Business

    CNN —

    Twitter has lost its top content moderation official just weeks before the company is set to undergo a regulatory stress test by European Union officials focused on its handling of user content, in the latest sign of turbulence at the company under owner Elon Musk.

    On Thursday, Twitter’s head of trust and safety, Ella Irwin, told Reuters she had left the company. Irwin has not addressed the reasons for her departure, but the move coincided with the company’s content moderation dispute with the Daily Wire, a conservative outlet.

    The dispute focused on the forthcoming release of a self-described documentary, “What Is a Woman?”, which Twitter warned would be labeled as “hateful content” due to two instances of misgendering, according to Daily Wire CEO Jeremy Boreing. Musk intervened later Thursday, calling the content moderation decision “a mistake by many people at Twitter” and saying the video would be “definitely allowed.”

    Twitter did not immediately respond to a request for comment on Irwin’s departure.

    But the sudden and unexpected vacancy at Twitter could leave the company without a key content moderation official at a sensitive moment. Later this month at Twitter’s San Francisco offices, EU officials are set to review whether the platform is likely to be compliant with a sweeping content moderation law that could eventually trigger millions of dollars in fines for Twitter if it’s found to be noncompliant.

    That law, known as the Digital Services Act (DSA), will require so-called “very large online platforms,” including Twitter, to abide by tough content moderation standards as early as August. It’s far from clear whether the company can meet those requirements by the deadline, and recent developments at Twitter seem to have further alarmed EU regulators in that respect.

    For months, as Musk has increasingly welcomed more incendiary speech onto the platform Twitter had previously restricted, EU officials have been reminding Twitter of its content moderation obligations under the DSA. The warnings have also come amid mass layoffs at the company that have eliminated entire teams, including much of its content moderation staff.

    Last month, Twitter pulled out of the European Union’s code of conduct on disinformation, a series of voluntary commitments to combat mis- and disinformation that the EU has said would be considered as part of any evaluation of a platform’s compliance with the DSA.

    Although Twitter said it was “committed to fully complying with the Digital Services Act” and would meet its DSA obligations with respect to misinformation “in a manner that reflects Twitter’s unique service,” the company told EU officials “we feel we have no alternative” but to withdraw from the code.

    The announcement prompted swift backlash from Thierry Breton, a top EU commissioner and digital regulator, who appeared to regard Twitter’s decision as an attempt to evade responsibility.

    “Obligations remain,” Breton said. “You can run but you can’t hide.”

    Irwin’s departure could undercut the EU’s confidence further. Without a trust and safety head who would otherwise be expected to attend the EU stress test, Twitter’s ability to effectively respond to the evaluation may be constrained. A spokesperson for the European Commission didn’t immediately respond to a request for comment.

    On Friday, The Wall Street Journal reported that Twitter’s head of brand safety and ad quality also departed the company this week.

    All of this could be problematic for Twitter and Musk in the long run – and could also create an added headache for Linda Yaccarino just as she takes over as the company’s new CEO.

    Companies that fail to abide by the DSA risk fines of up to 6% of their global annual revenue. For Twitter, which is already struggling to regain its financial footing amid significant debt and an advertiser backlash, that’s a cost it can ill afford.
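
    To put that ceiling in concrete terms, the arithmetic is simple; the sketch below uses a hypothetical $5 billion in global annual revenue purely for scale (Twitter’s actual figure isn’t given here).

        annual_revenue = 5_000_000_000        # hypothetical global annual revenue, in USD
        max_dsa_fine = 0.06 * annual_revenue  # DSA ceiling: up to 6% of global annual revenue
        print(f"${max_dsa_fine:,.0f}")        # -> $300,000,000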


  • Instagram lifts ban on anti-vaccine activist Robert F. Kennedy Jr. after launch of presidential bid | CNN Business

    New York CNN —

    Instagram announced Sunday it had lifted its ban on Robert F. Kennedy Jr., the anti-vaccine activist who has launched a presidential bid, two years after it shut down Kennedy’s account for breaking its rules related to Covid-19.

    “As he is now an active candidate for president of the United States, we have restored access to Robert F. Kennedy, Jr.’s, Instagram account,” Andy Stone, a spokesperson for Instagram’s parent company Meta, said in a statement.

    Kennedy, who has a long history of spreading vaccine misinformation, was banned from Instagram in February 2021.

    A company spokesperson at the time said Instagram had removed his account for “repeatedly sharing debunked claims about the coronavirus or vaccines.”

    While Kennedy’s Instagram account was banned, his Facebook account remained active. Both platforms are owned by Meta.

    Kennedy was a leading anti-vaccination voice during the Covid-19 pandemic, using his social media platforms to sow doubt and misinformation about the shots.

    He has promoted false claims about vaccine links to autism and in 2022 compared vaccine mandates to Nazi Germany.

    His wife, actress Cheryl Hines, publicly condemned Kennedy’s remark as “reprehensible” after he invoked Anne Frank, who was murdered by Nazis as a teenager.

    Hines distanced herself from him in January 2022, tweeting: “His opinions are not a reflection of my own.”

    Kennedy’s return to Instagram, first reported by The Washington Post, will give him access to his more than 769,000 followers.

    The decision comes as traditional media and social media companies attempt to navigate a 2024 election campaign fraught with accusations of misinformation and censorship.

    On Friday, YouTube announced it would no longer remove content featuring false claims that the 2020 US presidential election was stolen, reversing a policy instituted more than two years ago amid a wave of misinformation about the election.

    The decision to reinstate Kennedy comes amid a flurry of activity between the candidate and Silicon Valley.

    On Sunday, Twitter founder Jack Dorsey appeared to endorse Kennedy for president, tweeting a YouTube video titled, “Robert F. Kennedy, Jr. argues he can beat Trump and DeSantis in 2024.” Dorsey added in the tweet, “He can and will.”

    On Monday, Kennedy is due to take part in a live audio chat on Twitter with the company’s owner Elon Musk.

    Meta’s decision to allow Kennedy back on Instagram came a few days after the Democratic presidential candidate publicly complained that the platform was blocking his campaign from creating a new account.

    Stone, the Meta spokesperson, told CNN on Sunday that the restriction was a mistake and that the company had resolved the issue.

    Meta executives have long maintained they believe political candidates should be able to use its platforms to reach voters, even if those candidates sometimes break rules that would get other users banned from its platforms.


  • Russian-speaking cyber gang claims credit for hack of BBC and British Airways employee data | CNN Business

    CNN —

    A group of Russian-speaking cyber criminals has claimed credit for a sweeping hack that has compromised employee data at the BBC and British Airways and left US and UK cybersecurity officials scrambling to respond.

    The hackers, known as the CLOP ransomware gang, say they have “information on hundreds of companies.” They’ve given victims until June 14 to discuss a ransom before they start publishing data from companies they claim to have hacked, according to a dark web posting seen by CNN.

    The extortion threat adds urgency to an already high-stakes security incident that has forced responses from tech firms, corporations and government agencies from the US to Canada and the UK.

    The compromise of employee data at the BBC and British Airways came via a breach of a human resources firm, Zellis, that both organizations use.

    “We are aware of a data breach at our third-party supplier, Zellis, and are working closely with them as they urgently investigate the extent of the breach,” a BBC spokesperson told CNN Wednesday. The spokesperson declined to comment on the hackers’ extortion threat.

    A British Airways spokesperson said the company had “notified those colleagues whose personal information has been compromised to provide support and advice.”

    The hackers — a well-known group whose favored malware emerged in 2019 — last week began exploiting a new flaw in a widely used file-transfer software known as MOVEit, appearing to target as many exposed organizations as they could. The opportunistic nature of the hack left a broad swath of organizations vulnerable to extortion.

    Numerous US state government agencies use the MOVEit software, but it’s unclear how many agencies, if any, have been compromised.

    The US Cybersecurity and Infrastructure Security Agency has ordered all federal civilian agencies to update the MOVEit software in light of the hack. No federal agencies have been confirmed as victims, a CISA spokesperson told CNN.

    Together with the Federal Bureau of Investigation, CISA also released advice on dealing with the CLOP hack. Progress Software, the US firm that owns MOVEit, has also urged victims to update their software packages and has issued security advice.

    CISA Executive Director for Cybersecurity Eric Goldstein said in a statement: “CISA remains in close contact with Progress Software and our partners at the FBI to understand prevalence within federal agencies and critical infrastructure.”

    But the effort to respond to the cyber attack is very much ongoing.

    The CLOP hackers are “overwhelmed with the number of victims,” according to Charles Carmakal, chief technology officer at Mandiant Consulting, a Google-owned firm that has investigated the hack. “Instead of directly reaching out to victims over email or telephone calls like in prior campaigns, they are asking victims to reach out to them via email,” he said on LinkedIn Tuesday night.

    Allan Liska, a ransomware expert at cybersecurity firm Recorded Future, also told CNN: “Unfortunately, the sensitive nature of the data often stored on MOVEit servers means there will likely be real consequences stemming from the [data theft] but it will be months before we understand the full fallout from this attack.”


  • North Korea hackers suspected in new $35 million crypto heist | CNN Business

    New York CNN —

    North Korean hackers were likely behind the theft of at least $35 million from a popular cryptocurrency service, multiple crypto-tracking experts told CNN Tuesday.

    It’s the latest in a string of hacks of cryptocurrency firms linked to Pyongyang that US officials worry could be used to fund the North Korean regime’s nuclear and ballistic weapons programs.

    The hackers drained the cryptocurrency accounts of certain customers of Atomic Wallet, an Estonia-based company that claims 5 million users of its software.

    Atomic Wallet said on Saturday that “less than 1%” of monthly users appeared to be affected by the hack. The firm has not specified how much money might have been stolen or who was behind the hack. CNN has requested comment from the firm.

    Some of the apparent victims of the hack took to Twitter to beg the hackers for their money back, posting their cryptocurrency addresses in case the hackers took pity on them.

    North Korean hackers have stolen billions of dollars from banks and cryptocurrency firms over the last several years, providing a key source of revenue for the regime, according to reports from the United Nations and private firms.

    In the Atomic Wallet incident, the hackers’ money-laundering techniques and the tools they used matched telltale North Korean behavior, according to London-based crypto-tracking firm Elliptic.

    An independent cryptocurrency tracker known as ZachXBT told CNN that North Korean hackers were very likely responsible. The amount confirmed stolen could rise above $35 million as Atomic Wallet continues to investigate the incident, the analyst said.

    “The pattern was similar to what we saw with the laundering of Harmony funds back in January,” ZachXBT said, referring to the laundering of $100 million stolen from a California-based firm.

    The FBI blamed North Korea for the hack of Harmony. CNN reported on how private investigators and South Korean intelligence operatives were able to claw back a fraction of that money.

    Thwarting North Korean hacking and money laundering has quickly become a national security priority for the Biden administration. About half of North Korea’s missile program has been funded by cyberattacks and cryptocurrency theft, a White House official said last month.

    CNN has requested comment from the FBI on the Atomic Wallet hack.


  • What the chaos at Twitter means for the future of social movements | CNN Business

    Editor’s Note: The CNN Original Series “The 2010s” looks back at a turbulent era marked by extraordinary political and social upheaval. New episodes air at 9 p.m. ET/PT Sundays.



    CNN —

    When thousands of Egyptians marched through the streets during the Arab Spring of 2011, they had a tool at their disposal that earlier social movements didn’t: Twitter.

    A key group of activists used the platform to form networks and organize protests against the authoritarian regime, while many more demonstrators used it to disseminate information and images from the ground for the rest of the world to see. Months later, organizers from the Occupy Wall Street movement took to Twitter to coordinate protests in New York and beyond.

    Twitter fostered public conversation around the Black Lives Matter movement after the 2014 police killing of Michael Brown in Ferguson, Missouri, and again after the 2020 police killing of George Floyd. It amplified #MeToo in the aftermath of the sexual assault allegations against Hollywood producer Harvey Weinstein, and catapulted other revolutionary movements around the world to global attention.

    “You can’t underestimate the impact of Twitter to social movements,” Amara Enyia, manager of policy and research for the Movement for Black Lives, told CNN.

    Twitter has often been heralded as a democratizing force, bringing previously marginalized voices to the forefront and giving the public a platform to demand accountability from leaders. (It has also enabled the spread of misinformation, extremist ideas and abusive content.)

    But since Elon Musk acquired Twitter last year and the platform plunged into chaos, some organizers and digital media experts have been bracing for the impact that his controversial policy changes and mass layoffs may have on social movements going forward.

    Though Twitter has often been referred to as a public square, some of Musk’s recent moves challenge that description.

    Through Twitter, organizers and political groups have had a level of direct access to policymakers and leaders that wouldn’t have been possible in person, said Rachel Kuo, an assistant professor of media and cinema studies at the University of Illinois, Urbana-Champaign. Verified activists were able to promote certain messages that the algorithm then pushed to the top of users’ feeds, organizers could launch campaigns that caught the attention of high-profile figures and the public could follow along for real-time updates.

    “There are now issues in how people see Twitter as a source of information and a source of political community,” said Kuo, whose research focuses on race, social movements and digital technologies. “It isn’t seen in the same way anymore.”

    [Image: Elon Musk’s controversial policy changes at Twitter could have implications for social movements, some activists say.]

    Musk upended traditional Twitter verification and turned it into a pay-for-play system, leading to the impersonation of government accounts and the spread of fake images. For organizers who opt not to pay the monthly subscription fee for a blue check, that also means a loss of credibility and visibility, Kuo added.

    Twitter, which has cut much of its public relations team under Musk, did not respond to a request for comment.

    Twitter’s role in information-sharing has been disrupted in other ways, too.

    The platform has been plagued by technical glitches after mass layoffs and departures at the company, frustrating many users. People have also reported that the “for you” timeline is showing them content they aren’t interested in.

    As a result of these issues and others, some are leaving Twitter altogether – more than 32 million users are projected to exit the platform in the two years following Musk’s takeover, according to a December 2022 forecast from the market research agency Insider Intelligence. (Twitter reported having 238 million monetizable daily active users last year before Musk acquired it.)

    With fewer people on Twitter, the platform becomes less centralized and the information landscape more fractured, said Sarah Aoun, a privacy and security researcher who works on cybersecurity for the Movement for Black Lives. That makes it harder for activists to connect, exchange tactics and build solidarity in the way they once did.

    [Image: Protesters in Cairo gather in Tahrir Square in November 2011.]

    Musk’s approach to content moderation has also made Twitter a more hostile environment, Aoun said. Twitter has never been a completely safe space for marginalized voices – women, people of color, LGBTQ people and other vulnerable groups have long been targets of online harassment and abuse – but reports from the Center for Countering Digital Hate and Anti-Defamation League indicate an increase in hate speech on the platform under Musk’s leadership. (Musk has previously pushed back at that characterization by focusing on a different metric.)

    Some are also disillusioned over Musk’s decision to reinstate users who were previously suspended for violating the platform’s rules, including former President Donald Trump and GOP Rep. Marjorie Taylor Greene.

    “The lack of verification, the mass exodus, the inability to coordinate the way that we used to be able to coordinate and the content moderation (gutting) makes it a very difficult platform to be on at the moment,” Aoun said.

    Musk has stepped back as Twitter’s CEO, a role now held by former NBCUniversal marketing executive Linda Yaccarino. But he will maintain significant control over the platform as the company’s owner, executive chairman and chief technology officer.

    The changes at Twitter have prompted some activists and organizers to reassess their relationships with the platform.

    Rich Wallace, executive director of the Chicago-based organization Equity and Transformation (EAT), said he previously saw robust engagement on tweets about social injustice or racial inequity, whether from people who agreed with him or not. Now, he finds that substantive posts barely get traction compared with tweets he considers more mundane.

    Wallace said his organization, which seeks to build social and economic equity for Black workers in the informal economy, still shares information about community events on Twitter, but the potential to find new allies or engage in meaningful conversation on the platform is largely a thing of the past.

    Twitter is no longer the space for education and community building that it once was, Wallace said. It’s a shift in how he once viewed the platform, but he isn’t especially concerned. For his organization, it simply means a renewed emphasis on the grassroots, in-person work it was already doing.

    [Image: People raise their fists in June 2020 as they protest the police killing of George Floyd.]

    “As organizers, we’ve been creative in how we organize around barriers,” he said. “This is just one of the newer barriers that we have to assess and organize through.”

    As Kuo sees it, the ways that the changes at Twitter will affect organizing and activism will vary widely. Hyperlocal community organizers or those who work with populations that don’t speak English aren’t typically using Twitter in their day-to-day work, and so the recent shifts likely won’t affect them drastically. But she predicts that mid-to-large nonprofit organizations with communications staff might be rethinking their strategy on the platform.

    “It’s very dependent on organizational structure, form, strategies for change and political vision,” Kuo said.

    Enyia said that on a personal level, she finds she’s engaging with people on Twitter less often and using the platform more to keep up with news. But in her advocacy work with the Movement for Black Lives, it remains an important tool.

    “For us, its utility is in the fact that it creates more access points to our policy platform, to the issues that we’re advocating on,” she said. “And in that regard, it’s still very, very useful.”

    When Musk first took over Twitter, some organizers and activists flocked to other alternatives, such as Mastodon or Bluesky (an app backed by Twitter co-founder and former CEO Jack Dorsey).

    Neither appears to be fulfilling the same purpose that Twitter once did, Aoun and others said. Mastodon and Bluesky are decentralized and fewer people are using them, making it more difficult to build community. And while their numbers are growing, they’re still far smaller than Twitter.

    [Image: The Bluesky app is seen on a phone and laptop in June 2023.]

    In the case of Mastodon, there are privacy and security issues that concern some activists. Because the social network allows users to join different servers run by various groups and individuals, Aoun said “the privacy, security and content moderation is basically as good as the person behind the server.” Twitter – at least before Musk took over – had dedicated privacy and security teams, offering more transparency about how its systems worked.

    Some activists are using popular social networks such as Instagram and TikTok, but the visual nature of those platforms versus the text-based medium of Twitter changes how people are able to interact and engage with each other, Kuo said.

    Twitter has been an incredibly powerful tool for social movements, Enyia said. But ultimately, the platform is just that – a tool.

    “There is no panacea for just the nuts and bolts work that it takes to meet people, to engage people, to organize and talk to people,” Enyia said. “So even if we recognize that social media is a tool, we don’t put all of our eggs in that basket.”

    Social media platforms come and go, and the same could happen to Twitter. So while Enyia’s organization continues to use the platform for its own ends, it’s prepared for a reality in which Twitter is less relevant.

    “We have to stay on top of it to make sure that the tools are serving their purpose as it relates to our work,” Enyia said. “But then we have to be ready to evolve or to move on or to adapt to different tools when it becomes clear that that’s the direction we have to go.”


  • ‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images | CNN Business

    New York CNN —

    For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities.

    But according to Jeffrey McGregor, the CEO of Truepic, it is “truly the tip of the iceberg of what’s to come.” As he put it, “We’re going to see a lot more AI generated content start to surface on social media, and we’re just not prepared for it.”

    McGregor’s company is working to address this problem. Truepic offers technology that it says authenticates media at the point of creation through its Truepic Lens. The application captures data including the date, time, location and device used to make the image, and applies a digital signature to verify whether the image is organic or has been manipulated or generated by AI.
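
    In principle, verification at the point of creation binds the pixels to the capture metadata and signs both, so any later edit invalidates the signature. The sketch below is purely illustrative and is not Truepic’s implementation; the metadata fields and the choice of Ed25519 signatures (via the Python cryptography package) are assumptions for demonstration.

        import hashlib
        import json

        from cryptography.hazmat.primitives.asymmetric import ed25519

        def _payload(image_bytes: bytes, metadata: dict) -> bytes:
            # Canonical encoding of the image hash plus its capture metadata.
            record = {"sha256": hashlib.sha256(image_bytes).hexdigest(), **metadata}
            return json.dumps(record, sort_keys=True).encode()

        def sign_capture(image_bytes, metadata, key: ed25519.Ed25519PrivateKey) -> bytes:
            return key.sign(_payload(image_bytes, metadata))

        def verify_capture(image_bytes, metadata, signature, public_key) -> bool:
            try:
                public_key.verify(signature, _payload(image_bytes, metadata))
                return True
            except Exception:  # cryptography raises InvalidSignature on any mismatch
                return False

        # Hypothetical usage: changing the pixels or the metadata breaks verification.
        key = ed25519.Ed25519PrivateKey.generate()
        meta = {"time": "2023-06-07T12:00:00Z", "device": "example-camera"}  # assumed fields
        sig = sign_capture(b"raw image bytes", meta, key)
        assert verify_capture(b"raw image bytes", meta, sig, key.public_key())
        assert not verify_capture(b"edited image bytes", meta, sig, key.public_key())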

    Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image generation tools like Dall-E and Midjourney. Now McGregor says the company is seeing interest from “anyone that is making a decision based off of a photo,” from NGOs to media companies to insurance firms looking to confirm a claim is legitimate.

    “When anything can be faked, everything can be fake,” McGregor said. “Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”

    Tech companies like Truepic have been working to combat online misinformation for years, but the rise of a new crop of AI tools that can quickly generate compelling images and written work in response to user prompts has added new urgency to these efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral and AI-generated images of former President Donald Trump getting arrested were widely shared, shortly before he was indicted.

    Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”

    A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

    But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”

    “This is about mitigation, not elimination,” Hany Farid, a digital forensic expert and professor at the University of California, Berkeley, told CNN. “I don’t think it’s a lost cause, but I do think that there’s a lot that has to get done.”

    “The hope,” Farid said, is to get to a point where “some teenager in his parents’ basement can’t create an image and swing an election or move the market half a trillion dollars.”

    Companies are broadly taking two approaches to address the issue.

    One tactic relies on developing programs to identify images as AI-generated after they have been produced and shared online; the other focuses on marking an image as real or AI-generated at its conception with a kind of digital signature.

    Reality Defender and Hive Moderation are working on the former. With their platforms, users can upload existing images to be scanned and receive an instant breakdown with a percentage indicating the likelihood, based on a large amount of training data, that the image is real or AI-generated.

    Reality Defender, which launched before “generative AI” became a buzzword and was part of the competitive Silicon Valley tech accelerator Y Combinator, says it uses “proprietary deepfake and generative content fingerprinting technology” to spot AI-generated video, audio and images.

    In an example provided by the company, Reality Defender highlights an image of a Tom Cruise deepfake as 53% “suspicious,” telling the user it has found evidence showing the face was warped, “a common artifact of image manipulation.”
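
    Mechanically, these scanning services follow a simple request/response pattern: upload an image, receive a score. The sketch below is a generic illustration only; the endpoint, field names and response shape are invented, and neither Reality Defender’s nor Hive Moderation’s real API is described here.

        import requests  # third-party HTTP client library

        API_URL = "https://api.example-detector.test/v1/scan"  # hypothetical endpoint

        def scan_image(path: str, api_key: str) -> float:
            # Upload the image and return the service's estimated probability
            # that it is AI-generated (assumed response: {"ai_likelihood": 0.53}).
            with open(path, "rb") as f:
                resp = requests.post(
                    API_URL,
                    headers={"Authorization": f"Bearer {api_key}"},
                    files={"image": f},
                    timeout=30,
                )
            resp.raise_for_status()
            return resp.json()["ai_likelihood"]

        # A 0.53 result would correspond to the 53% "suspicious" rating described above.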

    Defending reality could prove to be a lucrative business if the issue becomes a frequent concern for businesses and individuals. These services offer limited free demos as well as paid tiers. Hive Moderation said it charges $1.50 per 1,000 images as well as “annual contract deals” that offer a discount. Reality Defender said its pricing may vary based on various factors, including whether the client needs “any bespoke factors requiring our team’s expertise and assistance.”

    “The risk is doubling every month,” Ben Colman, CEO of Reality Defender, told CNN. “Anybody can do this. You don’t need a PhD in computer science. You don’t need to spin up servers on Amazon. You don’t need to know how to write ransomware. Anybody can do this just by Googling ‘fake face generator.’”

    Kevin Guo, CEO of Hive Moderation, described it as “an arms race.”

    “We have to keep looking at all the new ways that people are creating this content, we have to understand it and add it to our dataset to then classify the future,” Guo told CNN. “Today it’s a small percent of content for sure that’s AI-generated, but I think that’s going to change over the next few years.”

    In a different, preventative approach, some larger tech companies are working to integrate a kind of watermark into images to certify media as real or AI-generated when it is first created. The effort has so far largely been driven by the Coalition for Content Provenance and Authenticity, or C2PA.

    The C2PA was founded in 2021 to create a technical standard that certifies the source and history of digital media. It combines efforts by the Adobe-led Content Authenticity Initiative (CAI) and Project Origin, a Microsoft- and BBC-spearheaded initiative that focuses on combating disinformation in digital news. Other companies involved in C2PA include Truepic, Intel and Sony.

    Based on the C2PA’s guidelines, the CAI makes open source tools for companies to create content credentials, or the metadata that contains information about the image. This “allows creators to transparently share the details of how they created an image,” according to the CAI website. “This way, an end user can access context around who, what, and how the picture was changed — then judge for themselves how authentic that image is.”
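
    As a rough sketch of the idea (the structure below is invented for illustration and is not the actual C2PA schema), a content credential is essentially signed metadata that travels with the file and records who made the image, with what tool, and how it was changed:

        # Hypothetical, simplified content-credential manifest (illustrative only;
        # the real C2PA format differs).
        content_credential = {
            "generator": "ExampleCamera App 2.1",  # software that created the file
            "history": [
                {"action": "created", "when": "2023-06-07T12:00:00Z"},
                {"action": "edited", "tool": "example-editor", "change": "cropped"},
            ],
            "image_hash": "sha256:...",  # binds the claim to the actual pixels
            "signature": "...",          # issuer's cryptographic signature over the above
        }

    A viewer that trusts the issuer can check the signature and display this history, which is the “context around who, what, and how the picture was changed” that the CAI describes.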

    “Adobe doesn’t have a revenue center around this. We’re doing it because we think this has to exist,” Andy Parsons, Senior Director at CAI, told CNN. “We think it’s a very important foundational countermeasure against mis- and disinformation.”

    Many companies are already integrating the C2PA standard and CAI tools into their applications. Adobe’s Firefly, an AI image generation tool recently added to Photoshop, follows the standard through the Content Credentials feature. Microsoft also announced that AI art created by Bing Image Creator and Microsoft Designer will carry a cryptographic signature in the coming months.

    Other tech companies like Google appear to be pursuing a playbook that pulls a bit from both approaches.

    In May, Google announced a tool called About this image, offering users the ability to see when images found on its site were originally indexed by Google, where images might have first appeared and where else they can be found online. The tech company also announced that every AI-generated image created by Google will carry a markup in the original file to “give context” if the image is found on another website or platform.

    While tech companies try to tackle concerns about AI-generated images and the integrity of digital media, experts in the field stress that these businesses will ultimately need to work with each other, and with the government, to address the problem.

    “We’re going to need cooperation from the Twitters of the world and the Facebooks of the world so they start taking this stuff more seriously, and stop promoting the fake stuff and start promoting the real stuff,” said Farid. “There’s a regulatory part that we haven’t talked about. There’s an education part that we haven’t talked about.”

    Parsons agreed. “This is not a single company or a single government or a single individual in academia who can make this possible,” he said. “We need everybody to participate.”

    For now, however, tech companies continue to move forward with pushing more AI tools into the world.


  • Senate Judiciary advances journalism bargaining bill targeting Big Tech | CNN Business

    Washington CNN —

    The Senate Judiciary Committee advanced legislation on Thursday that would give news organizations the power to jointly bargain against Meta, Google and other online platforms for a greater share of online advertising revenue.

    The legislation would create an antitrust exemption allowing radio and TV broadcasters, as well as small news outlets with fewer than 1,500 employees, to “band together” and arrest the decline of local journalism in cities and states across the country, said its lead co-sponsors, Minnesota Democratic Sen. Amy Klobuchar and Louisiana Republican Sen. John Kennedy.

    The concept, a version of which became law in Australia in 2021 and has since been proposed in numerous countries, has been vigorously opposed by tech giants, some of which have threatened to pull news content from their platforms over the legislation.

    Meta and Google didn’t immediately respond to a request for comment.

    The measure cleared the committee by a vote of 14-7. But it faces an uncertain future on the Senate floor.

    One member of the committee, California Democratic Sen. Alex Padilla, voted against the bill Thursday and vowed to block any future floor vote on the legislation until lawmakers make several changes.

    Padilla said the legislation doesn’t do enough to ensure that actual journalists in local newsrooms will benefit from the bargaining, as opposed to hedge funds and publication owners. He also raised concerns that the bill as written could allow online platforms such as Google to charge individual internet users each time they attempt to share or click on a link to a news article, a practice Padilla warned would be harmful to the internet.

    “This bill, as written, does nothing to guarantee the protection or pay of the journalists and media workers that we’re claiming to try to protect,” Padilla said. “For us to ignore them while claiming to be fighting for them is absurd.”

    Several other senators echoed Padilla’s remarks on Thursday, including Democratic Sens. Jon Ossoff, Peter Welch and Cory Booker.

    Kennedy and Klobuchar argued that the bill — which had previously passed out of the committee during the last Congress, in 2022 — is urgently necessary in light of the closure of thousands of local newspapers nationwide since the rise of online platforms.

    “We have small towns in all of our states with news organizations that cover everything from what’s happening in the city council to reports of the local high school football and volleyball games to informing citizens that a flood is coming,” Klobuchar said. “That kind of reporting … is being undermined right now because, in a very tough market, these news reporters and news organizations are not getting the share of the revenue that they should get.”

    Kennedy urged colleagues to set aside their other views on tech platforms and news media.

    “This bill is not about whether or not you like social media,” Kennedy said. “This bill is not about whether or not you like what is happening in American news media today. This bill is about creative content. That’s all it’s about. And whether we respect creative content and value it, or whether we do not.”


  • Schumer outlines plan for how Senate will regulate AI | CNN Business

    CNN —

    Senate Majority Leader Chuck Schumer announced a broad, open-ended plan for regulating artificial intelligence on Wednesday, describing AI as an unprecedented challenge for Congress that effectively has policymakers “starting from scratch.”

    The plan, Schumer said at a speech in Washington, will begin with at least nine panels to identify and discuss the hardest questions that regulations on AI will have to answer, including how to protect workers, national security and copyright and to defend against “doomsday scenarios.” The panels will be composed of experts from industry, academia and civil society, with the first sessions taking place in September, Schumer said.

    The Senate will then turn to committee chairs and other vocal lawmakers on AI legislation to develop bills reflecting the panel discussions, Schumer added, arguing that the resulting US solution could leapfrog existing regulatory proposals from around the world.

    “If we can put this together in a very serious way, I think the rest of the world will follow and we can set the direction of how we ought to go in AI, because I don’t think any of the existing proposals have captured that imagination,” Schumer said, reflecting on other recent proposals such as the European Union’s draft AI Act, which last week was approved by the European Parliament.

    The speech represents Schumer’s most definitive remarks to date on a problem that has dogged Congress for months amid the wide embrace of tools such as ChatGPT: How to catch up, or get ahead, on policymaking for a technology that is already in the hands of millions of people and evolving rapidly.

    In the wake of ChatGPT’s viral success, Silicon Valley has raced to develop and deploy a new crop of generative AI tools that can produce images and writing almost instantly, with the potential to change how people work, shop and interact with each other. But these same tools have also raised concerns for their potential to make factual errors, spread misinformation and perpetuate biases, among other issues.

    In contrast to the fast pace of AI advancements, Schumer has stressed the importance of a deliberate approach, focusing on getting lawmakers acquainted with the basic facts of the technology and the issues it raises before seeking to legislate. He and three other colleagues began last week by convening the first in a series of closed-door briefings on AI for senators that is expected to run through the summer.

    In his remarks Wednesday, Schumer appeared to acknowledge criticism of his pace.

    “I know many of you have spent months calling on us to act,” he said. “I hear you. I hear you loud and clear.”

    But he described AI as a novel issue for which Congress lacks a guide.

    “It’s not like labor, or healthcare, or defense, where Congress has had a long history we can work off of,” he said. “Experts aren’t even sure which questions policymakers should be asking. In many ways, we’re starting from scratch.”

    Schumer described his plan as laying “a foundation for AI policy” that will do “years of work in a matter of months.”

    To guide that process, Schumer expanded on a set of principles he first announced in April. Formally unveiling the framework on Wednesday, Schumer said any legislation on AI should be geared toward facilitating innovation before addressing risks to national security or democratic governance.

    “Innovation first,” Schumer said, “but with security, accountability, [democratic] foundations and explainability.”

    The last two pillars of his framework, Schumer said, may be among the most important, as unrestricted artificial intelligence could undermine electoral processes or make it impossible to critically evaluate an AI’s claims.

    Schumer stopped short of calling for any specific proposals. At one point, he acknowledged that a consensus may even emerge recommending against major government intervention in the technology.

    But he was clear on one point: “We do — we do — need to require companies to develop a system where in simple and understandable terms users understand why the system produced a particular answer, and where that answer came from.”

    The Senate may still be a long way off from unveiling any comprehensive proposal, however. Schumer predicted that the process is likely to take longer than weeks but shorter than years.

    “Months would be the proper timeline,” he said.


  • Google hit with lawsuit alleging it stole data from millions of users to train its AI tools | CNN Business

    CNN —

    Google was hit with a wide-ranging lawsuit on Tuesday alleging the tech giant scraped data from millions of users without their consent and violated copyright laws in order to train and develop its artificial intelligence products.

    The proposed class action suit against Google, its parent company Alphabet and Google’s AI subsidiary DeepMind was filed in a federal court in California on Tuesday by Clarkson Law Firm, which filed a similar suit against ChatGPT-maker OpenAI last month. (OpenAI did not respond to an earlier request for comment on that suit.)

    The complaint alleges that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans” and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken “virtually the entirety of our digital footprint,” including “creative and copywritten works” to build its AI products.

    Halimah DeLaine Prado, Google’s general counsel, called the claims in the suit “baseless” in a statement to CNN. “We’ve been clear for years that we use data from public sources — like information published to the open web and public datasets — to train the AI models behind services like Google Translate, responsibly and in line with our AI Principles,” DeLaine Prado said.

    “American law supports using public information to create new beneficial uses, and we look forward to refuting these baseless claims,” the statement added.

    Alphabet and DeepMind did not immediately respond to a request for comment.

    The complaint points to a recent update to Google’s privacy policy that explicitly states the company may use publicly accessible information to train its AI models and tools such as Bard.

    In response to an earlier Verge report on the update, the company said its policy “has long been transparent” about this practice and “this latest update simply clarifies that newer services like Bard are also included.”

    The lawsuit comes as a new crop of AI tools has gained tremendous attention in recent months for the ability to generate written work and images in response to user prompts. The large language models underpinning this new technology do so by training on vast troves of online data.

    In the process, however, companies are also drawing mounting legal scrutiny over copyright issues from works swept up in these data sets, as well as their apparent use of personal and possibly sensitive data from everyday users, including data from children, according to the Google lawsuit.

    “Google needs to understand that ‘publicly available’ has never meant free to use for any purpose,” Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. “Our personal information and our data is our property, and it’s valuable, and nobody has the right to just take it and use it for any purpose.”

    The suit is seeking injunctive relief in the form of a temporary freeze on commercial access to and commercial development of Google’s generative AI tools like Bard. It is also seeking unspecified damages and payments as financial compensation to people whose data was allegedly misappropriated by Google. The firm says it has lined up eight plaintiffs, including a minor.

    Giordano contrasted the benefits and alleged harms of how Google typically indexes online data to support its core search engine with the new allegations of it scraping data to train AI tools.

    With its search engine, he said, Google can “serve up an attributed link to your work that can actually drive somebody to purchase it or engage with it.” Data scraping to train AI tools, however, is creating “an alternative version of the work that radically alters the incentives for anybody to need to purchase the work,” Giordano added.

    While some internet users may have grown accustomed to their digital data being collected and used for search results or targeted advertising, the same may not be true for AI training. “People could not have imagined their information would be used this way,” Giordano said.

    Ryan Clarkson, a partner at the law firm, said Google needs to “create an opportunity for folks to opt out” of having their data used for training AI while still maintaining their ability to use the internet for their everyday needs.


  • Microsoft under European antitrust investigation over Teams | CNN Business

    Washington CNN —

    European officials are investigating whether Microsoft’s practice of bundling its Teams software with Office 365 is anticompetitive, the European Commission said Thursday.

    The EU probe follows a formal complaint by Microsoft’s rival, the Salesforce-owned Slack, in 2020, alleging that Microsoft has illegally circumvented competition.

    By packaging Teams together with its “well-entrenched” productivity suite, including apps such as Word and Outlook, Microsoft could be effectively blocking customers from seeking out rival collaboration tools, the Commission said. Antitrust officials are also concerned about interoperability issues between Microsoft’s software and third-party products, it added.

    “These practices may constitute anti-competitive tying or bundling and prevent suppliers of other communication and collaboration tools from competing,” the Commission said in a statement.

    Microsoft said in a statement it is cooperating with the probe.

    “We respect the European Commission’s work on this case and take our own responsibilities very seriously,” said a Microsoft spokesperson. “We will continue to cooperate with the Commission and remain committed to finding solutions that will address its concerns.”

    In a press briefing Thursday, EU spokesperson Arianna Podesta told reporters that “at this stage, possible commitments [by Microsoft to resolve the concerns] are too early to be discussed. We first need to identify indeed if there is a breach of antitrust considerations.”

    The in-depth investigation reflects rising EU antitrust scrutiny of Microsoft, which was last fined for a competition violation in 2013, for failing to honor a commitment to give European consumers a choice of web browsers.

    Slack’s initial EU complaint alleged that Microsoft forces Teams onto millions of customers, “blocking its removal, and hiding the true cost to enterprise customers.”

    A Slack executive at the time argued that Microsoft sells a closed ecosystem of its own products, while Slack provides customers with more freedom to mix and match services.

    “This is a proxy for two very different philosophies for the future of digital ecosystems, gateways versus gatekeepers,” said Slack’s VP of communications and policy, Jonathan Prince.


  • ChatGPT creator pulls AI detection tool due to ‘low rate of accuracy’ | CNN Business

    CNN —

    Less than six months after ChatGPT-creator OpenAI unveiled an AI detection tool with the potential to help teachers and other professionals detect AI-generated work, the company has pulled the feature.

    OpenAI quietly shut down the tool last week citing a “low rate of accuracy,” according to an update to the original company blog post announcing the feature.

    “We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” the company wrote in the update. OpenAI said it is also committed to helping “users to understand if audio or visual content is AI-generated.”

    The news may renew concerns about whether the companies behind a new crop of generative AI tools are equipped to build safeguards. It also comes as educators prepare for the first full school year with tools like ChatGPT publicly available.

    The sudden rise of ChatGPT quickly raised alarms among some educators late last year over the possibility that it could make it easier than ever for students to cheat on written work. Public schools in New York City and Seattle banned students and teachers from using ChatGPT on the district’s networks and devices. Some educators moved with remarkable speed to rethink their assignments in response to ChatGPT, even as it remained unclear how widespread use of the tool was among students and how harmful it could really be to learning.

    Against that backdrop, OpenAI announced the AI detection tool in February to let users check whether an essay was written by a human or by AI. The feature, which worked on English-language text, was powered by a machine learning system that takes an input and assigns it to one of several categories. After a user pasted a body of text, such as a school essay, into the tool, it returned one of five possible verdicts, ranging from “likely generated by AI” to “very unlikely.”
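
    Conceptually, the feature reduced to a classifier that outputs a probability, which is then bucketed into human-readable verdicts. The sketch below illustrates only that final bucketing step; the threshold values are invented for the example and are not OpenAI’s actual cutoffs.

        def label_ai_likelihood(p_ai: float) -> str:
            # Map a model's estimated probability that a text is AI-generated
            # to one of five verdicts. Thresholds are illustrative assumptions.
            if p_ai < 0.10:
                return "very unlikely to be AI-generated"
            if p_ai < 0.45:
                return "unlikely to be AI-generated"
            if p_ai < 0.90:
                return "unclear if it is AI-generated"
            if p_ai < 0.98:
                return "possibly AI-generated"
            return "likely AI-generated"

        print(label_ai_likelihood(0.93))  # -> "possibly AI-generated"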

    But even on its launch day, OpenAI admitted the tool was “imperfect” and results should be “taken with a grain of salt.”

    “We really don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times – much like using AI for any kind of assessment purposes,” Lama Ahmad, policy research director at OpenAI, told CNN at the time.

    While the tool might provide another reference point, such as comparing past examples of a student’s work and writing style, Ahmad said “teachers need to be really careful in how they include it in academic dishonesty decisions.”

    Although OpenAI may be shelving its tool for now, there are some alternatives on the market.

    Other companies, such as Turnitin, have also rolled out AI writing detection tools that could help teachers identify when assignments are generated by chatbots like ChatGPT. Meanwhile, Princeton student Edward Tian introduced a similar AI detection tool, GPTZero.


  • A flashing ‘X’ was installed atop the San Francisco headquarters following Twitter’s rebrand. A city complaint says the sign went up without a permit | CNN Business

    A flashing ‘X’ was installed atop the San Francisco headquarters following Twitter’s rebrand. A city complaint says the sign went up without a permit | CNN Business



    San Francisco
    CNN
     — 

    In a complaint, the city of San Francisco says it has visited the headquarters of the company formerly known as Twitter twice since Friday regarding the new flashing “X” sign on top of the building.

    According to the complaint, a notice of violation (NOV) was issued for work without a permit for the new sign that adorns the building where the social media platform’s headquarters is located.

    Owner Elon Musk rebranded Twitter and its iconic bird logo as X last week, as CNN previously reported. He tweeted a video of the building with the new flashing “X” logo on Saturday, saying, “Our HQ in San Francisco tonight.”

    “NOV issued for work without permit. Site visited by MH and spoke with Tweeter (sic) representatives and Building maintenance engineer representatives. I explained BID’s complaint investigation process and requested access to roof area. Tweeter (sic) representative decline to provide access but did explain that the structure is a temporary lighted sign for an event. I explained to all representatives that the NOV requires the structure to be remove with a building permit or legalize,” the complaint reads.

    The complaint also noted that a second attempt to access the roof, made on Saturday, was denied.

    Patrick Hannan, a spokesperson for the city’s Department of Building Inspection, told the Washington Post that the city requires a permit to approve new letters or symbols on a sign “to ensure consistency with the historic nature of the building and to ensure the new additions are safely attached to the sign,” the paper reported.

    CNN has reached out to the City of San Francisco and X for comment.

    According to the city’s website, a notice of violation can incur fees, including permit and investigation fees. It is unknown what fees X could face.


  • First on CNN: A new group of Twitter vendors is suing the company for alleged unpaid bills | CNN Business

    First on CNN: A new group of Twitter vendors is suing the company for alleged unpaid bills | CNN Business



    New York
    CNN
     — 

    A group of Twitter vendors on Tuesday filed a proposed class action lawsuit alleging that the company has failed to pay tens of thousands of dollars in overdue bills.

    The four firms — captioning services company White Coat Captioning, consulting group YES Consulting and public relations firms Cancomm and Dialogue México — allege that Twitter is in breach of their contracts and has yet to pay bills ranging from around $40,000 to $140,000 for services they provided the company last year.

    Tuesday’s lawsuit, which was filed in California Northern District Court and first reported by CNN, refers to the companies suing Twitter as “small businesses without the resources, time, and money to litigate these claims on their own.”

    The lawsuit comes as new Twitter owner Elon Musk attempts to slash costs after buying the company for $44 billion, a significant amount of which came from debt financing. It also adds to the growing list of legal actions Twitter is facing from landlords, business partners and former employees claiming the company has failed to pay what they are owed since Musk’s takeover.

    Twitter is also facing lawsuits from at least one landlord claiming it has missed rent payments, from a private jet company over unpaid bills for executive flights, and from an event production company that said Twitter failed to pay it after canceling the “Chirp Conference” it had been set to organize in November, following Musk’s takeover of the company.

    The latest suit was filed by Shannon Liss-Riordan, who has also filed four proposed class action lawsuits and hundreds of arbitration demands on behalf of Twitter employees laid off after Musk’s takeover, seeking additional severance they allege the company promised them before the acquisition. Some former workers have also alleged sex and disability discrimination and other issues, claims the company has argued in court are without merit.

    Twitter has moved to dismiss many of the lawsuits in court. Twitter, which fired much of its media relations team last fall, did not immediately respond to a request for comment about the new lawsuit.

    “Elon Musk told Twitter vendors that, if they want to get paid, then sue,” Liss-Riordan said in a statement to CNN, referring to comments reportedly made by the Twitter owner. “Well, he’s now getting his wish. Businesses, like employees, should not have to sue to get paid what they are owed.”

    In the new lawsuit, White Coat Captioning said it provided real-time captioning services for events and classes for Twitter employees who were hard of hearing or spoke languages other than English. The company alleged that it began contacting Twitter in November about overdue and pending invoices for services rendered under a contract signed in March 2022.

    “Twitter reassured White Coat Captioning it had processed and would pay these invoices, but it never did,” the firm alleged in the complaint. In January, the firm claims that Twitter said it was conducting an “additional review” of the invoices. Twitter owes the captioning company around $42,000, according to the complaint.

    YES Consulting, which said it provided leadership training to Twitter employees per an agreement signed in March 2022, alleges that Twitter owes it approximately $49,000 for services provided between August and November last year.

    Latin American public relations firm Dialogue also alleges that Twitter has failed to pay approximately $140,000 for eight invoices for services provided in November and December of last year.

    The vendors are seeking damages in the amount each company is allegedly owed by Twitter, as well as interest.


  • Samsung to cut chip production after posting lowest profit in 14 years | CNN Business

    Samsung to cut chip production after posting lowest profit in 14 years | CNN Business



    Seoul
    Reuters
     — 

    Samsung Electronics said on Friday it would make a “meaningful” cut to chip production after flagging a worse-than-expected 96% plunge in quarterly operating profit, as a sharp downturn in the global semiconductor market worsens.

    Shares in the world’s largest memory chip and TV maker rose 3% in early trading, while rival SK Hynix shares surged 5% as investors welcomed plans to cut production to help preserve pricing power.

    Samsung (SSNLF) estimated its operating profit fell to 600 billion won ($455.5 million) in January-March, from 14.12 trillion won a year earlier, in a short preliminary earnings statement. It was the lowest profit for any quarter in 14 years.

    “Memory demand dropped sharply … due to the macroeconomic situation and slowing customer purchasing sentiment, as many customers continue to adjust their inventories for financial purposes,” it said in the statement.

    “We are lowering the production of memory chips by a meaningful level, especially that of products with supply secured,” it added, in a reference to those with sufficient inventories.

    The production cut signal is unusually strong for Samsung, which previously said it would make small adjustments like pauses for refurbishing production lines but not a full-blown cut.

    It did not disclose the size of the planned cut.

    The first-quarter profit fell short of an 873 billion won Refinitiv SmartEstimate, which is weighted toward analysts who are more consistently accurate. Multiple estimates were revised down earlier this week.

    It was the lowest since a 590 billion won profit in the first quarter of 2009, according to company data.

    With consumer demand for tech devices sluggish due to rising inflation, semiconductor buyers including data center operators and smartphone and personal computer makers are refraining from new chip purchases and using up inventories.

    Analysts estimated the chip division sustained quarterly losses of more than 4 trillion won ($3.03 billion) as memory chip prices fell and its inventory values were slashed.

    This would be the chip business’ first quarterly loss since the first quarter of 2009, a major divergence for what is normally a cash cow that generates about half of Samsung’s profits in better years.

    Revenue likely fell 19% from the same period a year earlier to 63 trillion won, Samsung said.

    The company is due to release detailed earnings, including divisional breakdowns, later this month.


  • Tim Cook and Bob Iger to meet with House China committee members | CNN Business

    Tim Cook and Bob Iger to meet with House China committee members | CNN Business



    Washington
    CNN
     — 

    Members of a House panel focused on US-China competition are set to meet with leaders from Silicon Valley and Hollywood during a multi-day tour of California beginning today, according to a source close to the committee.

    The House Select Committee on the Chinese Communist Party plans to meet with top executives from Google, Microsoft, Apple and Disney, among others, to discuss topics ranging from China’s investments in artificial intelligence to its cultural and human rights record, its impact on supply chains, and its goals for defense and other emerging technologies, the source said.

    “We’re going to learn and share our concerns and views on the geopolitics at play here, and what we understand the CCP’s broader ambitions to be,” the source said.

    The 10-member bipartisan congressional delegation led by Chairman Mike Gallagher, a Wisconsin Republican, will kick things off Wednesday in a meeting with Disney CEO Bob Iger, where lawmakers are expected to raise concerns about Disney’s compliance with China’s censorship regime.

    Lawmakers will also dine with entertainment producers and screenwriters who have been critical of the industry’s approach to wooing Chinese viewers, the source said.

    On Thursday, lawmakers will engage with officials from Big Tech and venture capital, the source said. Microsoft President Brad Smith will speak to members about China’s control of rare earth minerals, a key input in many modern computing technologies, while experts from Stanford University are set to discuss innovation in the defense field. The group is expected to lunch with Big Tech executives representing Google, Microsoft, Palantir and Scale AI.

    On Friday, lawmakers will have conversations with former Defense Secretary James Mattis as well as Apple CEO Tim Cook. China is Apple’s third-largest geographic business segment after the Americas and Europe, accounting for more than $74 billion in company revenues last year. Apple’s revenue from China grew by 70% between 2020 and 2021, according to its financial reports.

    The meetings will also include a session on China’s role in the digital currency space and talks with members of the cryptocurrency community based in California, the source added.

    The breadth of subjects covered on the tour highlights the range of challenges the Chinese government poses to US leadership, the source said, adding that lawmakers will seek to deliver the message to businesses that excessive dependence on China, whether for supplies or as a base of potential customers, exposes the US to risk.

    “This committee was set up to build out the bipartisan consensus on the CCP and the actions we need to take to defend ourselves,” the source said. “[The goal is to] make them aware of what’s happening so they can equip themselves as appropriate.”


  • You can now apply for your share of a $725 million Facebook data privacy settlement. Here’s how | CNN Business

    You can now apply for your share of a $725 million Facebook data privacy settlement. Here’s how | CNN Business



    New York
    CNN
     — 

    Facebook users who had an active account at any point between May 2007 and December 2022 can now apply to receive a piece of parent company Meta’s $725 million settlement related to the Cambridge Analytica scandal.

    Meta in December agreed to the payment to settle a longstanding class action lawsuit accusing it of allowing Cambridge Analytica and other third parties to access private user information and misleading users about its privacy practices.

    The legal battle began four years ago, following an international outcry over the company’s disclosure that the private information of as many as 87 million Facebook users had been obtained by Cambridge Analytica, a data analytics firm that worked with the Trump campaign.

    The California judge overseeing the case granted preliminary approval of the settlement late last month, and Facebook users can now apply for a cash payment as part of the settlement.

    The claim form — which requires a few personal details and information about a user’s Facebook account — can be filled out online or printed and submitted by mail. The form takes only a few minutes to complete and must be submitted by August 25 to be included as part of the settlement.

    Any US Facebook user who had an active account sometime between May 24, 2007, and December 22, 2022, is eligible to be part of the settlement class, including those who have since deleted their accounts.

    It’s not yet clear how much each settlement payment will be. The fund will be distributed to class members who submit valid claims based on how long they had an active Facebook account during the relevant period, according to a frequently asked questions page on the settlement site.
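
    Because the settlement site describes a duration-weighted split rather than specific dollar amounts, the payout math works out to a simple pro-rata calculation. The sketch below is a hypothetical illustration only: the fund amount, claimant names and months of activity are invented, and the court-approved formula (applied after fees and administrative costs) may differ.

    ```python
    # Hypothetical pro-rata split of a settlement fund by months of account activity.
    # All figures below are invented for illustration; they are not from the case.

    def pro_rata_payouts(net_fund, months_active_by_claimant):
        """Split a fund in proportion to each claimant's months of activity."""
        total_months = sum(months_active_by_claimant.values())
        return {
            claimant: round(net_fund * months / total_months, 2)
            for claimant, months in months_active_by_claimant.items()
        }

    # Three hypothetical claimants who held accounts for different lengths of time.
    claims = {"claimant_a": 187, "claimant_b": 24, "claimant_c": 60}
    print(pro_rata_payouts(1_000_000.00, claims))
    # -> roughly {'claimant_a': 690036.9, 'claimant_b': 88560.89, 'claimant_c': 221402.21}
    ```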

    A final settlement approval hearing is set for September 7. Settlement payments will be distributed after the court’s approval, assuming there are no appeals.

    Meta did not admit wrongdoing as part of the settlement. Facebook has made changes in the wake of the Cambridge Analytica incident, including restricting third-party access to user data and improving communications to users about how their data is collected and shared.

    “We pursued a settlement as it’s in the best interest of our community and shareholders,” Meta spokesperson Dina Luce said in a statement following the December settlement agreement. “Over the last three years we revamped our approach to privacy and implemented a comprehensive privacy program. We look forward to continuing to build services people love and trust with privacy at the forefront.”


  • AI pioneer quits Google to warn about the technology’s ‘dangers’ | CNN Business

    AI pioneer quits Google to warn about the technology’s ‘dangers’ | CNN Business



    New York
    CNN
     — 

    Geoffrey Hinton, who has been called the ‘Godfather of AI,’ confirmed Monday that he left his role at Google last week to speak out about the “dangers” of the technology he helped to develop.

    Hinton’s pioneering work on neural networks shaped artificial intelligence systems powering many of today’s products. He worked part-time at Google for a decade on the tech giant’s AI development efforts, but he has since come to have concerns about the technology and his role in advancing it.

    “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times, which was first to report his decision.

    In a tweet Monday, Hinton said he left Google so he could speak freely about the risks of AI, rather than because of a desire to criticize Google specifically.

    “I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton said in a tweet. “Google has acted very responsibly.”

    Jeff Dean, chief scientist at Google, said Hinton “has made foundational breakthroughs in AI” and expressed appreciation for Hinton’s “decade of contributions at Google.”

    “We remain committed to a responsible approach to AI,” Dean said in a statement provided to CNN. “We’re continually learning to understand emerging risks while also innovating boldly.”

    Hinton’s decision to step back from the company and speak out on the technology comes as a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies.

    In March, some prominent figures in tech signed a letter calling for artificial intelligence labs to stop training the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

    In the interview with the Times, Hinton echoed concerns about AI’s potential to eliminate jobs and create a world where many will “not be able to know what is true anymore.” He also pointed to the stunning pace of advancement, far beyond what he and others had anticipated.

    “The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

    Even before stepping aside from Google, Hinton had spoken publicly about AI’s potential to do harm as well as good.

    “I believe that the rapid progress of AI is going to transform society in ways we do not fully understand and not all of the effects are going to be good,” Hinton said in a 2021 commencement address at the Indian Institute of Technology Bombay in Mumbai. He noted how AI will boost healthcare while also creating opportunities for lethal autonomous weapons. “I find this prospect much more immediate and much more terrifying than the prospect of robots taking over, which I think is a very long way off.”

    Hinton isn’t the first Google employee to raise a red flag on AI. In July, the company fired an engineer who claimed an unreleased AI system had become sentient, saying he violated employment and data security policies. Many in the AI community pushed back strongly on the engineer’s assertion.


  • Meta shuts down network of fake accounts that ‘signal a shift’ in China-based influence efforts | CNN Business

    Meta shuts down network of fake accounts that ‘signal a shift’ in China-based influence efforts | CNN Business



    New York
    CNN
     — 

    Facebook’s parent company Meta announced Wednesday that it has taken down a network of more than 100 China-based accounts that posed as organizations in the US and Europe and pushed pro-Beijing talking points.

    The Facebook and Instagram accounts, which included a fictitious news organization and accounts posing as a think tank, likely used deepfake images generated with artificial intelligence to make the fake profiles appear legitimate, Meta said.

    The network, which had more than 15,000 followers on Meta’s platforms, appears to have had some financial resources behind it. In one instance, the people behind the accounts called for protests in Budapest against George Soros, the billionaire philanthropist and frequent target of right-wing groups, and posted on Twitter an offer to pay people to attend. The accounts also offered to pay freelance writers to contribute to at least one of its websites.

    The accounts were awash with pro-China commentary, including “warnings against boycotting the 2022 Beijing Olympics; allegations of US foreign policy in Africa,” and “claims of comfortable living conditions for Uyghurs in China,” Meta said in its report. The fake accounts also posted “negative commentary about Uyghur activists and critics of the Chinese state,” it said.

    Meta did not link the network to the Chinese government, instead saying it found links to individuals in China associated with a technology company. CNN has reached out to the company for comment. Meta regularly takes down covert influence campaigns and discloses information about them in quarterly reports.

    The takedowns “signal a shift in the nature” of China-based influence networks, as Chinese operatives embrace new tactics like setting up a front company, hiring freelance writers around the world and offering to recruit protesters, Ben Nimmo, Meta’s global threat intelligence lead, told reporters on Tuesday.

    While the networks are generally small and have struggled to build an audience, “they are experimenting with diverse tactics and that’s always something we want to keep an eye on,” Nimmo said. 

    The tactics are similar to those used by Russian operatives during the 2016 US presidential election campaign. Using fake personas and posing as representatives of US political and activist organizations, Russians successfully recruited unwitting Americans to take part in political stunts.

    Chinese operatives have in recent years “evolved their posture” from being concerned about being caught influencing US elections to seeing influence operations as another tool to project power, a US official told CNN.

    “We’re keeping a close eye” on the Chinese influence operations heading into the 2024 election, the official said.

    Indictments from special counsel Robert Mueller’s team in 2018 detailed how disinformation from Russia was designed to exacerbate existing divisions in the United States.

    Ahead of the 2022 US midterm election, FBI officials expressed concern that Chinese operatives appeared to be engaging in “Russian-style influence activities” that stoke American divisions. Russian and Chinese government-affiliated operatives and organizations both promoted misinformation about the integrity of American elections that originated in the US during the midterm election season, FBI officials have said. 
