ReportWire

Category: Technology

Technology News | ReportWire publishes the latest breaking U.S. and world news, trending topics and developing stories from around the globe.

  • North Korea hackers suspected in new $35 million crypto heist | CNN Business




New York (CNN) —

    North Korean hackers were likely behind the theft of at least $35 million from a popular cryptocurrency service, multiple crypto-tracking experts told CNN Tuesday.

    It’s the latest in a string of hacks of cryptocurrency firms linked to Pyongyang that US officials worry could be used to fund the North Korean regime’s nuclear and ballistic weapons programs.

    The hackers drained the cryptocurrency accounts of certain customers of Atomic Wallet, an Estonia-based company that claims 5 million users of its software.

    Atomic Wallet said on Saturday that “less than 1%” of monthly users appeared to be affected by the hack. The firm has not specified how much money might have been stolen or who was behind the hack. CNN has requested comment from the firm.

    Some of the apparent victims of the hack took to Twitter to beg the hackers for their money back, posting their cryptocurrency addresses in case the hackers took pity on them.

    North Korean hackers have stolen billions of dollars from banks and cryptocurrency firms over the last several years, providing a key source of revenue for the regime, according to reports from the United Nations and private firms.

    In the Atomic Wallet incident, the hackers’ money-laundering techniques and the tools they used matched telltale North Korean behavior, according to London-based crypto-tracking firm Elliptic.

    An independent cryptocurrency tracker known as ZachXBT told CNN that North Korean hackers were very likely responsible. The amount confirmed stolen could rise above $35 million as Atomic Wallet continues to investigate the incident, the analyst said.

    “The pattern was similar to what we saw with the laundering of Harmony funds back in January,” ZachXBT said, referring to the laundering of $100 million stolen from a California-based firm.

    The FBI blamed North Korea for the hack of Harmony. CNN reported on how private investigators and South Korean intelligence operatives were able to claw back a fraction of that money.

    Thwarting North Korean hacking and money laundering has quickly become a national security priority for the Biden administration. About half of North Korea’s missile program has been funded by cyberattacks and cryptocurrency theft, a White House official said last month.

    CNN has requested comment from the FBI on the Atomic Wallet hack.


  • What the chaos at Twitter means for the future of social movements | CNN Business



    Editor’s Note: The CNN Original Series “The 2010s” looks back at a turbulent era marked by extraordinary political and social upheaval. New episodes air at 9 p.m. ET/PT Sundays.



(CNN) —

    When thousands of Egyptians marched through the streets during the Arab Spring of 2011, they had a tool at their disposal that earlier social movements didn’t: Twitter.

    A key group of activists used the platform to form networks and organize protests against the authoritarian regime, while many more demonstrators used it to disseminate information and images from the ground for the rest of the world to see. Months later, organizers from the Occupy Wall Street movement took to Twitter to coordinate protests in New York and beyond.

    Twitter fostered public conversation around the Black Lives Matter movement after the 2014 police killing of Michael Brown in Ferguson, Missouri, and again after the 2020 police killing of George Floyd. It amplified #MeToo in the aftermath of the sexual assault allegations against Hollywood producer Harvey Weinstein, and catapulted other revolutionary movements around the world to global attention.

    “You can’t underestimate the impact of Twitter to social movements,” Amara Enyia, manager of policy and research for the Movement for Black Lives, told CNN.

    Twitter has often been heralded as a democratizing force, bringing previously marginalized voices to the forefront and giving the public a platform to demand accountability from leaders. (It has also enabled the spread of misinformation, extremist ideas and abusive content.)

    But since Elon Musk acquired Twitter last year and the platform plunged into chaos, some organizers and digital media experts have been bracing for the impact that his controversial policy changes and mass layoffs may have on social movements going forward.

    Though Twitter has often been referred to as a public square, some of Musk’s recent moves challenge that description.

    Through Twitter, organizers and political groups have had a level of direct access to policymakers and leaders that wouldn’t have been possible in person, said Rachel Kuo, an assistant professor of media and cinema studies at the University of Illinois, Urbana-Champaign. Verified activists were able to promote certain messages that the algorithm then pushed to the top of users’ feeds, organizers could launch campaigns that caught the attention of high-profile figures and the public could follow along for real-time updates.

    “There are now issues in how people see Twitter as a source of information and a source of political community,” said Kuo, whose research focuses on race, social movements and digital technologies. “It isn’t seen in the same way anymore.”

[Photo: Elon Musk's controversial policy changes at Twitter could have implications for social movements, some activists say.]

    Musk upended traditional Twitter verification and turned it into a pay-for-play system, leading to the impersonation of government accounts and the spread of fake images. For organizers who opt not to pay the monthly subscription fee for a blue check, that also means a loss of credibility and visibility, Kuo added.

    Twitter, which has cut much of its public relations team under Musk, did not respond to a request for comment.

    Twitter’s role in information-sharing has been disrupted in other ways, too.

    The platform has been plagued by technical glitches after mass layoffs and departures at the company, frustrating many users. People have also reported that the “for you” timeline is showing them content they aren’t interested in.

    As a result of these issues and others, some are leaving Twitter altogether – more than 32 million users are projected to exit the platform in the two years following Musk’s takeover, according to a December 2022 forecast from the market research agency Insider Intelligence. (Twitter reported having 238 million monetizable daily active users last year before Musk acquired it.)

    With fewer people on Twitter, the platform becomes less centralized and the information landscape more fractured, said Sarah Aoun, a privacy and security researcher who works on cybersecurity for the Movement for Black Lives. That makes it harder for activists to connect, exchange tactics and build solidarity in the way they once did.

[Photo: Protesters in Cairo gather in Tahrir Square in November 2011.]

    Musk’s approach to content moderation has also made Twitter a more hostile environment, Aoun said. Twitter has never been a completely safe space for marginalized voices – women, people of color, LGBTQ people and other vulnerable groups have long been targets of online harassment and abuse – but reports from the Center for Countering Digital Hate and Anti-Defamation League indicate an increase in hate speech on the platform under Musk’s leadership. (Musk has previously pushed back at that characterization by focusing on a different metric.)

    Some are also disillusioned over Musk’s decision to reinstate users who were previously suspended for violating the platform’s rules, including former President Donald Trump and GOP Rep. Marjorie Taylor Greene.

    “The lack of verification, the mass exodus, the inability to coordinate the way that we used to be able to coordinate and the content moderation (gutting) makes it a very difficult platform to be on at the moment,” Aoun said.

    Musk has stepped back as Twitter’s CEO, a role now held by former NBCUniversal marketing executive Linda Yaccarino. But he will maintain significant control over the platform as the company’s owner, executive chairman and chief technology officer.

    The changes at Twitter have prompted some activists and organizers to reassess their relationships with the platform.

Rich Wallace, executive director of the Chicago-based organization Equity and Transformation (EAT), said he used to see robust engagement on tweets about social injustice or racial inequity, whether from those who agreed with him or not. Now, he finds that substantive posts barely get traction compared with tweets he considers more mundane.

    Wallace said his organization, which seeks to build social and economic equity for Black workers in the informal economy, still shares information about community events on Twitter, but the potential to find new allies or engage in meaningful conversation on the platform is largely a thing of the past.

Twitter is no longer the space for education and community building that it once was, Wallace said. That marks a shift in how he views the platform, but he isn't especially concerned. For his organization, it simply means a renewed emphasis on the grassroots, in-person work it was already doing.

[Photo: People raise their fists in June 2020 as they protest the police killing of George Floyd.]

    “As organizers, we’ve been creative in how we organize around barriers,” he said. “This is just one of the newer barriers that we have to assess and organize through.”

    As Kuo sees it, the ways that the changes at Twitter will affect organizing and activism will vary widely. Hyperlocal community organizers or those who work with populations that don’t speak English aren’t typically using Twitter in their day-to-day work, and so the recent shifts likely won’t affect them drastically. But she predicts that mid-to-large nonprofit organizations with communications staff might be rethinking their strategy on the platform.

    “It’s very dependent on organizational structure, form, strategies for change and political vision,” Kuo said.

Enyia said that on a personal level, she finds she's engaging with people on Twitter less often and using the platform more to keep up with news. But in her advocacy work with the Movement for Black Lives, it remains an important tool.

    “For us, its utility is in the fact that it creates more access points to our policy platform, to the issues that we’re advocating on,” she said. “And in that regard, it’s still very, very useful.”

    When Musk first took over Twitter, some organizers and activists flocked to other alternatives, such as Mastodon or Bluesky (an app backed by Twitter co-founder and former CEO Jack Dorsey).

Neither appears to be fulfilling the purpose that Twitter once did, Aoun and others said. Mastodon and Bluesky are decentralized and far less populated, making it more difficult to build community. And while their user numbers are growing, they remain far smaller than Twitter's.

[Photo: The Bluesky app is seen on a phone and laptop in June 2023.]

    In the case of Mastodon, there are privacy and security issues that concern some activists. Because the social network allows users to join different servers run by various groups and individuals, Aoun said “the privacy, security and content moderation is basically as good as the person behind the server.” Twitter – at least before Musk took over – had dedicated privacy and security teams, offering more transparency about how their systems worked.

    Some activists are using popular social networks such as Instagram and TikTok, but the visual nature of those platforms versus the text-based medium of Twitter changes how people are able to interact and engage with each other, Kuo said.

    Twitter has been an incredibly powerful tool for social movements, Enyia said. But ultimately, the platform is just that – a tool.

    “There is no panacea for just the nuts and bolts work that it takes to meet people, to engage people, to organize and talk to people,” Enyia said. “So even if we recognize that social media is a tool, we don’t put all of our eggs in that basket.”

    Social media platforms come and go, and the same could happen to Twitter. So while Enyia’s organization continues to use the platform for its own ends, it’s prepared for a reality in which Twitter is less relevant.

    “We have to stay on top of it to make sure that the tools are serving their purpose as it relates to our work,” Enyia said. “But then we have to be ready to evolve or to move on or to adapt to different tools when it becomes clear that that’s the direction we have to go.”


  • ‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images | CNN Business




New York (CNN) —

    For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities.

    But according to Jeffrey McGregor, the CEO of Truepic, it is “truly the tip of the iceberg of what’s to come.” As he put it, “We’re going to see a lot more AI generated content start to surface on social media, and we’re just not prepared for it.”

    McGregor’s company is working to address this problem. Truepic offers technology that claims to authenticate media at the point of creation through its Truepic Lens. The application captures data including date, time, location and the device used to make the image, and applies a digital signature to verify if the image is organic, or if it has been manipulated or generated by AI.
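Truepic has not published implementation details, but the point-of-capture authentication the article describes can be illustrated with a minimal sketch: hash the image bytes together with the capture metadata (date, time, location, device) and sign the digest, so that any later change to pixels or metadata breaks verification. Everything below is a hypothetical stand-in, not Truepic's actual scheme; real systems would use device-bound asymmetric keys rather than a shared secret.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical; real systems use device-bound asymmetric keys

def sign_capture(image_bytes: bytes, metadata: dict) -> str:
    """Bind capture metadata (date, time, location, device) to the image bytes."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, metadata: dict, signature: str) -> bool:
    """Re-derive the signature; any edit to pixels or metadata invalidates it."""
    expected = sign_capture(image_bytes, metadata)
    return hmac.compare_digest(expected, signature)

image = b"\x89PNG...raw image bytes..."
meta = {"date": "2023-06-06", "device": "demo-camera", "location": "40.7,-74.0"}
sig = sign_capture(image, meta)

assert verify_capture(image, meta, sig)             # untouched capture verifies
assert not verify_capture(image + b"x", meta, sig)  # any manipulation fails
```

The key property, as the article notes, is that verification distinguishes an untouched original from anything manipulated or generated after the fact.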

    Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image generation tools like Dall-E and Midjourney. Now McGregor says the company is seeing interest from “anyone that is making a decision based off of a photo,” from NGOs to media companies to insurance firms looking to confirm a claim is legitimate.

    “When anything can be faked, everything can be fake,” McGregor said. “Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”

    Tech companies like Truepic have been working to combat online misinformation for years, but the rise of a new crop of AI tools that can quickly generate compelling images and written work in response to user prompts has added new urgency to these efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral and AI-generated images of former President Donald Trump getting arrested were widely shared, shortly before he was indicted.

    Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”

    A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

    But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”

    “This is about mitigation, not elimination,” Hany Farid, a digital forensic expert and professor at the University of California, Berkeley, told CNN. “I don’t think it’s a lost cause, but I do think that there’s a lot that has to get done.”

“The hope,” Farid said, is to get to a point where “some teenager in his parents’ basement can’t create an image and swing an election or move the market half a trillion dollars.”

    Companies are broadly taking two approaches to address the issue.

    One tactic relies on developing programs to identify images as AI-generated after they have been produced and shared online; the other focuses on marking an image as real or AI-generated at its conception with a kind of digital signature.

Reality Defender and Hive Moderation are working on the former. With their platforms, users can upload existing images to be scanned and receive an instant breakdown, with a percentage indicating the likelihood that the image is real or AI-generated, based on large amounts of training data.

Reality Defender, which launched before “generative AI” became a buzzword and went through the competitive Silicon Valley tech accelerator Y Combinator, says it uses “proprietary deepfake and generative content fingerprinting technology” to spot AI-generated video, audio and images.

    In an example provided by the company, Reality Defender highlights an image of a Tom Cruise deepfake as 53% “suspicious,” telling the user it has found evidence showing the face was warped, “a common artifact of image manipulation.”

Defending reality could prove to be a lucrative business if the issue becomes a frequent concern for businesses and individuals. These services offer limited free demos as well as paid tiers. Hive Moderation said it charges $1.50 for every 1,000 images, with discounted “annual contract deals.” Reality Defender said its pricing may vary based on various factors, including whether the client needs “any bespoke factors requiring our team’s expertise and assistance.”

    “The risk is doubling every month,” Ben Colman, CEO of Reality Defender, told CNN. “Anybody can do this. You don’t need a PhD in computer science. You don’t need to spin up servers on Amazon. You don’t need to know how to write ransomware. Anybody can do this just by Googling ‘fake face generator.’”

    Kevin Guo, CEO of Hive Moderation, described it as “an arms race.”

    “We have to keep looking at all the new ways that people are creating this content, we have to understand it and add it to our dataset to then classify the future,” Guo told CNN. “Today it’s a small percent of content for sure that’s AI-generated, but I think that’s going to change over the next few years.”

In a different, preventative approach, some larger tech companies are working to integrate a kind of watermark into images to certify media as real or AI-generated when it is first created. The effort has so far largely been driven by the Coalition for Content Provenance and Authenticity, or C2PA.

    The C2PA was founded in 2021 to create a technical standard that certifies the source and history of digital media. It combines efforts by the Adobe-led Content Authenticity Initiative (CAI) and Project Origin, a Microsoft- and BBC-spearheaded initiative that focuses on combating disinformation in digital news. Other companies involved in C2PA include Truepic, Intel and Sony.

    Based on the C2PA’s guidelines, the CAI makes open source tools for companies to create content credentials, or the metadata that contains information about the image. This “allows creators to transparently share the details of how they created an image,” according to the CAI website. “This way, an end user can access context around who, what, and how the picture was changed — then judge for themselves how authentic that image is.”
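Conceptually, a content credential is a structured manifest attached to the file: assertions about who made the image, with what tool, and what edits were applied, plus a hash binding the manifest to the pixels so a viewer can check that it still describes the image in front of them. The field names below are illustrative only; the real manifest structure is defined by the C2PA specification, not reproduced here.

```python
import hashlib
import json

def make_manifest(image_bytes: bytes, creator: str, tool: str, actions: list) -> dict:
    """Build an illustrative provenance manifest bound to the image by hash."""
    return {
        "claim_generator": tool,   # the application that produced the claim
        "creator": creator,        # "who" made the image
        "actions": actions,        # edit history, e.g. ["created", "cropped"]
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }

image = b"...image bytes..."
manifest = make_manifest(image, creator="Jane Doe",
                         tool="demo-editor/1.0", actions=["created"])

# A viewer re-hashes the pixels to confirm the manifest still matches them.
assert manifest["content_hash"] == hashlib.sha256(image).hexdigest()
print(json.dumps(manifest, indent=2))
```

This is the "who, what, and how" context the CAI describes: the end user judges authenticity from the recorded provenance rather than from the pixels alone.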

    “Adobe doesn’t have a revenue center around this. We’re doing it because we think this has to exist,” Andy Parsons, Senior Director at CAI, told CNN. “We think it’s a very important foundational countermeasure against mis- and disinformation.”

    Many companies are already integrating the C2PA standard and CAI tools into their applications. Adobe’s Firefly, an AI image generation tool recently added to Photoshop, follows the standard through the Content Credentials feature. Microsoft also announced that AI art created by Bing Image Creator and Microsoft Designer will carry a cryptographic signature in the coming months.

    Other tech companies like Google appear to be pursuing a playbook that pulls a bit from both approaches.

    In May, Google announced a tool called About this image, offering users the ability to see when images found on its site were originally indexed by Google, where images might have first appeared and where else they can be found online. The tech company also announced that every AI-generated image created by Google will carry a markup in the original file to “give context” if the image is found on another website or platform.

While tech companies are trying to tackle concerns about AI-generated images and the integrity of digital media, experts in the field stress that these businesses will ultimately need to work with each other and the government to address the problem.

    “We’re going to need cooperation from the Twitters of the world and the Facebooks of the world so they start taking this stuff more seriously, and stop promoting the fake stuff and start promoting the real stuff,” said Farid. “There’s a regulatory part that we haven’t talked about. There’s an education part that we haven’t talked about.”

    Parsons agreed. “This is not a single company or a single government or a single individual in academia who can make this possible,” he said. “We need everybody to participate.”

    For now, however, tech companies continue to move forward with pushing more AI tools into the world.


  • Senate Judiciary advances journalism bargaining bill targeting Big Tech | CNN Business




Washington (CNN) —

    The Senate Judiciary Committee advanced legislation on Thursday that would give news organizations the power to jointly bargain against Meta, Google and other online platforms for a greater share of online advertising revenue.

    The legislation would create an antitrust exemption allowing radio and TV broadcasters, as well as small news outlets with fewer than 1,500 employees, to “band together” and arrest the decline of local journalism in cities and states across the country, said its lead co-sponsors, Minnesota Democratic Sen. Amy Klobuchar and Louisiana Republican Sen. John Kennedy.

The concept, a version of which became law in Australia in 2021 and has since been proposed in numerous countries, has been vigorously opposed by tech giants, which in some cases have threatened to pull news content from their platforms over the legislation.

    Meta and Google didn’t immediately respond to a request for comment.

    The measure cleared the committee by a vote of 14-7. But it faces an uncertain future on the Senate floor.

    One member of the committee, California Democratic Sen. Alex Padilla, voted against the bill Thursday and vowed to block any future floor vote on the legislation until lawmakers make several changes.

    Padilla said the legislation doesn’t do enough to ensure that actual journalists in local newsrooms will benefit from the bargaining, as opposed to hedge funds and publication owners. He also raised concerns that the bill as written could allow online platforms such as Google to charge individual internet users each time they attempt to share or click on a link to a news article, a practice Padilla warned would be harmful to the internet.

    “This bill, as written, does nothing to guarantee the protection or pay of the journalists and media workers that we’re claiming to try to protect,” Padilla said. “For us to ignore them while claiming to be fighting for them is absurd.”

    Several other senators echoed Padilla’s remarks on Thursday, including Democratic Sens. Jon Ossoff, Peter Welch and Cory Booker.

    Kennedy and Klobuchar argued that the bill — which had previously passed out of the committee during the last Congress, in 2022 — is urgently necessary in light of the closure of thousands of local newspapers nationwide since the rise of online platforms.

    “We have small towns in all of our states with news organizations that cover everything from what’s happening in the city council to reports of the local high school football and volleyball games to informing citizens that a flood is coming,” Klobuchar said. “That kind of reporting … is being undermined right now because, in a very tough market, these news reporters and news organizations are not getting the share of the revenue that they should get.”

    Kennedy urged colleagues to set aside their other views on tech platforms and news media.

    “This bill is not about whether or not you like social media,” Kennedy said. “This bill is not about whether or not you like what is happening in American news media today. This bill is about creative content. That’s all it’s about. And whether we respect creative content and value it, or whether we do not.”


  • Schumer outlines plan for how Senate will regulate AI | CNN Business





(CNN) —

    Senate Majority Leader Chuck Schumer announced a broad, open-ended plan for regulating artificial intelligence on Wednesday, describing AI as an unprecedented challenge for Congress that effectively has policymakers “starting from scratch.”

The plan, Schumer said in a speech in Washington, will begin with at least nine panels to identify and discuss the hardest questions that AI regulation will have to answer, including how to protect workers, national security and copyright, and how to defend against “doomsday scenarios.” The panels will be composed of experts from industry, academia and civil society, with the first sessions taking place in September, Schumer said.

    The Senate will then turn to committee chairs and other vocal lawmakers on AI legislation to develop bills reflecting the panel discussions, Schumer added, arguing that the resulting US solution could leapfrog existing regulatory proposals from around the world.

    “If we can put this together in a very serious way, I think the rest of the world will follow and we can set the direction of how we ought to go in AI, because I don’t think any of the existing proposals have captured that imagination,” Schumer said, reflecting on other recent proposals such as the European Union’s draft AI Act, which last week was approved by the European Parliament.

    The speech represents Schumer’s most definitive remarks to date on a problem that has dogged Congress for months amid the wide embrace of tools such as ChatGPT: How to catch up, or get ahead, on policymaking for a technology that is already in the hands of millions of people and evolving rapidly.

    In the wake of ChatGPT’s viral success, Silicon Valley has raced to develop and deploy a new crop of generative AI tools that can produce images and writing almost instantly, with the potential to change how people work, shop and interact with each other. But these same tools have also raised concerns for their potential to make factual errors, spread misinformation and perpetuate biases, among other issues.

    In contrast to the fast pace of AI advancements, Schumer has stressed the importance of a deliberate approach, focusing on getting lawmakers acquainted with the basic facts of the technology and the issues it raises before seeking to legislate. He and three other colleagues began last week by convening the first in a series of closed-door briefings on AI for senators that is expected to run through the summer.

    In his remarks Wednesday, Schumer appeared to acknowledge criticism of his pace.

    “I know many of you have spent months calling on us to act,” he said. “I hear you. I hear you loud and clear.”

    But he described AI as a novel issue for which Congress lacks a guide.

    “It’s not like labor, or healthcare, or defense, where Congress has had a long history we can work off of,” he said. “Experts aren’t even sure which questions policymakers should be asking. In many ways, we’re starting from scratch.”

    Schumer described his plan as laying “a foundation for AI policy” that will do “years of work in a matter of months.”

    To guide that process, Schumer expanded on a set of principles he first announced in April. Formally unveiling the framework on Wednesday, Schumer said any legislation on AI should be geared toward facilitating innovation before addressing risks to national security or democratic governance.

    “Innovation first,” Schumer said, “but with security, accountability, [democratic] foundations and explainability.”

    The last two pillars of his framework, Schumer said, may be among the most important, as unrestricted artificial intelligence could undermine electoral processes or make it impossible to critically evaluate an AI’s claims.

    Schumer’s remarks were restrained in calling for any specific proposals. At one point, he acknowledged that a consensus may even emerge that recommends against major government intervention on the technology.

    But he was clear on one point: “We do — we do — need to require companies to develop a system where in simple and understandable terms users understand why the system produced a particular answer, and where that answer came from.”

    The Senate may still be a long way off from unveiling any comprehensive proposal, however. Schumer predicted that the process is likely to take longer than weeks but shorter than years.

    “Months would be the proper timeline,” he said.


  • Google hit with lawsuit alleging it stole data from millions of users to train its AI tools | CNN Business





(CNN) —

    Google was hit with a wide-ranging lawsuit on Tuesday alleging the tech giant scraped data from millions of users without their consent and violated copyright laws in order to train and develop its artificial intelligence products.

    The proposed class action suit against Google, its parent company Alphabet, and Google’s AI subsidiary DeepMind was filed in a federal court in California on Tuesday, and was brought by Clarkson Law Firm. The firm previously filed a similar suit against ChatGPT-maker OpenAI last month. (OpenAI did not previously respond to a request for comment on the suit.)

    The complaint alleges that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans” and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken “virtually the entirety of our digital footprint,” including “creative and copywritten works” to build its AI products.

    Halimah DeLaine Prado, Google’s general counsel, called the claims in the suit “baseless” in a statement to CNN. “We’ve been clear for years that we use data from public sources — like information published to the open web and public datasets — to train the AI models behind services like Google Translate, responsibly and in line with our AI Principles,” DeLaine Prado said.

    “American law supports using public information to create new beneficial uses, and we look forward to refuting these baseless claims,” the statement added.

    Alphabet and DeepMind did not immediately respond to a request for comment.

    The complaint points to a recent update to Google’s privacy policy that explicitly states the company may use publicly accessible information to train its AI models and tools such as Bard.

    In response to an earlier Verge report on the update, the company said its policy “has long been transparent” about this practice and “this latest update simply clarifies that newer services like Bard are also included.”

    The lawsuit comes as a new crop of AI tools have gained tremendous attention in recent months for their ability to generate written work and images in response to user prompts. The large language models underpinning this new technology are able to do this by training on vast troves of online data.

    In the process, however, companies are also drawing mounting legal scrutiny over copyright issues from works swept up in these data sets, as well as their apparent use of personal and possibly sensitive data from everyday users, including data from children, according to the Google lawsuit.

    “Google needs to understand that ‘publicly available’ has never meant free to use for any purpose,” Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. “Our personal information and our data is our property, and it’s valuable, and nobody has the right to just take it and use it for any purpose.”

    The suit is seeking injunctive relief in the form of a temporary freeze on commercial access to and commercial development of Google’s generative AI tools like Bard. It is also seeking unspecified damages and payments as financial compensation to people whose data was allegedly misappropriated by Google. The firm says it has lined up eight plaintiffs, including a minor.

    Giordano contrasted the benefits and alleged harms of how Google typically indexes online data to support its core search engine with the new allegations of it scraping data to train AI tools.

    With its search engine, he said, Google can “serve up an attributed link to your work that can actually drive somebody to purchase it or engage with it.” Data scraping to train AI tools, however, is creating “an alternative version of the work that radically alters the incentives for anybody to need to purchase the work,” Giordano added.

    While some internet users may have grown accustomed to their digital data being collected and used for search results or targeted advertising, the same may not be true for AI training. “People could not have imagined their information would be used this way,” Giordano said.

    Ryan Clarkson, a partner at the law firm, said Google needs to “create an opportunity for folks to opt out” of having their data used for training AI while still maintaining their ability to use the internet for their everyday needs.


  • Microsoft under European antitrust investigation over Teams | CNN Business



    Washington CNN —

    European officials are investigating whether Microsoft’s practice of bundling its Teams software with Office 365 is anticompetitive, the European Commission said Thursday.

    The EU probe follows a formal complaint filed in 2020 by Microsoft’s rival Slack, owned by Salesforce, alleging that Microsoft was illegally stifling competition.

    By packaging Teams together with its “well-entrenched” productivity suite, including apps such as Word and Outlook, Microsoft could be effectively blocking customers from seeking out rival collaboration tools, the Commission said. Antitrust officials are also concerned about interoperability issues between Microsoft’s software and third-party products, it added.

    “These practices may constitute anti-competitive tying or bundling and prevent suppliers of other communication and collaboration tools from competing,” the Commission said in a statement.

    Microsoft said in a statement it is cooperating with the probe.

    “We respect the European Commission’s work on this case and take our own responsibilities very seriously,” said a Microsoft spokesperson. “We will continue to cooperate with the Commission and remain committed to finding solutions that will address its concerns.”

    In a press briefing Thursday, EU spokesperson Arianna Podesta told reporters that “at this stage, possible commitments [by Microsoft to resolve the concerns] are too early to be discussed. We first need to identify indeed if there is a breach of antitrust considerations.”

    The in-depth investigation reflects rising EU antitrust scrutiny of Microsoft, which was last fined for a competition violation in 2013, for failing to honor a commitment to give European consumers a choice of web browsers.

    Slack’s initial EU complaint alleged that Microsoft forces Teams onto millions of customers, “blocking its removal, and hiding the true cost to enterprise customers.”

    A Slack executive at the time argued that Microsoft sells a closed ecosystem of its own products, while Slack provides customers with more freedom to mix and match services.

    “This is a proxy for two very different philosophies for the future of digital ecosystems, gateways versus gatekeepers,” said Slack’s VP of communications and policy, Jonathan Prince.


  • ChatGPT creator pulls AI detection tool due to ‘low rate of accuracy’ | CNN Business




    CNN —

    Less than six months after ChatGPT-creator OpenAI unveiled an AI detection tool with the potential to help teachers and other professionals spot AI-generated work, the company has pulled the feature.

    OpenAI quietly shut down the tool last week, citing a “low rate of accuracy,” according to an update to the original company blog post announcing the feature.

    “We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” the company wrote in the update. OpenAI said it is also committed to helping “users to understand if audio or visual content is AI-generated.”

    The news may renew concerns about whether the companies behind a new crop of generative AI tools are equipped to build safeguards. It also comes as educators prepare for the first full school year with tools like ChatGPT publicly available.

    The sudden rise of ChatGPT quickly raised alarms among some educators late last year over the possibility that it could make it easier than ever for students to cheat on written work. Public schools in New York City and Seattle banned students and teachers from using ChatGPT on the district’s networks and devices. Some educators moved with remarkable speed to rethink their assignments in response to ChatGPT, even as it remained unclear how widespread use of the tool was among students and how harmful it could really be to learning.

    Against that backdrop, OpenAI announced the AI detection tool in February to allow users to check whether an essay was written by a human or by AI. The feature, which worked on English-language text, was powered by a machine learning system that takes an input and classifies it into one of several categories. After a user pasted a body of text, such as a school essay, into the tool, it returned one of five possible verdicts, ranging from “likely generated by AI” to “very unlikely.”

    But even on its launch day, OpenAI admitted the tool was “imperfect” and results should be “taken with a grain of salt.”

    “We really don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times – much like using AI for any kind of assessment purposes,” Lama Ahmad, policy research director at OpenAI, told CNN at the time.

    While the tool might provide another reference point, such as comparing past examples of a student’s work and writing style, Ahmad said “teachers need to be really careful in how they include it in academic dishonesty decisions.”

    Although OpenAI may be shelving its tool for now, there are some alternatives on the market.

    Other companies, such as Turnitin, have also rolled out AI plagiarism detection tools intended to help teachers identify when assignments are written by AI. Meanwhile, Princeton student Edward Tian introduced a similar AI detection tool, called GPTZero.


  • A flashing ‘X’ was installed atop the San Francisco headquarters following Twitter’s rebrand. A city complaint says the sign went up without a permit | CNN Business



    San Francisco CNN —

    In a complaint, the city of San Francisco says it has visited the headquarters of the company formerly known as Twitter twice since Friday regarding the new flashing “X” sign atop the building.

    According to the complaint, a notice of violation (NOV) was issued for work without a permit for the new sign that adorns the building where the social media platform’s headquarters is located.

    Owner Elon Musk rebranded Twitter and its iconic bird logo as X last week, as CNN previously reported. He tweeted video of the building with the new flashing “X” logo on Saturday saying, “Our HQ in San Francisco tonight.”

    “NOV issued for work without permit. Site visited by MH and spoke with Tweeter (sic) representatives and Building maintenance engineer representatives. I explained BID’s complaint investigation process and requested access to roof area. Tweeter (sic) representative decline to provide access but did explain that the structure is a temporary lighted sign for an event. I explained to all representatives that the NOV requires the structure to be remove with a building permit or legalize,” the complaint reads.

    The complaint also noted that on Saturday another attempt was made to access the roof but was also denied.

    Patrick Hannan, a spokesperson for the city’s Department of Building Inspection, told the Washington Post that the city requires a permit to approve new letters or symbols on a building “to ensure consistency with the historic nature of the building and to ensure the new additions are safely attached to the sign,” the paper reported.

    CNN has reached out to the City of San Francisco and X for comment.

    According to the city’s website, a notice of violation can incur fees, including permit and investigation fees. It is unknown what fees X could face.


  • Elon Musk’s Twitter begins purge of blue check marks | CNN Business



    New York CNN —

    Elon Musk’s Twitter on Thursday began a purge of blue verification check marks from users who have not signed up for its subscription service, with the checks disappearing from the accounts of journalists, academics and celebrities.

    The blue checks even disappeared from the accounts of some of the most well-known and widely followed people on the social network, including Kim Kardashian, Beyonce, Pope Francis, former president Donald Trump and Twitter founder Jack Dorsey.

    The initial rollout of the change appeared to be fairly glitchy, as blue checks disappeared and reappeared on some accounts. Some other high-profile legacy verified accounts also didn’t seem to lose their checks, at least at first.

    The change — and its confusing rollout — threatens to create an even greater risk of impersonation of high-profile users and confusion over the veracity of information on the platform.

    Twitter had previously said it would “begin winding down” blue checks granted under its old verification system — which emphasized protecting high-profile users at risk of impersonation — on April 1. In order to stay verified, Musk said, users would have to pay $8 per month to join the platform’s Twitter Blue subscription service, which has allowed accounts to pay for verification since December.

    When that deadline passed, however, Twitter removed the check mark from just a single account, belonging to The New York Times, a publication Musk has repeatedly criticized, and changed the language on its site in a way that obscures why users are verified.

    Last week, Musk tweeted that the “final date for removing legacy Blue checks is 4/20,” a date with special resonance to the billionaire entrepreneur given its meaning to marijuana enthusiasts.

    The decision to move forward with the change, after some confusing messaging, is just the latest example of Musk’s Twitter upending the experience for users — and in this case, not just any users, but many of the most high-profile accounts that have long been a key selling point for the platform.

    Prominent users such as actor William Shatner and anti-bullying activist Monica Lewinsky have previously pushed back against the idea that, as power users who draw attention to the site, they should have to pay for a feature that keeps them safe from impersonation.

    Musk, for his part, has previously presented changes to Twitter’s verification system as a way of “treating everyone equally.”

    “There shouldn’t be a different standard for celebrities,” he said in an earlier tweet. The paid feature could also drive revenue, which could help Musk, who is on the hook for significant debt after buying Twitter for $44 billion.


  • Lyft plans to ‘significantly reduce’ workforce, CEO says | CNN Business




    CNN —

    Lyft (LYFT) plans to “significantly reduce” its workforce, the company’s new CEO David Risher told employees on Friday, in another round of layoffs as it struggles to turn a profit and pull off a turnaround.

    In a company-wide memo, Risher said the cuts were aimed at making Lyft a “faster, flatter company where everyone is closer to our riders and drivers.”

    “I own this decision, and understand that it comes at an enormous cost,” Risher continued. “We’re not just talking about team members; we’re talking about relationships with people who’ve worked (and played) together, sometimes for years.”

    The announcement follows Lyft’s move in November to cut 13% of its workforce, citing fears of a looming recession.

    The Wall Street Journal reported that the latest job cuts would eliminate at least 1,200 positions, or upward of 30% of the company’s staff. A Lyft spokesperson declined to provide details on the extent of the cuts.

    “David has made clear to the company that his focus is on creating a great and affordable experience for riders and improving drivers’ earnings,” the spokesperson said. “To do so requires that we reduce our costs and structure our company so that our leaders are closer to riders and drivers. This is a hard decision and one we’re not making lightly. But the result will be a far stronger, more competitive Lyft.”

    Lyft announced last month that Risher, an Amazon veteran, would take over as CEO in April, and that co-founders Logan Green and John Zimmer will step down from their management positions at the ride-hailing company.

    Risher was the 37th employee of Amazon – a company that has long been the model for the on-demand industry – and he went on to become the e-commerce giant’s first head of product and head of US retail.

    For Lyft and Risher, the current challenges are immense. While Uber diversified its business beyond ride-hailing by delivering meals and grocery items, Lyft never did. That arguably hurt the company earlier in the pandemic when fewer customers were traveling but more were ordering items online.

    Now Uber is showing renewed strength. In its most recent earnings report, Uber said it had its “strongest quarter ever,” reporting a 49% year-over-year increase in revenue. Lyft’s latest earnings report, meanwhile, disappointed Wall Street.

    Lyft shares were up 6% in midday trading Friday, but the company’s stock is down roughly 70% over the past year.

    – CNN’s Catherine Thorbecke contributed to this report.


  • Snapchat rolls out chatbot powered by ChatGPT to all users | CNN Business




    CNN —

    Snapchat is about to give new meaning to the “chat” part of its name.

    Snap, the company behind Snapchat, announced on Wednesday that its customizable My AI chatbot is now accessible to all users within the app. The feature, which is powered by the viral AI chatbot ChatGPT, was previously available only to paying Snapchat+ subscribers.

    The tool offers recommendations, answers questions, helps users make plans and can write a haiku in seconds, according to the company. It can be brought into conversation with friends when it’s mentioned with “@MyAI.” Users can also give it a name and design a custom Bitmoji avatar for it to personalize it more.

    The move comes more than a month after ChatGPT creator OpenAI opened up access to its chatbot to third-party businesses. Snap, Instacart and tutor app Quizlet were among the early partners experimenting with adding ChatGPT.

    Since its public release in November 2022, ChatGPT has stunned many users with its impressive ability to generate original essays, stories and song lyrics in response to user prompts. The initial wave of attention on the tool helped renew an arms race among tech companies to develop and deploy similar AI tools in their products.

    The initial batch of companies tapping into ChatGPT’s functionality each have slightly different visions for how to incorporate it. Taken together, however, these services may test just how useful AI chatbots can really be in our everyday life and how much people want to interact with them for customer service and other uses across their favorite apps.

    Adding ChatGPT features also may come with some risks. The tool, which is trained on vast troves of data online, can spread inaccurate information and has the potential to respond to users in ways they might find inappropriate.

    In a blog post on Wednesday, Snap acknowledged “My AI is far from perfect but we’ve made a lot of progress.”

    It said, for example, that about 99.5% of My AI responses conform to its community guidelines. Snap said it has made changes to “help protect against responses that could be inappropriate or harmful.” The company also said it has added moderation technology and brought the new feature into its in-app parental tools.

    “We will continue to use these early learnings to make AI a more safe, fun, and useful experience, and we’re eager to hear your thoughts,” the company said.


  • AI pioneer quits Google to warn about the technology’s ‘dangers’ | CNN Business



    New York CNN —

    Geoffrey Hinton, who has been called the “Godfather of AI,” confirmed Monday that he left his role at Google last week to speak out about the “dangers” of the technology he helped develop.

    Hinton’s pioneering work on neural networks shaped artificial intelligence systems powering many of today’s products. He worked part-time at Google for a decade on the tech giant’s AI development efforts, but he has since come to have concerns about the technology and his role in advancing it.

    “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times, which was first to report his decision.

    In a tweet Monday, Hinton said he left Google so he could speak freely about the risks of AI, rather than because of a desire to criticize Google specifically.

    “I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton said in a tweet. “Google has acted very responsibly.”

    Jeff Dean, chief scientist at Google, said Hinton “has made foundational breakthroughs in AI” and expressed appreciation for Hinton’s “decade of contributions at Google.”

    “We remain committed to a responsible approach to AI,” Dean said in a statement provided to CNN. “We’re continually learning to understand emerging risks while also innovating boldly.”

    Hinton’s decision to step back from the company and speak out on the technology comes as a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies.

    In March, some prominent figures in tech signed a letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

    In the interview with the Times, Hinton echoed concerns about AI’s potential to eliminate jobs and create a world where many will “not be able to know what is true anymore.” He also pointed to the stunning pace of advancement, far beyond what he and others had anticipated.

    “The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

    Even before stepping aside from Google, Hinton had spoken publicly about AI’s potential to do harm as well as good.

    “I believe that the rapid progress of AI is going to transform society in ways we do not fully understand and not all of the effects are going to be good,” Hinton said in a 2021 commencement address at the Indian Institute of Technology Bombay in Mumbai. He noted how AI will boost healthcare while also creating opportunities for lethal autonomous weapons. “I find this prospect much more immediate and much more terrifying than the prospect of robots taking over, which I think is a very long way off.”

    Hinton isn’t the first Google employee to raise a red flag on AI. In July, the company fired an engineer who claimed an unreleased AI system had become sentient, saying he violated employment and data security policies. Many in the AI community pushed back strongly on the engineer’s assertion.


  • Meta shuts down network of fake accounts that ‘signal a shift’ in China-based influence efforts | CNN Business



    New York CNN —

    Facebook’s parent company Meta announced Wednesday that it has taken down a network of more than 100 China-based accounts that posed as organizations in the US and Europe and pushed pro-Beijing talking points.

    The Facebook and Instagram accounts, which included a fictitious news organization and posed as a think tank, likely used deepfake images developed through artificial intelligence to make the fake accounts appear legitimate, Meta said.

    The network, which had more than 15,000 followers on Meta’s platforms, appears to have had some financial resources behind it. In one instance, the people behind the accounts called for protests in Budapest against George Soros, the billionaire philanthropist and frequent target of right-wing groups, and posted on Twitter an offer to pay people to attend. The accounts also offered to pay freelance writers to contribute to at least one of its websites.

    The accounts were awash with pro-China commentary, including “warnings against boycotting the 2022 Beijing Olympics; allegations of US foreign policy in Africa,” and “claims of comfortable living conditions for Uyghurs in China,” Meta said in its report. The fake accounts also posted “negative commentary about Uyghur activists and critics of the Chinese state,” it said.

    Meta did not link the network to the Chinese government, instead saying it found links to individuals in China associated with a technology company. CNN has reached out to the company for comment. Meta regularly takes down covert influence campaigns and discloses information about them in quarterly reports.

    The takedowns “signal a shift in the nature” of China-based influence networks, as Chinese operatives embrace new tactics like setting up a front company, hiring freelance writers around the world and offering to recruit protesters, Ben Nimmo, Meta’s global threat intelligence lead, told reporters on Tuesday.

    While the networks are generally small and have struggled to build an audience, “they are experimenting with diverse tactics and that’s always something we want to keep an eye on,” Nimmo said. 

    The tactics are similar to those used by Russian operatives during the 2016 US presidential election campaign. Using fake personas and posing as representatives of US political and activist organizations, Russians successfully recruited unwitting Americans to take part in political stunts.

    Chinese operatives have in recent years “evolved their posture” from being concerned about being caught influencing US elections to seeing influence operations as another tool to project power, a US official told CNN.

    “We’re keeping a close eye” on the Chinese influence operations heading into the 2024 election, the official said.

    Indictments from special counsel Robert Mueller’s team in 2018 detailed how disinformation operations from Russia were designed to exacerbate existing divisions in the United States.

    Ahead of the 2022 US midterm election, FBI officials expressed concern that Chinese operatives appeared to be engaging in “Russian-style influence activities” that stoke American divisions. Russian and Chinese government-affiliated operatives and organizations both promoted misinformation about the integrity of American elections that originated in the US during the midterm election season, FBI officials have said. 


  • A key safety executive at TikTok is leaving as lawmakers keep pressure on the app | CNN Business



    New York CNN —

    TikTok is about to lose a key safety executive as the app faces growing pressure from lawmakers and threats of a ban in the United States.

    TikTok’s Head of US Data Security Trust and Safety Eric Han is set to leave the company next week. His departure was confirmed to CNN by TikTok spokesperson Maureen Shanahan. The news was first reported Tuesday by The Verge.

    In the role, which he has held since 2019, Han led policy decisions such as those aimed at reducing the spread of dangerous challenges and cracking down on paid political posts by influencers. The position will be temporarily filled by Andy Bonillo, TikTok’s interim general manager of US data security, until a permanent replacement is found, Shanahan said.

    With the move, TikTok will lose a key safety leader at a difficult moment for the platform. US lawmakers in recent months have ramped up calls for a nationwide ban of the app over concerns that its parent company ByteDance’s connections to China could pose a national security risk to the United States.

    TikTok confirmed in March that federal officials have demanded that the app’s Chinese owners sell their stake in the social media platform, or risk facing a US ban of the app. And last month, Montana lawmakers approved legislation to ban TikTok on personal devices, which would make it the first state to do so, assuming the bill is signed by the state’s governor.

    TikTok CEO Shou Chew testified before Congress in March and attempted to reassure lawmakers about the safety of the app and the security of US users’ data.

    TikTok did not respond to a question about the reason for Han’s departure.


  • Microsoft opens up its AI-powered Bing to all users | CNN Business




    CNN —

    Microsoft is rolling out the new AI-powered version of its Bing search engine to anyone who wants to use it.

    Nearly three months after the company debuted a limited preview version of its new Bing, powered by the viral AI chatbot ChatGPT, Microsoft is opening it up to all users without a waitlist – as long as they’re signed into the search engine via Microsoft’s Edge browser.

    The move highlights Microsoft’s commitment to move forward with the product even as the AI technology behind it has sparked concerns around inaccuracies and tone. In some cases, people who baited the new Bing were subject to some emotionally reactive and aggressive responses.

    “We’re getting better at speed, we’re getting better at accuracy … but we are on a never-ending quest to make things better and better,” Yusuf Mehdi, a VP at Microsoft overseeing its AI initiatives, told CNN on Wednesday.

    Bing now has more than 100 million daily active users, a significant uptick in the past few months, according to Mehdi. Google, which has long dominated the search market, is also adding similar AI features to its own search engine.

    In February, Microsoft showed off how its revamped search engine could write summaries of search results, chat with users to answer additional questions about a query and write emails or other compositions based on the results.

    At a press event in New York City on Wednesday, the company shared an early look at some updates, including the ability to ask questions with pictures, chat history that lets the chatbot remember its rapport with users, and the option to export responses to Microsoft Word. Users can also personalize the tone and style of the chatbot’s responses, choosing anything from a lengthier, creative reply to something shorter and to the point.

    The wave of attention in recent months around ChatGPT, developed by OpenAI with financial backing from Microsoft, helped renew an arms race among tech companies to deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups are also developing AI writing assistants and image generators.

    Beyond adding AI features to search, Microsoft has said it plans to bring ChatGPT technology to its core productivity tools, including Word, Excel and Outlook, with the potential to change the way we work. The decision to add generative AI features to Bing could be particularly risky, however, given how much people rely on search engines for accurate and reliable information.

    Microsoft’s moves also come amid heightened scrutiny on the rapid pace of advancement in AI technology. In March, some of the biggest names in tech, including Elon Musk and Apple co-founder Steve Wozniak, called for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Mehdi said he doesn’t believe the AI industry is moving too fast and suggested the calls for a pause aren’t particularly helpful.

    “Some people think we should pause development for six months but I’m not sure that fixes anything or improves or moves things along,” he said. “But I understand where it’s coming from concern wise.”

    He added: “The only way to really build this technology well is to do it out in the open in the public so we can have conversations about it.”


  • Twitter is adding calls and encrypted messaging | CNN Business



    London
    CNN
     — 

    Twitter is adding encrypted messaging to the platform Wednesday, and calls will follow shortly, CEO Elon Musk tweeted late Tuesday.

    “Release of encrypted DMs [direct messages] V1.0 should happen tomorrow. This will grow in sophistication rapidly. The acid test is that I could not see your DMs even if there was a gun to my head,” he said.

    “Coming soon will be voice and video chat from your handle to anyone on this platform, so you can talk to people anywhere in the world without giving them your phone number.”
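    Musk’s “acid test,” that the operator cannot read users’ messages even under duress, is the defining property of end-to-end encryption: message keys are derived on users’ own devices, so the server only ever relays material that is useless without a private key. As an illustration (not Twitter’s actual design, which had not been published), a textbook Diffie-Hellman exchange in Python shows how two users can agree on a secret that the relaying server never learns:

```python
# Toy illustration of the key exchange behind end-to-end encryption.
# This is NOT Twitter's implementation (which has not been published),
# and the tiny parameters below are NOT secure; they only show why a
# relaying server cannot read the messages it passes along.
import hashlib
import secrets

# Publicly known parameters (textbook-sized; real systems use
# 2048-bit-plus primes or elliptic curves).
P = 23  # a small prime, for illustration only
G = 5   # a generator modulo P

def keypair():
    """Return a private exponent and the public value G^x mod P."""
    private = secrets.randbelow(P - 3) + 2  # range 2 .. P-2
    return private, pow(G, private, P)

# Each user generates a key pair; only the public halves pass
# through the server.
alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Both sides derive the same shared secret from the other's public
# value. The server sees only alice_pub and bob_pub, from which the
# secret cannot be recovered without solving a discrete logarithm.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)
assert alice_secret == bob_secret

# A symmetric key for encrypting the actual messages is then
# derived from the shared secret.
message_key = hashlib.sha256(str(alice_secret).encode()).digest()
```

    Production messengers such as Signal build on the same idea with elliptic-curve exchanges and ratcheting keys, but the core property is the one sketched above: the server handles only public values.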

    The move comes as Musk, who took control of Twitter six months ago, looks for ways to return the platform to growth. Its future looks increasingly uncertain in the face of dwindling advertising revenue and increased competition from rivals such as Mastodon and Bluesky, developed by Twitter co-founder and former CEO Jack Dorsey.

    Adding calls and encrypted messaging could allow Twitter to compete with Mark Zuckerberg’s Meta, which owns Facebook Messenger and WhatsApp. Billions of people around the world use those platforms to communicate daily with family and friends, including in groups. Twitter, meanwhile, reported 238 million monetizable daily users last July.

    Since taking the company private in October, Musk has turned Twitter on its head. A number of users, celebrities and media organizations have said they plan to leave the platform over recent policy changes, which they say threaten to make it less safe and reliable.

    Right-wing TV host Tucker Carlson said Tuesday he would relaunch his program on Twitter, which he praised as the only remaining large free-speech platform in the world after Fox News fired him last month.


  • Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman set to testify before Congress | CNN Business




    CNN
     — 

    OpenAI CEO Sam Altman is set to testify before a Senate Judiciary subcommittee on Tuesday after the viral success of ChatGPT, his company’s chatbot tool, renewed an arms race over artificial intelligence and sparked concerns from some lawmakers about the risks posed by the technology.

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”

    A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    Montgomery is expected to urge Congress to adopt a “precision regulation” approach for AI based on specific use cases, and to suggest that lawmakers push companies to test how their systems handle bias and other concerns – and disclose those results.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and text in response to user prompts.

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    – CNN’s Jennifer Korn contributed to this report.


  • Fertility app fined $200,000 for leaking customers’ health data | CNN Business




    CNN
     — 

    The company behind a popular fertility app has agreed to pay $200,000 in federal and state fines after authorities alleged that it had shared users’ personal health information for years without their consent, including to Google and to two companies based in China.

    The app, known as Premom, will also be banned from sharing personal health information for advertising purposes and must ensure that the data it shared without users’ consent is deleted from third-party systems, according to the Federal Trade Commission, along with the attorneys general of Connecticut, the District of Columbia and Oregon.

    Wednesday’s proposed settlement targeting Premom highlights how regulators have stepped up their scrutiny of fertility trackers and health information in the wake of the US Supreme Court’s decision last year striking down federal protections for abortion.

    The sharing of personal data allegedly affected Premom’s hundreds of thousands of users from at least 2018 until 2020, and violated a federal regulation known as the Health Breach Notification Rule, according to an FTC complaint against Easy Healthcare, Premom’s parent company.

    Premom didn’t immediately respond to a request for comment.

    As part of the alleged violation, Premom collected and shared personally identifiable health information with Google and with a third-party marketing firm in violation of Premom’s own privacy policy, which had promised to share only “non-identifiable data” with others, according to the complaint.

    In addition, Premom allegedly shared location information and device identifiers, such as WiFi network names and hardware IDs, with two China-based data analytics companies, known as Jiguang and Umeng, according to the complaint. That information, the FTC alleged, “could be used to identify Premom’s users and disclose to third parties that these users were utilizing a fertility app.”

    Since the Supreme Court’s decision in Dobbs v. Jackson, a wave of anti-abortion legislation has raised the prospect that fertility apps, search engines and other technology platforms could be forced to hand over user data in potential prosecutions of abortion-seekers.

    “Now more than ever, with reproductive rights under attack across the country, it is essential that the privacy of healthcare decisions is vigorously protected,” said DC Attorney General Brian Schwalb in a statement. “My office will continue to make sure companies protect consumers’ personal information to protect against unlawful encroachment on access to effective reproductive healthcare.”

    Samuel Levine, director of the FTC’s consumer protection bureau, said the agency “will not tolerate health privacy abuses.”

    “Premom broke its promises and compromised consumers’ privacy,” Levine said in a statement. “We will vigorously enforce the Health Breach Notification Rule to defend consumers’ health data from exploitation.”


  • Amazon corporate workers plan walkout next week over return-to-office policies | CNN Business




    CNN
     — 

    Some Amazon corporate workers have announced plans to walk off the job next week over frustrations with the company’s return-to-work policies, among other issues, in a sign of heightened tensions inside the e-commerce giant after multiple rounds of layoffs.

    The work stoppage is being jointly organized by an internal climate justice worker group and a remote work advocacy group, according to an email from organizers and public social media posts.

    Workers participating have two main demands: asking the e-commerce giant to put climate impact at the forefront of its decision making, and to provide greater flexibility for how and where employees work.

    The lunchtime walkout is scheduled for May 31, beginning at noon. In an internal pledge, organizers said they will go through with the walkout only if at least 1,000 workers agree to participate.

    The Washington Post was first to report the planned walkout.

    The collective action from corporate workers comes after Amazon, like other Big Tech companies, cut tens of thousands of jobs beginning late last year amid broader economic uncertainty. All told, Amazon has said this year that it is laying off some 27,000 workers in multiple rounds of cuts.

    At the same time, Amazon and other tech companies are trying to get workers into the office more. In February, Amazon said it was requiring thousands of its workers to be in the office for at least three days per week, starting on May 1.

    “Morale is really at an all-time low right now,” an Amazon corporate worker based in Los Angeles, who plans on participating in the walkout next week, told CNN. “I think the hope from this walkout is really to send a clear message to leadership that we’re expecting real action from them on a number of issues, with the thesis of just, like, we need better long term decision-making that benefits not only employees but the communities that we serve.”

    The worker, who asked not to be named, said organizers are focusing the in-person walkout efforts at the company’s Seattle headquarters but have also created a way for people to participate virtually so “all Amazonians are welcome to participate.”

    One of the internal groups spearheading next week’s walkout is dubbed Amazon Employees for Climate Justice (AECJ), the same coalition that organized protests slamming the company for inaction on climate change back in 2019.

    “Amazon must keep pace with a changing world,” the group wrote in a Twitter thread Tuesday calling for the walkout next week. “To cultivate a diverse, world-class workplace, we need real plans to tackle our climate impact and flexible work options.”

    Amazon’s Climate Pledge, signed in 2019, commits the company to reach net-zero carbon emissions by 2040, among other climate goals. But in the Twitter thread, the group blasted the pledge as “hype” and demanded “a genuine climate plan.”

    Amazon said it has made progress in meeting its goals, including by putting thousands of electric delivery vehicles on the road, and by continuing to invest in both proven and new science-backed solutions for reducing carbon emissions. Amazon also said it had the goal of powering 100% of its operations with renewable energy by 2030, and now expects to meet that goal by 2025.

    “We respect our employees’ rights to express their opinions,” Rob Munoz, an Amazon spokesperson, told CNN in a statement Tuesday.

    In response to employee concerns about the return to office, Munoz said the company has “had a great few weeks with more employees in the office.”

    “There’s been good energy on campus and in urban cores like Seattle where we have a large presence. We’ve heard this from lots of employees and the businesses that surround our offices,” Munoz said. “As it pertains to the specific topics this group of employees is raising, we’ve explained our thinking in different forums over the past few months and will continue to do so.”
