ReportWire

Tag: iab-technology & computing

  • Jack Dorsey no longer thinks Elon Musk is the right person to run Twitter | CNN Business



    Washington
    CNN
     — 

    Former Twitter CEO Jack Dorsey backtracked Saturday on his earlier endorsement of Elon Musk as the right choice to lead the company, speaking out against the billionaire who, for the past six months, has led Twitter through a series of largely self-inflicted crises.

    Asked on Bluesky, Dorsey’s new Twitter-like social media venture, whether he believed Musk has been the best possible steward of Twitter, Dorsey said flatly: “No.”

    Dorsey added that Musk “should have walked away” from acquiring Twitter for $44 billion, and faulted Twitter’s board in hindsight for trying to compel Musk to follow through with the deal despite Musk’s attempts to back out of the purchase last year.

    “It all went south,” Dorsey said. “But it happened and all we can do now is build something to avoid that ever happening again.”

    Twitter, which has cut much of its public relations team under Musk, didn’t immediately respond to a request for comment.

    Under Musk, Twitter has slashed most of its staff, suffered frequent service disruptions and made a number of controversial changes to its policies and features, including a recent decision to remove blue checks from VIP users who don’t pay to be verified.

    Dorsey’s reflections, outlined in Bluesky posts reviewed by CNN, highlight the Twitter founder’s growing disillusionment with Musk. They also come after numerous exchanges in recent months where Dorsey has publicly questioned some of Musk’s decision-making.

    A year ago, Dorsey was quick to heap praise on Musk. When Musk’s deal to purchase Twitter was first announced, Dorsey said that so long as Twitter had to be owned by a single person or company, “Elon is the singular solution I trust.”

    “I trust his mission to extend the light of consciousness,” Dorsey proclaimed at the time.

    Dorsey also rolled over his more than 18 million shares in Twitter (a roughly 2.4% stake) into the new Musk-owned company as an equity investor, rather than receiving a cash payout, according to a securities filing after the deal was completed.

    Now, though, Dorsey appears to believe Musk was an imperfect choice. Confronted by criticism from other Bluesky users that Twitter could have gone in a different direction, Dorsey argued that there was nothing stopping someone else from outbidding Musk.

    “If Elon or anyone wanted to buy the company, all they had to do was name a price that the board felt was better than what the company could do independently,” he said. “This is true for every public company.”

    Asked whether he felt any responsibility for the role he played in the transaction, Dorsey, who served on Twitter’s board at the time, said he was not the only person who authorized the deal and that Twitter’s “only alternative” to Musk was an acquisition by “hedge funds and Wall Street activists.”

    “The company would have never survived as a public company,” Dorsey claimed, adding: “I wish it were different,” but that some of Twitter’s revenue initiatives prior to Musk’s takeover “would not have mattered given market turn.”


  • TV and film writers are fighting to save their jobs from AI. They won’t be the last | CNN Business




    CNN
     — 

    By any standard, John August is a successful screenwriter. He’s written such films as “Big Fish,” “Charlie’s Angels” and “Go.” But even he is concerned about the impact AI could have on his work.

    A powerful new crop of AI tools, trained on vast troves of data online, can now generate essays, song lyrics and other written work in response to user prompts. While there are clearly limits for how well AI tools can produce compelling creative stories, these tools are only getting more advanced, putting writers like August on guard.

    “Screenwriters are concerned about our scripts being the feeder material that is going into these systems to generate other scripts, treatments, and write story ideas,” August, a Writers Guild of America (WGA) committee member, told CNN. “The work that we do can’t be replaced by these systems.”

    August is one of the more than 11,000 members of the WGA who went on strike Tuesday morning, bringing an immediate halt to the production of some television shows and possibly delaying the start of new seasons of others later this year.

    The WGA is demanding a host of changes from the Alliance of Motion Picture and Television Producers (AMPTP), from an increase in pay to clear guidelines around working with streaming services. As part of those demands, the union is also fighting to protect its members’ livelihoods from AI.

    In a proposal published on WGA’s website this week, the labor union said AI should be regulated so it “can’t write or rewrite literary material, can’t be used as source material” and that writers’ work “can’t be used to train AI.”

    August said the AI demand “was one of the last things” added to the WGA list, but that it’s “clearly an issue writers are concerned about” and need to address now rather than when their contract is up again in three years. By then, he said, “it may be too late.”

    WGA said the proposal was rejected by AMPTP, which countered by offering annual meetings to discuss advancements in the technology. August said AMPTP’s response shows they want to keep their options open.

    In a document sent to CNN responding to some of WGA’s asks, AMPTP said it values the work of creatives and “the best stories are original, insightful and often come from people’s own experiences.”

    “AI raises hard, important creative and legal questions for everyone,” it wrote. “Writers want to be able to use this technology as part of their creative process, without changing how credits are determined, which is complicated given AI material can’t be copyrighted. So it’s something that requires a lot more discussion, which we’ve committed to doing.”

    It added that the current WGA agreement defines a “writer” as a “person,” and said “AI-generated material would not be eligible for writing credit.”

    The writers’ attempt at bargaining over AI is perhaps the most high-profile labor battle yet to address concerns about the cutting-edge technology that has captivated the world’s attention in the six months since the public release of ChatGPT.

    Goldman Sachs economists estimate that as many as 300 million full-time jobs globally could be automated in some way by the newest wave of AI. White-collar workers, including those in administrative and legal roles, are expected to be the most affected. And the impact may hit sooner than some think: IBM’s CEO recently suggested AI could eliminate the need for thousands of jobs at his company alone in the next five years.

    David Gunkel, a professor in the department of communication at Northern Illinois University who tracks AI in media and entertainment, said screenwriters want clear guidelines around AI because “they can see the writing on the wall.”

    “AI is already displacing human labor in many other areas of content creation—copywriting, journalism, SEO writing, and so on,” he said. “The WGA is simply trying to get out-in-front of and to protect their members against … ‘technological unemployment.’”

    While film and TV writers in Hollywood may currently be leading the charge, professionals in other industries will almost certainly be paying attention.

    “There’s certainly other industries that need to be paying close attention to this space,” said Rowan Curran, an analyst at Forrester Research who focuses on AI. He noted that digital artists, musicians, engineers, real estate professionals and customer service workers will all feel the impact of generative AI.

    “Watch this #WGA strike carefully,” Justine Bateman, a writer, director and former actress, wrote in a tweet shortly after the strike kicked off. “Understand that our fight is the same fight that is coming to your professional sector next: it’s the devaluing of human effort, skill, and talent in favor of automation and profits.”

    AI has had a place in Hollywood for years. In the 2018 Marvel film “Avengers: Infinity War,” the face of Thanos – a character played by actor Josh Brolin – was created in part with the technology.

    Crowd and battle scenes in films including “The Lord of the Rings” and “The Meg” have utilized AI, and the most recent Indiana Jones film used it to make Harrison Ford’s character appear younger. It has also been used for color correction, finding footage more quickly during post-production and making improvements such as removing scratches and dust from footage.

    But AI in screenwriting is in its infancy. In March, a “South Park” episode called “Deep Learning” was co-written with ChatGPT, and the tool figured prominently in the plot (the characters use ChatGPT to talk to girls and write school papers).

    August said writers are largely willing to play ball with tools, as long as they’re used as launching pads or for research and writers are still credited and utilized throughout the production process.

    “Screenwriters are not Luddites, and we’ve been quick to use new technologies to help us tell our stories,” August said. “We went from typewriters to word processors happily and it increased productivity. … But we don’t need a magical typewriter that types scripts all by itself.”

    Because large language models are trained on text that humans have written before, and find patterns in words and sentences to create responses to prompts, concerns around intellectual property exist, too. “It is entirely possible for a [chatbot] to generate a script in the style of a particular kind of filmmaker or scriptwriter without prior consent of the original artist or the Hollywood studio that holds the IP for that material,” Gunkel said.
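    The pattern-learning idea described above can be illustrated with a toy sketch. This is not how production large language models work (they use neural networks trained on vastly more data); it is only a minimal bigram model showing why the training text shapes what the system can generate, which is the root of the intellectual-property concern:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    # Record, for each word, which words follow it in the training text.
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=8, seed=0):
    # Walk the learned word-to-next-word table to produce new text.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Hypothetical one-line "corpus" standing in for a writer's scripts.
corpus = "the writer writes the script and the script shapes the story"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

    Every word the toy model can emit comes from its training text, which is the screenwriters’ point: a model trained on their scripts is, in a direct sense, built out of their work.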

    For example, one could prompt ChatGPT to generate a zombie apocalypse drama in the style of David Mamet. “Who should get credited for that?” August said. “What happens if we allow a producer or studio executive to come up with a treatment or pitch or something that looks like a screenplay that no writer has touched?”

    For now, the legal landscape remains very much unsettled on the matter, with regulations lagging behind the rapid pace of AI development. In early April, the Biden administration said it is seeking public comments on how to hold artificial intelligence systems like ChatGPT accountable.

    “We can’t protect studios from their own bad choices,” August said. “We can only protect writers from abuses.”

    The strike, and the demands around AI specifically, come at a time when both the writers and the studios are feeling financial pain.

    Many of the businesses represented by AMPTP have seen drops in their stock price, prompting deep cost cutting, including layoffs. The need to manage costs, combined with addressing the fallout from the strike, might only make the companies feel more pressure to turn to AI for scriptwriting.

    “In the short term, this could be an effective way to circumvent the WGA strike, mainly because [large language models], which are considered property and not personnel, can be employed for this task without violating the picket line,” Gunkel said. Such an “experiment” could also show production studios whether it’s possible “to get by with less humans involved,” he said.

    But Joshua Glick, a visiting professor of film and electronic arts at Bard College, believes such a move would be ill-advised.

    “It would be a pretty aggressive and antagonistic move for studios to move forward with AI-generated scripts in terms of getting writers to come to the negotiating table because AI is such a crucial sticking point in the negotiations,” said Glick, who also co-created Deepfake: Unstable Evidence on Screen, an exhibition at the Museum of the Moving Image in New York.

    “At the same time, I think the result of those scripts would be pretty mediocre at best,” he said.

    However the studios react, the issue is unlikely to go away in Hollywood. Film and TV actors’ contracts are up in June, and many are worried about how their faces, bodies and voices will be impacted by AI, August said.

    “As writers, we don’t want tools to replace us but actors have the same concerns with AI, as do directors, editors and everyone else who does creative work in this industry,” he added.


  • The man behind ChatGPT is about to have his moment on Capitol Hill | CNN Business



    New York
    CNN
     — 

    For a few months in 2017, there were rumors that Sam Altman was planning to run for governor of California. Instead, he kept his day job as one of Silicon Valley’s most influential investors and entrepreneurs.

    But now, Altman is about to make a different kind of political debut.

    Altman, the CEO and co-founder of OpenAI, the artificial intelligence company behind viral chatbot ChatGPT and image generator Dall-E, is set to testify before Congress on Tuesday. His appearance is part of a Senate subcommittee hearing on the risks artificial intelligence poses for society, and what safeguards are needed for the technology.

    House lawmakers on both sides of the aisle are also expected to hold a dinner with Altman on Monday night, according to multiple reports. Dozens of lawmakers are said to be planning to attend, with one Republican lawmaker describing it as part of the process for Congress to assess “the extraordinary potential and unprecedented threat that artificial intelligence presents to humanity.”

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    The hearing and meetings come as ChatGPT has sparked a new arms race over AI. A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts. This week’s hearing may only cement his stature as a central player in AI’s rapid growth – and also add to scrutiny of him and his company.

    Those who know Altman have described him as a brilliant thinker, someone who makes prescient bets and has even been called “a startup Yoda.” In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    “If anyone knows where this is going, it’s Sam,” Brian Chesky, the CEO of Airbnb, wrote in a post about Altman for the latter’s inclusion this year on Time’s list of the 100 most influential people. “But Sam also knows that he doesn’t have all the answers. He often says, ‘What do you think? Maybe I’m wrong?’ Thank God someone with so much power has so much humility.”

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    OpenAI declined to make anyone available for an interview for this story.

    The success of ChatGPT may have brought Altman greater public attention, but he has been a well-known figure in Silicon Valley for years.

    Prior to cofounding OpenAI with Musk in 2015, Altman, a Missouri native, studied computer science at Stanford University, only to drop out to launch Loopt, an app that helped users share their locations with friends and get coupons for nearby businesses.

    In 2005, Loopt was part of the first batch of companies at Y Combinator, a prestigious tech accelerator. Paul Graham, who co-founded Y Combinator, later described Altman as “a very unusual guy.”

    “Within about three minutes of meeting him, I remember thinking ‘Ah, so this is what Bill Gates must have been like when he was 19,’” Graham wrote in a post in 2006.

    Loopt was acquired in 2012 for about $43 million. Two years later, Altman took over from Graham as president of Y Combinator. The position connected Altman with numerous powerful figures in the tech industry. He remained at the helm of the accelerator until 2019.

    Margaret O’Mara, a tech historian and professor at the University of Washington, told CNN that Altman “has long been admired as a thoughtful, significant guy and in the remarkably small number of powerful people who are kind of at the top of tech and have a lot of sway.”

    During the Trump administration, Altman gained new attention as a vocal critic of the president. It was against that backdrop that he was rumored to be considering a run for California governor.

    Rather than running, however, Altman instead looked to back candidates who aligned with his values, which include lowering the cost of living, promoting clean energy and redirecting 10% of the defense budget to research and development of future technologies.

    Altman continues to push for some of these goals through his work in the private sector. He invested in Helion, a fusion research company that inked a deal with Microsoft last week to sell clean energy to the tech giant by 2028.

    Altman has also been a proponent of the idea of a universal basic income and has suggested that AI could one day help fulfill that goal by generating so much wealth it could be redistributed back to the public.

    As Graham told The New Yorker about Altman in 2016, “I think his goal is to make the whole future.”

    When launching OpenAI, Musk and Altman’s original mission was to get ahead of the fear that AI could harm people and society.

    “We discussed what is the best thing we can do to ensure the future is good?” Musk told the New York Times about a conversation with Altman and others before launching the company. “We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing A.I. in a way that is safe and is beneficial to humanity.”

    In an interview at the launch of OpenAI, Altman explained the company as his way of trying to steer the path of AI technology. “I sleep better knowing I can have some influence now,” he said.

    If there’s one thing AI enthusiasts and critics can agree on right now, it may be that Altman clearly has succeeded in having some influence over the rapidly evolving technology.

    Less than six months after the release of ChatGPT, it has become a household name, almost synonymous with AI itself. CEOs are using it to draft emails. Realtors are using it to write listings and draft legal documents. The tool has passed exams from law and business schools – and been used to help some students cheat. And OpenAI recently released a more powerful version of the technology underpinning ChatGPT.

    Tech giants like Google and Facebook are now racing to catch up. Similar generative AI technology is quickly finding its way into productivity and search tools used by billions of people.

    A future that once seemed very far off now feels right around the corner, whether society is ready for it or not. Altman himself has professed not to be sure about how it will turn out.

    O’Mara said she believes Altman fits into “the techno-optimist school of thought that has been dominant in the Valley for a very long time,” which she describes as “the idea that we can devise technology that can indeed make the world a better place.”

    While Altman’s cautious remarks about AI may sound at odds with that way of thinking, O’Mara argues it may be an “extension” of it. In essence, she said, it’s related to “the idea that technology is transformative and can be transformative in a positive way but also has so much capacity to do so much that it actually could be dangerous.”

    And if AI should somehow help bring about the end of society as we know it, Altman may be more prepared than most to adapt.

    “I prep for survival,” he said in a 2016 profile of him in the New Yorker, noting several possible disaster scenarios, including “A.I. that attacks us.”

    “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”


  • How Meta got caught in tensions between the US and EU | CNN Business




    CNN
     — 

    Facebook-parent Meta has perhaps become the most high-profile casualty of a long-running privacy dispute between Europe and the United States — but it may not be the last.

    Meta has been fined a record-breaking €1.2 billion ($1.3 billion) by European Union regulators for violating EU privacy laws by transferring the personal data of Facebook users to servers in the United States. Meta said Monday it would appeal the ruling, including the fine.

    The historic fine against Meta — and a potentially game-changing legal order that could force Meta to stop transferring EU users’ data to the United States — isn’t just a one-off decision limited to this one company or its individual business practices. It reflects bigger, unresolved tensions between Europe and the United States over data privacy, government surveillance and regulation of internet platforms.

    Those underlying and fundamental disagreements, which have simmered for years, have now come to a head, casting a significant shadow over thousands of businesses that depend on processing EU data in the United States.

    Beyond its huge economic implications, however, the fine has once again highlighted Europe’s deep mistrust of US surveillance powers — right as the US government is trying to build its own case against foreign-linked apps such as TikTok over similar surveillance concerns.

    The origins of Meta’s fine this week trace back to a 2020 ruling by Europe’s top court.

    In that decision, the European Court of Justice struck down a complex transatlantic framework Meta and many other companies had been relying on until then to legally move EU user data to US servers in the ordinary course of running their businesses.

    That framework, known as Privacy Shield, was itself the outgrowth of European complaints that US authorities didn’t do enough to protect the privacy of EU citizens. At the time Privacy Shield was created, the world was still reeling from disclosures made by National Security Agency leaker Edward Snowden. His disclosures highlighted the vast reach of US surveillance programs such as PRISM, which allowed the NSA to snoop on the electronic communications of foreign nationals as they used tech tools built by Google, Microsoft, and Yahoo, among others.

    PRISM relied on a basic fact of internet architecture: Much of the world’s online communications take place on US-based platforms that route their data through US servers, with few legal protections or recourse for either foreigners or Americans swept up in the tracking.

    A 2013 European Parliament report on the PRISM program captured the EU’s sense of alarm, noting the “very strong implications” for EU citizens.

    “PRISM seems to have allowed an unprecedented scale and depth in intelligence gathering,” the report said, “which goes beyond counter-terrorism and beyond espionage activities carried out by liberal regimes in the past. This may lead towards an illegal form of Total Information Awareness where data of millions of people are subject to collection and manipulation by the NSA.”

    Privacy Shield was a 2016 US-EU agreement designed to address those concerns by making US companies certifiably accountable for their handling of EU user data. For a time, it seemed as if Privacy Shield could be a lasting solution facilitating the growth of the internet and a globally connected society, one in which the free flow of data would not be impeded.

    But when the European Court of Justice invalidated that framework in 2020, it reiterated longstanding surveillance concerns and insisted that Privacy Shield still didn’t provide EU citizens’ personal information the same level of protection in the US that it enjoys in EU countries, a standard required under GDPR, the EU’s signature privacy law.

    The loss of Privacy Shield created enormous uncertainty for the more than 5,300 businesses that rely on the smooth transfer of data across borders. The US government has said transatlantic data flows support more than $7 trillion of economic activity every year between the United States and the European Union. And the US Chamber of Commerce has estimated that transatlantic data transfers account for about half of all data transfers in both the US and the EU.

    The Biden administration has moved to implement a successor to Privacy Shield that contains some changes to US surveillance practices, and if it is fully implemented in time, it could prevent Meta and other companies from having to suspend transatlantic data transfers or some of their European operations.

    But it’s unclear whether those changes will be enough to be accepted by the EU, or whether the new data privacy framework could avoid its own court challenge.

    The possibility that US-EU data transfers may be seriously disrupted is refocusing scrutiny on US surveillance law just as the US government has been sounding its own alarms about Chinese government surveillance.

    US officials have warned that China could seek to use data collected from TikTok or other foreign-linked companies to benefit the country’s intelligence or propaganda campaigns, using the personal information to identify spying targets or to manipulate public opinion through targeted disinformation.

    But US moral authority on the issue risks being eroded by the EU criticism, a problem for the US government that may only be compounded by its own missteps.

    Just last week, a federal court described how the FBI improperly accessed a vast intelligence database meant for surveilling foreign nationals in a bid to gather information on US Capitol rioters and those who protested the 2020 killing of George Floyd.

    The improper access, which was not “reasonably likely” to retrieve foreign intelligence information or evidence of a crime, according to a Justice Department assessment described in the court’s opinion, has only inflamed domestic critics of US surveillance law, and could give ammunition to EU critics.

    The intelligence database at issue was authorized under Section 702 of the Foreign Intelligence Surveillance Act — the same law used to justify the NSA’s PRISM program and which the EU has repeatedly cited as a danger to its citizens and a reason to suspect transatlantic data sharing.

    While the US distinguishes itself from China based on commitments to open and democratic governance, the EU’s concerns about the US are not much different in kind: They come from a place of deep mistrust of broad surveillance authority and suspicions about the potential misuse of user data.

    For years, civil liberties advocates have alleged that Section 702 enables warrantless spying on Americans on an enormous scale. Now, the FBI incident may only further validate EU fears; add to the existing concerns that led to Meta’s fine; contribute to the potential unraveling of the US-EU data relationship; and damage US credibility in its push to warn about the hypothetical risks of letting TikTok data flow to China.

    If a new transatlantic data agreement is delayed or falls apart, Meta won’t be the only company stuck with the bill. Thousands of other companies may get caught in the middle, and the United States will have to hope nobody looks too closely at why while still trying to make a case against TikTok.


  • Twitter’s own lawyers refute Elon Musk’s claim that the ‘Twitter Files’ exposed US government censorship | CNN Business




    CNN
     — 

    For months, Twitter owner Elon Musk and his allies have amplified baseless claims that the US government illegally coerced Twitter into censoring a 2020 New York Post article about Hunter Biden. The foundation for those claims rests on the so-called “Twitter Files,” a series of reports by a set of handpicked journalists who, at Musk’s discretion, were given selective access to historical company archives.

    Now, though, Twitter’s own lawyers are disputing those claims in a case involving former President Donald Trump — forcefully rejecting any suggestion that the Twitter Files show what Musk and many Republicans assert they contain.

    In a court filing last week, Twitter’s attorneys contested one of the central allegations to emerge from the Twitter Files: that regular communications between the FBI and Twitter ahead of the 2020 election amounted to government coercion to censor content or, worse, that Twitter had become an actual arm of the US government.

    In tweets last year, Musk alleged that the communications showed a clear breach of the US constitution.

    “If this isn’t a violation of the Constitution’s First Amendment, what is?” he said of a screenshot purportedly showing Joe Biden’s presidential campaign in 2020 asking Twitter to review several tweets it suggested were violations of the company’s terms. Some of the tweets in question included nonconsensual nude images that violated Twitter’s policies.

    In another push to promote misleading allegations of government malfeasance stemming from the Twitter Files, Musk also claimed that the “government paid Twitter millions of dollars to censor info from the public.”

    Legal experts have said the claim of a constitutional violation is weak because the First Amendment binds the government, not political campaigns, and Trump was president at the time, not Biden. The Twitter Files also show the Trump administration made its own requests for removal of Twitter content. And the payments to Twitter have also been identified as routine reimbursements for responding to subpoenas and investigations, not payments for content moderation decisions.

    “Nothing in the new materials shows any governmental actor compelling or even discussing any content-moderation action with respect to Trump” and others participating in the suit, Twitter argued.

    The communications unearthed as part of the Twitter Files do not show coercion, Twitter’s lawyers wrote, “because they do not contain a specific government demand to remove content—let alone one backed by the threat of government sanction.”

    “Instead,” the filing continued, the communications “show that the [FBI] issued general updates about their efforts to combat foreign interference in the 2020 election.”

    The evidence outlined by Twitter’s lawyers is consistent with public statements by former Twitter employees and the FBI, along with prior CNN analysis of the Twitter Files.

    Altogether, the filing by Musk’s own corporate lawyers represents a step-by-step refutation of some of the most explosive claims to come out of the Twitter Files, some of which Musk himself has promoted.

    Twitter did not immediately respond to a request for comment.

    Even as the filing undercuts Musk’s effort to portray the Twitter Files as a smoking gun, it may still work to his benefit: if successful, it could spare Twitter a costly re-litigation of its handling of Trump’s account and others.

    The communications in question, some of which also came out in a deposition of an FBI agent in a separate case, were invoked last year as part of a bid to revive litigation over Twitter’s banning of Trump following the Jan. 6 attack on the US Capitol. The lawsuit had been dismissed last summer, after the federal judge overseeing the case said there was no evidence of a First Amendment violation.

    Musk’s release of company files has given lawyers for Trump and other plaintiffs in the case another shot. If the court decides the new evidence is enough to suspend the prior judgment, the lawyers for Trump and others said in May, then they might decide to file a fresh amended complaint.

    But Twitter argued last week that the judge should not allow the case to be reopened because nothing in the Twitter Files supports the already dismissed claim of federal coercion.

    Even the FBI’s flagging of specific problematic tweets was merely a suggestion that they might violate Twitter’s terms of service, not a request that they be removed or an implication of retribution if Twitter failed to take the tweets down, Twitter’s lawyers said.

    Citing another case, Twitter wrote: “The FBI’s ‘flags’ cannot amount to coercion because there was ‘no intimation that Twitter would suffer adverse consequences if it refused.’”

    Twitter also objected to the claim, amplified by Musk, that Twitter was paid to censor conservative speech when it sought reimbursement for complying with government requests for user data.

    “The reimbursements were not for responding to requests to remove any accounts or content and thus are wholly irrelevant to Plaintiffs’ joint-action theory,” Twitter wrote.

    It added: “The new materials demonstrate only that Twitter exercised its statutory right—provided to all private actors—to seek reimbursement for time spent processing a government official’s legal requests for information under the Stored Communications Act. The payments therefore do not concern content moderation at all—let alone specific requests to take down content.”

  • US judge temporarily blocks Microsoft acquisition of Activision | CNN Business

    A US judge late on Tuesday granted the Federal Trade Commission’s (FTC) request to temporarily block Microsoft Corp’s acquisition of video game maker Activision Blizzard and set a hearing next week.

    US District Judge Edward Davila scheduled a two-day evidentiary hearing on the FTC’s request for a preliminary injunction for June 22-23 in San Francisco. Without a court order, Microsoft could have closed on the $69 billion deal as early as Friday.

    The FTC, which enforces antitrust law, asked an administrative judge to block the transaction in early December. An evidentiary hearing in the administrative proceeding is set to begin Aug. 2.

    Based on the late-June hearing, the federal court will decide whether a preliminary injunction — which would last during the administrative review of the case — is necessary. The FTC sought the temporary block on Monday.

    Davila said the temporary restraining order issued on Tuesday “is necessary to maintain the status quo while the complaint is pending (and) preserve this court’s ability to order effective relief in the event it determines a preliminary injunction is warranted and preserve the FTC’s ability to obtain an effective permanent remedy in the event that it prevails in its pending administrative proceeding.”

    Microsoft (MSFT) and Activision (ATVI) must submit legal arguments opposing a preliminary injunction by June 16; the FTC must reply on June 20.

    Activision, which said Monday the FTC decision to seek a federal court order was “a welcome update and one that accelerates the legal process,” declined to comment Tuesday.

    Microsoft said Tuesday “accelerating the legal process in the U.S. will ultimately bring more choice and competition to the gaming market. A temporary restraining order makes sense until we can receive a decision from the court, which is moving swiftly.”

    The FTC declined to comment.

    Davila said the bar on closing will remain in place until at least five days after the court rules on the preliminary injunction request.

    The FTC has argued the transaction would give Microsoft’s video game console Xbox exclusive access to Activision games, leaving Nintendo consoles and Sony Group Corp’s PlayStation out in the cold.

    Microsoft’s bid to acquire the “Call of Duty” video game maker was approved by the EU in May, but British competition authorities blocked the takeover in April.

    Microsoft has said the deal would benefit gamers and gaming companies alike, and has offered to sign a legally binding consent decree with the FTC to provide “Call of Duty” games to rivals including Sony for a decade.

    The case reflects the muscular approach to antitrust enforcement taken by the administration of US President Joe Biden.

  • Chinese tech giant Alibaba announces new chairman and CEO succession plan in major shakeup | CNN Business

    Hong Kong (CNN) — 

    Joseph Tsai, executive vice chairman and cofounder of Alibaba Group, will succeed Daniel Zhang as chairman, according to an announcement by the Chinese tech giant on Tuesday.

    This is Alibaba’s second succession in just a few years after founder Jack Ma stepped away in 2019.

    Eddie Wu, chairman of Alibaba’s e-commerce platform Taobao and Tmall Group, will succeed Zhang as chief executive officer and replace him on the company’s board of directors. Both appointments will take effect on September 10, 2023, the company said.

    Following the transition, Zhang will continue to serve as the chairman and CEO of Alibaba’s cloud unit.

    “This is the right time for me to make a transition, given the importance of Alibaba Cloud Intelligence Group as it progresses towards a full spin-off,” Zhang said in the announcement.

    He added that the emergence of generative AI has opened up “exciting new opportunities” for the company’s cloud business.

    Wu, also a cofounder of Alibaba, served as the technology director at the company’s inception in 1999.

    “I am grateful for the trust of the Alibaba Group board of directors and am honored to succeed Daniel as Alibaba’s CEO,” he said.

    “While our current transformation brings in a new corporate organizational and governance structure, Alibaba’s mission remains unchanged.”

    The succession comes just a few months after the internet giant announced its biggest restructuring in its 24-year history.

    The company would split into six separate units, including cloud, e-commerce, logistics, and media and entertainment, according to a company statement in March. Each unit would be overseen by its own CEO and board of directors, and most of them could pursue separate listings or fundraising.

    Zhang was appointed Alibaba’s CEO in May 2015, eight years after he joined the company. On September 10, 2019, he replaced Jack Ma as executive chairman, when Ma retired on his birthday, the company’s 20th anniversary, as he had promised.

    Alibaba is China’s largest e-commerce company, boasting more than 900 million active users annually on its Taobao and Tmall platforms. It also operates the country’s biggest cloud computing and digital payment platforms.

    But the company, along with its co-founder Ma, has been at the center of a sweeping crackdown by Beijing in recent years.

    After Ma criticized Chinese financial regulators in a public speech in late 2020, Beijing called off the blockbuster IPO of Ant Group, the affiliate of Alibaba that owns Alipay, at the last minute. The cancellation marked the start of a regulatory onslaught against the country’s internet industry and the private sector, during which Beijing imposed a record fine of $2.8 billion on Alibaba Group for violating antitrust rules.

    Since then, Ma has largely disappeared from public view and retreated further from his companies. He has reportedly spent more time overseas, including in Hong Kong and Japan, home to his friend and Alibaba investor, SoftBank CEO Masa Son.

    But in March, he made a surprising public appearance in mainland China, days before Alibaba announced its major restructuring plan. His return was a symbolic move and probably a “planned media event” by Beijing intended to appease private sector fears, according to analysts.

    Since then, Ma has shown up in public more frequently, with a more visible focus on researching and teaching. In April, the University of Hong Kong announced that Ma would join its business school for the next three years.

    Last week, Ma gave his first lecture as a visiting professor to the University of Tokyo, according to a statement from the university.

  • China just played a trump card in the chip war. Are more export curbs coming? | CNN Business

    Hong Kong (CNN) — 

    A trade war between China and the United States over the future of semiconductors is escalating.

    Beijing hit back Monday by playing a trump card: It imposed export controls on two strategic raw materials, gallium and germanium, that are critical to the global chipmaking industry.

    “We see this as China’s second, and much bigger, counter measure to the tech war, and likely a response to the potential US tightening of [its] AI chip ban,” said Jefferies analysts. Sanctioning one of America’s biggest memory chipmakers, Micron Technology (MU), in May was the first, they said.

    Here’s what you need to know about gallium and germanium, how they could play into the chip war and whether more countermeasures could be coming.

    Last October, the Biden administration unveiled a set of export controls banning Chinese companies from buying advanced chips and chip-making equipment without a license.

    Chips are vital for everything from smartphones and self-driving cars to advanced computing and weapons manufacturing. US officials have talked about the move as a measure to protect national security interests.

    But it didn’t stop there. For the curbs to be effective, Washington needed other key suppliers, located in the Netherlands and Japan, to join. They did.

    China eventually retaliated. In April, it launched a cybersecurity probe into Micron before banning the company from selling to Chinese companies working on key infrastructure projects. On Monday, Beijing announced the restrictions on gallium and germanium.

    Gallium is a soft, silvery metal and is easy to cut with a knife. It’s commonly used to produce compounds that are key materials in semiconductors and light-emitting diodes.

    Germanium is a hard, grayish-white and brittle metalloid that is used in the production of optical fibers that can transmit light and electronic data.

    The export controls have drawn comparisons with China’s reported attempts in early 2021 to restrict exports of rare earths, a group of 17 elements for which China controls more than half of the global supply.

    Gallium and germanium do not belong to this group of minerals. Like rare earths, they can be expensive to mine or produce.

    This is because they are usually formed as a byproduct of mining more common metals, primarily aluminum, zinc and copper, and processed in countries that produce them.

    China is the world’s leading producer of both gallium and germanium, according to the US Geological Survey. The country accounted for 98% of the global production of gallium, and 68% of the refinery production of germanium.

    “The economies of scale in China’s extensive and increasingly integrated mining and processing operations, along with state subsidies, have allowed it to export processed minerals at a cost that operators elsewhere can’t match, perpetuating the country’s market dominance for many critical commodities,” analysts from Eurasia Group said on Tuesday.

    Shares of Chinese producers of the two raw materials surged by 10% on Tuesday.

    Beyond China, Australian rare earths producers also advanced, as investors expected Beijing might extend export curbs to that group of strategically important minerals. Lynas Rare Earths

    (LYSCF)
    rose 1.5%.

    The United States is dependent on China for these two critical elements. It imported more than 50% of the gallium and germanium it used in 2021 from the country, the US Geological Survey showed.

    Eurasia Group analysts described China’s export controls as a “warning shot.”

    “It is a shot across the bow intended to remind countries including the United States, Japan, and the Netherlands that China has retaliatory options and to thereby deter them from imposing further restrictions on Chinese access to high-end chips and tools,” Eurasia Group said in a research note.

    Chinese authorities may also intend to use their control over these niche metals as a bargaining chip in discussions with US Treasury Secretary Janet Yellen, who is scheduled to visit Beijing later this week.

    Jefferies analysts said the timing of the announcement was unlikely to be a casual decision.

    “It gives the US at least two days to digest and come up with a well-considered response,” they said.

    However, the move is not considered “a death blow” to the United States and its allies.

    China may be the industry leader, but there are alternative producers, as well as available substitutes for both minerals, the Eurasia Group analysts pointed out.

    The United States also imports a fifth of its gallium from the United Kingdom and Germany and buys more than 30% of its germanium from Belgium and Germany.

    More export curbs are definitely possible, a former senior Chinese official has warned.

    The curbs announced this week are “just the start,” Wei Jianguo, a former deputy commerce minister, told the official China Daily on Wednesday, adding China has more tools in its arsenal with which to retaliate.

    “If the high-tech restrictions on China become tougher in the future, China’s countermeasures will also escalate,” he was quoted as saying.

    Analysts believe this too. Rare earths, which are not difficult to find but are complicated to process, are also critical in making semiconductors, and could be the next target.

    “If this action doesn’t change the US-China dynamics, more rare earth export controls should be expected,” Jefferies analysts said.

    However, analysts from Eurasia Group warned that restricting exports is a “double-edged sword.”

    Past attempts by China to leverage its dominance in rare earths have reduced availability and raised prices. Higher prices have spurred greater competition by making mining and processing ventures outside of China more cost-competitive, they said.

    China cut its rare earths export quota in 2010 amid tensions with the United States.

    That resulted in greater efforts by companies outside of the country to produce the metals. US data showed that China’s global market share dropped from 97% in 2010 to about 60% in 2019.

    “Imposing export restrictions risks reducing market dominance,” the Eurasia Group analysts said.

    CNN’s Hanna Ziady and Xiaofei Xu contributed reporting.

  • Google hit with lawsuit alleging it stole data from millions of users to train its AI tools | CNN Business

    (CNN) — 

    Google was hit with a wide-ranging lawsuit on Tuesday alleging the tech giant scraped data from millions of users without their consent and violated copyright laws in order to train and develop its artificial intelligence products.

    The proposed class action suit against Google, its parent company Alphabet, and Google’s AI subsidiary DeepMind was filed in a federal court in California on Tuesday by Clarkson Law Firm. The firm filed a similar suit against ChatGPT-maker OpenAI last month. (OpenAI did not respond to an earlier request for comment on that suit.)

    The complaint alleges that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans” and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken “virtually the entirety of our digital footprint,” including “creative and copywritten works” to build its AI products.

    Halimah DeLaine Prado, Google’s general counsel, called the claims in the suit “baseless” in a statement to CNN. “We’ve been clear for years that we use data from public sources — like information published to the open web and public datasets — to train the AI models behind services like Google Translate, responsibly and in line with our AI Principles,” DeLaine Prado said.

    “American law supports using public information to create new beneficial uses, and we look forward to refuting these baseless claims,” the statement added.

    Alphabet and DeepMind did not immediately respond to a request for comment.

    The complaint points to a recent update to Google’s privacy policy that explicitly states the company may use publicly accessible information to train its AI models and tools such as Bard.

    In response to an earlier Verge report on the update, the company said its policy “has long been transparent” about this practice and “this latest update simply clarifies that newer services like Bard are also included.”

    The lawsuit comes as a new crop of AI tools have gained tremendous attention in recent months for their ability to generate written work and images in response to user prompts. The large language models underpinning this new technology are able to do this by training on vast troves of online data.

    In the process, however, companies are also drawing mounting legal scrutiny over copyright issues from works swept up in these data sets, as well as their apparent use of personal and possibly sensitive data from everyday users, including data from children, according to the Google lawsuit.

    “Google needs to understand that ‘publicly available’ has never meant free to use for any purpose,” Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. “Our personal information and our data is our property, and it’s valuable, and nobody has the right to just take it and use it for any purpose.”

    The suit is seeking injunctive relief in the form of a temporary freeze on commercial access to and commercial development of Google’s generative AI tools like Bard. It is also seeking unspecified damages and payments as financial compensation to people whose data was allegedly misappropriated by Google. The firm says it has lined up eight plaintiffs, including a minor.

    Giordano contrasted the benefits and alleged harms of how Google typically indexes online data to support its core search engine with the new allegations of it scraping data to train AI tools.

    With its search engine, he said, Google can “serve up an attributed link to your work that can actually drive somebody to purchase it or engage with it.” Data scraping to train AI tools, however, is creating “an alternative version of the work that radically alters the incentives for anybody to need to purchase the work,” Giordano added.

    While some internet users may have grown accustomed to their digital data being collected and used for search results or targeted advertising, the same may not be true for AI training. “People could not have imagined their information would be used this way,” Giordano said.

    Ryan Clarkson, a partner at the law firm, said Google needs to “create an opportunity for folks to opt out” of having their data used for training AI while still maintaining their ability to use the internet for their everyday needs.

  • Threads now has ‘tens of millions’ of daily users. But its honeymoon phase may be over | CNN Business

    New York (CNN) — 

    Two weeks after Meta launched its Twitter competitor Threads and received an unprecedented number of user sign-ups, the frenzy around the app appears to have come back down to Earth.

    After surpassing 100 million user sign-ups in less than a week, user engagement on Threads has slowed. Threads daily active users fell from 49 million on July 7, two days after its launch, to 23.6 million users last Friday, according to a report published this week by web traffic analysis firm Similarweb. The app’s average usage time also fell from 21 minutes to 6 minutes over the same timeframe.

    The slowdown hints at the challenges ahead for Meta as it looks to not only draw users away from Twitter but build a service that reaches a far larger audience. Threads is already facing some of the common issues that often plague social media platforms, including user retention, spam and some early regulatory scrutiny around its approach to content moderation. It’s also not clear yet how much Meta’s investments in building Threads will actually amount to financial returns for the company.

    “I’m very optimistic about how the Threads community is coming together,” Meta CEO Mark Zuckerberg said in a post on the platform Monday. “Early growth was off the charts, but more importantly 10s of millions of people now come back daily … The focus for the rest of the year is improving the basics and retention.”

    Meta executives acknowledged in the early days after Threads’ launch that getting users to sign up for a buzzy new app is much easier than convincing them to continue engaging there long-term. That’s likely even more true for Threads, which launched as a relatively bare-bones app in an effort to capitalize on a moment of weakness at Twitter and also tapped into Instagram’s network to ease the sign-in process.

    Threads on Tuesday rolled out its first batch of updates to the iOS version of the app, including a translation button, a tab on users’ activity feed dedicated to showing who’s followed them and the option to subscribe and receive notifications from accounts a user doesn’t follow.

    Instagram head Adam Mosseri, who is overseeing the Threads launch, has also hinted at plans to add features such as a desktop version of the app, a feed of only accounts a user follows and an edit button. “We’re clearly way out over our skis on this,” Mosseri said in a Threads post the week of the app’s launch.

    In the meantime, Threads is grappling with a common social media issue — spam. Users have complained of replies to posts filling up with spammy links and offering “giveaways” in exchange for new followers. And on Monday, Mosseri said in a Threads post that the platform was “going to have to get tighter on things like rate limits” because “spam attacks have picked up.”

    This “is going to mean more unintentionally limiting active people (false positives),” Mosseri warned. “If you get caught up [in] those protections let us know.”

    Meta declined to clarify whether Mosseri’s post refers to limits on users’ ability to post or read content, or to provide any additional details. But the comment did prompt some snark from Twitter owner Elon Musk, after backlash to Twitter’s own rate limits — restrictions on how many tweets users can read — helped propel Threads’ early growth.

    Meta shares have jumped more than 6% since the Threads launch, but some analysts who follow the company are skeptical that Threads will quickly contribute to the company’s bottom line, if at all.

    Threads could be a way for Meta to eke additional engagement time out of its massive existing user base. The app could also ultimately supplement Meta’s core advertising business, which could use a boost after facing challenges from a broad decline in the online ad market and changes to Apple’s app privacy practices.

    Meta executives have said they will likely incorporate advertising into the platform, once its user base has reached critical mass. But even if Threads continues to add users, “advertisers could be hesitant and possibly wait before allocating ad dollars to Threads because of their uncertainty about long-run user retention and engagement,” Morningstar senior equity analyst Ali Mogharabi said in a recent investor note.

    Like Twitter, Threads could also struggle to attract advertisers because the nature of a real-time news and public conversations app means the content is sometimes negative or controversial. Even before Musk took over Twitter and alienated advertisers, the platform represented a tiny piece of the ad sales market compared to Meta’s properties.

    Threads, however, likely has a leg up on Twitter because Meta is known as a company that provides clear value for advertisers, said Scott Kessler, global tech sector lead at research firm Third Bridge. If anything, he said, the risk is that some advertisers may think twice about spending on yet another Meta platform rather than diversifying their ad strategy.

    For now, analysts will be awaiting Meta executives’ commentary about Threads during its quarterly earnings call next week, including to see if they offer any hints about whether ads may be rolled out on the app ahead of the crucial holiday shopping season.

    “They launched this in July,” Kessler said. “That should give them enough time to build out sufficient tools for holiday shopping season advertising.”

  • Pro-Chinese online influence campaign promoted protests in Washington, researchers say | CNN Politics

    (CNN) — 

    A Chinese marketing firm likely organized and promoted protests in Washington last year as part of a wide-ranging pro-Beijing influence campaign, according to new research.

    The Chinese firm also used a network of over 70 fake news websites to promote pro-China content in an example of the more aggressive efforts by pro-China operatives to influence US political debate in recent years, according to security firm Mandiant, which analyzed the activity.

    One of the protests was against a US government ban on goods produced in China’s Xinjiang region, where US officials have accused the Chinese government of systematic repression of the Uyghurs. The other protest was on the sidelines of a June conference on international religious freedom, Mandiant said.

    One of the protests attracted only about a dozen people, but it showed the scope and ambition of the pro-China efforts.

    The hired protesters, who included self-proclaimed musicians and actors in the Washington, DC, area, apparently had no idea they were being enlisted in a pro-China influence campaign, the Mandiant researchers said.

    The campaign backed by the Chinese firm, Shanghai Haixun Technology Co., Ltd., is “intended to sow discord in US society,” Ryan Serabian, a senior analyst at Mandiant, told CNN.

    In both cases, protesters carried placards and chanted slogans about racial discrimination and abortion in the US. Haixun, the Chinese firm, distributed videos of the protesters online to further the influence campaign, according to Mandiant.

    Shanghai Haixun Technology did not respond to a request for comment.

    Liu Pengyu, a spokesperson for the Chinese Embassy in Washington, said he was unaware of the details of the research. “China has always adhered to non-interference in other countries’ internal affairs,” Liu said in an email to CNN.

    The Washington Post first reported on the Mandiant research.

    In the runup to the 2016 US presidential elections, Russian operatives used social media to organize protests on American soil as part of Moscow’s election interference, according to US intelligence officials. Such divisive tactics are no longer confined to the Russians, according to election security experts.

    During the 2022 US midterm elections, pro-China propagandists showed signs of engaging in “Russia-style influence activities” that stoke American divisions, FBI officials told reporters last year. The FBI pointed to Facebook’s shutdown of accounts originating in China that posted memes mocking President Joe Biden and Republican Sen. Marco Rubio of Florida.

  • Elizabeth Warren and Lindsey Graham want a new agency to regulate tech | CNN Business

    Washington (CNN) — 

    Two US senators are calling for the creation of a new federal agency to regulate tech companies such as Amazon, Google and Meta, in the latest push by members of Congress to clamp down on Big Tech.

    Under the proposal released Thursday by Sen. Elizabeth Warren, a Massachusetts Democrat, and Sen. Lindsey Graham, a South Carolina Republican, Congress would establish a new regulatory body with the power to sue platforms — or even force them to stop operating — in response to various potential harms to customers, rivals and the general public, including anticompetitive practices, violations of consumer privacy and the spread of harmful online content.

    The new regulator would have broad jurisdiction, covering not just social media platforms or e-commerce but also the rapidly evolving field of artificial intelligence. The bill targets tech platforms including Amazon, Apple, Google, Meta, Microsoft, TikTok and Twitter, now officially known as X, a Senate aide told CNN, though the companies aren’t directly named in the legislation.

    “For too long, giant tech companies have exploited consumers’ data, invaded Americans’ privacy, threatened our national security, and stomped out competition in our economy,” Warren said in a statement. “This bipartisan bill would create a new tech regulator and it makes clear that reining in Big Tech platforms is a top priority on both sides of the aisle.”

    The push comes after years of stalled attempts to impose new rules on large tech companies and multiple failed efforts to block deals on antitrust grounds. Some AI companies have openly welcomed the creation of a special-purpose AI regulator. Warren and Graham’s legislation, the Digital Consumer Protection Commission Act, would be the first bipartisan bill of its kind, though a similar proposal by Sen. Michael Bennet, a Colorado Democrat, has been circulating since last year. Thursday’s proposal differs from Bennet’s bill, the aide said, in that it is in some ways more specific in its restrictions on the tech industry.

    The new commission would have far-reaching authority under the bill, with the ability to make regulations for the industry, investigate claims of wrongdoing and pursue enforcement actions. For the largest companies under its purview — defined by a mixture of user numbers, revenue figures, market capitalization and other metrics — the commission would issue operating licenses that could be revoked in the case of repeat offenses, according to a copy of the bill text reviewed by CNN.

    “Enough is enough. It’s time to rein in Big Tech,” Graham and Warren wrote in an op-ed in the New York Times Thursday. “And we can’t do it with a law that only nibbles around the edges of the problem. Piecemeal efforts to stop abusive and dangerous practices have failed.”

    The legislation would also ban certain practices outright and direct the new agency to police any violations. For example, companies such as Google would not be able to prioritize their own apps and services at the top of search results or use noncompete agreements to block employees from going to work for a rival startup.

    Companies covered by the legislation would also face restrictions on how they can use Americans’ personal information for targeted advertising, in a privacy-focused move.

    And the legislation seeks to address the type of national security concerns that have been linked to TikTok by forcing “dominant” platforms to be either based in the United States or controlled by US citizens, and by restricting the companies’ ability to store data in certain countries.

    In unveiling the bill, the lawmakers drew parallels between their proposed US agency and other sector-specific regulators such as the Federal Communications Commission, which oversees the telecom and broadcast industries, and the Nuclear Regulatory Commission, which regulates nuclear power.

    But the legislation could also lead to some areas of overlap — for example, with the Federal Trade Commission and the Department of Justice overseeing antitrust issues, as well as with the FTC on consumer protection issues. The Senate aide told CNN that the bill’s intent is for the new tech-focused commission to work together with the FTC and DOJ, and that the legislation ensures both existing agencies will still be able to conduct their own enforcement.


  • Opinion: Utah’s startling new rules for kids and social media | CNN


    Editor’s Note: Kara Alaimo, an associate professor of communication at Fairleigh Dickinson University, writes about issues affecting women and social media. Her book, “Over the Influence: Why Social Media Is Toxic for Women and Girls — And How We Can Reclaim It,” will be published by Alcove Press in 2024. The opinions expressed in this commentary are her own. Read more opinion on CNN.



    CNN
     — 

    Utah’s Republican governor, Spencer Cox, recently signed two bills into law that sharply restrict children’s use of social media platforms. Under the legislation, which takes effect next year, social media companies have to verify the ages of all users in the state, and children under age 18 have to get permission from their parents to have accounts.

    Parents will also be able to access their kids’ accounts, apps won’t be allowed to show children ads, and accounts for kids won’t be able to be used between 10:30 p.m. and 6:30 a.m. without parental permission.

    It’s about time. Social networks in the United States can be incredibly dangerous for children, and parents can no longer protect our kids without the tools and safeguards this law provides. While Cox is correct that these measures won’t be “foolproof,” and what implementing them actually looks like remains an open question, one thing is clear: Congress should follow Utah’s lead and enact a similar law to protect every child in this country.

    One of the most important parts of Utah’s law is the requirement for social networks to verify the ages of users. Right now, most apps ask users their ages without requiring proof. Children can lie and say they’re older to avoid some of the features social media companies have created to protect kids — like TikTok’s new setting that asks 13- to 17-year-olds to enter their passwords after they’ve been online for an hour, as a prompt for them to consider whether they want to spend so much time on the app.

    While critics argue that age verification allows tech companies to collect even more data about users, let’s be real: These companies already have a terrifying amount of intimate information about us. To solve this problem, we need a separate (and comprehensive) data privacy law. But until that happens, this concern shouldn’t stop us from protecting kids.

    One of the key components of this legislation is allowing parents access to their kids’ accounts. By doing this, the law begins to help address one of the biggest dangers kids face online: toxic content. I’m talking about things like the 2,100 pieces of content about suicide, self-harm and depression that 14-year-old Molly Russell in the UK saved, shared or liked in the six months before she killed herself last year.

    I’m also talking about things like the blackout challenge — also called the pass-out or choking challenge — that has gone around social networks. In 2021, four children 12 or younger in four different states all died after trying it.

    “Check out their phones,” urged the father of one of these young victims. “It’s not about privacy — this is their lives.”

    Of course, there are legitimate privacy concerns to worry about here, and just as kids’ use of social media can be deadly, social apps can also be used in healthy ways. LGBTQ children who aren’t accepted in their families or communities, for example, can turn online for support that is good for their mental health. Now, their parents will potentially be able to see this content on their accounts.

    I hope groups that serve children who are questioning their gender and sexual identities and those that work with other vulnerable youth will adapt their online presences to try to serve as resources for educating parents about inclusivity and tolerance, too. This is also a reminder that vulnerable children need better access to mental health services like therapy — they’re way too young to be left to their own devices to seek out the support they need online.

    But, despite these very real privacy concerns, it’s simply too dangerous for parents not to know what our kids are seeing on social media. Just as parents and caregivers supervise our children offline and don’t allow them to go to bars or strip clubs, we have to ensure they don’t end up in unsafe spaces on social media.

    The other huge challenge the Utah law helps parents overcome is the amount of time kids are spending on social media. A 2022 survey by Common Sense Media found that the average 8- to 12-year-old is on social media for 5 hours and 33 minutes per day, while the average 13- to 18-year-old spends 8 hours and 39 minutes every day. That’s more time than a full-time job.
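    The “full-time job” comparison holds up under quick arithmetic. A rough sketch using the survey figures quoted above (the variable names are ours):

```python
# Convert the Common Sense Media daily averages into weekly hours.
minutes_ages_8_12 = 5 * 60 + 33    # 5h33m per day, ages 8-12
minutes_ages_13_18 = 8 * 60 + 39   # 8h39m per day, ages 13-18

weekly_hours_teens = minutes_ages_13_18 * 7 / 60
print(f"Teens: {weekly_hours_teens:.1f} hours/week on social media")
# Well above the 40 hours of a standard full-time work week.
```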

    The American Academy of Pediatrics warns that lack of sleep is associated with serious harms in children — everything from injuries to depression, obesity and diabetes. So parents in the US need to have a way to make sure their kids aren’t up on TikTok all night (parents in China don’t have to worry about this because the Chinese version of TikTok doesn’t allow kids to stay on for more than 40 minutes and isn’t usable overnight).

    Of course, Utah isn’t an authoritarian state like China, so it can’t just turn off kids’ phones. That’s where this new law comes in, requiring social networks to implement these settings. The tougher part of Utah’s law for tech companies to implement will be a provision requiring social apps to ensure they’re not designed to addict kids.

    Social networks are arguably addictive by nature, since they feed on our desires for connection and validation. But hopefully the threat of being sued by children who say they’ve been addicted or otherwise harmed by social networks — an outcome for which this law provides an avenue — will force tech companies to think carefully about how they build their algorithms and features like bottomless feeds that seem practically designed to keep users glued to their screens.

    TikTok and Snap didn’t respond to requests for comment from CNN about Utah’s law, while a representative for Meta, Facebook’s parent company, said the company shares the goal to keep Facebook safe for kids but also wants it to be accessible.

    Of course, if social networks had been more responsible, it probably wouldn’t have come to this. But in the US, tech companies have taken advantage of a lack of rules to build platforms that can be dangerous for our kids.

    States are finally saying no more. In addition to Utah’s measures, California passed a sweeping online safety law last year. Connecticut, Ohio and Arkansas are also considering laws to protect kids by regulating social media. A bill introduced in Texas wouldn’t allow kids to use social media at all.

    There’s nothing innocent about the experiences many kids are having on social media. This law will help Utah’s parents protect their kids. Parents in other states need the same support. Now, it’s time for the federal government to step up and ensure children throughout the country have the same protections as Utah kids.

    Suicide & Crisis Lifeline: Call or text 988. The Lifeline provides 24/7, free and confidential support for people in distress, prevention and crisis resources for you and your loved ones, and best practices for professionals in the United States. En Español: Línea de Prevención del Suicidio y Crisis: 1-888-628-9454.


  • Micron Technology: China probes US chip maker for cybersecurity risks as tech tension escalates | CNN Business



    Hong Kong
    CNN
     — 

    China has launched a cybersecurity probe into Micron Technology, one of America’s largest memory chip makers, in apparent retaliation after US allies in Asia and Europe announced new restrictions on the sale of key technology to Beijing.

    The Cyberspace Administration of China (CAC) will review products sold by Micron in the country, according to a statement by the watchdog late on Friday.

    The move is aimed at “ensuring the security of key information infrastructure supply chains, preventing cybersecurity risks caused by hidden product problems, and maintaining national security,” it noted.

    It came on the same day that Japan, a US ally, said it would restrict the export of advanced chip manufacturing equipment to countries including China, following similar moves by the United States and the Netherlands.

    Washington and its allies have announced curbs on China’s semiconductor industry, which strike at the heart of Beijing’s bid to become a tech superpower.

    Last month, the Netherlands also unveiled new restrictions on overseas sales of semiconductor technology, citing the need to protect national security. In October, the United States banned Chinese companies from buying advanced chips and chipmaking equipment without a license.

    Micron told CNN it was aware of the review.

    “We are in communication with the CAC and are cooperating fully,” it said, adding that it stands by the security of its products.

    Shares in Micron sank 4.4% on Wall Street Friday following the news, the biggest drop in more than three months. Micron derives more than 10% of its revenue from China.

    In an earlier filing, the Idaho-based company had warned of such risks.

    “The Chinese government may restrict us from participating in the China market or may prevent us from competing effectively with Chinese companies,” it said last week.

    China has strongly criticized restrictions on tech exports, saying last month it “firmly opposes” such measures.

    In efforts to boost growth and job creation, Beijing is seeking to woo foreign investments as it grapples with mounting economic challenges. The newly minted premier Li Qiang and several top economic officials have been rolling out the welcome wagon for global CEOs and promising they would “provide a good environment and services.”

    But Beijing has also exerted growing pressure on foreign companies to bring them into line with its agenda.

    Last month, authorities closed the Beijing office of Mintz Group, a US corporate intelligence firm, and detained five local staff.

    Days earlier, they suspended Deloitte’s operations in Beijing for three months and imposed a fine of $31 million over alleged lapses in its work auditing a state-owned distressed debt manager.


  • Academic researchers blast Twitter’s data paywall as ‘outrageously expensive’ | CNN Business



    Washington
    CNN
     — 

    After Twitter announced in February it would begin charging third parties to access its platform data, academic researchers warned that the vaguely worded plan could threaten important studies about how misinformation, harassment and other malicious activity spreads online.

    Now, as Twitter has released more pricing information, many of those same academics are saying their fears were well-founded, complaining that Twitter’s new tiered paywall not only charges “outrageously expensive” prices but that it also restricts the amount of accessible data so heavily that what little researchers can see, even on the most expensive tiers, is not useful for studies at any rigorous level.

    Twitter, which has cut much of its public relations team under CEO Elon Musk, automatically responded to a request for comment with an email containing a poop emoji.

    In an open letter this week, the Coalition for Independent Technology Research — a group representing dozens of researchers and civil society organizations — said free and open access to Twitter data has historically enabled systematic, large-scale research on social media’s role in public health initiatives, foreign propaganda, political discourse, and even the bots and spam that Musk has blamed for ruining Twitter.

    But Twitter’s new tiered access system undercuts all of that, the researchers said. The company’s pricing that launched last week, starting at $100 per month for a “basic” amount of data, does not provide nearly enough volume for users at the low end, while the high end “ranges from $42,000 to $210,000 per month [and] is unaffordable for researchers,” the letter said.

    The new basic tier limits users to reading just 10,000 tweets per month. That represents 0.3% of what researchers used to be able to collect in a single day, the letter said.

    Even under the most expensive “enterprise” tier costing upwards of $2.5 million a year, Twitter is offering only a fraction of the tweets it used to, the letter continued. Before the change, researchers could pay about $500 a month for the ability to access up to 10% of the roughly 1 billion tweets a month that flow across Twitter’s platform.

    Now, though, “the most expensive Enterprise tier would cut that by 80% at about 400 times the price,” the researchers’ letter said.

    Asking researchers to pay orders of magnitude more for a fifth of the access they once had represents a barrier to accountability and transparency, the letter added.

    “Under the new pricing plans, studying the communications and interactions of even a small population—such as the 535 Members of the U.S. Congress or the 705 Members of the European Parliament—will be unfeasible,” the letter said. “The new pricing plans will also end at least 76 long-term efforts, including dashboards, tools, or code packages that support other researchers, journalists, first-responders, educators, and Twitter users.”


  • FTC chair Lina Khan warns AI could ‘turbocharge’ fraud and scams | CNN Business



    Washington
    CNN
     — 

    Artificial intelligence tools such as ChatGPT could lead to a “turbocharging” of consumer harms including fraud and scams, and the US government has substantial authority to crack down on AI-driven consumer harms under existing law, members of the Federal Trade Commission said Tuesday.

    Addressing House lawmakers, FTC chair Lina Khan said the “turbocharging of fraud and scams that could be enabled by these tools are a serious concern.”

    In recent months, a new crop of AI tools has gained attention for the ability to generate convincing emails, stories and essays as well as images, audio and videos. While these tools have the potential to change the way people work and create, some have also raised concerns about how they could be used to deceive by impersonating individuals.

    Even as policymakers across the federal government debate how to promote specific AI rules, citing concerns about possible algorithmic discrimination and privacy issues, companies could still face FTC investigations today under a range of statutes that have been on the books for years, Khan and her fellow commissioners said.

    “Throughout the FTC’s history we have had to adapt our enforcement to changing technology,” said FTC Commissioner Rebecca Slaughter. “Our obligation is to do what we’ve always done, which is to apply the tools we have to these changing technologies … [and] not be scared off by this idea that this is a new, revolutionary technology.”

    FTC Commissioner Alvaro Bedoya said companies cannot escape liability simply by claiming that their algorithms are a black box.

    “Our staff has been consistently saying our unfair and deceptive practices authority applies, our civil rights laws, fair credit, Equal Credit Opportunity Act, those apply,” said Bedoya. “There is law, and companies will need to abide by it.”

    The FTC has previously issued extensive public guidance to AI companies, and the agency last month received a request to investigate OpenAI over claims that the company behind ChatGPT has misled consumers about the tool’s capabilities and limitations.


  • Snapchat rolls out chatbot powered by ChatGPT to all users | CNN Business




    CNN
     — 

    Snapchat is about to give new meaning to the “chat” part of its name.

    Snap, the company behind Snapchat, announced on Wednesday that its customizable My AI chatbot is now accessible to all users within the app. The feature, which is powered by the viral AI chatbot ChatGPT, was previously only available to paying Snapchat+ subscribers.

    The tool offers recommendations, answers questions, helps users make plans and can write a haiku in seconds, according to the company. It can be brought into conversation with friends when it’s mentioned with “@MyAI.” Users can also give it a name and design a custom Bitmoji avatar for it to personalize it more.

    The move comes more than a month after ChatGPT creator OpenAI opened up access to its chatbot to third-party businesses. Snap, Instacart and tutor app Quizlet were among the early partners experimenting with adding ChatGPT.

    Since its public release in November 2022, ChatGPT has stunned many users with its impressive ability to generate original essays, stories and song lyrics in response to user prompts. The initial wave of attention on the tool helped renew an arms race among tech companies to develop and deploy similar AI tools in their products.

    The initial batch of companies tapping into ChatGPT’s functionality each have slightly different visions for how to incorporate it. Taken together, however, these services may test just how useful AI chatbots can really be in our everyday life and how much people want to interact with them for customer service and other uses across their favorite apps.

    Adding ChatGPT features also may come with some risks. The tool, which is trained on vast troves of data online, can spread inaccurate information and has the potential to respond to users in ways they might find inappropriate.

    In a blog post on Wednesday, Snap acknowledged “My AI is far from perfect but we’ve made a lot of progress.”

    It said, for example, about 99.5% of My AI responses conform to its community guidelines. Snap said it has made changes to “help protect against responses that could be inappropriate or harmful.” The company also said it has added moderation technology and included the new feature in its in-app parental tools.

    “We will continue to use these early learnings to make AI a more safe, fun, and useful experience, and we’re eager to hear your thoughts,” the company said.


  • Snapchat’s new AI chatbot is already raising alarms among teens and parents | CNN Business




    CNN
     — 

    Just hours after Snapchat rolled out its My AI chatbot to all users last week, Lyndsi Lee, a mother from East Prairie, Missouri, told her 13-year-old daughter to stay away from the feature.

    “It’s a temporary solution until I know more about it and can set some healthy boundaries and guidelines,” said Lee, who works at a software company. She worries about how My AI presents itself to young users like her daughter on Snapchat.

    The feature is powered by the viral AI chatbot tool ChatGPT – and like ChatGPT, it can offer recommendations, answer questions and converse with users. But Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it, and bring it into conversations with friends.

    The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear you’re talking to a computer.

    “I don’t think I’m prepared to know how to teach my kid how to emotionally separate humans and machines when they essentially look the same from her point of view,” Lee said. “I just think there is a really clear line [Snapchat] is crossing.”

    The new tool is facing backlash not only from parents but also from some Snapchat users who are bombarding the app with bad reviews in the app store and criticisms on social media over privacy concerns, “creepy” exchanges and an inability to remove the feature from their chat feed unless they pay for a premium subscription.

    While some may find value in the tool, the mixed reactions hint at the risks companies face in rolling out new generative AI technology to their products, and particularly in products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow. Almost overnight, Snapchat has forced some families and lawmakers to reckon with questions that may have seemed theoretical only months ago.

    In a letter to the CEOs of Snap and other tech companies last month, weeks after My AI was released to Snap’s subscription customers, Democratic Sen. Michael Bennet raised concerns about the interactions the chatbot was having with younger users. In particular, he cited reports that it can provide kids with suggestions for how to lie to their parents.

    “These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 percent of American teenagers use,” Bennet wrote. “Although Snap concedes My AI is ‘experimental,’ it has nevertheless rushed to enroll American kids and adolescents in its social experiment.”

    In a blog post last week, the company said: “My AI is far from perfect but we’ve made a lot of progress.”

    In the days since its formal launch, Snapchat users have been vocal about their concerns. One user called his interaction “terrifying” after he said it lied about not knowing where the user was located. After the user lightened the conversation, he said the chatbot accurately revealed he lived in Colorado.

    In another TikTok video with more than 1.5 million views, a user named Ariel recorded a song with an intro, chorus and piano chords written by My AI about what it’s like to be a chatbot. When she sent the recorded song back, she said the chatbot denied its involvement with the reply: “I’m sorry, but as an AI language model, I don’t write songs.” Ariel called the exchange “creepy.”

    Other users shared concerns about how the tool understands, interacts with and collects information from photos. “I snapped a picture … and it said ‘nice shoes’ and asked who the people [were] in the photo,” a Snapchat user wrote on Facebook.

    Snapchat told CNN it continues to improve My AI based on community feedback and is working to establish more guardrails to keep its users safe. The company also said that similar to its other tools, users don’t have to interact with My AI if they don’t want to.

    It’s not possible to remove My AI from chat feeds, however, unless a user subscribes to its monthly premium service, Snapchat+. Some teens say they have opted to pay the $3.99 Snapchat+ fee to turn off the tool before promptly canceling the service.

    But not all users dislike the feature.

    One user wrote on Facebook that she’s been asking My AI for homework help. “It gets all of the questions right.” Another noted she’s leaned on it for comfort and advice. “I love my little pocket, bestie!” she wrote. “You can change the Bitmoji [avatar] for it and surprisingly it offers really great advice to some real life situations. … I love the support it gives.”

    ChatGPT, which is trained on vast troves of data online, has previously come under fire for spreading inaccurate information, responding to users in ways they might find inappropriate and enabling students to cheat. But Snapchat’s integration of the tool risks heightening some of these issues, and adding new ones.

    Alexandra Hamlet, a clinical psychologist in New York City, said the parents of some of her patients have expressed concern about how their teenager could interact with Snapchat’s tool. There’s also particular concern around chatbots giving advice about mental health, because AI tools can reinforce someone’s confirmation bias, making it easier for users to seek out interactions that confirm their unhelpful beliefs.

    “If a teen is in a negative mood and does not have the awareness or desire to feel better, they may seek out a conversation with a chatbot that they know will make them feel worse,” she said. “Over time, having interactions like these can erode a teen’s sense of worth, despite their knowing that they are really talking to a bot. In an emotional state of mind, it becomes less possible for an individual to consider this type of logic.”

    For now, the onus is on parents to start meaningful conversations with their teens about best practices for communicating with AI, especially as the tools start to show up in more popular apps and services.

    Sinead Bovell, the founder of WAYE, a startup that helps prepare youth for a future with advanced technologies, said parents need to make it very clear “chatbots are not your friend.”

    “They’re also not your therapists or a trusted adviser, and anyone interacting with them needs to be very cautious, especially teenagers who may be more susceptible to believing what they say,” she said.

    “Parents should be talking to their kids now about how they shouldn’t share anything personal with a chatbot that they would with a friend – even though from a user design perspective, the chatbot exists in the same corner of Snapchat.”

    She added that federal regulation requiring companies to abide by specific protocols is also needed to keep up with the rapid pace of AI advancement.


  • UK citizen extradited to US pleads guilty to 2020 Twitter hack | CNN Business




    Reuters
     — 

    A citizen of the United Kingdom who was extradited to New York from Spain last month has pleaded guilty to cyberstalking and computer hacking schemes, including the 2020 hack of the social media site Twitter, the U.S. Justice Department said on Tuesday.

    Joseph James O’Connor, 23, was charged in both North Dakota and New York. The North Dakota case was transferred to the U.S. District Court for the Southern District of New York.

    O’Connor pleaded guilty to charges including conspiring to commit computer intrusions, to commit wire fraud and to commit money laundering.

    O’Connor, who was extradited to the U.S. on April 26, will also forfeit more than $794,000 and pay restitution to victims, prosecutors said. He faces a maximum of 77 years in prison at sentencing on June 23.

    “O’Connor’s criminal activities were flagrant and malicious, and his conduct impacted multiple people’s lives. He harassed, threatened, and extorted his victims, causing substantial emotional harm,” Assistant Attorney General Kenneth Polite said in a statement.

    Prosecutors said the schemes included gaining unauthorized access to social media accounts on Twitter in July 2020 as well as a TikTok account in August 2020. Along with his co-conspirators, O’Connor stole at least $794,000 worth of cryptocurrency.

    The July 2020 Twitter attack hijacked a variety of verified accounts, including those of then-Democratic presidential candidate Joe Biden and Tesla CEO Elon Musk, who now owns Twitter.

    The accounts of former President Barack Obama, reality TV star Kim Kardashian, Bill Gates, Warren Buffett, Benjamin Netanyahu, Jeff Bezos, Michael Bloomberg and Kanye West were also hit.

    The alleged hacker used the accounts to solicit digital currency, prompting Twitter to prevent some verified accounts from publishing messages for several hours until security could be restored.


  • Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman set to testify before Congress | CNN Business




    CNN
     — 

    OpenAI CEO Sam Altman is set to testify before a Senate Judiciary subcommittee on Tuesday after the viral success of ChatGPT, his company’s chatbot tool, renewed an arms race over artificial intelligence and sparked concerns from some lawmakers about the risks posed by the technology.

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”

    A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    Montgomery is expected to urge Congress to adopt a “precision regulation” approach for AI based on specific use cases, and to suggest that lawmakers push companies to test how their systems handle bias and other concerns – and disclose those results.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts.

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    – CNN’s Jennifer Korn contributed to this report.
