ReportWire

Tag: iab-computing

  • The city without TikTok offers a window to America’s potential future | CNN Business

    Hong Kong (CNN) — 

    Across the United States, more than 150 million people face the possibility of a new reality: life without TikTok.

    The wildly popular short-form video app has been at the center of an ongoing battle, with lawmakers calling for an outright ban, and the company portraying itself as a critical community space, educational platform and just plain fun.

    In Hong Kong, there’s no need to imagine that reality: TikTok discontinued its services there in 2020.

    Its abrupt departure was met with mixed reactions: disappointment from some users and content creators, but also relief from others who say life is better without the app’s infinite scroll.

    At the time of its exit, TikTok had a relatively modest presence in the city and was not ubiquitous like it is in the US today.

    But the varied reactions to its departure, and the way users have pivoted to other platforms or even real-life offline communities, offer Americans a glimpse into their potential TikTok-less future.

    TikTok announced its exit from Hong Kong in July 2020, a week after China imposed a controversial national security law in the city. The decision came as the app tried to distance itself from China and its Beijing-based parent company ByteDance, in the face of growing pressure in the US under the Trump administration.

    But it meant a jarring halt for creators like Shivani Dukhande, who had roughly 45,000 followers at the time the app left Hong Kong.

    Dukhande, 25, saw her account take off in early 2020 during the pandemic, with lifestyle content such as cooking and wellness videos flourishing on the platform.

    “There were a lot of new creators emerging,” she said. “We used to all collaborate together, we had a chat where we would all speak and share ideas and it created a community.”

    Momentum began to build. Companies started reaching out to Dukhande, paying for sponsored content and collaborating on ad campaigns. Brands began partnering with creators on trending “challenges” in a bid to attract young new consumers.

    “More people were joining and it was becoming such a fun thing to do,” she said. “Then, it just kind of went away one morning.”

    “If it continued, then I probably could have made enough to have quit my 9 to 5,” she said. “If I had the chance to grow, it could have been a potential career path.”

    This is one of the main arguments TikTok has made in recent weeks in the US. In March, as the company’s CEO prepared to testify before Congress, TikTok produced a docuseries highlighting American small business owners who rely on the platform for their livelihoods.

    The platform is used by nearly five million businesses in the US, TikTok said in March. And it’s set to surpass rivals: London-based research firm Omdia projected in November that TikTok’s advertising revenues will exceed the combined video ad revenues of Meta – home of Facebook and Instagram – and YouTube by 2027.

    This is partly because people are spending more time on TikTok. In the second quarter of 2022, TikTok users globally spent an average of 95 minutes per day on the app, according to data analytics firm Sensor Tower – nearly twice as much time as users spent on Facebook and Instagram.

    Shivani Dukhande had created videos about wellness, lifestyle, food and Hong Kong on her TikTok account.

    But in Hong Kong, other platforms have jumped in to fill the gap. Reels, Instagram’s short-form video product, which offers TikTok-like features such as an endless scroll, is growing quickly – and Dukhande has gotten on board.

    She had to rebuild her audience from scratch, and now has 12,500 Instagram followers, but she feels optimistic about its growth. Still, the loss of TikTok was a “missed opportunity,” she said, and the burgeoning community of creators has largely faded from sight.

    “The amount of jobs, the amount of content creation, the amount of marketing opportunities that were there with TikTok – we sort of missed out on that whole chunk of it.”

    But for some people, TikTok’s departure was a welcome change.

    Poppy Anderson, 16, has been using TikTok since its launch in 2018. And, like many others in her generation, she would spend hours “scrolling and scrolling” – even when feeling unfulfilled.

    “It was very easy to kind of find exactly what you like on there, because the [algorithm-run] For You page kept you there,” she said. “And it’s entertaining, but you don’t really get anything from it.”

    She described TikTok as often being a toxic environment that breeds narrow thinking, herd mentality, a misguided “cancel culture” and inappropriate online behavior such as critiquing the bodies of girls and women. Even people she knew in real life began acting differently after joining the app, which strained friendships, she said.

    Martin Poon, 15, also grew weary of TikTok, but it was hard to quit.

    “Everyone was using it, so I feel like there was a sense that you have to use it, you have to be on top of things, you have to know what’s going on. And I think that was stressful to me,” he said.

    Misinformation and misogyny ran rampant on TikTok, with accounts like those of Andrew Tate, the self-styled “alpha male” recently detained in Romania on allegations of human trafficking and rape, gaining popularity among boys at Poon’s school.

    “It’s just concerning how [these accounts] have so much impact on the youth, and it has so much grip on what we think and how it affects our behavior,” said Poon – though he added that misinformation is a major problem on all social media platforms, not just TikTok.

    Experts have long worried about the impact of TikTok on young people’s mental health, with one study claiming the app may surface potentially harmful content related to suicide and eating disorders to teenagers within minutes of them creating an account.

    In response to growing pressure, TikTok recently announced a one-hour daily screentime limit for users under 18, though users will be able to turn off this default setting.

    Anderson acknowledged some positives about TikTok, like open conversations about mental health. Still, she was glad when the app became inaccessible. Falling asleep became easier without the lure of TikTok. “I didn’t have the self control to get off it on my own,” she said.

    For Poon and his friend Ava Chan, also 15, TikTok’s disappearance sparked new beginnings.

    When the app left in 2020, they were doing online classes, isolated from friends and bored at home. At the time, Instagram Reels and YouTube Shorts had yet to arrive in Hong Kong.

    “We had to figure out how to use our time other than being on TikTok,” said Chan. “For us, that was exploring our passions more.”

    For both, that meant advocating for the neurodiverse community. They launched a club at school that spreads education and awareness about neurodiversity, and they volunteer with neurodiverse people.

    Both said it lent them a sense of purpose, and as time went on, they saw other benefits.

    Their friends, who would previously spend time filming and watching TikToks together, began having more face-to-face conversations. They noticed peers begin exercising outdoors more, which was made easier as Covid restrictions lifted. Their mental health improved.

    Of course, being teenagers, they’re not off social media entirely and use it as a tool to promote their club – but it’s far from the previous hours of scrolling. And while they occasionally wonder what’s happening on TikTok outside Hong Kong, the allure of it is lost when nobody else around them uses it either.

    “A lot of people, they’ve just kind of forgotten about it,” said Anderson. “People move to different platforms – or just move on.”


  • Pentagon leak spotlights surprising interplay between gaming and military secrets | CNN Politics

    (CNN) — 

    The recent leak of classified US documents on social media platform Discord seemingly caught many at the Pentagon by surprise. But it wasn’t the first time that a forum popular with online gamers had hosted military secrets, underlining a major challenge for the US national security establishment and platforms alike.

    As recently as January 2023, someone on a forum for fans of the video game War Thunder reportedly published confidential information on an F-16 fighter jet. That followed reports of at least three other occasions since 2021 when War Thunder fans posted documents on British, French and Chinese tanks. These cases – which Axios also reported on in the context of the Discord leaks – typically involved users boasting of their inside knowledge of military equipment and claiming to want to make the game more realistic.

    Gaijin Entertainment, the company that produces War Thunder, took the posts down after forum moderators flagged them.

    The recent leaks on Discord exposed a shortcoming in how the US government alerts platforms that they are hosting sensitive or classified information, according to Discord’s top lawyer.

    There is currently “no structured process” for the government to communicate whether documents posted on social media are classified or even authentic, Clint Smith, Discord’s chief legal officer, said in an April 14 statement that described classified military documents as a “significant, complex challenge” for Discord and other platforms.

    The episodes point to vexing challenges for social media platforms like Discord – where 21-year-old Air National Guardsman Jack Teixeira allegedly began posting classified information in December – and the US military, which has used Discord for recruiting.

    Discord and other platforms face a difficult balancing act in giving young gamers the space to be themselves while also detecting when they post illegal content.

    “A lot of these guys find their social circles in these online gaming spaces, and that can be great,” said Jennifer Golbeck, a professor at the University of Maryland’s College of Information Studies. “But if the culture of the platform shifts to rewarding things that you shouldn’t be doing, it can be hard, if you’re really invested in that social group, to give that up.”

    Teixeira allegedly posted the documents – which included sensitive US intelligence on the war in Ukraine – to a private Discord chat in an attempt to look after his online friends and keep them informed, one member of the chatroom has claimed.

    The Pentagon is trying to tap into online youth culture without it backfiring spectacularly, as it allegedly did with Teixeira.

    An Air Force Gaming program that allows service members to compete in video game leagues to “build morale and mental health resiliency,” according to a Pentagon press release, has more than 28,000 members. The top of the Air Force Gaming website includes a link to join the program’s Discord channel.

    There were signs that Pentagon officials were growing wary of information young service members might share on Discord even before news of Teixeira’s alleged leak broke.

    “Don’t post anything in Discord that you wouldn’t want seen by the general public,” reads a pamphlet published by US Army Special Operations Command in March.

    That the warning came as classified documents allegedly shared by Teixeira sat on Discord appears to be entirely a coincidence; many US officials appeared unaware of the leak until news of it broke on April 6.

    “Past incidents show how hard it is to stop these leaks,” said Casey Brooks, an Army veteran and video game fan.

    “This is about maturity and how certain people seek value from interpersonal relationships and approval from peers and the competitive nature that gaming group members bond over,” Brooks told CNN.

    Classified or sensitive documents are also a unique problem for content moderators on social media sites.

    “With porn, you can at least have some kind of AI that will give a rough flag at the beginning that this looks vaguely like porn,” said Golbeck, the University of Maryland professor. “But what looks like a classified document? They’re just documents.”

    As social media platforms like Discord grapple with the challenges of detecting sensitive intelligence leaks online, current and former US officials worry that US adversaries like Russia may see an intelligence gathering opportunity.

    “If it’s not already happening, my guess would be the Russians have assessed that digging around in some of these obscure online forums … could bear fruit,” Holden Triplett, a former FBI official who worked at the US embassy in Moscow, told CNN.

    Though there is no evidence that Teixeira was approached by foreign agents, Triplett said a young generation of online gamers might be a ripe target for recruitment.

    “Ego and excitement have always been strong motivations to spy,” said Triplett, who is founder of security consultancy Trenchcoat Advisors. But the group of Discord users that included Teixeira “seemed particularly indifferent to national security concerns,” which is a vulnerability for the US government, Triplett said.


  • Six months into Elon Musk’s Twitter: The fall of verification and birth of Twitter Blue in one very long chart | CNN Business

    (CNN) — 

    In the six months since Elon Musk completed his acquisition of Twitter, the billionaire has turned the platform on its head by overhauling how it decides which accounts to verify.

    Once given out to authenticate a limited number of accounts from celebrities, government agencies and media organizations, the coveted check mark is now available for purchase through the company’s subscription service, Twitter Blue. The result: more checks and more confusion.

    There were at least 550,000 Twitter Blue subscribers as of April 23, just days after Musk stripped all users of legacy blue checks, according to estimates provided to CNN by Travis Brown, a Berlin-based software developer. By comparison, more than 400,000 accounts were verified with the legacy blue checks before the purge.

    But with Musk gifting the service to some celebrities, it’s unclear how many are actually paying customers. It’s also unclear how much more Twitter can grow subscriptions, which Musk has made central to his plan to boost Twitter’s revenue.

    The change to Twitter’s verification process is just one of many ways Musk has shaken the company to its core after taking the helm of Twitter in October. He eliminated over 80% of its staff and reshaped the site’s policies, drawing criticism for the impact these moves could have on safety and transparency. Many top advertisers have left the platform, and Musk valued it last month at around $20 billion, less than half of what he paid for it.

    But one of Musk’s boldest and biggest changes has been Twitter Blue. Touted as the successor to the old verification system, the subscription model lets anyone pay $8 per month for a blue badge and other features, like prioritized rankings in conversations and search.

    The blowback has been swift. Twitter Blue has stoked chaos and confusion. The program was initially paused only days after its launch when an account impersonating pharmaceutical company Eli Lilly and Company tweeted “insulin is free now,” causing the stock to nosedive.

    More recently, the purge of blue checks has led to a cultural change on the platform. Once a sought-after status symbol, the blue badge is no longer cool in the eyes of many users. Last week, after the blue check began popping up on famous accounts, celebrities such as Lil Nas X and Chrissy Teigen vehemently denied paying for the service.

    Here’s a look back at the rise and fall of Twitter’s blue badge:


  • TV and film writers are fighting to save their jobs from AI. They won’t be the last | CNN Business

    (CNN) — 

    By any standard, John August is a successful screenwriter. He’s written such films as “Big Fish,” “Charlie’s Angels” and “Go.” But even he is concerned about the impact AI could have on his work.

    A powerful new crop of AI tools, trained on vast troves of online data, can now generate essays, song lyrics and other written work in response to user prompts. While there are clear limits to how well AI tools can produce compelling creative stories, these tools are only getting more advanced, putting writers like August on guard.

    “Screenwriters are concerned about our scripts being the feeder material that is going into these systems to generate other scripts, treatments, and write story ideas,” August, a Writers Guild of America (WGA) committee member, told CNN. “The work that we do can’t be replaced by these systems.”

    August is one of the more than 11,000 members of the WGA who went on strike Tuesday morning, bringing an immediate halt to the production of some television shows and possibly delaying the start of new seasons of others later this year.

    WGA is demanding a host of changes from the Alliance of Motion Picture and Television Producers (AMPTP), from an increase in pay to receiving clear guidelines around working with streaming services. But as part of their demands, the WGA is also fighting to protect their livelihoods from AI.

    In a proposal published on WGA’s website this week, the labor union said AI should be regulated so it “can’t write or rewrite literary material, can’t be used as source material” and that writers’ work “can’t be used to train AI.”

    August said the AI demand “was one of the last things” added to the WGA list, but that it’s “clearly an issue writers are concerned about” and need to address now rather than when their contract is up again in three years. By then, he said, “it may be too late.”

    WGA said the proposal was rejected by AMPTP, which countered by offering annual meetings to discuss advancements in the technology. August said AMPTP’s response shows they want to keep their options open.

    In a document sent to CNN responding to some of WGA’s asks, AMPTP said it values the work of creatives and “the best stories are original, insightful and often come from people’s own experiences.”

    “AI raises hard, important creative and legal questions for everyone,” it wrote. “Writers want to be able to use this technology as part of their creative process, without changing how credits are determined, which is complicated given AI material can’t be copyrighted. So it’s something that requires a lot more discussion, which we’ve committed to doing.”

    It added that the current WGA agreement defines a “writer” as a “person,” and said “AI-generated material would not be eligible for writing credit.”

    The writers’ attempt at bargaining over AI is perhaps the most high-profile labor battle yet to address concerns about the cutting-edge technology that has captivated the world’s attention in the six months since the public release of ChatGPT.

    Goldman Sachs economists estimate that as many as 300 million full-time jobs globally could be automated in some way by the newest wave of AI. White-collar workers, including those in administrative and legal roles, are expected to be the most affected. And the impact may hit sooner than some think: IBM’s CEO recently suggested AI could eliminate the need for thousands of jobs at his company alone in the next five years.

    David Gunkel, a professor at the department of communications at Northern Illinois University who tracks AI in media and entertainment, said screenwriters want clear guidelines around AI because “they can see the writing on the wall.”

    “AI is already displacing human labor in many other areas of content creation—copywriting, journalism, SEO writing, and so on,” he said. “The WGA is simply trying to get out-in-front of and to protect their members against … ‘technological unemployment.’”

    While film and TV writers in Hollywood may currently be leading the charge, professionals in other industries will almost certainly be paying attention.

    “There’s certainly other industries that need to be paying close attention to this space,” said Rowan Curran, an analyst at Forrester Research who focuses on AI. He noted that digital artists, musicians, engineers, real estate professionals and customer service workers will all feel the impact of generative AI.

    “Watch this #WGA strike carefully,” Justine Bateman, a writer, director and former actress, wrote in a tweet shortly after the strike kicked off. “Understand that our fight is the same fight that is coming to your professional sector next: it’s the devaluing of human effort, skill, and talent in favor of automation and profits.”

    AI has had a place in Hollywood for years. In Marvel’s 2018 film “Avengers: Infinity War,” the face of Thanos – a character played by actor Josh Brolin – was created in part with the technology.

    Crowd and battle scenes in films including “The Lord of the Rings” and “The Meg” have utilized AI, and the most recent “Indiana Jones” film used it to make Harrison Ford’s character appear younger. It’s also been used for color correction, finding footage more quickly during post-production and making improvements such as removing scratches and dust from footage.

    But AI in screenwriting is in its infancy. In March, a “South Park” episode called “Deep Learning” was co-written by ChatGPT, and the tool featured prominently in the plot (the characters use ChatGPT to talk to girls and write school papers).

    August said writers are largely willing to play ball with tools, as long as they’re used as launching pads or for research and writers are still credited and utilized throughout the production process.

    “Screenwriters are not luddites, and we’ve been quick to use new technologies to help us tell our stories,” August said. “We went from typewriters to word processors happily and it increased productivity. … But we don’t need a magical typewriter that types scripts all by itself.”

    Because large language models are trained on text that humans have written before, and find patterns in words and sentences to create responses to prompts, concerns around intellectual property exist, too. “It is entirely possible for a [chatbot] to generate a script in the style of a particular kind of filmmaker or scriptwriter without prior consent of the original artist or the Hollywood studio that holds the IP for that material,” Gunkel said.

    For example, one could prompt ChatGPT to generate a zombie apocalypse drama in the style of David Mamet. “Who should get credited for that?” August said. “What happens if we allow a producer or studio executive to come up with a treatment or pitch or something that looks like a screenplay that no writer has touched?”

    For now, the legal landscape remains very much unsettled on the matter, with regulations lagging behind the rapid pace of AI development. In early April, the Biden administration said it is seeking public comments on how to hold artificial intelligence systems like ChatGPT accountable.

    “We can’t protect studios from their own bad choices,” August said. “We can only protect writers from abuses.”

    The strike, and the demands around AI specifically, come at a time when both the writers and the studios are feeling financial pain.

    Many of the businesses represented by AMPTP have seen drops in their stock price, prompting deep cost cutting, including layoffs. The need to manage costs, combined with addressing the fallout from the strike, might only make the companies feel more pressure to turn to AI for scriptwriting.

    “In the short term, this could be an effective way to circumvent the WGA strike, mainly because [large language models], which are considered property and not personnel, can be employed for this task without violating the picket line,” Gunkel said. Such an “experiment” could also show production studios whether it’s possible “to get by with less humans involved,” he said.

    But Joshua Glick, a visiting professor of film and electronic arts at Bard College, believes such a move would be ill-advised.

    “It would be a pretty aggressive and antagonistic move for studios to move forward with AI-generated scripts in terms of getting writers to come to the negotiating table because AI is such a crucial sticking point in the negotiations,” said Glick, who also co-created Deepfake: Unstable Evidence on Screen, an exhibition at the Museum of the Moving Image in New York.

    “At the same time, I think the result of those scripts would be pretty mediocre at best,” he said.

    However the studios react, the issue is unlikely to go away in Hollywood. Film and TV actors’ contracts are up in June, and many are worried about how their faces, bodies and voices will be impacted by AI, August said.

    “As writers, we don’t want tools to replace us but actors have the same concerns with AI, as do directors, editors and everyone else who does creative work in this industry,” he added.


  • The man behind ChatGPT is about to have his moment on Capitol Hill | CNN Business

    New York (CNN) — 

    For a few months in 2017, there were rumors that Sam Altman was planning to run for governor of California. Instead, he kept his day job as one of Silicon Valley’s most influential investors and entrepreneurs.

    But now, Altman is about to make a different kind of political debut.

    Altman, the CEO and co-founder of OpenAI, the artificial intelligence company behind viral chatbot ChatGPT and image generator Dall-E, is set to testify before Congress on Tuesday. His appearance is part of a Senate subcommittee hearing on the risks artificial intelligence poses for society, and what safeguards are needed for the technology.

    House lawmakers on both sides of the aisle are also expected to hold a dinner with Altman on Monday night, according to multiple reports. Dozens of lawmakers are said to be planning to attend, with one Republican lawmaker describing it as part of the process for Congress to assess “the extraordinary potential and unprecedented threat that artificial intelligence presents to humanity.”

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    The hearing and meetings come as ChatGPT has sparked a new arms race over AI. A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts. This week’s hearing may only cement his stature as a central player in AI’s rapid growth – and also add to scrutiny of him and his company.

    Those who know Altman have described him as a brilliant thinker, someone who makes prescient bets and has even been called “a startup Yoda.” In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    “If anyone knows where this is going, it’s Sam,” Brian Chesky, the CEO of Airbnb, wrote in a post about Altman for the latter’s inclusion this year on Time’s list of the 100 most influential people. “But Sam also knows that he doesn’t have all the answers. He often says, ‘What do you think? Maybe I’m wrong?’ Thank God someone with so much power has so much humility.”

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    OpenAI declined to make anyone available for an interview for this story.

    The success of ChatGPT may have brought Altman greater public attention, but he has been a well-known figure in Silicon Valley for years.

    Prior to co-founding OpenAI with Musk in 2015, Altman, a Missouri native, studied computer science at Stanford University, only to drop out to launch Loopt, an app that helped users share their locations with friends and get coupons for nearby businesses.

    In 2005, Loopt was part of the first batch of companies at Y Combinator, a prestigious tech accelerator. Paul Graham, who co-founded Y Combinator, later described Altman as “a very unusual guy.”

    “Within about three minutes of meeting him, I remember thinking ‘Ah, so this is what Bill Gates must have been like when he was 19,’” Graham wrote in a post in 2006.

    Loopt was acquired in 2012 for about $43 million. Two years later, Altman took over from Graham as president of Y Combinator. The position connected Altman with numerous powerful figures in the tech industry. He remained at the helm of the accelerator until 2019.

    Margaret O’Mara, a tech historian and professor at the University of Washington, told CNN that Altman “has long been admired as a thoughtful, significant guy and in the remarkably small number of powerful people who are kind of at the top of tech and have a lot of sway.”

    During the Trump administration, Altman gained new attention as a vocal critic of the president. It was against that backdrop that he was rumored to be considering a run for California governor.

    Rather than running, however, Altman instead looked to back candidates who aligned with his values, which include a lower cost of living, clean energy and redirecting 10% of the defense budget to research and development of future technologies.

    Altman continues to push for some of these goals through his work in the private sector. He invested in Helion, a fusion research company that inked a deal with Microsoft last week to sell clean energy to the tech giant by 2028.

    Altman has also been a proponent of the idea of a universal basic income and has suggested that AI could one day help fulfill that goal by generating so much wealth it could be redistributed back to the public.

    As Graham told The New Yorker about Altman in 2016, “I think his goal is to make the whole future.”

    When launching OpenAI, Musk and Altman’s original mission was to get ahead of the fear that AI could harm people and society.

    “We discussed what is the best thing we can do to ensure the future is good?” Musk told the New York Times about a conversation with Altman and others before launching the company. “We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing A.I. in a way that is safe and is beneficial to humanity.”

    In an interview at the launch of OpenAI, Altman explained the company as his way of trying to steer the path of AI technology. “I sleep better knowing I can have some influence now,” he said.

    If there’s one thing AI enthusiasts and critics can agree on right now, it’s that Altman has succeeded in having some influence over the rapidly evolving technology.

    Less than six months after the release of ChatGPT, it has become a household name, almost synonymous with AI itself. CEOs are using it to draft emails. Realtors are using it to write listings and draft legal documents. The tool has passed exams from law and business schools – and been used to help some students cheat. And OpenAI recently released a more powerful version of the technology underpinning ChatGPT.

    Tech giants like Google and Facebook are now racing to catch up. Similar generative AI technology is quickly finding its way into productivity and search tools used by billions of people.

    A future that once seemed very far off now feels right around the corner, whether society is ready for it or not. Altman himself has professed not to be sure about how it will turn out.

    O’Mara said she believes Altman fits into “the techno-optimist school of thought that has been dominant in the Valley for a very long time,” which she describes as “the idea that we can devise technology that can indeed make the world a better place.”

    While Altman’s cautious remarks about AI may sound at odds with that way of thinking, O’Mara argues it may be an “extension” of it. In essence, she said, it’s related to “the idea that technology is transformative and can be transformative in a positive way but also has so much capacity to do so much that it actually could be dangerous.”

    And if AI should somehow help bring about the end of society as we know it, Altman may be more prepared than most to adapt.

    “I prep for survival,” he said in a 2016 profile of him in the New Yorker, noting several possible disaster scenarios, including “A.I. that attacks us.”

    “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”


  • Exclusive: Senior US general ordered Twitter announcement of drone strike on al Qaeda leader that may have instead killed civilian | CNN Politics




    CNN
     — 

    The senior general in charge of US forces in the Middle East ordered that his command announce on Twitter that a senior al Qaeda leader had been targeted by an American drone strike in Syria earlier this month – despite not yet having confirmation of who was actually killed in the strike, according to multiple defense officials.

    Nearly three weeks later, US Central Command still does not know whether a civilian died instead, officials said. CENTCOM did not open a review of the incident, officially known as a civilian-casualty credibility assessment report, until May 15 – twelve days after the strike. That review is ongoing.

    One defense official with direct knowledge of the situation told CNN that some of CENTCOM Commander Gen. Erik Kurilla’s subordinates urged him to hold off on the tweet until there was more clarity on who was actually killed.

    Two other officials denied that, and said they were not aware of any staffers voicing consternation or disagreement with the announcement.

    Either way, the statement ultimately posted to Twitter from the official CENTCOM Twitter account did not identify the supposed senior al Qaeda leader, raising more questions about what had occurred.

    “At 11:42 am local Syrian time on 3 May, US Central Command Forces conducted a unilateral strike in Northwest Syria targeting a senior Al Qaeda leader,” the tweet read. “We will provide more information as operational details become available.”

    The tweet has not been taken down and CENTCOM has not tweeted about the strike again.

    The episode raises questions about how thoroughly CENTCOM has implemented the military’s civilian harm mitigation policy – a process for preventing, mitigating and responding to civilian casualties caused by US military operations.

    The policy was developed in 2022 after a botched US drone strike in Kabul killed 10 civilians in August 2021.

    Pentagon spokesman Brig. Gen. Pat Ryder said on Tuesday that Defense Secretary Lloyd Austin is “absolutely” confident in the Defense Department’s civilian harm mitigation efforts.

    “In terms of CENTCOM’s strike, as you know, they conducted that strike on the third of May. They are investigating the allegations of civilian casualties,” Ryder said at a Pentagon news briefing. “So, you know, I think our record speaks for itself in terms of how seriously we take these. Very few countries around the world do that. The secretary has complete confidence that we will continue to abide by the policies that we put into place.”

    CENTCOM acknowledged last week following a Washington Post report questioning the strike that the operation may have resulted in a civilian casualty and said in a statement that it was “investigating” the incident. The civilian casualty review was not launched until a week after the Post began presenting information to CENTCOM suggesting that the strike had killed a civilian.

    CENTCOM still has not opened a formal investigation into the strike, known as a 15-6 investigation, defense officials told CNN. The officials said the civilian casualty review first needs to determine that a noncombatant was indeed killed in the strike. Then, a commander needs to decide that there are other unanswered questions remaining about the operation that require a more thorough investigation. A 15-6 investigation was launched less than a week after the errant Kabul strike.

    Defense officials told CNN that in the immediate aftermath of the strike, Kurilla and his staff had high confidence that they had killed the senior al-Qaeda leader, though they declined to say why they were so convinced. But they also knew it would likely take a few days to confirm the person’s identity definitively. The US has no military footprint in northwest Syria, an area still recovering from the effects of a devastating earthquake.

    But as the days passed, CENTCOM still could not determine the identity of who they had killed. Some defense officials considered that a red flag, they told CNN.

    By May 8, CENTCOM still had not confirmed the person’s identity, and began receiving information from the Washington Post that raised questions about whether a civilian had been killed, defense officials said. The Post’s information led CENTCOM to open a review into the strike, and whether it had killed a civilian, on May 15.

    There is still some disagreement within the administration about the identity of the person killed, defense officials told CNN. Some intelligence officials continue to believe that the target of the strike was a member of al-Qaeda, even if he wasn’t a senior leader. But there is a growing belief inside the Pentagon that the man – identified by his family as Loutfi Hassan Mesto, a 56-year-old father of ten – was a farmer with no ties to terrorism.

    Mesto’s family told CNN that he had been out grazing his sheep when he was killed. Loutfi never left his village during the Syrian uprisings and did not support any political faction, his brother said.

    Mohamed Sajee, a distant relative living in Qurqaniya, also told CNN that Loutfi was never known to be in favor or against the Syrian regime.

    “It’s impossible that he was with al Qaeda, he doesn’t even have a beard,” he said.

    The Syrian Civil Defense, also known as the White Helmets, told CNN they arrived on the scene of the strike after being contacted on their local emergency number.

    “The team noticed only one crater caused by the missile, which was next to the man’s body,” the White Helmets said, also confirming that the man had been grazing his sheep.

    “When the team arrived, his wife, neighbors, and other people were at the location,” the group added.

    The White Helmets tweeted on May 3 that they had recovered the body of Mesto, who they described as “a civilian aged 60” who was killed in a missile strike while grazing sheep. CENTCOM was aware of the White Helmets’ tweet, officials said, but the group’s information was not considered solid enough yet to open a review.

    The May 3 incident bears a stunning similarity to another CENTCOM operation: a US drone strike in Kabul during the closing days of the withdrawal from Afghanistan, which killed 10 Afghan civilians, including seven children. The Pentagon initially claimed it had eliminated an ISIS-K threat and defended the operation for weeks, with Joint Chiefs Chair Gen. Mark Milley going as far as to call it a “righteous” strike in a Pentagon briefing two days later.

    A suicide bombing at Kabul’s international airport three days earlier, which killed 13 US service members, had added pressure on CENTCOM to act against any potential threats, and officials believed at the time that another attack was imminent.

    Austin ultimately decided no one would be punished over the botched operation, even as he instructed Central Command and Special Operations Command to improve policies and procedures to prevent civilian harm more effectively.

    Austin committed to adjusting Defense Department policies to better protect civilians, even establishing a civilian protection center of excellence in 2022.

    “Leaders in this department should be held to account for high standards of conduct and leadership,” Austin said at the time.


  • Twitter’s head of trust and safety says she has resigned | CNN Business


    Twitter’s head of trust and safety Ella Irwin told Reuters on Thursday that she has resigned from the social media company.

    In the role, Irwin oversaw content moderation, but the company has faced criticism for lax protections against harmful content since billionaire Elon Musk acquired it in October.

    Irwin’s departure also comes as the platform has struggled to retain advertisers, as brands have been wary of appearing next to unsuitable content.

    Musk announced earlier this month that he hired Linda Yaccarino, former NBCUniversal advertising chief, to become Twitter’s new CEO.

    Fortune earlier reported that Irwin’s internal Slack account appeared to be deactivated.


  • What the chaos at Twitter means for the future of social movements | CNN Business


    Editor’s Note: The CNN Original Series “The 2010s” looks back at a turbulent era marked by extraordinary political and social upheaval. New episodes air at 9 p.m. ET/PT Sundays.



    CNN
     — 

    When thousands of Egyptians marched through the streets during the Arab Spring of 2011, they had a tool at their disposal that earlier social movements didn’t: Twitter.

    A key group of activists used the platform to form networks and organize protests against the authoritarian regime, while many more demonstrators used it to disseminate information and images from the ground for the rest of the world to see. Months later, organizers from the Occupy Wall Street movement took to Twitter to coordinate protests in New York and beyond.

    Twitter fostered public conversation around the Black Lives Matter movement after the 2014 police killing of Michael Brown in Ferguson, Missouri, and again after the 2020 police killing of George Floyd. It amplified #MeToo in the aftermath of the sexual assault allegations against Hollywood producer Harvey Weinstein, and catapulted other revolutionary movements around the world to global attention.

    “You can’t underestimate the impact of Twitter to social movements,” Amara Enyia, manager of policy and research for the Movement for Black Lives, told CNN.

    Twitter has often been heralded as a democratizing force, bringing previously marginalized voices to the forefront and giving the public a platform to demand accountability from leaders. (It has also enabled the spread of misinformation, extremist ideas and abusive content.)

    But since Elon Musk acquired Twitter last year and the platform plunged into chaos, some organizers and digital media experts have been bracing for the impact that his controversial policy changes and mass layoffs may have on social movements going forward.

    Though Twitter has often been referred to as a public square, some of Musk’s recent moves challenge that description.

    Through Twitter, organizers and political groups have had a level of direct access to policymakers and leaders that wouldn’t have been possible in person, said Rachel Kuo, an assistant professor of media and cinema studies at the University of Illinois, Urbana-Champaign. Verified activists were able to promote certain messages that the algorithm then pushed to the top of users’ feeds, organizers could launch campaigns that caught the attention of high-profile figures and the public could follow along for real-time updates.

    “There are now issues in how people see Twitter as a source of information and a source of political community,” said Kuo, whose research focuses on race, social movements and digital technologies. “It isn’t seen in the same way anymore.”

    Elon Musk's controversial policy changes at Twitter could have implications for social movements, some activists say.

    Musk upended traditional Twitter verification and turned it into a pay-for-play system, leading to the impersonation of government accounts and the spread of fake images. For organizers who opt not to pay the monthly subscription fee for a blue check, that also means a loss of credibility and visibility, Kuo added.

    Twitter, which has cut much of its public relations team under Musk, did not respond to a request for comment.

    Twitter’s role in information-sharing has been disrupted in other ways, too.

    The platform has been plagued by technical glitches after mass layoffs and departures at the company, frustrating many users. People have also reported that the “for you” timeline is showing them content they aren’t interested in.

    As a result of these issues and others, some are leaving Twitter altogether – more than 32 million users are projected to exit the platform in the two years following Musk’s takeover, according to a December 2022 forecast from the market research agency Insider Intelligence. (Twitter reported having 238 million monetizable daily active users last year before Musk acquired it.)

    With fewer people on Twitter, the platform becomes less centralized and the information landscape more fractured, said Sarah Aoun, a privacy and security researcher who works on cybersecurity for the Movement for Black Lives. That makes it harder for activists to connect, exchange tactics and build solidarity in the way they once did.

    Protesters in Cairo gather in Tahrir Square in November 2011.

    Musk’s approach to content moderation has also made Twitter a more hostile environment, Aoun said. Twitter has never been a completely safe space for marginalized voices – women, people of color, LGBTQ people and other vulnerable groups have long been targets of online harassment and abuse – but reports from the Center for Countering Digital Hate and Anti-Defamation League indicate an increase in hate speech on the platform under Musk’s leadership. (Musk has previously pushed back at that characterization by focusing on a different metric.)

    Some are also disillusioned over Musk’s decision to reinstate users who were previously suspended for violating the platform’s rules, including former President Donald Trump and GOP Rep. Marjorie Taylor Greene.

    “The lack of verification, the mass exodus, the inability to coordinate the way that we used to be able to coordinate and the content moderation (gutting) makes it a very difficult platform to be on at the moment,” Aoun said.

    Musk has stepped back as Twitter’s CEO, a role now held by former NBCUniversal marketing executive Linda Yaccarino. But he will maintain significant control over the platform as the company’s owner, executive chairman and chief technology officer.

    The changes at Twitter have prompted some activists and organizers to reassess their relationships with the platform.

    Rich Wallace, executive director of the Chicago-based organization Equity and Transformation (EAT), said he previously saw robust engagement on tweets about social injustice or racial inequity, whether from people who agreed with him or not. Now, he finds that substantive posts barely get traction compared with tweets he considers more mundane.

    Wallace said his organization, which seeks to build social and economic equity for Black workers in the informal economy, still shares information about community events on Twitter, but the potential to find new allies or engage in meaningful conversation on the platform is largely a thing of the past.

    Twitter is no longer the space for education and community building that it once was, Wallace said. It’s a shift in how he once viewed the platform, but he isn’t especially concerned. For his organization, it simply means a re-emphasis on the grassroots, in-person work they were already doing.

    People raise their fists in June 2020 as they protest the police killing of George Floyd.

    “As organizers, we’ve been creative in how we organize around barriers,” he said. “This is just one of the newer barriers that we have to assess and organize through.”

    As Kuo sees it, the ways that the changes at Twitter will affect organizing and activism will vary widely. Hyperlocal community organizers or those who work with populations that don’t speak English aren’t typically using Twitter in their day-to-day work, and so the recent shifts likely won’t affect them drastically. But she predicts that mid-to-large nonprofit organizations with communications staff might be rethinking their strategy on the platform.

    “It’s very dependent on organizational structure, form, strategies for change and political vision,” Kuo said.

    Enyia said that on a personal level, she finds she’s engaging with people on Twitter less often, instead using the platform mainly to keep up with news. But in her advocacy work with the Movement for Black Lives, it remains an important tool.

    “For us, its utility is in the fact that it creates more access points to our policy platform, to the issues that we’re advocating on,” she said. “And in that regard, it’s still very, very useful.”

    When Musk first took over Twitter, some organizers and activists flocked to other alternatives, such as Mastodon or Bluesky (an app backed by Twitter co-founder and former CEO Jack Dorsey).

    Neither appears to be fulfilling the same purpose that Twitter once did, Aoun and others said. Mastodon and Bluesky are decentralized and fewer people are using them, making it more difficult to build community. And while their user numbers are growing, they’re still far smaller than Twitter’s.

    The Bluesky app is seen on a phone and laptop in June 2023.

    In the case of Mastodon, there are privacy and security issues that concern some activists. Because the social network allows users to join different servers run by various groups and individuals, Aoun said “the privacy, security and content moderation is basically as good as the person behind the server.” Twitter – at least before Musk took over – had dedicated privacy and security teams, offering more transparency about how their systems worked.

    Some activists are using popular social networks such as Instagram and TikTok, but the visual nature of those platforms versus the text-based medium of Twitter changes how people are able to interact and engage with each other, Kuo said.

    Twitter has been an incredibly powerful tool for social movements, Enyia said. But ultimately, the platform is just that – a tool.

    “There is no panacea for just the nuts and bolts work that it takes to meet people, to engage people, to organize and talk to people,” Enyia said. “So even if we recognize that social media is a tool, we don’t put all of our eggs in that basket.”

    Social media platforms come and go, and the same could happen to Twitter. So while Enyia’s organization continues to use the platform for its own ends, it’s prepared for a reality in which Twitter is less relevant.

    “We have to stay on top of it to make sure that the tools are serving their purpose as it relates to our work,” Enyia said. “But then we have to be ready to evolve or to move on or to adapt to different tools when it becomes clear that that’s the direction we have to go.”


  • First on CNN: New bipartisan bill in Senate could address TikTok security concerns without a ban | CNN Business




    CNN
     — 

    Five US senators are set to reintroduce legislation Wednesday that would block companies including TikTok from transferring Americans’ personal data to countries such as China, as part of a proposed broadening of US export controls.

    The bipartisan bill led by Oregon Democratic Sen. Ron Wyden and Wyoming Republican Sen. Cynthia Lummis would, for the first time, subject exports of US data to the same type of licensing requirements that govern the sale of military and advanced technologies. It would apply to thousands of companies that rely on routinely transferring data from the United States to other jurisdictions, including data brokers and social media companies.

    The legislation comes amid a flurry of proposals to regulate how TikTok and other companies may handle the sensitive and valuable data of Americans — not just their names, email addresses and phone numbers but also potentially their behavioral data such as location information, search and browsing histories and personal interests.

    “Massive pools of Americans’ sensitive information — everything from where we go, to what we buy and what kind of health care services we receive — are for sale to buyers in China, Russia and nearly anyone with a credit card,” Wyden said in a statement. “Our bipartisan bill would turn off the tap of data to unfriendly nations, stop TikTok from sending Americans’ personal information to China, and allow nations with strong privacy protections to strengthen their relationships.”

    Lawmakers have scrutinized TikTok, in particular, for its ties to China through its parent company, ByteDance. Much of the existing legislation addressing TikTok at the federal and state level has focused on bans of the app. But Wyden’s bill subjecting US data to export licensing could address the issue without wading into the thorny legal issues surrounding a potential ban, an aide said, and simultaneously avoid giving broad new powers to the executive branch.

    Wednesday’s legislation, known as the Protecting Americans’ Data From Foreign Surveillance Act, does not identify TikTok by name. Instead, it directs the Commerce Department to maintain lists of countries that are considered trustworthy and untrustworthy for the purposes of receiving US data.

    There would be no restrictions applied to personal information transferred to trustworthy states, and no restrictions on individual internet users’ own transfers of their personal data, but companies seeking to transfer Americans’ personal information to countries outside of the trustworthy list would be required to apply for a license. Transfers to countries on the untrustworthy list would be automatically prohibited unless companies could prove they have a valid reason for a transfer, according to a copy of the bill text reviewed by CNN.

    Factors the Commerce Department would need to consider when building its lists include whether a country has enough of its own privacy safeguards — reflected in laws, regulations and norms — to prevent sensitive US data from being transferred further to one of the untrustworthy countries. Another factor includes whether a country has engaged in “hostile foreign intelligence operations, including information operations, against the United States,” language that appears to refer to China, Russia and other foreign adversaries.

    The Commerce Department would also be authorized to identify the specific types of information that would be subject to licensing requirements, based on their sensitivity, as well as how much information a company could transfer to a non-approved country before needing a license.

    A previous version of the bill was introduced last summer. The newest version, the Wyden aide said, includes fresh language that targets TikTok indirectly by prohibiting data transfers from a company to a parent company that may receive data requests from a hostile foreign government, when the company holds data on more than one million users.

    TikTok has faced criticism from US officials who say the company’s links to China pose a national security risk. TikTok has said it has never received a request for US user data from the Chinese government and would never comply with such a request.

    TikTok has also said it is working on securing US user data by storing it on servers controlled by Oracle and by establishing special US access protocols to prevent unauthorized use of the information.

    Should TikTok abide by its plan, known as Project Texas, Wednesday’s legislation would not affect the company, according to the Wyden aide, but if TikTok or ByteDance did seek to move US user data to China, then those transfers would potentially be subject to the proposed Commerce Department restrictions.

    Congress has made several attempts in recent months to address data transfers to foreign adversaries. In February, House lawmakers advanced a bill that would all but require the Biden administration to ban TikTok over national security concerns about the app. The next month, Senate lawmakers introduced a bill that would give the Commerce Department wide latitude to assess all foreign-linked technologies and to take virtually any measures, up to and including imposing a nationwide ban, to restrict their domestic use.

    Those bills have provoked a backlash from industry and civil liberties groups, as well as among some fellow lawmakers. Among the concerns are their potential impact on Americans’ First Amendment rights and a potential conflict with laws facilitating the free flow of media to and from foreign rivals. Other concerns include whether the breadth of the legislation could give the US government too much power and whether it could end up harming industries that are not the target of the legislation.

    The new bill includes language requiring more input from privacy, civil rights and civil liberties experts, said Justin Sherman, founder and CEO of the research firm Global Cyber Strategies and a senior fellow at Duke University’s Sanford School of Public Policy who has seen the bill.

    “You don’t load up Excel sheets in a shipping crate and send them to a foreign port,” Sherman said, but data transfers are a “hugely and often ignored problem in national security.”

    “We need to get beyond just looking at a couple mobile apps and platforms, and start looking at all parts of this ecosystem, including how data gets sold and transferred,” Sherman added, “and this bill takes an important look at that issue.”

    Other senators co-sponsoring Wednesday’s legislation include Rhode Island Democratic Sen. Sheldon Whitehouse, Tennessee Republican Sen. Bill Hagerty, New Mexico Democratic Sen. Martin Heinrich and Florida Republican Sen. Marco Rubio. A companion bill in the House will also be unveiled Wednesday, sponsored by Ohio Republican Rep. Warren Davidson and California Democratic Rep. Anna Eshoo.


  • A lawsuit by TikTok users challenging Montana’s ban is being funded by the social media company itself | CNN Business




    CNN
     — 

    A high-profile lawsuit brought by TikTok users and creators last month challenging Montana’s statewide ban against the short-form video app is being funded by the social media giant itself, the company told CNN on Wednesday.

    TikTok has been covering legal fees for the group of five TikTok creators, said Jodi Seth, a TikTok spokesperson, separately from the company’s own lawsuit to block the state’s new law targeting the app over national security concerns.

    “We support our creators through various programs and have an ongoing dialogue about their presence on TikTok,” Seth said in a statement. “Throughout this process, many creators have expressed major concerns both privately and publicly about the potential impact of the Montana law on their livelihoods. We will support our creators in fighting for their constitutional rights.”

    TikTok’s involvement in the creators’ suit was first reported this week by The New York Times, weeks after the initial court case was filed. The company’s role in the litigation had not been previously known.

    The suit by the TikTok creators was the first to challenge Montana’s law banning TikTok from being offered within state lines and establishing penalties for the company and for app stores that violate the law. Legal experts have said the legislation, which is not set to take effect until January, raises constitutional issues and may well be practically unenforceable even if the law is upheld.


  • China just played a trump card in the chip war. Are more export curbs coming? | CNN Business



    Hong Kong
    CNN
     — 

    A trade war between China and the United States over the future of semiconductors is escalating.

    Beijing hit back Monday by playing a trump card: It imposed export controls on two strategic raw materials, gallium and germanium, that are critical to the global chipmaking industry.

    “We see this as China’s second, and much bigger, counter measure to the tech war, and likely a response to the potential US tightening of [its] AI chip ban,” said Jefferies analysts. Sanctioning one of America’s biggest memory chipmakers, Micron Technology (MU), in May was the first, they said.

    Here’s what you need to know about gallium and germanium, how they could play into the chip war and whether more countermeasures could be coming.

    Last October, the Biden administration unveiled a set of export controls banning Chinese companies from buying advanced chips and chip-making equipment without a license.

    Chips are vital for everything from smartphones and self-driving cars to advanced computing and weapons manufacturing. US officials have talked about the move as a measure to protect national security interests.

    But it didn’t stop there. For the curbs to be effective, Washington needed other key suppliers, located in the Netherlands and Japan, to join. They did.

    China eventually retaliated. In April, it launched a cybersecurity probe into Micron before banning the company from selling to Chinese companies working on key infrastructure projects. On Monday, Beijing announced the restrictions on gallium and germanium.

    Gallium is a soft, silvery metal and is easy to cut with a knife. It’s commonly used to produce compounds that are key materials in semiconductors and light-emitting diodes.

    Germanium is a hard, grayish-white and brittle metalloid that is used in the production of optical fibers that can transmit light and electronic data.

    The export controls have drawn comparisons with China’s reported attempts in early 2021 to restrict exports of rare earths, a group of 17 elements for which China controls more than half of the global supply.

    Gallium and germanium do not belong to this group of minerals. Like rare earths, they can be expensive to mine or produce.

    This is because they are usually formed as a byproduct of mining more common metals, primarily aluminum, zinc and copper, and processed in countries that produce them.

    China is the world’s leading producer of both gallium and germanium, according to the US Geological Survey. The country accounted for 98% of the global production of gallium, and 68% of the refinery production of germanium.

    “The economies of scale in China’s extensive and increasingly integrated mining and processing operations, along with state subsidies, have allowed it to export processed minerals at a cost that operators elsewhere can’t match, perpetuating the country’s market dominance for many critical commodities,” analysts from Eurasia Group said on Tuesday.

    Shares of Chinese producers of the two raw materials surged by 10% on Tuesday.

    Beyond China, Australian rare earths producers also advanced, as investors expected Beijing might extend export curbs to that group of strategically important minerals. Lynas Rare Earths (LYSCF) rose 1.5%.

    The United States is dependent on China for these two critical elements. It imported more than 50% of the gallium and germanium it used in 2021 from the country, the US Geological Survey showed.

    Eurasia Group analysts described China’s export controls as a “warning shot.”

    “It is a shot across the bow intended to remind countries including the United States, Japan, and the Netherlands that China has retaliatory options and to thereby deter them from imposing further restrictions on Chinese access to high-end chips and tools,” Eurasia Group said in a research note.

    Chinese authorities may also intend to use their control over these niche metals as a bargaining chip in discussions with US Treasury Secretary Janet Yellen, who is scheduled to visit Beijing later this week.

    Jefferies analysts said the timing of the announcement was unlikely to be a casual decision.

    “It gives the US at least two days to digest and come up with a well-considered response,” they said.

    However, the move is not considered “a death blow” to the United States and its allies.

    China may be the industry leader, but there are alternative producers, as well as available substitutes for both minerals, the Eurasia Group analysts pointed out.

    The United States also imports a fifth of its gallium from the United Kingdom and Germany and buys more than 30% of its germanium from Belgium and Germany.

    Could more countermeasures be coming? That’s definitely possible, a former senior Chinese official has warned.

    The curbs announced this week are “just the start,” Wei Jianguo, a former deputy commerce minister, told the official China Daily on Wednesday, adding China has more tools in its arsenal with which to retaliate.

    “If the high-tech restrictions on China become tougher in the future, China’s countermeasures will also escalate,” he was quoted as saying.

    Analysts believe this too. Rare earths, which are not difficult to find but are complicated to process, are also critical in making semiconductors, and could be the next target.

    “If this action doesn’t change the US-China dynamics, more rare earth export controls should be expected,” Jefferies analysts said.

    However, analysts from Eurasia Group warned that restricting exports is a “double-edged sword.”

    Past attempts by China to leverage its dominance in rare earths have reduced availability and raised prices. Higher prices have spurred greater competition by making mining and processing ventures outside of China more cost-competitive, they said.

    China cut its rare earths export quota in 2010 amid tensions with the United States.

    That resulted in greater efforts by companies outside of the country to produce the metals. US data showed that China’s global market share dropped from 97% in 2010 to about 60% in 2019.

    “Imposing export restrictions risks reducing market dominance,” the Eurasia Group analysts said.

    CNN’s Hanna Ziady and Xiaofei Xu contributed reporting.

    [ad_2]

    Source link

  • Legendary computer hacker Kevin Mitnick dies at 59 | CNN Business

    Legendary computer hacker Kevin Mitnick dies at 59 | CNN Business

    [ad_1]



    CNN
     — 

    Kevin Mitnick, one of the most famous hackers in the history of cybersecurity, died over the weekend at age 59 after a more than year-long battle with pancreatic cancer, his family said in a published obituary.

    Before his death on July 16, Mitnick had become legendary for his hacking sprees, which inspired multiple films.

    The first, “WarGames” starring Matthew Broderick, was partially based on allegations that Mitnick successfully hacked the computer systems at North American Aerospace Defense Command as a teenager. He denied ever having done so.

    Mitnick’s restless curiosity caught up with him when he was arrested for stealing $1 million in proprietary software from Digital Equipment Corporation in 1988. Mitnick was sentenced to a year in prison and three years of probation, but a new arrest warrant was issued in 1995 for violating that probation. Mitnick went on the run, breaking into the computer systems of multiple corporations, cell phone companies, and educational institutions, according to the federal indictment against him.

    Through it all, Mitnick and his defenders insisted he was harmless, not actually trying to hurt anyone or pursue financial gain.

    “I was an old-school hacker, doing it for intellectual curiosity,” Mitnick told Wired magazine in a 2008 interview. But federal authorities were so concerned about his capabilities that when he was incarcerated again in 1995, Mitnick told CNN he was held in solitary confinement for a time out of concern that even proximity to a telephone could allow him to continue hacking.

    Mitnick and federal prosecutors agreed to a plea deal in 1999 in which he pleaded guilty to seven criminal counts, including wire fraud and causing damage to computers. The deal included a 46-month prison sentence and a ban on being “employed in any capacity wherein he has access to computers or computer-related equipment or software” during a period of probation. He was released in 2000 due to credit for time already served.

    Mitnick published a memoir on his hacking career, “Ghost in the Wires: My Adventures as the World’s Most Wanted Hacker,” in 2011.

    Following his prison term, Mitnick became a white-hat hacker, using his expertise to legally help businesses track people trying to break into their systems. For the past decade, he was the chief hacking officer and partial owner of the tech security firm KnowBe4, founded by his close friend and business partner, Stu Sjouwerman.

    “I made some really stupid mistakes in the past as a younger man that I regret,” Mitnick told CNN in a 2005 interview. “I’m lucky that I’ve been given a second chance and that I could use these skills to help the community.”

    “Kevin was a dear friend to me and many of us here at KnowBe4,” Sjouwerman said in a statement. “He is truly a luminary in the development of the cybersecurity industry, but mostly, Kevin was just a wonderful human being and he will be dearly missed.”

    A memorial for Mitnick is scheduled for August 1 in Las Vegas, his company said. He is survived by his wife Kimberley, who is pregnant with their first child, the family said.

    [ad_2]

    Source link

  • Google is building an AI tool for journalists | CNN Business

    Google is building an AI tool for journalists | CNN Business

    [ad_1]



    CNN
     — 

    Google is developing an artificial intelligence tool for news publishers that can generate article text and headlines, the company said, highlighting how the technology may soon transform the journalism industry.

    The tech giant said in a statement that it is looking to partner with news outlets on the AI tool’s use in newsrooms.

    “Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity,” a Google spokesperson said, “just like we’re making assistive tools available for people in Gmail and in Google Docs.”

    The effort was first reported by The New York Times, which said the project is referred to internally as “Genesis” and has been pitched to The Times, The Washington Post and News Corp, which owns The Wall Street Journal.

    Google’s statement did not name those media companies but said the company is particularly focusing on “smaller publishers.” It added that the project is not aimed at replacing journalists nor their “essential role … in reporting, creating, and fact-checking their articles.”

    The new tool comes as tech companies, including Google, race to develop and deploy a new crop of generative AI features into applications used in the workplace, with the promise of streamlining tasks and making employees more productive.

    But these tools, which are trained on information online, have also raised concerns because of their potential to get facts wrong or “hallucinate” responses.

    News outlet CNET had to issue “substantial” corrections earlier this year after experimenting with using an AI tool to write stories. And what was supposed to be a simple AI-written story on “Star Wars” published by Gizmodo earlier this month similarly required a correction. But both outlets have said they will still move forward with using the technology.

    [ad_2]

    Source link

  • Meta begins blocking news access on its platforms in Canada | CNN Business

    Meta begins blocking news access on its platforms in Canada | CNN Business

    [ad_1]


    Washington
    CNN
     — 

    Meta has begun to remove news content from Facebook and Instagram in Canada, the social media giant said Tuesday, in response to recently passed legislation in the country that requires tech companies to negotiate payments to news organizations for hosting their content.

    As a result of the move — which Meta had previously said would occur before the law takes effect — Meta’s Canadian users will no longer be able to click on links to news articles posted to Facebook and Instagram.

    The changes began Tuesday and will roll out gradually over the coming weeks, said Meta spokesperson Andy Stone.

    The decision comes amid a global debate over the relationship between news organizations and social media companies about the value of news content, and who gets to benefit from it.

    Google has also announced that it plans to remove news content from its platforms in Canada when the law takes effect, which could happen by December.

    The Canadian legislation, known as Bill C-18, was given final approval in June. It aims to support the sustainability of news organizations by regulating “digital news intermediaries with a view to enhancing fairness in the Canadian digital news marketplace.”

    It comes after the passage of a 2021 Australian law that the tech platforms initially opposed by warning it would similarly force them to remove news content. Since then, the platforms have reached voluntary agreements with a range of news outlets in that country.

    Like-minded proposals have been introduced around the world amid allegations that the tech industry has decimated local journalism by sucking away billions in online advertising revenues.

    In May, Meta also threatened to remove news content from California if the state moved ahead with a revenue-sharing bill. The legislation was put on hold last month.

    And at the federal level, the US Senate in June advanced a bill that would grant news organizations the ability to jointly negotiate for a greater share of advertising revenues against online platforms, thanks to a proposed antitrust exemption for publishers and broadcasters.

    In a blog post Tuesday, Meta said the Canadian legislation “misrepresents the value news outlets receive when choosing to use our platforms.”

    “The legislation is based on the incorrect premise that Meta benefits unfairly from news content shared on our platforms, when the reverse is true,” the blog post said. “News outlets voluntarily share content on Facebook and Instagram to expand their audiences and help their bottom line.”

    Canadian users of Meta’s platforms will still be able to access news content online by visiting news outlets’ websites directly or by signing up for their subscriptions and apps.

    [ad_2]

    Source link

  • Who says romance is dead? Couples are using ChatGPT to write their wedding vows | CNN Business

    Who says romance is dead? Couples are using ChatGPT to write their wedding vows | CNN Business

    [ad_1]



    CNN
     — 

    When Elyse Nguyen was nearing her wedding date in February and still hadn’t started writing her vows, a friend suggested she try a new source of inspiration: ChatGPT.

    The AI chatbot, which was released publicly in late November, can generate compelling written responses to user prompts and offers the promise of helping people get over writer’s block, whether it be for an essay, an email, or an emotional speech.

    “At first we inputted the prompt as a joke and the output was pretty cheesy with personal references to me and my husband,” said Nguyen, a financial analyst at Qualcomm. “But the essence of what vows should incorporate was there – our promises to each other and structure.”

    She made edits, changed the prompts to add humor and details about her partner’s interests, and added some personal touches. Nguyen ultimately ended up using a good portion of ChatGPT’s suggestions and said her husband was on board with it.

    “It helped alleviate some stress because I had no prior experience with wedding vows nor did I know what should be included,” Nguyen said. “Plus, ChatGPT is a genius with alliteration, analogies and metaphors. Having something like, ‘I promise to be your partner in life with the enthusiasm of a golfer’s first hole in one’ in my back pocket was comical.”

    Nearly five months after ChatGPT went viral and ignited a new AI arms race in Silicon Valley, more couples are looking to it for help with wedding planning, including writing vows and speeches, drafting religious marriage contracts, and setting up websites for the special day.

    Ellen Le recently created some of her wedding website through a new Writer’s Block Assistant tool on online wedding planning service Joy, which was one of the first third-party platforms to incorporate ChatGPT’s technology. (Last month, OpenAI, the company behind ChatGPT, opened up access to the chatbot, paving the way for it to be integrated into numerous apps and services.)

    Le, a product manager at a startup, said she used the feature to draft an “about us” page and write directions from San Francisco to her Napa Valley wedding. The Writer’s Block Assistant tool helps users write vows, best man and maid of honor speeches, thank you cards and wedding website “about us” pages. It also lets users highlight personal stories and select the style or tone before pulling it into a speech.

    “I started drafting my vows and when I typed in how we met, it produced this very delightful story,” Le said. “Some of it was inaccurate, making up certain details, but it gave me a helping hand and something to react to, rather than just spending 10 hours thinking about how to get started.”

    Le said her fiancé, who often uses ChatGPT for work, is considering using AI to help with his vows too.

    Joy co-founder and CEO Vishal Joshi, who studied artificial intelligence and electrical engineering at NIT Rourkela in India, said the company launched Writer’s Block Assistant in March after it conducted an internal study that found most of its users were somewhat overwhelmed with getting started on writing vows and speeches, and wished they had help. He said the company has already seen thousands of submissions since launching the tool.

    “Almost two decades ago, AI enthusiasts like myself and my research peers had only dreamt of mass market adoption we are seeing today, and we know this is just the true beginning,” Joshi said. “Just like smartphones, if applied well, the positive impact of AI on our lives can far outshine the negatives. We’re working on responsibly innovating using AI to advance the wedding and event industry as a whole.”


    ChatGPT has sparked concerns in recent months about its potential to perpetuate biases, spread misinformation and upend certain livelihoods. Now, as it finds its way into marriage ceremonies, it could raise more nuanced questions about whether people risk losing something by injecting technology into what is supposed to be a deeply personal and, for many, spiritual moment in life.

    Michael Grinn, an anesthesiologist with practices in Miami and New York, was experimenting with ChatGPT when he asked it to produce a traditional Ketubah – a Jewish marriage contract – for his upcoming June wedding.

    Grinn and his fiancée Kate Gardiner, the founder and CEO of a public relations firm, then requested it make some language changes around gender equality and intimacy. “At the end, we both looked at each other and were like, we can’t disagree with the result,” he said.

    Editing took about an hour, but it still shaved hours off what otherwise could have been a lengthy process, he said. Still, Grinn plans to write his own vows. “I want them to be less refined and something no one else helped me with.”

    He does, however, plan to use ChatGPT for inspiration for officiating his best man’s wedding. “It mostly comes down to time because I’ve been working so much,” he said, “and this is so efficient.”

    [ad_2]

    Source link

  • How Elon Musk upended Twitter and his own reputation in 6 months as CEO | CNN Business

    How Elon Musk upended Twitter and his own reputation in 6 months as CEO | CNN Business

    [ad_1]


    New York
    CNN
     — 

    When Elon Musk first agreed to buy Twitter, he promised to make the company “better than ever,” with greater transparency, fewer bots, a stronger business and more of what he called “free speech.”

    But six months after Musk took control of Twitter, the future of the company and its platform has never been less certain.

    After acquiring the social media platform for $44 billion in late October, Musk reportedly now values Twitter at around $20 billion — and some who track the company believe even that estimate is likely high. Musk repeatedly warned that Twitter could be at risk of filing for bankruptcy, only to claim he had brought it back from the brink by slashing costs, both by laying off 80% of Twitter’s staff and allegedly by failing to pay some of its bills, according to multiple lawsuits. But it’s not clear just how and when Musk might return Twitter to growth.

    He has antagonized journalists and news outlets that have long been central to the platform’s success, overseen policy changes that threaten to make Twitter less safe or reliable, made the platform less transparent to researchers and scared away many top advertisers. Musk’s primary plan to grow Twitter’s business through an overhauled subscription strategy has resulted in much chaos but only a limited number of actual subscriptions.

    In the process, Musk has also upended his own reputation. Once known by much of the public primarily for his innovative efforts to launch rockets and build electric cars, Musk has instead spent much of the past six months in the headlines for controversial policy and feature changes at Twitter, draconian cuts to staff resulting in frequent service disruptions, and briefly banning several prominent journalists. He’s also tweeted a long list of eccentric remarks from his personal Twitter account, including sharing conspiracy theories and publicly mocking a Twitter worker with a disability who was unsure whether he’d been laid off.

    “If he had done nothing except cut costs, then Twitter would have been okay,” said Leslie Miley, a former Twitter engineering manager who started its product safety and security team and left the company in 2015. He has since held roles at Google, Microsoft and the Obama Foundation. “If you had just let everyone go, treated them with respect, and just let the service run for two years, you probably would be okay.”

    Now, though, Miley said he expects Twitter will “eventually go down the road of MySpace.”

    “It’s going to take a little bit longer … [but] I think Twitter is on its way to irrelevance,” he said. “There is no strategy to acquire or retain users because you are offering them no value.”

    Twitter, which has slashed much of its public relations team under Musk, responded to CNN’s request for comment on this story with the auto-reply from its press email that it has used for weeks: a poop emoji.

    For years, what differentiated Twitter from other social platforms was that it served as a central hub for real-time news. It was a place for ordinary people to read and even engage in conversation with celebrities, business leaders and other newsmakers.

    Many of Musk’s recent moves at the platform threaten to undermine that purpose, not to mention the larger information ecosystem — and it’s not clear the efforts will improve the company’s business.

    “Twitter has never been perfect, it had a lot of problems but it was critical global infrastructure for information that Elon Musk is now systematically, frankly, vandalizing,” former Twitter chair of global news Vivian Schiller told CNN in a recent interview.

    Most recently, Musk removed the legacy blue check marks that verified the identities of prominent users, saying he would instead make the checks available only to those who pay $8 per month for Twitter Blue in the interest of “treating everyone equally.”

    “There shouldn’t be a different standard for celebrities,” Musk said in a tweet earlier this month.

    But the move may make it easier for bad actors to impersonate high-profile people and harder for users to trust the veracity and authenticity of information on the platform. What’s more, Musk then decided to sponsor the blue checks for certain celebrities, including Stephen King and LeBron James, in effect creating exactly the “different standard” for famous users he’d professed to want to avoid.

    Now, Musk says content from verified users will be promoted on the platform, potentially making it harder for users who can’t afford a subscription, or simply don’t want to pay Musk for one, to find an audience. And the new paid verification system won’t necessarily rid the platform of bots, an issue Musk spent months railing against while trying to get out of the acquisition deal last year, according to Filippo Menczer, a computer science professor at Indiana University and director of the Observatory on Social Media.

    “You can create fake accounts and pay $8 [for a blue check] … so if you are a well-funded bad actor, you can do more damage now than you could before,” Menczer said. “And if you are a reliable source and you’re not well-funded, your information will not be as visible as before.”

    Menczer added that the result could be “less free speech, because you’re drowning out the speech of regular people [with speech] by people who either have the technical skills or the money to manipulate the system.”

    Twitter’s move to charge users of its API will also make it harder for researchers to identify and warn the platform about inauthentic activity, Menczer said, and could disrupt other positive uses of the platform that contributed to its reputation as a news hub. Weather agencies, for example, have warned that the change could make it harder for them to release automated emergency weather alerts.

    Any social network lives or dies based on its ability to retain and attract users — and there’s real reason for Twitter to be worried.

    A number of users, celebrities and media organizations have said they plan to leave Twitter over Musk’s recent policy changes — which often appear to be made on a whim without any real principles.

    NPR, BBC and CBC left Twitter after opposing a controversial new “government-funded media” label that they say was misleading. CenterLink, a global nonprofit that represents hundreds of centers providing services to LGBTQ communities, said it would no longer use Twitter after the platform removed protections for transgender users from its hateful conduct policy. And some high-profile users, such as bullying activist Monica Lewinsky, have threatened to exit the platform over the blue check change, now that they may be at greater risk of impersonation on Twitter.

    There remain few alternatives that offer similar features and scale to Twitter, but a growing list of upstart competitors has emerged since Musk’s takeover. At least one large rival, Facebook-parent Meta, has also confirmed it’s working on a service that sounds a lot like Twitter.

    “Almost everything he said he was going to do, he has screwed up in any number of ways,” Miley said. “If it weren’t so damaging to people and organizations who have depended upon the platform, it would be funny. But it’s not actually funny because it has degraded people’s ability to communicate effectively.”

    All of the chaos has made it difficult to convince advertisers, which previously made up 90% of Twitter’s revenue, to rejoin the platform, after many halted spending in the wake of Musk’s takeover over concerns about increased hate speech, as well as confusion about layoffs and the platform’s future direction.

    Just 43% of Twitter’s top 1,000 advertisers as of September — the month before Musk’s takeover — were still advertising on the platform in April, according to data from market intelligence firm Sensor Tower.

    Musk, for his part, has said that Twitter’s usage has increased since his takeover and that advertisers are steadily returning to the platform. But because he took the company private, he is not obligated to make financial disclosures and followers of the company are left to take him at his word.

    Musk built his reputation by overhauling Tesla, helping to launch a widespread shift away from gas cars to electric vehicles and growing SpaceX into a space transport juggernaut. Now, he appears to be attempting a similar overhaul at Twitter — upending the tried-and-true digital advertising business in favor of a subscription model that no other social media platform has yet been able to find large scale success with.

    “I give him some credit for trying a different business model, I think the business model based on user data is quite abusive,” said Luigi Zingales, professor at the University of Chicago Booth School of Business, although Musk has also attempted to improve Twitter’s targeted advertising business.

    Some other tech companies have followed his lead in some places. Facebook-parent Meta copied Twitter by launching a paid verification option, and Meta, along with a number of other tech companies, has undergone multiple rounds of cost-cutting since last fall. Twitter appears to have given these companies cover for some of those moves, while their somewhat more principled approaches have made them look better by comparison.

    For Twitter and Musk, the stakes for success are high: Musk’s relationships with banks and investors for future endeavors could hinge in part on his performance at the social media firm, which he took on billions of dollars in debt to purchase. Banks “will sit down and say, what kind of cred does this guy have? Will we find him making these shoot-from-the-lip sort of dictates that, in fact, throw our money down a hole?” said Columbia Business School management professor William Klepper.

    Any change to Musk’s reputation from his time leading Twitter could also ultimately have ripple effects for his broader business empire, causing potential investors, recruits and customers to think twice about betting on one of his companies. Tesla (TSLA) shareholders recently complained to the company’s board that Musk appears “overcommitted.”

    “His reputation has been diminished significantly with Twitter … and once you lose it, it’s very difficult to recover,” Klepper said. “It would be a good opportunity for [Musk] to rethink whether or not … he’s really leadership material.”

    Musk in December pledged to step down as Twitter CEO after millions of users voted in favor of his exit in a poll he posted to the platform. But for now, he remains “Chief Twit.”

    [ad_2]

    Source link

  • UK citizen extradited to US pleads guilty to 2020 Twitter hack | CNN Business

    UK citizen extradited to US pleads guilty to 2020 Twitter hack | CNN Business

    [ad_1]



    Reuters
     — 

    A citizen of the United Kingdom who was extradited to New York from Spain last month has pleaded guilty to cyberstalking and computer hacking schemes, including the 2020 hack of the social media site Twitter, the U.S. Justice Department said on Tuesday.

    Joseph James O’Connor, 23, was charged in both North Dakota and New York. The North Dakota case was transferred to the U.S. District Court for the Southern District of New York.

    O’Connor pleaded guilty to charges including conspiring to commit computer intrusions, to commit wire fraud and to commit money laundering.

    O’Connor, who was extradited to the U.S. on April 26, will also forfeit more than $794,000 and pay restitution to victims, prosecutors said. He faces a maximum of 77 years in prison at sentencing on June 23.

    “O’Connor’s criminal activities were flagrant and malicious, and his conduct impacted multiple people’s lives. He harassed, threatened, and extorted his victims, causing substantial emotional harm,” Assistant Attorney General Kenneth Polite said in a statement.

    Prosecutors said the schemes included gaining unauthorized access to social media accounts on Twitter in July 2020 as well as a TikTok account in August 2020. Along with his co-conspirators, O’Connor stole at least $794,000 worth of cryptocurrency.

    The July 2020 Twitter attack hijacked a variety of verified accounts, including those of then-Democratic presidential candidate Joe Biden and Tesla CEO Elon Musk, who now owns Twitter.

    The accounts of former President Barack Obama, reality TV star Kim Kardashian, Bill Gates, Warren Buffett, Benjamin Netanyahu, Jeff Bezos, Michael Bloomberg and Kanye West were also hit.

    The alleged hacker used the accounts to solicit digital currency, prompting Twitter to prevent some verified accounts from publishing messages for several hours until security could be restored.

    [ad_2]

    Source link

  • Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman set to testify before Congress | CNN Business

    Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman set to testify before Congress | CNN Business

    [ad_1]



    CNN
     — 

    OpenAI CEO Sam Altman is set to testify before a Senate Judiciary subcommittee on Tuesday after the viral success of ChatGPT, his company’s chatbot tool, renewed an arms race over artificial intelligence and sparked concerns from some lawmakers about the risks posed by the technology.

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”

    A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    Montgomery is expected to urge Congress to adopt a “precision regulation” approach for AI based on specific use cases, and to suggest that lawmakers push companies to test how their systems handle bias and other concerns – and disclose those results.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts.

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    – CNN’s Jennifer Korn contributed to this report.


  • Twitter loses its top content moderation official at a key moment | CNN Business




    CNN
     — 

    Twitter has lost its top content moderation official just weeks before the company is set to undergo a regulatory stress test by European Union officials focused on its handling of user content, in the latest sign of turbulence at the company under owner Elon Musk.

    On Thursday, Twitter’s head of trust and safety, Ella Irwin, told Reuters she had left the company. Irwin has not addressed the reasons for her departure, but the move coincided with the company’s content moderation dispute with the Daily Wire, a conservative outlet.

    The dispute focused on the forthcoming release of a self-described documentary, “What Is a Woman?” that Twitter warned would be labeled as “hateful content” due to two instances of misgendering, according to Daily Wire CEO Jeremy Boreing. Musk intervened later Thursday, calling the content moderation decision “a mistake by many people at Twitter” and that the video would be “definitely allowed.”

    Twitter did not immediately respond to a request for comment on Irwin’s departure.

    But the sudden and unexpected vacancy at Twitter could leave the company without a key content moderation official at a sensitive moment. Later this month at Twitter’s San Francisco offices, EU officials are set to review whether the platform is likely to be compliant with a sweeping content moderation law that could eventually trigger millions of dollars in fines for Twitter if it’s found to be noncompliant.

    That law, known as the Digital Services Act, will require so-called “very large online platforms” including Twitter to abide by tough content moderation standards by as early as August. It’s far from clear whether the company can meet those requirements by the deadline, and recent developments at Twitter seem to have further alarmed EU regulators in that respect.

    For months, as Musk has increasingly welcomed more incendiary speech onto the platform Twitter had previously restricted, EU officials have been reminding Twitter of its content moderation obligations under the DSA. The warnings have also come amid mass layoffs at the company that have eliminated entire teams, including much of its content moderation staff.

    Last month, Twitter pulled out of the European Union’s code of conduct on disinformation, a series of voluntary commitments to combat mis- and disinformation that the EU has said would be considered as part of any evaluation of a platform’s compliance with the overall Digital Services Act (DSA).

    Although Twitter said it was “committed to fully complying with the Digital Services Act” and would meet its DSA obligations with respect to misinformation “in a manner that reflects Twitter’s unique service,” the company told EU officials “we feel we have no alternative” but to withdraw from the code.

    The announcement prompted swift backlash from Thierry Breton, a top EU commissioner and digital regulator, who appeared to regard Twitter’s decision as an attempt to evade responsibility.

    “Obligations remain,” Breton said. “You can run but you can’t hide.”

    Irwin’s departure could undercut the EU’s confidence further. Without a trust and safety head who would otherwise be expected to attend the EU stress test, Twitter’s ability to effectively respond to the evaluation may be constrained. A spokesperson for the European Commission didn’t immediately respond to a request for comment.

    On Friday, The Wall Street Journal reported that Twitter’s head of brand safety and ad quality also departed the company this week.

    All of this could be problematic for Twitter and Musk in the long run – and could also create an added headache for Linda Yaccarino just as she takes over as the company’s new CEO.

    Companies that fail to abide by the DSA risk fines of up to 6% of their global annual revenue. For Twitter, which is already struggling to regain its financial footing amid significant debt and an advertiser backlash, that’s a cost it can ill afford.


  • Charged rhetoric swirls online and off as Trump’s Miami court date looms | CNN Politics




    CNN
     — 

    From the halls of Congress to the dark corners of the internet, charged and violent rhetoric is echoing among some Donald Trump sympathizers ahead of the former president’s appearance in a Miami court on Tuesday.

    FBI special agents across the country assigned to domestic terrorism squads are actively working to identify any possible threats, four law enforcement sources told CNN, following Trump’s second indictment.

    So far, the FBI is aware of various groups like the Proud Boys discussing traveling to south Florida to publicly show support for Trump, sources said, but there is currently no indication of any specific and credible threat.

    “We have now reached a war phase,” Rep. Andy Biggs, an Arizona Republican and prominent supporter of Trump’s election denialism, tweeted Friday. “Eye for an eye.” Biggs’ office later said his comment was a call for the GOP to “step up and use their procedural tools” to counter “the Left’s weaponization of our federal law enforcement apparatus.”

    Speaking at a Republican event in Georgia on Friday night, Kari Lake, who unsuccessfully ran for governor of Arizona last year and is still spreading falsehoods about that election, said: “If you want to get to President Trump, you’re going to have to go through me and 75 million Americans just like me.”

    “And I’m going to tell you, most of us are card-carrying members of the NRA,” she said to applause, adding, “That’s not a threat, that’s a public service announcement.”

    On some pro-Trump forums, anonymous users were less circumspect. “MAGA will make Waco look like a tea party!” one user posted Friday in an apparent reference to the April 1993 Waco, Texas siege that left 76 people dead.

    On Trump’s social media platform, Truth Social, one anonymous user posted Thursday, “This is a Declaration of War against the American People. It is time We The People exercise our 2nd Amendment rights and burn the corruption out of DC.”

    The former president himself has been posting frequently on Truth Social throughout the weekend. “SEE YOU IN MIAMI ON TUESDAY!!!” he posted Friday.

    Still, at least on public social media forums, there doesn’t appear to be a mass online mobilization effort for people to gather in Miami this week like there was in the lead-up to the events in Washington, DC, on January 6, 2021.

    However, some prominent right-wing figures are calling for Trump supporters to protest in Miami on Tuesday.

    One influential right-wing activist in Florida who has almost half a million followers on Twitter is promoting a flag-waving event outside Trump’s golf course in Doral on Monday and a protest the following day against the “weaponization of government” outside the Wilkie D. Ferguson Jr. Courthouse, where the former president is set to appear.

    Some Trump supporters online have stressed the need for protests to remain peaceful and some have said they will not demonstrate in Miami on Tuesday, fearing it could be a trap. This is an extension of the false belief held by some that the January 6 attack on the US Capitol was a set-up designed to incriminate supporters of the former president.

    But at least one person who has served prison time for his role in the January 6 riot said he will be in Miami to protest on Tuesday.

    Anthime Gionet, a prominent online streamer better known by his moniker “Baked Alaska,” pleaded guilty to unlawfully protesting after he livestreamed himself breaching the Capitol in a nearly 30-minute video that showed him encouraging others in the mob to enter the building.

    Gionet served a two-month sentence and was released at the end of March, according to federal records.

    On Friday, he lamented Trump’s latest indictment in a livestream outside Mar-a-Lago. During the livestream, Gionet said he and another person who was with him outside Mar-a-Lago would both be in Miami on Tuesday. The other person is heard on the stream responding, “we weren’t supposed to talk about that.” Gionet replied, “I know but it leaked so f*** it.”

    The exchange may be illustrative of the shifting ways people use the internet to organize – something that has proven to be a challenge for law enforcement.

    While much of the planning for the January 6, 2021, attack on the US Capitol was done on public forums that could be read by anyone, a lot of that communication has since shifted to private channels, experts say.

    The secretive nature of many private forums has caused federal agents working domestic terrorism matters to place greater emphasis on recruiting informants who can report on potential threats discussed online among extremists, law enforcement sources told CNN.

    But even messages posted publicly cannot be accessed by investigators without lawful investigative purposes. The FBI’s own investigative guidelines limit what material can be accessed by agents and analysts, even when it is in the public domain. These policies prevent FBI employees from trawling the internet looking for concerning material, unless a formal assessment or investigation has been authorized and opened.

    The FBI’s investigative efforts to identify possible threats include querying existing confidential human informants reporting on domestic terrorism issues for any indication of potential threats, sources said.

    In addition to working their informant networks, FBI agents and analysts are reviewing publicly available online platforms frequented by domestic extremists for any indication of plans for violence.

    Ben Decker, CEO of Memetica, a threat intelligence company, told CNN on Sunday, “Given the robust and successful grassroots architecture of right-wing culture war campaigns and anti-Pride protests this month, there are concerns that many of these in-person rally groups could pivot directly into more Trump-themed protests around the country over the coming days.”

    But, at this point, Daniel J. Jones, the president of Advance Democracy, a non-profit that conducts public interest research, told CNN that his group had not identified “what we would assess to be specific and credible plans for violence yet.”

    “However,” he added, “as we saw during the events of January 6, it’s Trump’s statements that drive the online rhetoric and real-world violence. As such, much depends on what Trump says of his perceived opponents, as well as what he asks of his supporters, in the days ahead.”

    Juliette Kayyem, a CNN national security analyst and a former assistant secretary at the Department of Homeland Security, echoed this concern. “We know how incitement to violence works. It is nurtured from the top and given license to spread by leaders. They don’t have to direct it to one place or time. They can simply unleash it, knowing full well that someone may become emboldened to act,” she said.

    Last month, the Department of Homeland Security issued a nationwide bulletin indicating the country “remains in a heightened threat environment,” warning that individuals “motivated by a range of ideological beliefs and personal grievances continue to pose a persistent and lethal threat to the homeland.”

    DHS analysts indicated the motivating factors that could incite extremists to violence include perception about the integrity of the 2024 election cycle, and, while not specifically citing Trump’s legal woes, also pointed to “judicial decisions” in their list of grievances among extremist groups.

    Ahead of Trump’s Tuesday court appearance, law enforcement will continue to remain on alert.

    “We do not want a repeat of [the January 6] violence,” one senior FBI source said.
