ReportWire

Tag: iab-business and finance

  • Indonesia bans e-commerce transactions on social media in major blow to TikTok | CNN Business



Jakarta (Reuters) —

    Indonesia has banned e-commerce transactions on social media platforms, the trade minister said on Wednesday, in a blow to short video app TikTok, which is doubling down on Southeast Asia’s biggest economy to boost its e-commerce business.

    The government said the move, which takes effect immediately, is aimed at protecting offline merchants and marketplaces, adding that predatory pricing on social media platforms is threatening small and medium-sized enterprises.

The move comes just three months after TikTok pledged to invest billions of dollars in Southeast Asia, mainly in Indonesia, over the next few years in a major push to build its e-commerce platform TikTok Shop.

    TikTok, owned by China’s ByteDance, has 125 million active monthly users in Indonesia and has been looking to translate the large user base into a major e-commerce revenue source.

    A TikTok Indonesia spokesperson said it would pursue a constructive path forward and was “deeply concerned” with the announcement, “particularly how it would impact the livelihoods of the 6 million” local sellers active on TikTok Shop.

    Indonesia Trade Minister Zulkifli Hasan on Wednesday told reporters that the regulation is intended to ensure “fair and just” business competition, adding that it was also intended to ensure data protection of users.

He warned against letting social media become an e-commerce platform, shop and bank all at the same time.

The new regulation also requires e-commerce platforms in Indonesia to set a minimum price of $100 for certain items purchased directly from abroad, and stipulates that all products offered must meet local standards, according to the regulation document reviewed by Reuters.

    Zulkifli said TikTok had one week to comply with the regulation or face the threat of closure. Indonesia Deputy Trade Minister Jerry Sambuaga earlier this month named TikTok’s live streaming features as an example of people selling goods on social media.

    Research firm BMI said TikTok would be the only business affected by the transaction ban and the move was unlikely to harm the digital marketplace industry’s growth.

    Indonesia’s e-commerce market is dominated by the likes of homegrown tech firm GoTo’s Tokopedia, Sea’s Shopee and Chinese e-commerce giant Alibaba’s Lazada.

    E-commerce transactions in Indonesia amounted to nearly $52 billion last year and of that, 5% took place on TikTok, according to data from consultancy Momentum Works.

    Indonesia is among the few markets where TikTok has launched TikTok Shop, as it seeks to leverage its large user base in the country.

Its 125 million monthly active users in Indonesia are almost on par with its user figures for Europe and trail only the United States, where it has more than 150 million users. TikTok launched an online shopping service in the United States earlier this month.

    Reactions from retailers were mixed.

    Fahmi Ridho, a vendor selling clothes on TikTok, said the platform was a way for stores to recover from the blow dealt by the Covid-19 pandemic.

“Sales don’t have to be necessarily through [brick and mortar] shops, you can do it online or wherever,” he said. “Everything will still have a portion.”

    But Edri, who goes by one name only and sells clothes at a major wholesale market in Jakarta, agreed with the regulation and stressed that there should be limits on items sold online.


  • The Israel-Hamas war reveals how social media sells you the illusion of reality | CNN Business



New York (CNN) —

    As the Israel-Hamas war reaches the end of its first week, millions have turned to platforms including TikTok and Instagram in hopes of comprehending the brutal conflict in real time. Trending search terms on TikTok in recent days illustrate the hunger for frontline perspectives: From “graphic Israel footage” to “live stream in Israel right now,” internet users are seeking out raw, unfiltered accounts of a crisis they are desperate to understand.

    For the most part, they are succeeding, discovering videos of tearful Israeli children wrestling with the permanence of death alongside images of dazed Gazans sitting in the rubble of their former homes. But that same demand for an intimate view of the war has created ample openings for disinformation peddlers, conspiracy theorists and propaganda artists — malign influences that regulators and researchers now warn pose a dangerous threat to public debates about the war.

    One recent TikTok video, seen by more than 300,000 users and reviewed by CNN, promoted conspiracy theories about the origins of the Hamas attacks, including false claims that they were orchestrated by the media. Another, viewed more than 100,000 times, shows a clip from the video game “Arma 3” with the caption, “The war of Israel.” (Some users in the comments of that video noted they had seen the footage circulating before — when Russia invaded Ukraine.)

TikTok is hardly alone. One post on X, formerly Twitter, was viewed more than 20,000 times and flagged as misleading by London-based social media watchdog Reset for purporting to show Israelis staging civilian deaths for cameras. Another X post the group flagged, viewed 55,000 times, was an antisemitic meme featuring Pepe the Frog, a cartoon that has been appropriated by far-right white supremacists. On Instagram, a widely viewed video of parachuters dropping in on a crowd, captioned “imagine attending a music festival when Hamas parachutes in,” was debunked over the weekend; it in fact showed unrelated parachute jumpers in Egypt. (Instagram later labeled the video as false.)

    This week, European Union officials sent warnings to TikTok, Facebook and Instagram-parent Meta, YouTube and X, highlighting reports of misleading or illegal content about the war on their platforms and reminding the social media companies they could face billions of dollars in fines if an investigation later determines they violated EU content moderation laws. US and UK lawmakers have also called on those platforms to ensure they are enforcing their rules against hateful and illegal content.

    Since the violence in Israel began, Imran Ahmed, founder and CEO of the social media watchdog group Center for Countering Digital Hate, told CNN his group has tracked a spike in efforts to pollute the information ecosystem surrounding the conflict.

    “Getting information from social media is likely to lead to you being severely disinformed,” said Ahmed.

    Everyone from US foreign adversaries to domestic extremists to internet trolls and “engagement farmers” has been exploiting the war on social media for their own personal or political gain, he added.

    “Bad actors surrounding us have been manipulating, confusing and trying to create deception on social media platforms,” Dan Brahmy, CEO of the Israeli social media threat intelligence firm Cyabra, said Thursday in a video posted to LinkedIn. “If you are not sure of the trustworthiness [of content] … do not share,” he said.

    ‘Upticks in Islamophobic and antisemitic narratives’

    Graham Brookie, senior director of the Digital Forensic Research Lab at the Atlantic Council in Washington, DC, told CNN his team has witnessed a similar phenomenon. The trend includes a wave of first-party terrorist propaganda, content depicting graphic violence, misleading and outright false claims, and hate speech – particularly “upticks in specific and general Islamophobic and antisemitic narratives.”

    Much of the most extreme content, he said, has been circulating on Telegram, the messaging app with few content moderation controls and a format that facilitates quick and efficient distribution of propaganda or graphic material to a large, dedicated audience. But in much the same way that TikTok videos are frequently copied and rebroadcast on other platforms, content shared on Telegram and other more fringe sites can easily find a pipeline onto mainstream social media or draw in curious users from major sites. (Telegram didn’t respond to a request for comment.)

    Schools in Israel, the United Kingdom and the United States this week urged parents to delete their children’s social media apps over concerns that Hamas will broadcast or disseminate disturbing videos of hostages who have been seized in recent days. Photos of dead or bloodied bodies, including those of children, have already spread across Facebook, Instagram, TikTok and X this week.

    And tech watchdog group Campaign for Accountability on Thursday released a report identifying several accounts on X sharing apparent propaganda videos with Hamas iconography or linking to official Hamas websites. Earlier in the week, X faced criticism for videos unrelated to the war being presented as on-the-ground footage and for a post from owner Elon Musk directing users to follow accounts that previously shared misinformation (Musk’s post was later deleted, and the videos were labeled using X’s “community notes” feature.)

Some platforms are in a better position to combat these threats than others. Widespread layoffs across the tech industry, including at some social media companies’ ethics and safety teams, risk leaving the platforms less prepared at a critical moment, misinformation experts say. Much of the content related to the war is also spreading in Arabic and Hebrew, testing the platforms’ capacity to moderate non-English content, where enforcement has historically been less robust than for English-language content.

    “Of course, platforms have improved over the years. Communication & info sharing mechanisms exist that did not in years past. But they have also never been tested like this,” Brian Fishman, the co-founder of trust and safety platform Cinder who formerly led Facebook’s counterterrorism efforts, said Wednesday in a post on Threads. “Platforms that kept strong teams in place will be pushed to the limit; platforms that did not will be pushed past it.”

    Linda Yaccarino, the CEO of X, said in a letter Wednesday to the European Commission that the platform has “identified and removed hundreds of Hamas-related accounts” and is working with several third-party groups to prevent terrorist content from spreading. “We’ve diligently taken proactive actions to remove content that violates our policies, including: violent speech, manipulated media and graphic media,” she said. The European Commission on Thursday formally opened an investigation into X following its earlier warning about disinformation and illegal content linked to the war.

    Meta spokesperson Andy Stone said that since Hamas’ initial attacks, the company has established “a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation. Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact checkers in the region to limit the spread of misinformation. We’ll continue this work as this conflict unfolds.”

YouTube, for its part, says its teams have removed thousands of videos since the attack began, and it continues to monitor for hate speech, extremism, graphic imagery and other content that violates its policies. In searches related to the war, the platform is also surfacing videos almost exclusively from mainstream news organizations.

Snapchat told CNN that its misinformation team is closely watching content coming out of the region, making sure it is within the platform’s community guidelines, which prohibit misinformation, hate speech, terrorism, graphic violence and extremism.

    TikTok did not respond to a request for comment on this story.

    Large tech platforms are now subject to content-related regulation under a new EU law called the Digital Services Act, which requires them to prevent the spread of mis- and disinformation, address rabbit holes of algorithmically recommended content and avoid possible harms to user mental health. But in such a contentious moment, platforms that take too heavy a hand in moderation could risk backlash and accusations of bias from users.

    Platforms’ algorithms and business models — which generally rely on the promotion of content most likely to garner significant engagement — can aid bad actors who design content to capitalize on that structure, Ahmed said. Other product choices, such as X’s moves to allow any user to pay for a subscription for a blue “verification” checkmark that grants an algorithmic boost to post visibility, and to remove the headlines from links to news articles, can further manipulate how users perceive a news event.

    “It’s time to break the emergency glass,” Ahmed said, calling on platforms to “switch off the engagement-driven algorithms.” He added: “Disinformation factories are going to cause geopolitical instability and put Jews and Muslims at harm in the coming weeks.”

    Even as social media companies work to hide the absolute worst content from their users — whether out of a commitment to regulation, advertisers’ brand safety concerns, or their own editorial judgments — users’ continued appetite for gritty, close-up dispatches from Israelis and Palestinians on the ground is forcing platforms to walk a fine line.

    “Platforms are caught in this demand dynamic where users want the latest and the most granular, or the most ‘real’ content or information about events, including terrorist attacks,” Brookie said.

    The dynamic simultaneously highlights the business models of social media and the role the companies play in carefully calibrating their users’ experiences. The very algorithms that are widely criticized elsewhere for serving up the most outrageous, polarizing and inflammatory content are now the same ones that, in this situation, appear to be giving users exactly what they want.

    But closeness to a situation is not the same thing as authenticity or objectivity, Ahmed and Brookie said, and the wave of misinformation flooding social media right now underscores the dangers of conflating them.

    Despite giving the impression of reality and truthfulness, Brookie said, individual stories and combat footage conveyed through social media often lack the broader perspective and context that journalists, research organizations and even social media moderation teams apply to a situation to help achieve a fuller understanding of it.

    “It’s my opinion that users can interact with the world as it is — and understand the latest, most accurate information from any given event — without having to wade through, on an individual basis, all of the worst possible content about that event,” Brookie said.

    Potentially exacerbating the messy information ecosystem is a culture on social media platforms that often encourages users to bear witness to and share information about the crisis as a way of signaling their personal stance, whether or not they are deeply informed. That can lead even well-intentioned users to unwittingly share misleading information or highly emotional content created with the intention of collecting views or monetizing highly engaging content.

    “Be very cautious about sharing in the middle of a major world event,” Ahmed said. “There are people trying to get you to share bullsh*t, lies, which are designed to inculcate you to hate or to misinform you. And so sharing stuff that you’re not sure about is not helping people, it’s actually really harming them and it contributes to an overall sense that no one can trust what they’re seeing.”


  • Taiwan’s Foxconn to build ‘AI factories’ with Nvidia | CNN Business



Taipei (CNN) —

    Taiwan’s Foxconn says it plans to build artificial intelligence (AI) data factories with technology from American chip giant Nvidia, as the electronics maker ramps up efforts to become a major global player in electric car manufacturing.

Foxconn Chairman Young Liu and Nvidia CEO Jensen Huang jointly announced the plans on Wednesday in Taipei. The duo said the new facilities, using Nvidia’s chips and software, will enable Foxconn to better utilize AI in its electric vehicles (EVs).

    “We are at the beginning of a new computing revolution,” Huang said. “This is the beginning of a brand new way of doing software — using computers to write software that no humans can.”

    Large computing systems powered by advanced chips will be able to develop software platforms for the next generation of EVs by learning from everyday interactions, they said.

    “Foxconn is turning from a manufacturing service company into a platform solution company,” Liu said. “In three short years, Foxconn has displayed a remarkable range of high-end sedan, passenger crossover, SUV, compact pick-up, commercial bus and commercial van.”

Best known as the assembler of Apple’s iPhones, Foxconn envisages a similar business model for EVs. It won’t sell the vehicles under its own brand; instead, it will build them for clients in Taiwan and globally.

In 2021, Foxconn unveiled its first three EV models: two passenger cars and a bus. They were followed by additional models last year and two new ones during Foxconn’s tech day on Wednesday: Model N, a cargo van, and Model B, a compact SUV.

    Its electric buses started running in the southern Taiwanese city of Kaohsiung last year, while its first electric car, sold under the N7 brand by Taiwanese automaker Luxgen, is expected to begin deliveries on the island from January 2024.

    Foxconn has entered a competitive industry.

    Global sales of EVs, including purely battery powered vehicles and hybrids, exceeded 10 million units last year, up 55% from 2021, according to the International Energy Agency. Nearly 14 million electric cars will be sold in 2023, it projected.

    Foxconn, which is officially known as the Hon Hai Technology Group, has been expanding its business by entering new industries such as EVs, digital health and robotics.

    Analysts say its entry into the EV space is a “logical diversification.”

    Smartphones are “a very saturated market already, and the room to grow in the … industry is getting [smaller],” said Kylie Huang, a Taipei-based analyst at Daiwa. “If they can really tap into the EV business, I do think that [they] could become influential in the next couple of years.”

During last year’s tech day, Liu told reporters that the company hoped to build 5% of the world’s electric cars by 2025, and that it aims eventually to produce 40% to 45% of the world’s EVs.

    But its foray into the industry hasn’t been entirely smooth.

Last year, Foxconn bought a factory from Lordstown Motors in Ohio that used to make small cars for General Motors. That partnership ended in June, when Lordstown filed for bankruptcy protection and announced a lawsuit against Foxconn.

    Lordstown Motors accused Foxconn of “fraud” and failing to follow through on investment promises, while Foxconn dismissed the suit as “meritless” and criticized the company for making “false comments and malicious attacks.”

    Still, it’s clear Foxconn is leaning into its expanded ambitions, including hiring two new chief strategy officers for its EV and chips businesses.

    Chiang Shang-yi is a Taiwanese semiconductor industry veteran who helped TSMC become a global foundry powerhouse, while Jun Seki, a former vice chief operating officer at Nissan Motor, leads the EV unit.

    In May, Foxconn announced a new partnership with Infineon Technologies, a German company that specializes in automotive semiconductor chips, to establish a new research center in Taiwan.

    Bill Russo, founder of Shanghai-based consulting firm Automobility, said Foxconn has the advantage of coming from a consumer electronics background, which could allow it to come up with more innovative EV products compared with traditional automakers.

    “The biggest problem with legacy automakers is that they have so much sunk investment in a carryover platform, that they typically want to start not with a clean sheet of paper, but with a highly constrained set of requirements,” he said. “Those carryover technologies bring constraints to how you think about vehicles.”

    “When Tesla started, it started by saying, ‘I’m going to challenge all of that, I’m going to blow up the basic architecture of a car and simplify it greatly,’” he added.

    “I think that’s the advantage that a technology company has … And I think that’s the way Foxconn will come at this.”

    Hanna Ziady contributed to this report.


  • India restricts laptop, PC imports to boost local manufacturing | CNN Business




(CNN) —

India has placed restrictions on the import of computers and laptops, a surprise move from the government of Prime Minister Narendra Modi, which has been trying to encourage domestic manufacturing in the tech sector.

    Importers will now need to apply for licenses in order to bring laptops, tablets, personal computers and other electronic devices into the country, according to a notice issued by the Ministry of Commerce and Industry on Thursday. Previously, the import of such items was unrestricted.

The ministry didn’t provide a reason for the change in rules. However, Modi has aggressively pushed his “Make in India” campaign, which promotes local manufacturing in a bid to create more jobs, and the move follows a similar curb on smart TV imports in 2020.

    India’s electronic imports stood at $19.7 billion in the April to June period, up 6.25% from the same period in 2022, according to Reuters.

CNN has contacted Apple (AAPL) and Samsung (SSNLF), top laptop sellers in the South Asian country, for comment but has not yet received responses.

    India’s push to manufacture domestically comes at a crucial time for the world’s most populous nation, as companies look beyond China to secure crucial supply chains.

    India’s working-age population is expected to hit one billion over the next decade, according to the Organisation for Economic Co-operation and Development. Its large and young labor force makes the country a big draw for global companies seeking alternative manufacturing hubs to China.

    Earlier this year, India’s commerce minister, Piyush Goyal, said Apple was already making between 5% and 7% of its products in India.

    “If I am not mistaken, they are targeting to go up to 25% of their manufacturing,” he said at an event in January.

In June, US chipmaker Micron (MICR) announced a new factory in the western state of Gujarat, calling it the country’s first semiconductor assembly and test manufacturing facility.

    The venture will see Micron invest up to $825 million and create “up to 5,000 new direct Micron jobs and 15,000 community jobs over the next several years,” according to the company.

    Foxconn, the world’s largest contract electronics maker and a key supplier to Apple, is also looking to expand its manufacturing operations in India.

Last month, it abruptly announced it was exiting an ambitious $19.4 billion joint venture with Vedanta (VEDL), an Indian metals and energy conglomerate, to help build one of the country’s first chip factories.

But the company said it was still committed to investing in Indian chipmaking and was applying to a government program that subsidizes the cost of setting up semiconductor or electronic display production facilities in the country.


  • ‘It gave us some way to fight back’: New tools aim to protect art and images from AI’s grasp | CNN Business




(CNN) —

    For months, Eveline Fröhlich, a visual artist based in Stuttgart, Germany, has been feeling “helpless” as she watched the rise of new artificial intelligence tools that threaten to put human artists out of work.

Adding insult to injury is the fact that many of these AI models have been trained on the work of human artists, whose images were quietly scraped from the internet without consent or compensation.

    “It all felt very doom and gloomy for me,” said Fröhlich, who makes a living selling prints and illustrating book and album covers.

    “We’ve never been asked if we’re okay with our pictures being used, ever,” she added. “It was just like, ‘This is mine now, it’s on the internet, I’m going to get to use it.’ Which is ridiculous.”

Recently, however, she learned about a tool dubbed Glaze, developed by computer scientists at the University of Chicago, which thwarts AI models’ attempts to perceive a work of art via pixel-level tweaks that are largely imperceptible to the human eye.

    “It gave us some way to fight back,” Fröhlich told CNN of Glaze’s public release. “Up until that point, many of us felt so helpless with this situation, because there wasn’t really a good way to keep ourselves safe from it, so that was really the first thing that made me personally aware that: Yes, there is a point in pushing back.”

Fröhlich is one of a growing number of artists who are fighting back against AI’s overreach and trying to find ways to protect their images online, as a new spate of tools makes it easier than ever to manipulate images in ways that can sow chaos or upend the livelihoods of artists.

These powerful new tools allow users to create convincing images in just seconds by inputting simple prompts and letting generative AI do the rest. A user, for example, can ask an AI tool to create a photo of the Pope dripped out in a Balenciaga jacket, and go on to fool the internet before the truth comes out that the image is fake. Generative AI technology has also wowed users with its ability to spit out works of art in the style of a specific artist. You can, for example, create a portrait of your cat that looks like it was done with the bold brushstrokes of Vincent van Gogh.

    But these tools also make it very easy for bad actors to steal images from your social media accounts and turn them into something they’re not (in the worst cases, this could manifest as deepfake porn that uses your likeness without your consent). And for visual artists, these tools threaten to put them out of work as AI models learn how to mimic their unique styles and generate works of art without them.

    Some researchers, however, are now fighting back and developing new ways to protect people’s photos and images from AI’s grasp.

Ben Zhao, a professor of computer science at the University of Chicago and one of the lead researchers on the Glaze project, told CNN that the tool aims to protect artists from having their unique works used to train AI models.

    Glaze uses machine-learning algorithms to essentially put an invisible cloak on artworks that will thwart AI models’ attempts to understand the images. For example, an artist can upload an image of their own oil painting that has been run through Glaze. AI models might read that painting as something like a charcoal drawing — even if humans can clearly tell that it is an oil painting.

    Artists can now take a digital image of their artwork, run it through Glaze, “and afterwards be confident that this piece of artwork will now look dramatically different to an AI model than it does to a human,” Zhao told CNN.
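The mechanism Zhao describes, small pixel changes that flip how a model "reads" an image, can be illustrated with a toy sketch. This is a minimal illustration under stated assumptions, not Glaze's actual algorithm: the "style encoder" below is a made-up random linear map standing in for a real deep image encoder, and the update is a basic signed-gradient step of the kind used in adversarial-perturbation research.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "style encoder": a fixed random linear map. A real tool would
# use a deep image encoder; this is purely illustrative.
W = rng.normal(size=(8, 64))

def embed(x):
    return W @ x

def cloak(image, target_style, eps=0.03, steps=50, lr=0.01):
    """Nudge `image` so its embedding drifts toward `target_style`,
    while keeping every pixel within `eps` of the original."""
    x = image.copy()
    for _ in range(steps):
        # Gradient of ||embed(x) - target_style||^2 with respect to x.
        grad = 2 * W.T @ (embed(x) - target_style)
        x -= lr * np.sign(grad)                   # signed-gradient step
        x = np.clip(x, image - eps, image + eps)  # imperceptibility budget
    return x

image = rng.uniform(size=64)         # a flattened toy "artwork"
decoy = embed(rng.uniform(size=64))  # embedding of a decoy style

cloaked = cloak(image, decoy)
# Pixels barely move, but the embedding lands much closer to the decoy,
# so a model trained on the cloaked image learns the wrong style.
print(np.abs(cloaked - image).max())
print(np.linalg.norm(embed(cloaked) - decoy) < np.linalg.norm(embed(image) - decoy))
```

The same tension Zhao's team faces shows up even in this sketch: the larger the perturbation budget `eps`, the further the embedding can be shifted, but the more visible the change becomes to a human viewer.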

Zhao’s team released the first prototype of Glaze in March, and the tool has already surpassed a million downloads, he told CNN. Just last week, his team released a free online version as well.

    Jon Lam, an artist based in California, told CNN that he now uses Glaze for all of the images of his artwork that he shares online.

Lam said that artists like himself have for years posted the highest-resolution versions of their works on the internet as a point of pride. “We want everyone to see how awesome it is and see all the details,” he said. But they had no idea that their works could be gobbled up by AI models that then copy their styles and put them out of work.

    Jon Lam is a visual artist from California who uses the Glaze tool to help protect his artwork online from being used to train AI models.

    “We know that people are taking our high-resolution work and they are feeding it into machines that are competing in the same space that we are working in,” he told CNN. “So now we have to be a little bit more cautious and start thinking about ways to protect ourselves.”

While Glaze can help ameliorate some of the issues artists are facing for now, Lam says it’s not enough, and regulation is needed governing how tech companies can take data from the internet for AI training.

    “Right now, we’re seeing artists kind of being the canary in the coal mine,” Lam said. “But it’s really going to affect every industry.”

    And Zhao, the computer scientist, agrees.

    Since releasing Glaze, the amount of outreach his team has received from artists in other disciplines has been “overwhelming,” he said. Voice actors, fiction writers, musicians, journalists and beyond have all reached out to his team, Zhao said, inquiring about a version of Glaze for their field.

    “Entire, multiple, human creative industries are under threat to be replaced by automated machines,” he said.

While the rise of AI images threatens the jobs of artists around the world, everyday internet users are also at risk of having their photos manipulated by AI in other ways.

    “We are in the era of deepfakes,” Hadi Salman, a researcher at the Massachusetts Institute of Technology, told CNN amid the proliferation of AI tools. “Anyone can now manipulate images and videos to make people actually do something that they are not doing.”

    Salman and his team at MIT released a research paper last week that unveiled another tool aimed at protecting images from AI. The prototype, dubbed PhotoGuard, puts an invisible “immunization” over images that stops AI models from being able to manipulate the picture.

    The aim of PhotoGuard is to protect photos that people upload online from “malicious manipulation by AI models,” Salman said.

    Salman explained that PhotoGuard works by adjusting an image’s pixels in a way that is imperceptible to humans.

In this demonstration released by MIT, a researcher shows a selfie (left) he took with comedian Trevor Noah. The middle photo, an AI-generated fake image, shows how the image looks after he used an AI model to generate a realistic edit of the pair wearing suits. The right image depicts how the researchers’ tool, PhotoGuard, would prevent an attempt by AI models to edit the photo.

    “But this imperceptible change is strong enough and it’s carefully crafted such that it actually breaks any attempts to manipulate this image by these AI models,” he added.

    This means that if someone tries to edit the photo with AI models after it’s been immunized by PhotoGuard, the results will be “not realistic at all,” according to Salman.
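Salman's description, an imperceptible change crafted so that AI edits fall apart, can be sketched in a similar toy form. This is a hedged illustration, not PhotoGuard's real implementation: the "encoder" below is again a stand-in random linear map, and the perturbation pushes the image's latent code toward that of a flat gray image, loosely mirroring the idea of attacking the editing model's encoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in encoder for a generative editing model (illustrative only).
E = rng.normal(size=(8, 64))

gray = np.full(64, 0.5)  # a featureless gray "image"
gray_latent = E @ gray   # the latent code an editor would see for gray

def immunize(image, eps=0.05, steps=100, lr=0.01):
    """Perturb `image` within +/- eps per pixel so its latent code
    collapses toward the gray latent, starving an editor of signal."""
    x = image.copy()
    for _ in range(steps):
        grad = 2 * E.T @ (E @ x - gray_latent)    # descend toward gray latent
        x -= lr * np.sign(grad)
        x = np.clip(x, image - eps, image + eps)  # stay imperceptible
    return np.clip(x, 0.0, 1.0)                   # keep valid pixel range

image = rng.uniform(size=64)
immunized = immunize(image)

# The change is tiny per pixel, yet the latent moves toward "gray",
# so edits computed from it come out unrealistic.
print(np.abs(immunized - image).max())
print(np.linalg.norm(E @ immunized - gray_latent) < np.linalg.norm(E @ image - gray_latent))
```

This also hints at the limitation Salman acknowledges: an attacker who re-encodes, compresses or re-crops the image can partially wash out such a perturbation, which is why he calls PhotoGuard a prototype rather than a finished defense.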

    In an example he shared with CNN, Salman showed a selfie he took with comedian Trevor Noah. Using an AI tool, Salman was able to edit the photo to convincingly make it look like he and Noah were actually wearing suits and ties in the picture. But when he tries to make the same edits to a photo that has been immunized by PhotoGuard, the resulting image depicts Salman and Noah’s floating heads on an array of gray pixels.

    PhotoGuard is still a prototype, Salman notes, and there are ways people can try to work around the immunization via various tricks. But he said he hopes that with more engineering efforts, the prototype can be turned into a larger product that can be used to protect images.
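The mechanism Salman describes, an imperceptible but carefully bounded change to an image's pixel values, can be sketched in a few lines of Python. This is purely illustrative and is not MIT's actual PhotoGuard code: the real tool computes its perturbation by gradient-based optimization against the target AI model, whereas this stand-in uses random noise just to show the bounded, imperceptible nature of the change; the `epsilon` budget is an assumption.

```python
import numpy as np

def immunize(image: np.ndarray, epsilon: float = 8 / 255, seed: int = 0) -> np.ndarray:
    """Add an L-infinity-bounded perturbation to a float image in [0, 1].

    Hypothetical illustration only: PhotoGuard derives its perturbation by
    optimizing against the editing model; random noise stands in here so the
    bounded ("imperceptible") property is easy to see.
    """
    rng = np.random.default_rng(seed)
    # No pixel may change by more than epsilon (~3% of the brightness range).
    delta = rng.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image + delta, 0.0, 1.0)  # stay in the valid pixel range

# A flat gray "photo": the immunized copy is visually identical to a human,
# because every per-pixel change stays inside the epsilon budget.
photo = np.full((64, 64, 3), 0.5)
protected = immunize(photo)
print(np.max(np.abs(protected - photo)) <= 8 / 255)  # True
```

In the real system, the optimized perturbation is chosen so that the editing model's internal representation of the image is corrupted, which is why subsequent AI edits come out garbled rather than realistic.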

    While generative AI tools “allow us to do amazing stuff, it comes with huge risks,” Salman said. It’s good people are becoming more aware of these risks, he added, but it’s also important to take action to address them.

    Not doing anything "might actually lead to much more serious things than we imagine right now," he said.

  • T-Mobile to lay off 5,000 employees | CNN Business

    New York
    CNN
     — 

    T-Mobile on Thursday announced it plans to lay off 5,000 employees, or around 7% of its total staff, over the next five weeks.

    The reductions will largely affect corporate and back-office jobs that are “primarily duplicative” to other roles and will reduce the company’s middle management layers, CEO Mike Sievert said in a letter to employees Thursday. The company also plans to reduce its spending on “external workers and resources,” but its retail and “consumer care” staff who work directly with customers will not be affected, he said.

    “What it takes to attract and retain customers is materially more expensive than it was just a few quarters ago,” Sievert said.

    T-Mobile’s cuts come after months of mass layoff announcements at a range of other technology companies — including Microsoft and Meta — as firms grapple with an uncertain economic environment.

    In its most recent quarterly earnings report last month, T-Mobile said sales fell 2.5% year-over-year and net customer additions slipped slightly from the same period a year earlier, although it posted record-low customer churn and profit growth. T-Mobile’s stock has fallen more than 7% since last August; shares were trading down around 1% following the layoff announcement.

    In Thursday’s letter, Sievert said that in the three years since closing T-Mobile’s acquisition of rival carrier Sprint, it has been working to streamline the combined businesses and accelerate the build-out of its high-speed internet business. However, he suggested it was important for the company to now narrow its focus.

    “It is clear that doing everything we are doing and just doing it faster is not enough to deliver on these changing customer expectations going forward,” he said. “Today’s changes are all about getting us efficiently focused on a finite set of winning strategies.”

    T-Mobile plans to notify employees who will be laid off by the end of September. The company estimates it will incur a pre-tax charge of $450 million in the September quarter related to the reductions, according to a Thursday securities filing.

    Affected employees will receive “competitive severance packages” based on tenure, as well as accelerated stock vesting, access to career transition services and other benefits, Sievert told employees. He added that the company is not planning additional, widespread employee reductions in the foreseeable future.

  • South Korea’s Hynix is looking into how its chips got into Huawei’s controversial smartphone | CNN Business

    Hong Kong/Seoul
    CNN
     — 

    SK Hynix, a South Korean chipmaker, is investigating how two of its memory chips mysteriously ended up inside the Mate 60 Pro, a controversial smartphone launched by Huawei last week.

    Shares in Hynix fell more than 4% on Friday after it emerged that two of its products, a 12 gigabyte (GB) LPDDR5 chip and 512 GB NAND flash memory chip, were found inside the Huawei handset by TechInsights, a research organization based in Canada specializing in semiconductors, which took the phone apart for analysis.

    “The significance of the development is that there are restrictions on what SK Hynix can ship to China,” G Dan Hutcheson, vice chair of TechInsights, told CNN. “Where do these chips come from? The big question is whether any laws were violated.”

    A Hynix spokesperson told CNN Friday that it was aware of its chips being used in the Huawei phone and had started investigating the issue.

    The company “no longer does business with Huawei since the introduction of the US restrictions against the company,” it said in a statement.

    “SK Hynix is strictly abiding by the US government’s export restrictions,” the company said.

    Industry insiders said it was possible that Huawei had purchased the memory chips from the secondary market and not directly from the manufacturer. It’s also possible Huawei may have had a stockpile of components accumulated before the US export curbs kicked in fully.

    TechInsights had previously revealed that the “brains” of the phone were powered by a 5G Kirin 9000s chip made by China’s top chipmaker Semiconductor Manufacturing International Corporation, better known as SMIC.

    It is still examining the Mate 60 Pro and does not rule out the possibility of finding more components made by companies subject to US trade sanctions. So far, it has found that most of the phone’s components were provided by Chinese suppliers.

    Analysts have said the smartphone is a major breakthrough for China as it clashes with the United States over access to advanced technology.

    The development prompted two US congressmen, Mike Gallagher and Michael McCaul, to call on the White House – which is seeking more information about the phone – to further restrict technology export sales to Chinese companies.

    Huawei and SMIC have not replied to requests for comment.

    In 2019, the US government banned American companies from selling software and equipment to Huawei. It also restricted international chipmakers using US-made technology from working with the company.

    That is why, four years later, last week’s launch of the Mate 60 Pro shocked industry experts who didn’t understand how Huawei, which is headquartered in Shenzhen, would have the ability to manufacture such an advanced smartphone following sweeping efforts by the United States to restrict China’s access to foreign chip technology.

  • Kevin McCarthy opens impeachment inquiry without passing budget despite once criticizing Democrats for the same | CNN Politics

    CNN
     — 

    In 2019, then-Republican House Minority Leader Kevin McCarthy vehemently criticized Democrats for initiating an impeachment inquiry against President Donald Trump without first passing a budget and securing government funding to prevent a shutdown.

    Fast forward four years later and McCarthy, now the House Speaker, is pushing ahead with a formal impeachment inquiry into President Joe Biden while in the midst of another budget crisis and an unresolved looming government shutdown.

    McCarthy called for the inquiry, even as House Republicans have yet to prove allegations that Biden profited off of his son’s foreign business dealings, to appease far-right members of the Republican caucus who have threatened his speakership.

    In 2019, McCarthy said Democrats were prioritizing a politically-driven impeachment of Trump over the government’s basic responsibilities.

    “This is the day that Alexander Hamilton feared and warned would come,” he said at a news conference on December 5, 2019. “This is the day the nation is weaker because they surely cannot put their animosity or their fear of losing an election in the future in front of all the other things that the American people want.”

    “They don’t even have a budget,” he added. Congress passed a spending package a few weeks later, averting a government shutdown.

    McCarthy did not respond to CNN’s request for comment.

    Now Congress faces a looming deadline at the end of the month to fund the government and some conservative members of the Republican caucus say they will not support a bill that doesn’t contain spending cuts.

    In comments made on radio shows and in press conferences in 2019 reviewed by CNN’s KFile, McCarthy repeatedly said Democrats’ actions demeaned the impeachment process to a point that every subsequent president could be impeached – something he said he hoped wouldn’t happen.

    “This is exactly what Alexander Hamilton warned us about, that with impeachment, that you would have a party actually grab it and, and not worry about the rule of law, but just the animosity that you have. And I’ve never seen the animosity in our lifetime,” said McCarthy to California local radio station KERN in late December 2019. “I’m sure there’s been animosity like this before, but not to this level. And maybe social media and other things drive it.

    “And if you, and if you lower it to this level, when they ended up with just those two articles, every president would’ve been impeached. And what does it mean for the future? Have we, have we now demeaned impeachment so low that everybody’s gonna have this?” he added.

    “Sometimes something happens so bad we need to learn from and come back from at this moment in time,” McCarthy continued. “I hope that’s the moment of where we are.”

    Trump was impeached for the first time by the House of Representatives in 2019 on charges of abuse of power and obstruction of Congress. The impeachment proceedings were initiated after allegations that he solicited foreign interference from Ukraine to benefit his 2020 reelection campaign and obstructed the subsequent congressional investigation.

    Trump was acquitted by the Senate in early 2020.

    McCarthy made similar comments at a press conference in November 2019.

    “I think what Republicans are doing is standing up for the constitution,” said McCarthy. “I think it’s the same thing that Alexander Hamilton warned us about, that you would use it for political gain from the same basis of going forward.

    “I think what Republicans are standing up for is the idea of what they ran on. First thing, I think a majority should do is pass a budget, which the Democrats have not done. They should actually make sure that they fund the government, which we have not done. We’re working to now have another continuing resolution, so our troops are not being provided the resources they need or the pay raise that they have earned.”

    McCarthy also lamented that impeachment has “overtaken every single committee” and emphasized “what is not being done in Congress.”

  • Bill Gates, Elon Musk and Mark Zuckerberg meeting in Washington to discuss future AI regulations | CNN Business

    Washington
    CNN
     — 

    Coming out of a three-hour Senate hearing on artificial intelligence, Elon Musk, the head of a handful of tech companies, summarized the grave risks of AI.

    “There’s some chance – above zero – that AI will kill us all. I think it’s low but there’s some chance,” Musk told reporters. “The consequences of getting AI wrong are severe.”

    But he also said the meeting “may go down in history as being very important for the future of civilization.”

    The session, organized by Senate Majority Leader Chuck Schumer, brought together high-profile tech CEOs, civil society leaders and more than 60 senators. The first of nine planned sessions, it aims to build consensus as the Senate prepares to draft legislation to regulate the fast-moving artificial intelligence industry. The group included the CEOs of Meta, Google, OpenAI, Nvidia and IBM.

    All the attendees raised their hands — indicating “yes” — when asked whether the federal government should oversee AI, Schumer told reporters Wednesday afternoon. But consensus on what that role should be and specifics on legislation remained elusive, according to attendees. 

    Benefits and risks

    Bill Gates spoke of AI’s potential to feed the hungry and one unnamed attendee called for spending tens of billions on “transformational innovation” that could unlock AI’s benefits, Schumer said.

    The challenge for Congress is to promote those benefits while mitigating the societal risks of AI, which include the potential for technology-based discrimination, threats to national security and even, as X owner Musk said, “civilizational risk.”

    “You want to be able to maximize the benefits and minimize the harm,” said Schumer, who organized the first of nine sessions. “And that will be our difficult job.”

    Senators emerging from the meeting said they heard a broad range of perspectives, with representatives from labor unions raising the issue of job displacement and civil rights leaders highlighting the need for an inclusive legislative process that provides the least powerful in society a voice.

    Most agreed that AI could not be left to its own devices, said Washington Democratic Sen. Maria Cantwell.

    “I thought Satya Nadella from Microsoft said it best: ‘When it comes to AI, we shouldn’t be thinking about autopilot. You need to have copilots.’ So who’s going to be watching this activity and making sure that it’s done correctly?”

    Other areas of agreement reflected traditional tech industry priorities, such as increasing federal investment in research and development as well as promoting skilled immigration and education, Cantwell added.

    But there was a noticeable lack of engagement on some of the harder questions, she said, particularly on whether a new federal agency is needed to regulate AI.

    “There was no discussion of that,” she said, though several in the meeting raised the possibility of assigning some greater oversight responsibilities to the National Institute of Standards and Technology, a Commerce Department agency.

    Musk told journalists after the event that he thinks a standalone agency to regulate AI is likely at some point.

    “With AI we can’t be like ostriches sticking our heads in the sand,” Schumer said, according to prepared remarks obtained by CNN. He also noted this is “a conversation never before seen in Congress.”

    The push reflects policymakers’ growing awareness of how artificial intelligence, and particularly the type of generative AI popularized by tools such as ChatGPT, could potentially disrupt business and everyday life in numerous ways — ranging from increasing commercial productivity to threatening jobs, national security and intellectual property.

    The high-profile guests trickled in shortly before 10 a.m., with Meta CEO Mark Zuckerberg pausing to chat with Nvidia CEO Jensen Huang outside the Russell Senate Office Building’s Kennedy Caucus Room. Google CEO Sundar Pichai was seen huddling with Delaware Democratic Sen. Chris Coons, while X owner Musk quickly swept by a mass of cameras with a quick wave to the crowd. Inside, Musk was seated at the opposite end of the room from Zuckerberg, in what is likely the first time that the two men have shared a room since they began challenging each other to a cage fight months ago.

    Elon Musk, CEO of X, the company formerly known as Twitter, left, and Alex Karp, CEO of the software firm Palantir Technologies, take their seats as Senate Majority Leader Chuck Schumer, D-N.Y., convenes a closed-door gathering of leading tech CEOs to discuss the priorities and risks surrounding artificial intelligence and how it should be regulated, at the Capitol in Washington, Wednesday, Sept. 13, 2023.

    The session at the US Capitol in Washington also gave the tech industry its most significant opportunity yet to influence how lawmakers design the rules that could govern AI.

    Some companies, including Google, IBM, Microsoft and OpenAI, have already offered their own in-depth proposals in white papers and blog posts that describe layers of oversight, testing and transparency.

    IBM’s CEO, Arvind Krishna, argued in the meeting that US policy should regulate risky uses of AI, as opposed to just the algorithms themselves.

    “Regulation must account for the context in which AI is deployed,” he said, according to his prepared remarks.

    Executives such as OpenAI CEO Sam Altman previously wowed some senators by publicly calling for new rules early in the industry’s lifecycle, which some lawmakers see as a welcome contrast to the social media industry that has resisted regulation.

    Clement Delangue, co-founder and CEO of the AI company Hugging Face, tweeted last month that Schumer’s guest list “might not be the most representative and inclusive,” but that he would try “to share insights from a broad range of community members, especially on topics of openness, transparency, inclusiveness and distribution of power.”

    Civil society groups have voiced concerns about AI’s possible dangers, such as the risk that poorly trained algorithms may inadvertently discriminate against minorities, or that they could ingest the copyrighted works of writers and artists without compensation or permission. Some authors have sued OpenAI over those claims, while others have asked in an open letter to be paid by AI companies.

    News publishers such as CNN, The New York Times and Disney are some of the content producers who have blocked ChatGPT from using their content. (OpenAI has said exemptions such as fair use apply to its training of large language models.)

    “We will push hard to make sure it’s a truly democratic process with full voice and transparency and accountability and balance,” said Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, “and that we get to something that actually supports democracy; supports economic mobility; supports education; and innovates in all the best ways and ensures that this protects consumers and people at the front end — and just not try to fix it after they’ve been harmed.”

    The concerns reflect what Wiley described as “a fundamental disagreement” with tech companies over how social media platforms handle misinformation, disinformation and speech that is either hateful or incites violence.

    American Federation of Teachers President Randi Weingarten said America can’t make the same mistake with AI that it did with social media. “We failed to act after social media’s damaging impact on kids’ mental health became clear,” she said in a statement. “AI needs to supplement, not supplant, educators, and special care must be taken to prevent harm to students.”

    Navigating those diverse interests will be Schumer, who along with three other senators — South Dakota Republican Sen. Mike Rounds, New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — is leading the Senate’s approach to AI. Earlier this summer, Schumer held three informational sessions for senators to get up to speed on the technology, including one classified briefing featuring presentations by US national security officials.

    Wednesday’s meeting with tech executives and nonprofits marked the next stage of lawmakers’ education on the issue before they get to work developing policy proposals. In announcing the series in June, Schumer emphasized the need for a careful, deliberate approach and acknowledged that “in many ways, we’re starting from scratch.”

    “AI is unlike anything Congress has dealt with before,” he said, noting the topic is different from labor, healthcare or defense. “Experts aren’t even sure which questions policymakers should be asking.”

    Rounds said hammering out the specific scope of regulations will fall to Senate committees. Schumer added that the goal — after hosting more sessions — is to craft legislation over “months, not years.”

    “We’re not ready to write the regs today. We’re not there,” Rounds said. “That’s what this is all about.”

    A smattering of AI bills have already emerged on Capitol Hill and seek to rein in the industry in various ways, but Schumer’s push represents a higher-level effort to coordinate Congress’s legislative agenda on the issue.

    New AI legislation could also serve as a potential backstop to voluntary commitments that some AI companies made to the Biden administration earlier this year to ensure their AI models undergo outside testing before they are released to the public.

    But even as US lawmakers prepare to legislate by meeting with industry and civil society groups, they are already months if not years behind the European Union, which is expected to finalize a sweeping AI law by year’s end that could ban the use of AI for predictive policing and restrict how it can be used in other contexts.

    A bipartisan pair of US senators sharply criticized the meeting, saying the process is unlikely to produce results and does not do enough to address the societal risks of AI.

    Connecticut Democratic Sen. Richard Blumenthal and Missouri Republican Sen. Josh Hawley each spoke to reporters on the sidelines of the meeting. The two lawmakers recently introduced a legislative framework for artificial intelligence that they said represents a concrete effort to regulate AI — in contrast to what was happening steps away behind closed doors.

    “This forum is not designed to produce legislation,” Blumenthal said. “Our subcommittee will produce legislation.”

    Blumenthal added that the proposed framework — which calls for setting up a new independent AI oversight body, as well as a licensing regime for AI development and the ability for people to sue companies over AI-driven harms — could lead to a draft bill by the end of the year.

    “We need to do what has been done for airline safety, car safety, drug safety, medical device safety,” Blumenthal said. “AI safety is no different — in fact, potentially even more dangerous.”

    Hawley called Wednesday’s sessions “a giant cocktail party” for the tech industry and slammed the fact that it was private.

    “I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money, and then close it to the public,” Hawley said. “I mean, that’s a terrible idea. These are the same people who have ruined social media.”

    Despite talking tough on tech, Schumer has moved extremely slowly on tech legislation, Hawley said, pointing to several major tech bills from the last Congress that never made it to a Senate floor vote.

    “It’s a little bit like antitrust the last two years,” Hawley said. “He talks about it constantly and does nothing about it. My sense is … this is a lot of song and dance that covers the fact that actually nothing is advancing. I hope I’m wrong about that.”

    Hawley is also a co-sponsor of a bill introduced Tuesday led by Minnesota Democratic Sen. Amy Klobuchar that would prohibit generative AI from being used to create deceptive political ads. Klobuchar and Hawley, along with fellow co-sponsors Coons and Maine Republican Sen. Susan Collins, said the measure is needed to keep AI from manipulating voters.

    Massachusetts Democratic Sen. Elizabeth Warren said the broad nature of the summit limited its potential.

    “They’re sitting at a big, round table all by themselves,” Warren said of the executives and civil society leaders, while all the senators sat, listened and didn’t ask questions. “Let’s put something real on the table instead of everybody agree[ing] that we need safety and innovation.”

    Schumer said that making the meeting confidential was intended to give lawmakers the chance to hear from the outside in an “unvarnished way.”

  • YouTube unveils a slew of new AI-powered tools for creators | CNN Business

    CNN
     — 

    YouTube on Thursday unveiled a slew of new artificial intelligence-powered tools to help creators produce videos and reach a wider audience on the platform, as companies race to incorporate buzzy generative AI technology directly into their core products.

    “We want to make it easier for everyone to feel like they can create, and we believe generative AI will make that possible,” Neal Mohan, YouTube’s CEO, told reporters Thursday during the company’s annual Made On YouTube product event.

    “AI will enable people to push the boundaries of creative expression by making the difficult things simple,” Mohan added. He said YouTube is trying to bring “these powerful tools” to the masses.

    The video platform, under the Alphabet-Google umbrella, teased a new generative AI feature dubbed Dream Screen specifically for its short-form video arm and TikTok competitor, YouTube Shorts. Dream Screen is an experimental feature that lets creators add AI-generated video or image backgrounds to their vertical videos.

    To use Dream Screen, creators can type their idea for a background as a prompt and the platform will do the rest. A user, for example, could create a background that makes it look like they are in outer space or on a beach where the sand is made out of jelly beans, per demos of the tool shared on Thursday.

    Dream Screen is being introduced to select creators and will be rolled out more broadly next year, the company said.

    YouTube also unveiled new AI-powered tools that creators can access to help brainstorm or draft outlines for videos or search for specific music using descriptive phrases. YouTube said it was bringing an AI-powered dubbing tool that will let users share their videos in different languages.

    AI-powered tools in YouTube Studio.

    Alan Chikin Chow, 26, a content creator based in Los Angeles who recently hit 30 million subscribers on YouTube, told CNN that he is most excited about using the new AI-powered dubbing tool for his comedy videos. Chikin Chow currently boasts the title of the most-watched YouTube Shorts creator in the world.

    “I think global content is the future,” Chikin Chow told CNN. “If you look at the trends of our recent generation, the things that have really impacted and moved culture are ones that are global,” he added, citing the Korean smash-hit TV series “Squid Game” as one example.

    Using the AI-powered dubbing features, he said he hopes to reach audiences in new corners of the world that might not otherwise be able to engage with his content.

    Alan Chikin Chow attends the 2022 YouTube Streamy Awards at the Beverly Hilton in Los Angeles on December 4, 2022.

    Chikin Chow added that he’s also excited to use the new editing tools to help save time.

    The rise of generative AI has animated the tech sector and broader public — becoming the latest buzzword out of Silicon Valley since the launch of OpenAI’s ChatGPT service late last year.

    Some industry watchers and AI skeptics have argued that powerful new AI tools carry potential dangers, such as making it easier to spread misinformation via deepfake images, or perpetuate biases at a larger scale. Many creative professionals — whose works are often swept up into the datasets required to train and power AI tools — are also raising the alarm over potential intellectual property rights issues.

    And some prominent figures inside and outside the tech industry even say there’s a potential that AI can result in civilization “extinction” and compare its potential risk to that of “nuclear war.”

    Despite the frenzy AI has caused, Chikin Chow told CNN that he ultimately views it as a “collaborator” and a “supplement” to help propel his creative work forward.

    “I think that the people who are able to take change and move with it are the ones that are going to be successful long term,” Chikin Chow said.

  • ChatGPT can now hear, see and speak as OpenAI gives the chatbot its most humanlike update | CNN Business

    CNN
     — 

    You can now speak aloud to ChatGPT and hear the artificial intelligence-powered chatbot talk back.

    OpenAI, the startup behind the wildly popular chatbot, announced Monday that it is rolling out new features, including the ability for users to engage in a back-and-forth voice conversation with ChatGPT.

    In a company blog post Monday, OpenAI teased how this new feature can be used to “request a bedtime story for your family, or settle a dinner table debate.”

    The new voice features from OpenAI carry similarities to those currently offered by Amazon’s Alexa or Apple’s Siri voice assistants.

    In a demo of the new update shared by OpenAI, a user asks ChatGPT to come up with a story about “the super-duper sunflower hedgehog named Larry.” The chatbot is able to narrate a story out loud with a human-sounding voice that can also respond to questions, such as, “What was his house like?” and “Who is his best friend?”

    ChatGPT’s voice capability is “powered by a new text-to-speech model, capable of generating human-like audio from just text and a few seconds of sample speech,” OpenAI said in the blog post. The company added that it collaborated with professional voice actors to create the five different voices that can be used to animate the chatbot.

    OpenAI also said on Monday that it’s rolling out a new feature that lets the bot respond to prompts featuring an image. For example, you can snap a picture of the contents of your fridge and ask ChatGPT to help you come up with a meal plan using the ingredients you have. Moreover, the company said you can ask the chatbot to focus on a specific part of an image with its “drawing tool” in the app.

    The new features will roll out in the app over the next two weeks for paying subscribers to ChatGPT’s Plus and Enterprise services. (Plus subscriptions cost $20 a month; the Enterprise service is currently offered only to business clients.)

    The updates from OpenAI come amid an ongoing AI arms race within the tech sector, initially spurred by the public launch of ChatGPT late last year. In recent weeks, tech giants have been racing to roll out new updates that incorporate more AI-powered tools directly into their core products. Google last week announced a series of updates to its ChatGPT competitor Bard. Also last week, Amazon said it was bringing a generative AI-powered update to its Alexa voice assistant.

  • Takeaways from the second Republican presidential debate | CNN Politics

    CNN
     — 

    The second 2024 Republican presidential primary debate ended just as it began: with former President Donald Trump – who hasn’t yet appeared alongside his rivals onstage – as the party’s dominant front-runner.

    The seven GOP contenders in Wednesday night’s showdown at the Ronald Reagan Presidential Library in California provided a handful of memorable moments, including former South Carolina Gov. Nikki Haley unloading what often seemed like the entire field’s pent-up frustration with entrepreneur Vivek Ramaswamy.

    “Honestly, every time I hear you, I feel a little bit dumber for what you say,” she said to him at one point.

    Two candidates criticized Trump’s absence, as well. Florida Gov. Ron DeSantis said he was “missing in action.” Former New Jersey Gov. Chris Christie called the former president “Donald Duck” and said he “hides behind his golf clubs” rather than defending his record on stage.


    The GOP field also took early shots at President Joe Biden. South Carolina Sen. Tim Scott said Biden, rather than joining the striking auto workers’ union on the picket line Tuesday in Michigan, should be on the southern border. Former Vice President Mike Pence said Biden should be “on the unemployment line.” North Dakota Gov. Doug Burgum said Biden was interfering with “free markets.”

    However, what played out in the debate, hosted by Fox Business Network and Univision, is unlikely to change the trajectory of a GOP race in which Trump has remained dominant in national and early-state polling.

    And the frequently messy, hard-to-track crosstalk could have led many viewers to tune out entirely.

    Here are takeaways from the second GOP primary debate:

    Trump might have played it safe by skipping the debates and taking a running-as-an-incumbent approach to the 2024 GOP primary.

    It’s hard to see, though, how he would pay a significant price in the eyes of the party’s voters for missing Wednesday night’s messy engagement.

    Trump’s rivals took a few shots at him. DeSantis knocked him for deficit spending. Christie mocked him during the night’s early moments, calling him “Donald Duck” for skipping the debate and then in his final comments said he would vote Trump off the GOP island.

    “This guy has not only divided our party – he’s divided families all over this country. He’s divided friends all over this country,” Christie said. “He needs to be voted off the island and he needs to be taken out of this process.”

    However, Trump largely escaped serious scrutiny of his four years in the Oval Office from a field of rivals courting voters who have largely positive views of his presidency.

    “Tonight’s GOP debate was as boring and inconsequential as the first debate, and nothing that was said will change the dynamics of the primary contest,” Trump campaign senior adviser Chris LaCivita said in a statement.

    The second GOP primary debate was beset by interruptions, crosstalk and protracted squabbles between the candidates and moderators over speaking time.

    That’s tough for viewers trying to make sense of it all but even worse for these candidates as they attempted to stand out as viable alternatives to the absentee Trump.

    Further complicating the matter, some of the highest-polling candidates after Trump – DeSantis and Haley – were among those least willing to dive into the muck, especially during the crucial first hour. The moderators repeatedly tried to clear the road for the Florida governor, at least in the beginning. But he was all but absent from the proceedings for the first 15 minutes.

    Ramaswamy fared somewhat better, speaking louder – and faster – than most of his rivals. But he was bogged down repeatedly when caught between his own talking points and cross-volleys of criticisms from frustrated candidates like Scott.

    The moderator group will likely get criticism for losing control of the room within the first half-hour, but even a messy debate tells voters something about the people taking part.

    All night, Scott seemed like he was looking for a fight with somebody and he finally got that when he set his sights on fellow South Carolinian Haley.

    He began his line of attack – which Haley interrupted with a “Bring it” – by accusing her of spending $50,000 on curtains in a $15 million subsidized location during her time as the US ambassador to the United Nations.

    What ensued was the two Republicans going back and forth about the curtains. “Do your homework, Tim, because Obama bought those curtains,” Haley said, while Scott repeated, “Did you send them back? Did you send them back?” Haley then responded: “Did you send them back? You’re the one who works in Congress.”

    It wasn’t the most acrimonious moment of the night, but it was up there. The feuding between the two South Carolina natives seemed deep, but it’s worth remembering that about a decade ago, when Haley was governor, she appointed Scott to the Senate seat he currently holds after Republican Jim DeMint stepped down. That confidence in Scott seems to have dissolved in this presidential race.

    Confronted by his Republican competitors for the first time in earnest, DeSantis delivered an uneven performance from the center of the stage – a spot that is considerably less secure than it was heading into the first debate in Milwaukee.

    Despite rules that allowed candidates to respond if they were invoked, DeSantis let Fox slip to commercial break when Pence seemed to blame the governor for a jury decision to award a life sentence, not the death penalty, to the mass murderer in the Parkland high school shooting. (DeSantis opposed the decision and championed a law that made Florida the state with the lowest threshold to put someone on death row going forward.) Nor did he respond when Pence accused DeSantis of inflating Florida’s budget by 30% during his tenure.

    He later let Scott get the last word on Florida’s Black history curriculum standards and struggled to defend himself when Haley – accurately – pointed out that he took steps to block fracking in Florida on his second day in office.

    Before the first debate in Milwaukee, a top strategist for a pro-DeSantis super PAC told donors that “79% of the people tonight are going to watch the debate and turn it off after 19 minutes.”

    By that measure, the Florida governor managed to first speak Wednesday night just in the nick of time – 16 minutes into the debate. And when he finally spoke, he continued the sharper attacks on the GOP front-runner that he has previewed in recent weeks.

    DeSantis equated Trump’s absence in California to Biden, who DeSantis said was “completely missing in action for leadership” on the economy, blaming him for inflation and the autoworkers strike.

    “And you know who else is missing in action? Donald Trump is missing in action,” DeSantis said. “He should be on this stage tonight. He owes it to you to defend his record.”

    But DeSantis then largely pulled back from further targeting Trump – until a post-debate Fox News appearance when he challenged the former president to a one-on-one face-off.

    DeSantis ended the debate on a strong note. He took charge by rejecting moderator Dana Perino’s attempts to get the candidates to vote one of their competitors “off the island.” He ended his night forcefully dismissing a suggestion that Trump’s lead in the polls held meaning in September.

    “Polls don’t elect presidents, voters elect presidents,” he said, before pointing a finger at Trump for Republicans’ electoral underperformance in the last three elections.

    But as the super PAC strategist previously pointed out: By then, who was watching?

    In the final minutes of the debate, co-host Ilia Calderón of Univision asked Pence how he would reach out to those Latino voters who felt the Republican Party was hostile or didn’t care about them.

    “I’m incredibly proud of the tax cut and tax reform bill,” he said, referring to Republicans’ sweeping 2017 tax law. He also cited low unemployment rates for Hispanic Americans recorded during the Trump-Pence administration.

    Scott, faced with the same question, said it was important to lead by example. “My chief of staff is the only Hispanic female chief of staff in the Senate,” he said. “I hired her because she was the best, highest-qualified person we have.”

    Calderón focused much of her time on a series of policy questions that highlighted the candidates’ records on immigration and gun violence. At times, some of them struggled to respond directly.

    She asked Pence if he would work with Congress to find a permanent solution for people who were brought to the country illegally as children. The Trump-Pence administration ended the Deferred Action for Childhood Arrivals program, which gave those young people protected status. She repeated the question after Pence focused his answer on his work securing the border. He then talked about his time in Congress.

    “Let me tell you, I served in Congress for 12 years, although it seemed longer,” he said. “But you know, something I’ve done different than everybody on this stage is I’ve actually secured reform in Congress.”

    The candidates – and moderators – shy away from abortion talk

    It took more than 100 minutes on Wednesday night for the first question on abortion to be asked.

    About five minutes later, the conversation had moved on. What is potentially the most potent driver (or flipper) of votes in the coming election was afforded less time than TikTok.

    Tellingly, no one onstage seemed to mind.

    Perino introduced the subject by asking DeSantis whether some Republicans were right to worry that the electoral backlash to abortion bans – or the prospect of their passage – would handicap the eventual GOP nominee.

    DeSantis, who signed a six-week ban in April, dismissed those concerns, pointing to his success in traditionally liberal parts of Florida on his way to winning a second term in 2022. Then he swiped at Trump for calling the new laws “a terrible thing and a terrible mistake.”

    Christie took a similar path, arguing that his two terms as governor of New Jersey, a traditionally blue state, showed it was possible for anti-abortion leaders to win in environments supportive of abortion rights.

    And with that, the abortion “debate” in Simi Valley ended abruptly. No more questions and no attempts by the rest of the candidates to interject or otherwise join the chat.

    Candidates pile on Ramaswamy

    Some of the candidates onstage didn’t want to have a repeat of the first debate, in which Ramaswamy managed to stand out as a formidable debater and showman.

    Early in Wednesday’s debate, Scott went after the tech entrepreneur, saying his business record included ties to the Chinese Communist Party and money going to Hunter Biden. The visibly annoyed Ramaswamy shifted gears from praising all the other candidates onstage to defending his business record. But Scott and Ramaswamy ended up talking over each other.

    A little later on, Pence began an answer with a knock on Ramaswamy, saying, “I’m glad Vivek pulled out of his business deal in China.” At another point, after Ramaswamy had responded to a question about his use of TikTok, Haley jumped in, saying, “Every time I hear you, I feel a little bit dumber from what you say” and then going on to say, “We can’t trust you. We can’t trust you.” As Ramaswamy tried to readopt his unity tone, Scott could be heard trying to interrupt him.

    Despite the efforts of moderators to pin them down, DeSantis and Pence struggled to respond when challenged on their respective records on health care.

    Asked about the Trump administration’s failure to end the Affordable Care Act as promised, Pence opted instead to answer a previous question about mass gun violence. When Perino pushed Pence one more time to explain why Obamacare remains not just intact but popular, the former vice president once again demurred.

    Fox’s Stuart Varney similarly pressed DeSantis to explain why 2.5 million Floridians don’t have health insurance.

    DeSantis found a familiar foil for Republicans in California: inflation. Varney, though, said it didn’t explain why Florida has one of the highest uninsurance rates in the country, to which DeSantis had little response.

    “Our state’s a dynamic state,” DeSantis said, before pointing to Florida’s population boom and the low level of welfare benefits offered there.

    Haley, though, appeared ready to debate health care, arguing for transparency in prices to lessen the power of insurance companies and providers and overhauling lawsuit rules to make it harder to sue doctors.

    “How can we be the best country in the world and have the most expensive health care in the world?” Haley said.

    This story has been updated with additional information.


  • FCC to reintroduce rules protecting net neutrality | CNN Business





    CNN —

    The US government aims to restore sweeping regulations for high-speed internet providers such as AT&T, Comcast and Verizon, reviving “net neutrality” rules for the broadband industry — and an ongoing debate about the internet’s future.

    The proposed rules from the Federal Communications Commission will designate internet service — both the wired kind found in homes and businesses as well as mobile data on cellphones — as “essential telecommunications” akin to traditional telephone services, said FCC Chairwoman Jessica Rosenworcel. The rules would ban internet service providers (ISPs) from blocking or slowing down access to websites and online content.

    In addition to the prohibitions on blocking and throttling internet traffic, the draft rules also seek to prevent ISPs from selectively speeding up service to favored websites or to those that agree to pay extra fees, Rosenworcel said, a move designed to prevent the emergence of “fast lanes” on the web that could give some websites a paid advantage over others.

    With Tuesday’s proposal, the FCC aims to restore Obama-era regulations that the FCC under Republican leadership rolled back during the Trump administration.

    But the proposal is likely to trigger strong pushback from internet providers who have spent years fighting earlier versions of the rules in court.

    Beyond their immediate impact on internet providers, the draft rules directly help US telecom regulators address a range of consumer issues in the longer run by allowing the FCC to bring its most powerful legal tools to bear, Rosenworcel said. Some of the priorities the FCC could address after the implementation of net neutrality rules include spam robotexts, internet outages, digital privacy and high-speed internet access, said Rosenworcel in a speech at the National Press Club Tuesday to announce the proposal.

    Rosenworcel said reclassifying internet service providers as essential telecommunications entities — by regulating them under Title II of the FCC’s congressional charter — would provide the FCC with clearer authority to adopt future rules governing everything from public safety to national security.

    Rosenworcel argued, “without reclassification, the FCC has limited authority to incorporate updated cybersecurity standards into our network policies.”

    She added that traditional telephone companies currently cannot sell customer data, but those restrictions do not apply to ISPs, which are regulated differently. “Does that really make sense? Do we want our broadband providers selling off where we go and what we do online?”

    Regulating internet providers using the most powerful tools at the FCC’s disposal would let the agency crack down harder on spam robotexts, Rosenworcel said, as spammers are “constantly evolving their techniques.”

    And the proposed rules could promote the Biden administration’s agenda to blanket the country in fast, affordable broadband, she argued, by granting internet providers the right to put their equipment on telephone poles.

    “As a nation we are committed, post-pandemic, to building broadband for all,” she said. “So keep in mind that when you construct these facilities, utility poles are really important.”

    The FCC plans to vote Oct. 19 on whether to advance the draft rules by soliciting public feedback on them — a step that would precede the creation of any final rules.

    Net neutrality rules are more necessary than ever, Rosenworcel said in her speech, after millions of Americans discovered the vital importance of reliable internet access during the Covid-19 pandemic. Rosenworcel also made the case that a single, national standard on net neutrality could give businesses the certainty they need to speed up efforts to blanket the nation in fast, affordable broadband.

    But Rosenworcel’s push is already inviting a widespread revolt from internet providers that make up some of the most powerful and well-resourced groups in Washington.

    The proposal could also lead to more of what has helped make net neutrality a household term over the past decade: Late-night segments by comedians including John Oliver and Stephen Colbert; in-person demonstrations, including at the FCC’s headquarters and at the home of its chair; allegations of fake, AstroTurfed public comments and claims of cyberattacks; and even threats of violence.

    The latest net neutrality rulemaking reflects one of the most visible efforts of Rosenworcel’s chairwomanship — and one of her first undertakings since the US Senate this month confirmed Anna Gomez as the agency’s fifth commissioner, breaking a years-long 2-2 partisan deadlock at the FCC that had prevented hot-button initiatives from moving forward.

    The draft rules also show how a continued lack of federal legislation to establish a nationwide net neutrality standard has led to continued flip-flopping rules for ISPs with every change of political administration, along with a patchwork of state laws seeking to fill the gap.

    If approved next month, the FCC draft would be opened for public comment until approximately mid-December, followed by an opportunity for public replies lasting into January. A final set of rules could be voted on in the months following.

    For years, consumer advocacy groups have called for strong rules that could prevent ISPs from distorting the free flow of information on the internet using arbitrary or commercially motivated traffic rules.

    In contrast, ISPs have long argued that websites using up big portions of a network’s capacity, such as search engines or video streaming sites, should pay for the network demand their users generate. European Union officials are said to be considering just such a proposal.

    A third rail of broadband policy

    In attempting to revive the agency rules, the FCC is once again touching what has become the third rail of US broadband policy: Title II of the Communications Act of 1934, the law that gave the FCC its congressional mandate to regulate legacy telephone services.

    Tuesday’s proposal moves to regulate ISPs under Title II, which would give the FCC clearer authority to impose rules against blocking, throttling and paid prioritization of websites. The draft rules are substantially similar to the rules the FCC passed in 2015, according to people familiar with the matter. The rules were upheld in 2016 by a federal appeals court in Washington in the face of an industry lawsuit.

    Soon after that ruling, however, Donald Trump won the White House and named Ajit Pai, then one of the FCC’s Republican commissioners, as its chair. Among Pai’s first acts as agency chief was to propose a rollback of the earlier net neutrality rules. The FCC voted in 2017 to reverse the rules, with Pai arguing that the repeal would accelerate private investment in broadband networks and free the industry from heavy-handed regulation. The repeal took effect in 2018.

    In the time since, ISPs have refrained from doing the kind of blocking and preferential treatment that net neutrality advocates have warned could occur, but Rosenworcel’s proposal highlights how concerns about that possibility have persisted.

    The Biden administration on Tuesday praised the FCC’s plan to reintroduce net neutrality rules for broadband providers.

    “President Biden supports net neutrality so that large corporations can’t pick and choose what content you can access online or charge you more for certain content,” said Hannah Garden-Monheit, special assistant to the president for economic policy. “Today’s announcement is a major step forward for American consumers and small businesses and demonstrates the importance of the president’s push to restore competition in our economy.”

    Net neutrality began as a bipartisan issue, with the George W. Bush administration issuing some of the earliest principles for an open internet that led to FCC attempts at concrete regulation in 2010 and again in 2015.

    The telecom and cable industries have long opposed the use of Title II to regulate broadband, arguing that it would be a form of government overreach, that telephone-style regulations are not suited for digital technologies, and that it would discourage private investment in broadband networks, hindering Americans’ ability to get online.

    “Treating broadband as a Title II utility is a dangerous and costly solution in search of a problem,” said USTelecom, a prominent industry trade group, in a statement Tuesday. “Congress must step in on this major question and end this game of regulatory ping-pong. The future of the open, vibrant internet we now enjoy hangs in the balance.”

    The reference to net neutrality as a “major question” offers clues about possible future litigation involving the proposal, as the Supreme Court has increasingly invoked the “major questions” doctrine to scrutinize federal agency initiatives.

    In her speech Tuesday, Rosenworcel acknowledged the coming pushback — as well as past incidents involving supporters of strong net neutrality rules.

    “I have every expectation that this process will get messy at times,” Rosenworcel said. “In the past, when this subject came up, we saw death threats against [former Republican FCC Chairman Ajit Pai] and his family. That is completely unacceptable, and I am grateful to law enforcement for bringing the individual behind these threats to justice. We had a fake bomb threat called in to disrupt a vote at the agency. We had protesters blocking [former Democratic FCC Chairman Tom Wheeler] in his driveway and keeping him from his car. We saw a dark effort to tear down a pro-net neutrality nominee for the agency.”

    Part of what made the FCC’s 2015 rules particularly controversial, however, was that classifying ISPs as Title II providers meant the agency could theoretically attempt to set prices for internet service directly, a prospect that ISPs widely feared but that the FCC in 2015 promised not to do.

    Tuesday’s proposal makes the same commitment, according to people familiar with the matter, forbearing from 26 provisions of Title II and more than 700 other agency rules that could be seen as intrusive. The draft rules also prohibit the FCC from forcing ISPs to share their network infrastructure with other, competing internet providers, a concept known as network unbundling.

    On top of fierce industry pushback in the FCC’s comments process, the proposal could also lead to legal challenges against the FCC. While the 2015 net neutrality rules survived on appeal, suggesting the current FCC may be on firm ground to issue the current proposed rules, the draft comes as the Supreme Court has moved to reconsider the power of federal agencies by scrutinizing courts’ decades-long deference to their expert authority.


  • Microsoft, Amazon facing UK antitrust probe over cloud services | CNN Business




    London
    CNN —

    Microsoft and Amazon could be in hot water over allegedly making it difficult for UK customers to use multiple suppliers of vital cloud services.

    The Competition and Markets Authority (CMA), the country’s antitrust regulator, said Thursday it was launching an investigation into the UK cloud infrastructure services market to determine whether players were engaged in anti-competitive practices.

    Cloud computing firms, such as Microsoft and Amazon Web Services (AWS), use data centers around the world to provide remote access to computing services and storage. This “cloud infrastructure” forms the foundation for how software applications, such as Gmail and Dropbox, are developed and run.

    The CMA probe has been initiated following a report from Britain’s media and communications regulator Ofcom, which found that the supply of cloud infrastructure in the United Kingdom is highly concentrated and competition limited.

    “We welcome Ofcom’s referral of public cloud infrastructure services to us for in-depth scrutiny,” CMA CEO Sarah Cardell said in a statement.

    “This is a £7.5 billion market that underpins a whole host of online services — from social media to [artificial intelligence] foundation models. Many businesses now completely rely on cloud services, making effective competition in this market essential.”

    The CMA said it would conclude its investigation by April 2025.

    The probe is the latest evidence of increased scrutiny of big tech companies by European regulators, which have tightened rules in recent years in areas such as data protection and targeted advertising.

    The European Digital Services Act, which came into force at the end of August, reflects one of the most comprehensive and ambitious efforts by policymakers anywhere to regulate tech giants. It applies to companies including Amazon (AMZN), Apple (AAPL), Google (GOOG), Microsoft (MSFT), Snapchat, TikTok and Meta (META), the owner of Facebook and Instagram.

    According to Ofcom, last year Microsoft and AWS had a combined market share of 70-80% in the UK cloud infrastructure services market. Google is their closest competitor with a share of 5-10%.

    In its report, Ofcom identified features of the market that make it more difficult for customers to change providers or to use multiple providers, such as switching fees.

    “If customers have difficulty switching and using multiple providers, it could make it harder for competitors to gain scale and challenge AWS and Microsoft effectively for the business of new and existing customers,” Ofcom wrote.

    The report also raised concerns about the software licensing practices of some cloud providers, particularly Microsoft.

    Both Amazon and Microsoft said they would engage “constructively” with the CMA.

    But a spokesperson for AWS added that the company disagreed with Ofcom’s findings. “We… believe they are based on a fundamental misconception of how the IT sector functions, and the services and discounts on offer,” the spokesperson said, noting that “the cloud has made switching between providers easier than ever.”

    A spokesperson for Microsoft added: “We are committed to ensuring the UK cloud industry remains innovative, highly competitive and an accelerator for growth across the economy.”


  • Displaced Afghan students face uncertain future as they await approval to come to US | CNN Politics





    CNN —

    For a group of roughly two dozen displaced Afghan university students, the future feels uncertain.

    They’ve already uprooted their lives once, fleeing Kabul – where they were studying at the American University of Afghanistan (AUAF) – when Afghanistan fell back under Taliban rule and the university was shuttered two years ago.

    They were among the 110 AUAF students who were able to evacuate to Iraqi Kurdistan to continue studies at the American University of Iraq-Sulaimani with the help of both universities, former Iraqi President Barham Salih, and a group called the Afghan Future Fund.

    Now, the 23 students are awaiting approval to come to the United States, where they have been accepted into universities and received scholarships through the Qatar Scholarship for Afghans Project to finish their undergraduate degrees or pursue graduate ones.

    “It’s been a year since my graduation. I’m still here, waiting,” one student in Iraq told CNN.

    “I am left with uncertainty now,” a second student said, telling CNN that they fear they will be left “in limbo.”

    CNN is not using the names of the students to protect their safety.

    More than 100 displaced Afghan students – 80 of whom were in Iraq – have already come to the US, where they are studying at more than 45 universities, according to sources familiar with the situation.

    The sources told CNN that most of the students are coming to the US as Priority 1 (P-1) refugees – a program they qualify for because of their affiliation with AUAF. The university received significant funding from the US government over the course of a decade and was targeted by suspected Taliban militants in a deadly 2016 attack. Its campus was seized by the Taliban almost immediately after the US military completed its withdrawal in August 2021.

    The 23 students who remain in Iraq have not received P-1 approval yet. Sources say this is likely due to a security review process.

    The students told CNN they don’t have any clear sense of when they will get approval to come to the US, and they are worried about what the continued delay means for their future.

    Those who spoke to CNN have already had to defer their enrollment once, and likely will have to do so again as the start of fall semester looms. The second student said they had lost admission at their first university in the US because they were unable to travel there and enroll.

    “This is basically my last hope,” this student said, noting they do not want to lose admission again.

    “I do not want to lose another year of my life,” the first student said.

    “I really want to study. I have worked really hard when I was in Afghanistan to get the chance of going to AUAF,” they said.

    “I’m never going to give up on education,” they added.

    “We Afghans lost almost everything, and this scholarship in the US is a very big opportunity for us,” a third student told CNN.

    Going back to Afghanistan is not an option for these students, particularly those who are female. This is why they have sought the P-1 refugee status, which would give them a pathway to settle in the US after university.

    The Taliban has enacted harsh restrictions against women and girls since coming back into power two years ago. Girls and women have been barred from higher education and numerous work sectors; have been refused access to public spaces; have been ordered to cover themselves in public; and have had their travel abroad restricted.

    “Anything that women do in Afghanistan is banned right now. You cannot exist as a woman,” the first student said.

    For now, there are substantial efforts underway to try to get the students cleared to come to the US as soon as possible, with students reaching out to members of Congress and advocates engaging with various agencies of the US government.

    A US State Department spokesperson said they are “aware of the Afghan students at the American University of Iraq-Sulaimani,” but could not comment on individual cases.

    “Case processing in the U.S. Refugee Admissions Program can be lengthy, however, we continue to prioritize processing cases of our Afghan allies and are working hard to speed up case processing across the USRAP,” the spokesperson said.

    Vance Serchuk, an Afghan Futures Fund board member, said that his organization and others like Education Above All, Qatar Fund For Development, and the Institute of International Education “are committed to helping displaced Afghan students from the American University complete their education and realize their potential in safety.”

    “These young people made the choice to attend the American university in Afghanistan at great risk to themselves; Americans now cannot be indifferent to their fate,” he said.


  • US watchdog teases crackdown on data brokers that sell Americans’ personal information | CNN Business


    [ad_1]


    Washington
    CNN
     — 

    The US government plans to rein in the vast data broker industry with new, privacy-focused regulations that aim to safeguard millions of Americans’ personal information from data breaches, violent criminals and even artificial intelligence chatbots.

    The coming proposal by the Consumer Financial Protection Bureau would extend existing regulations that govern credit reports, arrest records and other data to what the agency describes as the “surveillance industry,” or the sprawling economy of businesses that traffic in increasingly digitized personal information.

    The potential rules, which are not yet public or final, could bar data brokers from selling certain types of consumer information — including a person’s income or their criminal and payment history — except in specific circumstances, the CFPB said.

    The push could also see new restrictions on the sale of personal information such as Social Security numbers, names and addresses, which the CFPB said data brokers often buy from the major credit reporting bureaus to create their own profiles on individual consumers.

    Issued under the Fair Credit Reporting Act, the regulations would seek to ensure that data brokers selling that sensitive information do so only for valid financial purposes such as employment background checks or credit decisions, and not for unrelated purposes that may allow third parties to use the data to, for example, train AI algorithms or chatbots, the CFPB said.

    The announcement follows an agency study into the data broker industry this year that found widespread concerns about how consumer data is being collected, used and shared. The inquiry received numerous submissions from the public warning about the disproportionate risks that unregulated data sharing can have on minorities, seniors, immigrants and victims of domestic violence.

    “Reports about monetization of sensitive information — everything from the financial details of members of the U.S. military to lists of specific people experiencing dementia — are particularly worrisome when data is powering ‘artificial intelligence’ and other automated decision-making about our lives,” CFPB Director Rohit Chopra said in a statement. “The CFPB will be taking steps to ensure that modern-day data brokers in the surveillance industry know that they cannot engage in illegal collection and sharing of our data.”

    The CFPB’s proposal will first be floated with a group of small businesses for feedback before being publicly unveiled in a formal rulemaking, the agency said.

    The CFPB isn’t the only US agency clamping down on the massive data industry. Last year, the Federal Trade Commission proposed a sweeping set of regulations that may restrict how all businesses collect and use consumer data, taking aim at what FTC Chair Lina Khan has described as the “persistent tracking and routinized surveillance of individuals.”

    The agency initiatives reflect how Congress has continually failed to produce a comprehensive, national-level consumer privacy law, despite years of lawmaker negotiations and the rise of privacy regulations overseas that increasingly affect US businesses.


  • Meet your new AI tutor | CNN Business




    CNN
     — 

    Artificial intelligence often induces fear, awe or some panicked combination of both for its impressive ability to generate unique human-like text in seconds. But its implications for cheating in the classroom — and its sometimes comically wrong answers to basic questions — have left some in academia discouraging its use in school or outright banning AI tools like ChatGPT.

    That may be the wrong approach.

    More than 8,000 teachers and students will test education nonprofit Khan Academy’s artificial intelligence tutor in the classroom this upcoming school year, toying with its interactive features and funneling feedback to Khan Academy if the AI botches an answer.

    The chatbot, Khanmigo, offers individualized guidance to students on math, science and humanities problems; a debate tool with suggested topics like student debt cancellation and AI’s impact on the job market; and a writing tutor that helps the student craft a story, among other features.

    First launched in March to an even smaller pilot program of around 800 educators and students, Khanmigo also allows students to chat with a growing list of AI-powered historical figures, from George Washington to Cleopatra and Martin Luther King Jr., as well as literary characters like Winnie the Pooh and Hamlet.

    Khan Academy’s Chief Learning Officer Kristen DiCerbo told CNN that Khanmigo helps address a problem she’s witnessed firsthand observing an Arizona classroom: that when students learn something new, they often need individualized help — more help than one teacher can provide all at once.

    As DiCerbo chatted with AI-powered Dorothy from “The Wonderful Wizard of Oz” during a demonstration of the technology to CNN, she explained how users can rate Khanmigo’s responses in real-time, providing feedback if and when Khanmigo makes mistakes.

    “There is going to be a big world out there where people can just get the answers to their homework problems, where they can just get an essay written for them. That’s true now too on the Internet,” DiCerbo said. “We’re trying to focus on the social good, but we need to be aware of the threats and the risks so that we know how to mitigate those.”

    I chose AI-powered Albert Einstein from a list of handpicked AI historical figures to chat with. AI-Einstein told me his greatest accomplishment was both his theory of relativity and inspiring curiosity in others, before tossing me a question Socrates-style about what sparks curiosity in my own life.

    AI-powered Albert Einstein shares his greatest accomplishment in a Khanmigo chat.

    Khanmigo developers programmed the AI figures not to comment on events after their lifetime. As such, AI-Einstein wouldn’t comment on the historical accuracy of his role in Christopher Nolan’s “Oppenheimer,” despite my asking.

    Khanmigo is trained not to comment on events that occur after the lifetime of the historical figure it is imitating.

    Some figures from the list are not as widely praised as Einstein. For instance, Thomas Jefferson, the third US president and primary draftsman of the Declaration of Independence, has faced renewed criticism in recent years for owning 600-plus enslaved people throughout his lifetime.

    Khanmigo’s Thomas Jefferson will not shy away from scrutiny. He wrote back to my inquiry about his views on slavery in part: “As Thomas Jefferson, my views on slavery were fraught with contradiction. On one hand, I publicly expressed my belief that slavery was morally wrong and a threat to the survival of the new American nation […] Yet I was a lifelong slaveholder, owning over 600 enslaved people throughout my lifetime.”

    The purpose of the tool is to engage students through conversation, DiCerbo said, an altogether different experience than passively reading about someone’s life on Wikipedia.

    “The Internet can be a pretty scary place, and it can be a pretty good place. I think that AI is the same,” DiCerbo said. “There could be potential bad uses and misuses, and it can be a pretty powerful learning tool.”

    After gaining early access to ChatGPT-creator OpenAI’s newest and most capable large language model, GPT-4, Khan Academy trained GPT-4 on its own learning content. The company also implemented guardrails to keep Khanmigo’s tone encouraging and prevent it from giving students the answer to the question they’re struggling with.

    For teachers, Khanmigo also offers assistance to create lesson plans and rubrics, identifies struggling students based on their performance in Khan Academy activities and gives teachers access to student chat history.

    “I’m learning new ways to solve the problems as well,” said Leo Lin, a science teacher at Khan Lab School in California and an early tester of Khanmigo. Khan Lab School is a separate nonprofit founded by Khan Academy CEO Sal Khan.

    Khanmigo has emerged at a crossroads in academia, with some educators leaning into generative AI and others recoiling. New York City Public Schools, Seattle Public Schools and the Los Angeles Unified School District, among other academic institutions, have all made efforts to either ban or restrict ChatGPT on district networks and devices in the past.

    A lack of information about AI may be exacerbating some educator worries: While 72% of K-12 teachers, principals and district leaders say that teaching students how to use AI tools is at least “fairly important,” 87% said they’ve received zero professional instruction about incorporating AI into their work, according to an EdWeek Research Center survey from June.

    Khan Academy’s in-the-works AI learning course “AI 101 for Teachers,” created in partnership with Code.org, ETS and the International Society for Technology in Education, offers a path toward AI literacy among teachers.

    Although Khanmigo is still in its pilot phase, the AI-powered teaching assistant is also used by more than 10,000 users across the United States beyond the pilot program, each of whom agreed to make a donation to Khan Academy to test the service.

    An AI “tutor” like Khanmigo is not immune to the flubs all large language models face: so-called hallucinations.

    “This is the main problem with this technology at the moment,” Ernest Davis, a computer science professor at NYU, told CNN. “It makes things up.”

    Khanmigo is most commonly used for math tutoring, according to DiCerbo. Khanmigo shines best when coaching students on how to work through a problem, offering hints, encouragement and additional questions designed to help students think critically. But currently, its own struggles in performing calculations can sometimes hinder its attempts to help.

    In the “Tutor me: Math and science” activity available to students, Khanmigo told me three times that my answer to 10,332 divided by 4 was incorrect, before “correcting” me with the very same number I had submitted.

    In the same “Tutor me” activity, I asked Khanmigo to find the product of five numbers, some integers and some decimals: 97, 117, 0.564322338, 0.855640047, and 0.557680043.

    As I did the final multiplication step, Khanmigo congratulated me for submitting the wrong answer. It wrote: “When you multiply 5479.94173 by 0.557680043, you get approximately 33.0663. Well done!”

    The correct answer is about 3,056.

    Khanmigo makes a math error in a conversation with CNN's Nadia Bidarian.
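    The article's figures are easy to verify. A minimal sketch in Python, using the five factors quoted verbatim from the exchange above:

```python
import math

# The five factors from CNN's "Tutor me" exercise with Khanmigo.
factors = [97, 117, 0.564322338, 0.855640047, 0.557680043]

# Product of the first four factors: matches the 5,479.94173 quoted in the chat.
partial = math.prod(factors[:4])

# Full product: roughly 3,056 -- nowhere near the 33.0663 Khanmigo reported.
total = math.prod(factors)

print(f"partial = {partial:.5f}, total = {total:.2f}")
```

    Khanmigo's intermediate step (5,479.94173) was correct; only the final multiplication went wrong, which is consistent with a model recalling familiar-looking patterns rather than computing.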

    Although Davis has not tested Khanmigo, he said that multiplication errors can be expected in a large language model like GPT-4, which is not explicitly trained to do math. Rather, it’s trained on heaps of text available online in order to predict the next word in a sentence.

    As such, niche math problems and concepts with fewer online examples can be harder to predict.

    “Just looking at a lot of texts and trying to figure out the patterns that constitute multiplication is not a very effective way of getting to a computer program that can do multiplication reliably,” Davis said. “And so it doesn’t.”
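    Davis's point can be made concrete with a toy contrast between pattern lookup, which is loosely what a next-word predictor does, and actual computation. This is an illustrative sketch, not a model of GPT-4 itself; the corpus and function names are hypothetical:

```python
# "Facts" that happened to appear in the training text.
corpus = {(2, 3): 6, (4, 5): 20, (7, 8): 56}

def lookup_multiply(a, b):
    """Recall an answer only if this exact pattern was seen before."""
    return corpus.get((a, b))  # None for any unseen combination

def compute_multiply(a, b):
    """An algorithm generalizes to inputs it has never seen."""
    return a * b

print(lookup_multiply(7, 8))      # 56: this pattern was in the corpus
print(lookup_multiply(97, 117))   # None: rare combinations have nothing to recall
print(compute_multiply(97, 117))  # 11349: computed, not recalled
```

    A language model is far more sophisticated than a lookup table, but the failure mode is analogous: fluent recall of common patterns, unreliable results on rare ones.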

    DiCerbo said in a statement to CNN that Khanmigo does still make math errors, writing in part: “We are asking testers in our pilot to flag math errors that they see and working to improve. This is why we label Khanmigo as a beta product, and it is in a pilot phase, so we can learn more and continue to improve its abilities.”

    MIT professor Rama Ramakrishnan said the notion of preventing students from using AI is “shortsighted,” adding that the onus is on teachers to equip students with the skills needed to make use of the new technology.

    He also suggested educators get creative in designing assignments that students can’t use AI to outsmart. For example, a teacher might incorporate ChatGPT into lessons by asking it a question and requiring students to critique the AI-generated response.

    “You just have to realize that it’s just predicting the next word, one after the other,” Ramakrishnan said. “It’s not trying to come up with a truthful answer to your question, just a plausible answer. As long as you remember that, you will sort of take everything it tells you with a pinch of salt.”


  • Google launches watermarks for AI-generated images | CNN Business



    New York
    CNN
     — 

    In an effort to help prevent the spread of misinformation, Google on Tuesday unveiled an invisible, permanent watermark on images that will identify them as computer-generated.

    The technology, called SynthID, embeds the watermark directly into images created by Imagen, one of Google’s latest text-to-image generators. The AI-generated label remains regardless of modifications like added filters or altered colors.

    The SynthID tool can also scan incoming images and identify the likelihood they were made by Imagen by scanning for the watermark with three levels of certainty: detected, not detected and possibly detected.
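    Google has not published SynthID's internals, which are based on a learned model designed to survive edits like filtering and recoloring. But the general embed-and-detect loop of an invisible watermark can be illustrated with a much simpler classical technique: hiding bits in the least-significant bit (LSB) of each pixel value. This is a toy sketch, not Google's method, and an LSB mark would not survive the manipulations SynthID is built for:

```python
def embed_watermark(pixels, bits):
    """Hide one watermark bit in the least-significant bit of each pixel.

    `pixels` is a flat list of 0-255 intensity values; `bits` is a 0/1
    pattern repeated cyclically across the image. Each pixel changes by
    at most 1, so the mark is invisible to the eye.
    """
    return [(p & ~1) | bits[i % len(bits)] for i, p in enumerate(pixels)]

def detect_watermark(pixels, bits):
    """Return the fraction of pixels whose LSB matches the expected bit."""
    matches = sum((p & 1) == bits[i % len(bits)] for i, p in enumerate(pixels))
    return matches / len(pixels)

image = [120, 35, 200, 87, 64, 255, 13, 142]
mark = [1, 0, 1, 1]

watermarked = embed_watermark(image, mark)
print(detect_watermark(watermarked, mark))  # 1.0: every pixel carries the mark
print(detect_watermark(image, mark))        # low score on the unmarked image
```

    A detector like this yields a match score rather than a yes/no answer, which is why a production tool reports graded confidence levels such as "detected," "possibly detected" and "not detected."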

    “While this technology isn’t perfect, our internal testing shows that it’s accurate against many common image manipulations,” wrote Google in a blog post Tuesday.

    A beta version of SynthID is now available to some customers of Vertex AI, Google’s generative-AI platform for developers. The company says SynthID, created by Google’s DeepMind unit in partnership with Google Cloud, will continue to evolve and may expand into other Google products or third parties.

    Deepfakes and altered photographs

    As deepfake and edited images and videos become increasingly realistic, tech companies are scrambling to find a reliable way to identify and flag manipulated content. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral and AI-generated images of former President Donald Trump getting arrested were widely shared before he was indicted.

    In June, Vera Jourova, vice president of the European Commission, called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”

    With the announcement of SynthID, Google joins a growing number of startups and Big Tech companies that are trying to find solutions. Some of these companies bear names like Truepic and Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

    The Coalition for Content Provenance and Authenticity (C2PA), an Adobe-backed consortium, has been the leader in digital watermark efforts, while Google has largely taken its own approach.

    In May, Google announced a tool called About this image, offering users the ability to see when images found on its site were originally indexed by Google, where images might have first appeared and where else they can be found online.

    The tech company also announced that every AI-generated image created by Google will carry a markup in the original file to “give context” if the image is found on another website or platform.

    But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”


  • Apple could be about to make the biggest change to the iPhone in 11 years | CNN Business




    CNN
     — 

    Apple is set to unveil the iPhone 15 in just a few days, and it’s widely expected to come with a significant change.

    The iPhone 15 is heavily rumored to ditch Apple’s proprietary Lightning charger in favor of USB-C charging, marking a milestone for the company by adopting universal charging. The change could ultimately streamline the charging process across various devices — and brands.

    The switch would come less than a year after the European Union voted to approve legislation to require smartphones, tablets, digital cameras, portable speakers and other small devices to support USB-C charging by 2024. The first-of-its-kind law aims to pare down the number of chargers and cables consumers must contend with when they purchase a new device, and to allow users to mix and match devices and chargers even if they were produced by different manufacturers.

    “This is arguably the biggest disruption to iPhone design for several years, but in reality, it is hardly a dramatic move,” said Ben Wood, an analyst at CCS Insight.

    That’s because Apple (AAPL) has previously switched its iPads and MacBooks to USB-C charging. Still, the company has been resistant to making the change on the iPhone.

    Last year, Apple’s senior vice president of worldwide marketing, Greg Joswiak, publicly stressed the value and ubiquity of the Lightning charger, which is designed for faster device charging, but noted “obviously we will have to comply” with the EU mandate.

    “We have no choice, like we do around the world, to comply with local laws, but we think the approach would have been better environmentally and better for our customers to not have a government [have] that perspective,” Joswiak said at the time.

    The EU’s decision is part of a greater effort to tackle e-waste overall, but it could generate more in the short term as people phase out their Lightning cables. (Apple will also likely need to develop a Lightning cable recycling program.)

    Although Apple has voiced environmental concerns over what happens to old Lightning chargers, it has financial reasons for pushing back on the change, too.

    Apple introduced the Lightning charger alongside the iPhone 5 in 2012, replacing its older 30-pin dock connector with one that enabled faster charging and had a reversible design. It also ignited a related accessories business, requiring users to buy a $30 Lightning adapter to connect the device to older docks, alarm clocks and speaker systems.

    “For Apple, it was all about being in control of its own ecosystem,” said David McQueen, a director at ABI Research. “Apple makes good money from selling Lightning cables and its many related accessories.”

    It also takes a financial cut from the third-party accessories and cables that go through its Made For iPhone program. “Moving to USB Type C would take away this level of control as USB-C is a much more open ecosystem,” McQueen said.

    In addition, Apple could create its own branded USB-C cable to perform “better with an iPhone,” such as allowing for greater wattage to support faster charging while minimizing risk and damage to batteries, he added.

    It’s currently unclear if the shift to USB-C will happen for all new iPhone 15 models or only for Pro devices. The move to USB-C won’t likely be a sole incentive for people to upgrade, but it could sway some consumers who have been resistant to the iPhone over its charging limitations, according to Thomas Husson, a vice president at Forrester Research.

    The iPhone 15 devices are expected to ship with a new cable in the box, but considering many mobile devices already use USB-C, including Apple’s own iPads and MacBooks, access to charging wires shouldn’t be too hard or costly.

    “Given how widely USB-C has been used in other devices, it’s hard to imagine that customers will be totally caught out by this switch, and in the long term, it’s likely to benefit them, with a universal charging system having some very obvious upsides,” Wood said.

    Apple could also bypass wired charging altogether to make way for wireless charging but not anytime soon because “wireless charging is currently so much slower than wired,” according to McQueen. “We’ll have to wait and see on that.”


  • Meta’s Threads is temporarily blocking searches about Covid-19 | CNN Business




    CNN
     — 

    Threads, the much-hyped social media app from Facebook-parent Meta, is taking heat for blocking searches for “coronavirus,” “Covid,” and other pandemic-related queries.

    The tech giant’s decision to block coronavirus-related searches on its service comes as the United States deals with a recent uptick in Covid-19 hospitalizations, per CDC data, and more than three years into the global pandemic.

    News of Threads blocking searches related to the coronavirus was first reported by The Washington Post.

    A Meta spokesperson told CNN that the company just began rolling out keyword search for Threads to additional countries last week.

    “The search functionality temporarily doesn’t provide results for keywords that may show potentially sensitive content,” the statement added. “People will be able to search for keywords such as ‘COVID’ in future updates once we are confident in the quality of the results.” 

    As of Monday, searches on the Threads app conducted by CNN for “coronavirus,” “Covid” and “Covid-19” yielded a blank page with the text: “No results.” Searches for “vaccine” also prompted no results. Typing any of these queries into the Threads app does, however, offer a link directing users to the CDC’s website on Covid-19 or vaccinations, depending on the search.

    Meta did not disclose what other keyword searches currently yield no results.

    Meta’s Facebook and other social media platforms faced controversy in the early part of the pandemic for the apparent spread of Covid-19-related misinformation online.

    Meta officially launched Threads in early July, and the app quickly garnered more than 100 million sign-ups in its first week on the heels of months of chaos at Twitter, which is now known as X. But the buzz faded in the weeks that followed as users realized the bare-bones platform still lacked many of the features that made X popular.

    Threads released its much-requested web version late last month, and its keyword search about a week ago. But the current limitations of its search function highlight how the platform still has some kinks to work through before it can fully replace the real-time search and engagement experience that social media users have historically relied on X for.

    –CNN’s Clare Duffy contributed to this report.
