ReportWire

Tag: domestic-business

  • Everything you need to know about AI but were too afraid to ask | CNN Business

    (CNN) —

    Business executives keep talking about it. Teachers are struggling with what to do about it. And artists like Drake seem angry about it.

    Love it or hate it, everyone is paying attention to artificial intelligence right now. Almost overnight, a new crop of AI tools has found its way into products used by billions of people, changing the way we work, shop, create and communicate with each other.

    AI advocates tout the technology’s potential to supercharge our productivity, creating a new era of better jobs, better education and better treatments for diseases. AI skeptics have raised concerns about the technology’s potential to disrupt jobs, mislead people and possibly bring about the end of humanity as we know it. Confusingly, some execs in Silicon Valley seem to hold both sets of views at once.

    What’s clear, however, is that AI is not going away, but it is changing very fast. Here’s everything you need to know to keep up.

    In the public consciousness, “artificial intelligence” may conjure up images of murderous machines eager to overtake humans, and capable of doing so. But in the tech industry, it’s a broad term that refers to different tools that are trained to perform a wide range of complex tasks that might previously have required some input from an actual person.

    If you use the internet, then you almost certainly use services that rely on AI to sort data, filter content and make suggestions, among other tasks.

    It’s the technology that allows Netflix to recommend movies and that helps remove spam, hate speech and other inappropriate content from your social media feeds. It helps power everything from autocorrect features and Google Translate to facial recognition services, the last of which uses AI that, in Microsoft’s words, “mimics a human capability to recognize human faces.”

    AI can also be successful in developing techniques for solving a wide range of real-world problems, such as adjusting traffic signals in real time to manage congestion or helping medical professionals analyze images to make a diagnosis. AI is also central to developing self-driving cars, which process tremendous amounts of visual data to understand their surroundings.

    The short answer: ChatGPT.

    For years, AI has largely operated in the background of services we use every day. That changed following the November launch of ChatGPT, a viral chatbot that put the power of AI front and center.

    People have already used ChatGPT, a tool created by OpenAI, to draft lawsuits, write song lyrics and create research paper abstracts so good they’ve even fooled some scientists. The tool has even passed standardized exams. And ChatGPT has sparked an intense competition among tech companies to develop and deploy similar tools.

    Microsoft and Google have each introduced features powered by generative AI, the technology underpinning ChatGPT, into their most widely used productivity tools. Meta, Amazon and Alibaba have said they’re working on generative AI tools, too. And numerous other businesses also want in on the action.

    It’s rare to see a cutting-edge technology become so ubiquitous almost overnight. Now businesses, educators and lawmakers are all racing to adapt.

    Generative AI enables tools to create written work, images and even audio in response to prompts from users.

    To get those responses, several Big Tech companies have developed their own large language models trained on vast amounts of online data. The scope and purpose of these data sets can vary. For example, the version of ChatGPT that went public last year was only trained on data up until 2021 (it’s now more up to date).

    These models work through a method called deep learning, which learns patterns and relationships between words, so it can make predictive responses and generate relevant outputs to user prompts.

    As impressive as some generative AI services may seem, they essentially just do pattern matching. These tools can mimic the writing of others or make predictions about what words might be relevant in their responses based on all the data they’ve previously been trained on.
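    To make the pattern-matching idea concrete, here is a minimal sketch, in Python, of next-word prediction using a toy bigram model. Real large language models use deep neural networks trained on vast corpora, not lookup tables, so this is only an illustration of the core loop: predict the next word from the words that came before.

```python
from collections import defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then generate text by repeatedly picking the most likely
# next word. The corpus here is an illustrative placeholder.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

def generate(start, length=5):
    """Greedily extend `start` by up to `length` predicted words."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

    A real model predicts over tokens rather than whole words and samples from a learned probability distribution, but the generate-one-step-at-a-time structure is the same.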

    AGI, on the other hand, promises something more ambitious — and scary.

    AGI — short for artificial general intelligence — refers to technology that can perform intelligent tasks such as learning, reasoning and adapting to new situations in the way that humans do. OpenAI CEO Sam Altman has teased the possibility of a superintelligent AGI that could go on to change the world or perhaps backfire and end humanity.

    For the moment, however, AGI remains purely a hypothetical, so don’t worry too much about it.

    Anytime there’s an excess of buzz around a technology, it’s good to be skeptical — and there is certainly a lot of that here. Investor fascination with AI has helped push Wall Street back into a bull market, despite lingering economic uncertainty.

    Not all AI tools are equally useful and many companies will certainly tout AI features and strategies simply to tap into the current hype cycle. But even in just the past six months, AI has already shown potential to change how people do numerous everyday tasks.

    One of the biggest selling points of AI chatbots, for example, is their ability to make people more productive. Earlier this year, some real estate agents told CNN that ChatGPT saved them hours of work, not only by writing listings for homes for sale but also by looking up the permitted uses for certain land and calculating what mortgage payments or the return on investment might be for a client, tasks that typically involve formulas and mortgage calculators.

    Artificial intelligence is also much broader than ChatGPT and other generative AI tools. Even if you think AI chatbots are annoying or might be a fad, the underlying technology will continue to power meaningful advances in products and services for years to come.

    The fear is AI will eliminate millions of jobs. The hope is it will help improve how millions do their jobs. The current reality is somewhere in between.

    Companies will likely need new workers to help them implement and manage AI tools. Employment of data analysts and scientists, machine learning specialists and cybersecurity experts is forecast to grow 30% on average by 2027, according to one recent estimate from the World Economic Forum.

    But the proliferation of AI will also likely put many roles at risk eventually. There could be 26 million fewer record-keeping and administrative jobs by 2027, the WEF predicted. Data entry clerks and executive secretaries are expected to see the steepest losses.

    For now, there are clear limits to how well AI can do the job of a human on its own. When the media outlet CNET experimented with using AI to write articles, it came under scrutiny for publishing pieces with factual errors. Likewise, a lawyer made headlines in May for citing fake court cases, supplied to him by ChatGPT, in a filing to a judge. In an affidavit, the lawyer said he had never used ChatGPT as a legal research tool before and “was unaware of the possibility that its content could be false.”

    (Photo: Alphabet CEO Sundar Pichai, left, and OpenAI CEO Sam Altman arrive at the White House for a meeting with Vice President Kamala Harris on artificial intelligence, May 4, 2023, in Washington.)

    Top AI executives have warned that AI could potentially bring about human extinction. But these same executives are also racing to deploy the technology into their products.

    Some experts say that focusing on far-off doomsday scenarios may distract from the more immediate harms that AI can cause, such as spreading misinformation, perpetuating biases that exist in training data, and enabling discrimination.

    For example, generative AI could be used to create deepfakes to spread propaganda during an election or enable a frightening new era of scams. Some AI models have also been criticized for what the industry calls “hallucinations,” or making up information.

    Even before the rise of ChatGPT, there were concerns about AI acting as a gatekeeper that can determine who does and does not move forward in a hiring process, for example. AI-powered facial recognition systems have also resulted in some wrongful arrests, and research has shown these systems are drastically more prone to error when trying to match the faces of darker skinned people.

    The more AI tools are incorporated into core parts of society, the more potential there is for unintended consequences.

    Regulators in the United States and Europe are pushing for legislation to help put guardrails in place for AI, which could ultimately impact how the technology develops. But it’s unclear if lawmakers can keep pace with the rapid advances in AI.

    Experts believe in the months ahead, generative AI will go on to create even more realistic images, videos, and audio that could further disrupt media, entertainment, tech and other industries. The technology will likely become increasingly conversational and personalized.

    In March, OpenAI unveiled GPT-4, the next-generation version of the technology that powers ChatGPT. According to the company and early tests, GPT-4 is able to provide more detailed and accurate written responses, pass academic tests with high marks and build a working website from a hand-drawn sketch. (Altman has previously said OpenAI is not yet training GPT-5.)

    AI will almost certainly be infused into many more products and services in the coming months. That means we’ll all have to learn how to live with it.

    As ChatGPT put it in response to a prompt from CNN, “AI has the potential to transform our lives … but it’s crucial for companies and individuals to be mindful of the accompanying risks and responsibly address concerns.”

  • Alexandria Ocasio-Cortez says justices are ‘destroying the legitimacy’ of the Supreme Court | CNN Politics

    Washington (CNN) —

    Democratic Rep. Alexandria Ocasio-Cortez of New York said Sunday that some Supreme Court justices are “destroying the legitimacy of the court,” amid a lack of oversight, calling it “profoundly dangerous” for democracy.

    “We have a broad level of tools to deal with misconduct, overreach and abuse of power, and the Supreme Court has not been receiving the adequate oversight necessary in order to preserve their own legitimacy,” Ocasio-Cortez told CNN’s Dana Bash on “State of the Union.”

    The progressive lawmaker cited recent allegations against Justices Samuel Alito and Clarence Thomas over ethics improprieties. Her comments come as the court wrapped up its term with a slew of consequential rulings, including ending affirmative action in college admissions, blocking student loan debt relief and limiting LGBTQ protections.

    Alito did not disclose a luxury trip he took in 2008 on which a hedge fund billionaire flew him by private jet, even though the businessman would later repeatedly ask the Supreme Court to intervene on his behalf, ProPublica reported. In a highly unusual move, Alito preemptively disputed the nature of the report before it published last month.

    Thomas, meanwhile, has fielded sharp criticism after a separate ProPublica report detailed his relationship with GOP megadonor Harlan Crow, including luxury travel and other lavish gifts that Thomas received from Crow, as well as Crow’s purchase from Thomas and his family of the home where the justice’s mother still lives.

    The real estate transaction and the bulk of the hospitality went unreported on Thomas’ annual financial disclosures, as did Crow’s reported payments for the tuition of a grandnephew of the justice.

    Thomas has defended the omission of the Crow-financed travel from his reports, saying he was advised at the time that he was not required to report the hospitality.

    “If Chief Justice Roberts will not come before the Congress for an investigation voluntarily, I believe we should be considering subpoenas, we should be considering investigations, we should pass much more binding and stringent ethics guidelines,” Ocasio-Cortez said Sunday.

    Senate Judiciary Chairman Dick Durbin, an Illinois Democrat, previously said his committee would mark up legislation on Supreme Court ethics after lawmakers return from their July 4 recess. Durbin had also asked Chief Justice John Roberts to appear before the Judiciary panel – a request that Roberts declined in April.

    Ocasio-Cortez on Sunday also called on the Biden administration to keep pursuing student loan cancellation after the Supreme Court blocked the president’s student loan forgiveness plan Friday, rejecting a program aimed at delivering up to $20,000 of relief to millions of borrowers.

    “People should not be incurring interest during this 12-month on-ramp period,” she said, referring to the administration’s proposal to help borrowers avoid penalties if they miss a payment during the first 12 months after student loan repayments resume in October.

    “So, I highly urge the administration to consider suspending those interest payments. Of course, we still believe in pursuing student loan cancellation and acting faster than that 12-month period wherever possible.”

    “We truly believe that the president – Congress has given the president this authority. The Supreme Court is far overreaching their authority. And I believe, frankly, that we really need to be having conversations about judicial review as a check on the courts as well,” Ocasio-Cortez said.

  • Japan’s largest port hit with ransomware attack | CNN Business

    New York (CNN) —

    Japan’s busiest shipping port said Thursday it would resume operations after a ransomware attack prevented the port from receiving shipping containers for two days.

    The expected restoration of the Port of Nagoya, a hub for car exports and an engine of the Japanese economy, will ease concerns about any wider economic fallout from the ransomware attack.

    The hacking incident began Tuesday when the computer system that handles shipping containers was knocked offline, according to a statement from the Nagoya Harbor Transportation Association. The hack forced the port to stop handling shipping containers that came to the terminal by trailer, the association said.

    Ransomware is a type of malicious software that typically locks the computers of a victim organization so that hackers can demand payment.

    This is the first reported ransomware attack on a Japanese port, and the incident has “created great concerns over the impact on the local economy and supply chain including the auto industry,” Mihoko Matsubara, chief cybersecurity strategist at NTT Corporation, a Japanese telecom firm, told CNN.

    Japanese media reported that LockBit, a type of ransomware linked with Russian-speaking hackers, was used in the hack.

    The LockBit cybercriminal group has been prolific in recent weeks, claiming Taiwanese semiconductor giant TSMC as a victim last week (TSMC said one of its hardware suppliers was hacked but the incident had no impact on TSMC’s business operations.)

    As of midday Thursday in Japan, there was no claim of responsibility for the Port of Nagoya ransomware attack from the LockBit group on their dark-web site.

    It was unclear if the Port of Nagoya received a ransom demand. CNN was unable to reach a spokesperson for the port association.

    Japanese critical infrastructure operators should drill for cyberattacks on their supply chains and have a response plan in place, given threats from both cybercriminals and state-backed hackers, Matsubara told CNN.

    Though this may be a first for Japan, ransomware and related hacks have hit ports in other countries.

    In 2017, malicious software allegedly unleashed by the Russian military on Ukraine spread around the world and disrupted operations at shipping giant Maersk, costing the company an estimated $300 million.

    — CNN’s Mayumi Maruyama contributed to this report

  • Meta cut election teams months before Threads launch, raising concerns for 2024 | CNN Business

    (CNN) —

    Meta has made cuts to its teams that tackle disinformation and coordinated troll and harassment campaigns on its platforms, people with direct knowledge of the situation told CNN, raising concerns ahead of the pivotal 2024 elections in the US and around the world.

    Several members of the team that countered mis- and disinformation in the 2022 US midterms were laid off last fall and this spring, a person familiar with the matter said. The staffers are part of a global team that works on Meta’s efforts to counter disinformation campaigns seeking to undermine confidence in or sow confusion around elections.

    The news comes as Meta, the parent company of Facebook and Instagram, is celebrating the unparalleled success of its new Threads platform, surpassing 100 million users just five days after launch and opening a potential new avenue for bad actors.

    A Meta spokesperson did not specify, when asked, how many staffers had been cut from its teams working on elections. In a statement to CNN on Monday night, the spokesperson said, “Protecting the US 2024 elections is one of our top priorities, and our integrity efforts continue to lead the industry.”

    The spokesperson did not answer CNN’s questions about what additional resources had been deployed to monitor and moderate the new platform. Instead, the company said it had invested $16 billion in technology and teams since 2016 to protect its users.

    But the decision to lay off staffers ahead of 2024, when elections will not only take place in the United States but also in Taiwan, Ukraine, India and elsewhere, has raised concerns among those with direct knowledge of Meta’s election integrity work.

    The disparate nature of Meta’s work on elections makes it difficult for even people inside the company to say specifically how many people are part of the effort. One group hit particularly hard by the layoffs was “content review” specialists, who manually review election-related posts that may violate Meta’s terms of service, a person familiar with the cuts told CNN.

    Meta is trying to offset those cuts by more proactively detecting accounts that spread false election-related information, said the person, who spoke on the condition of anonymity because they were not authorized to speak to the press.

    For years, the social media giant has invested heavily in teams of personnel to root out sophisticated and coordinated networks of fake accounts. That “coordinated inauthentic behavior,” as Meta calls it, began in the lead-up to the 2016 election, when an infamous Russian government-linked troll operation ran amok on Facebook.

    The team tasked with combating the influence campaigns – which includes former US government and intelligence officials – has been generally seen as the most robust in the social media industry. The company has published quarterly reports in recent years that expose governments and other entities found to have been operating covert campaigns pushing disinformation on Meta’s platforms.

    Those teams investigating disinformation campaigns now must further prioritize which campaigns and countries to focus on, another person familiar with the situation said, a trade-off that could result in some deceptive efforts going unnoticed.

    The person emphasized that Meta still has a dedicated team of professionals working on these issues, many of whom are widely respected in the cyber and information security communities.

    But while artificial intelligence and other automated systems can help detect some of these efforts, unearthing sophisticated disinformation networks is still a “very manual process” that involves intense scrutiny from expert staff, another person with direct knowledge of Meta’s counter disinformation efforts told CNN.

    The person said they feared Meta was backsliding on progress it had made by learning from past mistakes. “Lessons that were learned at great costs,” they said, citing the company’s 2018 admission that its platforms were used to incite violence in Myanmar.

    In addition to its in-house team, Meta and other social media companies rely on tips from academics and other researchers who specialize in monitoring covert disinformation networks.

    Darren Linvill, a professor at Clemson University’s Media Forensics Hub, said he has sent the company valuable tips in recent months, but Meta’s response time has slowed significantly.

    Linvill, who has a long track record of successfully identifying covert online accounts, including helping to unearth a Russian election-meddling effort in Africa in 2020, said that Meta recently removed a network of Russian-language accounts that were posting both pro- and anti-Ukraine content on Facebook and Instagram.

    “They were trying to stoke anger on both sides of the debates,” he said.

    Launched last Thursday, Threads has become an instant success, with celebrities, politicians and journalists flocking to the platform.

    The new Twitter-style app is tied to users’ existing Instagram accounts, rather than being linked directly to Facebook. Currently, Threads shares the same community standards as Instagram, but the platforms differ on issues relating to Meta’s methods to combat disinformation.

    Meta also applies labels to state-controlled accounts on Facebook and Instagram, such as Russia’s Sputnik news agency and China’s CCTV. However, these labels do not appear on state-controlled accounts on Threads.

    The launch of Threads even as Meta trims its disinformation-focused personnel comes at a turbulent and transformative time for those tasked with writing and implementing rules on social media platforms.

    Elon Musk, the billionaire who bought Twitter last year, has all but torn up that platform’s rule book and gutted the team that worked on implementing policies designed to combat disinformation efforts.

    Last month, YouTube, which has also made job cuts, announced it would allow videos that feature the false claim the 2020 US presidential election was stolen, a reversal of its previous policy.

    The rule reversals come as the Republican-controlled House of Representatives investigates interactions between technology companies and the federal government.

    Last week, a federal judge in Louisiana ordered some Biden administration agencies and top officials not to communicate with social media companies about certain content, handing a win to GOP states in a lawsuit accusing the government of going too far in its effort to combat Covid-19 disinformation.

    The restrictions and the scrutiny could give cover to social media companies that may want to pull back on some of their platforms’ rules around election integrity, said Katie Harbath, a former Facebook official who helped lead the company’s global election efforts until 2021.

    “I can [almost] hear [Meta Global Affairs President] Nick Clegg saying that ‘we’re going to be cautious of what we do, because we wouldn’t want to run afoul of the law,’” Harbath said.

  • Leading AI companies commit to outside testing of AI systems and other safety commitments | CNN Politics

    (CNN) —

    Microsoft, Google and other leading artificial intelligence companies committed Friday to put new AI systems through outside testing before they are publicly released and to clearly label AI-generated content, the White House announced.

    The pledges are part of a series of voluntary commitments agreed to by the White House and seven leading AI companies – which also include Amazon, Meta, OpenAI, Anthropic and Inflection – aimed at making AI systems and products safer and more trustworthy while Congress and the White House develop more comprehensive regulations to govern the rapidly growing industry. President Joe Biden met with top executives from all seven companies at the White House on Friday.

    In a speech Friday, Biden called the companies’ commitments “real and concrete,” adding they will help fulfill their “fundamental obligations to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and our shared values.”

    “We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years. That has been an astounding revelation,” Biden said.

    White House officials acknowledge that some of the companies have already enacted some of the commitments but argue they will as a whole raise “the standards for safety, security and trust of AI” and will serve as a “bridge to regulation.”

    “It’s a first step, it’s a bridge to where we need to go,” White House deputy chief of staff Bruce Reed, who has been managing the AI policy process, said in an interview. “It will help industry and government develop the capacities to make sure that AI is safe and secure. And we pushed to move so quickly because this technology is moving farther and faster than anything we’ve seen before.”

    While most of the companies already conduct internal “red-teaming” exercises, the commitments will mark the first time they have all committed to allow outside experts to test their systems before they are released to the public. A red team exercise is designed to simulate what could go wrong with a given technology – such as a cyberattack or its potential to be used by malicious actors – and allows companies to proactively identify shortcomings and prevent negative outcomes.

    Reed said the external red-teaming “will help pave the way for government oversight and regulation,” potentially laying the groundwork for that outside testing to be carried out by a government regulator or licenser.

    The commitments could also lead to widespread watermarking of AI-generated audio and visual content with the aim of combating fraud and misinformation.

    The companies also committed to investing in cybersecurity and “insider threat safeguards,” in particular to protect AI model weights, which are essentially the knowledge base upon which AI systems rely; creating a robust mechanism for third parties to report system vulnerabilities; prioritizing research on the societal risks of AI; and developing and deploying AI systems “to help address society’s greatest challenges,” according to the White House.

    Asked by CNN’s Jake Tapper Friday about worries he has when it comes to AI, Microsoft Vice Chair and President Brad Smith pointed to “what people, bad actors, individuals or countries will do” with the technology.

    “That they’ll use it to undermine our elections, that they will use it to seek to break into our computer networks. You know, that they’ll use it in ways that will undermine the security of our jobs,” he said.

    But, Smith argued, “the best way to solve these problems is to focus on them, to understand them, to bring people together, and to solve them. And the interesting thing about AI, in my opinion, is that when we do that, and we are determined to do that, we can use AI to defend against these problems far more effectively than we can today.”

    Pressed by Tapper about AI and compensation concerns listed in a recent letter signed by thousands of authors, Smith said: “I don’t want it to undermine anybody’s ability to make a living by creating, by writing. That is the balance that we should all want to strike.”

    All of the commitments are voluntary and White House officials acknowledged that there is no enforcement mechanism to ensure the companies stick to the commitments, some of which also lack specificity.

    Common Sense Media, a child internet-safety organization, commended the White House for taking steps to establish AI guardrails, but warned that “history would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”

    “If we’ve learned anything from the last decade and the complete mismanagement of social media governance, it’s that many companies offer a lot of lip service,” Common Sense Media CEO James Steyer said in a statement. “And then they prioritize their profits to such an extent that they will not hold themselves accountable for how their products impact the American people, particularly children and families.”

    The federal government’s failure to regulate social media companies at their inception – and the resistance from those companies – has loomed large for White House officials as they have begun crafting potential AI regulations and executive actions in recent months.

    “The main thing we stressed throughout the discussions with the companies was that we should make this as robust as possible,” Reed said. “The tech industry made a mistake in warding off any kind of oversight, legislation and regulation a decade ago and I think that AI is progressing even more rapidly than that and it’s important for this bridge to regulation to be a sturdy one.”

    The commitments were crafted during a monthslong back-and-forth between the AI companies and the White House that began in May when a group of AI executives came to the White House to meet with Biden, Vice President Kamala Harris and White House officials. The White House also sought input from non-industry AI safety and ethics experts.

    White House officials are working to move beyond voluntary commitments, readying a series of executive actions, the first of which is expected to be unveiled later this summer. Officials are also working closely with lawmakers on Capitol Hill to develop more comprehensive legislation to regulate AI.

    “This is a serious responsibility. We have to get it right. There’s an enormous, enormous potential upside as well,” Biden said.

    In the meantime, White House officials say the companies will “immediately” begin implementing the voluntary commitments and hope other companies sign on in the future.

    “We expect that other companies will see how they also have an obligation to live up to the standards of safety, security and trust. And they may choose – and we would welcome them choosing – joining these commitments,” a White House official said.

    This story has been updated with additional details.

  • Taiwan’s TSMC to invest $2.9 billion in new plant as demand for AI chips soars | CNN Business

    Hong Kong (CNN) —

    TSMC, the world’s largest chipmaker, says it plans to invest nearly 90 billion New Taiwan dollars ($2.9 billion) to build an advanced chip plant in Taiwan, as it expands production to meet booming demand for artificial intelligence (AI) products.

    Last week, CEO C.C. Wei told analysts the company plans to roughly double its capacity for advanced packaging in 2024 compared to 2023, in order to meet “strong demand” for AI chips from its customers, which include Nvidia (NVDA) and AMD.

    Advanced packaging in the semiconductor industry involves using high-tech methods to aggregate components from various wafers in order to create a more powerful computer chip.

TSMC (TSM) said the new plant is expected to create 1,500 jobs.

    “To meet market needs, TSMC is planning to establish an advanced packaging fab in the Tongluo Science Park,” the company told CNN in a statement, referring to fabrication plants — the technical term for semiconductor factories.

    The science park is located in Miaoli County, south of the firm’s main facilities in Hsinchu, near Taipei.

    TSMC on Thursday reported a 23% fall in net profit for the second quarter, compared to the same period last year, as a global economic downturn took a toll on overall demand — even as customers clamored for more of its AI chips.

    Chips manufactured by TSMC for customers like Nvidia are the muscle behind generative AI, a type of artificial intelligence that can create new content, such as text and images, in response to user prompts.

That’s the kind of AI underlying ChatGPT, Google (GOOGL)’s Bard, Dall-E and many of the other new AI technologies.

TSMC is considered a national treasure in Taiwan, supplying semiconductors to global tech giants including Apple (AAPL) and Qualcomm (QCOM).


  • Juul seeks authorization on a new vape it says can verify a user’s age. Here’s how it works | CNN Business

CNN — 

    E-cigarette company Juul Labs is seeking US authorization to sell a “next-generation” vape with age verification capabilities in the United States.

    To verify a user’s age, the proposed vape pairs with a phone app, requiring a customer to either upload their government ID and a real-time selfie or input personal information and allow a third-party database to verify their identity, according to a Juul spokesperson.

    A unique Pod ID chip within the Juul device can also detect counterfeit cartridges made by other companies, many of which have flooded the market with illegal fruity flavors that appeal to minors.

    The mission of the new platform is twofold, according to the company: Encourage adult smokers to switch from combustible cigarettes to e-cigarettes while restricting underage access.

    The legal age to purchase e-cigarettes in the United States is 21.

    “We look forward to engaging with FDA throughout the review process while we pursue this important harm-reduction opportunity,” Juul’s Chief Regulatory Officer Joe Murillo said in a company news release.

Juul Labs hasn’t yet decided what it will call the new product in the US if the Food and Drug Administration authorizes it. In the UK and Canada, where it’s already for sale, it’s called the JUUL2.

    Advertising itself as an alternative nicotine product, Juul publicly advises that adults vape only as a replacement for combustible cigarettes.

    But Juul has a troubled history in US markets.

“They were the spark that ignited the flame,” said Robin Koval, CEO of the nonprofit Truth Initiative, which runs the nation’s largest campaign to help young people quit vaping. “This is not a company known to tell the truth.”

    Juul Labs has settled more than 5,000 cases brought by approximately 10,000 plaintiffs since its vaping devices initially skyrocketed in popularity in 2016, with some alleging the company deceived or failed to warn consumers about the risks of its products. The e-cigarette maker also agreed to pay $462 million to six US states and Washington, DC, in April after a lawsuit accused Juul Labs of directly promoting its products to high school students. In total, Juul Labs has agreed to pay more than $1 billion in its various legal settlements.

Juul controlled more than 70% of the US e-cigarette market at its peak in late 2018. That same year, 27% of high school students and 7.2% of middle school students said they had used a tobacco product on one or more days in the past month, according to the 2018 National Youth Tobacco Survey.

    Juul is now a less favored brand among youth. When asked what e-cigarette brands they used in the past 30 days, youth e-cigarette users in the 2022 National Youth Tobacco Survey answered Puff Bar most frequently (29.7%), followed by Vuse (23.6%) and then Juul (22%), with the first two being disposable vaping products.

    In 2019, Juul suspended all flavors other than tobacco and menthol and suspended broadcast, digital and print publication marketing.

    Even with limited flavors, the FDA banned Juul products in the US last year after reviewing Juul’s applications seeking marketing authorization for their devices. The FDA determined that the applications lacked “sufficient evidence” within the toxicological profile of the vaporizers to prove that marketing the products would be in the interest of public health.

    The FDA has placed the ban on hold while Juul Labs appeals.

    Juul's new device is currently marketed as JUUL2 in the UK and Canada.

    Juul Labs submitted its most recent application to the FDA on July 19, as all e-cigarette manufacturers are required to do before their product can be marketed and sold legally in the United States. This first filing concerns just one flavor, Virginia Tobacco, with a nicotine concentration of 18 mg per mL.

Although Juul’s new platform has age verification capabilities, the company does not intend to lock all of its new pods before use. The Virginia Tobacco pods, for example, will not come automatically locked. The Juul spokesperson said doing so could create “friction” for the adult smokers the tobacco flavor is most likely to target.

    “If you’re an adult smoker and you go to buy a cigarette, it’s pretty easy to use the product,” a Juul spokesperson told CNN. “If you add in another barrier before product use, that creates some level of friction.”

    Using the new Pod ID feature, Juul’s new vaping device could tell a Virginia Tobacco pod apart from a menthol-flavored pod. It could then require age verification to activate only the latter, according to the spokesperson.

    Juul has researched other flavors that combine tobacco and menthol with fruity tones to potentially submit to the FDA following this filing. Juul currently sells the flavor Autumn Tobacco in the UK, which contains “tangy apple notes,” according to its website.

    Just because e-cigarette companies are required to comply with the FDA doesn’t mean all of them do. In fact, most don’t. To date, the FDA has authorized only 23 specific e-cigarette products, all of which are tobacco flavored.

    Yet more than 2.5 million US middle and high school students said they use e-cigarettes as of last year, according to the 2022 National Youth Tobacco Survey. Almost 85% consume fruity, candy or other flavored products, despite them being illegal.

    Koval of Truth Initiative said the tobacco industry “floods the market” with products such that the FDA can’t keep up.

    “It is a little bit like Whac-a-Mole for the FDA and for those of us who are trying to promote healthier behaviors for young people,” Koval said. The total number of e-cigarette brands increased by 46.2% between January 2020 and December 2022, from 184 to 269, according to a study from the Centers for Disease Control and Prevention.

To gain FDA authorization for its latest platform, Juul must prove that, in helping address the public health crisis of adult smoking, it is not further fueling the spread of youth vaping.

    “This is only the beginning of new tech being developed and refined for the US market and abroad to eliminate combustible cigarettes and combat underage use,” Juul’s Chief Product Officer Kirk Phelps said.


  • Micron Technology: China probes US chip maker for cybersecurity risks as tech tension escalates | CNN Business

Hong Kong (CNN) — 

    China has launched a cybersecurity probe into Micron Technology, one of America’s largest memory chip makers, in apparent retaliation after US allies in Asia and Europe announced new restrictions on the sale of key technology to Beijing.

    The Cyberspace Administration of China (CAC) will review products sold by Micron in the country, according to a statement by the watchdog late on Friday.

    The move is aimed at “ensuring the security of key information infrastructure supply chains, preventing cybersecurity risks caused by hidden product problems, and maintaining national security,” it noted.

    It came on the same day that Japan, a US ally, said it would restrict the export of advanced chip manufacturing equipment to countries including China, following similar moves by the United States and the Netherlands.

    Washington and its allies have announced curbs on China’s semiconductor industry, which strike at the heart of Beijing’s bid to become a tech superpower.

    Last month, the Netherlands also unveiled new restrictions on overseas sales of semiconductor technology, citing the need to protect national security. In October, the United States banned Chinese companies from buying advanced chips and chipmaking equipment without a license.

    Micron told CNN it was aware of the review.

    “We are in communication with the CAC and are cooperating fully,” it said, adding that it stands by the security of its products.

    Shares in Micron sank 4.4% on Wall Street Friday following the news, the biggest drop in more than three months. Micron derives more than 10% of its revenue from China.

    In an earlier filing, the Idaho-based company had warned of such risks.

    “The Chinese government may restrict us from participating in the China market or may prevent us from competing effectively with Chinese companies,” it said last week.

    China has strongly criticized restrictions on tech exports, saying last month it “firmly opposes” such measures.

    In efforts to boost growth and job creation, Beijing is seeking to woo foreign investments as it grapples with mounting economic challenges. The newly minted premier Li Qiang and several top economic officials have been rolling out the welcome wagon for global CEOs and promising they would “provide a good environment and services.”

    But Beijing has also exerted growing pressure on foreign companies to bring them into line with its agenda.

    Last month, authorities closed the Beijing office of Mintz Group, a US corporate intelligence firm, and detained five local staff.

    Days earlier, they suspended Deloitte’s operations in Beijing for three months and imposed a fine of $31 million over alleged lapses in its work auditing a state-owned distressed debt manager.


  • The city without TikTok offers a window to America’s potential future | CNN Business

Hong Kong (CNN) — 

Across the United States, more than 150 million people face the possibility of a new reality: life without TikTok.

    The wildly popular short-form video app has been at the center of an ongoing battle, with lawmakers calling for an outright ban, and the company portraying itself as a critical community space, educational platform and just plain fun.

    In Hong Kong, there’s no need to imagine that reality: TikTok discontinued its services there in 2020.

    Its abrupt departure was met with mixed reactions: disappointment from some users and content creators, but also relief from others who say life is better without the app’s infinite scroll.

    At the time of its exit, TikTok had a relatively modest presence in the city and was not ubiquitous like it is in the US today.

    But the varied reactions to its departure, and the way users have pivoted to other platforms or even real-life offline communities, offer Americans a glimpse into their potential TikTok-less future.

    TikTok announced its exit from Hong Kong in July 2020, a week after China imposed a controversial national security law in the city. The decision came as the app tried to distance itself from China and its Beijing-based parent company ByteDance, in the face of growing pressure in the US under the Trump administration.

    But it meant a jarring halt for creators like Shivani Dukhande, who had roughly 45,000 followers at the time the app left Hong Kong.

    Dukhande, 25, saw her account take off in early 2020 during the pandemic, with lifestyle content such as cooking and wellness videos flourishing on the platform.

    “There were a lot of new creators emerging,” she said. “We used to all collaborate together, we had a chat where we would all speak and share ideas and it created a community.”

    Momentum began to build. Companies started reaching out to Dukhande, paying for sponsored content and collaborating on ad campaigns. Brands began partnering with creators on trending “challenges” in a bid to attract young new consumers.

    “More people were joining and it was becoming such a fun thing to do,” she said. “Then, it just kind of went away one morning.”

    “If it continued, then I probably could have made enough to have quit my 9 to 5,” she said. “If I had the chance to grow, it could have been a potential career path.”

    This is one of the main arguments TikTok has made in recent weeks in the US. In March, as the company’s CEO prepared to testify before Congress, TikTok produced a docuseries highlighting American small business owners who rely on the platform for their livelihoods.

    The platform is used by nearly five million businesses in the US, TikTok said in March. And it’s set to surpass rivals: London-based research firm Omdia projected in November that TikTok’s advertising revenues will exceed the combined video ad revenues of Meta – home of Facebook and Instagram – and YouTube by 2027.

This is partly because people are spending more time on TikTok. In the second quarter of 2022, TikTok users globally spent an average of 95 minutes per day on the app, according to data analytics firm Sensor Tower – nearly twice as much time as users spent on Facebook and Instagram.

    Shivani Dukhande had created videos about wellness, lifestyle, food and Hong Kong on her TikTok account.

But in Hong Kong, other platforms have jumped in to fill the gap. Reels, Instagram’s short-form video product, which offers TikTok-like features such as an endless scroll, is growing quickly – and Dukhande has gotten on board.

    She had to rebuild her audience from scratch, and now has 12,500 Instagram followers, but she feels optimistic about its growth. Still, the loss of TikTok was a “missed opportunity,” she said, and the burgeoning community of creators has largely faded from sight.

    “The amount of jobs, the amount of content creation, the amount of marketing opportunities that were there with TikTok – we sort of missed out on that whole chunk of it.”

    But for some people, TikTok’s departure was a welcome change.

    Poppy Anderson, 16, has been using TikTok since its launch in 2018. And, like many others in her generation, she would spend hours “scrolling and scrolling” – even when feeling unfulfilled.

    “It was very easy to kind of find exactly what you like on there, because the [algorithm-run] For You page kept you there,” she said. “And it’s entertaining, but you don’t really get anything from it.”

    She described TikTok as often being a toxic environment that breeds narrow thinking, herd mentality, a misguided “cancel culture” and inappropriate online behavior such as critiquing the bodies of girls and women. Even people she knew in real life began acting differently after joining the app, which strained friendships, she said.

    Martin Poon, 15, also grew weary of TikTok, but it was hard to quit.

    “Everyone was using it, so I feel like there was a sense that you have to use it, you have to be on top of things, you have to know what’s going on. And I think that was stressful to me,” he said.

    Misinformation and misogyny ran rampant on TikTok, with accounts like those of Andrew Tate, the self-styled “alpha male” recently detained in Romania on allegations of human trafficking and rape, gaining popularity among boys at Poon’s school.

    “It’s just concerning how [these accounts] have so much impact on the youth, and it has so much grip on what we think and how it affects our behavior,” said Poon – though he added that misinformation is a major problem on all social media platforms, not just TikTok.

    Experts have long worried about the impact of TikTok on young people’s mental health, with one study claiming the app may surface potentially harmful content related to suicide and eating disorders to teenagers within minutes of them creating an account.

    In response to growing pressure, TikTok recently announced a one-hour daily screentime limit for users under 18, though users will be able to turn off this default setting.

Anderson acknowledged some positives about TikTok, like open conversations about mental health. Still, she was glad when the app became inaccessible. Falling asleep became easier without the lure of TikTok. “I didn’t have the self-control to get off it on my own,” she said.

    For Poon and his friend Ava Chan, also 15, TikTok’s disappearance sparked new beginnings.

    When the app left in 2020, they were doing online classes, isolated from friends and bored at home. At the time, Instagram Reels and YouTube Shorts had yet to arrive in Hong Kong.

    “We had to figure out how to use our time other than being on TikTok,” said Chan. “For us, that was exploring our passions more.”

    For both, that came in advocating for the neurodiverse community. They launched a club at school that spreads education and awareness about neurodiversity, as well as participating in volunteer activities with neurodiverse people.

    Both said it lent them a sense of purpose, and as time went on, they saw other benefits.

    Their friends, who would previously spend time filming and watching TikToks together, began having more face-to-face conversations. They noticed peers begin exercising outdoors more, which was made easier as Covid restrictions lifted. Their mental health improved.

    Of course, being teenagers, they’re not off social media entirely and use it as a tool to promote their club – but it’s far from the previous hours of scrolling. And while they occasionally wonder what’s happening on TikTok outside Hong Kong, the allure of it is lost when nobody else around them uses it either.

    “A lot of people, they’ve just kind of forgotten about it,” said Anderson. “People move to different platforms – or just move on.”


  • Google-parent stock drops on fears it could lose search market share to AI-powered rivals | CNN Business

CNN — 

    Shares of Google-parent Alphabet fell more than 3% in early trading Monday after a report sparked concerns that its core search engine could lose market share to AI-powered rivals, including Microsoft’s Bing.

    Last month, Google employees learned that Samsung was weighing making Bing the default search engine on its devices instead of Google’s search engine, prompting a “panic” inside the company, according to a report from the New York Times, citing internal messages and documents. (CNN has not reviewed the material.)

    In an effort to address the heightened competition, Google is said to be developing a new AI-powered search engine called Project “Magi,” according to the Times. The company, which reportedly has about 160 people working on the project, aims to change the way results appear in Google Search and will include an AI chat tool available to answer questions. The project is expected to be unveiled to the public next month, according to the report.

    In a statement sent to CNN, Google spokesperson Lara Levin said the company has been using AI for years to “improve the quality of our results” and “offer entirely new ways to search,” including with a feature rolled out last year that lets users search by combining images and words.

    “We’ve done so in a responsible and helpful way that maintains the high bar we set for delivering quality information,” Levin said. “Not every brainstorm deck or product idea leads to a launch, but as we’ve said before, we’re excited about bringing new AI-powered features to Search, and will share more details soon.”

    Samsung did not immediately respond to a request for comment.

Google’s search engine has dominated the market for two decades. But the viral success of ChatGPT, which can generate compelling written responses to user prompts, appeared to put Google on the defensive for the first time in years.

    In March, Google began opening up access to Bard, its new AI chatbot tool that directly competes with ChatGPT and promises to help users outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    At an event in February, a Google executive also said the company will bring “the magic of generative AI” directly into its core search product and use artificial intelligence to pave the way for the “next frontier of our information products.”

    Microsoft, meanwhile, has invested in and partnered with OpenAI, the company behind ChatGPT, to deploy similar technology in Bing and other productivity tools. Other tech companies, including Meta, Baidu and IBM, as well as a slew of startups, are racing to develop and deploy AI-powered tools.

    But tech companies face risks in embracing this technology, which is known to make mistakes and “hallucinate” responses. That’s particularly true when it comes to search engines, a product that many use to find accurate and reliable information.

    Google was called out after a demo of Bard provided an inaccurate response to a question about a telescope. Shares of Google’s parent company Alphabet fell 7.7% that day, wiping $100 billion off its market value.

Microsoft’s Bing AI demo was also called out for several errors: it apparently failed to differentiate between types of vacuums and even made up information about certain products.

    In an interview with 60 Minutes that aired on Sunday, Google and Alphabet CEO Sundar Pichai stressed the need for companies to “be responsible in each step along the way” as they build and release AI tools.

    For Google, he said, that means allowing time for “user feedback” and making sure the company “can develop more robust safety layers before we build, before we deploy more capable models.”

    He also expressed his belief that these AI tools will ultimately have broad impacts on businesses, professions and society.

    “This is going to impact every product across every company and so that’s, that’s why I think it’s a very, very profound technology,” he said. “And so, we are just in early days.”


  • Twitter removes transgender protections from hateful conduct policy | CNN Business

New York (CNN) — 

    Twitter appears to have quietly rolled back a portion of its hateful conduct policy that included specific protections for transgender people.

The policy previously stated that Twitter prohibits “targeting others with repeated slurs, tropes or other content that intends to degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.” But the second line was removed earlier this month, according to archived versions of the page on the Wayback Machine.

    Twitter also removed a line from the policy detailing certain groups of people often subject to disproportionate abuse online, including “women, people of color, lesbian, gay, bisexual, transgender, queer, intersex, asexual individuals, and marginalized and historically underrepresented communities.”

    The platform first introduced its policy prohibiting misgendering and deadnaming (referring to a person’s pre-transition name) of transgender people in 2018 as part of a broader overhaul of its hateful conduct policy.

    The change to the hateful conduct policy is one of a number of updates Twitter has made to its safety and content moderation practices since Elon Musk took over the company last fall. Twitter has also restored the accounts of users who had previously been banned for violating its rules, stopped enforcing its Covid-19 misinformation policy, allowed users to purchase blue verification checkmarks and applied controversial new labels to the accounts of several news organizations.

    LGBTQ advocacy group GLAAD called out the hateful conduct policy change in a Tuesday statement.

    “Twitter’s decision to covertly roll back its longtime policy is the latest example of just how unsafe the company is for users and advertisers alike,” GLAAD President and CEO Sarah Kate Ellis said. “This decision to roll back LGBTQ safety pulls Twitter even more out of step with TikTok, Pinterest, and Meta, which all maintain similar policies to protect their transgender users at a time when anti-transgender rhetoric online is leading to real world discrimination and violence.”

Twitter did not respond to a request for comment about the change, though earlier this week the platform did announce some other updates to how it enforces its hateful conduct policy. It said it plans to start applying labels to some tweets that violate the policy and reduce their visibility – similar to a practice used under the company’s previous leadership, which either reduced the visibility of violative tweets or removed them.

    “Restricting the reach of Tweets helps reduce binary ‘leave up versus take down’ content moderation decisions and supports our freedom of speech vs freedom of reach approach,” the company said in a tweet. Twitter also said it will not place ads next to content that has been labeled as violative.

    Musk has been in the process of trying to encourage advertisers to return to the platform, after many paused their spending over concerns about Musk’s policy changes, increased hate speech on the platform and massive cuts to the company’s workforce, threatening the company’s core business.

    The billionaire tried to assuage advertisers about Twitter’s approach to hateful conduct at a marketing conference Tuesday, saying, “If somebody has something hateful to say, it doesn’t mean you should give them a megaphone,” according to a report from the Wall Street Journal.

Musk has faced a number of criticisms from some in the transgender community, most notably from his transgender daughter Vivian Jenna Wilson. Last year, she petitioned a court in California to change her last name to that of her mother, Justine Wilson, Musk’s ex-wife and mother of five of his seven children, because she no longer wanted to be related to her father “in any way, shape or form.”

Musk has also posted several tweets mocking the idea of people choosing the pronouns they want applied to them. One tweet from December 2020, which he later deleted, said “when you put he/him in your bio” alongside a drawing of an 18th century soldier rubbing blood on his face in front of a pile of dead bodies and wearing a cap that read “I love to oppress.”

And this past December, Musk, a vocal critic of many Covid restrictions and protocols, tweeted, “My pronouns are Prosecute/Fauci.”

But in other tweets, Musk has insisted he has no problems with transgender people, saying that his problem is with “all these pronouns,” which he called an “esthetic nightmare.” He also pointed out that his auto company Tesla (TSLA) has repeatedly scored a 100% rating from the Human Rights Campaign as being one of the “Best Places to Work for LGBTQ+ Equality.”

    — CNN’s Chris Isidore contributed to this report


  • AI pioneer quits Google to warn about the technology’s ‘dangers’ | CNN Business

New York (CNN) — 

Geoffrey Hinton, who has been called the “Godfather of AI,” confirmed Monday that he left his role at Google last week to speak out about the “dangers” of the technology he helped to develop.

    Hinton’s pioneering work on neural networks shaped artificial intelligence systems powering many of today’s products. He worked part-time at Google for a decade on the tech giant’s AI development efforts, but he has since come to have concerns about the technology and his role in advancing it.

    “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times, which was first to report his decision.

    In a tweet Monday, Hinton said he left Google so he could speak freely about the risks of AI, rather than because of a desire to criticize Google specifically.

    “I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton said in a tweet. “Google has acted very responsibly.”

    Jeff Dean, chief scientist at Google, said Hinton “has made foundational breakthroughs in AI” and expressed appreciation for Hinton’s “decade of contributions at Google.”

    “We remain committed to a responsible approach to AI,” Dean said in a statement provided to CNN. “We’re continually learning to understand emerging risks while also innovating boldly.”

    Hinton’s decision to step back from the company and speak out on the technology comes as a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies.

In March, some prominent figures in tech signed a letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

    In the interview with the Times, Hinton echoed concerns about AI’s potential to eliminate jobs and create a world where many will “not be able to know what is true anymore.” He also pointed to the stunning pace of advancement, far beyond what he and others had anticipated.

    “The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

    Even before stepping aside from Google, Hinton had spoken publicly about AI’s potential to do harm as well as good.

    “I believe that the rapid progress of AI is going to transform society in ways we do not fully understand and not all of the effects are going to be good,” Hinton said in a 2021 commencement address at the Indian Institute of Technology Bombay in Mumbai. He noted how AI will boost healthcare while also creating opportunities for lethal autonomous weapons. “I find this prospect much more immediate and much more terrifying than the prospect of robots taking over, which I think is a very long way off.”

    Hinton isn’t the first Google employee to raise a red flag on AI. In July, the company fired an engineer who claimed an unreleased AI system had become sentient, saying he violated employment and data security policies. Many in the AI community pushed back strongly on the engineer’s assertion.

    [ad_2]

    Source link

  • Twitter is adding calls and encrypted messaging | CNN Business

    Twitter is adding calls and encrypted messaging | CNN Business

    [ad_1]


    London
    CNN
     — 

    Twitter is adding encrypted messaging to the platform Wednesday, and calls will follow shortly, CEO Elon Musk tweeted late Tuesday.

    “Release of encrypted DMs [direct messages] V1.0 should happen tomorrow. This will grow in sophistication rapidly. The acid test is that I could not see your DMs even if there was a gun to my head,” he said.

    “Coming soon will be voice and video chat from your handle to anyone on this platform, so you can talk to people anywhere in the world without giving them your phone number.”

    The move comes as Musk, who took control of Twitter six months ago, looks for ways to return the platform to growth. Its future looks increasingly uncertain in the face of dwindling advertising revenue and increased competition from rivals such as Mastodon and BlueSky, developed by Twitter co-founder and former CEO Jack Dorsey.

Adding calls and encrypted messaging could allow Twitter to compete with Mark Zuckerberg’s Meta, which owns Facebook Messenger and WhatsApp. Billions of people around the world use those platforms to communicate daily with family and friends, including in groups. Twitter, meanwhile, reported 238 million monetizable daily users last July.

    Since taking the company private in October, Musk has turned Twitter on its head. A number of users, celebrities and media organizations have said they plan to leave the platform over recent policy changes, which they say threaten to make it less safe and reliable.

    Right-wing TV host Tucker Carlson said Tuesday he would relaunch his program on Twitter, which he praised as the only remaining large free-speech platform in the world after Fox News fired him last month.

    [ad_2]

    Source link

  • Chipmakers look to Japan as worries about China grow | CNN Business

    Chipmakers look to Japan as worries about China grow | CNN Business

    [ad_1]

    Japanese Prime Minister Fumio Kishida said he welcomed and expected more investment from global chipmakers, after meeting top executives on Thursday before a Group of Seven summit.

    China is set to be high on the agenda of the annual G7 leaders meeting that begins on Friday, with the United States increasingly urging its allies to counter the Asian giant’s chip and advanced technology development.

    Growing Taiwan and US tensions with China have brought serious challenges to the semiconductor industry. Taiwan is a major producer of chips used in everything from cars and smartphones to fighter jets.

    Ensuring diversified, resilient supply chains is a key component of the economic security theme being emphasized by Japan at the talks, White House national security adviser Jake Sullivan told reporters on Air Force One.

Kishida told the executives, including those from Micron Technology Inc, Intel Corp and Taiwan Semiconductor Manufacturing Co (TSMC), that stabilizing supply chains would be a topic of discussion at the G7 talks in the western city of Hiroshima.

    “I am very pleased with your positive attitude towards investment in Japan, and would like the government as a whole to work on further expanding direct investment in Japan and support the semiconductor industry,” Kishida said.

    An industry ministry official later said Kishida wanted to foster cooperation to strengthen semiconductor supply chains, while Industry Minister Yasutoshi Nishimura said Japan would use 1.3 trillion yen ($9.63 billion) of the supplementary budget from the last fiscal year to support its chip business.

In particular, Kumamoto prefecture in southwestern Japan is quickly becoming a hotbed for tech investment from companies including TSMC and Fujifilm Holdings Corp.

    Micron said in a statement that it would bring extreme ultraviolet (EUV) technology to Japan, becoming the first semiconductor company to do so, and expected to invest up to 500 billion yen ($3.6 billion) with support from the Japanese government.

    Bloomberg News reported the financial incentives would total about 200 billion yen.

    An industry ministry official said no decision had been made on whether Japan would give a subsidy to Micron, but that one would be made as soon as possible.

    [ad_2]

    Source link

  • Dutch watchdog looking into alleged Tesla data breach | CNN Business

    Dutch watchdog looking into alleged Tesla data breach | CNN Business

    [ad_1]



    Reuters
     — 

    The data protection watchdog for the Netherlands said on Friday it was aware of possible Tesla data protection breaches, but it was too early for further comment.

    Germany’s Handelsblatt reported on Thursday that Elon Musk’s Tesla had allegedly failed to adequately protect data from customers, employees and business partners, citing 100 gigabytes of confidential data leaked by a whistleblower.

    “We are aware of the Handelsblatt story and we are looking into it,” said a spokesperson for the AP data watchdog in the Netherlands, where Tesla’s European headquarters is located.

They declined to comment on whether the agency had launched or might launch an investigation, citing policy. The Dutch agency was informed by its counterpart in the German state of Brandenburg.

    Handelsblatt said Tesla notified the Dutch authorities about the breach, but the AP spokesperson said they were not aware if the company had made any representations to the agency.

    Tesla was not immediately available for comment on Friday on the Handelsblatt report, which said customer data could be found “in abundance” in a data set labelled “Tesla Files”.

    The data protection office in Brandenburg, which is home to Tesla’s European gigafactory, described the data leak as “massive”.

    “I can’t remember such a scale,” Brandenburg data protection officer Dagmar Hartge said, adding that the case had been handed to the Dutch authorities who would be responsible if the allegations led to an enforcement action.

The Dutch authorities have several weeks to decide whether to deal with the case as part of a European procedure, she added.

    The files include tables containing more than 100,000 names of former and current employees, including the social security number of Tesla CEO Musk, along with private email addresses, phone numbers, salaries of employees, bank details of customers and secret details from production, Handelsblatt reported.

    The breach would violate the GDPR, it said.

    If such a violation was proved, Tesla could be fined up to 4% of its annual sales, which could be 3.26 billion euros.
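The 4% ceiling comes from the GDPR’s penalty rules, which cap fines at the greater of €20 million or 4% of worldwide annual turnover for the most serious violations. As a rough sanity check on the reported figure, the implied annual sales of about €81.5 billion is back-derived here from the €3.26 billion number, not stated in the report:

```python
# Sanity check on the reported maximum GDPR fine.
# GDPR Article 83(5): up to EUR 20 million or 4% of worldwide annual
# turnover, whichever is higher. The ~EUR 81.5B turnover below is an
# assumption back-derived from the reported EUR 3.26B maximum fine.

FINE_RATE = 0.04
FLAT_CAP_EUR = 20_000_000

def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Statutory maximum fine for the turnover-based penalty tier."""
    return max(FLAT_CAP_EUR, FINE_RATE * annual_turnover_eur)

implied_turnover = 3.26e9 / FINE_RATE  # roughly 81.5 billion euros
print(f"Implied annual turnover: {implied_turnover / 1e9:.1f}B EUR")
print(f"Maximum fine: {max_gdpr_fine(implied_turnover) / 1e9:.2f}B EUR")
```

Note that the €20 million floor matters only for smaller companies: for any firm with turnover above €500 million, the 4% tier dominates.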

    German union IG Metall said the revelations were “disturbing” and called on Tesla to inform employees about all data protection violations and promote a culture in which staff could raise problems and grievances openly and without fear.

    “These revelations … fit with the picture that we have gained in just under two years,” said Dirk Schulze, IG Metall incoming district manager for Berlin, Brandenburg and Saxony.

    Handelsblatt quoted a lawyer for Tesla as saying a “disgruntled former employee” had abused their access as a service technician, adding that the company would take legal action against the individual it suspected of the leak.

    Citing the leaked files, the newspaper reported about thousands of customer complaints regarding the carmaker’s driver assistance systems with around 4,000 complaints on sudden acceleration or phantom braking.

    Last month, a Reuters report showed that groups of Tesla employees privately shared via an internal messaging system sometimes highly invasive videos and images recorded by customers’ car cameras between 2019 and 2022.

    This week, Facebook parent Meta was hit with a record 1.2 billion euro ($1.3 billion) fine by its lead European Union privacy regulator over its handling of user information and given five months to stop transferring user data to the U.S.

    [ad_2]

    Source link

  • ‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images | CNN Business

    ‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images | CNN Business

    [ad_1]


    New York
    CNN
     — 

    For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities.

    But according to Jeffrey McGregor, the CEO of Truepic, it is “truly the tip of the iceberg of what’s to come.” As he put it, “We’re going to see a lot more AI generated content start to surface on social media, and we’re just not prepared for it.”

    McGregor’s company is working to address this problem. Truepic offers technology that claims to authenticate media at the point of creation through its Truepic Lens. The application captures data including date, time, location and the device used to make the image, and applies a digital signature to verify if the image is organic, or if it has been manipulated or generated by AI.
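The general idea behind point-of-capture authentication can be sketched in a few lines: hash the image bytes together with the capture metadata and sign the result, so any later edit to either breaks verification. This is an illustrative sketch only, not Truepic’s actual implementation; the key, metadata fields and use of a shared-secret HMAC (rather than a device-held private key) are all invented for illustration:

```python
# Illustrative sketch of point-of-capture signing -- NOT Truepic's
# actual implementation. A trusted capture app hashes the image bytes
# together with capture metadata and signs the digest; any later change
# to the pixels or the metadata breaks verification.
import hashlib
import hmac
import json

SECRET_KEY = b"device-provisioned-signing-key"  # hypothetical key

def sign_capture(image_bytes: bytes, metadata: dict) -> str:
    """Sign the image plus its capture metadata at creation time."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, metadata: dict, signature: str) -> bool:
    """Check that neither the image nor its metadata was altered."""
    expected = sign_capture(image_bytes, metadata)
    return hmac.compare_digest(expected, signature)

photo = b"\x89PNG...raw image bytes..."
meta = {"date": "2023-06-22", "device": "phone-123", "gps": "40.7,-74.0"}
sig = sign_capture(photo, meta)

print(verify_capture(photo, meta, sig))             # untouched image: True
print(verify_capture(photo + b"edit", meta, sig))   # any edit: False
```

Production systems, including those built on the C2PA standard, use public-key signatures instead of a shared secret, so that anyone can verify a signature without being able to forge one.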

    Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image generation tools like Dall-E and Midjourney. Now McGregor says the company is seeing interest from “anyone that is making a decision based off of a photo,” from NGOs to media companies to insurance firms looking to confirm a claim is legitimate.

    “When anything can be faked, everything can be fake,” McGregor said. “Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”

    Tech companies like Truepic have been working to combat online misinformation for years, but the rise of a new crop of AI tools that can quickly generate compelling images and written work in response to user prompts has added new urgency to these efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral and AI-generated images of former President Donald Trump getting arrested were widely shared, shortly before he was indicted.

    Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”

    A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

    But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”

    “This is about mitigation, not elimination,” Hany Farid, a digital forensic expert and professor at the University of California, Berkeley, told CNN. “I don’t think it’s a lost cause, but I do think that there’s a lot that has to get done.”

“The hope,” Farid said, is to get to a point where “some teenager in his parents’ basement can’t create an image and swing an election or move the market half a trillion dollars.”

    Companies are broadly taking two approaches to address the issue.

    One tactic relies on developing programs to identify images as AI-generated after they have been produced and shared online; the other focuses on marking an image as real or AI-generated at its conception with a kind of digital signature.

Reality Defender and Hive Moderation are working on the former. With their platforms, users can upload existing images to be scanned and receive an instant breakdown with a percentage indicating the likelihood that the image is real or AI-generated, based on a large amount of data.

Reality Defender, which launched before “generative AI” became a buzzword and was part of the competitive Silicon Valley tech accelerator Y Combinator, says it uses “proprietary deepfake and generative content fingerprinting technology” to spot AI-generated video, audio and images.

    In an example provided by the company, Reality Defender highlights an image of a Tom Cruise deepfake as 53% “suspicious,” telling the user it has found evidence showing the face was warped, “a common artifact of image manipulation.”

Defending reality could prove to be a lucrative business if the issue becomes a frequent concern for businesses and individuals. These services offer limited free demos as well as paid tiers. Hive Moderation said it charges $1.50 per 1,000 images, with “annual contract deals” that offer a discount. Reality Defender said its pricing may vary based on various factors, including whether the client needs “any bespoke factors requiring our team’s expertise and assistance.”

    “The risk is doubling every month,” Ben Colman, CEO of Reality Defender, told CNN. “Anybody can do this. You don’t need a PhD in computer science. You don’t need to spin up servers on Amazon. You don’t need to know how to write ransomware. Anybody can do this just by Googling ‘fake face generator.’”

    Kevin Guo, CEO of Hive Moderation, described it as “an arms race.”

    “We have to keep looking at all the new ways that people are creating this content, we have to understand it and add it to our dataset to then classify the future,” Guo told CNN. “Today it’s a small percent of content for sure that’s AI-generated, but I think that’s going to change over the next few years.”

    In a different, preventative approach, some larger tech companies are working to integrate a kind of watermark to images to certify media as real or AI-generated when they’re first created. The effort has so far largely been driven by the Coalition for Content Provenance and Authenticity, or C2PA.

    The C2PA was founded in 2021 to create a technical standard that certifies the source and history of digital media. It combines efforts by the Adobe-led Content Authenticity Initiative (CAI) and Project Origin, a Microsoft- and BBC-spearheaded initiative that focuses on combating disinformation in digital news. Other companies involved in C2PA include Truepic, Intel and Sony.

    Based on the C2PA’s guidelines, the CAI makes open source tools for companies to create content credentials, or the metadata that contains information about the image. This “allows creators to transparently share the details of how they created an image,” according to the CAI website. “This way, an end user can access context around who, what, and how the picture was changed — then judge for themselves how authentic that image is.”

    “Adobe doesn’t have a revenue center around this. We’re doing it because we think this has to exist,” Andy Parsons, Senior Director at CAI, told CNN. “We think it’s a very important foundational countermeasure against mis- and disinformation.”

    Many companies are already integrating the C2PA standard and CAI tools into their applications. Adobe’s Firefly, an AI image generation tool recently added to Photoshop, follows the standard through the Content Credentials feature. Microsoft also announced that AI art created by Bing Image Creator and Microsoft Designer will carry a cryptographic signature in the coming months.

    Other tech companies like Google appear to be pursuing a playbook that pulls a bit from both approaches.

    In May, Google announced a tool called About this image, offering users the ability to see when images found on its site were originally indexed by Google, where images might have first appeared and where else they can be found online. The tech company also announced that every AI-generated image created by Google will carry a markup in the original file to “give context” if the image is found on another website or platform.

While tech companies are trying to tackle concerns about AI-generated images and the integrity of digital media, experts in the field stress that these businesses will ultimately need to work with each other and the government to address the problem.

    “We’re going to need cooperation from the Twitters of the world and the Facebooks of the world so they start taking this stuff more seriously, and stop promoting the fake stuff and start promoting the real stuff,” said Farid. “There’s a regulatory part that we haven’t talked about. There’s an education part that we haven’t talked about.”

    Parsons agreed. “This is not a single company or a single government or a single individual in academia who can make this possible,” he said. “We need everybody to participate.”

    For now, however, tech companies continue to move forward with pushing more AI tools into the world.

    [ad_2]

    Source link

  • EU officials accuse Google of antitrust violations in its ad tech business | CNN Business

    EU officials accuse Google of antitrust violations in its ad tech business | CNN Business

    [ad_1]


    Washington
    CNN
     — 

    Google’s advertising business should be broken up, European Union officials said Wednesday, alleging that the tech giant’s involvement in multiple parts of the digital advertising supply chain creates “inherent conflicts of interest” that risk harming competition.

    The formal accusations mark the latest antitrust challenge to Google over its sprawling ad tech business, following a lawsuit by the US Justice Department in January that also called for a breakup of the company.

    The EU Commission has submitted its allegations to Google in writing, officials said, kicking off a legal process that could potentially end in billions of dollars in fines in addition to a possible breakup that could impact part of its core advertising business.

    The commission alleges that since 2014, Google has unfairly boosted its own proprietary ad exchange — the online auction house known as AdX that matches advertisers and publishers — through its simultaneous ownership of some of the most popular ad tools for publishers and advertisers.

    For example, the commission claims, advertisers who used Google’s ad buying tools frequently had their purchases routed to AdX instead of to rival ad exchanges.

    Meanwhile, Google’s publisher-facing tools unfairly gave AdX a leg up over rival ad exchanges, the commission alleged, because Google’s publisher tools gave AdX competitive bidding information that the exchange could use to help advertisers win an auction.

    One proposed solution by the commission would spin off Google’s ad exchange and publisher tools from the ad-buying tools it provides to advertisers.

    “@Google controls both sides of the #adtech market: sell & buy,” tweeted Margrethe Vestager, the commission’s top competition official. “We are concerned that it may have abused its dominance to favour its own #AdX platform. If confirmed, this is illegal.”

    In a statement, Dan Taylor, Google’s vice president of global ads, said the EU’s probe “focuses on a narrow aspect of our advertising business,” that the company opposes the commission’s preliminary conclusions and that Google plans to “respond accordingly.”

    “Our advertising technology tools help websites and apps fund their content, and enable businesses of all sizes to effectively reach new customers. Google remains committed to creating value for our publisher and advertiser partners in this highly competitive sector,” Taylor said.

A Google spokesperson told CNN Wednesday that the company has only just received the commission’s complaint and that it will take time to review the claims. The company added that it will oppose calls for a breakup.

    [ad_2]

    Source link

  • Schumer outlines plan for how Senate will regulate AI | CNN Business

    Schumer outlines plan for how Senate will regulate AI | CNN Business

    [ad_1]



    CNN
     — 

    Senate Majority Leader Chuck Schumer announced a broad, open-ended plan for regulating artificial intelligence on Wednesday, describing AI as an unprecedented challenge for Congress that effectively has policymakers “starting from scratch.”

The plan, Schumer said in a speech in Washington, will begin with at least nine panels to identify and discuss the hardest questions that regulations on AI will have to answer, including how to protect workers, national security and copyright, and how to defend against “doomsday scenarios.” The panels will be composed of experts from industry, academia and civil society, with the first sessions taking place in September, Schumer said.

    The Senate will then turn to committee chairs and other vocal lawmakers on AI legislation to develop bills reflecting the panel discussions, Schumer added, arguing that the resulting US solution could leapfrog existing regulatory proposals from around the world.

    “If we can put this together in a very serious way, I think the rest of the world will follow and we can set the direction of how we ought to go in AI, because I don’t think any of the existing proposals have captured that imagination,” Schumer said, reflecting on other recent proposals such as the European Union’s draft AI Act, which last week was approved by the European Parliament.

    The speech represents Schumer’s most definitive remarks to date on a problem that has dogged Congress for months amid the wide embrace of tools such as ChatGPT: How to catch up, or get ahead, on policymaking for a technology that is already in the hands of millions of people and evolving rapidly.

    In the wake of ChatGPT’s viral success, Silicon Valley has raced to develop and deploy a new crop of generative AI tools that can produce images and writing almost instantly, with the potential to change how people work, shop and interact with each other. But these same tools have also raised concerns for their potential to make factual errors, spread misinformation and perpetuate biases, among other issues.

    In contrast to the fast pace of AI advancements, Schumer has stressed the importance of a deliberate approach, focusing on getting lawmakers acquainted with the basic facts of the technology and the issues it raises before seeking to legislate. He and three other colleagues began last week by convening the first in a series of closed-door briefings on AI for senators that is expected to run through the summer.

    In his remarks Wednesday, Schumer appeared to acknowledge criticism of his pace.

    “I know many of you have spent months calling on us to act,” he said. “I hear you. I hear you loud and clear.”

    But he described AI as a novel issue for which Congress lacks a guide.

    “It’s not like labor, or healthcare, or defense, where Congress has had a long history we can work off of,” he said. “Experts aren’t even sure which questions policymakers should be asking. In many ways, we’re starting from scratch.”

    Schumer described his plan as laying “a foundation for AI policy” that will do “years of work in a matter of months.”

    To guide that process, Schumer expanded on a set of principles he first announced in April. Formally unveiling the framework on Wednesday, Schumer said any legislation on AI should be geared toward facilitating innovation before addressing risks to national security or democratic governance.

    “Innovation first,” Schumer said, “but with security, accountability, [democratic] foundations and explainability.”

    The last two pillars of his framework, Schumer said, may be among the most important, as unrestricted artificial intelligence could undermine electoral processes or make it impossible to critically evaluate an AI’s claims.

    Schumer’s remarks were restrained in calling for any specific proposals. At one point, he acknowledged that a consensus may even emerge that recommends against major government intervention on the technology.

    But he was clear on one point: “We do — we do — need to require companies to develop a system where in simple and understandable terms users understand why the system produced a particular answer, and where that answer came from.”

    The Senate may still be a long way off from unveiling any comprehensive proposal, however. Schumer predicted that the process is likely to take longer than weeks but shorter than years.

    “Months would be the proper timeline,” he said.

    [ad_2]

    Source link

  • TSMC confirms supplier data breach following ransom demand by Russian-speaking cybercriminal group | CNN Business

    TSMC confirms supplier data breach following ransom demand by Russian-speaking cybercriminal group | CNN Business

    [ad_1]



    CNN
     — 

    Taiwanese semiconductor giant TSMC confirmed Friday that one of its hardware suppliers was hacked and had data stolen from it, but said the incident had no impact on business operations.

    Confirmation of the breach came after Russian-speaking cybercriminals claimed TSMC as a victim on Thursday and demanded an extraordinary $70 million ransom from the semiconductor firm.

    There were no signs that TSMC or the hardware supplier, Taiwanese firm Kinmax, had any plans to pay the hackers (representatives from both companies didn’t respond to CNN’s questions about any ransom).

TSMC — one of the world’s largest chipmakers and a key supplier to Apple — was quick to assure investors and the public that the hack had no impact on its operations and that it did not compromise its customers’ data.

    “After the incident, TSMC has immediately terminated its data exchange with this concerned supplier in accordance with the Company’s security protocols and standard operating procedures,” TSMC said in a statement to CNN.

    The hackers accessed Kinmax’s internal “testing environment” for the technology it prepares to deliver to customers, Kinmax said in a statement distributed by TSMC.

    “The leaked content mainly consisted of system installation preparation that the Company provided to our customers as default configurations,” Kinmax said. The company apologized to customers whose names may show up in the leaked data.

    Ransomware groups are known to exaggerate the value of the data they steal and make outlandish demands that are never met.

LockBit is the name of both the group claiming responsibility for the hack of the TSMC supplier and the type of ransomware it uses. LockBit ransomware was the most deployed ransomware around the world in 2022, according to US cybersecurity officials.

    Jon DiMaggio, an executive at security firm Analyst1 who has studied LockBit extensively, said the hackers will likely publish the stolen data or sell it if TSMC refuses to negotiate a ransom.

    For years, American officials and Taiwanese cybersecurity experts have looked to fortify the island’s infrastructure in the face of hacking threats.

    Taiwan’s chip industry is critical to the global hardware supply chain, making any potentially impactful cyberattacks on it a concern for government officials and business executives around the world.

    While the TSMC-related hacking incident doesn’t appear to have been impactful, a separate ransomware attack in 2020 on Taiwan’s state-run energy company temporarily disrupted some customers’ ability to pay for gas with company cards, according to local media reports at the time.

    [ad_2]

    Source link

  • Two very different points of view on nuclear energy in the US | CNN Politics

    Two very different points of view on nuclear energy in the US | CNN Politics

    [ad_1]

    A version of this story appears in CNN’s What Matters newsletter. To get it in your inbox, sign up for free here.



    CNN
     — 

    Two distinct and unrelated stories this week convinced me it was a good moment to look at nuclear power in the US.

    Those developments, which might give anyone pause about the future of nuclear power, are counteracted by other headlines.

    The opening of a new nuclear plant in Georgia, for example, will bring carbon emission-free energy at exactly the time worldwide temperature records drive home the reality of climate change caused by the burning of fossil fuels.

    Germany made the decision to decommission all of its nuclear plants after disasters like Chernobyl and Fukushima. The last nuclear reactor there was taken offline earlier this year, a decision some might have regretted after Germany’s access to Russian natural gas was threatened by the war in Ukraine.

    Next door, France is the worldwide nuclear leader. Most of its electricity is generated by nuclear power.

    Russia, while it has been ostracized from the world economy in almost every way since its invasion of Ukraine, remains a major player in nuclear power. It enriches and sells uranium through its state-controlled nuclear energy company, Rosatom, which builds and operates plants around the world, according to a March report from CNN’s Clare Sebastian that explains why the West has largely left Russia’s nuclear power industry alone.

    But it is China that is moving the quickest toward nuclear power production, according to the International Atomic Energy Agency.

As of 2022, about 18% of US electricity was generated by nuclear power, according to the US Energy Information Administration. Most large US nuclear reactors are old, averaging 40 years or more in service.

    In addition to the Georgia reactor coming online, a new reactor began operating in Tennessee in 2016. But otherwise, the US nuclear power portfolio is old, and much of it is in need of improvement.

    For an idea of the money and corruption that can revolve around energy production, look at the sentencing last week of Ohio’s former House Speaker Larry Householder to 20 years in prison for his involvement in a bribery scheme meant to get the utility company FirstEnergy Corp. a billion-dollar taxpayer bailout for two nuclear plants.

    The bipartisan infrastructure law signed by President Joe Biden in 2021 included a $6 billion program to provide grants to nuclear reactor owners or operators and stave off closing them.

    More than a dozen reactors have closed early in the US over the past decade, according to the Department of Energy. At least one reactor, the Diablo Canyon Power Plant in California, will be kept open after a more than $1 billion grant.

    Nuclear power – and how aggressively the US and other countries should be pursuing it – is a topic that splits scientists as well.

    I talked to one nuclear expert who said the US should be slow and methodical about nuclear power and another who argued there are multiple, public misperceptions about nuclear power that should be corrected.

    The more circumspect voice is Rodney Ewing, a Stanford University professor and expert on nuclear waste who was chairman of a federal review of nuclear waste procedures. I was put in touch with him by the Bulletin of the Atomic Scientists, which aims to “reduce man-made threats to our existence.”

    Despite his decades spent focused on nuclear issues, he said something I found remarkable:

    “I don’t have yet, although I’ve tried for years, a well-formed position for or against nuclear energy,” Ewing said.

    “Too often in the enthusiasm for nuclear energy, a carbon-free source of energy – and in the present situation of the issue of climate change, really a very important existential crisis – it’s easy to say, well, we’ll solve the problems later.”

    He said the issues with nuclear energy – from the potential for disaster to the issue of how to store nuclear waste – should be compared with the potential for renewable alternatives like solar and wind energy.

    David Ruzic, a University of Illinois energy professor who runs a lively YouTube channel, “Illinois EnergyProf,” with multiple videos meant to dispel concerns about nuclear energy, has a much more positive view of nuclear energy’s future.

    Illinois, by the way, generates more nuclear power than any other state. Lawmakers there recently voted to lift a moratorium on new reactor construction that had been in place until the federal government could develop a technology for disposing of nuclear waste. That new policy must still be signed by the state’s governor.

    Ruzic argues nuclear waste takes up so little space that it should simply be encased in yards of solid concrete and kept at the site of nuclear reactors. The concrete, he argued, can be repaired every 70 years or so as it degrades.

    “Over the 60 years we’ve been doing this commercially, we have learned so much about how to do it extremely safely and very well,” Ruzic said, arguing that the new plant in Georgia would not be vulnerable to an earthquake and tidal wave the way Fukushima was, because the new reactor is cooled by air in case of an emergency.

    He argued that even at Fukushima, it’s important to note, no deaths were attributed to radiation from the plant’s failure, although many thousands of people were evacuated.

    For any concern you can raise about nuclear power, Ruzic has a ready answer. He said no one should worry about the radioactive water Japan plans to release into the ocean from Fukushima because there is already a level of radioactivity in everything.

    “You are adding something trivial and inconsequential, which will be diluted even more,” Ruzic said.

    Even the Russia-Ukraine standoff over the Zaporizhzhia plant does not concern Ruzic; the biggest threat he sees, assuming it is not targeted by bunker-busting bombs, is that the plant ceases making electricity – not that it could turn into another Chernobyl.

    “It’s really unfortunate that it’s in the middle of a war zone. But it’s also really unfortunate that chemical plants or coal plants or other plants are in the middle of a war zone as well,” he argued.

    Both professors brought up the push toward small, modular nuclear technology, which numerous companies are betting will become a major market. That market could grow exponentially if the government decides to put a tax on carbon emissions to account for the harm they cause.

    Ewing argued there is not a clear US national energy strategy, and that means numerous state and federal agencies and private companies are searching, often at odds with each other, for something new. The expense and difficulty of developing nuclear technology will be a roadblock. The new Georgia plant took more than a decade to build and came in over budget.

    Ruzic said that after the initial capital expenditure, the relative low cost of fuel for nuclear plants makes them a good, long-term investment.

    When I followed up with Ewing about his comment that he has no clear preference for or against nuclear energy, he said the broad question overlooks too much.

    “The nuclear landscape is, from a technical and social point of view, complicated enough that broad general positions really don’t serve us very well,” he said.
