ReportWire

Tag: business and industry sectors

  • 300 million jobs could be affected by latest wave of AI, says Goldman Sachs | CNN Business

    Hong Kong (CNN) — 

    As many as 300 million full-time jobs around the world could be automated in some way by the newest wave of artificial intelligence that has spawned platforms like ChatGPT, according to Goldman Sachs economists.

    They predicted in a report Sunday that 18% of work globally could be computerized, with the effects felt more deeply in advanced economies than emerging markets.

    That’s partly because white-collar workers are seen to be more at risk than manual laborers. Administrative workers and lawyers are expected to be most affected, the economists said, compared to the “little effect” seen on physically demanding or outdoor occupations, such as construction and repair work.

    In the United States and Europe, approximately two-thirds of current jobs “are exposed to some degree of AI automation,” and up to a quarter of all work could be done by AI completely, the bank estimates.

    If generative artificial intelligence “delivers on its promised capabilities, the labor market could face significant disruption,” the economists wrote. The term refers to the technology behind ChatGPT, the chatbot sensation that has taken the world by storm.

    ChatGPT, which can answer prompts and write essays, has already prompted many businesses to rethink how people should work every day.

    This month, its developer unveiled the latest version of the software behind the bot, GPT-4. The platform has quickly impressed early users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Further use of such AI will likely lead to job losses, the Goldman Sachs economists wrote. But they noted that technological innovation that initially displaces workers has historically also created employment growth over the long haul.

    While workplaces may shift, widespread adoption of AI could ultimately increase labor productivity — and boost global GDP by 7% annually over a 10-year period, according to Goldman Sachs.

    “Although the impact of AI on the labor market is likely to be significant, most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI,” the economists added.

    “Most workers are employed in occupations that are partially exposed to AI automation and, following AI adoption, will likely apply at least some of their freed-up capacity toward productive activities that increase output.”

    For US workers expected to be affected, for instance, 25% to 50% of their workload “can be replaced,” the researchers added.

    “The combination of significant labor cost savings, new job creation, and a productivity boost for non-displaced workers raises the possibility of a labor productivity boom like those that followed the emergence of earlier general-purpose technologies like the electric motor and personal computer.”

    — CNN’s Nicole Goodkind contributed to this report.


  • Appeals court can rule at any time in dispute over suspending FDA approval of medication abortion drug | CNN Politics

    (CNN) — 

    The Justice Department and a manufacturer of abortion pills have submitted the final round of court briefs in the emergency dispute over whether an appeals court should freeze a judge’s ruling that would suspend the Food and Drug Administration’s approval of medication abortion drugs.

    Now that the filings have been submitted, the US 5th Circuit Court of Appeals could rule at any time on whether to put a hold on the order from US District Judge Matthew Kacsmaryk.

    Kacsmaryk on Friday night said he was halting the FDA’s approval of the drug mifepristone but that he was delaying the order by seven days to give the pill’s defenders time to appeal the case. The Justice Department has asked the appeals court to act by 12 p.m. CT Thursday on its request that Kacsmaryk’s ruling be paused, to give the government time to seek a Supreme Court intervention if need be. The 5th Circuit is not obligated to meet that deadline.

    The Justice Department wrote in its new filing that Kacsmaryk purported “to be acting in a restrained manner … but there is nothing modest about upending the decades-long status quo by blocking access nationwide to a safe and effective drug.”

    “Effectively requiring Danco Laboratories and GenBioPro to cease distribution of mifepristone after more than two decades would upend the status quo, severely harming women, healthcare systems, and the public,” the Justice Department said, referring to the two US manufacturers of mifepristone.

    The Justice Department filing pushed back on the assertions by the challengers, made in their filing overnight in the emergency dispute, that the 5th Circuit did not have the authority to hear the appeal of Kacsmaryk’s ruling. The Justice Department also called out Kacsmaryk and the challengers for relying on anonymous blog posts to claim mifepristone is unsafe.

    Danco Laboratories, which intervened in the case to defend mifepristone’s approval, wrote in its new filing with the appeals court that if the ruling is not frozen, “women across the nation will face serious, unnecessary health risks from the elimination of access to a drug FDA has repeatedly deemed safe and effective and that is the standard of care.”

    In an overnight filing, the anti-abortion doctors who sued to ban medication abortion drugs told a federal appeals court that it should leave in place the ruling that will halt the drug’s FDA approval.

    The anti-abortion doctors defended Kacsmaryk’s ruling, calling it a “meticulously considered” ruling that “paints an alarming picture of decades-long agency lawlessness – all to the detriment of the women and girls FDA is charged to protect.”

    Mifepristone has been approved by the FDA for terminating pregnancies for nearly 23 years. Leading medical associations have rebuked the claims by the approval’s legal challengers and by the judge that the drug is unsafe.


  • Pentagon leak spotlights surprising interplay between gaming and military secrets | CNN Politics

    (CNN) — 

    The recent leak of classified US documents on social media platform Discord seemingly caught many at the Pentagon by surprise. But it wasn’t the first time that a forum popular with online gamers had hosted military secrets, underlining a major challenge for the US national security establishment and platforms alike.

    As recently as January 2023, someone on a forum for fans of the video game War Thunder reportedly published confidential information on an F-16 fighter jet. That followed reports of at least three other occasions since 2021 when War Thunder fans posted documents on British, French and Chinese tanks. These cases – which Axios also reported on in the context of the Discord leaks – typically involved users boasting of their inside knowledge of military equipment and claiming to want to make the game more realistic.

    Gaijin Entertainment, the company that produces War Thunder, took the posts down after forum moderators flagged them.

    The recent leaks on Discord exposed a shortcoming in how the US government alerts platforms that they are hosting sensitive or classified information, according to Discord’s top lawyer.

    There is currently “no structured process,” for the government to communicate whether documents posted on social media are classified or even authentic, Clint Smith, Discord’s chief legal officer, said in an April 14 statement that described classified military documents as a “significant, complex challenge” for Discord and other platforms.

    The episodes point to vexing challenges for social media platforms like Discord – where 21-year-old Air National Guardsman Jack Teixeira allegedly began posting classified information in December – and the US military, which has used Discord for recruiting.

    Discord and other platforms face a difficult balancing act in giving young gamers the space to be themselves while also detecting when they post illegal content.

    “A lot of these guys find their social circles in these online gaming spaces, and that can be great,” said Jennifer Golbeck, a professor at the University of Maryland’s College of Information Studies. “But if the culture of the platform shifts to rewarding things that you shouldn’t be doing, it can be hard, if you’re really invested in that social group, to give that up.”

    Teixeira allegedly posted the documents – which included sensitive US intelligence on the war in Ukraine – to a private Discord chat in an attempt to look after his online friends and keep them informed, one member of the chatroom has claimed.

    The Pentagon is trying to tap into online youth culture without it backfiring spectacularly, as it allegedly did with Teixeira.

    An Air Force Gaming program that allows service members to compete in video game leagues to, according to a Pentagon press release, “build morale and mental health resiliency,” has more than 28,000 members. The top of the Air Force Gaming website includes a link to join the program’s Discord channel.

    There were signs that Pentagon officials were growing wary of information young service members might share on Discord even before news of Teixeira’s alleged leak broke.

    “Don’t post anything in Discord that you wouldn’t want seen by the general public,” reads a pamphlet published by US Army Special Operations Command in March.

    That the warning came as classified documents allegedly shared by Teixeira sat on Discord appears to be entirely a coincidence; many US officials appeared unaware of the leak until news of it broke on April 6.

    “Past incidents show how hard it is to stop these leaks,” said Casey Brooks, an Army veteran and video game fan.

    “This is about maturity and how certain people seek value from interpersonal relationships and approval from peers and the competitive nature that gaming group members bond over,” Brooks told CNN.

    Classified or sensitive documents are also a unique problem for content moderators on social media sites.

    “With porn, you can at least have some kind of AI that will give a rough flag at the beginning that this looks vaguely like porn,” said Golbeck, the University of Maryland professor. “But what looks like a classified document? They’re just documents.”

    As social media platforms like Discord grapple with the challenges of detecting sensitive intelligence leaks online, current and former US officials worry that US adversaries like Russia may see an intelligence gathering opportunity.

    “If it’s not already happening, my guess would be the Russians have assessed that digging around in some of these obscure online forums … could bear fruit,” Holden Triplett, a former FBI official who worked at the US embassy in Moscow, told CNN.

    Though there is no evidence that Teixeira was approached by foreign agents, Triplett said a young generation of online gamers might be a ripe target for recruitment.

    “Ego and excitement have always been strong motivations to spy,” said Triplett, who is founder of security consultancy Trenchcoat Advisors. But the group of Discord users that included Teixeira “seemed particularly indifferent to national security concerns,” which is a vulnerability for the US government, Triplett said.


  • Microsoft opens up its AI-powered Bing to all users | CNN Business

    (CNN) — 

    Microsoft is rolling out the new AI-powered version of its Bing search engine to anyone who wants to use it.

    Nearly three months after the company debuted a limited preview version of its new Bing, powered by the viral AI chatbot ChatGPT, Microsoft is opening it up to all users without a waitlist – as long as they’re signed into the search engine via Microsoft’s Edge browser.

    The move highlights Microsoft’s commitment to move forward with the product even as the AI technology behind it has sparked concerns around inaccuracies and tone. In some cases, people who baited the new Bing were subject to some emotionally reactive and aggressive responses.

    “We’re getting better at speed, we’re getting better at accuracy … but we are on a never-ending quest to make things better and better,” Yusuf Mehdi, a VP at Microsoft overseeing its AI initiatives, told CNN on Wednesday.

    Bing now has more than 100 million daily active users, a significant uptick in the past few months, according to Mehdi. Google, which has long dominated the market, is also adding similar AI features to its search engine.

    In February, Microsoft showed off how its revamped search engine could write summaries of search results, chat with users to answer additional questions about a query and write emails or other compositions based on the results.

    At a press event in New York City on Wednesday, the company shared an early look at some updates, including the ability to ask questions with pictures, access chat history so the chatbot remembers its rapport with users, and export responses to Microsoft Word. Users can also personalize the tone and style of the chatbot’s responses, selecting from a lengthier, creative reply to something that’s shorter and to the point.

    The wave of attention in recent months around ChatGPT, developed by OpenAI with financial backing from Microsoft, helped renew an arms race among tech companies to deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups are also developing AI writing assistants and image generators.

    Beyond adding AI features to search, Microsoft has said it plans to bring ChatGPT technology to its core productivity tools, including Word, Excel and Outlook, with the potential to change the way we work. The decision to add generative AI features to Bing could be particularly risky, however, given how much people rely on search engines for accurate and reliable information.

    Microsoft’s moves also come amid heightened scrutiny on the rapid pace of advancement in AI technology. In March, some of the biggest names in tech, including Elon Musk and Apple co-founder Steve Wozniak, called for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Mehdi said he doesn’t believe the AI industry is moving too fast and suggested the calls for a pause aren’t particularly helpful.

    “Some people think we should pause development for six months but I’m not sure that fixes anything or improves or moves things along,” he said. “But I understand where it’s coming from concern wise.”

    He added: “The only way to really build this technology well is to do it out in the open in the public so we can have conversations about it.”


  • Microsoft leaps into the AI regulation debate, calling for a new US agency and executive order | CNN Business

    (CNN) — 

    Microsoft joined a sprawling global debate on the regulation of artificial intelligence Thursday, echoing calls for a new federal agency to control the technology’s development and urging the Biden administration to approve new restrictions on how the US government uses AI tools.

    In a speech in Washington attended by multiple members of Congress and civil society groups, Microsoft President Brad Smith described AI regulation as the challenge of the 21st century, outlining a five-point plan for how democratic nations could address the risks of AI while promoting a liberal vision for the technology that could rival competing efforts from countries such as China.

    The remarks highlight how one of the largest companies in the AI industry hopes to influence the fast-moving push by governments, particularly in Europe and the United States, to rein in AI before it causes major disruptions to society and the economy.

    In a roughly hour-long appearance that was equal parts product pitch and policy proposal, Smith compared AI to the printing press and described how it could streamline policymaking and lawmakers’ constituent outreach, before calling for “the rule of law” to govern AI at every part of its lifecycle and supply chain.

    Regulations should apply to everything from the data centers that train large language models to the end users such as banks, hospitals and others that may apply the technology toward making life-altering decisions, Smith said.

    For decades, “the rule of law and a commitment to democracy has kept technology in its proper place,” Smith said. “We’ve done it before; we can do it again.”

    In his remarks, Smith joined calls made last week by OpenAI — the company behind ChatGPT, in which Microsoft has invested billions — for the creation of a new government regulator that can oversee a licensing system for cutting-edge AI development, combined with testing and safety standards as well as government-mandated disclosure rules.

    Whether a new federal regulator is needed to police AI is quickly emerging as a focal point of the debate in Washington; opponents such as IBM have argued, including in an op-ed Thursday, that AI regulation should be baked into every existing federal agency because of their understanding of the sectors they oversee and how AI may be most likely to transform them.

    Smith also called for President Joe Biden to develop and sign an executive order requiring federal agencies that procure AI tools to implement a risk management framework developed and published this year by the National Institute of Standards and Technology. That framework, which Congress first ordered with legislation in 2020, covers ways that companies can use AI responsibly and ethically.

    Such an order would leverage the US government’s immense purchasing power to shape the AI industry and encourage the voluntary adoption of best practices, Smith said.

    Microsoft itself plans to implement the NIST framework “across all of our services,” Smith added, a commitment he described as the direct outgrowth of a recent White House meeting with AI CEOs in Washington. Smith also pledged to publish an annual AI transparency report.

    As part of Microsoft’s proposal, Smith said any new rules for AI should include revamped export controls tailor-made for the AI age to prevent the technology from being abused by sanctioned entities.

    And, he said, the government should mandate redundant AI circuit breakers that would allow algorithms to be shut off by critical infrastructure providers or from within the data centers they depend on.

    Smith’s remarks, and a related policy paper, come a week after Google released its own proposals calling for global cooperation and common standards for artificial intelligence.

    “AI is too important not to regulate, and too important not to regulate well,” Kent Walker, Google’s president of global affairs, said in a blog post unveiling the company’s plan.


  • Forget about the AI apocalypse. The real dangers are already here | CNN Business

    (CNN) — 

    Two weeks after members of Congress questioned OpenAI CEO Sam Altman about the potential for artificial intelligence tools to spread misinformation, disrupt elections and displace jobs, he and others in the industry went public with a much more frightening possibility: an AI apocalypse.

    Altman, whose company is behind the viral chatbot tool ChatGPT, joined Google DeepMind CEO Demis Hassabis, Microsoft’s CTO Kevin Scott and dozens of other AI researchers and business leaders in signing a one-sentence letter last month stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The stark warning was widely covered in the press, with some suggesting it showed the need to take such apocalyptic scenarios more seriously. But it also highlights an important dynamic in Silicon Valley right now: Top executives at some of the biggest tech companies are simultaneously telling the public that AI has the potential to bring about human extinction while also racing to invest in and deploy this technology into products that reach billions of people.

    The dynamic has played out elsewhere recently, too. Tesla CEO Elon Musk, for example, said in a TV interview in April that AI could lead to “civilization destruction.” But he still remains deeply involved in the technology through investments across his sprawling business empire and has said he wants to create a rival to the AI offerings by Microsoft and Google.

    Some AI industry experts say that focusing attention on far-off scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities, including spreading misinformation, perpetuating biases and enabling discrimination in various services.

    “Motives seemed to be mixed,” Gary Marcus, an AI researcher and New York University professor emeritus who testified before lawmakers alongside Altman last month, told CNN. Some of the execs are likely “genuinely worried about what they have unleashed,” he said, but others may be trying to focus attention on “abstract possibilities to detract from the more immediate possibilities.”

    Representatives for Google and OpenAI did not immediately respond to a request for comment. In a statement, a Microsoft spokesperson said: “We are optimistic about the future of AI, and we think AI advances will solve many more challenges than they present, but we have also been consistent in our belief that when you create technologies that can change the world, you must also ensure that the technology is used responsibly.”

    For Marcus, a self-described critic of AI hype, “the biggest immediate threat from AI is the threat to democracy from the wholesale production of compelling misinformation.”

    Generative AI tools like OpenAI’s ChatGPT and Dall-E are trained on vast troves of data online to create compelling written work and images in response to user prompts. With these tools, for example, one could quickly mimic the style or likeness of public figures in an attempt to create disinformation campaigns.

    In his testimony before Congress, Altman also said the potential for AI to be used to manipulate voters and target disinformation were among “my areas of greatest concern.”

    Even in more ordinary use cases, however, there are concerns. The same tools have been called out for offering wrong answers to user prompts, outright “hallucinating” responses and potentially perpetuating racial and gender biases.

    [Photo caption: Gary Marcus, professor emeritus at New York University, right, listens to Sam Altman, chief executive officer and co-founder of OpenAI, during a Senate Judiciary Subcommittee hearing in Washington, DC, on May 16, 2023.]

    Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, told CNN that some companies may want to divert attention from the bias baked into their data and also from concerning claims about how their systems are trained.

    Bender cited intellectual property concerns with some of the data these systems are trained on as well as allegations of companies outsourcing the work of going through some of the worst parts of the training data to low-paid workers abroad.

    “If the public and the regulators can be focused on these imaginary science fiction scenarios, then maybe these companies can get away with the data theft and exploitative practices for longer,” Bender told CNN.

    Regulators may be the real intended audience for the tech industry’s doomsday messaging.

    As Bender puts it, execs are essentially saying: “‘This stuff is very, very dangerous, and we’re the only ones who understand how to rein it in.’”

    Judging from Altman’s appearance before Congress, this strategy might work. Altman appeared to win over Washington by echoing lawmakers’ concerns about AI — a technology that many in Congress are still trying to understand — and offering suggestions for how to address it.

    This approach to regulation would be “hugely problematic,” Bender said. It could give the industry influence over the regulators tasked with holding it accountable and also leave out the voices and input of other people and communities experiencing negative impacts of this technology.

    “If the regulators kind of orient towards the people who are building and selling the technology as the only ones who could possibly understand this, and therefore can possibly inform how regulation should work, we’re really going to miss out,” Bender said.

    Bender said she tries, at every opportunity, to tell people “these things seem much smarter than they are.” As she put it, this is because “we are as smart as we are” and the way that we make sense of language, including responses from AI, “is actually by imagining a mind behind it.”

    Ultimately, Bender put forward a simple question for the tech industry on AI: “If they honestly believe that this could be bringing about human extinction, then why not just stop?”


  • The largest newspaper publisher in the US sues Google, alleging online ad monopoly | CNN Business

    (CNN) — 

    Gannett, the largest newspaper publisher in the United States, is suing Google, alleging the tech giant holds a monopoly over the digital ad market.

    The publisher of USA Today and more than 200 local publications filed the lawsuit in a New York federal court on Tuesday, and is seeking unspecified damages. Gannett argues in court documents that Google and its parent company, Alphabet, control how publishers buy and sell ads online.

    “The result is dramatically less revenue for publishers and Google’s ad-tech rivals, while Google enjoys exorbitant monopoly profits,” the lawsuit states.

    Google controls about a quarter of the US digital advertising market, with Meta, Amazon and TikTok combining for another third, according to eMarketer. News publishers and other websites combine for the other roughly 40%. Big Tech’s share of the market is beginning to erode slightly, but Google remains by far the largest individual player.

    That means publishers often rely at least in part on Google’s advertising technology to support their operations: Gannett says Google controls 90% of the ad market for publishers.

    Michael Reed, Gannett’s chairman and CEO, said in a statement Tuesday that Google’s dominance in the online advertising industry has come “at the expense of publishers, readers and everyone else.”

    “Digital advertising is the lifeblood of the online economy,” Reed added. “Without free and fair competition for digital ad space, publishers cannot invest in their newsrooms.”

    Dan Taylor, Google’s vice president of global ads, told CNN that the claims in the suit “are simply wrong.”

    “Publishers have many options to choose from when it comes to using advertising technology to monetize – in fact, Gannett uses dozens of competing ad services, including Google Ad Manager,” Taylor said in a statement Tuesday. “And when publishers choose to use Google tools, they keep the vast majority of revenue.”

    He continued: “We’ll show the court how our advertising products benefit publishers and help them fund their content online.”

    The legal action from Gannett comes as Google faces a growing number of antitrust complaints in the United States and the European Union over its advertising business, which remains its central moneymaker.

    EU officials said last week that Google’s advertising business should be broken up, alleging that the tech giant’s involvement in multiple parts of the digital advertising supply chain creates “inherent conflicts of interest” that risk harming competition.

    Earlier this year, the Justice Department and eight states sued Google, accusing the company of harming competition with its dominance in the online advertising market and similarly calling for it to be broken up.


  • Koch network raises more than $70 million, launches new anti-Trump ads in early voting states | CNN Politics

    (CNN) — 

    The influential network associated with conservative billionaire Charles Koch has collected more than $70 million for political races, the group announced Thursday, as it gears up to help shape the outcome of next year’s contests up and down the ballot and encourage Republican voters to bypass former President Donald Trump in the White House nomination fight.

    Americans for Prosperity Action has pledged to back a single contender in the GOP presidential primary for the first time in its history. It has not yet announced whom it will support, but the group could dramatically reshape the Republican field by deploying its vast resources and standing army of conservative activists on behalf of a single candidate.

    The sums raised by the group will help advance those efforts. The lion’s share of the total announced Thursday came from two organizations affiliated with Koch: $25 million from his Kansas-based industrial conglomerate Koch Industries, and another $25 million from Stand Together, a nonprofit he founded, AFP Action spokesman Bill Riggs confirmed.

    The New York Times first reported the fundraising total.

    The group is also launching new digital spots, shared first with CNN, that cast Trump as a candidate Republicans can’t risk supporting in 2024.

    “Instead of making (President Joe) Biden answer for his reckless progressive agenda, Trump makes the debate about indictments, personal grievances and the election he lost,” one 30-second spot, titled “The Choice,” says.

    The second, called “Unelectable,” describes Trump as a serial loser who caused Republicans to lose the House, Senate and the White House. “If Donald Trump is the GOP nominee, we could lose everything,” the narrator says.

    The ads will run in Iowa, New Hampshire, South Carolina and Nevada, officials said.

    “President Trump continues to fight against the swampy D.C. insiders who would love nothing more than to have an establishment puppet they can control in the White House,” Trump spokesman Steven Cheung said in an email. “No amount of dirty money from shady lobbyists and mysterious donors will ever stop the America First movement, and that’s why President Trump continues to dominate poll after poll — both nationally and statewide. We welcome this fight.”

    AFP Action on Thursday also announced its first US House endorsements of the cycle, saying it will back Republican Reps. Juan Ciscomani of Arizona, Young Kim of California, Zach Nunn of Iowa and John James of Michigan, along with former GOP Rep. Yvette Herrell of New Mexico.

    In addition to attempting to stir doubts about Trump among the GOP faithful, network officials have said part of their 2024 strategy is to bring more general election voters into the GOP primary process to alter the outcome of early contests.

    Americans for Prosperity has already reached out to 1.4 million potential new Republican and swing voters in nearly a dozen states, officials said.

    In a statement to CNN earlier this month, Americans for Prosperity CEO Emily Seidel said the group’s voter interactions have demonstrated to it that many Trump supporters are “receptive to arguments that he is a weak candidate, his focus on 2020 is a liability, and his lack of appeal with independent voters is a problem.”

    “That tells us that many Republicans are ready to move on, they just need to see another candidate step up and show they can lead and win,” she added.


  • Sarah Silverman sues OpenAI and Meta alleging copyright infringement | CNN Business




    CNN
     — 

    Comedian Sarah Silverman and two authors are suing Meta and ChatGPT-maker OpenAI, alleging the companies’ AI language models were trained on copyrighted materials from their books without their knowledge or consent.

    The pair of lawsuits against OpenAI and Facebook-parent Meta were filed in a San Francisco federal court on Friday, and are both seeking class action status. Silverman, the author of “The Bedwetter,” is joined in filing the lawsuits by fellow authors Christopher Golden and Richard Kadrey.

    A new crop of AI tools has gained tremendous attention in recent months for their ability to generate written work and images in response to user prompts. The large language models underpinning these tools are trained on vast troves of online data. But this practice has raised some concerns that these models may be sweeping up copyrighted works without permission – and that these works could ultimately be served to train tools that upend the livelihoods of creatives.

    The complaint against OpenAI claims that “when ChatGPT is prompted, ChatGPT generates summaries of Plaintiffs’ copyrighted works—something only possible if ChatGPT was trained on Plaintiffs’ copyrighted works.” The authors “did not consent to the use of their copyrighted books as training material for ChatGPT,” according to the complaint.

    The complaint against Meta similarly claims that the company used the authors’ copyrighted books to train LLaMA, the set of large language models released by Meta in February. The suit claims that much of the material used to train Meta’s language models “comes from copyrighted works—including books written by Plaintiffs—that were copied by Meta without consent, without credit, and without compensation.”

    The suit against Meta also alleges that the company accessed the copyrighted books via an online “shadow library” website that includes a large quantity of copyrighted material.

    Meta declined to comment on the lawsuit. OpenAI did not immediately respond to a request for comment.

    The legal action from Silverman isn’t the first to focus on how large language models are trained. A separate lawsuit filed against OpenAI last month alleged the company misappropriated vast swaths of peoples’ personal data from the internet to train its AI tools. (OpenAI did not respond to a request for comment on the suit.)

    In May, OpenAI CEO Sam Altman appeared to acknowledge more needed to be done to address concerns from creators about how AI systems use their works.

    “We’re trying to work on new models where if an AI system is using your content, or if it’s using your style, you get paid for that,” he said at an event.


  • With the rise of AI, social media platforms could face perfect storm of misinformation in 2024 | CNN Business



    New York
    CNN
     — 

    Last month, a video posted to Twitter by Florida Gov. Ron DeSantis’ presidential campaign used images that appeared to be generated by artificial intelligence showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s top infectious disease specialist, were tricky to spot: they were shown alongside real images of the pair and with a text overlay saying, “real life Trump.”

    As the images began spreading, fact-checking organizations and sharp-eyed users quickly flagged them as fake. But Twitter, which has slashed much of its staff in recent months under new ownership, did not remove the video. Instead, it eventually added a community note — a contributor-led feature to highlight misinformation on the social media platform — to the post, alerting the site’s users that in the video “3 still shots showing Trump embracing Fauci are AI generated images.”

    Experts in digital information integrity say it’s just the start of AI-generated content being used ahead of the 2024 US presidential election in ways that could confuse or mislead voters.

    A new crop of AI tools offer the ability to generate compelling text and realistic images — and, increasingly, video and audio. Experts, and even some executives overseeing AI companies, say these tools risk spreading false information to mislead voters, including ahead of the 2024 US election.

    “The campaigns are starting to ramp up, the elections are coming fast and the technology is improving fast,” said Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public. “We’ve already seen evidence of the impact that AI can have.”

    Social media companies bear significant responsibility for addressing such risks, experts say, as the platforms where billions of people go for information and where bad actors often go to spread false claims. But they now face a perfect storm of factors that could make it harder than ever to keep up with the next wave of election misinformation.

    Several major social networks have pulled back on their enforcement of some election-related misinformation and undergone significant layoffs over the past six months, which in some cases hit election integrity, safety and responsible AI teams. Current and former US officials have also raised alarms that a federal judge’s decision earlier this month to limit how some US agencies communicate with social media companies could have a “chilling effect” on how the federal government and states address election-related disinformation. (On Friday, an appeals court temporarily blocked the order.)

    Meanwhile, AI is evolving at a rapid pace. And despite calls from industry players and others, US lawmakers and regulators have yet to implement real guardrails for AI technologies.

    “I’m not confident in even their ability to deal with the old types of threats,” said David Evan Harris, an AI researcher and ethics adviser to the Psychology of Technology Institute, who previously worked on responsible AI at Facebook-parent Meta. “And now there are new threats.”

    The major platforms told CNN they have existing policies and practices in place related to misinformation and, in some cases, specifically targeting “synthetic” or computer-generated content, that they say will help them identify and address any AI-generated misinformation. None of the companies agreed to make anyone working on generative AI detection efforts available for an interview.

    The platforms “haven’t been ready in the past, and there’s absolutely no reason for us to believe that they’re going to be ready now,” Bhaskar Chakravorti, dean of global business at The Fletcher School at Tufts University, told CNN.

    Misleading content, especially related to elections, is nothing new. But with the help of artificial intelligence, it’s now possible for anyone to quickly, easily and cheaply create huge quantities of fake content.

    And given AI technology’s rapid improvement over the past year, fake images, text, audio and videos are likely to be even harder to discern by the time the US election rolls around next year.

    “We’ve still got more than a year to go until the election. These tools are going to get better and, in the hands of sophisticated users, they can be very powerful,” said Harris. He added that the kinds of misinformation and election meddling that took place on social media in 2016 and 2020 will likely only be exacerbated by AI.

    The various forms of AI-generated content could be used together to make false information more believable — for example, an AI-written fake article accompanied by an AI-generated photo purporting to show what happened in the report, said Margaret Mitchell, researcher and chief ethics scientist at open-source AI firm Hugging Face.

    AI tools could be useful for anyone wanting to mislead, but especially for organized groups and foreign adversaries incentivized to meddle in US elections. Massive foreign troll farms have been hired to attempt to influence previous elections in the United States and elsewhere, but “now, one person could be in charge of deploying thousands of thousands of generative AI bots that work,” to pump out content across social media to mislead voters, Mitchell, who previously worked at Google, said.

    OpenAI, the maker of the popular AI chatbot ChatGPT, issued a stark warning about the risk of AI-generated misinformation in a recent research paper. An abundance of false information from AI systems, whether intentional or created by biases or “hallucinations” from the systems, has “the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction,” it said.

    Examples of AI-generated misinformation have already begun to crop up. In May, several Twitter accounts, including some who had paid for a blue “verification” checkmark, shared fake images purporting to show an explosion near the Pentagon. While the images were quickly debunked, their circulation was briefly followed by a dip in the stock market. Twitter suspended at least one of the accounts responsible for spreading the images. Facebook labeled posts about the images as “false information,” along with a fact check.

    A month earlier, the Republican National Committee released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington D.C. to whom CNN showed the video did not spot it on their first watch.

    Dozens of Democratic lawmakers last week sent a letter calling on the Federal Election Commission to consider cracking down on the use of artificial intelligence technology in political advertisements, warning that deceptive ads could harm the integrity of next year’s elections.

    Ahead of 2024, many of the platforms have said that they will be rolling out plans to protect the election’s integrity, including from the threat of AI-generated content.

    TikTok earlier this year rolled out a policy stipulating that “synthetic” or manipulated media created by AI must be clearly labeled, in addition to its civic integrity policy which prohibits misleading information about electoral processes and its general misinformation policy which prohibits false or misleading claims that could cause “significant harm” to individuals or society.

    YouTube has a manipulated media policy that prohibits content that has been “manipulated or doctored” in a way that could mislead users and “may pose a serious risk of egregious harm.” The platform also has policies against content that could mislead users about how and when to vote, false claims that could discourage voting and content that “encourages others to interfere with democratic processes.” YouTube also says it prominently surfaces reliable news and information about elections on its platform, and that its election-focused team includes members of its trust and safety, product and “Intelligence Desk” teams.

    “Technically manipulated content, including election content, that misleads users and may pose a serious risk of egregious harm is not allowed on YouTube,” YouTube spokesperson Ivy Choi said in a statement. “We enforce our manipulated content policy using machine learning and human review, and continue to improve on this work to stay ahead of potential threats.”

    A Meta spokesperson told CNN that the company’s policies apply to all content on its platforms, including AI-generated content. That includes its misinformation policy, which stipulates that the platform removes false claims that could “directly contribute to interference with the functioning of political processes and certain highly deceptive manipulated media,” and may reduce the spread of other misleading claims. Meta also prohibits ads featuring content that has been debunked by its network of third-party fact checkers.

    TikTok and Meta have also joined a group of tech industry partners coordinated by the non-profit Partnership on AI dedicated to developing a framework for responsible use of synthetic media.

    Asked for comment on this story, Twitter responded with an auto-reply of a poop emoji.

    Twitter has rolled back much of its content moderation in the months since billionaire Elon Musk took over the platform, and instead has leaned more heavily on its “Community Notes” feature which allows users to critique the accuracy of and add context to other people’s posts. On its website, Twitter also says it has a “synthetic media” policy under which it may label or remove “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

    Still, as is often the case with social media, the challenge is likely to be less a matter of having the policies in place than enforcing them. The platforms largely use a mix of human and automated review to identify misinformation and manipulated media. The companies declined to provide additional details about their AI detection processes, including how many staffers are involved in such efforts.

    But AI experts say they’re worried that the platforms’ detection systems for computer-generated content may have a hard time keeping up with the technology’s advancements. Even some of the companies developing new generative AI tools have struggled to build services that can accurately detect when something is AI-generated.

    Some experts are urging all the social platforms to implement policies requiring that AI-generated or manipulated content be clearly labeled, and calling on regulators and lawmakers to establish guardrails around AI and hold tech companies accountable for the spread of false claims.

    One thing is clear: the stakes for success are high. Experts say that not only does AI-generated content create the risk of internet users being misled by false information; it could also make it harder for them to trust real information about everything from voting to crisis situations.

    “We know that we’re going into a very scary situation where it’s going to be very unclear what has happened and what has not actually happened,” said Mitchell. “It completely destroys the foundation of reality when it’s a question whether or not the content you’re seeing is real.”


  • ‘X’ removed after being installed atop company headquarters following Twitter’s rebrand | CNN Business




    CNN
     — 

    Officials from the San Francisco Department of Building Inspection on Monday morning observed that the new “X” on top of the building formerly known as Twitter’s headquarters was being dismantled, according to Patrick Hannan, the department’s spokesman.

    The news comes after the company was issued a notice of violation (NOV) Friday for work without a permit for the new sign, which flashes at night, that adorns the building.

    “Over the weekend, the Department of Building Inspection and City Planning received 24 complaints about the unpermitted structure, including concerns about its structural safety and illumination. This morning, building inspectors observed the structure being dismantled. A building permit is required to remove the structure but, due to safety concerns, the permit can be secured after the structure is taken down,” Hannan said in an email to CNN.

    “The property owner will be assessed fees for the unpermitted installation of the illuminated structure. The fees will be for building permits for the installation and removal of the structure, and to cover the cost of the Department of Building Inspection and the Planning Department’s investigation,” he added.

    CNN has reached out to the company formerly known as Twitter for comment.

    – CNN’s Ramishah Maruf contributed to this report


  • Best drip coffee maker in 2023 | CNN Underscored


    There are so many brewing methods to choose from (French press, the currently trendy dalgona whipped, pour-over), but many coffee lovers still rely on the classic, automatic drip for their daily fix. That’s why we tested the best-rated drip coffee makers using a wide range of criteria (outlined below) over the course of several weeks. Bags upon bags of dark roast, light roast and medium roast coffee beans were ground and brewed. We made full carafes, half carafes and single cups. And we tasted the results black, with cow’s milk, almond milk, sweetened condensed milk, cold-brew strength over ice — you name it.

    Many, many pots of coffee later, we settled on four standout drip coffee machines.

    Best drip coffee maker overall

    The Braun KF6050WH BrewSense Drip Coffee Maker produced consistently delicious, hot cups of coffee, brewed efficiently and cleanly, from sleek, relatively compact hardware that is turnkey to operate, and all for a reasonable price.

    Runner-up with a modern bent

    This was, to our eye, the most handsome and minimally designed of the straightforward auto-brewers, delivering a clean, tasty cup. It lost first place only because the touchscreen may not be for every consumer, and its brew time is significantly longer than that of the other machines we tried.

    Luxury pick for the design-obsessed

    In just about five minutes, the Technivorm Moccamaster 59636 KBG Coffee Brewer turns out a whole pot of near-perfectly brewed coffee, and the process is as entrancing as a targeted Netflix trailer.

    Best affordable drip coffee maker

    One of the cheapest options we tested, the Mr. Coffee 12-cup brewer is compact, simple to operate and yields a very competitive cup.

    [Image: Braun BrewSense drip coffee maker]

    We brewed countless pots of coffee with the BrewSense, ranging from light to dark roast, and each one yielded a strong, delicious cup with no sediment, thanks to the gold-tone filter, designed to remove bitterness from the coffee as well as reduce single-use paper-filter waste. The machine we tested was white — a nice option for those with a more modern kitchen design — but it also comes in black, and unlike some of the more cumbersome machines we tested, it’s compact enough to fit under the cabinets in a smaller space.

    The BrewSense is straightforward to operate: It’s designed like a traditional automatic drip machine with manual operating buttons, but with a sleek, modern upgrade. The hardware is a sophisticated combination of brushed metal and plastic, with a glass carafe that feels comfortable in the hand.

    The BrewSense doesn’t have a lot of bells and whistles compared to some of the machines we tested, and that functional ease helped elevate it to the top of our list. You could unbox this machine, flush it through with water once, and be drinking a freshly brewed cup within 15 minutes, all without reading the manual. Brewing is also a nearly silent process, which can be pleasing on early mornings. Some consumers may want a machine loaded with special features, but for those who just want delicious, hot coffee every morning, without spending over a hundred bucks, this is your best bet.

    The BrewSense isn’t perfect: It’s not the fastest we tested — brewing a full pot of 12 cups took upwards of 11 minutes. And we found an annoying error in the instruction manual around how to program the clock (call us rigid, but we insisted on programming the time before using each of the machines!); the directions say to press and hold CLOCK and then SET, but that didn’t work. We had to simply press and hold the CLOCK button and then trial-and-error our way through the hours and minutes. Meanwhile, the auto-program setup is not as obvious as we’d have liked, though once we got it, it worked like a dream. Otherwise, we found this machine intuitive and easy to operate even without the instruction manual.

    Cleanup could at times be a little messier than some of our other machines. The hot water comes up through the filter basket and spreads the grounds up to the top of the cone, and during one brewing, a tiny bit rose up outside the cone so the top of the brew apparatus needed a little wipedown. Overall, though, for less than $80, this machine delivers the best bang for your buck of anything on the market.

    [Image: Cuisinart Touchscreen 14-Cup Programmable drip coffee maker]

    Coming in just a few points behind the Braun BrewSense was one of the three Cuisinart automatic drip machines we tested: the Touchscreen 14-Cup Programmable.

    We rated all three Cuisinarts highly, but the Touchscreen ranked highest for its combination of progressive design and everyday efficacy. All the Cuisinart products we encountered were well designed, but this one feels special, like when you unbox a brand-new Apple product: Its all-black, shiny surfaces and touchscreen control panel look and feel next-level for an everyday coffee maker (and the price, $235 at Macy’s, more than three times that of the Braun, reflects that).

    But this isn’t just a fancy, aesthetically pleasing machine: It brewed strong, delicious coffee that tasted cleanly filtered but rich. It’s also relatively easy to program and use, given its tech-centric platform. The touchscreen panel features cute little icons signifying one-touch commands to help customize your brew: If you like your coffee bolder, you can select the BOLD feature; if you’re brewing less than half a pot, select the 1 to 4 cups feature for a slower brew with the proper extraction time; adjust the hot plate temperature to low, medium or high; turn the audible brew-cycle-finished tone on or off.

    That tech-centric design is also one of the reasons this didn’t come in at number one, however. As exciting and different as it felt, we did feel that this machine — the only touchscreen model we tested — would feel less intuitive and more laborious than some consumers would want as part of their morning coffee routine. The touchscreen goes dark during the brew process, which yes, is nice-looking, but also feels a bit jarring, like you’re literally in the dark, asking yourself, “What’s going on? Is coffee brewing?” The settings and operating buttons are clear enough when illuminated, but it did take us a few times brewing to get used to how much pressure you need to apply with your fingertip to the touchscreen. We could easily think of people in our own lives who would be flummoxed by this machine if left alone with it and a bag of coffee — and for that, it lost a few points in functionality.

    Also, like its Cuisinart cousins we tested, this one’s a slower brewer. We clocked 11 minutes for eight cups, and if you’re watching your coffee maker brew like, well, a watched pot, it seems like it … takes forever. We understand the appeal of a slower brewing process (pour-over and Chemex fans, we hear you!), but 12 to 14 minutes for a full pot of coffee seems like a long time to wait when you’re thirsty for your morning Joe and you’re not doing it by hand. Finally, not everyone will want to spend more than $200 on a coffee maker. But many may.

    While some consumers might be flummoxed by the technology of this higher-end product, others will embrace it and make it a centerpiece of their kitchen, and rightly so. Form plus function equals morning happiness here.

    [Image: Technivorm Moccamaster coffee brewer]

    We had heard about the Technivorm Moccamaster, a machine beloved for its innovative and old-school industrial design, handmade and tested in the Netherlands since 1968, even before we received it for this story. Multiple friends reached out upon hearing that we were testing a Moccamaster, singing the brand’s praises, and one declared it superlative via Instagram DM: “Moccamaster? Test over!” And the Moccamaster arrives with its own PR too. Its user manual applauds buyers: “Congratulations on your purchase of the World’s Finest Coffee Brewer!” (If you’re spending more than $300 on a coffee maker, perhaps the enthusiasm feels validating.)

    Once we got the apparatus set up — which takes a little focus and time, to be honest — it really did pay off, with possibly the most delicious, hot, fresh cup of coffee we have ever tasted from a home-brewed machine. What’s more, you barely have time to peruse the morning news headlines before the process is done. The Moccamaster brewed 10 cups in less than six minutes, and, on a second trial, six cups in under four minutes. The brew function is almost jarringly fast: Once you turn on the machine, the brewing starts immediately. Then, seeing the water heat in the tank and bubble up through the water transfer tube into the brewer was a throwback to middle-school science experiments in the most pleasing way, like if a lava lamp produced fresh hot coffee after a few mesmerizing undulations.

    We discovered much to love about the Moccamaster, but there also were elements we didn’t adore. Perhaps ironically, they’re about the design. Some love a more hands-on coffee-making process, but some might find that there are just too many moving parts here, literally. We needed to read the directions pretty closely to assemble the parts. Once assembled, and once we digested what was happening brew-process-wise, the machine became fairly easy to operate.

    But each time you use this machine, you have to take the brew basket apart to add a new paper filter (yes, it requires a paper filter, if that makes a difference to you) and coffee grounds, and that basket removal sometimes disrupts the outlet arm and the reservoir lid — not a huge deal, but it could feel like you have to put your coffee maker back together from scratch every morning. Also, the basket lid and outlet arm, through which the hot water travels from the tube to the brew basket, get very hot during the process. It’s fine if you’re aware and cautious, but you wouldn’t want someone to wander up and unknowingly touch the hot part of the brewer.

    And finally, perhaps our most significant beef with this model: When you return the glass carafe to the hotplate in between pours, the glass scrapes the warmer in a slightly cringey way.

    The coffee that this striking machine yields, though, may diminish other distractions — we found ourselves moving this maker back to the kitchen counter time and again, because the brew process and its results were superior. If you, like us, are a fan of the Moccamaster, you’re likely to be one for many years to come, which will amortize the steep price tag accordingly.

    [Image: Mr. Coffee 12-cup drip coffee maker]

    We won’t go on and on about the Mr. Coffee 12-Cup, but it brewed a very workable 12 cups, in both taste and temperature, in just nine minutes. The machine came packaged in some pretty intense plastic and cardboard — the unboxing took a full five minutes and a pair of scissors — but once separated from its packaging, this machine’s a breeze to put together. The hardware is very easy to use (and to program to brew at a specific time), even without reading the directions. It’s compact — one of the best small drip coffee makers we tested — and durable, and the lid, brew basket, carafe and removable top half are all dishwasher safe, which wasn’t common among the machines we tested.

    The testing process for these coffee makers was intensive, lasting more than a month. We evaluated each machine based on what would be most important to the user — namely, functionality, durability and design. We tested each machine at least twice (but four to eight times for some) with both dark and light roast freshly ground beans, did a programmed/timed brew when available, and tested the additional functions of the more specialty machines (single-cup, cold brew, tea, milk frothing). We jotted notes about every machine’s unboxing, read every instruction manual, handled and rehandled the hardware, timed the brew of each machine, noted the temperature of the resulting coffee, and tasted and had others taste and weigh in on user experience. We tried to get as acquainted as possible with each of these machines, became fond of a good many of them — and as a result, we drank way too much coffee over the month in question.

    Read on for the categories and their breakdowns.

    Brew function

    • Optimal temperature: We didn’t take the actual temperature of the coffee from each machine, because we don’t think that’s how the average coffee drinker evaluates home brewing — experts recommend that coffee be brewed at between 195 and 205 degrees Fahrenheit, and served immediately, at 180 to 185 degrees — but we scored the perceived temp of each brew against all the others. We tasted each cup immediately after brewing, black, and then with added cold milk, and recorded the results.
    • Taste: The taste of coffee is, obviously, subjective. Two people could spend a lifetime tasting different coffee varietals and never agree on one. That being said, we tested each machine with both a dark roast and a light roast, keeping the amount of grounds consistent with the machine’s directions. As a result, some machines that recommended using more grounds yielded stronger brews — in those instances, we retested them with fewer grounds accordingly.
    • Time to brew: For each carafe brewed, we timed the process on an iPhone timer, both for a full carafe and half. For those machines that made single cups, we timed that process as well.
    • Heat retention: We noted whether the machine brewed into a glass or a thermal carafe, and how hot the coffee remained a half hour to an hour after brewing.
    • User-friendliness: We did an initial scan of each machine, evaluating whether a new customer would be able to brew coffee without reading the instruction manual. We then assessed whether the design of each machine is immediately intuitive, and on a more micro level, assessed the settings and buttons on the face of the machine, the markings on the water tank and carafe, how easy the carafe is to fill, and the design of the brew basket.
    • Volume yield: We noted how many ounces each machine can brew.
    • Programmability: We recorded whether you can program the machine to brew at a set time.

    Durability

    • Everyday durability: For this category, we assessed how the machine responded to being handled during setup, filling the water tank, adding the grounds, removing and replacing the carafe to serve, and cleanup, and how durable the hardware felt.
    • Build quality: We noted what materials the machine is built from, e.g., plastic, metal, brushed metal, glass, and the tangible feel of each machine in a user’s hands.
    • Serviceability: We noted the ease of opening and taking apart the removable parts of each machine, in case it ever needs to be serviced.

    Setup and breakdown

    • Ease of assembly: We observed how long it took to unbox the machine, put it together, and do an initial water flush before the product could be used.
    • Size of machine: We assessed how much counter space each machine took up, and how easy it is to move and store.
    • Ease of cleaning: After each brewing, we took note of how easy it was to clean the brew basket, the carafe, and the surrounding hardware.

    Aesthetic

    • First impression: We observed our first impression of each machine, noting details of design, color, size, feel — whether this machine looked attractive on our counter.
    • Color options: We researched if the machine came in any colors besides black.

    Warranty

    • We checked the number of years of warranty of each machine.

    Ninja Hot and Cold Brewed System

    We tested two Ninja machines, both of which have some very appealing features. The hot and cold brew system brewed an excellent pot of hot coffee in less than five minutes, as well as a very tasty single cup (in multiple sizes), no easy feat to perfect. It also brews coffee intended to be served directly over ice, an option that lots of consumers will like. We love the cool, minimalist glass carafe, though the lid features a big hole in the middle for pouring, which can lead to some splashing.

    This machine, though prolific in function, lost points because the water tank, plastic with prominent ridges, feels cheap and degrades the user experience a bit (with this machine, thankfully, the plastic tank is in the back, hidden from view, but it does need to be handled every time you add water). Another problem with this machine: The water tank doesn’t have measurement markings, only lines for a half carafe, a full carafe and two single-cup sizes. Without ounce or cup markings, how does one know how much water to add for a given amount of coffee grounds? The Ninja machines come with a specially sized coffee scoop, with different amounts on each end, but it was bothersome that the water and coffee amounts couldn’t be measured in standard units without relying on the provided removable accessories (which, for the record, are cute; there’s a removable frothing wand). The wealth of performance features also means a busy control panel that feels a bit high-maintenance.

    The Ninja Specialty is similar to the hot and cold brewed one, with one major difference: The water tank is adjacent to the brew basket, and visible to the eye. This one also brews a very nice cup of hot fresh coffee, and has nifty added functions, too, like myriad sizes of individual cups, half and full carafes, and an over-ice option. The placement of the water tank front and center here, though, makes this one less appealing than the hot and cold option; the tank, similarly, feels flimsy and cheap, a factor that’s difficult to overlook in user experience. For those who like the Ninja brand products (they make blenders and other items), though, there’s a lot of function for your buck here.

    The most basic of the Cuisinart options we tested, this machine brewed a nearly perfect cup at what was, for this reviewer, a perfectly hot temperature (even after adding a significant amount of cold milk, we still had a steaming hot cup), thanks to an adjustable carafe temperature setting. This machine is solid and well-designed, with one downside (for us): Brewing time was 14 minutes for eight cups, nearly double the time of some of the other brewers we tested.

    Cuisinart Coffee Center 10-Cup Thermal Coffee Maker and Single Serve Brewer

    Our third Cuisinart brews only 10 cups into a thermal carafe, but has the handy bonus feature of a single-serve brew — with an attachment to use prepackaged coffee pods, or an adorable mini filter to use fresh grounds. (Note: The mini filter is a bit of a chore to clean because it is so small.) Like its Cuisinart siblings above, this machine makes good coffee, but the single-serve brewer does make the hardware as a whole more cumbersome. One annoying design issue: There’s an on/off switch on the side of the machine, whose placement feels unintuitive.

    The De’Longhi TrueBrew is a superautomatic machine, meaning it incorporates a grinder so you can start with whole beans and have it do the rest for you. At the touch of a button, it can produce a single cup or a whole pot of coffee. The TrueBrew is incredibly convenient, but it’s quite expensive, and despite its wide range of options, in our testing we got better-tasting coffee from dedicated pour-over coffee and espresso setups. Unless you really want the ease of use of a pod machine along with the ability to use fresh whole beans, it may not be for you.

    The most affordable automatic drip machine we tested, the Black & Decker 12-cup, is also a solid choice. It brewed eight tasty cups in eight short minutes — overall a good user experience. Hardware-wise, it felt a bit less durable than its closest rival, the Mr. Coffee, but it’s programmable and super easy to use, for about the cost of two lattes with an extra shot.

    The Bonavita Connoisseur has its fans, but we had multiple issues with the machine. This pleasingly retro-looking apparatus brews a nice cup quickly and at a good temperature, but the user experience leaves much to be desired. Simply put, the design feels flawed. The lid of the carafe needs to be removed before brewing, so the coffee just brews directly into a wide-open carafe — this was so counterintuitive to us, even after three or four brew tries, that it diminished the experience of the brew process. The brewer also gets very hot during brewing — so hot that we wondered if it might actually be a safety issue. Lastly, after brewing, we screwed the carafe lid back on and tried to return the carafe to underneath the brewer — sure, maybe we were still sleepy, maybe not enough caffeine yet — but the carafe doesn’t fit under the brewer with the lid on; the entire top of the machine popped off. This affects storage of the machine, too; because the carafe lid and the brew basket don’t both fit into the hardware at the same time, there’s always one piece loose.

    We were giddy upon opening this fancy brewer with much to offer: standard brew, fast brew, gold (what even is that, we wondered at first glance!), cold brew, single cup (with an attachment sold separately) and a setting customizable to your preferences. The options are exciting, but also overwhelming. The user is prompted to enter the hardness of their water on a hard-to-soft scale; do all home coffee drinkers know the hardness of their tap water? Also, does the average coffee drinker know what Gold Cup certification is? These feel like niche details for an automatic drip machine.

    Big picture, the Breville brewed a good pot of coffee, quite quickly, but we didn’t find it hot enough. The whole apparatus is beautifully designed, with sleek brushed metal and a lightweight, handsome carafe lovely enough to join a brunch table. But digging in further, we found this machine just to be… well, just a little too much. Too much hardware — it doesn’t fit easily under our cabinets. Too many options — we needed to read up on a bunch of coffee wisdom before we could even set up the machine to our preferences. There are lots of users who would find this machine the sweet spot of function and sophistication, and enjoy exploring all of its specialties, but for those looking for turnkey coffee-making, this is a little extra.


  • How your phone learned to see in the dark | CNN Business



    New York
    CNN
     — 

    Open up Instagram at any given moment and it probably won’t take long to find crisp pictures of the night sky, a skyline after dark or a dimly lit restaurant. While shots like these used to require advanced cameras, they’re now often possible from the phone you already carry around in your pocket.

    Tech companies such as Apple, Samsung and Google are investing resources to improve their night photography options at a time when camera features have increasingly become a key selling point for smartphones that otherwise largely all look and feel the same from one year to the next.

    Earlier this month, Google brought a faster version of its Night Sight mode, which uses AI algorithms to lighten or brighten images in dark environments, to more of its Pixel models. Apple’s Night mode, which is available on models as far back as the iPhone 11, was touted as a premier feature on its iPhone 14 lineup last year thanks to its improved camera system.

    These tools have come a long way in just the past few years, thanks to significant advancements in artificial intelligence technology as well as image processing that has become sharper, quicker, and more resilient to challenging photography situations. And smartphone makers aren’t done yet.

    “People increasingly rely on their smartphones to take photos, record videos, and create content,” said Lian Jye Su, an artificial intelligence analyst at ABI Research. “[This] will only fuel the smartphone companies to up their games in AI-enhanced image and video processing.”

    While there has been much focus lately on Silicon Valley’s renewed AI arms race over chatbots, the push to develop more sophisticated AI tools could also help further improve night photography and bring our smartphones closer to being able to see in the dark.

    Samsung’s Night mode feature, which is available on various Galaxy models but optimized for its premium S23 Ultra smartphone, promises to do what would have seemed unthinkable just five to 10 years ago: enable phones to take clearer pictures with little light.

    The feature is designed to minimize what’s called “noise,” a photography term for the grainy distortion caused by poor lighting conditions, long exposure times and other factors that can take away from the quality of an image.

    The secret to reducing noise, according to the company, starts with the S23 Ultra’s adaptive 200-megapixel sensor. After the shutter button is pressed, Samsung uses advanced multi-frame processing to combine multiple images into a single picture, with AI automatically adjusting the photo as necessary.

    “When a user takes a photo in low or dark lighting conditions, the processor helps remove noise through multi-frame processing,” said Joshua Cho, executive vice president of Samsung’s Visual Solution Team. “Instantaneously, the Galaxy S23 Ultra detects the detail that should be kept, and the noise that should be removed.”

    For Samsung and other tech companies, AI algorithms are crucial to delivering photos taken in the dark. “The AI training process is based on a large number of images tuned and annotated by experts, and AI learns the parameters to adjust for every photo taken in low-light situations,” Su explained.

    For example, algorithms identify the right level of exposure, determine the correct color palette and gradient under certain lighting conditions, artificially sharpen blurred faces or objects, and then make those changes. The final result, however, can look quite different from what the person taking the picture saw in real time, in what some might argue is a technical sleight of hand.
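    The multi-frame approach described above can be sketched in a few lines: shoot a burst of frames, align them, and average them so that random sensor noise cancels out while the scene stays put. This is a generic illustration of frame stacking with synthetic data, not Samsung’s actual pipeline; the `stack_frames` helper and the noise figures are assumptions for the demo.

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of aligned frames to suppress sensor noise.

    Noise is independent from frame to frame, so averaging N frames
    reduces its standard deviation by roughly sqrt(N) while the
    underlying scene content is preserved.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a 16-shot burst: a dim "true" scene plus per-frame sensor noise.
rng = np.random.default_rng(0)
scene = rng.uniform(10, 40, size=(64, 64))   # low-light scene, 0-255 scale
frames = [scene + rng.normal(0, 8, scene.shape) for _ in range(16)]

merged = stack_frames(frames)
single_err = np.abs(frames[0] - scene).mean()   # error of one noisy frame
merged_err = np.abs(merged - scene).mean()      # error after stacking
```

    Real phone pipelines must also align the frames (hands shake between shots) and handle moving subjects, which is where much of the engineering effort goes.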

    Lights illuminate the Atlanta Botanical Gardens in this photo taken using the Google Pixel 5’s Night Sight setting.

    Google is also focused on reducing noise in photography. Its AI-powered Night Sight feature captures a burst of longer-exposure frames. It then uses something called HDR+ Bracketing, which creates several photos with different settings. After a picture is taken, the images are combined to create “sharper photos” even in dark environments “that are still incredibly bright and detailed,” said Alex Schiffhauer, a group product manager at Google.
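    Google has not published the internals of HDR+ Bracketing, but the core idea of exposure bracketing, merging frames shot at different exposures with per-pixel weights that favor well-exposed values, can be illustrated with a toy sketch. The `fuse_exposures` function, its Gaussian weighting and the synthetic frames below are assumptions for illustration, not Google’s algorithm.

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Blend bracketed exposures into one image.

    Pixel values are assumed normalized to [0, 1]. Pixels near mid-gray
    (0.5) are treated as well exposed and get the largest weight, so
    shadow detail comes from the long exposure and highlight detail
    from the short one.
    """
    stack = np.stack(frames)                       # shape: (n_frames, h, w)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # per-pixel convex weights
    return (weights * stack).sum(axis=0)

# Two synthetic exposures of the same scene.
rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 1.0, size=(32, 32))
short = np.clip(scene * 0.4, 0, 1)   # underexposed: shadows crushed
long_ = np.clip(scene * 1.8, 0, 1)   # overexposed: highlights clipped

fused = fuse_exposures([short, long_])
```

    Because each output pixel is a convex combination of the input frames, the fused value always lies between the darkest and brightest capture at that pixel, which is what keeps both shadows and highlights in range.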

    While effective, there can be a slight but noticeable delay before the image is ready. But Schiffhauer said Google intends to speed up this process more on future Pixel iterations. “We’d love a world in which customers can get the quality of Night Sight without needing to hold still for a few seconds,” Schiffhauer said.

    Google also has an astrophotography feature that allows people to take shots of the night sky without needing to tweak the exposure or other settings. The algorithms detect details in the sky and enhance them to stand out, according to the company.

    Apple has long been rumored to be working on an astrophotography feature, but some iPhone 14 Pro Max users have successfully been able to capture pictures of the sky through its existing Night mode tool. When a device detects a low-light environment, Night mode turns on to capture details and brighten shots. (The company did not respond to a request to elaborate on how the algorithms work.)

    AI can make a difference in the image, but the end results for each of these features also depend on the phone’s lenses, said Gartner analyst Bill Ray. A traditional camera will have the lens several centimeters from the sensor, but the limited space in a phone often requires squeezing things together, which can result in a shallower depth of field and reduced image quality, especially in darker environments.

    “The quality of the lens is still a big deal, and how the phone addresses the lack of depth,” Ray said.

    While night photography on phones has come a long way, a buzzy new technology could push it ahead even more.

    Generative AI, the technology that powers the viral chatbot ChatGPT, has earned plenty of attention for its ability to create compelling essays and images in response to user prompts. But these AI systems, which are trained on vast troves of online data, also have potential to edit and process images.

    “In recent years, generative AI models have also been used in photo-editing functions like background removal or replacement,” Su said. If this technology is added to smartphone photo systems, it could eventually make night modes even more powerful, Su said.

    Big Tech companies, including Google, are already fully embracing this technology in other parts of their business. Meanwhile, smartphone chipset vendors like Qualcomm and MediaTek are looking to support more generative AI applications natively on consumer devices, Su said. These include image and video augmentation.

    “But this is still about two to three years away from limited versions of this showing up on smartphones,” he said.


  • Elon Musk’s weekend antics could only further crumble Twitter’s brand value | CNN Business



    New York
    CNN
     — 

    Under Elon Musk, Twitter has antagonized multiple major news organizations by labeling them state-funded media, appears to have eased restrictions on Russian government accounts and has made crude jokes on the front of its headquarters and in Musk’s own Twitter display name.

    And that’s just this weekend.

    Musk’s antics, which only seem to have escalated this month, threaten to further erode Twitter’s brand value. For months, the company has struggled to retain advertisers and supplement its declining ad business — which previously comprised 90% of its annual revenue — by convincing users to pay up for its Twitter Blue subscription service.

    Musk, who is on the hook for large payments to lenders after buying the company for $44 billion, financed in part with significant debt, must either coax hesitant advertisers back to the platform, boost its subscription business, or both. But his recent erratic moves may only complicate those turnaround efforts.

    Late last week, Twitter faced backlash for labeling NPR as a “state-affiliated media” organization akin to foreign propaganda outlets such as Russia’s RT and Sputnik, in an apparent violation of its own policies. NPR CEO John Lansing called Twitter’s move “unacceptable,” and said the organization is “supported by millions of listeners.”

    Following the pushback, Twitter changed NPR’s label to “government funded media,” and applied the same designation to British broadcaster BBC over the weekend. Twitter has not given a definition for what it considers “government funded media,” but the BBC pushed back on the label, saying it is independent and “funded by the British public through the license fee.”

    The moves risk alienating some of the best-known media organizations in the world and undermining what has long been a key selling point for the platform: its role as a central hub for news. NPR, in particular, has not tweeted from its main account in nearly a week.

    While Twitter labeled some news accounts as state-funded, it also appears to have removed some restrictions on Russian government accounts that had been put in place following the outset of Russia’s war in Ukraine, again prompting outrage among some users.

    Musk commented on the decision in a tweet Sunday saying: “I’m told Putin called me a war criminal for helping Ukraine, so he’s not exactly my best friend. All news is to some degree propaganda. Let people decide for themselves.”

    Twitter, which laid off much of its media relations team last year, did not respond to a request for comment.

    The controversial moves come as Twitter continues to face significant business challenges. Analysis firm Similarweb last week reported that traffic to Twitter’s ad portal was down nearly 19% year-over-year in March. Many major advertisers have halted spending on Twitter since Musk’s takeover over concerns about increased hate speech on the platform and massive cuts to the company’s workforce.

    Musk has said Twitter is working to improve the platform’s ad targeting to increase value for advertisers. “But all the while there have been distractions,” said Scott Kessler, technology sector lead at research firm Third Bridge, adding that there are “significant questions about the direction that the company is going.” At the same time, online ad spending broadly has contracted over concerns about the economy.

    Against that backdrop, Musk’s Twitter has made several head-scratching announcements this month, some of which might only add to its challenges.

    Musk previously frustrated some of Twitter’s celebrity users, who have long been a key selling point for the platform, with a promise to remove blue checkmarks from accounts that had been verified under Twitter’s previous system. But it didn’t exactly go to plan — instead of removing checks from all previously verified users, Twitter appeared to target a single account belonging to the New York Times.

    Days later, Twitter’s home button was temporarily replaced with doge, the meme representing the cryptocurrency dogecoin, which Musk has promoted. The company also briefly restricted Twitter users from sharing links to a rival platform, upsetting users, including one who had previously reported the so-called Twitter files using documents provided by Musk.

    As if to underscore his unique and questionable impact on the brand, the “Chief Twit” has also apparently been keeping busy with changes to Twitter’s San Francisco headquarters. Last week, photos began spreading of a piece of plastic covering the “w” in the sign on the front of the company’s office.

    At nearly midnight on Sunday, Musk tweeted that the company’s landlord “says we’re legally required to keep sign as Twitter & cannot remove ‘w,’ so we painted it background color,” alongside a photo of the “w” painted white against a white background, leaving a more asinine word in its place. “Problem solved!” Musk tweeted.

    If only the same could be said for the platform’s business troubles.


  • We are already in the food fight portion of the GOP primary | CNN Politics


    A version of this story appeared in CNN’s What Matters newsletter. To get it in your inbox, sign up for free here.



    CNN
     — 

    The 2024 Republican presidential primary is not yet fully underway, and already we are in the food fight phase.

    A super PAC supporting former President Donald Trump tried to smear Florida Gov. Ron DeSantis with pudding, seizing on a report, which the governor denies, about his eating habits to make a point about Social Security and Medicare.

    The ad itself is gross. And it drew a super PAC supporting DeSantis off the sidelines to air an ad of its own wondering why Trump was going after the Florida governor.

    For the record, neither DeSantis nor Trump currently says he will touch safety net benefits, but both have in the past suggested they could.

    I talked to CNN chief national affairs correspondent Jeff Zeleny by email about the Trump/DeSantis dynamic, the role of deep-pocketed super PACs and what else is going on in this nascent primary campaign.

    WOLF: We are nine months away from the first primaries and not all of the top candidates have even declared their candidacies. But there’s some super PAC mudslinging. What’s happening and what do we need to take from all of this?

    ZELENY: A new season of attack ads has begun, with allies of Donald Trump and Ron DeSantis firing some of the first direct shots of the young presidential campaign. Now is the time to define your opponent – whether you’re an announced candidate (Trump) or expected to be one soon (DeSantis) – and begin pointing out potential vulnerabilities. Not surprisingly, the opening volley was about Social Security and Medicare and highlighting old comments about promising to reform the entitlement programs.

    WOLF: Super PACs can’t technically coordinate with campaigns. DeSantis doesn’t technically have a campaign. How is that working exactly?

    ZELENY: The Florida governor isn’t planning on jumping into the presidential race until May or June – after the legislative session is over – so until then, a group of deep-pocketed allies are coming to his defense. The super PAC, which is called Never Back Down, is effectively a campaign in waiting, complete with pollsters and political strategists of all varieties. Federal election law prohibits coordinating with the campaign, but when there isn’t an official campaign, that formality becomes far easier.

    WOLF: Do other Republican candidates have deep-pocketed super PACs? Who are the other players to watch?

    ZELENY: Not nearly as deep, no, but most major Republican candidates have at least some type of super PAC assistance. Former South Carolina Gov. Nikki Haley has some support – and is seeking more – as are other potential candidates. One likely presidential contender, Sen. Tim Scott, has one financial advantage that makes him stand apart from his rivals: He has more than $20 million left over in his campaign account from last year’s Senate race, which he can use on his presidential race. That’s a head start most of his rivals can only dream of.

    WOLF: Trump and DeSantis have been shadowboxing around each other for some time. Can we assume this is a prelude to a much more bruising fight in the making? What does this say about GOP unity heading into the primaries?

    ZELENY: GOP unity? That will come later – or that’s the hope of top Republican officials – but the bruising season of define-your-opponent is underway. The Trump-DeSantis feud has long been simmering, but their springtime exchanges are almost certainly quaint, compared to what’s likely to come.

    WOLF: What do we know about where these super PAC ads are running? Are they focused on specific types of voters or is this simply an effort to get attention from us in the media?

    ZELENY: For now, most of the ads are running on cable television and sports programming. The Make America Great Again group, which supports Trump, has been running ads for weeks now seeking to define DeSantis in a negative light. You have likely seen some of these, which begin with the ominous: “Think you know Ron DeSantis? Think again.”

    WOLF: Are there any changes in how you think super PACs will operate this year and how they’ll be involved in the campaign?

    ZELENY: With every passing election cycle, super PACs play a more prominent role. It’s easier to raise money – without the federal limits imposed upon candidates. If the early months of the year are any indication, the 2024 campaign will push the limits even more, with outside groups far more important than political parties or, in some cases, even the candidates themselves.

    WOLF: Are there any early conclusions we can draw about how Trump’s indictment by the Manhattan DA on criminal charges has affected his campaign? Has it impacted his popularity among Republican voters? Affected his fundraising?

    ZELENY: Early conclusions are often risky ones, but the Trump campaign insists the indictment has been a fundraising boost. It certainly has rallied many Republicans around him – or at least unified them in opposition to the indictment – but it may be far too soon to say whether this will continue to be the case. He faces potential criminal action in Georgia, for his role in trying to overturn the election results, as well as at least two federal investigations.


  • Why the ‘Godfather of AI’ decided he had to ‘blow the whistle’ on the technology | CNN Business



    New York
    CNN
     — 

    Geoffrey Hinton, also known as the “Godfather of AI,” decided he had to “blow the whistle” on the technology he helped develop after worrying about how smart it was becoming, he told CNN on Tuesday.

    “I’m just a scientist who suddenly realized that these things are getting smarter than us,” Hinton told CNN’s Jake Tapper in an interview on Tuesday. “I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us.”

    Hinton’s pioneering work on neural networks shaped artificial intelligence systems powering many of today’s products. On Monday, he made headlines for leaving his role at Google, where he had worked for a decade, in order to speak openly about his growing concerns around the technology.

    In an interview Monday with the New York Times, which was first to report his move, Hinton said he was concerned about AI’s potential to eliminate jobs and create a world where many will “not be able to know what is true anymore.” He also pointed to the stunning pace of advancement, far beyond what he and others had anticipated.

    “If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us, and there are very few examples of a more intelligent thing being controlled by a less intelligent thing,” Hinton told Tapper on Tuesday.

    “It knows how to program so it’ll figure out ways of getting around restrictions we put on it. It’ll figure out ways of manipulating people to do what it wants.”

    Hinton is not the only tech leader to speak out with concerns over AI. A number of prominent figures in the tech community signed a letter in March calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers the viral chatbot ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

    Apple co-founder Steve Wozniak, who was one of the signatories on the letter, appeared on “CNN This Morning” on Tuesday, echoing concerns about AI’s potential to spread misinformation.

    “Tricking is going to be a lot easier for those who want to trick you,” Wozniak told CNN. “We’re not really making any changes in that regard – we’re just assuming that the laws we have will take care of it.”

    Wozniak also said “some type” of regulation is probably needed.

    Hinton, for his part, told CNN he did not sign the petition. “I don’t think we can stop the progress,” he said. “I didn’t sign the petition saying we should stop working on AI because if people in America stop, people in China wouldn’t.”

    But he confessed to not having a clear answer for what to do instead.

    “It’s not clear to me that we can solve this problem,” Hinton told Tapper. “I believe we should put a big effort into thinking about ways to solve the problem. I don’t have a solution at present.”


  • The man behind ChatGPT is about to have his moment on Capitol Hill | CNN Business



    New York
    CNN
     — 

    For a few months in 2017, there were rumors that Sam Altman was planning to run for governor of California. Instead, he kept his day job as one of Silicon Valley’s most influential investors and entrepreneurs.

    But now, Altman is about to make a different kind of political debut.

    Altman, the CEO and co-founder of OpenAI, the artificial intelligence company behind viral chatbot ChatGPT and image generator Dall-E, is set to testify before Congress on Tuesday. His appearance is part of a Senate subcommittee hearing on the risks artificial intelligence poses for society, and what safeguards are needed for the technology.

    House lawmakers on both sides of the aisle are also expected to hold a dinner with Altman on Monday night, according to multiple reports. Dozens of lawmakers are said to be planning to attend, with one Republican lawmaker describing it as part of the process for Congress to assess “the extraordinary potential and unprecedented threat that artificial intelligence presents to humanity.”

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    The hearing and meetings come as ChatGPT has sparked a new arms race over AI. A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts. This week’s hearing may only cement his stature as a central player in AI’s rapid growth – and also add to scrutiny of him and his company.

    Those who know Altman have described him as a brilliant thinker, someone who makes prescient bets and has even been called “a startup Yoda.” In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    “If anyone knows where this is going, it’s Sam,” Brian Chesky, the CEO of Airbnb, wrote in a post about Altman for the latter’s inclusion this year on Time’s list of the 100 most influential people. “But Sam also knows that he doesn’t have all the answers. He often says, ‘What do you think? Maybe I’m wrong?’ Thank God someone with so much power has so much humility.”

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    OpenAI declined to make anyone available for an interview for this story.

    The success of ChatGPT may have brought Altman greater public attention, but he has been a well-known figure in Silicon Valley for years.

    Prior to cofounding OpenAI with Musk in 2015, Altman, a Missouri native, studied computer science at Stanford University, only to drop out to launch Loopt, an app that helped users share their locations with friends and get coupons for nearby businesses.

    In 2005, Loopt was part of the first batch of companies at Y Combinator, a prestigious tech accelerator. Paul Graham, who co-founded Y Combinator, later described Altman as “a very unusual guy.”

    “Within about three minutes of meeting him, I remember thinking ‘Ah, so this is what Bill Gates must have been like when he was 19,’” Graham wrote in a post in 2006.

    Loopt was acquired in 2012 for about $43 million. Two years later, Altman took over from Graham as president of Y Combinator, a position that connected him with numerous powerful figures in the tech industry. He remained at the helm of the accelerator until 2019.

    Margaret O’Mara, a tech historian and professor at the University of Washington, told CNN that Altman “has long been admired as a thoughtful, significant guy and in the remarkably small number of powerful people who are kind of at the top of tech and have a lot of sway.”

    During the Trump administration, Altman gained new attention as a vocal critic of the president. It was against that backdrop that he was rumored to be considering a run for California governor.

    Rather than running, however, Altman instead looked to back candidates who aligned with his values, which include lowering the cost of living, promoting clean energy and redirecting 10% of the defense budget to research and development of future technology.

    Altman continues to push for some of these goals through his work in the private sector. He invested in Helion, a fusion research company that inked a deal with Microsoft last week to sell clean energy to the tech giant by 2028.

    Altman has also been a proponent of the idea of a universal basic income and has suggested that AI could one day help fulfill that goal by generating so much wealth it could be redistributed back to the public.

    As Graham told The New Yorker about Altman in 2016, “I think his goal is to make the whole future.”

    When launching OpenAI, Musk and Altman’s original mission was to get ahead of the fear that AI could harm people and society.

    “We discussed what is the best thing we can do to ensure the future is good?” Musk told the New York Times about a conversation with Altman and others before launching the company. “We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing A.I. in a way that is safe and is beneficial to humanity.”

    In an interview at the launch of OpenAI, Altman explained the company as his way of trying to steer the path of AI technology. “I sleep better knowing I can have some influence now,” he said.

    If there’s one thing AI enthusiasts and critics can agree on right now, it may be that Altman clearly has succeeded in having some influence over the rapidly evolving technology.

    Less than six months after the release of ChatGPT, it has become a household name, almost synonymous with AI itself. CEOs are using it to draft emails. Realtors are using it to write listings and draft legal documents. The tool has passed exams from law and business schools – and been used to help some students cheat. And OpenAI recently released a more powerful version of the technology underpinning ChatGPT.

    Tech giants like Google and Facebook are now racing to catch up. Similar generative AI technology is quickly finding its way into productivity and search tools used by billions of people.

    A future that once seemed very far off now feels right around the corner, whether society is ready for it or not. Altman himself has professed not to be sure about how it will turn out.

    O’Mara said she believes Altman fits into “the techno-optimist school of thought that has been dominant in the Valley for a very long time,” which she describes as “the idea that we can devise technology that can indeed make the world a better place.”

    While Altman’s cautious remarks about AI may sound at odds with that way of thinking, O’Mara argues it may be an “extension” of it. In essence, she said, it’s related to “the idea that technology is transformative and can be transformative in a positive way but also has so much capacity to do so much that it actually could be dangerous.”

    And if AI should somehow help bring about the end of society as we know it, Altman may be more prepared than most to adapt.

    “I prep for survival,” he said in a 2016 profile of him in the New Yorker, noting several possible disaster scenarios, including “A.I. that attacks us.”

    “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”


  • How the CEO behind ChatGPT won over Congress | CNN Business



    Washington CNN —

    OpenAI CEO Sam Altman seems to have achieved in a matter of hours what other tech execs have been struggling to do for years: He charmed the socks off Congress.

    Despite wide-ranging concerns that artificial intelligence tools like OpenAI’s ChatGPT could disrupt democracy, national security, and the economy, Altman’s appearance Tuesday before a Senate subcommittee went so smoothly that viewers could have been forgiven for thinking the year was closer to 2013 than 2023.

    It was a pivotal moment for the AI industry. Altman’s testimony on Tuesday alongside Christina Montgomery, IBM’s chief privacy officer, promised to set the tone for how Washington regulates a technology that many fear could eliminate jobs or destabilize elections.

    But where lawmakers could have followed a familiar pattern, blasting the tech industry with hostile questioning and leveling withering allegations of reckless innovation, members of the Senate Judiciary Committee instead heaped praise on the companies — and often, on Altman in particular.

    The difference seemed to come down to OpenAI calling for proactive government regulation — and persuading lawmakers it was serious. Unlike the long list of social media hearings in recent years, this AI hearing came earlier in OpenAI’s lifecycle and, crucially, before the company or its technology had suffered any high-profile mishaps.

    Altman, more than any other figure in tech, has emerged as the face of a new crop of powerful and disruptive AI tools that can generate compelling written work and images in response to user prompts. Much of the federal government is now racing to figure out how to regulate the cutting-edge technology.

    But after his performance on Tuesday, the CEO whose company helped spark the new AI arms race may have maneuvered himself into a privileged position of influence over the rules that may soon govern the tools he’s developing.

    Altman’s easy-going, plain-spoken demeanor helped disarm skeptical lawmakers and appeared to win over Democrats and Republicans alike. His approach contrasted with the wooden, lawyerly performances that have afflicted some other tech CEOs in the past during their time in the hotseat.

    “I sense there is a willingness to participate here that is genuine and authentic,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the committee’s technology panel.

    New Jersey Democratic Sen. Cory Booker, adopting an unusual level of familiarity with a witness, found himself repeatedly addressing Altman as “Sam,” even as he referred to other panelists by their last names.

    Even Altman’s fellow witnesses couldn’t resist gushing about his style.

    “His sincerity in talking about those [AI] fears is very apparent, physically, in a way that just doesn’t communicate on the television screen,” Gary Marcus, a former New York University professor and a self-described critic of AI “hype,” told lawmakers.

    With a relaxed yet serious tone, Altman did not deflect or shy away from lawmakers’ concerns. He agreed that large-scale manipulation and deception using AI tools are among the technology’s biggest potential flaws. And he validated fears about AI’s impact on workers, acknowledging that it may “entirely automate away some jobs.”

    “If this technology goes wrong, it can go quite wrong, and we want to be vocal about that,” Altman said. “We want to work with the government to prevent that from happening.”

    Altman’s candor and openness have captivated many in Washington.

    On Monday evening, Altman spoke to a dinner audience of roughly 60 House lawmakers from both parties. One person in the room, speaking on condition of anonymity to discuss a closed-door meeting, described members of Congress as “riveted” by the conversation, which also saw Altman demonstrating ChatGPT’s capabilities “to much amusement” from the audience.

    Lawmakers have spent years railing against social media companies, attacking them for everything from their content moderation decisions to their economic dominance. On Tuesday, they seemed ready — or even relieved — to be dealing with another area of the technology industry.

    Whether this time is truly different remains unclear, though. The AI industry’s biggest players and aspirants include some of the same tech giants Congress has sharply criticized, including Google and Meta. OpenAI is receiving billions of dollars of investment from Microsoft in a multi-year partnership. And with his remarks on Tuesday, Altman appeared to draw from a familiar playbook for Silicon Valley: Referring to technology as merely a neutral tool, acknowledging his industry’s imperfections and inviting regulation.

    Some AI ethicists and experts questioned the value of asking a leading industry spokesperson how he would like to be regulated. Marcus, the New York University professor, cautioned that creating a new federal agency to police AI could lead to “regulatory capture” by the tech industry, but the warning could have applied just as easily to Congress itself.

    “It seems very very bad that ahead of a hearing meant to inform how this sector gets regulated, the CEO of one of the corporations that would be subject to that regulation gets to present a magic show to the regulators,” Emily Bender, a professor of computational linguistics at the University of Washington, said of Altman’s dinner with House lawmakers.

    She added: “Politicians, like journalists, must resist the urge to be impressed.”

    After years of fidgety evasiveness from other tech CEOs, however, lawmakers this week seemed easily wowed by Altman and his seemingly straight-shooting answers.

    Louisiana Republican Sen. John Kennedy, after expressing frustration with IBM’s Montgomery for providing a nuanced answer he couldn’t comprehend, visibly brightened when Altman quickly and smoothly outlined his regulatory proposals in a bulleted list. Kennedy began joking with Altman and even asked whether Altman might consider heading up a hypothetical federal agency charged with regulating the AI industry.

    “I love my current job,” Altman deadpanned, to audience laughter, before offering to send Kennedy’s office some potential candidates.

    Compounding lawmakers’ attraction to Altman is a belief on Capitol Hill that Congress erred in extending broad liability protections to online platforms at the dawn of the internet. That decision, which allowed for an explosion of blogs, e-commerce sites, streaming media and more, has become an object of regret for many lawmakers in the face of alleged mental health harms stemming from social media.

    “I don’t want to repeat that mistake again,” said Judiciary Committee Chairman Dick Durbin.

    Here too, Altman deftly seized an opportunity to curry favor with lawmakers by emphasizing distinctions between his industry and the social media industry.

    “We try to design systems that do not maximize for engagement,” Altman said, alluding to the common criticism that social media algorithms tend to prioritize outrage and negativity to boost usage. “We’re not an advertising-based model; we’re not trying to get people to use it more and more, and I think that’s a different shape than ad-supported social media.”

    In providing simple-sounding solutions with a smile, Altman is doing much more than shaping policy: He is offering members of Congress a shot at redemption, one they seem grateful to accept. Despite the many pitfalls of AI they identified on Tuesday, lawmakers appeared to thoroughly welcome Altman as a partner, not a potential adversary needing oversight and scrutiny.

    “We need to be mindful,” Blumenthal said, “of ways that rules can enable the big guys to get bigger and exclude innovation, and competition, and responsible good guys such as our representative in this industry right now.”


  • Amazon looks to adapt Alexa to the rise of ChatGPT | CNN Business




    CNN —

    For years, Alexa has been synonymous with virtual assistants that can interact with users and do tasks on their behalf.

    Now Amazon is trying to keep pace with a new wave of conversational AI tools that have accelerated the artificial intelligence arms race in the tech industry and rapidly reshaped what consumers may expect from their tech products.

    Amazon’s goal is to use AI “to create this great personal assistant,” said Dave Limp, senior VP of devices and services, in a recent interview with CNN. “We’ve been using all forms of AI for a long time, but now that we see this emergence of generative AI, we can accelerate that vision even faster.”

    Generative AI refers to a type of AI that can create new content, such as text and images, in response to user prompts. Limp did not elaborate on how generative AI could be used in Alexa products, but there are clear possibilities.

    In theory, this technology could one day help Alexa have more natural conversations with users, answer more complex questions, and be more creative by telling stories or making up song lyrics in seconds. It could also enable more personalized interactions, allowing the assistant to learn a device owner’s interests and preferences and better tailor its responses to each person.

    “We’re not done and won’t be done until Alexa is as good or better than the ‘Star Trek’ computer,” Limp said. “And to be able to do that, it has to be conversational. It has to know all. It has to be the true source of knowledge for everything.”

    Alexa launched nearly a decade ago and, along with Siri, Cortana and other voice assistants, seemed poised to change the way people interacted with technology. But the viral success of ChatGPT has arguably accomplished that faster and across a wider range of everyday products.

    The effort to continue updating the technology that powers Alexa comes at a difficult moment for Amazon. Like other Big Tech companies, Amazon is now slashing staff and shelving products in an urgent effort to cut costs amid broader economic uncertainty. The Alexa division has not escaped unscathed.

    Amazon confirmed plans in January to lay off more than 18,000 employees as the global economic outlook continued to worsen. In March, the company said about 9,000 more jobs would be affected. Limp said his division lost about 2,000 people, about half of whom were from the Alexa team.

    Amazon also shut down some of the products it spun up earlier in the pandemic, such as its wearable fitness brand Halo, which allowed users to ask Alexa questions about their health and wellness. Limp said the company also shelved some “more risky” projects. “I wouldn’t doubt we’ll dust them off at some point and bring them back,” he said. “We’re still taking a lot of risks in this organization.”

    But Limp said Alexa remains a “North Star” for his division. “To give you a sense, there’s still thousands and thousands of people working on Alexa,” he said.

    Amazon is indeed still investing in Alexa and its related Echo smart speaker lineup. Last week, the company unveiled several new products, including the $39.99 Echo Pop and the $89.99 Echo Show 5, its smart speaker with a screen. While the products feature incremental updates, Limp said Amazon’s current lineup contains hints of what’s to come with its AI efforts, beyond generative AI.

    For example, if Alexa is enabled on an Echo Show, where it can rotate and follow users around the room, “you’ll see glimmers of where it’s going over the next months and years,” Limp said.

    But generative AI remains a key focus for the company. Amazon CEO Andy Jassy said in a letter to shareholders in April that the company is focused on “investing heavily” in the technology “across all of our consumer, seller, brand, and creator experiences.”

    The company is reportedly working on adding ChatGPT-like search capabilities for its e-commerce store. Amazon is also rumored to be planning to use generative AI to bring conversational language to a home robot.

    While Limp didn’t comment on the report, he said the end goal has long been for Alexa to communicate with users in a fluid, natural way, whether it’s through an Echo device or other products such as its robotic dog, Astro.

    The concept remains a “hard technical challenge,” he said, but one that is “more tractable” with generative AI. “There’s still some hard corner cases and things to work out,” he said.


  • Meta threatens to pull news content in California if bill to pay publishers passes | CNN Business




    CNN —

    Meta, the parent company of Facebook and Instagram, threatened to remove news from its social media sites in California if the state passes a bill requiring big tech companies to pay news outlets for their content.

    In a statement posted on Twitter, Andy Stone, Meta’s communications director, called California’s Journalism Preservation Act “a slush fund that primarily benefits big, out-of-state media companies under the guise of aiding California publishers.”

    “The bill fails to recognize that publishers and broadcasters put their content on our platform themselves and that substantial consolidation in California’s local news industry came over 15 years ago, well before Facebook was widely used,” Stone said.

    The bill, sponsored by Assemblymember Buffy Wicks, D-Oakland, requires digital companies such as Google and Facebook to pay local news publishers a “journalism usage fee” whenever their news content is used or posted on those platforms. The bill also requires news publishers to invest 70% of usage fee profits into journalism jobs.

    “This threat from Meta is a scare tactic that they’ve tried to deploy, unsuccessfully, in every country that’s attempted this,” Wicks said in a statement. “It’s egregious that one of the wealthiest companies in the world would rather silence journalists than face regulation.”

    According to a spokesperson for Wicks, the bill is due for a vote in the California State Assembly on Thursday.

    The bill has garnered praise from some of the largest journalism unions in California, including Media Guild of the West and Pacific Media Workers Guild. In a joint letter, the two unions called Meta and Google “powerful landlords overseeing an ever-expanding slum of low-quality information, happy to collect advertising rents from struggling tenants while avoiding paying for upkeep.”

    However, the bill also has its detractors. Free Press Action, a non-profit media advocacy organization, has criticized the bill as doing “nothing to support trustworthy local reporting and would instead pad the profits of massive conglomerates.”
