ReportWire

Tag: international-health and science

  • Snapchat’s new AI chatbot is already raising alarms among teens and parents | CNN Business

    (CNN) —

    Within hours of Snapchat rolling out its My AI chatbot to all users last week, Lyndsi Lee, a mother from East Prairie, Missouri, told her 13-year-old daughter to stay away from the feature.

    “It’s a temporary solution until I know more about it and can set some healthy boundaries and guidelines,” said Lee, who works at a software company. She worries about how My AI presents itself to young users like her daughter on Snapchat.

    The feature is powered by the viral AI chatbot tool ChatGPT – and like ChatGPT, it can offer recommendations, answer questions and converse with users. But Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it, and bring it into conversations with friends.

    The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear you’re talking to a computer.

    “I don’t think I’m prepared to know how to teach my kid how to emotionally separate humans and machines when they essentially look the same from her point of view,” Lee said. “I just think there is a really clear line [Snapchat] is crossing.”

    The new tool is facing backlash not only from parents but also from some Snapchat users who are bombarding the app with bad reviews in the app store and criticisms on social media over privacy concerns, “creepy” exchanges and an inability to remove the feature from their chat feed unless they pay for a premium subscription.

    While some may find value in the tool, the mixed reactions hint at the risks companies face in rolling out new generative AI technology in their products, particularly products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow. Almost overnight, Snapchat has forced some families and lawmakers to reckon with questions that may have seemed theoretical only months ago.

    In a letter to the CEOs of Snap and other tech companies last month, weeks after My AI was released to Snap’s subscription customers, Democratic Sen. Michael Bennet raised concerns about the interactions the chatbot was having with younger users. In particular, he cited reports that it can provide kids with suggestions for how to lie to their parents.

    “These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 percent of American teenagers use,” Bennet wrote. “Although Snap concedes My AI is ‘experimental,’ it has nevertheless rushed to enroll American kids and adolescents in its social experiment.”

    In a blog post last week, the company said: “My AI is far from perfect but we’ve made a lot of progress.”

    In the days since its formal launch, Snapchat users have been vocal about their concerns. One user called his interaction “terrifying” after he said it lied about not knowing where the user was located. After the user lightened the conversation, he said the chatbot accurately revealed he lived in Colorado.

    In another TikTok video with more than 1.5 million views, a user named Ariel recorded a song with an intro, chorus and piano chords written by My AI about what it’s like to be a chatbot. When she sent the recorded song back, she said the chatbot denied its involvement with the reply: “I’m sorry, but as an AI language model, I don’t write songs.” Ariel called the exchange “creepy.”

    Other users shared concerns about how the tool understands, interacts with and collects information from photos. “I snapped a picture … and it said ‘nice shoes’ and asked who the people [were] in the photo,” a Snapchat user wrote on Facebook.

    Snapchat told CNN it continues to improve My AI based on community feedback and is working to establish more guardrails to keep its users safe. The company also said that similar to its other tools, users don’t have to interact with My AI if they don’t want to.

    It’s not possible to remove My AI from chat feeds, however, unless a user subscribes to its monthly premium service, Snapchat+. Some teens say they have opted to pay the $3.99 Snapchat+ fee to turn off the tool before promptly canceling the service.

    But not all users dislike the feature.

    One user wrote on Facebook that she’s been asking My AI for homework help. “It gets all of the questions right.” Another noted she’s leaned on it for comfort and advice. “I love my little pocket bestie!” she wrote. “You can change the Bitmoji [avatar] for it and surprisingly it offers really great advice to some real life situations. … I love the support it gives.”

    ChatGPT, which is trained on vast troves of data online, has previously come under fire for spreading inaccurate information, responding to users in ways they might find inappropriate and enabling students to cheat. But Snapchat’s integration of the tool risks heightening some of these issues, and adding new ones.

    Alexandra Hamlet, a clinical psychologist in New York City, said the parents of some of her patients have expressed concern about how their teenagers could interact with Snapchat’s tool. There is also concern about chatbots offering advice on mental health, because AI tools can reinforce someone’s confirmation bias, making it easier for users to seek out interactions that confirm their unhelpful beliefs.

    “If a teen is in a negative mood and does not have the awareness or desire to feel better, they may seek out a conversation with a chatbot that they know will make them feel worse,” she said. “Over time, having interactions like these can erode a teen’s sense of worth, despite their knowing that they are really talking to a bot. In an emotional state of mind, it becomes less possible for an individual to consider this type of logic.”

    For now, the onus is on parents to start meaningful conversations with their teens about best practices for communicating with AI, especially as the tools start to show up in more popular apps and services.

    Sinead Bovell, the founder of WAYE, a startup that helps prepare youth for a future with advanced technologies, said parents need to make it very clear that “chatbots are not your friend.”

    “They’re also not your therapists or a trusted adviser, and anyone interacting with them needs to be very cautious, especially teenagers who may be more susceptible to believing what they say,” she said.

    “Parents should be talking to their kids now about how they shouldn’t share anything personal with a chatbot that they would with a friend – even though, from a user design perspective, the chatbot exists in the same corner of Snapchat.”

    She added that federal regulation requiring companies to abide by specific protocols is also needed to keep up with the rapid pace of AI advancement.


  • First US senator to give birth in office offers heartfelt Mother’s Day message: ‘You’re what keeps this country strong’ | CNN Politics

    Washington (CNN) —

    Illinois Sen. Tammy Duckworth, the first sitting US senator to give birth while in office, offered a heartfelt Mother’s Day message on Sunday, celebrating moms nationwide for “growing the next generation for our nation.”

    “Hang in there, sister. We’re in this together, and nobody has perfect work-life balance, everybody struggles, and so do the best that you can,” the Democrat told CNN’s Dana Bash on “State of the Union.”

    “You’re what keeps this country strong.”

    Duckworth and her husband, Bryan Bowlsbey, are the parents of two daughters, Abigail and Maile. Abigail was born while Duckworth was serving in the US House as a Chicago-area congresswoman.

    In 2018, after giving birth to Maile, Duckworth became the first US senator to cast a vote on the floor with her newborn by her side.

    Her vote came just one day after the Senate changed long-standing rules to allow newborns on the chamber floor during votes. The rule change, voted through by unanimous consent, was done to accommodate senators with newborn babies and lets them bring children under 1 year old onto the Senate floor and breastfeed them during votes.

    “It feels great,” Duckworth told reporters at the time. “It is about time, huh?”

    The Illinois Democrat on Sunday spoke about Democratic efforts to pass legislation to address rising child care costs.

    “Families spend as much as a quarter to half of their income on child care, and there’s no way for working families to survive under those burdens,” Duckworth said.

    “We keep trying,” she added when asked by Bash about finding bipartisan solutions.

    Duckworth is a retired Army lieutenant colonel who was a helicopter pilot during the Iraq War. She was the first female double amputee from the war after suffering severe combat wounds when her Black Hawk helicopter was shot down.

    Duckworth served in the Obama administration as an assistant secretary of Veterans Affairs. She was first elected to the US House in 2012 and the Senate four years later.


  • Amazon corporate workers plan walkout next week over return-to-office policies | CNN Business


    (CNN) —

    Some Amazon corporate workers have announced plans to walk off the job next week over frustrations with the company’s return-to-work policies, among other issues, in a sign of heightened tensions inside the e-commerce giant after multiple rounds of layoffs.

    The work stoppage is being jointly organized by an internal climate justice worker group and a remote work advocacy group, according to an email from organizers and public social media posts.

    Workers participating have two main demands: asking the e-commerce giant to put climate impact at the forefront of its decision making, and to provide greater flexibility for how and where employees work.

    The lunchtime walkout is scheduled for May 31, beginning at noon. Organizers said in an internal pledge that they will go through with the walkout only if at least 1,000 workers agree to participate.

    The Washington Post was first to report the planned walkout.

    The collective action from corporate workers comes after Amazon, like other Big Tech companies, cut tens of thousands of jobs beginning late last year amid broader economic uncertainty. All told, Amazon has said this year that it is laying off some 27,000 workers in multiple rounds of cuts.

    At the same time, Amazon and other tech companies are trying to get workers into the office more. In February, Amazon said it was requiring thousands of its workers to be in the office for at least three days per week, starting on May 1.

    “Morale is really at an all-time low right now,” an Amazon corporate worker based in Los Angeles, who plans on participating in the walkout next week, told CNN. “I think the hope from this walkout is really to send a clear message to leadership that we’re expecting real action from them on a number of issues, with the thesis of just, like, we need better long term decision-making that benefits not only employees but the communities that we serve.”

    The worker, who asked not to be named, said organizers are focusing the in-person walkout efforts at the company’s Seattle headquarters but have also created a way for people to participate virtually so “all Amazonians are welcome to participate.”

    One of the internal groups spearheading next week’s walkout is dubbed Amazon Employees for Climate Justice (AECJ), the same coalition that organized protests slamming the company for inaction on climate change back in 2019.

    “Amazon must keep pace with a changing world,” the group wrote in a Twitter thread Tuesday calling for the walkout next week. “To cultivate a diverse, world-class workplace, we need real plans to tackle our climate impact and flexible work options.”

    Amazon’s Climate Pledge, signed in 2019, commits the company to reach net-zero carbon emissions by 2040, among other climate goals. But in the Twitter thread, the group blasted the pledge as “hype” and demanded “a genuine climate plan.”

    Amazon said it has made progress in meeting its goals, including by putting thousands of electric delivery vehicles on the road, and by continuing to invest in both proven and new science-backed solutions for reducing carbon emissions. Amazon also said it had the goal of powering 100% of its operations with renewable energy by 2030, and now expects to meet that goal by 2025.

    “We respect our employees’ rights to express their opinions,” Rob Munoz, an Amazon spokesperson, told CNN in a statement Tuesday.

    In response to employee concerns about the return to office, Munoz said the company has “had a great few weeks with more employees in the office.”

    “There’s been good energy on campus and in urban cores like Seattle where we have a large presence. We’ve heard this from lots of employees and the businesses that surround our offices,” Munoz said. “As it pertains to the specific topics this group of employees is raising, we’ve explained our thinking in different forums over the past few months and will continue to do so.”


  • Microsoft leaps into the AI regulation debate, calling for a new US agency and executive order | CNN Business


    (CNN) —

    Microsoft joined a sprawling global debate on the regulation of artificial intelligence Thursday, echoing calls for a new federal agency to control the technology’s development and urging the Biden administration to approve new restrictions on how the US government uses AI tools.

    In a speech in Washington attended by multiple members of Congress and civil society groups, Microsoft President Brad Smith described AI regulation as the challenge of the 21st century, outlining a five-point plan for how democratic nations could address the risks of AI while promoting a liberal vision for the technology that could rival competing efforts from countries such as China.

    The remarks highlight how one of the largest companies in the AI industry hopes to influence the fast-moving push by governments, particularly in Europe and the United States, to rein in AI before it causes major disruptions to society and the economy.

    In a roughly hour-long appearance that was equal parts product pitch and policy proposal, Smith compared AI to the printing press and described how it could streamline policymaking and lawmakers’ constituent outreach, before calling for “the rule of law” to govern AI at every part of its lifecycle and supply chain.

    Regulations should apply to everything from the data centers that train large language models to the end users such as banks, hospitals and others that may apply the technology toward making life-altering decisions, Smith said.

    For decades, “the rule of law and a commitment to democracy has kept technology in its proper place,” Smith said. “We’ve done it before; we can do it again.”

    In his remarks, Smith joined calls made last week by OpenAI — the company behind ChatGPT and that Microsoft has invested billions in — for the creation of a new government regulator that can oversee a licensing system for cutting-edge AI development, combined with testing and safety standards as well as government-mandated disclosure rules.

    Whether a new federal regulator is needed to police AI is quickly emerging as a focal point of the debate in Washington; opponents such as IBM have argued, including in an op-ed Thursday, that AI regulation should be baked into every existing federal agency because of their understanding of the sectors they oversee and how AI may be most likely to transform them.

    Smith also called for President Joe Biden to develop and sign an executive order requiring federal agencies that procure AI tools to implement a risk management framework developed and published this year by the National Institute of Standards and Technology. That framework, which Congress first ordered with legislation in 2020, covers ways that companies can use AI responsibly and ethically.

    Such an order would leverage the US government’s immense purchasing power to shape the AI industry and encourage the voluntary adoption of best practices, Smith said.

    Microsoft itself plans to implement the NIST framework “across all of our services,” Smith added, a commitment he described as the direct outgrowth of a recent White House meeting with AI CEOs in Washington. Smith also pledged to publish an annual AI transparency report.

    As part of Microsoft’s proposal, Smith said any new rules for AI should include revamped export controls tailor-made for the AI age to prevent the technology from being abused by sanctioned entities.

    And, he said, the government should mandate redundant AI circuit breakers that would allow algorithms to be shut off by critical infrastructure providers or from within the data centers they depend on.

    Smith’s remarks, and a related policy paper, come a week after Google released its own proposals calling for global cooperation and common standards for artificial intelligence.

    “AI is too important not to regulate, and too important not to regulate well,” Kent Walker, Google’s president of global affairs, said in a blog post unveiling the company’s plan.


  • Europe is leading the race to regulate AI. Here’s what you need to know | CNN Business


    London (CNN) —

    The European Union took a major step Wednesday toward setting rules — the first in the world — on how companies can use artificial intelligence.

    It’s a bold move that Brussels hopes will pave the way for global standards for a technology used in everything from chatbots such as OpenAI’s ChatGPT to surgical procedures and fraud detection at banks.

    “We have made history today,” Brando Benifei, a member of the European Parliament working on the EU AI Act, told journalists.

    Lawmakers have agreed on a draft version of the Act, which will now be negotiated with the Council of the European Union and EU member states before becoming law.

    “While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” Benifei added.

    Hundreds of top AI scientists and researchers warned last month that the technology posed an extinction risk to humanity, and several prominent figures — including Microsoft President Brad Smith and OpenAI CEO Sam Altman — have called for greater regulation of the technology.

    At the Yale CEO Summit this week, more than 40% of business leaders — including Walmart chief Doug McMillon and Coca-Cola (KO) CEO James Quincey — said AI had the potential to destroy humanity five to 10 years from now.

    Against that backdrop, the EU AI Act seeks to “promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects.”

    Here are the key takeaways.

    Once approved, the Act will apply to anyone who develops and deploys AI systems in the EU, including companies located outside the bloc.

    The extent of regulation depends on the risks created by a particular application, from minimal to “unacceptable.”

    Systems that fall into the latter category are banned outright. These include real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those in China, which assign people a “health score” based on their behavior.

    The legislation also sets tight restrictions on “high-risk” AI applications, which are those that threaten “significant harm to people’s health, safety, fundamental rights or the environment.”

    These include systems used to influence voters in an election, as well as social media platforms with more than 45 million users that recommend content to their users — a list that would include Facebook, Twitter and Instagram.

    The Act also outlines transparency requirements for AI systems.

    For instance, systems such as ChatGPT would have to disclose that their content was AI-generated, distinguish deep-fake images from real ones and provide safeguards against the generation of illegal content.

    Detailed summaries of the copyrighted data used to train these AI systems would also have to be published.

    AI systems with minimal or no risk, such as spam filters, fall largely outside of the rules.

    Most AI systems will likely fall into the high-risk or prohibited categories, leaving their owners exposed to potentially enormous fines if they fall foul of the regulations, according to Racheal Muldoon, a barrister (litigator) at London law firm Maitland Chambers.

    Engaging in prohibited AI practices could lead to a fine of up to €40 million ($43 million) or an amount equal to up to 7% of a company’s worldwide annual turnover, whichever is higher.

    That goes much further than Europe’s signature data privacy law, the General Data Protection Regulation, under which Meta was hit with a €1.2 billion ($1.3 billion) fine last month. GDPR sets fines of up to €10 million ($10.8 million), or up to 2% of a firm’s global turnover.
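    To make the “whichever is higher” mechanics concrete, here is a minimal Python sketch using the figures quoted above. The function name and the example turnover are illustrative assumptions, not anything taken from the draft law, and this is not legal guidance:

    ```python
    def max_fine(turnover_eur: float, flat_cap_eur: float, pct_of_turnover: float) -> float:
        """Statutory maximum fine: the higher of a flat cap and a
        percentage of worldwide annual turnover."""
        return max(flat_cap_eur, pct_of_turnover * turnover_eur)

    # Hypothetical company with €100 billion in worldwide annual turnover:
    turnover = 100e9

    # Draft AI Act ceiling: €40 million or 7% of turnover, whichever is higher
    ai_act_cap = max_fine(turnover, 40e6, 0.07)   # €7 billion

    # GDPR tier cited above: €10 million or 2% of turnover
    gdpr_cap = max_fine(turnover, 10e6, 0.02)     # €2 billion
    ```

    For a small provider with, say, €1 million in turnover, the flat €40 million cap dominates instead — one reason the Act’s “proportionate” treatment of small-scale providers matters.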

    Fines under the AI Act serve as a “war cry from the legislators to say, ‘take this seriously’,” Muldoon said.

    At the same time, penalties would be “proportionate” and consider the market position of small-scale providers, suggesting there could be some leniency for start-ups.

    The Act also requires EU member states to establish at least one regulatory “sandbox” to test AI systems before they are deployed.

    “The one thing that we wanted to achieve with this text is balance,” Dragoș Tudorache, a member of the European Parliament, told journalists. The Act protects citizens while also “promoting innovation, not hindering creativity, and deployment and development of AI in Europe,” he added.

    The Act gives citizens the right to file complaints against providers of AI systems and makes a provision for an EU AI Office to monitor enforcement of the legislation. It also requires member states to designate national supervisory authorities for AI.

    Microsoft (MSFT) — which, together with Google, is at the forefront of AI development globally — welcomed progress on the Act but said it looked forward to “further refinement.”

    “We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI,” a Microsoft spokesperson said in a statement.

    IBM (IBM), meanwhile, called on EU policymakers to take a “risk-based approach” and suggested four “key improvements” to the draft Act, including further clarity around high-risk AI “so that only truly high-risk use cases are captured.”

    The Act may not come into force until 2026, according to Muldoon, who said revisions were likely, given how rapidly AI was advancing. The legislation has already gone through several updates since drafting began in 2021.

    “The law will expand in scope as the technology develops,” Muldoon said.


  • Why there won’t be a backlash against the Supreme Court this time | CNN Politics


    (CNN) —

    The Supreme Court handed down several key rulings this past week that dismayed liberals. Chief among them was the court’s decision to bar colleges and universities from using race or ethnicity as a specific factor in admissions. The court also found that President Joe Biden’s student debt forgiveness plan was unconstitutional and that a Colorado web designer could refuse to create websites that celebrate same-sex weddings over religious objections.

    Unlike last year, when the Supreme Court greatly upset liberals by overturning Roe v. Wade, this year’s big rulings by the justices are unlikely to spark a major backlash from the public at large.

    This is well reflected in the public polling. Roe v. Wade, the 1973 decision that legalized abortion nationwide, had become massively popular.

    Right before the decision to overturn Roe leaked in May 2022, a Fox News poll found that 63% of registered voters were opposed to such a move while 27% supported it. An ABC News/Washington Post poll put the split at 54% wanting the court to uphold Roe and 28% wanting the decision overturned.

    This majority of Americans who wanted abortion to be legal nationally have maintained their stance since the Supreme Court officially struck down Roe in June 2022. Since that time, abortion supporters have won every related measure placed on the ballot across the country – from deep-blue states like California to ruby-red ones like Kentucky.

    California is an important state to note because voters there faced a 2020 ballot measure to consider the use of race, sex or ethnicity in government institutions (such as education). A clear majority, 57%, voted against allowing state and local entities to consider such factors in public education, employment and contracting decisions.

    When a state that voted for Biden by nearly 30 points is against affirmative action, it shouldn’t be surprising that the nation as a whole is.

    A Pew Research Center poll released last month found that 50% of Americans disapproved of certain colleges and universities taking race and ethnicity into account in admissions decisions to increase diversity. Only 33% approved of the practice.

    This Pew poll is no outlier. An ABC News/Ipsos poll conducted after the court decided its case showed that 52% of Americans approved of the decision, while 32% were opposed.

    Some polling before the ruling had shown even more opposition: 70% of Americans in a recent CBS News/YouGov survey indicated that the Supreme Court should not allow colleges to consider race and ethnicity in admissions.

    But perhaps what’s most interesting isn’t how many people are for or against considering race in college admissions. Rather, it’s how many people simply didn’t care enough to pay close attention to the affirmative action case before the Supreme Court.

    When explicitly given the option, a majority (55%) said in a May Marquette University Law School poll that they hadn’t heard enough to form an opinion about the case. (Those who had heard enough were against allowing colleges to use race in admissions.)

    This is quite different from March 2022, when just 30% of Americans hadn’t heard enough to form an opinion about the court potentially overturning Roe v. Wade, when asked the same question by Marquette but about the abortion case. (A plurality of those who had heard enough didn’t want the court to overturn Roe.)

    It’s hard for an issue to galvanize voters when they aren’t paying attention to it.

    The same holds true for Biden’s student loan forgiveness plan that the court blocked. A USA Today/Ipsos poll from April indicated that 52% of Americans were familiar with the case and a mere 16% were very familiar with it. (Those who had student loans were more familiar at 71%, though that’s a fairly low percentage for something that could affect them directly.)

    Possibly because of that low familiarity, the percentage of Americans who favor or oppose canceling certain student debt differs greatly depending on how the question is worded. When Marquette didn’t mention Biden or the government specifically in its May poll, a majority (63%) said they favored forgiveness of up to $20,000. It was a much lower 47% in the Ipsos poll.

    Surveys that did identify the proposal as Biden’s plan tend to be in the same ballpark, with a split public and a sizable percentage unsure.

    The ABC News/Ipsos poll showed that 45% approved of the court striking down Biden’s student debt plan, with 40% disapproving. About a sixth (16%) of the public was undecided.

    This jibes with polling before the court’s decision was announced. An NBC News poll from last year showed that 43% said Biden’s plan was a good idea, compared with 44% who said it was a bad idea. Just over 10% had no opinion.

    The USA Today/Ipsos survey found that 43% of Americans wanted the Supreme Court to allow the government’s student loan forgiveness plan to move forward, while 40% did not. Another 17% had no opinion.

    (I should point out that those with student debt were more likely to want government forgiveness in all these surveys, though about 80% of Americans don’t have student loan debt.)

    The public was similarly split about the court ruling in favor of the Colorado web designer who refuses to make wedding websites for same-sex couples over religious objections. According to the ABC News/Ipsos poll, 43% of Americans agreed with the court’s decision, 42% disagreed and 14% were undecided.

    There was limited polling on this case before the ruling, though none of it indicated massive opposition. A majority (60%) in a Pew poll that specifically mentioned “wedding websites” and “same-sex marriages” indicated they believed business owners should be allowed to refuse services if it violated their religious or personal beliefs.

    The polling on Roe v. Wade didn’t look anything like this last year. There were no close splits in opinion. People were consistently against overturning Roe, and they cared a lot about it. This led to a historically strong performance for the party in the White House during the 2022 midterm elections and a major backlash against the Supreme Court.

    The current polling on affirmative action in college admissions, Biden’s student loan forgiveness plan and allowing people to opt out of certain services to married LGBTQ couples if they believe it goes against their religion suggests the court’s opinions on those issues aren’t likely to have a similar impact.

    This story has been updated with additional information.


  • ‘It almost doubled our workload’: AI is supposed to make jobs easier. These workers disagree | CNN Business


    (CNN) —

    A new crop of artificial intelligence tools carries the promise of streamlining tasks, improving efficiency and boosting productivity in the workplace. But that hasn’t been Neil Clarke’s experience so far.

    Clarke, an editor and publisher, said he recently had to temporarily shutter the online submission form for his science fiction and fantasy magazine, Clarkesworld, after his team was inundated with a deluge of “consistently bad” AI-generated submissions.

    “They’re some of the worst stories we’ve seen, actually,” Clarke said of the hundreds of pieces of AI-produced content he and his team of humans now must manually parse through. “But it’s more of the problem of volume, not quality. The quantity is burying us.”

    “It almost doubled our workload,” he added, describing the latest AI tools as “a thorn in our side for the last few months.” Clarke said that he anticipates his team is going to have to close submissions again. “It’s going to reach a point where we can’t handle it.”

    Since ChatGPT launched late last year, many of the tech world’s most prominent figures have waxed poetic about how AI has the potential to boost productivity, help us all work less and create new and better jobs in the future. “In the next few years, the main impact of AI on work will be to help people do their jobs more efficiently,” Microsoft co-founder Bill Gates said in a blog post recently.

    But as is often the case with tech, the long-term impact isn’t always clear or the same across industries and markets. Moreover, the road to a techno-utopia is often bumpy and plagued with unintended consequences, whether it’s lawyers fined for submitting fake court citations from ChatGPT or a small publication buried under an avalanche of computer-generated submissions.

    Big Tech companies are now rushing to jump on the AI bandwagon, pledging significant investments into new AI-powered tools that promise to streamline work. These tools can help people quickly draft emails, make presentations and summarize large datasets or texts.

    In a recent study, researchers at the Massachusetts Institute of Technology found that access to ChatGPT increased productivity for workers who were assigned tasks like writing cover letters, “delicate” emails and cost-benefit analyses. “I think what our study shows is that this kind of technology has important applications in white collar work. It’s a useful technology. But it’s still too early to tell if it will be good or bad, or how exactly it’s going to cause society to adjust,” Shakked Noy, a PhD student in MIT’s Department of Economics, who co-authored the paper, said in a statement.

    Mathias Cormann, the secretary-general of the Organization for Economic Co-operation and Development, recently said the intergovernmental organization has found that AI can improve some aspects of job quality, but there are tradeoffs.

    “Workers do report, though, that the intensity of their work has increased after the adoption of AI in their workplaces,” Cormann said in public remarks, pointing to the findings of a report released by the organization. The report also found that for non-AI specialists and non-managers, the use of AI had only a “minimal impact on wages so far” – meaning that for the average employee, the work is scaling up, but the pay isn’t.

    Ivana Saula, the research director for the International Association of Machinists and Aerospace Workers, said that workers in her union have said they feel like “guinea pigs” as employers rush to roll out AI-powered tools on the job.

    And it hasn’t always gone smoothly, Saula said. The implementation of these new tech tools has often led to more “residual tasks that a human still needs to do.” This can include picking up additional logistics tasks that a machine simply can’t do, Saula said, adding more time and pressure to a daily work flow.

    The union represents a broad range of workers, including in air transportation, health care, public service, manufacturing and the nuclear industry, Saula said.

    “It’s never just clean cut, where the machine can entirely replace the human,” Saula told CNN. “It can replace certain aspects of what a worker does, but there’s some tasks that are outstanding that get placed on whoever remains.”

    Workers are also “saying that my workload is heavier” after the implementation of new AI tools, Saula said, and “the intensity at which I work is much faster because now it’s being set by the machine.” She added that the feedback they are getting from workers shows how important it is to “actually involve workers in the process of implementation.”

    “Because there’s knowledge on the ground, on the frontlines, that employers need to be aware of,” she said. “And oftentimes, I think there’s disconnects between frontline workers and what happens on shop floors, and upper management, and not to mention CEOs.”

    Perhaps nowhere are the pros and cons of AI for businesses as apparent as in the media industry. These tools offer the promise of accelerating if not automating copywriting, advertising and certain editorial work, but there have already been some notable blunders.

    News outlet CNET had to issue “substantial” corrections earlier this year after experimenting with using an AI tool to write stories. And what was supposed to be a simple AI-written story on Star Wars published by Gizmodo earlier this month similarly required a correction and resulted in employee turmoil. But both outlets have signaled they will still move forward with using the technology to assist in newsrooms.

    Others like Clarke, the publisher, have tried to combat the fallout from the rise of AI by relying on more AI. Clarke said he and his team turned to AI-powered detectors of AI-generated work to deal with the deluge of submissions, but found these tools weren’t helpful because they unreliably produced “false positives and false negatives,” especially for writers whose second language is English.
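
    The unreliability Clarke describes is structural: a detector assigns each text a score and flags anything above a threshold, so moving the threshold only trades one kind of error for the other. A minimal, hypothetical Python sketch of that tradeoff (all scores and thresholds here are invented for illustration; no real detector is modeled):

```python
# Toy illustration of the false-positive / false-negative tradeoff in an
# AI-text detector. Higher score = "looks more AI-generated" to the tool.
human_scores = [0.10, 0.25, 0.40, 0.55, 0.70]  # human authors (0.70: a non-native English writer)
ai_scores = [0.45, 0.60, 0.75, 0.85, 0.95]     # machine-generated submissions

def evaluate(threshold):
    """Flag anything scoring at or above `threshold` as AI-generated."""
    false_positives = sum(s >= threshold for s in human_scores)  # humans wrongly flagged
    false_negatives = sum(s < threshold for s in ai_scores)      # AI texts missed
    return false_positives, false_negatives

for t in (0.4, 0.6, 0.8):
    fp, fn = evaluate(t)
    print(f"threshold={t}: {fp} humans flagged, {fn} AI texts missed")
```

    No threshold drives both counts to zero at once, which mirrors Clarke’s complaint: tightening the filter to catch more machine-written stories also rejects more legitimate authors.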

    “You listen to these AI experts, they go on about how these things are going to do amazing breakthroughs in different fields,” Clarke said. “But those aren’t the fields they’re currently working in.”


  • Senate votes to end Covid-19 emergency, 3 years after initial declaration | CNN Politics




    CNN
     — 

    The Senate on Wednesday passed a bill that would end the national Covid-19 emergency declared by then-President Donald Trump on March 13, 2020.

    The final vote was overwhelmingly bipartisan, 68-23. The joint resolution, which cleared the House earlier this year, now heads to President Joe Biden’s desk.

    The vote comes on the heels of two other successful Republican-led efforts to pass legislation rescinding Biden administration policies.

    A White House official said in a statement to CNN that while the President “strongly opposes” this bill, the administration is already winding down the emergency by May 11, the date previously announced for the end of the authority.

    Still, the official noted, if the Senate passes the measure and it heads to Biden’s desk, “he will sign it, and the administration will continue working with agencies to wind down the national emergency with as much notice as possible to Americans who could potentially be impacted.”

    The White House said in January that Biden “strongly opposes” the GOP resolution to end the Covid-19 emergency, according to its statement of administration policy, but did not threaten a veto.

    While the lack of an explicit veto threat left open the possibility that Biden would sign the measure, his ultimate decision to do so marked another moment in which House Democrats privately voiced frustration that a lack of clarity – or an outright messaging mishap – from the White House left lawmakers in the lurch.

    House Democrats largely voted against the bill when it was brought to the floor in February, with 11 Democrats joining Republicans in support. A separate White House official noted that the Senate vote comes after several weeks during which the Biden administration has had time to accelerate its wind-down efforts – and just over a month before the date it had announced the emergency would end.

    But it also comes after the administration drew blowback from House Democrats after sending what lawmakers viewed as mixed signals over how the president planned to respond to a Republican-led resolution that would block a controversial Washington, DC, crime bill, which opponents criticized as weak on crime. The president ultimately did not veto the measure.

    The measure was able to pass the Senate by a simple majority under the Congressional Review Act, which allows votes to repeal executive branch regulations without having to clear the 60-vote filibuster threshold required for most legislation in the chamber.


  • FTC chair Lina Khan warns AI could ‘turbocharge’ fraud and scams | CNN Business



    Washington
    CNN
     — 

    Artificial intelligence tools such as ChatGPT could lead to a “turbocharging” of consumer harms including fraud and scams, and the US government has substantial authority to crack down on AI-driven consumer harms under existing law, members of the Federal Trade Commission said Tuesday.

    Addressing House lawmakers, FTC chair Lina Khan said the “turbocharging of fraud and scams that could be enabled by these tools are a serious concern.”

    In recent months, a new crop of AI tools has gained attention for the ability to generate convincing emails, stories and essays as well as images, audio and videos. While these tools have the potential to change the way people work and create, some have also raised concerns about how they could be used to deceive by impersonating individuals.

    Even as policymakers across the federal government debate how to promote specific AI rules, citing concerns about possible algorithmic discrimination and privacy issues, companies could still face FTC investigations today under a range of statutes that have been on the books for years, Khan and her fellow commissioners said.

    “Throughout the FTC’s history we have had to adapt our enforcement to changing technology,” said FTC Commissioner Rebecca Slaughter. “Our obligation is to do what we’ve always done, which is to apply the tools we have to these changing technologies … [and] not be scared off by this idea that this is a new, revolutionary technology.”

    FTC Commissioner Alvaro Bedoya said companies cannot escape liability simply by claiming that their algorithms are a black box.

    “Our staff has been consistently saying our unfair and deceptive practices authority applies, our civil rights laws, fair credit, Equal Credit Opportunity Act, those apply,” said Bedoya. “There is law, and companies will need to abide by it.”

    The FTC has previously issued extensive public guidance to AI companies, and the agency last month received a request to investigate OpenAI over claims that the company behind ChatGPT has misled consumers about the tool’s capabilities and limitations.


  • Meta stock jumps after company reports first revenue growth in nearly a year | CNN Business



    New York
    CNN
     — 

    Facebook-parent Meta on Wednesday reported that it grew sales by 3% during the first three months of the year, reversing a trend of three consecutive quarters of revenue declines and far exceeding Wall Street analysts’ expectations.

    Meta shares jumped as much as 12% in after-hours trading following the report, continuing the company’s strong trajectory since CEO Mark Zuckerberg announced that 2023 would be a “year of efficiency.”

    Another bright spot: user growth was relatively strong compared to recent quarters. The number of monthly active people on Meta’s family of apps grew 5% from the prior year to more than 3.8 billion and Facebook daily active users increased 4% to more than 2 billion.

    “We had a good quarter and our community continues to grow,” Zuckerberg said in a statement Wednesday. “We’re also becoming more efficient so we can build better products faster and put ourselves in a stronger position to deliver our long term vision.”

    But Meta has a long hill to climb.

    The company also reported that profits declined by nearly a quarter compared to the same period in the prior year to $5.7 billion. Price per advertisement — an indicator of the health of the company’s core digital ad business — also decreased by 17% from the year prior.

    Meta has been in the midst of a massive restructuring, as it attempts to recover from a perfect storm of heightened competition, lingering recession fears resulting in fewer ad dollars and a multibillion dollar effort to build a future version of the internet it calls the metaverse. Meta said in November it would eliminate 11,000 jobs, the single largest round of cuts in its history. And in March, Zuckerberg announced Meta would lay off another 10,000 employees. All told, the cuts will shrink Meta’s workforce by a quarter.

    Meta took a hit of more than $1 billion related to the restructuring in the March quarter, and said it will realize additional charges of around $500 million related to 2023 layoffs by the end of the year.

    Zuckerberg said on a call with analysts Wednesday that when Meta started its “efficiency work” late last year, “our business wasn’t performing as well as I wanted, but now we’re increasingly doing this work from a position of strength.”

    The company said it expects revenue to grow again in the current quarter compared to the prior year. And it slightly lowered its expectations for full-year expenses, potentially buoying investor optimism.

    “The year of efficiency is off to a stronger than expected start for Meta,” Insider Intelligence principal analyst Debra Aho Williamson said in a statement. But she added that the company “can’t afford to sit still in this environment.”

    Like other tech companies, Meta has recently read investor cues and taken to playing up its focus on artificial intelligence rather than the metaverse. The shift comes as Meta contends with the popularity of AI tools from tech firms like Microsoft and OpenAI.

    In his statement with the results Wednesday, Zuckerberg said: “Our AI work is driving good results across our apps and business.” He added in the call that the company’s AI work includes efforts to build AI chat experiences in WhatsApp and Messenger, as well as visual creation tools for posts on Facebook and Instagram and advertisements.


  • The man behind ChatGPT is about to have his moment on Capitol Hill | CNN Business



    New York
    CNN
     — 

    For a few months in 2017, there were rumors that Sam Altman was planning to run for governor of California. Instead, he kept his day job as one of Silicon Valley’s most influential investors and entrepreneurs.

    But now, Altman is about to make a different kind of political debut.

    Altman, the CEO and co-founder of OpenAI, the artificial intelligence company behind viral chatbot ChatGPT and image generator Dall-E, is set to testify before Congress on Tuesday. His appearance is part of a Senate subcommittee hearing on the risks artificial intelligence poses for society, and what safeguards are needed for the technology.

    House lawmakers on both sides of the aisle are also expected to hold a dinner with Altman on Monday night, according to multiple reports. Dozens of lawmakers are said to be planning to attend, with one Republican lawmaker describing it as part of the process for Congress to assess “the extraordinary potential and unprecedented threat that artificial intelligence presents to humanity.”

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    The hearing and meetings come as ChatGPT has sparked a new arms race over AI. A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts. This week’s hearing may only cement his stature as a central player in AI’s rapid growth – and also add to scrutiny of him and his company.

    Those who know Altman have described him as a brilliant thinker, someone who makes prescient bets and has even been called “a startup Yoda.” In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    “If anyone knows where this is going, it’s Sam,” Brian Chesky, the CEO of Airbnb, wrote in a post about Altman for the latter’s inclusion this year on Time’s list of the 100 most influential people. “But Sam also knows that he doesn’t have all the answers. He often says, ‘What do you think? Maybe I’m wrong?’ Thank God someone with so much power has so much humility.”

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    OpenAI declined to make anyone available for an interview for this story.

    The success of ChatGPT may have brought Altman greater public attention, but he has been a well-known figure in Silicon Valley for years.

    Prior to cofounding OpenAI with Musk in 2015, Altman, a Missouri native, studied computer science at Stanford University, only to drop out to launch Loopt, an app that helped users share their locations with friends and get coupons for nearby businesses.

    In 2005, Loopt was part of the first batch of companies at Y Combinator, a prestigious tech accelerator. Paul Graham, who co-founded Y Combinator, later described Altman as “a very unusual guy.”

    “Within about three minutes of meeting him, I remember thinking ‘Ah, so this is what Bill Gates must have been like when he was 19,’” Graham wrote in a post in 2006.

    Loopt was acquired in 2012 for about $43 million. Two years later, Altman took over from Graham as president of Y Combinator. The position allowed Altman to connect with numerous powerful figures in the tech industry. He remained at the helm of the accelerator until 2019.

    Margaret O’Mara, a tech historian and professor at the University of Washington, told CNN that Altman “has long been admired as a thoughtful, significant guy and [is] in the remarkably small number of powerful people who are kind of at the top of tech and have a lot of sway.”

    During the Trump administration, Altman gained new attention as a vocal critic of the president. It was against that backdrop that he was rumored to be considering a run for California governor.

    Rather than running, however, Altman looked to back candidates who aligned with his values, which include a lower cost of living, clean energy and redirecting 10% of the defense budget to research and development of future technology.

    Altman continues to push for some of these goals through his work in the private sector. He invested in Helion, a fusion research company that inked a deal with Microsoft last week to sell clean energy to the tech giant by 2028.

    Altman has also been a proponent of the idea of a universal basic income and has suggested that AI could one day help fulfill that goal by generating so much wealth it could be redistributed back to the public.

    As Graham told The New Yorker about Altman in 2016, “I think his goal is to make the whole future.”

    When launching OpenAI, Musk and Altman’s original mission was to get ahead of the fear that AI could harm people and society.

    “We discussed what is the best thing we can do to ensure the future is good?” Musk told the New York Times about a conversation with Altman and others before launching the company. “We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing A.I. in a way that is safe and is beneficial to humanity.”

    In an interview at the launch of OpenAI, Altman explained the company as his way of trying to steer the path of AI technology. “I sleep better knowing I can have some influence now,” he said.

    If there’s one thing AI enthusiasts and critics can agree on right now, it may be that Altman clearly has succeeded in having some influence over the rapidly evolving technology.

    Less than six months after the release of ChatGPT, it has become a household name, almost synonymous with AI itself. CEOs are using it to draft emails. Realtors are using it to write listings and draft legal documents. The tool has passed exams from law and business schools – and been used to help some students cheat. And OpenAI recently released a more powerful version of the technology underpinning ChatGPT.

    Tech giants like Google and Facebook are now racing to catch up. Similar generative AI technology is quickly finding its way into productivity and search tools used by billions of people.

    A future that once seemed very far off now feels right around the corner, whether society is ready for it or not. Altman himself has professed not to be sure about how it will turn out.

    O’Mara said she believes Altman fits into “the techno-optimist school of thought that has been dominant in the Valley for a very long time,” which she describes as “the idea that we can devise technology that can indeed make the world a better place.”

    While Altman’s cautious remarks about AI may sound at odds with that way of thinking, O’Mara argues it may be an “extension” of it. In essence, she said, it’s related to “the idea that technology is transformative and can be transformative in a positive way but also has so much capacity to do so much that it actually could be dangerous.”

    And if AI should somehow help bring about the end of society as we know it, Altman may be more prepared than most to adapt.

    “I prep for survival,” he said in a 2016 profile of him in the New Yorker, noting several possible disaster scenarios, including “A.I. that attacks us.”

    “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”


  • How the technology behind ChatGPT could make mind-reading a reality | CNN Business




    CNN
     — 

    On a recent Sunday morning, I found myself in a pair of ill-fitting scrubs, lying flat on my back in the claustrophobic confines of an fMRI machine at a research facility in Austin, Texas. “The things I do for television,” I thought.

    Anyone who has had an MRI or fMRI scan will tell you how noisy it is — electric currents swirl, creating a powerful magnetic field that produces detailed scans of your brain. On this occasion, however, I could barely hear the loud cranking of the mechanical magnets: I was given a pair of specialized earphones that began playing segments from The Wizard of Oz audiobook.

    Why?

    Neuroscientists at the University of Texas at Austin have figured out a way to translate scans of brain activity into words using the very same artificial intelligence technology that powers the groundbreaking chatbot ChatGPT.

    The breakthrough could revolutionize how people who have lost the ability to speak can communicate. It’s just one pioneering application of AI developed in recent months as the technology continues to advance and looks set to touch every part of our lives and our society.

    “So, we don’t like to use the term mind reading,” Alexander Huth, assistant professor of neuroscience and computer science at the University of Texas at Austin, told me. “We think it conjures up things that we’re actually not capable of.”

    Huth volunteered to be a research subject for this study, spending upward of 20 hours in the confines of an fMRI machine listening to audio clips while the machine snapped detailed pictures of his brain.

    An artificial intelligence model analyzed his brain and the audio he was listening to and, over time, was eventually able to predict the words he was hearing just by watching his brain.

    The researchers used GPT-1, the first language model from San Francisco-based startup OpenAI, which was developed using a massive database of books and websites. By analyzing all this data, the model learned how sentences are constructed — essentially how humans talk and think.

    The researchers trained the AI to analyze the activity of Huth and other volunteers’ brains while they listened to specific words. Eventually the AI learned enough that it could predict what Huth and others were listening to or watching just by monitoring their brain activity.
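
    The decoding approach described above can be sketched in miniature. In this hypothetical toy, random vectors stand in for both a language model’s representation of a word and a fitted encoding model mapping those features to voxel activity; the names `text_features`, `predict_brain` and `decode` are invented for illustration and do not come from the study:

```python
import random

N_FEATURES, N_VOXELS = 8, 50

# Stand-in for a fitted encoding model: weights mapping text features to voxels.
_rng = random.Random(0)
W = [[_rng.gauss(0, 1) for _ in range(N_VOXELS)] for _ in range(N_FEATURES)]

def text_features(word):
    """Toy stand-in for a language model's representation of a word."""
    r = random.Random(word)  # deterministic features per word
    return [r.gauss(0, 1) for _ in range(N_FEATURES)]

def predict_brain(word):
    """Predicted voxel activity if the subject hears `word`."""
    f = text_features(word)
    return [sum(f[i] * W[i][v] for i in range(N_FEATURES)) for v in range(N_VOXELS)]

def decode(observed, candidates):
    """Return the candidate whose predicted response is closest to the scan."""
    def dist(word):
        p = predict_brain(word)
        return sum((p[v] - observed[v]) ** 2 for v in range(N_VOXELS))
    return min(candidates, key=dist)

# Simulate a noisy scan of a subject hearing "road", then decode it.
noise = random.Random(1)
observed = [x + 0.1 * noise.gauss(0, 1) for x in predict_brain("road")]
print(decode(observed, ["brick", "road", "witch", "emerald"]))  # recovers "road"
```

    The real system works on fMRI time series and scores whole candidate word sequences rather than single words, but the core idea is the same: decoding is run as a search over what the trained encoding model predicts the brain should look like.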

    I spent less than a half-hour in the machine and, as expected, the AI wasn’t able to decode that I had been listening to a portion of The Wizard of Oz audiobook that described Dorothy making her way along the yellow brick road.

    Huth listened to the same audio, but because the AI model had been trained on his brain activity, it was able to accurately predict parts of the audio he was listening to.

    While the technology shows great promise, it is still in its infancy, and its limitations might be a source of relief to some. AI can’t easily read our minds, yet.

    “The real potential application of this is in helping people who are unable to communicate,” Huth explained.

    He and other researchers at UT Austin believe the innovative technology could be used in the future by people with “locked-in” syndrome, stroke victims and others whose brains are functioning but are unable to speak.

    “Ours is the first demonstration that we can get this level of accuracy without brain surgery. So we think that this is kind of step one along this road to actually helping people who are unable to speak without them needing to get neurosurgery,” he said.

    While breakthrough medical advances are no doubt good news and potentially life-changing for patients struggling with debilitating ailments, the technology also raises questions about how it could be applied in controversial settings.

    Could it be used to extract a confession from a prisoner? Or to expose our deepest, darkest secrets?

    The short answer, Huth and his colleagues say, is no — not at the moment.

    For starters, brain scans need to occur in an fMRI machine, the AI technology needs to be trained on an individual’s brain for many hours, and, according to the Texas researchers, subjects need to give their consent. If a person actively resists listening to audio or thinks about something else the brain scans will not be a success.

    “We think that everyone’s brain data should be kept private,” said Jerry Tang, the lead author on a paper published earlier this month detailing his team’s findings. “Our brains are kind of one of the final frontiers of our privacy.”

    Tang explained: “Obviously there are concerns that brain decoding technology could be used in dangerous ways.” Brain decoding is the term the researchers prefer to use instead of mind reading.

    “I feel like mind reading conjures up this idea of getting at the little thoughts that you don’t want to let slip, little like reactions to things. And I don’t think there’s any suggestion that we can really do that with this kind of approach,” Huth explained. “What we can get is the big ideas that you’re thinking about. The story that somebody is telling you, if you’re trying to tell a story inside your head, we can kind of get at that as well.”

    Last week, the makers of generative AI systems, including OpenAI CEO Sam Altman, descended on Capitol Hill to testify before a Senate committee about lawmakers’ concerns over the risks posed by the powerful technology. Altman warned that the development of AI without guardrails could “cause significant harm to the world” and urged lawmakers to implement regulations to address those concerns.

    Echoing the AI warning, Tang told CNN that lawmakers need to take “mental privacy” seriously to protect “brain data” — our thoughts — two of the more dystopian terms I’ve heard in the era of AI.

    While the technology at the moment only works in very limited cases, that might not always be the case.

    “It’s important not to get a false sense of security and think that things will be this way forever,” Tang warned. “Technology can improve and that could change how well we can decode and change whether decoders require a person’s cooperation.”


  • Google hit with lawsuit alleging it stole data from millions of users to train its AI tools | CNN Business




    CNN
     — 

    Google was hit with a wide-ranging lawsuit on Tuesday alleging the tech giant scraped data from millions of users without their consent and violated copyright laws in order to train and develop its artificial intelligence products.

    The proposed class action suit against Google, its parent company Alphabet and Google’s AI subsidiary DeepMind was filed in a federal court in California on Tuesday by Clarkson Law Firm. The firm previously filed a similar suit against ChatGPT-maker OpenAI last month. (OpenAI did not respond to an earlier request for comment on that suit.)

    The complaint alleges that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans” and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken “virtually the entirety of our digital footprint,” including “creative and copywritten works” to build its AI products.

    Halimah DeLaine Prado, Google’s general counsel, called the claims in the suit “baseless” in a statement to CNN. “We’ve been clear for years that we use data from public sources — like information published to the open web and public datasets — to train the AI models behind services like Google Translate, responsibly and in line with our AI Principles,” DeLaine Prado said.

    “American law supports using public information to create new beneficial uses, and we look forward to refuting these baseless claims,” the statement added.

    Alphabet and DeepMind did not immediately respond to a request for comment.

    The complaint points to a recent update to Google’s privacy policy that explicitly states the company may use publicly accessible information to train its AI models and tools such as Bard.

    In response to an earlier Verge report on the update, the company said its policy “has long been transparent” about this practice and “this latest update simply clarifies that newer services like Bard are also included.”

    The lawsuit comes as a new crop of AI tools has gained tremendous attention in recent months for the ability to generate written work and images in response to user prompts. The large language models underpinning this new technology are able to do this by training on vast troves of online data.

    In the process, however, companies are also drawing mounting legal scrutiny over copyright issues from works swept up in these data sets, as well as their apparent use of personal and possibly sensitive data from everyday users, including data from children, according to the Google lawsuit.

    “Google needs to understand that ‘publicly available’ has never meant free to use for any purpose,” Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. “Our personal information and our data is our property, and it’s valuable, and nobody has the right to just take it and use it for any purpose.”

    The suit is seeking injunctive relief in the form of a temporary freeze on commercial access to and commercial development of Google’s generative AI tools like Bard. It is also seeking unspecified damages and payments as financial compensation to people whose data was allegedly misappropriated by Google. The firm says it has lined up eight plaintiffs, including a minor.

    Giordano contrasted the benefits and alleged harms of how Google typically indexes online data to support its core search engine with the new allegations of it scraping data to train AI tools.

    With its search engine, he said, Google can “serve up an attributed link to your work that can actually drive somebody to purchase it or engage with it.” Data scraping to train AI tools, however, is creating “an alternative version of the work that radically alters the incentives for anybody to need to purchase the work,” Giordano added.

    While some internet users may have grown accustomed to their digital data being collected and used for search results or targeted advertising, the same may not be true for AI training. “People could not have imagined their information would be used this way,” Giordano said.

    Ryan Clarkson, a partner at the law firm, said Google needs to “create an opportunity for folks to opt out” of having their data used for training AI while still maintaining their ability to use the internet for their everyday needs.


  • Google, Microsoft, OpenAI and Anthropic announce industry group to promote safe AI development | CNN Business


    Some of the world’s top artificial intelligence companies are launching a new industry body to work together — and with policymakers and researchers — on ways to regulate the development of bleeding-edge AI.

    The new organization, known as the Frontier Model Forum, was announced Wednesday by Google, Microsoft, OpenAI and Anthropic. The companies said the forum’s mission would be to develop best practices for AI safety, promote research into AI risks, and publicly share information with governments and civil society.

    Wednesday’s announcement reflects how AI developers are coalescing around voluntary guardrails for the technology ahead of an expected push this fall by US and European Union lawmakers to craft binding legislation for the industry.

    News of the forum comes after the four AI firms, along with several others including Amazon and Meta, pledged to the Biden administration to subject their AI systems to third-party testing before releasing them to the public and to clearly label AI-generated content.

    The industry-led forum, which is open to other companies designing the most advanced AI models, plans to make its technical evaluations and benchmarks available through a publicly accessible library, the companies said in a joint statement.

    “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft president Brad Smith. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

    The announcement comes a day after AI experts such as Anthropic CEO Dario Amodei and AI pioneer Yoshua Bengio warned lawmakers of potentially serious, even “catastrophic” societal risks stemming from unrestrained AI development.

    “In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology,” Amodei said in his written testimony.

    Within two to three years, Amodei said, AI could become powerful enough to help malicious actors build functional biological weapons, where today those actors may lack the specialized knowledge needed to complete the process.

    The best way to prevent major harms, Bengio told a Senate panel, is to restrict access to AI systems; develop standard and effective testing regimes to ensure those systems reflect shared societal values; limit how much of the world any single AI system can truly understand; and constrain the impact that AI systems can have on the real world.

    The European Union is moving toward legislation that could be finalized as early as this year that would ban the use of AI for predictive policing and limit its use in lower-risk scenarios.

    US lawmakers are much further behind. While a number of AI-related bills have already been introduced in Congress, much of the driving force for a comprehensive AI bill rests with Senate Majority Leader Chuck Schumer, who has prioritized getting members up to speed on the basics of the industry through a series of briefings this summer.

    Starting in September, Schumer has said, the Senate will hold a series of nine additional panels for members to learn about how AI could affect jobs, national security and intellectual property.


  • Here’s what you can do if you lose Medicaid coverage | CNN Politics


    Though millions of Americans are expected to be kicked off Medicaid in the coming months, they don’t all have to be left uninsured.

    But it could take some work to regain health coverage.

    “For a lot of people, this can be a very disruptive period of time,” said Sabrina Corlette, co-director of the Center on Health Insurance Reforms at Georgetown University. “There is a significant time and paperwork burden being placed on families – a lot of them very low income, a lot of them medically vulnerable.”

    States are now free to terminate the Medicaid coverage of residents they deem ineligible. States had been barred from involuntarily removing anyone for the past three years as part of an early congressional Covid-19 pandemic relief package, causing enrollment in Medicaid and the Children’s Health Insurance Program to balloon to more than 92 million people.

    Of the roughly 15 million people who could lose Medicaid coverage over the next 14 months, about 8.2 million would no longer qualify, according to a Department of Health and Human Services analysis released in August.

    Some 2.7 million of these folks would qualify for enhanced federal subsidies for Affordable Care Act policies that could bring their monthly premiums to as low as $0.

    Another 5 million are expected to secure other coverage, mainly through employers.

    Some 6.8 million people, however, are expected to be disenrolled even though they remain eligible for Medicaid.

    Check out Obamacare policies: Folks who lose their Medicaid coverage can shop for health insurance plans on the Affordable Care Act exchanges.

    Those whose annual incomes remain below 150% of the federal poverty level – $20,385 for a single person and $41,625 for a family of four in 2023 – can obtain enhanced federal assistance to lower their premiums to as little as $0 a month. That beefed-up subsidy is in place through 2025.
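    As a rough illustration of the income cutoff described above — not official eligibility logic, since real determinations depend on state rules and household composition — the 150%-of-poverty check can be sketched like this, using only the two 2023 figures cited in the article:

    ```python
    # Illustrative sketch only. FPL_150 holds the article's 2023 figures for
    # 150% of the federal poverty level, keyed by household size; other sizes
    # have their own limits that are not cited here.
    FPL_150 = {1: 20_385, 4: 41_625}

    def qualifies_for_zero_premium(annual_income: float, household_size: int) -> bool:
        """True if income is at or below 150% FPL for this household size."""
        limit = FPL_150.get(household_size)
        if limit is None:
            raise ValueError("No 150% FPL figure cited for this household size")
        return annual_income <= limit
    ```

    For example, a single person earning $19,000 would fall under the $20,385 limit, while a family of four earning $45,000 would exceed its $41,625 limit.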

    Many people with higher incomes can find subsidized policies for $10 or less.

    State Medicaid agencies are tasked with easing residents’ transfer from Medicaid to the Obamacare marketplaces, but the smoothness of the process will vary greatly by state. Once someone is determined to no longer qualify for Medicaid, the agency must assess his or her eligibility for Affordable Care Act coverage and transfer the resident’s information to the exchange.

    Some states that run their own Obamacare exchanges are taking extra steps to ensure their residents remain covered. Rhode Island, for instance, is automatically enrolling certain people in marketplace coverage. It’s also paying the first two months of premiums for some residents who actively select policies.

    Those who lose Medicaid coverage and live in the 33 states covered by the federal marketplace, healthcare.gov, can apply for Affordable Care Act policies through a special enrollment period that runs through July 2024. State-based exchanges have their own deadlines, with some mirroring the federal exchange and others providing much shorter windows.

    Navigators and insurance brokers can help consumers select plans.

    Historically, very few people who lose Medicaid coverage wind up in Obamacare plans. About 4% of adults who were terminated from Medicaid enrolled in exchange policies in 2018, according to the Medicaid and CHIP Payment and Access Commission.

    The coverage differs too. Those who switch to the marketplace may have to find other doctors who are in their insurers’ networks and may face out-of-pocket costs.

    Consider job-based coverage: A number of people who are terminated from Medicaid may already be covered by their employers, particularly those who started new jobs during the pandemic. Others have the option of obtaining coverage through work, though it will almost certainly be more expensive than Medicaid since it will likely entail premiums, deductibles and copays.

    Workers may find they can afford coverage for themselves but not for their families. If the premiums for family policies cost more than 9.12% of household income, spouses and children may be able to get subsidized coverage on the Affordable Care Act exchanges.
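    The affordability test described above can be sketched as a simple comparison — this is only an illustration of the 9.12% threshold cited for 2023, and the function name is ours, not from any official source:

    ```python
    # Illustrative sketch only: if the employer's family premium exceeds
    # 9.12% of household income (the 2023 threshold cited above), spouses
    # and children may qualify for subsidized ACA marketplace coverage.
    AFFORDABILITY_THRESHOLD = 0.0912

    def family_coverage_unaffordable(annual_family_premium: float,
                                     household_income: float) -> bool:
        """True if the family premium exceeds 9.12% of household income."""
        return annual_family_premium > AFFORDABILITY_THRESHOLD * household_income
    ```

    For a household earning $50,000, the cutoff works out to $4,560 a year in premiums: a $6,000 family premium would exceed it, while a $3,000 premium would not.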

    Employees should contact their human resources departments to sign up. Typically, they’ll have to enroll within 60 days of losing Medicaid, but those who are terminated from the program between now and July 10 will have until early September to sign up.

    See if you or your children remain eligible for Medicaid: Millions of Americans who still qualify for Medicaid may lose coverage for procedural reasons. For example, they may have moved so they don’t receive the redetermination notices. Or they may not return the necessary paperwork to prove their eligibility.

    So it’s crucial that folks update their contact information with their state agencies and reply to the letters they receive about renewing their Medicaid eligibility.

    “When you get that packet in the mail, respond to it promptly,” Corlette said.

    Those who are dropped have 90 days to submit their renewal paperwork to their state agency, which is required to reinstate them if they are found eligible. Beyond that time period, people may reapply. In most states, your coverage can be made retroactive for up to three months if you were eligible and received Medicaid-covered services.

    Parents who no longer qualify and are terminated should check if their children remain eligible. As many as 6.7 million kids are at risk of losing Medicaid coverage, according to Georgetown’s Center for Children and Families.

    Nearly three-quarters of the children projected to be dropped will remain eligible for Medicaid or CHIP but will lose coverage mainly because of administrative issues. Black and Latino children and families are more likely to be erroneously terminated, according to the center.


  • Republican-controlled states target college students’ voting power ahead of high-stakes 2024 elections | CNN Politics


    Republican-controlled legislatures around the country have moved to erect new barriers to voting for high school and college students in what state lawmakers describe as an effort to clamp down on potential voter fraud. Critics call it a blatant attempt to suppress the youth vote as young people increasingly bolster Democratic candidates and liberal causes at the ballot box.

    As turnout among young voters grows, new proposals that change photo ID requirements or impose other limits have emerged.

    Laws enacted in Idaho this year, for instance, prohibit the use of student IDs to register to vote or cast ballots. A new law in Ohio, in effect for the first time in Tuesday’s primary elections, requires voters to present government-authorized photo ID at the polls, but student IDs are not included. Identification issued by universities has not traditionally been accepted to vote in the Buckeye State, but the new law eliminates the use of utility bills, bank statements and other documents that students have used before.

    A proposal in Texas would eliminate all campus polling places in the state. Meanwhile, officials in Montana – where Democrat Jon Tester is seeking a fourth term in one of 2024’s highest-profile Senate contests – have appealed a court decision striking down additional document requirements for those using student IDs to vote.

    And voting rights advocates say a longstanding statute in Georgia, which bars the use of student IDs from private universities, has made it more difficult for students at several schools – including Spelman and Morehouse, storied HBCUs in Atlanta – to participate in Georgia’s competitive US Senate and presidential elections.

    “Republican legislatures … are pretty transparently trying to keep left-leaning groups from voting,” said Charlotte Hill, interim director of the Democracy Policy Initiative at UC-Berkeley’s Goldman School of Public Policy. Rather than trying to sway young voters, lawmakers seem willing “to shrink the eligible electorate,” she added.

    Proponents say the changes are needed to protect against voter fraud and shore up public confidence in elections – battered by widespread, and false, claims of a stolen presidency in 2020. And they contend that the forms of identification provided by secondary schools and colleges vary too widely to serve as a reliable way to establish a voter’s identity and residency.

    “They are issued by colleges, universities, public and private high schools, and some have address and pictures, while some do not,” Idaho state Sen. Scott Herndon, a Republican and one of the sponsors of the new law, said in an email to CNN.

    During a legislative hearing earlier this year, Herndon said his goal was straightforward: “Make sure that people who are voting at the polls are who they say they are.”

    The efforts to clamp down on student IDs and campus voting come against a backdrop of gains for Democrats among this demographic group. Exit polls analyzed by the Brookings Institution found that people ages 18 to 29 – especially young women – made a pronounced shift toward Democrats in last year’s midterm elections, helping to blunt an expected “red wave” for Republicans.

    And voter registration among 18-24 year-olds increased in several states last year over 2018 levels – including Kansas and Michigan, where voters decided on ballot measures on abortion, following the US Supreme Court decision to overturn Roe v. Wade, according to data from Tufts University’s nonpartisan Center for Information and Research on Civic Learning and Engagement, or CIRCLE. CIRCLE conducts research into youth civic engagement.

    An analysis by The Milwaukee Journal Sentinel found that voting on college campuses soared in last month’s election for a state Supreme Court seat in Wisconsin. In that contest, the liberal candidate who prevailed, Janet Protasiewicz, had made protecting abortion rights a central feature of her campaign.

    Among the voting wards in the city of Eau Claire, for instance, the highest turnout came from the ward that served several University of Wisconsin dorms – with nearly 900 votes cast, up from 150 in a Supreme Court race four years earlier, the paper found. Protasiewicz won 87% of those votes.

    Prominent conservatives have spotlighted these voting trends.

    “Young voters are the issue,” Scott Walker, Wisconsin’s former Republican governor, wrote in a widely noticed Twitter post following the state Supreme Court election. “It comes from years of radical indoctrination – on campus, in school, with social media, & throughout culture,” said Walker, who is president of Young America’s Foundation, which works to popularize conservative ideas among young people. “We have to counter it or conservatives will never win battleground states again.”

    In an interview with CNN this week, Walker said his group is not seeking to change the ground rules for voting among younger Americans. But, he said, conservatives have been “overlooking ways to communicate to young people sooner than a month or two before the election.”

    One longtime GOP lawyer has discussed ways to curtail youth voting.

    The Washington Post, citing a PowerPoint presentation along with an audio recording of portions of the presentation obtained by liberal journalist Lauren Windsor, reported that GOP lawyer Cleta Mitchell recently urged Republicans to limit campus voting during a private gathering of Republican National Committee donors.

    Mitchell, who tried to help former President Donald Trump overturn the 2020 election results in Georgia, did not respond to a CNN interview request through a spokesperson for her current organization.

    In Idaho, notably, the number of young people ages 18 and 19 registered to vote soared 81% between the week of the midterm elections in November 2018 and the same time period in November 2022 – the highest gain in the nation – according to data collected by CIRCLE.

    One of the new laws in the state, which will take effect in January, drops student IDs from the list of accepted identification to vote. Now only these forms of ID can be used: a driver’s license or ID issued by the state’s transportation department, a US passport or identification with a photo issued by the US government, tribal identification or a permit to carry a concealed weapon.

    Student IDs had been accepted for voting for more than a decade in the state.

    State Rep. Tina Lambert, who authored the House version of the bill, declined a CNN interview request, citing a busy schedule.

    But she said in an email that students should be able to navigate the new law. “Students of voting age are smart and able,” Lambert wrote. “They are able to get the ID needed to vote. Most of them have IDs already, that they use for all the other things that they need legal ID for.”

    The law also has the support of Idaho Republican Secretary of State Phil McGrane, who told legislators this year that the change would help “maintain confidence in our elections” – although he said that he doesn’t know of any “instances of students trying to commit voter fraud.”

    He also noted that student identification was rarely used. Just 104 of the nearly 600,000 voters who cast ballots in Idaho’s general election last year did so using student ID, McGrane said.

    “Even if one person out there can only use a student ID to vote, that still matters. That’s still a vote,” said Saumya Sarin, a freshman at the College of Idaho in Caldwell, Idaho, and a volunteer with Babe Vote, a nonpartisan group that has worked to boost youth voter registration in the state. She testified against the proposal in the state legislature earlier this year.

    Saumya Sarin addresses the media at a press briefing announcing that BABE VOTE filed suit challenging the new law that removes student IDs as acceptable identification for voting in Idaho at the Idaho Statehouse in Boise on Friday, March 17.

    Sarin, who turns 19 this week, said she presented a US passport last year when she voted for the first time, but she noted that she had “several friends off the top of my head” who don’t have the forms of identification now required in Idaho.

    “I think the direction that the youth are going with their vote scares the people who are currently in power a little bit because it works against them,” she said.

    Sarin said she’s become active on voting issues to take a stand against state policies she opposes, including Idaho’s limits on gender-affirming medical care for transgender youth and abortions. Idaho has a near-total ban on abortions and last month made it a crime to help a pregnant minor obtain an abortion in another state without parental consent.

    Babe Vote and the League of Women Voters of Idaho have filed a lawsuit in an effort to block the Idaho voter ID laws. The measures “were not driven by any legitimate or credible concerns about the ‘integrity’ of the state’s elections,” the groups argue in their civil complaint. “Instead, they are part of a broader effort to roll back voting rights, particularly for young voters by weaponizing imaginary threats to election integrity.”

    A separate lawsuit, brought by March for Our Lives Idaho and the Idaho Alliance for Retired Americans, in federal court also seeks to block the new laws.

    Not all proposals to restrict student voting have been successful to date.

    A bill introduced in February by GOP state Rep. Carrie Isaac in Texas to prohibit polling places on college campuses has not yet made it out of committee. Another Isaac bill would ban voting on K-12 campuses.

    She told CNN this week that the measures are needed because polling places are sites of raw emotions and high stress, and she doesn’t want that kind of environment in schools.

    “I don’t think it’s smart to invite people that would not otherwise have business on campus on our campuses,” Isaac said. “In Texas, we have two weeks of early voting that people are coming in, that would not otherwise be there. And I think we should do anything and everything to make our campuses as safe as possible.”

    She said she’s confident that college students can find ways to vote off-campus.

    In Georgia, a state that will be a key battleground in the 2024 White House contest, student IDs are accepted as a form of voter identification, but only if they are issued by public colleges in the state. Seven of the 10 Historically Black Colleges and Universities in Georgia are private, making it more difficult for students who attend those schools to cast their ballots, voting rights advocates say.

    Former state Sen. Cecil Staton, a Republican who sponsored the 2006 photo ID law, said the government can ensure consistent standards for student IDs at state schools. “We didn’t feel like we had that same ability with private schools,” he said.

    Aylon Gipson – a Morehouse student from Alabama and a fellow with the voting rights group Campus Vote Project – said he has a lot of friends who have had problems at the polls as a result of Georgia’s law, especially underclassmen who don’t have a driver’s license.

    Gipson, a junior economics major at Morehouse College, poses for a portrait in the library of the Martin Luther King Jr. International Chapel at Morehouse College in Atlanta on May 1.

    “I’ve seen specific instances where students will call me and say, ‘Hey, I tried to go in and vote, but I got turned around at this polling station,’ or specifically our on-campus polling station, because they didn’t have an ID or they didn’t have a valid license to be able to vote with,” Gipson said. “I think it’s disenfranchising students who attend these HBCUs simply because of the fact that we’re private.”

    And in Ohio, which will see a hotly contested US Senate race next year as Democrat Sherrod Brown seeks reelection in a state where the GOP controls the legislature and governor’s office, Tuesday’s primary election marks the first election with the new photo ID rules in place. Voting rights advocates say the new restrictions could spell problems for students who have moved to Ohio for college and are no longer allowed to provide dormitory, utility bills or other documents to establish their legal residency when voting.

    Obtaining a form of ID now required to vote in Ohio, such as a state driver’s license, will invalidate the identification students may hold from their home state.

    “It seems as if this specific group – out-of-state college students, who have every right to vote – have been targeted and singled out,” said Collin Marozzi, deputy policy director of the ACLU of Ohio.

    Legislators, he said, are sending a “poor signal to these college students: ‘We want your money for our colleges. We want your money for our economy. But we don’t really want you to have a voice in the future of this state.’ “

    Students in Ohio still can opt to vote absentee by mail if they don’t want to surrender their identification from the state where they used to live – provided they include the last four digits of their Social Security number on the application. (The law establishing new photo ID requirements also reduces the window to request and return absentee ballots.)

    “For that college student, they make a decision: Am I a voter in Ohio or, say, in Pennsylvania?” said Rob Nichols, a spokesman for Ohio Secretary of State Frank LaRose, a Republican. “If you want to hang on to your Pennsylvania license, you can do so, vote absentee, give the last four digits of your Social, and you are on your merry way.”


  • A foldable phone, new tablet and lots of AI: What Google unveiled at its big developer event | CNN Business


    Google on Wednesday unveiled its latest lineup of hardware products, including its first foldable phone and a new tablet, as well as plans to roll out new AI features to its search engine and productivity tools.

    The updates, announced at its annual Google I/O developer conference, come as the company is simultaneously trying to push beyond its core advertising business with new devices while also racing to defend its search engine from the threat posed by a wave of new AI-powered tools.

    In a sign of where Google’s focus currently lies, the company spent more than 90 minutes teasing a long list of new AI features before mentioning hardware updates.

    Here’s what Google announced at the event.

    Google became the latest tech company to unveil a foldable smartphone. Like other foldables, the $1,799 Pixel Fold features a vertical hinge that can be opened to reveal a tablet-like display. But Google calls the Fold the thinnest foldable on the market.

    “It took some clever engineering work redesigning components like our speakers, our battery and haptics,” said George Hwang, a product manager at Google, on a call ahead of the announcement. The company packed a Pixel phone into a less than 6 mm body – about two thirds of the thickness of its other Pixel phones.

    The Pixel Fold is very much a phone first: when it’s unfolded, it opens up into a 7.6-inch screen, and moves on Google’s custom-built 180-degree hinge. That hinge mechanism is moved out entirely from under the display to improve its dust resistance and decrease the device’s overall thickness, according to the company.

    The Pixel Fold includes features you’d find on a Pixel, such as long exposure, unblur and magic eraser, which lets users remove unwanted or distracting objects. It also has Pixel Fold-specific tools such as dual-screen live translate, which lets a user communicate in another language with the help of fast audio and text translations on the outer screen.

    Google said it optimized its top apps to take advantage of the larger screen but “there’s still work to be done” because “optimizing for a new foldable form factor takes time,” Hwang said. “It’s a process that we’re committed to and it requires steep investment with our developer partners across Android,” Hwang added.

    Google is far from the first to embrace foldables, but it’s possible it waited to launch its own version until the technology became more advanced. Early versions of the Samsung Galaxy Z Fold, for example, had issues with the screen and most apps were not well optimized for the design.

    But even now, the future for foldables remains uncertain. Most apps are still not optimized for foldable devices; prices remain very high; and Google’s chief rival, Apple, has yet to embrace the option.

    Despite great consumer interest in foldable phones — and a resurgence of 90s-style flip phones among celebrities and TikTok influencers — the foldable market is relatively small, with Samsung dominating the category, followed by others including Motorola, Lenovo, Oppo and Huawei. According to ABI Research, foldable and flexible displays made up about 0.7% of the smartphone market in 2021 and were expected to reach just shy of 2% in 2022.

    The Pixel Fold will be available in the US, UK, Germany and Japan. The company said the device will start shipping next month.

    A look at Google's Pixel 7a lineup

    On the surface, the 7a looks similar to the Pixel 7 and 7 Pro, with the same camera bar along the back. It comes with the typical advancements you’d expect to find with any smartphone upgrade – better display, advanced camera and longer-lasting battery. But the 7a now boasts a Tensor G2 processor and a Titan M2 security chip, which bring advanced processing and new artificial intelligence features. It also offers wireless charging for the first time on an A-series model.

    The Pixel lineup has long been known for its cameras, and the 7a is no exception. It’s packed with upgrades, including a 64-megapixel main camera – the largest sensor on a Pixel A series to date, which will help with improved image quality, low light performance and other features. It also offers a new 13-megapixel ultra-wide camera for capturing even wider shots and a new 13-megapixel front camera. For the first time, each camera enables 4K video.

    The 7a also supports many significant Pixel features, including unblur, magic eraser and an improved Night Sight that’s two times faster and sharper than its predecessor. It also allows users to capture long exposure and enhanced zoom.

    The Pixel comes in several colors, including charcoal, snow, sea and coral, and starts at $499 via the Google Store on May 10.

    The Pixel A series has long been aimed at cost-conscious buyers who want good features at a reasonable price, but its reach is limited. Google sells between eight and 10 million Pixel devices each year, according to ABI Research.

    “Generally, the smartphones were really meant for Google to showcase how software, and now AI capabilities, could be effectively optimized on hardware and improve the Android user experience,” said David McQueen, an analyst at ABI Research. “Google has purposely kept volume sales limited as it also has to be mindful of its relationship with other smartphone manufacturers that use the Android OS.”

    The Google Pixel tablet

    While phones were a key focus at the event, Google also refreshed other parts of its hardware lineup.

    Google introduced the Pixel Tablet, which is intended for use around the house, from turning off the lights to setting the thermostat without getting off the couch.

    The tablet, which has rounded edges and corners, comes in three colors: porcelain, hazel and rose, and starts at $499. It will be available on June 20.

    Under the hood, the 11-inch tablet is powered by Google’s Tensor G2 chips, which bring long-lasting battery life and AI features to the device. It also offers a front-facing camera, an 8-megapixel rear camera, and a charging dock.

    Google is also moving forward with plans to bring AI chat features to its core search engine amid a renewed arms race over the technology in Silicon Valley.

    The company said it is introducing the next evolution of Google Search, which will use an AI-powered chatbot to answer questions “you never thought Search could answer” and to help get users the information they want quicker than ever.

    With the update, the look and feel of Google Search results will be noticeably different. When users type a query into the main search bar, they will automatically see an AI-generated response appear alongside traditional results.

    Users can now sign up for the new Google Search, which will first launch in the United States, via the Google app or Chrome’s desktop browser. A limited number of users will gain access in the weeks ahead, according to the company, before the rollout expands more broadly.

    Google is expanding access to its existing chatbot Bard, which operates outside the search engine and can help users do tasks such as outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    The tool, which was previously available to early users via a waitlist only in the US, will soon be available for all users in 120 countries and 40 languages.

    Google is also launching extensions for Bard from its own services, such as Gmail, Sheets and Docs, allowing users to ask questions and collaborate with the chatbot within the apps they’re using.

    Google also announced PaLM 2, its latest large language model to rival ChatGPT-creator OpenAI’s GPT-4.

    PaLM 2 marks a big step forward for the technology that powers the company’s AI products, and Google says it is better at logic, common-sense reasoning and mathematics. It can also generate specialized code in different programming languages.


  • Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman set to testify before Congress | CNN Business



    CNN
     — 

    OpenAI CEO Sam Altman is set to testify before a Senate Judiciary subcommittee on Tuesday after the viral success of ChatGPT, his company’s chatbot tool, renewed an arms race over artificial intelligence and sparked concerns from some lawmakers about the risks posed by the technology.

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”

    A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    Montgomery is expected to urge Congress to adopt a “precision regulation” approach for AI based on specific use cases, and to suggest that lawmakers push companies to test how their systems handle bias and other concerns – and disclose those results.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts.

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    – CNN’s Jennifer Korn contributed to this report.


  • First installment of new Obama oral history project focuses on climate | CNN Politics



    CNN
     — 

    A new oral history project focused on former President Barack Obama’s administration was released on Wednesday, with the first installment centering on climate.

    The project consists of work completed by Incite, an interdisciplinary social science research institute at Columbia University, since 2019. The work from the past four years includes 470 interviews and about 1,100 hours of audio and video with senior officials, policymakers, activists and others involved with the Obama administration.

    Peter Bearman, director of Incite at Columbia University, said the project was motivated by “an urge to decenter the experience of the president and center the study around the experiences and interactions of people both inside and outside of the administration.”

    He said many of the narratives in the first installment address key environmental and energy issues from the Obama administration, including the Keystone pipeline, food and food security, energy, and international climate negotiations such as the Paris Agreement.

    Climate is one of about 40 issue domains the project focuses on. Other sets of interviews on topics such as health care and Black politics are planned to be released throughout the rest of this year and into 2024.

    During a Wednesday discussion previewing the oral history project, panelists focused on climate change and the environment and discussed how climate was prominent in the Obama years, from his initial campaign and throughout his years in office.

    “We pushed very hard during the campaign to raise the climate issue,” environmental activist Frances Beinecke said. “And we raised it during the primaries, and then when he was the candidate we raised it. During that period, we also worked on the platform, on the Democratic platform, making sure that climate was a main feature of the platform.”

    The initial release consists of 17 of the hundreds of interviews. Former US Secretary of Energy Steven Chu, former Administrator of the Environmental Protection Agency Gina McCarthy and environmental activist Bill McKibben are among those interviewed in the first release of the series.

    Valerie Jarrett, former senior adviser to Obama, referred to climate as “one of the largest threats and concerns” and “one of the biggest priorities” for Obama.

    “By preserving these narratives, we ensure that future generations have access to the lived experiences and lessons learned,” Jarrett said during the event. “But ultimately, these interviews will serve a lot as an important record for both historians and scholars, to not just learn, but to learn with an act towards the future.”


  • Google earned $10 million by allowing misleading anti-abortion ads from ‘fake clinics,’ report says | CNN Business


    New York
    CNN
     — 

    Google has earned more than $10 million over the past two years by allowing misleading advertisements for “fake” abortion clinics that aim to stop women from having the procedure, according to an estimate from a report released Thursday from the non-profit Center for Countering Digital Hate.

    The estimated amount is microscopic compared to the more than $200 billion Google generates from ad sales annually. But the report’s data hints at the broad reach pro-life groups can have by placing these advertisements in Google results for common phrases searched for by abortion seekers.

    Using Semrush, an analytics tool, researchers at the CCDH identified “188 fake clinic websites” that placed ads on Google between March 2021 and February of this year. CCDH estimates that ads for fake clinics were clicked on by users 13 million times during this period.

    Some searching for “abortion clinics near me” on Google instead found results directing them toward so-called “crisis pregnancy centers” that may try to talk abortion-seekers out of treatment and offer medically unproven abortion pill reversal techniques, according to the report.

    Other Google searches populated by crisis clinic ads included “abortion pill,” “abortion clinic” and “planned parenthood,” the report said, with clinics in states where abortion is legal spending twice as much as those in states with bans.

    In the wake of the Supreme Court overturning Roe v. Wade, Google faced calls from Congressional Democrats to do more to prevent searches for abortion clinics from returning results for misleading ads – as well as calls from Republican lawmakers to do the opposite. The dueling pressure from lawmakers highlighted how central Google can be for women searching for information on the procedure.

    In a statement Thursday, Google said its approach to abortion ads follows local laws and that any advertiser targeting certain keywords or phrases related to abortion must complete a certification confirming whether or not it provides abortion services.

    “We require any organization that wants to advertise to people seeking information about abortion services to be certified and clearly disclose whether they do or do not offer abortions,” a Google spokesperson told CNN. “We do not allow ads promoting abortion reversal treatments and we also prohibit advertisers from misleading people about the services they offer.”

    “We remove or block ads that violate these policies,” the company added.

    Google said it does not allow abortion pill reversal advertisements because the treatment isn’t approved by the FDA. In response to Thursday’s CCDH report, the company told CNN it took “enforcement action” on content violating this policy.

    Google has continued to face scrutiny in recent months for the steps it takes to protect abortion seekers’ location data.

    Nearly a dozen Senate Democrats wrote to Google in May with questions about how it deletes users’ location history when they have visited sensitive locations such as abortion clinics. The letter came after tests performed by The Washington Post and other privacy advocates appeared to show that Google was not quickly or consistently deleting users’ recorded visits to fertility centers or Planned Parenthood clinics.

    Google previously declined to comment on the lawmakers’ letter. Instead, it referred CNN to a company blog post that includes abortion clinics on a list of sensitive locations, but did not explain what it means when it claims the data will be deleted “soon after” a visit.
