ReportWire

Tag: international-health and science

  • Democrats push abortion rights bills in the Senate ahead of Dobbs anniversary | CNN Politics

    CNN —

    Senate Democrats intend to mark the anniversary of the Supreme Court decision overturning Roe v. Wade by pushing a collection of abortion rights messaging bills.

    Ahead of the anniversary on Saturday, Senate Democrats will ask for “unanimous consent” on legislation that would seek to expand abortion access for women in the US. The procedural step allows any single senator to request a vote on a bill, but any one senator can object and the bill fails. It is a quick way to force action on an issue, but because no roll call is taken, Democrats and Republicans who may face a tough election in 2024 won’t have to go on the record with a vote.

    All of the requests are expected to fail.

    The effort is being led by Sen. Patty Murray, a member of Democratic leadership from Washington state.

    “Senate Democrats will force Republicans to go on the record once again, and explain to the American people why they refuse to codify our right to contraception, why they refuse to let women travel across state lines for lifesaving health care – as we fight to get the votes we need to restore Roe, it’s imperative that we make plain to the country just how extreme and dangerous Republicans’ anti-abortion agenda is,” Murray said in a statement.

    Abortion politics have also recently been in the spotlight in the Senate as Sen. Tommy Tuberville, an Alabama Republican, has placed a hold on confirming more than 250 military promotions over a Pentagon policy created after the Dobbs decision, which allows servicemembers to access time off and reimbursement for travel costs if they have to cross state lines to access reproductive care.

    In the 2022 midterms, abortion was a crucial motivator for many voters, as CNN exit polls showed that 46% of people said that abortion was the most important issue to their vote. Abortion is also likely to be a cornerstone of President Joe Biden’s reelection campaign, as administration officials highlight what Democrats have done to protect access to abortion.

  • First on CNN: Senators press Google, Meta and Twitter on whether their layoffs could imperil 2024 election | CNN Business

    Three US senators are pressing Facebook-parent Meta, Google-parent Alphabet and Twitter about whether their layoffs may have hindered the companies’ ability to fight the spread of misinformation ahead of the 2024 elections.

    In a letter to the companies dated Tuesday, the lawmakers warned that reported staff cuts to content moderation and other teams could make it harder for the companies to fulfill their commitments to election integrity.

    “This is particularly troubling given the emerging use of artificial intelligence to mislead voters,” wrote Minnesota Democratic Sen. Amy Klobuchar, Vermont Democratic Sen. Peter Welch and Illinois Democratic Sen. Dick Durbin, according to a copy of the letter reviewed by CNN.

    Since purchasing Twitter in October, Elon Musk has slashed headcount by more than 80%, in some cases eliminating entire teams.

    Alphabet announced plans to cut roughly 12,000 workers across product areas and regions earlier this year. And Meta has previously said it would eliminate about 21,000 jobs over two rounds of layoffs, hitting across teams devoted to policy, user experience and well-being, among others.

    “We remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community – including our efforts to prepare for elections around the world,” Andy Stone, a spokesperson for Meta, said in a statement to CNN about the letter.

    Alphabet and Twitter did not immediately respond to a request for comment.

    The pullback at those companies has coincided with a broader industry retrenchment in the face of economic headwinds. Peers such as Microsoft and Amazon have also trimmed their workforces, while others have announced hiring freezes.

    But the social media companies are coming under greater scrutiny now in part due to their role facilitating the US electoral process.

    Tuesday’s letter asked Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai and Twitter CEO Linda Yaccarino how each company is preparing for the 2024 elections and for mis- and disinformation surrounding the campaigns.

    To illustrate their concerns, the lawmakers pointed to recent changes at Alphabet-owned YouTube to allow the sharing of false claims that the 2020 presidential election was stolen, along with what they described as content moderation “challenges” at Twitter since the layoffs.

    The letter, which seeks responses by July 10, also asked whether the companies may hire more content moderation employees or contractors ahead of the election, and how the platforms may be specifically preparing for the rise of AI-generated deepfakes in politics.

    Already, candidates such as Florida Gov. Ron DeSantis appear to have used fake, AI-generated images to attack their opponents, raising questions about the risks that artificial intelligence could pose for democracy.

  • Leading AI companies commit to outside testing of AI systems and other safety commitments | CNN Politics

    Microsoft, Google and other leading artificial intelligence companies committed Friday to put new AI systems through outside testing before they are publicly released and to clearly label AI-generated content, the White House announced.

    The pledges are part of a series of voluntary commitments agreed to by the White House and seven leading AI companies – which also include Amazon, Meta, OpenAI, Anthropic and Inflection – aimed at making AI systems and products safer and more trustworthy while Congress and the White House develop more comprehensive regulations to govern the rapidly growing industry. President Joe Biden met with top executives from all seven companies at the White House on Friday.

    In a speech Friday, Biden called the companies’ commitments “real and concrete,” adding they will help fulfill their “fundamental obligations to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and our shared values.”

    “We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years. That has been an astounding revelation,” Biden said.

    White House officials acknowledge that some of the companies have already enacted some of the commitments but argue they will as a whole raise “the standards for safety, security and trust of AI” and will serve as a “bridge to regulation.”

    “It’s a first step, it’s a bridge to where we need to go,” White House deputy chief of staff Bruce Reed, who has been managing the AI policy process, said in an interview. “It will help industry and government develop the capacities to make sure that AI is safe and secure. And we pushed to move so quickly because this technology is moving farther and faster than anything we’ve seen before.”

    While most of the companies already conduct internal “red-teaming” exercises, the commitments will mark the first time they have all committed to allow outside experts to test their systems before they are released to the public. A red team exercise is designed to simulate what could go wrong with a given technology – such as a cyberattack or its potential to be used by malicious actors – and allows companies to proactively identify shortcomings and prevent negative outcomes.

    Reed said the external red-teaming “will help pave the way for government oversight and regulation,” potentially laying the groundwork for that outside testing to be carried out by a government regulator or licenser.

    The commitments could also lead to widespread watermarking of AI-generated audio and visual content with the aim of combating fraud and misinformation.

    The companies also committed to investing in cybersecurity and “insider threat safeguards,” in particular to protect AI model weights, which are essentially the knowledge base upon which AI systems rely; creating a robust mechanism for third parties to report system vulnerabilities; prioritizing research on the societal risks of AI; and developing and deploying AI systems “to help address society’s greatest challenges,” according to the White House.

    Asked by CNN’s Jake Tapper Friday about worries he has when it comes to AI, Microsoft Vice Chair and President Brad Smith pointed to “what people, bad actors, individuals or countries will do” with the technology.

    “That they’ll use it to undermine our elections, that they will use it to seek to break in to our computer networks. You know, that they’ll use it in ways that will undermine the security of our jobs,” he said.

    But, Smith argued, “the best way to solve these problems is to focus on them, to understand them, to bring people together, and to solve them. And the interesting thing about AI, in my opinion, is that when we do that, and we are determined to do that, we can use AI to defend against these problems far more effectively than we can today.”

    Pressed by Tapper about AI and compensation concerns listed in a recent letter signed by thousands of authors, Smith said: “I don’t want it to undermine anybody’s ability to make a living by creating, by writing. That is the balance that we should all want to strike.”

    All of the commitments are voluntary and White House officials acknowledged that there is no enforcement mechanism to ensure the companies stick to the commitments, some of which also lack specificity.

    Common Sense Media, a child internet-safety organization, commended the White House for taking steps to establish AI guardrails, but warned that “history would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”

    “If we’ve learned anything from the last decade and the complete mismanagement of social media governance, it’s that many companies offer a lot of lip service,” Common Sense Media CEO James Steyer said in a statement. “And then they prioritize their profits to such an extent that they will not hold themselves accountable for how their products impact the American people, particularly children and families.”

    The federal government’s failure to regulate social media companies at their inception – and the resistance from those companies – has loomed large for White House officials as they have begun crafting potential AI regulations and executive actions in recent months.

    “The main thing we stressed throughout the discussions with the companies was that we should make this as robust as possible,” Reed said. “The tech industry made a mistake in warding off any kind of oversight, legislation and regulation a decade ago and I think that AI is progressing even more rapidly than that and it’s important for this bridge to regulation to be a sturdy one.”

    The commitments were crafted during a monthslong back-and-forth between the AI companies and the White House that began in May when a group of AI executives came to the White House to meet with Biden, Vice President Kamala Harris and White House officials. The White House also sought input from non-industry AI safety and ethics experts.

    White House officials are working to move beyond voluntary commitments, readying a series of executive actions, the first of which is expected to be unveiled later this summer. Officials are also working closely with lawmakers on Capitol Hill to develop more comprehensive legislation to regulate AI.

    “This is a serious responsibility. We have to get it right. There’s an enormous, enormous potential upside as well,” Biden said.

    In the meantime, White House officials say the companies will “immediately” begin implementing the voluntary commitments and hope other companies sign on in the future.

    “We expect that other companies will see how they also have an obligation to live up to the standards of safety, security and trust. And they may choose – and we would welcome them choosing – joining these commitments,” a White House official said.

    This story has been updated with additional details.

  • ChatGPT creator pulls AI detection tool due to ‘low rate of accuracy’ | CNN Business

    Less than six months after ChatGPT-creator OpenAI unveiled an AI detection tool with the potential to help teachers and other professionals detect AI-generated work, the company has pulled the feature.

    OpenAI quietly shut down the tool last week, citing a “low rate of accuracy,” according to an update to the original company blog post announcing the feature.

    “We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” the company wrote in the update. OpenAI said it is also committed to helping “users to understand if audio or visual content is AI-generated.”

    The news may renew concerns about whether the companies behind a new crop of generative AI tools are equipped to build safeguards. It also comes as educators prepare for the first full school year with tools like ChatGPT publicly available.

    The sudden rise of ChatGPT quickly raised alarms among some educators late last year over the possibility that it could make it easier than ever for students to cheat on written work. Public schools in New York City and Seattle banned students and teachers from using ChatGPT on their districts’ networks and devices. Some educators moved with remarkable speed to rethink their assignments in response to ChatGPT, even as it remained unclear how widespread use of the tool was among students and how harmful it could really be to learning.

    Against that backdrop, OpenAI announced the AI detection tool in February to allow users to check whether an essay was written by a human or by AI. The feature, which worked on English-language AI-generated text, was powered by a machine learning system that takes a piece of text as input and classifies it into one of several categories. After a user pasted a body of text, such as a school essay, into the tool, it returned one of five possible verdicts, ranging from “likely generated by AI” to “very unlikely.”
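    The five-outcome design described above can be sketched as a simple bucketing of a classifier's score. This is an illustrative assumption, not OpenAI's actual implementation: the label wordings and probability cut-offs below are invented for the sketch.

```python
# Hypothetical sketch: map a model's estimated probability that a text
# is AI-generated onto five verdict labels, mirroring the five possible
# outcomes the article describes. Thresholds and label names are
# illustrative assumptions, not OpenAI's real values.

def verdict(p_ai: float) -> str:
    """Bucket a probability in [0, 1] into one of five verdicts."""
    if p_ai >= 0.90:
        return "likely generated by AI"
    if p_ai >= 0.70:
        return "possibly generated by AI"
    if p_ai >= 0.45:
        return "unclear if it is generated by AI"
    if p_ai >= 0.20:
        return "unlikely generated by AI"
    return "very unlikely generated by AI"

print(verdict(0.95))  # likely generated by AI
print(verdict(0.10))  # very unlikely generated by AI
```

    The hard part, and the reason OpenAI cited a “low rate of accuracy,” is not this bucketing step but producing a reliable probability in the first place.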

    But even on its launch day, OpenAI admitted the tool was “imperfect” and results should be “taken with a grain of salt.”

    “We really don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times – much like using AI for any kind of assessment purposes,” Lama Ahmad, policy research director at OpenAI, told CNN at the time.

    While the tool might provide another reference point, such as comparing past examples of a student’s work and writing style, Ahmad said “teachers need to be really careful in how they include it in academic dishonesty decisions.”

    Although OpenAI may be shelving its tool for now, there are some alternatives on the market.

    Other companies, such as Turnitin, have also rolled out AI plagiarism detection tools that could help teachers identify when assignments are written by AI. Meanwhile, Princeton student Edward Tian introduced a similar AI detection tool, GPTZero.

  • Hot box detectors didn’t stop the East Palestine derailment. Research shows another technology might have | CNN

    A failing, flaming wheel bearing doomed the rail car that derailed and created a catastrophe in East Palestine earlier this month, but researchers have developed a fix for the shortcomings of trackside detectors that experts say could have averted the disaster unfolding in the small Ohio town.

    These wayside hot box detectors, stationed on rail tracks every 20 miles or so, use infrared sensors to record the temperatures of railroad bearings as trains pass by. If they sense an overheated bearing, the detectors trigger an alarm, which notifies the train crew they should stop and inspect the rail car for a potential failure.

    So why did these detectors miss a bearing failure before the catastrophe?

    An investigation into hot box detectors published in 2019 and funded by the Department of Transportation found that one “major shortcoming” of these detectors is that they can’t distinguish between healthy and defective bearings, and temperature alone is not a good indicator of bearing health.

    “Temperature is reactive in nature, meaning by the time you’re sensing a high temperature in a bearing, it’s too late, the bearing is already in its final stages of failure,” Constantine Tarawneh, director of the University Transportation Center for Railways Safety (UTCRS) and lead investigator of the study, told CNN.

    As part of the investigation, the UTCRS researchers developed a new system to better detect a bearing issue long before a catastrophic failure. The key: measuring the bearing’s vibration in addition to its temperature and load.

    The vibration of a failing bearing, Tarawneh says, often begins intensifying thousands of miles before a catastrophic failure. So his team created sensors that can be placed on board each rail car, near the bearing, to continuously monitor its vibration throughout its travels.

    “If you put an accelerometer on a bearing and you’re monitoring the vibration levels, the minute a defect happens in the bearing, the accelerometer will sense an increase in vibration, and that could be, in many cases, up to 100,000 miles before the bearing actually fails,” he said.
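    The idea in the quote above can be sketched as a simple threshold rule on vibration energy. This is a minimal illustration of the general technique, not the UTCRS team's actual system: the sample values, the RMS metric, and the 3x-baseline trigger are all assumptions.

```python
import math

# Illustrative sketch (not Tarawneh's actual algorithm): flag a bearing
# when the RMS of its accelerometer readings rises well above a healthy
# baseline. Sample values and the 3x trigger factor are assumptions.

def rms(samples):
    """Root-mean-square of a list of accelerometer readings (in g)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def bearing_alert(samples, baseline_rms, factor=3.0):
    """True when vibration energy exceeds `factor` times the healthy
    baseline, suggesting an emerging bearing defect."""
    return rms(samples) > factor * baseline_rms

healthy = [0.10, -0.12, 0.09, -0.11]   # readings from a healthy bearing
failing = [0.60, -0.55, 0.62, -0.58]   # elevated vibration
base = rms(healthy)

print(bearing_alert(healthy, base))  # False
print(bearing_alert(failing, base))  # True
```

    Because vibration rises long before temperature does, a rule like this can fire tens of thousands of miles before a bearing actually overheats, which is the advantage the researchers describe.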

    Tarawneh, who argues the technology should be federally mandated, says had it been on board Norfolk Southern’s line it would have prevented the derailment in East Palestine.

    “It would have detected the problem months before this happened,” he said. “There wouldn’t have been a derailment.”

    A preliminary report from the East Palestine derailment, released Thursday by the National Transportation Safety Board, found hot box sensors detected that a wheel bearing was heating up miles before it eventually failed and caused the train to derail. But the detectors didn’t alert the crew until it was too late.

    The bearing, according to the report, was 38 degrees above ambient temperature when it passed through a hot box detector 30 miles outside East Palestine. No alert went out, the NTSB said.

    Ten miles later, the next hot box detector found that the bearing had reached 103 degrees above ambient. Video of the train recorded in that area shows sparks and flames around the rail car. Still, no alert went to the crew.

    It wasn’t until a further 20 miles down the tracks, as the train reached East Palestine, that a hot box detector recorded the bearing’s temperature at 253 degrees above ambient and sent an alarm message instructing the crew to slow and stop the train to inspect a hot axle, the report said.

    The crew slowed the train, the report added, leading to an automatic emergency brake application. After the train stopped, the crew observed the derailment.

    The reason those first two hot box readings didn’t trigger an alert, the report said, is because Norfolk Southern’s policy is to only stop and inspect a bearing after it has reached 170 degrees above ambient temperature. The NTSB is planning to review Norfolk Southern’s use of wayside hot box detectors, including spacing and the temperature threshold that determines when crews are alerted.
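    The sequence the NTSB report describes reduces to a single threshold comparison at each detector. The sketch below replays the three readings from the report against Norfolk Southern's 170-degree-above-ambient policy; the location labels are paraphrased from the report, and the code itself is our illustration, not railroad software.

```python
# Replay the NTSB report's three hot box readings against the railroad's
# alert threshold (170 degrees F above ambient, per the report). Only
# the final reading crosses the line, which is why no earlier alert
# reached the crew.

ALERT_THRESHOLD_F = 170  # degrees above ambient, per the NTSB report

readings = [
    ("30 miles from East Palestine", 38),
    ("20 miles from East Palestine", 103),
    ("East Palestine", 253),
]

for location, temp_above_ambient in readings:
    alarmed = temp_above_ambient >= ALERT_THRESHOLD_F
    status = "ALARM" if alarmed else "no alert"
    print(f"{location}: {temp_above_ambient}F above ambient -> {status}")
```

    Lowering the threshold, or spacing detectors more closely, are exactly the levers the NTSB says it plans to review.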

    “Had there been a detector earlier, that derailment may not have occurred,” said NTSB Chair Jennifer Homendy at a Thursday press conference.

    In a statement responding to the NTSB report, Norfolk Southern stressed that its hot box detectors were operating as designed, and that those detectors trigger an alarm at a temperature threshold that is “among the lowest in the rail industry.” CNN has reached out to Norfolk Southern for comment on vibration sensor technology.

    Hot box detectors are unregulated, so companies like Norfolk Southern can turn them on and off at their own discretion and choose the temperature threshold at which crews receive an alert.

    There are several causes of overheated roller bearings, including fatigue cracking, water damage, mechanical damage, a loose bearing or a wheel defect, according to the NTSB, and the agency says it is investigating what caused the failure in East Palestine.

    “Roller bearings fail, but it is absolutely critical for problems to be identified and addressed early so these aren’t run until failure,” Homendy said. “You cannot wait until they’ve failed. Problems need to be identified early, so something catastrophic like this does not occur again.”

    Hum Industrial Technology, a rail car telematics company, has licensed the vibration sensor technology created by Tarawneh and his team. And it has launched pilot programs with several rail companies. But at this point, those sensors are on very few trains operating in the United States, which Tarawneh largely blames on the cost of retrofitting and monitoring cars and what he sees as companies prioritizing profit.

    It’s not clear exactly what it would cost to retrofit every train car in operation with sensors today, but Hum Industrial Technology stressed that it would cost less to put a sensor on a bearing than to replace a bearing.

    “They see it as, well, why should we do it if it’s not mandated?” Tarawneh said. “It’s like a lot of people are saying, ‘well, I’m willing to take the risk. It’s not that many derailments per year.’”

    But Steve Ditmeyer, a former Federal Railroad Administration official, says equipping every rail car with on board sensors may not be financially feasible.

    “What they’re proposing will work, but it’s very, very expensive,” Ditmeyer told CNN. “And one does have to take cost into consideration.”

    It would take more than 12 million on board sensors, according to Tarawneh, to fully equip the roughly 1.6 million rail cars in service across North America.
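    Tarawneh's figure checks out as back-of-the-envelope arithmetic. The eight-bearings-per-car assumption below is ours, not from the article: a typical North American freight car rides on two trucks with two axles each, giving eight wheel bearings per car.

```python
# Rough check of the "more than 12 million sensors" figure: one sensor
# per wheel bearing, eight bearings per car (2 trucks x 2 axles x 2
# bearings, an assumed typical configuration), across the roughly
# 1.6 million rail cars in service in North America.

CARS_IN_SERVICE = 1_600_000
BEARINGS_PER_CAR = 8  # assumption for a standard four-axle freight car

sensors_needed = CARS_IN_SERVICE * BEARINGS_PER_CAR
print(f"{sensors_needed:,} sensors")  # 12,800,000 sensors
```

    At that scale, per-unit cost dominates, which is the crux of Ditmeyer's objection below.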

    Ditmeyer says railroads should invest more heavily in wayside acoustic bearing detectors, which sit along the tracks – much like hot box detectors – and monitor the sound of passing trains. They listen for noise that indicates a bearing failure well before a potential catastrophe.

    As of 2019, only 39 acoustic bearing detectors were in use across North America, compared with more than 6,000 hot box detectors, according to a DOT report from that year.

    “They are the only way that I can think of that would have prevented the accident by having caught a failing bearing earlier,” Ditmeyer said.
