ReportWire

Tag: domestic-business

  • Foxconn pulls out of $19 billion chipmaking project in India | CNN Business

    Hong Kong (CNN) —

    Foxconn says it is exiting an ambitious project to help build one of India’s first chip factories.

    The world’s largest contract electronics maker will “no longer move forward” with its $19.4 billion joint venture with Vedanta (VEDL), an Indian metals and energy conglomerate, in Asia’s third-largest economy, it said Monday.

    The news was seen as a blow to the Indian government’s plans to turn the country into a tech manufacturing powerhouse, even as officials have sought to counter that view.

    In a statement to CNN, Foxconn, a Taiwanese tech giant best known as one of Apple’s (AAPL) top suppliers, said the decision was based on “mutual agreement” and allowed the company “to explore more diverse development opportunities.”

    The joint venture will now be wholly owned by Vedanta.

    In a follow-up statement Tuesday, Foxconn reaffirmed its commitment to invest in Indian chipmaking, saying it will apply for a government program that subsidizes the cost of setting up semiconductor or electronic display production facilities in the country.

    “Building fabs from scratch in a new geography is a challenge, but Foxconn is committed to invest in India,” the company said, referring to fabrication plants, the technical term for semiconductor factories.

    “There was recognition from both sides that the project was not moving fast enough, there were challenging gaps we were not able to smoothly overcome, as well as external issues unrelated to the project,” it said.

    Foxconn said that since the deal was announced in February 2022, it had worked with Vedanta on plans to set up a semiconductor plant in the country that would support a wider ecosystem for manufacturers.

    It did not provide an investment figure for the facility, but Indian Prime Minister Narendra Modi tweeted in September that the total investment would amount to 1.54 trillion rupees, which was then equivalent to $19.4 billion.

    Foxconn said last year it was actively scouting for locations for the plant and held discussions with “a few state governments.”

    Foxconn CEO Young Liu has in recent months courted Indian partners, having traveled there in February to seek new collaborators.

    The company, which already has factories in the Indian states of Andhra Pradesh and Tamil Nadu, is one of many global tech firms looking for opportunities in the country, particularly as multinationals seek to diversify their supply chains beyond China.

    On Monday, India’s electronics and information technology minister Ashwini Vaishnaw told Indian news outlet and CNN affiliate News18 that both Vedanta and Foxconn are “completely committed to India’s semiconductor mission.”

    Rajeev Chandrasekhar, the country’s minister of state for electronics and IT, also tweeted that the news “changes nothing about” India’s semiconductor manufacturing goals, adding that the decision would still allow “both companies to independently pursue their strategies” in India.

    The project had been hailed as a milestone in India’s campaign to attract more investment in manufacturing, a sector sorely needed to help ease unemployment.

    Prime Minister Modi had framed the project as a significant boost for the economy and jobs.

    Foxconn shares rose 1.3% in Taipei on Tuesday following its announcement, while Vedanta’s shares fell 1.4% in Mumbai. Vedanta has not responded to a request for comment.

    Other prominent tech companies have moved to expand production in India recently.

    Last month, US chipmaker Micron (MICR) announced a new factory in the western state of Gujarat, calling it the country’s first semiconductor assembly and test manufacturing facility.

    The venture will see Micron invest up to $825 million, and create “up to 5,000 new direct Micron jobs and 15,000 community jobs over the next several years,” according to the company.


  • With the rise of AI, social media platforms could face perfect storm of misinformation in 2024 | CNN Business

    New York (CNN) —

    Last month, a video posted to Twitter by Florida Gov. Ron DeSantis’ presidential campaign used apparently AI-generated images showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s top infectious disease specialist, were tricky to spot: they were shown alongside real images of the pair and with a text overlay saying, “real life Trump.”

    As the images began spreading, fact-checking organizations and sharp-eyed users quickly flagged them as fake. But Twitter, which has slashed much of its staff in recent months under new ownership, did not remove the video. Instead, it eventually added a community note — a contributor-led feature to highlight misinformation on the social media platform — to the post, alerting the site’s users that in the video “3 still shots showing Trump embracing Fauci are AI generated images.”

    Experts in digital information integrity say it’s just the start of AI-generated content being used ahead of the 2024 US presidential election in ways that could confuse or mislead voters.

    A new crop of AI tools offers the ability to generate compelling text and realistic images — and, increasingly, video and audio. Experts, and even some executives overseeing AI companies, say these tools risk being used to spread false information and mislead voters, including ahead of the 2024 US election.

    “The campaigns are starting to ramp up, the elections are coming fast and the technology is improving fast,” said Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public. “We’ve already seen evidence of the impact that AI can have.”

    Social media companies bear significant responsibility for addressing such risks, experts say, as the platforms where billions of people go for information and where bad actors often go to spread false claims. But they now face a perfect storm of factors that could make it harder than ever to keep up with the next wave of election misinformation.

    Several major social networks have pulled back on their enforcement of some election-related misinformation and undergone significant layoffs over the past six months, which in some cases hit election integrity, safety and responsible AI teams. Current and former US officials have also raised alarms that a federal judge’s decision earlier this month to limit how some US agencies communicate with social media companies could have a “chilling effect” on how the federal government and states address election-related disinformation. (On Friday, an appeals court temporarily blocked the order.)

    Meanwhile, AI is evolving at a rapid pace. And despite calls from industry players and others, US lawmakers and regulators have yet to implement real guardrails for AI technologies.

    “I’m not confident in even their ability to deal with the old types of threats,” said David Evan Harris, an AI researcher and ethics adviser to the Psychology of Technology Institute, who previously worked on responsible AI at Facebook-parent Meta. “And now there are new threats.”

    The major platforms told CNN they have existing policies and practices in place to address misinformation, in some cases specifically targeting “synthetic” or computer-generated content, which they say will help them identify and address any AI-generated misinformation. None of the companies agreed to make anyone working on generative AI detection efforts available for an interview.

    The platforms “haven’t been ready in the past, and there’s absolutely no reason for us to believe that they’re going to be ready now,” Bhaskar Chakravorti, dean of global business at The Fletcher School at Tufts University, told CNN.

    Misleading content, especially related to elections, is nothing new. But with the help of artificial intelligence, it’s now possible for anyone to quickly, easily and cheaply create huge quantities of fake content.

    And given AI technology’s rapid improvement over the past year, fake images, text, audio and videos are likely to be even harder to discern by the time the US election rolls around next year.

    “We’ve still got more than a year to go until the election. These tools are going to get better and, in the hands of sophisticated users, they can be very powerful,” said Harris. He added that the kinds of misinformation and election meddling that took place on social media in 2016 and 2020 will likely only be exacerbated by AI.

    The various forms of AI-generated content could be used together to make false information more believable — for example, an AI-written fake article accompanied by an AI-generated photo purporting to show what happened in the report, said Margaret Mitchell, researcher and chief ethics scientist at open-source AI firm Hugging Face.

    AI tools could be useful for anyone wanting to mislead, but especially for organized groups and foreign adversaries incentivized to meddle in US elections. Massive foreign troll farms have been hired to attempt to influence previous elections in the United States and elsewhere, but “now, one person could be in charge of deploying thousands of thousands of generative AI bots that work,” to pump out content across social media to mislead voters, Mitchell, who previously worked at Google, said.

    OpenAI, the maker of the popular AI chatbot ChatGPT, issued a stark warning about the risk of AI-generated misinformation in a recent research paper. An abundance of false information from AI systems, whether intentional or created by biases or “hallucinations” from the systems, has “the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction,” it said.

    Examples of AI-generated misinformation have already begun to crop up. In May, several Twitter accounts, including some that had paid for a blue “verification” checkmark, shared fake images purporting to show an explosion near the Pentagon. While the images were quickly debunked, their circulation was briefly followed by a dip in the stock market. Twitter suspended at least one of the accounts responsible for spreading the images. Facebook labeled posts about the images as “false information,” along with a fact check.

    A month earlier, the Republican National Committee released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington D.C. to whom CNN showed the video did not spot it on their first watch.

    Dozens of Democratic lawmakers last week sent a letter calling on the Federal Election Commission to consider cracking down on the use of artificial intelligence technology in political advertisements, warning that deceptive ads could harm the integrity of next year’s elections.

    Ahead of 2024, many of the platforms have said that they will be rolling out plans to protect the election’s integrity, including from the threat of AI-generated content.

    TikTok earlier this year rolled out a policy stipulating that “synthetic” or manipulated media created by AI must be clearly labeled. That is in addition to its civic integrity policy, which prohibits misleading information about electoral processes, and its general misinformation policy, which prohibits false or misleading claims that could cause “significant harm” to individuals or society.

    YouTube has a manipulated media policy that prohibits content that has been “manipulated or doctored” in a way that could mislead users and “may pose a serious risk of egregious harm.” The platform also has policies against content that could mislead users about how and when to vote, false claims that could discourage voting and content that “encourages others to interfere with democratic processes.” YouTube also says it prominently surfaces reliable news and information about elections on its platform, and that its election-focused team includes members of its trust and safety, product and “Intelligence Desk” teams.

    “Technically manipulated content, including election content, that misleads users and may pose a serious risk of egregious harm is not allowed on YouTube,” YouTube spokesperson Ivy Choi said in a statement. “We enforce our manipulated content policy using machine learning and human review, and continue to improve on this work to stay ahead of potential threats.”

    A Meta spokesperson told CNN that the company’s policies apply to all content on its platforms, including AI-generated content. That includes its misinformation policy, which stipulates that the platform removes false claims that could “directly contribute to interference with the functioning of political processes and certain highly deceptive manipulated media,” and may reduce the spread of other misleading claims. Meta also prohibits ads featuring content that has been debunked by its network of third-party fact checkers.

    TikTok and Meta have also joined a group of tech industry partners coordinated by the non-profit Partnership on AI dedicated to developing a framework for responsible use of synthetic media.

    Asked for comment on this story, Twitter responded with an auto-reply of a poop emoji.

    Twitter has rolled back much of its content moderation in the months since billionaire Elon Musk took over the platform, and instead has leaned more heavily on its “Community Notes” feature which allows users to critique the accuracy of and add context to other people’s posts. On its website, Twitter also says it has a “synthetic media” policy under which it may label or remove “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

    Still, as is often the case with social media, the challenge is likely to be less a matter of having the policies in place than enforcing them. The platforms largely use a mix of human and automated review to identify misinformation and manipulated media. The companies declined to provide additional details about their AI detection processes, including how many staffers are involved in such efforts.

    But AI experts say they’re worried that the platforms’ detection systems for computer-generated content may have a hard time keeping up with the technology’s advancements. Even some of the companies developing new generative AI tools have struggled to build services that can accurately detect when something is AI-generated.

    Some experts are urging all the social platforms to implement policies requiring that AI-generated or manipulated content be clearly labeled, and calling on regulators and lawmakers to establish guardrails around AI and hold tech companies accountable for the spread of false claims.

    One thing is clear: the stakes for success are high. Experts say that not only does AI-generated content create the risk of internet users being misled by false information; it could also make it harder for them to trust real information about everything from voting to crisis situations.

    “We know that we’re going into a very scary situation where it’s going to be very unclear what has happened and what has not actually happened,” said Mitchell. “It completely destroys the foundation of reality when it’s a question whether or not the content you’re seeing is real.”


  • ‘It almost doubled our workload’: AI is supposed to make jobs easier. These workers disagree | CNN Business

    (CNN) —

    A new crop of artificial intelligence tools carries the promise of streamlining tasks, improving efficiency and boosting productivity in the workplace. But that hasn’t been Neil Clarke’s experience so far.

    Clarke, an editor and publisher, said he recently had to temporarily shutter the online submission form for his science fiction and fantasy magazine, Clarkesworld, after his team was inundated with “consistently bad” AI-generated submissions.

    “They’re some of the worst stories we’ve seen, actually,” Clarke said of the hundreds of pieces of AI-produced content he and his team of humans now must manually parse through. “But it’s more of the problem of volume, not quality. The quantity is burying us.”

    “It almost doubled our workload,” he added, describing the latest AI tools as “a thorn in our side for the last few months.” Clarke said that he anticipates his team is going to have to close submissions again. “It’s going to reach a point where we can’t handle it.”

    Since ChatGPT launched late last year, many of the tech world’s most prominent figures have waxed poetic about how AI has the potential to boost productivity, help us all work less and create new and better jobs in the future. “In the next few years, the main impact of AI on work will be to help people do their jobs more efficiently,” Microsoft co-founder Bill Gates said in a blog post recently.

    But as is often the case with tech, the long-term impact isn’t always clear or the same across industries and markets. Moreover, the road to a techno-utopia is often bumpy and plagued with unintended consequences, whether it’s lawyers fined for submitting fake court citations from ChatGPT or a small publication buried under an avalanche of computer-generated submissions.

    Big Tech companies are now rushing to jump on the AI bandwagon, pledging significant investments into new AI-powered tools that promise to streamline work. These tools can help people quickly draft emails, make presentations and summarize large datasets or texts.

    In a recent study, researchers at the Massachusetts Institute of Technology found that access to ChatGPT increased productivity for workers who were assigned tasks like writing cover letters, “delicate” emails and cost-benefit analyses. “I think what our study shows is that this kind of technology has important applications in white collar work. It’s a useful technology. But it’s still too early to tell if it will be good or bad, or how exactly it’s going to cause society to adjust,” Shakked Noy, a PhD student in MIT’s Department of Economics, who co-authored the paper, said in a statement.

    Mathias Cormann, the secretary-general of the Organization for Economic Co-operation and Development, recently said the intergovernmental organization has found that AI can improve some aspects of job quality, but that there are tradeoffs.

    “Workers do report, though, that the intensity of their work has increased after the adoption of AI in their workplaces,” Cormann said in public remarks, pointing to the findings of a report released by the organization. The report also found that for non-AI specialists and non-managers, the use of AI had only a “minimal impact on wages so far” – meaning that for the average employee, the work is scaling up, but the pay isn’t.

    Ivana Saula, the research director for the International Association of Machinists and Aerospace Workers, said that workers in her union have said they feel like “guinea pigs” as employers rush to roll out AI-powered tools on the job.

    And it hasn’t always gone smoothly, Saula said. The implementation of these new tech tools has often led to more “residual tasks that a human still needs to do.” This can include picking up additional logistics tasks that a machine simply can’t do, Saula said, adding more time and pressure to daily workflows.

    The union represents a broad range of workers, including in air transportation, health care, public service, manufacturing and the nuclear industry, Saula said.

    “It’s never just clean cut, where the machine can entirely replace the human,” Saula told CNN. “It can replace certain aspects of what a worker does, but there’s some tasks that are outstanding that get placed on whoever remains.”

    Workers are also “saying that my workload is heavier” after the implementation of new AI tools, Saula said, and “the intensity at which I work is much faster because now it’s being set by the machine.” She added that the feedback they are getting from workers shows how important it is to “actually involve workers in the process of implementation.”

    “Because there’s knowledge on the ground, on the frontlines, that employers need to be aware of,” she said. “And oftentimes, I think there’s disconnects between frontline workers and what happens on shop floors, and upper management, and not to mention CEOs.”

    Perhaps nowhere are the pros and cons of AI for businesses as apparent as in the media industry. These tools offer the promise of accelerating if not automating copywriting, advertising and certain editorial work, but there have already been some notable blunders.

    News outlet CNET had to issue “substantial” corrections earlier this year after experimenting with using an AI tool to write stories. And what was supposed to be a simple AI-written story on Star Wars published by Gizmodo earlier this month similarly required a correction and resulted in employee turmoil. But both outlets have signaled they will still move forward with using the technology to assist in newsrooms.

    Others, like Clarke, have tried to combat the fallout from the rise of AI by relying on more AI. Clarke said he and his team turned to AI-powered detectors of AI-generated writing to deal with the deluge of submissions, but found the tools weren’t helpful because of how often they produced “false positives and false negatives,” especially for writers whose second language is English.
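
    Clarke’s complaint is quantifiable: a detector is only as useful as its false-positive and false-negative rates on writing whose provenance is already known. Below is a minimal sketch of that evaluation in Python, with a hypothetical detector callable standing in for any commercial tool; the names and toy data are illustrative, not Clarkesworld’s actual workflow.

    ```python
    from typing import Callable, Iterable, Tuple

    def error_rates(
        detector: Callable[[str], bool],      # True means "flagged as AI-generated"
        labeled: Iterable[Tuple[str, bool]],  # (text, actually_ai) pairs
    ) -> Tuple[float, float]:
        """Return (false_positive_rate, false_negative_rate) for a detector."""
        fp = fn = human = ai = 0
        for text, actually_ai in labeled:
            flagged = detector(text)
            if actually_ai:
                ai += 1
                fn += not flagged   # AI-written text that slipped through
            else:
                human += 1
                fp += flagged       # human-written text wrongly flagged
        return fp / max(human, 1), fn / max(ai, 1)

    # Toy example: a naive length-based "detector" scored on two labeled texts.
    naive = lambda text: len(text.split()) > 4
    sample = [("A short human note.", False),
              ("Plausible generated filler text here.", True)]
    print(error_rates(naive, sample))  # -> (0.0, 0.0) on this tiny sample
    ```

    For an editor, a high false-positive rate is the costly failure mode, since it means rejecting legitimate human writers; Clarke’s point is that current tools get both directions wrong too often to trust.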

    “You listen to these AI experts, they go on about how these things are going to do amazing breakthroughs in different fields,” Clarke said. “But those aren’t the fields they’re currently working in.”


  • DeSantis and his team unleash on Rep. Donalds for questioning Florida’s new Black history standards | CNN Politics

    (CNN) —

    Florida Gov. Ron DeSantis on Thursday accused Rep. Byron Donalds – the only Black Republican in Florida’s congressional delegation – of aligning himself with Vice President Kamala Harris by critiquing the state’s new standards for teaching Black history.

    Donalds tweeted Wednesday that the new standards are “good, robust, & accurate.” But the two-term congressman added that a new requirement for middle school students to be taught that slaves learned skills they later benefited from “is wrong & needs to be adjusted.” He added that he has “faith that (Florida Department of Education) will correct this.”

    In the face of that seemingly gentle criticism, DeSantis’ administration and online allies unloaded on Donalds, who has backed former President Donald Trump over his home state governor for the 2024 nomination. Jeremy Redfern, the spokesman for the governor’s office, called Donalds a “supposed conservative.” Christina Pushaw, the campaign’s rapid response director, replied to Donalds’ tweet: “Did Kamala Harris write this tweet?” DeSantis’ Education Commissioner Manny Diaz tweeted that Florida would “not back down … at the behest of a supposedly conservative congressman.”

    DeSantis joined the pile-on during his Iowa bus tour, telling Donalds to “stand up for your state.”

    “You got to choose: Are you going to side with Kamala Harris and liberal media outlets or are you going to side with the state of Florida?” he said.

    Responding to the blowback to his remarks, Donalds on Twitter called the online attacks aimed at him “disingenuous” and said DeSantis supporters were “desperately attempting to score political points,” adding that that is why he is “proud to have endorsed” Trump.

    “What’s crazy to me is I expressed support for the vast majority of the new African American history standards and happened to oppose one sentence that seemed to dignify the skills gained by slaves as a result of their enslavement,” he wrote on Twitter.

    This week’s clash with Donalds is the latest example of how the DeSantis campaign’s failure to win support from key members of his state’s GOP has come back to bite him as he runs against Trump. Last week, Rep. Greg Steube, who has also endorsed Trump, put DeSantis on blast as property insurance rates in the state continue to soar.

    “The result of the state’s top elected official failing to focus on (and be present in) Florida,” Steube said, tweeting out a headline that linked the sharp rise in premiums to DeSantis’ time in office.

    The war of words between two Florida Republicans this week is all the more remarkable because of how closely aligned Donalds and DeSantis once appeared.

    Donalds introduced DeSantis and his family at the governor’s election night victory party last year, heaping praise on the man he called “America’s governor.” He played DeSantis’ 2018 election opponent, Democrat Andrew Gillum, during debate preparation. DeSantis had also formed a close alliance with Donalds’ wife, a school choice advocate who received a plum appointment to the Florida Gulf Coast University board of trustees.

    But there was a notable break in their relationship in April, when Donalds endorsed Trump over DeSantis. Donalds had previously said publicly that he would wait to make an announcement until the field was set. The decision stunned DeSantis’ political operation, which had clearly underestimated the cost of the governor’s failure to build a rapport with fellow Republicans. Ultimately, most Florida Republicans in the House lined up behind Trump.

    The back and forth with Donalds stems from the new standards for how Black history should be taught in the state’s public schools, which were approved earlier this month by the Florida Board of Education. While education and civil rights advocates have decried many elements of the new standards as whitewashing America’s dark history, much of the national attention has focused on one passage that clarifies middle school students should learn “how slaves developed skills which, in some instances, could be applied for their personal benefit.”

    Amid intense objections to the language, Harris responded by holding a press conference in Jacksonville where she accused Florida’s leaders of “creating these unnecessary debates.”

    “This is unnecessary to debate whether enslaved people benefited from slavery,” she said. “Are you kidding me? Are we supposed to debate that?”

    DeSantis and state education officials have fiercely defended the new standards in recent days. Redfern and others have pointed to similar language that appeared in the course framework for a new Advanced Placement African American Studies course piloted by the College Board. Florida was widely criticized by Democrats for blocking the course from being taught in state public schools.

    According to one document, the AP course intended to teach students: “In addition to agricultural work, enslaved people learned specialized trades and worked as painters, carpenters, tailors, musicians, and healers in the North and South. Once free, African Americans used these skills to provide for themselves and others.”

    The College Board said Thursday it “resolutely” disagrees with the notion that enslavement was beneficial for African Americans after some compared the content of its course to Florida’s recently approved curriculum.

    On Thursday, DeSantis said the state standards are “very clear about the injustices of slavery in vivid detail.”


  • Hot box detectors didn’t stop the East Palestine derailment. Research shows another technology might have | CNN

    (CNN) —

    A failing, flaming wheel bearing doomed the rail car that derailed and created a catastrophe in East Palestine earlier this month, but researchers have offered an alternative to the detectors that missed it, a technology experts say could have averted the disaster unfolding in the small Ohio town.

    Wayside hot box detectors, stationed along rail tracks every 20 miles or so, use infrared sensors to record the temperatures of railroad bearings as trains pass by. If a detector senses an overheated bearing, it triggers an alarm notifying the train crew to stop and inspect the rail car for a potential failure.
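
    In software terms, the wayside check amounts to a simple threshold comparison on one number per pass. Here is a minimal sketch of that logic in Python, with hypothetical names and the alarm threshold left as a parameter, since railroads set their own; this is an illustration, not any vendor’s actual firmware.

    ```python
    from dataclasses import dataclass

    @dataclass
    class BearingReading:
        bearing_temp: float   # bearing surface temperature as the train passes
        ambient_temp: float   # ambient temperature at the detector

    def should_alarm(reading: BearingReading, threshold_above_ambient: float) -> bool:
        """A wayside hot box detector alarms only when a passing bearing runs
        hotter than ambient by more than the railroad's configured threshold."""
        return (reading.bearing_temp - reading.ambient_temp) > threshold_above_ambient
    ```

    The check is one instantaneous temperature reading per bearing, every 20 miles or so, which is what makes it reactive rather than predictive.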

    So why did these detectors miss a bearing failure before the catastrophe?

    An investigation into hot box detectors published in 2019 and funded by the Department of Transportation found that one “major shortcoming” of these detectors is that they can’t distinguish between healthy and defective bearings, and temperature alone is not a good indicator of bearing health.

    “Temperature is reactive in nature, meaning by the time you’re sensing a high temperature in a bearing, it’s too late, the bearing is already in its final stages of failure,” Constantine Tarawneh, director of the University Transportation Center for Railways Safety (UTCRS) and lead investigator of the study, told CNN.

    As part of the investigation, the UTCRS researchers developed a new system to better detect a bearing issue long before a catastrophic failure. The key: measuring the bearing’s vibration in addition to its temperature and load.

    The vibration of a failing bearing, Tarawneh says, often begins intensifying thousands of miles before a catastrophic failure. So his team created sensors that can be placed on board each rail car, near the bearing, to continuously monitor its vibration throughout its travels.

    “If you put an accelerometer on a bearing and you’re monitoring the vibration levels, the minute a defect happens in the bearing, the accelerometer will sense an increase in vibration, and that could be, in many cases, up to 100,000 miles before the bearing actually fails,” he said.
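
    The onboard approach Tarawneh describes is continuous rather than instantaneous: an accelerometer samples vibration constantly, and a sustained rise above the bearing’s own healthy baseline is the early warning. Here is a rough sketch of that idea, with made-up parameter values chosen purely for illustration; the UTCRS system’s actual signal processing is not described here.

    ```python
    from collections import deque

    def make_vibration_monitor(baseline_g: float, factor: float = 2.0, window: int = 100):
        """Return a feed(sample) function that flags a bearing once its average
        vibration over the last `window` samples exceeds `factor` times its
        healthy baseline (magnitudes in g)."""
        recent = deque(maxlen=window)

        def feed(sample_g: float) -> bool:
            recent.append(sample_g)
            if len(recent) < window:
                return False  # not enough data yet for a verdict
            return sum(recent) / window > factor * baseline_g

        return feed
    ```

    Because the rise in vibration typically begins thousands of miles before failure, even a crude running-average check like this can surface a defect long before a one-shot temperature reading at a wayside detector would.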

    Tarawneh, who argues the technology should be federally mandated, says that had it been installed on Norfolk Southern’s train, it would have prevented the derailment in East Palestine.

    “It would have detected the problem months before this happened,” he said. “There wouldn’t have been a derailment.”

    A preliminary report from the East Palestine derailment, released Thursday by the National Transportation Safety Board, found hot box sensors detected that a wheel bearing was heating up miles before it eventually failed and caused the train to derail. But the detectors didn’t alert the crew until it was too late.

    The bearing, according to the report, was 38 degrees above ambient temperature when it passed a hot box detector 30 miles outside East Palestine. No alert went out, the NTSB said.

    Ten miles later, the next hot box detector recorded that the bearing had reached 103 degrees above ambient. Video of the train recorded in that area shows sparks and flames around the rail car. Still, no alert went to the crew.

    It wasn’t until a further 20 miles down the tracks, as the train reached East Palestine, that a hot box detector recorded the bearing’s temperature at 253 degrees above ambient and sent an alarm message instructing the crew to slow and stop the train to inspect a hot axle, the report said.

    The crew slowed the train, the report added, leading to an automatic emergency brake application. After the train stopped, the crew observed the derailment.

    The reason those first two readings didn’t trigger an alert, the report said, is that Norfolk Southern’s policy calls for stopping and inspecting a bearing only after it has reached 170 degrees above ambient temperature. The NTSB is planning to review Norfolk Southern’s use of wayside hot box detectors, including their spacing and the temperature threshold that determines when crews are alerted.
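
    Replaying the report’s readings through a threshold check like the one sketched above shows why only the final detector alarmed under a 170-degree policy:

    ```python
    # Bearing readings from the NTSB preliminary report, in degrees above
    # ambient, at the three successive wayside detectors.
    readings = [("30 miles out", 38), ("20 miles out", 103), ("East Palestine", 253)]
    THRESHOLD = 170  # Norfolk Southern's stop-and-inspect threshold, per the report

    for location, above_ambient in readings:
        status = "ALERT" if above_ambient > THRESHOLD else "no alert"
        print(f"{location}: {above_ambient} degrees above ambient -> {status}")
    # Only the 253-degree reading clears the threshold; by then the bearing
    # was already in the final stages of failure.
    ```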

    “Had there been a detector earlier, that derailment may not have occurred,” said NTSB Chair Jennifer Homendy at a Thursday press conference.

    In a statement responding to the NTSB report, Norfolk Southern stressed that its hot box detectors were operating as designed, and that those detectors trigger an alarm at a temperature threshold that is “among the lowest in the rail industry.” CNN has reached out to Norfolk Southern for comment on vibration sensor technology.

    Hot box detectors are unregulated, so companies like Norfolk Southern can turn them on and off at their own discretion and choose the temperature threshold at which crews receive an alert.

    There are several possible causes of an overheated roller bearing, including fatigue cracking, water damage, mechanical damage, a loose bearing or a wheel defect, according to the NTSB, which says it is investigating what caused the failure in East Palestine.

    “Roller bearings fail, but it is absolutely critical for problems to be identified and addressed early so these aren’t run until failure,” Homendy said. “You cannot wait until they’ve failed. Problems need to be identified early, so something catastrophic like this does not occur again.”

    Hum Industrial Technology, a rail car telematics company, has licensed the vibration sensor technology created by Tarawneh and his team and has launched pilot programs with several rail companies. But at this point, those sensors are on very few trains operating in the United States, which Tarawneh largely blames on the cost of retrofitting and monitoring cars and on what he sees as companies prioritizing profit.

    It’s not clear exactly what it would cost to retrofit every train car in operation with sensors today, but Hum Industrial Technology stressed that it would cost less to put a sensor on a bearing than to replace a bearing.

    “They see it as, well, why should we do it if it’s not mandated?” Tarawneh said. “It’s like a lot of people are saying, ‘well, I’m willing to take the risk. It’s not that many derailments per year.’”

    But Steve Ditmeyer, a former Federal Railroad Administration official, says equipping every rail car with onboard sensors may not be financially feasible.

    “What they’re proposing will work, but it’s very, very expensive,” Ditmeyer told CNN. “And one does have to take cost into consideration.”

    It would take more than 12 million onboard sensors, according to Tarawneh, to fully equip the roughly 1.6 million rail cars in service across North America.
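
    That count implies roughly eight sensors per car (1.6 million cars × 8 ≈ 12.8 million), consistent with one sensor per bearing on a typical four-axle freight car, though the per-car figure is an inference rather than something Tarawneh spelled out.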

    Ditmeyer says railroads should invest more heavily in wayside acoustic bearing detectors, which sit along the tracks – much like hot box detectors – and monitor the sound of passing trains. They listen for noise that indicates a bearing failure well before a potential catastrophe.

    Only 39 acoustic bearing detectors were in use across North America as of 2019, compared with more than 6,000 hot box detectors, according to a DOT report from that year.

    “They are the only way that I can think of that would have prevented the accident by having caught a failing bearing earlier,” Ditmeyer said.
