ReportWire

Tag: Privacy

  • Neo-Nazis Are Fleeing Telegram for Encrypted App SimpleX Chat

    Dozens of neo-Nazis are fleeing Telegram and moving to a relatively unknown secret chat app that has received funding from Twitter founder Jack Dorsey.

    In a report from the Institute for Strategic Dialogue published on Friday morning, researchers found that in the wake of the arrest of Telegram founder Pavel Durov and charges against leaders of the so-called Terrorgram Collective, dozens of extremist groups have moved to the app SimpleX Chat in recent weeks over fears that Telegram’s privacy policies expose them to being arrested. The Terrorgram Collective is a neo-Nazi propaganda network that calls for acolytes to target government officials, attack power stations, and murder people of color.

    While ISD stopped short of naming SimpleX in its report, the researchers point out that the app promotes itself as “having a different burner email or phone for each contact, and no hassle to manage them.” This is exactly how SimpleX refers to itself on its website.

    Last month, one accelerationist group with more than 13,000 subscribers on Telegram, linked to the now-defunct neo-Nazi terrorist group Atomwaffen Division, began migrating to SimpleX. Administrators of the channel advised subscribers that “while it’s not as smooth as Telegram, it appears to be miles ahead with regard to privacy and security.”

    The group now has 1,000 members on SimpleX and, according to ISD, is “part of a wider network built by neo-Nazi accelerationists that consists of nearly 30 channels and group chats,” which includes other well-known accelerationist groups like the Base. Accelerationists seek to speed up the downfall of Western society by triggering a race war in order to rebuild civilization based on their own white Christian values.

    The network of groups on SimpleX is also sharing extremist content, including al-Qaeda training manuals, Hamas rocket development guides, neo-Nazi accelerationist handbooks, and militant anarchist literature. And in their newly secure channels on SimpleX, the members of the groups have immediately made direct calls for violence.

    “During a 24-hour period on September 25, analysts observed three instances of users calling for the assassination of Vice President Kamala Harris, and one instance calling for the assassination of former President Donald Trump,” the ISD researchers wrote. “Similarly, numerous users called for a race war that would hasten the fall of society, allow them to take the US by force, and institute their desired system of white supremacy.”

    SimpleX Chat is an app that was founded by UK-based developer Evgeny Poberezkin. It was initially launched in 2021, and a blog post in August announced that it had passed 100,000 downloads on Google’s Play store. The same blog post announced that Dorsey had led a $1.3 million investment round, having previously praised the app on other social media platforms. Dorsey did not reply to a request for comment.

    For years, neo-Nazi groups have flourished on Telegram, many of them under the assumption that Telegram was a fully encrypted platform that provided a greater level of security than it really did. Telegram was used by these groups for building out their networks, sharing propaganda, and planning attacks. However, two of the leaders of the Terrorgram Collective were arrested and charged last month, which was a key factor in triggering the migration to SimpleX, the ISD analysts wrote. The collective had used Telegram to encourage acts of terrorism in the US and overseas.

    David Gilbert

  • Pavel Durov Defends Telegram’s Privacy Changes Amid User Unrest

    Telegram CEO Pavel Durov today defended recent changes to his platform, amid concerns his arrest in France has made the messaging app more compliant with legal requests to share user data with the authorities.

    Durov attempted to minimize the significance of changes made to the app since he was arrested in August and charged with complicity in a range of crimes, including spreading sexual images of children. He was forbidden from leaving France for six months and must appear at a police station twice a week.

    In his post, the 39-year-old indirectly addressed speculation that Telegram may strengthen its notoriously light-touch content moderation as a result of his arrest. “Our core principles haven’t changed,” Durov stressed, in a post on the platform. “We’ve always strived to comply with relevant local laws—as long as they didn’t go against our values of freedom and privacy.”

    He attributed a recent uptick in the number of EU legal requests received and deemed valid by the app over the past several months to European authorities having begun using the correct Telegram email address.

    Yet since Durov’s arrest, Telegram has introduced a series of subtle changes. In late August, the company’s FAQ page read: “To this day, we have disclosed 0 bytes of user data to third parties, including governments.” Now the phrase “user data” has been replaced with “user messages.” Telegram did not reply to WIRED’s request for comment asking what exactly this change means.

    Then, early in September, Telegram quietly made it possible for users to report illegal content in private and group chats for moderators to review. Later that same month, Durov also announced Telegram had changed its terms of service to prevent the app’s abuse by criminals and would share user locations in response to legal requests. “We’ve made it clear that the IP addresses and phone numbers of those who violate our rules can be disclosed to relevant authorities,” he said at the time.

    Today, Durov framed those changes as a technicality. “Since 2018, Telegram has been able to disclose IP addresses/phone numbers of criminals to authorities,” he explained. Although last week he said that privacy policies in different countries had been “unified,” he insisted that “in reality, little has changed.”

    What has changed, however, is Durov’s tone. For years, Telegram cultivated an image as a proudly anti-authority platform that was politically neutral, while governments and digital rights groups bemoaned how difficult it was to contact its moderators.

    Now, there are signs Durov is adopting a more conciliatory attitude toward the authorities. That has prompted panic among some of the app’s less savory users, including German extremists and Russian military bloggers, who have expressed concern that the CEO’s arrest may be an attempt to access their data. Durov’s message today carried yet another warning to them. “We do not allow criminals to abuse our platform or evade justice,” he said.

    Morgan Meaker

  • Appeals court reinstates Indiana lawsuit against TikTok alleging child safety, privacy concerns

    INDIANAPOLIS — The Indiana Court of Appeals has reinstated a lawsuit filed by the state accusing TikTok of deceiving its users about the video-sharing platform’s level of inappropriate content for children and the security of its consumers’ personal information.

    In a 3-0 ruling issued Monday, a three-judge panel of the state appeals court reversed two November 2023 decisions by an Allen County judge that dismissed a pair of lawsuits the state had filed in December 2022 against TikTok.

    Those suits, which have been consolidated, allege the app contains “salacious and inappropriate content” despite the company claiming it is safe for children 13 years and under. The litigation also argues that the app deceives consumers into believing their sensitive and personal information is secure.

    In November’s ruling, Allen Superior Court Judge Jennifer L. DeGroote found that her court lacked personal jurisdiction over the case and reaffirmed a previous court ruling which found that downloading a free app does not count as a consumer transaction under the Indiana Deceptive Consumer Sales Act.

    But in Monday’s ruling, Judge Paul Mathias wrote on behalf of the appeals court that TikTok’s millions of Indiana users and the $46 million in Indiana-based income the company reported in 2021 create sufficient contact between the company and the state to establish the jurisdiction of Indiana’s courts over TikTok, The Times of Northwest Indiana reported.

    Mathias also wrote that TikTok’s business model of providing access to its video content library in exchange for the personal data of its Indiana users counts as a “consumer transaction” under the law, even if no payment is involved.

    “The plain and ordinary definition of the word ‘sale,’ which is not otherwise defined in the DCSA, includes any consideration to effectuate the transfer of property, not only an exchange for money,” Mathias wrote.

    “It is undisputed that TikTok exchanges access to its app’s content library for end-user personal data. That is the bargain between TikTok and its end-users. And, under the plain and ordinary use of the word, that is a ‘sale’ of access to TikTok’s content library for the end-user’s personal data. TikTok’s business model is therefore a consumer transaction under the DCSA.”

    A spokesperson for the Indiana Attorney General’s office said Tuesday in a statement that the appeals court “took a common sense approach and agreed with our office’s argument that there’s simply no serious question that Indiana has established specific personal jurisdiction over TikTok.”

    “By earning more than $46 million from Hoosier consumers in 2021, TikTok is doing business in the state and is therefore subject to this lawsuit,” the statement adds.

    The Associated Press left a message Tuesday afternoon for a lead attorney for TikTok seeking comment on the appeals court’s ruling.

    TikTok is owned by ByteDance, a Chinese company that moved its headquarters to Singapore in 2020. The app has been a target over the past year of state and federal lawmakers who say the Chinese government could access the app’s users’ data.

    Indiana Attorney General Todd Rokita has repeatedly personally urged Hoosiers to “patriotically delete” the TikTok app due to its supposed ties to the Chinese Communist Party.

  • ICE Signs $2 Million Contract With Spyware Maker Paragon Solutions

    Paragon was founded in 2019 by veterans of the Israel Defense Forces’ powerful intelligence Unit 8200, with the active involvement of former Israeli prime minister Ehud Barak, an investor estimated to own a sizable slice of the company.

    The company has received investment from the Boston-headquartered Battery Ventures, “considered to be one of the world’s top venture capital firms,” and two of its founders formerly worked for Blumberg Capital, another large US venture capital firm.

    Israeli media reported in June that a US private equity fund with a portfolio of security companies has been in talks to acquire control of Paragon, estimating its valuation at $1 billion.

    To secure its unique US-approved, “ethical” positioning, Paragon has made “deliberate efforts” since its establishment to break into the US market, notes the Atlantic Council.

    In 2019, as Paragon was developing Graphite, the company enlisted WestExec Advisors, a prominent Washington, DC, consulting firm cofounded by former Obama administration officials, including current US secretary of state Antony Blinken, to advise on its “strategic approach to the US and European markets,” a company executive told the Financial Times. Avril Haines, a former WestExec staffer, is now the US director of national intelligence.

    To remain in the US government’s “good graces,” Paragon in February 2023 hired another DC-based lobbying firm, Holland & Knight, “with a good track record in avoiding sanctions,” as some reports point out. Lobbying disclosures reveal a spend of at least $280,000 across 2023 and 2024 on this campaign.

    The fact that the spyware vendor has not been placed on an entity list, and that none of its executives have been sanctioned by the Biden administration, suggests that Paragon’s lobbying efforts have been successful.

    In addition, Biden’s executive order leaves enough margin for the deployment of tools like Graphite. When a senior US administration official was asked specifically about potential abuses of Paragon’s flagship product, they said that the executive order “requires the heads of agencies to review any activity that might be relevant,” without excluding the possibility of lawful use.

    Meanwhile, the company continues to grow and is advertising several roles in Israel. In the US, Paragon boosted its presence in the wake of the signing of the executive order and started hiring intelligence veterans, including former CIA and FBI officers at its subsidiary, “hoping it would pick up new business.” Fresh reports from February 2024 confirmed the steady growth.

    Paragon’s $2 million contract with ICE is tangible proof that the company’s approach is paying off. It remains to be seen whether Graphite’s deployment will align with the protection of human rights, privacy, and democracy.

    Vas Panagiotopoulos

  • A Creative Trick Makes ChatGPT Spit Out Bomb-Making Instructions

    After Apple’s product launch event this week, WIRED did a deep dive on the company’s new secure server environment, known as Private Cloud Compute, which attempts to replicate in the cloud the security and privacy of processing data locally on users’ individual devices. The goal is to minimize possible exposure of data processed for Apple Intelligence, the company’s new AI platform. In addition to hearing about PCC from Apple’s senior vice president of software engineering, Craig Federighi, WIRED readers also received a first look at content generated by Apple Intelligence’s “Image Playground” feature as part of crucial updates on the recent birthday of Federighi’s dog Bailey.

    Turning to privacy protection of a very different kind in another new AI service, WIRED looked at how users of the social media platform X can keep their data from being slurped up by the “unhinged” generative AI tool from xAI known as Grok AI. And in other news about Apple products, researchers developed a technique for using eye tracking to discern passwords and PINs people typed using 3D Apple Vision Pro avatars—a sort of keylogger for mixed reality. (The flaw that made the technique possible has since been patched.)

    On the national security front, the US this week indicted two people accused of spreading propaganda meant to inspire “lone wolf” terrorist attacks. The case, against alleged members of the far-right network known as the Terrorgram Collective, marks a turn in how the US cracks down on neofascist extremists.

    And there’s more. Each week, we round up the privacy and security news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

    OpenAI’s generative AI platform ChatGPT is designed with strict guardrails that keep the service from offering advice on dangerous and illegal topics like tips on laundering money or a how-to guide for disposing of a body. But an artist and hacker who goes by “Amadon” figured out a way to trick or “jailbreak” the chatbot by telling it to “play a game” and then guiding it into a science-fiction fantasy story in which the system’s restrictions didn’t apply. Amadon then got ChatGPT to spit out instructions for making dangerous fertilizer bombs. An OpenAI spokesperson did not respond to TechCrunch’s inquiries about the research.

    “It’s about weaving narratives and crafting contexts that play within the system’s rules, pushing boundaries without crossing them. The goal isn’t to hack in a conventional sense but to engage in a strategic dance with the AI, figuring out how to get the right response by understanding how it ‘thinks,’” Amadon told TechCrunch. “The sci-fi scenario takes the AI out of a context where it’s looking for censored content … There really is no limit to what you can ask it once you get around the guardrails.”

    In the fervent investigations following the September 11, 2001, terrorist attacks in the United States, the FBI and CIA both concluded that it was coincidental that a Saudi Arabian official had helped two of the hijackers in California and that there had not been high-level Saudi involvement in the attacks. The 9/11 Commission incorporated that determination, but subsequent findings indicated that those conclusions might not be sound. On the 23rd anniversary of the attacks this week, ProPublica published new evidence “suggest[ing] more strongly than ever that at least two Saudi officials deliberately assisted the first Qaida hijackers when they arrived in the United States in January 2000.”

    The evidence comes primarily from a federal lawsuit against the Saudi government brought by survivors of the 9/11 attacks and relatives of victims. A judge in New York will soon make a decision in that case about a Saudi motion to dismiss. But evidence that has already emerged in the case, including videos and documents such as telephone records, points to possible connections between the Saudi government and the hijackers.

    “Why is this information coming out now?” said retired FBI agent Daniel Gonzalez, who pursued the Saudi connections for almost 15 years. “We should have had all of this three or four weeks after 9/11.”

    The United Kingdom’s National Crime Agency said on Thursday that it arrested a teenager on September 5 as part of the investigation into a cyberattack on September 1 on the London transportation agency Transport for London (TfL). The suspect is a 17-year-old male and was not named. He was “detained on suspicion of Computer Misuse Act offenses” and has since been released on bail. In a statement on Thursday, TfL wrote, “Our investigations have identified that certain customer data has been accessed. This includes some customer names and contact details, including email addresses and home addresses where provided.” Some data related to the London transit payment cards known as Oyster cards may have been accessed for about 5,000 customers, including bank account numbers. TfL is reportedly requiring roughly 30,000 users to appear in person to reset their account credentials.

    In a decision on Tuesday, Poland’s Constitutional Tribunal blocked an effort by Poland’s lower house of parliament, known as the Sejm, to launch an investigation into the country’s apparent use of the notorious hacking tool known as Pegasus while the Law and Justice (PiS) party was in power from 2015 to 2023. Three judges who had been appointed by PiS were responsible for blocking the inquiry. The decision, which cannot be appealed, is controversial, with some, like Polish parliament member Magdalena Sroka, saying that it was “dictated by the fear of liability.”

    Lily Hay Newman

  • Apple Intelligence Promises Better AI Privacy. Here’s How It Actually Works

    Apple is making every production PCC server build publicly available for inspection so people unaffiliated with Apple can verify that PCC is doing (and not doing) what the company claims, and that everything is implemented correctly. All of the PCC server images are recorded in a cryptographic attestation log, essentially an indelible record of signed claims, and each entry includes a URL for where to download that individual build. PCC is designed so Apple can’t put a server into production without logging it. And in addition to offering transparency, the system works as a crucial enforcement mechanism to prevent bad actors from setting up rogue PCC nodes and diverting traffic. If a server build hasn’t been logged, iPhones will not send Apple Intelligence queries or data to it.
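    The enforcement rule described above can be illustrated with a minimal sketch (the build names, hash scheme, and log structure here are hypothetical; Apple’s actual attestation log holds signed claims with download URLs, not bare hashes):

    ```python
    import hashlib

    # Hypothetical append-only transparency log of released PCC server builds.
    # In the real system each entry is a signed claim plus a download URL.
    TRANSPARENCY_LOG = {
        hashlib.sha256(b"pcc-build-2024.09.01").hexdigest(),
        hashlib.sha256(b"pcc-build-2024.09.15").hexdigest(),
    }

    def client_will_send(server_build_measurement: str) -> bool:
        """The client refuses to send queries or data to any server whose
        build measurement has not been published to the log."""
        return server_build_measurement in TRANSPARENCY_LOG

    logged = hashlib.sha256(b"pcc-build-2024.09.15").hexdigest()
    rogue = hashlib.sha256(b"rogue-node-build").hexdigest()
    assert client_will_send(logged)      # logged build: request proceeds
    assert not client_will_send(rogue)   # unlogged node: client refuses
    ```

    The point of the design is that a rogue node cannot receive traffic without first announcing itself in the indelible log, which is what lets the transparency mechanism double as enforcement.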

    PCC is part of Apple’s bug bounty program, and vulnerabilities or misconfigurations researchers find could be eligible for cash rewards. Apple says, though, that since the iOS 18.1 beta became available in late July, no one has found any flaws in PCC so far. The company recognizes that it has only made the tools to evaluate PCC available to a select group of researchers so far.

    Multiple security researchers and cryptographers tell WIRED that Private Cloud Compute looks promising, but they haven’t spent significant time digging into it yet.

    “Building Apple silicon servers in the data center when we didn’t have any before, building a custom OS to run in the data center was huge,” Federighi says. He adds that “creating the trust model where your device will refuse to issue a request to a server unless the signature of all the software the server is running has been published to a transparency log was certainly one of the most unique elements of the solution—and totally critical to the trust model.”

    In response to questions about Apple’s partnership with OpenAI and its integration of ChatGPT, the company emphasizes that partnerships are not covered by PCC and operate separately. ChatGPT and other integrations are turned off by default, and users must manually enable them. Then, if Apple Intelligence determines that a request would be better fulfilled by ChatGPT or another partner platform, it notifies the user each time and asks whether to proceed. Additionally, people can use these integrations while logged into their account for a partner service like ChatGPT or can use them through Apple without logging in separately. Apple said in June that another integration with Google’s Gemini is also in the works.

    Apple said this week that beyond launching in United States English, Apple Intelligence is coming to Australia, Canada, New Zealand, South Africa, and the United Kingdom in December. The company also said that additional language support—including for Chinese, French, Japanese, and Spanish—will drop next year. Whether Apple Intelligence will be permitted under the European Union’s AI Act, and whether Apple will be able to offer PCC in its current form in China, are open questions.

    “Our goal is to bring ideally everything we can to provide the best capabilities to our customers everywhere we can,” Federighi says. “But we do have to comply with regulations, and there is uncertainty in certain environments we’re trying to sort out so we can bring these features to our customers as soon as possible. So, we’re trying.”

    He adds that as the company expands its ability to do more Apple Intelligence computation on-device, it may be able to use this as a workaround in some markets.

    Those who do get access to Apple Intelligence will have the ability to do far more than they could with past versions of iOS, from writing tools to photo analysis. Federighi says that his family celebrated their dog’s recent birthday with an Apple Intelligence–generated GenMoji (viewed and confirmed to be very cute by WIRED). But while Apple’s AI is meant to be as helpful and invisible as possible, the stakes are incredibly high for the security of the infrastructure underpinning it. So how are things going so far? Federighi sums it up without hesitation: “The rollout of Private Cloud Compute has been delightfully uneventful.”

    Lily Hay Newman

  • What You Need to Know About Grok AI and Your Privacy

    But X also makes it clear the onus is on the user to judge the AI’s accuracy. “This is an early version of Grok,” xAI says on its help page. The chatbot may therefore “confidently provide factually incorrect information, missummarize, or miss some context,” xAI warns.

    “We encourage you to independently verify any information you receive,” xAI adds. “Please do not share personal data or any sensitive and confidential information in your conversations with Grok.”

    Grok Data Collection

    The vast amount of data collected is another area of concern—especially since you are automatically opted in to sharing your X data with Grok, whether you use the AI assistant or not.

    xAI’s Grok Help Center page describes how the company “may utilize your X posts as well as your user interactions, inputs and results with Grok for training and fine-tuning purposes.”

    Grok’s training strategy carries “significant privacy implications,” says Marijus Briedis, chief technology officer at NordVPN. Beyond the AI tool’s “ability to access and analyze potentially private or sensitive information,” Briedis adds, there are additional concerns “given the AI’s capability to generate images and content with minimal moderation.”

    While Grok-1 was trained on “publicly available data up to Q3 2023” but was not “pre-trained on X data (including public X posts),” according to the company, Grok-2 has been explicitly trained on all “posts, interactions, inputs, and results” of X users, with everyone being automatically opted in, says Angus Allan, senior product manager at CreateFuture, a digital consultancy specializing in AI deployment.

    The EU’s General Data Protection Regulation (GDPR) is explicit about obtaining consent to use personal data. In this case, xAI may have “ignored this for Grok,” says Allan.

    This led regulators in the EU to pressure X into suspending training on EU users’ data within days of Grok-2’s launch last month.

    Failure to abide by user privacy laws could lead to regulatory scrutiny in other countries. While the US doesn’t have a similar regime, the Federal Trade Commission has previously fined Twitter for not respecting users’ privacy preferences, Allan points out.

    Opting Out

    One way to prevent your posts from being used for training Grok is by making your account private. You can also use X privacy settings to opt out of future model training.

    To do so, select Privacy & Safety > Data sharing and Personalization > Grok. Under Data Sharing, uncheck the option that reads, “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning.”

    Even if you no longer use X, it’s still worth logging in and opting out. X can use all of your past posts—including images—for training future models unless you explicitly tell it not to, Allan warns.

    It’s possible to delete all of your conversation history at once, xAI says. Deleted conversations are removed from its systems within 30 days, unless the firm has to keep them for security or legal reasons.

    No one knows how Grok will evolve, but judging by its actions so far, Musk’s AI assistant is worth monitoring. To keep your data safe, be mindful of the content you share on X and stay informed about any updates in its privacy policies or terms of service, Briedis says. “Engaging with these settings allows you to better control how your information is handled and potentially used by technologies like Grok.”

    Kate O’Flaherty

  • Cars talking to one another could help reduce fatal crashes on US roads

    The secret to avoiding red lights during rush hour in Utah’s largest city might be as simple as following a bus.

    Transportation officials have spent the past few years refining a system in which radio transmitters inside commuter buses talk directly to the traffic signals in the Salt Lake City area, requesting a few extra seconds of green when they approach.

    Congestion on these so-called smart streets is already noticeably smoother, but it’s just a small preview of the high-tech upgrades that could be coming soon to roads across Utah and ultimately across the U.S.

    Buoyed by a $20 million federal grant and an ambitious call to “Connect the West,” officials aim to ensure that every vehicle in Utah, as well as in neighboring Colorado and Wyoming, can eventually communicate with other vehicles and with roadside infrastructure about congestion, accidents, road hazards and weather conditions.

    With that knowledge, drivers can instantly know they should take another route, bypassing the need for a human to manually send an alert to an electronic street sign or the mapping apps found on cellphones.

    “A vehicle can tell us a lot about what’s going on in the roadway,” said Blaine Leonard, a transportation technology engineer at the Utah Department of Transportation. “Maybe it braked really hard, or the windshield wipers are on, or the wheels are slipping. The car anonymously broadcasts to us that blip of data 10 times a second, giving us a constant stream of information.”
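    The anonymous, ten-times-a-second status “blips” Leonard describes can be sketched roughly as follows (the field names, JSON encoding, and per-message random pseudonym are illustrative assumptions; real V2X deployments use standardized basic safety messages sent over dedicated radio):

    ```python
    import json
    import time
    import uuid

    def make_blip(hard_braking: bool, wipers_on: bool, wheels_slipping: bool) -> bytes:
        """One anonymous status blip. A fresh random pseudonym per message
        stands in for the rotating certificates real deployments use to
        limit tracking of individual vehicles."""
        return json.dumps({
            "pseudonym": uuid.uuid4().hex,  # no persistent vehicle identity
            "hard_braking": hard_braking,
            "wipers_on": wipers_on,
            "wheels_slipping": wheels_slipping,
        }).encode()

    # Broadcast at 10 Hz, as described above (loop only; no radio layer here).
    INTERVAL_S = 1 / 10
    for _ in range(3):
        blip = make_blip(hard_braking=False, wipers_on=True, wheels_slipping=False)
        # a real system would hand `blip` to the V2X radio stack here
        time.sleep(INTERVAL_S)
    ```

    Because each blip carries a fresh pseudonym, no single message identifies the car; the privacy debate is over whether correlating many such messages can still re-identify a vehicle.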

    When cars transmit information in real time to other cars and the various sensors posted along and above the road, the technology is known broadly as vehicle-to-everything, or V2X. Last month, the U.S. Department of Transportation unveiled a national blueprint for how state and local governments and private companies should deploy the various V2X projects already in the works to make sure everyone is on the same page.

    The overarching objective is universal: dramatically curb roadway deaths and serious injuries, which have recently spiked to historic levels.

    A 2016 analysis by the National Highway Traffic Safety Administration concluded V2X could help. Implementing just two of the earliest vehicle-to-everything applications nationwide would prevent 439,000 to 615,000 crashes and save 987 to 1,366 lives, its research found.

    Dan Langenkamp has been lobbying for road safety improvements since his wife Sarah Langenkamp, a U.S. diplomat, was killed by a truck while biking in Maryland in 2022. Joining officials at the news conference announcing the vehicle-to-everything blueprint, Langenkamp urged governments across the U.S. to roll out the technology as widely and quickly as possible.

    “How can we as government officials, as manufacturers, and just as Americans not push this technology forward as fast as we possibly can, knowing that we have the power to rescue ourselves from this disaster, this crisis on our roads,” he said.

    Most of the public resistance has been about privacy. Although the V2X rollout plan commits to safeguarding personal information, some privacy advocates remain skeptical.

    Critics say that while the system may not track specific vehicles, it can compile enough identifying characteristics — even something as seemingly innocuous as tire pressure levels — that it wouldn’t take too much work to figure out who is behind the wheel and where they are going.

    “Once you get enough unique information, you can reasonably say the car that drives down this street at this time that has this particular weight class probably belongs to the mayor,” said Cliff Braun, associate director of technology, policy and research for the Electronic Frontier Foundation, which advocates for digital privacy.

    The federal blueprint says the nation’s top 75 metropolitan areas should aspire to have at least 25% of their signalized intersections equipped with the technology by 2028, along with higher milestones in subsequent years. With its fast start, the Salt Lake City area already has surpassed 20%.

    Of course, upgrading the signals is the relatively easy part. The most important data comes from the cars themselves. While most new ones have connected features, they don’t all work the same way.

    Before embarking on the “Connect the West” plan, Utah officials tested what they call the nation’s first radio-based, connected vehicle technology, using only the data supplied by fleet vehicles such as buses and snow plows. One early pilot program upgraded the bus route on a busy stretch of Redwood Road, and it isn’t just the bus riders who have noticed a difference.

    “Whatever they’re doing is working,” said Jenny Duenas, assistant director of nearby Panda Child Care, where 80 children between 6 weeks and 12 years old are enrolled. “We haven’t seen traffic for a while. We have to transport our kiddos out of here, so when it’s a lot freer, it’s a lot easier to get out of the daycare.”

    Casey Brock, bus communications supervisor for the Utah Transit Authority, said most of the changes might not be noticeable to drivers. However, even shaving a few seconds off a bus route can dramatically reduce congestion while improving safety, he said.

    “From a commuter standpoint it may be, ‘Oh, I had a good traffic day,’” Brock said. “They don’t have to know all the mechanisms going on behind the scenes.”

    This summer, Michigan opened a 3-mile (4.8-kilometer) stretch of a connected and automated vehicle corridor planned for Interstate 94 between Ann Arbor and Detroit. The pilot project features digital infrastructure, including sensors and cameras installed on posts along the highway, that will help drivers prepare for traffic slowdowns by sending notifications about such things as debris and stalled vehicles.

    Similar technology is being employed for a smart freight corridor around Austin, Texas, that aims to inform truck drivers of road conditions and eventually cater to self-driving trucks.

    Darran Anderson, director of strategy and innovation at the Texas Department of Transportation, said officials hope the technology not only boosts the state’s massive freight industry but also helps reverse a troubling trend that has spanned more than two decades. The last day without a road fatality in Texas was Nov. 7, 2000.

    Cavnue, a Washington, D.C.-based subsidiary of Alphabet’s Sidewalk Infrastructure Partners, funded the Michigan project and was awarded a contract to develop the one in Texas. The company has set a goal of becoming an industry leader in smart roads technology.

    Chris Armstrong, Cavnue’s vice president of product, calls V2X “a digital seatbelt for the car” but says it only works if cars and roadside infrastructure can communicate seamlessly with one another.

    “Instead of speaking 50 different languages, overnight we’d like to all speak the same language,” he said.

  • Germany’s Far Right Is in a Panic Over Telegram

    Soon after the arrest of Telegram founder and CEO Pavel Durov, a warning that was viewed more than 85,000 times started circulating among Germany’s far right: “Back up your Telegram data as quickly as you can and clean your account.”

    The message came from Kim Dotcom, the embattled German founder of the now-defunct digital piracy website Megaupload who is set to be extradited from New Zealand, and who knows a thing or two about facing penalties for illegal activity on the internet.

    Telegram users may have reason to fear after French authorities threw the book at Durov, charging him with complicity in crimes that take place on the app, including the sharing of child pornography and the trading of narcotics. If Durov can be held liable for crimes on the app, so too can the criminals perpetrating them, the logic goes.

    Researchers at Germany’s Center for Monitoring, Analysis, and Strategy (CeMAS) track around 3,000 channels and 2,000 groups linked to the German far right and conspiracy movements. Users are known to post racist and antisemitic hate speech, and some groups contain Nazi symbols, Holocaust denial, and calls to violence, openly flouting Germany’s strict criminal code. But a mass exodus from the platform, where groups have spent the past five years building a global infrastructure for radicalization and offline demonstrations, would be tantamount to starting from scratch online.

    “If you’re a terrorist or you’re an extremist, you’re going to follow the path of least resistance, and in this particular case, that probably means Telegram,” Adam Hadley, the founder and executive director of the United Nations–backed organization Tech Against Terrorism, tells WIRED.

    Durov’s arrest is a shot across the bow for Telegram, which now suddenly finds itself in the sights of European law enforcement and regulators. Neo-Nazis’ favorite app is staring down an existential threat, and they’re not quite sure what to do about it.

    A ‘Bridge Technology’

    Alarm spread quickly the Saturday of Durov’s arrest. Just 90 minutes after French media reported that Durov’s private jet had been intercepted by authorities at Paris’ Le Bourget Airport, a far-right channel posted that his arrest “may have political reasons and be a tool to gain access to personal data of Telegram users.”

    The channel is associated with the Reichsbürger movement, which believes Germany is not a sovereign state and is still occupied by Allied powers. German police thwarted their coup plot in 2022, discovering a cache of more than $500,000 in gold and cash, as well as hundreds of guns, knives, ballistic helmets, and ammunition rounds.

    Similar messages began proliferating across the app. That night, Austrian extremist Martin Sellner wrote—the translation here is via Google’s translation tool—that “the ‘liberal West’ is switching off the democracy simulation. All communication channels may soon collapse. Will Musk be arrested next?” The message was viewed more than 40,000 times as estimated by TGStat, a Telegram analytics tool, which provided the view counts cited in this story.

    Sellner was banned from entering Germany in March for being the keynote speaker at the far-right Alternative für Deutschland (AfD) Party’s ill-famed November Potsdam conference. There, he presented a plan to members of Germany’s surging far-right party on conducting mass deportations once it came into power. AfD emerged victorious Sunday in a state election in eastern Germany, granting the far right a historic first since World War II.

    Josh Axelrod

  • Telegram Faces a Reckoning. Other Founders Should Beware

    “[Elon] Musk and fellow executives should be reminded of their criminal liability,” said Bruce Daisley, a former executive at Twitter, who worked at the company’s British office, days after British protesters tried to set fire to a hotel for asylum seekers.

    But Telegram has provoked politicians more than any other platform. What could be called the company’s uncollaborative approach has put the platform—part messaging app, part social media network—on a collision course with governments around the world.

    The case in France is far from the first time Telegram has been reprimanded by authorities for its refusal to cooperate. Telegram has been temporarily suspended twice in Brazil, in 2022 and 2023, both times after being accused of failing to cooperate with legal orders.

    In 2022, similar events unfolded in Germany when the country’s interior minister also threatened to ban the app after letters, suggestions of fines, and even a Telegram-dedicated task force all went unanswered, according to the authorities, who were concerned about anti-lockdown groups using the app to discuss political assassinations. Multiple German newspapers, including the tabloid Bild, sent journalists to the office Telegram lists as its headquarters in Dubai and found it deserted, its doors locked.

    Earlier in 2024, Spain briefly blocked Telegram after broadcasters claimed copyrighted material was circulating on the app. Judge Santiago Pedraz of Spain’s National High Court said his decision to ban was based on Telegram’s lack of cooperation with the case.

    The accusations in France are very specific to Telegram’s way of working, says Arne Möhle, cofounder of encrypted email service Tuta. “Of course it’s important to be independent but at the same time, it’s also important to comply with authority requests if they are valid,” he says. “It’s important to show [criminal activities are] something you don’t want to support with your privacy-oriented service.”

    France’s decision to charge Durov is a rare move to link a tech executive to crimes taking place on their platform, but it is not without precedent. Durov joins the ranks of the founders of The Pirate Bay, who were sentenced by Swedish authorities to a year in prison in 2009; and the German-born founder of MegaUpload, Kim Dotcom, who finally lost a 12-year battle to be extradited to the US from his home in New Zealand in August. He plans to appeal.

    Yet Durov is the first of his generation of founders behind major social media platforms to face such severe consequences. What happens next will carry lessons for them all.

    Bastien Le Querrec, legal officer at French digital freedom group La Quadrature du Net, does not defend Telegram’s lack of moderation. But he is concerned that the case against Durov reflects the huge pressure both social media and messaging apps are under right now to collaborate with law enforcement.

    “[The prosecutor] refers to a provision in French law that requires platforms to disclose any useful document that could allow law enforcement to do interception of communication,” he says. “To our knowledge, it’s the first time that a platform, whatever its size, would be prosecuted [in France] because it refused to disclose such documents. It’s a very worrying precedent.”

    Morgan Meaker

  • Telegram CEO Pavel Durov’s Arrest Linked to Sweeping Criminal Investigation

    French prosecutors gave preliminary information in a press release on Monday about the investigation into Telegram CEO Pavel Durov, who was arrested suddenly on Saturday at Paris’ Le Bourget airport. Durov has not yet been charged with any crime, but officials said that he is being held as part of an investigation “against person unnamed” and can be held in police custody until Wednesday.

    The investigation began on July 8 and involves wide-ranging allegations, including money laundering, violations of rules on importing and exporting encryption tools, refusal to cooperate with law enforcement, and “complicity” in drug trafficking and in the possession and distribution of child pornography, among other offenses.

    The investigation was initiated by “Section J3” cybercrime prosecutors and has involved collaboration with France’s Centre for the Fight against Cybercrime (C3N) and Anti-Fraud National Office (ONAF), according to the press release. “It is within this procedural framework in which Pavel Durov was questioned by the investigators,” Paris prosecutor Laure Beccuau wrote in the statement.

    Telegram did not respond to multiple requests for comment about the investigation but asserted in a statement posted to the company’s news channel on Sunday that Durov has “nothing to hide.”

    “Given the existence of several preliminary investigations in France concerning Telegram in relation to the protection of minors’ rights and in cooperation with other French investigation units—for instance, on cyber harassment—the arrest of Durov does not seem to me like a highly exceptional move,” says Cannelle Lavite, a French lawyer who specializes in free-speech matters.

    Lavite notes that Durov is a French citizen who was arrested in French territory with an arrest warrant issued by French judges. She adds that the list of charges involved in the investigation is “extensive,” a wide net that she says is not entirely surprising in the context of “France’s ambiguous legislative arsenal” meant to balance content moderation and free speech.

    Durov is a controversial figure for his leadership of Telegram, in large part because he has not typically cooperated with calls to moderate the platform’s content. In some ways, this has positioned him as a free-speech defender against government censorship, but it has also made Telegram a haven for hate speech, criminal activity, and abuse. Additionally, the platform is often billed as a secure communication tool, but much of it is open and accessible by default.

    “Telegram is not primarily an encrypted messenger; most people use it almost as a social network, and they’re not using any of its features that have end-to-end encryption,” says John Scott-Railton, senior researcher at Citizen Lab. “The implication there is that Telegram has a wide range of abilities and access to potentially do content moderation and respond to lawful requests. This puts Pavel Durov very much in the center of all kinds of potential governmental pressure.”

    On top of all of this, many researchers have questioned whether Telegram’s end-to-end encryption is durable when users do elect to enable it.

    French president Emmanuel Macron said in a social media post on Monday that “France is deeply committed to freedom of expression and communication … The arrest of the president of Telegram on French soil took place as part of an ongoing judicial investigation. It is in no way a political decision.”

    News of Durov’s arrest is fueling concerns, though, that the move could threaten Telegram’s stability and undermine the platform. The case seems poised, too, to have implications in long-standing debates around the world about social media moderation, government influence, and use of privacy-preserving end-to-end encryption.

    Lavite says the case certainly invokes debates about “the balance between the right to encrypted communication and free speech on the one hand, and users’ protection—content moderation—on the other hand.” But she notes that there is a lot of information about the investigation that is unknown and “a lot of blurry zones still.”

    On Monday afternoon, Telegram seemed to be receiving a download boost from the situation, moving from 18th to 8th place in Apple’s US App Store apps ranking. Global iOS downloads were up by 4 percent, and in France the app was number one in the App Store social network category and number three overall.

    Lily Hay Newman

  • Sensors can read your sweat and predict overheating. Here’s why privacy advocates care

    On a hot summer day in Oak Ridge, Tennessee, dozens of men removed pipes, asbestos and hazardous waste while working to decontaminate a nuclear facility and prepare it for demolition.

    Dressed in head-to-toe coveralls and fitted with respirators, the crew members toiling in a building without power had no obvious respite from the heat. Instead, they wore armbands that recorded their heart rates, movements and exertion levels for signs of heat stress.

    Stephanie Miller, a safety and health manager for a U.S. government contractor doing cleanup work at the Oak Ridge National Laboratory, watched a computer screen nearby. A color-coding system with little bubbles showing each worker’s physiological data alerted her if anyone was in danger of overheating.

    “Heat is one of the greatest risks that we have in this work, even though we deal with high radiation, hazardous chemicals and heavy metals,” Miller said.

    As the world experiences more record high temperatures, employers are exploring wearable technologies to keep workers safe. New devices collect biometric data to estimate core body temperature – an elevated one is a symptom of heat exhaustion – and prompt workers to take cool-down breaks.

    The devices, which were originally developed for athletes, firefighters and military personnel, are getting adopted at a time when the Atlantic Council estimates heat-induced losses in labor productivity could cost the U.S. approximately $100 billion annually.

    This article is part of AP’s Be Well coverage, focusing on wellness, fitness, diet and mental health.

    But there are concerns about how the medical information collected on employees will be safeguarded. Some labor groups worry managers could use it to penalize people for taking needed breaks.

    “Any time you put any device on a worker, they’re very concerned about tracking, privacy, and how are you going to use this against me,” said Travis Parsons, director of occupational safety and health at the Laborers’ Health and Safety Fund of North America. “There’s a lot of exciting stuff out there, but there’s no guardrails around it.”

    VULNERABLE TO HEAT

    At the Tennessee cleanup site, the workers wearing heat stress monitors made by Atlanta company SlateSafety are employed by United Cleanup Oak Ridge. The company is a contractor of the U.S. Department of Energy, which has rules to prevent on-the-job overheating.

    But most U.S. workers lack protections from extreme heat because there are no federal regulations requiring them, and many vulnerable workers don’t speak up or seek medical attention. In July, the Biden administration proposed a rule to protect 36 million workers from heat-related illnesses.

    From 1992 to 2022, 986 workers died from heat exposure in the U.S., according to the Environmental Protection Agency. Experts suspect the number is higher because a coroner might not list heat as the cause of death if a sweltering roofer takes a fatal fall.

    Setting occupational safety standards can be tricky because individuals respond differently to heat. That’s where the makers of wearable devices hope to come in.

    HOW WEARABLE HEAT TECH WORKS

    Employers have observed workers for heat-related distress by checking their temperatures with thermometers, sometimes rectally. More recently, firefighters and military personnel swallowed thermometer capsules.

    “That just was not going to work in our work environment,” Rob Somers, global environment, health and safety director at consumer product company Perrigo, said.

    Instead, more than 100 employees at the company’s infant formula plants were outfitted with SlateSafety armbands. The devices estimate a wearer’s core body temperature, and a reading of 101.3 degrees Fahrenheit (38.5 Celsius) triggers an alert.
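
    The behavior described here and later in the piece (an alert at 101.3 degrees, then a notification once the wearer has cooled enough to resume work) maps onto a simple hysteresis loop. A hypothetical sketch: only the 101.3-degree alert threshold comes from the article; the resume threshold, names, and structure are invented for illustration:

```python
# Hypothetical sketch of the threshold alerting described in the article.
# Only the 101.3 F alert threshold comes from the article; the resume
# threshold and everything else here is invented for illustration.
ALERT_F = 101.3   # estimated core temperature that triggers an alert
RESUME_F = 100.0  # assumed cooled-down threshold (invented)

def heat_state(prev_alerting: bool, core_temp_f: float) -> bool:
    """Return True while the worker should still be resting.

    Uses hysteresis: an alert starts at or above ALERT_F and clears
    only once the estimate falls back below RESUME_F.
    """
    if core_temp_f >= ALERT_F:
        return True
    if prev_alerting and core_temp_f >= RESUME_F:
        return True
    return False

# Simulated stream of core-temperature estimates from an armband:
readings = [99.8, 100.9, 101.4, 101.0, 100.2, 99.9]
alerting = False
states = []
for temp in readings:
    alerting = heat_state(alerting, temp)
    states.append(alerting)
print(states)  # [False, False, True, True, True, False]
```

    The hysteresis gap keeps the alert from flapping on and off while the temperature estimate hovers near the threshold.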

    Another SlateSafety customer is a Cardinal Glass factory in Wisconsin, where four masons maintain a furnace that reaches 3,000 degrees Fahrenheit (about 1,650 Celsius).

    “They’re right up against the face of the wall. So it’s them and fire,” Jeff Bechel, the company’s safety manager, said.

    Cardinal Glass paid $5,000 for five armbands, software and air-monitoring hardware. Bechel thinks the investment will pay off; an employee’s two heat-related emergency room visits cost the company $15,000.

    Another wearable, made by Massachusetts company Epicore Biosystems, analyzes sweat to determine when workers are at risk of dehydration and overheating.

    “Until a few years ago, you just sort of wiped (sweat) off with a towel,” CEO Rooz Ghaffari said. “Turns out there’s all this information packed away that we’ve been missing.”

    Research has shown some devices successfully predict core body temperature in controlled environments, but their accuracy remains unproven in dynamic workplaces, according to experts. A 2022 research review said factors such as age, gender and ambient humidity make it challenging to reliably gauge body temperature with the technology.

    The United Cleanup Oak Ridge workers swathed in protective gear can get sweaty even before they begin demolition. Managers see dozens of sensor alerts daily.

    Laborer Xavier Allison, 33, was removing heavy pieces of ductwork during a recent heat wave when his device vibrated. Since he was working with radioactive materials and asbestos, he couldn’t walk outside to rest without going through a decontamination process, so he spent about 15 minutes in a nearby room which was just as hot.

    “You just sit by yourself and do your best to cool off,” Allison said.

    The armband notifies workers when they’ve cooled down enough to resume work.

    “Ever since we implemented it, we have seen a significant decrease in the number of people who need to get medical attention,” Miller said.

    COLLECTING PERSONAL DATA

    United Cleanup Oak Ridge uses the sensor data and an annual medical exam to determine work assignments, Miller said. After noticing patterns, the company sent a few employees to see their personal physicians, who found heart issues the employees hadn’t known about, she said.

    At Perrigo, managers analyze the data to find people with multiple alerts and speak to them to see if there’s “a reason why they’re not able to work in the environment,” Somers said. The information is organized by identification numbers, not names, when it goes into the company’s software system, he said.

    Companies keeping years of medical data raises concerns about privacy and whether bosses may use the information to kick an employee off a health plan or fire them, said Adam Schwartz, privacy litigation director at the Electronic Frontier Foundation.

    “The device could hurt, frankly, because you could raise your hand and say ‘I need a break,’ and the boss could say, ‘No, your heart rate is not elevated, go back to work,’” Schwartz said.

    To minimize such risks, employers should allow workers to opt in or out of wearing monitoring devices, only process strictly necessary data and delete the information within 24 hours, he said.

    Wearing such devices also may expose workers to unwanted marketing, Ikusei Misaka, a professor at Tokyo’s Musashino University, said.

    A PARTIAL SOLUTION

    The National Institute for Occupational Safety and Health advises employers to institute a plan to help workers adjust to hot conditions and to train them to recognize signs of heat-related illness and to administer first aid. Wearable devices can be part of efforts to reduce heat stress, but more work needs to be done to determine their accuracy, said Doug Trout, the agency’s medical officer.

    The technology also needs to be paired with access to breaks, shade and cool water, since many workers, especially in agriculture, fear retaliation for pausing to cool off or hydrate.

    “If they don’t have water to drink, and the time to do it, it doesn’t mean much,” Juanita Constible, senior advocate at the Natural Resources Defense Council, said. “It’s just something extra they have to carry when they’re in the hot fields.”

    ___

    This story corrects the spelling of Natural Resources Defense Council in last paragraph.

    ___

    Yuri Kageyama in Tokyo contributed to this report.

  • The Arrest of Pavel Durov Is a Reminder That Telegram Is Not Encrypted

    French police arrested Pavel Durov, the outspoken and sperm-obsessed co-founder of Telegram, over the weekend on charges related to the spread of illicit material on the platform. As news spread of Durov’s arrest, outlets and pundits repeated a description of Telegram that isn’t true: they called it an encrypted messaging app.

    Reuters called Telegram an “encrypted application.” In Axios, Telegram is an “encrypted messaging app.” CNN quoted failed presidential candidate Robert F. Kennedy Jr.’s description of Durov as the CEO of the “encrypted, uncensored Telegram platform.”

    Telegram is a lot of things—a great place for open-source intelligence about war, a possible vector for child sex abuse material, and a hub for various scams and crimes—but it is absolutely not an encrypted chat app. Does Telegram provide an encrypted chat option? Yes, but it’s not on by default and turning it on isn’t easy.

    The distinction between encrypted and unencrypted apps is important. WhatsApp and Signal, for example, are end-to-end encrypted out of the box. They’re not completely secure but they do a pretty good job of keeping your information safe provided someone doesn’t get hold of your devices.

    With Telegram, all bets are off. Telegram is mostly about big group chats and channels where people share information with their fans. DMs are not, by default, end-to-end encrypted. Users can enable what Telegram calls “secret chats” but must do so for every single conversation they want encrypted. This is never on by default and can’t be activated for group DMs or channels.

    As Johns Hopkins security researcher Matthew Green pointed out in his blog on the subject, it’s also a pain in the ass to activate. “The button that activates Telegram’s encryption feature is not visible from the main conversation pane, or from the home screen. To find it in the iOS app, I had to click at least four times—once to access the user’s profile, once to make a hidden menu pop up showing me the options, and a final time to ‘confirm’ that I wanted to use encryption. And even after this, I was not able to actually have an encrypted conversation, since Secret Chats only works if your conversation partner happens to be online when you do this,” Green said.

    Again, you have to do this for every single chat you want kept hidden. With Signal and WhatsApp, it’s on by default for every conversation.

    So why does the world seem to think of Telegram as an encrypted app? Durov constantly says that it is and attacks the encryption of other platforms. In a long post on his Telegram channel (which isn’t encrypted) in May, Durov accused the U.S. government of having a hand in the creation of Signal’s encryption systems.

    “It looks almost as if big tech in the U.S. is not allowed to build its own encryption protocols that would be independent of government interference,” he said. “Telegram is the only massively popular messaging service that allows everyone to make sure that all of its apps indeed use the same open source code that is published on Github. For the past ten years, Telegram Secret Chats have remained the only popular method of communication that is verifiably private.”

    Durov has been bashing Signal and WhatsApp for years. He pursued a similar line of attack in 2017. “The encryption of Signal (=WhatsApp, FB) was funded by the U.S. Government,” he said in a tweet back then. “I predict a backdoor will be found there within 5 years from now.”

    Durov is right that Signal did get government grants early in development. It also got them from a lot of other places, including the Knight Foundation and the Freedom of Press Foundation. It’s ludicrous to claim, without proof, that a $3 million grant early in development equates to any kind of control or backdoor. It barely makes a dent in the $50 million it costs to run Signal annually now. Signal’s encryption algorithms are also open source and numerous cybersecurity experts have vouched for their authenticity.

    More than five years later, Telegram still doesn’t have end-to-end encryption on by default, Signal is fixing its known security issues, and the French have arrested Durov on a host of charges related to the spread of illicit material on the platform.

    Matthew Gault

  • Sensors can read your sweat and predict overheating. Why privacy advocates care

    Sensors can read your sweat and predict overheating. Why privacy advocates care

    [ad_1]

    On a hot summer day in Oak Ridge, Tennessee, dozens of men removed pipes, asbestos and hazardous waste while working to decontaminate a nuclear facility and prepare it for demolition.

    Dressed in head-to-toe coveralls and fitted with respirators, the crew members toiling in a building without power had no obvious respite from the heat. Instead, they wore armbands that recorded their heart rates, movements and exertion levels for signs of heat stress.

    Stephanie Miller, a safety and health manager for a U.S. government contractor doing cleanup work at the Oak Ridge National Laboratory, watched a computer screen nearby. A color-coding system with little bubbles showing each worker’s physiological data alerted her if anyone was in danger of overheating.

    “Heat is one of the greatest risks that we have in this work, even though we deal with high radiation, hazardous chemicals and heavy metals,” Miller said.

    As the world experiences more record high temperatures, employers are exploring wearable technologies to keep workers safe. New devices collect biometric data to estimate core body temperature – an elevated one is a symptom of heat exhaustion – and prompt workers to take cool-down breaks.

    The devices, which were originally developed for athletes, firefighters and military personnel, are getting adopted at a time when the Atlantic Council estimates heat-induced losses in labor productivity could cost the U.S. approximately $100 billion annually.

    But there are concerns about how the medical information collected on employees will be safeguarded. Some labor groups worry managers could use it to penalize people for taking needed breaks.

    “Any time you put any device on a worker, they’re very concerned about tracking, privacy, and how are you going to use this against me,” said Travis Parsons, director of occupational safety and health at the Laborers’ Health and Safety Fund of North America. “There’s a lot of exciting stuff out there, but there’s no guardrails around it.”

    At the Tennessee cleanup site, the workers wearing heat stress monitors made by Atlanta company SlateSafety are employed by United Cleanup Oak Ridge. The company is a contractor of the U.S. Department of Energy, which has rules to prevent on-the-job overheating.

    But most U.S. workers lack protections from extreme heat because there are no federal regulations requiring them, and many vulnerable workers don’t speak up or seek medical attention. In July, the Biden administration proposed a rule to protect 36 million workers from heat-related illnesses.

    From 1992 to 2022, 986 workers died from heat exposure in the U.S., according to the Environmental Protection Agency. Experts suspect the number is higher because a coroner might not list heat as the cause of death if a sweltering roofer takes a fatal fall.

    Setting occupational safety standards can be tricky because individuals respond differently to heat. That’s where the makers of wearable devices hope to come in.

    Employers have observed workers for heat-related distress by checking their temperatures with thermometers, sometimes rectally. More recently, firefighters and military personnel swallowed thermometer capsules.

    “That just was not going to work in our work environment,” Rob Somers, global environment, health and safety director at consumer product company Perrigo, said.

    Instead, more than 100 employees at the company’s infant formula plants were outfitted with SlateSafety armbands. The devices estimate a wearer’s core body temperature, and a reading of 101.3 degrees triggers an alert.

    Another SlateSafety customer is a Cardinal Glass factory in Wisconsin, where four masons maintain a furnace that reaches 3000 degrees Fahrenheit.

    “They’re right up against the face of the wall. So it’s them and fire,” Jeff Bechel, the company’s safety manager, said.

    Cardinal Glass paid $5,000 for five armbands, software and air-monitoring hardware. Bechel thinks the investment will pay off; an employee’s two heat-related emergency room visits cost the company $15,000.

    Another wearable, made by Massachusetts company Epicore Biosystems, analyzes sweat to determine when workers are at risk of dehydration and overheating.

    “Until a few years ago, you just sort of wiped (sweat) off with a towel,” CEO Rooz Ghaffari said. “Turns out there’s all this information packed away that we’ve been missing.”

    Research has shown some devices successfully predict core body temperature in controlled environments, but their accuracy remains unproven in dynamic workplaces, according to experts. A 2022 research review said factors such as age, gender and ambient humidity make it challenging to reliably gauge body temperature with the technology.

    The United Cleanup Oak Ridge workers swathed in protective gear can get sweaty even before they begin demolition. Managers see dozens of sensor alerts daily.

    Laborer Xavier Allison, 33, was removing heavy pieces of ductwork during a recent heat wave when his device vibrated. Since he was working with radioactive materials and asbestos, he couldn’t walk outside to rest without going through a decontamination process, so he spent about 15 minutes in a nearby room, which was just as hot.

    “You just sit by yourself and do your best to cool off,” Allison said.

    The armband notifies workers when they’ve cooled down enough to resume work.

    “Ever since we implemented it, we have seen a significant decrease in the number of people who need to get medical attention,” Miller said.

    United Cleanup Oak Ridge uses the sensor data and an annual medical exam to determine work assignments, Miller said. After noticing patterns, the company sent a few employees to see their personal physicians, who found heart issues the employees hadn’t known about, she said.

    At Perrigo, managers analyze the data to find people with multiple alerts and speak to them to see if there’s “a reason why they’re not able to work in the environment,” Somers said. The information is organized by identification numbers, not names, when it goes into the company’s software system, he said.

    Companies keeping years of medical data raises concerns about privacy, including whether bosses might use the information to kick an employee off a health plan or fire them, said Adam Schwartz, privacy litigation director at the Electronic Frontier Foundation.

    “The device could hurt, frankly, because you could raise your hand and say ‘I need a break,’ and the boss could say, ‘No, your heart rate is not elevated, go back to work,’” Schwartz said.

    To minimize such risks, employers should allow workers to opt in or out of wearing monitoring devices, only process strictly necessary data and delete the information within 24 hours, he said.
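    Schwartz's safeguards boil down to a simple retention rule: collect only what is strictly necessary and discard it within 24 hours. A hypothetical sketch of that rule; the record layout and function name are illustrative, not drawn from any vendor's product:

    ```python
    import datetime

    # Hypothetical 24-hour retention rule for wearable sensor readings.
    MAX_AGE = datetime.timedelta(hours=24)

    def purge_old_readings(readings, now):
        """Keep only readings recorded within the last 24 hours."""
        return [r for r in readings if now - r["timestamp"] <= MAX_AGE]

    now = datetime.datetime(2024, 8, 1, 12, 0)
    readings = [
        {"worker_id": "A17", "timestamp": now - datetime.timedelta(hours=1)},
        {"worker_id": "A17", "timestamp": now - datetime.timedelta(hours=30)},
    ]
    recent = purge_old_readings(readings, now)  # the 30-hour-old reading is dropped
    ```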

    Wearing such devices also may expose workers to unwanted marketing, Ikusei Misaka, a professor at Tokyo’s Musashino University, said.

    The National Institute for Occupational Safety and Health advises employers to institute a plan to help workers adjust to hot conditions and to train them to recognize signs of heat-related illness and to administer first aid. Wearable devices can be part of efforts to reduce heat stress, but more work needs to be done to determine their accuracy, said Doug Trout, the agency’s medical officer.

    The technology also needs to be paired with access to breaks, shade and cool water, since many workers, especially in agriculture, fear retaliation for pausing to cool off or hydrate.

    “If they don’t have water to drink, and the time to do it, it doesn’t mean much,” Juanita Constible, senior advocate at the Natural Resources Defense Council, said. “It’s just something extra they have to carry when they’re in the hot fields.”

    ___

    Yuri Kageyama in Tokyo contributed to this report.

    [ad_2]

    Source link

  • The US Navy Has Run Out of Pants

    The US Navy Has Run Out of Pants

    [ad_1]

    The United States Defense Department has ideas about a dramatic strategy for defending Taiwan against a Chinese military offensive that would involve deploying an “unmanned hellscape” consisting of thousands of drones buzzing around the island nation. Meanwhile, the US National Institute of Standards and Technology announced a red-team hacking competition this week with the AI ethics nonprofit Humane Intelligence to find flaws and biases in generative AI systems.

    WIRED took a closer look at the Telegram channel and website known as Deep State that uses public data and secret intelligence to power its live-tracker map of Ukraine’s evolving front line. Protesters went to Citi Field in New York on Wednesday to raise awareness about the serious privacy risks of deploying facial recognition systems at sporting venues. The technology has increasingly been implemented at stadiums and arenas across the country with little oversight. And Amazon Web Services updated its instructions for how customers should implement authentication in its Application Load Balancer, after researchers found an implementation issue that they say could expose misconfigured web apps.

    But wait, there’s more! Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

    US Navy officials confirmed to Military.com this week that pants for the standard Navy Working Uniform (NWU) are out of stock at Navy Exchanges and are in perilously low supply across the sea service’s distribution channels. The Navy’s Exchange Service Command is “experiencing severe shortages of NWU trousers” both in stores and online, according to spokesperson Courtney Williams. Sailors have been noticing out-of-stock notifications online, which state that pants are “not available for purchase in any size.” Williams said that current stock around the world is at 13 percent and that the top priority right now is providing pants to new recruits at Recruit Training Command in Illinois, the Naval Academy Preparatory School in Rhode Island, and the officer training schools.

    The shortage seems to have resulted from issues with the Defense Logistics Agency’s pants pipeline. Military.com reports that signs currently inside Navy Exchanges say the shortage is “due to Defense Logistics Agency vendor issues.” Williams said the Command has “been in communication with DLA on a timeline for the uniform’s production and supply chain.”

    Mikia Muhammad, a spokesperson for the Defense Logistics Agency, told Military.com that the first pants restocks are scheduled for October, but these supplies will go to recruits and training programs. She said that Navy exchanges should expect “full support” beginning in January.

    A joint statement on Monday by the FBI, the Office of the Director of National Intelligence, and the Cybersecurity and Infrastructure Security Agency formally accused Iran of conducting a hack-and-leak operation against Donald Trump’s presidential campaign. Trump himself had accused Iran in a social media post on August 10, following a report from Microsoft on August 9 about Iranian hackers targeting US political campaigns. The Iranian government denies the accusation.

    “The [Intelligence Community] is confident that the Iranians have through social engineering and other efforts sought access to individuals with direct access to the presidential campaigns of both political parties,” the US agencies wrote. “Such activity, including thefts and disclosures, are intended to influence the US election process.”

    Politico reported on August 10 that Iran had breached the Trump campaign, and an entity calling itself “Robert” had contacted the publication offering alleged stolen documents. The same entity also contacted The New York Times and The Washington Post hawking similar documents.

    The popular flight-tracking service FlightAware said this week that a “configuration error” in its systems exposed personal customer data, including names, email addresses, and even some Social Security numbers. The company discovered the exposure on July 25 but said in a breach notification to the attorney general of California that the situation may date as far back as January 2021. The company is mandating that all affected users reset their account passwords.

    The company said in its public statement that the exposed data includes “user ID, password, and email address. Depending on the information you provided, the information may also have included your full name, billing address, shipping address, IP address, social media accounts, telephone numbers, year of birth, last four digits of your credit card number, information about aircraft owned, industry, title, pilot status (yes/no), and your account activity (such as flights viewed and comments posted).” It also said in its disclosure to California, “Additionally, our investigation has revealed that your Social Security Number may have been exposed.”

    Since European law enforcement agencies hacked the end-to-end encrypted phone company Sky in 2021, the communications they compromised have been used as evidence in numerous EU investigations and criminal cases. But a review of court records by 404 Media and Court Watch showed this week that US agencies have also been leaning on the trove of roughly half a billion chat messages. US law enforcement has used the data in multiple drug-trafficking prosecutions, particularly to pursue alleged smugglers who transport cocaine with commercial ships and speedboats.

    [ad_2]

    Lily Hay Newman

    Source link

  • Stadiums Are Embracing Face Recognition. Privacy Advocates Say They Should Stick to Sports

    Stadiums Are Embracing Face Recognition. Privacy Advocates Say They Should Stick to Sports

    [ad_1]

    Thousands of people lined up outside Citi Field in Queens, New York, on Wednesday to watch the Mets face off with the Orioles. But outside the ticketing booth, a handful of protesters handed out flyers. They were there to protest a recent Major League Baseball program, one that’s increasingly common in professional sports: using facial recognition on fans.

    Facial recognition companies and their customers argue that these systems save time, and therefore money, by shortening lines at stadium entrances. However, skeptics argue that the surveillance tools are never totally secure, make it easier for police to get information about fans, and fuel “mission creep” where surveillance technology becomes more common or even required.

    The MLB’s facial recognition program, dubbed Go-Ahead Entry, lets participating fans use a separate security line, usually shorter than the other queues. Fans download the MLB Ballpark app, submit a selfie, and have their face matched at an in-person camera kiosk at a stadium’s entrance.
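    Express-entry systems of this kind generally work by comparing a face embedding computed from the enrollment selfie against one computed at the kiosk. The sketch below uses toy vectors and a made-up threshold to show the shape of that comparison; it is not MLB's or Wicket's implementation.

    ```python
    import math

    # Toy face-verification sketch: compare an enrolled selfie embedding with a
    # kiosk capture via cosine similarity. Real systems compute embeddings with
    # a neural network; the vectors and threshold here are purely illustrative.
    MATCH_THRESHOLD = 0.9

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    enrolled = {"fan123": [0.1, 0.9, 0.4]}  # embedding from the selfie submitted in the app

    def verify_at_kiosk(fan_id, kiosk_embedding):
        """Return True if the kiosk capture matches the fan's enrolled embedding."""
        stored = enrolled.get(fan_id)
        return stored is not None and cosine_similarity(stored, kiosk_embedding) >= MATCH_THRESHOLD
    ```

    The privacy stakes follow directly from this design: the enrollment embeddings have to be stored somewhere, which is exactly the database that protesters worry could be breached or handed to police.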

    Six MLB teams are participating in Go-Ahead Entry: the Philadelphia Phillies, Cincinnati Reds, Houston Astros, Kansas City Royals, San Francisco Giants, and Washington Nationals.

    Some MLB teams, including the Mets, have their own facial recognition programs for express entry. The Mets have been using the facial recognition company Wicket for their Mets Entry Express program since 2021. The Cleveland Guardians, similarly, have been using technology from the company Clear at their ballpark, Progressive Field, since 2019.

    Neither the Mets, MLB, nor Wicket immediately responded to WIRED’s requests for comment.

    The National Football League has also started using Wicket facial recognition for express entry. NFL spokesperson Brian McCarthy said in an X post that the league-wide program, at least currently, is only available to “team/game-day personnel, vendors, and media”—not fans. The Cleveland Browns and Tennessee Titans, however, do have facial recognition entry systems that fans can use. (The news of the NFL’s expanded use of face recognition still caused confusion on Facebook and X, where some people thought facial recognition would be required at the stadiums for all 32 NFL teams.)

    At Citi Field on Wednesday, the Mets Entry Express line was scarcely used, with perhaps five people passing through every five minutes; there was never a queue. The main security lines, though longer by comparison, took only about five minutes.

    The protesters at Citi Field represented some of the 11 organizations that cosigned an open letter arguing against the use of facial recognition systems at stadiums, including Fight for the Future, the Electronic Privacy Information Center, and Amnesty International. The letter argues that “not only does facial recognition pose unprecedented threats to people’s privacy and safety, it’s also completely unnecessary.” The activists outside Citi Field on Wednesday passed out flyers to passersby with information about Go-Ahead Entry, declaring in all caps, “WE CALL FOUL ON FACIAL RECOGNITION AT SPORTING EVENTS.” This wasn’t their first protest on the issue; organizers with Fight for the Future also staged a protest last year at Citizens Bank Park, home of the Phillies, to agitate against its introduction of facial recognition.

    [ad_2]

    Caroline Haskins

    Source link

  • The Slow-Burn Nightmare of the National Public Data Breach

    The Slow-Burn Nightmare of the National Public Data Breach

    [ad_1]

    Data breaches are a seemingly endless scourge with no simple answer, but the breach in recent months of the background-check service National Public Data illustrates just how dangerous and intractable they have become. After four months of ambiguity, the situation is only now beginning to come into focus, with National Public Data finally acknowledging the breach on Monday just as a trove of the stolen data leaked publicly online.

    In April, a hacker known as USDoD, who has a reputation for selling stolen information, began hawking a trove of data on cybercriminal forums for $3.5 million, claiming it included 2.9 billion records and impacted “the entire population of USA, CA and UK.” As the weeks went on, samples of the data started cropping up as other actors and legitimate researchers worked to understand its source and validate the information. By early June, it was clear that at least some of the data was legitimate and contained information like names, emails, and physical addresses in various combinations.

    The data isn’t always accurate, but it seems to involve two troves of information: one that includes more than 100 million legitimate email addresses along with other information, and a second that includes Social Security numbers but no email addresses.

    “There appears to have been a data security incident that may have involved some of your personal information,” National Public Data wrote on Monday. “The incident is believed to have involved a third-party bad actor that was trying to hack into data in late December 2023, with potential leaks of certain data in April 2024 and summer 2024 … The information that was suspected of being breached contained name, email address, phone number, Social Security number, and mailing address(es).”

    The company says it has been cooperating with “law enforcement and governmental investigators.” NPD is facing potential class action lawsuits over the breach.

    “We have become desensitized to the never-ending leaks of personal data, but I would say there is a serious risk,” says security researcher Jeremiah Fowler, who has been following the situation with National Public Data. “It may not be immediate, and it could take years for one of the many criminal actors to successfully figure out how to use this information, but the bottom line is that a storm is coming.”

    When information is stolen from a single source, like Target customer data being stolen from Target, it’s relatively straightforward to establish that source. But when information is stolen from a data broker and the company doesn’t come forward about the incident, it’s much more complicated to determine whether the information is legitimate and where it came from. Typically, people whose data is compromised in a breach—the true victims—aren’t even aware that National Public Data held their information in the first place.

    In a blog post on Wednesday about the contents and provenance of the National Public Data trove, security researcher Troy Hunt wrote, “The only parties that know the truth are the anonymous threat actors passing the data around and the data aggregator … We’re left with 134M email addresses in public circulation and no clear origin or accountability.”

    [ad_2]

    Lily Hay Newman

    Source link

  • Utility company’s proposal to rat out hidden marijuana operations to police raises privacy concerns – Cannabis Business Executive – Cannabis and Marijuana industry news

    Utility company’s proposal to rat out hidden marijuana operations to police raises privacy concerns – Cannabis Business Executive – Cannabis and Marijuana industry news

    [ad_1]

    [ad_2]

    AggregatedNews

    Source link

  • Inside the Dark World of Doxing for Profit

    Inside the Dark World of Doxing for Profit

    [ad_1]

    Since the early 1990s, people have used doxing as a toxic way to strike digital revenge—stripping away someone’s anonymity by unmasking their identity online. But in recent years, the poisonous practice has taken on new life, with people being doxed and extorted for cryptocurrency and, in the most extreme cases, potentially facing physical violence.

    For the past year, security researcher Jacob Larsen—who was a victim of doxing around a decade ago when someone tried to extort him for a gaming account—has been monitoring doxing groups, observing the techniques used to unmask people, and interviewing prominent members of the doxing community. According to Larsen’s interviews, doxing has generated incomes of “well over six figures annually,” and methods include making fake law enforcement requests to obtain people’s data.

    “The primary target of doxing, particularly when it involves a physical extortion component, is for finance,” says Larsen, who leads an offensive security team at cybersecurity company CyberCX but conducted the doxing research in a personal capacity with the support of the company.

    Over several online chat sessions last August and September, Larsen interviewed two members of the doxing community: “Ego” and “Reiko.” While neither of their offline identities is publicly known, Ego is believed to have been a member of the five-person doxing group known as ViLe, and Reiko last year acted as an administrator of the biggest public doxing website, Doxbin, as well as being involved in other groups. (Two other ViLe members pleaded guilty to hacking and identity theft in June.) Larsen says both Ego and Reiko deleted their social media accounts since speaking with him, making it impossible for WIRED to speak with them independently.

    People can be doxed for a full range of reasons—from harassment in online gaming to inciting political violence. Doxing can “humiliate, harm, and reduce the informational autonomy” of targeted individuals, says Bree Anderson, a digital criminologist at Deakin University in Australia who has researched the subject with colleagues. There are direct “first-order” harms, such as risks to personal safety, and longer-term “second-order harms,” including anxiety around future disclosures of information, Anderson says.

    Larsen’s research mostly focused on those doxing for profit. Doxbin is central to many doxing efforts, with the website hosting more than 176,000 public and private doxes, which can contain names, social media details, Social Security numbers, home addresses, places of work, and similar details belonging to people’s family members. Larsen says he believes most of the doxing on Doxbin is driven by extortion activities, although there can be other motivations and doxing for notoriety. Once information is uploaded, Doxbin will not remove it unless it breaks the website’s terms of service.

    “It is your responsibility to uphold your privacy on the internet,” Reiko said in one of the conversations with Larsen, who has published the transcripts. Ego added: “It’s on the users to keep their online security tight, but let’s be real, no matter how careful you are, someone might still track you down.”

    Impersonating Police, Violence as a Service

    Being entirely anonymous online is almost impossible—and many people don’t try, often using their real names and personal details in online accounts and sharing information on social media. Doxing tactics to gather people’s details, some of which were detailed in charges against ViLe members, can include reusing common passwords to access accounts, accessing public and private databases, and social engineering to launch SIM swapping attacks. There are also more nefarious methods.

    Emergency data requests (EDRs) can also be abused, Larsen says. EDRs allow law enforcement officials to ask tech companies for people’s names and contact details without any court order when they believe there may be danger or risk to people’s lives. These requests are made directly to tech platforms, often through specific online portals, and broadly need to come from official law enforcement or government email addresses.

    [ad_2]

    Matt Burgess

    Source link

  • The Controversial Kids Online Safety Act Faces an Uncertain Future

    The Controversial Kids Online Safety Act Faces an Uncertain Future

    [ad_1]

    Despite passing the Senate nearly unanimously last week, the Kids Online Safety Act (KOSA) faces an uncertain future. Congress is now on a six-week recess, and reporting from Punchbowl News indicates that the House Republican leadership may not prioritize bringing the bill to the floor for a vote when legislators return.

    In response to Punchbowl’s reporting, Senate Majority Leader Chuck Schumer released a statement saying, “Just one week ago, Speaker Johnson said that he’d like to get KOSA done. I hope that hasn’t changed. Letting KOSA and [the Children and Teens’ Online Protection Act] collect dust in the House would be an awful mistake and a gut punch—a gut punch to these brave, wonderful parents who have worked so hard to reach this point.” The bill has also received support from vice president and Democratic presidential candidate Kamala Harris.

    But the bill created a massive divide among the digital rights and tech accountability community. If passed, the legislation would require online platforms to block users under 18 from seeing certain types of content that the government considers harmful.

    Proponents of the measure, which included the Tech Oversight Project, a nonprofit focused on tech accountability through antitrust legislation, saw the bill as a meaningful step toward holding tech companies accountable for the way their products impact children.

    “Too many young people, parents, and families have experienced the dire consequences that result from social media companies’ greed,” said Sacha Haworth, executive director of the Tech Oversight Project, in a statement in June. “The accountability KOSA would provide for these families is long overdue.”

    Others, like the nonprofit digital rights organization the Center for Democracy and Technology, said that, if enacted, the law could be used to prevent young users from accessing critical information about topics like sexual health and LGBTQ+ issues. This meant that some organizations that regularly lobby to hold Silicon Valley accountable found themselves siding with tech companies and their lobbyists in trying to kill the bill.

    “KOSA is not ready for a floor vote,” said Aliya Bhatia, policy analyst with the Center for Democracy and Technology’s Free Expression Project, in a statement in July. “In its current form, KOSA can still be misused to target marginalized communities and politically sensitive information.”

    Evan Greer, director of the nonprofit advocacy group Fight for the Future, which opposed the bill, tells WIRED that KOSA and legislation like it “divides our coalition” while allowing tech companies to “keep getting away with murder and avoiding regulation.”

    “This was never really about protecting kids,” Greer says. “It was sort of about lawmakers wanting to say that they’re protecting kids, and that doesn’t actually help kids.” Instead of legislators focusing on the “flawed” legislation, Greer says that Congress could have spent that same time and energy on antitrust-focused legislation like the American Innovation and Choice Online Act and the Open App Markets Act, or on the American Privacy Rights Act.

    “When our coalition is divided in fighting each other, we’re going to get rolled every time by Big Tech,” she says.

    Meanwhile, Linda Yaccarino, CEO of X, has said that she supports KOSA, as has the Center for Countering Digital Hate, a tech accountability nonprofit that was sued by X last year for exposing hate speech on its platform.

    Although the House Republican leadership’s decision may signal the beginning of the end of KOSA itself, Gautam Hans, an associate law professor at Cornell University, says that “given the bipartisan interest in enacting this law, I suspect other proposals will follow—with hopefully more extensive safeguards against potential censorship by the state.”

    [ad_2]

    Vittoria Elliott

    Source link