ReportWire


  • Amazon’s Ring Wants to Wash Away Your Surveillance Concerns With Lost Puppies

    Ring has been getting plenty of critical press lately, with concerns over local police and federal law enforcement potentially gaining access to sensitive surveillance camera footage across the country. Anti-ICE activists have been calling for a boycott over Ring’s announcement that it would be cooperating with Flock Safety, which has built a nationwide surveillance network used by police to track license plates. And it’s not like any of these concerns are really new. Ring has gotten heat over privacy concerns for most of its existence, though there’s renewed interest in how surveillance tech is being used in 2026 as federal agents terrorize cities like Minneapolis, threatening anyone who isn’t white with deportation and executing observers in the streets.

    It’s against this backdrop of bad PR that Ring, a subsidiary of Amazon, has expanded a feature that helps people find their lost dogs. And while the company credits the feature with helping find roughly one dog per day, a laudable achievement, no doubt celebrated by pet owners across the country, it comes at a time when every American is trying to weigh the pros and cons of blanketing the globe with cameras watching our every move.

    “Ring has expanded Search Party for Dogs, an AI-powered community feature that enables your outdoor Ring cameras to help reunite lost dogs with their families, to anyone in the U.S. who needs help finding their lost pup,” Amazon said in a press release posted online Monday.

    The Search Party feature allows Ring users to put out an alert to neighbors within the Ring ecosystem when their dog has gone missing, similar to existing apps like PawBoost. And neighbors can opt in to have their own camera on the lookout for any dog that might look similar. The feature has been expanded to allow people without Ring cameras to download the app and post their missing dog as well.

    Everyone can get behind the idea of helping find lost dogs. But the feature feels like a PR move that pulls attention from the threat of omnipresent surveillance in an ostensibly free society: the fact that every American’s device can be turned against them in an instant. If you don’t like it, well, I guess you like lost dogs.

    Ring says federal law enforcement is not given access to the features that let authorities request footage from Ring users. The company explains that local police must make a relatively narrow request for footage, limited to a specific geographic area and a 12-hour time window. Police also need to provide an investigation number and explain what kind of crime they’re investigating, details that users can review for themselves if they’re trying to decide whether to provide their own footage to the cops. A spokesperson for Ring told Gizmodo on Monday that they hadn’t seen any requests related to immigration, and that if the company found a local police department surreptitiously providing an agency like ICE with security footage, it would cut off that department’s access.

    “Ring has no partnership with ICE, does not give ICE videos, feeds, or back-end access, and does not share video with them,” Ring spokesperson Emma Daniels told Gizmodo in a statement.

    But those safeguards might be cold comfort in a political environment where the U.S. federal government doesn’t seem bound by any rules. A judge in Minnesota recently noted that ICE violated nearly 100 court orders in the state during January alone.

    Authorities can also get footage directly from Ring through a judicial warrant, and the company told Gizmodo that an administrative warrant isn’t sufficient.

    “Like all companies, Ring may receive legally valid and binding demands for information from law enforcement, such as search warrants, subpoenas, or court orders,” said Daniels. “We do not disclose customer information unless required to do so by law, or in rare emergency situations when there is an imminent danger of death or serious physical injury. Outside of that legal process, customers control which videos are shared with law enforcement.”

    Judicial warrants are issued by real judges who are part of the judicial branch, as opposed to immigration “judges,” who are housed under the executive branch and the U.S. Department of Justice. The distinction matters because administrative warrants aren’t sufficient to demand entry into a private residence. However, the New York Times reported last week that ICE has told its agents that administrative warrants are enough to go storming into any house they like.

    All of which is to say that when the rules are breaking down, it’s important to pay attention to what private individuals and companies do in the face of tyranny. Will Ring really pull the plug if ICE tries to abuse its power or gain access to footage through a local police department? We don’t really know. And as we all get used to being constantly on video thanks to a combination of state surveillance and private cameras, it makes sense that a company like Ring would want to highlight the positives of our global panopticon.

    One positive? It’s easier to help your neighbors find Fido. Unfortunately, it might also help the feds find your neighbor.

    Matt Novak


  • Here’s the tech powering ICE’s deportation crackdown | TechCrunch

    President Donald Trump said he would make cracking down on immigration one of his flagship policies during his second term in the White House, promising an unprecedented number of deportations.

    A year in, data shows that deportations by Immigration and Customs Enforcement (ICE) and Customs and Border Protection have surpassed 350,000 people.

    ICE has taken center stage in Trump’s mass removal campaign, raiding homes, workplaces, and public parks in search of undocumented people, prompting widespread protests and resistance from communities across the United States. 

    ICE uses several technologies to identify and surveil individuals. Homeland Security has also used the shadow of Trump’s deportations to challenge long-standing legal norms, including forcibly entering homes to arrest people without a judicial warrant, a move that legal experts say violates the Fourth Amendment protections against unreasonable searches and seizures. 

    Here are some of the technologies that ICE is relying on.

    Cell-site simulators

    ICE uses a technology known as cell-site simulators to snoop on cellphones. These surveillance devices, as the name suggests, are designed to mimic a cellphone tower, tricking nearby phones into connecting to them. Once that happens, the law enforcement authorities operating the simulator can locate and identify the phones in its vicinity, and potentially intercept calls, text messages, and internet traffic.

    Cell-site simulators are also known as “stingrays,” after the brand name of one of the earliest versions of the technology, made by U.S. defense contractor Harris (now L3Harris); or “IMSI catchers,” after the unique subscriber identifier they capture from nearby phones, which law enforcement can use to identify a phone’s owner.

    In the last two years, ICE has signed contracts for more than $1.5 million with a company called TechOps Specialty Vehicles (TOSV), which produces customized vans for law enforcement. 

    A contract worth more than $800,000, dated May 8, 2025, said TOSV will provide “Cell Site Simulator (CSS) Vehicles to support the Homeland Security Technical Operations program.”

    TOSV president Jon Brianas told TechCrunch that the company does not manufacture the cell-site simulators, but rather integrates them “into our overall design of the vehicle.” 

    Cell-site simulators have long been controversial for several reasons.  

    These devices are designed to trick all nearby phones into connecting to them, which means that by design they gather the data of many innocent people. Authorities have also sometimes deployed them without first obtaining a warrant.

    Authorities have also tried to keep their use of the technology secret in court, withholding information and even accepting plea deals or dropping cases rather than disclose details about their use of cell-site simulators. In a 2019 court case in Baltimore, it was revealed that prosecutors were instructed to drop cases rather than violate a non-disclosure agreement with the company that makes the devices.

    Facial recognition

    Clearview AI is perhaps the most well-known facial-recognition company today. For years, the company promised to be able to identify any face by searching through a large database of photos it had scraped from the internet. 

    On Monday, 404 Media reported that ICE has signed a contract with the company to support its law enforcement arm Homeland Security Investigations (HSI), “with capabilities of identifying victims and offenders in child sexual exploitation cases and assaults against law enforcement officers.” 

    According to a government procurement database, the contract signed last week is worth $3.75 million. 

    ICE has had other contracts with Clearview AI in the last couple of years. In September 2024, the agency purchased “forensic software” from the company, a deal worth $1.1 million. The year before, ICE paid Clearview AI nearly $800,000 for “facial recognition enterprise licenses.”

    Clearview AI did not respond to a request for comment. 

    ICE is also using a facial recognition app called Mobile Fortify, which federal agents use to identify people on the street. The app checks a person’s face against some 200 million photos, with much of the data sourced from state driver’s license databases.

    Paragon phone spyware

    Contact Us

    Do you have more information about ICE and the technology it uses? We would love to learn how this affects you. From a non-work device, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram and Keybase @lorenzofb, or email. You also can contact TechCrunch via SecureDrop.

    In September 2024, ICE signed a contract worth $2 million with Israeli spyware maker Paragon Solutions. Almost immediately, the Biden administration issued a “stop work order,” putting the contract under review to make sure it complied with an executive order on the government’s use of commercial spyware. 

    Because of that order, for nearly a year, the contract remained in limbo. Then, last week, the Trump administration lifted the stop work order, effectively reactivating the contract.

    At this point, the status of Paragon’s relationship with ICE in practice is unclear.  

    The records entry from last week said that the contract with Paragon is for “a fully configured proprietary solution including license, hardware, warranty, maintenance, and training.” Practically speaking, unless the hardware installation and training were done last year, it may take some time for ICE to have Paragon’s system up and running.

    It’s also unclear if the spyware will be used by ICE or HSI, an agency whose investigations are not limited to immigration, but also cover online child sexual exploitation, human trafficking, financial fraud, and more.

    Paragon has long tried to portray itself as an “ethical” and responsible spyware maker, and now has to decide if it’s ethical to work with Trump’s ICE. A lot has happened to Paragon in the last year. In December, American private equity giant AE Industrial purchased Paragon, with a plan to merge it with cybersecurity company RedLattice, according to Israeli tech news site Calcalist.

    In a sign that the merger may have taken place, when TechCrunch reached out to Paragon for comment on the reactivation of the ICE contract last week, we were referred to RedLattice’s new vice president of marketing and communications Jennifer Iras. 

    RedLattice’s Iras did not respond to a request for comment for this article, nor for last week’s article.

    In the last few months, Paragon has been ensnared in a spyware scandal in Italy, where the government has been accused of spying on journalists and immigration activists. In response, Paragon cut ties with Italy’s intelligence agencies. 

    Phone hacking and unlocking technology

    In mid-September, ICE’s law enforcement arm Homeland Security Investigations signed a contract with Magnet Forensics for $3 million.

    This contract is specifically for software licenses so that HSI agents can “recover digital evidence, process multiple devices,” and “generate forensic reports,” according to the contract description.

    Magnet is the current maker of the phone hacking and unlocking devices known as Graykey. These devices essentially let law enforcement agents connect a locked phone, unlock it, and access the data inside.

    Magnet Forensics, which merged with Graykey maker Grayshift in 2023, did not respond to a request for comment.

    Cellphone location data 

    At the end of September, 404 Media reported that ICE bought access to an “all-in-one” surveillance tool that allows the agency to search through databases of historical cellphone location data, as well as social media information.

    The tool appears to be made up of two products, Tangles and Webloc, which are made by a company called Penlink. One of the tools promises to leverage “a proprietary data platform to compile, process, and validate billions of daily location signals from hundreds of millions of mobile devices, providing both forensic and predictive analytics,” according to a redacted contract found by 404 Media.

    The redacted contract does not identify which of the two tools makes that promise, but given the description, it’s likely Webloc. Forbes previously cited a case study that said Webloc can search a given location to “monitor trends of mobile devices that have given data at those locations and how often they have been there.”

    This type of cellphone location data is harvested by companies around the world using software development kits (SDKs) embedded in ordinary smartphone apps, or through an online advertising process called real-time bidding (RTB), in which companies bid in real time to place an ad on a user’s screen based on their demographic or location data. The latter process gives ad tech companies that personal data as a by-product.

    Once collected, this mass of location data is sold to data brokers, who in turn sell it to government agencies. Thanks to this layered process, authorities have used this type of data without a warrant, simply by purchasing access to it.

    The other tool, Tangles, is an “AI-powered open-source intelligence” tool that automates “the search and analysis of data from the open, deep, and the dark web,” according to Penlink’s official site.  

    Forbes reported in September that ICE spent $5 million on Penlink’s two tools.  

    Penlink did not respond to a request for comment.  

    License plate readers

    ICE relies on automated license plate reader (ALPR) companies to follow drivers across a large swath of the U.S., tracking where people go and when.

    ICE also leans on its connections with local law enforcement agencies, which have contracts with ALPR providers, like surveillance company Flock Safety, to obtain immigration data through the back door. Flock is one of the largest ALPR providers, with over 40,000 license plate scanners around the United States, and it is only getting larger through partnerships with other companies, such as video surveillance company Ring.

    Efforts by ICE to informally request data from local law enforcement have prompted some police departments to cut off federal agencies’ access.

    Border Patrol runs its own surveillance network of ALPR cameras, the Associated Press reported.

    Data broker LexisNexis

    For years, ICE has used the legal research and public records data broker LexisNexis to support its investigations.

    In 2022, two non-profits obtained documents via Freedom of Information Act requests, which revealed that ICE performed more than 1.2 million searches over seven months using a tool called Accurint Virtual Crime Center. ICE used the tool to check the background information of migrants.   

    A year later, The Intercept revealed that ICE was using LexisNexis to detect suspicious activity and investigate migrants before they even committed a crime, a program that a critic said enabled “mass surveillance.”

    According to public records, LexisNexis currently provides ICE “with a law enforcement investigative database subscription (LEIDS) which allows access to public records and commercial data to support criminal investigations.” 

    This year, ICE has paid $4.7 million to subscribe to the service. 

    LexisNexis spokesperson Jennifer Richman told TechCrunch that ICE has used the company’s “data and analytics solutions for decades, across several administrations.”

    “Our commitment is to support the responsible and ethical use of data, in full compliance with laws and regulations, and for the protection of all residents of the United States,” said Richman, who added that LexisNexis “partners with more than 7,500 federal, state, local, tribal, and territorial agencies across the United States to advance public safety and security.” 

    Surveillance giant Palantir

    Data analytics and surveillance technology giant Palantir has signed several contracts with ICE in the last year. The biggest contract, worth $18.5 million and signed in September 2024, is for a database system called “Investigative Case Management,” or ICM.

    The contract for ICM goes back to 2022, when Palantir signed a $95.9 million deal with ICE. The Peter Thiel-founded company’s relationship with ICE dates back to the early 2010s. 

    Earlier this year, 404 Media, which has reported extensively on the technology powering Trump’s deportation efforts, and particularly Palantir’s relationship with ICE, revealed details of how the ICM database works. The tech news site reported that it saw a recent version of the database, which allows ICE to filter people based on their immigration status, physical characteristics, criminal affiliation, location data, and more. 

    According to 404 Media, a source familiar with the database said it is made up of “tables upon tables” of data, and that it can build reports showing, for example, “people who are on a specific type of visa who came into the country at a specific port of entry, who came from a specific country, and who have a specific hair color (or any number of hundreds of data points).”

    The tool, and Palantir’s relationship with ICE, was controversial enough that sources within the company leaked to 404 Media an internal wiki where Palantir justifies working with Trump’s ICE. 

    Palantir is also developing a tool called “ImmigrationOS,” according to a contract worth $30 million revealed by Business Insider.

    ImmigrationOS is said to be designed to streamline the “selection and apprehension operations of illegal aliens,” give “near real-time visibility” into self-deportations, and track people overstaying their visa, according to a document first reported on by Wired.

    First published on September 13, 2025 and updated on September 18, 2025 to include Magnet Forensics’ new contract, again on October 8, 2025 to include cell-site simulators and location data, and again on January 26, 2026 to include license plate readers.

    Lorenzo Franceschi-Bicchierai, Zack Whittaker


  • ‘The Most Dangerous Negro’: 3 Essential Reads on the FBI’s Assessment of MLK’s Radical Views and Allies

    Rev. Martin Luther King Jr. relaxes at home in May 1956 in Montgomery, Alabama. Michael Ochs Archives/Getty Images

    Howard Manly, The Conversation

    Left out of GOP debates about “the weaponization” of the federal government is the use of the FBI to spy on civil rights leaders for most of the 20th century.

    Martin Luther King Jr. was one of the targets.

    As secret FBI documents were declassified, The Conversation U.S. published several articles looking at the details that emerged about King’s personal life and how the FBI in 1963 deemed him “the most dangerous Negro.”

    1. The radicalism of MLK

    As a historian of religion and civil rights, University of Colorado Colorado Springs Professor Paul Harvey writes that while King has come to be revered as a hero who led a nonviolent struggle to build a color-blind society, the true radicalism of MLK’s beliefs remains underappreciated.

    “The civil saint portrayed nowadays was,” Harvey writes, “by the end of his life, a social and economic radical, who argued forcefully for the necessity of economic justice in the pursuit of racial equality.”

    2. The threat of being called a communist

    Jason Miller, a North Carolina State University English professor, details the delicate balance that King was forced to strike between some of his radical allies and the Kennedy and Johnson administrations.

    As the leading figure in the civil rights movement, Miller explains, King could not be perceived as a communist in order to maintain his national popularity.

    As a result, King did not overtly invoke the name of one of the Harlem Renaissance’s leading poets, Langston Hughes, a man the FBI suspected of being a communist sympathizer.

    But Miller’s research reveals the shrewdness with which King still managed to use Hughes’ poetry in his speeches and sermons, most notably in King’s “I Have a Dream” speech which echoes Hughes’ poem “I Dream a World.”

    “By channeling Hughes’ voice, King was able to elevate the subversive words of a poet that the powerful thought they had silenced,” Miller writes.

    3. ‘We must mark him now’

    As a historian who has done substantial research regarding FBI files on the Black freedom movement, UCLA labor studies lecturer Trevor Griffey points out that from 1910 to the 1970s, the FBI treated civil rights activists as either disloyal “subversives” or “dupes” of foreign agents.

    Screenshot from a 1966 FBI memo regarding the surveillance of Martin Luther King Jr. National Archives via Trevor Griffey

    As King ascended in prominence in the late 1950s and 1960s, it was inevitable that the FBI would investigate him.

    In fact, two days after King delivered his famous “I Have a Dream” speech at the 1963 March on Washington for Jobs and Freedom, William Sullivan, the FBI’s director of intelligence, wrote: “We must mark him now, if we have not done so before, as the most dangerous Negro of the future in this Nation from the standpoint of communism, the Negro and national security.”

    Editor’s note: This story is a roundup of articles from The Conversation’s archives.

    Howard Manly, Outreach Editor, The Conversation

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    The Conversation


  • You’ve been targeted by government spyware. Now what? | TechCrunch

    It was a normal day when Jay Gibson got an unexpected notification on his iPhone. “Apple detected a targeted mercenary spyware attack against your iPhone,” the message read.

    Ironically, Gibson used to work at companies that developed exactly the kind of spyware that could trigger such a notification. Still, he was shocked to receive a notification on his own phone. He called his father, turned off his phone and put it away, and went to buy a new one.

    “I was panicking,” he told TechCrunch. “It was a mess. It was a huge mess.”  

    Gibson is just one of an ever-increasing number of people who are receiving notifications from companies like Apple, Google, and WhatsApp, all of which send similar warnings about spyware attacks to their users. Tech companies are increasingly proactive in alerting their users when they become targets of government hackers, and in particular those who use spyware made by companies such as Intellexa, NSO Group, and Paragon Solutions.

    But while Apple, Google, and WhatsApp send the alerts, they don’t get involved in what happens next. The companies direct their users to people who can help, at which point they step away.

    This is what happens when you receive one of these warnings. 

    Warning 

    You have received a notification that you were the target of government hackers. Now what? 

    First of all, take it seriously. These companies have reams of telemetry data about their users and what happens on both their devices and their online accounts. These tech giants have security teams that have been hunting, studying, and analyzing this type of malicious activity for years. If they think you have been targeted, they are probably right. 

    It’s important to note that in the case of Apple and WhatsApp notifications, receiving one doesn’t mean you were necessarily hacked. It’s possible that the hacking attempt failed, but they can still tell you that someone tried. 

    A photo showing the text of a threat notification sent by Apple to a suspected spyware victim (Image: Omar Marques/Getty Images)

    In the case of Google, it’s most likely that the company blocked the attack, and is telling you so you can go into your account and make sure you have multi-factor authentication on (ideally a physical security key or passkey), and also turn on its Advanced Protection Program, which also requires a security key and adds other layers of security to your Google account. In other words, Google will tell you how to better protect yourself in the future. 

    In the Apple ecosystem, you should turn on Lockdown Mode, which switches on a series of security features that makes it more difficult for hackers to target your Apple devices. Apple has long claimed that it has never seen a successful hack against a user with Lockdown Mode enabled, but no system is perfect. 

    Mohammed Al-Maskati, the director of Access Now’s Digital Security Helpline, a 24/7 global team of security experts who investigate spyware cases against members of civil society, shared with TechCrunch the advice that the helpline gives people who are concerned that they may be targeted with government spyware.

    This advice includes keeping your devices’ operating systems and apps up to date; switching on Apple’s Lockdown Mode, and Google’s Advanced Protection for accounts and for Android devices; being careful with suspicious links and attachments; restarting your phone regularly; and paying attention to changes in how your device functions.

    Contact Us

    Have you received a notification from Apple, Google, or WhatsApp about being targeted with spyware? Or do you have information about spyware makers? We would love to hear from you. From a non-work device, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram and Keybase @lorenzofb, or email.

    Reaching out for help

    What happens next depends on who you are. 

    There are open source and downloadable tools that anyone can use to detect suspected spyware attacks on their devices, though this requires a little technical knowledge. You can use the Mobile Verification Toolkit, or MVT, a tool that lets you look for forensic traces of an attack on your own, perhaps as a first step before seeking assistance.

    If you can’t, or don’t want to, use MVT, you can go straight to someone who can help. If you are a journalist, dissident, academic, or human rights activist, there are a handful of organizations that can help.

    You can turn to Access Now and its Digital Security Helpline. You can also contact Amnesty International, which has its own team of investigators and ample experience in these cases. Or, you can reach out to The Citizen Lab, a digital rights group at the University of Toronto, which has been investigating spyware abuses for almost 15 years. 

    If you are a journalist, Reporters Without Borders also has a digital security lab that offers to investigate suspected cases of hacking and surveillance. 

    People outside these categories, politicians or business executives, for example, will have to go elsewhere.

    If you work for a large company or political party, you likely have a competent (hopefully!) security team you can go straight to. They may not have the specific knowledge to investigate in-depth, but in that case they probably know who to turn to, even if Access Now, Amnesty, and Citizen Lab cannot help those outside of civil society. 

    Otherwise, there aren’t many places executives or politicians can turn to, but we have asked around and found the ones below. We can’t fully vouch for or endorse any of these organizations, but they come suggested by people we trust, so they’re worth pointing out.

    Perhaps the most well known of these private security companies is iVerify, which makes an app for Android and iOS, and also gives users an option to ask for an in-depth forensic investigation. 

    Matt Mitchell, a well-regarded security expert who’s been helping vulnerable populations protect themselves from surveillance, has a new startup, called Safety Sync Group, which offers this kind of service.

    Jessica Hyde, a forensic investigator with experience in the public and private sectors, has her own startup, Hexordia, which offers to investigate suspected hacks.

    Mobile cybersecurity company Lookout, which has experience analyzing government spyware from around the world, has an online form that allows people to reach out for help to investigate cyberattacks involving malware, device compromise, and more. The company’s threat intelligence and forensics teams may then get involved.  

    Then, there’s Costin Raiu, who heads TLPBLACK, a small team of security researchers who used to work at Kaspersky’s Global Research and Analysis Team, or GReAT. Raiu was the unit’s head when his team discovered sophisticated cyberattacks from elite government hacking teams from the United States, Russia, Iran, and other countries. Raiu told TechCrunch that people who suspect they’ve been hacked can email him directly.

    Investigation

    What happens next depends on who you go to for help. 

    Generally speaking, the organization you reach out to may want to do an initial forensic check by looking at a diagnostic report file that you can create on your device, which you can share with the investigators remotely. At this point, this doesn’t require you to hand over your device to anyone. 

    This first step may be able to detect signs of targeting or even infection. It may also turn up nothing. In both cases, the investigators may want to dig deeper, which will require you to send in a full backup of your device, or even your actual device. At that point, the investigators will do their work, which may take time because modern government spyware attempts to hide and delete its tracks, and will tell you what happened.

    Unfortunately, modern spyware may not leave any traces. The modus operandi these days, according to Hassan Selmi, who leads the incident response team at Access Now’s Digital Security Helpline, is a “smash and grab” strategy: once spyware infects the target device, it steals as much data as it can, and then tries to remove any trace of itself and uninstall. This is assumed to be the spyware makers’ way of protecting their product and hiding its activity from investigators and researchers.

    If you are a journalist, a dissident, an academic, or a human rights activist, the groups who help you may ask if you want to publicize the fact that you were attacked, but you’re not required to do so. They will be happy to help you without taking public credit for it. There may be good reasons to come forward, though: to denounce the fact that a government targeted you, which may have the side effect of warning others like you of the dangers of spyware; or to expose a spyware company by showing that its customers are abusing its technology.

    We hope you never get one of these notifications. But we also hope that, if you do, you find this guide useful. Stay safe out there.

    Lorenzo Franceschi-Bicchierai

    Source link

  • 20-year-old shot by deputies after opening fire during “homicide” investigation


    A 20-year-old was taken to the hospital after an Orange County deputy returned fire while serving a search.

    The sheriff’s office says deputies from the felony unit were stationed near the 2200 block of Buchanan Bay Circle around 9:40 p.m. Friday doing surveillance of a homicide suspect.

    Deputies were preparing to serve a DNA search warrant in a murder that happened earlier this week, when the suspect and a 20-year-old man exited the house.

    They say the 20-year-old opened fire at the deputies, hitting an unmarked vehicle, while the suspect tried to run back into the residence.

    A deputy returned fire, striking the 20-year-old shooter.

    Deputies rendered aid until paramedics were able to get to the scene and transport the man to the hospital, where he underwent surgery.

    Deputies say he will face charges for the shooting.

    The suspect in the homicide case was quickly detained and was questioned by detectives later Friday evening.

    No deputies were injured in this shooting.

    As is standard procedure, the deputy who fired his weapon is on temporary, paid administrative leave pending the initial FDLE review.

    Source link

  • National City said a surveillance tower was stopping prostitution — but now it’s broken with no fix in sight

    Trucks and motels line Roosevelt Avenue in National City. Image from Google Earth

    When a hit-and-run driver struck the National City Police Department’s mobile SkyWatch tower in mid-October, it did more than take the two-story tower out of service.

    It created an opening for pimps and prostitutes to retake the streets the tower once watched over.

    One afternoon earlier this week, multiple women in skimpy outfits and high heels moved around the area, a stretch so notorious as a go-to place to buy sex on the street that city officials installed the mobile police surveillance tower there in 2021, purchasing it for $220,000 through a FEMA grant.

    The women paced up and down Roosevelt between 4th and 5th Streets, a stretch scattered with motels. One set up camp in the middle of 5th Street.  Others waved and smiled at passing motorists. Yet another woman stood next to a car, leaning through the window talking to a male driver. 

    It was a clear scene of an open sex trade at 3:45 in the afternoon. 

    The SkyWatch tower, which Mayor Ron Morrison earlier that day credited for combatting the problem and turning a high-profile trafficking arrest into an exception rather than the rule, was nowhere in sight.

    But it has been out of commission since mid-October due to the hit-and-run accident.

    The city tried repairing it, but has been unable to because it is a “unique piece of equipment,” said National City Police Department Sergeant Paul Hernandez. He said they’re currently looking for a solution to put the tower back in service.

    When in action, it “is a rugged, highly reinforced mobile surveillance platform,” according to the manufacturer’s website, which says the tower is “rapidly deployable” and “provides a strategic perspective and deterrent.”

    “The damage was noticed on 10-13-25. It was taken out of service shortly after. Unfortunately, I do not have the exact date it was removed from the area,” Hernandez said.

    Morrison said prostitution fell “like a rock off a cliff” after the city installed the tower.

    He has not responded to follow-up questions about how the city is combatting the problem now that the tower is out of commission.

    National City Police Chief Alejandro Hernandez also did not respond to questions on whether the department has implemented any other actions in the Roosevelt area to deter prostitution in the weeks since the tower was removed.


    Source link

  • Vaping Is ‘Everywhere’ in Schools—Sparking a Bathroom Surveillance Boom

    It’s this creeping surveillance that gives some students pause, even those who told The 74 they otherwise support vape detectors in bathrooms. The possibility of unknown capabilities in the sensors is “very scary to me,” said Moledina, the Austin teen, who worries about a future where bathrooms come with cameras.

    “Just knowing that there is vape smoke in the bathroom doesn’t really help you because the administrators already know it’s happening, and just by knowing that it’s there isn’t going to help them find out who is doing it,” he said. “So my concern is that, at the end of the day, we’re going to end up having cameras in bathrooms, which is definitely not what we want.”

    Minneapolis educators have used surveillance cameras in conjunction with the sensors to identify students for vaping in the bathrooms, discipline logs show.

    In February, for example, a Roosevelt High School senior was suspended for a day based on accusations they hit a weed vape in the bathroom. Officials reviewed footage from a surveillance camera outside the bathroom and determined the student was “entering and exiting the bathroom during the timeframe that the detector went off.” They were searched, and administrators found “a marijuana vape, an empty glass jar with a weed smell and a baggie with weed shake in it.”

    That same month, educators referred a Camden High School student to a drug and alcohol counselor for “vaping in the single stall bathrooms.”

    “After I reviewed the camera it does show [a] student leaving out that same stall bathroom,” campus officials reported.

    Gutierrez, the 18-year-old from Arizona, said she quit vaping after she was suspended and now copes with depression through positive means like painting. What she didn’t do, however, was quit because she received help at school for the mental health challenges that led her to vape in the first place.

    She stopped vaping while she was suspended, she said, because she was away from her friends and lacked access. She was frightened into further compliance, Gutierrez recalled, by the online lessons depicting vaping as a gross, gooey purple monster that would poison her relationships.

    “Yes I stopped, but it wasn’t a good stop,” she said. “I didn’t get no support. I didn’t get no counseling. I stopped because I was scared.”

    Mark Keierleber

    Source link

  • Palantir’s CEO Disavows Surveillance Concerns, Thinks ‘Patriotism Will Make You Rich’

    Alex Karp, the CEO of defense contractor Palantir, has been on the offensive lately. The billionaire with a penchant for making off-the-cuff remarks has sought to tamp down ongoing criticism and doubts about his firm, which is not only playing a pivotal role in the current presidential administration but has been having a very good year, stock-wise.

    The most recent example of this took place on Thursday, when Karp appeared at the Yahoo Finance Invest Conference. There, he laid into critics who claimed that his company, which has been helping the Trump administration with shadowy missions at home and abroad, was overvalued. “By my reckoning, Palantir is one of the only companies where the average American bought—and the average sophisticated American sold,” Karp said.

    He also seemed to characterize his industry critics as leeches. “Should an enterprise be parasitic? Should the host be paying to make your company larger while getting no actual value?” he asked.

    Karp also defended against criticism that his company is making its money by helping the White House with its less savory activities—like helping Trump’s deportation machine or turbocharging domestic surveillance. “Not only was the patriotism right, the patriotism will make you rich,” Karp said.

    The rest of the interview was something of a confused burble of half-articulated thoughts that sounded a little bit as if ChatGPT had been crammed full of MAGA talking points and forced to expel them all at once. Topics included Karp’s belief in a national border and his position that discrimination against white males is wrong, etc. Edgy stuff.

    Why is Karp making so many media appearances lately? It’s unclear, but maybe it’s just about projecting a show of strength and letting his critics know he doesn’t scare easily. Palantir has been around for quite a long time, but it’s never been more powerful and, as a result, it’s also never been more prominent. Under the harsh spotlight of national attention, the company has come under new levels of scrutiny—from both the press and from industry critics.

    A strategy Karp seems to be employing to deal with all of this has been taking a page out of his buddy Elon Musk’s notebook and ginning up some viral infotainment for the masses. The viral clip is today’s version of bread and circuses, and if you can keep the court of public opinion entertained, then chances are everything will turn out all right in the end.

    For example, during a recent appearance on Sourcery, a tech podcast hosted by Molly O’Shea, Karp resorted to some rather juvenile antics to spur attention to his brand. Karp somehow got hold of a sword and started performatively thrusting it around in front of his young female interviewer. It’s not a chainsaw exactly, but, as far as sad attempts at virility from over-the-hill billionaires go, I suppose it will do.

    So far, Karp seems to be earning his braggadocio—and his firm remains unvanquished, despite ongoing incursions. During the Sourcery appearance, Karp noted that he’s “currently in a battle with short-sellers.” Michael Burry, the hedge fund manager and wealthy short-seller of The Big Short fame, recently made it known that he was betting against Karp’s company, as well as the whole AI industry. The Financial Times notes that Burry’s bet against Palantir and a smaller one against Nvidia were “particularly damaging for the companies” because Burry is popular amongst the “online retail investors who have helped to make Palantir one of the world’s best performing stocks.”

    At least when it comes to Burry, it seems that Palantir has notched a temporary victory. On Thursday, Burry began winding down his hedge fund, Scion Asset Management. “My estimation of value in securities is not now, and has not been for some time, in sync with the markets,” Burry said in a letter to investors.

    Karp has come out swinging against people like Burry, making it known that it was unwise to bet against him and his company. “When I hear short sellers attacking what I believe is clearly the most important software company in America — and therefore the world, in terms of our impact — simply to make money, and trying to call the AI revolution into question . . . [it] is super triggering to me,” Karp said.

    Lucas Ropek

    Source link

  • Palantir CEO slams ‘parasitic’ critics calling the tech a surveillance tool: ‘Not only is patriotism right, patriotism will make you rich’ | Fortune

    Palantir CEO Alex Karp is sick and tired of his critics. That much is clear. But during the Yahoo Finance Invest Conference Thursday, he escalated his counteroffensive, aimed squarely at analysts, journalists, and political commentators who have long attacked the company as a symbol of an encroaching surveillance state, or as overvalued.

    Karp’s message: They were wrong then, they’re wrong now, and they’ve cost everyday Americans real money.

    “How often have you been right in the past?” Karp said when asked why some analysts still insist Palantir’s valuation is too high. 

    He said he thinks negative commentary from traditional finance people—and “their minions,” the analysts—has repeatedly failed to grasp how the company operates, and failed to grasp what Palantir’s retail base saw years earlier. 

    “Do you know how much money you’ve robbed from people with your views on Palantir?” he asked those analysts, arguing those who rated the stock a sell at $6, $12, or $20 pushed regular Americans out of one of tech’s biggest winners, while institutions sat on the sidelines. 

    “By my reckoning, Palantir is one of the only companies where the average American bought—and the average sophisticated American sold,” Karp continued, tone incredulous. 

    That sort-of populist inversion sits at the core of Karp’s broader argument: The people who call Palantir a surveillance tool—his word for them is “parasitic”—understand neither the product nor the country that enabled it.

    “Should an enterprise be parasitic? Should the host be paying to make your company larger while getting no actual value?” he questioned, drawing a line between Palantir’s pitch and what he said he sees as the “woke-mind-virus” versions of enterprise software that generate fees without changing outcomes.

    Instead, Karp insists Palantir’s software is built for the welder, the truck driver, the factory technician, and the soldier—not the surveillance bureaucrat.

    He describes the company’s work as enabling “AI that actually works”: systems that improve routing for truck drivers, upgrade the capabilities of welders, help factory workers manage complex tasks, and give warfighters technology so advanced “our adversaries don’t want to fight with us.”

    That, he argues, is the opposite of a surveillance dragnet. It’s a national-security asset, part of the deeper American story. That’s what Palantir’s retail-heavy investor base understands: the country’s constitutional and technological system is uniquely powerful, and defending it isn’t just morally correct, it’s financially rewarded.

    “Not only was the patriotism right, the patriotism will make you rich,” he said, arguing Silicon Valley only listens to ideas when they make money. Palantir’s success, in his view, is proof the combination of American military strength and technological dominance—“chips to ontology, above and below”—remains unmatched worldwide.

    That, he believes, is what critics get wrong. While detractors warn Palantir fuels the surveillance state, Karp argues the company exists to prevent abuses of power—by making the U.S. so technologically dominant it rarely needs to project force.

    “Our project is to make America so strong we never fight,” he said. “That’s very different than being almost strong enough, so you always fight.”

    Karp savors the reversal: ‘broken-down car’ vs. ‘beautiful Tesla’

    Karp bitterly contrasted the fortunes of analysts who doubted the company with the retail investors who stuck with it.

    “Nothing makes me happier,” he said, than imagining “the bank executive…cruising along in their broken-down car,” watching a truck driver or welder—“someone who didn’t go to an elite school”—drive a “beautiful Tesla” paid for with Palantir gains.

    This wasn’t even a metaphor. Karp said he regularly meets everyday workers who “are now rich because of Palantir”—and the people who bet against the company have themselves become a kind of meme.

    Critics—especially civil-liberties groups—have for years accused Palantir of building analytics tools that enable government surveillance. Karp says these attacks rely on caricature, not fact.

    “Pure ideas don’t change the world,” he said. “Pure ideas backed by military strength and economic strength do.”

    Eva Roytburg

    Source link

  • DHS Kept Chicago Police Records for Months in Violation of Domestic Espionage Rules

    On November 21, 2023, field intelligence officers within the Department of Homeland Security quietly deleted a trove of Chicago Police Department records. It was not a routine purge.

    For seven months, the data—records that had been requested on roughly 900 Chicagoland residents—sat on a federal server in violation of a deletion order issued by an intelligence oversight body. A later inquiry found that nearly 800 files had been kept, which a subsequent report said breached rules designed to prevent domestic intelligence operations from targeting legal US residents. The records originated in a private exchange between DHS analysts and Chicago police, a test of how local intelligence might feed federal government watchlists. The idea was to see whether street-level data could surface undocumented gang members in airport queues and at border crossings. The experiment collapsed amid what government reports describe as a chain of mismanagement and oversight failures.

    Internal memos reviewed by WIRED reveal the dataset was first requested by a field officer in DHS’s Office of Intelligence & Analysis (I&A) in the summer of 2021. By then, Chicago’s gang data was already notorious for being riddled with contradictions and error. City inspectors had warned that police couldn’t vouch for its accuracy. Entries created by police included people purportedly born before 1901 and others who appeared to be infants. Some were labeled by police as gang members but not linked to any particular group.

    Police baked their own contempt into the data, listing people’s occupations as “SCUM BAG,” “TURD,” or simply “BLACK.” Neither arrest nor conviction was necessary to make the list.

    Prosecutors and police relied on the designations of alleged gang members in their filings and investigations. They shadowed defendants through bail hearings and into sentencing. For immigrants, it carried extra weight. Chicago’s sanctuary rules barred most data sharing with immigration officers, but a carve-out at the time for “known gang members” left open a back door. Over the course of a decade, immigration officers tapped into the database more than 32,000 times, records show.

    The I&A memos—first obtained by the Brennan Center for Justice at NYU through a public records request—show that what began inside DHS as a limited data-sharing experiment seems to have soon unraveled into a cascade of procedural lapses. The request for the Chicagoland data moved through layers of review with no clear owner, its legal safeguards overlooked or ignored. By the time the data landed on I&A’s server around April 2022, the field officer who had initiated the transfer had left their post. The experiment ultimately collapsed under its own paperwork. Signatures went missing, audits were never filed, and the deletion deadline slipped by unnoticed. The guardrails meant to keep intelligence work pointed outward—toward foreign threats, not Americans—simply failed.

    Faced with the lapse, I&A ultimately killed the project in November 2023, wiping the dataset and memorializing the breach in a formal report.

    Spencer Reynolds, a senior counsel at the Brennan Center, says the episode illustrates how federal intelligence officers can sidestep local sanctuary laws. “This intelligence office is a workaround to so-called sanctuary protections that limit cities like Chicago from direct cooperation with ICE,” he says. “Federal intelligence officers can access the data, package it up, and then hand it off to immigration enforcement, evading important policies to protect residents.”

    Dell Cameron

    Source link

  • How Wearable Tech Could Become ‘Big Brother’ in the Workplace

    Wearable tech continues to be one of the Next Big Things in technology innovation, thanks to devices many experts expect to replace the humble smartphone: AR and VR headsets, as well as other AI-powered gadgets. Wearables like fitness monitors and smartwatches are already part of some workplaces, used as tools for monitoring employees and providing data on everything from performance to well-being. But this sometimes controversial data collection carries some risks, as a new report highlights.

    A team of management researchers from the U.K.’s University of Surrey recently conducted a meta-analysis of previous studies on the benefits and risks of wearable worker-monitoring tech. They found that most workplaces that have deployed wearables use them to track employees’ well-being and health data. The devices were helpful for accurately tracking “sleep quality, stress markers, physical activity, and even team dynamics,” science news site Phys.org reported. That aligns with some of the ways devices like Fitbits and Apple Watches are promoted.

    But the way some businesses roll out these devices is problematic, the researchers said, since many of these efforts aren’t fully transparent, leaving employees guessing about what personal data their companies are collecting and why. Meanwhile, many businesses have inconsistent policies for analyzing collected employee data, and they may even store it insecurely. This risks making workers feel insecure and subject to “invasive surveillance,” Phys.org says. That level of explicit oversight can harm workplace culture.

    When used properly, these wearables, many of which are commercial off-the-shelf products, can warn HR departments in real time about potential problems. One good example is their potential to alert employers to “rising stress before burnout or to safety hazards before accidents,” wrote Dr. Sebastiano Massaro, a neuroscience lecturer and co-author of the study.

    But unless companies have “robust methodological and ethical guardrails,” there’s a risk of blurring the lines between “science and pseudoscience, between real support and dangerous surveillance,” Massaro worries. In their best uses, wearables can “help create safer, healthier, and more responsive and productive workplaces,” he thinks. Done badly, they could “normalize unnecessary monitoring and paradoxically increase workplace stress rather than reduce it.”

    Recently, Amazon revealed it was developing smart glasses (a little like Meta’s recently unveiled AR glasses) that the company says will help its delivery drivers “identify hazards, seamlessly navigate to customers’ doorsteps, and improve customer deliveries.” The goggles sound like powerful tech, melding “AI-powered sensing capabilities and computer vision” with cameras and a display so a driver can see “everything from navigation details to hazards to delivery tasks,” as well as spot the right packages in their truck at a delivery address. It’s plausible that these devices could speed up deliveries—a form of 21st-century optimization akin to the business-efficiency decision that means UPS delivery trucks almost never turn left.

    But Amazon’s product announcement immediately triggered ethical and privacy worries, both about the drivers’ well-being and about data collected outside the trucks, when drivers are at a delivery location, for example. Amazon, after all, has repeatedly been in the news over the way it surveils its workforce, including landing a 32-million-euro fine ($36 million) in France in 2024 for doing so excessively.

    How can you best apply this research for your own company?

    Offering your workers wearable tech can be presented positively — the devices have a certain social cachet, and if they help workers monitor their health and fitness for their own purposes (as well as for more workplace-directed reasons, like monitoring stress levels) then they can be seen as an attractive workplace perk. The data they collect can, if used responsibly, also help you avoid complex health issues like burnout.

    But if you do deploy tech like this, it’s important to be open and transparent about what data is being collected and why, and to be rigorous in protecting sensitive employee medical data. Otherwise you risk harming employee well-being and your company’s reputation.


    Kit Eaton

    Source link

  • CBP Searched a Record Number of Phones at the US Border Over the Past Year

    The recent spike in searches at the border has mostly been driven by an increase in the past six months. Between April and June, CBP searched 14,899 devices—which at the time marked a record high for any quarter of the year. However, the most recent figures show this increase has continued: Between July and September, there were 16,173 phones searched, the newly published CBP figures show.

    Over the past decade, there has been an uptick in the number of phone and electronics searches taking place at the border—with the increases taking place throughout multiple political administrations. Statistics published by the CBP show there were 8,503 searches in 2015. Since 2018, the number of yearly searches has risen from around 30,000 to more than 55,000 this year. The new figures are the first time searches have surpassed 50,000.

    CBP spokesperson Rhonda Lawson says that its most recent search numbers are “consistent with increases since 2021, and less than 0.01 percent” of travelers have devices searched. Lawson says searches can be conducted to “detect digital contraband, terrorism-related content, and information relevant to visitor admissibility.”

    “It may be helpful for travelers to know when they weigh the decision of what device to bring with them when traveling into the United States that searches of electronic personal devices are not new, the policy and procedures for searches have not changed, and that the likelihood of a search has not increased and remains exceedingly rare,” Lawson says.

    Of the 55,000 device searches that took place over the past 12 months, the vast majority (51,061) were basic searches, alongside 4,363 advanced device searches—a 3 percent increase over the 2024 fiscal year.

    Federal courts remain split on whether advanced phone searches require warrants. The answer can change with the airport. The Eleventh and Eighth Circuits allow suspicionless searches of phones, while the Fourth and Ninth require reasonable suspicion for advanced, forensic searches. Recent district-court decisions in New York go further, requiring probable cause.

    Several incidents involving tourists, including a French scientist whose phone was reportedly searched to discover whether he had criticized Trump, have shown how easily the intensified screening can slip into international controversy. In June, a 21-year-old Norwegian tourist was reportedly denied entry at Newark Liberty International Airport because his phone contained a now-famous meme mocking Vice President JD Vance—a small act of humor allegedly treated as grounds for expulsion.

    CBP disputes many of those accounts, but the impression abroad is clear: The US is becoming an increasingly hard—if not hostile—place to visit.

    Matt Burgess, Dell Cameron

    Source link

  • DHS Wants a Fleet of AI-Powered Surveillance Trucks

    The US Department of Homeland Security is seeking to develop a new mobile surveillance platform that fuses artificial intelligence, radar, high-powered cameras, and wireless networking into a single system, according to federal contracting records reviewed by WIRED. The technology would mount on 4×4 vehicles capable of reaching remote areas and transforming into rolling, autonomous observation towers, extending the reach of border surveillance far beyond its current fixed sites.

    The proposed system surfaced Friday after US Customs and Border Protection quietly published a pre-solicitation notice for what it’s calling a Modular Mobile Surveillance System, or M2S2. The listing includes draft technical documents, data requirements, and design objectives.

    DHS did not respond to a request for comment.

    If M2S2 performs as described, border patrol agents could park their vehicles, raise a telescoping mast, and within minutes start detecting motion several miles away. The system would rely heavily on so-called computer vision, a kind of “artificial intelligence” that allows machines to interpret visual data frame by frame and detect shapes, heat signatures, and movement patterns. Such algorithms—previously developed for use in war drones—are trained on thousands if not millions of images to distinguish between people, animals, and vehicles.

    The development of M2S2 comes amid the Trump administration’s sweeping crackdown on undocumented immigrants across the US. As part of this push, which has sparked widespread protests and condemnation for the brutal tactics used by immigration authorities, Congress boosted DHS’s discretionary budget authority to roughly $65 billion. The GOP’s “One Big Beautiful Bill” allocates over $160 billion for immigration enforcement and border measures—most of it directed to DHS—with the funds scheduled to be distributed over multiple years. The administration has sought to increase DHS funding by roughly 65 percent, proposing the largest expansion in the agency’s history to fund new border enforcement, detention capacity, and immigration surveillance initiatives.

    According to documents reviewed by WIRED, the system would pinpoint the locations of targeted objects on digital maps to within 250 feet of their true position (with a stretch goal of around 50 feet) and transmit that data across an app called TAK—a government-built tactical mapping platform developed by the US Defense Department to help troops coordinate movements and avoid friendly fire.

    DHS envisions two modes of operation: one with an agent on site and another where the trucks sit mostly unattended. In the latter case, the vehicle’s onboard AI would conduct the surveillance and send remote operators alerts when it detects activity. Missions are to be logged start to finish, with video, maps, and sensor data retained for a minimum of 15 days, locked against deletion “under any circumstances.”

    Dell Cameron

    Source link

  • Opinion | Xi Is Watching as Chinese Christians Pray

    Zion Church moved many of its services online. Beijing still arrested its pastor.

    Mindy Belz

    Source link

  • UW Study Looks to Question How Flock Security Camera Information Being Shared – KXL

    SEATTLE, Wash. — A new study by the University of Washington Center for Human Rights appears to show how Flock Safety cameras have in some cases been used improperly.

    It suggests there are cases in which federal agencies have been able, directly and indirectly, to gather information on private U.S. citizens and undocumented immigrants that they otherwise may not have been able to obtain. Springfield and Eugene, Oregon, have paused their use of the cameras, as have Auburn, Washington, and other cities in the Northwest.

    Oregon U.S. Senator Ron Wyden has been an outspoken critic of the use of the surveillance cameras while some local law enforcement agencies have praised them.


    Brett Reckamp

    Source link

  • Satellites Are Leaking the World’s Secrets: Calls, Texts, Military and Corporate Data

    That suggests anyone could set up similar hardware somewhere else in the world and likely obtain their own collection of sensitive information. After all, the researchers restricted their experiment to only off-the-shelf satellite hardware: a $185 satellite dish, a $140 roof mount with a $195 motor, and a $230 tuner card, totaling less than $800.

    “This was not NSA-level resources. This was DirecTV-user-level resources. The barrier to entry for this sort of attack is extremely low,” says Matt Blaze, a computer scientist and cryptographer at Georgetown University and law professor at Georgetown Law. “By the week after next, we will have hundreds or perhaps thousands of people, many of whom won’t tell us what they’re doing, replicating this work and seeing what they can find up there in the sky.”

    One of the only barriers to replicating their work, the researchers say, would likely be the hundreds of hours they spent on the roof adjusting their satellite. As for the in-depth, highly technical analysis of obscure data protocols they obtained, that may now be easier to replicate, too: The researchers are releasing their own open-source software tool for interpreting satellite data, also titled “Don’t Look Up,” on GitHub.

    The researchers’ work may, they acknowledge, enable others with less benevolent intentions to pull the same highly sensitive data from space. But they argue it will also push more of the owners of that satellite communications data to encrypt that data, to protect themselves and their customers. “As long as we’re on the side of finding things that are insecure and securing them, we feel very good about it,” says Schulman.

    There’s little doubt, they say, that intelligence agencies with vastly superior satellite receiver hardware have been analyzing the same unencrypted data for years. In fact, they point out that the US National Security Agency warned in a 2022 security advisory about the lack of encryption for satellite communications. At the same time, they assume that the NSA—and every other intelligence agency from Russia to China—has set up satellite dishes around the world to exploit that same lack of protection. (The NSA did not respond to WIRED’s request for comment).

    “If they aren’t already doing this,” jokes UCSD cryptography professor Nadia Heninger, who co-led the study, “then where are my tax dollars going?”

    Heninger compares their study’s revelation—the sheer scale of the unprotected satellite data available for the taking—to some of the revelations of Edward Snowden that showed how the NSA and Britain’s GCHQ were obtaining telecom and internet data on an enormous scale, often by secretly tapping directly into communications infrastructure.

    “The threat model that everybody had in mind was that we need to be encrypting everything, because there are governments that are tapping undersea fiber optic cables or coercing telecom companies into letting them have access to the data,” Heninger says. “And now what we’re seeing is, this same kind of data is just being broadcast to a large fraction of the planet.”

    Andy Greenberg, Matt Burgess

    Source link

  • Court of Appeals sides with ShotSpotter critics in Detroit, finding city ‘repeatedly’ violated transparency law – Detroit Metro Times

    A state appeals court handed a partial victory to critics of Detroit’s controversial ShotSpotter surveillance system, ruling that city officials violated a transparency ordinance when they approved contracts for the gunshot detection technology without properly notifying the public.

    In a published decision released Thursday, a divided Michigan Court of Appeals panel found that the Detroit Police Department failed to comply with the city’s Community Input Over Government Surveillance (CIOGS) ordinance, which requires the public release of a detailed report on surveillance technology at least 14 days before it is discussed by the City Council. The court reversed part of a lower court ruling that had dismissed the case and sent it back for further proceedings.

    “The City of Detroit uses surveillance technology to identify the location of gunshots in certain precincts,” Judge Brock Swartzle wrote for the majority. “Given the inherent invasiveness of surveillance technology, the City adopted specific procedural requirements that must be met when procuring such technology. These requirements were not met here.”

    Critics argue ShotSpotter, which relies on a network of sensors to detect gunshots, is unproven, invasive, and racially discriminatory. The city counters that it saves lives and helps police find suspects more quickly.

    The ruling means the Wayne County Circuit Court must revisit whether the city’s ShotSpotter contracts are valid and whether the plaintiffs — five Detroiters and the James and Grace Lee Boggs Center to Nurture Community Leadership — are entitled to any relief.

    The appeals court found that the Detroit Police Department did not post the legally required Surveillance Technology Specification Report (STSR) until September 28, 2022, after several key council committee meetings had already taken place and just one day after the council voted to renew an existing $1.5 million contract with ShotSpotter. Two weeks later, the council approved a $7 million expansion.

    “Thus, the record confirms that defendants repeatedly violated the requirement under § 17-5-452(c) that the STSR ‘be made available on the City’s website at least 14 days prior to holding any of the hearings or meetings,’” the court wrote. “The trial court erred in concluding otherwise when it granted summary disposition in favor of defendants.”

    The panel also rejected the city’s argument that it was exempt from the ordinance because ShotSpotter had already been in use before the law took effect in 2021. The court ruled that the so-called “grandfather clause” only applies to surveillance technology that was previously approved under the ordinance, and the ShotSpotter system was not.

    The lawsuit was filed in 2022 by the Detroit Justice Center, Sugar Law Center for Economic and Social Justice, and attorney Jack Schulz. They argued that the city violated its own ordinance by failing to be transparent and involve the community in approving the technology.

    “Much congrats to each of our clients for standing up in this case on behalf of all residents of the city,” John Philo, executive and legal director for Sugar Law Center, said. “While more limited in scope than hoped for, the court’s decision is an important recognition that citizens’ oversight and input ordinances matter and cannot simply be ignored by government officials.”

    ShotSpotter operates through a network of microphones that detect loud noises and notify police of suspected gunfire. Detroit police have praised it as a tool that helps officers respond to shootings faster.

    “ShotSpotter has been an invaluable investigative tool that is helping to make our city safer,” Detroit Police Department Assistant Chief Franklin Hayes said in a statement to Metro Times. “In areas where ShotSpotter is deployed, we have seen significant reductions in gunfire. So far this year, we have recovered 244 firearms and made 131 arrests as a result of ShotSpotter cases.”

    Hayes said the technology also “helps save lives.”

    “Just this week, DPD responded to a ShotSpotter alert of multiple shots fired, for which no 911 calls were placed,” Hayes said. “When officers arrived, they found a critically injured victim who likely would have succumbed to his injuries at the scene had ShotSpotter technology not alerted DPD to the incident and to its location.” 

    Community advocates and civil rights groups argue that the system sends officers charging into predominantly Black neighborhoods on high alert, even though the majority of alerts turn out to be false alarms. An analysis by Chicago’s Office of Inspector General found that ShotSpotter alerts “rarely produce evidence of a gun-related crime” and led police to increase stop-and-frisk encounters in areas already over-policed. About 89% of ShotSpotter alerts in Chicago resulted in no evidence of gunfire or any crime.

    Opponents also note that several cities — including San Antonio, Charlotte, Trenton, Troy, and Grand Rapids — have canceled or rejected ShotSpotter contracts amid concerns about its reliability and cost.

    The appeals court remanded the Detroit case to Wayne County Circuit Court to determine potential remedies and address the city’s defenses, including claims that the lawsuit is moot because the contracts have already been implemented.

    “With surveillance and similar technology ever encroaching into every recess of modern life, procedural safeguards cannot be ignored or downplayed by government actors as mere technicalities,” the court wrote. “To ensure that technology serves the people, and not the other way around, strict compliance with procedural safeguards like the CIOGS Ordinance may well be needed. And, unfortunately, such compliance was lacking here.”

    In a statement, Detroit Corporation Counsel Conrad Mallett noted that the court’s opinion does not impact the use of ShotSpotter in the city.

    “The Court of Appeals opinion does not void the use of this technology, which is still in place,” Mallett said. “In its opinion the Court of Appeals recognized the City of Detroit’s defenses to the lawsuit that may result in another dismissal by the trial court.”


    Steve Neavling

    Source link

  • California Lets Residents Opt-Out of a Ton of Data Collection on the Web

    This week, California Governor Gavin Newsom signed into law new legislation that will let Californians easily opt out of digital data collection through a single browser setting that applies across all websites. The move promises to make the state’s digital privacy protections that much easier to take advantage of, and could set a new precedent for future privacy regulations.

    In a press release shared this week, Newsom’s office announced the passage of two new laws, SB 361 and AB 566, that will strengthen the state’s landmark California Consumer Privacy Act. The CCPA, created in 2018, notably gave state residents the ability to request that companies share with them—but also delete—information that had been collected about them as part of their business practices.

    The passage of the CCPA was a big deal, but, as is often the case with landmark legislation, its execution has left something to be desired. While the CCPA did, indeed, force companies—for the first time—to give web users a certain amount of control over their data, the mechanisms by which that control can be exerted have always been quite imperfect.

    In other words, loopholes in the law have created a situation in which every single time a web user visits a website, they are forced to go through the annoying process of selecting their privacy preferences. In some cases, companies have capitalized on this process by making it confusing or difficult to navigate, thus tilting the scales in their favor.

    Now, however, due to the passage of AB 566, Californians should—theoretically—be able to opt out of all data collection via a simple portal made available through their web browser. The legislation “helps consumers exercise their opt-out rights” under the CCPA by “requiring browsers to include a setting to send websites an opt-out preference signal to enable Californians to opt out of third-party sales of their data at one time instead of on each individual website,” Newsom’s press release states.
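    The opt-out preference signal AB 566 mandates resembles the existing Global Privacy Control (GPC) signal, which participating browsers already send as a `Sec-GPC: 1` request header. As a minimal sketch (assuming a GPC-style header; the exact signal California regulators will require has not yet been specified), a site backend could honor it like this:

    ```python
    def wants_opt_out(headers: dict) -> bool:
        """Return True if the request carries an opt-out preference signal.

        Under the GPC draft, the header is `Sec-GPC` with the literal value
        "1"; any other value, or its absence, means no signal was sent.
        """
        # HTTP header field names are case-insensitive, so normalize keys.
        normalized = {k.lower(): v for k, v in headers.items()}
        return normalized.get("sec-gpc") == "1"

    def handle_request(headers: dict) -> str:
        # When the signal is present, suppress third-party data sales/sharing
        # for this visitor instead of prompting them on every page.
        if wants_opt_out(headers):
            return "tracking-disabled"
        return "default"

    print(handle_request({"Sec-GPC": "1"}))  # → tracking-disabled
    print(handle_request({}))                # → default
    ```

    Checking the signal once per request, server-side, is exactly what removes the per-site consent prompt the article describes.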

    This is a great step towards giving web users more control over their data, although—given that the bill was just passed into law—it’s not yet clear how the regulation will manifest for consumers. Hopefully, it will be as easy as checking a box in your browser.

    The legislation puts California miles ahead of the rest of the country when it comes to digital privacy enforcement. In recent years, the state has also taken strides towards improving its ability to police and punish companies that infringe upon this law. Currently, enforcement is handled by the state Attorney General’s office. This year, a number of companies—including a tractor company and a health information publisher—were fined upwards of a million dollars for alleged CCPA violations. In 2020, however, the state also approved the creation of a new agency, the California Privacy Protection Agency (or CPPA—which has been dubbed the nation’s first “privacy police”), which is tasked with administering and implementing the CCPA.

    Also signed into law this week was SB 361, which is designed to strengthen California’s already existing data broker registry. The law will give consumers “more information about the personal information collected by data brokers and who may have access to consumers’ data,” Newsom’s office said.

    Lucas Ropek

    Source link

  • ICE Wants to Build Out a 24/7 Social Media Surveillance Team

    United States immigration authorities are moving to dramatically expand their social media surveillance, with plans to hire nearly 30 contractors to sift through posts, photos, and messages—raw material to be transformed into intelligence for deportation raids and arrests.

    Federal contracting records reviewed by WIRED show that the agency is seeking private vendors to run a multiyear surveillance program out of two of its little-known targeting centers. The program envisions stationing nearly 30 private analysts at Immigration and Customs Enforcement facilities in Vermont and Southern California. Their job: Scour Facebook, TikTok, Instagram, YouTube, and other platforms, converting posts and profiles into fresh leads for enforcement raids.

    The initiative is still at the request-for-information stage, a step agencies use to gauge interest from contractors before an official bidding process. But draft planning documents show the scheme is ambitious: ICE wants a contractor capable of staffing the centers around the clock, constantly processing cases on tight deadlines, and supplying the agency with the latest and greatest subscription-based surveillance software.

    The facilities at the heart of this plan are two of ICE’s three targeting centers, responsible for producing leads that feed directly into the agency’s enforcement operations. The National Criminal Analysis and Targeting Center sits in Williston, Vermont. It handles cases across much of the eastern US. The Pacific Enforcement Response Center, based in Santa Ana, California, oversees the western region and is designed to run 24 hours a day, seven days a week.

    Internal planning documents show that each site would be staffed with a mix of senior analysts, shift leads, and rank-and-file researchers. Vermont would see a team of a dozen contractors, including a program manager and 10 analysts. California would host a larger, nonstop watch floor with 16 staff. At all times, at least one senior analyst and three researchers would be on duty at the Santa Ana site.

    Together, these teams would operate as intelligence arms of ICE’s Enforcement and Removal Operations division. They would receive tips and incoming cases, research individuals online, and package the results into dossiers that field offices could use to plan arrests.

    Dell Cameron

    Source link

  • Microsoft blocks Israel’s use of its data centers for mass surveillance of Palestinians

    Microsoft has ended access to its data centers for a unit of the Israeli military that helped power a massive surveillance operation against Palestinian civilians, according to a report by The Guardian. The company says that the country’s spy agency has violated its terms of service.

    This surveillance system collected Palestinian phone calls every day in Gaza and the West Bank. The massive trove of data has been stored via Microsoft’s Azure cloud platform, but the company just informed Israel’s spy agency that this practice will no longer be acceptable.

    Microsoft’s vice-chair and president, Brad Smith, alerted staff of the move in an email, writing that the company had “ceased and disabled a set of services to a unit within the Israel ministry of defense.” He went on to suggest that this included cutting off access to cloud storage and some AI services.

    “We do not provide technology to facilitate mass surveillance of civilians,” he continued. “We have applied this principle in every country around the world, and we have insisted on it repeatedly for more than two decades.”

    Microsoft came to this decision after conducting an external inquiry to review the spy agency’s use of its Azure cloud platform. It also comes amid pressure from both employees and investors for the company to examine its relationship with Israel as it relates to the military offensive in Gaza.

    This reportedly started back in 2021, when Microsoft CEO Satya Nadella allegedly okayed the storage effort personally after meeting with a commander from Israel’s elite military surveillance corps, Unit 8200. Nadella reportedly gave the country a customized and segregated area within the Azure platform to store these phone calls, all without knowledge or consent from Palestinians.

    While conflict has existed between Israel and Palestinian groups for decades, these platforms were built out a full two years before the most recent escalation in violence, which began October 7, 2023. The mantra when building out the project was to record “a million calls an hour.”

    Leaked Microsoft files suggested that the lion’s share of this data was being stored in Azure facilities in the Netherlands, but Israel allegedly moved it after Microsoft started its initial investigation. The Guardian has reported that Unit 8200 planned on transferring the data to the Amazon Web Services cloud platform. We have contacted Amazon to ask if it has accepted this gigantic trove of personal data.

    Lawrence Bonk

    Source link