ReportWire

Tag: Facial recognition

  • British Transport Police to launch trial of Live Facial Recognition today – Tech Digest

    The British Transport Police (BTP) has officially launched a trial of Live Facial Recognition (LFR) technology.

    The operation got underway today (Wednesday, 11 February) at London Bridge railway station as part of a scheduled six-month pilot program.

    The initiative is designed to test how the technology performs within a busy railway environment. Cameras positioned at the station will scan the faces of passengers and compare them against a specific watchlist of individuals wanted for serious criminal offences.

    If the system identifies a potential match, it generates an alert for a human officer to review.
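    The match-and-alert flow described above can be sketched roughly as follows. This is a minimal illustration, not BTP's actual system: the function names, toy embeddings, and similarity threshold are all assumptions.

```python
import math

THRESHOLD = 0.6  # hypothetical cutoff; real deployments tune this carefully

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def check_against_watchlist(face_embedding, watchlist):
    """Return the best-matching watchlist ID above threshold, or None.

    A hit only generates an alert for a human officer to review;
    None means no match, and the image is discarded.
    """
    best_id, best_score = None, THRESHOLD
    for person_id, reference in watchlist.items():
        score = cosine_similarity(face_embedding, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy 4-dimensional "embeddings"; real face models output hundreds of dimensions.
watchlist = {"W-001": [1.0, 0.0, 0.0, 0.0]}
alert = check_against_watchlist([0.9, 0.1, 0.0, 0.0], watchlist)  # matches "W-001"
```

    The key design point the force emphasizes is that the system stops at the alert: the decision to approach anyone remains with an officer.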

    Chief Superintendent Chris Casey, the senior officer overseeing the project, emphasized the safety goals of the trial. “I want to reiterate that this is a trial of the technology to assess how it performs in a railway setting,” he said.

    “The initiative follows a significant amount of research and planning, and forms part of BTP’s commitment to using innovative technology to make the railways a hostile place for individuals wanted for serious criminal offences, helping us keep the public safe.”

    Addressing privacy concerns, the force confirmed that images of anyone not on the authorized database are deleted immediately and permanently. Furthermore, passengers who do not wish to participate in the scan have been informed that alternative routes are available to bypass the recognition zone.

    The project team has collaborated with several industry partners, including Network Rail and the Department for Transport, to prepare for the rollout.

    The BTP has committed to transparency throughout the pilot, promising to publish the dates and locations of all future LFR operations online before they take place. “The cameras work by scanning faces and comparing them to a watchlist of offenders wanted for serious offences,” Casey added.

    “If there’s a match, then the system generates an alert. An officer will review it and carry out further checks to determine if the person is a suspect and if they need to take further action.”

    Members of the public are being encouraged to provide feedback on the technology via QR codes displayed on posters at the station.

    https://www.btp.police.uk/news/btp/news/appeals/british-transport-policeto-launchtrial-of-live-facial-recognition-technologytoday-london/


    For latest tech stories go to TechDigest.tv


    Discover more from Tech Digest

    Subscribe to get the latest posts sent to your email.

    Chris Price

  • Here’s the tech powering ICE’s deportation crackdown | TechCrunch


    President Donald Trump said he would make countering immigration one of his flagship policies during his second term in the White House, promising an unprecedented number of deportations.

    A year in, data shows that deportations by Immigration and Customs Enforcement (ICE) and Customs and Border Protection have surpassed 350,000 people.

    ICE has taken center stage in Trump’s mass removal campaign, raiding homes, workplaces, and public parks in search of undocumented people, prompting widespread protests and resistance from communities across the United States. 

    ICE uses several technologies to identify and surveil individuals. Homeland Security has also used the shadow of Trump’s deportations to challenge long-standing legal norms, including forcibly entering homes to arrest people without a judicial warrant, a move that legal experts say violates the Fourth Amendment protections against unreasonable searches and seizures. 

    Here are some of the technologies that ICE is relying on.

    Cell-site simulators

    ICE uses a technology known as cell-site simulators to snoop on cellphones. These surveillance devices, as the name suggests, are designed to appear as a cellphone tower, tricking nearby phones into connecting to them. Once that happens, the law enforcement authorities using the cell-site simulators can locate and identify the phones in their vicinity, and potentially intercept calls, text messages, and internet traffic.

    Cell-site simulators are also known as “stingrays,” based on the brand name of one of the earliest versions of the technology, which was made by U.S. defense contractor Harris (now L3Harris); or IMSI catchers, a technology that can capture a nearby cell phone’s unique identifier which law enforcement can use for identifying the phone’s owner.  
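    A toy simulation can show why this trick works: handsets simply attach to whichever tower advertises the strongest signal, so a rogue tower that overpowers legitimate ones collects the identifiers of nearby phones. Everything here (class names, signal values, the IMSI) is invented for illustration.

```python
class Tower:
    """A cell tower (or a rogue simulator posing as one)."""
    def __init__(self, name, signal_db, rogue=False):
        self.name = name
        self.signal_db = signal_db  # advertised signal strength, in dBm
        self.rogue = rogue
        self.seen_imsis = []

def connect(phone_imsi, towers):
    """Phones attach to the strongest visible tower; that tower logs the IMSI."""
    best = max(towers, key=lambda t: t.signal_db)
    best.seen_imsis.append(phone_imsi)
    return best

# The rogue "stingray" advertises a stronger signal than the real tower.
towers = [Tower("carrier-tower", -95), Tower("stingray", -60, rogue=True)]
chosen = connect("310150123456789", towers)  # attaches to the stingray
```

    This is also why the devices are indiscriminate by design: every phone in range makes the same strongest-signal choice, not just the target's.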

    In the last two years, ICE has signed contracts for more than $1.5 million with a company called TechOps Specialty Vehicles (TOSV), which produces customized vans for law enforcement. 

    A contract worth more than $800,000 dated May 8, 2025 said TOSV will provide “Cell Site Simulator (CSS) Vehicles to support the Homeland Security Technical Operations program.”  

    TOSV president Jon Brianas told TechCrunch that the company does not manufacture the cell-site simulators, but rather integrates them “into our overall design of the vehicle.” 

    Cell-site simulators have long been controversial for several reasons.  

    These devices are designed to trick all nearby phones into connecting to them, which means that by design they gather the data of many innocent people. Also, authorities have sometimes deployed them without first obtaining a warrant.

    Authorities have also tried to keep their use of the technology secret in court, withholding information, and even accepting plea deals and dropping cases rather than disclose information about their use of cell-site simulators. In a court case in 2019 in Baltimore, it was revealed that prosecutors were instructed to drop cases rather than violate a non-disclosure agreement with the company that makes the devices.  

    Facial recognition

    Clearview AI is perhaps the most well-known facial-recognition company today. For years, the company promised to be able to identify any face by searching through a large database of photos it had scraped from the internet. 

    On Monday, 404 Media reported that ICE has signed a contract with the company to support its law enforcement arm Homeland Security Investigations (HSI), “with capabilities of identifying victims and offenders in child sexual exploitation cases and assaults against law enforcement officers.” 

    According to a government procurement database, the contract signed last week is worth $3.75 million. 

    ICE has had other contracts with Clearview AI in the last couple of years. In September 2024, the agency purchased “forensic software” from the company, a deal worth $1.1 million. The year before, ICE paid Clearview AI nearly $800,000 for “facial recognition enterprise licenses.”

    Clearview AI did not respond to a request for comment. 

    ICE is also using a facial recognition app called Mobile Fortify, which federal agents use to identify people on the street. The app relies on scanning a person’s face against a database of 200 million photos, much of the data sourced from state driver’s license databases.

    Paragon phone spyware

    Contact Us

    Do you have more information about ICE and the technology it uses? We would love to learn how this affects you. From a non-work device, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram and Keybase @lorenzofb, or email. You also can contact TechCrunch via SecureDrop.

    In September 2024, ICE signed a contract worth $2 million with Israeli spyware maker Paragon Solutions. Almost immediately, the Biden administration issued a “stop work order,” putting the contract under review to make sure it complied with an executive order on the government’s use of commercial spyware. 

    Because of that order, for nearly a year, the contract remained in limbo. Then, last week, the Trump administration lifted the stop work order, effectively reactivating the contract.

    At this point, the status of Paragon’s relationship with ICE in practice is unclear.  

    The records entry from last week said that the contract with Paragon is for “a fully configured proprietary solution including license, hardware, warranty, maintenance, and training.” Practically speaking, unless the hardware installation and training were done last year, it may take some time for ICE to have Paragon’s system up and running.

    It’s also unclear if the spyware will be used by ICE or HSI, an agency whose investigations are not limited to immigration, but also cover online child sexual exploitation, human trafficking, financial fraud, and more.

    Paragon has long tried to portray itself as an “ethical” and responsible spyware maker, and now has to decide if it’s ethical to work with Trump’s ICE. A lot has happened to Paragon in the last year. In December, American private equity giant AE Industrial purchased Paragon, with a plan to merge it with cybersecurity company RedLattice, according to Israeli tech news site Calcalist.

    In a sign that the merger may have taken place, when TechCrunch reached out to Paragon for comment on the reactivation of the ICE contract last week, we were referred to RedLattice’s new vice president of marketing and communications Jennifer Iras. 

    RedLattice’s Iras did not respond to a request for comment for this article, nor for last week’s article.

    In the last few months, Paragon has been ensnared in a spyware scandal in Italy, where the government has been accused of spying on journalists and immigration activists. In response, Paragon cut ties with Italy’s intelligence agencies. 

    Phone hacking and unlocking technology

    In mid-September, ICE’s law enforcement arm Homeland Security Investigations signed a contract with Magnet Forensics for $3 million.

    This contract is specifically for software licenses so that HSI agents can “recover digital evidence, process multiple devices,” and “generate forensic reports,” according to the contract description.

    Magnet is the current maker of the phone hacking and unlocking devices known as Graykey. These devices essentially give law enforcement agents the ability to connect a locked phone, unlock it, and access the data inside.

    Magnet Forensics, which merged with Graykey maker Grayshift in 2023, did not respond to a request for comment.

    Cellphone location data 

    At the end of September, 404 Media reported that ICE bought access to an “all-in-one” surveillance tool that allows the agency to search through databases of historical cellphone location data, as well as social media information.

    The tool appears to comprise two products, Tangles and Webloc, made by a company called Penlink. One of the tools promises to leverage “a proprietary data platform to compile, process, and validate billions of daily location signals from hundreds of millions of mobile devices, providing both forensic and predictive analytics,” according to a redacted contract found by 404 Media.

    The redacted contract does not identify which one of the tools makes that promise, but given its description, it’s likely Webloc. Forbes previously cited a case study that said Webloc can search a given location to “monitor trends of mobile devices that have given data at those locations and how often they have been there.”  

    This type of cellphone location data is harvested by companies around the world using software development kits (SDKs) embedded in regular smartphone apps, or with an online advertising process called real-time bidding (RTB) where companies bid in real-time to place an ad on the screen of a cellphone user based on their demographic or location data. The latter process has the by-product of giving ad tech companies that kind of personal data.  
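    A minimal sketch of why real-time bidding leaks location: every bidder receives a copy of the bid request, including the device's advertising ID and coordinates, whether or not it wins the auction. The structure loosely follows OpenRTB-style fields, but the exact layout and all values here are illustrative assumptions.

```python
# Simplified bid request loosely modeled on OpenRTB fields; all values invented.
bid_request = {
    "id": "auction-7f3a",
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising identifier
        "geo": {"lat": 38.8977, "lon": -77.0365},       # device coordinates
    },
    "app": {"bundle": "com.example.weather"},  # the app showing the ad slot
}

def harvest_location(request):
    """What any bidder (winning or not) could log from a single request."""
    device = request["device"]
    return (device["ifa"], device["geo"]["lat"], device["geo"]["lon"])

record = harvest_location(bid_request)
# Aggregated over billions of daily requests, records like this become the
# historical location databases that brokers later resell.
```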

    Once collected, this mass of location data is transferred to a data broker who then sells it to government agencies. Thanks to this layered process, authorities have used this type of data without getting a warrant by simply purchasing access to the data. 

    The other tool, Tangles, is an “AI-powered open-source intelligence” tool that automates “the search and analysis of data from the open, deep, and the dark web,” according to Penlink’s official site.  

    Forbes reported in September that ICE spent $5 million on Penlink’s two tools.  

    Penlink did not respond to a request for comment.  

    License plate readers

    ICE relies on automated license plate reader (ALPR) companies to follow drivers across a large swath of the U.S., tracking where people go and when.

    ICE also leans on its connections with local law enforcement agencies, which have contracts with ALPR providers, like surveillance company Flock Safety, to obtain immigration data by the backdoor. Flock is one of the largest ALPR providers, with over 40,000 license plate scanners around the United States, and only getting larger with its partnerships with other companies, such as video surveillance company Ring.

    Efforts by ICE to informally request data from local law enforcement have prompted some police departments to cut off federal agencies’ access.

    Border Patrol runs its own surveillance network of ALPR cameras, the Associated Press reported.

    Data broker LexisNexis

    For years, ICE has used the legal research and public records data broker LexisNexis to support its investigations. 

    In 2022, two non-profits obtained documents via Freedom of Information Act requests, which revealed that ICE performed more than 1.2 million searches over seven months using a tool called Accurint Virtual Crime Center. ICE used the tool to check the background information of migrants.   

    A year later, The Intercept revealed that ICE was using LexisNexis to detect suspicious activity and investigate migrants before they even committed a crime, a program that a critic said enabled “mass surveillance.”

    According to public records, LexisNexis currently provides ICE “with a law enforcement investigative database subscription (LEIDS) which allows access to public records and commercial data to support criminal investigations.” 

    This year, ICE has paid $4.7 million to subscribe to the service. 

    LexisNexis spokesperson Jennifer Richman told TechCrunch that ICE has used the company’s “data and analytics solutions for decades, across several administrations.”

    “Our commitment is to support the responsible and ethical use of data, in full compliance with laws and regulations, and for the protection of all residents of the United States,” said Richman, who added that LexisNexis “partners with more than 7,500 federal, state, local, tribal, and territorial agencies across the United States to advance public safety and security.” 

    Surveillance giant Palantir

    Data analytics and surveillance technology giant Palantir has signed several contracts with ICE in the last year. The biggest contract, worth $18.5 million from September 2024, is for a database system called “Investigative Case Management,” or ICM.

    The contract for ICM goes back to 2022, when Palantir signed a $95.9 million deal with ICE. The Peter Thiel-founded company’s relationship with ICE dates back to the early 2010s. 

    Earlier this year, 404 Media, which has reported extensively on the technology powering Trump’s deportation efforts, and particularly Palantir’s relationship with ICE, revealed details of how the ICM database works. The tech news site reported that it saw a recent version of the database, which allows ICE to filter people based on their immigration status, physical characteristics, criminal affiliation, location data, and more. 

    According to 404 Media, “a source familiar with the database” said it is made up of “tables upon tables” of data and that it “can build reports that show, for example, people who are on a specific type of visa who came into the country at a specific port of entry, who came from a specific country, and who have a specific hair color (or any number of hundreds of data points).”
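    The kind of multi-attribute report-building described here amounts to conjunctive filtering over person records. A minimal sketch, where the field names and values are invented and are not ICM's actual schema:

```python
# Toy person records; field names are invented, not ICM's actual schema.
records = [
    {"visa": "B-2", "entry_port": "MIA", "country": "X", "hair": "brown"},
    {"visa": "F-1", "entry_port": "JFK", "country": "Y", "hair": "black"},
]

def filter_records(records, **criteria):
    """Return records matching every supplied field=value criterion."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

hits = filter_records(records, visa="B-2", entry_port="MIA")  # first record only
```

    Each added criterion narrows the result set, which is what lets such a database combine visa type, port of entry, origin country, and physical characteristics in a single report.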

    The tool, and Palantir’s relationship with ICE, was controversial enough that sources within the company leaked to 404 Media an internal wiki where Palantir justifies working with Trump’s ICE. 

    Palantir is also developing a tool called “ImmigrationOS,” according to a contract worth $30 million revealed by Business Insider.

    ImmigrationOS is said to be designed to streamline the “selection and apprehension operations of illegal aliens,” give “near real-time visibility” into self-deportations, and track people overstaying their visa, according to a document first reported on by Wired.

    First published on September 13, 2025 and updated on September 18, 2025 to include Magnet Forensics’ new contract, again on October 8, 2025 to include cell-site simulators and location data, and again on January 26, 2026 to include license plate readers.

    Lorenzo Franceschi-Bicchierai, Zack Whittaker

  • AI surveillance stopping 1,400 shoplifting crimes daily across UK retailers – Tech Digest


    Over 1,400 shoplifters are being intercepted by facial recognition cameras every day as Britain’s retail industry turns to high-tech warfare to combat an industrial-scale wave of theft.

    New data reveals that the Facewatch AI system, currently deployed across major chains including Sainsbury’s, Sports Direct, and Home Bargains, issued more than half a million “known thief” alerts to shop staff in 2025. This represents a staggering 1,415 interventions per day, more than doubling the volume of detections recorded just one year prior.

    The technology works by scanning the faces of shoppers as they enter a store and cross-referencing them against a digital watchlist of prolific offenders. Within an average of nine seconds, the system can flag a “subject of interest,” allowing security teams to either monitor the individual or escort them from the premises before goods are taken.

    Facewatch CEO Nick Fisher defended the rapid expansion of the network, stating that the figures reflect a reality where retailers must act “faster and smarter” to protect employees and stock.

    While official police records show shoplifting offences hit a record high of 529,994 last year, industry experts believe the true scale of the crisis is closer to 20 million thefts annually, costing businesses £2.6 billion.

    However, the rise of the machines has sparked a fierce backlash from privacy campaigners who warn that innocent shoppers are being caught in a digital dragnet. Groups like Big Brother Watch have highlighted cases of “human error” where shoppers were humiliated and blacklisted from their local stores after being falsely flagged by the AI.

    One victim, a shopper named Jenny, described being blocked by security and accused of theft in front of other customers due to a false match. She warned that technology companies have effectively become “judge, jury and executioner” with no legal due process for those wrongly accused.

    Despite these concerns, retailers are doubling down on the technology. During the week leading up to Christmas Eve alone, the system issued nearly 15,000 alerts, marking the busiest period for AI-driven crime prevention in UK retail history.




    Chris Price

  • These Charlotte airport passengers can go through security now without showing ID


    Just in time for the holiday rush, American Airlines has rolled out a faster security process at Charlotte’s airport for its loyalty program members.

    The airline launched a streamlined, photo-based process on Monday, Dec. 22 for AAdvantage passengers going through Charlotte Douglas International Airport using the Transportation Security Administration’s PreCheck Touchless ID facial recognition system.

    This allows eligible members to move through security more easily after an identity match compares a customer’s image to photos previously provided to the government. This includes pictures from passports, Global Entry or visa cards.

    TSA has used the technology for domestic travel for several years since the pandemic, according to spokesman Carter Langston. The announcement is part of a new partnership with American that lets passengers’ faces serve as their ID, if they want to sign up for the PreCheck and airline awards services.

    Facial recognition technology has also been used for passengers coming to CLT from foreign countries since May 2021, and since 2023 for people leaving the Charlotte airport for another country.

    Facial recognition can save time by allowing travelers to skip ID checks, but groups like the American Civil Liberties Union are not in favor of the process. The organization believes it allows authorities to track people in public without their knowledge or consent, according to its website. The ACLU considers facial recognition to be a threat to privacy.

    The ACLU also supports the Traveler Privacy Protection Act, a bill that would regulate and limit the TSA’s use of facial recognition technology at airports. While lawmakers debate the process, American Airlines is moving forward with its voluntary program.

    American Airlines is offering the PreCheck Touchless ID system for passengers enrolled in the AAdvantage awards program. AMERICAN AIRLINES

    How the new American Airlines facial recognition program works

    AAdvantage customers over 18 will be offered the opportunity to participate by providing their membership number, passport, and a TSA PreCheck traveler number. Members can opt in anytime through their profile on aa.com, and they must renew once a year.

    Identity verification is a critical part of transportation security, said TSA’s CLT Federal Security Director Greg Hawko in a news release.

    “Passengers can be identified and verified using their face as their identification, allowing the passenger to have their phones and IDs safely packed away before beginning the screening process,” Hawko said.

    At CLT airport, the service is now available at Security Checkpoint 2.

    Charlotte Douglas is one of 12 airports where this service is offered. Some of the others include airports serving major cities such as Atlanta; Chicago; New York; Philadelphia; Portland, Oregon; Seattle; Washington; and the Dallas–Fort Worth region.

    American plans to expand the service at additional airports in the coming months with help from TSA.

    The Transportation Security Administration’s PreCheck Touchless ID system is available for American Airlines passengers enrolled in its awards program. AMERICAN AIRLINES

    More on Charlotte Douglas and American Airlines

    CLT is the sixth-busiest airport in the world for takeoffs and landings, according to preliminary rankings released in April by Airports Council International. The airport had 596,583 flights last year — an 11% increase from 2023, when it was ranked seventh internationally.

    Charlotte Douglas is the second-largest hub for American, accounting for about 90% of all flights out of the airport. The airline provides flights to more than 170 destinations in 27 countries worldwide at CLT through its global network.

    American is expected to operate more than 11,700 flights at Charlotte Douglas, with more than 1.4 million seats available for the holiday travel period from Dec. 18 through Jan. 5.

    The busiest days for American at CLT will be Sundays, Dec. 28 and Jan. 4, with up to 660 scheduled departures each day.


    Chase Jordan

    The Charlotte Observer

    Chase Jordan is a business reporter for The Charlotte Observer, and has nearly a decade of experience covering news in North Carolina. Prior to joining the Observer, he was a growth and development reporter for the Wilmington StarNews. The Kansas City native is a graduate of Bethune-Cookman University.


  • U.S. Immigration Enforcement Apparently Provides Facial Scanning Tech to Local Cops


    As the U.S. government ramps up its efforts to rid the nation of illegal immigrants, it is turning to a bevy of new technologies to help it sift through the domestic population. Increasingly, the Trump administration is using facial recognition, and a new report now claims that the Department of Homeland Security is even distributing face recording tech to local police departments to assist with immigration enforcement operations.

    404 Media writes that Customs and Border Protection recently released an app called Mobile Identify onto the Google Play Store that seems to be designed to use biometric scanning on potential detainees. The app is being made available to state and local law enforcement agencies as part of a program called 287(g), which essentially deputizes officials within those agencies to work on behalf of the federal government. The program appears active in a majority of states.

    404 got ahold of Mobile Identify and unspooled its code to attempt to see what it does. The journalists write that “multiple parts of the app’s code make clear references to scanning faces. One package is called ‘facescanner.’ Other parts mention ‘FacePresence’ and ‘No facial image found.’” However, they note that the Play Store’s description of the app does not mention facial recognition as a function of the app. That said, the app’s description does everything but mention face recording—with the obvious point of the app being to identify people who are in the country illegally.

    The Play Store’s description reads:

    Through a formal agreement, or Memorandum of Agreement (MOA), with DHS, participating agencies like your Sheriff’s Department can have designated officers who are trained, certified, and authorized to perform certain immigration enforcement functions, helping to identify and process individuals who may be in the country unlawfully. This tool is built to streamline those responsibilities securely and efficiently, directly in the field.

    A screenshot provided in the Play Store also notes that the app “requires camera access to take photos of subjects.”

    When reached for comment by Gizmodo, a DHS spokesperson did not deny that the app used facial recognition but did not otherwise provide any details about what it does. “While the Department does not discuss specific vendors or operational tools, any technology used by DHS Components must comply with the requirements and oversight framework,” an agency spokesperson said.

    Gizmodo sought direct confirmation from DHS that the app uses facial recognition, and will update if we hear back.

    The news follows on the heels of another report from 404 that showed that the agency was using an app called Mobile Fortify, which similarly leveraged facial recognition technology. The outlet previously reported that Fortify could, frighteningly, pull up a person’s name, birthday, “alien number,” and information on whether or not they’ve been given an order of deportation.

    The Trump administration’s deportation efforts are aggressive and ongoing. During the first six months of Trump’s second term in office, the administration claimed to have deported a total of 150,000 people. Those numbers aren’t entirely unprecedented (indeed, an article from July noted that Trump’s current deportation numbers are roughly comparable with the Obama administration’s numbers from 2014). However, the loud, aggressive fashion in which the administration has gone about these operations certainly stands out from previous administrations, as does its use of increasingly invasive and powerful digital tools.

    Lucas Ropek

  • CBP will photograph non-citizens entering and exiting the US for its facial recognition database


    The US Customs and Border Protection (CBP) submitted a new measure that allows it to photograph any non-US citizen who enters or exits the country for facial recognition purposes. According to a filing with the government’s Federal Register, CBP and the Department of Homeland Security are looking to crack down on threats of terrorism, fraudulent use of travel documents and anyone who overstays their authorized stay.

    The filing detailed that CBP will “implement an integrated, automated entry and exit data system to match records, including biographic data and biometrics, of aliens entering and departing the United States.” The government agency already has the ability to request photos and fingerprints from anyone entering the country, but this new rule change would allow for requiring photos of anyone exiting as well. These photos would “create galleries of images associated with individuals, including photos taken by border agents, and from passports or other travel documents,” according to the filing, adding that these galleries would be compared to live photos at entry and exit points.
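    The gallery-based matching the filing describes can be sketched as a one-to-few comparison: a live capture at the border is scored against every stored document photo for that traveler. The metric and threshold below are invented stand-ins, not CBP's system.

```python
MATCH_THRESHOLD = 0.75  # assumed; real systems are tuned against error-rate targets

def similarity(a, b):
    # Toy stand-in metric over small vectors; a real system uses a face model.
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def verify_against_gallery(live_capture, gallery):
    """True if the live capture matches any document photo in the gallery."""
    return any(similarity(live_capture, photo) >= MATCH_THRESHOLD for photo in gallery)

# Toy gallery: e.g. a passport photo plus a photo from a prior border crossing.
gallery = [[0.2, 0.4, 0.9], [0.25, 0.38, 0.88]]
matched = verify_against_gallery([0.21, 0.41, 0.9], gallery)
```

    Comparing against a per-person gallery of a few known photos is a much easier problem than searching a whole population, which is part of why such systems can run at entry and exit points.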

    These new requirements are scheduled to go into effect on December 26, but CBP will need some time to implement a system to handle the extra demand. According to the filing, the agency said “a biometric entry-exit system can be fully implemented at all commercial airports and sea ports for both entry and exit within the next three to five years.”

    Jackson Chen

  • Meta is bringing new facial recognition tools to the UK, EU and South Korea


    Meta is expanding its use of facial recognition in Europe, the UK and South Korea to crack down on accounts that impersonate public figures. The new facial recognition-powered safety features are now live on Facebook in the regions and will expand to Instagram in the coming months.

    The technology was initially put to use last year starting in the US, helping to identify ads that fraudulently use a celebrity’s likeness as well as to help people regain access to hacked accounts. Public figures must opt in to the program in Europe, which is also being rolled out in South Korea alongside the new protections against impersonation. This new use case is aimed at scammers who pose as public figures to trick unsuspecting users into sending money or falling for other scams of that nature.

    “We’ll now use facial recognition technology to compare the profile picture on the suspicious account to the real public figure’s Facebook and Instagram profile pictures. If there’s a match, we will remove the impostor account,” said a Meta spokesperson.
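    The check Meta describes is essentially a one-to-few verification followed by a moderation action. A minimal sketch, with the similarity function, threshold, and action names all assumed for illustration:

```python
SAME_PERSON_THRESHOLD = 0.8  # hypothetical

def photo_similarity(a, b):
    # Toy stand-in for comparing two profile-photo embeddings.
    return 1.0 - max(abs(x - y) for x, y in zip(a, b))

def review_account(suspect_photo, public_figure_photos):
    """Return the moderation action for a suspected impostor account."""
    if any(photo_similarity(suspect_photo, p) >= SAME_PERSON_THRESHOLD
           for p in public_figure_photos):
        return "remove_impostor_account"
    return "no_action"

# Toy embeddings of the real public figure's profile pictures.
celebrity_photos = [[0.6, 0.3, 0.8]]
action = review_account([0.62, 0.31, 0.79], celebrity_photos)
```

    Note that in this use case a match is grounds for removal rather than access, the inverse of how the same comparison is used for account recovery.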

    In addition to the US rollout, the company’s facial recognition technology has been used to aid account recovery in the UK, EU and South Korea since March. This came three years after Meta shut down Facebook’s original facial recognition system, due in large part to public backlash against the technology.

    The social media giant touts the benefits of these tools, reporting that in the first half of 2025, user reports of “celebrity bait” ads dropped by 22 percent globally. Facial recognition remains a controversial technology, and public opinion on its use remains divided.

    Andre Revilla

  • Here’s the tech powering ICE’s deportation crackdown  | TechCrunch

    [ad_1]

    President Donald Trump made countering immigration one of his flagship issues during last year’s presidential campaign, promising an unprecedented number of deportations. 

    In his first eight months in office, that promise turned into around 350,000 deportations, a figure that includes deportations by Immigration and Customs Enforcement, or ICE (around 200,000), Customs and Border Protection (more than 132,000), and almost 18,000 self-deportations, according to CNN.  

    ICE has taken center stage in Trump’s mass deportation campaign, raiding homes, workplaces, and public parks in search of undocumented immigrants. To aid its efforts, ICE has at its disposal several technologies capable of identifying and surveilling individuals and communities.

    Here is a recap of some of the technology that ICE has in its digital arsenal. 

    Clearview AI facial recognition

    Clearview AI is perhaps the most well-known facial-recognition company today. For years, the company promised to be able to identify any face by searching through a large database of photos it had scraped from the internet. 

    On Monday, 404 Media reported that ICE has signed a contract with the company to support its law enforcement arm Homeland Security Investigations (HSI), “with capabilities of identifying victims and offenders in child sexual exploitation cases and assaults against law enforcement officers.” 

    According to a government procurement database, the contract signed last week is worth $3.75 million. 

    ICE has had other contracts with Clearview AI in the last couple of years. In September 2024, the agency purchased “forensic software” from the company, a deal worth $1.1 million. The year before, ICE paid Clearview AI nearly $800,000 for “facial recognition enterprise licenses.”

    Clearview AI did not respond to a request for comment. 

    Paragon phone spyware

    In September 2024, ICE signed a contract worth $2 million with Israeli spyware maker Paragon Solutions. Almost immediately, the Biden administration issued a “stop work order,” putting the contract under review to make sure it complied with an executive order on the government’s use of commercial spyware. 

    Because of that order, for nearly a year, the contract remained in limbo. Then, last week, the Trump administration lifted the stop work order, effectively reactivating the contract.

    At this point, the status of Paragon’s relationship with ICE in practice is unclear.  

    The records entry from last week said that the contract with Paragon is for “a fully configured proprietary solution including license, hardware, warranty, maintenance, and training.” Practically speaking, unless the hardware installation and training were done last year, it may take some time for ICE to have Paragon’s system up and running.

    It’s also unclear if the spyware will be used by ICE or HSI, an agency whose investigations are not limited to immigration, but also cover online child sexual exploitation, human trafficking, financial fraud, and more.

    Paragon has long tried to portray itself as an “ethical” and responsible spyware maker, and now has to decide if it’s ethical to work with Trump’s ICE. A lot has happened to Paragon in the last year. In December, American private equity giant AE Industrial purchased Paragon, with a plan to merge it with cybersecurity company RedLattice, according to Israeli tech news site Calcalist.

    In a sign that the merger may have taken place, when TechCrunch reached out to Paragon for comment on the reactivation of the ICE contract last week, we were referred to RedLattice’s new vice president of marketing and communications Jennifer Iras. 

    RedLattice’s Iras did not respond to a request for comment for this article, nor for last week’s article.

    In the last few months, Paragon has been ensnared in a spyware scandal in Italy, where the government has been accused of spying on journalists and immigration activists. In response, Paragon cut ties with Italy’s intelligence agencies. 

    Phone hacking and unlocking technology

    In mid-September, ICE’s law enforcement arm Homeland Security Investigations signed a contract with Magnet Forensics for $3 million.

    This contract is specifically for software licenses so that Homeland Security Investigations agents can “recover digital evidence, process multiple devices” and “generate forensic reports,” according to the contract description.

    Magnet is the current maker of the phone hacking and unlocking devices known as Graykey. These devices let law enforcement agents connect a locked phone, unlock it, and access the data inside.

    Magnet Forensics, which merged with Graykey makers Grayshift in 2023, did not respond to a request for comment.

    Data broker LexisNexis

    For years, ICE has used the legal research and public records data broker LexisNexis to support its investigations.

    In 2022, two non-profits obtained documents via Freedom of Information Act requests, which revealed that ICE performed more than 1.2 million searches over seven months using a tool called Accurint Virtual Crime Center. ICE used the tool to check the background information of migrants.   

    A year later, The Intercept revealed that ICE was using LexisNexis to detect suspicious activity and investigate migrants before they even committed a crime, a program that a critic said enabled “mass surveillance.”

    According to public records, LexisNexis currently provides ICE “with a law enforcement investigative database subscription (LEIDS) which allows access to public records and commercial data to support criminal investigations.” 

    This year, ICE has paid $4.7 million to subscribe to the service. 

    LexisNexis spokesperson Jennifer Richman told TechCrunch that ICE has used the company’s “data and analytics solutions for decades, across several administrations.”

    “Our commitment is to support the responsible and ethical use of data, in full compliance with laws and regulations, and for the protection of all residents of the United States,” said Richman, who added that LexisNexis “partners with more than 7,500 federal, state, local, tribal, and territorial agencies across the United States to advance public safety and security.” 

    Surveillance giant Palantir

    Data analytics and surveillance technology giant Palantir has signed several contracts with ICE in the last year. The biggest contract, worth $18.5 million from September 2024, is for a database system called “Investigative Case Management,” or ICM.

    The contract for ICM goes back to 2022, when Palantir signed a $95.9 million deal with ICE. The Peter Thiel-founded company’s relationship with ICE dates back to the early 2010s. 

    Earlier this year, 404 Media, which has reported extensively on the technology powering Trump’s deportation efforts, and particularly Palantir’s relationship with ICE, revealed details of how the ICM database works. The tech news site reported that it saw a recent version of the database, which allows ICE to filter people based on their immigration status, physical characteristics, criminal affiliation, location data, and more. 

    According to 404 Media, “a source familiar with the database” said it is made up of “tables upon tables” of data and that it can “build reports that show, for example, people who are on a specific type of visa who came into the country at a specific port of entry, who came from a specific country, and who have a specific hair color (or any number of hundreds of data points).”

    The tool, and Palantir’s relationship with ICE, was controversial enough that sources within the company leaked to 404 Media an internal wiki where Palantir justifies working with Trump’s ICE. 

    Palantir is also developing a tool called “ImmigrationOS,” according to a contract worth $30 million revealed by Business Insider.

    ImmigrationOS is said to be designed to streamline the “selection and apprehension operations of illegal aliens,” give “near real-time visibility” into self-deportations, and track people overstaying their visa, according to a document first reported on by Wired.

    First published on September 13, and updated on September 18 to include Magnet Forensics’ new contract.

    Lorenzo Franceschi-Bicchierai

  • Senators demand ICE cease use of facial recognition app

    Senators Edward J. Markey, Ron Wyden and Jeff Merkley sent a letter Thursday to Acting US Immigration and Customs Enforcement (ICE) Director Todd Lyons urging the agency to stop using “Mobile Fortify,” a smartphone app that uses biometric identification, including facial recognition. The lawmakers said facial recognition remains unreliable and warned that real-time surveillance could have a chilling effect on constitutionally protected activities.

    “As studies have shown, when individuals believe they are being surveilled, they are less likely to engage in First Amendment-protected activities, such as protests or rallies — undermining the very core of our democracy,” the senators wrote.

    They requested answers from the agency by October 2 as to who built the app, when it was deployed, whether ICE tested its accuracy, the legal basis for its use and current agency policies governing the tool’s use. They also asked whether ICE would commit to ending the use of Mobile Fortify, and to explain why if they would not. The letter was also signed by Senators Elizabeth Warren, Cory Booker, Chris Van Hollen, Tina Smith, Bernie Sanders and Adam Schiff.

    Earlier this summer The Washington Post reported that the New Orleans police were secretly using facial recognition on a private camera network of over 200 live feeds. This went on for two years despite city ordinances requiring the technology only be used to search for specific suspects of violent crimes, and that the use be documented and reported to the city council. Facial recognition technology remains controversial, though a plurality of Americans support its use in both law enforcement and the workplace, with limitations.

    As there is still no federal regulation on the use of facial recognition, states have been left to craft their own guardrails, with states like Illinois allowing individuals to sue for damages over misuse of biometric data and requiring written consent for its use. Last year Meta paid a $1.4 billion settlement to the state of Texas (the largest financial settlement ever paid out to a single state) for allegedly collecting biometric data on millions of Texans without their consent.

    Andre Revilla

  • Florida man’s arrest wiped from record after AI software leads police to wrong suspect

    A wrongful arrest has now been wiped from a Lee County man’s record.

    Gulf Coast News first exposed the injustice months ago.

    The arrest happened after artificial intelligence facial recognition led police to the wrong suspect.

    “They say in life, everything happens for a reason. I can’t for the life of me figure out this one,” Robert Dillon, the man wrongfully arrested, told Gulf Coast News earlier this year.

    ‘How did this happen?’

    One year ago, right outside his home in San Carlos Park, Dillon was arrested for a crime he never committed. His stunned reaction was captured on the body camera of the deputy who’d knocked on his door.

    “I’m thinking, ‘How in the hell did this happen. How did this happen?’” Dillon recalled.

    Dillon was accused of trying to lure a child at a fast-food restaurant more than 300 miles away in Jacksonville Beach.

    Investigators there submitted restaurant surveillance photos of the suspect to an AI-assisted facial recognition program, which identified Dillon as a 93% match.

    Beyond that, and a witness who picked his photo out of a lineup, there was no evidence tying him to it.

    As Dillon first explained months ago, he’s never been to Jacksonville Beach.

    “Out of the blue. They pick some guy that lives six and a half hours away and says, ‘This is you.’ It blew my mind,” Dillon said earlier this year.

    Case dropped, arrest wiped from record

    Once Dillon and his attorney provided evidence to show that he did not commit the crime, the state attorney’s office in Jacksonville dropped the case.

    When Gulf Coast News first reported on it, a spokesman for the state attorney’s office said they were submitting paperwork to the Florida Department of Law Enforcement for the case to be stricken from Dillon’s record.

    Now, the spokesman confirmed Dillon is no longer in their system. His arrest mugshot — and his case file — are nowhere to be found online.

    Not the first time…

    “This is a technology that’s really dangerous, because it often gets it wrong. But police often treat it like it has to be right,” Nate Wessler said of facial recognition programs.

    Wessler is an attorney with the American Civil Liberties Union. He focuses on government and police use of new technology, like the facial recognition in Dillon’s case.

    “Now that we know about it, we want to dig deeper,” Wessler said of the case. “This is a real miscarriage of justice. And it’s the latest in a series of wrongful arrests we know of around the country after police relied on incorrect results from face recognition technology.”

    In 2020, Robert Williams was wrongfully arrested in front of his home by Detroit police. His wife and two daughters watched it happen.

    “I can’t really put it into words. It was one of the most shocking things I’ve ever had happen to me,” Williams said in an interview with the ACLU after his arrest.

    A surveillance photo of a man stealing from a watch store was run through face recognition technology by investigators and identified Williams — who was nowhere near the store at the time — as a possible match.

    Wessler was part of the legal team that sued the city of Detroit on Williams’ behalf.

    “The way to avoid this kind of travesty of justice is to either take this technology out of the hands of police, or lock it down really seriously with a set of policies and restrictions,” Wessler said.

    Detroit PD changes policy after wrongful arrest

    Williams’ lawsuit led to a settlement, which included not only a payout for him but also sparked a policy change within the Detroit PD.

    In Williams’ case, much like Robert Dillon’s, police relied on two pieces of evidence: the face recognition match and someone picking his photo out of a lineup.

    Now, in Detroit, more evidence is required to make an arrest.

    “When you go straight from a face recognition result right to a photo lineup, there’s a high, high likelihood of tainting the reliability of that lineup,” Wessler explained. “You’re going to populate it with an innocent lookalike, plus five people who don’t look much like the suspect. And now you’ve just created this totally suggestible situation, where even a well-meaning witness is going to be tricked.”

    Months later, Dillon still hopes to get justice

    Robert Dillon is relieved the arrest is off his record, but he wants to file a lawsuit to fight back against the injustice.

    After all, he said he can never get back the sleepless nights wondering if he’d serve time for a crime he never committed.

    “You cannot wrongfully imprison somebody. No matter who you are. Everybody’s got rights,” Dillon said.

    Gulf Coast News reached out to the Jacksonville Beach Police Department again, but they still refuse to answer any questions about their investigation.


  • Meta is bringing back facial recognition with new safety features for Facebook and Instagram

    Meta is bringing facial recognition tech back to its apps more than three years after it shut down Facebook’s “face recognition” system amid a broader backlash against the technology. Now, the social network will begin to deploy facial recognition tools on Facebook and Instagram to fight scams and help users who have lost access to their accounts, the company said in an update.

    The first test will use facial recognition to detect scam ads that use the faces of celebrities and other public figures. “If our systems suspect that an ad may be a scam that contains the image of a public figure at risk for celeb-bait, we will try to use facial recognition technology to compare faces in the ad against the public figure’s Facebook and Instagram profile pictures,” Meta explained in a blog post. “If we confirm a match and that the ad is a scam, we’ll block it.”

    The company said that it’s already begun to roll the feature out to a small group of celebs and public figures and that it will begin automatically enrolling more people into the feature “in the coming weeks,” though individuals have the ability to opt out of the protection. While Meta already has systems in place to review ads for potential scams, the company isn’t always able to catch “celeb-bait” ads as many legitimate companies use celebrities and public figures to market their products, Monika Bickert, VP of content policy at Meta, said in a briefing. “This is a real time process,” she said of the new facial recognition feature. “It’s faster and it’s more accurate than manual review.”

    Separately, Meta is also testing facial recognition tools to address another long-running issue on Facebook and Instagram: account recovery. The company is experimenting with a new “video selfie” option that allows users to upload a clip of themselves, which Meta will then match to their profile photos, when users have been locked out of their accounts. The company will also use it in cases of a suspected account compromise to prevent hackers from accessing accounts using stolen credentials.

The tool won’t be able to help everyone who loses access to a Facebook or Instagram account. Many business pages, for example, don’t include a profile photo of a person, so those users would need to use Meta’s existing account recovery options. But Bickert says the new process will make it much more difficult for bad actors to game the company’s support tools. “It will be a much higher level of difficulty for them in trying to bypass our systems,” Bickert said.

With both new features, Meta says it will “immediately delete” facial data that’s used for comparisons and that the scans won’t be used for any other purpose. The company is also making the features optional, though celebrities will need to opt out of the scam ad protection rather than opt in.

That could draw criticism from privacy advocates, particularly given Meta’s messy history with facial recognition. The company previously used the technology to power automatic photo-tagging, which allowed it to automatically recognize the faces of users in photos and videos. The feature was discontinued in 2021, with Meta deleting the facial data of more than 1 billion people, citing “growing societal concerns.” The company has also faced lawsuits, notably in Texas and Illinois, over its use of the tech. Meta paid $650 million to settle a lawsuit under Illinois’ biometric privacy law and $1.4 billion to resolve a similar suit in Texas.

It’s notable, then, that the new tools won’t be available in either Illinois or Texas to start. They also won’t roll out to users in the United Kingdom or European Union, as the company is “continuing to have conversations there with regulators,” according to Bickert. But the company is “hoping to scale this technology globally sometime in 2025,” according to a Meta spokesperson.

Karissa Bell

  • This Facial Recognition Experiment With Meta’s Smart Glasses Is a Terrifying Vision of the Future

    Two college students have used Meta’s smart glasses to build a tool that quickly identifies any stranger walking by and brings up that person’s sensitive information, including their home address and contact information, according to a demonstration video posted to Instagram. And while the creators say they have no plans to release the code for their project, the demo gives us a peek at humanity’s very likely future—a future that used to be confined to dystopian sci-fi movies.

The two people behind the project, AnhPhu Nguyen and Caine Ardayfio, are computer science students at Harvard who often post their tech experiments on social media, including 3D-printed projects and wearable flamethrowers. But it’s their latest experiment, first spotted by 404 Media, that’s probably going to make a lot of people feel uneasy.

    An Instagram video posted by Nguyen explains how the two men built a program that feeds the visual information from Meta Ray Ban smart glasses into facial recognition tools like Pimeyes, which have essentially scraped the entire web to identify where that person’s face shows up online. From there, a large language model infers the likely name and other details about that person. That name is then fed to various websites that can reveal the person’s home address, phone number, occupation or other organizational affiliations, and even the names of relatives.

“To use it, you just put the glasses on, then as you walk by people, the glasses will detect when somebody’s face is in frame. This photo is used to analyze them, and after a few seconds, their personal information pops up on your phone,” Nguyen explains in the Instagram video.

Nguyen and Ardayfio call their project I-XRAY, and it’s pretty stunning how much information they’re able to pull up in a short amount of time. They’re quick to point out that many of these tools have only become widely available in the past few years. For example, Meta’s camera-equipped smart glasses that look like regular eyeglasses were only released last year. And the kind of LLM data extraction they’re achieving has only been possible in the past two years. Even looking up partial Social Security numbers (thanks to all those data leaks you read about every day now) has only been possible at the consumer level since 2023.

    As you can see in the video, they also approached strangers and acted like they knew those people from elsewhere after instantly looking up their information.

    “The system leverages the ability of LLMs to understand, process, and compile vast amounts of information from diverse sources–inferring relationships between online sources, such as linking a name from one article to another, and logically parsing a person’s identity and personal details through text,” the creators say in an explanation document posted to Google Drive. “This synergy between LLMs and reverse face search allows for fully automatic and comprehensive data extraction that was previously not possible with traditional methods alone.”

    The creators list the tools they used in their release, noting that anyone can request that those services remove their information. For reverse facial search engines, there’s Pimeyes and Facecheck ID. For search engines that include personal information there’s FastPeopleSearch, CheckThem, and Instant Checkmate. As for the social security number information, there’s no way to get that stuff removed, so the students recommend freezing your credit.

    The students didn’t immediately respond to questions from Gizmodo on Wednesday morning. Meta also didn’t respond to a request for comment. We’ll update this post if we hear back. But in the meantime, we should all probably get ready for this kind of tech to emerge more widely since this kind of technological mash-up feels inevitable at this point—especially if any of the new smart glasses that guys like Mark Zuckerberg love so much really become mainstream.

It may take quite a while for the biggest tech companies to get behind it, but just as we saw OpenAI essentially fire the starting pistol for consumer-facing generative AI, any small upstart could plausibly build this product and start the dominoes falling for larger tech companies to get this future started. Let’s cross our fingers and hope for the best, given the privacy implications. It really feels like nobody will have any semblance of anonymity in public once this ball gets rolling.

Matt Novak

  • Detroit police can no longer use facial recognition results as the sole basis for arrests

The Detroit Police Department has to adopt new rules curbing its reliance on facial recognition technology after the city reached a settlement this week with Robert Williams, a Black man who was wrongfully arrested in 2020 due to a false face match. It’s not an all-out ban on the technology, though, and the court’s jurisdiction to enforce the agreement extends only four years. Under the new restrictions, which the ACLU is calling the strongest such policies for law enforcement in the country, police cannot make arrests based solely on facial recognition results or conduct a lineup based only on facial recognition leads.

Williams was arrested after facial recognition technology flagged his expired driver’s license photo as a possible match for the identity of an alleged shoplifter, which police then used to construct a photo lineup. He was arrested at his home, in front of his family, which he says “completely upended my life.” Detroit PD is known to have made at least two other wrongful arrests based on the results of facial recognition technology (FRT), and in both cases the victims were Black, the ACLU noted in its announcement of the settlement. Studies have shown that facial recognition is substantially more likely to misidentify women and people of color.

    The new rules stipulate that “[a]n FRT lead, combined with a lineup identification, may never be a sufficient basis for seeking an arrest warrant,” according to a summary of the agreement. There must also be “further independent and reliable evidence linking a suspect to a crime.” Police in Detroit will have to undergo training on the technology that addresses the racial bias in its accuracy rates, and all cases going back to 2017 in which facial recognition was used to obtain an arrest warrant will be audited.
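Read as a decision rule, the summary above boils down to two checks: a photo lineup may not rest on a facial recognition lead alone, and a warrant request needs at least one piece of evidence beyond the FRT lead and the lineup identification. A hypothetical encoding of that logic (the function names and evidence labels are our illustration, not language from the settlement):

```python
FRT_LEAD = "frt_lead"
LINEUP_ID = "lineup_id"

def may_conduct_lineup(basis):
    """A photo lineup may not be based solely on a facial recognition lead."""
    return any(item != FRT_LEAD for item in basis)

def may_seek_arrest_warrant(evidence):
    """An FRT lead plus a lineup identification is never sufficient; there
    must be further independent, reliable evidence linking the suspect."""
    independent = [e for e in evidence if e not in (FRT_LEAD, LINEUP_ID)]
    return len(independent) > 0

print(may_conduct_lineup([FRT_LEAD]))                                    # False
print(may_seek_arrest_warrant([FRT_LEAD, LINEUP_ID]))                    # False
print(may_seek_arrest_warrant([FRT_LEAD, LINEUP_ID, "witness_account"])) # True
```

The real policy, of course, turns on human judgments about what counts as “independent and reliable,” which no boolean check captures; the sketch only illustrates the structure of the restriction.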

In an op-ed published today, Williams wrote that the agreement means, essentially, that “DPD can no longer substitute facial recognition for basic investigative police work.”

Cheyenne MacDonald

  • Detroit Police Department agrees to new rules around facial recognition tech | TechCrunch

    As part of a legal settlement, the Detroit Police Department has agreed to new guardrails limiting how it can use facial recognition technology.

These new policies prohibit the police from arresting people based solely on the results of a facial recognition search, or on the results of photo lineups conducted immediately after a facial recognition search. They also state that photo lineups cannot be conducted solely on the basis of facial recognition — instead, there must be additional evidence linking a suspect to the crime.

    The policies — which can be enforced by a court for the next four years — also require police training around the risks and dangers of facial recognition tech, and an audit of all cases since 2017 where facial recognition was used to obtain an arrest warrant.

Robert Williams, a Black man who was arrested after being identified by facial recognition tech, had sued the police department and was represented by lawyers from the American Civil Liberties Union and the Civil Rights Litigation Initiative at the University of Michigan Law School.

    In announcing the settlement, the ACLU described it as achieving “the nation’s strongest police department policies and practices constraining law enforcement’s use of this dangerous technology.” It also noted that women and people of color are “substantially more likely to be misidentified by facial recognition technology.”

    “With this painful chapter of our lives closing, my wife and I will continue raising awareness about the dangers of this technology,” Williams said in a statement.

    He reportedly spent 30 hours in jail after he was wrongly identified as a man captured on surveillance footage stealing five watches from a store in downtown Detroit. His driver’s license photo came up in a facial recognition search of a database of mugshots and license photos, and the security contractor who provided the footage agreed he was the best match, leading to his arrest.

    Prosecutors later dropped the charges. The police department said it’s also paying Williams $300,000 as part of the settlement.

    In a statement of its own, the police department said it is “pleased with its work with the ACLU and University of Michigan over the last year and a half,” adding that it “firmly” believes the new policy “will serve as a national best practice and model for other agencies using this technology.”

    Cities including San Francisco have banned the use of facial recognition by law enforcement. Microsoft also recently banned police departments from using its AI tech for facial recognition.

Anthony Ha

  • 2012 Pennsylvania Unsolved Murder Cracked with Evidence from Cigarette Butt and Styrofoam Cup

    A New Jersey resident was apprehended following a Pennsylvania investigation that linked him to a 2012 homicide through advanced DNA forensic techniques and evidence, including a Styrofoam cup found at the crime scene and a cigarette butt discovered at his mother’s home.

    The District Attorney’s Office of Berks County, Pennsylvania, disclosed the arrest of 39-year-old Vallis L. Slaughter. He faces charges of first-degree murder in the death of 34-year-old Julio Torres outside the West Reading Diner in March 2012.

    John T. Adams, the Berks County District Attorney, reported in a press briefing on Monday that the initial probe into Torres’ death resulted in the apprehension and conviction of 22-year-old Jomain Case at the time.

    The investigation revealed that Torres, Case, and another individual were embroiled in a dispute before Torres was killed. Case’s DNA was matched to that found on a Styrofoam cup at the scene, leading to his arrest.

Despite the initial match, further analysis of DNA from the Styrofoam cup did not align with any database samples. Slaughter, who was in Reading, Pennsylvania, on the night of the murder, later became a person of interest, but the case went cold after leads dried up and no new information surfaced.

Twelve years after the crime, the investigation was reopened, uncovering a cell phone photo of Slaughter taken on the night of the murder. Facial recognition technology helped identify him as the alleged shooter.

In December, while residing in Jersey City, New Jersey, with his mother, Slaughter was linked to the murder through DNA evidence from a cigarette butt, which matched the DNA on the Styrofoam cup. He was arrested at his mother’s residence by Jersey City Police and is currently held at the Hudson County Correctional Facility, awaiting extradition to Berks County.

Slaughter is charged with first- and third-degree murder, criminal conspiracy, aggravated assault, and possession of instruments of crime.

    Source: https://www.nbcnews.com/news/us-news/dna-cigarette-butt-styrofoam-cup-lead-arrest-unsolved-pennsylvania-sla-rcna145029

Srdjan Ilic

  • New Nationals season will include futuristic tech, replica World Series rings – WTOP News

    The Washington Nationals have their first home game of the new season coming up April 1, and the team is showing off some futuristic technology that will greet fans when they show up this year.

Photo gallery (WTOP/Nick Iannelli): the team has unveiled what’s new at the ballpark for the 2024 season. Additions to the food menu include the “Taste of the Majors NYC Dog,” steak kebabs and “Screech Burger Sliders,” and fans can also enjoy deep fried calamari, shrimp tacos and Chesapeake crab cake.

    Fans no longer need to show their ticket in order to enter the stadium.

    All they need to do is show their face.

    It’s called “Go-Ahead Entry,” and it allows people to enter Nationals Park using facial authentication technology.

    “They’re able to register through the app and then go through our dedicated lanes, and essentially skip any lines,” said Thomas Kildahl, executive director of ticket services for the Nationals.

Fans can enroll by logging into the MLB Ballpark app and following the prompts to capture an image of their face.

    After that, when they get to the stadium, all they need to do is walk through an entrance marked “Go-Ahead Entry” and a camera will scan their face and recognize that it’s them.

    “Right when you register, you’ll be set up and you’ll be able to walk through the gate right away,” Kildahl said. “We’ll have QR codes set up around the park for folks to be able to register on the spot.”

    Not every stadium entrance will be equipped with the cutting-edge technology, meaning fans who don’t want to participate don’t have to.

    Getting your own World Series ring

    The team also announced Monday that fans would soon have a chance to get their very own World Series championship ring … sort of.

    It’s a replica of the 2019 championship ring that the players earned after the Nationals defeated the Houston Astros.

    “We designed it to be just like the rings that were given to players and personnel in 2019,” said Lindsey Norris, the team’s senior director of promotions and events.

    The ring sparkles in the sunlight and, at a distance, it looks like the real thing.

    The first 20,000 fans will get one when they go to the game on April 20.

    “It has blue, red and white rhinestones,” Norris said. “It also comes with a nice little ring holder so fans can put it with their memorabilia and set it out.”

    © 2024 WTOP. All Rights Reserved. This website is not intended for users located within the European Economic Area.

Nick Iannelli

  • Haunting ‘Demon Faces’ Show What It’s Like to Have Rare Distorted Face Syndrome

A 58-year-old man with a rare medical condition sees faces normally on screens and paper, but in person, they take on a demonic quality. The patient has a unique case of prosopometamorphopsia (PMO), a condition that causes people’s faces to appear distorted, reptilian, or otherwise inhuman.

    A new study published in The Lancet describes the case, which is unique in that, to the man, the faces only appear demonic when the individuals are physically present. The patient has been perceiving faces as distorted for 31 months; at first, it was distressing to him, but now, he has “become habituated to them,” the paper states.

Because faces appear ordinary to him on screens and on paper, the research team had a unique opportunity to probe how the distortions manifest and create accurate visualizations of the “demonic” countenances.

    “In other studies of the condition, patients with PMO are unable to assess how accurately a visualization of their distortions represents what they see because the visualization itself also depicts a face, so the patients will perceive distortions on it too,” said Antônio Mello, a researcher at Dartmouth College and lead author of the study, in a university release. “Through the process, we were able to visualize the patient’s real-time perception of the face distortions.”

    For the patient, faces in person are unsettlingly distorted. Eyes are stretched and angular, nostrils flare out and lips stretch outwards to comprise the entire width of the face. Grooves appear in the forehead, and ears warp into an elvish shape, ending in sharp points. In milder cases, facial features merely droop, appear out of position, or are smaller or larger than they are in real life.

    In another case, published in The Lancet in 2014, a 52-year-old woman in The Netherlands reported:

    A life-long history of seeing people’s faces change into dragon-like faces and hallucinating similar faces many times a day. She could perceive and recognise actual faces, but after several minutes they turned black, grew long, pointy ears and a protruding snout, and displayed a reptiloid skin and huge eyes in bright yellow, green, blue, or red. She saw similar dragon-like faces drifting towards her many times a day from the walls, electrical sockets, or the computer screen, in both the presence and absence of face-like patterns, and at night she saw many dragon-like faces in the dark.

    According to Brad Duchaine, senior author on the study and principal investigator of Dartmouth’s Social Perception Lab, people suffering from PMO are often diagnosed with other disorders, like schizophrenia, and prescribed anti-psychotics.

    “It’s not uncommon for people who have PMO to not tell others about their problem with face perception because they fear others will think the distortions are a sign of a psychiatric disorder,” Duchaine said. “It’s a problem that people often don’t understand.”

The 58-year-old patient had a history of bipolar affective disorder and post-traumatic stress disorder (PTSD), the research team noted, as well as a head injury when he was 43 years old. The patient had no impairments to his eyesight, though imaging revealed a small round lesion on his left hippocampus, which the team concluded was a cyst. Other individuals suffering from Alice in Wonderland syndrome (a catch-all term for perceptual distortions) have also been reported to have brain lesions; encephalitis, migraines, and psychoactive drug use are also linked with the syndrome, though none were observed in this patient’s case.

    To characterize the facial distortions, the researchers had the man describe perceived differences between the face of a person in the room with him and a photo of that person. Due to his PMO, the in-person face was distorted, and the on-screen face looked like an ordinary face.

PMO can last just days for some, and years for others. Only 75 case reports of PMO have been published, according to the researchers. It’s certainly one of the rarer—and more disturbing—perceptual disorders, but knowing how it manifests means that fewer patients will be misdiagnosed in the future.

Isaac Schultz

  • Grandfather Sues For Rape & False Imprisonment, Says Facial Recognition Software Falsely Identified Him As Robbery Suspect – Perez Hilton

    [Warning: Potentially Triggering Content]

    We’ve already been seeing the upsetting and harmful consequences that can come from the use of artificial intelligence — but this grandfather’s claims make it all the more scary.

Earlier this week, Harvey Eugene Murphy Jr., a 61-year-old Houston resident, filed a lawsuit against Sunglass Hut, its parent company EssilorLuxottica, and Macy’s, claiming their facial recognition software led to his false imprisonment and eventual rape.

In January 2022, an armed robbery at a Sunglass Hut in Houston resulted in the theft of thousands of dollars’ worth of merchandise and cash. According to Murphy’s filings, which have been reported on by multiple outlets, EssilorLuxottica teamed up with Macy’s to use facial recognition software to identify their suspect. Murphy also says he didn’t know anything was out of the ordinary until he tried to renew his driver’s license, which brought up his warrant. How awful!

Based on a police lineup, which the man’s lawyers at Rusty Hardin & Associates claim was tainted, and what he believes was “faulty” AI on the companies’ part, Murphy was arrested and put in jail. While speaking to The Guardian about how it all went down, Murphy said:

    “I almost thought it was a joke.”

    But the alleged false imprisonment wasn’t the only horror he had to deal with. He claims that while behind bars, he was sexually assaulted by three men in the prison bathroom, which left him with permanent injuries. Sadly, he also says he didn’t report the incident at the time for fear of retaliation from his abusers:

    “That was kind of terrifying. Your anxiety is up so high, you’re still shaking the entire time. And I just got up on my bunk and just faced the wall and was just praying that something would come through and get me out of that tank.”

    Heartbreaking…

According to reports, the Harris County District Attorney’s office has since determined, per his lawyer, that he was NOT involved in the crime.

    Murphy is seeking $10 million in damages for the alleged negligence, while his lawyers are not only fighting for him, but trying to put out a BIG warning to everyone else. His attorney Daniel Dutko told CBS:

    “Mr. Murphy’s story is troubling for every citizen in this country. Any person could be improperly charged with a crime based on error-prone facial recognition software just as he was.”

And to make matters worse, studies conducted by the ACLU strongly suggest that minorities are more at risk for false positives with facial recognition software.

    Such a scary situation, we truly can’t imagine…

    What do U think, Perezcious readers? Let us know in the comments (below).

    If you or someone you know has experienced sexual violence and would like to learn more about resources, consider checking out https://www.rainn.org/resources.

    [Image via FOX 26 Houston/YouTube]

Perez Hilton