ReportWire

Tag: Privacy

  • California’s biggest credit union SchoolsFirst tackles cybersecurity

    When Bill Cheney led the credit union industry’s national trade association, policymakers often asked him, “If credit unions are as good a deal as you say, why isn’t everyone a member of a credit union?”

    His response was always, “Exactly!”

    “If I were the CEO of a bank, my job would be to maximize the value of that bank for the shareholders,” said Cheney, who is now the CEO of SchoolsFirst Federal Credit Union, the largest credit union in California for school employees and their families. “We don’t pay dividends to shareholders because we don’t have shareholders; we pay dividends to our members. Our job is to put members first. It’s really an amazing business model.”

    As a member-owned, not-for-profit financial cooperative, SchoolsFirst is part of a unique and trusted banking experience 90 years in the making.

    Founded on June 12, 1934, during the Great Depression, what was then the Orange County Teachers Credit Union began when 126 school employees pooled $1,200. The credit union has grown steadily since.

    A 2020 merger with Sacramento-based Schools Financial Credit Union made the state’s largest credit union even bigger. Originally serving Orange County, it now covers the entire state, offering a variety of products and services such as checking and savings, credit cards, home and car loans and retirement planning.

    With this expansion, SchoolsFirst’s big challenge is educating younger generations about credit unions while safeguarding its members’ finances against cyberattacks and effectively integrating new technologies.

    Southern California News Group spoke to Cheney about SchoolsFirst’s 90 years of serving school employees and their families and what the future might hold. The interview has been edited for space:

    Q: Do all credit unions focus on a specific community?

    A: Credit unions have what’s called a field of membership. Our field of membership is the educational community and has changed only in the sense that we’ve expanded geographically.

    Q: Did that expansion coincide with your recent merger?

    A: No, we actually expanded our charter before that.

    Schools Financial became part of SchoolsFirst on January 1, 2020, but our systems were integrated toward the end of the year. When we planned the merger, we didn’t plan to send everybody home in the middle of March — hats off to our team for pulling it off.

    Q: What impact did the pandemic have on your day-to-day business?

    A: We’re an essential business, so we kept all our branches open except those serving colleges, universities and school districts. For example, we closed a small branch at Cal State Fullerton, but our biggest, oldest and busiest branch in Santa Ana stayed open.

    We had to move quickly to protect the employees at our branches. But we also sent hundreds of team members home, so we had to make arrangements for them to work from home.

    That first week, I reassured our team — and the rest of our leadership team did as well — that everybody’s job was protected regardless of their role in the organization and that our members needed us now more than ever.

    Q: And how did you reassure your members?

    A: We have an emergency loan program for use if, for example, there’s a state government shutdown and people’s pay is delayed. It hasn’t happened for a while, but it has happened. And so, we had this program in place (during Covid-19).

    The government stepped in and provided stimulus payments, so we didn’t have to utilize (the program) too much. But some of our members did lose their jobs and that emergency loan program helped them through that interim period until the government stimulus kicked in.

    But the big challenge credit unions face is educating younger generations about their value, mission, and purpose because it’s not always clear. Even some of our members refer to us as their bank. We are in the banking business, but we are not a bank. We’re a credit union; we’re a mutual.

    We have board members like a bank, but our board members are elected by our members to serve as volunteers to run this $30 billion financial institution. They represent our members’ interests, and that builds trust.

    Q: Can we talk about services? For example, there is immense pressure in California to own and finance a home. How is SchoolsFirst working to make these loans happen for your members, and how much of the business does it represent?

    A: People are challenged by higher interest rates and higher prices. Higher interest rates are good for our members who save, but if you’re a borrower, it’s challenging. You used to be able to get a mortgage for 3%, and now rates are close to 7% or higher. That’s a big difference on a home payment in a high-priced market like California.

    Real estate is a huge part of our business—not as much as it was when rates were lower, but we do make a lot of mortgage loans and home equity loans. Most of our real estate team is in Tustin, although we also have operation centers in Riverside and Sacramento.

    With first mortgage lending, we do have some flexibility, but the rates are pretty much set by the secondary market. Our rates are competitive, but the difference may not be as much on the real estate side, just because of the way the market works.

    What’s different are the fees and the terms of the loans. For instance, we have a special school employee mortgage with a low down payment and no private mortgage insurance requirement. By not requiring them to have that, we’re able to lower their monthly cost quite dramatically.

    Q: Do you ever bundle and sell loans?

    A: It does happen occasionally, but when we sell a loan, we retain the servicing. The member still comes through us for everything.

    Q: Why do you think SchoolsFirst has managed to grow when smaller credit unions have folded or been absorbed?

    A: We’ve expanded geographically, and we’ve certainly changed a lot in the products and services that we offer over the 90 years. I actually started on the 80th year of the credit union, coincidentally, and we’ve seen a lot of growth in that time period. But really, since our beginning, we’ve stayed focused on school employees and their families with, as we say in our mission statement, world-class personal service.

    Q: What does the future look like for SchoolsFirst?

    A: Things are now changing faster than ever, and our members’ needs are changing. Cybersecurity is a huge deal. We have a great team here that protects our system and our servers. And, of course, you can’t open a newspaper or turn on a program without hearing about AI.

    In some respects, we’ve been using artificial intelligence in our business for a long time, but it isn’t the same as people. If a member calls with a question, for example, we have an internal pilot that uses AI to help our team quickly find the answer by going through thousands of pages of standard operating procedures. But a person always answers the member’s question.

    Continuing to focus on our members and anticipate their needs and look out for their financial wellbeing—it’s what got us to this point. And that’s what is going to make us successful in the future.

    Q: Will you continue to expand geographically?

    A: Yes. We are expanding geographically in several ways. We have a wholly owned subsidiary that provides third-party administration services to more than 300 school districts and county offices. That’s expanding statewide, as far north as Nevada County.

    We also work with a third party to help us understand where our members are and where there’s potential for growth in terms of our future expansion. We typically add two or three branches a year, so it’s not rapid growth; it’s controlled. Even if people never go into a branch, they like to know that there is one convenient to them in case they need it.

    Bill Cheney

    Title: CEO

    Organization: SchoolsFirst Federal Credit Union has more than $30 billion in assets and serves 1.4 million school employees and their families. It has 69 branches and more than 300 ATMs statewide. Members can also access a cooperative network of thousands of fee-free ATMs in California and nationwide.

    When he first joined a credit union: “My initial introduction to credit unions was (at the McCombs School of Business at the University of Texas at Austin),” he said. “I worked for the State Property Tax Board and joined the Public Employees Credit Union in Austin, Texas, in the early ’80s.”

    How he ended up working for credit unions: After graduating from college, Cheney spent five years at what was then Andersen Consulting.

    Sandra Barrera

  • My Memories Are Just Meta’s Training Data Now

    In R. C. Sherriff’s novel The Hopkins Manuscript, readers are transported to a world 800 years after a cataclysmic event ended Western civilization. In pursuit of clues about a blank spot in their planet’s history, scientists belonging to a new world order discover diary entries in a swamp-infested wasteland formerly known as England. For the inhabitants of this new empire, it is only through this record of a retired school teacher’s humdrum rural life, his petty vanities and attempts to breed prize-winning chickens, that they begin to learn about 20th-century Britain.

    If I were to teach futuristic beings about life on earth, I once believed I could produce a time capsule more profound than Sherriff’s small-minded protagonist, Edgar Hopkins. But scrolling through my decade-old Facebook posts this week, I was presented with the possibility that my legacy may be even more drab.

    Earlier this month, Meta announced that my teenage status updates were exactly the kind of content it wants to pass on to future generations of artificial intelligence. From June 26, old public posts, holiday photos, and even the names of millions of Facebook and Instagram users around the world would effectively be treated as a time capsule of humanity and transformed into training data.

    That means my mundane posts about university essay deadlines (“3 energy drinks down 1,000 words to go”) as well as unremarkable holiday snaps (one captures me slumped over my phone on a stationary ferry) are about to become part of that corpus. The fact that these memories are so dull, and also very personal, makes Meta’s interest more unsettling.

    The company says it is only interested in content that is already public: private messages, posts shared exclusively with friends, and Instagram Stories are out of bounds. Despite that, AI is suddenly feasting on personal artifacts that have, for years, been gathering dust in unvisited corners of the internet. For those reading from outside Europe, the deed is already done. The deadline announced by Meta applied only to Europeans. The posts of American Facebook and Instagram users have been training Meta AI models since 2023, according to company spokesperson Matthew Pollard.

    Meta is not the only company turning my online history into AI fodder. WIRED’s Reece Rogers recently discovered that Google’s AI search feature was copying his journalism. But finding out which personal remnants exactly are feeding future chatbots was not easy. Some sites I’ve contributed to over the years are hard to trace. Early social network Myspace was acquired by Time Inc. in 2016, which in turn was acquired by a company called Meredith Corporation two years later. When I asked Meredith about my old account, they replied that Myspace had since been spun off to an advertising firm, Viant Technology. An email to a company contact listed on its website was returned with a message that the address “couldn’t be found.”

    Asking companies still in business about my old accounts was more straightforward. Blogging platform Tumblr, owned by WordPress owner Automattic, said unless I’d opted out, the public posts I made as a teenager will be shared with “a small network of content and research partners, including those that train AI models” per a February announcement. Yahoo Mail, which I used for years, told me that a sample of old emails—which have apparently been “anonymized” and “aggregated”—is being “utilized” by an AI model internally to do things like summarize messages. Microsoft-owned LinkedIn also said my public posts were being used to train AI, although some “personal” details included in those posts were excluded, according to a company spokesperson, who did not specify what those personal details were.

    Morgan Meaker

  • Salem State gets $624K grant for cybersecurity training center

    SALEM — Salem State University announced this week that it received a $624,437 grant to establish and operate a cybersecurity training facility on campus.

    The grant is part of the state’s Security Operations Center (SOC) Cyber Range Initiative, a program managed by MassTech’s MassCyberCenter that aims to help build a diverse generation of cybersecurity professionals through education, training and workforce development, according to a news release.

    “Massachusetts is committed to leading in cybersecurity and ensuring that all communities have the skills, resources and capacity to protect their businesses and residents,” Gov. Maura Healey said. “Congratulations to Salem State on this award and their efforts to grow the cyber workforce.”

    Lt. Gov. Kim Driscoll said she is proud, “as Salem’s former mayor and a Salem State graduate … of the work the university is doing to teach students critical cybersecurity skills.

    “Cybersecurity affects every part of our community whether you are a small business, elementary school or local government office. The more cybersecurity professionals we have, the more we can ensure our communities are protected online,” Driscoll said.

    “Salem State is grateful to the Healey-Driscoll Administration and the MassCyberCenter for selecting us for this important partnership,” Salem State President John Keenan said. “This type of investment and professional relationships are a win-win for everyone involved.

    “Like our nursing and occupational therapy simulation labs, the CyberRange will imitate real-world problems for students to solve in real time,” he said.

    The funding is expected “to promote cybersecurity while also ensuring Massachusetts stays competitive in modern economic development,” said Yvonne Hao, state secretary of economic development and board chair of the Massachusetts Technology Collaborative.

    Salem State will join Bridgewater State University, Springfield Technical Community College and MassBay Community College as a critical part of a statewide network of cybersecurity educators, MassCyberCenter Director John Petrozzelli said.

    The award will support capital expenditures to construct the CyberRange and expenditures for the first year of operations.

    The center is expected to promote the Massachusetts cybersecurity ecosystem by working to build a strong cyber talent pipeline and to strengthen the defense of local communities.

    More information is available online at https://masscybercenter.org.

    By Buck Anderson | Staff Writer

  • Vermont Legislature overrides governor, passing overdose prevention, renewable energy, tax measures

    The Democratic-controlled Vermont Legislature on Monday overturned a number of the Republican governor’s vetoes, passing measures to prevent drug overdoses, restrict a pesticide that’s toxic to bees and require state utilities to source all renewable energy by 2035.

    But the Legislature failed to override Gov. Phil Scott’s veto of a data privacy bill that was considered to be among the strongest in the country. It would have allowed consumers to file civil lawsuits against companies that break certain privacy rules. Scott vetoed the legislation last week, saying it would make Vermont “a national outlier and more hostile than any other state to many businesses and non-profits.”

    The Vermont House voted to override his veto but the Senate sustained his decision.

    The vote came after the Legislature reconvened Monday to try to override Scott’s vetoes of seven bills. Each chamber needed two-thirds of those present to vote to override to be successful in passing the bills.

    Senate President Pro Tem Philip Baruth, a Democrat, thanked colleagues at the end of the day, calling it “an incredibly productive day, a long day and an exhausting day in many ways but with brilliant results.”

    Gov. Scott, on the other hand, called it a sad day for Vermonters “who simply cannot afford further tax burdens and cost increases. Many will talk about these votes as a major loss for me, but it’s really a major loss for Vermont taxpayers, workers and families.”

    Scott said last month that the Legislature is out of balance and at times “focuses so much on their goals they don’t consider the unintended consequences.” While he said his vetoes aren’t popular in Montpelier, “I’ll take that heat when I believe I’m making the right choice for the everyday Vermonter,” Scott said.

    The drug overdose prevention law enacted by the Legislature allows for a safe injection site in Burlington, Vermont’s largest city, where people can use narcotics under the supervision of trained staff and be revived if they take too much.

    The center will provide referrals to addiction treatment as well as medical and social services. It also will offer education about overdose prevention and distribute overdose reversal medications.

    “The data is clear. Overdose prevention centers save lives, connect people to treatment, reduce pressures on emergency rooms and Emergency Medical Services, and reduce public drug consumption and discarded supplies in our communities,” Baruth said in a statement.

    The new law allocates $1.1 million in fiscal year 2025 to the Vermont Department of Health to award grants to the city of Burlington to establish such a center. The money will come from the Opioid Abatement Special Fund made up of Vermont’s share of a national settlement with drug manufacturers and distribution companies. Before then, the Health Department is required to contract with a researcher or consultant to study the impact of the overdose prevention center pilot program.

    Two years ago, the first sanctioned overdose prevention centers in the country opened in New York City, according to the Drug Policy Alliance. Rhode Island is expected to open one in Providence this summer.

    By Monday afternoon, the state House and Senate had overturned the governor’s veto of a bill that requires state utilities to source all renewable energy by 2035, making Vermont the second state with such an ambitious timeline. Scott had said the renewable energy bill would be too costly for ratepayers. Under the legislation, the biggest utilities will need to meet the goal by 2030.

    “The renewable energy standard will put Vermont on track to achieve 100% renewable electricity by 2035, dramatically reducing planet-warming carbon pollution and saving Vermonters money over time,” Baruth said in a separate statement. He called the governor’s veto an attempt to continue rejecting “critical progress on climate action” at a time when Vermonters still are facing “the impacts of recent climate disasters.”

    The Legislature also enacted a property tax bill to pay for education that will increase property taxes by an average of nearly 14% and create a committee to recommend changes to make Vermont’s education system more affordable. Scott has said Vermonters cannot afford double-digit tax increases.

    In addition, the Legislature overrode Scott’s veto of a measure that restricts a type of pesticide that’s toxic to bees. The Legislature passed the bill after New York Gov. Kathy Hochul signed off on what she described as the nation’s first law last year to severely limit the use of neonics in her state. In vetoing the bill, Scott said it was “more anti-farmer than it is pro-pollinator.”

  • Apple’s AI Cloud System Makes Big Privacy Promises, but Can It Keep Them?

    Apple’s new Apple Intelligence system is designed to infuse generative AI into the core of iOS. The system offers users a host of new services, including text and image generation as well as organizational and scheduling features. Yet while the system provides impressive new capabilities, it also brings complications. For one thing, the AI system relies on a huge amount of iPhone users’ data, presenting potential privacy risks. At the same time, the AI system’s substantial need for increased computational power means that Apple will have to rely increasingly on its cloud system to fulfill users’ requests.

    Apple has historically offered iPhone customers unparalleled privacy; it’s a big part of the company’s brand. Part of those privacy assurances has been the option to choose when mobile data is stored locally and when it’s stored in the cloud. While an increased reliance on the cloud might ring some privacy alarm bells, Apple has anticipated these concerns and created a startling new system that it calls its Private Cloud Compute, or PCC. This is really a cloud security system designed to keep users’ data away from prying eyes while it’s being used to help fulfill AI-related requests.

    On paper, Apple’s new privacy system sounds really impressive. The company claims to have created “the most advanced security architecture ever deployed for cloud AI compute at scale.” But what looks like a massive achievement on paper could ultimately cause broader issues for user privacy down the road. And it’s unclear, at least at this juncture, whether Apple will be able to live up to its lofty promises.

    How Apple’s Private Cloud Compute Is Supposed to Work

    In many ways, cloud systems are just giant databases. If a bad actor gets into that system/database, they can look at the data contained within. However, Apple’s Private Cloud Compute (PCC) brings a number of unique safeguards that are designed to prevent that kind of access.

    Apple says it has implemented its security system at both the software and hardware levels. The company created custom servers that will house the new cloud system, and those servers go through a rigorous process of screening during manufacturing to ensure they are secure. “We inventory and perform high-resolution imaging of the components of the PCC node,” the company claims. The servers are also being outfitted with physical security mechanisms such as a tamper-proof seal. iPhone users’ devices can only connect to servers that have been certified as part of the protected system, and those connections are end-to-end encrypted, meaning that the data being transmitted is pretty much untouchable while in transit.
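
    The "only connect to certified servers" idea can be pictured with a small sketch. This is purely illustrative Python, not Apple's protocol: the `measurement` digest stands in for a hardware-attested software measurement, and the allow-list stands in for the set of certified PCC node images; the real system relies on secure-enclave attestation and encrypted key release, not a plain hash lookup.

```python
import hashlib

def measurement(image: bytes) -> str:
    """Digest standing in for a hardware-attested software measurement."""
    return hashlib.sha256(image).hexdigest()

# Hypothetical allow-list: measurements of node images that have been
# certified as part of the protected system.
CERTIFIED = {measurement(b"pcc-node-image-v1")}

def may_send(server_image: bytes) -> bool:
    """Device-side gate: refuse to send user data unless the server
    proves it is running a certified image."""
    return measurement(server_image) in CERTIFIED
```

    In this toy model, a tampered server presents a different measurement, so the device simply never transmits data to it.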

    Once the data reaches Apple’s servers, there are more protections to ensure that it stays private. Apple says its cloud is leveraging stateless computing to create a system where user data isn’t retained past the point at which it is used to fulfill an AI service request. So, according to Apple, your data won’t have a significant lifespan in its system. The data will travel from your phone to the cloud, interact with Apple’s high-octane AI algorithms—thus fulfilling whatever random question or request you’ve submitted (“draw me a picture of the Eiffel Tower on Mars”)—and then the data (again, according to Apple) will be deleted.

    Apple has instituted an array of other security and privacy protections that can be read about in more detail on the company’s blog. These defenses, while diverse, all seem designed to do one thing: prevent any breach of the company’s new cloud system.

    But Is This Really Legit?

    Companies make big cybersecurity promises all the time and it’s usually impossible to verify whether they’re telling the truth or not. FTX, the failed crypto exchange, once claimed it kept users’ digital assets in air-gapped servers. Later investigation showed that was pure bullshit. But Apple is different, of course. To prove to outside observers that it’s really securing its cloud, the company says it will launch something called a “transparency log” that involves full production software images (basically copies of the code being used by the system). It plans to publish these logs regularly so that outside researchers can verify that the cloud is operating just as Apple says.
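
    As a rough mental model of such a transparency log (hypothetical names, and a drastic simplification: production designs use signed, append-only Merkle-tree logs rather than a plain list):

```python
import hashlib

# Hypothetical append-only log of digests of published software images.
transparency_log: list[str] = []

def publish(image: bytes) -> str:
    """Operator publishes the digest of a production software image."""
    digest = hashlib.sha256(image).hexdigest()
    transparency_log.append(digest)
    return digest

def verify(image: bytes) -> bool:
    """A researcher (or a device) checks that the code a server claims
    to run matches an image that was actually made public."""
    return hashlib.sha256(image).hexdigest() in transparency_log
```

    The point of the scheme is that a server running software whose image was never published simply fails verification, so any silent change to the cloud's code becomes detectable.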

    What People Are Saying About the PCC

    Apple’s new privacy system has notably polarized the tech community. While the sizable effort and unparalleled transparency that characterize the project have impressed many, some are wary of the broader impacts it may have on mobile privacy in general. Most notably—aka loudly—Elon Musk immediately began proclaiming that Apple had betrayed its customers.

    Simon Willison, a web developer and programmer, told Gizmodo that the “scale of ambition” of the new cloud system impressed him.

    “They are addressing multiple extremely hard problems in the field of privacy engineering, all at once,” he said. “The most impressive part I think is the auditability—the bit where they will publish images for review in a transparency log which devices can use to ensure they are only talking to a server running software that has been made public. Apple employs some of the best privacy engineers in the business, but even by their standards this is a formidable piece of work.”

    But not everybody is so enthused. Matthew Green, a cryptography professor at Johns Hopkins University, expressed skepticism about Apple’s new system and the promises that went along with it.

    “I don’t love it,” said Green with a sigh. “My big concern is that it’s going to centralize a lot more user data in a data center, whereas right now most of that is on people’s actual phones.”

    Historically, Apple has made local data storage a mainstay of its mobile design, because cloud systems are known for their privacy deficiencies.

    “Cloud servers are not secure, so Apple has always had this approach,” Green said. “The problem is that, with all this AI stuff that’s going on, Apple’s internal chips are not powerful enough to do the stuff that they want it to do. So they need to send the data to servers and they’re trying to build these super protected servers that nobody can hack into.”

    He understands why Apple is making this move, but doesn’t necessarily agree with it, since it means a higher reliance on the cloud.

    Green says Apple also hasn’t made it clear whether it will explain to users what data remains local and what data will be shared with the cloud. This means that users may not know what data is being exported from their phones. At the same time, Apple hasn’t made it clear whether iPhone users will be able to opt out of the new PCC system. If users are forced to share a certain percentage of their data with Apple’s cloud, it may signal less autonomy for the average user, not more. Gizmodo reached out to Apple for clarification on both of these points and will update this story if the company responds.

    To Green, Apple’s new PCC system signals a shift in the phone industry to a more cloud-reliant posture. This could lead to a less secure privacy environment overall, he says.

    “I have very mixed feelings about it,” Green said. “I think enough companies are going to be deploying very sophisticated AI [to the point] where no company is going to want to be left behind. I think consumers will probably punish companies that don’t have great AI features.”

    Lucas Ropek

  • Bangladeshi police agents accused of selling citizens’ personal information on Telegram | TechCrunch

    Two senior officials working for anti-terror police in Bangladesh allegedly collected and sold classified and personal information of citizens to criminals on Telegram, TechCrunch has learned. 

    The data allegedly sold included national identity details of citizens, cell phone call records and other “classified secret information,” according to a letter signed by a senior Bangladeshi intelligence official, seen by TechCrunch.

    The letter, dated April 28, was written by Brigadier General Mohammad Baker, who serves as a director of Bangladesh’s National Telecommunications Monitoring Center, or NTMC, the country’s electronic eavesdropping agency. Baker confirmed the legitimacy of the letter and its contents in an interview with TechCrunch. 

    “Departmental investigation is ongoing for both the cases,” Baker said in an online chat, adding that the Bangladeshi Ministry of Home Affairs ordered the affected police organizations to take “necessary action against those officers.” 

    The letter, which was originally written in Bengali and addressed to the senior secretary of the Ministry of Home Affairs Public Security Division, alleges the two police agents accessed and passed “extremely sensitive information” of private citizens on Telegram in exchange for money.

    According to the letter, the police agents were caught after investigators analyzed logs of the NTMC’s systems and how often the two accessed it.

    The letter reveals the identity of the officials. One of the accused is a police superintendent serving with the Anti-Terrorism Unit (ATU). The other is an assistant police superintendent deputed to the Rapid Action Battalion, also known as RAB 6, a controversial paramilitary unit that the U.S. government sanctioned in 2021 over allegations that the unit is linked to hundreds of disappearances and extrajudicial killings. TechCrunch is not naming the two accused officials, as it’s unclear whether they have been charged under the country’s legal system.

    The NTMC is a government intelligence agency established under Bangladesh’s Ministry of Home Affairs. The agency’s core task is to monitor all telecommunications traffic and intercept phone and web communications to detect and prevent threats to national security. 

    Organizations like Human Rights Watch and Freedom House have criticized the NTMC for lacking safeguards against abuses, both against free speech as well as privacy. Over the years, NTMC procured sophisticated technology from companies in Israel, which Bangladesh does not officially recognize, as well as other Western countries, to conduct mass surveillance largely on opposition party members, journalists, civil society members and activists.  

    As part of its mission, the NTMC runs the National Intelligence Platform, or NIP, an internal government web portal that holds classified citizen information, like national identification details, cell phone registration and cell data records, criminal profiles and other information. 

    Various law enforcement and intelligence agencies have user accounts on the NIP portal provided by the NTMC. 

    NTMC’s own investigation concluded that the agents used the NIP platform more frequently than others, and accessed and collected information that was not relevant to them.

    “Considering the context, such irrelevant access and unlawful handover of extremely sensitive classified data should be investigated to identify everyone involved in this and we also request for appropriate action against all those identified/involved,” the letter read.  

    Baker told TechCrunch that there were a “number of Telegram channels,” adding that one of them was called BD CYBER GANG.

    TechCrunch could not identify the specific channel on Telegram. 

    Contact Us

    Do you have more information about this incident, or similar incidents? From a non-work device, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram, Keybase and Wire @lorenzofb, or email. You can also reach out to Zulkarnain Saer Khan on Signal at +36707723819, or on X @ZulkarnainSaer. You also can contact TechCrunch via SecureDrop.

    Baker told TechCrunch that it appears that the two agents sent the information to the administrator of at least one Telegram group, who then attempted to sell it. 

    Baker said that the two agents have been notified of the investigation. 

    Because of the investigation, all NIP users from ATU and RAB 6 have had their access suspended “until the involved officials are identified, and proper action is taken,” according to the letter.

    Baker confirmed the suspended access, saying that if agents “need any information for investigation purposes they can collect through Police and RAB HQ.”

    Spokespeople for Bangladesh’s Ministry of Home Affairs and ATU did not respond to multiple requests for comment. A person identifying only as an “operations officer” at RAB 6 told TechCrunch that the agency had no comment. 

    Last year, a security researcher found that the NTMC was leaking people’s personal information on an unsecured server. The leaked data included real-world names, phone numbers, email addresses, locations and exam results, according to Wired. Another Bangladeshi government agency, the Office of the Registrar General, Birth & Death Registration, also leaked citizens’ sensitive data last year, as TechCrunch reported at the time.

    In both cases, the leaks were found by Viktor Markopoulos, a researcher who works at Bitcrack Cyber Security. 

    While those were significant cases of data exposure, this incident allegedly involving the ATU and RAB 6 agents is potentially more damaging, given that the agents allegedly sold information online in an attempt to profit from their privileged access to classified personal information.  

    Although the incident is under investigation, a well-placed source within the government told TechCrunch that there are still officials who are offering to sell citizens’ data.

    Lorenzo Franceschi-Bicchierai

  • Hacked, leaked, exposed: Why you should never use stalkerware apps | TechCrunch

    Last week, an unknown hacker broke into the servers of the U.S.-based stalkerware maker pcTattletale. The hacker then stole and leaked the company’s internal data. They also defaced pcTattletale’s official website with the goal of embarrassing the company. 

“This took a total of 15 minutes from reading the techcrunch article,” the hacker wrote in the defacement, referring to a recent TechCrunch article where we reported that pcTattletale was used to monitor several front desk check-in computers at Wyndham hotels across the United States.

    As a result of this hack, leak and shame operation, pcTattletale founder Bryan Fleming said he was shutting down his company.

    Consumer spyware apps like pcTattletale are commonly referred to as stalkerware because jealous spouses and partners use them to surreptitiously monitor and surveil their loved ones. These companies often explicitly market their products as solutions to catch cheating partners by encouraging illegal and unethical behavior. And there have been multiple court cases, journalistic investigations, and surveys of domestic abuse shelters that show that online stalking and monitoring can lead to cases of real-world harm and violence. 

    And that’s why hackers have repeatedly targeted some of these companies.

    According to TechCrunch’s tally, with this latest hack, pcTattletale has become the 20th stalkerware company since 2017 that is known to have been hacked or leaked customer and victims’ data online. That’s not a typo: Twenty stalkerware companies have either been hacked or had a significant data exposure in recent years. And three stalkerware companies were hacked multiple times. 

Eva Galperin, the director of cybersecurity at the Electronic Frontier Foundation and a leading researcher and activist who has investigated and fought stalkerware for years, said the stalkerware industry is a “soft target.” “The people who run these companies are perhaps not the most scrupulous or really concerned about the quality of their product,” Galperin told TechCrunch.

    Given the history of stalkerware compromises, that may be an understatement. And because of the lack of care for protecting their own customers — and consequently the personal data of tens of thousands of unwitting victims — using these apps is doubly irresponsible. The stalkerware customers may be breaking the law, abusing their partners by illegally spying on them, and, on top of that, putting everyone’s data in danger. 

    A history of stalkerware hacks

The flurry of stalkerware breaches began in 2017, when a group of hackers breached the U.S.-based Retina-X and the Thailand-based FlexiSpy back to back. Those two hacks revealed that the companies had a combined 130,000 customers all over the world.

    At the time, the hackers who — proudly — claimed responsibility for the compromises explicitly said their motivations were to expose and hopefully help destroy an industry that they consider toxic and unethical.

    “I’m going to burn them to the ground, and leave absolutely nowhere for any of them to hide,” one of the hackers involved then told Motherboard. 

    Referring to FlexiSpy, the hacker added: “I hope they’ll fall apart and fail as a company, and have some time to reflect on what they did. However, I fear they might try and give birth to themselves again in a new form. But if they do, I’ll be there.”

    Despite the hack, and years of negative public attention, FlexiSpy is still active today. The same cannot be said about Retina-X.

The hacker who broke into Retina-X wiped its servers with the goal of hampering its operations. The company bounced back — and then it got hacked again a year later. A couple of weeks after the second breach, Retina-X announced that it was shutting down.

    Just days after the second Retina-X breach, hackers hit Mobistealth and Spy Master Pro, stealing gigabytes of customer and business records, as well as victims’ intercepted messages and precise GPS locations. Another stalkerware vendor, the India-based SpyHuman, encountered the same fate a few months later, with hackers stealing text messages and call metadata, which contained logs of who called who and when. 

    Weeks later, there was the first case of accidental data exposure, rather than a hack. SpyFone left an Amazon-hosted S3 storage bucket unprotected online, which meant anyone could see and download text messages, photos, audio recordings, contacts, location, scrambled passwords and login information, Facebook messages and more. All that data was stolen from victims, most of whom did not know they were being spied on, let alone know their most sensitive personal data was also on the internet for all to see. 

Other stalkerware companies that over the years have irresponsibly left customer and victims’ data online are FamilyOrbit, which left 281 gigabytes of personal data online protected only by an easy-to-find password; mSpy, which leaked over 2 million customer records; Xnore, which let any of its customers see the personal data of other customers’ targets, which included chat messages, GPS coordinates, emails, photos and more; Mobiispy, which left 25,000 audio recordings and 95,000 images on a server accessible to anyone; KidsGuard, which had a misconfigured server that leaked victims’ content; pcTattletale, which prior to its hack also exposed screenshots of victims’ devices uploaded in real-time to a website that anyone could access; and Xnspy, whose developers left credentials and private keys in the apps’ code, allowing anyone to access victims’ data.

Among the other stalkerware companies that actually got hacked, there was Copy9, which saw a hacker steal the data of all its surveillance targets, including text messages and WhatsApp messages, call recordings, photos, contacts, and browsing history; LetMeSpy, which shut down after hackers breached and wiped its servers; the Brazil-based WebDetetive, which also got its servers wiped, and then hacked again; OwnSpy, which provides much of the backend software for WebDetetive and also got hacked; Spyhide, which had a vulnerability in its code that allowed a hacker to access the back-end databases and years of data stolen from around 60,000 victims; and Oospy, a rebrand of Spyhide, which shut down for a second time.

Finally, there is TheTruthSpy, a network of stalkerware apps, which holds the dubious record of having been hacked or having leaked data on at least three separate occasions.

Hacked, but unrepentant

    Of these 20 stalkerware companies, eight have shut down, according to TechCrunch’s tally. 

    In a first and so far unique case, the Federal Trade Commission banned SpyFone and its chief executive, Scott Zuckerman, from operating in the surveillance industry following an earlier security lapse that exposed victims’ data. Another stalkerware operation linked to Zuckerman, called SpyTrac, subsequently shut down following a TechCrunch investigation. 

    PhoneSpector and Highster, another two companies that are not known to have been hacked, also shut down after New York’s attorney general accused the companies of explicitly encouraging customers to use their software for illegal surveillance. 

    But a company closing doesn’t mean it’s gone forever. As with Spyhide and SpyFone, some of the same owners and developers behind a shuttered stalkerware maker simply rebranded. 

    “I do think that these hacks do things. They do accomplish things, they do put a dent in it,” Galperin said. “But if you think that if you hack a stalkerware company, that they will simply shake their fists, curse your name, disappear in a puff of blue smoke and never be seen again, that has most definitely not been the case.”

    “What happens most often, when you actually manage to kill a stalkerware company, is that the stalkerware company comes up like mushrooms after the rain,” Galperin added. 

    There is some good news. In a report last year, security firm Malwarebytes said that the use of stalkerware is declining, according to its own data of customers infected with this type of software. Also, Galperin reports seeing an increase in negative reviews of these apps, with customers or prospective customers complaining they don’t work as intended.

But Galperin said it’s possible that security firms aren’t as good at detecting stalkerware as they used to be, or that stalkers have moved from software-based surveillance to physical surveillance enabled by AirTags and other Bluetooth-enabled trackers.

    “Stalkerware does not exist in a vacuum. Stalkerware is part of a whole world of tech enabled abuse,” Galperin said.

    Say no to stalkerware

    Using spyware to monitor your loved ones is not only unethical, it’s also illegal in most jurisdictions, as it’s considered unlawful surveillance. 

    That is already a significant reason not to use stalkerware. Then there is the issue that stalkerware makers have proven time and time again that they cannot keep data secure — neither data belonging to the customers nor their victims or targets.

    Apart from spying on romantic partners and spouses, some people use stalkerware apps to monitor their children. While this type of use, at least in the United States, is legal, it doesn’t mean using stalkerware to snoop on your kids’ phone isn’t creepy and unethical. 

    Even if it’s lawful, Galperin thinks parents should not spy on their children without telling them, and without their consent. 

    If parents do inform their children and get their go-ahead, parents should stay away from insecure and untrustworthy stalkerware apps, and use parental tracking tools built into Apple phones and tablets and Android devices that are safer and operate overtly. 


    If you or someone you know needs help, the National Domestic Violence Hotline (1-800-799-7233) provides 24/7 free, confidential support to victims of domestic abuse and violence. If you are in an emergency situation, call 911. The Coalition Against Stalkerware has resources if you think your phone has been compromised by spyware.

    Lorenzo Franceschi-Bicchierai

  • California advances measures targeting AI discrimination and deepfakes

    SACRAMENTO, Calif. — As corporations increasingly weave artificial intelligence technologies into the daily lives of Americans, California lawmakers want to build public trust, fight algorithmic discrimination and outlaw deepfakes that involve elections or pornography.

    The efforts in California — home to many of the world’s biggest AI companies — could pave the way for AI regulations across the country. The United States is already behind Europe in regulating AI to limit risks, lawmakers and experts say, and the rapidly growing technology is raising concerns about job loss, misinformation, invasions of privacy and automation bias.

    A slew of proposals aimed at addressing those concerns advanced last week, but must win the other chamber’s approval before arriving at Gov. Gavin Newsom’s desk. The Democratic governor has promoted California as an early adopter as well as regulator, saying the state could soon deploy generative AI tools to address highway congestion, make roads safer and provide tax guidance, even as his administration considers new rules against AI discrimination in hiring practices.

    With strong privacy laws already in place, California is in a better position to enact impactful regulations than other states with large AI interests, such as New York, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.

    “You need a data privacy law to be able to pass an AI law,” Rice said. “We’re still kind of paying attention to what New York is doing, but I would put more bets on California.”

California lawmakers said they cannot wait to act, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance. But they also want to continue attracting AI companies to the state.

    Here’s a closer look at California’s proposals:

Some companies, including hospitals, already use AI models to help make decisions about hiring, housing and medical options for millions of Americans without much oversight. Up to 83% of employers are using AI to help in hiring, according to the U.S. Equal Employment Opportunity Commission. How those algorithms work largely remains a mystery.

One of the most ambitious AI measures in California this year would pull back the curtains on these models by establishing an oversight framework to prevent bias and discrimination. It would require companies using AI tools in decisions that determine outcomes to inform people affected when AI is used. AI developers would have to routinely make internal assessments of their models for bias. And the state attorney general would have authority to investigate reports of discriminatory models and impose fines of $10,000 per violation.

    AI companies also might soon be required to start disclosing what data they’re using to train their models.

    Inspired by the months-long Hollywood actors strike last year, a California lawmaker wants to protect workers from being replaced by their AI-generated clones — a major point of contention in contract negotiations.

    The proposal, backed by the California Labor Federation, would let performers back out of existing contracts if vague language might allow studios to freely use AI to digitally clone their voices and likeness. It would also require that performers be represented by an attorney or union representative when signing new “voice and likeness” contracts.

    California may also create penalties for digitally cloning dead people without the consent of their estate, citing the case of a media company that produced a fake, AI-generated hourlong comedy special to recreate the late comedian George Carlin’s style and material without his estate’s permission.

    Real-world risks abound as generative AI creates new content such as text, audio and photos in response to prompts. So lawmakers are considering requiring guardrails around “extremely large” AI systems that have the potential to spit out instructions for creating disasters — such as building chemical weapons or assisting in cyberattacks — that could cause at least $500 million in damages. It would require such models to have a built-in “kill switch,” among other things.

    The measure, supported by some of the most renowned AI researchers, would also create a new state agency to oversee developers and provide best practices, including for still-more powerful models that don’t yet exist. The state attorney general also would be able to pursue legal actions in case of violations.

A bipartisan coalition seeks to facilitate prosecuting people who use AI tools to create images of child sexual abuse. Current law does not allow district attorneys to go after people who possess or distribute AI-generated child sexual abuse images if the materials do not depict a real person, law enforcement said.

    A host of Democratic lawmakers are also backing a bill tackling election deepfakes, citing concerns after AI-generated robocalls mimicked President Joe Biden’s voice ahead of New Hampshire’s recent presidential primary. The proposal would ban “materially deceptive” deepfakes related to elections in political mailers, robocalls and TV ads 120 days before Election Day and 60 days thereafter. Another proposal would require social media platforms to label any election-related posts created by AI.

  • How Researchers Cracked an 11-Year-Old Password to a $3 Million Crypto Wallet

    “We ultimately got lucky that our parameters and time range was right. If either of those were wrong, we would have … continued to take guesses/shots in the dark,” Grand says in an email to WIRED. “It would have taken significantly longer to precompute all the possible passwords.”

    Grand and Bruno created a video to explain the technical details more thoroughly.

RoboForm, made by US-based Siber Systems, was one of the first password managers on the market, and currently has more than 6 million users worldwide, according to a company report. In 2015, Siber seemed to fix the RoboForm password manager. From a cursory glance, Grand and Bruno couldn’t find any sign that the pseudo-random number generator in the 2015 version used the computer’s time, which makes them think Siber removed it to fix the flaw, though Grand says they would need to examine it more thoroughly to be certain.
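The core weakness, a password generator whose only source of randomness is the computer's clock, can be illustrated with a toy sketch. This is not RoboForm's actual algorithm; the function, alphabet, and time window below are invented for illustration.

```python
import random
import string

# Toy model of a time-seeded password generator (NOT RoboForm's real code):
# because the only entropy is the clock value, anyone who can narrow down
# the generation time can replay the exact same "random" password.
def generate_password(timestamp: int, length: int = 12) -> str:
    alphabet = string.ascii_letters + string.digits
    rng = random.Random(timestamp)  # seeded solely from the clock
    return "".join(rng.choice(alphabet) for _ in range(length))

# Same seed always yields the same password:
assert generate_password(1_369_000_000) == generate_password(1_369_000_000)

# An attacker who knows roughly when a password was created can brute-force
# the seed over that small window instead of the full password space:
target = generate_password(1_369_000_123)
recovered = next(ts for ts in range(1_369_000_000, 1_369_001_000)
                 if generate_password(ts) == target)
print(recovered)  # 1369000123
```

This is why Grand notes that getting the "parameters and time range" right mattered: the attack searches a window of timestamps, not the astronomically larger space of all possible passwords.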

    Siber Systems confirmed to WIRED that it did fix the issue with version 7.9.14 of RoboForm, released June 10, 2015, but a spokesperson wouldn’t answer questions about how it did so. In a changelog on the company’s website, it mentions only that Siber programmers made changes to “increase randomness of generated passwords,” but it doesn’t say how they did this. Siber spokesman Simon Davis says that “RoboForm 7 was discontinued in 2017.”

    Grand says that, without knowing how Siber fixed the issue, attackers may still be able to regenerate passwords generated by versions of RoboForm released before the fix in 2015. He’s also not sure if current versions contain the problem.

    “I’m still not sure I would trust it without knowing how they actually improved the password generation in more recent versions,” he says. “I’m not sure if RoboForm knew how bad this particular weakness was.”

    Customers may also still be using passwords that were generated with the early versions of the program before the fix. It doesn’t appear that Siber ever notified customers when it released the fixed version 7.9.14 in 2015 that they should generate new passwords for critical accounts or data. The company didn’t respond to a question about this.

If Siber didn’t inform customers, this would mean that anyone like Michael who used RoboForm to generate passwords prior to 2015—and is still using those passwords—may have vulnerable passwords that hackers can regenerate.

    “We know that most people don’t change passwords unless they’re prompted to do so,” Grand says. “Out of 935 passwords in my password manager (not RoboForm), 220 of them are from 2015 and earlier, and most of them are [for] sites I still use.”

    Depending on what the company did to fix the issue in 2015, newer passwords may also be vulnerable.

Last November, Grand and Bruno deducted a percentage of bitcoins from Michael’s account for the work they did, then gave him the password to access the rest. The bitcoin was worth $38,000 per coin at the time. Michael waited until it rose to $62,000 per coin and sold some of it. He now holds 30 BTC, worth $3 million, and is waiting for the value to rise to $100,000 per coin.

    Michael says he was lucky that he lost the password years ago because, otherwise, he would have sold off the bitcoin when it was worth $40,000 a coin and missed out on a greater fortune.

    “That I lost the password was financially a good thing.”

    Kim Zetter

  • Microsoft’s New Recall AI Tool May Be a ‘Privacy Nightmare’

    Sex, drugs, and … Eventbrite? A WIRED investigation published this week uncovered a network of spammers and scammers pushing the illegal sale of controlled substances like Xanax and oxycodone, escort services, social media accounts, and personal information on the event management platform. Making matters worse, Eventbrite’s recommendation algorithm promoted posts for opioids alongside addiction recovery events. The good news is, the company appears to have removed most of the more than 7,400 illicit posts WIRED uncovered.

    If you drive a Tesla Model 3, make sure to enable your PIN-to-drive feature or your car could be easily stolen within seconds. While the company has added new ultra-wideband radio tech to its keyless system, which can prevent “relay attacks,” researchers at Beijing-based security firm GoGoByte found that Model 3s (as well as other unnamed makes and models of vehicles) are still vulnerable. Relay attacks use inexpensive radios to transmit the signal from someone’s key fob or phone app that can then be used to unlock and start an impacted vehicle. Tesla says its adoption of ultra-wideband radio was not meant to stop relay attacks (even though it technically could), but it’s possible the automaker will add that protection in the future.

    Police busting people for running illicit online markets is nearly as old a tale as the dark web itself. But this week’s takedown offered a new twist. The FBI recently arrested Lin Rui-siang, a 23-year-old accused of operating Incognito Market, which authorities claim facilitated $100 million in sales of narcotics on the dark web. US prosecutors claim Lin then extorted Incognito’s users by threatening to expose them unless they paid up. Curiously, Lin’s professional experience includes teaching police how to catch cybercriminals by tracing cryptocurrency on blockchains. If the US Justice Department is correct about his alleged involvement in Incognito Market, that would make him one of the most unusual cybercriminals we’ve ever encountered.

    Leaks don’t just impact people on the wrong side of the law, of course. An unsecured database recently exposed biometric data of police officers in India, including face scans, fingerprints, and more. The incident reveals the dangers of collecting sensitive biometrics in the first place.

    Finally, the saga of WikiLeaks founder Julian Assange inched forward again this week, with a British court ruling that he can appeal his extradition to the US, where he faces 18 charges under the Espionage Act for WikiLeaks’ publication of classified US military information. The judges said that Assange can appeal US prosecutors’ assurances about how his trial would be conducted and on First Amendment grounds. The appeals process will inevitably push back any final decision about his potential extradition for months.

    But that’s not all. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

    Following the trend of tech companies in the AI race throwing privacy and caution to the wind, Microsoft unveiled plans this week to launch a tool on its forthcoming Copilot+ PCs called Recall that takes screenshots of its customers’ computers every few seconds. Microsoft says the tool is meant to give people the ability to “find the content you have viewed on your device.” The company also claims to have a range of protections in place and says the images are only stored locally in an encrypted drive, but the response has been roundly negative nonetheless, with some watchdogs reportedly calling it a possible “privacy nightmare.” The company notes that an intruder would need a password and physical access to the device to view any of the screenshots, which should rule out the possibility of anyone with legal concerns ever adopting the system. Ironically, Recall’s description sounds eerily reminiscent of computer monitoring software the FBI has used in the past. Microsoft even acknowledges that the system takes no steps to redact passwords or financial information.

    Federal authorities are reportedly working quietly to establish ties between antiwar demonstrators on US campuses and any foreign groups or individuals overseas, according to journalist Ken Klippenstein, formerly of the Intercept, who says the National Counterterrorism Center is at the center of the effort. Evidence of overseas ties would lend further ammunition to politicians, university officials, and police, who’ve widely claimed “outside agitators” are to blame for the demonstrations—an allegation that’s routinely lobbed at protesters in the United States, often meant to imply that the protesters themselves are dupes. Incidentally, authorities may also overcome constitutional hurdles to surveillance by establishing a foreign target to spy on; someone unprotected by the country’s Fourth Amendment. Republicans in Congress—representatives Mark Green and August Pfluger—have, meanwhile, asked the FBI and Department of Homeland Security to supply congressional committees with records about the government’s surveillance of the protesters, including any efforts to infiltrate them using “online covert employees or confidential human sources.”

    The FBI has nabbed a 42-year-old Wisconsin man for using Stable Diffusion, the text-to-image generative AI software, to manufacture child sexual abuse material. The man was reportedly caught with “thousands of realistic images” of children, some featuring them nude or partially clothed with men. Court records indicate the evidence includes more than 13,000 gen-AI images as well as the prompts he used to create the images. “Using AI to produce sexually explicit depictions of children is illegal, and the Justice Department will not hesitate to hold accountable those who possess, produce, or distribute AI-generated child sexual abuse material,” Nicole Argentieri, head of the Justice Department’s Criminal Division, says in a statement. The arrest is part of Project Safe Childhood, a collaboration between the government and corporations reportedly targeting online offenders.

Security researchers this week disclosed to TechCrunch that they’d discovered consumer-grade spyware—often known as “stalkerware”—on the computers of “at least three” Wyndham hotels in the United States, potentially exposing travelers’ personal details. The stalkerware, called pcTattletale, can be installed on Android and Windows devices, giving whoever has control of the sneaky app the ability to access data on the targeted machine and monitor users’ activity. The presence of pcTattletale was discovered thanks to a security flaw in the spyware that exposed screenshots of infected machines to the open internet, according to the researchers. Although the researchers found pcTattletale on Wyndham computers, the hotel company says each of its locations is a franchise, suggesting that the spyware infection could be limited to just a few locations.

    Dell Cameron, Andrew Couts

  • Most US TikTok Creators Don’t Think a Ban Will Happen

    A majority of US TikTok creators don’t believe the platform will be banned within a year, and most haven’t seen brands they work for shift their marketing budgets away from the app, according to a new survey of people who earn money from posting content on TikTok shared exclusively with WIRED.

    The findings suggest that TikTok’s influencer economy largely isn’t experiencing existential dread after Congress passed a law last month that put the future of the app’s US operations in jeopardy. The bill demands that TikTok separate from its Chinese parent company within a year or face a nationwide ban; TikTok is challenging the constitutionality of the measure in court.

    Fohr, an influencer marketing platform that connects creators with clients for sponsored content, polled US-based TikTok creators on its platform with at least 10,000 followers. It got 200 responses, half from people who rely on influencing as their sole source of income. Out of the respondents, 62 percent said they didn’t think TikTok would be banned by 2025, while the remaining 38 percent said they believed it would be.

    Some creators may be skeptical that a ban will really happen after they watched the Trump White House and Congress try and fail several times to crack down on TikTok over the past few years. The platform has so far only continued to grow more popular in the US, sparking alarm in Silicon Valley over the threat its competition poses. There’s also the possibility TikTok will be sold to a group of American investors—several interested bidders have emerged—though TikTok has made it clear that such an acquisition would be practically impossible.

    Some creators are simply struggling to believe the bizarre situation their favorite app has landed in. “I’m in denial, because I think the TikTok ban is ridiculous,” one anonymous creator told Fohr through its survey. “I think our government has bigger things to worry about than banning a platform where people are allowed to express their views and opinions.”

    Most creators said they haven’t lost business from brands that pay for marketing content on TikTok since the new law was signed: 83 percent of the influencers who responded said their sponsorships have been unaffected. But the rest had seen signs of brands pulling back from the app or at least diversifying their marketing. Some 7 percent said a brand had paused or canceled a campaign they worked on, and 8 percent said a brand had asked to shift a deliverable to another social media platform or at least inquired about such a change.

    Companies may be reluctant to walk away from TikTok because it’s become one of the most popular avenues for consumers to discover new products, particularly from small businesses. Over the past year, TikTok has tried to leverage that influence into a new revenue stream through an ecommerce feature called TikTok Shop. Over 11 percent of US households have made a purchase through TikTok Shop since September 2023, according to credit card transaction data published in April by the research firm Earnest Analytics.

    It doesn’t look as though the passage of the divestiture bill last month prompted people to spend significantly less time on TikTok or avoid the app altogether. The popularity of the platform in US app stores has remained largely consistent over the past month, according to the market-intelligence firm Sensor Tower. And Fohr found that 60 percent of creators said their video views have remained the same, 28 percent said they had seen them fall, and 10 percent reported their engagement increased. These shifts could simply be caused by routine changes TikTok makes to its algorithm, variability of the content that influencers are sharing, or the whims of users consuming videos.

    TikTok’s rise has spurred US tech giants to mimic many of its features, with Google’s YouTube pushing its Shorts format and Meta’s Instagram launching Reels. Fohr’s survey suggests that if creators start leaving TikTok because of uncertainty about the app’s future or a ban, Instagram stands to benefit the most. A clear majority of creators—67 percent—said they saw it as the best alternative for growing their audience, while 22 percent cited YouTube. Only a small fraction pointed to Snapchat, Pinterest, and other platforms.

    Several of the creators, however, said that it’s harder to gain traction on Instagram compared to TikTok, and one noted that Meta’s platform doesn’t offer anything equivalent to TikTok’s Creativity Program, which pays users based on how many views and other engagement metrics their videos receive.

    Across social platforms, the most common way for creators to get paid is by signing deals with brands to make posts featuring their products. But Fohr’s survey also showed the growth of a novel monetization scheme called the TikTok Creative Challenge, which the app launched last year. It allows companies to post requests for creators to make marketing videos that brands can then use on their own channels. Influencers are compensated based on how well their video performs in terms of views and engagement.

    In Fohr’s survey, that type of content, known as UGC, represented the largest TikTok revenue stream for 18 percent of creators. Whatever happens to TikTok in the US, history suggests that it may not be long before its American competitors begin rolling out their own user-generated content initiatives.

    Louise Matsakis

  • Secrecy Concerns Mount Over Spy Powers Targeting US Data Centers



    Last month, US president Joe Biden signed a surveillance bill enhancing the National Security Agency’s power to compel US businesses to wiretap communications going in and out of the country. The changes to the law have left legal experts largely in the dark as to the true limits of this new authority, chiefly when it comes to the types of companies that could be affected. The American Civil Liberties Union and organizations like it say the bill has rendered the statutory language governing the limits of a powerful wiretap tool overly vague, potentially subjecting large swaths of corporate America to warrantless and secretive surveillance practices.

    In April, Congress rushed to extend the US intelligence system’s “crown jewel,” Section 702 of the Foreign Intelligence Surveillance Act (FISA). The spy program allows the NSA to wiretap calls and messages between Americans and foreigners abroad—so long as the foreigner is the individual being “targeted” and the intercept serves a significant “foreign intelligence” purpose. Since 2008, the program has been limited to a subset of businesses that the law calls “electronic communications service providers,” or ECSPs—corporations such as Microsoft and Google, which provide email services, and phone companies like Sprint and AT&T.

    In recent years, the government has worked quietly to redefine what it means to be an ECSP in an attempt to extend the NSA’s reach, first unilaterally and now with Congress’s backing. The issue remains that the bill Biden signed last month contains murky language that attempts to redefine the scope of a critical surveillance program. In response, a coalition of digital rights organizations, from the Brennan Center for Justice to the Electronic Frontier Foundation, is pressing the US attorney general, Merrick Garland, and the nation’s top spy, Avril Haines, to declassify details about a relevant court case that could, they say, shed much-needed light on the situation.

    In a letter to the top officials, more than 20 such organizations say they believe the new definition of an ECSP adopted by Congress might “permit the NSA to compel almost any US business to assist” the agency, noting that all companies today provide some sort of “service” and have access to equipment on which “communications” are stored.

    “Deliberately writing overbroad surveillance authorities and trusting that future administrations will decide not to exploit them is a recipe for abuse,” the letter says. “And it is entirely unnecessary, as the administration can—and should—declassify the fact that the provision is intended to reach data centers.”

    The Justice Department confirmed receipt of the letter on Tuesday, but referred WIRED to the Office of the Director of National Intelligence (ODNI), which has primary purview over declassification decisions. The ODNI has not responded to a request for comment.

    It is widely believed—and has been reported—that data centers are the intended target of this textual change, and Matt Olsen, the assistant US attorney general for national security, appeared to confirm as much during an April 17 episode of the Lawfare podcast.

    Dell Cameron

  • Microsoft Deploys Generative AI for US Spies



    Law enforcement in the United States, United Kingdom, and Australia this week named a Russian national as the person behind LockBitSupp, the pseudonym of the leader of the LockBit ransomware gang that the US says is responsible for extracting $500 million from its victims. Dmitry Yuryevich Khoroshev has been sanctioned and charged with 26 criminal counts in the US, which combined could result in a prison sentence of 185 years. That is, if he’s ever arrested and successfully prosecuted—an extremely rare event for suspects who live in Russia.

    Elsewhere in the world of cybercrime, WIRED’s Andy Greenberg interviewed a representative of Cyber Army of Russia, a group of hackers who have targeted water utilities in the US and Europe and are said to have ties to the notorious Russian military hacking unit known as Sandworm. The responses from Cyber Army of Russia were littered with pro-Kremlin talking points—and some curious admissions.

    A deputy director of the FBI has urged the agency’s employees to continue to use a massive foreign surveillance database to search for the communications of “US persons,” sparking the ire of privacy and civil liberty advocates who unsuccessfully fought for such searches to require a warrant. Section 702 of the Foreign Intelligence Surveillance Act requires that “targets” of the surveillance program be based outside the US, but the texts, emails, and phone calls of people in the US can be included in the 702 database if one of the parties involved in the communication is foreign. An amendment that would have required the FBI to obtain a warrant for 702 searches of US persons failed in a tie vote earlier this year.

    Security researchers this week revealed an attack on VPNs that forces some or all of a user’s web traffic to be routed outside the encrypted tunnel, thus negating the entire reason for using a VPN. Dubbed “TunnelVision,” the attack impacts nearly all VPN applications, and the researchers say it has been possible since 2002, meaning it may already have been used by malicious actors.

    That’s not all. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

    Microsoft has developed an offline generative AI model designed specifically to handle top-secret information for US intelligence agencies, according to Bloomberg. This system, based on GPT-4, is isolated from the internet and only accessible through a network exclusive to the US government. William Chappell, Microsoft’s chief technology officer for strategic missions and technology, told Bloomberg that, theoretically, around 10,000 individuals could access the system.

    Although spy agencies are eager to leverage the capabilities of generative AI, concerns have been raised about the potential unintended leakage of classified information, as these systems typically rely on online cloud services for data processing. However, Microsoft claims that the model it created for the US government is “clean,” meaning it can read files without learning from them, preventing secret information from being integrated into the platform. Bloomberg noted that this marks the first time a major large language model has operated entirely offline.

    Sky News reported this week that Britain’s Ministry of Defence was the target of a significant cyberattack on its third-party payroll system. On Tuesday, Grant Shapps, the UK defence secretary, informed members of Parliament that payroll records of approximately 270,000 current and former military personnel, including their home addresses, had been accessed in the cyberattack. “State involvement” could not be ruled out, he said.

    While the government has not publicly identified a specific country involved, Sky News has reported that the Chinese government is suspected. China’s foreign ministry has denied the allegations, saying in a statement that it “firmly opposes and fights all forms of cyber attacks” and “rejects the use of this issue politically to smear other countries.”

    The payroll company, Shared Services Connected, had known about the breach for months before reporting it to the government, according to The Guardian.

    The United States Marine Forces Special Operations Command (MARSOC) is testing robotic dogs that can be armed with artificial-intelligence-enabled gun systems. According to reporting from The War Zone, the manufacturer of the AI gun system, Onyx Industries, confirmed to reporters at a defense conference this week that as many as two of MARSOC’s robot dogs, developed by Ghost Robotics, are equipped with its weapons systems.

    In a statement to The War Zone, MARSOC clarified that the robot dogs are “under evaluation” and are not yet being deployed in the field. The command noted that weapons are just one possible application for the technology, which could also be used for surveillance and reconnaissance, and emphasized that it fully complies with US Department of Defense policies on autonomous weapons.

    The US Marine Corps has previously tested robotic dogs armed with rocket launchers.

    Days after a hacker posted to BreachForums offering to sell data from nearly 50 million Dell customers, the company began notifying its customers of a data breach in a company portal. According to the email sent to the people impacted, the leaked data contains names, addresses, and information about purchased hardware. “The information involved does not include financial or payment information, email address, telephone number or any highly sensitive customer information,” the email to affected customers states.

    Dhruv Mehrotra, Andrew Couts

  • ‘TunnelVision’ Attack Leaves Nearly All VPNs Vulnerable to Spying



    Researchers have devised an attack against nearly all virtual private network applications that forces them to send and receive some or all traffic outside of the encrypted tunnel designed to protect it from snooping or tampering.

    TunnelVision, as the researchers have named their attack, largely negates the entire purpose and selling point of VPNs, which is to encapsulate incoming and outgoing Internet traffic in an encrypted tunnel and to cloak the user’s IP address. The researchers believe it affects all VPN applications when they’re connected to a hostile network and that there are no ways to prevent such attacks except when the user’s VPN runs on Linux or Android. They also said their attack technique may have been possible since 2002 and may already have been discovered and used in the wild since then.

    Reading, Dropping, or Modifying VPN Traffic

    The effect of TunnelVision is that “the victim’s traffic is now decloaked and being routed through the attacker directly,” a video demonstration explained. “The attacker can read, drop or modify the leaked traffic and the victim maintains their connection to both the VPN and the internet.”

    The attack works by manipulating the DHCP server that allocates IP addresses to devices joining the local network. A setting known as option 121 allows the DHCP server to push routes that override the default rules sending VPN traffic through the local IP address that initiates the encrypted tunnel. By pushing option 121 routes that point at the attacker’s machine, the attack diverts traffic to the rogue DHCP server before the VPN can encrypt it. Researchers from Leviathan Security explained:

    Our technique is to run a DHCP server on the same network as a targeted VPN user and to also set our DHCP configuration to use itself as a gateway. When the traffic hits our gateway, we use traffic forwarding rules on the DHCP server to pass traffic through to a legitimate gateway while we snoop on it.

    We use DHCP option 121 to set a route on the VPN user’s routing table. The route we set is arbitrary and we can also set multiple routes if needed. By pushing routes that are more specific than a /0 CIDR range that most VPNs use, we can make routing rules that have a higher priority than the routes for the virtual interface the VPN creates. We can set multiple /1 routes to recreate the 0.0.0.0/0 all traffic rule set by most VPNs.

    Pushing a route also means that the network traffic will be sent over the same interface as the DHCP server instead of the virtual network interface. This is intended functionality that isn’t clearly stated in the RFC. Therefore, for the routes we push, it is never encrypted by the VPN’s virtual interface but instead transmitted by the network interface that is talking to the DHCP server. As an attacker, we can select which IP addresses go over the tunnel and which addresses go over the network interface talking to our DHCP server.

    We now have traffic being transmitted outside the VPN’s encrypted tunnel. This technique can also be used against an already established VPN connection once the VPN user’s host needs to renew a lease from our DHCP server. We can artificially create that scenario by setting a short lease time in the DHCP lease, so the user updates their routing table more frequently. In addition, the VPN control channel is still intact because it already uses the physical interface for its communication. In our testing, the VPN always continued to report as connected, and the kill switch was never engaged to drop our VPN connection.
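    The routing detail at the heart of the excerpt above, that two /1 routes cover the same address space as the VPN’s /0 default route while winning longest-prefix matching, can be checked with a short sketch using Python’s standard ipaddress module (the sample destination address is purely illustrative):

    ```python
    import ipaddress

    # A VPN typically installs a default route (0.0.0.0/0) through its
    # virtual interface. Route lookup prefers the longest matching prefix,
    # so more specific routes pushed via DHCP option 121 take priority.
    vpn_route = ipaddress.ip_network("0.0.0.0/0")
    attacker_routes = [
        ipaddress.ip_network("0.0.0.0/1"),
        ipaddress.ip_network("128.0.0.0/1"),
    ]

    # Together, the two /1 routes span every IPv4 address the /0 covers...
    covered = sum(r.num_addresses for r in attacker_routes)
    assert covered == vpn_route.num_addresses

    # ...and each carries a longer prefix than the VPN's default route,
    # so the kernel sends matching traffic over the physical interface
    # instead of the encrypted tunnel.
    assert all(r.prefixlen > vpn_route.prefixlen for r in attacker_routes)

    # Any destination falls into exactly one of the attacker's routes.
    example = ipaddress.ip_address("93.184.216.34")  # illustrative address
    match = next(r for r in attacker_routes if example in r)
    print(f"{example} is routed by {match}, not by the VPN's {vpn_route}")
    ```

    The same logic explains why Android is unaffected: because it ignores option 121 entirely, the pushed routes never enter its routing table.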

    The attack can most effectively be carried out by a person who has administrative control over the network the target is connecting to. In that scenario, the attacker configures the DHCP server to use option 121. It’s also possible for people who can connect to the network as an unprivileged user to perform the attack by setting up their own rogue DHCP server.

    The attack allows some or all traffic to be routed outside the encrypted tunnel. In either case, the VPN application will report that all data is being sent through the protected connection. Any traffic diverted away from the tunnel is not encrypted by the VPN, and the IP address visible to remote servers will belong to the network the VPN user is connected to, rather than one designated by the VPN app.

    Interestingly, Android is the only operating system that fully immunizes VPN apps against the attack, because it doesn’t implement option 121. For all other OSes, there are no complete fixes. When apps run on Linux, a setting can minimize the effects, but even then TunnelVision can exploit a side channel to de-anonymize destination traffic and perform targeted denial-of-service attacks. Network firewalls can also be configured to deny inbound and outbound traffic to and from the physical interface. This remedy is problematic for two reasons: (1) a VPN user connecting to an untrusted network has no ability to control the firewall, and (2) it opens the same side channel present with the Linux mitigation.

    The most effective fixes are to run the VPN inside of a virtual machine whose network adapter isn’t in bridged mode or to connect the VPN to the internet through the Wi-Fi network of a cellular device. The research, from Leviathan Security researchers Lizzie Moratti and Dani Cronce, is available here.

    This story originally appeared on Ars Technica.

    Dan Goodin, Ars Technica

  • Top FBI Official Urges Agents to Use Warrantless Wiretaps on US Soil



    House Intelligence Committee chair Mike Turner and ranking member Jim Himes blasted out invitations announcing a “bipartisan celebration” of the 702 program’s continuation last week. The event, which the lawmakers have dubbed FISA Fest, is being held in a reception room in the US Capitol building Wednesday night.

    A House Intelligence Committee spokesperson did not respond to a request for comment.

    Turner and Himes were instrumental in preserving the FBI’s warrantless access to 702 data. In countless “briefings” since October, the pair urged members of their respective parties to avoid reining in the FBI’s authority too greatly. Instead, the new procedures designed by the bureau itself were touted by both lawmakers as a sufficient bulwark against further abuse.

    Narrowly winning that battle last month, Himes and Turner worked to kill an amendment that would have forced FBI employees to get search warrants before reviewing the communications of Americans swept up by the program. (The amendment, opposed by the Biden White House, failed in a tie vote, 212-212.) Instead, the FBI’s procedures, now part of the 702 statute, require employees to affirmatively “opt in” before accessing the wiretaps. They must also seek permission from an FBI attorney before conducting “batch queries” of the database. And queries for communications of elected officials, reporters, academics, and religious figures are now all deemed “sensitive” and require approval from higher up the chain of command.

    Congress established Section 702 in 2008 to legitimize an existing surveillance program run by the National Security Agency (NSA) without congressional oversight or approval. The program, more narrowly defined at the time, intercepted communications that were at least partly domestic but included a target the government believed was a known terrorist. While bringing the surveillance under its authority, Congress has helped to steadily expand the scope of the surveillance to encompass a new slate of threats, from cybercrime and drug trafficking to arms proliferation.

    While advocates for 702 surveillance often imply that Americans who are wiretapped are communicating with terrorists—a concoction that Turner himself repeatedly lent credence to this year—the allegation is dubious. Officially, it is the US government’s position that it is impossible to know which US citizens are being surveilled or even how many of them there are. The chief aim of the 702 program is to acquire “foreign intelligence information,” a term that encompasses not only terrorism and acts of sabotage but information necessary for the government to conduct its own “foreign affairs.”

    Surveillance critics worry that the array of possible targets extends far beyond what is being characterized in unclassified settings. It is uncontroversial to suggest that the US government—like all governments with the power to spy—finds reasons to spy on foreign allies, businesses, even news publications. So long as the target is foreign, they have no privacy rights.

    The limits of the 702 program remain murky, even to congressional members insisting that it should not be curbed further. The Senate Intelligence Committee chair, Mark Warner, acknowledged to reporters this week that language in Section 702 needs to be “fixed,” even though he voted last month to make the current language law.

    FISA experts had warned for months that new language introduced by the House Intelligence Committee is far too vague in the way it describes the categories of businesses the US government can compel, fearing that the government would obtain the power to force anyone with access to a target’s online communications into snooping on the NSA’s behalf—IT workers and data center staff among them.

    A trade group representing Google, Amazon, IBM, and Microsoft, among some of the world’s other largest technology companies, concurred last month, arguing that the new version of the surveillance program threatens to “dramatically expand the scope of entities and individuals” subject to Section 702 orders.

    “We are working on it,” Warner told The Record on Monday. “I am absolutely committed to getting that fixed,” he said, suggesting the best time to do so would be “in the next intelligence bill.”

    Dell Cameron, William Turton

  • Private Eyes: 5 things an N&O investigation into NC license plate cameras revealed



    Automated license plate reader cameras can be hard to spot if you’re just driving by.

    But along hundreds of North Carolina streets, these shoebox-sized devices are quietly capturing details on every passing vehicle, data easily made accessible to law enforcement officers across the country.

    Until now, no one in North Carolina had a full picture of how widespread these cameras have become. But a News & Observer investigation shows they’re a much more common tool for law enforcement, who say the devices can act as a force multiplier for solving crime.

    In our series, Private Eyes, we show these cameras have generated a lot of success stories for closing cases — recovering stolen vehicles, finding missing children, even arresting an attempted murder suspect who fled out of state. But the embrace of these devices by law enforcement has also raised serious privacy concerns from groups worried about cases of misuse, overpolicing and misidentification leading to arrests.

    Here’s a look at five major things our reporting over the last several months revealed.

    A Flock automated license plate reader camera used by the Raleigh Police Department is mounted on a Duke Energy utility pole on Hillsborough Street in Raleigh Jan. 29. RPD operates 26 automated readers that collect license plate and vehicle information including color, make and type. Travis Long tlong@newsobserver.com

    From rare to regular practice in just a few years

    Flock Safety got its start in 2017. It didn’t officially register to do business in North Carolina until 2021.

    Yet the company has in that time signed contracts with at least 80 law enforcement agencies across the state, from the Nags Head Police Department to the Buncombe County Sheriff’s Office, The N&O found. Our survey of police and sheriff’s departments statewide has so far tallied more than 700 of Flock’s fixed cameras on North Carolina roads, a count that far exceeds any of the company’s competitors, like Rekor and Motorola.

    And because Flock doesn’t sell its cameras — it leases them — that can mean big money for the private company.

    Contracts with several North Carolina clients show the cameras cost between $2,000 and $3,000 each annually. So a conservative estimate is that North Carolina law enforcement agencies are spending upwards of $1.49 million on the devices every year.
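    The arithmetic behind that estimate is straightforward; a minimal check, using the survey’s floor of 700 cameras (the article’s $1.49 million figure reflects the N&O’s exact tally, which exceeds 700):

    ```python
    # Conservative bounds on annual ALPR spending by NC law enforcement,
    # based on the N&O's figures: more than 700 leased Flock cameras at
    # $2,000 to $3,000 per camera per year.
    cameras = 700                          # survey floor ("more than 700")
    lease_low, lease_high = 2_000, 3_000   # annual lease per camera, USD

    floor = cameras * lease_low
    ceiling = cameras * lease_high
    print(f"At least ${floor:,} and up to ${ceiling:,}+ per year")
    ```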

    And it’s not just law enforcement. Flock markets its cameras to companies and HOAs, which as we explored in our series sparked controversy in one Knightdale neighborhood.

    Flock CEO Garrett Langley has discussed that explosive growth nationally, telling an Atlanta podcast in 2023 that the company has gone from “single-digit millions to over a hundred-million in revenue in four years.”

    ALPR cameras don’t have the same safeguards

    From the video camera inside Target to the doorbell camera on your neighbor’s front porch, Americans are already awash in surveillance.

    So what makes automated license plate readers from Flock or any other vendor different?

    Access, for one.

    With some exceptions, the vast majority of privately operated video surveillance isn’t readily available for law enforcement to search or review. Camera owners can turn it over on request, sure. But forcing the matter requires a warrant issued by the court, based on probable cause.

    What if police wanted GPS location data tracked by your phone? That also requires a search warrant served on Google (at least it did before the company announced in late 2023 it would cut off access to such data).

    Could detectives acquire your mobile device’s location via cell towers? Or attach a GPS device to your car? Both techniques require search warrants, the U.S. Supreme Court has ruled.

    In North Carolina, state laws place protections on license plate data captured for certain non-law enforcement purposes.

    Toll cameras, for instance, capture and retain images of vehicles and license plates for 90 days to bill drivers. But the agency requires a subpoena to provide police with any of that footage, says N.C. Turnpike Authority spokesperson Logen Hodges.

    When police officers search for license plates or other vehicle data through an ALPR system like Flock, they don’t need a warrant — or any other external oversight. And although state law now makes misuse of ALPR devices a misdemeanor, privacy advocates are concerned.

    Flock and police departments argue, however, that license plate readers capture information available in public spaces where there is no expectation of privacy — the equivalent of an officer standing on a corner to jot down every plate number.

    Flock Safety automated license plate reader cameras monitor around 400,000 vehicles per month in Raleigh, according to the police department’s transparency portal. Travis Long tlong@newsobserver.com

    Across North Carolina, transparency isn’t consistent

    Much of The N&O’s reporting was built on the collection of thousands of data points from Flock Safety’s transparency portals, websites that provide basic details on a department’s use of the cameras. That’s everything from how many cameras they have installed to the number of cars they’ve detected in the last month or so.

    The portals are optional, and not all of Flock’s clients have committed to using them.

    Flock did provide a list of about 30 North Carolina agencies using the transparency portals. That’s far short of the 80 or more agencies The N&O independently counted that are using the service in the state so far.

    A number of law enforcement agencies told us through our survey that they have no plans to use the sites.

    Case in point: police at UNC-Chapel Hill. The university, which fought to keep its contracts with Flock Safety secret from the public before relenting earlier this year, “has not discussed the creation of a transparency portal,” according to spokesperson Kevin Best.

    The N&O found more than 360 of the sites across the country. But it’s hard to know how many of the company’s 5,000-plus law enforcement clients actually have the portals activated because the company hasn’t told us.

    Oversight in other states exceeds regulation here

    North Carolina has a law on the books that regulates the use of automated license plate readers.

    The rules limit retention of license plate data to 90 days and prohibit its use for enforcing simple traffic violations. The law also requires agencies using these systems to have a written policy that addresses, among other things, training, oversight and “annual or more frequent auditing.”

    But the regulations don’t require anyone to oversee whether agencies follow their own rules.

    And North Carolina law enforcement agencies aren’t always forthcoming about how they abide by those rules.

    The Raleigh Police Department, for example, has provided no evidence that an annual audit of its ALPR system has been completed.

    New Jersey, by contrast, issues a report publicly through its attorney general’s office on which law enforcement agencies completed audits and which saw violations and complaints.

    The limit on how long North Carolina agencies can keep data, meanwhile, pales in comparison to New Hampshire.

    The Granite State — whose motto is “Live Free or Die” — requires law enforcement to purge license plate data after 3 minutes. New Hampshire is one of only three states where Flock does not operate.

    What’s next for these cameras on state highways? Unclear.

    Over the last several years, lawmakers introduced bills to undo a decade-old legal interpretation that prohibited automated license plate readers from state-maintained roads and highways. Those efforts failed repeatedly over objections by Republican legislators with privacy concerns about the technology.

    In early 2023, a new version of the bill drew support from law enforcement, including Raleigh Police Chief Estella Patterson and Nash County Sheriff Keith Stone, who testified to lawmakers that the devices were critical tools for fighting crime.

    The legislature approved the measure in October, allowing the devices on N.C. Department of Transportation right-of-ways through a pilot program run by DOT and the State Bureau of Investigation. The SBI, either on its own or on behalf of a local law enforcement agency, would need to enter into an agreement with NCDOT on where to place the devices.

    That will likely mean more ALPR cameras along 80,000 miles of North Carolina streets. But when those new cameras will start appearing — that’s hard to say.

    Despite the law going into effect in January, neither agency has provided any detail on how they’ll implement it.

    “Discussions and meetings continue” about the pilot project’s implementation, SBI spokesperson Angie Grube said in early April. After The N&O checked in last week, Grube said the agency had nothing to announce.

    As of Thursday, NCDOT has yet to receive any requests to install the devices, according to spokesperson Aaron Moody.


    Tyler Dukes is an investigative reporter for The News & Observer who specializes in data and public records. In 2017, he completed a fellowship at the Nieman Foundation for Journalism at Harvard University. Prior to joining the N&O, he worked as an investigative reporter at WRAL News in Raleigh. He is a graduate of North Carolina State University and grew up in Elizabeth City.


  • A Lawsuit Argues Meta Is Required by Law to Let You Control Your Own Feed



    A lawsuit filed Wednesday against Meta argues that US law requires the company to let people use unofficial add-ons to gain more control over their social feeds.

    It’s the latest in a series of disputes in which the company has tussled with researchers and developers over tools that give users extra privacy options or that collect research data. It could clear the way for researchers to release add-ons that aid research into how the algorithms on social platforms affect their users, and it could give people more control over the algorithms that shape their lives.

The suit was filed by the Knight First Amendment Institute at Columbia University on behalf of researcher Ethan Zuckerman, an associate professor at the University of Massachusetts Amherst. It attempts to take a federal law that has generally shielded social networks and use it as a tool to force transparency.

    Section 230 of the Communications Decency Act is best known for allowing social media companies to evade legal liability for content on their platforms. Zuckerman’s suit argues that one of its subsections gives users the right to control how they access the internet, and the tools they use to do so.

    “Section 230 (c) (2) (b) is quite explicit about libraries, parents, and others having the ability to control obscene or other unwanted content on the internet,” says Zuckerman. “I actually think that anticipates having control over a social network like Facebook, having this ability to sort of say, ‘We want to be able to opt out of the algorithm.’”

    Zuckerman’s suit is aimed at preventing Facebook from blocking a new browser extension for Facebook that he is working on called Unfollow Everything 2.0. It would allow users to easily “unfollow” friends, groups, and pages on the service, meaning that updates from them no longer appear in the user’s newsfeed.

    Zuckerman says that this would provide users the power to tune or effectively disable Facebook’s engagement-driven feed. Users can technically do this without the tool, but only by unfollowing each friend, group, and page individually.

    There’s good reason to think Meta might make changes to Facebook to block Zuckerman’s tool after it is released. He says he won’t launch it without a ruling on his suit. In 2020, the company argued that the browser Friendly, which had let users search and reorder their Facebook news feeds as well as block ads and trackers, violated its terms of service and the Computer Fraud and Abuse Act. In 2021, Meta permanently banned Louis Barclay, a British developer who had created a tool called Unfollow Everything, which Zuckerman’s add-on is named after.

    “I still remember the feeling of unfollowing everything for the first time. It was near-miraculous. I had lost nothing, since I could still see my favorite friends and groups by going to them directly,” Barclay wrote for Slate at the time. “But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically.”


    Vittoria Elliott


  • Takeaways from AP’s investigation into fatal police encounters involving injections of sedatives



    The practice of giving sedatives to people detained by police spread quietly across the nation over the last 15 years, built on questionable science and backed by police-aligned experts, an investigation led by The Associated Press has found.

    At least 94 people died after they were given sedatives and restrained by police from 2012 through 2021, according to findings by the AP in collaboration with FRONTLINE (PBS) and the Howard Centers for Investigative Journalism. That’s nearly 10% of the more than 1,000 deaths identified during the investigation of people subdued by police in ways that are not supposed to be fatal.

    Supporters say sedatives enable rapid treatment for drug-related behavioral emergencies and psychotic episodes, protect front-line responders from violence and are safely administered thousands of times annually to get people with life-threatening conditions to hospitals. Critics say forced sedation should be strictly limited or banned, arguing the medications, given without consent, are too risky to be administered during police encounters.

    The injections spanned the country, from a desert in Arizona to a street in St. Louis to a home in Florida. They happened in big cities such as Dallas, suburbs like Lithonia, Georgia, and rural areas such as Dale, Indiana. They occurred in homes, in parking lots, in ambulances and occasionally in hospitals where police encounters came to a head.

    It was impossible to determine the role sedatives may have played in each of the 94 deaths, which often involved the use of other potentially dangerous force on people who had taken drugs or consumed alcohol. Medical experts told the AP their impact could be negligible in people who were already dying; the final straw that triggered heart or breathing failure in the medically distressed; or the main cause of death when given in the wrong circumstances or mishandled.

    While sedatives were mentioned as a cause or contributing factor in a dozen official death rulings, authorities often didn’t even investigate whether injections were appropriate. Medical officials have traditionally viewed them as mostly benign treatments. Now some say they may be playing a bigger role than previously understood and deserve more scrutiny.

    Here are takeaways from AP’s investigation:

    The investigation found that about half those who died after injections were Black.

    Behind the racial disparity is a disputed medical condition called excited delirium, which fueled the rise of sedation outside hospitals. Critics say its purported symptoms, including “superhuman strength” and high pain tolerance, play into racist stereotypes about Black people and lead to biased decisions about who needs sedation.

    Guidelines require paramedics to make rapid, subjective assessments of the potential dangers posed by the people they treat. Only those judged to be at high risk of harming themselves or others are supposed to be candidates for shots.

    But the investigation found that some whose behavior did not meet the bar — who had already largely calmed down or in rare cases even passed out — were given injections. In some cases, paramedics cited fears that people would become violent on the way to hospitals.

    The 2019 death of Elijah McClain in Aurora, Colorado, put a spotlight on the practice. A paramedic convicted of giving McClain an overdose of ketamine was sentenced last month to five years in prison, and a second paramedic was sentenced to 14 months in jail and probation Friday.

    Time and time again, the AP found, agitated people who were held by police facedown, often handcuffed and with officers pushing on their backs, struggled to breathe and tried to get free. Citing combativeness, paramedics administered sedatives, further slowing their breathing. Cardiac and respiratory arrest often occurred within minutes.

    Paramedics drugged people who were not a threat to themselves or others, violating treatment guidelines. Medics often didn’t know whether other drugs or alcohol were in people’s systems, although some combinations cause serious side effects.

    Police officers sometimes suggested paramedics should give shots to suspects they were detaining, a potential abuse of their power.

    The majority of those who died had been restrained facedown in handcuffs, which can restrict breathing.

Experts say giving sedatives to someone who is already struggling to breathe can create a risk of death, because the drugs slow the respiratory drive. If the person cannot take in enough oxygen and blow off enough carbon dioxide, their heart can stop or they can stop breathing.

    The use of sedatives by emergency medical responders outside hospitals spread rapidly over the last two decades based on a now-discredited theory. Law enforcement leaders in the 2000s were concerned by the number of people who were dying after they were shocked with police Tasers and forcibly restrained.

    They began promoting a new strategy calling for officers to view encounters with severely agitated people, including those experiencing psychotic episodes or high on drugs, as medical emergencies. Rather than use force to try to gain compliance, officers were encouraged to call emergency medical services to sedate people and transport them to hospitals.

    Supporters of this approach promoted a term to describe behavior they said put combative people at risk of sudden death: excited delirium.

    The strategy received a boost in 2009 when the American College of Emergency Physicians recognized excited delirium and urged the rapid use of ketamine, midazolam and other drugs to treat it.

    EMS agencies quickly adopted excited delirium protocols, though drugs like ketamine had not been thoroughly studied in the field. The paramedics who injected McClain with ketamine said they were following one such policy.

    Critics have argued that the concept of excited delirium shifts blame from police in the deaths. The National Association of Medical Examiners and the American College of Emergency Physicians distanced themselves from the concept in 2023.

    Deaths involving police often result in news headlines and criminal investigations that focus on the use of force by officers. But the AP investigation found that medical personnel who gave sedatives were often largely ignored.

    The use of sedatives in nearly half the deaths has not been previously reported by news outlets. Many reasons explain this lack of attention.

Police narratives often omit the use of sedatives, citing medical privacy concerns. EMS treatment records are frequently exempt from public records laws. Medical examiners tend to view sedatives as treatments and rarely cite them as contributing factors in deaths. And investigators are often unfamiliar with the role sedatives play and disinclined to dig into the complicated details.

    ___

    Associated Press researcher Rhonda Shafner contributed from New York.

    ___ The Associated Press receives support from the Public Welfare Foundation for reporting focused on criminal justice. This story also was supported by Columbia University’s Ira A. Lipman Center for Journalism and Civil and Human Rights in conjunction with Arnold Ventures. Also, the AP Health and Science Department receives support from the Howard Hughes Medical Institute’s Science and Educational Media Group. The AP is solely responsible for all content.

    ___

    Contact AP’s global investigative team at Investigative@ap.org or https://www.ap.org/tips/

    ___ This story is part of an ongoing investigation led by The Associated Press in collaboration with the Howard Center for Investigative Journalism programs and FRONTLINE (PBS). The investigation includes the Lethal Restraint interactive story, database and the documentary, “Documenting Police Use Of Force,” premiering April 30 on PBS.


  • The Best VPNs to Protect Yourself Online



    Mullvad offers apps for every major platform, as well as routers. The applications are all open source, and you can check the code on GitHub. The service has been independently audited as well. Advanced users can download configuration files and use them directly with OpenVPN or Wireguard.
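For readers curious what those downloadable configuration files look like, a WireGuard config of the kind a provider like Mullvad generates follows this general shape. Everything below — the keys, tunnel addresses, DNS server, and endpoint — is a placeholder for illustration, not a real Mullvad value; your provider's config generator fills in the actual details.

```ini
[Interface]
# Private key generated for this device (placeholder)
PrivateKey = <device-private-key>
# Tunnel IP address assigned by the provider (placeholder)
Address = 10.0.0.2/32
# DNS server to use while the tunnel is up (placeholder)
DNS = 10.0.0.1

[Peer]
# The VPN server's public key (placeholder)
PublicKey = <server-public-key>
# Route all IPv4 and IPv6 traffic through the tunnel
AllowedIPs = 0.0.0.0/0, ::/0
# Server hostname and WireGuard port (placeholder endpoint)
Endpoint = vpn.example.com:51820
```

On Linux, a file like this can typically be brought up with `wg-quick up ./myvpn.conf` and torn down with `wg-quick down ./myvpn.conf`; the `AllowedIPs = 0.0.0.0/0, ::/0` line is what sends all of your traffic through the VPN rather than just select subnets.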

    In my testing, speeds were very good. I never encountered a situation where I couldn’t get a fast connection. Over the years Mullvad remains the VPN I rely on day-to-day.

    Mullvad VPN costs 5 euros (around $5) per month, cash or charge.


    Best Free VPN

    Proton VPN is part of a suite of privacy tools from Proton, which is most famous for its encrypted email service, ProtonMail. The company is based in Switzerland, which has no data retention laws, so Proton VPN can have a no-logs policy. It has been independently audited and maintains a warrant canary page. All the usual features of a good VPN are here, including support for multi-hop connections, a kill switch in the app, split tunneling support, pretty good geo evasion for making Netflix work, and support for torrents. There’s also support for ad-blocking, custom DNS, and high-speed streaming.

    One thing Proton VPN offers that others do not is a free plan that gets you full access to all the regular plan’s features. However, it is limited to a single device, and there are only three server locations (Japan, Netherlands, and the US). If your needs are limited and you want to keep costs down, this is a good option.

    Proton’s pricing structure can be confusing since you can combine it with other services to lower the rates. For purposes of testing, I used a one-year Proton VPN Plus plan that’s $6 per month. If you use other Proton services, Proton Unlimited pricing is a better deal ($10 per month gets you access to all five Proton services).

    Proton’s VPN app is open source and available for macOS, Linux, Windows, Android, and iOS. With the Plus plan, 10 devices can connect simultaneously. Proton VPN uses a mix of IKEv2, OpenVPN, and WireGuard for connections. By default, the app chooses for you, but you can make a selection in the settings. I also like the Permanent Kill Switch, which prevents your device from reconnecting to the internet without a VPN even after a reboot.

    In my testing over the past few months, speeds on Proton VPN vary considerably by server and time of day. Overall, Proton VPN is very fast, dropping my speed by only around 7 to 8 percent versus unprotected speed. I also did not detect any DNS leaks through any of the servers I tried.

    Proton VPN has a free plan but it’s limited to one device. It otherwise costs $5 per month if you buy two years upfront, $6 per month if you buy one year, and $10 per month if you pay monthly.


    Best for Circumventing Geographic Restrictions

    Surfshark wouldn’t be my top pick if my life depended on my VPN, but for most of us, that’s not the case. If you want a way to get around some geographical restrictions on content (aka access Netflix) and protect your traffic while using an open Wi-Fi hotspot, Surfshark is a good choice. It’s secure, and it provides great value for the money if you pay for two years upfront.

In my testing over the years, Surfshark has consistently had some of the best speeds of any VPN I’ve used. Yes, it is slower than not using a VPN, but I have never had any problem streaming HD content through Surfshark, and it’s fast enough that you’re unlikely to notice any speed degradation.


    Scott Gilbertson


  • The Next US President Will Have Troubling New Surveillance Powers



    The ability of the United States to intercept and store Americans’ text messages, calls, and emails in pursuit of foreign intelligence was not only extended but enhanced over the weekend in ways likely to remain enigmatic to the public for years to come.

    On Saturday, US president Joe Biden signed a controversial bill extending the life of a warrantless US surveillance program for two years, bringing an end to a months-long fight in Congress over an authority that US intelligence agencies acknowledge has been widely abused in the past.

    At the urging of the agencies and with the help of powerful bipartisan allies on Capitol Hill, the program has also been extended to cover a wide range of new businesses, including US data centers, according to recent analysis by legal experts and civil liberties organizations that were vocally opposed to its passage.

    Section 702 of the Foreign Intelligence Surveillance Act, or FISA, allows the US National Security Agency (NSA) and Federal Bureau of Investigation (FBI), among other agencies, to eavesdrop on calls, texts, and emails traveling through US networks, so long as one side of the communication is foreign.

    Americans caught up in the program face diminished privacy rights.

While the government must have a foreign target to commence a wiretap, Americans are often party to those intercepted conversations. And although US attorney general Merrick Garland insisted in a statement on Saturday that the updates to the 702 program “ensure the protection of Americans’ privacy and civil liberties,” and that the government never intentionally targets Americans, the government nevertheless reserves the right to store their communications and access them later without probable cause.

    “Section 702 is supposed to be used only for spying on foreigners abroad,” says Dick Durbin, chair of the Senate Judiciary Committee. “Instead, sadly, it has enabled warrantless access to vast databases of Americans’ private phone calls, text messages, and emails.”

    Under the law, the government can retain communications captured by the 702 program for half a decade or more—indefinitely, so long as the government makes no effort to decrypt them.

    A trade organization representing some of the world’s largest tech companies came out against plans to expand Section 702 in the final hours of the debate, claiming that a new provision authored by House Intelligence Committee members would damage the competitiveness of US technologies, “arguably imperiling the continued global free flow of data between the US and its allies.”

    US intelligence obtains its vast surveillance power through yearly certifications doled out by a secret court. The certifications permit the NSA in particular to force businesses in the US—categorized as “electronic communications service providers,” or ECSPs—to cooperate with the program, collecting data and installing wiretaps on the agency’s behalf.

    Years ago, the government sought to unilaterally expand the definition of ECSP under the law, seeking to compel the cooperation of whole new categories of businesses. That effort was beaten back by the FISA court in 2022, in a ruling that stated only Congress has the “competence and constitutional authority” to rewrite the law.


    Dell Cameron
