ReportWire

Tag: online safety act

  • Porn provider fined £800,000 as 4chan refuses to comply with UK law – Tech Digest




    A major pornography provider has been slapped with a massive fine for failing to protect children from adult content.

    Ofcom announced today that Kick Online Entertainment SA must pay £800,000 after it failed to implement mandatory age verification between July and December 2025.

    The penalty follows the introduction of the UK’s Online Safety Act, which made “highly effective” age checks a legal requirement as of July 25, 2025.

    According to the regulator, Kick Online Entertainment ignored these rules for five months, only bringing its sites into compliance at the end of the year.

    “Having highly effective age checks on adult sites to protect children from pornographic content is non-negotiable,” said Suzanne Cater, Ofcom’s Director of Enforcement. “Any company that fails to meet this duty – or engage with us – can expect to face robust enforcement action, including significant fines.”

    In addition to the main penalty, the company faces a £30,000 fine for failing to respond to Ofcom’s inquiries, alongside a recurring daily fine of £200. While Kick has since updated its systems, the regulator warned that it is currently investigating dozens of other sites.

    4chan refusing to comply

    The crackdown has sparked a high-stakes legal standoff with the controversial US-based message board 4chan. While Ofcom has issued a provisional notice indicating a likely £520,000 fine for age-check failures, 4chan’s legal team has stated the company has no intention of paying.

    The site’s lawyer, Preston Byrne, argued that 4chan has “broken no law in the only jurisdiction that matters – the United States.” He claimed that any attempt to enforce UK fines against the Delaware-incorporated company would violate the First Amendment, which protects free speech.

    If 4chan continues to refuse payment, Ofcom’s next steps could be more aggressive than a simple fine. Under the Online Safety Act, the regulator has the power to apply for court orders to block non-compliant sites in the UK entirely.

    This would force UK internet service providers to prevent users from accessing the platform. Ofcom can also seek “business disruption measures,” such as requiring payment processors and advertisers to stop working with the site.

    “We continue to investigate other sites under the UK’s age check rules and will take further action where necessary,” Cater added, signalling that the regulator is prepared for a long battle with offshore platforms.




    Chris Price


  • UK regulator fines 4chan for ignoring Online Safety Act demands


    Ofcom has slapped 4chan with a £20,000 ($26,700) fine for failing to comply with the internet and telecommunications regulator’s request for information under the UK’s Online Safety Act of 2023. The regulator has released an update for 11 of the investigations it opened after the first of its online safety codes became enforceable in March this year. Apparently, 4chan has ignored its requests for a copy of its illegal harms risk assessment and to provide information about its qualifying worldwide revenue. This is the first fine Ofcom has handed down under the new law, which was designed to prevent children from accessing harmful content online and which has prompted websites like Reddit and X to put up age verification measures.

When the regulator launched its probe into 4chan in June, it said it had received complaints about illegal content on the anonymous online board. It doesn’t exactly come as a surprise that 4chan refuses to give the regulator information about the risks of illegal content on its website: back in August, the service filed a lawsuit against Ofcom, arguing that enforcement of the UK’s Online Safety Act violates Americans’ freedom of speech. “This fine is a clear warning to those who fail to remove illegal content or protect children from harmful material,” said Liz Kendall, the UK Secretary of State for Science, Innovation and Technology. The regulator is also imposing an additional penalty of £100 ($133) per day on 4chan until it complies with its requests for information.

Ofcom has also announced the results of other investigations. It found “serious compliance concerns” at two file-sharing services, which have since deployed an automated tool that can detect and quickly remove uploads containing child sexual abuse material (CSAM). Four other file-sharing services that were also under investigation over CSAM chose to geoblock access from UK IP addresses instead, so the regulator closed their cases.


    Mariella Moon


  • The growing debate over age verification laws | TechCrunch


    Technologists and policymakers are reckoning with a generation-defining problem on the internet: while it can be a revolutionary force for unprecedented education and connection across the globe, it can also pose dangers to children when they have completely unfettered access.

    There is no simple way, however, to monitor children’s internet access without surveilling adults, paving the way for disastrous online privacy violations.

While some advocates praise age verification laws as victories for children’s safety, many security experts warn that these laws are being proposed and passed with flawed implementation plans, posing dangerous security risks for adult users as well.

    Here’s a primer on where the debate over age and identity verification stands, and where these laws are being enacted.

    What exactly is age verification?

    When we talk about age verification laws, we aren’t talking about when you made a Neopets account as a kid and checked a box to affirm that you were at least 13 years old. In the United States, those types of age checks are a result of the Children’s Online Privacy Protection Act (COPPA), an internet safety law passed in 1998. But, as you already know, if you had a Neopets account when you were 10, COPPA-era age checks are very easy to navigate around. You simply click a box that says you’re 13.

    In the context of the laws that have cropped up during the 2020s, age verification usually refers to a user uploading an official ID to a third-party verification system to prove who they are. Users might also upload biometric facial scans, like the ones that power Face ID on iPhones.

    What is the point of age verification?

    Of course, internet safety is not really about preventing children from playing games like Neopets. Parents and lawmakers are concerned about children accessing content that’s potentially dangerous for minors, like online pornography, information about illicit drug use, and social media sites where they may encounter strangers with bad intentions.


    These concerns are not unfounded. Parents have turned to lawmakers to share horrific stories of how their children died after purchasing fentanyl-laced drugs on Facebook, or how they took their own lives after facing incessant bullying on Snapchat.

    As technology becomes more sophisticated, the problem is getting worse: Meta’s AI chatbots have reportedly flirted with children, while Character.AI and OpenAI are facing lawsuits over the suicides of children that were allegedly encouraged by the companies’ chatbots.

    We know the internet isn’t all bad, though. Without leaving your home or spending any money, you can learn to play guitar or write code. You can forge meaningful friendships with people from the other side of the world. You can access specialized telehealth care, even if you live somewhere where no doctor is trained in your diagnosis. You can find the answer to just about any question you want at any given moment (the capital of Madagascar is Antananarivo, by the way).

    This is how global lawmakers have arrived at what they believe to be a sound compromise: they won’t nuke the whole internet, but they’ll just put certain content behind a gate that you can only unlock if you can prove you’re an adult. But in this case, you’re not just clicking a box to confirm your age – you’re uploading your government ID or scanning your biometric data to prove you can access certain content.

    Is it safe to verify your identity by uploading a government ID or a biometric scan?

    The safety of any digital security measure depends on its implementation.

    Apple builds out products like Face ID so that these biometric scans of your face never leave your iPhone – they’re never shared over the cloud, which massively limits the potential for hackers to gain access.

    But when any sort of connection to another network gets involved, that’s when identity verification can get fishy. We’ve already watched how these measures can play out poorly when the technology is anything but rock-solid.

    “No method of age verification is both privacy-protective and entirely accurate,” the Electronic Frontier Foundation writes. “These methods don’t each fit somewhere on a spectrum of ‘more safe’ and ‘less safe,’ or ‘more accurate’ and ‘less accurate.’ Rather, they each fall on a spectrum of ‘dangerous in one way’ to ‘dangerous in a different way.’”

    In recent memory, we have some strong examples of how badly things can go when a company slips up on its security.

    On Tea, an app that women use to share information about men they meet on dating apps, users have to upload selfies and photos of their IDs to prove that they are who they say they are. But users on 4chan, a misogynistic web forum, found that Tea left users’ data exposed, meaning that bad actors could access tens of thousands of users’ government IDs, selfies, and even direct messages on the platform, where women shared sensitive information about their dating experiences. What was once purported to be an app for women’s safety ended up exposing its users to vicious harassment, giving bad actors access to personal information like their home addresses.

    These hacks were possible despite Tea’s promise that these images were not stored anywhere and were deleted immediately (evidently, those claims were false).

    This kind of thing happens all the time – just look at TechCrunch’s security coverage. But it’s not just happening to new apps like Tea. World governments and trillion-dollar tech giants are certainly not exempt from data breaches.

    Does it really matter if I lose my anonymity on the internet? I’m not doing anything shady.

    These laws have inspired much backlash, but it’s not just because people are shy about linking their porn viewership to their government IDs.

    In places where people can be prosecuted for political speech, anonymity is vital to allow people to meaningfully discuss current events and critique those in power without fear of retribution. Corporate whistleblowers could be unable to report a company’s wrongdoing if all of their online activity is linked to their identity, and victims of domestic abuse will find it even more difficult to flee dangerous situations.

    In the U.S., the idea of being prosecuted for one’s political beliefs is becoming less theoretical. President Trump has threatened to send his political opponents to prison, and the government has revoked visas from international students who have criticized the Israeli government or participated in protests against the country’s military actions.

    What age verification laws have gone into effect in the U.S.?

    In the United States, twenty-three states have enacted age verification laws as of August 2025, while two more states have laws slated to take effect in late September 2025.

    These laws mostly impact websites that host certain percentages of “sexual material harmful to minors,” which varies from state to state.

    In practice, this means that pornographic websites must verify a user’s identity before they can access the website. But some sites, like Pornhub, have opted to simply block traffic from certain states.

    “Since age verification software requires users to hand over extremely sensitive information, it opens the door to the risk of data breaches,” Pornhub wrote on its blog. “Whether or not your intentions are good, governments have historically struggled to secure this data.”

    What counts as “sexual material harmful to minors”?

    The definition of this term varies depending on who is enforcing the law.

    At a time when LGBTQ rights are under attack in the U.S., activists have warned that laws like this could be used to classify non-pornographic information about the LGBTQ community, as well as basic sex education, as “sexual material harmful to minors.” These concerns appear well-founded, given that President Trump’s administration has removed references to civil rights movements and LGBTQ history from some government websites.

    Texas’s age verification law – which was upheld in a Supreme Court ruling in June – was passed around the same time the state imposed other legal restrictions on the LGBTQ community, including limits on public drag shows and bans on gender-affirming care for minors. The drag show law was later deemed unconstitutional for violating the First Amendment.

    What’s going on with age verification in the U.K.?

    The United Kingdom enacted the Online Safety Act in July 2025, requiring many online platforms to verify a user’s identity before allowing them access. If a user is identified as a minor, they won’t be allowed on certain websites. The Act applies to search engines, social media platforms, video-sharing platforms, instant messaging services, cloud storage sites – pretty much anywhere that you may encounter media or talk to someone.

    In practice, this means that websites like YouTube, Spotify, Google, X, and Reddit are requiring UK users to verify their identity before accessing certain content. These requirements don’t just apply to pornographic or violent content – people in the UK have been barred from viewing vital education and news sources, making it difficult to access information without exposing themselves to potential privacy concerns.

    The UK does not use one specific way of verifying one’s identity – individual websites can decide what mechanism to use, and Ofcom, the UK’s communications regulator, is supposed to oversee this implementation. But as we explained with the Tea example, we can’t trust that any given authentication tool will be safe.

    Now, users who are subject to identity verification must decide if they want to freely access information or if they want to expose themselves to privacy risks.

    Does the U.K. age verification law affect me if I live elsewhere?

    Even if you don’t live in the U.K., you may be impacted by tech platforms that are pre-complying with these regulations.

    In the U.S., YouTube has already begun to roll out technology that is supposed to estimate users’ ages based on their activity, regardless of what age they listed when registering their account.

    Can’t you just use a VPN to get around these barriers?

    Yes, and the App Store charts in the U.K. prove it – after the Online Safety Act took effect, half of the top ten free apps on iOS were VPNs (Virtual Private Networks). We also saw VPN downloads spike after Pornhub access was blocked in many U.S. states.

    When Pornhub was suspended in France, ProtonVPN said that registrations had spiked by 1000% within half an hour – the company said this was a bigger spike than when TikTok temporarily blocked American users.

    You may have used a VPN before if you logged into your office desktop computer remotely, or if you spoofed your location so that you could watch British sitcoms for free from the U.S.

    This introduces another issue: free VPNs don’t always have great privacy practices, even if they are advertised as such.

    If you want to learn more about VPNs, TechCrunch has guides on what you need to know about VPNs and how you can decide if you need to use one.


    Amanda Silberling


  • UK age check law seems to be hurting sites that comply, helping those that don’t | TechCrunch


    The United Kingdom recently started enforcing the Online Safety Act’s age-check rules, and The Washington Post reports that it’s already having a significant effect on web traffic.

    U.K. law now requires pornography websites to verify their users’ ages through means such as face scans and driver’s licenses; it also requires that online platforms prevent children from being exposed to adult content (which is why sites like Bluesky and Reddit have begun checking some users’ ages).

    To study the law’s effect, the Post says it examined the top 90 porn sites based on U.K. visitor data from Similarweb, finding 14 sites that still don’t perform an age check. All 14 of them appear to have experienced a dramatic increase in traffic, with one of them seeing traffic double year-over-year.

Meanwhile, many websites ostensibly complied with the law while criticizing it, linking to a petition urging its repeal, or even offering instructions for getting around it.

    John Scott-Railton, a researcher at the University of Toronto’s Citizen Lab, told the Post that this is “a textbook illustration of the law of unintended consequences,” adding that the law “suppresses traffic to compliant platforms while driving users to sites without age verification.”


    Anthony Ha
