ReportWire

Tag: online safety

  • Egypt to adopt restrictions on children’s social media use to fight ‘digital chaos’

    CAIRO — Egypt’s Parliament is looking into ways to regulate children’s use of social media platforms to combat what lawmakers called “digital chaos,” following Western countries that are considering banning young teenagers from social media.

    The House of Representatives said in a statement late Sunday that it will work on legislation to regulate children’s use of social media and “put an end to the digital chaos our children are facing, and which negatively impacts their future.”

    Legislators will consult with the government and expert bodies to draft a law to “protect Egyptian children from any risks that threaten their thoughts and behavior,” the statement said.

    The statement came after President Abdel-Fattah el-Sissi on Saturday urged his government and lawmakers to consider adopting legislation restricting children’s use of social media, “until they reach an age when they can handle it properly.”

    The president’s televised comments urged his government to look at other countries, including Australia and the United Kingdom, that are working on legislation to “restrict or ban” children from social media.

    About 50% of children under 18 in Egypt use social media platforms where they are likely exposed to harmful content, cyberbullying and abuse, according to a 2024 report by the National Center for Social and Criminological Research, a government-linked think tank.

    In December, Australia became the first country to ban social media for children younger than 16. The move triggered fraught debates about technology use, privacy, child safety and mental health and has prompted other countries to consider similar measures.

    The British government said it will consider banning young teenagers from social media while tightening laws designed to protect children from harmful content and excessive screen time.

    French President Emmanuel Macron urged his government to fast-track the legal process to ensure a social media ban for children under 15 can be enforced at the start of the next school year in September.

  • The rise of deepfake cyberbullying poses a growing problem for schools

    Schools are facing a growing problem of students using artificial intelligence to transform innocent images of classmates into sexually explicit deepfakes.

    The fallout from the spread of the manipulated photos and videos can create a nightmare for the victims.

    The challenge for schools was highlighted this fall when AI-generated nude images swept through a Louisiana middle school. Two boys ultimately were charged, but not before one of the victims was expelled for starting a fight with a boy she accused of creating the images of her and her friends.

    “While the ability to alter images has been available for decades, the rise of A.I. has made it easier for anyone to alter or create such images with little to no training or experience,” Lafourche Parish Sheriff Craig Webre said in a news release. “This incident highlights a serious concern that all parents should address with their children.”

    Here are key takeaways from AP’s story on the rise of AI-generated nude images and how schools are responding.

    The prosecution stemming from the Louisiana middle school deepfakes is believed to be the first under the state’s new law, said Republican state Sen. Patrick Connick, who authored the legislation.

    The law is one of many across the country taking aim at deepfakes. In 2025, at least half the states enacted legislation addressing the use of generative AI to create seemingly realistic, but fabricated, images and sounds, according to the National Conference of State Legislatures. Some of the laws address simulated child sexual abuse material.

    Students also have been prosecuted in Florida and Pennsylvania and expelled in places like California. A fifth-grade teacher in Texas was also charged with using AI to create child pornography of his students.

    Deepfakes started as a way to humiliate political opponents and young starlets. Until the past few years, people needed some technical skills to make them realistic, said Sergio Alexander, a research associate at Texas Christian University who has written about the issue.

    “Now, you can do it on an app, you can download it on social media, and you don’t have to have any technical expertise whatsoever,” he said.

    He described the scope of the problem as staggering. The National Center for Missing and Exploited Children said the number of AI-generated child sexual abuse images reported to its cyber tipline soared from 4,700 in 2023 to 440,000 in just the first six months of 2025.

    Sameer Hinduja, the co-director of the Cyberbullying Research Center, recommends that schools update their policies on AI-generated deepfakes and get better at explaining them. That way, he said, “students don’t think that the staff, the educators are completely oblivious, which might make them feel like they can act with impunity.”

    He said many parents assume that schools are addressing the issue when they aren’t.

    “So many of them are just so unaware and so ignorant,” said Hinduja, who is also a professor in the School of Criminology and Criminal Justice at Florida Atlantic University. “We hear about the ostrich syndrome, just kind of burying their heads in the sand, hoping that this isn’t happening amongst their youth.”

    AI deepfakes are different from traditional bullying because instead of a nasty text or rumor, there is a video or image that often goes viral and then continues to resurface, creating a cycle of trauma, Alexander said.

    Many victims become depressed and anxious, he said.

    “They literally shut down because it makes it feel like, you know, there’s no way they can even prove that this is not real — because it does look 100% real,” he said.

    Parents can start the conversation by casually asking their kids if they’ve seen any funny fake videos online, Alexander said.

    Take a moment to laugh at some of them, like Bigfoot chasing after hikers, he said. From there, parents can ask their kids, “Have you thought about what it would be like if you were in this video, even the funny one?” And then parents can ask if a classmate has made a fake video, even an innocuous one.

    “Based on the numbers, I guarantee they’ll say that they know someone,” he said.

    If kids encounter things like deepfakes, they need to know they can talk to their parents without getting in trouble, said Laura Tierney, who is the founder and CEO of The Social Institute, which educates people on responsible social media use and has helped schools develop policies. She said many kids fear their parents will overreact or take their phones away.

    She uses the acronym SHIELD as a roadmap for how to respond. The “S” stands for “stop” and don’t forward. “H” is for “huddle” with a trusted adult. The “I” is for “inform” any social media platforms on which the image is posted. “E” is a cue to collect “evidence,” like who is spreading the image, but not to download anything. The “L” is for “limit” social media access. The “D” is a reminder to “direct” victims to help.

    “The fact that that acronym is six steps I think shows that this issue is really complicated,” she said.

    ___

    The Associated Press’ education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.

  • Malaysia to ban social media for children under 16 next year

    KUALA LUMPUR, Malaysia — Malaysia plans to ban social media accounts for people under 16 starting in 2026, joining Australia and a growing number of countries pushing tighter digital age limits for children.

    Communications Minister Fahmi Fadzil said Sunday the Cabinet approved the move as part of a broader effort to shield young people from online harm like cyberbullying, scams and sexual exploitation. He said the government is studying approaches taken by Australia and other countries, and the potential use of electronic checks with identity cards or passports to verify users’ ages. He did not say when exactly the ban will be enforced.

    “I believe that if the government, regulatory bodies, and parents all play their roles, we can ensure that the Internet in Malaysia is not only fast, widespread and affordable but most importantly, safe, especially for children and families,” he said.

    Since January, major social media and messaging platforms with at least 8 million users in Malaysia have been required to obtain a license as part of a broader tightening of state oversight of digital platforms. Licensed platforms must implement age verification, content-safety measures and transparency rules, reflecting the government’s push for a safer digital space.

    Australia’s parliament enacted the world’s first ban on social media for children that will begin Dec. 10, setting the minimum age at 16. Facebook, Instagram, Snapchat, Threads, TikTok, X and YouTube as well as message board Reddit and livestreaming service Kick face fines of up to 50 million Australian dollars ($33 million) for systemic failures to prevent children younger than 16 from holding accounts.

    Australia’s move is being closely watched by countries that share concerns about social media impacts on young children.

    Denmark’s government also announced earlier this month plans to ban access to social media for anyone under 15, though details on how the measures would be enforced remain unclear. Norway is also moving forward with a proposed law that would set a minimum age limit of 15 for accessing social media platforms.

  • Harry and Meghan ask families to join fight against predatory social media policies

    NEW YORK (AP) — Prince Harry and Meghan Markle urged parents to stand against social media companies that they said prey upon children with exploitative algorithms, as the “explosion of unregulated artificial intelligence” adds to their concerns that technology’s benefits are inseparable from its dangers.

    To underscore that point, the Duke and Duchess of Sussex cited research from advocacy group ParentsTogether that found researchers posing as children experienced harmful interactions every five minutes they spent with an artificial intelligence chatbot.

    “This wasn’t content created by a third party. These were the companies’ own chatbots working to advance their own depraved internal policies,” said Prince Harry at Spring Studios in Manhattan Thursday night as he and Markle were named Humanitarians of the Year by the nonprofit Project Healthy Minds. “But here’s what gives us hope: these families aren’t facing this alone.”

    To build their movement of families fighting for online safety, the couple also announced Thursday that their foundation’s Parents Network would join forces with ParentsTogether.

    Their remarks came at the annual gala for Project Healthy Minds, a Millennial- and Gen Z-driven tech nonprofit that runs a free online marketplace aiming to connect patients with the exact mental health care they seek.

    The couple has made youth mental health a cornerstone of their philanthropic work since launching the Archewell Foundation in 2020 after stepping aside as working royals. Through its network for families who have experienced online harm and support of youth-led organizations shaping responsible technology, the nonprofit works to make digital spaces safer.

    Prince Harry has previously stressed the need to hold powerful social media companies accountable. He warned last year that young people are experiencing an “epidemic” of anxiety, depression and social isolation driven by negative experiences online.

    According to numerous studies, few guardrails exist to mitigate kids’ exposure to age-inappropriate content including pornography and violence on social media, where they also face cyberbullying and sexual harassment.

    The issue could also be considered personal for the couple. Markle has been open about her mental health struggles due to what she describes as the royal family’s intense pressures and tabloid attacks. Harry’s own personal life has been the subject of much tabloid reporting, including targeted phone hacking and surveillance.

    Prince Harry brought his awareness campaign to a reception Wednesday night hosted by men’s health nonprofit Movember. In a conversation with television journalist Brooke Baldwin, he emphasized that men should not feel isolated because he repeatedly hears the same struggles when he speaks with them.

    “The biggest barrier is the belief that no one will understand,” he said in comments reshared on his blog. “Loneliness convinces you you’re the only one, which is rarely true.”

    “Culture makers” such as Prince Harry and Meghan Markle are important voices in mental health conversations because they inspire their enormous audiences to seek care, according to Project Healthy Minds CEO Phil Schermer.

    But Schermer emphasized that the “moment of inspiration is fleeting” and it’s important for celebrities to take the extra step of partnering with trusted organizations that can actually deliver care.

    He pointed to NBC television personality Carson Daly, the gala’s host, as an example. Daly opened up about his own anxiety on the air after reading a 2018 essay by NBA champion Kevin Love about an in-game panic attack.

    Daly, a Project Healthy Minds board member, said mental health is now the most common topic that comes up when fans recognize him in public.

    “I was like, ’I want to put all my eggs in this basket’ because I see the power even when I tell my story, it unlocks so many other people telling their story,” Daly told the Associated Press. “And I think that process — that’s how the destigmatization works.”

    The money raised Thursday night will help the nonprofit build new filters that break down care options by their insurance providers and preferences for in-person or telehealth service options, according to Schermer. He compared the features to those on travel planning sites such as Expedia that allow users to choose the times, prices and airlines of their flight options.

    Schermer said that having a recognizable host in Daly also helps “make it cool to talk about your emotions.”

    “It’s not just the absence of a stigma,” Schermer said. “It’s also the presence of a sense of pride that by being vulnerable, being honest, being open, that that’s actually your greatest superpower.”

    Thursday night’s other honoree was Indianapolis Colts co-owner and chief brand officer Kalen Jackson. The NFL executive — who talks openly about dealing with anxiety — has continued the team’s staunch support for mental health after the death of her father and beloved former owner Jim Irsay.

    Project Healthy Minds recognized Jackson with its inaugural Sports Visionary of the Year Award, presented by NFL Commissioner Roger Goodell. Jackson leads her family’s Kicking The Stigma initiative, which raises awareness about mental health disorders and tries to expand access to care across Indiana and the country.

    ______

    Associated Press coverage of philanthropy and nonprofits receives support through the AP’s collaboration with The Conversation US, with funding from Lilly Endowment Inc. The AP is solely responsible for this content. For all of AP’s philanthropy coverage, visit https://apnews.com/hub/philanthropy.

  • Netflix’s Adolescence: What Parents Need to Know About Toxic Masculinity, Incel Culture, and Raising Boys in a Digital World

    “I know — not a popular opinion.

    And yes, your kid is not going to love this. You might get pushback, eye rolls, maybe even tears.

    Do it anyway — and let them make you the bad guy. That’s your job.

    In Adolescence, Jamie had full access to his laptop, alone in his room, all night — and that’s where things spiraled. He got pulled into toxic online spaces his parents didn’t even know existed.

    Set a clear tech boundary: no phones, laptops, or tablets in bedrooms after a set time.

    Devices charge overnight in a shared space.

    This isn’t about punishment — it’s about safety, sleep, and mental health.

    They might hate it. But that boundary could protect them from a world they’re not ready to navigate alone.”

    Wondering when your child is ready for a phone? These four key questions can guide you.

    Amy McCready

  • New GASA Report Estimates $688 Billion in Scam Losses Across Asia Amid Rising Cyberthreat Worldwide

    Press Release


    Oct 15, 2024 03:00 CEST

    2024 Asia Scam Report Reveals Singapore and Japan See Decline in Scam Losses

    The Global Anti-Scam Alliance (GASA) has released the highly anticipated 2024 Asia Scam Report. Based on survey responses from 24,731 consumers across Asia, this annual report offers insights into the growing threat posed by scammers and their increasingly sophisticated tactics.

    Individual survey responses were extrapolated to report an estimated total loss of $688.42 billion over the last 12 months, a figure that represents a significant portion of the estimated $1.026 trillion global scam losses reported in GASA’s 2023 data. This year’s report highlights key trends such as the rise of AI-generated scam messages and the surge in social media-related fraud.

    Key Findings from the 2024 Asia Scam Report:

    • Rapid Revictimization: Scam victims across Asia tend to be scammed repeatedly after they have been successfully targeted once.
    • Declining Losses in Singapore and Japan: Singapore (-40%) and Japan (-17%) have reported a noteworthy reduction in average scam loss per victim, indicating the effectiveness of preventive measures, while other countries have seen increases. In Singapore, initiatives such as the co-location of police and banks have contributed to an efficient crackdown on scams.
    • Dominance of Social Media Scams: Some leading platforms have seen an increase in scam activity, with many users calling on social platforms to address fraudulent ads and fake accounts.
    • Top Scams by Category: Identity theft, investment scams, and shopping scams remain the leading threats across the region, with scammers targeting individuals through a range of channels, including phone calls, social media, and text messages.

    Scam Prevention and Cross-Sector Collaboration in Focus at GASS Asia Summit

    As Foundation Partners of GASA, Mastercard and ScamAdviser sponsored this report and will engage deeply in scam prevention discussions and partnerships at the Global Anti-Scam Summit (GASS) Asia on October 20–21. The event, held in Singapore, is a key gathering for stakeholders across sectors to coordinate joint action against perpetrators of scams. With Amazon, Google, and other global organizations participating, the event will focus on best practices and new strategies for combating financial fraud and AI misuse and for protecting consumers.

    For more information and to download the full 2024 Asia Scam Report, visit GASA’s official website.

    Source: Global Anti-Scam Alliance (GASA)

  • New Mexico attorney general sues company behind Snapchat alleging child sexual extortion on the site

    New Mexico’s attorney general has filed a lawsuit against the company behind Snapchat, alleging the site’s design and policies foster the sharing of child sexual abuse material and facilitate child sexual exploitation.

    Attorney General Raúl Torrez filed the lawsuit against Snap Inc. Thursday in state court in Santa Fe. In addition to sexual abuse, the lawsuit claims the company also openly promotes child trafficking, drugs and guns.

    Last December, Torrez filed a similar lawsuit against Meta, the parent company of Instagram and Facebook, saying it allows predators to trade child sexual abuse material and solicit minors for sex on its platforms. That suit is pending.

    Snap’s “harmful design features create an environment where predators can easily target children through sextortion schemes and other forms of sexual abuse,” Torrez said in a statement. Sexual extortion, or sextortion, involves persuading a person to send explicit photos online and then threatening to make the images public unless the victim pays money or engages in sexual favors.

    “Snap has misled users into believing that photos and videos sent on their platform will disappear, but predators can permanently capture this content and they have created a virtual yearbook of child sexual images that are traded, sold, and stored indefinitely,” Torrez said.

    In a statement, Snap said it shares Torrez’s and the public’s concerns about the online safety of young people.

    “We understand that online threats continue to evolve and we will continue to work diligently to address these critical issues,” the company based in Santa Monica, California, said. “We have invested hundreds of millions of dollars in our trust and safety teams over the past several years, and designed our service to promote online safety by moderating content and enabling direct messaging with close friends and family.”

    According to the complaint, minors report having more online sexual interactions on Snapchat than any other platform, and more sex trafficking victims are recruited on Snapchat than on any other platform.

    Prior to the lawsuit, New Mexico conducted a monthslong undercover investigation into child sexual abuse images on Snapchat. According to Torrez’s statement, the investigation revealed a “vast network of dark web sites dedicated to sharing stolen, non-consensual sexual images from Snap,” finding more than 10,000 records related to Snap and child sexual abuse material in the last year. This included information related to minors younger than 13 being sexually assaulted.

    As part of the undercover investigation, the New Mexico Department of Justice set up a decoy Snapchat account for a 14-year-old named Heather, who found and exchanged messages with accounts with names like “child.rape” and “pedo_lover10.”

    Snapchat, the lawsuit alleges, “was by far the largest source of images and videos among the dark web sites investigated.” Investigators also found Snapchat accounts that openly circulated and sold child abuse images directly on the platform.

  • Meta takes down thousands of Facebook accounts running sextortion scams from Nigeria

    Meta said Wednesday that it has taken down about 63,000 Instagram accounts in Nigeria running sexual extortion scams and has removed thousands of Facebook groups and pages that were trying to organize, recruit and train new scammers.

    Sexual extortion, or sextortion, involves persuading a person to send explicit photos online and then threatening to make the images public unless the victim pays money or engages in sexual favors. Recent high-profile cases include two Nigerian brothers who pleaded guilty to sexually extorting teen boys and young men in Michigan, including one who took his own life, and a Virginia sheriff’s deputy who sexually extorted and kidnapped a 15-year-old girl.

    There has been a marked rise in sextortion cases in recent years, fueled in part by a loosely organized group called the Yahoo Boys, operating mainly out of Nigeria, Meta said. It added that it applied its “dangerous organizations and individuals” policy to remove Facebook accounts and groups run by the group.

    “Because they’re driven by money, their targeting can be indiscriminate,” said Antigone Davis, Meta’s global head of safety. “So in other words, think of this as a little bit of a scattershot approach: get out there and send many, many requests out to individuals and see who may respond.”

    In January, the FBI warned of a “huge increase” in sextortion cases targeting children. The targeted victims are primarily boys between the ages of 14 and 17, but the FBI said any child can become a victim.

    Meta said its investigation found that the majority of the scammers’ attempts did not succeed and mostly targeted adult men in the U.S., but added that it did see “some” try to target minors, which Meta says it reported to the National Center for Missing and Exploited Children.

    The removed accounts included a “coordinated network” of about 2,500 accounts linked to a group of about 20 people who were running them, Meta said.

    In April, Meta announced it was deploying new tools in Instagram to protect young people and combat sexual extortion, including a feature that will automatically blur nudity in direct messages. Meta is still testing out the features as part of its campaign to fight sexual scams and other forms of “image abuse,” and to make it tougher for criminals to contact teens.

    Davis said users should look out for messages from people with “highly stylized” photos, people who are “exceptionally good looking” or have never sent you a message before.

    “That should give you pause,” she said. Users should also take a pause if somebody sends an image first — scammers often use this tactic to try to gain trust and bait unsuspecting people into sending them back a photo of themselves.

    “This is one of these areas where if you have any sort of suspicion, I would urge caution,” she said.


  • Instagram begins blurring nudity in messages to protect teens and fight sexual extortion

    LONDON — Instagram says it’s deploying new tools to protect young people and combat sexual extortion, including a feature that will automatically blur nudity in direct messages.

    The social media platform said in a blog post Thursday that it’s testing out the features as part of its campaign to fight sexual scams and other forms of “image abuse,” and to make it tougher for criminals to contact teens.

    Sexual extortion, or sextortion, involves persuading a person to send explicit photos online and then threatening to make the images public unless the victim pays money or engages in sexual favors. Recent high-profile cases include two Nigerian brothers who pleaded guilty to sexually extorting teen boys and young men in Michigan, including one who took his own life, and a Virginia sheriff’s deputy who sexually extorted and kidnapped a 15-year-old girl.

    Instagram and other social media companies have faced growing criticism for not doing enough to protect young people. Mark Zuckerberg, the CEO of Instagram’s owner Meta Platforms, apologized to the parents of victims of such abuse during a Senate hearing earlier this year.

    Meta, which is based in Menlo Park, California, also owns Facebook and WhatsApp but the nudity blur feature won’t be added to messages sent on those platforms.

    Instagram said scammers often use direct messages to ask for “intimate images.” To counter this, it will soon start testing out a nudity-protection feature for direct messages that blurs any images with nudity “and encourages people to think twice before sending nude images.”

    “The feature is designed not only to protect people from seeing unwanted nudity in their DMs, but also to protect them from scammers who may send nude images to trick people into sending their own images in return,” Instagram said.

    The feature will be turned on by default globally for teens under 18. Adult users will get a notification encouraging them to activate it.

    Images with nudity will be blurred with a warning, giving users the option to view it. They’ll also get an option to block the sender and report the chat.

    People sending direct messages that contain nudity will get a message reminding them to be cautious when sending “sensitive photos.” They’ll also be informed that they can unsend the photos if they change their mind, but that there’s a chance others may have already seen them.

    Instagram said it’s working on technology to help identify accounts that could potentially be engaging in sexual extortion scams, “based on a range of signals that could indicate sextortion behavior.”

    To stop criminals from connecting with young people, it’s also taking measures including not showing the “message” button on a teen’s profile to potential sextortion accounts, even if they already follow each other, and testing new ways to hide teens from these accounts.

    In January, the FBI warned of a “huge increase” in sextortion cases targeting children — including financial sextortion, where someone threatens to release compromising images unless the victim pays. The targeted victims are often boys between the ages of 14 and 17, but the FBI said any child can become a victim. In the six-month period from October 2022 to March 2023, the FBI saw a more than 20% increase in reporting of financially motivated sextortion cases involving minor victims compared to the same period in the previous year.


  • Online protection bills for kids pass in Maryland House, Senate — but Big Tech companies continue their fight – WTOP News

    Bills that would limit how much data can be harvested from kids online passed overwhelmingly in their respective chambers in Annapolis, Maryland, but there are signs that opponents aren’t finished objecting to the measures.

    House and Senate bills would bar tech companies from using data to push personalized ads to children or to track them in real time. The exception would be apps that are used for navigation.

    In addition, tech firms that make products that require an account would have to default to the highest privacy setting possible.

    While each bill must still pass the opposite chamber before final passage, Net Choice — which represents tech giants Google, TikTok and Meta — has already written a letter to Gov. Wes Moore asking that he veto the bills.

    Testifying before a House panel last month, Carl Szabo, vice president and general counsel for Net Choice, told lawmakers that the bill was unconstitutional and infringed upon the First Amendment rights of digital companies.

    “California tried to do an end run around the First Amendment. They lost. Their law has done absolutely nothing to protect children in the state of California,” said Szabo.

    Szabo, who pointed out he’s a parent and lives in Maryland, said, “I am happy to provide solutions; just this is not one of them.”

    In the same hearing, Del. C.T. Wilson, chair of the Economic Matters Committee, said lawmakers were intent on passing protections for children online.

    Wilson referenced earlier testimony on suicides linked to online bullying.

    “I guess … we don’t do anything about that because of freedom of speech?” Wilson continued. “Teddy Roosevelt said: ‘The best thing you can do is the right thing. The second-best thing is the wrong thing, but the worst thing is nothing.’”

    Net Choice has filed lawsuits in other states on similar bills. While the organization has anticipated ultimate passage of the bills and asked for a gubernatorial veto, it’s not yet clear if the group will file suit in Maryland.

    Maryland Attorney General Anthony Brown has expressed support for online protections for children. In written testimony to the House Economic Matters Committee, Brown wrote in support of the House bill.

    “HB 603 prohibits the use of deceptive design patterns that mislead and confuse underage users. Thus, [the bill] imposes permissible limits on commercial activity aimed at protecting children from documented harms,” the attorney general said.

    Sen. Ben Kramer, who has sponsored a Senate version of the legislation, told WTOP he is confident the bills will be enacted. And in case of a legal challenge, Kramer said, “If Big Tech wants to have a run at it [in the courts], so be it, and we’re not going to be intimidated by them.”

    In an email, Gov. Moore’s press secretary Carter Elliott said the governor will review the legislation once it passes both chambers.

    “When bills hit his desk, he will thoroughly review them all to ensure that the Moore-Miller Administration is enacting legislation that is in the best interest of all Marylanders,” the press secretary said.


    © 2024 WTOP. All Rights Reserved. This website is not intended for users located within the European Economic Area.

    Kate Ryan

  • US to roll out visa restrictions on people who misuse spyware to target journalists, activists

    WASHINGTON — The Biden administration announced Monday it is rolling out a new policy that will allow it to impose visa restrictions on foreign individuals involved in the misuse of commercial spyware.

    The administration’s policy will apply to people who’ve been involved in the misuse of commercial spyware to target individuals including journalists, activists, perceived dissidents, members of marginalized communities, or the family members of those who are targeted. The visa restrictions could also apply to people who facilitate or get financial benefit from the misuse of commercial spyware, officials said.

    “The United States remains concerned with the growing misuse of commercial spyware around the world to facilitate repression, restrict the free flow of information, and enable human rights abuses,” Secretary of State Antony Blinken said in a statement announcing the new policy. “The misuse of commercial spyware threatens privacy and freedoms of expression, peaceful assembly, and association. Such targeting has been linked to arbitrary detentions, forced disappearances, and extrajudicial killings in the most egregious of cases.”

    Biden issued an executive order nearly a year ago restricting the U.S. government’s use of commercial spyware “that poses risks to national security.”

    That order required the head of any U.S. agency using commercial spyware programs to certify that they don’t pose a significant counterintelligence or other security risk, a senior administration official said. It was issued as the White House acknowledged that U.S. government employees in 10 countries had been compromised or targeted by commercial spyware.

    A senior administration official who briefed reporters ahead of Monday’s announcement would not say if any particular individuals were in line to immediately be impacted by the visa restrictions. The official spoke on the condition of anonymity under ground rules set by the White House.

    Officials said the visa restriction policy can apply to citizens of any country found to have misused or facilitated the malign use of spyware, even if they are from countries whose citizens are allowed entry into the U.S. without first applying for a visa.

    Perhaps the best known example of spyware, the Pegasus software from Israel’s NSO Group, was used to target more than 1,000 people across 50 countries, according to security researchers and a July 2021 global media investigation, citing a list of more than 50,000 cellphone numbers.

    The U.S. has already placed export limits on NSO Group, restricting the company’s access to U.S. components and technology.

    Pegasus spyware was used in Jordan to hack the cellphones of at least 30 people, including journalists, lawyers, human rights and political activists, according to the digital rights group Access Now.

    The hacking with spyware made by Israel’s NSO Group occurred from 2019 until last September, according to Access Now. It did not accuse Jordan’s government of the hacking.

    Amnesty International also reported that its forensic researchers had determined that Pegasus spyware was installed on the phone of Washington Post journalist Jamal Khashoggi’s fiancee, Hatice Cengiz, just four days after he was killed in the Saudi Consulate in Istanbul in 2018. The company had previously been implicated in other spying on Khashoggi.

    ___

    Associated Press writer Frank Bajak in Boston contributed reporting.


  • Journalists, lawyers and activists hacked with Pegasus spyware in Jordan, forensic probe finds

    Israeli-made Pegasus spyware was used in Jordan to hack the cellphones of at least 30 people, including journalists, lawyers, human rights and political activists, the digital rights group Access Now said Thursday.

    The hacking with spyware made by Israel’s NSO Group occurred from 2019 until last September, Access Now said in its report. It did not accuse Jordan’s government of the hacking.

    One of the targets was Human Rights Watch’s deputy director for the region, Adam Coogle, who said in an interview that it was difficult to imagine who other than Jordan’s government would be interested in hacking those who were targeted.

    The Jordanian government had no immediate comment on Thursday’s report.

    In a 2022 report detailing a much smaller group of Pegasus victims in Jordan, digital sleuths at the University of Toronto’s Citizen Lab identified two operators of the spyware it said may have been agents of the Jordanian government. A year earlier, Axios reported on negotiations between Jordan’s government and NSO Group.

    “We believe this is just the tip of the iceberg when it comes to the use of Pegasus spyware in Jordan, and that the true number of victims is likely much higher,” Access Now said. Its Middle East and North Africa director, Marwa Fatafta, said at least 30 of 35 known targeted individuals were successfully hacked.

    Citizen Lab confirmed all but five of the infections, with 21 victims asking to remain anonymous, citing the risk of reprisal. The rest were identified by Human Rights Watch, Amnesty International’s Security Lab, and the Organized Crime and Corruption Reporting Project.

    NSO Group says it only sells to vetted intelligence and law enforcement agencies — and only for use against terrorists and serious criminals. But cybersecurity researchers who have tracked the spyware’s use in 45 countries have documented dozens of cases of politically motivated abuse of the spyware — from Mexico and Thailand to Poland and Saudi Arabia.

    An NSO Group spokesperson said the company would not confirm or deny its clients’ identities. NSO Group says it vets customers and investigates any report its spyware has been abused.

    The U.S. government was unpersuaded and blacklisted the NSO Group in November 2021, when iPhone maker Apple Inc. sued it, calling its employees “amoral 21st century mercenaries who have created highly sophisticated cyber-surveillance machinery that invites routine and flagrant abuse.”

    Those targeted in Jordan include Human Rights Watch’s senior researcher for Jordan and Syria, Hiba Zayadin. Both she and Coogle had received threat notifications from Apple on Aug. 29 that state-sponsored attackers had attempted to compromise their iPhones.

    Coogle’s local, personal iPhone was successfully hacked in October 2022, he said, just two weeks after the human rights group published a report documenting the persecution and harassment of citizens organizing peaceful political dissent.

    After that, Coogle activated “Lockdown Mode” on the iPhone, which Apple recommends for users at high risk.

    Human Rights Watch said in a statement Thursday that it had contacted NSO Group about the attacks and specifically asked it to investigate the hack of Coogle’s device “but has received no substantive response to these inquiries.”

    Jordanian human rights lawyer Hala Ahed — known for defending women’s and workers’ rights and prisoners of conscience — was also targeted at least twice by Pegasus, successfully in March 2021 then unsuccessfully in February 2023, Access Now said.

    About half of those found to have been targeted by Pegasus in Jordan — 16 in all — were journalists or media workers, the report said.

    One veteran Palestinian-American journalist and columnist, Daoud Kuttab, was hacked with Pegasus three times between February 2022 and September 2023.

    Along the way, he said, he’s learned important lessons about not clicking on links in messages purporting to be from legitimate contacts, which is how one of the Pegasus hacks snared him.

    Kuttab refused to speculate about who might have targeted him.

    “I always assume that somebody is listening to my conversations,” he said, as getting surveilled “comes with the territory” when you are a journalist in the Middle East.

    But Kuttab does worry about his sources being compromised by hacks — and the violation of his privacy.

    “Regardless of who did it, it’s not right to intervene into my personal, family privacy and my professional privacy.”

    ___

    This story has been corrected to say that Access Now says the hacking occurred from 2019 until last September, not from early 2020 until last November.


  • Britain’s got some of Europe’s toughest surveillance laws. Now it wants more


    LONDON — The U.K. already has some of the most far-reaching surveillance laws in the democratic world. Now it’s rushing to beef them up even further — and tech firms are spooked.

    Britain’s government wants to build on its landmark Investigatory Powers Act, a controversial piece of legislation dubbed the “snooper’s charter” by critics when introduced back in 2016.

    That law — introduced in the wake of whistleblower Edward Snowden’s revelations of mass state surveillance — attempted to introduce more accountability into the U.K. intelligence agencies’ sprawling snooping regime by formalizing wide-ranging powers to intercept emails, texts, web history and more.

    Now new legislation is triggering a fresh outcry among both industry execs and privacy campaigners — who say it could hobble efforts to protect user privacy.

    Industry body TechUK has written to Home Secretary James Cleverly airing its complaints. The group’s letter warns that the Investigatory Powers (Amendment) Bill threatens technological innovation; undermines the sovereignty of other nations; and could unleash dire consequences if it sets off a domino effect overseas.

    Tech companies are most concerned by a change that would allow the Home Office to issue notices preventing them from making technical updates that might impede information-sharing with U.K. intelligence agencies. 

    TechUK argues that, combined with pre-existing powers, the changes would “grant a de facto power to indefinitely veto companies from making changes to their products and services offered in the U.K.” 

    “Using this power, the government could prevent the implementation of new end-to-end encryption, or stop developers from patching vulnerabilities in code that the government or their partners would like to exploit,” Meredith Whittaker, president of secure messaging app Signal, told POLITICO when the bill was first unveiled. 

    The Home Office, Britain’s interior ministry, remains adamant it’s a technical and procedural set of tweaks. Home Office Minister Andrew Sharpe said at the bill’s committee stage in the House of Lords that the law was “not going to … ban end-to-end encryption or introduce a veto power for the secretary of state … contrary to what some are incorrectly speculating.”

    “We have always been clear that we support technological innovation and private and secure communications technologies, including end-to-end encryption,” a government spokesperson said. “But this cannot come at a cost to public safety, and it is critical that decisions are taken by those with democratic accountability.”

    Encryption threat

    Despite the protestations of industry and campaigners, the British government is whisking the bill through parliament at breakneck speed — risking the ire of lawmakers.

    Ministers have so far blocked efforts to refine the bill in the House of Lords, the U.K.’s upper chamber. But there are more opportunities to contest the legislation coming, and industry is already making appeals to MPs in the hopes of paring it back in the House of Commons.


    “We stress the critical need for adequate time to thoroughly discuss these changes, highlighting that rigorous scrutiny is essential given the international precedent they will set and their very serious impacts,” the TechUK letter states.

    The backdrop to the row is the fraught debate on encryption that unfolded during the passage of the earlier Online Safety Act, which companies and campaigners argued could compel companies to break encryption in the name of online safety. 

    The bill ultimately said that the government can call for the implementation of this technology when it’s “technically feasible” and simultaneously preserves privacy. 

    Apple, WhatsApp and Signal have threatened to pull their services from the U.K. if asked to undermine encryption under U.K. laws. 

    After the Online Safety Act passed in November, Meta announced that it had begun its rollout of end-to-end encryption on its Messenger service.

    In response, Cleverly issued a statement saying he was “disappointed” that the company had gone ahead with the move despite repeated government warnings that it would make identifying child abusers on the platform more difficult. 

    Critics see a pincer movement. “Taken together, it appears that the Online Safety Bill’s Clause 122 is intended to undermine existing encryption, while the updates to the IPA are intended to block further rollouts of encryption,” said Whittaker.  

    Beyond encryption 

    In addition to the notice regime, rights campaigners are worried that the bill allows for the more permissive use of bulk data where there are “low or no” expectations of privacy, for wide-ranging purposes including training AI models.

    Lib Dem peer Christopher Fox argued in the House of Lords that this “creates an essentially new and essentially undefined category of information” which marks “a departure from existing privacy law,” notably the Data Protection Act.

    Silkie Carlo, director of campaign group Big Brother Watch, also has issues with the newly invented category. With CCTV footage or social media posts, for example, people may not have an expectation of privacy, “[but] that’s not the point, the point is that that data taken together and processed in a certain way, can be incredibly intrusive.”

    Big Brother Watch is also concerned about how the bill deals with internet connection records — i.e. web logs for individuals for the last 12 months. These can currently be obtained by agencies when specific criteria are known, like the person of interest’s identity. Changes to the bill would broaden this for the purpose of “target discovery,” which Big Brother Watch characterizes as “generalized surveillance.”

    Members of the House of Lords are also worried about the bill’s proposal to expand the number of people who can sanction spying on parliamentarians themselves. Right now, this requires the PM’s sign-off, but under the bill, the PM would be able to designate deputies for when he is not “available.” The change was inspired by the period in which former PM Boris Johnson was incapacitated with COVID-19.


    “The purpose of this bill is to give the intelligence agencies a bit of extra agility at the margins, where the existing Rolls Royce regime is proving a bit clunky and bureaucratic,” argues David Anderson, crossbench peer and author of a review that served as a blueprint for the bill. “If you start throwing in too many safeguards, you will negate that purpose, and you will not solve the problem that bill is addressing.” 

    Anderson argued the changes relating to spying on MPs and peers are necessary “if the prime minister has got COVID, or if they’re in a foreign country where they have no access to secure communications.”

    This could even apply in cases where there’s a conflict of interest because spies want to snoop on the PM’s relatives or the PM himself, he added.

    Amendments proposed by peers at the committee stage were uniformly rejected by the government. 

    The bill will return to the House of Lords for the next stage of the legislative process on January 23, before heading to the House of Commons to be debated by MPs.

    “Our overarching concern is that the significance of the proposed changes to the notices regime are presented by the Home Office as minor adjustments and as such are being downplayed,” reads the TechUK letter.

    “What we’re seeing across these different bills is a continual edging further towards … turning private tech companies into arms of a surveillance state,” says Carlo.

    Laurie Clarke

  • From WWE wrestling to global AI summit: The unlikely rise of Michelle Donelan

    LONDON — Britain’s tech chief is no stranger to dealing with big egos. She used to promote superstar wrestlers.

    U.K. Science and Technology Secretary Michelle Donelan’s past career as a marketeer for WWE wrestling may stand her in good stead at Bletchley Park on Wednesday, as she hosts representatives from more than 100 tech companies, countries and academic institutions on the first day of a U.K.-hosted summit which aims to grapple with one of the biggest challenges of our time — the rise of artificial intelligence. 

    Working at the fast-paced WWE was “very much like” being at her busy Department for Science, Innovation and Technology (DSIT), Donelan tells POLITICO — somewhat improbably — in an eve-of-summit interview at her sparsely-decorated office on Whitehall.

    The oddball world of commercial wrestling was also good training for politics.

    “It was an eye-opener to different personalities, and how to deal with those different personalities,” she says — ideal for “dealing with big egos, in terms of British politics.”

    A low-profile Tory MP who only bagged her first junior ministerial job in 2019, Donelan makes for a surprising compère for the first day of Rishi Sunak’s much-hyped AI summit.

    Unlike Sunak, the 39-year-old was no self-professed tech geek when she was entrusted with setting up his new science and technology department in February 2023. By her own admission she doesn’t regularly use generative AI tools like ChatGPT. 

    But Donelan, who was pregnant with her first child when she was handed the science and tech brief, has been wading through piles of binders detailing technical information as she tries to get to grips with the subject. Colleagues note admiringly (and sometimes despairingly) how she operates on just a few hours’ sleep.

    “I think my journey on this has been a deeper understanding of … just how vital it is that we do lead in this, that we aren’t passive, that we don’t wait for others,” she says.

    Summit going on

    Since February, Donelan has been laying the groundwork for a summit Sunak hopes will be one of the defining moments of his premiership, with the objective of convincing world leaders to agree on the risks posed by AI.

    She, like the PM, is concerned about the potential disruption artificial intelligence could pose. “The risks are very daunting, there’s no denying that,” she says, while acknowledging “there is a debate about whether they will materialize or not.”

    Her critics say the summit is wrongly focused on long-term risk, however, and argue not enough is being done to tackle AI’s more immediate threats.

    The U.K. is “way behind” in terms of bringing forward actual legislation, said Peter Kyle, Donelan’s opposite number in the Labour Party, who has not been invited to this week’s summit. Donelan’s department has not yet even published a response to its own consultation on an artificial intelligence white paper published way back in March, he pointed out.

    Donelan insists the summit is “only part” of the U.K.’s work on artificial intelligence, however, and that it plans to say more about the white paper — a first step toward legislation — “by the end of the year.”

    “We’re not afraid to legislate. There will have to be legislation in this space eventually,” she says.

    But specifics are thin on the ground. She refuses to be drawn on “arbitrary timelines.”

    Surviving the hospital pass

    It was Donelan’s embrace of the government’s controversial Online Safety Bill, which she inherited in her previous ministerial role during the short-lived premiership of Liz Truss, which attracted the attention of Sunak.

    In the hard-fought Tory leadership campaign of July and August 2022, Truss and Sunak both promised to scrap parts of the bill focused on policing “legal but harmful” online content. It was Donelan, appointed as culture secretary by Truss, who was left to unravel those pledges.

    Her “no-nonsense” and “methodical” approach to the bill, and her willingness to take the views of her MP colleagues seriously, impressed Sunak when he arrived in No. 10 following Truss’ self-destruction.

    For that reason he kept her in post — and then chose her to set up the new department for science and technology earlier this year, according to a No. 10 official closely involved with that decision, granted anonymity to discuss internal government business.

    “I think Rishi, like me, can see that she is one of those effective secretaries of state that will deliver outcomes,” said former Education Secretary Nadhim Zahawi, whom Donelan worked alongside prior to her promotion to Cabinet.

    Finally getting the Online Safety Bill into law was a notable achievement. Donelan’s previous claim to fame had been her unwanted record of being the shortest-serving Cabinet minister in British history. She took the job of education secretary, and then resigned 35 hours later, in the chaotic final days of the Boris Johnson administration. 

    Child protection

    Donelan’s resolve to get the bill through parliament had been hardened by a one-to-one meeting with campaigner Ian Russell last November. His daughter Molly took her own life after viewing suicide content online.

    Donelan has kept the dossier of Molly’s posts handed to her by Russell at that private meeting, according to one U.K. government official. “From that [meeting] she was more determined to do something on child protection,” they said.

    “It was heart-wrenching to hear his story, and those of other bereaved parents, and I felt very passionately that we had an opportunity to really make a difference on this and to change the nature in which we regulate the online world,” Donelan says.

    Her approach was strikingly different to the long line of Tory ministers who preceded her. Her willingness to simply pick up the phone to relevant business leaders — often bypassing official government channels — has won her admirers in the exasperated U.K. tech industry, which has endured a succession of different ministers overseeing a bill plagued by uncertainty.

    “It was a complete breath of fresh air when she came in,” said Dom Hallas, executive director of tech lobbying outfit the Startup Coalition. “At industry roundtables she is to the point and well-briefed, but she is also frank when something is not going to happen.”

    “She actually gets things done, which I would contrast with the previous [Boris Johnson-led] regime. She does listen and seems interested in trying to find out what various stakeholders think about things,” Julian David, chief executive of industry body TechUK, added.

    Donelan feels she has skin in the game. Her son was born in the spring, and the tech secretary says the new online laws make her “a lot more confident in his use of social media, when he’s old enough.”

    Donelan confirms, however, that being handed a new government department, while heavily pregnant, and about to take maternity leave, was no small challenge. 

    “I’m not going to lie. It’s a lot harder than I thought it was going to be. Before you have a child you don’t appreciate you are going to have things like ‘Mum guilt’,” she says. “It was easier in my head and harder in reality.”

    The long game

    Donelan’s unshowy style belies a burning ambition, according to multiple MPs and officials who have tracked her career to date.

    She told both the Mail on Sunday and the BBC’s Political Thinking podcast that she decided to become a politician at the age of six, after seeing Tory icon Margaret Thatcher on television.

    In 1999, aged just 15, she spoke at the Conservative Party Conference in Blackpool. She was just 26 when she first stood for election, as a no-hoper in the safe Labour seat of Wentworth and Dearne in 2010.

    Three years later she became the Conservative candidate for the Lib Dem held seat of Chippenham — going on to overturn a 2,470 Lib Dem majority in the 2015 general election.

    On arriving in parliament, Donelan’s ambition was obvious to colleagues. One recalls her immediately asking for advice on how to climb the career ladder.

    Soon after she took her first step up, as a parliamentary private secretary — a lowly unpaid aide to a minister — the Conservative whips’ office created a leaderboard tallying the workrate of the 40-odd MPs holding similar roles. Donelan led the way, smashing every target by a significant margin, one minister said.

    “If she’s given a task she will attack it like nothing else. I’m not so sure about the bigger picture stuff — wider strategizing and setting a direction herself. But give her a direction and she’ll go at it,” the same minister said. 

    In her private life, Donelan is a committed Christian who shies away from the darker side of politics. She is “extremely respectful of Cabinet colleagues,” another former government official who worked with her said. “She doesn’t seem to be involved in backdoor skulduggery. It is all very earnest, but it is working for her in a way that is quite refreshing.”

    Yet she raised eyebrows at the Conservative Party conference in October with a main stage speech clearly designed to please the grassroots and capture a few right-wing headlines. Donelan vowed a crackdown on the “creeping wokeism” she claimed is threatening scientific research — and went viral for all the wrong reasons.

    A difficult interview with the BBC’s Victoria Derbyshire at the same conference also landed her less-than-positive headlines.

    For an ambitious minister looking to wrestle her way onto the world stage this week, these are nothing more than hazards of the job.

    Emilio Casalicchio contributed reporting


    Annabelle Dickson and Tom Bristow


  • Australian safety watchdog fines social platform X $385,000 for not tackling child abuse content


    CANBERRA, Australia — Australia’s online safety watchdog said on Monday it had fined X — the social media platform formerly known as Twitter — 610,500 Australian dollars ($385,000) for failing to fully explain how it tackled child sexual exploitation content.

    Australia’s eSafety Commission describes itself as the world’s first government agency dedicated to keeping people safe online.

    The commission issued legal transparency notices early this year to X and other platforms questioning what they were doing to tackle a proliferation of child sexual exploitation, sexual extortion and the livestreaming of child sexual abuse.

    eSafety Commissioner Julie Inman Grant said X and Google had not complied with the notices because both companies had failed to adequately respond to a number of questions.

    X, renamed by its new owner, Elon Musk, was the worst offender, providing no answers to some questions, including how many staff had remained on the trust and safety team that works on preventing harmful and illegal content since Musk took over, Inman Grant said.

    “I think there’s a degree of defiance there,” Inman Grant said.

    “If you’ve got a basic H.R. (human resources) system or payroll, you’ll know how many people are on each team,” she added.

    X did not immediately respond to a request for comment.

    After Musk completed his acquisition of the company in October last year, he drastically cut costs and shed thousands of jobs.

    X could challenge the fine in the Australian Federal Court. But the court could impose a fine of up to AU$780,000 ($493,402) per day since March when the commission first found the platform had not complied with the transparency notice.

    The commission would continue to pressure X through notices to become more transparent, Inman Grant said.

    “They can keep stonewalling and we’ll keep fining them,” she said.

    The commission issued Google with a formal warning for providing “generic responses to specific questions,” a statement said.

    Google regional director Lucinda Longcroft said the company had developed a range of technologies to proactively detect, remove and report child sexual abuse material.

    “Protecting children on our platforms is the most important work we do,” Longcroft said in a statement. “Since our earliest days we have invested heavily in the industrywide fight to stop the spread of child sexual abuse material,” she added.


  • Leading Egyptian opposition politician targeted with spyware, researchers find


    BOSTON — A leading Egyptian opposition politician was targeted with spyware multiple times after announcing a presidential bid — including with malware that automatically infects smartphones, security researchers have found. They say Egyptian authorities were likely behind the attempted hacks.

    Discovery of the malware last week by researchers at Citizen Lab and Google’s Threat Analysis Group prompted Apple to rush out operating system updates for iPhones, iPads, Mac computers and Apple Watches to patch the associated vulnerabilities.

    Citizen Lab said in a blog post that attempts beginning in August to hack former Egyptian lawmaker Ahmed Altantawy involved configuring his phone’s connection to the Vodafone Egypt mobile network to automatically infect it with Predator spyware if he visited certain websites not using the secure HTTPS protocol.

    Citizen Lab said the effort likely failed because Altantawy had his phone in “lockdown mode,” which Apple recommends for iPhone users at high risk, including rights activists, journalists and political dissidents in countries like Egypt.

    Prior to that, Citizen Lab said, attempts were made beginning in May to hack Altantawy’s phone with Predator via links in SMS and WhatsApp messages that he would have had to click on to become infected.

    Once infected, the Predator spyware turns a smartphone into a remote eavesdropping device and lets the attacker siphon off data.

    Given that Egypt is a known customer of Predator’s maker, Cytrox, and the spyware was delivered via network injection from Egyptian soil, Citizen Lab said it had “high confidence” Egypt’s government was behind the attack.

    Bill Marczak of Citizen Lab, the University of Toronto-based internet watchdog, obtained the exploit chain with Google researcher Maddie Stone.

    “It’s scary the fact that the government can essentially select anyone on Vodafone Egypt’s network and perhaps other networks for infections and they just flip a switch” and select them for targeting, he said. Marczak said “the most likely scenario here is that, yes, there is this cooperation from Vodafone.”

    In a separate incident in 2021, Citizen Lab determined that Altantawy — who announced his candidacy in March — was successfully hacked with Predator.

    Egyptian officials did not respond Saturday to requests for comment.

    Altantawy, a former journalist, announced in March his bid to challenge incumbent President Abdel Fattah el-Sissi, who has overseen a sharp crackdown on political opposition, in the 2024 election. Rights groups accuse el-Sissi’s administration of targeting dissent with brutal tactics: forced disappearances, torture and long-term detentions without trial.

    Altantawy, family members and supporters have complained of being harassed, which led him to ask Citizen Lab researchers to analyze his phone for potential spyware infection.

    Altantawy said Saturday in written responses to questions relayed by a trusted intermediary, who requested anonymity for personal security, that he contacted Citizen Lab after receiving a series of suspicious and anonymous messages embedded with links he suspected were malicious.

    He said he believed the hacking attempts were “inextricably linked to my political candidacy and my opposition role in the country against the Sisi regime” and sought “not only to surveil, but perhaps also to find compromising material that could be used to discredit or defame me.”

    Altantawy also said the incident raises questions about whether telecommunications companies operating in Egypt might be complicit.

    Previously, Citizen Lab documented Predator infections affecting two exiled Egyptians, and in a joint probe with Facebook determined that Cytrox had customers in countries including Armenia, Greece, Indonesia, Madagascar, Oman, Saudi Arabia and Serbia.

    In July, the U.S. added Predator’s maker, Cytrox, to its blacklist for developing surveillance tools deemed to have threatened U.S. national security as well as individuals and organizations worldwide, making it illegal for U.S. companies to do business with the firm. Israel’s NSO Group, maker of the Pegasus spyware, was similarly sanctioned in November 2021. The reported use of Predator in Greece helped precipitate the resignation last year of two top government officials, including the national intelligence director.

    The latest discovery brings to five the number of zero-day vulnerabilities in Apple software for which patches have been released this month.

    ——-

    AP reporter Maggie Hyde in Cairo contributed.


  • Anti-LGBTQ hate thrives online, spurs fears of more violence


    In the days after a gunman killed five people at a gay nightclub in Colorado last month, much of social media lit up with the now familiar expressions of grief, mourning and disbelief.

    But on some online message boards and platforms, the tone was celebratory. “I love waking up to great news,” wrote one user on Gab, a platform popular with far-right groups. Other users on the site called for more violence.

    The hate isn’t limited to fringe sites.

    On Twitter, YouTube and Facebook, researchers and LGBTQ advocates have tracked an increase in hate speech and threats of violence directed at LGBTQ people, groups and events, with much of it directed at transgender people.

    The content comes after conservative lawmakers in several states introduced dozens of anti-LGBTQ measures and amid a wave of threats targeting LGBTQ groups, as well as hospitals, health care workers, libraries and private businesses that support them.

    “I don’t think people understand the state of danger that we’re living in right now,” said Jay Brown, senior vice president at the Human Rights Campaign and a transgender man. “A lot of that is happening online, and online threats are turning into threats of real violence offline.”

    Hospitals in Boston, Pittsburgh, Phoenix, Washington, D.C., and other cities have received bomb threats and other harassing messages after misleading claims spread online about transgender care programs.

    In Tennessee, masked members of a white supremacist group showed up recently at a holiday charity event at a bookstore because the evening’s entertainment included a drag performer. An upcoming holiday party at an adults-only gay nightclub scheduled for Friday was also the subject of threats. The party’s theme? Ugly Christmas sweaters.

    “And they’re still coming after us? It’s just straight up bigotry and hatred at this point,” said Jessica Patterson, one of the organizers of the event, who noted that groups calling for violence against LGBTQ groups often espouse other bigotries too. “They just have to hate someone.”

    The transphobic content targeting events such as Patterson’s is just a subset of the hateful content about Jews, Muslims, women, Black people, Asians and others that has internet safety advocates and an increasing number of lawmakers in the United States and elsewhere pushing for tougher regulations that would force tech companies to do more.

    There’s no simple explanation for the increase in hate speech documented by researchers in recent years. Socio-economic stress caused by the COVID-19 pandemic, increased political polarization and resurgent far-right movements have all been blamed. So have politicians such as Donald Trump, whose brash use of social media emboldened extremists online.

    “I’ve been tracking hate-fueled extremist communities for more than 25 years but I’ve never seen hate speech — let alone the calls for violence that they spark — reach the volume they have now,” extremism researcher Rita Katz wrote in an email to The Associated Press.

    Katz is co-founder of SITE Intelligence Group, which monitors far-right internet sites and has identified dozens of threats against LGBTQ groups and events in the U.S. in recent months. SITE released a bulletin Thursday detailing death threats against drag performers after one appeared at the White House bill signing of the Respect for Marriage Act.

    Researchers at the Center for Countering Digital Hate, a nonprofit with offices in the U.S. and United Kingdom, studied the social media messages that spread immediately after the Colorado Springs shooting in November and found many examples of far-right Trump supporters celebrating the carnage. The users who didn’t praise the shooting often claimed it was faked by authorities and the media as a way to make conservatives look bad.

    Online hate speech has been linked to offline violence in the past, and many of the perpetrators of recent mass shootings were later found to be immersed in online worlds of bigotry and conspiracy theories.

    Officials in a number of countries have cited social media as a key factor in extremist radicalization, and have warned that COVID restrictions and lockdowns have given extremist groups a powerful recruiting tool.

    Despite rules prohibiting hate speech or violent threats, platforms such as Facebook and YouTube have struggled to identify and remove such content. In some cases, it’s because people use coded language designed to evade automated content moderation.

    Then there’s Twitter, which saw a surge in racist, anti-Semitic and homophobic content following its purchase by Elon Musk, a self-described free speech absolutist. Musk himself posted a tweet this past week that mocked transgender pronouns, as well as another misleadingly suggesting that Yoel Roth, Twitter’s former head of trust and safety, had supported letting children into gay dating apps.

    Roth, who is gay, went into hiding after receiving a deluge of threats following Musk’s tweet.

    “He (Musk) didn’t use the word ‘groomer’ but that’s the subtext of his tweet is that Yoel Roth is a groomer,” said Bhaskar Chakravorti, dean of global business at the Fletcher School at Tufts University, who has created a “Musk Monitor” tracking hate speech on the site.

    “If the owner of Twitter himself is pushing false and hateful content against his former head of safety, what can we expect from this platform?” Chakravorti said.


  • Musk says granting ‘amnesty’ to suspended Twitter accounts


    SAN FRANCISCO — New Twitter owner Elon Musk said Thursday that he is granting “amnesty” for suspended accounts, which online safety experts predict will spur a rise in harassment, hate speech and misinformation.

    The billionaire’s announcement came after he asked users in a poll posted to his timeline to vote on reinstating accounts that have not “broken the law or engaged in egregious spam.” The yes vote was 72%.

    “The people have spoken. Amnesty begins next week. Vox Populi, Vox Dei,” Musk tweeted using a Latin phrase meaning “the voice of the people, the voice of God.”

    Musk used the same Latin phrase after posting a similar poll last weekend before reinstating the account of former President Donald Trump, which Twitter had banned for encouraging the Jan. 6, 2021, Capitol insurrection. Trump has said he won’t return to Twitter but has not deleted his account.

    Such online polls are anything but scientific and can easily be influenced by bots.

    In the month since Musk took over Twitter, groups that monitor the platform for racist, anti-Semitic and other toxic speech say it’s been on the rise on the world’s de facto public square. That has included a surge in racist abuse of World Cup soccer players that Twitter is allegedly failing to act on.

    The uptick in harmful content is in large part due to the disorder following Musk’s decision to lay off half the company’s 7,500-person workforce, fire top executives, and then institute a series of ultimatums that prompted hundreds more to quit. Also let go were an untold number of contractors responsible for content moderation. Among those who resigned over a lack of faith in Musk’s willingness to keep Twitter from devolving into a chaos of uncontrolled speech was Yoel Roth, Twitter’s head of trust and safety.

    Major advertisers have also abandoned the platform.

    On Oct. 28, the day after he took control, Musk tweeted that no suspended accounts would be reinstated until Twitter formed a “content moderation council” with diverse viewpoints that would consider the cases.

    On Tuesday, he said he was reneging on that promise because he’d agreed to it at the insistence of “a large coalition of political-social activist groups” who later “broke the deal” by urging that advertisers at least temporarily stop giving Twitter their business.

    A day earlier, Twitter reinstated the personal account of far-right Rep. Marjorie Taylor Greene, which was banned in January for violating the platform’s COVID misinformation policies.

    Musk, meanwhile, has been getting increasingly chummy on Twitter with right-wing figures. Before this month’s U.S. midterm elections he urged “independent-minded” people to vote Republican.

    A report from the European Union published Thursday said Twitter took longer to review hateful content and removed less of it this year compared with 2021. The report was based on data collected over the spring — before Musk acquired Twitter — as part of an annual evaluation of online platforms’ compliance with the bloc’s code of conduct on disinformation. It found that Twitter assessed just over half of the notifications it received about illegal hate speech within 24 hours, down from 82% in 2021.


  • Kidoodle.TV Stresses the Importance of Knowing What Children Are Watching


    Kidoodle.TV’s latest whitepaper explains how parents are responding to the demand for increased screen time in their children’s lives and what they anticipate for the future, post-pandemic.

    Press Release

    updated: Jun 23, 2021

    Kidoodle.TV, a family-friendly streaming platform for kids 12 and under, recently released a comprehensive research study to better understand how parents view and approach screen time in a pandemic world and what they anticipate for the future as the world reopens. As media consumption amongst children has increased, the plethora of platforms available has allowed kids to access content that may be unsafe or unsuitable.

    According to a study by Wakefield Research:

    • 2 in 5 parents (44%) say they’ve seen previews or ads meant for older children or teens while streaming videos for their kids.
       
    • 42% say they’ve seen ads for a product meant solely for adults like a political ad.
       
    • 38% say they’ve seen content auto-played that they did not approve of.

    “The significance of doing this kind of real-life research to hear from parents directly on what they are facing when raising their children in today’s digital world, and the increased use of screen time on a daily basis, is critical to helping families make good decisions on what platforms are allowed. As a parent, I am faced with multiple demands everyday and knowing the dangers that lurk online is half the problem. The solution is making a smart decision for the sake of our children and being able to remove one less worry. The trends identified here today will serve to inform and guide and hopefully eliminate risks associated with inappropriate content being accessible to children,” commented Brenda Bisner, Chief Content Officer at Kidoodle.TV. 

    In addition to increased recreational viewing, parents and teachers rely on online streaming services as an educational tool. While streaming services can be powerful and effective when it comes to education, it’s a double-edged sword due to the risk of exposing children to content not meant for them.

    According to the 2021 It Takes a (Digital) Village Report, only 43% of parents utilize parental controls all the time, leaving a large opening for nefarious or inappropriate content to slip through the cracks. While parental controls are a good start, the system has many flaws. Even the most popular streaming services have inadvertently had lapses in their algorithms, where the platform recommends shows or ads meant for an older cohort of children.

    Incidents can also occur as a result of hackers or online trolls. In 2019, trolls targeted kids’ shows on a popular streaming platform with content depicting self-harm and other bizarre themes. A clear violation of the platform’s policies, the videos were promptly taken down, but not before thousands of children had viewed them. Knowing what children are watching, being an active part of their streaming time, and streaming on platforms that use human moderation to review content instead of relying solely on algorithms are some of the most effective ways of protecting children from unintentionally viewing inappropriate content.

    In regards to what parents think a post-pandemic future holds, the study revealed that while some parents want to lessen their child(ren)’s screen time (35%), others realize that increased screen time is simply a new reality (65%) and are allowing for more or the same amount of viewing time.

    Learn more about kids’ screen time in the age of COVID-19 in Kidoodle.TV’s 2021 It Takes a (Digital) Village report. 

    About Kidoodle.TV

    Kidoodle.TV® is a family-focused Safe Streaming™ service committed to ensuring children have a safe alternative to stream their favorite TV shows and movies. Available in over 160 countries and territories on thousands of connected devices, Kidoodle.TV provides peace of mind with every show available on Kidoodle.TV strictly vetted by caring people committed to Safe and Free Streaming for Kids™. Kidoodle.TV is available on iOS, Android, Apple TV, Fire TV, LG, Samsung, VIDAA-enabled Hisense TVs, Chromecast, Roku, Vizio SmartCast, Amazon, Jio, Xfinity X1, Connected TVs, HTML5 Web, and many other streaming media devices. Kidoodle.TV is owned and operated by A Parent Media Co. Inc., a family-based company. Kidoodle.TV is certified by the kidSAFE® Seal Program and is the proud recipient of the Mom’s Choice Award®, a Stevie® Award, platinum winner of the Best Mobile App Award, and Parents’ Picks Award – Best Elementary Products. Visit www.kidoodle.tv to learn more. *Content availability varies by location.

    CONTACT INFORMATION:

    Tiffany Kayar
    tiffanyPR@newswiremail.io

    Source: Kidoodle.TV
