ReportWire

Tag: pornography

  • Utah law requiring porn sites verify user ages takes effect

    SALT LAKE CITY — You may soon be required to prove you’re older than 18 to watch porn in Utah, if adult websites comply with a law that took effect Wednesday.

The law, which requires adult websites to verify the ages of their users, makes Utah at least the second state to enact an age verification requirement meant to shield kids from sexually explicit material that has become increasingly accessible online.

    “It’s part of our job as society — and maybe a subset of my job as a lawmaker — to try to protect children,” state Sen. Todd Weiler, the measure’s Republican sponsor, said. “I’m not gonna blame all of society’s ills on pornography, but I don’t think it’s helpful when a kid is forming their impressions of sex and gender to have all of this filth and lewd depictions on their mind.”

Showing children pornography is already illegal under federal law, but the prohibition is rarely enforced. The law is Utah’s latest move to crack down on access to pornography and dovetails with lawmakers’ other efforts to restrict how kids use the internet, including social media sites. It comes less than a year after Louisiana enacted a similar law and as additional states consider policies such as filters or age verification for adult websites.

    Dr. Eleanor Gaetan of the anti-porn National Center on Sexual Exploitation said filters and age verification were “complementary efforts” to limit kids’ access to pornography. She noted anti-porn sentiment had grown substantially in recent years due to a “groundswell of parents,” including ones who have testified in statehouses throughout the country and in front of the U.S. Congress.

    “The wave will continue because the harms are real,” she said. “These kids can’t unsee what they see.”

Though heralded by social conservatives, age verification laws have been condemned by adult websites, which argue they’re part of a larger anti-sex political movement. They’ve also drawn opposition from groups that advocate for digital privacy and free speech, including the Electronic Frontier Foundation. The group argued earlier this year that it’s impossible to ensure websites don’t retain user data, regardless of whether age verification laws require them to delete it.

    Earlier this week, Pornhub, among the most widely viewed adult websites, blocked access to its content to protest the law. Those in Utah attempting to access the site since Monday have been greeted with a “Dear User” letter and accompanying video from adult film actor Cherie DeVille.

    “Giving your ID card every time you want to visit an adult platform is not the most effective solution for protecting our users,” DeVille says, reading from the letter. “The best and most effective solution for protecting children and adults alike is to identify users by their device.”

    The letter says Pornhub will “completely disable access” in Utah due to the law, unless a “real solution” is offered.

    It’s unclear if other websites will comply.

Critics, including Pornhub, argue age-verification laws can be easily circumvented with well-known tools such as VPNs, which route a user’s internet traffic through servers elsewhere and mask their location. They have also raised questions about enforcement, with Pornhub saying such efforts drive traffic to lesser-known sites that don’t comply with the law and have fewer safety protocols.

A year after passing an age-verification requirement, Louisiana lawmakers have renewed their efforts to get adult websites to comply with the law. A follow-up measure that would subject the sites to fines for not requiring users to prove their age advanced through the state House of Representatives in April.

Measures have also been introduced in Arizona and South Carolina. Arkansas passed a similar age-verification law for adult websites that takes effect later this summer.

The Utah law attempts to address privacy and internet data harvesting concerns by requiring that websites not retain users’ ID information. It opens adult websites up to lawsuits if they don’t verify the age of their users. It allows several age verification methods, including third-party age verification services and the digital licenses that states are increasingly offering on mobile devices.

It builds off years of anti-porn efforts in Utah’s Republican-controlled Legislature, where a majority of lawmakers are members of The Church of Jesus Christ of Latter-day Saints. It comes seven years after Weiler — who describes himself as the statehouse’s unofficial “porn czar” — led the charge to make Utah the first state to declare pornography a “public health crisis” and two years after lawmakers passed a measure paving the way to require internet-capable devices be equipped with porn filters for children. Provisions of that filter measure keep it from taking effect until at least five other states pass similar laws.

Weiler likened the measure to Utah’s first-in-the-nation law prohibiting kids under 18 from using social media between the hours of 10:30 p.m. and 6:30 a.m. and requiring age verification for social media users. He said he understands that, realistically, some kids may bypass age-verification controls. But he said he wonders why opponents who argue that enforcement concerns make internet age verification laws useless haven’t raised similar concerns about speeding drivers or online gambling.

    “The internet was born, but it wasn’t born yesterday,” he said.

    __

AP reporters Sara Cline in Baton Rouge, La., and Andrew DeMillo in Little Rock, Ark., contributed reporting.


  • Deepfake pornography could be a growing problem as AI editing programs become more sophisticated

    Artificial intelligence imaging can be used to create art, try on clothes in virtual fitting rooms or help design advertising campaigns. But experts fear the darker side of the easily accessible tools could worsen something that primarily harms women: nonconsensual “deepfake” pornography.

    Deepfakes are videos and images that have been digitally created or altered with artificial intelligence or machine learning. Porn created using the technology first began spreading across the internet several years ago when a Reddit user shared clips that placed the faces of female celebrities on the shoulders of porn actors.

    Since then, deepfake creators have disseminated similar videos and images targeting online influencers, journalists and others with a public profile. Thousands of videos exist across a plethora of websites. And some have been offering users the opportunity to create their own images — essentially allowing anyone to turn whoever they wish into sexual fantasies without their consent, or use the technology to harm former partners.


[Video: “Creating a ‘lie detector’ for deepfakes” (05:36)]

    Easier to create and more difficult to detect

    The problem, experts say, grew as it became easier to make sophisticated and visually compelling deepfakes. And they say it could get worse with the development of generative AI tools that are trained on billions of images from the internet and spit out novel content using existing data.

    “The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing the button,” said Adam Dodge, the founder of EndTAB, a group that provides trainings on technology-enabled abuse. “And as long as that happens, people will undoubtedly … continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images.”

    Artificial images, real harm

    Noelle Martin, of Perth, Australia, has experienced that reality. The 28-year-old found deepfake porn of herself 10 years ago when out of curiosity one day she used Google to search an image of herself. To this day, Martin said she doesn’t know who created the fake images, or videos of her engaging in sexual intercourse that she would later find. She suspects someone likely took a picture posted on her social media page or elsewhere and doctored it into porn.

Horrified, Martin contacted different websites over a number of years in an effort to get the images taken down. Some didn’t respond. Others took the images down, but she soon found them posted again.

    “You cannot win,” Martin said. “This is something that is always going to be out there. It’s just like it’s forever ruined you.”

    The more she spoke out, she said, the more the problem escalated. Some people even told her the way she dressed and posted images on social media contributed to the harassment — essentially blaming her for the images instead of the creators.

    Eventually, Martin turned her attention towards legislation, advocating for a national law in Australia that would fine companies 555,000 Australian dollars ($370,706) if they don’t comply with removal notices for such content from online safety regulators.

    But governing the internet is next to impossible when countries have their own laws for content that’s sometimes made halfway around the world. Martin, currently an attorney and legal researcher at the University of Western Australia, said she believes the problem has to be controlled through some sort of global solution.

In the meantime, the companies behind some AI models say they’re already curbing access to explicit images.


[Video: “Art created by artificial intelligence” (06:53)]

    Removing AI’s access to explicit content

OpenAI said it removed explicit content from data used to train the image-generating tool DALL-E, which limits users’ ability to create those types of images. The company also filters requests and said it blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another model, blocks the use of certain keywords and encourages users to flag problematic images to moderators.

Meanwhile, the startup Stability AI rolled out an update in November that removes the ability to create explicit images using its image generator Stable Diffusion. The changes came following reports that some users were creating celebrity-inspired nude pictures using the technology.

    Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques like image recognition to detect nudity and returns a blurred image. But it’s possible for users to manipulate the software and generate what they want since the company releases its code to the public. Bishara said Stability AI’s license “extends to third-party applications built on Stable Diffusion” and strictly prohibits “any misuse for illegal or immoral purposes.”

    Some social media companies have also been tightening up their rules to better protect their platforms against harmful materials.

    TikTok, Twitch, others update policies

    TikTok said last month all deepfakes or manipulated content that show realistic scenes must be labeled to indicate they’re fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. Previously, the company had barred sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.

    The gaming platform Twitch also recently updated its policies around explicit deepfake images after a popular streamer named Atrioc was discovered to have a deepfake porn website open on his browser during a livestream in late January. The site featured phony images of fellow Twitch streamers.

    Twitch already prohibited explicit deepfakes, but now showing a glimpse of such content — even if it’s intended to express outrage — “will be removed and will result in an enforcement,” the company wrote in a blog post. And intentionally promoting, creating or sharing the material is grounds for an instant ban.

    Other companies have also tried to ban deepfakes from their platforms, but keeping them off requires diligence.

Apple and Google said recently they removed an app from their app stores that was running sexually suggestive deepfake videos of actresses to market the product. Research into deepfake porn is scarce, but one report released in 2019 by the AI firm DeepTrace Labs found the technology was almost entirely weaponized against women, with Western actresses the most targeted individuals, followed by South Korean K-pop singers.

    The same app removed by Google and Apple had run ads on Meta’s platform, which includes Facebook, Instagram and Messenger. Meta spokesperson Dani Lever said in a statement the company’s policy restricts both AI-generated and non-AI adult content and it has restricted the app’s page from advertising on its platforms.

    Take It Down tool

In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in an online tool called Take It Down that allows teens to report explicit images and videos of themselves so they can be removed from the internet. The reporting site works for regular images and AI-generated content, which has become a growing concern for child safety groups.

    “When people ask our senior leadership what are the boulders coming down the hill that we’re worried about? The first is end-to-end encryption and what that means for child protection. And then second is AI and specifically deepfakes,” said Gavin Portnoy, a spokesperson for the National Center for Missing and Exploited Children, which operates the Take It Down tool.

    “We have not … been able to formulate a direct response yet to it,” Portnoy said.


  • Deepfake porn could be a growing problem amid AI race

    NEW YORK — Artificial intelligence imaging can be used to create art, try on clothes in virtual fitting rooms or help design advertising campaigns.

    But experts fear the darker side of the easily accessible tools could worsen something that primarily harms women: nonconsensual deepfake pornography.

    Deepfakes are videos and images that have been digitally created or altered with artificial intelligence or machine learning. Porn created using the technology first began spreading across the internet several years ago when a Reddit user shared clips that placed the faces of female celebrities on the shoulders of porn actors.

    Since then, deepfake creators have disseminated similar videos and images targeting online influencers, journalists and others with a public profile. Thousands of videos exist across a plethora of websites. And some have been offering users the opportunity to create their own images — essentially allowing anyone to turn whoever they wish into sexual fantasies without their consent, or use the technology to harm former partners.

    The problem, experts say, grew as it became easier to make sophisticated and visually compelling deepfakes. And they say it could get worse with the development of generative AI tools that are trained on billions of images from the internet and spit out novel content using existing data.

    “The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing the button,” said Adam Dodge, the founder of EndTAB, a group that provides trainings on technology-enabled abuse. “And as long as that happens, people will undoubtedly … continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images.”

    Noelle Martin, of Perth, Australia, has experienced that reality. The 28-year-old found deepfake porn of herself 10 years ago when out of curiosity one day she used Google to search an image of herself. To this day, Martin says she doesn’t know who created the fake images, or videos of her engaging in sexual intercourse that she would later find. She suspects someone likely took a picture posted on her social media page or elsewhere and doctored it into porn.

Horrified, Martin contacted different websites over a number of years in an effort to get the images taken down. Some didn’t respond. Others took the images down, but she soon found them posted again.

    “You cannot win,” Martin said. “This is something that is always going to be out there. It’s just like it’s forever ruined you.”

    The more she spoke out, she said, the more the problem escalated. Some people even told her the way she dressed and posted images on social media contributed to the harassment — essentially blaming her for the images instead of the creators.

    Eventually, Martin turned her attention towards legislation, advocating for a national law in Australia that would fine companies 555,000 Australian dollars ($370,706) if they don’t comply with removal notices for such content from online safety regulators.

    But governing the internet is next to impossible when countries have their own laws for content that’s sometimes made halfway around the world. Martin, currently an attorney and legal researcher at the University of Western Australia, says she believes the problem has to be controlled through some sort of global solution.

In the meantime, the companies behind some AI models say they’re already curbing access to explicit images.

OpenAI says it removed explicit content from data used to train the image-generating tool DALL-E, which limits users’ ability to create those types of images. The company also filters requests and says it blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another model, blocks the use of certain keywords and encourages users to flag problematic images to moderators.

Meanwhile, the startup Stability AI rolled out an update in November that removes the ability to create explicit images using its image generator Stable Diffusion. The changes came following reports that some users were creating celebrity-inspired nude pictures using the technology.

    Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques like image recognition to detect nudity and returns a blurred image. But it’s possible for users to manipulate the software and generate what they want since the company releases its code to the public. Bishara said Stability AI’s license “extends to third-party applications built on Stable Diffusion” and strictly prohibits “any misuse for illegal or immoral purposes.”

    Some social media companies have also been tightening up their rules to better protect their platforms against harmful materials.

    TikTok said last month all deepfakes or manipulated content that show realistic scenes must be labeled to indicate they’re fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. Previously, the company had barred sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.

    The gaming platform Twitch also recently updated its policies around explicit deepfake images after a popular streamer named Atrioc was discovered to have a deepfake porn website open on his browser during a livestream in late January. The site featured phony images of fellow Twitch streamers.

    Twitch already prohibited explicit deepfakes, but now showing a glimpse of such content — even if it’s intended to express outrage — “will be removed and will result in an enforcement,” the company wrote in a blog post. And intentionally promoting, creating or sharing the material is grounds for an instant ban.

    Other companies have also tried to ban deepfakes from their platforms, but keeping them off requires diligence.

Apple and Google said recently they removed an app from their app stores that was running sexually suggestive deepfake videos of actresses to market the product. Research into deepfake porn is scarce, but one report released in 2019 by the AI firm DeepTrace Labs found the technology was almost entirely weaponized against women, with Western actresses the most targeted individuals, followed by South Korean K-pop singers.

    The same app removed by Google and Apple had run ads on Meta’s platform, which includes Facebook, Instagram and Messenger. Meta spokesperson Dani Lever said in a statement the company’s policy restricts both AI-generated and non-AI adult content and it has restricted the app’s page from advertising on its platforms.

In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in an online tool called Take It Down that allows teens to report explicit images and videos of themselves so they can be removed from the internet. The reporting site works for regular images and AI-generated content, which has become a growing concern for child safety groups.

    “When people ask our senior leadership what are the boulders coming down the hill that we’re worried about? The first is end-to-end encryption and what that means for child protection. And then second is AI and specifically deepfakes,” said Gavin Portnoy, a spokesperson for the National Center for Missing and Exploited Children, which operates the Take It Down tool.

    “We have not … been able to formulate a direct response yet to it,” Portnoy said.


  • Italian Museum Invites Florida Students To See Some Real Porn

    FLORENCE, ITALY—After a Tallahassee parent complained that pictures of Michelangelo’s David shown to a sixth-grade art class were “pornographic,” causing a principal to lose her job, officials from Italy’s Galleria dell’Accademia invited Florida students to come see some real porn Thursday. “If you thought David was obscene, just wait until you get a load of the sick shit we show our patrons after hours,” said Cecilie Hollberg, the museum’s director, who explained that in a darkened, curtained-off gallery at the back of the building, her institution housed a permanent collection of hardcore pornography that, unlike the famed Renaissance masterpiece, had absolutely no redeeming social value. “We’ve got something for everyone, including some real nasty stuff—porn with more jizz and more sloppy, stretched out holes than you’ve ever seen in your life. Let’s just say it’ll get you harder than any marble statue.” Hollberg went on to acknowledge that of the approximately 1.5 million people who visited the Galleria dell’Accademia each year, fewer than 5% even bothered to stop and see the David.


  • Florida Principal Out After Viewing Of Michelangelo’s ‘David’ Upsets Parents

    The principal of Florida’s Tallahassee Classical School is out of a job after parents complained that their sixth-grade children were shown Michelangelo’s 16th century “David” sculpture, with one parent calling it “pornographic,” the Tallahassee Democrat first reported.

    The now-former principal, Hope Carrasquilla, told HuffPost the situation was also “a little more complicated than that,” noting that the usual protocol is to send parents a letter before students are shown such classical artwork.

    Due to “a series of miscommunications,” the letter did not go out to the sixth-grade parents, and some complained, Carrasquilla said.

    One parent was “point-blank upset,” Carrasquilla continued, and “felt her child should not be viewing those pieces.”

[Image: Michelangelo’s “David.” Credit: Roberto Serra – Iguana Press via Getty Images]

    The board of the charter school decided Monday to oust the principal after less than a year in the job. She was the school’s third principal since it opened in the fall of 2020, per the Tallahassee Democrat.

    The marble sculpture of the Biblical figure David was crafted between 1501 and 1504, originally commissioned for display inside an Italian cathedral. It now resides at the Galleria dell’Accademia in Florence.

    Carrasquilla said she had taught in classical education for a decade and knew that “once in a while you get a parent who gets upset about Renaissance art” — hence the letter. She was not surprised by the reaction from the school board chair, Barney Bishop, but the fact that other board members went along with him was unexpected.

    Bishop told the Tallahassee newspaper that “parental rights are supreme.”

    “And that means protecting the interests of all parents, whether it’s one, 10, 20 or 50,” he added.

    Carrasquilla said many other parents and faculty members were upset about her ouster and have been reaching out with support.

The move comes as conservatives in Florida and elsewhere push to expand their influence over primary education.

    The Tallahassee school is a public charter institution that focuses on classical learning, a teaching philosophy centered on a traditional Western liberal arts education that aims to impart critical thinking skills children can use throughout their lives. Classical learning is also popular within the Christian homeschooling movement.

    The Tallahassee Classical School is affiliated with Hillsdale College, a conservative Christian institution that has sought to expand its influence over the last decade by helping set up public charter schools. Hillsdale briefly cut ties with the Tallahassee school in early 2022 for not meeting improvement standards, but it later regained affiliation.

    Hillsdale has raised funds for the charter school network by pledging to fight “leftist” and “distorted” teaching of American history, such as the lessons about slavery contained in The New York Times’ 1619 Project, the newspaper reported last year.

  • Your Trump questions answered. Yes, he can still run for president if indicted | CNN Politics

A version of this story appeared in CNN’s What Matters newsletter.



CNN —

    Could he still run for president? Why would the adult-film star case move before any of the ones about protecting democracy? How could you possibly find an impartial jury?

Below are answers to some of the questions we’ve been getting – versions of these were emailed in by subscribers of the What Matters newsletter – about the possible indictment of former President Donald Trump.

    He’s involved in four different criminal investigations by three different levels of government – the Manhattan district attorney; the Fulton County, Georgia, district attorney; and the Department of Justice.

    These questions are mostly concerned with Manhattan DA Alvin Bragg’s potential indictment of Trump over a hush-money payment scheme, but many could apply to each investigation.

    The most-asked question is also the easiest to answer.

    Yes, absolutely.

    “Nothing stops Trump from running while indicted, or even convicted,” the University of California, Los Angeles law professor Richard Hasen told me in an email.

    The Constitution requires only three things of candidates. They must be:

    • A natural born citizen.
    • At least 35 years old.
    • A resident of the US for at least 14 years.

As a political matter, it may be more difficult for an indicted candidate, who could become a convicted criminal, to win votes. Trials don’t let candidates put their best foot forward. But it is not forbidden for them to run or be elected.

There are a few asterisks in the Constitution, including in the 14th and 22nd Amendments, none of which currently applies to Trump in the cases thought to be closest to formal indictment.

    Term limits. The 22nd Amendment forbids anyone who has twice been president (meaning twice been elected or served part of someone else’s term and then won his or her own) from running again. That doesn’t apply to Trump since he lost the 2020 election.

    Impeachment. If a person is impeached by the House and convicted by the Senate of high crimes and misdemeanors, he or she is removed from office and disqualified from serving again. Trump, although twice impeached by the House during his presidency, was also twice acquitted by the Senate.

    Disqualification. The 14th Amendment includes a “disqualification clause,” written specifically with an eye toward former Confederate soldiers.

    It reads:

    No person shall be a Senator or Representative in Congress, or elector of President and Vice-President, or hold any office, civil or military, under the United States, or under any state, who, having previously taken an oath, as a member of Congress, or as an officer of the United States, or as a member of any State legislature, or as an executive or judicial officer of any State, to support the Constitution of the United States, shall have engaged in insurrection or rebellion against the same, or given aid or comfort to the enemies thereof.

    Potential charges in New York City with regard to the hush-money payment to an adult-film star have nothing to do with rebellion or insurrection. Nor do potential federal charges with regard to classified documents.

    Potential charges in Fulton County, Georgia, with regard to 2020 election meddling or at the federal level with regard to the January 6, 2021, insurrection could perhaps be construed by some as a form of insurrection. But that is an open question that would have to work its way through the courts. The 2024 election is fast approaching.

If he were convicted of a felony – reminder, he has not yet even been charged – in New York, Trump would be barred from voting in his adopted home state of Florida, at least until he had served out a potential sentence.

    First off, there’s no suggestion of any coordination between the Manhattan DA, the Department of Justice and the Fulton County DA.

    These are all separate investigations on separate issues moving at their own pace.

The payment to the adult-film actress Stormy Daniels was made back in 2016. Trump has argued the statute of limitations has run out. Lawyers could argue the clock stopped when Trump left New York to become president in 2017.

    It’s also not clear how exactly a state crime (falsifying business records) can be paired with a federal election crime to create a state felony. There are some very deep legal dives into this, like this one from Just Security. We will have to see what, if anything, Bragg adds if he does bring an indictment.

    Of the four known criminal investigations into Trump, falsifying business records with regard to the hush-money payment to an adult-film actress seems like the smallest of potatoes, especially since federal prosecutors decided not to charge him when he left office.

    His finances, subject of a long-running investigation, seem like a bigger deal. But the Manhattan DA decided not to criminally charge Trump with regard to tax crimes. Trump has been sued by the New York attorney general in civil court based on some of that evidence.

    Investigations in Georgia with regard to election meddling and by the Justice Department with regard to January 6 and his treatment of classified data also seem more consequential.

    But these cases are being pursued by different entities at different paces in different governments – New York City; Fulton County, Georgia; and the federal government.

    “I do think that the charges are much more serious against Trump related to the election,” Hasen said in his email. “But falsifying business records can also be a crime. (I’m more skeptical about combining that in a state court with a federal campaign finance violation.)”

    One federal law enforcement source told CNN’s John Miller over the weekend that Trump’s Secret Service detail is actively engaged with authorities in New York City about how this arrest process would work if Trump is ultimately indicted.

    It’s usually a routine process of fingerprinting, a mug shot and an arraignment. It would not likely be a public event and clearly his protective detail would move through the building with Trump.

    New York does not release most mug shots after a 2019 law intended to cut down on online extortion.

    As Trump is among the most divisive and now well-known Americans in history, it’s hard to believe there’s a big, impartial jury pool out there.

    The Sixth Amendment guarantees “the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed.”

    Finding such a jury “won’t be easy given the intense passions on both sides that he engenders,” Hasen said.

    A Quinnipiac University poll conducted in March asked for registered voters’ opinion of Trump. Just 2% said they hadn’t heard enough about him to say.

    The New York State Unified Court System’s trial juror’s handbook explains the “voir dire” process by which jurors are selected. Those accepted by both the prosecution and defense as being free of “bias or personal knowledge that could hinder his or her ability to judge a case impartially” must take an oath to act fairly and impartially.

    We’re getting way ahead of ourselves. He hasn’t been indicted, much less tried or convicted. Any indictment, even for a Class E felony in New York, would be for the kind of nonviolent offense that would not lead to jail time for any defendant.

    “I don’t expect Trump to be put in jail if he is indicted for any of these charges,” Hasen said. “Jail time would only come if he were convicted and sentenced to jail time.”

    The idea that Trump would ever see the inside of a jail cell still seems completely far-fetched. Hasen said the Secret Service would have to arrange for his protection in jail. The logistics of that are mind-boggling. Would agents be placed into cells on either side of him? Would they dress as inmates or guards?

    Top officials accused of wrongdoing have historically found a way out of jail. Former President Richard Nixon got a preemptive pardon from his successor, Gerald Ford. Nixon’s previous vice president, Spiro Agnew, resigned after he was caught up in a corruption scandal. Agnew made a plea deal and avoided jail time. Aaron Burr, also a former vice president, narrowly escaped a treason conviction. But then he left the country.

    That remains to be seen. Jonathan Wackrow, a former Secret Service agent and current global head of security for Teneo, said on CNN on Monday that agents are taking a back seat – to the New York Police Department and New York State court officers who are in charge of maintaining order and safety, and to the FBI, which looks for potential acts of violence by extremists.

The Secret Service, far from coordinating the event as it might normally, is “in a protective mode,” Wackrow said.

    “They are viewing this as really an administrative movement where they have to protect Donald Trump from point A to point B, let him do his business before the court, and leave. They are not playing that active role that we typically see them in.”

The New York Times published a report based on anonymous sources close to Trump on Tuesday that suggested he is, either out of bravado or genuine delight, relishing the idea of having to endure a “perp walk” in New York City. The “perp walk,” by the way, is the public march of a perpetrator into a police station for processing.

    “He has repeatedly tried to show that he is not experiencing shame or hiding in any way, and I think you’re going to see that,” the Times reporter and CNN political analyst Maggie Haberman said on the network on Tuesday night.

    “I do think there’s a part of him that does view this as a political asset,” said Marc Short, the former chief of staff to former Vice President Mike Pence, during an appearance on CNN on Wednesday. “Because he can use it to paint the other, more serious legal jeopardy he faces either in Georgia or the Department of Justice, as they’re politically motivated.”

    But Short argued voters will tire of the baggage Trump is carrying, particularly if he faces additional potential indictments in the federal and Georgia investigations.

  • DeSantis needles Trump as he breaks silence on hush money case | CNN Politics

CNN —

    Breaking his silence on Donald Trump’s legal troubles, Florida Gov. Ron DeSantis on Monday criticized the Manhattan district attorney who is pursuing charges against the former president and vowed his office would not be involved if the matter trickles into Trump’s adopted home state.

    But DeSantis, a rising rival for the 2024 Republican presidential nomination, stopped well short of offering support for the former president and instead seemed to poke fun at the situation Trump has found himself in as he attempts a political comeback and a third campaign for the White House. A grand jury is in the final stages of determining whether Trump should face charges over an alleged payment to adult film star Stormy Daniels related to a supposed affair.

    “I don’t know what goes into paying hush money to a porn star to secure silence over some type of alleged affair,” DeSantis said as laughter broke out at a news conference in Panama City, Florida. “I just, I can’t speak to that.”

    DeSantis added: “I’ve got real issues to deal with here in the state of Florida.”

The dismissive quips traveled quickly across the state to Mar-a-Lago, where Trump has decamped while he awaits word on the New York grand jury’s findings. His allies immediately started attacking DeSantis across social media, suggesting he would face a political price for failing to recognize Republicans are rallying around Trump amid his mounting legal threats.

    Trump responded in a statement posted to his social media site, Truth Social, leveling a series of personal attacks against DeSantis.

    “Ron DeSanctimonious will probably find out about FALSE ACCUSATIONS & FAKE STORIES sometime in the future, as he gets older, wiser, and better known, when he’s unfairly and illegally attacked by a woman, even classmates that are ‘underage’ (or possibly a man!). I’m sure he will want to fight these misfits just like I do!” Trump wrote.

    As part of the post Trump also shared a photo that suggested DeSantis had behaved inappropriately with teenage girls while teaching history in Georgia in his early 20s, an image the former president previously shared on social media to go after the Florida governor.

    The episode Monday was illustrative of the increasingly fraught rivalry between two of the GOP’s biggest stars as they battle for party supremacy — one made more awkward by their proximity inside the Sunshine State. Trump has suggested his arrest is forthcoming, and if he is in Florida at that moment, it could require a coordinated effort by police in DeSantis’ state.

    DeSantis said he is not aware of any arrangements with local law enforcement regarding Trump, and he said he had “no interest in getting involved in some type of manufactured circus.”

    The delayed remarks by DeSantis stand in stark contrast to the forceful defense he offered on Trump’s behalf last August when federal authorities seized documents from the former president’s Palm Beach estate. Just hours after the raid, DeSantis on Twitter called the FBI search at Mar-a-Lago “another escalation in the weaponization of federal agencies against the regime’s political opponents, while people like Hunter Biden get treated with kid gloves.”

But there was no such tweet this time from DeSantis, who had remained quiet for days amid reports that a New York grand jury was interviewing witnesses and has largely avoided discussing Trump at all amid escalating attacks from the former president and his allies. DeSantis instead last week held events focused on relief for Hurricane Ian victims and the pandemic. He posted a picture from the World Baseball Classic, standing next to the Miami Marlins mascot.

    Over the weekend, as other Republicans criticized Manhattan District Attorney Alvin Bragg, a Democrat, for pursuing charges in a case that dates back to the 2016 election, Trump allies engaged in a coordinated pressure campaign to get DeSantis to speak out in defense of the former president.

“Thank you, Vice President @Mike_Pence and @VivekGRamaswamy, for pointing out how Radical Left Democrats are trying to divide our Country in the name of Partisan Politics,” Trump campaign adviser Jason Miller wrote on Twitter. “Radio silence from Gov. @RonDeSantisFL and Amb. @NikkiHaley.”

    Trump’s son, Donald Trump Jr., wrote in a tweet on Sunday: “Pay attention to which Republicans spoke out against this corrupt BS immediately and who sat on their hands and waited to see which way the wind was blowing.”

MAGA Inc. sent several emails tracking which Republicans had commented on the potential criminal charges and hitting DeSantis for “remaining silent.” Trump allies acknowledged that this was a concerted effort to force DeSantis to weigh in on the matter, believing that he would have to offer support to Trump.

    When DeSantis finally weighed in Monday, it came during an unrelated press conference about central bank digital currencies, a recent area of concern among some conservatives but hardly the topic of the day, given the revelations about Trump’s legal case. He didn’t address Trump’s legal situation until asked by an individual from the Florida Standard, a conservative website friendly to DeSantis.

    DeSantis echoed other criticism of Bragg, accusing the Democrat of seeking charges against Trump for political reasons. He compared Bragg to the local state attorney in Tampa, Andrew Warren, who DeSantis controversially removed from office last year over his politics, and linked them both to George Soros, the Hungarian-born billionaire and progressive donor often at the center of conservative conspiracies.

    “If you have a prosecutor who is ignoring crimes happening every single day in his jurisdiction, and he chooses to go back many, many years ago to try to use something about porn star hush money payments, you know, that’s an example of pursuing a political agenda and weaponizing the office, and I think that that’s fundamentally wrong,” DeSantis said.

    But DeSantis also seemed to downplay Bragg’s pursuit of Trump as a lesser concern compared to issues related to crime in the city.

    “That’s bad, but the real victims are ordinary New Yorkers, ordinary Americans in all these different jurisdictions that they get victimized every day because of the reckless political agenda that the Soros DAs bring to their job,” he said. “They ignore crime and they empower criminals.”

    Haley weighed in later Monday, saying a prosecution of Trump would be “for political points.” The former South Carolina governor, who announced her White House campaign last month, told Fox News’ Bret Baier, “And I think what we know is that when you get into political prosecutions like this, it’s more about revenge than it is about justice.”

    “I think the country would be better off talking about things that the American public cares about than to sit there and have to deal with some revenge by some political people in New York,” added Haley, who served as ambassador to the United Nations under Trump.

    This story has been updated with additional information.

  • R. Kelly gets 20-year sentence for federal child pornography and enticement charges

(Video: Sentencing hearing today for R. Kelly on Chicago conviction, 01:44)

    A federal judge on Thursday handed singer R. Kelly a 20-year prison sentence for his convictions of child pornography and the enticement of minors for sex but said he will serve nearly all of the sentence simultaneously with a 30-year sentence imposed last year on racketeering charges.

    U.S. District Judge Harry Leinenweber also ordered that Kelly serve one year in prison following his New York sentence.

    The central question going into the sentencing in Kelly’s hometown of Chicago was whether Leinenweber would order that the 56-year-old serve the sentence simultaneously with or only after he completes the New York term for 2021 racketeering and sex trafficking convictions. The latter would have been tantamount to a life sentence.

    Prosecutors had acknowledged that a lengthy term served only after the New York sentence could have erased any chance of Kelly ever getting out of prison alive. It’s what they asked for, arguing his crimes against children and lack of remorse justified it.

With Thursday’s sentence, though, Kelly will serve no more than 31 years. That means the Grammy Award winner will be eligible for release at around age 80, giving him some hope of one day leaving prison alive.

    Leinenweber said at the outset of the hearing that he did not accept the government’s contention that Kelly used fear to woo underage girls for sex.

“The (government’s) whole theory of grooming was sort of the opposite of fear of bodily harm,” the judge told the court. “It was the fear of lost love, lost affections (from Kelly). … It just doesn’t seem to me that it rises to the fear of bodily harm.”

    A calm Kelly spoke briefly at the start of the hearing, when the judge asked him if he had reviewed key presentencing documents for any inaccuracies.

    “Your honor, I have gone over it with my attorney,” Kelly said. “I’m just relying on my attorney for that.”

    Jurors in Chicago convicted Kelly last year on six of 13 counts: three counts of producing child porn and three of enticement of minors for sex.

    Kelly rose from poverty in Chicago to become one of the world’s biggest R&B stars. Known for his smash hit “I Believe I Can Fly” and for sex-infused songs such as “Bump n’ Grind,” he sold millions of albums even after allegations about his abuse of girls began circulating publicly in the 1990s.

    In presentencing filings, prosecutors described Kelly as “a serial sexual predator” who used his fame and wealth to reel in, sexually abuse and then discard star-struck fans.

In filings, Kelly’s lawyer, Jennifer Bonjean, said he has suffered enough, including financially. She said his worth once approached $1 billion, but that he “is now destitute.”

    In court Thursday, Bonjean said Kelly will be lucky to survive his 30-year New York sentence alone and argued that Kelly’s silence should not be viewed as a lack of remorse.

    She said that while she advised Kelly not to speak because he continues to appeal his convictions and could face other legal action, “He would like to, he would like to very much.”

  • Nonconsensual deepfake porn puts AI in spotlight | CNN Business

New York CNN —

    In its annual “worldwide threat assessment,” top US intelligence officials have warned in recent years of the threat posed by so-called deepfakes – convincing fake videos made using artificial intelligence.

    “Adversaries and strategic competitors,” they warned in 2019, might use this technology “to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.”

The scenarios are not difficult to imagine: a faked video showing a politician in a compromising position; faked audio of a world leader discussing sensitive information.

The threat doesn’t seem too distant. The recent viral success of ChatGPT, an AI chatbot that can answer questions and write prose, is a reminder of how powerful this kind of technology can be.

But despite the warnings, we haven’t seen many notable instances – at least that we know of – in which deepfakes have been successfully deployed in geopolitics.

    But there is one group the technology has been weaponized against consistently and for several years: women.

Deepfakes have been used to put women’s faces, without their consent, into often aggressive pornographic videos. It’s a depraved AI spin on the humiliating practice of revenge porn, with deepfake videos appearing so real it can be hard for female victims to deny it’s really them.

    The long-simmering issue exploded into public view last week when it emerged Atrioc, a high-profile male video game streamer on the hugely popular platform Twitch, had accessed deepfake videos of some of his female Twitch streaming colleagues. He later apologized.

    Amid the fallout, the Twitch streamer “Sweet Anita” realized deepfake depictions of her in pornographic videos exist online.

“It’s very, very surreal to watch yourself do something you’ve never done,” she told CNN.

    “It’s kind of like if you watched anything shocking happening to yourself. Like, if you watched a video of yourself being murdered, or a video of yourself jumping off a cliff,” she said.

    But the deeply disturbing use of the technology in this way is not novel.

    Indeed, the very term “deepfake” is derived from the username of an anonymous Reddit contributor who began posting manipulated videos of female celebrities in pornographic scenes in 2017.

    “From the very beginning, the person who created deepfakes was using it to make pornography of women without their consent,” Samantha Cole, a reporter with Vice’s Motherboard, who has been tracking deepfakes since their inception, told CNN.

The online gaming community is a notoriously difficult place for women – the 2014 “Gamergate” harassment campaign being the most prominent example.

But concerns over the use of nonconsensual pornographic images aren’t exclusive to this community, and the problem threatens to become more commonplace as artificial intelligence technology develops at breakneck speed and the ease of creating deepfake videos continues to improve.

    “I am baffled by how awful people are to each other on the Internet in a way that I don’t think they would be face to face,” Hany Farid, a professor at the University of California, Berkeley, and digital forensics expert, told CNN.

    “I think we have to start sort of trying to understand, why is it that this technology, this medium, allows and brings out seemingly the worst in human nature? And if we’re going to have these technologies ingrained in our lives the way they seem to be, I think we’re going to have to start to think about how we can be better human beings with these types of devices,” he said.

    It’s part of a much larger systemic problem.

“It’s all rape culture,” Cole said. “I don’t know what the actual solution is other than getting to that fundamental problem of disrespect and non-consent and being okay with violating women’s consent.”

    There have been efforts from lawmakers to crack down on the creation of nonconsensual imagery, whether it is AI-generated or not. In California, laws have been brought in to try to counter the potential for deepfakes to be used in an election campaign and in nonconsensual pornography.

    But there’s skepticism. “We haven’t even solved the problems of the technology sector from 10, 20 years ago,” Farid said, pointing out that the development of artificial intelligence “is moving much, much faster than the original technology revolution.”

    “Move fast and break things,” was Facebook founder Mark Zuckerberg’s motto back in the company’s early days. As the power, and indeed the danger, of his platform came into focus he later changed the motto to, “Move fast with stable infrastructure.”

    Whether it was willful negligence or ignorance, Silicon Valley was not prepared for the onslaught of hate and disinformation that has festered on its platforms. The same tools it had built to bring people together have also been weaponized to divide.

    And while there has been a good deal of discussion about “ethical AI,” as Google and Microsoft look set for an AI arms race, there’s concern things could be moving too rapidly.

“The people who are developing these technologies – the academics, the people in the research labs at Google and Facebook – you have to start asking yourself, ‘why are you developing this technology?’” Farid suggested.

    “If the harms outweigh the benefits, should you carpet bomb the Internet with your technology and put it out there and then sit back and say, ‘well, let’s see what happens next?’”

  • Twitch Streamer Pokimane Wants Tougher Laws On Revenge Porn

Pokimane talking with her hands. (Screenshot: Pokimane / Kotaku)

    One of the biggest female streamers on Twitch wants to take a harder stance on revenge porn—nude photos that are posted online without their owners’ consent. Imane “Pokimane” Anys said in a recent Twitch stream that it should be “illegal” to possess nudes without their owners’ consent, and that she wanted to work towards “facilitating legislation” against it.

    “There are some companies that I’m going to message…not companies. Organizations that are involved in certain causes. I’m going to be like…Listen: If you ever need someone to…” Pokimane made talking hand gestures on stream. “I’m your girl. Because I think if you wanna pass a bill, you usually go in front of a group of politicians and you explain your cause…I’ll do it.” Kotaku reached out to Pokimane to ask which organizations she planned to work with, but did not receive a response by the time of publication.

    Pokimane was initially vague about what she was taking a stance against, but she eventually clarified that she was talking about revenge porn. “I think it should be illegal to even have your phone, your PC, on your anything…having photos that someone doesn’t consent to you having.”

There are several reasons why she is taking this stance now. Pokimane talked about viewers who would message her to say that former partners had leaked their nudes. She felt that those individuals were rarely punished for “ruining” girls’ lives. “So many things online go without repercussions and they really shouldn’t,” she said.

    The U.S. currently has laws against revenge porn in nearly every state. But as Hasan Piker pointed out in a recent stream about Pokimane’s comments, enforcement against revenge porn is complicated and murky. Cops are hardly the most empathetic or competent investigators of gendered violence. Besides that, surveilling every electronic device for revenge porn would be a massive privacy violation. “The only way you can tackle revenge porn is at the point of distribution,” he said.

    Pokimane seemed optimistic about preventing revenge porn by stigmatizing it. “If [an ex] shares [nudes] with someone, that person should be so scared of having that photo because the person whose photo they have—didn’t consent to giving it to them.”

    Sisi Jiang

  • Twitter searches for China protests bombarded by spam and porn, raising alarms among researchers | CNN Business

Washington CNN Business —

    Twitter searches for the widespread Covid-19-related protests in China are returning a flood of spam, pornography and gibberish that some disinformation researchers say at first glance appear to be a deliberate attempt by the Chinese government or its allies to drown out images of the demonstrations.

Beginning late last week and into Monday, searches in Chinese for major protest hotspots, including Beijing, Shanghai, Nanjing and Guangzhou, produced a nonstop stream of solicitations, images of scantily clad women in suggestive poses and seemingly random word and sentence fragments. Many of the tweets reviewed by CNN on Monday came from accounts that had been created months ago, follow virtually no other accounts and have no followers of their own.

The spike in suspected inauthentic behavior followed a deadly fire in China’s Xinjiang region, where at least 10 people were killed amid Covid-19 lockdown restrictions that reportedly hindered first responders from reaching the blaze. The fire, and long-simmering frustration over the country’s zero-Covid policies, helped spur the rare protests in China.

    “It is happening not just around Xinjiang but around any sensitive Chinese issue at the moment,” said Charlie Smith, the pseudonymous co-founder of GreatFire.org, a digital activism group based in China. “Search any city that has seen a rise in Covid cases, or had on-the-street protests on the weekend, and you will see the same thing.”

    The apparent suppression campaign by suspected bot accounts represents one of the first major disinformation tests for Twitter since the platform was purchased by Elon Musk. The billionaire has personally vowed to wage war against bots and spammers but has also cut more than half of Twitter’s staff, raising concerns about the company’s ability to combat bad actors in the United States and abroad.

    US lawmakers have expressed alarm about Twitter’s alleged vulnerability to foreign exploitation. Moreover, Musk’s ties to China through one of his other companies, electric-vehicle maker Tesla, have raised doubts about his willingness to stand up to the Chinese government.

Twitter, which has cut much of its public relations team, didn’t immediately respond to a request for comment.

    GreatFire.org, which helps Chinese citizens get around the country’s internet censorship, noted a torrent of “dating” spam tweets appearing on Friday tagged with “Urumqi,” the capital of Xinjiang. The flood of spam tweets is still ongoing, Smith told CNN on Monday.

    Pornography and sex-related sites were among the first to be censored by China when it began its internet crackdown years ago, Smith added, making it less likely that the spam tweets advertising sex services are the work of random, private individuals.

    Twitter is officially blocked in China, but estimates of the number of Twitter users in China have ranged between 3 million and 10 million.

    On Sunday evening, Alex Stamos, director of the Stanford Internet Observatory and a disinformation researcher, elevated an independent researcher’s findings that Stamos said “points to this being an intentional attack to throw up informational chaff and reduce external visibility into protests in China.”

    The other researcher’s self-described “quick and dirty analysis” of the location-focused searches suggested a “significant uptick” in recent tweets containing ads for escorts, pornography and gambling.

    Stamos, who previously worked as the chief security officer at Facebook, later tweeted that the apparent disinformation campaign has convinced him to seriously consider leaving Twitter. “We are rapidly approaching the point where any political discussion will be dominated by organized influence teams and more lighthearted topics by spam,” he said.

    Musk has pushed back on suggestions that his ownership of Tesla, which is heavily invested in China, may give the Chinese government “leverage” over Twitter. In June, prior to completing his purchase of the social media company, Musk told Bloomberg News that “as far as I’m aware,” China does not attempt to interfere with the free speech of the US press.

But for years, social media companies including Twitter have highlighted numerous examples of foreign influence operations on social media. The recent layoffs and resignations at Twitter — which have directly affected the teams responding to Chinese influence campaigns, a former employee told The Washington Post — have further reduced the company’s ability to meet those challenges.

    It’s also unclear to what extent China may have visibility into Twitter’s service and internal systems. Earlier this year, Twitter’s former head of security told the US government in a whistleblower disclosure that the company is extraordinarily vulnerable to foreign exploitation. The whistleblower’s testimony claimed the FBI had warned the company this year that at least one agent working for the Chinese government was on Twitter’s payroll.

    The claim has alarmed US policymakers. Last week, Sen. Chuck Grassley, the top Republican on the Senate Judiciary Committee, wrote to Musk asking him to review Twitter’s security for insider threats and to brief congressional staff on the matter.

  • How the filmmakers behind ‘Till’ depicted Black trauma without showing violence | CNN



(CNN) — 

    Chinonye Chukwu didn’t want to make a movie about Black trauma.

    The director of the newly released film “Till,” which centers on Mamie Till-Mobley as she fights for justice after the killing of her son, said she wasn’t interested in depicting the moment that Emmett Till was brutally beaten to death in 1955 Mississippi.

    “The story is about Mamie and her journey, and so it wasn’t narratively necessary to show the physical violence inflicted upon Emmett,” Chukwu told CNN. “As a Black person, I didn’t want to see it. I didn’t want to recreate it.”

    In bringing the story of Till-Mobley to the big screen, Chukwu was intentional about what she chose to show and what she chose to omit. The film doesn’t dramatize the vicious and violent manner in which Emmett was killed, but it does depict his horrifically mangled body – an image that Till-Mobley famously shared with the world and that catalyzed the civil rights movement.

    Still, “Till” couldn’t avoid getting swept up into a debate about “Black trauma porn.” Soon after the release of the trailer, some corners of Black Twitter questioned why a movie about Emmett Till was even needed, swiftly characterizing it as the latest Hollywood project to capitalize on Black pain and tragedy. More than a few declared that they wouldn’t be watching.

    The filmmakers behind “Till” argue that this classification ignores the care and context that they’ve brought to this story. And they’re urging audiences not to look away.

    “Black trauma porn” – much like “disaster porn” or “poverty porn” – generally refers to graphic depictions of violence against Black people that are intended to elicit strong emotional responses. The implication is that these images can be needlessly traumatizing to Black viewers for whom violence is an inescapable fact of life.

    Increasingly, the term has been applied not just to videos of police shootings repeatedly shared online, but also to films and TV series. Amazon’s horror anthology series “Them” and the thriller film “Antebellum” are among recent projects criticized for depicting gratuitous violence against Black characters to make a point about the evils of racism. But the “Black trauma porn” label has also been leveled more broadly at historical dramas about slavery or Jim Crow, such as Barry Jenkins’ miniseries “The Underground Railroad” and now, “Till.”

    Given that wide umbrella, some experts feel that the term “Black trauma porn” is overused and dismissive, leaving little room for discussion about how creatives might explore traumatic events and experiences on screen thoughtfully.

    It’s not hard to understand where the impulse to use that label is coming from, said Kalima Young, an assistant professor at Towson University whose work focuses on representations of race and gender-based trauma in media. Black people are exhausted from constantly being subjected to real-life images of Black pain and death, and seeing that replicated on screen as entertainment can feel exploitative. Still, she said it’s important to separate viral videos from creative works.

    “When we use the term ‘trauma porn,’ we conflate the two, and we collapse what’s happening,” Young said. “It takes some of the nuance out of the conversation.”

    Janell Hobson, a professor of women’s, gender and sexuality studies at the University at Albany, understands why some Black viewers might not have the appetite for “Till.” The two White men accused of Emmett Till’s murder were ultimately acquitted, despite later admitting to the killing, while earlier this year a grand jury declined to indict the White woman who accused him of making advances toward her. Viewers know that there was no justice, and that’s painful.

    But though Hobson hasn’t yet seen “Till,” she feels it’s a mistake to call it “Black trauma porn.”

    “There’s a difference between criticizing a film that is designed to exploit and to create titillation around images of Black trauma and Black pain versus a drama that is designed to raise awareness around a very troubling part of our history,” she said. “There’s a difference between telling a story of Black trauma and telling a story that is ‘Black trauma porn.’”

    What, then, is the line between a story of Black trauma and “Black trauma porn?”

    For Young, the distinguishing factor is context. Creators have a responsibility to justify why a particular Black character is being subjected to violence or why that violence is being depicted a certain way, she said – a balance that can be tricky to achieve in genres such as horror, in which violence has long been key. Failing to provide a clear and compelling case for those choices can contribute to a feeling that Young refers to as “empty empathy.”

    “Empty empathy,” according to Young, is when viewers are invited to empathize with characters who are experiencing trauma without being provided the space or context to process those visceral feelings. In other words, it’s when trauma is presented as mere spectacle.

    To avoid falling into that trap, filmmakers and TV producers have to think creatively about how they tell stories of trauma, Hobson said. That might involve subverting audience expectations as Jordan Peele’s “Get Out” does when a police cruiser pulls up at the end, or telling a familiar story from a different perspective, as “Till” does by highlighting the journey of Mamie Till-Mobley. Strong character development, as well as interspersing moments of humor or rest, can also help soften the blow, Young added.

Despite its heavy subject matter, the team behind “Till” says they’ve worked hard to tell the story of Till-Mobley sensitively. In interviews leading up to its release, Chukwu has emphasized repeatedly that the film contains no physical violence against Black people. It also grounds Till-Mobley’s story in joy and dignity – the opening scene depicts Till-Mobley driving around Chicago with a carefree Emmett singing along to the radio. The ending also closes on a lighter moment between mother and son.

    But trauma, too, is integral here, and in giving this story the big screen treatment, the filmmakers are honoring the memory of the real-life Till-Mobley.

    Keith Beauchamp, a producer and co-writer of “Till” who was a mentee of Till-Mobley, has a deep connection to this history. He worked closely with Till-Mobley on a documentary about the case. “The Untold Story of Emmett Louis Till,” released in 2005, led to the federal government reopening an investigation into the crime. Recently, he helped unearth an unserved arrest warrant from 1955 for the woman whose accusations led to Emmett’s murder.

    Beauchamp said “Till” has been 29 years in the making for him personally, and that Till-Mobley herself wanted this story to be told through film. He sees “Till” as a continuation of her fight for justice – not just for Emmett, but for all those who came after him.

    “We’re not in the business of re-traumatizing America,” he said. “But this is the story of Emmett Louis Till, and it was that photograph that inspired generations of people and continues to inspire generations of people today.”

    When complaints of “trauma porn” are leveled, critics often ask who a particular work is for. Put bluntly, is that depiction of Black trauma intended to appeal to the sympathies of White people?

    Young considers that implication a knee-jerk reaction. While skeptics of “Till” might feel that they are plenty familiar with the history of Emmett Till, there are layers to that story that have not been fully unpacked.

    “Did they truly understand the context of why the situation occurred?” Young asked. “Have we had enough time to sit in the conversation of why Mamie Till would make that decision to have an open casket?”

    Whether someone considers a story about Black trauma too much to endure or whether they consider it imperative to witness is inherently subjective. It’s notable that many of the recent projects deemed to be “Black trauma porn” have been the work of Black creatives – an obvious reminder that Black people are not a monolith.

    Hobson also points out that Black creatives have only recently been given the platform to tell their own stories. Viewers, of course, can opt not to watch, but Black creators should be allowed the space to air their wounds, however imperfect their attempts.

    At a time when Republican state legislatures are trying to restrict discussions of race and history in schools, Young said it’s crucial that stories such as “Till” not be dismissed.

    “In a country right now that is trying so desperately to tamp down on the ghosts that are living under the soil of this country, it’s important that we keep on doing this digging – that we keep on doing the sowing, that we keep on allowing a myriad of voices to tell Black experiences of racial terror and history,” she added.

    Beauchamp, for his part, hopes viewers will give “Till” a chance. Till-Mobley was “the mother of the civil rights movement” – an unsung hero who never got her due. In revisiting her story now, he hopes to resurrect her spirit.

    “I just want to awaken the sleeping giant of revolutionary change once again that is desperately needed in this country right now.”

  • Judge denies Trump bid to move hush money case to federal court | CNN Politics



(CNN) — 

    A federal judge on Wednesday denied Donald Trump’s effort to move the New York indictment charging him with falsifying business records into federal court, finding that Trump failed to show that any of the allegedly illegal conduct related to his role as president.

    Judge Alvin Hellerstein previewed at a court hearing several weeks ago that he would not accept the case and would return it to state court.

    Trump, who has pleaded not guilty to 34 counts of falsifying business records in connection to hush money payments made to adult film actress Stormy Daniels, is set to go to trial in Manhattan for this case in March 2024.

    The judge stated in his ruling that the payments to Daniels, an adult film actress and director, were not related to presidential duties.

“The evidence overwhelmingly suggests that the matter was a purely personal item of the President – a cover-up of an embarrassing event. Hush money paid to an adult film star is not related to a President’s official acts,” the judge wrote. “Whatever the standard, and whether it is high or low, Trump fails to satisfy it.”

    The judge also rejected Trump’s argument that he should have immunity given his position as president at the time he signed reimbursement checks to Michael Cohen, his then-personal attorney who facilitated the hush money payment to Daniels, whose real name is Stephanie Clifford.

    “Reimbursing Cohen for advancing hush money to Stephanie Clifford cannot be considered the performance of a constitutional duty. Falsifying business records to hide such reimbursement, and to transform the reimbursement into a business expense for Trump and income to Cohen, likewise does not relate to a presidential duty. Trump is not immune from the People’s prosecution in New York Supreme Court,” the judge found.

    A spokesperson for Manhattan District Attorney Alvin Bragg told CNN that the district attorney’s office is “very pleased with the federal court’s decision and look forward to proceeding in New York State Supreme Court.”

    A Trump campaign spokesman, meanwhile, said Wednesday that “this case belongs in a federal court and we will continue to pursue all legal avenues to move it there.”

In another blow to Trump, the judge said that federal election law, the Federal Election Campaign Act, doesn’t preempt the state charges of falsifying a business record with the intent to commit or conceal another crime. Trump has signaled he will make the argument that the federal statute should preempt the state claim before the judge presiding over the case in state court.

    “FECA does not preempt the application of a general state law to conduct related to a federal election except if the law, or its application, constitutes a specific regulation of conduct covered by FECA,” the judge wrote.

    “The only elements are the falsification of business records, an intent to defraud, and an intent to commit or conceal another crime,” the judge said, adding, “Trump can be convicted of a felony even if he did not commit any crime beyond the falsification, so long as he intended to do so or to conceal such a crime.”

    The judge also rejected Trump’s claim that the case should be moved to federal court because of hostility at the state level.

    “There is no reason to believe that the New York judicial system would not be fair and give Trump equal justice under the law,” the judge wrote.

    This story has been updated with additional details.
