ReportWire

Tag: netchoice

  • Your words: ‘It’s upsetting that Donald Trump and JD Vance have such anger and disdain’; ‘We must savor our moments to praise the Supreme Court’

    Former President Donald Trump and running mate JD Vance at the Republican National Convention in Milwaukee, July 15, 2024 (photo: Shutterstock)

    ¶ JD Vance’s anger toward educators is bad for families

    J.D. Vance’s angry, hurtful comments that teachers who don’t have children “disorient and really disturb” him are consistent with his long history of attacking families and public schools.

    Teachers, teacher aides, paraprofessionals, school nurses, bus drivers, cafeteria workers, janitors, librarians, social workers, counselors and dozens of other essential school employees work every day to help kids learn and grow. So often they pay for supplies for children out of their own pockets to make sure they have what they need.

    Vance doesn’t seem to understand that being an effective educator and not having biological kids are completely unrelated. I support our educators because they support our kids, and it’s upsetting that Donald Trump and J.D. Vance have such anger and disdain for people who do not share their views.

    Enough is enough. We have more in common than what divides us, and that’s why I am excited for the hopeful vision Kamala Harris and Tim Walz have for our families and the future of our great nation.

    — Reginald Vinson, Altamonte Springs

    ¶ The Supreme Court and Florida’s social media laws: An excellent decision

    Do we consider Facebook, X, Instagram and TikTok newspapers? Editorial media? Free-speech platforms? Public spaces?

    Florida Attorney General Ashley Moody and opposing counsel for NetChoice LLC argued this question before the Supreme Court. Moody defended Florida laws meant to protect conservative speech online and, when challenged in court, failed to win over the justices in heated litigation. Those laws would prohibit social media companies from removing certain conservative posts and, under Florida statutes, reduce “censorship.”

    To understand this case we must understand how social media companies filter their content: “Third parties,” or anyone who uses social media apps, make content in the form of online posts. These posts are sometimes restricted if the company agrees it’s harmful to other users or unfit for the “content stream.” As summarized by Oyez.org, “[social media companies] curate and edit the content that users see, which involves removing posts that violate community standards and prioritizing posts based on various factors.”

    NetChoice summarized its argument simply: “Content moderation should be understood as an expressive editorial activity afforded stringent First Amendment protection.” In removing “overly conservative” posts, social media companies are either censoring free speech, according to Moody, or expressing free speech, according to NetChoice, and it all depends on what they are. If they are private companies, which is what the court decided, they are entitled to the same First Amendment rights as you and me.

    Lawrence Lessig of Harvard, Tim Wu of Columbia, and Zephyr Teachout of Fordham don’t think so. They argued “[Facebook, Twitter, Instagram, and TikTok] are not space-limited publications dependent on editorial discretion in choosing what topics or issues to highlight. Rather, they are platforms for widespread public expression and discourse. They are their own beast, but they are far closer to a public shopping center or a railroad.”

    This comes as a shock: These professors are liberal, so naturally they would oppose laws protecting conservatism. But they see social media companies as more than private: They see them as vehicles in need of government intervention to protect the common good. This is how this case crosses ideological lines.

    For three reasons, I affirm the Supreme Court’s unanimous decision to prohibit government intervention in private companies. 

    First, the role of the Supreme Court is to interpret the Constitution, and this case turned on First Amendment protections. The court ruled that government intervention in a private company, no matter how big, is unwarranted. Florida, in attempting to protect the rights of conservatives, sacrificed the rights of others: the rights of private companies, which should be able to moderate their platforms without government intervention. I completely and wholeheartedly agree with this stance; preserving the distinction between private and governmental spheres is necessary for a functioning democracy.

    Second, without considering precedent, let us dive into politics. An inherent contradiction arises, almost a legal cognitive dissonance, on the conservative side in this case. They want to protect the First Amendment by eroding it. They want to limit government intervention, as traditional conservatives do, by increasing it. Proponents of private entities want to erode the notion of what it means to be private, going against the foundation of their ideology.

    Finally, we ask the question: How big is too big a company? At what point does a social media company have too much influence? At what point does a social media company deserve First Amendment restriction? How is it fair that some private companies, started by private citizens, have to worry about First Amendment infringement? 

    These questions can’t be answered. They simply establish the idea that there could never be a point at which the courts could restrict a private entity. An argument could always be made that a corporation is overly biased in its content moderation. Similarly, an argument could always be made that the influence of company “X” is not enough to justify government restrictions. In this way, no numerical threshold, whether a certain level of earnings or an amount of content deemed “biased,” could ever be used to place government restrictions on the private sector. Likewise, what we deem “conservative,” and thus in need of government protection from “biased” removal, is arbitrary.

    The Supreme Court issued the right decision, and the lower courts will follow its lead. We must savor the moments to praise the court, and this is one of them.

    Julius Olavarria, Orlando


    Orlando Weekly readers

  • California is racing to combat deepfakes ahead of the election

    Days after Vice President Kamala Harris launched her presidential bid, a video — created with the help of artificial intelligence — went viral.

    “I … am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” a voice that sounded like Harris’ said in the fake audio track used to alter one of her campaign ads. “I was selected because I am the ultimate diversity hire.”

    Billionaire Elon Musk, who has endorsed Harris’ Republican opponent, former President Trump, shared the video on X, then clarified two days later that it was actually meant as a parody. His initial tweet had 136 million views. The follow-up calling the video a parody garnered 26 million views.

    To Democrats, including California Gov. Gavin Newsom, the incident was no laughing matter, fueling calls for more regulation to combat AI-generated videos with political messages and a fresh debate over the appropriate role for government in trying to contain emerging technology.

    On Friday, California lawmakers gave final approval to a bill that would prohibit the distribution of deceptive campaign ads or “election communication” within 120 days of an election. Assembly Bill 2839 targets manipulated content that would harm a candidate’s reputation or electoral prospects along with confidence in an election’s outcome. It’s meant to address videos like the one Musk shared of Harris, though it includes an exception for parody and satire.

    “We’re looking at California entering its first-ever election during which disinformation that’s powered by generative AI is going to pollute our information ecosystems like never before and millions of voters are not going to know what images, audio or video they can trust,” said Assemblymember Gail Pellerin (D-Santa Cruz). “So we have to do something.”

    Newsom has signaled he will sign the bill, which would take effect immediately, in time for the November election.

    The legislation updates a California law that bars people from distributing deceptive audio or visual media that intends to harm a candidate’s reputation or deceive a voter within 60 days of an election. State lawmakers say the law needs to be strengthened during an election cycle in which people are already flooding social media with digitally altered videos and photos known as deepfakes.

    The use of deepfakes to spread misinformation has concerned lawmakers and regulators during previous election cycles. These fears increased after the release of new AI-powered tools, such as chatbots that can rapidly generate images and videos. From fake robocalls to bogus celebrity endorsements of candidates, AI-generated content is testing tech platforms and lawmakers.

    Under AB 2839, a candidate, election committee or elections official could seek a court order to get deepfakes pulled down. They could also sue the person who distributed or republished the deceptive material for damages.

    The legislation also applies to deceptive media posted 60 days after the election, including content that falsely portrays a voting machine, ballot, voting site or other election-related property in a way that is likely to undermine the confidence in the outcome of elections.

    It doesn’t apply to satire or parody that’s labeled as such, or to broadcast stations if they inform viewers that what is depicted doesn’t accurately represent a speech or event.

    Tech industry groups oppose AB 2839, along with other bills that target online platforms for not properly moderating deceptive election content or labeling AI-generated content.

    “It will result in the chilling and blocking of constitutionally protected free speech,” said Carl Szabo, vice president and general counsel for NetChoice. The group’s members include Google, X and Snap as well as Facebook’s parent company, Meta, and other tech giants.

    Online platforms have their own rules about manipulated media and political ads, but their policies can differ.

    Unlike Meta and X, TikTok doesn’t allow political ads and says it may remove even labeled AI-generated content if it depicts a public figure such as a celebrity “when used for political or commercial endorsements.” Truth Social, a platform created by Trump, doesn’t address manipulated media in its rules about what’s not allowed on its platform.

    Federal and state regulators are already cracking down on AI-generated content.

    The Federal Communications Commission in May proposed a $6-million fine against Steve Kramer, a Democratic political consultant behind a robocall that used AI to impersonate President Biden’s voice. The fake call discouraged participation in New Hampshire’s Democratic presidential primary in January. Kramer, who told NBC News he planned the call to bring attention to the dangers of AI in politics, also faces criminal charges of felony voter suppression and misdemeanor impersonation of a candidate.

    Szabo said current laws are enough to address concerns about election deepfakes. NetChoice has sued various states to stop some laws aimed at protecting children on social media, alleging they violate free speech protections under the 1st Amendment.

    “Just creating a new law doesn’t do anything to stop the bad behavior, you actually need to enforce laws,” Szabo said.

    More than two dozen states, including Washington, Arizona and Oregon, have enacted, passed or are working on legislation to regulate deepfakes, according to the consumer advocacy nonprofit Public Citizen.

    In 2019, California instituted a law aimed at combating manipulated media after a video that made it appear as if House Speaker Nancy Pelosi was drunk went viral on social media. Enforcing that law has been a challenge.

    “We did have to water it down,” said Assemblymember Marc Berman (D-Menlo Park), who authored the bill. “It attracted a lot of attention to the potential risks of this technology, but I was worried that it really, at the end of the day, didn’t do a lot.”

    Rather than take legal action, said Danielle Citron, a professor at the University of Virginia School of Law, political candidates might choose to debunk a deepfake or even ignore it to limit its spread. By the time they could go through the court system, the content might already have gone viral.

    “These laws are important because of the message they send. They teach us something,” she said, adding that they inform people who share deepfakes that there are costs.

    This year, lawmakers worked with the California Initiative for Technology and Democracy, a project of the nonprofit California Common Cause, on several bills to address political deepfakes.

    Some target online platforms that have been shielded under federal law from being held liable for content posted by users.

    Berman introduced a bill that requires an online platform with at least 1 million California users to remove or label certain deceptive election-related content within 120 days of an election. The platforms would have to take action no later than 72 hours after a user reports the post. Under AB 2655, which passed the Legislature Wednesday, the platforms would also need procedures for identifying, removing and labeling fake content. It also doesn’t apply to parody or satire or news outlets that meet certain requirements.

    Another bill, co-authored by Assemblymember Buffy Wicks (D-Oakland), requires online platforms to label AI-generated content. While NetChoice and TechNet, another industry group, oppose the bill, ChatGPT maker OpenAI is supporting AB 3211, Reuters reported.

    The two bills, though, wouldn’t take effect until after the election, underscoring the challenges with passing new laws as technology advances rapidly.

    “Part of my hope with introducing the bill is the attention that it creates, and hopefully the pressure that it puts on the social media platforms to behave right now,” Berman said.

    Queenie Wong
