ReportWire

Tag: Privacy

  • In global rush to regulate AI, Europe set to be trailblazer

    LONDON — The breathtaking development of artificial intelligence has dazzled users by composing music, creating images and writing essays, while also raising fears about its implications. Even European Union officials working on groundbreaking rules to govern the emerging technology were caught off guard by AI’s rapid rise.

    The 27-nation bloc proposed the Western world’s first AI rules two years ago, focusing on reining in risky but narrowly focused applications. General purpose AI systems like chatbots were barely mentioned. Lawmakers working on the AI Act considered whether to include them but weren’t sure how, or even if it was necessary.

    “Then ChatGPT kind of boom, exploded,” said Dragos Tudorache, a Romanian member of the European Parliament co-leading the measure. “If there was still some that doubted as to whether we need something at all, I think the doubt was quickly vanished.”

    The release of ChatGPT last year captured the world’s attention because of its ability to generate human-like responses based on what it has learned from scanning vast amounts of online materials. With concerns emerging, European lawmakers moved swiftly in recent weeks to add language on general AI systems as they put the finishing touches on the legislation.

    The EU’s AI Act could become the de facto global standard for artificial intelligence, with companies and organizations potentially deciding that the sheer size of the bloc’s single market would make it easier to comply than develop different products for different regions.

    “Europe is the first regional bloc to significantly attempt to regulate AI, which is a huge challenge considering the wide range of systems that the broad term ‘AI’ can cover,” said Sarah Chander, senior policy adviser at digital rights group EDRi.

    Authorities worldwide are scrambling to figure out how to control the rapidly evolving technology to ensure that it improves people’s lives without threatening their rights or safety. Regulators are concerned about new ethical and societal risks posed by ChatGPT and other general purpose AI systems, which could transform daily life, from jobs and education to copyright and privacy.

    The White House recently brought in the heads of tech companies working on AI including Microsoft, Google and ChatGPT creator OpenAI to discuss the risks, while the Federal Trade Commission has warned that it wouldn’t hesitate to crack down.

    China has issued draft regulations mandating security assessments for any products using generative AI systems like ChatGPT. Britain’s competition watchdog has opened a review of the AI market, while Italy briefly banned ChatGPT over a privacy breach.

    The EU’s sweeping regulations — covering any provider of AI services or products — are expected to be approved by a European Parliament committee Thursday, then head into negotiations between the 27 member countries, Parliament and the EU’s executive Commission.

    European rules influencing the rest of the world — the so-called Brussels effect — previously played out after the EU tightened data privacy and mandated common phone-charging cables, though such efforts have been criticized for stifling innovation.

    Attitudes could be different this time. Tech leaders including Elon Musk and Apple co-founder Steve Wozniak have called for a six-month pause to consider the risks.

    Geoffrey Hinton, a computer scientist known as the “Godfather of AI,” and fellow AI pioneer Yoshua Bengio voiced their concerns last week about unchecked AI development.

    Tudorache said such warnings show the EU’s move to start drawing up AI rules in 2021 was “the right call.”

    Google, which responded to ChatGPT with its own Bard chatbot and is rolling out AI tools, declined to comment. The company has told the EU that “AI is too important not to regulate.”

    Microsoft, a backer of OpenAI, did not respond to a request for comment. It has welcomed the EU effort as an important step “toward making trustworthy AI the norm in Europe and around the world.”

    Mira Murati, chief technology officer at OpenAI, said in an interview last month that she believed governments should be involved in regulating AI technology.

    But asked if some of OpenAI’s tools should be classified as posing a higher risk, in the context of proposed European rules, she said it’s “very nuanced.”

    “It kind of depends where you apply the technology,” she said, citing as an example a “very high-risk medical use case or legal use case” versus an accounting or advertising application.

    OpenAI CEO Sam Altman plans stops in Brussels and other European cities this month in a world tour to talk about the technology with users and developers.

    Recently added provisions to the EU’s AI Act would require “foundation” AI models to disclose copyrighted material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.

    Foundation models, also known as large language models, are a subcategory of general purpose AI that includes systems like ChatGPT. Their algorithms are trained on vast pools of online information, like blog posts, digital books, scientific articles and pop songs.

    “You have to make a significant effort to document the copyrighted material that you use in the training of the algorithm,” paving the way for artists, writers and other content creators to seek redress, Tudorache said.

    Officials drawing up AI regulations have to balance risks that the technology poses with the transformative benefits that it promises.

    Big tech companies developing AI systems and European national ministries looking to deploy them “are seeking to limit the reach of regulators,” while civil society groups are pushing for more accountability, said EDRi’s Chander.

    “We want more information as to how these systems are developed — the levels of environmental and economic resources put into them — but also how and where these systems are used so we can effectively challenge them,” she said.

    Under the EU’s risk-based approach, AI uses that threaten people’s safety or rights face strict controls.

    Remote facial recognition is expected to be banned. So are government “social scoring” systems that judge people based on their behavior. Indiscriminate “scraping” of photos from the internet used for biometric matching and facial recognition is also a no-no.

    Predictive policing and emotion recognition technology, aside from therapeutic or medical uses, are also out.

    Violations could result in fines of up to 6% of a company’s global annual revenue.

    Even after getting final approval, expected by the end of the year or early 2024 at the latest, the AI Act won’t take immediate effect. There will be a grace period for companies and organizations to figure out how to adopt the new rules.

    It’s possible that industry will push for more time by arguing that the AI Act’s final version goes farther than the original proposal, said Frederico Oliveira Da Silva, senior legal officer at European consumer group BEUC.

    They could argue that “instead of one and a half to two years, we need two to three,” he said.

    He noted that ChatGPT only launched six months ago, and it has already thrown up a host of problems and benefits in that time.

    If the AI Act doesn’t fully take effect for years, “what will happen in these four years?” Da Silva said. “That’s really our concern, and that’s why we’re asking authorities to be on top of it, just to really focus on this technology.”

    ___

    AP Technology Writer Matt O’Brien in Providence, Rhode Island, contributed.

  • Berkeley professor apologizes for false Indigenous identity

    SAN FRANCISCO — An anthropology professor at the University of California, Berkeley, whose identity as Native American had been questioned for years apologized this week for falsely identifying as Indigenous, saying she is “a white person” who lived an identity based on family lore.

    Elizabeth Hoover, associate professor of environmental science, policy and management, said in an apology posted Monday on her website that she claimed an identity as a woman of Mohawk and Mi’kmaq descent but never confirmed that identity with those communities or researched her ancestry until recently.

    “I caused harm,” Hoover wrote. “I hurt Native people who have been my friends, colleagues, students, and family, both directly through fractured trust and through activating historical harms. This hurt has also interrupted student and faculty life and careers. I acknowledge that I could have prevented all of this hurt by investigating and confirming my family stories sooner. For this, I am deeply sorry.”

    Hoover’s alleged Indigenous roots came into question in 2021 after her name appeared on an “Alleged Pretendian List.” The list compiled by Jacqueline Keeler, a Native American writer and activist, includes more than 200 names of people Keeler says are falsely claiming Native heritage.

    Hoover first addressed doubts about her ethnic identity last year when she said in an October post on her website that she had conducted genealogical research and found “no records of tribal citizenship for any of my family members in the tribal databases that were accessed.”

    Her statement caused an uproar, and some of her former students authored a letter in November demanding her resignation. The letter was signed by hundreds of students and scholars from UC Berkeley and other universities along with members of Native American communities. It also called for her to apologize, stop identifying as Indigenous and acknowledge she had caused harm, among other demands.

    “As scholars embedded in the kinship networks of our communities, we find Hoover’s repeated attempts to differentiate herself from settlers with similar stories and her claims of having lived experience as an Indigenous person by dancing at powwows absolutely appalling,” the letter reads.

    Janet Gilmore, a UC Berkeley spokesperson, said in a statement she couldn’t comment on whether Hoover faces disciplinary action, saying discussing it would violate “personnel matters and/or violate privacy rights, both of which are protected by law.”

    “However, we are aware of and support ongoing efforts to achieve restorative justice in a way that acknowledges and addresses the extent to which this matter has caused harm and upset among members of our community,” Gilmore added.

    Hoover is the latest person to apologize for falsely claiming a racial or ethnic identity.

    U.S. Sen. Elizabeth Warren angered many Native Americans during her presidential campaign in 2018 when she used the results of a DNA test to try to rebut the ridicule of then-President Donald Trump, who had derisively referred to her as “fake Pocahontas.”

    Despite the DNA results, which showed some evidence of a Native American in Warren’s lineage, probably six to 10 generations ago, Warren is not a member of any tribe, and DNA tests are not typically used as evidence to determine tribal citizenship.

    Warren later offered a public apology at a forum on Native American issues, saying she was “sorry for the harm I have caused.”

    In 2015, Rachel Dolezal was fired as head of the Spokane, Washington, chapter of the NAACP and was kicked off a police ombudsman commission after her parents told local media their daughter was born white but was presenting herself as Black. She also lost her job teaching African studies at Eastern Washington University in nearby Cheney.

    Hoover said her identity was challenged after she began her first assistant professor job. She began teaching at UC Berkeley in the fall of 2020.

    “At the time, I interpreted inquiries into the validity of my Native identity as petty jealousy or people just looking to interfere in my life,” she wrote.

    Hoover said that she grew up in rural upstate New York thinking she was someone of mixed Mohawk, Mi’kmaq, French, English, Irish and German descent, and attending food summits and powwows. Her mother shared stories about her grandmother being a Mohawk woman who married an abusive French-Canadian man and who committed suicide, leaving her children behind to be raised by someone else.

    She said she would no longer identify as Indigenous but would continue to help with food sovereignty and environmental justice movements in Native communities that ask her for her support.

    In her apology issued Monday, Hoover acknowledged she benefited from programs and funding that were geared toward Native scholars and said she is committed to engaging in the restorative justice process taking place on campus, “as well as supporting restorative justice processes in other circles I have been involved with, where my participation is invited.”

  • Kids and social media: Here are tips for concerned parents

    When it comes to social media, families are seeking help.

    With ever-changing algorithms pushing content at children, parents are seeing their kids’ mental health suffer, even as platforms like TikTok and Instagram provide connections with friends. Some are questioning whether kids should be on social media at all, and if so, starting at what age.

    Lawmakers have taken notice. A bipartisan group of senators recently introduced legislation aiming to prohibit all children under the age of 13 from using social media. It would also require permission from a guardian for users under 18 to create an account. It is one of several proposals in Congress seeking to make the internet safer for children and teens.

    Meanwhile, on Wednesday the Federal Trade Commission said Facebook misled parents and failed to protect the privacy of children using its Messenger Kids app, including misrepresenting the access it provided to app developers to private user data. Now, the FTC is proposing sweeping changes to a privacy order it has with Facebook’s parent company Meta that would include prohibiting it from making money from data it collects on children.

    But making laws and regulating companies takes time. What are parents — and teens — supposed to do in the meantime? Here are some tips on staying safe, communicating and setting limits on social media — for kids as well as their parents.

    IS 17 THE NEW 13?

    There’s already, technically, a rule that prohibits kids under 13 from using platforms that advertise to them without parental consent: The Children’s Online Privacy Protection Act that went into effect in 2000 — before today’s teenagers were even born.

    The goal was to protect kids’ online privacy by requiring websites and online services to disclose clear privacy policies and get parents’ consent before gathering personal information on their kids, among other things. To comply, social media companies have generally banned kids under 13 from signing up for their services, although it’s been widely documented that kids sign up anyway, either with or without their parents’ permission.

    But times have changed, and online privacy is no longer the only concern when it comes to kids being online. There’s bullying, harassment, the risk of developing eating disorders, suicidal thoughts or worse.

    For years, there has been a push among parents, educators and tech experts to wait to give children phones — and access to social media — until they are older, such as the “Wait Until 8th” campaign, which asks parents to pledge not to give their kids a smartphone until the 8th grade, or about age 13 or 14. But neither social media companies nor the government has done anything concrete to raise the age limit.

    IF THE LAW WON’T BAN KIDS, SHOULD PARENTS?

    “There is not necessarily a magical age,” said Christine Elgersma, a social media expert at the nonprofit Common Sense Media. But, she added, “13 is probably not the best age for kids to get on social media.”

    The laws currently being proposed include blanket bans on the under-13 set when it comes to social media. The problem? There’s no easy way to verify a person’s age when they sign up for apps and online services. And the apps popular with teens today were created for adults first. Companies have added some safeguards over the years, Elgersma noted, but these are piecemeal changes, not fundamental rethinks of the services.

    “Developers need to start building apps with kids in mind,” she said.

    Some tech executives, celebrities such as Jennifer Garner and parents from all walks of life have resorted to banning their kids from social media altogether. While the decision is a personal one that depends on each child and parent, some experts say this could lead to isolating kids, who could be left out of activities and discussions with friends that take place on social media or chat services.

    Another hurdle — kids who have never been on social media may find themselves ill-equipped to navigate the platforms when they are suddenly allowed free rein the day they turn 18.

    TALK, TALK, TALK

    Start early, earlier than you think. Elgersma suggests that parents go through their own social media feeds with their children before they are old enough to be online and have open discussions on what they see. How would your child handle a situation where a friend of a friend asks them to send a photo? Or if they see an article that makes them so angry they just want to share it right away?

    For older kids, approach them with curiosity and interest.

    “If teens are giving you the grunts or the single word answers, sometimes asking about what their friends are doing or just not asking direct questions like, ‘What are you doing on Instagram?’ but rather, ‘Hey, I heard this influencer is really popular,’” she suggested. “And even if your kid rolled their eyes it could be a window.”

    Don’t say things like “Turn that thing off!” when your kid has been scrolling for a long time, says Jean Rogers, the director of the nonprofit Fairplay’s Screen Time Action Network.

    “That’s not respectful,” Rogers said. “It doesn’t respect that they have a whole life and a whole world in that device.”

    Instead, Rogers suggests asking them questions about what they do on their phone, and see what your child is willing to share.

    Kids are also likely to respond to parents and educators “pulling back the curtains” on social media and the sometimes insidious tools companies use to keep people online and engaged, Elgersma said. Watch a documentary like “The Social Dilemma” that explores algorithms, dark patterns and dopamine feedback cycles of social media. Or read up with them on how Facebook and TikTok make money.

    “Kids love to be in the know about these things, and it will give them a sense of power,” she said.

    SETTING LIMITS

    Rogers says most parents have success with taking their kids’ phones overnight to limit their scrolling. Occasionally kids might try to sneak the phone back, but it’s a strategy that tends to work because kids need a break from the screen.

    “They need an excuse with their peers to not be on their phone at night,” Rogers said. “They can blame their parents.”

    Parents may need their own limits on phone use. Rogers said it’s helpful to explain what you are doing when you do have a phone in hand around your child so they understand you are not aimlessly scrolling through sites like Instagram. Tell your child that you’re checking work email, looking up a recipe for dinner or paying a bill so they understand you’re not on there just for fun. Then tell them when you plan to put the phone down.

    YOU CAN’T DO IT ALONE

    Parents should also realize that it’s not a fair fight. Social media apps like Instagram are designed to be addictive, says Roxana Marachi, a professor of education at San Jose State University who studies data harms. Without new laws that regulate how tech companies use our data and algorithms to push users toward harmful content, there is only so much parents can do, Marachi said.

    “The companies are not interested in children’s well-being, they’re interested in eyes on the screen and maximizing the number of clicks,” Marachi said. “Period.”

  • Utah law requiring porn sites verify user ages takes effect

    SALT LAKE CITY — You may soon be required to prove you’re older than 18 to watch porn in Utah, if adult websites comply with a law that took effect Wednesday.

    A new state law requiring adult websites verify the ages of their users took effect on Wednesday, making the state at least the second to enact an age verification law to shield kids from sexually explicit materials that have become increasingly accessible online.

    “It’s part of our job as society — and maybe a subset of my job as a lawmaker — to try to protect children,” state Sen. Todd Weiler, the measure’s Republican sponsor, said. “I’m not gonna blame all of society’s ills on pornography, but I don’t think it’s helpful when a kid is forming their impressions of sex and gender to have all of this filth and lewd depictions on their mind.”

    It’s currently illegal to show children pornography under federal law, however it’s rarely enforced. The law is Utah’s latest move to crack down on access to pornography and dovetails with lawmakers’ other efforts to restrict how kids use the internet, including social media sites. It comes less than a year after Louisiana enacted a similar law and as additional states consider such policies as filters or age verification for adult websites.

    Dr. Eleanor Gaetan of the anti-porn National Center on Sexual Exploitation said filters and age verification were “complementary efforts” to limit kids’ access to pornography. She noted anti-porn sentiment had grown substantially in recent years due to a “groundswell of parents,” including ones who have testified in statehouses throughout the country and in front of the U.S. Congress.

    “The wave will continue because the harms are real,” she said. “These kids can’t unsee what they see.”

    Though heralded by social conservatives, age verification laws have been condemned by adult websites, which argue they’re part of a larger anti-sex political movement. They’ve also garnered opposition from groups that advocate for digital privacy and free speech, including the Electronic Frontier Foundation. The group argued earlier this year that it’s impossible to ensure websites don’t retain user data, regardless of whether age verification laws require they delete it.

    Earlier this week, Pornhub, among the most widely viewed adult websites, blocked access to its content to protest the law. Those in Utah attempting to access the site since Monday have been greeted with a “Dear User” letter and accompanying video from adult film actor Cherie DeVille.

    “Giving your ID card every time you want to visit an adult platform is not the most effective solution for protecting our users,” DeVille says, reading from the letter. “The best and most effective solution for protecting children and adults alike is to identify users by their device.”

    The letter says Pornhub will “completely disable access” in Utah due to the law, unless a “real solution” is offered.

    It’s unclear if other websites will comply.

    Critics, including Pornhub, argue age-verification laws can be easily circumvented with well-known tools such as VPNs that reroute requests to visit websites across public networks. They also have raised questions about enforcement, with Pornhub saying enforcement efforts drive traffic to less-known sites that don’t comply with the law and have fewer safety protocols.

    A year after passing an age-verification requirement, Louisiana lawmakers have renewed their efforts to get adult websites to comply with its law. A follow-up measure that would subject the sites to fines for not requiring users prove their age advanced through the state House of Representatives in April.

    Measures have also been introduced in Arizona and South Carolina. Arkansas passed a similar age-verification law for adult websites that takes effect later this summer.

    The Utah law attempts to address privacy and internet data harvesting concerns by requiring websites not retain the ID information. It opens adult websites up to lawsuits if they don’t verify the age of their users. It offers several age verification methods, including third-party age verification services and digital licenses that states are increasingly offering on mobile devices.

    It builds off years of anti-porn efforts in Utah’s Republican-controlled Legislature, where a majority of lawmakers are members of The Church of Jesus Christ of Latter-day Saints. It comes seven years after Weiler — who describes himself as the statehouse’s unofficial “porn czar” — led the charge to make Utah the first state to declare pornography a “public health crisis” and two years after lawmakers passed a measure paving the way to require internet-capable devices be equipped with porn filters for children. Provisions of the law delay it from taking effect unless at least five other states pass similar measures.

    Weiler likened the measure to Utah’s first-in-the-nation law prohibiting kids under 18 from using social media between the hours of 10:30 p.m. and 6:30 a.m. and requiring age verification for social media users. He said he understands that, realistically, some kids may bypass age-verification controls. But he said he wonders why opponents arguing that enforcement concerns make internet age verification laws useless haven’t raised similar concerns about drivers speeding or online gambling.

    “The internet was born, but it wasn’t born yesterday,” he said.

    __

    AP reporters Sara Cline in Baton Rouge, La. and Andrew DeMillo in Little Rock, Ark. contributed reporting.

  • FTC: Facebook misled parents, failed to guard kids’ privacy

    U.S. regulators say Facebook misled parents and failed to protect the privacy of children using its Messenger Kids app, including misrepresenting the access it provided to app developers to private user data.

    As a result, the Federal Trade Commission on Wednesday proposed sweeping changes to a 2020 privacy order with Facebook — now called Meta — that would prohibit it from profiting from data it collects on users under 18. This would include data collected through its virtual-reality products. The FTC said the company has failed to fully comply with the 2020 order.

    Meta would also be subject to other limitations, including on its use of face-recognition technology, and would be required to provide additional privacy protections for its users.

    “Facebook has repeatedly violated its privacy promises,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.”

    Meta called the announcement a “political stunt.”

    “Despite three years of continual engagement with the FTC around our agreement, they provided no opportunity to discuss this new, totally unprecedented theory. Let’s be clear about what the FTC is trying to do: usurp the authority of Congress to set industry-wide standards and instead single out one American company while allowing Chinese companies, like TikTok, to operate without constraint on American soil,” Meta said in a prepared statement.

    The Menlo Park, California company added that it will “vigorously fight” the FTC’s action and expects to prevail.

    Facebook launched Messenger Kids in 2017, pitching it as a way for children to chat with family members and friends approved by their parents. The app doesn’t give kids separate Facebook or Messenger accounts. Rather, it works as an extension of a parent’s account, and parents get controls, such as the ability to decide with whom their kids can chat.

    At the time, Facebook said Messenger Kids wouldn’t show ads or collect data for marketing, though it would collect some data it said was necessary to run the service.

    But child-development experts raised immediate concerns.

    In early 2018, a group of 100 experts, advocates and parenting organizations contested Facebook’s claims that the app was filling a need kids had for a messaging service. The group included nonprofits, psychiatrists, pediatricians, educators and the children’s music singer Raffi Cavoukian.

    “Messenger Kids is not responding to a need — it is creating one,” the letter said. “It appeals primarily to children who otherwise would not have their own social media accounts.” Another passage criticized Facebook for “targeting younger children with a new product.”

    Facebook, in response to the letter, said at the time that the app “helps parents and children to chat in a safer way,” and emphasized that parents are “always in control” of their kids’ activity.

    The FTC now says this has not been the case. The 2020 privacy order, which required Facebook to pay a $5 billion fine, required an independent assessor to evaluate the company’s privacy practices. The FTC said the assessor “identified several gaps and weaknesses in Facebook’s privacy program.”

    The FTC also said Facebook, from late 2017 until 2019, “misrepresented that parents could control whom their children communicated with through its Messenger Kids product.”

    “Despite the company’s promises that children using Messenger Kids would only be able to communicate with contacts approved by their parents, children in certain circumstances were able to communicate with unapproved contacts in group text chats and group video calls,” the FTC said.

    Meta critics applauded the FTC’s action. Jeffrey Chester, the executive director of the nonprofit Center for Digital Democracy, called it “a long-overdue intervention into what has become a huge national crisis for young people.”

    Meta and its platforms like Instagram and Facebook, Chester added, “are at the center of a powerful commercialized social media system that has spiraled out of control, threatening the mental health and well-being of children and adolescents.”

    The company, he added, has not done enough to address existing problems — and is now unleashing “even more powerful data gathering and targeting tactics fueled by immersive content, virtual reality and artificial intelligence, while pushing youth further into the metaverse with no meaningful safeguards.”

    As part of the proposed changes to the FTC’s 2020 order (which was announced in 2019 and finalized later), Meta would also be required to pause launching new products and services without “written confirmation from the assessor that its privacy program is in full compliance” with the order.

    Meta has 30 days to respond to the FTC’s latest action.

  • OpenAI: ChatGPT back in Italy after meeting watchdog demands

    ChatGPT’s maker said Friday that the artificial intelligence chatbot is available again in Italy after the company met the demands of regulators who temporarily blocked it over privacy concerns.

    OpenAI said it fulfilled a raft of conditions that the Italian data protection authority wanted satisfied by an April 30 deadline to have the ban on the AI software lifted.

    “ChatGPT is available again to our users in Italy,” San Francisco-based OpenAI said by email. “We are excited to welcome them back, and we remain dedicated to protecting their privacy.”

    Generative AI systems like ChatGPT, which use vast pools of online data like digital books, blog posts and other media to generate text, images and other content mimicking human work, have created buzz in the tech world and beyond.

    But their rapid development has stirred fears among officials and even tech leaders about possible ethical and societal risks, with European Union negotiators scrambling to update draft artificial intelligence regulations that have been years in the making.

    Last month, the Italian watchdog, known as Garante, ordered OpenAI to temporarily stop processing Italian users’ personal information while it investigated a possible data breach. The authority said it didn’t want to hamper AI’s development but emphasized the importance of following the EU’s strict data privacy rules.

    OpenAI said it “addressed or clarified the issues” raised by the watchdog.

    The measures include adding information on its website about how it collects and uses data that trains the algorithms powering ChatGPT, providing EU users with a new form for objecting to having their data used for training, and adding a tool to verify users’ ages when signing up.

    Some Italian users shared what appeared to be screenshots of the changes, including a menu button asking users to confirm their age and links to the updated privacy policy and training data help page.

    The Garante said in a statement that it “welcomes the measures OpenAI implemented” and urged the company to comply with two other demands: an age-verification system and a publicity campaign informing Italians about what happened and about their right to opt out of data processing.

    The watchdog imposed the ban last month after finding that some users’ messages and payment information were exposed to others. It also questioned whether there was a legal basis for OpenAI to collect massive amounts of data used to train ChatGPT’s algorithms and raised concerns that the system could sometimes generate false information about individuals.

    Infrastructure Minister Matteo Salvini wrote approvingly on Instagram of the return of ChatGPT and said that his League party “is committed to help start-ups and development in Italy.”

    Other regulators are now taking a closer look at such AI systems, with France’s data privacy regulator and Canada’s privacy commissioner investigating after receiving complaints about ChatGPT.

    The head of the Federal Trade Commission, Lina Khan, warned this week that the U.S. government will “not hesitate to crack down” on harmful business practices involving artificial intelligence.


  • UK locks horns with WhatsApp over threat to break encryption


    LONDON — Britain’s tough new plan to police the internet has left politicians in a stand-off with WhatsApp and other popular encrypted messaging services. De-escalating that row will be easier said than done.

    The Online Safety Bill, the United Kingdom’s landmark effort to regulate social media giants, gives regulator Ofcom the power to require tech companies to identify child sex abuse material in private messages.

    But the proposals have prompted Will Cathcart, boss of the Meta-owned messaging app, whose encrypted service is widely used in Westminster’s own corridors of power, to say it would rather be blocked in the U.K. than compromise on privacy.

    “The core of what we do is a private messaging service for billions of people around the world,” Cathcart told POLITICO in March when he jetted in to London to lobby ministers over the upcoming bill. “When the U.K., a liberal democracy, says, ‘Oh, it is okay to scan everyone’s private communication for illegal content,’ that emboldens countries around the world that have very different definitions of illegal content to propose the same thing,” he added.

    WhatsApp’s smaller rival, Signal, has also said it could stop providing services in the U.K. if the bill requires it to scan messages — echoing claims from the tech industry that date back more than a decade that they can’t create backdoors in encrypted digital services, even to protect kids online, because to do so opens the products up to vulnerabilities from bad actors, including foreign governments.

    “We can’t just let thousands of pedophiles get away with it. That wouldn’t be responsible or proportionate for a government to do,” Science and Technology Secretary Michelle Donelan told POLITICO in February.

    Ministers are keen to lower the temperature. But doing so will prove challenging, two former ministers told POLITICO on the condition of anonymity, given the likelihood of pushback from MPs, the complexity of the technology and the emotiveness of the issue.

    Easier said than done

    Finding a compromise is unlikely to be easy — and the row mirrors similar debates that are underway in the European Union and Australia over just how accountable tech platforms should be for potentially harmful content on encrypted services. 

    The debate over whether the requirements of the bill can be met while protecting privacy centers around “client-side scanning.” 

    While leaders at Britain’s National Cyber Security Centre and security agency GCHQ said last July they believe such technology can simultaneously protect children and privacy, other experts dispute their findings.

    A raft of cryptographers criticized the technique in a 2021 report called “Bugs in Our Pockets,” prompting tech giant Apple to abandon plans to introduce client-side scanning on its services. In Australia, the country’s eSafety Commissioner recently published a report highlighting how the likes of Microsoft and Apple had few, if any, mechanisms to track child sexual abuse material, including via their encrypted services.

    “This is not only companies really taking a blind eye to live crime scenes happening on their platforms, but they’re also failing to properly harden their systems and storage against abuse,” Australian eSafety Commissioner Julie Inman Grant told POLITICO. “It’s akin to leaving a home open to an intruder. Once that bad actor is inside the house, good luck getting them out.”

    Hacking risk

    Cybersecurity experts agree the U.K. bill’s demands are incompatible with a desire to protect encryption. They claim that privacy is not a fungible issue — services either have it or they don’t. And they warn that politicians should be wary of undermining such protections in ways that would make people’s online experiences potentially open to abuse or hacking.

    “In essence, end-to-end encryption involves not having a door, or if you want to use a postal analogy, not having a sorting office for the state to search. Client-side-scanning, despite the claims of its proponents, does seem to involve some kind of level of access, some kind of ability to sort and scan, and therefore there’s no way of confining that to good use by lawful credible authorities and liberal democracies,” said Ciaran Martin, the former chief executive of the government’s National Cyber Security Centre.

    Ministers insist that they support strong encryption and privacy, but say it cannot come at the cost of public safety. 

    Tech companies should be researching technology to identify child sex abuse before messages are encrypted, Donelan said. But the government also appears to be searching for a way to cool the row, and Donelan insisted the measure would be a “last resort.”

    “That element of the bill is like a safety mechanism that can be enacted, should it ever be needed to. It might never be needed because there might be other solutions in place,” she said.

    One official in the Department for Science, Innovation and Technology (DSIT), not authorized to speak on the record but familiar with government discussions, said DSIT wanted to find a way through and is having talks “with anyone that wants to discuss this with us.”

    Melanie Dawes, Ofcom’s chief executive, told POLITICO that any efforts to break encryption in the name of safety would have to meet stringent rules, and such requests would be made in only the most extreme situations. 

    “There’s a high bar for Ofcom to be able to require the use of a technology in order to secure safety,” she said.

    Lords debate

    Peers in the unelected House of Lords, the U.K. parliament’s revising chamber, waded into the issue Thursday.

    Richard Allan, a Lib Dem peer who was Facebook’s chief lobbyist in Europe until 2019, led the charge, saying tech companies will feel they’re “unable to offer their products in the UK under the bill.” He said undermining encryption opened the doors to hostile states and accused the government of playing a “high stakes game of chicken” with tech companies.

    But Beeban Kidron, a crossbench peer who has been leading much of the work in the Lords around child safety, said although she had some sympathy for Allan’s arguments, Big Tech companies had to do more to protect users’ privacy themselves.

    Wilf Stevenson, who is managing Labour’s response to the bill in the Lords, said he was not convinced the government’s plans were “right for the present day, let alone the future.” He added that under the bill “Ofcom is expected to be both gamekeeper and poacher,” with power to regulate tech companies and inspect private messages.

    But Stephen Parkinson, who is guiding the bill through the Lords on behalf of the government, defended the legislation. “The bill contains strong safeguards for privacy,” he said, echoing Donelan’s statement that powers to inspect messages were a “last resort” designed to be used only in cases of suspected terrorism and child sexual exploitation.

    Convincing ministers

    Messaging services including Signal and WhatsApp are hoping for a ministerial climbdown — but few see one coming.

    There is little prospect of large swathes of MPs, who will have the final say on the bill, riding to their rescue, according to two former ministers who have worked on the legislation. 

    “People are scared if they go in and fight over this, even for very genuine reasons, it could be very easily portrayed that they’re trying to block protecting kids,” one former Cabinet minister, a party loyalist, who worked on an earlier draft of the bill, said. 

    The second former minister said MPs “haven’t engaged with it terribly much on a very practical level” because it is “really hard.” 

    “Tech companies have made significant efforts to frame this issue in the false binary that any legislation that impacts private messaging will damage end-to-end encryption and will mean that encryption will not work or is broken. That argument is completely false,” opposition Labour frontbencher Alex Davies-Jones said in a debate last June.

    The widespread leaking of MPs’ WhatsApp messages has also undermined perceptions of the platform’s privacy credentials, the former Cabinet minister quoted above suggested.

    “If you are sharing stuff on WhatsApp with people that’s inappropriate, there’s a good chance it’s going to end up in the public domain anyway. The encryption doesn’t stop that because somebody screenshots it and copies it and sends it on,” they lamented. 

    WhatsApp does have one ally in the former Brexit secretary and long-time civil liberties campaigner David Davis, though.

    “Right across the board there are a whole series of weaknesses the government hasn’t taken on board,” he told POLITICO of the bill.

    And on WhatsApp and Signal’s threats to leave the U.K., Davis thinks the companies may have a point.

    “Well, I sort of hope they do. The truth is their model depends on complete privacy,” he said.

    Update: This article has been updated to include comments from the latest House of Lords debate on the Online Safety Bill.


    Annabelle Dickson, Mark Scott and Tom Bristow


  • Montana gov seeks to expand TikTok ban to other social apps


    Montana’s governor is asking lawmakers to expand the state’s proposed TikTok ban to more social media companies that provide certain data to foreign adversaries.

    Earlier this month, state lawmakers passed a bill that would make Montana the first state in the U.S. with a total ban on the popular social media platform. That would go much further than the bans already in place in many other states and at the federal level, which bar the use of TikTok on government-issued devices.

    Similar to many national lawmakers and government officials, proponents of the law in Montana have claimed the Chinese government could harvest U.S. user data from TikTok and use the platform to push pro-Beijing misinformation or messages to the public. TikTok, which is owned by the Chinese tech giant ByteDance, has said it has never been asked to hand over its data, and has been vigorously opposing the legislation.

    Under the recently-passed bill, downloading TikTok would be illegal in Montana. And any “entity” — an app store or TikTok — would be fined $10,000 a day for each time someone accesses TikTok, “is offered the ability” to access the platform or downloads the app.

    But enforcing the ban is expected to be challenging. Tech experts say there is nothing incentivizing the companies that would be liable for violations, such as app store operators Apple and Google, or TikTok itself, to comply. And any enforcement measures could also be bypassed using a VPN, which masks a user’s IP address and allows them to evade content restrictions.

    The legislation was also expected to face legal hurdles on First Amendment grounds as well as “bills of attainder” laws prohibiting the government from imposing a punishment on a specific entity without a formal trial. A spokesperson for Republican Montana Gov. Greg Gianforte said the amendment offered by the governor’s office sought to deal with some of the concerns raised with the original bill.

    “The amendment for consideration seeks to improve the bill by broadening Montanans’ privacy protections beyond just TikTok and against all foreign adversaries, while also addressing the bill’s technical and legal concerns,” Kaitlin Price, the governor’s press secretary, said in a statement.

    The Wall Street Journal first reported on the amendment.

    TikTok did not immediately respond to a request for comment. But a representative for NetChoice, a trade group that counts Google and TikTok as its members, said in a statement that the bill is still misguided.

    “Once again, focusing on the country of origin is still the wrong approach. This whole issue distracts us from the real threats happening around us all the time,” said Carl Szabo, the vice president and general counsel for the group.

    “If we truly care about protecting all Americans online, Congress needs to work on a federal data privacy law that preempts state law, among other components,” Szabo said. “That’s hard work, but that is what needs to be done.”

    Lawmakers would have to approve the governor’s amendment before the legislative session ends in early May.


  • Appeals court upholds Apple’s control of iPhone app store


    An appeals court on Monday upheld Apple’s exclusive control over the distribution of iPhone apps, rejecting the latest attempt to force one of the world’s most powerful companies to dismantle the digital walls protecting its most lucrative product.

    The 92-page decision issued by the U.S. Ninth Circuit Court of Appeals largely affirmed the findings of a lower-court judge who presided over a 2021 trial that revolved around an antitrust lawsuit filed by Epic Games, the maker of the popular Fortnite video game.

    Epic Games’ lawsuit alleged Apple’s app store — which was launched in 2008, a year after the first iPhone went on sale — had turned into an illegal monopoly that stifles innovation and competition while generating billions of dollars in profit for Apple.

    Epic tried to offer an alternative way to get its mobile app, attempting to evade the developer fees inside the app store, which collects a commission of 15% to 30% on subscriptions and other digital transactions.

    Apple ousted Epic from its app store after it tried to get around restrictions that Apple says protect the security and privacy of iPhone users while also helping to recoup some of the investment that powers one of the world’s most ubiquitous devices.

    U.S. District Judge Yvonne Gonzalez Rogers rejected the monopoly claims leveled against Apple in her September 2021 decision following a 16-day trial held in May of that year. The high-profile trial featured more than 500 exhibits and testimony from more than a dozen witnesses, including Apple CEO Tim Cook and Epic CEO Tim Sweeney.

    After listening to oral arguments last November, the three Ninth Circuit judges handling the appeal upheld the gist of Gonzalez Rogers’ decision with a few minor exceptions.

    Although the lower-court judge “erred as a matter of law on several issues, those errors were harmless,” the appeals court declared in its ruling. The appeals decision also backed Gonzalez Rogers’ opinion that Apple’s iPhone app store wasn’t violating federal antitrust law and that Epic hadn’t proven that consumers didn’t have the freedom to switch to other alternatives, such as phones powered by Google’s Android software.

    “Users who place a premium on low prices can (by purchasing an Android device) select one of the several open app-transaction platforms, which provide marginally less security and privacy,” the ruling said.

    Epic is pursuing an antitrust lawsuit against Google and its Play store for Android phones in a case mirroring its action against Apple. That lawsuit is scheduled for a November trial that will also be joined by the attorneys general in dozens of states pursuing similar allegations against Google.

    Another section of Monday’s decision backed Apple’s assertion that one of the reasons people decide to purchase iPhones stems from the company’s commitment to protect their privacy and security.

    “Apple makes clear that by improving security and privacy features, it is tapping into consumer demand and differentiating its products from those of its competitors — goals that are plainly procompetitive,” the ruling said.

    One of the three appeals court judges, Sidney R. Thomas, differed with the two other judges, Milan D. Smith Jr. and Michael J. McShane, on some legal issues that he believed should have been sent back to Gonzalez Rogers for further review.

    Apple hailed the appeals court’s decision as further evidence that the iPhone app store “continues to promote competition, drive innovation, and expand opportunity.”

    In a tweet, Epic’s Sweeney acknowledged Apple’s appeals court triumph and then followed up with another tweet saying the company is “working on next steps,” without elaborating. The Cary, North Carolina, company could still ask for a review before a larger panel of Ninth Circuit judges or file an appeal with the U.S. Supreme Court.

    Monday’s Ninth Circuit decision wasn’t an across-the-board victory for Apple, raising the potential that it might also pursue an additional appeal.

    In her lower-court ruling, Gonzalez Rogers found that some of Apple’s app store rules constitute unfair competition under California law. Those so-called “anti-steering” violations stem from an Apple prohibition preventing the promotion of payment options from inside the apps installed on iPhones.

    As a remedy, Gonzalez Rogers ordered Apple to allow developers throughout the U.S. to insert links to other payment options besides its own within iPhone apps. That change would make it easier for app developers to avoid paying Apple’s commissions, potentially affecting billions of dollars in revenue annually.

    Apple had appealed the part of Gonzalez Rogers’ decision addressing the “anti-steering” policies, but was rebuffed Monday. In its statement, Apple said it’s assessing whether it will contest the appeals court’s findings on that issue.


  • Online gaming chats have long been spy risk for US military


    WASHINGTON — Step into a U.S. military recreation hall at a base almost anywhere in the world and you’re bound to see it: young troops immersed in the world of online games, using government-funded gaming machines or their own consoles.

    The enthusiasm military personnel have for gaming — and the risk that carries — is in the spotlight after Jack Teixeira, a 21-year-old Massachusetts Air National Guardsman, was charged with illegally taking and posting highly classified material in a geopolitical chat room on Discord, a social media platform that started as a hangout for gamers.

    State secrets can be illegally shared in countless different ways, from whispered conversations and dead drops to myriad social media platforms. But online gaming forums have long been a particular worry of the military because of their lure for young service members. And U.S. officials are limited in how closely they can monitor those forums to make sure nothing on them threatens national security.

    “The social media world and gaming sites in particular have been identified as a counterintelligence concern for about a decade,” said Dan Meyer, a partner at the Tully Rinckey law firm, which specializes in military and security clearance issues.

    Foreign intelligence agents could use an avatar in a gaming room to connect with “18 to 23-year-old sailors gaming from the rec center at Norfolk Naval Base, win their confidence over for months, and then, through that process, start to connect with them on other social media platforms,” Meyer said, noting that U.S. spy agencies have also created avatars to conduct surveillance in the online games World of Warcraft and Second Life.

    The military doesn’t have the authority to conduct surveillance of U.S. citizens on U.S. soil — that’s the role of domestic law enforcement agencies like the FBI. Even when monitoring members of the armed forces, there are privacy issues, something the Defense Department ran into head-on as it tried to establish social media policies to counter extremism in the ranks.

    The military does, however, have a presence in the online game community. Both the Army and the Navy have service members whose full-time job is to compete in video game tournaments as part of military esports teams. The teams are seen as an effective way to reach and potentially recruit youth who have grown up with online gaming since early childhood. But none of the services said they had any sort of similar team playing online to monitor for potential threats or leaks.

    Pentagon spokeswoman Sue Gough said its intelligence activities are primarily focused internationally. In collecting any information on Americans, the Defense Department does so “in accordance with law and policy and in a manner that protects privacy and civil liberties,” she said in a statement to The Associated Press. She said the procedures must be approved by the attorney general.

    Instead, the military has focused on training service members never to reveal classified information in the first place. In the wake of the online leaks, the department is reviewing its processes to protect classified information, reducing the number of people who have access, and reminding the force that “the responsibility to safeguard classified information is a lifetime requirement for each individual granted a security clearance,” Deputy Secretary of Defense Kathleen Hicks said in a memo issued Thursday following Teixeira’s arrest.

    But that may not be enough.

    “These various gaming channels are just another form of social networks,” said Peter W. Singer, whose novel “Burn In” centered on attacks on the U.S. that are plotted in a private chamber of an online war game — and where all the plotters use avatars of historical figures to disguise themselves.

    Singer, who has advised the Pentagon on future warfare, expects that future espionage and plotting will likely find haven in some of these private online worlds.

    “There’s a shift from it being viewed as niche, and for kids to adults using it for everything from marketing and entertainment to criminality,” Singer said. “Is this the future? Most definitely.”

    But besides the legal limitations on monitoring these games, the vast number of sites and private chats would be virtually impossible for the Pentagon to manage, Singer said.

    “Your answer to this can’t be ‘How do I find it on video game channels?’” Singer said. “Your answer has to be, ‘How do I keep it from getting out in the first place?’”


  • Jury selection begins in defamation lawsuit against Fox News


    Jury selection has begun behind closed doors in a defamation lawsuit seeking to hold Fox News responsible for repeatedly airing false claims related to the 2020 presidential election

    By Randall Chase, Associated Press

    WILMINGTON, Del. — Jury selection began behind closed doors Thursday in a defamation lawsuit seeking to hold Fox News responsible for repeatedly airing false claims related to the 2020 presidential election.

    Delaware Superior Court Judge Eric Davis previously made clear that the selection would be done out of public view to ensure the privacy and safety of potential jurors.

    “Because of the nature of the case and under the statute, I can take those steps to protect jurors,” the judge said Thursday, noting that the case has received international attention.

    “I need to make sure that the jury remains unaffected by this,” Davis added.

    Jury selection in Delaware is usually done in public but occasionally is closed to protect jurors, such as in high-profile criminal cases or those involving alleged gang activity.

    The judge met privately with potential jurors and handed out forms asking several routine questions, including whether those in the jury pool have ever worked for Fox or Dominion Voting Systems, the Colorado-based voting machine company that filed the defamation lawsuit.

    He began Thursday’s proceeding by denying a request by certain media outlets for permission to record and rebroadcast a live audio feed of the trial. The outlets sought similar permission for the jury selection, even though it is being done in private without audio access.

    “I have gone as far as I can go with respect to access,” Davis told lawyers and media representatives in the courtroom, noting that even providing an audio feed of the trial is unprecedented.

    “You’re getting the most access of any media in a Superior Court case in Delaware,” he said.

    Lawyers for Fox filed a response opposing the media access request, saying it risks invading privacy interests, distracting jurors and trial participants, and compromising the integrity of the trial proceedings.

    “There is no guarantee that others will not exploit or misuse the recordings once they are posted online,” Fox’s lawyers wrote.

    Opening statements are scheduled for Monday in a trial expected to last six weeks.

    Dominion alleges that Fox damaged the company by repeatedly airing false allegations that its machines and the software they used rigged the 2020 presidential election to prevent Donald Trump’s reelection. Records produced as part of the suit show many Fox executives and on-air hosts didn’t believe the claims but broadcast them anyway.


  • Today in History: April 9, Lee surrenders to Grant


    Today in History

    Today is Sunday, April 9, the 99th day of 2023. There are 266 days left in the year.

    Today’s Highlight in History:

    On April 9, 1865, Confederate Gen. Robert E. Lee surrendered his army to Union Lt. Gen. Ulysses S. Grant at Appomattox Court House in Virginia.

    On this date:

    In 1413, the coronation of England’s King Henry V took place in Westminster Abbey.

    In 1939, Marian Anderson performed a concert at the Lincoln Memorial in Washington, D.C., after the Black singer was denied the use of Constitution Hall by the Daughters of the American Revolution.

    In 1940, during World War II, Germany invaded Denmark and Norway.

    In 1942, during World War II, some 75,000 Philippine and American defenders on Bataan surrendered to Japanese troops, who forced the prisoners into what became known as the Bataan Death March; thousands died or were killed en route.

    In 1959, NASA presented its first seven astronauts: Scott Carpenter, Gordon Cooper, John Glenn, Gus Grissom, Wally Schirra, Alan Shepard and Donald Slayton. Architect Frank Lloyd Wright, 91, died in Phoenix, Arizona.

    In 1968, funeral services, private and public, were held for Martin Luther King Jr. at the Ebenezer Baptist Church and Morehouse College in Atlanta, five days after the civil rights leader was assassinated in Memphis, Tennessee.

    In 1979, officials declared an end to the crisis involving the Three Mile Island Unit 2 nuclear reactor in Pennsylvania, 12 days after a partial core meltdown.

    In 1996, in a dramatic shift of purse-string power, President Bill Clinton signed a line-item veto bill into law. (However, the U.S. Supreme Court struck down the veto in 1998.)

    In 2003, jubilant Iraqis celebrated the collapse of Saddam Hussein’s regime, beheading a toppled statue of their longtime ruler in downtown Baghdad and embracing American troops as liberators.

    In 2005, Britain’s Prince Charles married longtime love Camilla Parker Bowles, who took the title Duchess of Cornwall.

    In 2010, Supreme Court Justice John Paul Stevens announced his retirement. (His vacancy was filled by Elena Kagan.)

    In 2021, Britain’s Prince Philip, husband of Queen Elizabeth II, died at the age of 99; he was Britain’s longest-serving consort.

    Ten years ago: Thirteen people were shot to death during a pre-dawn, house-to-house rampage in the Serbian village of Velika Ivanca; authorities identified the gunman as a 60-year-old veteran of the Balkan wars who took his own life. Fourteen people were injured by a knife-wielding attacker at Lone Star College in Cypress, Texas; a suspect was later sentenced to 48 years in prison. Connecticut’s women’s basketball team won its eighth NCAA championship with a 93-60 rout of Louisville at New Orleans Arena.

    Five years ago: Federal agents raided the office of President Donald Trump’s personal attorney, Michael Cohen, seizing records on matters including a $130,000 payment made to porn actress Stormy Daniels. Opening statements began in the retrial of Bill Cosby, charged with drugging and molesting Andrea Constand at his suburban Philadelphia home. (Cosby was convicted and sentenced to three to 10 years in prison, but the state’s Supreme Court would later throw out the conviction.) Facebook began alerting some users that their data had been swept up in the Cambridge Analytica privacy scandal.

    One year ago: Civilian evacuations moved forward in patches of battle-scarred eastern Ukraine, a day after a Russian missile strike killed at least 52 people and wounded more than 100 at a train station where thousands clamored to leave before an expected Russian onslaught. Pittsburgh Steelers quarterback Dwayne Haskins was killed in an auto accident in Florida. Community activists in South Florida sprang into action after West Point cadets on spring break were sickened by fentanyl-laced cocaine at a house party. They blitzed beaches and warned spring breakers of a surge in recreational drugs cut with the dangerous synthetic opioid.

    Today’s Birthdays: Satirical songwriter and mathematician Tom Lehrer is 95. Actor Michael Learned is 84. Country singer Margo Smith is 81. Actor Dennis Quaid is 69. Comedian Jimmy Tingle is 68. Country musician Dave Innis (Restless Heart) is 64. Talk show host Joe Scarborough is 60. Actor-sports reporter Lisa Guerrero is 59. Arizona Gov. Doug Ducey is 59. Actor Mark Pellegrino is 58. Actor-model Paulina Porizkova is 58. Actor Cynthia Nixon is 57. TV personality Sunny Anderson is 48. Rock singer Gerard Way (My Chemical Romance) is 46. Actor Keshia Knight Pulliam is 44. Rock musician Albert Hammond Jr. (The Strokes) is 43. Actor Charlie Hunnam is 43. Actor Ryan Northcott is 43. Actor Arlen Escarpeta is 42. Actor Jay Baruchel is 41. Actor Annie Funke is 38. Actor Jordan Masterson is 37. Actor Leighton Meester is 37. Actor-singer Jesse McCartney is 36. R&B singer Jazmine Sullivan is 36. Actor Kristen Stewart is 33. Actor Elle Fanning is 25. Rapper Lil Nas X is 24. Actor Isaac Hempstead Wright is 24. Classical crossover singer Jackie Evancho (ee-VAYN’-koh) is 23.


  • TikTok fined $15.9M by UK watchdog over misuse of kids’ data

    TikTok fined $15.9M by UK watchdog over misuse of kids’ data

    LONDON (AP) — Britain’s privacy watchdog hit TikTok with a multimillion-dollar penalty Tuesday for misusing children’s data and violating other protections for young users’ personal information.

    The Information Commissioner’s Office said it issued a fine of 12.7 million pounds ($15.9 million) to the short-video sharing app, which is wildly popular with young people.

    It’s the latest example of tighter scrutiny that TikTok and its parent, Chinese technology company ByteDance, are facing in the West, where governments are increasingly concerned about risks that the app poses to data privacy and cybersecurity.

    The British watchdog, which was investigating data breaches between May 2018 and July 2020, said TikTok allowed as many as 1.4 million children in the U.K. under 13 to use the app in 2020, despite the platform’s own rules prohibiting children that young from setting up accounts.

    TikTok didn’t adequately identify and remove children under 13 from the platform, the watchdog said. And even though it knew younger children were using the app, TikTok failed to get consent from their parents to process their data, as required by Britain’s data protection laws, the agency said.

    “There are laws in place to make sure our children are as safe in the digital world as they are in the physical world. TikTok did not abide by those laws,” Information Commissioner John Edwards said in a press release.

    TikTok collected and used personal data of children who were inappropriately given access to the app, he said.

    “That means that their data may have been used to track them and profile them, potentially delivering harmful, inappropriate content at their very next scroll,” Edwards said.

    The company said it disagreed with the watchdog’s decision.

    “We invest heavily to help keep under 13s off the platform and our 40,000-strong safety team works around the clock to help keep the platform safe for our community,” TikTok said in a statement. “We will continue to review the decision and are considering next steps.”

    TikTok says it has improved its sign-up system since the breaches happened by no longer allowing users to simply declare they are old enough and by looking for other signs that an account is used by someone under 13.

    The penalty also covered other breaches of U.K. data privacy law.

    The watchdog said TikTok failed to properly inform people about how their data is collected, used and shared in an easily understandable way. Without this information, it’s unlikely that young users would be able “to make informed choices” about whether and how to use TikTok, it said.

    TikTok also failed to ensure personal data of British users was processed lawfully, fairly and transparently, the regulator said.

    TikTok initially faced a 27 million-pound fine, which was reduced after the company persuaded regulators to drop other charges.

    U.S. regulators in 2019 fined TikTok, previously known as Musical.ly, $5.7 million in a case that involved similar allegations of unlawful collection of children’s personal information.

    Also Tuesday, Australia became the latest country to ban TikTok from its government devices, with authorities from the European Union to the United States concerned that the app could share data with the Chinese government or push pro-Beijing narratives. U.S. lawmakers are also considering forcing a sale or even banning it outright as tensions with China grow.


  • UK fines TikTok $15.9m over misuse of children’s data

    UK fines TikTok $15.9m over misuse of children’s data

    Watchdog says TikTok failed to get consent from parents to process the data, as required by the United Kingdom’s data protection laws.

    The UK’s privacy watchdog hit TikTok with a multimillion-dollar penalty for misusing children’s data and violating other protections for young users’ personal information.

    The Information Commissioner’s Office said on Tuesday that it issued a fine of 12.7 million British pounds ($15.9m) to the short-video-sharing app, which is wildly popular with young people.

    It’s the latest example of tighter scrutiny that TikTok and its parent, Chinese technology company ByteDance, are facing in the West, where governments are increasingly concerned about risks that the app poses to data privacy and cybersecurity.

    The British watchdog, which was investigating data breaches between May 2018 and July 2020, said TikTok allowed as many as 1.4 million children in the United Kingdom under age 13 to use the app in 2020, despite the platform’s own rules prohibiting children that young from setting up accounts.

    TikTok didn’t adequately identify and remove children under 13 from the platform, the watchdog said. And even though it knew younger children were using the app, TikTok failed to get consent from their parents to process their data, as required by the UK’s data protection laws, the agency said.

    “There are laws in place to make sure our children are as safe in the digital world as they are in the physical world. TikTok did not abide by those laws,” Information Commissioner John Edwards said in a press release.

    The social media company collected and used the personal data of children who were inappropriately given access to the app, he said.

    “That means that their data may have been used to track them and profile them, potentially delivering harmful, inappropriate content at their very next scroll,” Edwards said.

    The company said it disagreed with the watchdog’s decision.

    “We invest heavily to help keep under 13s off the platform and our 40,000-strong safety team works around the clock to help keep the platform safe for our community,” TikTok said in a statement.

    “We will continue to review the decision and are considering next steps,” the statement added.

    TikTok says that it has improved its sign-up system since the breaches happened by no longer allowing users to simply declare they are old enough and that it is looking for other signs that an account is used by someone under 13.

    The penalty also covered other breaches of UK data privacy law.

    The watchdog said TikTok failed to properly inform people about how their data is collected, used and shared in an easily understandable way. Without this information, it’s unlikely that young users would be able “to make informed choices” about whether and how to use TikTok, it said.

    TikTok also failed to ensure personal data of British users was processed lawfully, fairly and transparently, the regulator said.

    The social media company initially faced a fine of 27 million British pounds ($33.7m), which was reduced after the company persuaded regulators to drop other charges.

    US regulators in 2019 fined TikTok – previously known as Musical.ly – $5.7m in a case that involved similar allegations of unlawful collection of children’s personal information.

    Also Tuesday, Australia became the latest country to ban TikTok from its government devices, with authorities from the European Union to the United States concerned that the app could share data with the Chinese government or push pro-Beijing narratives.

    US lawmakers are also considering forcing a sale or even banning TikTok outright as tensions with China grow.


  • Biden says tech companies must ensure AI products are safe

    Biden says tech companies must ensure AI products are safe

    President Joe Biden met with his council of advisers on science and technology about the “risks and opportunities” that rapid advancements in artificial intelligence development pose for individual users and national security

    By ZEKE MILLER, AP White House Correspondent

    WASHINGTON — President Joe Biden on Tuesday met with his council of advisers on science and technology about the risks and opportunities that rapid advancements in artificial intelligence development pose for individual users and national security.

    Biden said that “tech companies have a responsibility to make sure their products are safe before making them public.”

    “AI can help deal with some very difficult challenges like disease and climate change, but it also has to address the potential risks to our society, to our economy, to our national security,” Biden told the group.

    The White House said the Democratic president would use the AI meeting to “discuss the importance of protecting rights and safety to ensure responsible innovation and appropriate safeguards” and to reiterate his call for Congress to pass legislation to protect children and curtail data collection by technology companies.

    Artificial intelligence burst to the forefront in the national and global conversation after the release of the popular ChatGPT AI chatbot, which helped spark a race among tech giants to unveil similar tools, while raising ethical and societal concerns about new tools that can generate convincing prose or imagery that looks like it’s the work of humans.

    Italy last week temporarily blocked ChatGPT over data privacy concerns, and European Union lawmakers have been negotiating new regulations to limit high-risk AI products.

    The U.S. so far has taken a different approach. The Biden administration last year unveiled a set of far-reaching goals aimed at averting harms caused by the rise of AI systems, including guidelines for how to protect people’s personal data and limit surveillance.

    The Blueprint for an AI Bill of Rights notably did not set out specific enforcement actions, but instead was intended as a White House call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world.

    Biden’s council, known as PCAST, is composed of science, engineering, technology and medical experts and is co-chaired by the Cabinet-ranked director of the White House Office of Science and Technology Policy, Arati Prabhakar.

    Asked if AI is dangerous, Biden said Tuesday, “It remains to be seen. Could be.”

    —————

    AP writers Chris Megerian and Matt O’Brien contributed to this report.


  • TikTok attorney: China can’t get U.S. data under plan

    TikTok attorney: China can’t get U.S. data under plan

    SAUSALITO, Calif. — Under intense scrutiny from Washington that could lead to a potential ban, the top attorney for TikTok and its Chinese parent company ByteDance defended the social media platform’s plan to safeguard U.S. user data from China.

    “The basic approach that we’re following is to make it physically impossible for any government, including the Chinese government, to get access to U.S. user data,” said general counsel Erich Andersen during a wide-ranging interview with The Associated Press at a cybersecurity conference in Sausalito, California, on Friday sponsored by the Hewlett Foundation and Aspen Digital and featuring top government officials, tech executives and journalists.

    ByteDance will continue to develop its new app called Lemon8, Andersen said.

    “We’re obviously going to do our best with the Lemon8 app to comply with U.S. law and to make sure we do the right thing here,” Andersen said, referring to the new social app developed by ByteDance that resembles Instagram and Pinterest. “But I think we got a long way to go with that application — it’s pretty much a startup phase.”

    ByteDance’s best-known app, TikTok, is under intense scrutiny over concerns it could hand over user data to the Chinese government or push pro-Beijing propaganda and misinformation on its behalf. Lemon8 was introduced across app stores in Japan in April 2020 and has been rolled out in more countries since then. It’s available for download in the U.S. and could face similar scrutiny to TikTok.

    Leaders at the FBI, CIA and officials at other government agencies have warned that ByteDance could be forced to give user data — such as browsing history, IP addresses and biometric identifiers — to Beijing under a 2017 law that compels companies to cooperate with the government for matters involving China’s national security. Another Chinese law, implemented in 2014, has similar mandates.

    To assuage concerns from U.S. officials, TikTok has been emphasizing a $1.5 billion proposal, called Project Texas, to store all U.S. user data on servers owned and maintained by the software giant Oracle. Under the plan, access to U.S. data would be managed by U.S. employees through a separate entity called TikTok U.S. Data Security, which is run independently of ByteDance and monitored by outside observers.

    Some lawmakers have said that’s not enough. But despite skepticism about the project, TikTok says it is moving forward anyway.

    “We’re investing in a system where people don’t have to believe the Chinese government and they don’t have to believe us,” Andersen said.

    He also wondered if the skepticism was being driven by something else.

    “Where are we falling short here?” he said. “At some point you get beyond the cybersecurity risk assessment, etcetera, and you get to ‘We don’t like your nationality.’”

    TikTok CEO Shou Zi Chew has said the company started deleting all historic U.S. user data from non-Oracle servers this month and expects that process to be completed this year. During a congressional hearing held last week, Chew said migrating the data to Oracle will keep it out of China’s hands, but also acknowledged China-based employees may still have access to it before the process wraps up.

    TikTok maintains it has never been requested to turn over any kind of data and won’t do so if asked. But whether those promises, or Project Texas, will allow it to stay operating in the U.S. remains to be seen.

    The U.S., as well as Britain, the European Union and others, have banned TikTok on government devices. And the Biden administration is reportedly threatening a U.S. ban on the app unless its Chinese owners divest their stakes in the company.

    On Friday, Andersen said a ban would be “basically giving up.”

    “Banning a platform like TikTok is a defeat, it’s a statement that we aren’t creative enough to find another way,” he said.

    China has said it would oppose a possible sale, a declaration that makes it more difficult for TikTok to position itself and ByteDance as a global enterprise instead of a Chinese company. In 2020, the country had also come out in fierce opposition to executive orders by then President Donald Trump that sought to ban TikTok and the messaging app WeChat.

    “They were clear about their point of view back in 2020 timeframe when we faced an existential challenge from executive orders under the Trump administration,” Andersen said.

    Courts blocked Trump’s efforts, and President Joe Biden rescinded Trump’s orders after taking office. The company has since been in talks about privacy concerns with the Committee on Foreign Investment in the United States, a multi-agency panel that sits under the Treasury Department.

    Meanwhile, lawmakers on Capitol Hill have been pushing bills that would effectively ban TikTok or give the administration more authority to do so. One bill by U.S. Sen. Josh Hawley was blocked this week by Sen. Rand Paul, the only Republican who has come out in opposition to a TikTok ban. A small number of progressive lawmakers have also said they would oppose a ban, arguing the U.S. should instead implement a national privacy law to address the problem.

    Andersen said Friday TikTok would support broad-based privacy legislation.

    “Our view is that we would really welcome broad-based legislation that applies broadly and evenly,” he said. “What we don’t like, frankly, is legislation that is sort of targeted at one company.”

    TikTok could also be banned through another bill, called the RESTRICT Act, that has garnered broad bipartisan support in the Senate and backing from the White House. The legislation does not call out TikTok but would give the Commerce Department power to review and potentially restrict foreign threats to technology platforms.

    ___

    This story has been updated to change Hewlett-Packard Foundation to Hewlett Foundation.



  • Italy privacy watchdog blocks ChatGPT, citing data breach

    Italy privacy watchdog blocks ChatGPT, citing data breach

    ROME — The Italian government’s privacy watchdog said Friday that it is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach.

    In a statement on its website, the Italian Data Protection Authority described its action as provisional “until ChatGPT respects privacy.” The measure temporarily bars the company from holding Italian users’ data.

    U.S.-based OpenAI, which developed ChatGPT, didn’t immediately return a request for comment Friday.

    While some public schools and universities around the world have blocked the ChatGPT website from their local networks over student plagiarism concerns, it’s not clear how Italy would block it at a nationwide level.

    The move also is unlikely to affect applications from companies that already have licenses with OpenAI to use the same technology driving the chatbot, such as Microsoft’s Bing search engine.

    The AI systems that power such chatbots, known as large language models, are able to mimic human writing styles based on the huge trove of digital books and online writings they have ingested.

    The Italian watchdog said OpenAI must report to it within 20 days what measures it has taken to ensure the privacy of users’ data or face a fine of up to either 20 million euros (nearly $22 million) or 4% of annual global revenue.

    The agency’s statement noted that ChatGPT faced a loss of data on March 20 “regarding the conversations of users and information related to the payment of the subscribers for the service.”

    OpenAI earlier announced that it had to take ChatGPT offline on March 20 to fix a bug that allowed some people to see the titles, or subject lines, of other users’ chat histories.

    “Our investigation has also found that 1.2% of ChatGPT Plus users might have had personal data revealed to another user,” the company said. “We believe the number of users whose data was actually revealed to someone else is extremely low and we have contacted those who might be impacted.”

    Italy’s privacy watchdog lamented “the lack of a notice to users and to all those involved whose data is gathered by OpenAI” and “above all, the absence of a juridical basis that justified the massive gathering and keeping of personal data, with the aim of ‘training’ algorithms underlying the functioning of the platform.”

    The agency said information supplied by ChatGPT “doesn’t always correspond to real data, thus determining the keeping of inexact personal data.”

    Finally, it noted “the absence of any kind of filter to verify the age of the users, exposing minors to answers absolutely unsuitable to their degree of development and self-awareness.”

    A group of scientists and tech industry leaders published a letter Wednesday calling for companies such as OpenAI to pause the development of more powerful AI models until the fall, to give society time to weigh the risks.

    The San Francisco-based company’s CEO, Sam Altman, announced this week that he’s embarking on a six-continent trip in May to talk about the technology with users and developers. That includes a stop planned for Brussels, where European Union lawmakers have been negotiating sweeping new rules to limit high-risk AI tools.

    Altman said his stops in Europe would include Madrid, Munich, London and Paris.

    ___

    O’Brien reported from Providence, Rhode Island. AP Business Writer Kelvin Chan contributed from London.


  • TikTok propaganda labels fall flat in ‘huge win’ for Russia


    WASHINGTON — A year ago, following Russia’s invasion of Ukraine, TikTok started labeling accounts operated by Russian state propaganda agencies as a way to tell users they were being exposed to Kremlin disinformation.

    An analysis a year later shows the policy has been applied inconsistently. It ignores dozens of accounts with millions of followers. Even when used, labels have little impact on Russia’s ability to exploit TikTok’s powerful algorithms as part of its effort to shape public opinion about the war.

    Researchers at the Alliance for Securing Democracy, a bipartisan, transatlantic nonprofit operated by the German Marshall Fund that studies authoritarian disinformation, on Thursday published a report that identified nearly 80 TikTok accounts operated by Russian state outlets like RT or Sputnik or by individuals linked to them, including RT’s editor-in-chief.

    More than a third of the accounts were unlabeled, despite a labeling policy announced by TikTok a year ago. The labels, which appear in bold immediately below an account’s name, read “Russia state-controlled media.” Clicking on the label brings up more information, including a description that “the government has control over the account’s editorial content.”

    The accounts have spread pro-Russian propaganda about the invasion of Ukraine as well as false and misleading claims about the U.S. and the international coalition that stands against Russia’s war.

    “US to hold biggest satanic gathering in history,” claims one of the videos on Sputnik.Brasil, a Russian media account currently unlabeled on TikTok. Other videos posted by the account blame the U.S. for the war in Ukraine, claim the U.S. will start a nuclear war, and suggest the U.S. is working to make Brazil invade Iran.

    RT Mexico, one of the most popular unlabeled accounts, has posted multiple videos playing up tension between the U.S. and Mexico over immigration and drugs.

    “This is a huge win for Russian propaganda that they’re able to reach such large audiences on TikTok,” said Joe Bodnar, a research analyst at Alliance for Securing Democracy. “TikTok is not taking it as seriously as other platforms.”

    That charge comes as the video sharing platform, owned by Chinese company ByteDance, faces questions in Washington about its ties to the Chinese government as well as concerns about privacy, surveillance and harmful content.

    Britain, Canada, the U.S. federal government and a growing number of American states are among the governments that have already banned TikTok on government-issued devices. Some lawmakers in the U.S. have floated the idea of a complete ban on the app unless ByteDance agreed to sell its U.S. assets to another company.

    TikTok has labeled more than 120 accounts, a spokesperson for the platform told The Associated Press on Tuesday. The platform’s policy covers outlets and organizations, not individuals, a loophole that allows RT’s editor-in-chief to remain unlabeled.

    After being contacted by the AP, the platform said it would label many of the other accounts identified by researchers.

    “This is an ongoing process and we’ll continue to review new accounts and add labels as and when they join the platform,” the company said in an emailed statement.

    Other tech companies have taken a more aggressive approach to Russian disinformation. Last year, Google blocked YouTube channels operated by Russian state media within Europe. The company also has initiated “pre-bunking” programs designed to blunt the effect of disinformation. Meta, which owns Facebook and Instagram, also labels foreign state media and has exposed and eliminated sprawling disinformation networks tied to Russia.

    Once known largely for its popularity among teens, TikTok has emerged as a leading source of information — and misinformation. More than two-thirds of American teens are on the platform, which is among the world’s most popular websites.

    Labels have become a common way for social media platforms to designate content from state-controlled media and alert users without removing the content. TikTok announced its labeling effort in March 2022, saying that “in response to the war in Ukraine, we’re expediting the rollout of our state media policy to bring viewers context to evaluate the content they consume on our platform.”

    While the labels may provide more information about an account, they aren’t doing much to reduce overall engagement with Russian propaganda on TikTok, suggesting users either don’t see or don’t care about the labels.

    RT, one of Russia’s top state-controlled outlets, has more followers on TikTok than The New York Times or The Wall Street Journal, despite a label classifying RT as “Russia state-controlled media.”

    Another labeled TikTok account, RT en Espanol, has received more likes than other Spanish-language news outlets including Telemundo, Univision or El Pais.

    TikTok’s use of labeling came up during a recent congressional hearing in which TikTok’s CEO was questioned about the platform’s ties to China and its record on safety and privacy. In his testimony, Shou Zi Chew said TikTok’s labeling policy would also extend to Chinese state media outlets.

    Lawmakers were skeptical of his explanations.

    “I worry that TikTok is the world’s most powerful and extensive propaganda machine,” U.S. Rep. Marc Veasey, D-Texas, said during last week’s hearing.
