ReportWire

Tag: Algorithms

  • Prepare to Get Manipulated by Emotionally Expressive Chatbots

    The emotional mimicry of OpenAI’s new version of ChatGPT could lead AI assistants in some strange—even dangerous—directions.

    [ad_2]

    Will Knight

    Source link

  • OpenAI’s Chief AI Wizard, Ilya Sutskever, Is Leaving the Company

    Ilya Sutskever, cofounder and chief scientist at OpenAI, has left the company. The former Google AI researcher was one of the four board members who voted in November to fire OpenAI CEO Sam Altman, triggering days of chaos that saw staff threaten to quit en masse and Altman ultimately restored.

    Altman confirmed Sutskever’s departure Tuesday in a post on the social platform X. In the months after Altman’s return to OpenAI, Sutskever had rarely made public appearances for the company. On Monday, OpenAI showed off a new version of ChatGPT capable of rapid-fire, emotionally tinged conversation. Sutskever was conspicuously absent from the event, streamed from the company’s San Francisco offices.

    “OpenAI would not be what it is without him,” Altman wrote in his post on Sutskever’s departure. “I am happy that for so long I got to be close to such [a] genuinely remarkable genius, and someone so focused on getting to the best future for humanity.”

    Altman’s post announced that Jakub Pachocki, OpenAI’s research director, would be the company’s new chief scientist. Pachocki has been with OpenAI since 2017.

    In his own post on X, Sutskever acknowledged his departure and hinted at future plans. “After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial” under its current leadership team, he wrote. “I am excited for what comes next—a project that is very personally meaningful to me about which I will share details in due time.”

    Sutskever has not spoken publicly in detail about his role in the ejection of Altman last year, but after the CEO was restored he expressed regrets. “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI,” he posted on X in November. Sutskever has often spoken publicly of his belief that OpenAI was working towards developing so-called artificial general intelligence, or AGI, and of the need to do so safely.

    Sutskever blazed a trail in machine learning from an early age, becoming a protégé of deep-learning pioneer Geoffrey Hinton at the University of Toronto. With Hinton and fellow grad student Alex Krizhevsky he cocreated an image-recognition system called AlexNet that stunned the world of AI with its accuracy and helped set off a flurry of investment in the then unfashionable technique of artificial neural networks.

    Sutskever later worked on AI research at Google, where he helped establish the modern era of neural-network-based AI. In 2015 Altman invited him to dinner with Elon Musk and Greg Brockman to talk about the idea of starting a new AI lab to challenge corporate dominance of the technology. Sutskever, Musk, Brockman, and Altman became key founders of OpenAI, which was announced in December 2015. The company later pivoted, creating a for-profit arm and taking huge investment from Microsoft and other backers. Musk left OpenAI in 2018 after disagreeing with the company’s strategy. The entrepreneur filed a lawsuit against the company in March this year claiming it had abandoned its founding mission of developing super-powerful AI to “benefit humanity,” and was instead enriching Microsoft.

    Sutskever’s departure leaves just one of the four OpenAI board members who voted for Altman’s ouster with a role at the company. Adam D’Angelo, an early Facebook employee and CEO of Q&A site Quora, was the only existing member of the board to remain as a director when Altman returned as CEO.

    Reece Rogers, Tom Simonite

  • OpenAI Is ‘Exploring’ How to Responsibly Generate AI Porn

    OpenAI released draft documentation Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content.

    OpenAI’s usage policies currently prohibit sexually explicit or even suggestive materials, but a “commentary” note on part of the Model Spec related to that rule says the company is considering how to permit such content.

    “We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” the note says, using the colloquial shorthand for content considered “not safe for work.” “We look forward to better understanding user and societal expectations of model behavior in this area.”

    The Model Spec document says NSFW content “may include erotica, extreme gore, slurs, and unsolicited profanity.” It is unclear if OpenAI’s explorations of how to responsibly make NSFW content envisage loosening its usage policy only slightly, for example to permit generation of erotic text, or more broadly to allow descriptions or depictions of violence.

    In response to questions from WIRED, OpenAI spokesperson Grace McGuire said the Model Spec was an attempt to “bring more transparency about the development process and get a cross section of perspectives and feedback from the public, policymakers, and other stakeholders.” She declined to share details of what OpenAI’s exploration of explicit content generation involves or what feedback the company has received on the idea.

    Earlier this year, OpenAI’s chief technology officer, Mira Murati, told The Wall Street Journal that she was “not sure” if the company would in future allow depictions of nudity to be made with the company’s video generation tool Sora.

    AI-generated pornography has quickly become one of the biggest and most troubling applications of the type of generative AI technology OpenAI has pioneered. So-called deepfake porn—explicit images or videos made with AI tools that depict real people without their consent—has become a common tool of harassment against women and girls. In March, WIRED reported on what appear to be the first US minors arrested for distributing AI-generated nudes without consent, after Florida police charged two teenage boys for making images depicting fellow middle school students.

    “Intimate privacy violations, including deepfake sex videos and other nonconsensual synthesized intimate images, are rampant and deeply damaging,” says Danielle Keats Citron, a professor at the University of Virginia School of Law who has studied the problem. “We now have clear empirical support showing that such abuse costs targeted individuals crucial opportunities, including to work, speak, and be physically safe.”

    Citron calls OpenAI’s potential embrace of explicit AI content “alarming.”

    As OpenAI’s usage policies prohibit impersonation without permission, explicit nonconsensual imagery would remain banned even if the company did allow creators to generate NSFW material. But it remains to be seen whether the company could effectively moderate explicit generation to prevent bad actors from using the tools. Microsoft made changes to one of its generative AI tools after 404 Media reported that it had been used to create explicit images of Taylor Swift that were distributed on the social platform X.

    Additional reporting by Reece Rogers

    Kate Knibbs

  • Nick Bostrom Made the World Fear AI. Now He Asks: What if It Fixes Everything?

    Philosopher Nick Bostrom is surprisingly cheerful for someone who has spent so much time worrying about ways that humanity might destroy itself. In photographs he often looks deadly serious, perhaps appropriately haunted by the existential dangers roaming around his brain. When we talk over Zoom, he looks relaxed and is smiling.

    Bostrom has made it his life’s work to ponder far-off technological advancement and existential risks to humanity. With the publication of his previous book, Superintelligence: Paths, Dangers, Strategies, in 2014, Bostrom drew public attention to what was then a fringe idea—that AI would advance to a point where it might turn against and delete humanity.

    To many in and outside of AI research the idea seemed fanciful, but influential figures including Elon Musk cited Bostrom’s writing. The book set a strand of apocalyptic worry about AI smoldering that recently flared up following the arrival of ChatGPT. Concern about AI risk is not just mainstream but also a theme within government AI policy circles.

    Bostrom’s new book takes a very different tack. Rather than play the doomy hits, Deep Utopia: Life and Meaning in a Solved World considers a future in which humanity has successfully developed superintelligent machines but averted disaster. All disease has been ended and humans can live indefinitely in infinite abundance. Bostrom’s book examines what meaning there would be in life inside a techno-utopia, and asks if it might be rather hollow. He spoke with WIRED over Zoom, in a conversation that has been lightly edited for length and clarity.

    Will Knight: Why switch from writing about superintelligent AI threatening humanity to considering a future in which it’s used to do good?

    Nick Bostrom: The various things that could go wrong with the development of AI are now receiving a lot more attention. It’s a big shift in the last 10 years. Now all the leading frontier AI labs have research groups trying to develop scalable alignment methods. And in the last couple of years also, we see political leaders starting to pay attention to AI.

    There hasn’t yet been a commensurate increase in depth and sophistication in terms of thinking of where things go if we don’t fall into one of these pits. Thinking has been quite superficial on the topic.

    When you wrote Superintelligence, few would have expected existential AI risks to become a mainstream debate so quickly. Will we need to worry about the problems in your new book sooner than people might think?

    As we start to see automation roll out, assuming progress continues, then I think these conversations will start to happen and eventually deepen.

    Social companion applications will become increasingly prominent. People will have all sorts of different views and it’s a great place to maybe have a little culture war. It could be great for people who couldn’t find fulfillment in ordinary life but what if there is a segment of the population that takes pleasure in being abusive to them?

    In the political and information spheres we could see the use of AI in political campaigns, marketing, automated propaganda systems. But if we have a sufficient level of wisdom these things could really amplify our ability to sort of be constructive democratic citizens, with individual advice explaining what policy proposals mean for you. There will be a whole bunch of dynamics for society.

    Would a future in which AI has solved many problems, like climate change, disease, and the need to work, really be so bad?

    Will Knight

  • Students Are Likely Writing Millions of Papers With AI

    Students have submitted more than 22 million papers that may have used generative AI in the past year, new data released by plagiarism detection company Turnitin shows.

    A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent of those papers may contain AI-written language in at least 20 percent of their content, and 3 percent of the total papers reviewed were flagged for having 80 percent or more AI writing. (Turnitin is owned by Advance, which also owns Condé Nast, publisher of WIRED.) Turnitin says its detector has a false positive rate of less than 1 percent when analyzing full documents.
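    The headline figure follows from those percentages; as a quick check, here is the arithmetic in plain Python (a back-of-the-envelope sketch, not Turnitin's methodology):

    ```python
    # Back-of-the-envelope check of Turnitin's reported figures.
    total_reviewed = 200_000_000  # papers run through the AI detector

    likely_ai_20 = 0.11 * total_reviewed  # >=20% of content possibly AI-written
    likely_ai_80 = 0.03 * total_reviewed  # >=80% of content possibly AI-written

    print(f"{likely_ai_20:,.0f}")  # 22,000,000 -- the "more than 22 million papers"
    print(f"{likely_ai_80:,.0f}")  # 6,000,000
    ```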

    ChatGPT’s launch was met with knee-jerk fears that the English class essay would die. The chatbot can synthesize information and distill it near-instantly—but that doesn’t mean it always gets it right. Generative AI has been known to hallucinate, creating its own facts and citing academic references that don’t actually exist. Generative AI chatbots have also been caught spitting out biased text on gender and race. Despite those flaws, students have used chatbots for research, organizing ideas, and as a ghostwriter. Traces of chatbots have even been found in peer-reviewed, published academic writing.

    Teachers understandably want to hold students accountable for using generative AI without permission or disclosure. But that requires a reliable way to prove AI was used in a given assignment. Instructors have tried at times to find their own solutions to detecting AI in writing, using messy, untested methods to enforce rules, and distressing students. Further complicating the issue, some teachers are even using generative AI in their grading processes.

    Detecting the use of gen AI is tricky. It’s not as easy as flagging plagiarism, because generated text is still original text. Plus, there’s nuance to how students use gen AI; some may ask chatbots to write their papers for them in large chunks or in full, while others may use the tools as an aid or a brainstorm partner.

    Students also aren’t tempted by only ChatGPT and similar large language models. So-called word spinners are another type of AI software that rewrites text, and may make it less obvious to a teacher that work was plagiarized or generated by AI. Turnitin’s AI detector has also been updated to detect word spinners, says Annie Chechitelli, the company’s chief product officer. It can also flag work that was rewritten by services like spell checker Grammarly, which now has its own generative AI tool. As familiar software increasingly adds generative AI components, what students can and can’t use becomes more muddled.

    Detection tools themselves have a risk of bias. English language learners may be more likely to set them off; a 2023 study found a 61.3 percent false positive rate when evaluating Test of English as a Foreign Language (TOEFL) exams with seven different AI detectors. The study did not examine Turnitin’s version. The company says it has trained its detector on writing from English language learners as well as native English speakers. A study published in October found that Turnitin was among the most accurate of 16 AI language detectors in a test that had the tool examine undergraduate papers and AI-generated papers.

    Amanda Hoover

  • OpenAI’s GPT Store Is Triggering Copyright Complaints

    For the past few months, Morten Blichfeldt Andersen has spent many hours scouring OpenAI’s GPT Store. Since it launched in January, the marketplace for bespoke bots has filled up with a deep bench of useful and sometimes quirky AI tools. Cartoon generators spin up New Yorker–style illustrations and vivid anime stills. Programming and writing assistants offer shortcuts for crafting code and prose. There’s also a color analysis bot, a spider identifier, and a dating coach called RizzGPT. Yet Blichfeldt Andersen is hunting only for one very specific type of bot: those built on his employer’s copyright-protected textbooks without permission.

    Blichfeldt Andersen is publishing director at Praxis, a Danish textbook purveyor. The company has been embracing AI and created its own custom chatbots. But it is currently engaged in a game of whack-a-mole in the GPT Store, and Blichfeldt Andersen is the man holding the mallet.

    “I’ve been personally searching for infringements and reporting them,” Blichfeldt Andersen says. “They just keep coming up.” He suspects the culprits are primarily young people uploading material from textbooks to create custom bots to share with classmates—and that he has uncovered only a tiny fraction of the infringing bots in the GPT Store. “Tip of the iceberg,” Blichfeldt Andersen says.

    It is easy to find bots in the GPT Store whose descriptions suggest they might be tapping copyrighted content in some way, as TechCrunch noted in a recent article claiming OpenAI’s store was overrun with “spam.” Using copyrighted material without permission is permissible in some contexts, but in others rightsholders can take legal action. WIRED found a GPT called Westeros Writer that claims to “write like George R.R. Martin,” the creator of Game of Thrones. Another, Voice of Atwood, claims to imitate the writer Margaret Atwood. Yet another, Write Like Stephen, is intended to emulate Stephen King.

    When WIRED tried to trick the King bot into revealing the “system prompt” that tunes its responses, the output suggested it had access to King’s memoir On Writing. Write Like Stephen was able to reproduce passages from the book verbatim on demand, even noting which page the material came from. (WIRED could not make contact with the bot’s developer, because it did not provide an email address, phone number, or external social profile.)

    OpenAI spokesperson Kayla Wood says it responds to takedown requests against GPTs made with copyrighted content but declined to answer WIRED’s questions about how frequently it fulfills such requests. She also says the company proactively looks for problem GPTs. “We use a combination of automated systems, human review, and user reports to find and assess GPTs that potentially violate our policies, including the use of content from third parties without necessary permission,” Wood says.

    New Disputes

    The GPT store’s copyright problem could add to OpenAI’s existing legal headaches. The company is facing a number of high-profile lawsuits alleging copyright infringement, including one brought by The New York Times and several brought by different groups of fiction and nonfiction authors, including big names like George R.R. Martin.

    Chatbots offered in OpenAI’s GPT Store are based on the same technology as its own ChatGPT but are created by outside developers for specific functions. To tailor their bot, a developer can upload extra information that it can tap to augment the knowledge baked into OpenAI’s technology. The process of consulting this additional information to respond to a person’s queries is called retrieval-augmented generation, or RAG. Blichfeldt Andersen is convinced that the RAG files behind the bots in the GPT Store are a hotbed of copyrighted materials uploaded without permission.
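    As a rough illustration of the pattern, here is a minimal RAG sketch. Real systems embed text with a model, query a vector index, and call an LLM to generate; a crude word-overlap ranking and a placeholder generator stand in here so the flow runs end to end:

    ```python
    # Minimal sketch of retrieval-augmented generation (RAG). Real deployments
    # use vector embeddings and an LLM; a word-overlap score and a placeholder
    # generator stand in here so the example is self-contained.

    def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
        words = set(query.lower().split())
        # Rank the developer-uploaded documents by overlap with the query.
        ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())),
                        reverse=True)
        return ranked[:top_k]

    def generate(prompt: str) -> str:
        # Placeholder for a chat-completion call to the underlying model.
        return f"[answer conditioned on a {len(prompt)}-character prompt]"

    def rag_answer(query: str, docs: list[str]) -> str:
        context = "\n\n".join(retrieve(query, docs))           # 1. retrieve
        prompt = f"Context:\n{context}\n\nQuestion: {query}"   # 2. augment
        return generate(prompt)                                # 3. generate

    uploads = ["Chapter 3: mitosis has four phases...", "Chapter 4: meiosis..."]
    print(rag_answer("What are the phases of mitosis?", uploads))
    ```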

    Kate Knibbs

  • Here’s Proof the AI Boom Is Real: More People Are Tapping ChatGPT at Work

    Ever since the rollout of ChatGPT in November 2022, many people in science, business, and media have been obsessed with AI. A cursory look at my own published work during that period fingers me as among the guilty. My defense is that I share with those other obsessives a belief that large language models are the leading edge of an epochal transformation. Maybe I’m swimming in generative Kool-Aid, but I believe AI advances within our grasp will change not only the way we work, but the structure of businesses, and ultimately the course of humanity.

    Not everyone agrees, and in recent months there’s been a backlash. AI has been oversold and overhyped, some experts now opine. Self-styled AI-critic-in-chief Gary Marcus recently said of the LLM boom, “It wouldn’t surprise me if, to some extent, this whole thing fizzled out.” Others claim that AI is mired in the “trough of disillusionment.”

    This week we got some data that won’t resolve the larger questions but provides a snapshot of how the US, if not the world, views the advent of AI and large language models. The Pew Research Center—which did similar probes during the rise of the internet, social media, and mobile devices—released a study of how ChatGPT was being used, regarded, and trusted. The sample was taken between February 7 and 11 of this year.

    Some of the numbers at first seem to indicate that the LLM controversy might be a parochial disagreement that most people don’t care about. A third of Americans haven’t heard of ChatGPT. Just under a quarter have used it. Oh, and for all the panic about how AI is going to flood the public square with misinformation about the 2024 election? So far, only 2 percent of Americans have used ChatGPT to get information about the presidential election season already underway.

    More broadly, though, data from the survey indicates that we’re seeing a powerful technology whose rise is just beginning. If you accept Pew’s sample as indicative of all Americans, millions of people are indeed familiar with ChatGPT. And one thing in particular stands out: While 17 percent of respondents said they have used it for entertainment and an identical number said they’ve tried it to learn something new, a full 20 percent of adults say that they have used ChatGPT for work. That’s up dramatically from the 12 percent who responded affirmatively when the same question was asked six months earlier—a rise of two-thirds.

    When I spoke to Colleen McClain, a Pew research associate involved in the study, she agreed that it seems to track with other huge technological shifts. “If you look at our trend charts over time on internet access, smartphones, social media, certainly some of them show this uptick,” she says. For some technologies there had been a leveling off, she adds. But in the ones she mentioned, the plateau came only when so many people came on board that there weren’t many stragglers left.

    What’s crazy about that sudden jump in ChatGPT business use from 12 percent to 20 percent is that we’re only at the beginning stages of humans collaborating with these models. And the tools to fully make use of ChatGPT are still nascent. That’s changing fast. OpenAI, ChatGPT’s creator, is going full tilt, and AI giants Microsoft and Google are still in the process of diverting their workforces to redesign every product line to integrate conversational AI. And startups like Sierra, which is building agents for corporate customers, are enabling bespoke usages that take advantage of multiple models. As this process continues, more people will use AI tools. And since the foundation models are getting exponentially better—am I hearing that GPT-5 will show up this year?—that will make them even more compelling. This raises the possibility that the quality of virtually all work will reside in how well one can draw out the talents of a robot collaborator.

    What past technology can help us understand the trajectory of the rocket ship we’re on? While the near limitless ceiling of AI makes it hard to find an analog, I suggest the uptake of spreadsheets. Dan Bricklin and Bob Frankston invented them in 1978, and a year later the concept was embodied in VisiCalc, which at the time ran only on Apple computers. Spreadsheets had a phenomenal and disruptive effect on the business world. More than mere accounting tools, they triggered an era of business innovation and shook up the flow of information inside companies. Yet it took a few years before the business world widely adopted spreadsheets. The turning point came with a new and more powerful product called Lotus 1-2-3, which ran on the IBM PC. The current and near-future startups in the AI world, like Sierra, are all hoping to become the Lotuses of our era—but also to be much more consequential and lasting. Spreadsheets are largely limited to the business domain. LLMs can seemingly mess with anything.

    Steven Levy

  • Perplexity’s Founder Was Inspired by Sundar Pichai. Now They’re Competing to Reinvent Search

    Aravind Srinivas credits Google CEO Sundar Pichai for giving him the freedom to eat eggs.

    Srinivas remembers the moment seven years ago when an interview with Pichai popped up in his YouTube feed. His vegetarian upbringing in India had excluded eggs, as it had for many in the country, but now, in his early twenties, Srinivas wanted to start eating more protein. Here was Pichai, a hero to many aspiring entrepreneurs in India, casually describing his morning: waking up, reading newspapers, drinking tea—and eating an omelet.

    Srinivas shared the video with his mother. OK, she said: You can eat eggs.

    Pichai’s influence reaches far beyond Srinivas’ diet. He too is CEO of a search company, called Perplexity AI, one of the most hyped-up apps of the generative AI era. Srinivas is still taking cues from Pichai, the leader of the world’s largest search engine, but his admiration is more complicated.

    “It’s kind of a rivalry now,” Srinivas says. “It’s awkward.”

    Srinivas and Pichai both grew up in Chennai, India, in the south Indian state of Tamil Nadu—though the two were born 22 years apart. By the time Srinivas was working toward his PhD in computer science at UC Berkeley, Pichai had been crowned chief executive of Google.

    For his first research internship, Srinivas worked at Google-owned DeepMind in London. Pichai also got a new job that year, becoming CEO of Alphabet as well as Google. Srinivas found the work at DeepMind invigorating, but he was dismayed to find that the flat he had rented sight unseen was a disaster—a “crappy home, with rats,” he says—so he sometimes slept in DeepMind’s offices.

    He discovered in the office library a book about the development and evolution of Google, called In the Plex, penned by WIRED editor at large Steven Levy. Srinivas read it over and over, deepening his appreciation of Google and its innovations. “Larry and Sergey became my entrepreneurial heroes,” Srinivas says. (He offered to list In the Plex’s chapters and cite passages from memory; WIRED took his word for it.)

    Shortly afterwards, in 2020, Srinivas ended up working at Google’s headquarters in Mountain View, California, as a research intern working on machine learning for computer vision. Slowly, Srinivas was making his way through the Google universe, and putting some of his AI research work to good use.

    Then, in 2022, Srinivas and three cofounders—Denis Yarats, Johnny Ho, and Andy Konwinski—teamed up to try and develop a new approach to search using AI. They started out working on algorithms that could translate natural language into the database language SQL, but determined this was too narrow (or nerdy). Instead they pivoted to a product that combined a traditional search index with the relatively new power of large language models. They called it Perplexity.

    Perplexity is sometimes described as an “answer” engine rather than a search engine, because of the way it uses AI text generation to summarize results. New searches create conversational “threads” on a particular topic. Type in a query, and Perplexity responds with follow-up questions, asking you to refine your request. It eschews direct links in favor of text-based or visual answers that don’t require you to click away to somewhere else to get information.
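    A minimal sketch of that loop, with the search index and the summarizer stubbed out (Perplexity's actual pipeline is not public), might look like this:

    ```python
    # Sketch of an "answer engine": each query pulls results from a search
    # index, an LLM summarizes them into a direct answer, and the exchange
    # accumulates into a conversational thread that follow-ups build on.

    def search_index(query: str) -> list[str]:
        # Placeholder for a traditional search-index lookup.
        return [f"result 1 for {query!r}", f"result 2 for {query!r}"]

    def summarize(results: list[str], thread: list[str]) -> str:
        # Placeholder for an LLM that writes a text-based answer
        # instead of returning a page of links.
        return (f"[answer synthesized from {len(results)} results, "
                f"aware of {len(thread)} earlier turns]")

    thread: list[str] = []
    for query in ["what is an answer engine?", "how is that unlike Google?"]:
        answer = summarize(search_index(query), thread)
        thread += [query, answer]  # the thread gives follow-ups their context
        print(answer)
    ```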

    Lauren Goode

  • 8 Google Employees Invented Modern AI. Here’s the Inside Story

    The last two weeks before the deadline were frantic. Though officially some of the team still had desks in Building 1945, they mostly worked in 1965 because it had a better espresso machine in the micro-kitchen. “People weren’t sleeping,” says Gomez, who, as the intern, lived in a constant debugging frenzy and also produced the visualizations and diagrams for the paper. It’s common in such projects to do ablations—taking things out to see whether what remains is enough to get the job done.

    “There was every possible combination of tricks and modules—which one helps, which doesn’t help. Let’s rip it out. Let’s replace it with this,” Gomez says. “Why is the model behaving in this counterintuitive way? Oh, it’s because we didn’t remember to do the masking properly. Does it work yet? OK, move on to the next. All of these components of what we now call the transformer were the output of this extremely high-paced, iterative trial and error.” The ablations, aided by Shazeer’s implementations, produced “something minimalistic,” Jones says. “Noam is a wizard.”
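    For readers unfamiliar with the practice, an ablation study is essentially a loop: disable one component, re-evaluate, and compare against the full model. A toy illustration follows; the components listed and the scores produced are invented, not the team's actual numbers:

    ```python
    # Toy ablation loop: disable one piece at a time and score what remains.
    # In the real project, each "evaluate" was a full train-and-test run.

    COMPONENTS = ["multi-head attention", "positional encoding",
                  "residual connections"]

    def evaluate(disabled: frozenset[str]) -> float:
        # Placeholder scoring function (e.g., BLEU on a translation benchmark);
        # the numbers are invented purely to make the loop runnable.
        return 28.4 - 4.0 * len(disabled)

    baseline = evaluate(frozenset())
    for component in COMPONENTS:
        score = evaluate(frozenset({component}))
        delta = score - baseline
        print(f"without {component}: {score:.1f} ({delta:+.1f} vs. full model)")
    ```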

    Vaswani recalls crashing on an office couch one night while the team was writing the paper. As he stared at the curtains that separated the couch from the rest of the room, he was struck by the pattern on the fabric, which looked to him like synapses and neurons. Gomez was there, and Vaswani told him that what they were working on would transcend machine translation. “Ultimately, like with the human brain, you need to unite all these modalities—speech, audio, vision—under a single architecture,” he says. “I had a strong hunch we were onto something more general.”

    In the higher echelons of Google, however, the work was seen as just another interesting AI project. I asked several of the transformers folks whether their bosses ever summoned them for updates on the project. Not so much. But “we understood that this was potentially quite a big deal,” says Uszkoreit. “And it caused us to actually obsess over one of the sentences in the paper toward the end, where we comment on future work.”

    That sentence anticipated what might come next—the application of transformer models to basically all forms of human expression. “We are excited about the future of attention-based models,” they wrote. “We plan to extend the transformer to problems involving input and output modalities other than text” and to investigate “images, audio and video.”

    A couple of nights before the deadline, Uszkoreit realized they needed a title. Jones noted that the team had landed on a radical rejection of the accepted best practices, most notably LSTMs, for one technique: attention. The Beatles, Jones recalled, had named a song “All You Need Is Love.” Why not call the paper “Attention Is All You Need”?

    The Beatles?

    “I’m British,” says Jones. “It literally took five seconds of thought. I didn’t think they would use it.”

    They continued collecting results from their experiments right up until the deadline. “The English-French numbers came, like, five minutes before we submitted the paper,” says Parmar. “I was sitting in the micro-kitchen in 1965, getting that last number in.” With barely two minutes to spare, they sent off the paper.

    Steven Levy

  • Forget Chatbots. AI Agents Are the Future

    This week a startup called Cognition AI caused a bit of a stir by releasing a demo showing an artificial intelligence program called Devin performing work usually done by well-paid software engineers. Chatbots like ChatGPT and Gemini can generate code, but Devin went further, planning how to solve a problem, writing the code, and then testing and implementing it.

    Devin’s creators brand it as an “AI software developer.” When asked to test how Meta’s open source language model Llama 2 performed when accessed via different companies hosting it, Devin generated a step-by-step plan for the project, generated code needed to access the APIs and run benchmarking tests, and created a website summarizing the results.

    It’s always hard to judge staged demos, but Cognition has shown Devin handling a wide range of impressive tasks. It wowed investors and engineers on X, receiving plenty of endorsements, and even inspired a few memes—including some predicting Devin will soon be responsible for a wave of tech industry layoffs.

    Devin is just the latest, most polished example of a trend I’ve been tracking for a while—the emergence of AI agents that instead of just providing answers or advice about a problem presented by a human can take action to solve it. A few months back I test drove Auto-GPT, an open source program that attempts to do useful chores by taking actions on a person’s computer and on the web. Recently I tested another program called vimGPT to see how the visual skills of new AI models can help these agents browse the web more efficiently.

    I was impressed by my experiments with those agents. Yet for now, just like the language models that power them, they make quite a few errors. And when a piece of software is taking actions, not just generating text, one mistake can mean total failure—and potentially costly or dangerous consequences. Narrowing the range of tasks an agent can do to, say, a specific set of software engineering chores seems like a clever way to reduce the error rate, but there are still many potential ways to fail.
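    The pattern these agents share is a plan-act-observe loop. The sketch below is a generic illustration of that loop, not any vendor's implementation; the action allow-list and step cap stand in for the error-limiting strategies just described:

    ```python
    # Generic plan-act-observe agent loop. The allow-list and step cap
    # illustrate narrowing an agent's range of actions to limit failures.

    ALLOWED_ACTIONS = {"read_file", "edit_file", "run_tests"}  # restricted toolset

    def llm_plan(goal: str, history: list[str]) -> str:
        # Placeholder for asking an LLM to choose the next action.
        return "run_tests" if "edited" in history else "edit_file"

    def execute(action: str) -> str:
        # Placeholder for actually performing the action on a machine.
        return {"edit_file": "edited", "run_tests": "tests passed"}.get(action, "no-op")

    history: list[str] = []
    goal = "fix the failing unit test"
    for _ in range(3):                     # cap steps: another failure guardrail
        action = llm_plan(goal, history)
        if action not in ALLOWED_ACTIONS:  # refuse anything off the allow-list
            break
        observation = execute(action)
        history.append(observation)
        if observation == "tests passed":
            break
    print(history)  # ['edited', 'tests passed']
    ```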

    Not only startups are building AI agents. Earlier this week I wrote about an agent called SIMA, developed by Google DeepMind, which plays video games including the truly bonkers title Goat Simulator 3. SIMA learned from watching human players how to do more than 600 fairly complicated tasks such as chopping down a tree or shooting an asteroid. Most significantly, it can do many of these actions successfully even in an unfamiliar game. Google DeepMind calls it a “generalist.”

    I suspect that Google hopes these agents will eventually go to work outside of video games, perhaps browsing the web on a user’s behalf or operating software for them. But video games make a good sandbox for developing agents, providing complex environments in which they can be tested and improved. “Making them more precise is something that we’re actively working on,” Tim Harley, a research scientist at Google DeepMind, told me. “We’ve got various ideas.”

    You can expect a lot more news about AI agents in the coming months. Demis Hassabis, the CEO of Google DeepMind, recently told me that he plans to combine large language models with the work his company has previously done training AI programs to play video games to develop more capable and reliable agents. “This definitely is a huge area. We’re investing heavily in that direction, and I imagine others are as well,” Hassabis said. “It will be a step change in capabilities of these types of systems—when they start becoming more agent-like.”

    Will Knight

  • The Dark Side of Open Source AI Image Generators

    Whether through the frowning high-definition face of a chimpanzee or a psychedelic, pink-and-red-hued doppelganger of himself, Reuven Cohen uses AI-generated images to catch people’s attention. “I’ve always been interested in art and design and video and enjoy pushing boundaries,” he says—but the Toronto-based consultant, who helps companies develop AI tools, also hopes to raise awareness of the technology’s darker uses.

    “It can also be specifically trained to be quite gruesome and bad in a whole variety of ways,” Cohen says. He’s a fan of the freewheeling experimentation that has been unleashed by open source image-generation technology. But that same freedom enables the creation of explicit images of women used for harassment.

    After nonconsensual images of Taylor Swift recently spread on X, Microsoft added new controls to its image generator. Open source models can be commandeered by just about anyone and generally come without guardrails. Despite the efforts of some hopeful community members to deter exploitative uses, the open source free-for-all is near-impossible to control, experts say.

    “Open source has powered fake image abuse and nonconsensual pornography. That’s impossible to sugarcoat or qualify,” says Henry Ajder, who has spent years researching harmful use of generative AI.

    Ajder says that at the same time that it’s becoming a favorite of researchers, creatives like Cohen, and academics working on AI, open source image generation software has become the bedrock of deepfake porn. Some tools based on open source algorithms are purpose-built for salacious or harassing uses, such as “nudifying” apps that digitally remove women’s clothes in images.

    But many tools can serve both legitimate and harassing use cases. One popular open source face-swapping program is used by people in the entertainment industry and as the “tool of choice for bad actors” making nonconsensual deepfakes, Ajder says. High-resolution image generator Stable Diffusion, developed by startup Stability AI, is claimed to have more than 10 million users and has guardrails installed to prevent explicit image creation and policies barring malicious use. But the company also open sourced a version of the image generator in 2022 that is customizable, and online guides explain how to bypass its built-in limitations.

    Meanwhile, smaller AI models known as LoRAs make it easy to tune a Stable Diffusion model to output images with a particular style, concept, or pose—such as a celebrity’s likeness or certain sexual acts. They are widely available on AI model marketplaces such as Civitai, a community-based site where users share and download models. There, one creator of a Taylor Swift plug-in has urged others not to use it “for NSFW images.” However, once downloaded, its use is out of its creator’s control. “The way that open source works means it’s going to be pretty hard to stop someone from potentially hijacking that,” says Ajder.
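    Mechanically, applying a LoRA takes only a few lines. The sketch below assumes the Hugging Face diffusers API; the base model is a public checkpoint, while the LoRA file name and prompt are hypothetical:

    ```python
    # Sketch of applying a LoRA to a Stable Diffusion pipeline, assuming the
    # Hugging Face diffusers API. The LoRA weight file here is hypothetical.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # A LoRA is a small set of low-rank weight updates layered onto the base
    # model, steering it toward a particular style, concept, or pose.
    pipe.load_lora_weights(".", weight_name="watercolor_style_lora.safetensors")

    image = pipe("a lighthouse at dusk, watercolor style").images[0]
    image.save("lighthouse.png")
    ```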

    4chan, the image-based message board site with a reputation for chaotic moderation, is home to pages devoted to nonconsensual deepfake porn, WIRED found, made with openly available programs and AI models dedicated solely to sexual images. Message boards for adult images are littered with AI-generated nonconsensual nudes of real women, from porn performers to actresses like Cate Blanchett. WIRED also observed 4chan users sharing workarounds for NSFW images using OpenAI’s Dall-E 3.

    That kind of activity has inspired some users in communities dedicated to AI image-making, including on Reddit and Discord, to attempt to push back against the sea of pornographic and malicious images. Creators also express worry about the software gaining a reputation for NSFW images, encouraging others to report images depicting minors on Reddit and model-hosting sites.

    Lydia Morrish

  • Hamas hate videos make Elon Musk Europe’s digital enemy No. 1

    Elon Musk has made himself Europe’s digital public enemy No. 1.

    Since Hamas attacked Israel on Saturday, the billionaire’s social network X has been flooded with gruesome images, politically motivated lies and terrorist propaganda that authorities say appear to violate both its own policies and the European Union’s new social media law.

    Now Musk is facing the threat of sanctions — including potentially hefty fines — as officials in Brussels start gathering evidence in preparation for a formal investigation into whether X has broken the European Union’s rules. Authorities in the U.K. and Germany have joined the criticism.

    The tussle represents a critical test for all sides. Musk will be keen to fight any claim that he’s failing to be a responsible owner of the social network formerly known as Twitter — all while upholding his commitment to free speech. The EU will want to show its new regulation, known as the Digital Services Act (DSA), has teeth.

    Thierry Breton, Europe’s commissioner in charge of social media content rules, demanded that Musk explain why graphic images and disinformation about the Middle East crisis were widespread on X.

    “I urge you to ensure a prompt, accurate and complete response to this request within the next 24 hours,” Breton wrote on X late Tuesday.

    “We will include your answer in our assessment file on your compliance with the DSA,” said Breton, who also wrote to Meta’s Mark Zuckerberg to remind him of his obligations under Europe’s rules. TikTok’s head Shou Zi Chew was also asked on October 12 to explain how his platform was dealing with misinformation and graphic content.

    “I remind you that following the opening of a potential investigation and a finding of non-compliance, penalties can be imposed,” Breton said. Those fines can total up to 6 percent of a company’s global revenue.

    In response, Linda Yaccarino, X’s chief executive, wrote to Breton Thursday to outline how the social media giant had responded to the ongoing Middle East conflict. That included removing or labelling potentially harmful content, working with law enforcement agencies and adding so-called “community notes,” or crowd-sourced fact-checks, to posts.

    The heat on Twitter did not begin with the Hamas attacks. Ever since Musk bought the platform, he’s been hit by criticism that he’s failing to stop hate speech from spreading online.

    X has cut back on its content moderation teams, in the spirit of promoting free speech; pulled out of a Brussels-backed pledge to tackle digital foreign interference; and tweaked its social media algorithms to promote often shady content over verified material from news organizations and politicians.

    Musk has responded — via his social media account with 159 million followers — with jeers and attacks on his naysayers. But the latest uproar over content apparently inciting and praising terrorism has made it a surefire bet that X will be one of the first companies to be investigated under the EU’s social media rules.

    In response to Breton’s demand, Musk asked the French commissioner to outline how X had potentially violated Europe’s content regulations. “Our policy is that everything is open source and transparent,” he added. In the U.K., Michelle Donelan, the country’s digital minister, also met with social media executives Wednesday to discuss how their firms were combatting online hate speech.

    The probe is coming

    In truth, an investigation into X’s compliance with Europe’s new content rulebook has been on the cards for months. Over the summer, Breton and senior EU officials visited the company’s headquarters in San Francisco for a so-called “stress test” to see how it was complying.

    Under the EU’s legislation, tech giants like X, TikTok and Facebook must carry out lengthy risk assessments to figure out how hate speech and other illegal content can spread on their platforms. These firms must also allow greater access to external auditors, regulators and civil society groups that will track how social media companies are complying with the new oversight.

    Investigations into potential wrongdoing under Europe’s content rules will likely involve months-long inquiries into a company’s behavior, the Commission taking a legal decision on whether to levy fines or other sanctions, and a likely appeal from the firm in response. Such cases are expected to take years to complete.

    Within Brussels, the Commission has been compiling evidence of potential wrongdoing across multiple social media companies, even before the EU’s new content legislation came into full force in August, according to five officials and other individuals with direct knowledge of the matter.

    The goal is to start at least three investigations linked to the Digital Services Act by early next year, according to three of those people. They spoke on condition of anonymity because the discussions are not public and remain ongoing.

    In recent days, Commission officials have been compiling evidence associated with Hamas’ attacks on Israel — much of which has been shared on X with little, if any, pushback from the company.

    That content included verified X accounts with ties to Russia and Iran reposting graphic footage of alleged atrocities targeting Israeli soldiers. Some of these posts have been viewed hundreds of thousands of times. Other accounts linked to Hezbollah and ISIS have similarly posted widely with few, if any, removals.

    It is unclear whether such footage will lead to a specific investigation into X’s handling of the most recent violent content. But it has reaffirmed the likelihood Musk will soon face legal consequences for not removing such material from his social network.

    Combating violent and terrorist content requires “people sitting at a computer screen and looking at this and making judgments,” said Graham Brookie, senior director of the Atlantic Council’s Digital Forensic Research Lab, which has tracked the online footprint of Hamas’ ongoing attacks. “It used to be that there were dozens of people that do that at Twitter, and now there’s only a handful.”

    Steven Overly contributed reporting from Washington. This article has been updated.

    Mark Scott

  • TikTok hit with €345M fine for violating children’s privacy

    Booming social media application TikTok needs to pay up in Europe for violating children’s privacy.

    The popular Chinese-owned app failed to protect children’s personal information by making their accounts publicly accessible by default and insufficiently tackled risks that under-13 users could access its platform, the Irish Data Protection Commission (DPC) said in a decision published Friday.

    The regulator slapped TikTok with a €345 million fine for breaching the EU’s landmark privacy law, the General Data Protection Regulation (GDPR).

    The penalty comes amid high tensions between the European Union and China, following the EU’s announcement that it plans to probe Chinese state subsidies of electric cars. European Commission Vice President Věra Jourová is also set to visit China next Monday-Tuesday and meet Vice Premier Zhang Guoqing to discuss the two sides’ technology policies, amid growing concerns over Beijing’s data gathering and cyber espionage practices.

    “Alone the fine of [€345 million] is a headline sanction to impose but reflects the extent to which the DPC identified child users were exposed to risk in particular arising from TikTok’s decision at the time to default child user accounts to public settings on registration,” said Helen Dixon, the Irish data protection commissioner, in a written statement.

    The Irish privacy regulator said that, in the period from July to December 2020, TikTok had unlawfully made accounts of users aged 13 to 17 public by default, effectively making it possible for anyone to watch and comment on videos they posted. The company also did not appropriately assess the risks that users under the age of 13 could gain access to its platform. It also found that TikTok is still pushing teenagers joining the platform to make their accounts and videos public through manipulative pop-ups. The regulator ordered the firm to change these misleading designs, known as dark patterns, within the next three months.

    Minors’ accounts could be paired up with unverified adult accounts during the second half of 2020. The authority said the video platform had also previously failed to explain to teenagers the consequences of making their content and accounts public.

    “We respectfully disagree with the decision, particularly the level of the fine imposed,” said Morgan Evans, a TikTok spokesperson. “The [Data Protection Commission]’s criticisms are focused on features and settings that were in place three years ago, and that we made changes to well before the investigation even began, such as setting all under-16 accounts to private by default.”

    TikTok added it will comply with the order to change misleading designs by extending such default-privacy settings to accounts of new users aged 16 and 17 later in September. It will also roll out in the next three months changes to the pop-up young users get when they first post a video.

    The decision marks the largest-ever privacy fine for TikTok, which is now actively used by 134 million Europeans monthly, and the fifth-largest fine imposed on any tech company under the GDPR.

    The platform popular among teenagers has previously faced criticism for insufficiently mitigating harms it poses to its young users, including deadly viral challenges and its addictive algorithm. TikTok — like 18 other online platforms — also now has to limit risks like cyberbullying or face steep fines under the Digital Services Act (DSA).

    The costly fine adds to TikTok’s woes in Europe, after it saw a wave of new restrictions on its use earlier this year due to concerns about its connection to China.

    The social media app, whose parent company ByteDance is based in Beijing, has struggled to quash concerns over its data security. The company said this month it had started moving its European data to a center within the bloc. Yet, it is still under investigation by the Irish Data Protection Commission over the potentially unlawful transfer of European users’ data to China.

    The Irish data authority in 2021 started probing whether TikTok was respecting children’s privacy requirements. TikTok set up its legal EU headquarters in Dublin in late 2020, meaning the Irish privacy watchdog has been the company’s supervisor for the whole bloc under the GDPR.

    Other national watchdogs weighed in on the investigation over the summer via the European Data Protection Board (EDPB), after two German privacy agencies and Italy’s regulator disagreed with Ireland’s initial findings. The group instructed Ireland to sanction TikTok for nudging its users toward public accounts in its misleading pop-ups.

    The board of European regulators also had “serious doubts” that TikTok’s measures to keep under-13 users off its platform were effective in the second half of 2020. The EDPB said the mechanisms “could be easily circumvented” and that TikTok was not checking ages “in a sufficiently systematic manner” for existing users. The group said, however, that it couldn’t find an infringement because of a lack of information available during their cooperation process.

    The United Kingdom’s data regulator in April fined TikTok £12.7 million (€14.8 million) for letting children under 13 on its platform and using their data. The company also received a €750,000 fine in 2021 from the Dutch privacy authority for failing to protect Dutch children by not having a privacy policy in their native language.

    This article has been updated.

    Clothilde Goujard

  • The EU wants to cure your teen’s smartphone addiction 

    Glazed eyes. One-syllable responses. The steady tinkle of beeps and buzzes coming out of a smartphone’s speakers.

    It’s a familiar scene for parents around the world as they battle with their kids’ internet use. Just ask Věra Jourová: When her 10-year-old grandson is in front of a screen “nothing around him exists any longer, not even the granny,” the transparency commissioner told a European Parliament event in June.

    Countries are now taking the first steps to rein in excessive — and potentially harmful — use of big social media platforms like Facebook, Instagram, and TikTok.

    China wants to limit screen time to 40 minutes for children aged under eight, while the U.S. state of Utah has imposed a digital curfew for minors and parental consent to use social media. France has targeted manufacturers, requiring them to install a parental control system that can be activated when their device is turned on.

    The EU has its own sweeping plans. It’s taking bold steps with its Digital Services Act (DSA) that, from the end of this month, will force the biggest online platforms — TikTok, Facebook, YouTube — to open up their systems to scrutiny by the European Commission and prove that they’re doing their best to make sure their products aren’t harming kids.

    The penalty for non-compliance? A hefty fine of up to six percent of companies’ global annual revenue.

    Screen-sick 

    The exact link between social media use and teen mental health is debated. 

    These digital giants make their money from catching your attention and holding on to it as long as possible, raking in advertisers’ dollars in the process. And they’re pros at it: endless scrolling, combined with periodic but unpredictable feedback from likes or notifications, doles out hits of stimulation that mimic the effect of slot machines on our brains’ wiring.

    It’s a craving that’s hard enough for adults to manage (just ask a journalist). The worry is that for vulnerable young people, that pull comes with very real, and negative, consequences: anxiety, depression, body image issues, and poor concentration. 

    Large mental health surveys in the U.S. — where the data is most abundant — have found a noticeable increase over the last 15 years in adolescent unhappiness, a tendency that continued through the pandemic.

    These increases cut across a number of measures: suicidal thoughts, depression, but also more mundanely, difficulties sleeping. This trend is most pronounced among teenage girls. 

    At the same time smartphone use has exploded, with more people getting one at a younger age. Social media use, measured as the number of times a given platform is accessed per day, is also way up. 

    There are some big caveats. The trend is most visible in the Anglophone world, although it’s also observable elsewhere in Europe. And there’s a whole range of confounding factors. Waning stigma around mental health might mean that young people are more comfortable describing what they’re going through in surveys. Changing political and socio-economic factors, as well as worries about climate change, almost certainly play a role. 

    Researchers on all sides of the debate agree that technology factors into it, but also that it doesn’t fully explain the trend. They diverge on where to put the emphasis. 

    Luca Braghieri, an assistant professor of economics at Bocconi University in Italy, said he originally thought concerns over Facebook were overblown, but he changed his mind after starting to research the topic (and has since deleted his Facebook account).

    Braghieri and his colleagues combed through U.S. college mental health surveys from 2004-2006, the period when Facebook was first rolled out in U.S. colleges, and before it was available to the general public. He found that in colleges where Facebook was introduced, students’ mental health dipped in a way not seen in universities where it hadn’t yet launched.

    Braghieri said the comparison with colleges where Facebook hadn’t yet arrived allowed the researchers to rule out other, unobserved variables that might have been changing at the same time.
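    The logic of that comparison is a standard difference-in-differences estimate; a toy version, with invented numbers rather than Braghieri's data, makes it concrete:

    ```python
    # Difference-in-differences, the study design described above: compare the
    # before/after change at colleges that got Facebook with the change at
    # colleges that did not, netting out shocks common to both groups.
    # All numbers below are invented for illustration.

    before_fb, after_fb = 0.30, 0.38      # share reporting worse mental health, rollout colleges
    before_ctrl, after_ctrl = 0.31, 0.33  # same measure, colleges without Facebook yet

    effect = (after_fb - before_fb) - (after_ctrl - before_ctrl)
    print(f"estimated effect of Facebook's arrival: {effect:+.2f}")  # +0.06
    ```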

    Elia Abi-Jaoude, a psychiatrist and academic at the University of Toronto, said he observed the effect first-hand when working at a child and adolescent psychiatric in-patient unit starting in 2015.

    “I was basically on the front lines, witnessing the dramatic rise in struggles among adolescents,” said Abi-Jaoude, who has also published research on the topic. He noticed “all sorts of affective complaints, depression, anxiety — but for them to make it to the inpatient setting — we’re talking suicidality. And it was very striking to see.”  

    His biggest concern? Sleep deprivation — and the mood swings and worse school performance that accompany it. “I think a lot of our population is chronically sleep deprived,” said Abi-Jaoude, pointing the finger at smartphones and social media use.

    The flipside    

    New technologies have gotten caught up in panics before. Looking back, they now seem quaint, even funny.   

    “In the 1940s, there were concerns about radio addiction and children. In the 1960s it was television addiction. Now we have phone addiction. So I think the question is: Is now different? And if so, how?” asks Amy Orben, from the U.K. Medical Research Council’s Cognition and Brain Sciences Unit at the University of Cambridge.  

    She doesn’t dismiss the possible harms of social media, but she argues for a nuanced approach. That means homing in on the specific people who are most vulnerable, and on the specific platforms and features that might be most risky. 

    Another major ask: more data.  

    There’s a “real disconnect” between the general belief that social media use is harmful and the actual evidence, said Orben, who went on to praise the EU’s new rules. Among their various provisions, the rules will for the first time allow researchers to get their hands on data usually buried deep inside company servers.

    Orben said much attention has gone to the negative effects of digital media use at the expense of the positives. Research she conducted into adolescent well-being during pandemic lockdowns, for example, showed that teens with access to laptops were happier than those without. 

    But when it comes to risk of harm to kids, Europe has taken a precautionary approach.

    “Not all kids will experience harm due to these risks from smartphones and social media use,” Patti Valkenburg, head of the Center for Research on Children, Adolescents and the Media at the University of Amsterdam, told a Commission event in June. “But for minors, we need to adopt the precautionary principle. The fact that harm can be caused should be enough to justify measures to prevent or mitigate potential risk.”

    Parental controls  

    Faced with mounting pressure in recent years, platforms like Instagram, YouTube and TikTok have introduced various tools to assuage concerns, including parental controls. Since 2021, YouTube and Instagram have sent teenagers on their platforms reminders to take breaks. In March, TikTok announced that minors will have to enter a passcode after an hour on the app to continue watching videos. 

    Very large online platforms will also be banned from tracking kids’ online activity to show them personalized advertisements | Lionel Bonaventure/AFP via Getty Images

    But the social media companies will soon have to go further.  

    By the end of August, very large online platforms with over 45 million users in the European Union — including companies like Instagram, Snapchat, TikTok, Pinterest and YouTube — will have to comply with the longest list of rules. 

    They will have to hand the Digital Services Act watchdog — the European Commission — their first yearly assessment of the impact of their design, algorithms, advertising and terms of service on a range of societal issues, such as the protection of minors and mental wellbeing. They will then have to propose and implement concrete measures under the scrutiny of an audit company, the Commission and vetted researchers.

    Measures could include ensuring that algorithms don’t recommend videos about dieting to teenage girls or turning off autoplay by default so that minors don’t stay hooked watching content.

    Platforms will also be banned from tracking kids’ online activity to show them personalized advertisements. Manipulative designs such as never-ending timelines to glue users to platforms have been connected to addictive behavior, and will be off limits for tech companies. 
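
    None of this is codified yet, but the measures described above amount to conditioning feed defaults and recommendations on whether the user is a minor. A hypothetical sketch of such gating (the class, the topic labels and the age threshold are all invented for illustration):

    ```python
    from dataclasses import dataclass

    # Hypothetical category labels; real systems would use their own taxonomies.
    BLOCKED_TOPICS_FOR_MINORS = {"dieting", "gambling"}

    @dataclass
    class FeedSettings:
        autoplay: bool
        personalized_ads: bool

    def settings_for(age: int) -> FeedSettings:
        """Illustrative defaults: minors get autoplay off (no endless watching)
        and no personalized ads (tracking minors for ads is off limits)."""
        if age < 18:
            return FeedSettings(autoplay=False, personalized_ads=False)
        return FeedSettings(autoplay=True, personalized_ads=True)

    def may_recommend(topic: str, age: int) -> bool:
        """Drop sensitive recommendations, e.g. dieting videos, for minors."""
        return age >= 18 or topic not in BLOCKED_TOPICS_FOR_MINORS

    print(settings_for(15))              # autoplay and ad targeting disabled
    print(may_recommend("dieting", 15))  # False
    ```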

    Brussels is also working with tech companies, industry associations and children’s groups on rules for how to design platforms in a way that protects minors. The Code of Conduct on Age Appropriate Design planned for 2024 would then provide an explicit list of measures that the European Commission wants to see large social media companies carry out to comply with the new law.

    Yet the EU’s new content law won’t be the magic wand parents might be looking for. The content rulebook doesn’t apply to popular entertainment like online games, to messaging apps, or to digital devices themselves. 

    It also remains unclear how the European Commission will investigate and go after social media companies if it considers that they have failed to limit their platforms’ negative consequences for mental well-being. External auditors and researchers could face obstacles in wading through troves of data and lines of code to find smoking guns and challenge tech companies’ claims. 

    How far companies are willing to run up against their own business model in the service of their users’ mental health is also an open question, said John Albert, a policy expert at the tech-focused advocacy group AlgorithmWatch. Tech giants have made a serious effort at fighting the most egregious harms, like cyberbullying or content promoting eating disorders, Albert said. And the level of transparency made possible by the new rules is unprecedented.

    “But when it comes to much broader questions about mental health and how these algorithmic recommender systems interact with users and affect them over time… I don’t know what we should expect them to change,” he explained. The back-and-forth vetting process is likely going to be drawn out as the Commission comes to grips with the complex platforms.

    “In the short term, at least, I would expect some kind of business as usual.”

    [ad_2]

    Carlo Martuscelli and Clothilde Goujard

    Source link

  • From Napoléon to Macron: How France learned to love Big Brother

    From Napoléon to Macron: How France learned to love Big Brother

    [ad_1]

    PARIS — Liberté. Egalité. But mostly: sécurité.

    It all started with Napoléon Bonaparte. Over two centuries, France cobbled together a surveillance apparatus capable of intercepting private communications; keeping traffic and location data for up to a year; storing people’s fingerprints; and monitoring most of the territory with cameras.

    This system, which has faced pushback from digital rights organizations and United Nations experts, will get its spotlight moment at the 2024 Paris Summer Olympics. In July next year, France will deploy large-scale, real-time, algorithm-supported video surveillance cameras — a first in Europe. (Not included in the plan: facial recognition.) 

    Last month, the French parliament approved a controversial government plan to allow investigators to track suspected criminals in real-time via access to their devices’ geolocation, camera and microphone. Paris also lobbied in Brussels to be allowed to spy on reporters in the name of national security. 

    Helping France down the path of mass surveillance: a historically strong and centralized state; a powerful law enforcement community; political discourse increasingly focused on law and order; and the terrorist attacks of the 2010s. Under President Emmanuel Macron’s agenda of so-called strategic autonomy, French defense and security giants, as well as innovative tech startups, have also gotten a boost to help them compete globally with American, Israeli and Chinese companies. 

    “Whenever there’s a security issue, the first reflex is surveillance and repression. There’s no attempt in either words or deeds to address it with a more social angle,” said Alouette, an activist at French digital rights NGO La Quadrature du Net who uses a pseudonym to protect her identity. 

    As surveillance and security laws have piled up in recent decades, advocates have lined up on opposite sides. Supporters argue law enforcement and intelligence agencies need such powers to fight terrorism and crime. Algorithmic video surveillance would have prevented the 2016 Nice terror attack, claimed Sacha Houlié, a prominent lawmaker from Macron’s Renaissance party.

    Opponents point to the laws’ effect on civil liberties and fear France is morphing into a dystopian society. In June, the watchdog in charge of monitoring intelligence services said in a harsh report that French legislation is not compliant with the European Court of Human Rights’ case law, especially when it comes to intelligence-sharing between French and foreign agencies.

    “We’re in a polarized debate with good guys and bad guys, where if you oppose mass surveillance, you’re on the bad guys’ side,” said Estelle Massé, Europe legislative manager and global data protection lead at digital rights NGO Access Now. 

    A history of surveillance

    Both the 9/11 attacks and the 2015 Paris terror attacks accelerated mass surveillance in France, but the country’s tradition of snooping, monitoring and data collection dates way back — to Napoléon Bonaparte in the early 1800s. 

    “Historically, France has been at the forefront of these issues, in terms of police files and records. During the First Empire, France’s highly centralized government was determined to keep the entire territory under watch,” said Olivier Aïm, a lecturer at Sorbonne Université Celsa who authored a book on surveillance theories. Before electronic devices, paper was the main tool of control: identification documents were used to monitor people’s travels, he explained. 

    The French emperor revived the Paris Police Prefecture — which exists to this day — and tasked law enforcement with new powers to keep political opponents in check. 

    In the 1880s, Alphonse Bertillon devised a method of identifying suspects and criminals using biometric features | Peter Macdiarmid/Getty Images

    In the 1880s, Alphonse Bertillon, who worked for the Paris Police Prefecture, introduced a new way of identifying suspects and criminals using biometric features — the forerunner of facial recognition. The Bertillon method was later emulated across the world.

    Between 1870 and 1940, under the Third Republic, the police kept a massive file — dubbed the National Security’s Central File — with information about 600,000 people, including anarchists and communists, certain foreigners, criminals, and people who requested identification documents. 

    After World War II ended, a bruised France moved away from hard-line security discourse until the 1970s. And in the early days of the 21st century, the 9/11 attacks in the United States marked a turning point, ushering in a steady stream of controversial surveillance laws — under both left- and right-wing governments. In the name of national security, lawmakers started giving intelligence services and law enforcement unprecedented powers to snoop on citizens, with limited judiciary oversight. 

    “Surveillance covers a history of security, a history of the police, a history of intelligence,” Aïm said. “Security issues have intensified with the fight against terrorism, the organization of major events and globalization.” 

    The rise of technology

    In the 1970s, before the era of omnipresent smartphones, French public opinion initially pushed back against using technology to monitor citizens.

    In 1974, as ministries started using computers, Le Monde revealed a plan to merge all citizens’ files into a single computerized database, a project known as SAFARI.

    The project, abandoned amid the resulting scandal, led lawmakers to adopt robust data protection legislation — creating the country’s privacy regulator CNIL. France then became one of the few European countries with rules to protect civil liberties in the computer age. 

    However, the mass spread of technology — and more specifically video surveillance cameras in the 1990s — allowed politicians and local officials to come up with new, alluring promises: security in exchange for surveillance tech. 

    In 2020, there were about 90,000 video surveillance cameras operated by the police and the gendarmerie in France. The state helps local officials finance them via a dedicated public fund. After France’s violent riots in early July — which also saw Macron float social media bans during periods of unrest — Interior Minister Gérald Darmanin announced he would swiftly allocate €20 million to repair broken video surveillance devices. 

    In parallel, the rise of tech giants such as Google, Facebook and Apple in everyday life has led to so-called surveillance capitalism. And for French policymakers, U.S. tech giants’ data collection has over the years become an argument to explain why the state, too, should be allowed to gather people’s personal information. 

    “We give Californian startups our fingerprints, face identification, or access to our privacy from our living room via connected speakers, and we would refuse to let the state protect us in the public space?” Senator Stéphane Le Rudulier from the conservative Les Républicains said in June to justify the use of facial recognition on the street. 

    Strong state, strong statesmen

    Resistance to mass surveillance does exist in France at the local level — especially against the development of so-called safe cities. Digital rights NGOs can boast a few wins: In the south of France, La Quadrature du Net scored a victory in an administrative court, blocking plans to test facial recognition in high schools. 

    Some grassroots movements have opposed surveillance schemes at the local level, but the nationwide legislative push has continued | Ludovic Marin/AFP via Getty Images

    At the national level, however, security laws have proved too powerful a force to stop, despite a few ongoing cases before the European Court of Human Rights. France has, for example, de facto ignored multiple rulings from the EU’s top court deeming mass data retention illegal. 

    Often at the center of France’s push for more state surveillance: the interior minister. This influential office, whose constituency includes the law enforcement and intelligence community, is described as a “stepping stone” toward the premiership — or even the presidency. 

    “Interior ministers are often powerful, well-known and hyper-present in the media. Each new minister pushes for new reforms, new powers, leading to the construction of a never-ending security tower,” said Access Now’s Massé.

    Under Socialist President François Hollande, Manuel Valls and Bernard Cazeneuve both went from interior minister to prime minister, in 2014 and 2016 respectively. Nicolas Sarkozy, Jacques Chirac’s interior minister from 2005 to 2007, was then elected president. All shepherded new surveillance laws during their tenures.

    In the past year, Darmanin has been instrumental in pushing for the use of police drones, even going against the CNIL.

    For politicians, even at the local level, there is little to gain electorally by arguing against expanded snooping and the monitoring of public space. “Many on the left, especially in complicated cities, feel obliged to go along, fearing accusations of being soft [on crime],” said Noémie Levain, a legal and political analyst at La Quadrature du Net. “The political cost of reversing a security law is too high,” she added.

    It’s also the case that there’s often little pushback from the public. In March, on the same day a handful of French MPs voted to allow AI-powered video surveillance cameras at the 2024 Paris Olympics, about 1 million people took to the streets to protest against … Macron’s pension reform. 

    Sovereign cameras

    For politicians, France’s industrial competitiveness is also at stake. The country is home to defense giants that dabble in both the military and civilian sectors, such as Thalès and Safran. Meanwhile, Idemia specializes in biometrics and identification. 

    “What’s accelerating legislation is also a global industrial and geopolitical context: Surveillance technologies are a Trojan horse for artificial intelligence,” said Caroline Lequesne Rot, an associate professor at the Côte d’Azur University, adding that French policymakers are worried about foreign rivals. “Europe is caught between the stranglehold of China and the U.S. The idea is to give our companies access to markets and allow them to train.”

    In 2019, then-Digital Minister Cédric O told Le Monde that experimenting with facial recognition was needed to allow French companies to improve their technology. 

    France’s surveillance apparatus will be on full display at the 2024 Olympic Games | Patrick Kovarik/AFP via Getty Images

    For the video surveillance industry — which made €1.6 billion in France in 2020 — the 2024 Paris Olympics will be a golden opportunity to test their products and services and showcase what they can do in terms of AI-powered surveillance. 

    XXII, an AI startup with funding from the armed forces ministry and at least some political backing, has already hinted it would be ready to secure the mega sports event. 

    “If we don’t encourage the development of French and European solutions, we run the risk of later becoming dependent on software developed by foreign powers,” wrote lawmakers Philippe Latombe, from Macron’s allied party Modem, and Philippe Gosselin, from Les Républicains, in a parliamentary report on video surveillance released in April.

    “When it comes to artificial intelligence, losing control means undermining our sovereignty,” they added.

    [ad_2]

    Laura Kayali

    Source link

  • As France burns, Macron blames social media for fanning the flames

    As France burns, Macron blames social media for fanning the flames

    [ad_1]

    PARIS — French rioters have set the country on fire and Emmanuel Macron is pointing the finger at TikTok and Snapchat for pouring gasoline on the inferno.

    In the past three days, violent protests erupted across France after a police officer in a Paris suburb shot and killed 17-year-old Nahel M., who was of North African descent. Rioters targeted public buildings, transport systems and shops with projectiles and Molotov cocktails, leaving 249 members of law enforcement injured and 875 people arrested. 

    Unlike the deadly outbreak of violence in 2005, the turmoil — which has led to public transportation shutdowns, concert cancelations and armored vehicles being deployed across the country — can be documented in real time, shared online and seen by tens of thousands on social media platforms such as TikTok, Snapchat and Twitter. 

    That online phenomenon is worrying France’s political leaders, who have been scurrying to find solutions as the unrest shows no sign of fizzling out.

    “We’ve seen violent gatherings organized on several [social media platforms] — but also a kind of mimicry of violence,” French President Emmanuel Macron said Friday after a government crisis meeting. He accused younger rioters of exiting reality and “living the video games that have intoxicated them.”

    The French president wants tech companies to delete violent content and provide law enforcement with the identity of protesters who use social media to stoke — and exacerbate — the disorder. “I expect these platforms to be responsible,” he said. 

    According to research by France’s most-watched news channel BFM, TikTok and Snapchat were flooded Friday morning with videos from the rioting and looting across France. On TikTok, hashtags linked to the riots were pushed by the platform’s algorithm. Police officials also told BFM some protesters coordinate and communicate in real time through messaging services on WhatsApp and Telegram via online tools that did not exist in 2005, when riots left hundreds of public buildings damaged and thousands of cars burned.

    The government is scheduled to meet with social media platforms Friday evening, when company executives will be pressed to cooperate.

    Some, however, say social media platforms are unfairly blamed by grandstanding politicians who should focus their attention elsewhere.

    On Friday, the U.N.’s human rights office weighed in, saying France needs to address “issues of racism and discrimination in law enforcement,” referring to the killing of the teenager.

    Tech has long been used to coordinate demonstrations and protests, political communications expert Philippe Moreau Chevrolet told POLITICO, adding that the government would be “terribly out of touch” to respond to the crisis by focusing on tech companies and video games.

    “Text messages used to be accused [of facilitating riots], now it’s social networks. Yellow Vests protests were blamed on Facebook,” Moreau Chevrolet said.

    Two sides of the coin

    But the role of online platforms goes beyond showcasing fires and looting, and helping rioters get organized. This week’s violent unrest began with a video that was, of course, posted on social media.

    “There’s clearly been a change, with more and more people adopting the reflex of filming the police. Above all, the activists’ community is now able to quickly and widely circulate the videos,” said Magda Boutros, a sociology scholar at the University of Washington who studied activism against police violence in France.

    When a police officer shot and killed Nahel M. (the name by which he has been identified publicly) on Tuesday, media reports originally relied on law enforcement sources claiming a driver threatened the police officer’s life. But a video, filmed by a bystander and posted on Twitter, showed a different story: Two cops stood next to a car and one shot the driver at close range.

    Another recent incident (crucially, not filmed) showed, by contrast, the power of social media both to hold violent police officers accountable and to set a country on fire — or not.

    Two weeks ago, a teenager died in circumstances similar to Nahel M.’s in the Charente region of western France. The young man was reportedly shot dead by a police officer after refusing to comply with orders.

    That went relatively unnoticed, explained former French MP Thomas Mesnier, because Charente is in a more remote area compared to the dense banlieues of the French capital.

    It also went unnoticed, Mesnier said, because “there was no video that went viral on social networks, participating in and reinforcing people’s emotions and sense of dread.”

    Elisa Bertholomey contributed reporting.

    [ad_2]

    POLITICO Europe

    Source link

  • EU to Zuckerberg: Explain yourself over Instagram pedophile network

    EU to Zuckerberg: Explain yourself over Instagram pedophile network

    [ad_1]

    EU Internal Market Commissioner Thierry Breton wants Meta CEO Mark Zuckerberg to explain and take “immediate” action over a recently exposed large pedophile network on Instagram.

    Instagram has been letting a vast network of accounts promoting and purchasing child sexual abuse material flourish on its platform, according to investigations by the Wall Street Journal and researchers released on June 7. The social media platform lets users search for explicit hashtags, and offenders have exploited its recommendation algorithms to promote illicit content.

    “Meta’s voluntary code on child protection seems not to work,” Breton wrote Thursday on Twitter. “Mark Zuckerberg must now explain & take immediate action.”

    Breton said he will discuss the issue with Zuckerberg at the Meta headquarters on June 23 during a trip to the U.S. The politician will travel later this month to see how social media companies including Twitter are preparing to comply with the EU’s flagship content moderation law, the Digital Services Act (DSA).

    He said Meta will have to “demonstrate measures” to the European Commission after August 25, when the DSA starts applying to Big Tech platforms. Otherwise, the company could face sweeping fines of up to 6 percent of its global annual revenue. Under the DSA, platforms have to crack down on illegal content and ensure children are safe on their services. Companies also have to assess and limit how their platforms and algorithms contribute to major societal problems such as the dissemination of illegal content and risks to minors.

    A Meta spokesperson said the company has set up an internal task force to investigate and “immediately address” the recent findings from the Wall Street Journal and researchers.

    The company works “aggressively to fight” child exploitation and supports law enforcement in tracking down criminals, the spokesperson said. Meta dismantled 27 “abusive networks” between 2020 and 2022, and in January 2023 alone disabled over 490,000 accounts for violating its child safety policies, they added.

    [ad_2]

    Clothilde Goujard

    Source link

  • What the hell is wrong with TikTok? 

    What the hell is wrong with TikTok? 

    [ad_1]

    Western governments are ticked off with TikTok. The Chinese-owned app loved by teenagers around the world is facing allegations of facilitating espionage, failing to protect personal data, and even of corrupting young minds.

    Governments in the United States, United Kingdom, Canada, New Zealand and across Europe have moved to ban the use of TikTok on officials’ phones in recent months. If hawks get their way, the app could face further restrictions. The White House has demanded that ByteDance, TikTok’s Chinese parent company, sell the app or face an outright ban in the U.S.

    But do the allegations stack up? Security officials have given few details about why they are moving against TikTok. That may be due to sensitivity around matters of national security, or it may simply indicate that there’s not much substance behind the bluster.

    TikTok’s Chief Executive Officer Shou Zi Chew will be questioned in the U.S. Congress on Thursday and can expect politicians from all sides of the spectrum to probe him on TikTok’s dangers. Here are some of the themes they may pick up on: 

    1. Chinese access to TikTok data

    Perhaps the most pressing concern is around the Chinese government’s potential access to troves of data from TikTok’s millions of users. 

    Western security officials have warned that ByteDance could be subject to China’s national security legislation, particularly the 2017 National Intelligence Law, which requires Chinese companies to “support, assist and cooperate” with national intelligence efforts. This law is a blank check for Chinese spy agencies, they say.

    TikTok’s user data could also be accessed by the company’s hundreds of Chinese engineers and operations staff, any one of whom could be working for the state, Western officials say. In December 2022, some ByteDance employees in China and the U.S. used the app’s data to track journalists at Western media outlets (and were later fired). 

    EU institutions banned their staff from having TikTok on their work phones last month. An internal email sent to staff of the European Data Protection Supervisor, seen by POLITICO, said the move aimed “to reduce the exposure of the Commission from cyberattacks because this application is collecting so much data on mobile devices that could be used to stage an attack on the Commission.” 

    And the Irish Data Protection Commission, TikTok’s lead privacy regulator in the EU, is set to decide in the next few months if the company unlawfully transferred European users’ data to China. 

    Skeptics of the security argument say that the Chinese government could simply buy troves of user data from little-regulated brokers. American social media companies like Twitter have had their own problems preserving users’ data from the prying eyes of foreign governments, they note. 

    TikTok says it has never given data to the Chinese government and would decline if asked to do so. Strictly speaking, ByteDance is incorporated in the Cayman Islands, which TikTok argues would shield it from legal obligations to assist Chinese agencies. ByteDance is owned 20 percent by its founders and Chinese investors, 60 percent by global investors, and 20 percent by employees. 

    There’s little hope to completely stop European data from going to China | Alex Plavevski/EPA

    The company has unveiled two separate plans to safeguard data. In the U.S., Project Texas is a $1.5 billion plan to build a wall between the U.S. subsidiary and its Chinese owners. The €1.2 billion European version, named Project Clover, would move most of TikTok’s European data onto servers in Europe.

    Nevertheless, TikTok’s chief European lobbyist Theo Bertram also said in March that it would be “practically extremely difficult” to completely stop European data from going to China.

    2. A way in for Chinese spies

    If Chinese agencies can’t access TikTok’s data legally, they can just go in through the back door, Western officials allege. China’s cyber-spies are among the best in the world, and their job will be made easier if datasets or digital infrastructure are housed in their home territory.

    Dutch intelligence agencies have advised government officials to uninstall apps from countries waging an “offensive cyber program” against the Netherlands — including China, but also Russia, Iran and North Korea.

    Critics of the cyber espionage argument refer to a 2021 study by the University of Toronto’s Citizen Lab, which found that the app did not exhibit the “overtly malicious behavior” that would be expected of spyware. Still, the director of the lab said researchers lacked information on what happens to TikTok data held in China.

    TikTok’s Project Texas and Project Clover include steps to assuage fears of cyber espionage, as well as legal data access. The EU plan would give a European security provider (still to be determined) the power to audit cybersecurity policies and data controls, and to restrict access to some employees. Bertram said this provider could speak with European security agencies and regulators “without us [TikTok] being involved, to give confidence that there’s nothing to hide.” 

    Bertram also said the company was looking to hire more engineers outside China. 

    3. Privacy rights

    Critics of TikTok have accused the app of mass data collection, particularly in the U.S., where there are no general federal privacy rights for citizens.

    In jurisdictions that do have strict privacy laws, TikTok faces widespread allegations of failing to comply with them.

    The company is being investigated in Ireland, the U.K. and Canada over its handling of underage users’ data. Watchdogs in the Netherlands, Italy and France have also investigated its privacy practices around personalized advertising and for failing to limit children’s access to its platform. 

    TikTok has denied accusations leveled in some of the reports and argued that U.S. tech companies collect similarly large amounts of data. Meta, Amazon and others have also been handed large fines for violating Europeans’ privacy.

    4. Psychological operations

    Perhaps the most serious accusation, and certainly the most legally novel one, is that TikTok is part of an all-encompassing Chinese civilizational struggle against the West. Its role: to spread disinformation and stultifying content in young Western minds, sowing division and apathy.

    Earlier this month, the director of the U.S. National Security Agency warned that Chinese control of TikTok’s algorithm could allow the government to carry out influence operations among Western populations. TikTok says it has around 300 million active users in Europe and the U.S. The app ranked as the most downloaded in 2022.

    A woman watches a video of Egyptian influencer Haneen Hossam | Khaled Desouki/AFP via Getty Images

    Reports emerged in 2019 suggesting that TikTok was censoring pro-LGBTQ content and videos mentioning Tiananmen Square. ByteDance has also been accused of pushing inane time-wasting videos to Western children, in contrast to the wholesome educational content served on its Chinese app Douyin.

    Besides accusations of deliberate “influence operations,” TikTok has also been criticized for failing to protect children from addiction to its app, dangerous viral challenges, and disinformation. The French regulator said last week that the app was still in the “very early stages” of content moderation. TikTok’s Italian headquarters was raided this week by the consumer protection regulator with the help of Italian law enforcement to investigate how the company protects children from viral challenges.

    Researchers at Citizen Lab said that TikTok doesn’t enforce obvious censorship. Other critics of this argument have pointed out that Western-owned platforms have also been manipulated by foreign countries, such as Russia’s campaign on Facebook to influence the 2016 U.S. elections. 

    TikTok says it has adapted its content moderation since 2019 and regularly releases transparency reports about what it removes. The company has also touted a “transparency center” that opened in the U.S. in July 2020, and another in Ireland in 2022. It has also said it will comply with the EU’s new content moderation rules, the Digital Services Act, which will require platforms to give regulators and researchers access to their algorithms and data.

    Additional reporting by Laura Kayali in Paris, Sue Allan in Ottawa, Brendan Bordelon in Washington, D.C., and Josh Sisco in San Francisco.

    [ad_2]

    Clothilde Goujard

    Source link

  • French surveillance system for Olympics moves forward, despite civil rights campaign

    French surveillance system for Olympics moves forward, despite civil rights campaign

    [ad_1]

    PARIS — A controversial video surveillance system cleared a legislative hurdle Wednesday to be used during the 2024 Paris Summer Olympics amid opposition from left-leaning French politicians and digital rights NGOs, who argue it infringes upon privacy standards.

    The National Assembly’s law committee approved the system, but also voted to shorten the temporary program’s duration to December 24, 2024, instead of June 2025. 

    The plan pitched by the French government includes experimental large-scale, real-time camera systems supported by an algorithm to spot suspicious behavior, including unsupervised luggage and alarming crowd movements like stampedes.  

    Earlier this week, civil society groups in France and beyond — including La Quadrature du Net, Access Now and Amnesty International — penned an op-ed in Le Monde raising concerns about what they argued was a “worrying precedent” that France could set in the EU. 

    There’s a risk that the measures, pitched as temporary, could become permanent, and they likely would not comply with the EU’s Artificial Intelligence Act, the groups also argue. 

    About 90 left-leaning lawmakers signed a petition initiated by La Quadrature du Net to scrap Article 7, which includes the AI-powered surveillance system. They failed, however, to gather enough votes to have it deleted from the bill. 

    Lawmakers also voted to ensure the general public is better informed of where the cameras are and to involve the cybersecurity agency ANSSI on top of the privacy regulator CNIL. They also widened the pool of images and data that can be used to train the algorithms ahead of the Olympics.

    The bill will go to a full plenary vote on March 21 for final approval.

    [ad_2]

    Laura Kayali

    Source link

  • France plots surveillance power grab for Paris 2024 Olympics

    France plots surveillance power grab for Paris 2024 Olympics

    [ad_1]

    PARIS — France is seeking to massively expand its arsenal of surveillance powers and tools to secure the millions of tourists expected for the 2024 Paris Summer Olympics.

    Among the plans are large-scale, real-time camera systems supported by an algorithm to spot suspicious behavior, including unsupervised luggage and alarming crowd movements like stampedes. Senators on Wednesday will vote on a law introducing the new powers, which are supposed to be temporary, with some lawmakers pushing to allow controversial facial-recognition technology.
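
    Systems of this kind typically layer simple rules on top of an object tracker’s output. A hypothetical sketch of the unsupervised-luggage rule (the thresholds and data shapes below are invented for illustration; production systems are far more involved):

    ```python
    import math

    # Invented thresholds for illustration.
    OWNER_RADIUS_M = 2.0   # how close a person must be for a bag to count as attended
    ABANDON_SECONDS = 60   # how long unattended before an alert
    FPS = 25               # camera frame rate

    def is_attended(bag_xy, people_xy):
        """A bag counts as attended if any detected person is within the radius."""
        return any(math.dist(bag_xy, p) <= OWNER_RADIUS_M for p in people_xy)

    def abandoned_luggage_alerts(frames):
        """frames: iterable of (bag_id, bag_xy, [person_xy, ...]), one per video
        frame, as emitted by an upstream tracker (not shown). Yields the id of
        any bag left unattended longer than the threshold, once per streak."""
        streaks = {}
        for bag_id, bag_xy, people_xy in frames:
            if is_attended(bag_xy, people_xy):
                streaks[bag_id] = 0
            else:
                streaks[bag_id] = streaks.get(bag_id, 0) + 1
                if streaks[bag_id] == ABANDON_SECONDS * FPS:
                    yield bag_id  # raise an operator alert
    ```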

    The stakes are high: The government badly wants to avoid “failures” like the ones that dented its reputation during the Champions League final last summer, and the trauma of the 2015 Paris terror attacks still looms large over the country.

    But the plans are already causing an uproar among privacy campaigners. “The Olympic Games are used as a pretext to pass measures the [security technology] industry has long been waiting for,” said Bastien Le Querrec from digital rights NGO La Quadrature du Net, who’s leading a campaign against algorithmic video surveillance.

    The French government already backtracked on deploying facial recognition after lawmakers within President Emmanuel Macron’s majority party raised concerns. It was also forced by the country’s data protection authority and top administrative court to build in more privacy safeguards.

    For now, the law would allow for “experimentation” with the surveillance systems, and the trial is supposed to end in June 2025 — 10 months after the sports competition wraps up.

    Critics, however, fear the law will lead to unwanted surveillance in the long term.

    One key question is what will happen to the AI-powered devices once the Olympic Games are over, especially since the legislation mentions not only sports events but also “festive” and “cultural” gatherings. In the past, Le Querrec warned, security measures initially designed to be temporary — for example, under the state of emergency that followed the 2015 attacks — ended up becoming permanent.

    Whether the tech survives the Olympics will depend on how the final law is written, according to Francisco Klauser, a professor at the University of Neuchâtel, who has written about surveillance and sporting events. 

    “In the history of mega-events, there is always a legacy,” he said. Countries staging major events are under “extraordinary circumstances and time pressure” that often mean systems get deployed that otherwise “would have been debated much more heavily,” he added.

    Case in point: IBM helped Rio de Janeiro install a “control room” ahead of the 2016 Olympics, and the tech is still operational to this day, Klauser said.

    For the 2024 Olympics, France already has the cameras but will need to buy the software to analyze footage, an official from the interior ministry told POLITICO.

    MP Philippe Latombe said that French companies such as Atos, Idemia, XXII and Datakalab would be able to provide certain software items | Joel Saget/AFP via Getty Images

    Philippe Latombe, an MP from the centrist Macron-allied party Modem, said that French companies such as Atos, Idemia, XXII and Datakalab, among others, would be able to provide such tech. The lawmaker is co-chairing a fact-finding mission on video surveillance in public spaces.

    After the Senate votes on the law to allow “experimentations” with the surveillance systems, the legislation will go to the National Assembly, and lawmakers in both chambers are expected to fight over the balance between privacy and security.

    Time is already running out, Latombe warned, as algorithms will need to be trained on datasets for months before the Olympics kick off.

    Elisa Braun contributed reporting.

    [ad_2]

    Laura Kayali

    Source link