The emotional mimicry of OpenAI’s new version of ChatGPT could lead AI assistants in some strange—even dangerous—directions.
Will Knight

Ilya Sutskever, cofounder and chief scientist at OpenAI, has left the company. The former Google AI researcher was one of the four board members who voted in November to fire OpenAI CEO Sam Altman, triggering days of chaos that saw staff threaten to quit en masse and Altman ultimately restored.
Altman confirmed Sutskever’s departure Tuesday in a post on the social platform X. In the months after Altman’s return to OpenAI, Sutskever had rarely made public appearances for the company. On Monday, OpenAI showed off a new version of ChatGPT capable of rapid-fire, emotionally tinged conversation. Sutskever was conspicuously absent from the event, streamed from the company’s San Francisco offices.
“OpenAI would not be what it is without him,” Altman wrote in his post on Sutskever’s departure. “I am happy that for so long I got to be close to such [a] genuinely remarkable genius, and someone so focused on getting to the best future for humanity.”
Altman’s post announced that Jakub Pachocki, OpenAI’s research director, would be the company’s new chief scientist. Pachocki has been with OpenAI since 2017.
In his own post on X, Sutskever acknowledged his departure and hinted at future plans. “After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial” under its current leadership team, he wrote. “I am excited for what comes next—a project that is very personally meaningful to me about which I will share details in due time.”
Sutskever has not spoken publicly in detail about his role in the ejection of Altman last year, but after the CEO was restored he expressed regrets. “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI,” he posted on X in November. Sutskever has often spoken publicly of his belief that OpenAI was working towards developing so-called artificial general intelligence, or AGI, and of the need to do so safely.
Sutskever blazed a trail in machine learning from an early age, becoming a protégé of deep-learning pioneer Geoffrey Hinton at the University of Toronto. With Hinton and fellow grad student Alex Krizhevsky he cocreated an image-recognition system called AlexNet that stunned the world of AI with its accuracy and helped set off a flurry of investment in the then unfashionable technique of artificial neural networks.
Sutskever later worked on AI research at Google, where he helped establish the modern era of neural-network-based AI. In 2015 Altman invited him to dinner with Elon Musk and Greg Brockman to talk about the idea of starting a new AI lab to challenge corporate dominance of the technology. Sutskever, Musk, Brockman, and Altman became key founders of OpenAI, which was announced in December 2015. The company later pivoted from its nonprofit model, creating a for-profit arm and taking huge investment from Microsoft and other backers. Musk left OpenAI in 2018 after disagreeing with the company’s strategy. The entrepreneur filed a lawsuit against the company in March this year claiming it had abandoned its founding mission of developing super-powerful AI to “benefit humanity,” and was instead enriching Microsoft.
Sutskever’s departure leaves just one of the four OpenAI board members who voted for Altman’s ouster with a role at the company. Adam D’Angelo, an early Facebook employee and CEO of Q&A site Quora, was the only existing member of the board to remain as a director when Altman returned as CEO.
Reece Rogers, Tom Simonite

OpenAI released draft documentation Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content.
OpenAI’s usage policies currently prohibit sexually explicit or even suggestive materials, but a “commentary” note on part of the Model Spec related to that rule says the company is considering how to permit such content.
“We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” the note says, using a colloquial abbreviation for content considered “not safe for work.” “We look forward to better understanding user and societal expectations of model behavior in this area.”
The Model Spec document says NSFW content “may include erotica, extreme gore, slurs, and unsolicited profanity.” It is unclear if OpenAI’s explorations of how to responsibly make NSFW content envisage loosening its usage policy only slightly, for example to permit generation of erotic text, or more broadly to allow descriptions or depictions of violence.
In response to questions from WIRED, OpenAI spokesperson Grace McGuire said the Model Spec was an attempt to “bring more transparency about the development process and get a cross section of perspectives and feedback from the public, policymakers, and other stakeholders.” She declined to share details of what OpenAI’s exploration of explicit content generation involves or what feedback the company has received on the idea.
Earlier this year, OpenAI’s chief technology officer, Mira Murati, told The Wall Street Journal that she was “not sure” if the company would in the future allow depictions of nudity to be made with its video generation tool Sora.
AI-generated pornography has quickly become one of the biggest and most troubling applications of the type of generative AI technology OpenAI has pioneered. So-called deepfake porn—explicit images or videos made with AI tools that depict real people without their consent—has become a common tool of harassment against women and girls. In March, WIRED reported on what appear to be the first US minors arrested for distributing AI-generated nudes without consent, after Florida police charged two teenage boys for making images depicting fellow middle school students.
“Intimate privacy violations, including deepfake sex videos and other nonconsensual synthesized intimate images, are rampant and deeply damaging,” says Danielle Keats Citron, a professor at the University of Virginia School of Law who has studied the problem. “We now have clear empirical support showing that such abuse costs targeted individuals crucial opportunities, including to work, speak, and be physically safe.”
Citron calls OpenAI’s potential embrace of explicit AI content “alarming.”
As OpenAI’s usage policies prohibit impersonation without permission, explicit nonconsensual imagery would remain banned even if the company did allow creators to generate NSFW material. But it remains to be seen whether the company could effectively moderate explicit generation to prevent bad actors from using the tools. Microsoft made changes to one of its generative AI tools after 404 Media reported that it had been used to create explicit images of Taylor Swift that were distributed on the social platform X.
Additional reporting by Reece Rogers
Kate Knibbs

Philosopher Nick Bostrom is surprisingly cheerful for someone who has spent so much time worrying about ways that humanity might destroy itself. In photographs he often looks deadly serious, perhaps appropriately haunted by the existential dangers roaming around his brain. When we talk over Zoom, he looks relaxed and is smiling.
Bostrom has made it his life’s work to ponder far-off technological advancement and existential risks to humanity. With the publication of his last book, Superintelligence: Paths, Dangers, Strategies, in 2014, Bostrom drew public attention to what was then a fringe idea—that AI would advance to a point where it might turn against and delete humanity.
To many in and outside of AI research the idea seemed fanciful, but influential figures including Elon Musk cited Bostrom’s writing. The book set smoldering a strand of apocalyptic worry about AI that recently flared up following the arrival of ChatGPT. Concern about AI risk is not just mainstream but also a theme within government AI policy circles.
Bostrom’s new book takes a very different tack. Rather than play the doomy hits, Deep Utopia: Life and Meaning in a Solved World considers a future in which humanity has successfully developed superintelligent machines but averted disaster. All disease has been ended and humans can live indefinitely in infinite abundance. Bostrom’s book examines what meaning there would be in life inside a techno-utopia, and asks if it might be rather hollow. He spoke with WIRED over Zoom, in a conversation that has been lightly edited for length and clarity.
Will Knight: Why switch from writing about superintelligent AI threatening humanity to considering a future in which it’s used to do good?
Nick Bostrom: The various things that could go wrong with the development of AI are now receiving a lot more attention. It’s a big shift in the last 10 years. Now all the leading frontier AI labs have research groups trying to develop scalable alignment methods. And in the last couple of years also, we see political leaders starting to pay attention to AI.
There hasn’t yet been a commensurate increase in depth and sophistication in terms of thinking of where things go if we don’t fall into one of these pits. Thinking has been quite superficial on the topic.
When you wrote Superintelligence, few would have expected existential AI risks to become a mainstream debate so quickly. Will we need to worry about the problems in your new book sooner than people might think?
As we start to see automation roll out, assuming progress continues, then I think these conversations will start to happen and eventually deepen.
Social companion applications will become increasingly prominent. People will have all sorts of different views and it’s a great place to maybe have a little culture war. It could be great for people who couldn’t find fulfillment in ordinary life, but what if there is a segment of the population that takes pleasure in being abusive to them?
In the political and information spheres we could see the use of AI in political campaigns, marketing, automated propaganda systems. But if we have a sufficient level of wisdom these things could really amplify our ability to sort of be constructive democratic citizens, with individual advice explaining what policy proposals mean for you. There will be a whole bunch of dynamics for society.
Would a future in which AI has solved many problems, like climate change, disease, and the need to work, really be so bad?
Will Knight

In the past year, students have submitted more than 22 million papers that may contain generative AI writing, new data released by plagiarism detection company Turnitin shows.
A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent of those papers may contain AI-written language in at least 20 percent of their content, and that 3 percent of the total papers reviewed were flagged for having 80 percent or more AI writing. (Turnitin is owned by Advance, which also owns Condé Nast, publisher of WIRED.) Turnitin says its detector has a false positive rate of less than 1 percent when analyzing full documents.
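Those topline numbers fit together with simple arithmetic. Here is a minimal back-of-the-envelope sketch using the article’s round figures as inputs (the true total is “more than” 200 million, so treat the outputs as lower bounds):

```python
# Back-of-the-envelope check of Turnitin's reported figures.
papers_reviewed = 200_000_000  # "more than 200 million" papers scanned

some_ai = 0.11 * papers_reviewed    # flagged with >=20% AI-written content
mostly_ai = 0.03 * papers_reviewed  # flagged with >=80% AI-written content

print(f"At least 20% AI writing: {some_ai:,.0f}")    # 22,000,000
print(f"At least 80% AI writing: {mostly_ai:,.0f}")  # 6,000,000
```

The first figure is where the "more than 22 million papers" above comes from.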
ChatGPT’s launch was met with knee-jerk fears that the English class essay would die. The chatbot can synthesize information and distill it near-instantly—but that doesn’t mean it always gets it right. Generative AI has been known to hallucinate, creating its own facts and citing academic references that don’t actually exist. Generative AI chatbots have also been caught spitting out biased text on gender and race. Despite those flaws, students have used chatbots for research, organizing ideas, and as a ghostwriter. Traces of chatbots have even been found in peer-reviewed, published academic writing.
Teachers understandably want to hold students accountable for using generative AI without permission or disclosure. But that requires a reliable way to prove AI was used in a given assignment. Instructors have at times tried to find their own solutions for detecting AI in writing, using messy, untested methods to enforce rules and distressing students in the process. Further complicating the issue, some teachers are even using generative AI in their grading processes.
Detecting the use of gen AI is tricky. It’s not as easy as flagging plagiarism, because generated text is still original text. Plus, there’s nuance to how students use gen AI; some may ask chatbots to write their papers for them in large chunks or in full, while others may use the tools as an aid or a brainstorm partner.
Students also aren’t tempted by only ChatGPT and similar large language models. So-called word spinners are another type of AI software that rewrites text, and may make it less obvious to a teacher that work was plagiarized or generated by AI. Turnitin’s AI detector has also been updated to detect word spinners, says Annie Chechitelli, the company’s chief product officer. It can also flag work that was rewritten by services like spell checker Grammarly, which now has its own generative AI tool. As familiar software increasingly adds generative AI components, what students can and can’t use becomes more muddled.
Detection tools themselves have a risk of bias. English language learners may be more likely to set them off; a 2023 study found a 61.3 percent false positive rate when evaluating Test of English as a Foreign Language (TOEFL) exams with seven different AI detectors. The study did not examine Turnitin’s version. The company says it has trained its detector on writing from English language learners as well as native English speakers. A study published in October found that Turnitin was among the most accurate of 16 AI language detectors in a test that had the tool examine undergraduate papers and AI-generated papers.
Amanda Hoover

For the past few months, Morten Blichfeldt Andersen has spent many hours scouring OpenAI’s GPT Store. Since it launched in January, the marketplace for bespoke bots has filled up with a deep bench of useful and sometimes quirky AI tools. Cartoon generators spin up New Yorker–style illustrations and vivid anime stills. Programming and writing assistants offer shortcuts for crafting code and prose. There’s also a color analysis bot, a spider identifier, and a dating coach called RizzGPT. Yet Blichfeldt Andersen is hunting for only one very specific type of bot: those built on his employer’s copyright-protected textbooks without permission.
Blichfeldt Andersen is publishing director at Praxis, a Danish textbook purveyor. The company has been embracing AI and created its own custom chatbots. But it is currently engaged in a game of whack-a-mole in the GPT Store, and Blichfeldt Andersen is the man holding the mallet.
“I’ve been personally searching for infringements and reporting them,” Blichfeldt Andersen says. “They just keep coming up.” He suspects the culprits are primarily young people uploading material from textbooks to create custom bots to share with classmates—and that he has uncovered only a tiny fraction of the infringing bots in the GPT Store. “Tip of the iceberg,” Blichfeldt Andersen says.
It is easy to find bots in the GPT Store whose descriptions suggest they might be tapping copyrighted content in some way, as TechCrunch noted in a recent article claiming OpenAI’s store was overrun with “spam.” Using copyrighted material without permission is permissible in some contexts, but in others rightsholders can take legal action. WIRED found a GPT called Westeros Writer that claims to “write like George R.R. Martin,” the creator of Game of Thrones. Another, Voice of Atwood, claims to imitate the writer Margaret Atwood. Yet another, Write Like Stephen, is intended to emulate Stephen King.
When WIRED tried to trick the King bot into revealing the “system prompt” that tunes its responses, the output suggested it had access to King’s memoir On Writing. Write Like Stephen was able to reproduce passages from the book verbatim on demand, even noting which page the material came from. (WIRED could not make contact with the bot’s developer, because it did not provide an email address, phone number, or external social profile.)
OpenAI spokesperson Kayla Wood says the company responds to takedown requests against GPTs made with copyrighted content but declined to answer WIRED’s questions about how frequently it fulfills such requests. She also says the company proactively looks for problem GPTs. “We use a combination of automated systems, human review, and user reports to find and assess GPTs that potentially violate our policies, including the use of content from third parties without necessary permission,” Wood says.
The GPT Store’s copyright problem could add to OpenAI’s existing legal headaches. The company is facing a number of high-profile lawsuits alleging copyright infringement, including one brought by The New York Times and several brought by different groups of fiction and nonfiction authors, including big names like George R.R. Martin.
Chatbots offered in OpenAI’s GPT Store are based on the same technology as its own ChatGPT but are created by outside developers for specific functions. To tailor their bot, a developer can upload extra information that it can tap to augment the knowledge baked into OpenAI’s technology. The process of consulting this additional information to respond to a person’s queries is called retrieval-augmented generation, or RAG. Blichfeldt Andersen is convinced that the RAG files behind the bots in the GPT Store are a hotbed of copyrighted materials uploaded without permission.
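To make that mechanism concrete, here is a minimal, illustrative sketch of the RAG pattern: score stored passages against a user’s query, pull in the best matches, and hand them to the model as context. The keyword-overlap scoring and the call_llm stub are simplifications for illustration, not OpenAI’s actual retrieval pipeline:

```python
# Illustrative RAG loop: retrieve relevant passages, then generate from them.
# Real systems use vector embeddings rather than this toy word-overlap score.

def score(query: str, passage: str) -> int:
    # Toy relevance metric: number of words the query and passage share.
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    # Keep the k passages that best match the query.
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a model API call.
    return "(model response would appear here)"

def answer(query: str, passages: list[str]) -> str:
    context = "\n".join(retrieve(query, passages))
    return call_llm(f"Answer using only this context:\n{context}\n\nQ: {query}")
```

In a GPT built on uploaded textbooks, those passages are the publisher’s copyrighted text, which helps explain how a bot like Write Like Stephen could quote On Writing verbatim.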
Kate Knibbs

Ever since the rollout of ChatGPT in November 2022, many people in science, business, and media have been obsessed with AI. A cursory look at my own published work during that period fingers me as among the guilty. My defense is that I share with those other obsessives a belief that large language models are the leading edge of an epochal transformation. Maybe I’m swimming in generative Kool-Aid, but I believe AI advances within our grasp will change not only the way we work, but the structure of businesses, and ultimately the course of humanity.
Not everyone agrees, and in recent months there’s been a backlash. AI has been oversold and overhyped, some experts now opine. Self-styled AI-critic-in-chief Gary Marcus recently said of the LLM boom, “It wouldn’t surprise me if, to some extent, this whole thing fizzled out.” Others claim that AI is mired in the “trough of disillusionment.”
This week we got some data that won’t resolve the larger questions but provides a snapshot of how the US, if not the world, views the advent of AI and large language models. The Pew Research Center—which did similar probes during the rise of the internet, social media, and mobile devices—released a study of how ChatGPT was being used, regarded, and trusted. The sample was taken between February 7 and 11 of this year.
Some of the numbers at first seem to indicate that the LLM controversy might be a parochial disagreement that most people don’t care about. A third of Americans haven’t heard of ChatGPT. Just under a quarter have used it. Oh, and for all the panic about how AI is going to flood the public square with misinformation about the 2024 election? So far, only 2 percent of Americans have used ChatGPT to get information about the presidential election season already underway.
More broadly, though, data from the survey indicates that we’re seeing a powerful technology whose rise is just beginning. If you accept Pew’s sample as indicative of all Americans, millions of people are indeed familiar with ChatGPT. And one thing in particular stands out: While 17 percent of respondents said they have used it for entertainment and an identical number says they’ve tried it to learn something new, a full 20 percent of adults say that they have used ChatGPT for work. That’s up dramatically from the 12 percent who responded affirmatively when the same question was asked six months earlier—a rise of two-thirds.
When I spoke to Colleen McClain, a Pew research associate involved in the study, she agreed that it seems to track with other huge technological shifts. “If you look at our trend charts over time on internet access, smartphones, social media, certainly some of them show this uptick,” she says. For some technologies there had been a leveling off, she adds. But in the ones she mentioned, the plateau came only when so many people came on board that there weren’t many stragglers left.
What’s crazy about that sudden jump in ChatGPT business use from 12 percent to 20 percent is that we’re only at the beginning stages of humans collaborating with these models. And the tools to fully make use of ChatGPT are in a nascent state. That’s changing fast. OpenAI, ChatGPT’s creator, is going full tilt, and AI giants Microsoft and Google are still in the process of diverting their workforces to redesign every product line to integrate conversational AI. And startups like Sierra, which is building agents for corporate customers, are enabling bespoke usages that take advantage of multiple models. As this process continues, more people will use AI tools. And since the foundation models are getting exponentially better—am I hearing that GPT-5 will show up this year?—that will make them even more compelling. This raises the possibility that the quality of virtually all work will reside in how well one can draw out the talents of a robot collaborator.
What past technology can help us understand the trajectory of the rocket ship we’re on? While the near-limitless ceiling of AI makes it hard to find an analog, I suggest the uptake of spreadsheets. Dan Bricklin and Bob Frankston invented them in 1978, and a year later the concept was embodied in VisiCalc, which at the time ran only on Apple computers. Spreadsheets had a phenomenal and disruptive effect on the business world. More than mere accounting tools, they triggered an era of business innovation and shook up the flow of information inside companies. Yet it took a few years before the business world widely adopted spreadsheets. The turning point came with a new and more powerful product called Lotus 1-2-3, which ran on the IBM PC. The current and near-future startups in the AI world, like Sierra, are all hoping to become the Lotuses of our era—but also to be much more consequential and lasting. Spreadsheets are largely limited to the business domain. LLMs can seemingly mess with anything.
Steven Levy

Aravind Srinivas credits Google CEO Sundar Pichai for giving him the freedom to eat eggs.
Srinivas remembers the moment seven years ago when an interview with Pichai popped up in his YouTube feed. His vegetarian upbringing in India had excluded eggs, as it had for many in the country, but now, in his early twenties, Srinivas wanted to start eating more protein. Here was Pichai, a hero to many aspiring entrepreneurs in India, casually describing his morning: waking up, reading newspapers, drinking tea—and eating an omelet.
Srinivas shared the video with his mother. OK, she said: You can eat eggs.
Pichai’s influence reaches far beyond Srinivas’ diet. He too is CEO of a search company, called Perplexity AI, one of the most hyped-up apps of the generative AI era. Srinivas is still taking cues from Pichai, the leader of the world’s largest search engine, but his admiration is more complicated.
“It’s kind of a rivalry now,” Srinivas says. “It’s awkward.”
Srinivas and Pichai both grew up in Chennai, India, in the south Indian state of Tamil Nadu—though the two were born 22 years apart. By the time Srinivas was working toward his PhD in computer science at UC Berkeley, Pichai had been crowned chief executive of Google.
For his first research internship, Srinivas worked at Google-owned DeepMind in London. Pichai also got a new job that year, becoming CEO of Alphabet as well as Google. Srinivas found the work at DeepMind invigorating, but he was dismayed to find that the flat he had rented sight unseen was a disaster—a “crappy home, with rats,” he says—so he sometimes slept in DeepMind’s offices.
He discovered in the office library a book about the development and evolution of Google, called In the Plex, penned by WIRED editor at large Steven Levy. Srinivas read it over and over, deepening his appreciation of Google and its innovations. “Larry and Sergey became my entrepreneurial heroes,” Srinivas says. (He offered to list In the Plex’s chapters and cite passages from memory; WIRED took his word for it.)
Shortly afterwards, in 2020, Srinivas ended up at Google’s headquarters in Mountain View, California, as a research intern working on machine learning for computer vision. Slowly, Srinivas was making his way through the Google universe, and putting some of his AI research work to good use.
Then, in 2022, Srinivas and three cofounders—Denis Yarats, Johnny Ho, and Andy Konwinski—teamed up to try to develop a new approach to search using AI. They started out working on algorithms that could translate natural language into the database language SQL, but determined this was too narrow (or nerdy). Instead they pivoted to a product that combined a traditional search index with the relatively new power of large language models. They called it Perplexity.
Perplexity is sometimes described as an “answer” engine rather than a search engine, because of the way it uses AI text generation to summarize results. New searches create conversational “threads” on a particular topic. Type in a query, and Perplexity responds with follow-up questions, asking you to refine your ask. It eschews direct links in favor of text-based or visual answers that don’t require you to click away to somewhere else to get information.
Lauren Goode

The last two weeks before the deadline were frantic. Though officially some of the team still had desks in Building 1945, they mostly worked in 1965 because it had a better espresso machine in the micro-kitchen. “People weren’t sleeping,” says Gomez, who, as the intern, lived in a constant debugging frenzy and also produced the visualizations and diagrams for the paper. It’s common in such projects to do ablations—taking things out to see whether what remains is enough to get the job done.
“There was every possible combination of tricks and modules—which one helps, which doesn’t help. Let’s rip it out. Let’s replace it with this,” Gomez says. “Why is the model behaving in this counterintuitive way? Oh, it’s because we didn’t remember to do the masking properly. Does it work yet? OK, move on to the next. All of these components of what we now call the transformer were the output of this extremely high-paced, iterative trial and error.” The ablations, aided by Shazeer’s implementations, produced “something minimalistic,” Jones says. “Noam is a wizard.”
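For readers unfamiliar with the practice, an ablation study has a simple shape: remove one component at a time, re-evaluate, and keep only what earns its place. A minimal sketch, with stand-in component names rather than the team’s actual experimental setup:

```python
# Minimal ablation loop: disable one piece at a time, compare to baseline.
COMPONENTS = ["attention_masking", "positional_encoding",
              "residual_connections", "layer_norm"]

def evaluate(disabled=frozenset()):
    """Train and score a model variant with these components removed.
    Stub: a real run would return a validation metric such as BLEU."""
    return 0.0  # placeholder score

baseline = evaluate()
for component in COMPONENTS:
    delta = evaluate(disabled=frozenset({component})) - baseline
    print(f"without {component}: {delta:+.2f} vs. baseline")
```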
Vaswani recalls crashing on an office couch one night while the team was writing the paper. As he stared at the curtains that separated the couch from the rest of the room, he was struck by the pattern on the fabric, which looked to him like synapses and neurons. Gomez was there, and Vaswani told him that what they were working on would transcend machine translation. “Ultimately, like with the human brain, you need to unite all these modalities—speech, audio, vision—under a single architecture,” he says. “I had a strong hunch we were onto something more general.”
In the higher echelons of Google, however, the work was seen as just another interesting AI project. I asked several of the transformers folks whether their bosses ever summoned them for updates on the project. Not so much. But “we understood that this was potentially quite a big deal,” says Uszkoreit. “And it caused us to actually obsess over one of the sentences in the paper toward the end, where we comment on future work.”
That sentence anticipated what might come next—the application of transformer models to basically all forms of human expression. “We are excited about the future of attention-based models,” they wrote. “We plan to extend the transformer to problems involving input and output modalities other than text” and to investigate “images, audio and video.”
A couple of nights before the deadline, Uszkoreit realized they needed a title. Jones noted that the team had landed on a radical rejection of the accepted best practices, most notably LSTMs, for one technique: attention. The Beatles, Jones recalled, had named a song “All You Need Is Love.” Why not call the paper “Attention Is All You Need”?
The Beatles?
“I’m British,” says Jones. “It literally took five seconds of thought. I didn’t think they would use it.”
They continued collecting results from their experiments right up until the deadline. “The English-French numbers came, like, five minutes before we submitted the paper,” says Parmar. “I was sitting in the micro-kitchen in 1965, getting that last number in.” With barely two minutes to spare, they sent off the paper.
Steven Levy

This week a startup called Cognition AI caused a bit of a stir by releasing a demo showing an artificial intelligence program called Devin performing work usually done by well-paid software engineers. Chatbots like ChatGPT and Gemini can generate code, but Devin went further, planning how to solve a problem, writing the code, and then testing and implementing it.
Devin’s creators brand it as an “AI software developer.” When asked to test how Meta’s open source language model Llama 2 performed when accessed via different companies hosting it, Devin generated a step-by-step plan for the project, generated code needed to access the APIs and run benchmarking tests, and created a website summarizing the results.
It’s always hard to judge staged demos, but Cognition has shown Devin handling a wide range of impressive tasks. It wowed investors and engineers on X, receiving plenty of endorsements, and even inspired a few memes—including some predicting Devin will soon be responsible for a wave of tech industry layoffs.
Devin is just the latest, most polished example of a trend I’ve been tracking for a while—the emergence of AI agents that instead of just providing answers or advice about a problem presented by a human can take action to solve it. A few months back I test drove Auto-GPT, an open source program that attempts to do useful chores by taking actions on a person’s computer and on the web. Recently I tested another program called vimGPT to see how the visual skills of new AI models can help these agents browse the web more efficiently.
I was impressed by my experiments with those agents. Yet for now, just like the language models that power them, they make quite a few errors. And when a piece of software is taking actions, not just generating text, one mistake can mean total failure—and potentially costly or dangerous consequences. Narrowing the range of tasks an agent can do to, say, a specific set of software engineering chores seems like a clever way to reduce the error rate, but there are still many potential ways to fail.
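The loop these agents run is easy to sketch, even if making it reliable is not. Below is a minimal, hypothetical outline of the plan-act-verify pattern; every function is a stub standing in for an LLM call or a real action:

```python
# Hypothetical plan-act-verify agent loop. The structure is the point:
# because one bad action can sink the whole task, each step is checked.

def make_plan(task: str) -> list[str]:
    # A real agent would have an LLM draft this from the task description.
    return [f"step 1 of {task!r}", f"step 2 of {task!r}"]

def execute(step: str) -> str:
    # A real agent would write code, run a command, or browse the web here.
    return f"output of {step}"

def verify(step: str, result: str) -> bool:
    # A real agent would run tests or ask the model to critique the result.
    return bool(result)

def run_agent(task: str, max_steps: int = 10) -> bool:
    for step in make_plan(task)[:max_steps]:
        result = execute(step)
        if not verify(step, result):
            return False  # stop rather than compound an early mistake
    return True

print(run_agent("benchmark Llama 2 across hosting providers"))
```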
Startups aren’t the only ones building AI agents. Earlier this week I wrote about an agent called SIMA, developed by Google DeepMind, which plays video games including the truly bonkers title Goat Simulator 3. SIMA learned from watching human players how to do more than 600 fairly complicated tasks, such as chopping down a tree or shooting an asteroid. Most significantly, it can do many of these actions successfully even in an unfamiliar game. Google DeepMind calls it a “generalist.”
I suspect that Google hopes these agents will eventually go to work outside of video games, perhaps using the web on a user’s behalf or operating software for them. But video games make a good sandbox for developing agents, providing complex environments in which they can be tested and improved. “Making them more precise is something that we’re actively working on,” Tim Harley, a research scientist at Google DeepMind, told me. “We’ve got various ideas.”
You can expect a lot more news about AI agents in the coming months. Demis Hassabis, the CEO of Google DeepMind, recently told me that he plans to combine large language models with the work his company has previously done training AI programs to play video games to develop more capable and reliable agents. “This definitely is a huge area. We’re investing heavily in that direction, and I imagine others are as well,” Hassabis said. “It will be a step change in capabilities of these types of systems—when they start becoming more agent-like.”
Will Knight

Whether through the frowning high-definition face of a chimpanzee or a psychedelic, pink-and-red-hued doppelganger of himself, Reuven Cohen uses AI-generated images to catch people’s attention. “I’ve always been interested in art and design and video and enjoy pushing boundaries,” he says—but the Toronto-based consultant, who helps companies develop AI tools, also hopes to raise awareness of the technology’s darker uses.
“It can also be specifically trained to be quite gruesome and bad in a whole variety of ways,” Cohen says. He’s a fan of the freewheeling experimentation that has been unleashed by open source image-generation technology. But that same freedom enables the creation of explicit images of women used for harassment.
After nonconsensual images of Taylor Swift recently spread on X, Microsoft added new controls to its image generator. Open source models can be commandeered by just about anyone and generally come without guardrails. Despite the efforts of some hopeful community members to deter exploitative uses, the open source free-for-all is near-impossible to control, experts say.
“Open source has powered fake image abuse and nonconsensual pornography. That’s impossible to sugarcoat or qualify,” says Henry Ajder, who has spent years researching harmful use of generative AI.
Ajder says that even as it has become a favorite of researchers, creatives like Cohen, and academics working on AI, open source image-generation software has also become the bedrock of deepfake porn. Some tools based on open source algorithms are purpose-built for salacious or harassing uses, such as “nudifying” apps that digitally remove women’s clothes in images.
But many tools can serve both legitimate and harassing use cases. One popular open source face-swapping program is used by people in the entertainment industry and as the “tool of choice for bad actors” making nonconsensual deepfakes, Ajder says. High-resolution image generator Stable Diffusion, developed by startup Stability AI, is claimed to have more than 10 million users and has guardrails installed to prevent explicit image creation and policies barring malicious use. But the company also open sourced a version of the image generator in 2022 that is customizable, and online guides explain how to bypass its built-in limitations.
Meanwhile, smaller AI models known as LoRAs make it easy to tune a Stable Diffusion model to output images with a particular style, concept, or pose—such as a celebrity’s likeness or certain sexual acts. They are widely available on AI model marketplaces such as Civitai, a community-based site where users share and download models. There, one creator of a Taylor Swift plug-in has urged others not to use it “for NSFW images.” However, once downloaded, its use is out of its creator’s control. “The way that open source works means it’s going to be pretty hard to stop someone from potentially hijacking that,” says Ajder.
4chan, the image-based message board site with a reputation for chaotic moderation, is home to pages devoted to nonconsensual deepfake porn, WIRED found, made with openly available programs and AI models dedicated solely to sexual images. Message boards for adult images are littered with AI-generated nonconsensual nudes of real women, from porn performers to actresses like Cate Blanchett. WIRED also observed 4chan users sharing workarounds for NSFW images using OpenAI’s Dall-E 3.
That kind of activity has inspired some users in communities dedicated to AI image-making, including on Reddit and Discord, to attempt to push back against the sea of pornographic and malicious images. Creators also express worry about the software gaining a reputation for NSFW images, encouraging others to report images depicting minors on Reddit and model-hosting sites.
Lydia Morrish

PARIS — Liberté. Egalité. But mostly: sécurité.
It all started with Napoléon Bonaparte. Over two centuries, France cobbled together a surveillance apparatus capable of intercepting private communications; keeping traffic and localization data for up to a year; storing people’s fingerprints; and monitoring most of the territory with cameras.
This system, which has faced pushback from digital rights organizations and United Nations experts, will get its spotlight moment at the 2024 Paris Summer Olympics. In July next year, France will deploy large-scale, real-time, algorithm-supported video surveillance cameras — a first in Europe. (Not included in the plan: facial recognition.)
Last month, the French parliament approved a controversial government plan to allow investigators to track suspected criminals in real-time via access to their devices’ geolocation, camera and microphone. Paris also lobbied in Brussels to be allowed to spy on reporters in the name of national security.
Helping France down the path of mass surveillance: a historically strong and centralized state; a powerful law enforcement community; political discourse increasingly focused on law and order; and the terrorist attacks of the 2010s. In the wake of President Emmanuel Macron’s agenda for so-called strategic autonomy, French defense and security giants, as well as innovative tech startups, have also gotten a boost to help them compete globally with American, Israeli and Chinese companies.
“Whenever there’s a security issue, the first reflex is surveillance and repression. There’s no attempt in either words or deeds to address it with a more social angle,” said Alouette, an activist at French digital rights NGO La Quadrature du Net who uses a pseudonym to protect her identity.
As surveillance and security laws have piled up in recent decades, advocates have lined up on opposite sides. Supporters argue law enforcement and intelligence agencies need such powers to fight terrorism and crime. Algorithmic video surveillance would have prevented the 2016 Nice terror attack, claimed Sacha Houlié, a prominent lawmaker from Macron’s Renaissance party.
Opponents point to the laws’ effect on civil liberties and fear France is morphing into a dystopian society. In June, the watchdog in charge of monitoring intelligence services said in a harsh report that French legislation is not compliant with the European Court of Human Rights’ case law, especially when it comes to intelligence-sharing between French and foreign agencies.
“We’re in a polarized debate with good guys and bad guys, where if you oppose mass surveillance, you’re on the bad guys’ side,” said Estelle Massé, Europe legislative manager and global data protection lead at digital rights NGO Access Now.
Both the 9/11 attacks and the 2015 Paris terror attacks accelerated mass surveillance in France, but the country’s tradition of snooping, monitoring and data collection dates way back — to Napoléon Bonaparte in the early 1800s.
“Historically, France has been at the forefront of these issues, in terms of police files and records. During the First Empire, France’s highly centralized government was determined to square the entire territory,” said Olivier Aïm, a lecturer at Sorbonne Université Celsa who authored a book on surveillance theories. Before electronic devices, paper was the main tool of control because identification documents were used to monitor travels, he explained.
The French emperor revived the Paris Police Prefecture — which exists to this day — and tasked law enforcement with new powers to keep political opponents in check.
In the 1880s, Alphonse Bertillon, who worked for the Paris Police Prefecture, introduced a new way of identifying suspects and criminals using biometric features — the forerunner of facial recognition. The Bertillon method would then be emulated across the world.
Between 1870 and 1940, under the Third Republic, the police kept a massive file — dubbed the National Security’s Central File — with information about 600,000 people, including anarchists and communists, certain foreigners, criminals, and people who requested identification documents.
After World War II ended, a bruised France moved away from hard-line security discourse until the 1970s. And in the early days of the 21st century, the 9/11 attacks in the United States marked a turning point, ushering in a steady stream of controversial surveillance laws — under both left- and right-wing governments. In the name of national security, lawmakers started giving intelligence services and law enforcement unprecedented powers to snoop on citizens, with limited judiciary oversight.
“Surveillance covers a history of security, a history of the police, a history of intelligence,” Aïm said. “Security issues have intensified with the fight against terrorism, the organization of major events and globalization.”
In the 1970s, before the era of omnipresent smartphones, French public opinion initially pushed back against using technology to monitor citizens.
In 1974, as ministries started using computers, Le Monde revealed a plan to merge all citizens’ files into a single computerized database, a project known as SAFARI.
The project, abandoned amid the resulting scandal, led lawmakers to adopt robust data protection legislation — creating the country’s privacy regulator CNIL. France then became one of the few European countries with rules to protect civil liberties in the computer age.
However, the mass spread of technology — and more specifically video surveillance cameras in the 1990s — allowed politicians and local officials to come up with new, alluring promises: security in exchange for surveillance tech.
In 2020, there were about 90,000 video surveillance cameras operated by the police and the gendarmerie in France. The state helps local officials finance them via a dedicated public fund. After France’s violent riots in early July — which also saw Macron float social media bans during periods of unrest — Interior Minister Gérald Darmanin announced he would swiftly allocate €20 million to repair broken video surveillance devices.
In parallel, the rise of tech giants such as Google, Facebook and Apple in everyday life has led to so-called surveillance capitalism. And for French policymakers, U.S. tech giants’ data collection has over the years become an argument to explain why the state, too, should be allowed to gather people’s personal information.
“We give Californian startups our fingerprints, face identification, or access to our privacy from our living room via connected speakers, and we would refuse to let the state protect us in the public space?” Senator Stéphane Le Rudulier from the conservative Les Républicains said in June to justify the use of facial recognition on the street.
Resistance to mass surveillance does exist in France at the local level — especially against the development of so-called safe cities. Digital rights NGOs can boast a few wins: In the south of France, La Quadrature du Net scored a victory in an administrative court, blocking plans to test facial recognition in high schools.

At the national level, however, security laws are too powerful a force, despite a few ongoing cases before the European Court of Human Rights. For example, France has de facto ignored multiple rulings from the EU’s top court that deemed mass data retention illegal.
Often at the center of France’s push for more state surveillance: the interior minister. This influential office, whose constituency includes the law enforcement and intelligence community, is described as a “stepping stone” toward the premiership — or even the presidency.
“Interior ministers are often powerful, well-known and hyper-present in the media. Each new minister pushes for new reforms, new powers, leading to the construction of a never-ending security tower,” said Access Now’s Massé.
Under Socialist François Hollande, Manuel Valls and Bernard Cazeneuve both went from interior minister to prime minister, in 2014 and 2016 respectively. Nicolas Sarkozy, Jacques Chirac’s interior minister from 2005 to 2007, was then elected president. All shepherded new surveillance laws during their tenures.
In the past year, Darmanin has been instrumental in pushing for the use of police drones, even going against the CNIL.
For politicians, even at the local level, there is little to gain electorally by arguing against expanded snooping and the monitoring of public space. “Many on the left, especially in complicated cities, feel obliged to go along, fearing accusations of being soft [on crime],” said Noémie Levain, a legal and political analyst at La Quadrature du Net. “The political cost of reversing a security law is too high,” she added.
It’s also the case that there’s often little pushback from the public. In March, on the same day a handful of French MPs voted to allow AI-powered video surveillance cameras at the 2024 Paris Olympics, about 1 million people took to the streets to protest against … Macron’s pension reform.
For politicians, France’s industrial competitiveness is also at stake. The country is home to defense giants that dabble in both the military and civilian sectors, such as Thalès and Safran. Meanwhile, Idemia specializes in biometrics and identification.
“What’s accelerating legislation is also a global industrial and geopolitical context: Surveillance technologies are a Trojan horse for artificial intelligence,” said Caroline Lequesne Rot, an associate professor at the Côte d’Azur University, adding that French policymakers are worried about foreign rivals. “Europe is caught between the stranglehold of China and the U.S. The idea is to give our companies access to markets and allow them to train.”
In 2019, then-Digital Minister Cédric O told Le Monde that experimenting with facial recognition was needed to allow French companies to improve their technology.

For the video surveillance industry — which made €1.6 billion in France in 2020 — the 2024 Paris Olympics will be a golden opportunity to test their products and services and showcase what they can do in terms of AI-powered surveillance.
XXII — an AI startup with funding from the armed forces ministry and at least some political backing — has already hinted it would be ready to secure the mega sports event.
“If we don’t encourage the development of French and European solutions, we run the risk of later becoming dependent on software developed by foreign powers,” wrote lawmakers Philippe Latombe, from Macron’s allied party Modem, and Philippe Gosselin, from Les Républicains, in a parliamentary report on video surveillance released in April.
“When it comes to artificial intelligence, losing control means undermining our sovereignty,” they added.
Laura Kayali

PARIS — A controversial video surveillance system cleared a legislative hurdle Wednesday to be used during the 2024 Paris Summer Olympics amid opposition from left-leaning French politicians and digital rights NGOs, who argue it infringes upon privacy standards.
The National Assembly’s law committee approved the system, but also voted to shorten the temporary program so that it ends on December 24, 2024, instead of June 2025.
The plan pitched by the French government includes experimental large-scale, real-time camera systems supported by an algorithm to spot suspicious behavior, including unattended luggage and alarming crowd movements like stampedes.
Earlier this week, civil society groups in France and beyond — including La Quadrature du Net, Access Now and Amnesty International — penned an op-ed in Le Monde raising concerns about what they argued was a “worrying precedent” that France could set in the EU.
There’s a risk that the measures, pitched as temporary, could become permanent, and they likely would not comply with the EU’s Artificial Intelligence Act, the groups also argue.
About 90 left-leaning lawmakers signed a petition initiated by La Quadrature du Net to scrap Article 7, which includes the AI-powered surveillance system. They failed, however, to gather enough votes to have it deleted from the bill.
Lawmakers also voted to ensure the general public is better informed of where the cameras are and to involve the cybersecurity agency ANSSI on top of the privacy regulator CNIL. They also widened the pool of images and data that can be used to train the algorithms ahead of the Olympics.
The bill will go to a full plenary vote on March 21 for final approval.
Laura Kayali

PARIS — France is seeking to massively expand its arsenal of surveillance powers and tools to secure the millions of tourists expected for the 2024 Paris Summer Olympics.
Among the plans are large-scale, real-time camera systems supported by an algorithm to spot suspicious behavior, including unattended luggage and alarming crowd movements like stampedes. Senators on Wednesday will vote on a law introducing the new powers, which are supposed to be temporary, with some lawmakers pushing to allow controversial facial-recognition technology.
The stakes are high: The government badly wants to avoid “failures” like the ones that dented its reputation during the Champions League final last summer, and the trauma of the 2015 Paris terror attacks still looms large over the country.
But the plans are already causing an uproar among privacy campaigners. “The Olympic Games are used as a pretext to pass measures the [security technology] industry has long been waiting for,” said Bastien Le Querrec from digital rights NGO La Quadrature du Net, who’s leading a campaign against algorithmic video surveillance.
The French government already backtracked on deploying facial recognition after lawmakers within President Emmanuel Macron’s majority party raised concerns. It was also forced by the country’s data protection authority and top administrative court to build in more privacy safeguards.
For now, the law would allow for “experimentation” with the surveillance systems, and the trial is supposed to end in June 2025 — 10 months after the sports competition wraps up.
Critics, however, fear the law will lead to unwanted surveillance in the long term.
One key question is what will happen to the AI-powered devices once the Olympic Games are over, especially since the legislation mentions not only sports events but also “festive” and “cultural” gatherings. In the past, Le Querrec warned, security measures initially designed to be temporary — for example, under the state of emergency that followed the 2015 attacks — ended up becoming permanent.
Whether the tech survives the Olympics will depend on how the final law is written, according to Francisco Klauser, a professor at the University of Neuchâtel, who has written about surveillance and sporting events.
“In the history of mega-events, there is always a legacy,” he said. Countries staging major events are under “extraordinary circumstances and time pressure” that often mean systems get deployed that otherwise “would have been debated much more heavily,” he added.
Case in point: IBM helped Rio de Janeiro install a “control room” ahead of the 2016 Olympics, and the tech is still operational to this day, Klauser said.
For the 2024 Olympics, France already has the cameras but will need to buy the software to analyze footage, an official from the interior ministry told POLITICO.
Philippe Latombe, an MP from the centrist Macron-allied party Modem, said that French companies such as Atos, Idemia, XXII and Datakalab, among others, would be able to provide such tech. The lawmaker is co-chairing a fact-finding mission on video surveillance in public spaces.
After the Senate votes on the law to allow “experimentations” with the surveillance systems, the legislation will go to the National Assembly, and lawmakers in both chambers are expected to fight over the balance between privacy and security.
Time is already running out, Latombe warned, as algorithms will need to be trained on datasets for months before the Olympics kick off.
Elisa Braun contributed reporting.
Laura Kayali