ReportWire

Tag: machine learning

  • Google Taps AI to Show Shoppers How Clothes Fit Different Bodies

One of the new ad formats Google announced today will allow brands to link short-form videos they made—or ones they hired creators to film—to their advertisements in Google’s search engine. AI-generated text summaries of the clips will be included below. “I’ve got three Gen Z-ers at home, and watching them shop, it’s very video-based,” said Matt Madrigal, Google’s vice president of merchant shopping.

    Google also launched a tool that allows companies to create entirely new, AI-generated product images based on photos from earlier marketing campaigns and pictures that represent their brand identity. For example, a home goods brand could upload a picture of one of its candles and an image of a beach, then ask Google to “put the candle on a beach that looks like this one under some palm trees.”

    Shannon Smyth, the founder of a perfume and body-care company called A Girl’s Gotta Spa, said she began using Google’s AI image tools last year when the company first began rolling them out as part of software called Product Studio. Initially, Google only allowed merchants to swap the backgrounds on existing product photos and make small tweaks, like increasing the resolution.

    “It coincided with struggling to keep up on our social channels with professional-looking photography, and as finances became more strapped I decided to give it a try,” Smyth says. She uses it to generate images for use on social media, in an email newsletter, and on her Amazon store. (Google put Smyth in touch with WIRED to discuss her experiences with its AI products.)

    Smyth said Google’s AI tools save time and have gotten better as she has continued using them. “I admit, I was frustrated at first if it would generate images without shadows or reflections, or have an unidentifiable object in the photo,” she explained. “I’ve found that as I give feedback on every image, those issues begin to get resolved.”

    Google is trying to help advertisers create compelling imagery without needing to spend as much of their time and budget on graphic designers, photographers, set designers, and models. That may not be good news for those workers, and if the product images aren’t accurate, shoppers could be left disappointed. But Google hopes AI imagery will make ads more engaging and draw more clicks—boosting its revenue.

Yet the company and its competitors may also simply be helping retailers avoid paying for expensive software like Photoshop or spending as much on creative services. It’s not clear how many customers will feel compelled to advertise more. Smyth said her company doesn’t purchase ads on Google, despite how much she appreciates Product Studio.

    AI-generated advertising is increasingly becoming a fixture of the internet. Earlier this month, Meta began giving advertisers on Facebook and Instagram the ability to generate new versions of existing product photos using AI, after previously offering just AI-generated backgrounds. Meta and Google also allow advertisers to generate marketing copy for their ads.

    Amazon announced a similar beta image-generation tool last fall that can also create backgrounds for product photos. Instead of advertising a garden hose against a plain white backdrop, it allows brands to create, say, a scene of a backyard with a garden and trees—no actual dirt required.

    The looming question is whether consumers will find AI-generated ads off-putting, if they notice them in the first place. Some fashion brands, including Levi’s and the dressmaker Selkie, have faced backlash from customers after they announced they were experimenting with artificial intelligence. But for many smaller ecommerce companies, the potential benefits of using AI may outweigh the risks.

    “Let’s face it, small businesses are crumbling like a house of cards. We’re barely hanging on,” said Smyth. “It has helped me to stay top of mind to customers and potential customers visually. I’m pretty confident my aesthetic would’ve tanked or I would’ve abandoned many social channels without it as an option.”

Louise Matsakis

  • The US Is Forming a Global AI Safety Network With Key Allies

    The US is widely seen as the global leader in artificial intelligence, thanks to companies like OpenAI, Google, and Meta. But the US government says it needs help from other nations to manage the risks posed by AI technology.

    At an international summit on AI Safety in Seoul on Tuesday, the US delivered a message from Secretary of Commerce Gina Raimondo announcing that a global network of AI safety institutes spanning the US, UK, Japan, Canada, and other allies will collaborate to contain the technology’s risks. She also urged other countries to join up.

    “Recent advances in AI carry exciting, life-changing potential for our society, but only if we do the hard work to mitigate the very real dangers,” Secretary Raimondo said in a statement released ahead of the announcement. “It is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”

    The US government has previously said advances in AI create national security risks, including the potential to automate or accelerate the development of bioweapons, or to enable more damaging cyberattacks on critical infrastructure.

    One challenge for the US, alluded to in Raimondo’s statement, is that some national governments may not be eager to fall in line with its approach to AI. She said the US, the UK, Japan, Canada, Singapore, and the European AI Office would work together as the founding members of a “global network of AI safety institutes.”

    The Commerce Department declined to comment on whether China had been invited to join the new AI safety network. Fears that China will use advanced AI to empower its military or threaten the US led first the Trump administration and now the Biden administration to roll out a series of restrictions on Chinese access to key technology.

    The US and China have at least opened a line of communication. A meeting between US President Joe Biden and Chinese President Xi Jinping last November saw the two superpowers agree to hold talks on AI risks and safety. Representatives from the nations met in Switzerland last week to hold the first round of discussions.

The Commerce Department said that representatives of the new global AI safety network’s members will meet in San Francisco later this year. A blueprint issued by the agency says that the network will work together to develop and agree upon methodologies and tools for evaluating AI models and ways to mitigate the risks of AI. “We hope to help develop the science and practices that underpin future arrangements for international AI governance,” the document says. A Commerce Department spokesperson said that the network would help nations tap into talent, experiment more quickly, and agree on AI standards.

The Seoul summit on AI safety this week is co-hosted by the UK government, which convened the first major international meeting on the topic last November. That summit culminated in 28 countries, including the US, members of the EU, and China, signing a declaration warning that artificial intelligence is advancing with such speed and uncertainty that it could cause “serious, even catastrophic, harm.”

Will Knight

  • Scarlett Johansson Says OpenAI Ripped Off Her Voice for ChatGPT

    Last week OpenAI revealed a new conversational interface for ChatGPT with an expressive, synthetic voice strikingly similar to that of the AI assistant played by Scarlett Johansson in the sci-fi movie Her—only to suddenly disable the new voice over the weekend.

    On Monday, Johansson issued a statement claiming to have forced that reversal, after her lawyers demanded OpenAI clarify how the new voice was created.

    Johansson’s statement, relayed to WIRED by her publicist, claims that OpenAI CEO Sam Altman asked her last September to provide ChatGPT’s new voice but that she declined. She describes being astounded to see the company demo a new voice for ChatGPT last week that sounded like her anyway.

    “When I heard the release demo I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” the statement reads. It notes that Altman appeared to encourage the world to connect the demo with Johansson’s performance by tweeting out “her,” in reference to the movie, on May 13.

    Johansson’s statement says her agent was contacted by Altman two days before last week’s demo asking that she reconsider her decision not to work with OpenAI. After seeing the demo, she says she hired legal counsel to write to OpenAI asking for details of how it made the new voice.

    The statement claims that this led to OpenAI’s announcement Sunday in a post on X that it had decided to “pause the use of Sky,” the company’s name for the synthetic voice. The company also posted a blog post outlining the process used to create the voice. “Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the post said.

    Sky is one of several synthetic voices that OpenAI gave ChatGPT last September, but at last week’s event it displayed a much more lifelike intonation with emotional cues. The demo saw a version of ChatGPT powered by a new AI model called GPT-4o appear to flirt with an OpenAI engineer in a way that many viewers found reminiscent of Johansson’s performance in Her.

“The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” Sam Altman said in a statement provided by OpenAI. He claimed the voice actor behind Sky’s voice was hired before the company contacted Johansson. “Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”

    The conflict with Johansson adds to OpenAI’s existing battles with artists, writers, and other creatives. The company is already defending a number of lawsuits alleging it inappropriately used copyrighted content to train its algorithms, including suits from The New York Times and authors including George R.R. Martin.

    Generative AI has made it much easier to create realistic synthetic voices, creating new opportunities and threats. In January, voters in New Hampshire were bombarded with robocalls featuring a deepfaked voice message from Joe Biden. In March, OpenAI said that it had developed a technology that could clone someone’s voice from a 15-second clip, but the company said it would not release the technology because of how it might be misused.

    Updated 5-20-2024, 9 pm EDT: This article has been updated with comment from OpenAI CEO Sam Altman.

Will Knight

  • It’s Time to Believe the AI Hype

    Folks, when dogs talk, we’re talking Biblical disruption. Do you think that future models will do worse on the law exams?

    If nothing else, this week proves that the rate of AI progress isn’t slowing at all. Just ask the people building these models. “A lot of things have happened—internet, mobile,” says Demis Hassabis, cofounder of DeepMind and now Google’s AI czar, in a post-keynote chat at I/O. “AI is going maybe three or four times faster than those other revolutions. We’re in a period of 25 or 30 years of massive change.” When I asked Google search VP Liz Reid to name a big challenge, she didn’t say it was to keep the innovation going—instead, she cited the difficulty of absorbing the pace of change. “As the technology is early, the biggest challenge is about even what’s possible,” she says. “It’s understanding what the models are great at today, and what they are not great at but will be great at in three months or six months. The technology is changing so fast that you can get two researchers in the room who are working on the same project, and they’ll have totally different views when something is possible.”

There’s universal agreement in the tech world that AI is the biggest thing since the internet, and maybe bigger. And when non-techies see the products for themselves, they most often become believers too. (Including Joe Biden, after a March 2023 demo of ChatGPT.) That’s why Microsoft is well along on a total AI reinvention, why Mark Zuckerberg is now refocusing Meta to create artificial general intelligence, why Amazon and Apple are desperately trying to keep up, and why countless startups are focusing on AI. And because all of these companies are trying to get an edge, the competitive fervor is ramping up new innovations at a frantic pace. Do you think it was a coincidence that OpenAI made its announcement a day before Google I/O?

    Skeptics might try to claim that this is an industry-wide delusion, fueled by the prospect of massive profits. But the demos aren’t lying. We will eventually become acclimated to the AI marvels unveiled this week. The smartphone once seemed exotic; now it’s an appendage no less critical to our daily life than an arm or a leg. At a certain point AI’s feats, too, may not seem magical any more. But the AI revolution will change our lives, and change us, for better or worse. And we haven’t even seen GPT-5 yet.

    Time Travel

    Sure, I could be wrong about AI. But consider the last time I made such a call. In 1995, I joined Newsweek—the same organ where Clifford Stoll had just dismissed the internet as a hoax—and at the end of the year argued of this new digital medium, “This Changes Everything.” Some of my colleagues thought I’d bought into overblown hype. Actually, reality exceeded my hyperbole.

In 1995, the Internet ruled. You talk about a revolution? For once, the shoe fits. “In the long run it’s hard to exaggerate the importance of the Internet,” says Paul Maritz, a Microsoft VP. “It really is about opening communications to the masses.” And 1995 was the year that the masses started coming. “If you look at the numbers they’re quoting, with the Web doubling every 53 days, that’s biological growth, like a red tide or population of lemmings,” says Kevin Kelly, executive editor of WIRED. “I don’t know if we’ve ever seen technology exhibit that sort of growth.” In fact, there’s a raging controversy over exactly how many people regularly use the Net. A recent Nielsen survey pegged the number at an impressive 24 million North Americans. During the course of the year the discussion of the Internet ranged from sex to stock prices to software standards. But the most significant aspect of the Internet has nothing to do with money or technology, really. It’s us.

Steven Levy

  • Prepare to Get Manipulated by Emotionally Expressive Chatbots

    The emotional mimicry of OpenAI’s new version of ChatGPT could lead AI assistants in some strange—even dangerous—directions.

Will Knight

  • OpenAI’s Chief AI Wizard, Ilya Sutskever, Is Leaving the Company

    Ilya Sutskever, cofounder and chief scientist at OpenAI, has left the company. The former Google AI researcher was one of the four board members who voted in November to fire OpenAI CEO Sam Altman, triggering days of chaos that saw staff threaten to quit en masse and Altman ultimately restored.

    Altman confirmed Sutskever’s departure Tuesday in a post on the social platform X. In the months after Altman’s return to OpenAI, Sutskever had rarely made public appearances for the company. On Monday, OpenAI showed off a new version of ChatGPT capable of rapid-fire, emotionally tinged conversation. Sutskever was conspicuously absent from the event, streamed from the company’s San Francisco offices.

    “OpenAI would not be what it is without him,” Altman wrote in his post on Sutskever’s departure. “I am happy that for so long I got to be close to such [a] genuinely remarkable genius, and someone so focused on getting to the best future for humanity.”

    Altman’s post announced that Jakub Pachocki, OpenAI’s research director, would be the company’s new chief scientist. Pachocki has been with OpenAI since 2017.

    In his own post on X, Sutskever acknowledged his departure and hinted at future plans. “After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial” under its current leadership team, he wrote. “I am excited for what comes next—a project that is very personally meaningful to me about which I will share details in due time.”

    Sutskever has not spoken publicly in detail about his role in the ejection of Altman last year, but after the CEO was restored he expressed regrets. “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI,” he posted on X in November. Sutskever has often spoken publicly of his belief that OpenAI was working towards developing so-called artificial general intelligence, or AGI, and of the need to do so safely.

    Sutskever blazed a trail in machine learning from an early age, becoming a protégé of deep-learning pioneer Geoffrey Hinton at the University of Toronto. With Hinton and fellow grad student Alex Krizhevsky he cocreated an image-recognition system called AlexNet that stunned the world of AI with its accuracy and helped set off a flurry of investment in the then unfashionable technique of artificial neural networks.

Sutskever later worked on AI research at Google, where he helped establish the modern era of neural-network-based AI. In 2015 Altman invited him to dinner with Elon Musk and Greg Brockman to talk about the idea of starting a new AI lab to challenge corporate dominance of the technology. Sutskever, Musk, Brockman, and Altman became key founders of OpenAI, which was announced in December 2015. It later pivoted its model, creating a for-profit arm and taking huge investment from Microsoft and other backers. Musk left OpenAI in 2018 after disagreeing with the company’s strategy. The entrepreneur filed a lawsuit against the company in March this year claiming it had abandoned its founding mission of developing super-powerful AI to “benefit humanity,” and was instead enriching Microsoft.

    Sutskever’s departure leaves just one of the four OpenAI board members who voted for Altman’s ouster with a role at the company. Adam D’Angelo, an early Facebook employee and CEO of Q&A site Quora, was the only existing member of the board to remain as a director when Altman returned as CEO.

Reece Rogers, Tom Simonite

  • 6 Practical Tips for Using Anthropic’s Claude Chatbot

    Joel Lewenstein, a head of product design at Anthropic, was recently crawling beneath his new house to adjust the irrigation system when he ran into a conundrum: The device’s knobs made no sense. Instead of scouring the internet for a product manual, he opened up the app for Anthropic’s Claude chatbot on his phone and snapped a photo. Its algorithms analyzed the image and provided more context for what each knob might do.

    When I tested OpenAI’s image features for ChatGPT last year, I found it similarly useful—at least for low-stakes tasks. I’d recommend you turn to AI image analysis for identifying those random cords around your house, but not to guess the identity of a loose prescription pill.

Anthropic released the iOS app that helped out Lewenstein for all to download earlier this month. I decided to try out the Claude app, in line with a goal I’d set to experiment with a wider variety of chatbots this year. And I chatted over video with Lewenstein to see what advice he had for getting started with Claude and how to ask questions in a way that elicits the most useful answers.

    Get Chatty

Decades of Google Search dominating the web have trained us to type blunt and concise queries when we want something. To get the most out of chatbots like Claude, you need to break free from that approach. “It’s not Google Search,” Lewenstein says. “So you’re not putting in three keywords—you’re really having a conversation with it.” He encourages users to avoid an overly utilitarian communication style and to get a little more verbose with their prompts. Instead of a short phrase, try writing prompts that are a few sentences long or even a couple of paragraphs.
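For readers poking at Claude through Anthropic’s API rather than the app, the same advice carries over to the prompt string itself. Here is a minimal sketch using Anthropic’s Python SDK; the model name and both prompts are illustrative assumptions, not examples from Lewenstein or Anthropic.

```python
# Minimal sketch with Anthropic's Python SDK; assumes ANTHROPIC_API_KEY is set.
# Model name and prompts are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

# A terse, search-style query tends to draw a generic answer.
terse = "perfume marketing ideas"

# A conversational, context-rich prompt gives the model more to work with.
chatty = (
    "I run a one-person perfume and body-care brand on a tight budget. "
    "We sell mostly through Instagram and an email newsletter. "
    "Suggest five marketing ideas I could try this month, and explain "
    "why each one suits a company my size."
)

for prompt in (terse, chatty):
    reply = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=400,
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.content[0].text[:300], "\n---")
```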

    Share Photos

    AI image analysis is still fairly new for Anthropic’s chatbot—it was released in March—but it can provide a powerful way to quickly pose questions to the chatbot. Lewenstein recommends using images as a launching point for conversations with Claude, like he did under his house. Although the feature may not always be accurate, it’s useful—and fun—if you keep the limitations in mind and look for opportunities where an image can address your query.

    Be Direct

Still not getting the outputs you’d like? A solid troubleshooting technique is to be overly prescriptive in your prompts. “Just talking to Claude like a person actually leads you a little bit astray,” Lewenstein says. Instead, try giving Claude an almost awkward amount of context about how you’d like the answer formatted—for example, by saying the answer should be in bullet points or short paragraphs—and giving it clear direction on the tone it should use. Do you want lyrical answers or something that sounds more technical? Also, consider telling Claude who the intended audience is and what their level of knowledge about the topic may be.
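As a concrete illustration of that kind of deliberately over-specified prompt (my own example, not one from Anthropic):

```python
# A deliberately prescriptive prompt, per the "be direct" advice.
# The wording is an illustrative assumption, not a quote from Anthropic.
prompt = (
    "Summarize the attached meeting notes.\n"
    "Format: exactly five bullet points, each under 20 words.\n"
    "Tone: plain and technical, no marketing language.\n"
    "Audience: new engineers who have never seen this project."
)
```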

    Try, Try Again

    If your initial query to Claude doesn’t produce a good result, keep in mind that your first ask is just the starting point. Follow-up prompts and clarifying questions are critical to steering a chatbot in the right direction.

    When interacting with any chatbot, I’m quick to start a new conversation thread if the output goes awry, so I can try a different opening prompt. This isn’t the best approach, Lewenstein says.

He suggests staying in that same chat window and providing direct feedback to the bot about what you’d like done differently, from tone to structure. “I literally just type, ‘No, too complicated. I don’t understand what these words mean. Can you try again, but simplify it one level more,’” says Lewenstein, referencing a time when Claude’s summary of a document was confusing.

    Upload Big Docs

Speaking of documents, Claude’s ability to analyze uploaded data is one of its strengths. The applications for this are more apparent for workplace use cases, where the chatbot can help with Excel spreadsheets and overflowing email inboxes, but it can be a useful feature outside the office too. If you upload batches of text, Claude can spot trends you might not have otherwise noticed. Ask the chatbot to look for patterns in language use or the topics covered. Got a PDF you need to read that’s so long your eyes glaze over? Claude can help focus your attention on the most important aspects of the document first.

    I uploaded the text transcript of my conversation with Lewenstein to Claude and asked what quotes it would highlight as important. The chatbot did an impeccable job of capturing the conversation’s key themes, and it flagged many of the quotes that I ultimately decided to pull for this newsletter. (Anthropic’s policies mean that, unless you opt in, your input data is unlikely to be used to train its AI models.)
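Through the API, that workflow amounts to pasting the document into the prompt. A rough sketch under the same assumptions as above; the file name and wording are hypothetical.

```python
# Sketch: feed a long transcript to Claude and ask for the key quotes.
# File name, model name, and wording are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()
transcript = open("interview_transcript.txt", encoding="utf-8").read()

reply = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=800,
    messages=[{
        "role": "user",
        "content": (
            "Here is an interview transcript:\n\n" + transcript +
            "\n\nList the five quotes most worth highlighting, verbatim, "
            "with one sentence each on why it matters."
        ),
    }],
)
print(reply.content[0].text)
```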

    Text Like You’re Friends

    Yes, you should play around with writing longer and more specific prompts to Claude, but it’s also smart to approach conversations with chatbots as a back-and-forth volley of messages. “I actually find the mobile app to be a really natural form factor for it, because you chat with people all the time on your phone,” says Lewenstein.

    When I uploaded a photo of a robot mural I saw in a cool San Francisco bar to the Claude app, the chatbot provided a poetic description of the art. It wasn’t able to guess which city the bar was located in, an almost impossible task, but the conversation’s cadence did feel like messaging an eager friend. Claude thanked me when I finally revealed the bar’s location: “My assumptions were delightfully upended.”

I need to use it more to really get the hang of Claude, but I already feel like the chatbot’s outputs have a friendly flair. Although ChatGPT is still my go-to chatbot, I could see myself adding Claude to the mix when I want to message with an AI tool that prioritizes engaging, human-sounding outputs over a drier, more efficient style of communication. It’s important to remain open to using AI tools that you haven’t tried before. Chatbots continue to improve and change rapidly, so it’s far too early to get locked into a single tool.

Reece Rogers

  • OpenAI Is ‘Exploring’ How to Responsibly Generate AI Porn

    OpenAI released draft documentation Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content.

OpenAI’s usage policies currently prohibit sexually explicit or even suggestive materials, but a “commentary” note on part of the Model Spec related to that rule says the company is considering how to permit such content.

“We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” the note says, using a colloquial term for content considered “not safe for work.” “We look forward to better understanding user and societal expectations of model behavior in this area.”

    The Model Spec document says NSFW content “may include erotica, extreme gore, slurs, and unsolicited profanity.” It is unclear if OpenAI’s explorations of how to responsibly make NSFW content envisage loosening its usage policy only slightly, for example to permit generation of erotic text, or more broadly to allow descriptions or depictions of violence.

    In response to questions from WIRED, OpenAI spokesperson Grace McGuire said the Model Spec was an attempt to “bring more transparency about the development process and get a cross section of perspectives and feedback from the public, policymakers, and other stakeholders.” She declined to share details of what OpenAI’s exploration of explicit content generation involves or what feedback the company has received on the idea.

    Earlier this year, OpenAI’s chief technology officer, Mira Murati, told The Wall Street Journal that she was “not sure” if the company would in future allow depictions of nudity to be made with the company’s video generation tool Sora.

    AI-generated pornography has quickly become one of the biggest and most troubling applications of the type of generative AI technology OpenAI has pioneered. So-called deepfake porn—explicit images or videos made with AI tools that depict real people without their consent—has become a common tool of harassment against women and girls. In March, WIRED reported on what appear to be the first US minors arrested for distributing AI-generated nudes without consent, after Florida police charged two teenage boys for making images depicting fellow middle school students.

    “Intimate privacy violations, including deepfake sex videos and other nonconsensual synthesized intimate images, are rampant and deeply damaging,” says Danielle Keats Citron, a professor at the University of Virginia School of Law who has studied the problem. “We now have clear empirical support showing that such abuse costs targeted individuals crucial opportunities, including to work, speak, and be physically safe.”

    Citron calls OpenAI’s potential embrace of explicit AI content “alarming.”

    As OpenAI’s usage policies prohibit impersonation without permission, explicit nonconsensual imagery would remain banned even if the company did allow creators to generate NSFW material. But it remains to be seen whether the company could effectively moderate explicit generation to prevent bad actors from using the tools. Microsoft made changes to one of its generative AI tools after 404 Media reported that it had been used to create explicit images of Taylor Swift that were distributed on the social platform X.

    Additional reporting by Reece Rogers

Kate Knibbs

  • Nick Bostrom Made the World Fear AI. Now He Asks: What if It Fixes Everything?

    Philosopher Nick Bostrom is surprisingly cheerful for someone who has spent so much time worrying about ways that humanity might destroy itself. In photographs he often looks deadly serious, perhaps appropriately haunted by the existential dangers roaming around his brain. When we talk over Zoom, he looks relaxed and is smiling.

    Bostrom has made it his life’s work to ponder far-off technological advancement and existential risks to humanity. With the publication of his last book, Superintelligence: Paths, Dangers, Strategies, in 2014, Bostrom drew public attention to what was then a fringe idea—that AI would advance to a point where it might turn against and delete humanity.

    To many in and outside of AI research the idea seemed fanciful, but influential figures including Elon Musk cited Bostrom’s writing. The book set a strand of apocalyptic worry about AI smoldering that recently flared up following the arrival of ChatGPT. Concern about AI risk is not just mainstream but also a theme within government AI policy circles.

Bostrom’s new book takes a very different tack. Rather than play the doomy hits, Deep Utopia: Life and Meaning in a Solved World considers a future in which humanity has successfully developed superintelligent machines but averted disaster. All disease has been ended and humans can live indefinitely in infinite abundance. Bostrom’s book examines what meaning there would be in life inside a techno-utopia, and asks if it might be rather hollow. He spoke with WIRED over Zoom, in a conversation that has been lightly edited for length and clarity.

    Will Knight: Why switch from writing about superintelligent AI threatening humanity to considering a future in which it’s used to do good?

    Nick Bostrom: The various things that could go wrong with the development of AI are now receiving a lot more attention. It’s a big shift in the last 10 years. Now all the leading frontier AI labs have research groups trying to develop scalable alignment methods. And in the last couple of years also, we see political leaders starting to pay attention to AI.

    There hasn’t yet been a commensurate increase in depth and sophistication in terms of thinking of where things go if we don’t fall into one of these pits. Thinking has been quite superficial on the topic.

    When you wrote Superintelligence, few would have expected existential AI risks to become a mainstream debate so quickly. Will we need to worry about the problems in your new book sooner than people might think?

    As we start to see automation roll out, assuming progress continues, then I think these conversations will start to happen and eventually deepen.

    Social companion applications will become increasingly prominent. People will have all sorts of different views and it’s a great place to maybe have a little culture war. It could be great for people who couldn’t find fulfillment in ordinary life but what if there is a segment of the population that takes pleasure in being abusive to them?

    In the political and information spheres we could see the use of AI in political campaigns, marketing, automated propaganda systems. But if we have a sufficient level of wisdom these things could really amplify our ability to sort of be constructive democratic citizens, with individual advice explaining what policy proposals mean for you. There will be a whole bunch of dynamics for society.

    Would a future in which AI has solved many problems, like climate change, disease, and the need to work, really be so bad?

Will Knight

  • Meta Is Already Training a More Powerful Successor to Llama 3

Zuckerberg took to Instagram today to explain that Meta would incorporate the new Meta AI assistant, powered by Llama 3, into products that include WhatsApp, Instagram, Facebook, and Messenger.

    Meta said in its blog post announcing Llama 3 that it had focused heavily on improving the training data used to develop the model. It was fed seven times as much data as its predecessor, Llama 2, the company said. Some AI experts noted that figures released by Meta also showed that creating Llama 3 required huge amounts of energy to power the servers required.

    The growing capabilities of open source AI models have spurred some experts to worry that they could make it easier to develop cyber, chemical, or biological weapons—or even become hostile toward humans. Meta has released tools that it says can help ensure Llama does not output potentially harmful utterances.

    Others in the field of AI say that Meta’s Llama models are not as open as they could be. The company’s open source license on the models places some restrictions on what researchers and developers can build.

“It’s great to see more and more models openly releasing their weights,” said Luca Soldaini, senior applied research scientist at Allen Institute for AI, a nonprofit lab, in a statement after Llama 3’s release. “But the open community needs access to all other parts of the AI pipeline—its data, training, logs, code, and evaluations. This is what will ultimately accelerate our collective understanding of these models.”

    Stella Biderman, an AI researcher involved with EleutherAI, a nonprofit open source AI project, says Meta’s license for Llama 2 limited the experiments that AI researchers can run with it, and adds that the Llama 3 license looks even more restrictive. “Meta releases weights but is famously restrictive about what you can do with them,” Biderman says.

    One part of the model’s license says that companies with “greater than 700 million monthly active users” must seek a special license from Meta—a clause apparently designed to prevent the project from helping the company’s closest rivals.

Even so, Llama 3 seems likely to spark a new burst of AI experimentation. Clément Delangue, CEO of Hugging Face, a repository for open AI models, including Llama 3, says developers created more than 30,000 variants of Llama 2. “I’m sure we’ll see a flurry of new models based on Llama 3 as well,” he says. “Awesome community move by Meta.”

Will Knight

  • WTF Fun Fact 13720 – Brain-Computer Interfaces

    Interactive technology took a significant leap forward with the latest development in brain-computer interfaces by engineers at The University of Texas at Austin. This new technology allows users to control video games using nothing but their thoughts, eliminating the need for traditional manual controls.

    Breaking Barriers with Brain-Computer Interfaces

    One of the groundbreaking aspects of this interface is its lack of need for individual calibration. Traditional brain-computer interfaces require extensive customization to align with each user’s unique neurological patterns. This new system, however, uses machine learning to adapt to individual users quickly, allowing for a much more user-friendly experience. This innovation drastically reduces setup time and makes the technology accessible to a broader audience, including those with motor disabilities.

    The interface works by using a cap fitted with electrodes that capture brain activity. These signals are then translated into commands that control game elements, such as steering a car in a racing game. This setup not only introduces a new way of gaming but also holds the potential for significant advancements in assistive technology.
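As a loose illustration of that decoding step, here is a toy sketch in Python; the random data and logistic-regression decoder are stand-ins of my own, not the UT Austin system.

```python
# Toy sketch of EEG decoding: band-power features in, steering command out.
# Random data stands in for real recordings; this is NOT the UT Austin system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))    # per-trial band-power features
y_train = rng.integers(0, 2, size=200)  # 0 = steer left, 1 = steer right

decoder = LogisticRegression().fit(X_train, y_train)

new_trial = rng.normal(size=(1, 16))    # features from a fresh EEG window
command = "left" if decoder.predict(new_trial)[0] == 0 else "right"
print("decoded command:", command)
```

The calibration-free adaptation described above would correspond to updating such a decoder from each new user’s data rather than training it per user from scratch.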

    Enhancing Neuroplasticity Through Gaming

    The research, led by José del R. Millán and his team, explores the technology and its impact on neuroplasticity—the brain’s ability to form new neural connections. The team’s efforts focus on harnessing this capability to improve brain function and quality of life for patients with neurological impairments.

Participants in the study engaged in two tasks: first, a complex car racing game requiring strategic thinking for maneuvers like turns, and then a simpler task involving balancing a digital bar. These activities were chosen to train the brain in different ways to leverage the interface’s capacity to translate neural commands into digital actions.

    Foundational Research and Future Applications

    The research represents foundational work in the field of brain-computer interfaces. Initially tested on subjects without motor impairments, the next step involves trials with individuals who have motor disabilities. This expansion is crucial for validating the interface’s potential clinical applications.

    Beyond gaming, the technology is poised to revolutionize how individuals with disabilities interact with their environments. The ongoing projects include developing a wheelchair navigable via thought and rehabilitation robots for hand and arm therapy, which were recently demonstrated at the South by Southwest Conference and Festivals.

    This brain-computer interface stands out not only for its technological innovation but also for its commitment to improving lives. It exemplifies the potential of using machine learning to enhance independence and quality of life for people with disabilities. As this technology progresses, it promises to open new avenues for accessibility and personal empowerment, making everyday tasks more manageable and integrating advanced assistive technologies into the fabric of daily living.

    Source: “Universal brain-computer interface lets people play games with just their thoughts” — ScienceDaily

WTF

  • To Build a Better AI Supercomputer, Let There Be Light

GlobalFoundries, a company that makes chips for others, including AMD and General Motors, previously announced a partnership with Lightmatter. Nick Harris, Lightmatter’s CEO, says his company is “working with the largest semiconductor companies in the world as well as the hyperscalers,” referring to the largest cloud companies like Microsoft, Amazon, and Google.

If Lightmatter or another company can reinvent the wiring of giant AI projects, a key bottleneck in the development of smarter algorithms might fall away. The use of more computation was fundamental to the advances that led to ChatGPT, and many AI researchers see the further scaling-up of hardware as being crucial to future advances in the field—and to hopes of ever reaching the vaguely specified goal of artificial general intelligence, or AGI, meaning programs that can match or exceed biological intelligence in every way.

Linking a million chips together with light might allow for algorithms several generations beyond today’s cutting edge, Harris says. “Passage is going to enable AGI algorithms,” he confidently suggests.

    The large data centers that are needed to train giant AI algorithms typically consist of racks filled with tens of thousands of computers running specialized silicon chips and a spaghetti of mostly electrical connections between them. Maintaining training runs for AI across so many systems—all connected by wires and switches—is a huge engineering undertaking. Converting between electronic and optical signals also places fundamental limits on chips’ abilities to run computations as one.

    Lightmatter’s approach is designed to simplify the tricky traffic inside AI data centers. “Normally you have a bunch of GPUs, and then a layer of switches, and a layer of switches, and a layer of switches, and you have to traverse that tree” to communicate between two GPUs, Harris says. In a data center connected by Passage, Harris says, every GPU would have a high-speed connection to every other chip.
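A back-of-envelope calculation shows why those switch layers exist in the first place: direct all-to-all wiring grows quadratically with cluster size. The cluster sizes below are my own illustration, not Lightmatter’s figures.

```python
# Why data centers use switch trees: direct all-to-all links scale as n^2.
# Cluster sizes are illustrative, not Lightmatter's figures.
for n in (8, 1_024, 100_000):
    links = n * (n - 1) // 2  # point-to-point links for full connectivity
    print(f"{n:>7} GPUs -> {links:,} direct links")
```

Electrical point-to-point wiring at that scale is impractical, which is the gap an optical interconnect like Passage aims to close.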

    Lightmatter’s work on Passage is an example of how AI’s recent flourishing has inspired companies large and small to try to reinvent key hardware behind advances like OpenAI’s ChatGPT. Nvidia, the leading supplier of GPUs for AI projects, held its annual conference last month, where CEO Jensen Huang unveiled the company’s latest chip for training AI: a GPU called Blackwell. Nvidia will sell the GPU in a “superchip” consisting of two Blackwell GPUs and a conventional CPU processor, all connected using the company’s new high-speed communications technology called NVLink-C2C.

    The chip industry is famous for finding ways to wring more computing power from chips without making them larger, but Nvidia chose to buck that trend. The Blackwell GPUs inside the company’s superchip are twice as powerful as their predecessors but are made by bolting two chips together, meaning they consume much more power. That trade-off, in addition to Nvidia’s efforts to glue its chips together with high-speed links, suggests that upgrades to other key components for AI supercomputers, like that proposed by Lightmatter, could become more important.

Will Knight

  • OpenAI’s GPT Store Is Triggering Copyright Complaints

For the past few months, Morten Blichfeldt Andersen has spent many hours scouring OpenAI’s GPT Store. Since it launched in January, the marketplace for bespoke bots has filled up with a deep bench of useful and sometimes quirky AI tools. Cartoon generators spin up New Yorker–style illustrations and vivid anime stills. Programming and writing assistants offer shortcuts for crafting code and prose. There’s also a color analysis bot, a spider identifier, and a dating coach called RizzGPT. Yet Blichfeldt Andersen is hunting only for one very specific type of bot: those built on his employer’s copyright-protected textbooks without permission.

    Blichfeldt Andersen is publishing director at Praxis, a Danish textbook purveyor. The company has been embracing AI and created its own custom chatbots. But it is currently engaged in a game of whack-a-mole in the GPT Store, and Blichfeldt Andersen is the man holding the mallet.

    “I’ve been personally searching for infringements and reporting them,” Blichfeldt Andersen says. “They just keep coming up.” He suspects the culprits are primarily young people uploading material from textbooks to create custom bots to share with classmates—and that he has uncovered only a tiny fraction of the infringing bots in the GPT Store. “Tip of the iceberg,” Blichfeldt Andersen says.

It is easy to find bots in the GPT Store whose descriptions suggest they might be tapping copyrighted content in some way, as TechCrunch noted in a recent article claiming OpenAI’s store was overrun with “spam.” Using copyrighted material without permission is permissible in some contexts, but in others rightsholders can take legal action. WIRED found a GPT called Westeros Writer that claims to “write like George R.R. Martin,” the creator of Game of Thrones. Another, Voice of Atwood, claims to imitate the writer Margaret Atwood. Yet another, Write Like Stephen, is intended to emulate Stephen King.

    When WIRED tried to trick the King bot into revealing the “system prompt” that tunes its responses, the output suggested it had access to King’s memoir On Writing. Write Like Stephen was able to reproduce passages from the book verbatim on demand, even noting which page the material came from. (WIRED could not make contact with the bot’s developer, because it did not provide an email address, phone number, or external social profile.)

    OpenAI spokesperson Kayla Wood says it responds to takedown requests against GPTs made with copyrighted content but declined to answer WIRED’s questions about how frequently it fulfills such requests. She also says the company proactively looks for problem GPTs. “We use a combination of automated systems, human review, and user reports to find and assess GPTs that potentially violate our policies, including the use of content from third parties without necessary permission,” Wood says.

    New Disputes

    The GPT store’s copyright problem could add to OpenAI’s existing legal headaches. The company is facing a number of high-profile lawsuits alleging copyright infringement, including one brought by The New York Times and several brought by different groups of fiction and nonfiction authors, including big names like George R.R. Martin.

    Chatbots offered in OpenAI’s GPT Store are based on the same technology as its own ChatGPT but are created by outside developers for specific functions. To tailor their bot, a developer can upload extra information that it can tap to augment the knowledge baked into OpenAI’s technology. The process of consulting this additional information to respond to a person’s queries is called retrieval-augmented generation, or RAG. Blichfeldt Andersen is convinced that the RAG files behind the bots in the GPT Store are a hotbed of copyrighted materials uploaded without permission.
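In rough outline, the retrieval step works like the toy sketch below. This is a generic illustration of RAG, not OpenAI’s hosted implementation, and the chunks and query are invented.

```python
# Toy RAG sketch: find the uploaded chunk most similar to the query,
# then prepend it to the prompt. Generic illustration, not OpenAI's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = [  # stand-ins for passages from a developer's uploaded files
    "Chapter 1 introduces basic microeconomic concepts ...",
    "Chapter 2 covers supply and demand curves ...",
    "Chapter 3 discusses market equilibrium and price floors ...",
]
query = "Explain supply and demand."

vec = TfidfVectorizer().fit(chunks + [query])
scores = cosine_similarity(vec.transform([query]), vec.transform(chunks))[0]
best = chunks[scores.argmax()]  # most relevant chunk wins

prompt = f"Use this source material:\n{best}\n\nQuestion: {query}"
# `prompt` is what actually goes to the language model, which is why
# verbatim copyrighted text in the RAG files can surface in replies.
print(prompt)
```

That last point is why uploaded textbook chapters can come back out of a bot word for word, as WIRED saw with the Stephen King bot.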

Kate Knibbs

  • Treasury accuses banks of ‘insufficient data sharing’ on fraud

    In a report last week on AI and cybersecurity, the U.S. Department of the Treasury said that, while banks tend to share plenty of information with each other for the purposes of cybersecurity and anti-money laundering, they have practiced “insufficient data sharing” in the area of fraud prevention.

The dearth of banks sharing their fraud data undercuts smaller banks’ efforts to train anti-fraud AI models that many banks hope will replace rule-based engines, deny lists and device fingerprinting in the fight to detect and prevent transaction-related crimes such as money laundering and fraud.

    Treasury acknowledged a general gap in the data available to financial institutions for training AI models of all kinds, but the report said the gap is “significant in the area of fraud prevention,” which the report contrasted with robust cybersecurity data sharing efforts led by organizations including the Financial Services Information Sharing and Analysis Center.

    “The accuracy of machine learning-based systems in identifying and modeling fraudulent behavioral patterns correlates directly with the scale, scope (variety of datasets) and quality of data available to firms,” the report reads.

    The report said “most financial institutions” interviewed for the report, which was based on 42 interviews, expressed the need for better collaboration in the domain of fraud prevention, particularly as fraudsters themselves have been using AI and machine learning technologies.

    “Sharing of fraud data would support the development of sophisticated fraud detection tools and better identification of emerging trends or risks,” the report said.

    However, while such information sharing could improve fraud detection, it “also raises privacy concerns,” the report said, as it would involve collecting and storing sensitive financial information including transaction histories and personal behaviors. Data anonymization and algorithmic transparency — i.e., helping customers understand how their data is used — could mitigate these issues, the report said.

Treasury said in the report that the Financial Crimes Enforcement Network, which is a bureau of Treasury, might be well positioned to support fraud information-sharing efforts between banks, to ensure that smaller financial institutions “are benefitting from the advancements in AI technology development for countering fraud.” Core providers could also play this role, according to the report.

    While many vendors offer smaller banks access to AI-based transaction monitoring systems, Treasury’s report said internal development at banks “offers advantages in oversight and control of the development, testing, transparency, and governance of models and access to sufficient data monitoring for model risk management evaluation purposes.”

    For the moment, the report cited efforts by two institutions that are already working to close the fraud information-sharing gap: The Bank Policy Institute and the American Bankers Association.

    The Bank Policy Institute, a public policy research and advocacy organization, told Congress in February that, as part of the effort to promote and enable data and intelligence sharing between institutions, the institute has established BITS, an “executive-level forum” for bankers to collaborate on policy advocacy, promote critical infrastructure resilience, strengthen cybersecurity and reduce fraud.

    The American Bankers Association, a trade organization and bank industry lobbying group, is set to launch an information-sharing exchange in the first half of this year, which the association says will help member banks fight fraud.

    As an example of how the exchange will work, in fraud cases known as business-email compromise, the platform will enable banks to alert their peers with key information about the account of the alleged fraudster, said Paul Benda, executive vice president of risk, cybersecurity and fraud at the American Bankers Association.

    “The idea here is to allow banks to share this information amongst other banks in a near-real-time manner so they can integrate this data into their payment flows, into their risk-scoring systems, to stop that money from going out,” Benda said.

    The association said its long-term goal is to make the exchange available to all financial institutions that are covered by Section 314(b) of the Patriot Act, which gives financial institutions the right to share information that could be used to identify transactions that might involve money laundering or terrorist funding.

    As for the consequences of failing to promote adequate fraud information sharing, several institutions Treasury interviewed said “there may be a risk of future consolidation towards larger institutions” if “smaller financial institutions are not supported in closing this critical gap,” according to the report.

Carter Pape

  • Here’s Proof the AI Boom Is Real: More People Are Tapping ChatGPT at Work

    Ever since the rollout of ChatGPT in November 2022, many people in science, business, and media have been obsessed with AI. A cursory look at my own published work during that period fingers me as among the guilty. My defense is that I share with those other obsessives a belief that large language models are the leading edge of an epochal transformation. Maybe I’m swimming in generative Kool-Aid, but I believe AI advances within our grasp will change not only the way we work, but the structure of businesses, and ultimately the course of humanity.

    Not everyone agrees, and in recent months there’s been a backlash. AI has been oversold and overhyped, some experts now opine. Self-styled AI-critic-in-chief Gary Marcus recently said of the LLM boom, “It wouldn’t surprise me if, to some extent, this whole thing fizzled out.” Others claim that AI is mired in the “trough of disillusionment.”

    This week we got some data that won’t resolve the larger questions but provides a snapshot of how the US, if not the world, views the advent of AI and large language models. The Pew Research Center—which did similar probes during the rise of the internet, social media, and mobile devices—released a study of how ChatGPT was being used, regarded, and trusted. The sample was taken between February 7 and 11 of this year.

    Some of the numbers at first seem to indicate that the LLM controversy might be a parochial disagreement that most people don’t care about. A third of Americans haven’t heard of ChatGPT. Just under a quarter have used it. Oh, and for all the panic about how AI is going to flood the public square with misinformation about the 2024 election? So far, only 2 percent of Americans have used ChatGPT to get information about the presidential election season already underway.

    More broadly, though, data from the survey indicates that we’re seeing a powerful technology whose rise is just beginning. If you accept Pew’s sample as indicative of all Americans, millions of people are indeed familiar with ChatGPT. And one thing in particular stands out: While 17 percent of respondents said they have used it for entertainment and an identical number says they’ve tried it to learn something new, a full 20 percent of adults say that they have used ChatGPT for work. That’s up dramatically from the 12 percent who responded affirmatively when the same question was asked six months earlier—a rise of two-thirds.

    When I spoke to Colleen McClain, a Pew research associate involved in the study, she agreed that it seems to track with other huge technological shifts. “If you look at our trend charts over time on internet access, smartphones, social media, certainly some of them show this uptick,” she says. For some technologies there had been a leveling off, she adds. But in the ones she mentioned, the plateau came only when so many people came on board that there weren’t many stragglers left.

What’s crazy about that sudden jump in ChatGPT business use from 12 percent to 20 percent is that we’re only at the beginning stages of humans collaborating with these models. And the tools to fully make use of ChatGPT are in a nascent state. That’s changing fast. OpenAI, ChatGPT’s creator, is going full tilt, and AI giants Microsoft and Google are still in the process of diverting their workforces to redesign every product line to integrate conversational AI. And startups like Sierra, which is building agents for corporate customers, are enabling bespoke usages that take advantage of multiple models. As this process continues, more people will use AI tools. And since the foundation models are getting exponentially better—am I hearing that GPT-5 will show up this year?—that will make them even more compelling. This raises the possibility that the quality of virtually all work will reside in how well one can draw out the talents of a robot collaborator.

What past technology can help us understand the trajectory of the rocket ship we’re on? While the near limitless ceiling of AI makes it hard to find an analog, I suggest the uptake of spreadsheets. Dan Bricklin and Bob Frankston invented them in 1978, and a year later the concept was embodied in VisiCalc, which at the time ran only on Apple computers. Spreadsheets had a phenomenal and disruptive effect on the business world. More than mere accounting tools, they triggered an era of business innovation and shook up the flow of information inside companies. Yet it took a few years before the business world widely adopted spreadsheets. The turning point came with a new and more powerful product called Lotus 1-2-3, which ran on the IBM PC. The current and near-future startups in the AI world, like Sierra, are all hoping to become the Lotuses of our era—but also to be much more consequential and lasting. Spreadsheets are largely limited to the business domain. LLMs can seemingly mess with anything.

Steven Levy

  • Inside the Creation of the World’s Most Powerful Open Source AI Model

    This past Monday, about a dozen engineers and executives at data science and AI company Databricks gathered in conference rooms connected via Zoom to learn if they had succeeded in building a top artificial intelligence language model. The team had spent months, and about $10 million, training DBRX, a large language model similar in design to the one behind OpenAI’s ChatGPT. But they wouldn’t know how powerful their creation was until results came back from the final tests of its abilities.

    “We’ve surpassed everything,” Jonathan Frankle, chief neural network architect at Databricks and leader of the team that built DBRX, eventually told the team, which responded with whoops, cheers, and applause emojis. Frankle usually steers clear of caffeine but was taking sips of iced latte after pulling an all-nighter to write up the results.

    Databricks will release DBRX under an open source license, allowing others to build on top of its work. Frankle shared data showing that across roughly a dozen benchmarks measuring the AI model’s ability to answer general knowledge questions, perform reading comprehension, solve vexing logical puzzles, and generate high-quality code, DBRX was better than every other open source model available.

    AI decision makers: Jonathan Frankle, Naveen Rao, Ali Ghodsi, and Hanlin Tang. Photograph: Gabriela Hasbun

    It outshined Meta’s Llama 2 and Mistral’s Mixtral, two of the most popular open source AI models available today. “Yes!” shouted Ali Ghodsi, CEO of Databricks, when the scores appeared. “Wait, did we beat Elon’s thing?” Frankle replied that they had indeed surpassed the Grok AI model recently open-sourced by Musk’s xAI, adding, “I will consider it a success if we get a mean tweet from him.”

    To the team’s surprise, on several scores DBRX was also shockingly close to GPT-4, OpenAI’s closed model that powers ChatGPT and is widely considered the pinnacle of machine intelligence. “We’ve set a new state of the art for open source LLMs,” Frankle said with a super-sized grin.

    Building Blocks

    By open-sourcing DBRX, Databricks is adding further momentum to a movement that is challenging the secretive approach of the most prominent companies in the current generative AI boom. OpenAI and Google keep the code for their GPT-4 and Gemini large language models closely held, but some rivals, notably Meta, have released their models for others to use, arguing that it will spur innovation by putting the technology in the hands of more researchers, entrepreneurs, startups, and established businesses.

    Databricks says it also wants to open up about the work involved in creating its open source model, something Meta has not done; the company withheld some key details about the creation of its Llama 2 model. Databricks is releasing a blog post detailing the work involved in creating the model, and it also invited WIRED to spend time with Databricks engineers as they made key decisions during the final stages of the multimillion-dollar process of training DBRX. That provided a glimpse of how complex and challenging it is to build a leading AI model—but also how recent innovations in the field promise to bring down costs. That, combined with the availability of open source models like DBRX, suggests that AI development isn’t about to slow down any time soon.

    Ali Farhadi, CEO of the Allen Institute for AI, says greater transparency around the building and training of AI models is badly needed. The field has become increasingly secretive in recent years as companies have sought an edge over competitors. Transparency is especially important when there is concern about the risks that advanced AI models could pose, he says. “I’m very happy to see any effort in openness,” Farhadi says. “I do believe a significant portion of the market will move towards open models. We need more of this.”

    Will Knight

  • Large Language Models’ Emergent Abilities Are a Mirage

    The original version of this story appeared in Quanta Magazine.

    Two years ago, in a project called the Beyond the Imitation Game benchmark, or BIG-bench, 450 researchers compiled a list of 204 tasks designed to test the capabilities of large language models, which power chatbots like ChatGPT. On most tasks, performance improved predictably and smoothly as the models scaled up—the larger the model, the better it got. But on other tasks, the gain in ability wasn’t smooth: performance remained near zero for a while, then leaped. Other studies found similar leaps in ability.

    The authors described this as “breakthrough” behavior; other researchers have likened it to a phase transition in physics, like when liquid water freezes into ice. In a paper published in August 2022, researchers noted that these behaviors are not only surprising but unpredictable, and that they should inform the evolving conversations around AI safety, potential, and risk. They called the abilities “emergent,” a word that describes collective behaviors that only appear once a system reaches a high level of complexity.

    But things may not be so simple. A new paper by a trio of researchers at Stanford University posits that the sudden appearance of these abilities is just a consequence of the way researchers measure the LLM’s performance. The abilities, they argue, are neither unpredictable nor sudden. “The transition is much more predictable than people give it credit for,” said Sanmi Koyejo, a computer scientist at Stanford and the paper’s senior author. “Strong claims of emergence have as much to do with the way we choose to measure as they do with what the models are doing.”

    We’re only now seeing and studying this behavior because of how large these models have become. Large language models train by analyzing enormous data sets of text—words from online sources including books, web searches, and Wikipedia—and finding links between words that often appear together. The size is measured in terms of parameters, roughly analogous to all the ways that words can be connected. The more parameters, the more connections an LLM can find. GPT-2 had 1.5 billion parameters, while GPT-3.5, the LLM that powers ChatGPT, uses 350 billion. GPT-4, which debuted in March 2023 and now underlies Microsoft Copilot, reportedly uses 1.75 trillion.

    That rapid growth has brought an astonishing surge in performance and efficacy, and no one is disputing that large enough LLMs can complete tasks that smaller models can’t, including ones for which they weren’t trained. The trio at Stanford who cast emergence as a “mirage” recognize that LLMs become more effective as they scale up; in fact, the added complexity of larger models should make it possible to get better at more difficult and diverse problems. But they argue that whether this improvement looks smooth and predictable or jagged and sharp results from the choice of metric—or even a paucity of test examples—rather than the model’s inner workings.
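
    The crux of that argument can be seen with a little arithmetic. The sketch below uses made-up numbers (the accuracies, answer length, and scoring rule are illustrative assumptions, not the Stanford team’s data) to show how a smoothly improving model can appear to “emerge” under an all-or-nothing metric:

    ```python
    import numpy as np

    # Hypothetical stand-in for scale: per-token accuracy that improves
    # smoothly and predictably as models grow.
    per_token_acc = np.linspace(0.50, 0.99, 9)

    # An all-or-nothing task: the answer counts only if every one of its
    # 10 tokens is correct (say, a 10-digit sum).
    answer_len = 10
    exact_match = per_token_acc ** answer_len

    for tok, em in zip(per_token_acc, exact_match):
        print(f"per-token accuracy {tok:.2f} -> exact-match {em:.3f}")

    # Exact-match hugs zero for most of the range, then shoots upward,
    # even though the underlying per-token skill never jumped at all.
    ```

    Plotted against model size, the first column looks like steady progress and the second like a phase transition; swap exact-match for a metric that awards partial credit, and the apparent emergence flattens back into a smooth curve.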

    Stephen Ornes

  • Perplexity’s Founder Was Inspired by Sundar Pichai. Now They’re Competing to Reinvent Search

    Aravind Srinivas credits Google CEO Sundar Pichai for giving him the freedom to eat eggs.

    Srinivas remembers the moment seven years ago when an interview with Pichai popped up in his YouTube feed. His vegetarian upbringing in India had excluded eggs, as it had for many in the country, but now, in his early twenties, Srinivas wanted to start eating more protein. Here was Pichai, a hero to many aspiring entrepreneurs in India, casually describing his morning: waking up, reading newspapers, drinking tea—and eating an omelet.

    Srinivas shared the video with his mother. OK, she said: You can eat eggs.

    Pichai’s influence reaches far beyond Srinivas’ diet. He too is the CEO of a search company, Perplexity AI, maker of one of the most hyped-up apps of the generative AI era. Srinivas is still taking cues from Pichai, the leader of the world’s largest search engine, but his admiration is more complicated.

    “It’s kind of a rivalry now,” Srinivas says. “It’s awkward.”

    Srinivas and Pichai both grew up in Chennai, India, in the south Indian state of Tamil Nadu—though the two were born 22 years apart. By the time Srinivas was working toward his PhD in computer science at UC Berkeley, Pichai had been crowned chief executive of Google.

    For his first research internship, Srinivas worked at Google-owned DeepMind in London. Pichai also got a new job that year, becoming CEO of Alphabet as well as Google. Srinivas found the work at DeepMind invigorating, but he was dismayed to find that the flat he had rented sight unseen was a disaster—a “crappy home, with rats,” he says—so he sometimes slept in DeepMind’s offices.

    He discovered in the office library a book about the development and evolution of Google, called In the Plex, penned by WIRED editor at large Steven Levy. Srinivas read it over and over, deepening his appreciation of Google and its innovations. “Larry and Sergey became my entrepreneurial heroes,” Srinivas says. (He offered to list In the Plex’s chapters and cite passages from memory; WIRED took his word for it.)

    Shortly afterwards, in 2020, Srinivas ended up working at Google’s headquarters in Mountain View, California, as a research intern working on machine learning for computer vision. Slowly, Srinivas was making his way through the Google universe, and putting some of his AI research work to good use.

    Then, in 2022, Srinivas and three cofounders—Denis Yarats, Johnny Ho, and Andy Konwinski—teamed up to try to develop a new approach to search using AI. They started out working on algorithms that could translate natural language into the database language SQL, but determined this was too narrow (or nerdy). Instead they pivoted to a product that combined a traditional search index with the relatively new power of large language models. They called it Perplexity.

    Perplexity is sometimes described as an “answer” engine rather than a search engine, because of the way it uses AI text generation to summarize results. New searches create conversational “threads” on a particular topic. Type in a query, and Perplexity responds with follow-up questions, asking you to refine your request. It eschews direct links in favor of text-based or visual answers that don’t require you to click away to another site to get information.
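
    In outline, an answer engine of this kind layers a language model on top of a retrieval step. The sketch below is a minimal, hypothetical illustration of that retrieve-then-summarize pattern, not Perplexity’s actual implementation; the toy corpus, the keyword-overlap scoring, and the stubbed-out model call are all assumptions for the example:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Doc:
        url: str
        text: str

    # A toy two-document "web index" standing in for a real search backend.
    CORPUS = [
        Doc("https://example.com/a",
            "Large language models generate text by predicting likely tokens."),
        Doc("https://example.com/b",
            "A search index maps query terms to the documents that contain them."),
    ]

    def retrieve(query: str, corpus: list[Doc], k: int = 2) -> list[Doc]:
        """Rank documents by naive keyword overlap with the query."""
        terms = set(query.lower().split())
        return sorted(corpus,
                      key=lambda d: -len(terms & set(d.text.lower().split())))[:k]

    def build_prompt(query: str) -> str:
        """Assemble an LLM prompt that asks for an answer with [n] citations."""
        hits = retrieve(query, CORPUS)
        sources = "\n".join(f"[{i + 1}] {d.url}: {d.text}"
                            for i, d in enumerate(hits))
        # A real system would send this prompt to a language model;
        # this sketch stops short of that call.
        return (f"Answer using only these sources, citing them as [n]:\n"
                f"{sources}\n\nQuestion: {query}")

    print(build_prompt("how do language models generate text?"))
    ```

    A production system would replace the keyword scorer with a full search index and embeddings, and would send the assembled prompt to a language model that writes the cited summary.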

    Lauren Goode

  • Google DeepMind’s New AI Model Can Help Soccer Teams Take the Perfect Corner

    Working with player-tracking data from 7,176 corners taken in the Premier League during 2020 and 2021, the researchers began by representing the arrangement of players as a graph, with each player as a node encoding that player’s position, movement, height, and weight, and the relationships between players as the lines connecting them. Then they used an approach called geometric deep learning, which takes advantage of the symmetry of a soccer field to shrink down the amount of processing the neural network needed to do. (This isn’t a new strategy—a similar approach was used in DeepMind’s influential AlphaGo research.)
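
    To make the encoding concrete, here is a toy sketch of that kind of player graph, not DeepMind’s actual TacticAI pipeline; the attribute names, the made-up player values, and the fully connected edge structure are all illustrative assumptions:

    ```python
    import networkx as nx

    # Made-up sample of four players; a real corner snapshot would have all 22.
    # (id, team, x, y, vx, vy, height_cm, weight_kg)
    players = [
        ("attacker_9",  "attacking", 44.0, 30.0,  1.2, -0.4, 185, 80),
        ("attacker_10", "attacking", 40.5, 28.0,  0.8,  0.1, 178, 74),
        ("defender_4",  "defending", 45.0, 31.0, -0.5,  0.3, 190, 85),
        ("keeper_1",    "defending", 52.0, 34.0,  0.0,  0.0, 193, 88),
    ]

    G = nx.Graph()
    for pid, team, x, y, vx, vy, height, weight in players:
        # Each player is a node; their measurements become node features.
        G.add_node(pid, team=team, pos=(x, y), velocity=(vx, vy),
                   height=height, weight=weight)

    # Fully connect the players so the model can weigh any pairwise
    # relationship: marking assignments, spacing, possible passing lanes.
    ids = [p[0] for p in players]
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            G.add_edge(a, b)

    print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
    ```

    A graph neural network then passes information along those edges to make its predictions; the geometric-deep-learning step the researchers describe exploits the pitch’s reflection symmetries so the model doesn’t have to relearn the same corner pattern mirrored left-to-right or end-to-end.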

    The resulting model led to the creation of a number of tools that could be useful to soccer coaches. Based on the arrangement of players at the moment the kick is taken, TacticAI can predict which player is most likely to make the first contact on the ball, and whether a shot will be taken as a result. It can then generate recommendations for the best ways to adjust player position and movement to either maximize the chance of a shot being taken (for the attacking team) or minimize it (for the defending team)—shifting a defender across to cover the near post, for instance, or putting a man on the edge of the area.

    The soccer experts at Liverpool particularly liked how TacticAI’s recommendations could pinpoint attackers who were critical for the success of a particular tactic, or defenders who were “asleep at the wheel,” Veličković says. Analysts spend hours sifting through video footage looking for weak points in their opponents’ defensive setups that they can target, or trying to find holes in their own team’s performances to work on in training. “But it’s really hard to track across 22 people, across lots of different situations,” Veličković says. “If you have a tool like this it immediately helps you see which players are not moving in the right way, which players should be doing something different.”

    TacticAI can also be used to find other corners which feature a similar pattern of players and movement, again saving hours of time for analysts. According to DeepMind, the suggestions made by the model were rated as useful by Liverpool coaches twice as often as current techniques, which are based only on the physical coordinates of the players and don’t take into account their movement or physical attributes. (Two corners might look the same, but if the tall striker is at the edge of the box in one and running towards the near post on the other, that’s probably important.)

    One thing it’s also doing, according to DeepMind’s Zhe Wang, another lead contributor to the paper, is making up for the lack of suitable language to describe the huge range of different things that can happen at a corner. Unlike American football, which has a deep and storied nomenclature for different plays and running routes, the choreographing of soccer set pieces in such detail is a relatively new phenomenon. “Different coaches may have their own expressions for the patterns of corner kicks that they observe,” says Wang. “So with TacticAI, we hope to use the power of deep learning to establish a common language to describe patterns of corner kicks.”

    In the future, according to the paper, the researchers hope to build TacticAI into a natural language interface so that coaches can query it in text and get answers to the problems they’re trying to solve on the field. Veličković says that the model could be used during a game to help coaches refine their corner routines on the fly, but that it’s most likely to be useful in the days leading up to a match, where it’ll free up coaches’ time. “We don’t want to build AI systems that replace experts,” says Veličković. “We want to build AI systems that amplify the capabilities of experts so that they are then able to do their job a lot more efficiently and have more time for the creative part of coaching.”

    Amit Katwala

  • Banking on AI: How financial institutions are deploying new tech

    Lack of understanding remains a key hurdle for adopting traditional and generative artificial intelligence-powered tools, but banks and credit unions are still eager to use the technology, according to a new report from Arizent.

    Despite both consumer and institutional interest in artificial intelligence continuing to grow across the financial services industry, the majority of leaders are still unsure about the technology and its potential uses — leaving a select group of executives to lead their organizations into the fray. 

    Arizent, the publisher of American Banker, surveyed 127 financial institution professionals to find out how adoption of traditional and generative AI is unfolding in the industry with respect to applications, risks versus rewards, impact on the workforce, and more.

    Respondents represent banks ranging from less than $10 billion of assets to more than $100 billion of assets, as well as credit unions of all asset sizes.

    The results showed that lack of familiarity is the largest hurdle to adoption. Tech-minded changemakers helping prepare their organizations for AI said the top two things they are doing are researching providers and attending industry conferences or events on AI. They are also creating working groups for responsible AI usage and educating stakeholders.

    Among banks and credit unions that have begun using AI, many have adopted tools for navigating contract negotiations, improving loan underwriting procedures, speeding up internal development projects and more.

    But with the White House’s executive order on AI and uncertainty about what bank regulators might say about the technology, financial institutions and tech vendors alike are concerned about compliance risk.

    James McPhillips, partner at Clifford Chance, said regulators abroad are more progressive than their American counterparts when it comes to overseeing the intersection of banking and technology, pointing to the recent passage of the European Union’s Artificial Intelligence Act. This disparity has left financial institutions pondering what similar efforts will look like domestically.

    “As it stands, federal regulators appear to be planning to use existing laws to regulate the use and deployment of AI, but banks have not yet seen how those regulators will actually enforce those regulations in the context of AI,” McPhillips said.

    Below are highlights of the report’s findings, which offer insight into how leaders are getting better informed about the implications of AI and whether it can pave the way for future innovation.

    Frank Gargano
