ReportWire

Tag: this week in AI

  • This Week in AI: Do shoppers actually want Amazon’s GenAI? | TechCrunch

    Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

    This week, Amazon announced Rufus, an AI-powered shopping assistant trained on the e-commerce giant’s product catalog as well as information from around the web. Rufus lives inside Amazon’s mobile app, helping with finding products, performing product comparisons and getting recommendations on what to buy.

    “From broad research at the start of a shopping journey such as ‘what to consider when buying running shoes?’ to comparisons such as ‘what are the differences between trail and road running shoes?’ … Rufus meaningfully improves how easy it is for customers to find and discover the best products to meet their needs,” Amazon writes in a blog post.

    That’s all great. But my question is: who’s really clamoring for it?

    I’m not convinced that GenAI, particularly in chatbot form, is a piece of tech the average person cares about — or even thinks about. Surveys support me in this. Last August, the Pew Research Center found that among those in the U.S. who’ve heard of OpenAI’s GenAI chatbot ChatGPT (18% of adults), only 26% have tried it. Usage varies by age, of course, with a greater percentage of young people (under 50) reporting having used it than older adults. But the fact remains that the vast majority don’t know about, or don’t care to use, what’s arguably the most popular GenAI product out there.

    GenAI has its well-publicized problems, among them a tendency to make up facts, infringe on copyrights and spout bias and toxicity. Amazon’s previous attempt at a GenAI chatbot, Amazon Q, struggled mightily — revealing confidential information within the first day of its release. But I’d argue GenAI’s biggest problem now — at least from a consumer standpoint — is that there are few universally compelling reasons to use it.

    Sure, GenAI like Rufus can help with specific, narrow tasks like shopping by occasion (e.g. finding clothes for winter), comparing product categories (e.g. the difference between lip gloss and oil) and surfacing top recommendations (e.g. gifts for Valentine’s Day). Is it addressing most shoppers’ needs, though? Not according to a recent poll from ecommerce software startup Namogoo.

    Namogoo, which asked hundreds of consumers about their needs and frustrations when it comes to online shopping, found that product images were by far the most important contributor to a good ecommerce experience, followed by product reviews and descriptions. The respondents ranked search as fourth-most important and “simple navigation” fifth; remembering preferences, information and shopping history was second-to-last.

    The implication is that people generally shop with a product in mind; that search is an afterthought. Maybe Rufus will shake up the equation. I’m inclined to think not, particularly if the rollout is rocky (and it well might be, given the reception of Amazon’s other GenAI shopping experiments) — but stranger things have happened, I suppose.

    Here are some other AI stories of note from the past few days:

    • Google Maps experiments with GenAI: Google Maps is introducing a GenAI feature to help you discover new places. Leveraging large language models (LLMs), the feature analyzes the over 250 million locations on Google Maps and contributions from more than 300 million Local Guides to pull up suggestions based on what you’re looking for. 
    • GenAI tools for music and more: In other Google news, the tech giant released GenAI tools for creating music, lyrics and images and brought Gemini Pro, one of its more capable LLMs, to users of its Bard chatbot globally.
    • New open AI models: The Allen Institute for AI, the nonprofit AI research institute founded by late Microsoft co-founder Paul Allen, has released several GenAI language models it claims are more “open” than others — and, importantly, licensed in such a way that developers can use them unfettered for training, experimentation and even commercialization.
    • FCC moves to ban AI-generated calls: The FCC is proposing that using voice cloning tech in robocalls be ruled fundamentally illegal, making it easier to charge the operators of these frauds.
    • Shopify rolls out image editor: Shopify is releasing a GenAI media editor to enhance product images. Merchants can select a type from seven styles or type a prompt to generate a new background.
    • GPTs, invoked: OpenAI is pushing adoption of GPTs, third-party apps powered by its AI models, by enabling ChatGPT users to invoke them in any chat. Paid users of ChatGPT can bring GPTs into a conversation by typing “@” and selecting a GPT from the list. 
    • OpenAI partners with Common Sense: In an unrelated announcement, OpenAI said that it’s teaming up with Common Sense Media, the nonprofit organization that reviews and ranks the suitability of various media and tech for kids, to collaborate on AI guidelines and education materials for parents, educators and young adults.
    • Autonomous browsing: The Browser Company, which makes the Arc Browser, is on a quest to build an AI that surfs the web for you and gets you results while bypassing search engines, Ivan writes.

    More machine learnings

    Does an AI know what is “normal” or “typical” for a given situation, medium, or utterance? In a way, large language models are uniquely suited to identifying what patterns are most like other patterns in their datasets. And indeed that is what Yale researchers found in their research of whether an AI could identify “typicality” of one thing in a group of others. For instance, given 100 romance novels, which is the most and which the least “typical” given what the model has stored about that genre?
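    The underlying intuition, that the most “typical” item is the one closest to the group’s centroid in some embedding space, can be sketched with a toy bag-of-words embedding. (A real study would use a model like BERT or an LLM; the `embed` helper below is purely illustrative.)

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; real work would use a learned model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def typicality_ranking(docs):
    vecs = [embed(d) for d in docs]
    # The group "centroid" is just the sum of all word counts.
    centroid = Counter()
    for v in vecs:
        centroid.update(v)
    # A document is "typical" if it sits close to that centroid.
    scores = [cosine(v, centroid) for v in vecs]
    return sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)

docs = [
    "love and romance in the city",
    "love and romance at the beach",
    "a technical manual for printers",
]
ranking = typicality_ranking(docs)
print(ranking)  # the printer manual should rank last
```

    The same ranking idea scales from three sentences to 100 romance novels; only the embedding model changes.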

    Interestingly (and frustratingly), professors Balázs Kovács and Gaël Le Mens worked for years on their own model, a BERT variant, and just as they were about to publish, ChatGPT came out and in many ways duplicated exactly what they’d been doing. “You could cry,” Le Mens said in a news release. But the good news is that the new AI and their old, tuned model both suggest that indeed, this type of system can identify what is typical and atypical within a dataset, a finding that could be helpful down the line. The two do point out that although ChatGPT supports their thesis in practice, its closed nature makes it difficult to work with scientifically.

    Scientists at the University of Pennsylvania were looking at another odd concept to quantify: common sense. They asked thousands of people to rate statements such as “you get what you give” or “don’t eat food past its expiry date” on how “commonsensical” they were. Unsurprisingly, although patterns emerged, there were “few beliefs recognized at the group level.”

    “Our findings suggest that each person’s idea of common sense may be uniquely their own, making the concept less common than one might expect,” co-lead author Mark Whiting says. Why is this in an AI newsletter? Because like pretty much everything else, it turns out that something as “simple” as common sense, which one might expect AI to eventually have, is not simple at all! But by quantifying it this way, researchers and auditors may be able to say how much common sense an AI has, or what groups and biases it aligns with.

    Speaking of biases, many large language models are pretty loose with the info they ingest, meaning if you give them the right prompt, they can respond in ways that are offensive, incorrect, or both. Latimer is a startup aiming to change that with a model that’s intended to be more inclusive by design.

    Though there aren’t many details about their approach, Latimer says that their model uses Retrieval Augmented Generation (thought to improve responses) and a bunch of unique licensed content and data sourced from lots of cultures not normally represented in these databases. So when you ask about something, the model doesn’t go back to some 19th-century monograph to answer you. We’ll learn more about the model when Latimer releases more info.
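    Latimer hasn’t published implementation details, but the general RAG pattern is easy to sketch: retrieve relevant passages from a curated corpus, then ground the model’s prompt in them. The toy retriever below scores documents by word overlap, a stand-in for the vector search a production system would use; the corpus and prompt format are hypothetical.

```python
def retrieve(query, corpus, k=2):
    # Score each document by word overlap with the query (a stand-in for
    # real vector search over licensed, culturally diverse sources).
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, corpus):
    # Retrieval Augmented Generation: ground the model's answer in
    # retrieved passages rather than whatever its training data holds.
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Jollof rice is a staple dish across West Africa.",
    "The printing press was invented in the 15th century.",
    "Highlife is a music genre that originated in Ghana.",
]
prompt = build_prompt("What is jollof rice?", corpus)
print(prompt)
```

    The payoff is that answers are anchored to the curated corpus, which is exactly the lever Latimer says it is pulling with its licensed, culturally broader data.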

    Image Credits: Purdue / Bedrich Benes

    One thing an AI model can definitely do, though, is grow trees. Fake trees. Researchers at Purdue’s Institute for Digital Forestry (where I would like to work, call me) made a super-compact model that simulates the growth of a tree realistically. This is one of those problems that seems simple but isn’t; you can simulate tree growth that works if you’re making a game or movie, sure, but what about serious scientific work? “Although AI has become seemingly pervasive, thus far it has mostly proved highly successful in modeling 3D geometries unrelated to nature,” said lead author Bedrich Benes.

    Their new model is only about a megabyte, which is extremely small for an AI system. But of course DNA is even smaller and denser, and it encodes the whole tree, root to bud. The model still works in abstractions — it’s by no means a perfect simulation of nature — but it does show that the complexities of tree growth can be encoded in a relatively simple model.

    Last up, a robot from Cambridge University researchers that can read braille faster than a human, with 90% accuracy. Why, you ask? Actually, it’s not for blind folks to use — the team decided this was an interesting and easily quantified task to test the sensitivity and speed of robotic fingertips. If it can read braille just by zooming over it, that’s a good sign! You can read more about this interesting approach here. Or watch the video below:

    Kyle Wiggers

  • This week in AI: AI ethics keeps falling by the wayside | TechCrunch

    Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

    This week in AI, the news cycle finally (finally!) quieted down a bit ahead of the holiday season. But that’s not to suggest there was a dearth of things to write about, a blessing and a curse for this sleep-deprived reporter.

    A particular headline from the AP caught my eye this morning: “AI image-generators are being trained on explicit photos of children.” The gist of the story is, LAION, a data set used to train many popular open source and commercial AI image generators, including Stable Diffusion and Imagen, contains thousands of images of suspected child sexual abuse. A watchdog group based at Stanford, the Stanford Internet Observatory, worked with anti-abuse charities to identify the illegal material and report the links to law enforcement.

    Now, LAION, a nonprofit, has taken down its training data and pledged to remove the offending materials before republishing it. But the incident underlines just how little thought is being put into generative AI products as competitive pressures ramp up.

    Thanks to the proliferation of no-code AI model creation tools, it’s becoming frightfully easy to train generative AI on any data set imaginable. That’s a boon for startups and tech giants alike to get such models out the door. With the lower barrier to entry, however, comes the temptation to cast aside ethics in favor of an accelerated path to market.

    Ethics is hard — there’s no denying that. Combing through the thousands of problematic images in LAION, to take this week’s example, won’t happen overnight. And ideally, developing AI ethically involves working with all relevant stakeholders, including organizations who represent groups often marginalized and adversely impacted by AI systems.

    The industry is full of examples of AI release decisions made with shareholders, not ethicists, in mind. Take for instance Bing Chat (now Microsoft Copilot), Microsoft’s AI-powered chatbot on Bing, which at launch compared a journalist to Hitler and insulted their appearance. As of October, ChatGPT and Bard, Google’s ChatGPT competitor, were still giving outdated, racist medical advice. And the latest version of OpenAI’s image generator DALL-E shows evidence of Anglocentrism.

    Suffice it to say harms are being done in the pursuit of AI superiority — or at least Wall Street’s notion of AI superiority. Perhaps with the passage of the EU’s AI regulations, which threaten fines for noncompliance with certain AI guardrails, there’s some hope on the horizon. But the road ahead is long indeed.

    Here are some other AI stories of note from the past few days:

    • Predictions for AI in 2024: Devin lays out his predictions for AI in 2024, touching on how AI might impact the U.S. primary elections and what’s next for OpenAI, among other topics.
    • Against pseudanthropy: Devin also wrote a piece suggesting that AI be prohibited from imitating human behavior.
    • Microsoft Copilot gets music creation: Copilot, Microsoft’s AI-powered chatbot, can now compose songs thanks to an integration with GenAI music app Suno.
    • Facial recognition out at Rite Aid: Rite Aid has been banned from using facial recognition tech for five years after the Federal Trade Commission found that the U.S. drugstore giant’s “reckless use of facial surveillance systems” left customers humiliated and put their “sensitive information at risk.”
    • EU offers compute resources: The EU is expanding its plan, originally announced back in September and kicked off last month, to support homegrown AI startups by providing them with access to processing power for model training on the bloc’s supercomputers.
    • OpenAI gives board new powers: OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. A new “safety advisory group” will sit above the technical teams and make recommendations to leadership, and the board has been granted veto power.
    • Q&A with UC Berkeley’s Ken Goldberg: For his regular Actuator newsletter, Brian sat down with Ken Goldberg, a professor at UC Berkeley, a startup founder and an accomplished roboticist, to talk humanoid robots and broader trends in the robotics industry.
    • CIOs take it slow with gen AI: Ron writes that, while CIOs are under pressure to deliver the kind of experiences people are seeing when they play with ChatGPT online, most are taking a deliberate, cautious approach to adopting the tech for the enterprise.
    • News publishers sue Google over AI: A class action lawsuit filed by several news publishers accuses Google of “siphon[ing] off” news content through anticompetitive means, partly through AI tech like Google’s Search Generative Experience (SGE) and Bard chatbot.
    • OpenAI inks deal with Axel Springer: Speaking of publishers, OpenAI inked a deal with Axel Springer, the Berlin-based owner of publications including Business Insider and Politico, to train its generative AI models on the publisher’s content and add recent Axel Springer-published articles to ChatGPT.
    • Google brings Gemini to more places: Google integrated its Gemini models with more of its products and services, including its Vertex AI managed AI dev platform and AI Studio, the company’s tool for authoring AI-based chatbots and other experiences along those lines.

    More machine learnings

    Certainly the wildest (and easiest to misinterpret) research of the last week or two has to be life2vec, a Danish study that uses countless data points in a person’s life to predict what a person is like and when they’ll die. Roughly!

    Visualization of life2vec’s mapping of various relevant life concepts and events.

    The study isn’t claiming oracular accuracy (say that three times fast, by the way) but rather intends to show that if our lives are the sum of our experiences, those paths can be extrapolated somewhat using current machine learning techniques. Between upbringing, education, work, health, hobbies, and other metrics, one may reasonably predict not just whether someone is, say, introverted or extroverted, but how these factors may affect life expectancy. We’re not quite at “precrime” levels here but you can bet insurance companies can’t wait to license this work.

    Another big claim was made by CMU scientists who created a system called Coscientist, an LLM-based assistant for researchers that can do a lot of lab drudgery autonomously. It’s limited to certain domains of chemistry currently, but just like scientists, models like these will be specialists.

    Lead researcher Gabe Gomes told Nature: “The moment I saw a non-organic intelligence be able to autonomously plan, design and execute a chemical reaction that was invented by humans, that was amazing. It was a ‘holy crap’ moment.” Basically it uses an LLM like GPT-4, fine tuned on chemistry documents, to identify common reactions, reagents, and procedures and perform them. So you don’t need to tell a lab tech to synthesize 4 batches of some catalyst — the AI can do it, and you don’t even need to hold its hand.

    Google’s AI researchers have had a big week as well, diving into a few interesting frontier domains. FunSearch may sound like Google for kids, but it actually is short for “function search,” which, like Coscientist, can make and help make mathematical discoveries. Interestingly, to prevent hallucinations, it (like others recently) uses a matched pair of AI models, a lot like the “old” GAN architecture. One theorizes, the other evaluates.

    While FunSearch isn’t going to make any ground-breaking new discoveries, it can take what’s out there and hone or reapply it in new places, so a function that one domain uses but another is unaware of might be used to improve an industry standard algorithm.
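    That generator-plus-evaluator loop can be sketched in miniature. In the hypothetical stand-in below, the “proposer” randomly mutates a candidate (where FunSearch would have an LLM write code), and a deterministic evaluator decides which candidates survive, which is what keeps hallucinated proposals from accumulating.

```python
import random

def propose(candidate, rng):
    # "Generator" role: mutate the current candidate. In FunSearch this is
    # an LLM rewriting a program; here it is a small coefficient tweak.
    return [c + rng.uniform(-0.5, 0.5) for c in candidate]

def evaluate(candidate):
    # "Evaluator" role: a deterministic scorer the proposer cannot fool.
    # Here: negative squared error against a target function 3x + 1.
    target = lambda x: 3 * x + 1
    guess = lambda x: candidate[0] * x + candidate[1]
    return -sum((target(x) - guess(x)) ** 2 for x in range(5))

def search(steps=2000, seed=0):
    rng = random.Random(seed)
    best = [0.0, 0.0]
    best_score = evaluate(best)
    for _ in range(steps):
        cand = propose(best, rng)
        score = evaluate(cand)
        if score > best_score:  # only verified improvements survive
            best, best_score = cand, score
    return best

best = search()
print(best)  # should drift toward [3, 1]
```

    The split of roles is the point: the theorizer can be wildly creative because the evaluator, not the theorizer, decides what counts as progress.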

    StyleDrop is a handy tool for people looking to replicate certain styles via generative imagery. The trouble (as the researchers see it) is that if you have a style in mind (say “pastels”) and describe it, the model will have too many sub-styles of “pastels” to pull from, so the results will be unpredictable. StyleDrop lets you provide an example of the style you’re thinking of, and the model will base its work on that — it’s basically super-efficient fine-tuning.

    Image Credits: Google

    The blog post and paper show that it’s pretty robust, applying a style from any image, whether it’s a photo, painting, cityscape or cat portrait, to any other type of image, even the alphabet (notoriously hard for some reason).

    Google is also moving along in the generative video game with VideoPoet, which uses an LLM base (like everything else these days… what else are you going to use?) to do a bunch of video tasks, turning text or images to video, extending or stylizing existing video, and so on. The challenge here, as every project makes clear, is not simply making a series of images that relate to one another, but making them coherent over longer periods (like more than a second) and with large movements and changes.

    Image Credits: Google

    VideoPoet moves the ball forward, it seems, though as you can see the results are still pretty weird. But that’s how these things progress: first they’re inadequate, then they’re weird, then they’re uncanny. Presumably they leave uncanny at some point but no one has really gotten there yet.

    On the practical side of things, Swiss researchers have been applying AI models to snow measurement. Normally one would rely on weather stations, but these can be far between and we have all this lovely satellite data, right? Right. So the ETHZ team took public satellite imagery from the Sentinel-2 constellation, but as lead Konrad Schindler puts it, “Just looking at the white bits on the satellite images doesn’t immediately tell us how deep the snow is.”

    So they put in terrain data for the whole country from their Federal Office of Topography (like our USGS) and trained up the system to estimate not just based on white bits in imagery but also ground truth data and tendencies like melt patterns. The resulting tech is being commercialized by ExoLabs, which I’m about to contact to learn more.
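    The recipe, fusing satellite imagery features with terrain data and fitting against station ground truth, can be illustrated with a deliberately simple linear stand-in. Everything below is synthetic: the real ETHZ system is a deep network over Sentinel-2 imagery, not a least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: per-pixel satellite brightness (the "white bits"),
# terrain elevation from a topography office, and ground-truth snow depth
# from weather stations, generated here with known coefficients.
n = 200
brightness = rng.uniform(0, 1, n)
elevation = rng.uniform(0, 3000, n)
depth = 0.8 * brightness + 0.0005 * elevation + rng.normal(0, 0.05, n)

# Fuse imagery and terrain into one feature matrix and fit by least squares.
X = np.column_stack([brightness, elevation, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, depth, rcond=None)

print(coef)  # recovers roughly [0.8, 0.0005, 0]
```

    Even this toy makes Schindler’s point: brightness alone underdetermines depth, and the elevation column carries real predictive signal.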

    A word of caution from Stanford, though — as powerful as applications like the above are, note that none of them involve much in the way of human bias. When it comes to health, that suddenly becomes a big problem, and health is where a ton of AI tools are being tested out. Stanford researchers showed that AI models propagate “old medical racial tropes.” GPT-4 doesn’t know whether something is true or not, so it can and does parrot old, disproved claims about groups, such as that black people have lower lung capacity. Nope! Stay on your toes if you’re working with any kind of AI model in health and medicine.

    Lastly, here’s a short story written by Bard with a shooting script and prompts, rendered by VideoPoet. Watch out, Pixar!

    Kyle Wiggers

  • This week in AI: Mistral and the EU's fight for AI sovereignty | TechCrunch

    Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

    This week, Google flooded the channels with announcements around Gemini, its new flagship multimodal AI model. Turns out it’s not as impressive as the company initially made it out to be — or, rather, the “lite” version of the model (Gemini Pro) Google released this week isn’t. (It doesn’t help matters that Google faked a product demo.) We’ll reserve judgement on Gemini Ultra, the full version of the model, until it begins making its way into various Google apps and services early next year.

    But enough talk of chatbots. What’s a bigger deal, I’d argue, is a funding round that just barely squeezed into the workweek: Mistral AI raising €450M (~$484 million) at a $2 billion valuation.

    We’ve covered Mistral before. In September, the company, co-founded by Google DeepMind and Meta alumni, released its first model, Mistral 7B, which it claimed at the time outperformed others of its size. Mistral closed one of Europe’s largest seed rounds to date prior to Friday’s fundraise — and it hasn’t even launched a product yet.

    Now, my colleague Dominic has rightly pointed out that Paris-based Mistral’s fortunes are a red flag for many concerned about inclusivity. The startup’s co-founders are all white and male, and academically fit the homogenous, privileged profile of many of those in The New York Times’ roundly criticized list of AI changemakers.

    At the same time, investors appear to be viewing Mistral — as well as its sometime rival, Germany’s Aleph Alpha — as Europe’s opportunity to plant its flag in the very fertile (at present) generative AI ground.

    So far, the highest-profile and best-funded generative AI ventures have been stateside. OpenAI. Anthropic. Inflection AI. Cohere. The list goes on.

    Mistral’s good fortune is in many ways a microcosm of the fight for AI sovereignty. The European Union (EU) desires to avoid being left behind in yet another technological leap while at the same time imposing regulations to guide the tech’s development. As Germany’s Vice Chancellor and Minister for Economic Affairs Robert Habeck was recently quoted as saying: “The thought of having our own sovereignty in the AI sector is extremely important. [But] if Europe has the best regulation but no European companies, we haven’t won much.”

    The entrepreneurship-regulation divide came into sharp relief this week as EU lawmakers attempted to reach an agreement on policies to limit the risk of AI systems. (Update: lawmakers clinched a deal on a risk-based framework for regulating AI late Friday night.) Lobbyists, led by Mistral, have in recent months pushed for a total regulatory carve-out for generative AI models. But EU lawmakers have resisted such an exemption — for now.

    A lot’s riding on Mistral and its European competitors, all this being said; industry observers — and legislators stateside — will no doubt watch closely for the impact on investments once EU policymakers impose new restrictions on AI. Could Mistral someday grow to challenge OpenAI with the regulations in place? Or will the regulations have a chilling effect? It’s too early to say — but we’re eager to see ourselves. 

    Here are some other AI stories of note from the past few days:

    • A new AI alliance: Meta, on an open source tear, wants to spread its influence in the ongoing battle for AI mindshare. The social network announced that it’s teaming up with IBM to launch the AI Alliance, an industry body to support “open innovation” and “open science” in AI — but ulterior motives abound.
    • OpenAI turns to India: Ivan and Jagmeet report that OpenAI is working with former Twitter India head Rishi Jaitly as a senior advisor to facilitate talks with the government about AI policy. OpenAI is also looking to set up a local team in India, with Jaitly helping the AI startup navigate the Indian policy and regulatory landscape.
    • Google launches AI-assisted note-taking: Google’s AI note-taking app, NotebookLM, which was announced earlier this year, is now available to U.S. users 18 years of age or older. To mark the launch, the experimental app got integration with Gemini Pro, Google’s new large language model, which Google says will “help with document understanding and reasoning.”
    • OpenAI under regulatory scrutiny: The cozy relationship between OpenAI and Microsoft, a major backer and partner, is now the focus of a new inquiry launched by the Competition and Markets Authority in the U.K. over whether the two companies are effectively in a “relevant merger situation” after recent drama. The FTC is also reportedly looking into Microsoft’s investments in OpenAI in what appears to be a coordinated effort.
    • Asking AI nicely: How can you reduce biases if they’re baked into an AI model from biases in its training data? Anthropic suggests asking it nicely to please, please not discriminate or someone will sue us. Yes, really. Devin has the full story.
    • Meta rolls out AI features: Alongside other AI-related updates this week, Meta AI, Meta’s generative AI experience, gained new capabilities including the ability to create images when prompted as well as support for Instagram Reels. The former feature, called “reimagine,” lets users in group chats recreate AI images with prompts, while the latter can turn to Reels as a resource as needed.
    • Respeecher gets cash: Ukrainian synthetic voice startup Respeecher — which is perhaps best known for being chosen to replicate James Earl Jones and his iconic Darth Vader voice for a Star Wars animated show, then later a younger Luke Skywalker for The Mandalorian — is finding success despite not just bombs raining down on their city, but a wave of hype that has raised up sometimes controversial competitors, Devin writes.
    • Liquid neural nets: An MIT spinoff co-founded by robotics luminary Daniela Rus aims to build general-purpose AI systems powered by a relatively new type of AI model called a liquid neural network. Called Liquid AI, the company raised $37.5 million this week in a seed round from backers including WordPress parent company Automattic. 

    More machine learnings

    Predicted floating plastic locations off the coast of South Africa.Image Credits: EPFL

    Orbital imagery is an excellent playground for machine learning models, since these days satellites produce more data than experts can possibly keep up with. EPFL researchers are looking into better identifying ocean-borne plastic, a huge problem but a very difficult one to track systematically. Their approach isn’t shocking — train a model on labeled orbital images — but they’ve refined the technique so that their system is considerably more accurate, even when there’s cloud cover.

    Finding it is only part of the challenge, of course, and removing it is another, but the better intelligence people and organizations have when they perform the actual work, the more effective they will be.

    Not every domain has so much imagery, however. Biologists in particular face a challenge in studying animals that are not adequately documented. For instance, they might want to track the movements of a certain rare type of insect, but due to a lack of imagery of that insect, automating the process is difficult. A group at Imperial College London is putting machine learning to work on this in collaboration with game development platform Unreal.

    Image Credits: Imperial College London

    By creating photo-realistic scenes in Unreal and populating them with 3D models of the critter in question, be it an ant, stick insect, or something bigger, they can create arbitrary amounts of training data for machine learning models. Though the computer vision system will have been trained on synthetic data, it can still be very effective in real-world footage, as their video shows.

    You can read their paper in Nature Communications.

    Not all generated imagery is so reliable, though, as University of Washington researchers found. They systematically prompted the open source image generator Stable Diffusion 2.1 to produce images of a “person” with various restrictions or locations. They showed that the term “person” is disproportionately associated with light-skinned, western men.

    Not only that, but certain locations and nationalities produced unsettling patterns, like sexualized imagery of women from Latin American countries and “a near-complete erasure of nonbinary and Indigenous identities.” For instance, asking for pictures of “a person from Oceania” produces white men and no indigenous people, despite the latter being numerous in the region (not to mention all the other non-white-guy people). It’s all a work in progress, and being aware of the biases inherent in the data is important.

    Learning how to navigate biased and questionably useful models is on a lot of academics’ minds — and those of their students. This interesting chat with Yale English professor Ben Glaser is a refreshingly optimistic take on how things like ChatGPT can be used constructively:

    When you talk to a chatbot, you get this fuzzy, weird image of culture back. You might get counterpoints to your ideas, and then you need to evaluate whether those counterpoints or supporting evidence for your ideas are actually good ones. And there’s a kind of literacy to reading those outputs. Students in this class are gaining some of that literacy.

    If everything’s cited, and you develop a creative work through some elaborate back-and-forth or programming effort including these tools, you’re just doing something wild and interesting.

    And when should they be trusted in, say, a hospital? Radiology is a field where AI is frequently being applied to help quickly identify problems in scans of the body, but it’s far from infallible. So how should doctors know when to trust the model and when not to? MIT seems to think that they can automate that part too — but don’t worry, it’s not another AI. Instead, it’s a standard, automated onboarding process that helps determine when a particular doctor or task finds an AI tool helpful, and when it gets in the way.

    Increasingly, AI models are being asked to generate more than text and images. Materials are one place where we’ve seen a lot of movement — models are great at coming up with likely candidates for better catalysts, polymer chains, and so on. Startups are getting in on it, but Microsoft also just released a model called MatterGen that’s “specifically designed for generating novel, stable materials.”

    Image Credits: Microsoft

    As you can see in the image above, you can target lots of different qualities, from magnetism to reactivity to size. No need for a Flubber-like accident or thousands of lab runs — this model could help you find a suitable material for an experiment or product in hours rather than months.

    Google DeepMind and Berkeley Lab are also working on this kind of thing. It’s quickly becoming standard practice in the materials industry.

    Kyle Wiggers
