ReportWire

Tag: machine learning

  • Forget Chatbots. AI Agents Are the Future

    This week a startup called Cognition AI caused a bit of a stir by releasing a demo showing an artificial intelligence program called Devin performing work usually done by well-paid software engineers. Chatbots like ChatGPT and Gemini can generate code, but Devin went further, planning how to solve a problem, writing the code, and then testing and implementing it.

    Devin’s creators brand it as an “AI software developer.” When asked to test how Meta’s open source language model Llama 2 performed when accessed via different companies hosting it, Devin generated a step-by-step plan for the project, generated code needed to access the APIs and run benchmarking tests, and created a website summarizing the results.

    It’s always hard to judge staged demos, but Cognition has shown Devin handling a wide range of impressive tasks. It wowed investors and engineers on X, receiving plenty of endorsements, and even inspired a few memes—including some predicting Devin will soon be responsible for a wave of tech industry layoffs.

    Devin is just the latest, most polished example of a trend I’ve been tracking for a while: the emergence of AI agents that, instead of just providing answers or advice about a problem presented by a human, can take action to solve it. A few months back I test-drove Auto-GPT, an open source program that attempts to do useful chores by taking actions on a person’s computer and on the web. Recently I tested another program called vimGPT to see how the visual skills of new AI models can help these agents browse the web more efficiently.

    I was impressed by my experiments with those agents. Yet for now, just like the language models that power them, they make quite a few errors. And when a piece of software is taking actions, not just generating text, one mistake can mean total failure—and potentially costly or dangerous consequences. Narrowing the range of tasks an agent can do to, say, a specific set of software engineering chores seems like a clever way to reduce the error rate, but there are still many potential ways to fail.

    Not only startups are building AI agents. Earlier this week I wrote about an agent called SIMA, developed by Google DeepMind, which plays video games including the truly bonkers title Goat Simulator 3. SIMA learned from watching human players how to do more than 600 fairly complicated tasks such as chopping down a tree or shooting an asteroid. Most significantly, it can do many of these actions successfully even in an unfamiliar game. Google DeepMind calls it a “generalist.”

    I suspect that Google hopes these agents will eventually go to work outside of video games, perhaps browsing the web on a user’s behalf or operating software for them. Video games make a good sandbox for developing agents, providing complex environments in which they can be tested and improved. “Making them more precise is something that we’re actively working on,” Tim Harley, a research scientist at Google DeepMind, told me. “We’ve got various ideas.”

    You can expect a lot more news about AI agents in the coming months. Demis Hassabis, the CEO of Google DeepMind, recently told me that he plans to combine large language models with the work his company has previously done training AI programs to play video games to develop more capable and reliable agents. “This definitely is a huge area. We’re investing heavily in that direction, and I imagine others are as well,” Hassabis said. “It will be a step change in capabilities of these types of systems—when they start becoming more agent-like.”

    Will Knight

  • GrammaTech Wins $1 Million Prize in DARPA’s Artificial Intelligence Cyber Challenge (AIxCC)

    GrammaTech, Inc., a leading provider of cybersecurity services and tools that improve and accelerate software development, along with researchers from Carnegie Mellon University and the University of Virginia, has been selected as one of the winners of the Defense Advanced Research Projects Agency (DARPA)’s AI Cyber Challenge’s Small Business Track Competition! DARPA received a robust response of competitive submissions from small businesses for this opportunity, and GrammaTech’s team, VERSATIL, scored in the top seven. GrammaTech’s team will receive $1 million in prize money and the distinction of competing in the AIxCC Semifinal Competition (ASC) as a Small Business Track competitor. 

    The Artificial Intelligence Cyber Challenge (AIxCC) is a two-year competition asking leading AI companies, like GrammaTech, to work with DARPA to defend the software that everyone relies on. The growing importance of software in modern life creates an ever-expanding attack surface for malicious actors. New AI-enabled technology can address this cybersecurity problem. AIxCC asks competitors to design novel AI systems to secure critical code and will award a cumulative $29.5 million in prizes to teams with the best systems. 

    GrammaTech’s team combines technology and expertise from industry and academia. Built upon its system that won second prize at DARPA’s 2016 Cyber Grand Challenge, and its commercial successor Proteus, GrammaTech’s latest advances multiply the effectiveness of state-of-the-art static and dynamic analysis tools by leveraging Large Language Models (LLMs). The system will use LLMs to find potential vulnerabilities, guide and customize vulnerability discovery analyses, and generate patches to fix the vulnerabilities. An invaluable resource in support of the industry drive for Secure-by-Design, the system will automatically find and repair vulnerabilities at scale. 

    AIxCC competitions will occur at one of the world’s top cybersecurity conferences, DEF CON. The semifinal competition will be at DEF CON 2024, and the final competition at DEF CON 2025, where the top prize will be $4 million. 

    Ray DeMeo, Chief Growth Officer at GrammaTech, said: “This is GrammaTech’s second AI award in the first months of 2024. It reflects the caliber of our team and decades-long reputation for solving the hardest cybersecurity problems. GrammaTech’s combination of talent and technology being leveraged here are uniquely suited to next-level growth of our already formidable offerings for identifying vulnerabilities, automating remediation, and securing supply chains across the full software, firmware, and microelectronics compute stack, now a top priority for information processing and inference at the edge.” 

    About GrammaTech: 
    GrammaTech is a provider of advanced cybersecurity services and developer of software-assurance solutions. Originating from the computer science department at Cornell University, the company has a 35-year history of delivering cutting-edge cyber capability in support of government, intelligence, and mission-critical infrastructure. GrammaTech offerings are leveraged by software developers and system defenders alike, everywhere reliability and security are paramount. They cover threat detection and mitigation, malware analysis, machine learning and automation, migration to memory-safe languages, attack surface area reduction, and software supply chain integrity. 

    Source: GrammaTech

  • Why Elon Musk Had to Open Source Grok, His Answer to ChatGPT

    After suing OpenAI this month, alleging the company has become too closed, Elon Musk says he will release his “truth-seeking” answer to ChatGPT, the chatbot Grok, for anyone to download and use.

    “This week, @xAI will open source Grok,” Musk wrote on his social media platform X today. That suggests his AI company, xAI, will release the full code of Grok and allow anyone to use or alter it. By contrast, OpenAI makes a version of ChatGPT and the language model behind it available to use for free but keeps its code private.

    Musk had previously said little about the business model for Grok or xAI, and the chatbot was made available only to Premium subscribers to X. Having accused his OpenAI cofounders of reneging on a promise to give away the company’s artificial intelligence earlier this month, Musk may have felt he had to open source his own chatbot to show that he is committed to that vision.

    OpenAI responded to Musk’s lawsuit last week by releasing email messages between Musk and others in which he appeared to back the idea of making the company’s technology more closed as it became more powerful. Musk ultimately plowed more than $40 million into OpenAI before parting ways with the project in 2018.

    When Musk first announced Grok was in development, he promised that it would be less politically biased than ChatGPT or other AI models, which he and others with right-leaning views have criticized for being too liberal. Tests by WIRED and others quickly showed that although Grok can adopt a provocative style, it is not hugely biased one way or another—perhaps revealing the challenge of aligning AI models consistently with a particular viewpoint.

    Open sourcing Grok could help Musk drum up interest in his company’s AI. Limiting Grok access to only paid subscribers of X, one of the smaller global social platforms, means that it does not yet have the traction of OpenAI’s ChatGPT or Google’s Gemini. Releasing Grok could draw developers to use and build upon the model, and may ultimately help it reach more end users. That could provide xAI with data it can use to improve its technology.

    Musk’s move to liberate Grok sees him align with Meta’s approach to generative AI. Meta’s open source models, like Llama 2, have become popular among developers because they can be fully customized and adapted to different uses. But adopting a similar strategy could draw Musk further into a growing debate over the benefits and risks of giving anyone access to the most powerful AI models.

    Many AI experts argue that open sourcing AI models has significant benefits such as increasing transparency and broadening access. “Open models are safer and more robust, and it’s great to see more options from leading companies in the space,” says Emad Mostaque, founder of Stability AI, a company that builds various open source AI models.

    Will Knight

  • The Fear That Inspired the Creation of OpenAI

    Elon Musk last week sued two of his OpenAI cofounders, Sam Altman and Greg Brockman, accusing them of “flagrant breaches” of the trio’s original agreement that the company would develop artificial intelligence openly and without chasing profits. Late on Tuesday, OpenAI released partially redacted emails between Musk, Altman, Brockman, and others that provide a counternarrative.

    The emails suggest that Musk was open to OpenAI becoming more profit-focused relatively early on, potentially undermining his own claim that it deviated from its original mission. In one message Musk offers to fold OpenAI into his electric-car company Tesla to provide more resources, an idea originally suggested by an email he forwarded from an unnamed outside party.

    The newly published emails also imply that Musk was not dogmatic about OpenAI having to freely provide its developments to all. In response to a message from chief scientist Ilya Sutskever warning that open sourcing powerful AI breakthroughs could be risky as the technology advances, Musk writes, “Yup.” That seems to contradict the arguments in last week’s lawsuit that it was agreed from the start that OpenAI should make its innovations freely available.

    Putting the legal dispute aside, the emails released by OpenAI show a powerful cadre of tech entrepreneurs founding an organization that has grown to immense power. Strikingly, although OpenAI likes to describe its mission as focused on creating artificial general intelligence—machines smarter than humans—its founders spent more time discussing fears about the rising power of Google and other deep-pocketed giants than expressing excitement about AGI.

    “I think we should say that we are starting with a $1B funding commitment. This is real. I will cover whatever anyone else doesn’t provide,” Musk wrote in a missive discussing how to introduce OpenAI to the world. He dismissed a suggestion to launch by announcing $100 million in funding, citing the huge resources of Google and Facebook.

    Musk cofounded OpenAI with Altman, Brockman, and others in 2015, during another period of heady AI hype centered around Google. A month before the nonprofit was incorporated, Google’s AI program AlphaGo had learned to play the devilishly tricky board game Go well enough to defeat a champion human player for the first time. The feat shocked many AI experts who had thought Go too subtle for computers to master anytime soon. It also showed the potential for AI to master many seemingly impossible tasks.

    The text of Musk’s lawsuit confirms some previously reported details of the OpenAI backstory at this time, including the fact that Musk was first made aware of the possible dangers posed by AI during a 2012 meeting with Demis Hassabis, cofounder and CEO of DeepMind, the company that developed AlphaGo and was acquired by Google in 2014. The lawsuit also confirms that Musk disagreed deeply with Google cofounder Larry Page over the future risks of AI, something that apparently led to the pair falling out as friends. Musk eventually parted ways with OpenAI in 2018 and has apparently soured further on the project since the wild success of ChatGPT.

    Since OpenAI released the emails with Musk this week, speculation has swirled about the names and other details redacted from the messages. Some turned to AI as a way to fill in the blanks with statistically plausible text.

    “This needs billions per year immediately or forget it,” Musk wrote in one email about the OpenAI project. “Unfortunately, humanity’s future is in the hands of [redacted],” he added, perhaps a reference to Google cofounder Page.

    Elsewhere in the email chain, the AI software, like some commentators on Twitter, guessed that the arguments Musk forwarded about Google’s powerful advantage in AI had come from Hassabis.

    Whoever it was, the relationships on display in the emails between OpenAI’s cofounders have since become fractured. Musk’s lawsuit seeks to force the company to stop licensing technology to its primary backer, Microsoft. In a blog post accompanying the emails released this week, OpenAI’s other cofounders expressed sorrow at how things had soured.

    “We’re sad that it’s come to this with someone whom we’ve deeply admired,” they wrote. “Someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him.”

    Will Knight

  • The Dark Side of Open Source AI Image Generators

    Whether through the frowning high-definition face of a chimpanzee or a psychedelic, pink-and-red-hued doppelganger of himself, Reuven Cohen uses AI-generated images to catch people’s attention. “I’ve always been interested in art and design and video and enjoy pushing boundaries,” he says—but the Toronto-based consultant, who helps companies develop AI tools, also hopes to raise awareness of the technology’s darker uses.

    “It can also be specifically trained to be quite gruesome and bad in a whole variety of ways,” Cohen says. He’s a fan of the freewheeling experimentation that has been unleashed by open source image-generation technology. But that same freedom enables the creation of explicit images of women used for harassment.

    After nonconsensual images of Taylor Swift recently spread on X, Microsoft added new controls to its image generator. Open source models can be commandeered by just about anyone and generally come without guardrails. Despite the efforts of some hopeful community members to deter exploitative uses, the open source free-for-all is near-impossible to control, experts say.

    “Open source has powered fake image abuse and nonconsensual pornography. That’s impossible to sugarcoat or qualify,” says Henry Ajder, who has spent years researching harmful use of generative AI.

    Ajder says that even as open source image generation software becomes a favorite of researchers, creatives like Cohen, and academics working on AI, it has become the bedrock of deepfake porn. Some tools based on open source algorithms are purpose-built for salacious or harassing uses, such as “nudifying” apps that digitally remove women’s clothes in images.

    But many tools can serve both legitimate and harassing use cases. One popular open source face-swapping program is used by people in the entertainment industry and as the “tool of choice for bad actors” making nonconsensual deepfakes, Ajder says. High-resolution image generator Stable Diffusion, developed by startup Stability AI, is claimed to have more than 10 million users and has guardrails installed to prevent explicit image creation and policies barring malicious use. But the company also open sourced a version of the image generator in 2022 that is customizable, and online guides explain how to bypass its built-in limitations.

    Meanwhile, smaller AI models known as LoRAs make it easy to tune a Stable Diffusion model to output images with a particular style, concept, or pose—such as a celebrity’s likeness or certain sexual acts. They are widely available on AI model marketplaces such as Civitai, a community-based site where users share and download models. There, one creator of a Taylor Swift plug-in has urged others not to use it “for NSFW images.” However, once downloaded, the plug-in’s use is beyond its creator’s control. “The way that open source works means it’s going to be pretty hard to stop someone from potentially hijacking that,” says Ajder.

    4chan, the image-based message board site with a reputation for chaotic moderation, is home to pages devoted to nonconsensual deepfake porn, WIRED found, made with openly available programs and AI models dedicated solely to sexual images. Message boards for adult images are littered with AI-generated nonconsensual nudes of real women, from porn performers to actresses like Cate Blanchett. WIRED also observed 4chan users sharing workarounds for NSFW images using OpenAI’s Dall-E 3.

    That kind of activity has inspired some users in communities dedicated to AI image-making, including on Reddit and Discord, to attempt to push back against the sea of pornographic and malicious images. Creators also express worry about the software gaining a reputation for NSFW images, encouraging others to report images depicting minors on Reddit and model-hosting sites.

    Lydia Morrish

  • Google Is Finally Trying to Kill AI Clickbait

    Google is taking action against algorithmically generated spam. The search engine giant just announced upcoming changes, including a revamped spam policy, designed in part to keep AI clickbait out of its search results.

    “It sounds like it’s going to be one of the biggest updates in the history of Google,” says Lily Ray, senior director of SEO at the marketing agency Amsive. “It could change everything.”

    In a blog post, Google claims the change will reduce “low-quality, unoriginal content” in search results by 40 percent. It will focus on reducing what the company calls “scaled content abuse,” which is when bad actors flood the internet with massive amounts of articles and blog posts designed to game search engines.

    “A good example of it, which has been around for a little while, is the abuse around obituary spam,” says Google’s vice president of search, Pandu Nayak. Obituary spam is an especially grim type of digital piracy, where people attempt to make money by scraping and republishing death notices, sometimes on social platforms like YouTube. Recently, obituary spammers have started using artificial intelligence tools to increase their output, making the issue even worse. Google’s new policy, if enacted effectively, should make it harder for this type of spam to crop up in online searches.

    This notably more aggressive approach to combating search spam takes specific aim at “domain squatting,” a practice in which scavengers purchase websites with name recognition to profit off their reputations, often replacing original journalism with AI-generated articles designed to manipulate search engine rankings. This type of behavior predates the AI boom, but with the rise of text-generation tools like ChatGPT, it’s become increasingly easy to churn out endless articles to game Google rankings.

    The spike in domain squatting is just one of the issues that have tarnished Google Search’s reputation in recent years. “People can spin up these sites really easily,” says SEO expert Gareth Boyd, who runs the digital marketing firm Forte Analytica. “It’s been a big issue.” (Boyd admits that he has even created similar sites in the past, though he says he doesn’t do it anymore.)

    In February, WIRED reported on several AI clickbait networks that used domain squatting as a strategy, including one that took the websites for the defunct indie women’s website The Hairpin and the shuttered Hong Kong-based pro-democracy tabloid Apple Daily and filled them with AI-generated nonsense. Another transformed the website of a small-town Iowa newspaper into a bizarro repository for AI blog posts on retail stocks. According to Google’s new policy, this type of behavior is now explicitly categorized by the company as spam.

    In addition to domain squatting, Google’s new policy will also focus on eliminating “reputation abuse,” where otherwise trustworthy websites allow third-party sources to publish janky sponsored content or other digital junk. (Google’s blog post describes “payday loan reviews on a trusted educational website” as an example.) While the other parts of the spam policy will start enforcement immediately, Google is giving 60 days’ notice prior to cracking down on reputation abuse, to give websites time to fall in line.

    Nayak says the company has been working on this specific update since the end of last year. More broadly, the company has been working on ways to fix low-quality content in search, including AI-generated spam, since 2022. “We’ve been aware of the problem,” Nayak says. “It takes time to develop these changes effectively.”

    Some SEO experts are cautiously optimistic that these changes could restore Google’s search efficacy. “It’s going to reinstate the way things used to be, hopefully,” says Ray. “But we have to see what happens.”

    Kate Knibbs

  • The Wild Claim at the Heart of Elon Musk’s OpenAI Lawsuit

    Elon Musk started the week by posting testily on X about his struggles to set up a new laptop running Windows. He ended it by filing a lawsuit accusing OpenAI of recklessly developing human-level AI and handing it over to Microsoft.

    Musk’s lawsuit is filed against OpenAI and two of its executives, CEO Sam Altman and president Greg Brockman, both of whom worked with the rocket and car entrepreneur to found the company in 2015. A large part of the case pivots around a bold and questionable technical claim: That OpenAI has developed so-called artificial general intelligence, or AGI, a term generally used to refer to machines that can comprehensively match or outsmart humans.

    The case claims that Altman and Brockman have breached the original “Founding Agreement” for OpenAI worked out with Musk, which it says pledged the company to develop AGI openly and “for the benefit of humanity.” Musk’s suit alleges that the for-profit arm of the company, established in 2019 after he parted ways with OpenAI, has instead created AGI without proper transparency and licensed it to Microsoft, which has invested billions into the company. It demands that OpenAI be forced to release its technology openly and that it be barred from using it to financially benefit Microsoft, Altman, or Brockman.

    “On information and belief, GPT-4 is an AGI algorithm,” the lawsuit states, referring to the large language model that sits behind OpenAI’s ChatGPT. It cites studies that found the system can get a passing grade on the Uniform Bar Exam and other standard tests as proof that it has surpassed some fundamental human abilities. “GPT-4 is not just capable of reasoning. It is better at reasoning than average humans,” the suit claims.

    Although GPT-4 was heralded as a major breakthrough when it was launched in March 2023, most AI experts do not see it as proof that AGI has been achieved. “GPT-4 is general, but it’s obviously not AGI in the way that people typically use the term,” says Oren Etzioni, a professor emeritus at the University of Washington and an expert on AI.

    “It will be viewed as a wild claim,” says Christopher Manning, a professor at Stanford University who specializes in AI and language, of the AGI assertion in Musk’s suit. Manning says there are divergent views of what constitutes AGI within the AI community. Some experts might set the bar lower, arguing that GPT-4’s ability to perform a wide range of functions would justify calling it AGI, while others prefer to reserve the term for algorithms that can outsmart most or all humans at anything. “Under this definition, I think we very clearly don’t have AGI and are indeed still quite far from it,” he says.

    Limited Breakthrough

    GPT-4 won notice—and new customers for OpenAI—because it can answer a wide range of questions, while older AI programs were generally dedicated to specific tasks like playing chess or tagging images. Musk’s lawsuit refers to assertions from Microsoft researchers, in a paper from March 2023, that “given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” Despite its impressive abilities, GPT-4 still makes mistakes and has significant limitations in its ability to parse complex questions correctly.

    “I have the sense that most of us researchers on the ground think that large language models [like GPT-4] are a very significant tool for allowing humans to do much more but that they are limited in ways that make them far from stand-alone intelligences,” adds Michael Jordan, a professor at UC Berkeley and an influential figure in the field of machine learning.

    Jordan adds that he prefers to avoid the term AGI entirely because it is so vague. “I’ve never found Elon Musk to have anything to say about AI that was very calibrated or based on research reality,” he adds.

    Will Knight

  • The Mindblowing Experience of a Chatbot That Answers Instantly

    If all that is true—and there’s no way to tell right now—Groq might well pose a threat to the dominance of Nvidia. Ross is careful when discussing this. “Let’s be clear—they’re Goliath, and we’re David,” he says. “It would be very, very foolish to say that Nvidia is worried about us.” When asked about Groq, though, Nvidia’s prompt response indicates that the startup is indeed on its radar. With near-Groq-like speed, the Goliath’s PR team sent me a statement indicating that Nvidia’s AI advantage lies not only in its chips but in the other services it provides to customers, like AI software, memory, networking, and other goodies. “AI compute in the data center is a complex challenge that requires a full-stack solution,” it says, implying that its unnamed competitor might be stack-challenged.

    In any case, Ross says he’s not competing with Nvidia but offering an alternative experience—and not just in terms of speed. He’s on a mission to make sure that Groq will deliver fair results unsullied by political point of view or pressure from commercial interests. “Groq will never be involved in advertising, ever,” he says. “Because that’s influencing people. AI should always be neutral, it should never tell you what you should be thinking. Groq exists to make sure everyone has access. It’s helping you make your decision, not its decisions.” Great sentiments, but even the Groq chatbot, when I quizzed it about early-stage idealism, is skeptical about such claims. “The pressure to generate profits and scale can lead even well-intentioned founders to compromise on their ideals,” it promptly replied.

    One other thing. You may have heard that Elon Musk has given the name “Grok” to the LLM created by his AI company. This took Ross by surprise, since he says he trademarked the name of his company when he founded it in 2016, and he believes it covers the phonetically identical original term. “We called dibs,” he says. “He can’t have it. We’ve sent a cease-and-desist letter.” So far he hasn’t gotten a response from Musk.

    When I asked Groq about the name dispute, it first cautioned me that it doesn’t provide legal opinions. “However, I can provide some context that may help you understand the situation better,” it said. The bot explained that the term grok has been used in the industry for decades, so Musk would be within his rights to use it. On the other hand, if Groq trademarked the term, it might well have an exclusive claim. All accurate and on the mark—everything you’d expect from a modern LLM. What you would not expect was that the reply appeared in less than a second.

    Time Travel

    In my book on Google, In the Plex, I explained how the company, and its cofounder Larry Page, prioritized speed and recognized that faster products are used not only more often, but differently. It became an obsession within Google.

    Engineers working for Page learned quickly enough of [his speed] priority. “When people do demos and they’re slow, I’m known to count sometimes,” he says. “One one-thousand, two one-thousand. That tends to get people’s attention.” Actually, if your product could be measured in seconds, you’d already failed. Paul Buchheit remembers one time when he was doing an early Gmail demo in Larry’s office. Page made a face and told him it was way too slow. Buchheit objected, but Page reiterated his complaint, charging that the reload took at least 600 milliseconds. (That’s six-tenths of a second.) Buchheit thought, You can’t know that, but when he got back to his own office he checked the server logs. Six hundred milliseconds. “He nailed it,” says Buchheit.

    Steven Levy

  • ‘86% of Asia’s central banks, supervisory authorities adopting big data, ML’

    The share of Asian central banks and supervisory authorities adopting big data and machine learning has risen to 86 per cent. This involves nowcasting exercises, applications to granular financial data, and suptech/regtech applications such as the computation of the economic policy uncertainty (EPU) indices in India.

    An important area is fraud detection, with data reflecting that one-third of Asian central banks deploy big data algorithms for anti-money laundering/combating the financing of terrorism (AML/CFT) purposes, RBI Deputy Governor Michael Patra said, quoting a survey.

    “Machine learning has also been extensively used in Asia for research purposes to inform monetary policy decisions, facilitate data management, and support regulatory supervision,” Patra said as part of his address at the 59th SEACEN Governors’ Conference on February 15. The RBI released the text of the speech on Tuesday.

    Other applications include using text analysis to evaluate monetary policy credibility, ensuring consistency in central banks’ communication of supervisory issues to financial institutions, improving efficiency in the compilation of statistics, assessing the state of the labour market or of trade conditions, extracting information on tourism activities, and capturing firms’ sentiment or evaluating employees’ feedback.

    Growing interest in digital forms of payments worldwide has also led SEACEN central banks to explore the possibilities of central bank digital currency (CBDC), which is currently in various stages of experimentation in different member countries.

    “The overarching goal for developing CBDC as digital cash among the SEACEN central banks appears to be to create a resilient payment system for consumers and businesses to transact in any situation. SEACEN central banks are actively coordinating their efforts to develop CBDCs, with near real-time exchanges of information on progress,” he said.

    Inflation vs growth

    Citing a working paper from the SEACEN centre, Patra said estimates suggest the sacrifice ratio — the loss of output required to reduce inflation by one percentage point — is between zero and 0.5 per cent of GDP, even as the extent of the contraction in output varied widely across member economies.
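    The sacrifice ratio Patra cites is a simple quotient, and a quick sketch makes the arithmetic concrete (the figures below are hypothetical, not taken from the SEACEN working paper):

```python
def sacrifice_ratio(cumulative_output_loss_pct: float,
                    inflation_reduction_pp: float) -> float:
    """Output forgone (as a % of GDP) per percentage point of disinflation."""
    return cumulative_output_loss_pct / inflation_reduction_pp

# Hypothetical example: bringing inflation down by 2 percentage points
# at a cumulative cost of 0.8% of GDP gives a ratio of 0.4, inside the
# zero-to-0.5 range cited in the speech.
print(sacrifice_ratio(0.8, 2.0))
```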

    “Asia will likely contribute about two-thirds of global growth in 2024, a carryover of its blockbuster performance in 2023. Disinflation is expected to remain on track in Asia, and convergence with central bank targets is being sighted. Thus, the outlook for Asia in a stormy and unsettled global environment is one of sustained growth with stability,” Patra said.

    As a result, the region is a preferred habitat for international financial flows on the back of positive growth differentials vis-à-vis the rest of the world, deep and vibrant financial markets, and reasonable stability in financial asset prices.

    On the other hand, global spillovers from geopolitical developments, geo-economic fragmentation, and the tightening of financial conditions as a result of aggressive and synchronised monetary policy tightening worldwide have imposed downward pressures on currencies in the region, resulting in a widening of risk spreads and reversals of portfolio equity and debt flows.

    Capital inflows to SEACEN member economies more than doubled from an average of $400 billion in 2000-2010 to over $900 billion in 2011-2021. The volatility of capital inflows into SEACEN economies declined between 2000-2010 and 2011-2021. However, the variability of portfolio equity, trade credit, and advance flows rose, leading to a need for varied responses, he added.

    Meanwhile, changing workforce demographics, the rise of financial products and services beyond the conventional definition of banking, digitalisation, climate change, talent shortages, and persistent supply shocks, apart from the pandemic and the recent inflation experience, will continue to pose challenges.

    Patra also touched upon the fact that climate change poses a common and potentially overwhelming macrofinancial risk for all SEACEN member countries, given the alarming rise in the incidence and intensity of extreme weather events in recent years.


  • Etsy drifts further away from its roots with first Super Bowl ad


    Etsy Inc., once known as a quirky marketplace for handmade, artisanal and vintage items, seems to be moving further away from its origins amid a much tougher e-commerce landscape and the impact of AI.

    Etsy (ETSY, +4.83%) will be marketing to a whole new audience on Sunday, when its first Super Bowl commercial will run. The 30-second ad is quirky; it depicts a generic 19th-century American leader who’s flummoxed over how to reciprocate France’s gift of the Statue of Liberty. With the help of an anachronistic smartphone, he and his team search on Etsy using its new Gift Mode option, and find its “Cheese Lover” category after determining that the French love cheese. Voilà — they decide to send the French some cheese.

    The commercial is part of Etsy’s push of a new user interface featuring Gift Mode, which lets shoppers search for gifts for a specific type of person or occasion — combining generative AI and human curation to give gift buyers some unusual options.

    But are these moves desperate and costly efforts to reach potential new buyers, coming on the heels of Etsy’s plans to lay off 11% of its staff? Or could running a TV ad at the most expensive time of the year actually lead to more sales on the once-fast-growing marketplace?

    Etsy believes these moves will help the company grow again, and its research shows the average American spends $1,600 a year on gifts. “There is no single market leader and Etsy sees a real opportunity to become the destination for gifting,” Etsy’s Chief Executive Josh Silverman said in a recent blog post.

    Etsy is clearly under pressure after seeing its gross merchandise sales more than double in 2020 during the pandemic, when it became a go-to place to buy handmade masks and all kinds of items for the home, from vintage pieces to antiques to castoffs. From personal experience as an Etsy seller, I saw sales at my own small vintage-clothing shop more than double in 2020 and then fall back in 2021, while still remaining higher than in 2019. In the last two years, sales have slowed, and some other sellers have witnessed similar patterns, based on their comments in seller forums.

    The number of sellers and buyers on the platform has grown roughly in line with gross merchandise sales. But e-commerce competition has also gotten fiercer.

    “Our main concern with Etsy is growing competition in the space from new players like Temu,” said Bernstein Research analyst Nikhil Devnani, in an email. Temu and fellow Chinese online retailer Shein have raised a lot of investor jitters, as Etsy’s gross merchandise sales have slipped over the last year and are forecast to fall again in its upcoming fourth-quarter earnings report later this month.

    Devnani said a Super Bowl ad could potentially help the marketplace gain visibility, something it has always lacked.

    “One dynamic they’ve talked about a lot is that brand awareness/recollection is still low, and this keeps frequency low,” he said, noting that Etsy buyers shop on the site about three times per year, on average. “They want to be more top-of-mind … Super Bowl ads are notoriously expensive of course, but can be impactful/get noticed.”

    The company’s big focus on Gift Mode, however, could be a risky strategy. How many times a year do consumers look for gifts? And in a note Devnani wrote in October, before the company’s Gift Mode launch, he said that one of the concerns investors have is that Etsy is too niche. “’How often does someone need something special?’ is the rhetoric we hear most often,” he said. Etsy, then, is counting on buyers returning for other items for themselves.

    Etsy CEO Silverman believes buyers will come back again and again to purchase gifts. Naved Khan, a B. Riley Securities analyst, said in a recent note to clients that he believes Gift Mode plays to Etsy’s core strengths, offering “unique goods at reasonable prices” versus the mass-produced products sold on Shein, Temu, Amazon.com Inc. (AMZN, +2.71%) and other sites.

    Consumer spending has changed, though. At an investor conference in December, Silverman said that consumers are spending on dining out and traveling, instead of buying things.

    But while investors still view Etsy as a niche e-commerce site, some buyers and sellers see it as overrun with repetitive, irrelevant ads. Complaints about a decline in search capabilities, reliance on email and chat for support, and constant tech changes are common on seller forums and Facebook groups. AI-generated art offered by newer sellers as a side hustle has also become a hotly debated issue. And there are complaints about mass-produced items making their way onto the site.

    Etsy said that in addition to its human and automated efforts, it also relies on community flags to help take down infringing products that are not allowed on its marketplace, and that community members should contact the company if they see mass-produced items for sale on the site.

    It also continues to work on search. On its last earnings call, Silverman said the company was moving beyond relevance to the next frontier of search, one “focused on better identifying the quality of each Etsy listing utilizing humans and [machine-learning] technology, so that from a highly relevant result set we bring the very best of Etsy to the top — personalized to what we understand of your tastes and preferences.”

    The pressure could build on the company if its latest moves don’t generate growth. Etsy recently gave a seat on its board to a partner at activist investor Elliott Management, which bought a “sizable” stake in the company in the last few months. Marc Steinberg, who is responsible for public and private investments at Elliott, has also been on the board at Pinterest (PINS, -9.45%) since December 2022.

    Elliott Management did not respond to questions. But in a statement last week, Steinberg said he was joining the board because he “believe[s] there is an opportunity for significant value creation.” Some sellers fear that the pressure from investors and Wall Street will lead to Etsy allowing mass-produced products onto the site. In its fall update, Etsy said the number of listings it removed for violating its handmade policy jumped 112% and that it was further accelerating such actions.

    Etsy’s stock before the news of Elliott’s stake was down about 18% this year. Its shares are now off about 3.65% this year, after recently having their best day in seven years on the news that Steinberg joined the board.

    Etsy is a unique marketplace that for many years had a much better reputation than some of its rivals, like eBay (EBAY, +0.98%). But since going public and answering to Wall Street, the need to provide growth and profits for investors has become much more of a driver. The Super Bowl ad and Gift Mode may bring broader awareness to Etsy, but will it be the right kind of awareness? Sellers like me hope these new efforts will help Etsy hold its own in the continuing fight with the likes of Temu and other vendors of mass-produced products, and help it retain the remaining unique aspects of its marketplace.


  • Hewlett Packard Enterprise to buy Juniper Networks in $14 billion deal


    In an effort to keep up in the accelerating AI arms race, cloud-services provider Hewlett Packard Enterprise Co. on Tuesday agreed to buy Juniper Networks, Inc. in a deal worth around $14 billion.

    Under the terms of the deal, Hewlett Packard Enterprise (HPE, -8.92%) will acquire Juniper (JNPR, +21.81%) — which makes communications-networking products and also has an AI segment called Mist AI — for $40 a share. The companies expect the deal to close late this year or in early 2025.

    “The acquisition is expected to double HPE’s networking business, creating a new networking leader with a comprehensive portfolio that presents customers and partners with a compelling new choice to drive business value,” the companies said in a release.

    After the deal is completed, Juniper Chief Executive Rami Rahim will lead the combined HPE networking business, and report to HPE CEO Antonio Neri.

    “This transaction will strengthen HPE’s position at the nexus of accelerating macro-AI trends, expand our total addressable market, and drive further innovation for customers as we help bridge the AI-native and cloud-native worlds, while also generating significant value for shareholders,” Neri said in a statement.

    HPE said the addition of Juniper will boost margins and result in up to $450 million in annual cost savings within three years of the deal’s completion, as well as accelerate growth. In its most recent quarter, HPE’s networking segment was the company’s top source of earnings before taxes, at $401 million on $1.4 billion in revenue.

    HPE’s deeper plunge into networking closes a chapter of sorts. Then-Hewlett-Packard Co. acquired Aruba Networks for about $3 billion in March 2015, months before Silicon Valley’s original garage startup split in half, resulting in the formation of HPE, which sells servers and other equipment for data centers, and HP Inc. (HPQ, -2.71%), which makes PCs and printers.

    The Wall Street Journal reported the possibility of a deal on Monday, sending shares of Juniper higher.

    Shares of Juniper (JNPR, +21.81%) rose 0.5% after hours, after jumping 21.8% during regular trading hours. Hewlett Packard (HPE, -8.92%) shares were down 0.4% after hours, after falling 8.9% during the day.

    As of Tuesday’s close, Juniper had a market cap of $9.64 billion, while HPE’s was $23.04 billion.

    The companies hope the deal can provide a much-needed jolt after a series of lackluster quarterly earnings. Juniper shares have gained 15.7% over the past 12 months, while HPE shares are down 5.4% over that span. The S&P 500 (SPX), in comparison, is up about 21.4% over the past year.

    For decades, Juniper has lagged rival Cisco Systems Inc. (CSCO, -1.09%) in the networking-equipment market. In its most recent quarter, Juniper reported net income of $76 million on revenue of $1.4 billion, down 1% from the same quarter a year earlier.


  • How to Leverage AI to Supercharge Your Business | Entrepreneur


    Opinions expressed by Entrepreneur contributors are their own.

    Artificial Intelligence: What was once a seemingly passing buzz phrase has now become an accepted and enduring technology reality — set to only increase in speed and application.

    But across society, and especially among those focused on business solutions, responses are divided. Some feel nervous, even frightened, and certainly concerned about the impact AI could have on their jobs individually and on the job market as a whole. Others straddle the fence, unconvinced as yet of its true potential. Then there is the corps of the savvy — those who harness its capabilities and recognize its limitations.

    I heartily recommend you take your place in that last group.

    I’m lucky to work in an ecosystem of individuals committed to harnessing innovation, one that embraces inventions and improvements and then learns how to redirect workflow energy accordingly. One such individual is Chris Winfield, founder of Understanding AI. His career path has flowed through various markets and verticals but has centered on giving entrepreneurs what he likes to term an “unfair advantage” by leveraging everything from relationships to PR and social media to mentorships. Now, he’s turned his attention to AI and has identified key ways entrepreneurs can gain another unfair advantage by leveraging it.

    We sat down and discussed this pathway to more robust 21st-century business and various other related topics, including how to soothe associated anxiety.


    You can’t avoid reality

    According to Winfield, the most glaring mistake entrepreneurs make is avoiding the subject altogether. In a rapidly evolving business landscape, reluctance can leave you stranded while others sail smoothly into new opportunities. “The key,” he said, “is to understand that AI is a tool like any other: its effectiveness depends on how you wield it.”

    Early steps

    To assess the applicability of AI in a work setting, Winfield typically has clients go through a simple exercise: Figure out your hourly rate (everyone has one: how much your company made last year divided by the hours you worked). Then, write down everything you do for one week — tasks, meetings, minutiae, calls… everything. Once that list is complete, identify tasks you wouldn’t be doing if you could pay someone else to do them, then ask whether ChatGPT (or any other AI tool) could handle them.
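    Winfield’s exercise boils down to comparing your effective hourly rate against the cost of handing a task off; a minimal sketch of that arithmetic (all figures and task names below are hypothetical, not drawn from the interview):

```python
# Effective hourly rate: last year's revenue divided by hours worked.
revenue = 150_000        # hypothetical annual revenue, in dollars
hours_worked = 2_000
hourly_rate = revenue / hours_worked   # 75.0 dollars per hour

# One week of logged work: (task, hours spent, est. hourly cost to delegate).
week_log = [
    ("scheduling meetings",   4, 15),
    ("drafting newsletters",  6, 25),
    ("sales calls",          10, 80),
]

# Candidates for delegation (to a person or an AI tool): anything cheaper
# to hand off than your own time is worth.
to_delegate = [task for task, hrs, cost in week_log if cost < hourly_rate]
print(to_delegate)  # ['scheduling meetings', 'drafting newsletters']
```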

    Consider one example: Imagine how an entrepreneur who also manages writing and social media for her company might leverage AI during a typical day. She has a morning video call with an interviewer and applies vidyo.ai, an AI-assisted editing platform that transforms longer-form videos and podcasts into shorter clips suitable for TikTok, YouTube and Instagram. It also generates a snippet-ready video of the call and a transcript of everything discussed. She then engages ContentFries, which chops that video into social media-ready tidbits.

    Finally, she makes productive use of the transcript using another AI tool to write blog posts and social media captions. She has done all of this — truly maximizing output and content creation — essentially without lifting a finger.


    The importance of smart input

    To be sure, polishing content and applying it cleverly in your marketing is still up to you. One common pitfall, Winfield pointed out, is the assumption that AI can work wonders without you; it can only be as good as the prompts you provide, and good prompts require creativity. He recalled mentoring a chiropractor who asked an AI tool to “write good newsletters.” That was it… that was the prompt. The results, not surprisingly, were lackluster. So Winfield coached him, including imparting the effectiveness of “laddering.”

    Generally applied in marketing realms, laddering also works with prompts. Think of it as peeling back an onion — moving from understanding features to values to the emotions that make us tick. We have to do this when using AI to help it understand our foundational and creative needs and the emotional payoff we’re looking for. Once it has all of those inputs, it can create useful and valuable content for consumers.

    Enter ChatGPT, a tool developed by OpenAI that can truly revolutionize our work. In addition to crafting and focusing prompts, you can use it as a brainstorming partner, a search engine and a versatile outline creator. Applied thoughtfully, it can save oceans of research time and busywork, leaving more hours for tasks only your creative brain can handle.

    Another quick exercise: Have ChatGPT build a target-market persona for your company, which could help you understand the current market better and/or identify one that a unique product or offering could reach. Be specific with your prompts, though: Ask it who may like your product and what competitors it might have, and lastly, have it present a demographic that you may not have thought about. Examine the resulting information critically, and don’t be afraid to make mistakes (or to recognize AI’s mistakes) along the way.


    Keep AI work-related

    As artificial intelligence continues to evolve, new tools are seemingly released daily, and it’s easy to get caught up in the excitement and lose focus. So, Winfield advises that entrepreneurs designate AI tools for work-related tasks only (avoiding using them for unrelated matters during office hours) to prevent time-wasting distractions.

    The better you get at leveraging AI, the easier it will be to identify which tools you should and can integrate. Handled strategically and capably, you’ll find that productivity increases, creativity expands, and time is created for the kind of multidimensional thinking that really helps move the success needle.


    Randy Garn


  • Synopsys and Ansys in talks to merge: report


    Shares of Ansys Inc. soared 18% in trading Friday on reports the company is in discussions to be acquired by Synopsys Inc. in a deal that would create a design-software behemoth.

    The potential deal would kick off 2024 with a mega-merger, even as the Federal Trade Commission attempts to crack down on such transactions. Talks remain fluid and a third party might still emerge as a possible suitor of Ansys, according to a Wall Street Journal report, which cited people familiar with the situation.

    Ansys (ANSS, +18.08%), which has a market value of nearly $26.3 billion, makes software that helps predict how products in aerospace, healthcare and automotive applications will work in the real world. A deal could be struck early in 2024, according to people familiar with the matter. Ansys reported revenue of $2.1 billion in 2022.

    Synopsys (SNPS, -6.34%), with a market value of $85.1 billion, makes software that engineers use to design and test silicon chips used in smartphones, self-driving cars and other forms of artificial intelligence. Its stock has climbed 65% this year as investors have hopped on the AI bandwagon. Shares of Synopsys dipped 6% in late trading Friday.

    Synopsys’s customers include Nvidia Corp. (NVDA, -0.33%), Intel Corp. (INTC, +1.95%) and Advanced Micro Devices Inc. (AMD, -0.22%).

    Representatives from Synopsys and Ansys were not immediately available for comment.

    Should the companies strike a merger, it would offer a fresh test for the FTC and its chair, Lina Khan, who have opposed large tech mergers and acquisitions. The agency unsuccessfully sued to block Facebook parent Meta Platforms Inc.’s (META, -0.20%) pursuit of VR developer Within, as well as Microsoft Corp.’s (MSFT, +0.28%) $69 billion purchase of Activision Blizzard Inc.


  • AMD wins high praise for AI advancements as its stock soars 6%


    While Advanced Micro Devices Inc. shares didn’t enjoy a Wednesday bump during the company’s artificial-intelligence event, they were rallying sharply Thursday as analysts reflected on the chip maker’s presentation.

    Chief Executive Lisa Su and her team “put together one of the most impressive new product event/launches by our reckoning in the last decade, perhaps ever,” Rosenblatt Securities analyst Hans Mosesmann wrote in a note to clients.

    The launch of AMD’s (AMD, +7.09%) MI300X AI/graphics-processing-unit accelerator “was not just a speeds and feeds geek fest (it was that for sure, with AMD claiming superiority in AI inferencing), but an industry movement coalescing around the concept of ‘open’ sourced technologies are preferred (demanded really), to address the insanely fast/accelerating life-changing thing that AI has become,” Mosesmann continued.


    He was also impressed by the company’s talk of its software platform ROCm, which he thinks is catching up to Nvidia Corp.’s (NVDA, +1.54%) CUDA.

    “Of course, Nvidia is not going away, and we are quite sure will remain the dominant AI player for years to come but AMD we feel made the case yesterday that they will be an important AI innovator on a secular basis,” Mosesmann noted, as he kept his outperform rating and $200 target price on the stock.

    AMD shares were up 6% in Thursday morning trading.

    Baird’s Tristan Gerra was also impressed.

    “Rapidly unfolding hyperscaler engagements, highly competitive AI architecture specs, along with accelerated new product roadmap, bode well for share gains and continued acceleration in AI-related revenue for AMD beyond 2024, while faster-than-expected rate of adoption so far could potentially drive upside in the AI revenue outlook for 2024, in our view,” he wrote.


    Gerra also sees the potential for “high-volume deployments,” thanks to the “significant software milestones” AMD is showing. He rates the stock at outperform with a $125 target price.

    TD Cowen’s Matthew Ramsay said that AMD’s event reinforced his belief that the company “is well positioned to meaningfully participate” in the large total addressable market for AI accelerators.

    The company called out Microsoft Corp. (MSFT, -0.01%), Meta Platforms Inc. (META, +2.41%) and Oracle Corp. (ORCL, -0.08%) as customers, announcements that were “strong” but not “surprising,” in Ramsay’s view.

    “We remain encouraged that AMD is making an impressive case (and is getting customer support) to provide adaptive computing solutions for both training and inference in increasingly large [generative-AI] infrastructure builds,” he wrote. “We believe this signifies a strong AI strategy of delivering a broad portfolio of [central processing unit], GPU, and [field-programmable gate array] assets, with open software that enables easily deployed AI workloads while leveraging the company’s existing partnerships to accelerate its AI ramps at-scale.”

    Ramsay has an outperform rating and $130 target price on AMD shares.


  • Why Sam Altman is a no-brainer for Time’s ‘Person of the Year’


    Nothing has changed our lives more this year than the advances made in artificial intelligence — and they have the potential to alter our lives in even more dramatic ways down the road.

    So it’s a no-brainer that Sam Altman, co-founder and recently returned chief executive of the once-little-known OpenAI, should be named “Person of the Year” by Time Magazine when the selection is announced Wednesday.

    Altman has already cracked Time’s shortlist, joining candidates from varied backgrounds, including world leaders like Xi Jinping and entertainment phenomenon Taylor Swift. The selection ultimately comes down to an “individual or group who most shaped the previous 12 months, for better or for worse.”

    But Time has often given “agents of change” its yearly honor — just look at 2021 winner Elon Musk — and Altman certainly fits that bill.

    No other innovation in the past year has had an impact in such disparate realms. OpenAI publicly launched its ChatGPT chatbot late last year, and as the technology went viral in 2023, it upended the stock market, Silicon Valley and companies that wouldn’t normally be classified as technology businesses. The ensuing product development and surge in generative-AI investment revitalized a tech industry that had sunk into the doldrums amid a pandemic hangover.

    Admittedly, it will take time for companies to realize the true financial benefits of AI: Nvidia Corp. (NVDA, -2.68%) is among the few to generate serious money from the frenzy so far. But market researcher IDC predicted that global spending on AI, including software, hardware and services for AI-centric systems, will reach $154 billion this year, up 27% from a year ago. That total could zoom above $300 billion by 2026.
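    The IDC projections above imply a steep but checkable growth path; a quick back-of-the-envelope calculation, assuming $154 billion in 2023 rising to $300 billion by 2026 (three years of compounding):

```python
# Implied compound annual growth rate (CAGR) from IDC's projections.
start, end, years = 154.0, 300.0, 3   # $bn, 2023 -> 2026
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 25% a year, close to the 27% rise cited for this year
```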


    And AI isn’t only impacting the corporate world. The technology is already affecting our daily lives, and it will have even deeper effects going forward. Chatbots are getting smarter on websites, facilitating better customer service. They’re starting to alter the workplace as well, spitting out mostly coherent marketing copy, research and even, gasp, news articles — albeit with plenty of errors.

    At first, ChatGPT seemed like a fun way to kill time or get homework help, but the chatbot and its ilk will seriously alter the working world, helping to eliminate perhaps millions of jobs. Morgan Stanley recently predicted that more than 40% of occupations will be affected by generative AI in the next three years.

    Altman himself has been the face of OpenAI in the past year. He’s talked up the technology, but he also appeared at congressional hearings in May to discuss potential regulation of AI, testifying that “if this technology goes wrong, it can go quite wrong.” His recent firing and quick rehiring by OpenAI and its small, nonprofit board late last month fueled a veritable media storm before the Thanksgiving holiday in the U.S.

    Time chooses its persons of the year for their impact, not because they’re saints. And Altman’s own story is not without controversy. The recent brouhaha over his leadership of OpenAI is believed to have been caused by a deep schism over the ethics of AI development. The board seemingly wanted more guardrails and precautions, and feared that rushed development could irrevocably doom mankind.

    Read in the Wall Street Journal: How effective altruism split Silicon Valley and fueled the blowup at OpenAI

    Altman, who also wooed Microsoft Corp. (MSFT, -1.43%) to become an investor in OpenAI, emerged the victor in the upheaval with his own company’s altruistic board. Had Altman truly been fired from OpenAI, Microsoft was planning to hire him, and nearly every employee at OpenAI was ready to quit and follow him there. While OpenAI faces plenty of competition, including from Alphabet Inc.’s (GOOG, -2.02%; GOOGL, -1.96%) Google, Altman should continue to be the face of AI development, for good and for bad, even as he has advocated industry regulation.

    The debut and influence of ChatGPT and follow-on AI products are having the biggest impact on tech development since the invention of the iPhone. Altman is at the center of it and leading the charge. Whether he can keep the lid on Pandora’s Box or not depends on many factors, but he and the company he leads are clearly driving a new tech movement that affects us all, whether we like it or not.


  • OpenAI Has a New Board. Who’s On, Who’s Not, and What It Means for AI Safety.


    Sam Altman is returning to OpenAI, but power at the artificial-intelligence start-up is still set to be held by its board. The members who fired Altman are largely out, and their replacements suggest the new board will be less inclined to slow or block the development of AI technology.


  • Sam Altman to return as OpenAI CEO, alongside new board that includes Larry Summers


    OpenAI has reached an “agreement in principle” for Sam Altman to return to his post as chief executive officer alongside a new board, just days after his ousting, the company said on Wednesday.

    In a posting on X, the tech group behind ChatGPT said former Salesforce CEO Bret Taylor will serve as chair, joined on the board by former Treasury Secretary Larry Summers and by Quora co-founder and CEO Adam D’Angelo, a current director.

    The…



  • Sam Altman to Join Microsoft Following OpenAI Ouster


    Updated Nov. 20, 2023 6:34 am ET

    SAN FRANCISCO—Microsoft said it is hiring Sam Altman to helm a new advanced artificial-intelligence research team, after his bid to return to OpenAI fell apart Sunday with the board that fired him declining to agree to the proposed terms of his reinstatement.

    Microsoft Chief Executive Satya Nadella posted on X (formerly Twitter) late Sunday that Altman and Greg Brockman, OpenAI’s president and co-founder who resigned Friday in protest over Altman’s ouster, will lead its team alongside unspecified colleagues. Nadella said Microsoft was committed to its partnership with OpenAI and that it would move quickly to provide Altman and Brockman with “the resources needed for their success.” 

    Copyright ©2023 Dow Jones & Company, Inc. All Rights Reserved.


  • Sam Altman Is Fired as OpenAI CEO


    OpenAI announced Friday afternoon that CEO Sam Altman has departed the company, saying the executive “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”


  • OpenAI CEO Sam Altman steps down as board loses confidence in his leadership


    OpenAI said Friday that Sam Altman is no longer its chief executive, with the ChatGPT parent adding that Altman had not been “consistently candid in his communications with the board.”

    “The board no longer has confidence in his ability to continue leading OpenAI,” the company said in a blog post.

    In a tweet Friday, Altman said he “will…
