ReportWire

Tag: a.i

  • A lawyer fired after citing ChatGPT-generated fake cases is sticking with AI tools: ‘There’s no point in being a naysayer’ 

    Artificial intelligence will bring changes to many professions, including law. But it’s also claiming victims who trust too much in its capabilities.

    Among them is Zachariah Crabill, who was an overwhelmed rookie lawyer at a law firm in Colorado Springs when he gave in to the temptation of using ChatGPT in May.

    The AI chatbot helped him write a motion in seconds, saving him hours of work, as local radio station KRDO reported in June. But after he filed the document with a Colorado court, he realized that something was amiss: Several lawsuit citations generated by ChatGPT were made up.

    OpenAI’s ChatGPT is known to be confidently wrong, and in this case it simply created cases out of thin air that sounded convincing. Crabill did not check to make sure the cases were real before submitting his work.

    Crabill admitted his mistake to the judge, who reported him to a statewide office, and in July the young attorney was fired from his job at Baker Law Group.

    In his statement to the court admitting his mistake, Crabill wrote, “I felt my lack of experience in legal research and writing, and consequently, my efficiency in this regard could be exponentially augmented to the benefit of my clients by expediting the time-intensive research portion of drafting.” 

    Crabill isn’t the only lawyer to trust ChatGPT too much. In June, two lawyers were scolded and fined $5,000 by a federal judge in New York for submitting a legal brief that also cited nonexistent cases. 

    In sanctions against Steven A. Schwartz and Peter LoDuca of Levidow, Levidow & Oberman, the judge wrote: “Technological advances are commonplace, and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

    “I did not comprehend that ChatGPT could fabricate cases,” Schwartz had earlier told the judge.

    But Crabill, for his part, isn’t giving up on AI tools, despite the traumatic experience. 

    “I still use ChatGPT in my day-to-day, much like most people use Google on the job,” he told Business Insider. Indeed he has since started a company that provides legal services via AI.

    In a Washington Post piece published on Thursday, Crabill said he would likely use AI tools designed specifically for lawyers to aid in his writing and research.

    He added, “There’s no point in being a naysayer or being against something that is invariably going to become the way of the future.”

    Subscribe to the Eye on AI newsletter to stay abreast of how AI is shaping the future of business. Sign up for free.

    [ad_2]

    Steve Mollman

  • Nvidia CEO Jensen Huang says his AI powerhouse is ‘always in peril’ despite a $1.1 trillion market cap: ‘We feel it’ 

    Nvidia is on a tear. It is also, according to its billionaire CEO Jensen Huang, in peril.

    The semiconductor maker, whose processors are used in gaming, data centers, and autonomous vehicles, plays a key role in the artificial-intelligence boom that has rejuvenated Silicon Valley. Tech giants compete to buy up its expensive AI chips. This year it joined the select group of companies with a market cap of $1 trillion or more.

    But “there are no companies that are assured survival,” Huang warned Thursday at the Harvard Business Review’s Future of Business event.

    Nvidia in its 30-year history has faced several existential threats, which helps explain why Huang recently told the Acquired podcast that “nobody in their right mind” would start a company. For example, it almost went bankrupt in 1995 after its first chip, the NV1, failed to attract customers. It had to lay off half its employees before the success of its third chip, the RIVA 128, saved it a few years later.

    “We have the benefit of building the company from the ground up and having not-exaggerated circumstances of nearly going out of business a handful of times,” Huang said this week, as Observer reported. “We don’t have to pretend the company is always in peril. The company is always in peril, and we feel it.”

    But Huang thinks it’s important to avoid getting too stressed about it. 

    “I think the company living somewhere between aspiration and desperation is a lot better than either [being] always optimistic or always pessimistic,” he noted. 

    One challenge the Santa Clara, Calif.-based chipmaker now faces is the tightening of U.S. rules on tech exports to China. That could result in Nvidia losing billions of dollars after canceling planned deliveries to Chinese companies.

    “The restriction is a capability restriction,” Huang said. “It’s not an absolute restriction…The first thing we need to do is to comply with the regulation and understand what the limits are and, to the best of our ability, offer products that can still be competitive.”

    But trying to sell chips with decreased capabilities in China leaves Nvidia more exposed to competition from local rivals. “It’s not easy, and competitors are moving quickly,” Huang said. “It’s like anything else that you gotta stay alert and do the best you can.”

    Meanwhile, despite Nvidia blowing past expectations in recent quarters, many analysts warn that competition from rival AMD and others is sure to intensify. Among them is David Trainer, chief of research firm New Constructs.

    “The rest of the world won’t just roll over and let them dominate AI,” Trainer told Fortune in August. “They’re facing the same curse as Tesla. Nvidia benefited like Tesla from being first to market. But when Tesla got profitable, loads of competitors entered the EV space, cutting its margins and slowing sales. The same will happen for Nvidia.”

    Huang told Acquired that he’s read the business books by former Intel CEO Andrew Grove, calling them “really good.” Among those is Only the Paranoid Survive.

    Huang seems to have taken it to heart. 

    “If you don’t think you are in peril,” he said this week, “that’s probably because you have your head in the sand.”

    Steve Mollman

  • LinkedIn’s top recruiting executive says adding AI to job listings is a ‘requirement’

    As the threat of artificial intelligence stealing jobs looms, employees are future-proofing their careers by specifically applying for job ads with AI mentioned in their listings—because if you can’t beat AI, you might as well join it. 

    That’s according to LinkedIn’s research, which shows that during the past two years, job posts on the networking platform that mention AI or generative AI received 17% higher application growth than job posts that do not.

    “Candidates are savvy,” said Erin Scruggs, vice president of global talent acquisition at LinkedIn. “They’re showing they want to go where opportunities are.”

    It’s why she recommends companies detail their AI plans in their job ads—even if the role advertised isn’t involved in the plans—or risk losing top talent.

    “I would consider it a requirement for most companies to share at least a basic roadmap of their AI strategy in job posts to keep up with the market,” Scruggs added.

    What’s more, companies around the world should take note: LinkedIn’s conclusion that job postings mentioning AI are hot on the market was based on data drawn from ads written in English, Spanish, French, Japanese, Dutch, Italian, German, Portuguese, Turkish, and Chinese.

    Join the AI bandwagon—or risk being replaced

    The rush to jump on the AI bandwagon comes as fears mount that automation will wipe out millions of jobs. Just last week, Tesla and X owner Elon Musk told the U.K. AI Safety Summit that AI will one day eradicate employment.

    “You can have a job if you want to have it for personal pleasure. But AI could do everything,” Musk told Britain’s prime minister Rishi Sunak. “I don’t know if people are comfortable or uncomfortable with that.” 

    At the same time, investment bank Goldman Sachs has estimated that AI could replace the equivalent of 300 million full-time jobs globally in the coming years. Meanwhile, IBM’s CEO Arvind Krishna predicted “repetitive, white-collar jobs” will be automated first. 

    But, he added, that doesn’t mean humans will be out of jobs. “People mistake productivity with job displacement,” he said at Fortune’s CEO Initiative conference. 

    As an example, he pointed to jobs created by the invention of the internet. “In 1995 no one thought there would be five million web designers—there are,” Krishna said.

    It’s why Reddit’s former CEO Yishan Wong advised workers concerned about being replaced by AI to future-proof their roles by sidestepping into the industry, because it doesn’t require “an enormous amount of technical skill.”

    “Nontechnical people can build pretty valuable and novel applications in AI,” he told Fortune. “There’s this enormous amount of leverage that an individual can have.”

    Similarly, Nvidia’s CEO Jensen Huang recently suggested that AI will “generate jobs”—with the caveat that while people might not lose their jobs to AI, they’ll likely lose them to another human using AI.

    It’s the one thing leaders can seemingly agree on—and judging by LinkedIn’s research, workers know it too.

    “AI may not replace managers, but the managers that use AI will replace the managers that do not,” IBM’s chief commercial officer Rob Thomas said during a press conference. “It really does change how people work.” Likewise, the economist Richard Baldwin echoed, “AI won’t take your job” during a panel at the 2023 World Economic Forum’s Growth Summit. “It’s somebody using AI that will take your job.”

    Orianna Rosa Royle

  • Microsoft earnings finally show Wall Street AI’s financial potential

    AI hype has consumed the business world—and driven some consumers to the point of AI fatigue—but Wall Street is still obsessed with the technology and eager to learn what it will mean for companies’ bottom lines. 

    The topic of AI dominated Microsoft’s call with investors and analysts after the tech giant and OpenAI investor published stellar earnings on Tuesday. Of the eight analysts that executives took questions from, six asked about AI, the company’s Copilot AI product, or how AI is shaping the overall business. That curiosity is no surprise: for all the AI noise, few earnings reports have quantified the power of the technology.

    In an Oct. 18 report to Wolfe Research clients about Microsoft—before the company’s latest earnings—analyst Alex Zukin warned against putting too much stock in AI without the numbers to back up the company’s claims that the technology is transformative. “While last quarter we were all busy revising up our estimates for how big Copilot can be, it seems investors have since tripped and fallen into the trough of AI disillusionment questioning both the actual functionality, the profitability, and ultimately the durable and sustainable competitive advantage,” Zukin wrote, as MarketWatch reported.

    Microsoft’s Tuesday earnings report was one of the first in which analysts could start to see the business implications of full-blown AI adoption. 

    In the most recent quarter, which ended Sept. 30, Microsoft beat analyst expectations across the board. It generated $56.5 billion in revenue, $2 billion higher than the consensus estimate. Adjusted earnings per share registered at $2.99, compared to expectations of $2.66. Since this time last year—right before OpenAI launched ChatGPT and sparked the AI frenzy—Microsoft’s profit increased 27%. The company’s stock rose 6.2% in after-hours trading, peaking at $350.90 per share.

    Microsoft’s growth in the most recent quarter stemmed from the business unit the company labels “Azure and other cloud services,” which houses its investments in AI. Of the segment’s 29% growth from the previous fiscal year, 3 points came from AI services—thanks to “higher-than-expected AI consumption,” CFO Amy Hood said during the call. “While the trends from prior quarters continued, growth was ahead of expectations, primarily driven by increased GPU capacity and better than expected GPU utilization of our AI services,” Hood said, referencing graphics processing units (GPUs), the computing technology that renders images and is essential in AI. More than 18,000 organizations use Azure AI services, which include speech-to-text and facial recognition features, CEO Satya Nadella added.

    In the quarter, Microsoft announced the integration of its AI-powered Copilot assistant into its roster of office tools, including web browsers, Windows 11 desktop software, and Microsoft 365 apps. The company launched Bing Chat, also powered by AI, in recent months as a new way to search the internet. Copilot in 365 for businesses will roll out on Nov. 1. 

    Three analysts quizzed executives on how AI will impact Microsoft’s margins and other growth metrics in the future. Brent Thill from Jefferies put it simply: “Can you sustain double-digit growth, especially with the stronger AI boost coming in the next several quarters?” Hood answered, “We feel good about our ability to execute.” 

    “With copilots, we are making the age of AI real for people and businesses everywhere,” Nadella said in the earnings statement. “We are rapidly infusing AI across every layer of the tech stack and for every role and business process to drive productivity gains for our customers.”

    Rachyl Jones

  • Roche is using AI to find the hardest-to-find lung cancer patients, with a potential blockbuster drug on the line

    Roche Holding AG’s lung cancer drug scored a big win against a standard therapy in a study this week. Now, the Swiss drugmaker is turning to artificial intelligence to find patients who can benefit. 

    When given after surgery to remove lung tumors, Roche’s Alecensa cut the risk of either cancer recurrence or death by 76% compared with standard chemotherapy, according to results from a primary analysis of the trial released Wednesday. The drug could “potentially alter the course of this disease,” Roche Chief Medical Officer Levi Garraway said in a statement. 

    But finding patients to treat may be difficult: The study examined the effects on people with an error in a gene called ALK that’s found in only about 4% to 5% of lung cancer patients. Most of them are younger and less likely to have smoked than typical lung tumor patients, and often go undiagnosed early on. 

    To solve the problem, Roche will use an AI collaboration with Israeli tech company Medial EarlySign Ltd. to help doctors determine when to use CT scans. While the technology, called LungFlag, doesn’t currently detect ALK-positive patients, the company said on Saturday that it’s actively exploring how to expand it so those patients can benefit.

    That will help find tumors before they spread and while needed surgery is still possible, said Charlie Fuchs, Roche’s head of oncology and hematology drug development. 

    “Sometimes when you really use deep data algorithms, you may find things that identify people who are non-smokers and yet at risk,” Fuchs said in an interview. “We hope more patients can be found early and benefit from this.”

    Roche has said it will file the Alecensa study results with regulators for approval. The full results were presented Saturday at the European Society for Medical Oncology meeting in Madrid. Alecensa is already approved in the US, Europe, Japan and China for patients with ALK-positive metastatic lung cancer. 

    Analysts anticipate that Alecensa will generate 1.56 billion Swiss francs ($1.75 billion) in sales this year. That it can be a blockbuster medicine while treating such a small portion of lung cancer patients shows that effective drugs don’t have to serve a big patient population to be scientific and financial successes, Fuchs said.

    Subscribe to Well Adjusted, our newsletter full of simple strategies to work smarter and live better, from the Fortune Well team. Sign up for free today.

    Naomi Kresge, Bloomberg

  • Meta Is Paying Celebs Millions for Their AI Likeness, Chatbot: Report | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    This article originally appeared on Business Insider.

    Meta is paying one top creator as much as $5 million over two years for six hours of work in a studio to use their likeness as an AI assistant, according to The Information.

    Mark Zuckerberg unveiled the AI assistants during the company’s Connect event last month. While the company has its own AI chatbot similar to ChatGPT, it also introduced 28 new ones with different personalities that use celebrities’ images.

    For example, Kendall Jenner’s likeness is used for Billie, who is portrayed as a big sister to give users advice. And Tom Brady plays Bru, a chatbot for debating sports.

    Meta has also brought on board creators like MrBeast, the most-subscribed individual on YouTube, and the TikTok star Charli D’Amelio.

    The Information reports that Meta was initially willing to pay more than $1 million to use the stars’ likenesses, but shelled out more for big names. The report doesn’t say which person was paid $5 million, but primarily refers to creators.

    Right now, the AI assistants are only text-based, but Meta’s announcement video featured clips of the celebrities speaking as their AI counterparts.

    In an interview with The Verge, Zuckerberg said there’s a “huge need” for AI versions of celebrities, though that would be “more of a next year thing” because of brand-safety concerns: celebrities would want to make sure their image won’t be used to make problematic statements.

    At the same time as its AI assistants, Meta also launched AI stickers. But that feature has come under criticism as users have managed to generate wild images like a child soldier and a lewd image of Justin Trudeau.

    Meta did not immediately respond to Insider’s request for comment.

    Pete Syme

  • Chipotle’s Robots Can Make Almost 200 Burrito Bowls an Hour | Entrepreneur

    This article originally appeared on Business Insider.

    Restaurants challenged by labor costs and retention are automating every part of the business. Chick-fil-A is testing robot bussers. McDonald’s, Domino’s, and White Castle use AI-powered voice bots to take drive-thru and phone orders.

    Now, a robot could soon be making your next Chipotle burrito bowl.

    The chain is testing an automated kitchen line by Hyphen to prepare digital orders, a $3 billion business for Chipotle.

    Hyphen is one of a dozen food tech startups in the industry looking to streamline food operations for restaurants through the use of robotics and automation. But unlike other tech firms, Hyphen has buy-in from Chipotle, one of the industry’s most innovative brands. Chipotle invested in Hyphen in 2022 and started piloting its robotics this year in a lab near the chain’s Newport Beach, California, headquarters.

    In an April earnings call, CEO Brian Niccol called the technology an exciting part of the chain’s future as it strives to reach 7,000 restaurants.

    Hyphen “will enable us to be even more accurate,” Niccol said. “I think probably go a little bit faster, and I think give people more consistent experiences.”

    In July, Niccol told investors that Chipotle expects to install Hyphen’s automated kitchen line in restaurants “in the next 12 to 18 months.”

    Based in San Jose, California, Hyphen recently gave Insider an exclusive first look at how it uses robotics to make up to 180 bowls per hour. That’s about six times what a human can assemble in the same time.

    Here’s how it works.

    In July 2022, Chipotle’s Cultivate Next venture fund invested in Hyphen, a Silicon Valley tech startup building kitchen automation tools.

    Hyphen’s The Makeline uses advanced robotics to assemble meals such as grain bowls and salads.

    Chipotle and Hyphen started testing the system this year at the chain’s test lab in Newport Beach, California.

    The Makeline doesn’t require humans to prepare bowls. But that’s not Chipotle’s main interest in Hyphen, the company has said.

    Hyphen’s tech will help the chain improve speed and order accuracy of digital orders, Curt Garner, Chipotle’s chief technology officer, has previously said.

    “Their use of robotics to enhance the employee and guest experience to find efficiencies in the restaurant industry aligns with our mission of leveraging emerging technology to increase access to real food,” Garner said of Hyphen.

    During my exclusive video demo, Hyphen showed me how a grain bowl is made using its robotics system.

    Once an order is placed, it is automatically sent to Hyphen’s The Makeline. Metal arms move bowls from one food dispenser to another as they fill each bowl with ingredients.

    A metal arm at the bottom of the production system moves the bowl to a pan that holds grains and squash.

    The farro and squash are sent down a chute to fill the bowl. Once complete, the metal arms move the bowl automatically down the assembly line.

    Hyphen’s The Makeline is designed to automate any assembly-line food operation that makes bowls or salads, like those at CAVA, Chipotle, or Sweetgreen.

    Each pan, or holding bin, can hold ingredients at various temperatures.

    While it doesn’t make burritos, Hyphen’s Makeline can help “build” burritos by dispensing ingredients through its automated system, cofounder and chief technology officer Daniel Fukuba told Insider during a recent interview.

    Then, an employee could take those ingredients and fold them into a tortilla, he said.

    The Makeline is also designed with open bins so employees can work in tandem with the system if needed.

    In the testing phase, Hyphen makes bins and augers, spiral-shaped tools, from 3D printers.

    Fukuba said Hyphen is using 3D printers by Formlabs to make production tools only during the research and development phase. The 3D printers allow the startup to test new designs in a timely manner as it perfects the assembly-line process.

    “We have three printers working 24/7,” he said. “Speed is essential as a startup.”

    The next stop on the line is toppings. Roasted and salted cashews top the bowl. Portioning is automated using algorithms based on the order size.

    Fukuba said The Makeline is programmed to dynamically portion ingredients based on the order size.

    “Algorithms dynamically adjust to the order size,” he said.

    The Makeline will “scale up” each item if the customer chooses only a few toppings. But if someone orders a lot of add-ons, the system will reduce each portion size to ensure everything fits in the bowl. Protein portions are never reduced, Fukuba said.

    Dynamic portioning means chains like Chipotle can deliver a consistent meal every time. Fukuba said it solves issues restaurants have with customers complaining they’ve received “under portioned” meals.
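    The dynamic-portioning logic Fukuba describes can be sketched in a few lines of Python. This is a purely illustrative model, not Hyphen's actual code: the ingredient names, gram weights, and bowl-capacity constant are all invented assumptions. Non-protein portions scale up or down by a common factor so the order fills, but never overflows, the bowl, while protein portions are left untouched.

```python
# Illustrative sketch of dynamic portioning; all names and numbers
# here are assumptions for the example, not Hyphen's real values.

BOWL_CAPACITY_G = 700  # assumed capacity of one bowl, in grams


def portion(order: dict, proteins: set) -> dict:
    """Return per-ingredient portions in grams, scaled to fit the bowl.

    `order` maps each chosen ingredient to its standard portion (grams).
    Non-protein portions grow (few toppings) or shrink (many add-ons)
    by a shared factor; protein portions are never reduced.
    """
    protein_g = sum(g for name, g in order.items() if name in proteins)
    other_g = sum(g for name, g in order.items() if name not in proteins)
    room = BOWL_CAPACITY_G - protein_g          # grams left for toppings
    scale = room / other_g if other_g else 1.0  # shared scaling factor
    return {
        name: g if name in proteins else round(g * scale, 1)
        for name, g in order.items()
    }
```

    With a standard four-ingredient order the factor scales toppings up to fill the bowl; add several extra toppings to the same order and the factor drops below 1, shrinking each topping while the protein portion stays fixed.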

    The bowl continues down the Makeline until all the ingredients are prepared. The Makeline can crank out between 120 to 180 bowls per hour depending on the number of ingredients on the menu.

    How does automation compare to a human?

    Hyphen said one person can make about 20 to 30 bowls per hour.

    Besides volume, chains like Chipotle can count on Hyphen’s automation to be more accurate than a human.

    “So generally, across the board, we’re always more accurate than a comparable person running at the same rate,” Fukuba said. “It is more consistent and accurate than a person normally would be going at the same speed.”

    A tablet at the end of The Makeline shows the progress of the bowl as it goes down the line. The tablet will indicate when an order is complete.

    Once the bowl is completed, it is put in a lift and sent to the top of the Makeline. At that point, a human steps in. A worker then grabs the bowl, puts a lid on it, and gets it ready for pickup.

    Chipotle is tapping Hyphen to help streamline its digital orders, which surpassed $3 billion in sales last year.

    Digital orders include takeout and delivery orders placed through the Chipotle app or through third-party aggregators like DoorDash.

    Since Brian Niccol became CEO in 2018, the chain has been focused on building its digital business.

    Mobile orders can be picked up using Chipotle’s drive-thru lanes, dubbed Chipotlanes. Restaurants also have pickup shelves for delivery and takeout orders made through the chain’s app or third-party delivery services like DoorDash.

    The chain is also testing new store designs, focusing solely on digital orders.

    In 2022, Chipotle’s digital business surpassed $3 billion in revenue. In the third quarter of 2023, digital sales represented 38% of the chain’s food and beverage revenue.

    Automating the preparation of digital orders is among a handful of tech initiatives Chipotle is piloting. The chain is also testing Chippy, a robotic tortilla chip maker.

    Chipotle’s Chippy uses artificial intelligence to replicate Chipotle’s exact chip-making recipe, the company said. Chippy is a one-store test for now.

    The chain is also testing internally an avocado-cutting robot named Autocado. The robot is expected to slice guacamole preparation time in half. It’s set to eventually use artificial intelligence and machine learning to evaluate the quality of the avocados to help limit waste.

    Hyphen’s The Makeline is expected to enter Chipotle restaurants next year.

    Chipotle is testing Hyphen to automate digital orders. Employees will still prepare in-restaurant orders.

    Niccol said earlier this year that the chain will likely install the first automated makelines in new restaurants.

    Hyphen and Chipotle are working to design the robotic makeline so it will “work with our existing restaurants,” Niccol said.

    The chain, which operates more than 3,250 restaurants, plans to more than double in size over time to 7,000 restaurants.

    Nancy Luna

  • I’ve spent 25+ years in the semiconductor industry. Here’s why I’m confident we can take on the A.I. challenge

    We are headed toward a future where artificial intelligence (A.I.) plays a role in everything we do, for every person on the planet. That scale is incredibly exciting–but there are daunting challenges ahead, from the huge computing demands to security and privacy concerns. To solve them, we need to understand one fact: the path to A.I. at scale runs through our everyday devices.

    Over the past few decades, our laptops, phones, and other devices have been the place where transformative technologies become tools that people trust and rely on. It’s about to happen again, but with greater impact than ever before: A.I. will transform, reshape, and restructure these experiences in a profound way.

    While cloud-centric A.I. is impressive and here to stay, it faces limitations around latency, security, and cost. A.I. running locally can address all three. It brings A.I. into the applications we already use, in the places we already use them, built right into the devices we always have with us.

    However, as A.I. applications grow, we need to make sure our PCs, phones, and devices are A.I.-ready. That means designing traditional computing engines–the central processing unit (CPU) and graphics processing unit (GPU)–to run complex A.I. workloads, as well as creating new, dedicated A.I. engines like neural processing units (NPUs). Our industry is only at the beginning of a multi-year feedback loop where better A.I. hardware begets better A.I. software, which begets better A.I. hardware, which…you get the idea.

    This is the future of A.I. at scale–and it also offers a roadmap to what’s next. From my nearly three decades of experience in the semiconductor industry, I see three enduring truths for how these kinds of shifts play out and how we can make the most of this moment.

    People’s needs come first

    Meaningful innovation starts with people’s daily needs. Think about the rise of Wi-Fi in the 2000s, the explosion of videoconferencing in the 2010s, or the more recent move to hybrid work. In each case, the industry had to figure out how technology could best fit into people’s lives. Useful applications fuel adoption and further advances until the new technology becomes indispensable.

    We’re already beginning this process for A.I. on the PC. Microsoft is building A.I. into collaboration experiences for the 1.4 billion people using Windows. But in the near future, A.I. will be integrated into hundreds, and eventually thousands, of applications, including some we can’t yet imagine. This will not only enhance existing experiences–it will elevate everything we do across work, creativity, and collaboration.

    Embracing challenges will bring forth solutions

    We must candidly discuss challenges to drive better results. That’s the only way to find the right solutions that address customer needs up and down the stack. For A.I., two core barriers are performance and security. Consider that GPT-3 is orders of magnitude larger than GPT-2, increasing from 1.5 billion parameters to 175 billion parameters. Now imagine those kinds of compute demands multiplied across every application, often running simultaneously. Only chips built for A.I. can make sure those experiences are fast, smooth, and power-efficient.

    This is one of the most impactful inflection points for the semiconductor industry in decades. We must evolve the design of our hardware and create new, integrated A.I. accelerator engines to deliver A.I. capabilities at much lower power, with the right balance of platform power and performance. At the same time, we’ll need hardware-based security to protect the data and intellectual property that will run through A.I.

    Success means collaboration across the ecosystem

    It takes an open ecosystem to create world-changing technology. We know that new innovations truly take off when put in the hands of manufacturers and developers. A great example is gaming. Gaming laptops with powerful CPUs and GPUs bring intensive computing power, which game developers then use to create immersive visuals and engaging gameplay. It’s all part of a collaborative process to deliver on a common goal.

    Secure, seamless A.I. will require solutions at every layer of the stack. We’ll need close collaboration to scale the hardware and the operating system, provide tools for developers to adopt, and enable manufacturers and partners to deliver new experiences. Only industry collaboration can move A.I. forward at scale, unleashing a feedback loop and ultimately creating a new generation of A.I.-enabled features and killer apps.

    The A.I. promise is real–but so are the challenges. The semiconductor industry is essential to designing and scaling solutions, just as it’s done for other seismic technology shifts in the past. To get there, we must surface and solve practical challenges, collaborate across disciplines, and work toward a shared vision for how A.I. can serve people’s needs. I’m confident our industry will rise to the challenge.

    Michelle Johnston Holthaus is the executive VP and general manager of Intel’s Client Computing Group.

    The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.


  • The Metaverse Is Dead. ChatGPT Killed Zuckerberg’s Obsession | Entrepreneur

    This article originally appeared on Business Insider.

    The Metaverse, the once-buzzy technology that promised to allow users to hang out awkwardly in a disorienting video-game-like world, has died after being abandoned by the business world. It was three years old.

    The capital-M Metaverse, a descendant of the 1982 movie “Tron” and the 2003 video game “Second Life,” was born in 2021 when Facebook founder Mark Zuckerberg changed the name of his trillion-dollar company to Meta. After a much-heralded debut, the Metaverse became the obsession of the tech world and a quick hack to win over Wall Street investors. The hype could not save the Metaverse, however, and a lack of coherent vision for the product ultimately led to its decline. Once the tech industry turned to a new, more promising trend — generative AI — the fate of the Metaverse was sealed.

    The Metaverse is now headed to the tech industry’s graveyard of failed ideas. But the short life and ignominious death of the Metaverse offers a glaring indictment of the tech industry that birthed it.

    Grand promise

    From the moment of its delivery, Zuckerberg claimed that the Metaverse would be the future of the internet. The glitzy, spurious promotional video that accompanied Zuckerberg’s name-change announcement described a future where we’d be able to interact seamlessly in virtual worlds: Users would “make eye contact” and “feel like you’re right in the room together.” The Metaverse offered people the chance to engage in an “immersive” experience, he claimed.

    These grandiose promises heaped sky-high expectations on the Metaverse. The media swooned over the newborn concept: The Verge published a nearly 5,000-word-long interview with Zuckerberg immediately following the announcement — in which the writer called it “an expansive, immersive vision of the internet.” Glowing profiles of the Metaverse seemed to set it on a laudatory path, but the actual technology failed to deliver on this promise throughout its short life. A wonky virtual-reality interview with the CBS host Gayle King, where low-quality cartoon avatars of both King and Zuckerberg awkwardly motioned to each other, was a stark contrast to the futuristic vistas shown in Meta’s splashy introductory video.

    The Metaverse also suffered from an acute identity crisis. A functional business proposition requires a few things to thrive and grow: a clear use case, a target audience, and the willingness of customers to adopt the product. Zuckerberg waxed poetic about the Metaverse as “a vision that spans many companies” and “the successor to the mobile internet,” but he failed to articulate the basic business problems it would solve. The concept of virtual worlds where users interact with each other using digital avatars is an old one, going back as far as the late 1990s with massively multiplayer online role-playing games such as “Meridian 59,” “Ultima Online,” and “EverQuest.” And while the Metaverse supposedly built on these ideas with new technology, Zuckerberg’s one actual product — the VR platform Horizon Worlds, which required the use of an incredibly clunky Oculus headset — failed to suggest anything approaching a road map or a genuine vision.

    In spite of the Metaverse’s arrested conceptual development, a pliant press published statements about the future of the technology that were somewhere between unrealistic and outright irresponsible. The CNBC host Jim Cramer nodded approvingly when Zuckerberg claimed that 1 billion people would use the Metaverse and spend hundreds of dollars there, despite the Meta CEO’s inability to say what people would receive in exchange for their cash or why anyone would want to strap a clunky headset to their face to attend a low-quality, cartoon concert.

    A high-flying life

    The inability to define the Metaverse in any meaningful way didn’t get in the way of its ascension to the top of the business world. In the months following the Meta announcement, it seemed that every company had a Metaverse product on offer, despite it not being obvious what the Metaverse actually was or why any of them needed one.

    Microsoft CEO Satya Nadella would say at the company’s 2021 Ignite Conference that he couldn’t “overstate how much of a breakthrough” the Metaverse was for his company, the industry, and the world. Roblox, an online game platform that has existed since 2004, rode the Metaverse hype wave to an initial public offering and a $41 billion valuation. Of course, the cryptocurrency industry took the ball and ran with it: The people behind the Bored Ape Yacht Club NFT company conned the press into believing that uploading someone’s digital monkey pictures into VR would be the key to “master the Metaverse.” Other crypto pumpers even successfully convinced people that digital land in the Metaverse would be the next frontier of real-estate investment. Even businesses that seemed to have little to do with tech jumped on board. Walmart joined the Metaverse. Disney joined the Metaverse.

    Despite Zuckerberg’s obsession with the Metaverse, the tech never lived up to the hype. (Photo: Facebook)

    Companies’ rush to get into the game led Wall Street investors, consultants, and analysts to try to one-up each other’s projections for the Metaverse’s growth. The consulting firm Gartner claimed that 25% of people would spend at least one hour a day in the Metaverse by 2026. The Wall Street Journal said the Metaverse would change the way we work forever. The global consulting firm McKinsey predicted that the Metaverse could generate up to “$5 trillion in value,” adding that around 95% of business leaders expected the Metaverse to “positively impact their industry” within five to 10 years. Not to be outdone, Citi put out a massive report that declared the Metaverse would be a $13 trillion opportunity.

    A brutal downfall

    In spite of all this hype, the Metaverse did not lead a healthy life. Every business idea and rosy market projection was built on the vague promises of a single CEO. And when people were offered the opportunity to try it out, nobody actually used the Metaverse.

    Decentraland, the best-funded decentralized, crypto-based Metaverse product (effectively a wonky online world you can “walk” around), had only around 38 daily active users in its “$1.3 billion ecosystem.” Decentraland disputed this figure, claiming that it had 8,000 daily active users — but that’s still only a fraction of the number of people playing large online games like “Fortnite.” Meta’s much-heralded efforts similarly struggled: By October 2022, Mashable reported that Horizon Worlds had fewer than 200,000 monthly active users — dramatically short of the 500,000 target Meta had set for the end of 2022. The Wall Street Journal reported that only about 9% of user-created worlds were visited by more than 50 players, and The Verge said that it was so buggy that even Meta employees eschewed it. Despite the might of a then-trillion-dollar company, Meta could not convince people to use the product it had staked its future on.

    The Metaverse fell seriously ill as the economy slowed and the hype around generative AI grew. Microsoft shuttered its virtual-workspace platform AltSpaceVR in January 2023, laid off the 100 members of its “industrial metaverse team,” and made a series of cuts to its HoloLens team. Disney shuttered its Metaverse division in March, and Walmart followed suit by ending its Roblox-based Metaverse projects. The billions of dollars invested and the breathless hype around a half-baked concept led to thousands — if not tens of thousands — of people losing their jobs.

    But the Metaverse was officially pulled off life support when it became clear that Zuckerberg and the company that launched the craze had moved on to greener financial pastures. Zuckerberg declared in a March update that Meta’s “single largest investment is advancing AI and building it into every one of our products.” Meta’s chief technology officer, Andrew Bosworth, told CNBC in April that he, along with Mark Zuckerberg and the company’s chief product officer, Chris Cox, were now spending most of their time on AI. The company has even stopped pitching the Metaverse to advertisers, despite spending more than $100 billion in research and development on its mission to be “Metaverse first.” While Zuckerberg may suggest that developing games for the Quest headsets is some sort of investment, the writing is on the wall: Meta is done with the Metaverse.

    Did anyone learn their lesson?

    While the idea of virtual worlds or collective online experiences may live on in some form, the capital-M Metaverse is dead. It was preceded in death by a long line of tech fads like Web3 and Google Glass. It is survived by newfangled ideas like the aforementioned generative AI and the self-driving car. Despite this long lineage of disappointment, let’s be clear: The death of the Metaverse should be remembered as one of the most spectacular failures in tech history.

    I do not believe that Mark Zuckerberg ever had any real interest in “the Metaverse,” because he never seemed to define it beyond a slightly tweaked Facebook with avatars and cumbersome hardware. It was the means to an increased share price, rather than any real vision for the future of human interaction. And Zuckerberg used his outsize wealth and power to get the whole of the tech industry and a good portion of the American business world into line behind this half-baked idea.

    The fact that Mark Zuckerberg has clearly stepped away from the Metaverse is a damning indictment of everyone who followed him, and anyone who still considers him a visionary tech leader. It should also be the cause for some serious reflection among the venture-capital community, which recklessly followed Zuckerberg into blowing billions of dollars on a hype cycle founded on the flimsiest possible press-release language. In a just world, Mark Zuckerberg should be fired as CEO of Meta (in the real world, this is actually impossible).

    Zuckerberg misled everyone, burned tens of billions of dollars, convinced an industry of followers to submit to his quixotic obsession, and then killed it the second that another idea started to interest Wall Street. There is no reason that a man who has overseen the layoffs of tens of thousands of people should run a major company. There is no future for Meta with Mark Zuckerberg at the helm: It will stagnate, and then it will die and follow the Metaverse into the proverbial grave.

    Ed Zitron is the CEO of EZPR, a national tech and business public-relations agency. He is also the author of the tech and culture newsletter Where’s Your Ed At.

  • IBM Says 7,800 of Its Roles Could Be Replaced By AI | Entrepreneur

    As prompt-driven AI chatbots, such as ChatGPT, have garnered worldwide attention and tech giants enter the artificial intelligence race with urgency, machine-learning tools are simplifying a slew of everyday tasks. Now, as a new technological frontier has begun, the question looms as to where humans stand in the future of an AI-operated world.

    On Sunday, the World Economic Forum released its “Future of Jobs” report, which estimated that nearly 14 million jobs could be eliminated by 2027 — due primarily to increased automation of many work tasks.

    While the report’s predictions used a five-year benchmark, AI has already disrupted a swath of industries.

    On Monday, International Business Machines Corp. (IBM) CEO Arvind Krishna told Bloomberg that the company intends to pause or slow hiring for roles it believes could be entirely outsourced to AI. Krishna estimated that AI could replace nearly 30% of the company’s roughly 26,000 back-office roles over a five-year period, amounting to about 7,800 jobs.

    Back in January, Alphabet (Google’s parent company) announced 12,000 job cuts as it shifted focus to AI development — a move echoed by Microsoft, which also cut thousands of jobs while increasing AI spending.

    But AI isn’t just affecting tech giants competing in a new technological frontier or business magnates looking to automate tasks — several businesses have already noted losses due to the widespread use of machine learning tools like ChatGPT.

    Related: AI Could Eliminate Millions of Jobs By 2027, but Cognitive Skills Are Increasingly Important for Employers

    Homework help platform Chegg, which focuses on essay writing and related study services, said in an earnings call on Monday that ChatGPT has significantly hurt its business. As of Tuesday morning, the company’s stock is down over 60% year-to-date.

    Chegg is working with OpenAI to develop its own AI technology, CheggMate. The tool is positioned to guide student learning and be interactive, so students can ask new questions or prompt the tool to explain things in a different format.

    The somewhat “if you can’t beat them, join them” approach by Chegg is not uncommon as artificial intelligence disrupts tasks that — until recently — seemed impossible without human cognition. Other companies like Snap and Tinder have utilized artificial intelligence to streamline processes and garner more engagement as competition rises. The increasing integration of AI only reinforces the World Economic Forum’s prediction that, at the current pace of adoption, millions of jobs will disappear.

    However, even in the wake of an AI revolution, human cognition is still valued — maybe now more than ever. The report found that with the increasing integration of technology, creative and analytical thinking skills were among the most desirable traits in workers now, and in the next five years.

    It may be too soon to say, but critical thinking skills and creativity could be the difference between job security and elimination.

    Related: Google CEO Sundar Pichai Says There Is a Need For Governmental Regulation of AI: ‘There Has To Be Consequences’

    Madeline Garfinkle

  • Rein in the AI Revolution Through the Power of Legal Liability | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    In an era where technological advancements are accelerating at breakneck speed, it is crucial to ensure that artificial intelligence (AI) development remains in check. As AI-powered chatbots like ChatGPT become increasingly integrated into our daily lives, it is high time we address potential legal and ethical implications.

    And some have done so. A recent letter signed by Elon Musk, who co-founded OpenAI, Steve Wozniak, the co-founder of Apple, and over 1,000 other AI experts and funders calls for a six-month pause in training new models. In turn, Time published an article by Eliezer Yudkowsky, the founder of the field of AI alignment, calling for a much more hard-line solution of a permanent global ban and international sanctions on any country pursuing AI research.

    However, the problem with these proposals is that they require the coordination of numerous stakeholders from a wide variety of companies and government figures. Let me share a more modest proposal that’s much more in line with our existing methods of reining in potentially threatening developments: legal liability.

    By leveraging legal liability, we can effectively slow AI development and make certain that these innovations align with our values and ethics. We can ensure that AI companies themselves promote safety and innovate in ways that minimize the threat they pose to society. We can ensure that AI tools are developed and used ethically and effectively, as I discuss in depth in my new book, ChatGPT for Thought Leaders and Content Creators: Unlocking the Potential of Generative AI for Innovative and Effective Content Creation.

    Related: AI Could Replace Up to 300 Million Workers Around the World. But the Most At-Risk Professions Aren’t What You’d Expect.

    Legal liability: A vital tool for regulating AI development

    Section 230 of the Communications Decency Act has long shielded internet platforms from liability for content created by users. However, as AI technology becomes more sophisticated, the line between content creators and content hosts blurs, raising questions about whether AI-powered platforms like ChatGPT should be held liable for the content they produce.

    The introduction of legal liability for AI developers will compel companies to prioritize ethical considerations, ensuring that their AI products operate within the bounds of social norms and legal regulations. They will be forced to internalize what economists call negative externalities, meaning negative side effects of products or business activities that affect other parties. A negative externality might be loud music from a nightclub bothering neighbors. The threat of legal liability for negative externalities will effectively slow down AI development, providing ample time for reflection and the establishment of robust governance frameworks.

    To curb the rapid, unchecked development of AI, it is essential to hold developers and companies accountable for the consequences of their creations. Legal liability encourages transparency and responsibility, pushing developers to prioritize the refinement of AI algorithms, reducing the risks of harmful outputs, and ensuring compliance with regulatory standards.

    For example, an AI chatbot that perpetuates hate speech or misinformation could lead to significant social harm. A more advanced AI tasked with boosting a company’s stock price might, if not bound by ethical constraints, sabotage its competitors. By imposing legal liability on developers and companies, we create a potent incentive for them to invest in refining the technology to avoid such outcomes.

    Legal liability, moreover, is much more doable than a six-month pause, let alone a permanent one. It’s aligned with how we do things in America: Instead of having the government regulate business, we permit innovation but punish the negative consequences of harmful business activity.

    The benefits of slowing down AI development

    Ensuring ethical AI: By slowing down AI development, we can take a deliberate approach to the integration of ethical principles in the design and deployment of AI systems. This will reduce the risk of bias, discrimination, and other ethical pitfalls that could have severe societal implications.

    Avoiding technological unemployment: The rapid development of AI has the potential to disrupt labor markets, leading to widespread unemployment. By slowing down the pace of AI advancement, we provide time for labor markets to adapt and mitigate the risk of technological unemployment.

    Strengthening regulations: Regulating AI is a complex task that requires a comprehensive understanding of the technology and its implications. Slowing down AI development allows for the establishment of robust regulatory frameworks that address the challenges posed by AI effectively.

    Fostering public trust: Introducing legal liability in AI development can help build public trust in these technologies. By demonstrating a commitment to transparency, accountability, and ethical considerations, companies can foster a positive relationship with the public, paving the way for a responsible and sustainable AI-driven future.

    Related: The Rise of AI: Why Legal Professionals Must Adapt or Risk Being Left Behind

    Concrete steps to implement legal liability in AI development

    Clarify Section 230: Section 230 does not appear to cover AI-generated content. The law defines the term “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the internet or any other interactive computer service.” What counts as “development” of content “in part” remains somewhat ambiguous, but judicial rulings have determined that a platform cannot rely on Section 230 for protection if it supplies “pre-populated answers” so that it is “much more than a passive transmitter of information provided by others.” Thus, it’s highly likely that courts would find AI-generated content falls outside Section 230. It would be helpful for those who want to slow AI development to bring legal cases that allow courts to clarify this matter. By establishing that AI-generated content is not exempt from liability, we create a strong incentive for developers to exercise caution and ensure their creations meet ethical and legal standards.

    Establish AI governance bodies: In the meantime, governments and private entities should collaborate to establish AI governance bodies that develop guidelines, regulations and best practices for AI developers. These bodies can help monitor AI development and ensure compliance with established standards. Doing so would help manage legal liability and facilitate innovation within ethical bounds.

    Encourage collaboration: Fostering collaboration between AI developers, regulators and ethicists is vital for the creation of comprehensive regulatory frameworks. By working together, stakeholders can develop guidelines that strike a balance between innovation and responsible AI development.

    Educate the public: Public awareness and understanding of AI technology are essential for effective regulation. By educating the public on the benefits and risks of AI, we can foster informed debates and discussions that drive the development of balanced and effective regulatory frameworks.

    Develop liability insurance for AI developers: Insurance companies should offer liability insurance for AI developers, incentivizing them to adopt best practices and adhere to established guidelines. This approach will help reduce the financial risks associated with potential legal liabilities and promote responsible AI development.

    Related: Elon Musk Questions Microsoft’s Decision to Layoff AI Ethics Team

    Conclusion

    The increasing prominence of AI technologies like ChatGPT highlights the urgent need to address the ethical and legal implications of AI development. By harnessing legal liability as a tool to slow down AI development, we can create an environment that fosters responsible innovation, prioritizes ethical considerations and minimizes the risks associated with these emerging technologies. It is essential that developers, companies, regulators and the public come together to chart a responsible course for AI development that safeguards humanity’s best interests and promotes a sustainable, equitable future.

    Gleb Tsipursky

  • A major bank has banned ChatGPT—should your company follow suit?

    Finance and artificial intelligence aren’t like oil and water. There are areas where the two mix, like expense reporting. But when it comes to generative-A.I. applications such as OpenAI’s ChatGPT, a financial institution is taking a pass.

    This week, there have been reports that JPMorgan Chase & Co. is restricting staff from using the ChatGPT chatbot. The firm’s restriction wasn’t made in response to any specific incident but is part of standard controls on third-party software, the Telegraph first reported. JPMorgan didn’t immediately respond to my request for comment.

    Launched in November by OpenAI, ChatGPT is a chatbot that can answer questions, generate content on virtually any topic, and even write full articles. It’s trained to mimic human language and thought patterns. (Read more about OpenAI co-founder Sam Altman here.)

    To discuss ChatGPT in the workplace, I had a chat with Vikram R. Bhargava, assistant professor of strategic management and public policy at the George Washington University School of Business, who conducts research on A.I. and the future of work.

    “I think that a lot of us, including people working in finance, were sort of stunned by the performance of ChatGPT when we first started playing around with it,” Bhargava says. “A number of employees and even banks might be tempted to use these tools to make their life a little easier,” he says. For example, asking it to come up with a relevant Excel formula for a modeling task that an analyst or an associate might do, he explains. But not fully knowing how the technology operates, “does create a little bit of discomfort in heavily relying on it,” he says.

    “The thing with banking, of course, is that it’s a very heavily regulated industry, and this technology is also new to regulators,” Bhargava says. Along those lines, Mira Murati, chief technology officer at OpenAI, told Time in a recent interview that regulators will need to get involved with ChatGPT and govern the use of A.I. in a way that’s “aligned with human values.”

    “I don’t know the specifics of the rationale behind JPMorgan’s decision, but it does strike me as prudent,” Bhargava says. “This technology is rapidly evolving. One of the difficulties is—what might be true of ChatGPT as it stands, might not be true in three months.”

    JPMorgan isn’t a novice when it comes to A.I. The bank recently ranked No. 1 in data intelligence startup Evident’s A.I. Index, the first public benchmark of major banks’ artificial intelligence maturity. The index covers the 23 largest banks in North America and Europe. JPMorgan spends $14 billion on technology annually, of which approximately half is dedicated to investments, the firm said in an announcement.

    “Leading in A.I. and knowing how to use A.I. responsibly, sometimes might require the firm to abstain from using the given technology,” Bhargava says. 

    Michael Schrage, a research fellow at the MIT Sloan School Initiative on the Digital Economy, spoke with finance chiefs at Fortune’s CFO Collaborative event in January about the possibilities of generative A.I. in finance. I asked him his thoughts on JPMorgan’s reported restriction.

    Schrage says he’s not certain how OpenAI currently manages, collects, and analyzes “prompts” (how you get ChatGPT to do what you want). But he suggests prompts may be an issue for a bank concerned about privacy rules, compliance, and proprietary processes. Prompts that are too detailed may inadvertently reveal information that the bank or its clients would prefer not to be shared, Schrage says.

    “In the same way that Google and Bing know what topics, themes, and names are being searched, it’s similarly probable that OpenAI is tracking the level of detail and specificity of prompts,” he says.

    Again, Schrage is not sure of how OpenAI handles and tracks prompts, but says: “It’s easy to imagine and enact ways where prompts can be anonymized, aggregated, masked, and shielded to minimize revealing sensitive information while still getting good ‘generative advice’ and insight.” I reached out to OpenAI to ask about prompts, but haven’t received a response.

    Many CFOs are already cautiously experimenting with A.I. And it will be some time before they’d feel comfortable incorporating ChatGPT, Alexander Bant, chief of research for CFOs at Gartner, recently told me.

    What would make financial institutions more open to ChatGPT? “They need a little bit more security in knowing how the use of this technology interacts with the current regulatory environment,” Bhargava says. But are there perhaps some tasks where a company can experiment without being reprimanded by the Securities and Exchange Commission? 

    “Let’s say there’s an entry-level employee on your team who might not write the clearest, most concise emails,” Bhargava explains. “So, using ChatGPT might facilitate clearer communication.”

    The jury’s still out on applying ChatGPT in finance, but generative A.I. isn’t going anywhere.


    Have a good weekend. See you on Monday.

    Sheryl Estrada
    sheryl.estrada@fortune.com

    Big deal

    Hyperproof, a SaaS-based compliance and risk management company, has released its 2023 IT Compliance and Risk Benchmark Report. The company found that security, compliance, and risk management professionals were more concerned with short-term, immediate threats than with larger-scale decisions like long-term security issues. Respondents said their No. 1 concern was cybersecurity risks (36%), followed by third-party risk (29%), and lack of support and resources dedicated to IT risks and compliance (24%). The research also found that companies are poised to level up their risk and compliance management processes in the coming years.

    Going deeper

    Here are a few Fortune weekend reads:

    “The housing market correction has already caused homeowners to lose $2.3 trillion,” by Lance Lambert

    “These are the top cybersecurity startups to watch in 2023, according to VCs,” by Lucy Brewster

    “The ‘free money’ tech investment is over and the ‘old economy’ is set to become the big winner, according to Bank of America,” by Will Daniel

    “These 5 sleep habits could add 5 years to your life, say experts,” by L’Oreal Thompson Payton

    Leaderboard

    Here’s a list of some notable moves this week:

    Sandeep Singh Aujla was promoted to CFO at Intuit Inc. (Nasdaq: INTU), the global financial technology platform that makes TurboTax, Credit Karma, QuickBooks, and Mailchimp, effective Aug. 1. Aujla has held senior finance positions at Intuit for seven years and is currently the SVP of finance for Intuit’s largest business unit, the Small Business and Self-Employed Group (SBSEG), and for Intuit’s technology organization. Michelle Clatterbuck, who has served as CFO since February 2018, plans to step down as CFO on July 31.

    Joanne Knight was promoted to CFO at Cargill, a global food corporation that provides agricultural and financial services. Knight currently serves as Cargill’s acting CFO. Before this role, she was VP of finance for Cargill’s agriculture supply chain enterprise, including ocean transportation and the world trading group. Before Cargill, Knight spent 10 years in finance, marketing, and business leadership roles at General Mills that included P&L responsibility. She also held finance leadership roles at Wachovia.

    Robert Higginbotham was appointed interim CFO at Foot Locker, Inc., effective March 1, according to the company’s form 8-K filed on Feb. 21. Higginbotham will serve in this role in addition to his current duties as SVP of investor relations and financial planning and analysis, a role he began in December 2022. The company continues to conduct a search to identify a successor to current EVP and CFO Andrew E. Page who will depart on Feb. 28. Previously, Higginbotham served as VP of investor relations.

    Ryan Clement was promoted to CFO at SelectQuote, Inc. (NYSE: SLQT), an insurance sales agency. Clement was named interim CFO in May 2022. Before joining SelectQuote in January 2022 as the SVP of financial planning and analysis, Clement served as the CFO of Sifted (formerly VeriShip). Before Sifted, Clement spent seven years at Edelman Financial Engines, where he served in various senior-level finance and operational roles.

    David Rudow was named CFO at Unite Us, a software company enabling cross-sector collaboration. Rudow will lead the Unite Us finance organization. He most recently served as CFO at nCino, where he took the company public in 2020. For more than 20 years, Rudow served in senior leadership positions, including SVP at CentralSquare Technologies and senior analyst roles for several leading investment banking and asset management firms. 

    Kevin Schubert was named CFO at Rubicon Technologies, Inc. (NYSE: RBT), a digital marketplace for waste and recycling, effective immediately. In addition to his current responsibilities as president, Schubert will now oversee Rubicon’s end-to-end financial operations. Prior to serving as the company’s president, Schubert was Rubicon’s chief development officer. Before joining Rubicon, he held senior executive and advisory roles with public companies, most recently, CFO for Ocean Park Group.

    Overheard

    “I have all the respect for [Fed Chair Jerome] Powell, but the fact is we lost a little bit of control of inflation.”

    —JPMorgan Chase CEO Jamie Dimon said in an interview during CNBC’s Halftime Report.


    Sheryl Estrada

    Source link

  • Bungie Accidentally Showcases AI-Generated Destiny Image, Asks For Help Spotting Them

    Bungie Accidentally Showcases AI-Generated Destiny Image, Asks For Help Spotting Them


    Screenshot: Bungie Community Creations

    You can joke about fingers all you want, but the reason AI-generated imagery is perceived as a threat and not just an idle curiosity is its ability to pass for actual, human-created artwork. On the extreme end of the scale that’s a threat to accurate news reporting, and on the more harmless end it’s making life difficult for the community managers of popular video games.

    Like Destiny, a game that, thanks to its huge and devoted playerbase, regularly shouts out the creators among that crowd by highlighting their movies and artwork. Sadly, last week one of those artworks turned out to be an AI-generated image:

    Upon being showcased and instantly called out by fans as an AI-generated image, the person who uploaded it (“hebb”) is quoted as saying, “Woah, I just thought the picture was really neat so I posted on the creations page. I’ll take the post down.” At the time of posting, the image has not been taken down, and can still be viewed here.

    It’s not the most alarming example of this, I know, but Bungie’s response is interesting because it highlights the struggles that people involved in curating and using artwork are currently facing the world over, whether they work for a video games studio or in an international newsroom. In a blog post called “There’s Nothing Artificial About This Week’s Picks”, Bungie say:

    Artificial Intelligence (A.I.) Art

    Last week, an A.I. art submission was mistakenly featured in our blog. The process of choosing these involves a team effort and with this technology being so new, we don’t have a foolproof way of knowing what submissions are A.I. art.

    We want to keep this celebration of our community for those that work hard to bring their creative selves to the forefront when creating works that the Traveler would find joy in. Because of this, we will not knowingly ever feature A.I. art submissions as a potential #Destiny2AOTW or #Destiny2MOTW winner. That being said, this is still new. We ask for grace if we mistakenly feature a submission generated by A.I., and a respectful heads up should it ever happen again in the future. Appreciate the assist!

    While there’s no definitive guide—especially in cases where the vast majority of a piece is conjured by AI and then touched up in Photoshop—there are already plenty of tips out there for spotting AI-generated imagery that go beyond the obvious, like (as in this image’s case) “counting fingers”. As this Wired guide points out, some other key tells—for now, at least!—are dead, lifeless eyes, misshapen ears, a lack of composition, and general acts of weirdness, like someone’s hair extending out of their collarbone, or jewellery/accessories that smoosh into each other.


    Luke Plunkett

    Source link

  • ReadTheory, the Biggest Up-and-Coming EdTech Company, Steps Out of the Shadows

    ReadTheory, the Biggest Up-and-Coming EdTech Company, Steps Out of the Shadows


    ReadTheory delivers on its promise to help teachers improve reading comprehension for every student.

    Press Release


    Jan 19, 2023 10:06 EST

    With over 18 million students across 175 countries, ReadTheory’s online learning platform is improving reading comprehension for students around the globe. Designed by an English teacher in North Carolina, ReadTheory came from humble beginnings. The company has grown organically by word of mouth, without any advertising, and has been quietly taking market share from some of edtech’s biggest giants.

    “ReadTheory’s superpower is that it recognizes there is no one-size-fits-all classroom. The platform’s A.I. identifies each student’s strengths and weaknesses and serves adaptive reading practice at the ‘just right’ level. It’s personalized, and that’s why it has such a significant impact,” says Josh Capon, Co-Managing Partner of ReadTheory.
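The press release doesn't describe ReadTheory's actual algorithm, but the "just right" leveling Capon describes can be illustrated with a naive sketch: step a student's reading level up after strong quiz scores and down after weak ones. The thresholds, level range, and function name below are all invented for the example.

```python
def next_level(level: int, quiz_score: float,
               min_level: int = 1, max_level: int = 12) -> int:
    """Adjust a reading level (1-12) based on the last quiz score (0.0-1.0)."""
    if quiz_score >= 0.8:      # strong comprehension: step up a level
        level += 1
    elif quiz_score < 0.5:     # struggling: step down a level
        level -= 1
    # otherwise, hold steady at the current level
    return max(min_level, min(level, max_level))

print(next_level(5, 0.9))   # → 6
print(next_level(5, 0.3))   # → 4
print(next_level(12, 1.0))  # → 12 (capped at the top level)
```

Real adaptive systems typically use richer models than a single score threshold, but the feedback loop—performance in, difficulty adjustment out—is the core idea.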

    Schools and districts can purchase ReadTheory’s expanded offering just in time, as test-prep season begins across the United States.

    “We all know that the Nation’s latest Report Card showed that reading scores have plummeted to levels not seen in 20 years and students are struggling to get back on track – but we’re ready to change the narrative. We’re helping teachers make up for lost time,” says Ron Kirschenbaum, Co-Managing Partner of ReadTheory.

    The program is aligned to ELA standards and supports teachers with real-time reporting that helps them know what to teach next. The school and district program will centralize efforts across the school community, seamlessly integrate with Learning Management Systems, and offer deeper student performance data. School and district administrators can learn more about ReadTheory and request an introduction here.

    As Mrs. Kara Guiff, an Indiana educator put it, “ReadTheory finds the appropriate leveled texts and comprehension questions and students learn by tracking their data and setting goals. It’s the best comprehension intervention program I’ve used in over 30 years of teaching. I’ve been known to say I won’t even teach ELA without it.”

    While ReadTheory has a free subscription available to teachers, its premium subscription has grown over 300% in the last year. Of teachers surveyed, 80% say ReadTheory positively impacts standardized test scores, and school and district leaders are starting to take notice. On top of increasing achievement, 89% of teachers say ReadTheory also keeps students engaged and interested when on the platform. 

    “ReadTheory was born in a classroom, so our approach has always been working hand-in-hand, shoulder-to-shoulder with educators – and we’ll continue to evolve to support the ever-changing needs of this next generation,” shared Courtney Cioci, Head of Marketing at ReadTheory. 

    About ReadTheory:

    ReadTheory’s reading comprehension platform is used by 18 million students across the globe and is tailored to each one. For more information on ReadTheory, visit www.readtheory.org.

    Source: ReadTheory


    Source link

  • Rate Highway and Perfect Price Partner to Deliver Artificial Intelligence for Car Rental Pricing

    Rate Highway and Perfect Price Partner to Deliver Artificial Intelligence for Car Rental Pricing


    Press Release



    updated: Jun 2, 2017

    RateHighway, the leading provider of automated rate positioning technology for the global auto rental industry, has partnered with Perfect Price, the leader in artificial intelligence for revenue management and price optimization, to deliver the first artificial intelligence solution to car rental companies. 

    “We are delighted to offer this groundbreaking, first-of-its-kind capability to our customers,” said Michael Meyer, President, RateHighway. “In today’s exceedingly competitive car rental environment, driving rental profitability is more important than ever.”


    The comprehensive pricing solution combines the leading rate automation technology RateHighway has provided since 2002 with the revolutionary artificial intelligence capabilities Perfect Price brings to the industry from Microsoft, Twitter, and the Fermi nuclear physics laboratory.

    “I was amazed by how quickly AI improved our business,” said Sharky Laguana, CEO, Bandago and Board Member, American Car Rental Association. “We have seen both utilization and revenue per unit climb measurably in cities where we use Perfect Price, while staying the same in cities where we left our old pricing model in place.”

    “For decades, companies shot from the hip on pricing,” said Alex Shartsis, CEO, Perfect Price. “Then automation made better rate positioning possible. But without rigor and oversight, it can result in a race to the bottom. Artificial intelligence represents a new way to recapture time, revenue and profit while increasing growth.”

    “With this partnership, we build on RateHighway’s transformative automation, which has delivered windfalls for its customers. Together, we enable the next level of business excellence through artificial intelligence — for nearly any car rental business, not just the majors,” said Mr. Meyer.

    About RateHighway, Inc.

    RateHighway is the leading provider of revenue management technology for the global auto rental industry. RateHighway has been providing web rate gathering technology to the travel industry since 1998 and introduced the all-inclusive, ground-breaking RateMonitor® automated rate positioning product to the auto rental industry in 2004. RateMonitor is a full-cycle rate gathering, analysis, and correction solution that can ensure your fleet is always competitively priced. For more information, contact sales@ratehighway.com.

    About Perfect Price, Inc.

    Perfect Price is the leader in artificial intelligence for revenue management and price optimization. Headquartered in San Francisco, Perfect Price serves global customers with the most advanced artificial intelligence and machine learning based solutions in a software as a service model. For more information, contact press@perfectprice.com.

    Source: Rate-Highway, Inc.


    Source link