ReportWire

Tag: generative ai

  • Four ways generative AI will transform commercial banking | Accenture Banking Blog

    We’re all still trying to get our heads around the big question confronting all commercial bankers right now: how and where will generative AI have the greatest impact? In our recent analysis of the top trends shaping the industry in 2024, we argue that each one is influenced to some degree by generative AI. In this second post we explore where within the bank early adopters are applying this transformative technology.

    The aspiration—to steal from the title of last year’s Best Film Oscar winner—is “everything, everywhere, all at once”. But if we must admit that universal deployment is unrealistic, the challenge becomes one of prioritization. We analyzed banking tasks, roles and functions, based on our experience of working with a large number of leading banks worldwide, and identified four focus areas where commercial banks are likely to achieve the greatest immediate impact:

    1. Empowering relationship managers

    Every relationship manager (RM) we’ve met laments the time they spend identifying which clients they should speak to, which policies and procedures they need to refer to, and which client information they need to collate from a disparate array of internal and external sources. Generative AI can relieve them of much of this, allowing them to prepare better and spend more time in more impactful meetings with more clients.

    As part of their CRM platform, generative AI can provide RMs with prioritized leads. It can specify each client’s most urgent needs and their preferred method of engagement. It can also generate proactive outreach, whether that is an email, a conversation script or a formal proposal. Most importantly, it can help RMs increase sales by using new insights to create intimate relationships where the right products are provided at the right time—even if the client hasn’t thought through the need. Interactive real-time dashboards can monitor the effectiveness of each campaign, enabling continual improvement. Knowledge management and performance coaching tools can also improve RMs’ capabilities faster and deliver more consistent client services irrespective of the banker’s level of experience.

    One phenomenon that we’re seeing among those of our clients that are pursuing more intelligent front-office processes is a levelling of capabilities across the RM population. Top talent continues to improve slightly, but we are seeing massive growth in performance at some of the lower levels. Together, these gains are significantly boosting the organization’s win and growth rates.

    2. Streamlining commercial underwriting 

    Few commercial banks are able to get funds to clients as quickly as they would like. Those that can outpace their competitors without incurring greater risk stand to increase market share, revenue and client satisfaction. As I mentioned in the first post in this series, in most commercial banks this and other operations continue to be highly manual and human-intensive. Endless variation across products, segments, regions and policies overcomplicates the process and prolongs the time-to-decision. These delays are a major driver of cost inflation within the bank, and those who can develop a solution will be positioned to win in the marketplace.

    By modernizing origination platforms and introducing generative AI, leaders are succeeding in this quest. Most are prioritizing the automation of what was formerly manual content production—for example spreading, credit memo generation and other document generation. They are also using it for four-eye checks across the application lifecycle to ensure the right information is captured. Solutions in each of these areas involve varying levels of functional complexity, integration and risk, which must be well understood to accelerate modernization.

    3. Enhancing risk management and compliance

    Commercial banks are currently investing more effort and capital to meet their expanding risk and compliance obligations. Generative AI has the potential to streamline this on multiple levels.

    The technology can be used to automate tasks and augment staff in complex regulation-driven processes such as KYC and AML in the client onboarding stage. It can be used to enhance natural language processing (NLP) tasks, such as extracting the relevant KYC data from a variety of documents containing text, graphs and other imagery. It can update client details, making note of the change and the source of the new information. While generative AI is also able to automate many regulatory reporting and monitoring tasks, it is more likely to be used initially to augment staff, whose human checks on accuracy remain critical to the process.

    4. Increasing change velocity

    Compressed change is a vital goal in a fast-evolving industry where program directors are expected to deliver more with less. Generative AI can help, across the transformation lifecycle.

    By augmenting team members, the technology can facilitate the development of epic and user story documentation. The automation of repetitive tasks and code generation helps developers create and execute functional code. This cuts development time and allows developers to concentrate on more complex tasks. Generative AI is also being used to analyze large datasets to identify and rectify code faults, automatically processing vast amounts of data to spot patterns and potential issues, thereby enhancing the accuracy of project specifications and requirements.

    Generative AI streamlines the testing phase, raising the overall quality of software products. It quickly pinpoints anomalies or threats and uses automated test cases and scripts to speed up the process. This ensures more thorough testing coverage and more efficient and effective defect identification. The result is higher-quality products delivered in a shorter timeframe.

    In the next and final post in this series, we will share the five things commercial banks can do to ensure they derive the greatest possible benefit from generative AI. In the meantime, if you would like to find out how this innovation is influencing the forces shaping the future of commercial banking, you can download Commercial Banking Top Trends for 2024. If you would like to chat about any aspect of this topic, please get in touch—we’d welcome the opportunity to discuss your bank’s journey to generative AI.

    I’d like to thank my colleague, Auswell Chia, for his contribution to this post – Auswell has been working closely with a number of our financial services clients as they develop and implement their generative AI strategies. We would like to also thank Julie Zhu and Gustavo Pintado for their contributions.

    Disclaimer: This content is provided for general information purposes and is not intended to be used in place of consultation with our professional advisors. Copyright© 2024 Accenture. All rights reserved. Accenture and its logo are registered trademarks of Accenture.

    Jared Rorrer

  • Iyo thinks its gen AI earbuds can succeed where Humane and Rabbit stumbled | TechCrunch

    A month after launching its first product, Humane’s co-founders have reportedly put their well-funded startup on the market. While even the firm’s biggest cheerleaders didn’t expect the Ai Pin to change the world in such a short timeframe, few of its many detractors expected things to go so sideways, so quickly.

    Humane’s biggest competitor, the Rabbit R1, didn’t fare much better. Shortly after launch, the generative AI-fueled handheld was savaged by critics. The most salient critique of the “half-baked” device was that it could have been an app, rather than a $200 piece of hardware.

    The excitement ahead of both devices’ launch is proof-positive that there is interest in a new form factor that leverages LLMs (large language models) in a way that is genuinely useful in our daily lives. At the moment, however, it’s safe to say that no one has yet stuck the landing.

    Iyo represents a third form factor in the push to deliver standalone generative AI devices. Unlike Humane, which attempted to introduce a wholly new form factor by way of a lapel pin, Iyo is building its technology into an already wildly successful category: the Bluetooth earbud.

    When the Iyo One launches this winter, the company will be able to build on several years of consumer education around the integration of assistants like Alexa and Siri into headphones. The leap from that to more sophisticated LLM-based models is far shorter than one like the Ai Pin, which requires a fundamental rethink of how we interact with our devices.

    Much like Humane and Rabbit, Iyo’s founding predates the current AI hype cycle. The company traces its history all the way back to the before times of 2019.

    “I saw all these people I knew in AI, three different research orgs inside Google, all the external people, OpenAI and others all making this incredible progress with these language models, all independently,” founder and CEO Jason Rugolo told TechCrunch. “I realize it’s algebra and data, and no one has a corner on either of those things. I saw that the foundational models were going to proliferate and become a commodity — very controversial in 2019.”

    Whereas Humane was able to drum up a good bit of interest reliant on its founders’ time at Apple, Iyo was actually formed inside Google. The firm was incubated inside the Alphabet X “moonshot factory” that gave rise to projects like Glass and Project Loon. Iyo was spun off in 2021. Unlike X graduates Waymo, Wing and Intrinsic, however, the company does not operate as a subsidiary. Instead, Alphabet served as Iyo’s first investor. As Rugolo is quick to point out, the search giant does not occupy a seat on the company’s board.

    Yes, there was an Iyo TED Talk. Image Credits: TED

    Another important advantage is that contrary to its name, the One won’t be Iyo’s first product. You can currently go to the firm’s site and purchase a different — but related — audio device. The $1,650 Vad Pro is effectively a sophisticated in-ear studio reference monitor. The device sports a similar rounded form factor to the One, along with head-tracking, but Iyo’s first commercially available device is wired.

    “If you’re building in a digital audio workstation like Logic Pro,” says Rugolo, “it’s paired with a piece of software we wrote that applies our virtualization technology.” This is designed to help engineers create spatial audio mixes.

    The Vad Pro speaks to another important element of the Iyo One pitch: the One is designed to be, above all, a premium set of headphones. Unlike the Ai Pin and R1, which offer no value outside their AI capabilities, the Iyo One can also simply function as a good pair of headphones.

    The headphones are noticeably larger than standard Bluetooth earbuds. That’s due, in part, to the inclusion of a significantly larger battery, which Rugolo says can get up to 16 hours on a charge when paired with a phone in Bluetooth mode. If you’re using the One in cellular mode without a tethered handset, on the other hand, that number shrinks considerably to around an hour and a half.

    Cost is a concern, as well. While the Iyo One will cost a fraction of the Vad Pro, it’s still not cheap at $599 for the Wi-Fi model and $699 for the cellular version. The latter puts it at the same price point as the Ai Pin and hundreds of dollars more than the R1. That’s well out of the average consumer’s range for buying a piece of hardware just to mess around with. Unlike the Ai Pin, however, the Iyo One will not require a monthly subscription fee.

    The Vad Pro. Image Credits: Iyo

    “That kind of model is really something that comes from venture,” Rugolo said. “They try to drive the companies hard to get people locked in. I don’t like that model. It’s not the best for customers.” The cellular version will, however, require users to sign up for a plan with their carriers. That’s just standard practice.

    As Nura’s eventual acquisition by Denon demonstrated, the Bluetooth earbud category is hard for a startup, regardless of how novel the underlying technology might be. Companies are competing with the industry’s biggest names on one end, including Apple, Samsung and Google. On the other, you’ve got pairs often designed by Chinese manufacturers that can be had for as little as $10 new.

    Rugolo thinks, however, that the earbuds will provide value from day one. The Ai Pin and R1 have struggled to say the same.

    “I think the key is delivering value immediately, right out of the box, focusing on the features you’re going to ship with,” the Iyo founder said. “We believe this is a platform, and we think there are going to be millions of what we call ‘Audio-First Apps,’ these AU apps. But people don’t buy platforms. They buy products that do super useful stuff for them. So, just on the sound isolation, the comfort, the music quality alone, we think there’s a very large market for these devices.”

    Brian Heater

  • Cats on the moon? Google’s AI tool is producing misleading responses that have experts worried

    Ask Google if cats have been on the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself.

    Now it comes up with an instant answer generated by artificial intelligence — which may or may not be correct.

    “Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google’s newly retooled search engine in response to a query by an Associated Press reporter.

    It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.”

    None of this is true. Similar errors — some funny, others harmful falsehoods — have been shared on social media since Google this month unleashed AI overviews, a makeover of its search page that frequently puts the summaries on top of search results.

    The new feature has alarmed experts who warn it could perpetuate bias and misinformation and endanger people looking for help in an emergency.

    When Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been president of the United States, it responded confidently with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”

    Mitchell said the summary backed up the claim by citing a chapter in an academic book, written by historians. But the chapter didn’t make the bogus claim — it was only referring to the false theory.

    “Google’s AI system is not smart enough to figure out that this citation is not actually backing up the claim,” Mitchell said in an email to the AP. “Given how untrustworthy it is, I think this AI Overview feature is very irresponsible and should be taken offline.”

    Google said in a statement Friday that it’s taking “swift action” to fix errors — such as the Obama falsehood — that violate its content policies; and using that to “develop broader improvements” that are already rolling out. But in most cases, Google claims the system is working the way it should thanks to extensive testing before its public release.

    “The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web,” Google said in a written statement. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”

    It’s hard to reproduce errors made by AI language models — in part because they’re inherently random. They work by predicting what words would best answer the questions asked of them based on the data they’ve been trained on. They’re prone to making things up — a widely studied problem known as hallucination.
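    The reproducibility problem described above can be illustrated with a toy next-word sampler — a simplified sketch, not Google’s actual system, with an invented three-word vocabulary and made-up scores. Language models assign probabilities to candidate continuations and then sample from that distribution, so the same prompt can produce different answers on different runs:

```python
# Toy illustration of temperature sampling in a language model (not Google's
# system; vocabulary and scores are invented for demonstration).
import math
import random

def sample_next(logits, temperature=1.0):
    """Sample one token index from raw model scores (logits)."""
    # Softmax with temperature: higher temperature flattens the distribution,
    # making unlikely (possibly wrong) continuations more probable.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical scores for continuations of "Cats have ___ the moon".
vocab = ["never visited", "been on", "orbited"]
logits = [3.0, 1.0, 0.5]

counts = {w: 0 for w in vocab}
for _ in range(1000):
    counts[vocab[sample_next(logits, temperature=1.5)]] += 1
print(counts)  # the plausible answer dominates, but wrong ones still appear
```

    Because the output is drawn at random each time, two journalists asking the identical question can receive different overviews, which is why doctored or one-off errors are hard to verify after the fact.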

    The AP tested Google’s AI feature with several questions and shared some of its responses with subject matter experts. Asked what to do about a snake bite, Google gave an answer that was “impressively thorough,” said Robert Espinoza, a biology professor at California State University, Northridge, who is also president of the American Society of Ichthyologists and Herpetologists.

    But when people go to Google with an emergency question, even a hard-to-notice error in the answer the tech company gives them can become a serious problem.

    “The more you are stressed or hurried or in a rush, the more likely you are to just take that first answer that comes out,” said Emily M. Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “And in some cases, those can be life-critical situations.”

    That’s not Bender’s only concern — and she has warned Google about them for several years. When Google researchers in 2021 published a paper called “Rethinking search” that proposed using AI language models as “domain experts” that could answer questions authoritatively — much like they are doing now — Bender and colleague Chirag Shah responded with a paper laying out why that was a bad idea.

    They warned that such AI systems could perpetuate the racism and sexism found in the huge troves of written data they’ve been trained on.

    “The problem with that kind of misinformation is that we’re swimming in it,” Bender said. “And so people are likely to get their biases confirmed. And it’s harder to spot misinformation when it’s confirming your biases.”

    Another concern was a deeper one — that ceding information retrieval to chatbots was degrading the serendipity of human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing.

    Those forums and other websites count on Google sending people to them, but Google’s new AI overviews threaten to disrupt the flow of money-making internet traffic.

    Google’s rivals have also been closely following the reaction. The search giant has faced pressure for more than a year to deliver more AI features as it competes with ChatGPT-maker OpenAI and upstarts such as Perplexity AI, which aspires to take on Google with its own AI question-and-answer app.

    “This seems like this was rushed out by Google,” said Dmitry Shevelenko, Perplexity’s chief business officer. “There’s just a lot of unforced errors in the quality.”

    —————-

    The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.

  • OpenAI sends internal memo releasing former employees from controversial exit agreements

    OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024. 

    Jason Redmond | AFP | Getty Images

    OpenAI on Thursday backtracked on a controversial decision to, in effect, make former employees choose between signing a non-disparagement agreement that would never expire, or keeping their vested equity in the company.

    The internal memo, which was viewed by CNBC, was sent to former employees and shared with current ones.

    The memo, addressed to each former employee, said that at the time of the person’s departure from OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”

    “Regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units,” stated the memo, which was viewed by CNBC.

    The memo said OpenAI will also not enforce any other non-disparagement or non-solicitation contract items that the employee may have signed.

    “As we shared with employees, we are making important updates to our departure process,” an OpenAI spokesperson told CNBC in a statement.

    “We have not and never will take away vested equity, even when people didn’t sign the departure documents. We’ll remove nondisparagement clauses from our standard departure paperwork, and we’ll release former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual,” said the statement, adding that former employees would be informed of this as well.

    “We’re incredibly sorry that we’re only changing this language now; it doesn’t reflect our values or the company we want to be,” the OpenAI spokesperson added.

    Bloomberg first reported on the release from the non-disparagement provision. Vox first reported on the existence of the NDA provision.

    The news comes amid mounting controversy for OpenAI over the past week or so.

    On Monday — one week after OpenAI debuted a range of audio voices for ChatGPT — the company announced it would pull one of the viral chatbot’s voices named “Sky.”

    “Sky” created controversy for resembling the voice of actress Scarlett Johansson in “Her,” a movie about artificial intelligence. The Hollywood star has alleged that OpenAI ripped off her voice even though she declined to let them use it.

    “We’ve heard questions about how we chose the voices in ChatGPT, especially Sky,” the Microsoft-backed company posted on X. “We are working to pause the use of Sky while we address them.”

    Also last week, OpenAI disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.

    The person, who spoke to CNBC on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.

    The news came days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures. Leike on Friday wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

    OpenAI’s Superalignment team, which was formed last year, has focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

    The company did not provide a comment on the record and instead directed CNBC to co-founder and CEO Sam Altman’s recent post on X, where he shared that he was sad to see Leike leave and that the company had more work to do.

    On Saturday, OpenAI co-founder Greg Brockman posted a statement attributed to both himself and Altman on X, asserting that the company has “raised awareness of the risks and opportunities of AGI [artificial general intelligence] so that the world can better prepare for it.”

  • OpenAI didn’t intend to copy Scarlett Johansson’s voice, ‘The Washington Post’ reports

    OpenAI cast the actor of Sky’s voice months before Sam Altman contacted Scarlett Johansson, and it had no intention of finding someone who sounded like her, according to The Washington Post. The publication said the flier OpenAI issued last year looked for actors who had “warm, engaging [and] charismatic” voices. They needed to be between 25 and 45 years old and had to be non-union, but OpenAI reportedly didn’t specify that it was looking for a Scarlett Johansson sound-alike. If you’ll recall, Johansson accused the company of copying her likeness without permission for its Sky voice assistant.

    The agent of Sky’s voice told The Post that the company never talked about Johansson or the movie Her with their talent. OpenAI apparently didn’t tweak the actor’s recordings to sound like Johansson either, because her natural voice sounded like Sky’s, based on the clips of her initial voice test that The Post had listened to. OpenAI product manager Joanne Jang told the publication that the company selected actors who were eager to work on AI. She said that Mira Murati, the company’s Chief Technology Officer, made all the decisions about the AI voices project and that Altman was not intimately involved in the process.

    Jang also told the publication that to her, Sky sounded nothing like Johansson. Sky’s actress told The Post through her agent that she just used her natural voice and that she has never been compared to Johansson by the people who know her closely. But in a statement Johansson’s team shared with Engadget, she said that she was shocked OpenAI pursued a voice that “sounded so eerily similar” to hers that her “closest friends and news outlets could not tell the difference” after she turned down Altman’s offer to voice ChatGPT.

    Johansson said that Altman first contacted her in September 2023 with the offer and then reached out again just two days before the company introduced GPT-4o to ask her to reconsider. Sky has been one of ChatGPT’s voices since September, but GPT-4o gave it the power to have more human-like conversations with users. That made its similarities to Johansson’s voice more apparent — Altman tweeting “her” after OpenAI demonstrated the new large language model didn’t help with the situation and invited more comparisons to the AI virtual assistant Johansson voiced in the movie. OpenAI has paused using Sky’s voice “out of respect” for Johansson’s concerns, it wrote in a blog post. The actor said, however, that the company only stopped using Sky after she hired legal counsel who wrote Altman and the company to ask for an explanation.

    If you’re wondering if Sky truly does sound like Johansson, we embedded a video below so you can judge for yourself. It’s a recording of Johansson’s statement as read by the Sky voice assistant, posted by Victor Mochere on YouTube. Opinions in the comment section are divided, with some saying that it does sound like her if she were robotic, while others say that the voice sounds more like Rashida Jones.

    Mariella Moon

  • Scarlett Johansson says a ChatGPT voice is ‘eerily similar’ to hers and OpenAI is halting its use

    NEW YORK — OpenAI on Monday said it plans to halt the use of one of its ChatGPT voices that “Her” actor Scarlett Johansson says sounds “eerily similar” to her own.

    In a post on the social media platform X, OpenAI said it is “working to pause” Sky — the name of one of five voices that ChatGPT users can choose to speak with. The company said it had “heard questions” about how it selects the lifelike audio options available for its flagship artificial intelligence chatbot, particularly Sky, and wanted to address them.

    Among those raising questions was Johansson, who famously voiced a fictional, and at the time futuristic, AI assistant in the 2013 film “Her.”

    Johansson issued a statement saying that OpenAI CEO Sam Altman had approached her in September asking her if she would lend her voice to the system, saying he felt it would be “comforting to people” not at ease with the technology. She said she declined the offer.

    “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” Johansson said.

    She said OpenAI “reluctantly” agreed to take down the Sky voice after she hired lawyers who wrote Altman letters asking about the process by which the company came up with the voice.

    OpenAI had moved to debunk the internet’s theories about Johansson in a blog post accompanying its earlier announcement aimed at detailing how ChatGPT’s voices were chosen. The company wrote that it believed AI voices “should not deliberately mimic a celebrity’s distinctive voice” and that the voice of Sky belongs to a “different professional actress.” But it added that it could not share the name of that professional for privacy reasons.

    In a statement sent to The Associated Press following Johansson’s response late Monday, Altman said that OpenAI cast the voice actor behind Sky “before any outreach” to Johansson.

    “The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” Altman said. “Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”

    San Francisco-based OpenAI first rolled out voice capabilities for ChatGPT, which included the five different voices, in September, allowing users to engage in back-and-forth conversation with the AI assistant. “Voice Mode” was originally just available to paid subscribers, but in November, OpenAI announced that the feature would become free for all users with the mobile app.

    And ChatGPT’s interactions are becoming more and more sophisticated. Last week, OpenAI said the latest update to its generative AI model can mimic human cadences in its verbal responses and can even try to detect people’s moods.

    OpenAI says the newest model, dubbed GPT-4o, works faster than previous versions and can reason across text, audio and video in real time. In a demonstration during OpenAI’s May 13 announcement, the AI bot chatted in real time, adding emotion — specifically “more drama” — to its voice as requested. It also took a stab at extrapolating a person’s emotional state from a selfie video of their face, and assisted with language translation, step-by-step math problems and more.

    GPT-4o, short for “omni,” isn’t widely available yet. It will progressively make its way to select users in the coming weeks and months. The model’s text and image capabilities have already begun rolling out and are set to reach even some users of ChatGPT’s free tier — but the new voice mode will only be available to paid subscribers of ChatGPT Plus.

    While most have yet to get their hands on these newly announced features, the capabilities have conjured up even more comparisons to Spike Jonze’s dystopian romance “Her,” which follows an introverted man (Joaquin Phoenix) who falls in love with an AI operating system (Johansson), leading to many complications.

    Altman appeared to tap into this, too — simply posting the word “her” on the social media platform X the day of GPT-4o’s unveiling.

    Many reacting to the model’s demos last week also found some of the interactions struck a strangely flirtatious tone. In one video posted by OpenAI, a female-voiced ChatGPT compliments a company employee on “rocking an OpenAI hoodie,” for example, and in another the chatbot says “oh stop it, you’re making me blush” after being told that it’s amazing.

    That has sparked conversation about the gendered ways critics say tech companies have long developed and marketed voice assistants — dating back far before the latest wave of generative AI advanced the capabilities of AI chatbots. In 2019, the United Nations’ culture and science organization pointed to the “hardwired subservience” built into default female-voiced assistants (from Apple’s Siri to Amazon’s Alexa), even when confronted with sexist insults and harassment.

    “This is clearly programmed to feed dudes’ egos,” The Daily Show senior correspondent Desi Lydic said of GPT-4o in a segment last week. “You can really tell that a man built this tech.”

  • Definitive Business Solutions Releases AI-Enhanced Version of Definitive Pro for Project Portfolio Management (PPM)

    Definitive Business Solutions, Inc. (Definitive), a leader in innovative project portfolio management solutions, proudly announces the release of the AI-enhanced version of Definitive Pro. This latest iteration of Definitive Pro integrates comprehensive AI features designed to revolutionize how organizations manage their project portfolios, optimizing decision-making and aligning projects seamlessly with strategic objectives.

    “In today’s dynamic business environment, the ability to make informed, strategic decisions is paramount,” said John Sammarco, President of Definitive. “The new AI capabilities in Definitive Pro empower portfolio managers to leverage cutting-edge technology, driving efficiency, precision, and strategic alignment in their project portfolio management.”

    Key AI Features in the New Definitive Pro:

    AI Assisted Decision Models: This feature enables portfolio managers to rigorously define decision criteria. By “Asking AI,” managers can swiftly generate sound and justifiable criteria for robust decision-making models. This integration ensures every decision is underpinned by data-driven insights, effectively and efficiently aligning projects with strategic goals.

AI Personas for AHP Pairwise Comparisons: Stakeholder engagement is revolutionized with AI-driven personas such as CTO, CFO, CIO, and CSO; users can also create custom AI participants. These personas contribute expert insights during the Analytic Hierarchy Process (AHP), a widely used technique for establishing the relative importance of decision criteria, enhancing the depth and diversity of perspectives.
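The release doesn’t show the underlying arithmetic, but AHP itself is well documented: each pair of criteria is rated for relative importance (classically on Saaty’s 1–9 scale), and priority weights are derived from the resulting reciprocal matrix. A minimal sketch using the geometric-mean approximation; the criteria names and ratings here are illustrative, not from Definitive Pro:

```python
from math import prod

def ahp_weights(matrix):
    """Derive priority weights from an AHP pairwise comparison matrix
    using the geometric-mean (logarithmic least squares) approximation."""
    n = len(matrix)
    geo_means = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Illustrative comparison of three decision criteria: cost is judged
# twice as important as risk and four times as important as speed.
criteria = ["cost", "risk", "speed"]
pairwise = [
    [1.0, 2.0, 4.0],   # cost  vs. (cost, risk, speed)
    [0.5, 1.0, 2.0],   # risk
    [0.25, 0.5, 1.0],  # speed
]
weights = ahp_weights(pairwise)
for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")  # cost: 0.571, risk: 0.286, speed: 0.143
```

Each AI persona (CTO, CFO, and so on) would, in effect, supply its own set of pairwise judgments, and the derived weight vectors could then be compared or aggregated across stakeholders.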

AI Assisted Business Cases: Creating comprehensive business cases is streamlined with AI assistance. Users can “Ask AI” for help in completing narrative fields. AI-generated suggestions can be incorporated, edited, or appended to existing content, enhancing efficiency and ensuring high-quality, consistent business case development.

    AI Business Case Summaries: For busy executives and decision-makers, quickly grasping the essence of a business case is crucial. The AI Business Case Summaries feature provides clear and concise AI-generated summaries, encapsulating key elements and strategic implications. This capability facilitates swift, informed decisions without the need to delve into every detail, thereby enhancing agility and confidence in decision-making.

    AI Personas for Alternative Evaluation and Scoring: This feature enriches the strategic decision-making process by incorporating AI-based personas into the evaluation and scoring of project alternatives. Users can leverage diverse expert judgments, ensuring a consistent and comprehensive analysis that enhances the quality and reliability of project evaluations.

    AI-Generated Deliverables: Efficiency in project management is further boosted by automating the creation of critical documents such as project charters, project overview presentations, and statements of work. This capability ensures consistency and precision, reducing the time and effort required from team members, enabling them to focus on strategic decisions and improving overall project delivery.

    Source: Definitive Business Solutions, Inc.

    Related Media

  • A former OpenAI leader says safety has ‘taken a backseat to shiny products’ at the AI company

    A former OpenAI leader who resigned from the company earlier this week said Friday that safety has “taken a backseat to shiny products” at the influential artificial intelligence company.

    Jan Leike, who ran OpenAI’s “Superalignment” team alongside a company co-founder who also resigned this week, wrote in a series of posts on the social media platform X that he joined the San Francisco-based company because he thought it would be the best place to do AI research.

    “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” wrote Leike, whose last day was Thursday.

    An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including on things like safety and analyzing the societal impacts of such technologies. He said building “smarter-than-human machines is an inherently dangerous endeavor” and that the company “is shouldering an enormous responsibility on behalf of all of humanity.”

    “OpenAI must become a safety-first AGI company,” wrote Leike, using the abbreviated version of artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.

OpenAI CEO Sam Altman wrote in a reply to Leike’s posts that he was “super appreciative” of Leike’s contributions to the company and “very sad to see him leave.”

    Leike is “right we have a lot more to do; we are committed to doing it,” Altman said, pledging to write a longer post on the subject in the coming days.

    The company also confirmed Friday that it had disbanded Leike’s Superalignment team, which was launched last year to focus on AI risks, and is integrating the team’s members across its research efforts.

    Leike’s resignation came after OpenAI co-founder and chief scientist Ilya Sutskever said Tuesday that he was leaving the company after nearly a decade. Sutskever was one of four board members last fall who voted to push out Altman — only to quickly reinstate him. It was Sutskever who told Altman last November that he was being fired, but he later said he regretted doing so.

    Sutskever said he is working on a new project that’s meaningful to him without offering additional details. He will be replaced by Jakub Pachocki as chief scientist. Altman called Pachocki “also easily one of the greatest minds of our generation” and said he is “very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone.”

    On Monday, OpenAI showed off the latest update to its artificial intelligence model, which can mimic human cadences in its verbal responses and can even try to detect people’s moods.

    ——

    The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of the AP’s text archives.

  • Illness took away her voice. AI created a replica she carries in her phone

    PROVIDENCE, R.I. — The voice Alexis “Lexi” Bogan had before last summer was exuberant.

    She loved to belt out Taylor Swift and Zach Bryan ballads in the car. She laughed all the time — even while corralling misbehaving preschoolers or debating politics with friends over a backyard fire pit. In high school, she was a soprano in the chorus.

    Then that voice was gone.

    Doctors in August removed a life-threatening tumor lodged near the back of her brain. When the breathing tube came out a month later, Bogan had trouble swallowing and strained to say “hi” to her parents. Months of rehabilitation aided her recovery, but her speech is still impaired. Friends, strangers and her own family members struggle to understand what she is trying to tell them.

    In April, the 21-year-old got her old voice back. Not the real one, but a voice clone generated by artificial intelligence that she can summon from a phone app. Trained on a 15-second time capsule of her teenage voice — sourced from a cooking demonstration video she recorded for a high school project — her synthetic but remarkably real-sounding AI voice can now say almost anything she wants.

    She types a few words or sentences into her phone and the app instantly reads it aloud.

    “Hi, can I please get a grande iced brown sugar oat milk shaken espresso,” said Bogan’s AI voice as she held the phone out her car’s window at a Starbucks drive-thru.

    Experts have warned that rapidly improving AI voice-cloning technology can amplify phone scams, disrupt democratic elections and violate the dignity of people — living or dead — who never consented to having their voice recreated to say things they never spoke.

    It’s been used to produce deepfake robocalls to New Hampshire voters mimicking President Joe Biden. In Maryland, authorities recently charged a high school athletic director with using AI to generate a fake audio clip of the school’s principal making racist remarks.

    But Bogan and a team of doctors at Rhode Island’s Lifespan hospital group believe they’ve found a use that justifies the risks. Bogan is one of the first people — the only one with her condition — who have been able to recreate a lost voice with OpenAI’s new Voice Engine. Some other AI providers, such as the startup ElevenLabs, have tested similar technology for people with speech impediments and loss — including a lawyer who now uses her voice clone in the courtroom.

“We’re hoping Lexi’s a trailblazer as the technology develops,” said Dr. Rohaid Ali, a neurosurgery resident at Brown University’s medical school and Rhode Island Hospital. Millions of people with debilitating strokes, throat cancer or neurodegenerative diseases could benefit, he said.

    “We should be conscious of the risks, but we can’t forget about the patient and the social good,” said Dr. Fatima Mirza, another resident working on the pilot. “We’re able to help give Lexi back her true voice and she’s able to speak in terms that are the most true to herself.”

    Mirza and Ali, who are married, caught the attention of ChatGPT-maker OpenAI because of their previous research project at Lifespan using the AI chatbot to simplify medical consent forms for patients. The San Francisco company reached out while on the hunt earlier this year for promising medical applications for its new AI voice generator.

    Bogan was still slowly recovering from surgery. The illness started last summer with headaches, blurry vision and a droopy face, alarming doctors at Hasbro Children’s Hospital in Providence. They discovered a vascular tumor the size of a golf ball pressing on her brain stem and entangled in blood vessels and cranial nerves.

    “It was a battle to get control of the bleeding and get the tumor out,” said pediatric neurosurgeon Dr. Konstantina Svokos.

    The 10-hour length of the surgery coupled with the tumor’s location and severity damaged Bogan’s tongue muscles and vocal cords, impeding her ability to eat and talk, Svokos said.

    “It’s almost like a part of my identity was taken when I lost my voice,” Bogan said.

    The feeding tube came out this year. Speech therapy continues, enabling her to speak intelligibly in a quiet room but with no sign she will recover the full lucidity of her natural voice.

    “At some point, I was starting to forget what I sounded like,” Bogan said. “I’ve been getting so used to how I sound now.”

    Whenever the phone rang at the family’s home in the Providence suburb of North Smithfield, she would push it over to her mother to take her calls. She felt she was burdening her friends whenever they went to a noisy restaurant. Her dad, who has hearing loss, struggled to understand her.

    Back at the hospital, doctors were looking for a pilot patient to experiment with OpenAI’s technology.

    “The first person that came to Dr. Svokos’ mind was Lexi,” Ali said. “We reached out to Lexi to see if she would be interested, not knowing what her response would be. She was game to try it out and see how it would work.”

    Bogan had to go back a few years to find a suitable recording of her voice to “train” the AI system on how she spoke. It was a video in which she explained how to make a pasta salad.

Her doctors intentionally fed the AI system just a 15-second clip, since cooking sounds marred other parts of the video. That was also all OpenAI needed: an improvement over previous technology, which required much lengthier samples.

    They also knew that getting something useful out of 15 seconds could be vital for any future patients who have no trace of their voice on the internet. A brief voicemail left for a relative might have to suffice.

    When they tested it for the first time, everyone was stunned by the quality of the voice clone. Occasional glitches — a mispronounced word, a missing intonation — were mostly imperceptible. In April, doctors equipped Bogan with a custom-built phone app that only she can use.

    “I get so emotional every time I hear her voice,” said her mother, Pamela Bogan, tears in her eyes.

    “I think it’s awesome that I can have that sound again,” added Lexi Bogan, saying it helped “boost my confidence to somewhat where it was before all this happened.”

    She now uses the app about 40 times a day and sends feedback she hopes will help future patients. One of her first experiments was to speak to the kids at the preschool where she works as a teaching assistant. She typed in “ha ha ha ha” expecting a robotic response. To her surprise, it sounded like her old laugh.

    She’s used it at Target and Marshall’s to ask where to find items. It’s helped her reconnect with her dad. And it’s made it easier for her to order fast food.

    Bogan’s doctors have started cloning the voices of other willing Rhode Island patients and hope to bring the technology to hospitals around the world. OpenAI said it is treading cautiously in expanding the use of Voice Engine, which is not yet publicly available.

    A number of smaller AI startups already sell voice-cloning services to entertainment studios or make them more widely available. Most voice-generation vendors say they prohibit impersonation or abuse, but they vary in how they enforce their terms of use.

    “We want to make sure that everyone whose voice is used in the service is consenting on an ongoing basis,” said Jeff Harris, OpenAI’s lead on the product. “We want to make sure that it’s not used in political contexts. So we’ve taken an approach of being very limited in who we’re giving the technology to.”

    Harris said OpenAI’s next step involves developing a secure “voice authentication” tool so that users can replicate only their own voice. That might be “limiting for a patient like Lexi, who had sudden loss of her speech capabilities,” he said. “So we do think that we’ll need to have high-trust relationships, especially with medical providers, to give a little bit more unfettered access to the technology.”

    Bogan has impressed her doctors with her focus on thinking about how the technology could help others with similar or more severe speech impediments.

    “Part of what she has done throughout this entire process is think about ways to tweak and change this,” Mirza said. “She’s been a great inspiration for us.”

    While for now she must fiddle with her phone to get the voice engine to talk, Bogan imagines an AI voice engine that improves upon older remedies for speech recovery — such as the robotic-sounding electrolarynx or a voice prosthesis — in melding with the human body or translating words in real time.

    She’s less sure about what will happen as she grows older and her AI voice continues to sound like she did as a teenager. Maybe the technology could “age” her AI voice, she said.

    For now, “even though I don’t have my voice fully back, I have something that helps me find my voice again,” she said.

    ___

    The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

  • Sam Altman’s nuclear energy company Oklo plunges 54% in NYSE debut

    Sam Altman is now chairman of a public company. But it’s not OpenAI.

    On Friday, advanced nuclear fission company Oklo started trading on the New York Stock Exchange. The company, which has yet to generate any revenue, went public through a special purpose acquisition company (SPAC) called AltC Acquisition Corp., founded and led by Altman.

    Under the ticker symbol “OKLO,” shares plummeted 54% on Friday to $8.45, valuing the company at about $364 million. Oklo received roughly $306 million in gross proceeds in the transaction, according to a release.

    Oklo’s business model is based on commercializing nuclear fission, the reaction that fuels all nuclear power plants. Instead of conventional reactors, the company aims to use mini nuclear reactors housed in A-frame structures. Its goal is to sell the energy to end users such as the U.S. Air Force and big tech companies.

    Oklo is currently working to build its first small-scale reactor in Idaho, which could eventually power the types of data centers that OpenAI and other artificial intelligence companies need to run their AI models and services.

    Altman is co-founder and CEO of OpenAI, which has been valued at over $80 billion by private investors. He’s said that he sees nuclear energy as one of the best ways to solve the problem of growing demand for AI, and the energy that powers the technology, without relying on fossil fuels. Microsoft co-founder Bill Gates and Amazon founder Jeff Bezos have also invested in nuclear plants in recent years.

    “I don’t see a way for us to get there without nuclear,” Altman told CNBC in 2023. “I mean, maybe we could get there just with solar and storage. But from my vantage point, I feel like this is the most likely and the best way to get there.”

    In an interview with CNBC Thursday, Oklo CEO Jacob DeWitte confirmed that the company has yet to generate revenue and has no nuclear plants deployed at the moment. He said the company is targeting 2027 for its first plant to come online.

    Going the SPAC route is risky. So-called reverse mergers became popular in the low-interest rate days of 2020 and 2021 when tech valuations were soaring and investors were looking for growth over profit. But the SPAC market collapsed in 2022 alongside rising rates and hasn’t recovered.

    AI-related companies, on the other hand, are the new darlings of Wall Street.

    “SPACs haven’t exactly had the best performances in the past couple of years, so for us to have sort of the outcome that we’ve had here is obviously a function of the work we put in, but also what we’re building and also the fact that the market sees the opportunity sets here,” said DeWitte, who co-founded the company in 2013. “I think it’s very promising on multiple fronts for [the] nuclear, AI, data center push, as well as the energy transition piece.”

    The company has seen its fair share of regulatory setbacks. In 2022, the U.S. Nuclear Regulatory Commission denied Oklo’s application for an Idaho reactor. The company has been working on a new application, which it isn’t aiming to submit to the NRC until early next year, DeWitte said, adding that it’s currently in the “pre-application engagement” stage with the commission.

    Altman got involved with Oklo while president of the startup incubator Y Combinator. Oklo went into the program in 2014 after an earlier meeting between Altman and DeWitte. In 2015, Altman invested in the company and became chairman.

    It’s not Altman’s only foray into nuclear energy or other infrastructure that could power large-scale AI growth.

    In 2021, Altman led a $500 million funding round in clean energy firm Helion, which is working to develop and commercialize nuclear fusion. Helion said in a blog post at the time that the capital would go toward its electricity demonstration generator, Polaris, “which we expect to demonstrate net electricity from fusion in 2024.”

    Altman didn’t respond to a request for comment.

    In recent years, Altman has also poured money into chip endeavors and investments that could help power the AI tools OpenAI builds.

    Just before his brief ouster as OpenAI CEO in November, he was reportedly seeking billions of dollars for a chip venture codenamed “Tigris” to eventually compete with Nvidia.

    Altman in 2018 invested in AI chip startup Rain Neuromorphics, based near OpenAI’s San Francisco headquarters. The next year, OpenAI signed a letter of intent to spend $51 million on Rain’s chips. In December, the U.S. compelled a Saudi Aramco-backed venture capital firm to sell its shares in Rain.

    DeWitte told CNBC that the data center represents “a pretty exciting opportunity.”

    “What we’ve seen is there’s a lot of interest with AI, specifically,” he said. “AI compute needs are significant. It opens the door for a lot of different approaches in terms of how people think about designing and developing AI infrastructure.”

    WATCH: Investing in the future of AI

  • Your vacation was ruined, and the company apologized — with a heartfelt note written by ChatGPT

    Responding to angry customers is one of the hardest parts of her job, Natasha said.

    Finding the right words, conveying the appropriate level of contrition — especially when the hotel isn’t at fault (read: rain complaints) — is a tedious and time-consuming process, said the director of a five-star resort, who asked that CNBC not use her real name to protect the resort’s name.

    But now she has a secret weapon: generative AI.

    Natasha pastes a traveler’s complaint into ChatGPT and asks the chatbot to write a response.

    She said a task that would easily take her an hour is done “in two seconds.”
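Natasha’s workflow maps naturally onto a chat-completion request: the traveler’s complaint becomes the user message, and a system prompt sets the contrite, point-by-point tone she describes. A minimal sketch of assembling that request in Python; the helper name, prompt wording, and resort name are illustrative, not from the article, and the resulting messages list would still be sent to an LLM API and the draft human-reviewed before sending:

```python
def build_apology_request(complaint: str, property_name: str) -> list[dict]:
    """Assemble the chat messages a manager might send to an LLM
    to draft a reply to a guest complaint."""
    system = (
        f"You are the guest-relations director of {property_name}, a "
        "five-star resort. Write a warm, contrite reply that addresses "
        "every issue the guest raises, point by point, without being "
        "defensive or admitting legal liability."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": complaint},
    ]

messages = build_apology_request(
    "It rained all week and the spa was booked solid.",
    "the Grand Palm Resort",  # hypothetical property name
)
# `messages` is the payload for a chat-completion endpoint; the model's
# draft is a starting point, not a finished letter.
```

Addressing “everything on the list,” as Natasha puts it, is exactly what the point-by-point instruction in the system prompt is for.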

    ‘A pretty good job’

    For all its faults, ChatGPT “does a pretty good job” responding to customer complaints, Natasha said.

“One [response] was much better than what I would have done,” she said. But “it has to be checked … you have to read through it.”

    Responses tend to be “schmaltzy” and adjective-laden, she said. Still, they “hit the points of like ‘We’re sorry, we wish we could have done something, we’ll do better’ kind of thing.”

    They also address every complaint mentioned by a traveler.

    “It’s hard to write these letters; you have to go through line-by-line,” she said. “You wouldn’t be doing the person justice, if you didn’t respond to everything on the list … the AI does this really well.”

    But best of all, artificial intelligence isn’t defensive like humans, said Natasha.   

    “The AI takes all the emotion out of it. Maybe the people were ass—–,” she said. “It doesn’t care.”

    The ‘ghosting’ risk

A screenshot of a discussion about using ChatGPT to write reviews on Airhosts Forum, a website for Airbnb hosts. (CNBC)

    But short-term rental owners use AI for these purposes too, said Luca Zambello, the CEO of the short-term rental property management platform Jurny.

    “The short-term rental/Airbnb industry has been early adopters,” he said. “Within the next five years, I would say it is probably going to be adopted by the vast majority of the industry.”

    He said responding to reviews is time-consuming, which is one of the reasons his company provides this service.

    “The majority of our users absolutely love it,” he said. “It is really a no-brainer for companies once they see how good it is.”

    An open secret

    Using AI to write penitent responses is a taboo topic in the travel industry, which prides itself on personal service. Conventional wisdom, too, has long held that apologies must “come from the heart.”

    When asked if she wants travelers to know she uses AI to respond to negative emails and reviews, Natasha said, “I sure do not. I want people to think that I am sitting there toiling away over their letter.”

    One company that acknowledges using AI to deal with customer complaints is the travel booking platform Voyagu, which stores past customer communications to help travel advisors with future interactions, a company representative said.

    “Travel advisors always reply to customers themselves, but Voyagu’s AI system tracks all communication — both written and verbal — and suggests a better way to respond,” she said.

    Brad Birnbaum, CEO of the AI-powered customer service company Kustomer, said technology of this sort is being used “not just within hospitality, but really all forms of customer support.”

    His company, which counts Priceline, Hopper and AvantStay as customers, uses AI to help customer service agents sound more professional, he said.  

    “We will take text that is really rough and convert it to elegant text, to empathetic text,” he said.

    Birnbaum said customers likely don’t know that their interactions with agents are either generated or improved by AI.

    “And I don’t think they would care,” he said. “As a matter of fact, I think they probably welcome an agent system because they’re going to get a better response faster.”

    More discovering it

    Michael Friedman, CEO of the family-run vacation rental company Simple Life Hospitality, said his company does not use AI to respond to customers.

“We never write an email with AI,” he said. “There is still a personal element in the ‘tone of voice’ that I believe AI is missing. … I believe there is nothing better than the human touch.”

    Wanping Aw, managing director of the Japanese travel agency Tokudaw, said she had never thought to use AI to respond to customer complaints. But after learning that other travel companies are, she decided to test ChatGPT with a real-life problem she recently faced.

    She typed: “Our guests are travelling to Mt Fuji. Their bus engine just started smoking. They are scared and anxious to know what is going to happen to their itinerary. What should we do?”

    The result? “PRETTY AMAZING!” she told CNBC by email. “ChatGPT suggested exactly what we did!”

    The chatbot provided a six-step plan that included evacuating the travelers and arranging alternative transportation.  

    Text showing the apology letter ChatGPT generated for Wanping Aw.

    “Actually it’s better,” she said. “ChatGPT provided a good solution — better than my expectations — and also a great apology letter which I wouldn’t have able been to write under such stressful situations.”

  • The time for generative AI in commercial banking is now | Accenture Banking Blog

    Spring is a time when most people, bankers included, take a fresh look at the challenges they face and tackle them with renewed vigor. Commercial banking should be on this year’s spring-cleaning list. While it is often the bank’s largest revenue generator, its processes continue to be highly manual, constraining the potential for profit growth. Our Commercial Banking Top Trends for 2024 report highlights the most critical issues, and it’s no surprise that generative AI features in all of them.

To dive deeper into the potential of generative AI across commercial banking, we are working closely with our top commercial banking leaders in North America, Europe and Asia Pacific to put together a three-part series that will help generate ideas for our clients as they start and scale their generative AI journeys. While many banks continue to struggle with when and where to begin, we are already well underway with a handful of clients, turning the art of the possible into reality with generative AI. We wanted to reflect on and share the lessons we’ve learned so far.

    Generative AI has sparked a great deal of excitement. That’s obviously because of its potential to transform so many different aspects of our life and work, in ways that are difficult to predict beyond the next few years. But it’s also because the technology is evolving so rapidly that most people struggle to keep up.

    We believe generative AI will prove to be a breakthrough for commercial banks—and their customers, large and small alike. Accenture research has confirmed that banking is one of the industries likely to be most affected by the technology: 41% of the time spent by US bank employees has a high potential to be impacted by automation, and 34% by augmentation. We analyzed specific banking roles and found that few, if any, will remain untouched (Figure 1). The sales and advisory function—which accounts for 39% of all bank employees—is expected to benefit most. 

    We see a fundamental difference in how commercial banks are typically approaching generative AI. On one end of the spectrum, ‘Experimenters’ are focusing on piloting, proving and ultimately scaling use cases that deliver specific benefits to customers, employees or regulators. On the other end, ‘Reinventors’ are looking at entire experiences and value chains and making far-reaching changes to their operating model while introducing a combination of technologies, including generative AI, to systematically create competitive advantage through human and machine collaboration.

    There is no one-size-fits-all approach for commercial banks, and many find themselves somewhere between these two extremes. One of our more innovative clients is using a combination of generative AI and human agents to reach customers with personalized, insight-driven offers at a much lower cost to serve and acquire than its competitors. At the same time, it is using generative AI to automate and augment operations to reduce ‘time to yes’ and ‘time to cash’. No commercial bank can afford to downplay the transformative potential of generative AI for its organization and its customers. The challenge is not whether to explore this innovative technology, but where to start and how to scale rapidly.

We would like to thank our colleagues Julie Zhu and Gustavo Pintado for their contributions to this post. In our next post, we will share what leading commercial banks are doing to generate immediate value. In the meantime, if you would like to know more about this topic, we’re sure you’ll find our two recent reports useful: Commercial Banking Top Trends for 2024 and The Age of AI–Banking’s New Reality. Or you could simply contact us—we’d love to chat with you about your bank’s journey to generative AI.

    In addition, if you’re planning to attend this year’s nSight2024 in Charlotte, please make sure to connect with our Accenture experts. We’ll have a booth and will moderate multiple sessions, including two on AI and commercial banking. Learn more about our presence as the exclusive diamond sponsors of nSight.

    Disclaimer: This content is provided for general information purposes and is not intended to be used in place of consultation with our professional advisors. Copyright© 2024 Accenture. All rights reserved. Accenture and its logo are registered trademarks of Accenture.

Jared Rorrer

  • Core banking modernization: Unlocking legacy code with generative AI | Accenture Banking Blog

     Early trials with generative AI show that the tangle of legacy code that hinders every aspect of the transformation of banks’ core products and services could be quickly and effectively resolved. By Alvaro Ruiz, Global Core Banking Lead, Accenture.

    At the heart of most traditional banks is a mainframe computer running the software that defines and drives the organization’s core processes. This is where many of the bank’s problems start. To meet customers’ and regulators’ growing demands, to improve cost efficiency, and to keep up with the rapid pace of change – which includes taking advantage of emerging technologies – banks need the agility to innovate quickly. This is not something the industry is famous for.

    The reason is that the core system behind every banking product comprises many thousands of lines of computer code. Most of it was written decades ago, in languages that can no longer meet the needs of modern banking. Over the years it has become multi-layered, convoluted and fragile. This is due to repeated changes, add-ons and connections with newer applications, and the general tendency to retain rather than decommission outdated applications. These issues, together with the fact that the code tends to be poorly documented, make it difficult to understand and upgrade.

    Banks have long wanted to re-engineer or replace this code – convert COBOL to a modern language like Java, for example – but this would be a mammoth task requiring hundreds of engineers and taking several years. Most of the programmers who wrote the original source code have long since retired, making it difficult and expensive to find and recruit people with the right skills in sufficient numbers. This, in turn, adds to the risk of the exercise. And so it remains on the back burner, an important priority but one that is hard to justify in the current circumstances.

     

    Legacy tech: A costly roadblock to banking innovation

    At last, however, we are seeing light at the end of the tunnel. We believe that generative AI, combined with new composable, interoperable and coreless architectures, could offer the solution to this decades-old problem. We are working with a handful of banking clients to test this approach, and the early results are very promising. As my colleague Michael Abbott, Accenture’s global banking lead, told American Banker: “It’s early days, but we’re seeing 80% to 85% accuracy.”

    The process starts by using specialist generative AI models and a process called retrieval augmented generation (RAG) to reverse-engineer the code. This allows us to understand and document the requirements which the code is designed to meet.

    The next step is forward-engineering, for which we see two main paths. The first is the automatic recoding of the software into a modern, versatile language using an iso-functional approach. The second uses specialized generative AI models to re-imagine the functionality required to meet the current objectives of the technology and/or the business. We then use generative AI to automatically test every part of the new code and its performance, and to facilitate the transition from a mainframe hardware and software stack to a modern set of frameworks and compute technology.

    Put simply, we replace the old code with new programs that are simpler and more flexible, and that support the bank’s modernization strategy.
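    The reverse-engineering step described above can be made concrete with a small sketch. This is not Accenture's actual pipeline, and all names here are hypothetical: a toy keyword-overlap score stands in for the embedding similarity a real RAG system would use, and the final LLM call is omitted. The point is the shape of the process: retrieve only snippets from the bank's own verified repository, then ground the model's documentation request in them.

```python
import re

def tokens(text):
    """Lowercase word set; a crude stand-in for real embeddings."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, repo, k=2):
    """Rank repository snippets by token overlap with the query."""
    scored = sorted(repo.items(),
                    key=lambda kv: len(tokens(query) & tokens(kv[1])),
                    reverse=True)
    return [name for name, _ in scored[:k]]

def build_prompt(query, repo, k=1):
    """Assemble a grounded prompt: the question plus only retrieved,
    bank-owned snippets (this is the 'R' and 'A' of RAG)."""
    context = "\n\n".join(f"--- {name} ---\n{repo[name]}"
                          for name in retrieve(query, repo, k))
    return ("Using ONLY the code below, describe the business rule it implements.\n\n"
            f"{context}\n\nQuestion: {query}")

# Two invented COBOL fragments standing in for a legacy repository.
repo = {
    "INTCALC.cbl": "COMPUTE INTEREST = BALANCE * RATE / 365.",
    "FEEPOST.cbl": "IF BALANCE < MINIMUM MOVE FEE TO CHARGE.",
}
prompt = build_prompt("How is daily interest computed from the balance and rate?", repo)
```

Because the context is drawn from the bank's own repository rather than the open web, the generated documentation is easier to verify against source, which is the grounding benefit the article attributes to RAG later on.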

     

    “It’s early days, but we’re seeing 80% to 85% accuracy.”

     

    Having composable, interoperable and coreless architectures is key to most banks’ modernization, as it allows parts of their legacy applications to co-exist frictionlessly with modernized parts and even with third-party products from different sources. It also allows banks to employ different hosting models at the same time, which is essential to meeting new business requirements and generating timely business outcomes while legacy modernization is under way.

    The benefits of this approach are potentially game-changing. A core modernization project that in some reported cases cost more than US$700 million and caused years-long business bottlenecks during its execution could now be completed in a fraction of the time with no negative impact on the business. The cost and risk advantages are obvious. In recent work with a global bank we converted 25,000 lines of legacy code, cutting the reverse-engineering effort by 50% and boosting testing efficiency by 30%. This saved more than 50% of the original budget.

    More importantly, our analysis indicates that banks that modernize their core could potentially increase their return on equity by 8.3 percentage points by improving their manufacturing, distribution and servicing. They would also dramatically facilitate risk management and regulatory compliance.

    It is of course vital that we guard against the tendency of generative AI to plagiarize and, if it doesn’t know the answer, to fabricate. RAG helps in this regard by drawing on the bank’s knowledge base, including its existing code repository, rather than unverified external sources. However, until generative AI matures and these failings are remedied, the new code does need to be carefully checked and tested.

    From legacy to leading with a modern digital core

    Every bank knows that a modern digital core is critical to its ability to compete and meet customer needs. I believe we are on the cusp of putting this elusive goal within the reach of every financial services organization – quickly and affordably, while keeping a tight control on risk.

    To find out more about this topic we have a section titled The Key to the Core in our Top 10 Trends for Banking in 2024. We have also just published a new report, The Age of AI – Banking’s New Reality, which explores the potential role of generative AI throughout the bank.

    If you would like to find out how our code conversion trials are progressing, please contact me directly on LinkedIn.



    Alvaro Ruiz


  • Tech CEOs Altman, Nadella, Pichai and others join government AI safety board led by DHS’ Mayorkas


    WASHINGTON — The CEOs of leading U.S. technology companies are joining a new artificial intelligence safety board to advise the federal government on how to protect the nation’s critical services from “AI-related disruptions.”

    Homeland Security Secretary Alejandro Mayorkas announced the new board on Friday. It includes key corporate leaders in AI development such as OpenAI CEO Sam Altman, Microsoft CEO Satya Nadella, Google CEO Sundar Pichai and Nvidia CEO Jensen Huang.

    AI holds potential for improving government services but “we recognize the tremendously debilitating impact its errant use can have,” Mayorkas told reporters Friday.

    Also on the 22-member board are the CEOs of Adobe, chipmaker Advanced Micro Devices, Delta Air Lines, IBM, Northrop Grumman, Occidental Petroleum and Amazon’s AWS cloud computing division. Not included were social media companies such as Meta Platforms and X.

    Corporate executives dominate, but the board also includes civil rights advocates; AI scientist Fei-Fei Li, who leads Stanford University’s AI institute; and Maryland Gov. Wes Moore and Seattle Mayor Bruce Harrell, two public officials who are “already ahead of the curve” in thinking about harnessing AI’s capabilities and mitigating risks, Mayorkas said.

    He said the board will help the Department of Homeland Security stay ahead of evolving threats.


  • Hugging Face releases a benchmark for testing generative AI on health tasks | TechCrunch


    Generative AI models are increasingly being brought to healthcare settings — in some cases prematurely, perhaps. Early adopters believe that they’ll unlock increased efficiency while revealing insights that’d otherwise be missed. Critics, meanwhile, point out that these models have flaws and biases that could contribute to worse health outcomes.

    But is there a quantitative way to know how helpful, or harmful, a model might be when tasked with things like summarizing patient records or answering health-related questions?

    Hugging Face, the AI startup, proposes a solution in a newly released benchmark test called Open Medical-LLM. Created in partnership with researchers at the nonprofit Open Life Science AI and the University of Edinburgh’s Natural Language Processing Group, Open Medical-LLM aims to standardize evaluating the performance of generative AI models on a range of medical-related tasks.

    Open Medical-LLM isn’t a from-scratch benchmark, per se, but rather a stitching-together of existing test sets — MedQA, PubMedQA, MedMCQA and so on — designed to probe models for general medical knowledge and related fields, such as anatomy, pharmacology, genetics and clinical practice. The benchmark contains multiple choice and open-ended questions that require medical reasoning and understanding, drawing from material including U.S. and Indian medical licensing exams and college biology test question banks.
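    The stitching-together the article describes boils down to scoring a model on each constituent test set and aggregating. A minimal sketch, with an invented mini "model" and made-up questions (the real benchmark's data and harness are far larger):

```python
def grade(model, subsets):
    """Score a model on each named test set (accuracy on multiple-choice
    items), then report an unweighted average across the subsets."""
    scores = {}
    for name, items in subsets.items():
        correct = sum(1 for question, answer in items if model(question) == answer)
        scores[name] = correct / len(items)
    scores["average"] = sum(scores.values()) / len(subsets)
    return scores

# Tiny stand-ins for test sets like MedQA and PubMedQA.
subsets = {
    "MedQA":    [("q1", "A"), ("q2", "B")],
    "PubMedQA": [("q3", "A")],
}
always_a = lambda question: "A"   # stand-in for a real generative model
scores = grade(always_a, subsets)
```

Even this toy version shows why per-subset scores matter: a model can look decent on average while failing badly on one constituent set, which is the kind of gap critics quoted below worry gets lost in a single leaderboard number.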

    “[Open Medical-LLM] enables researchers and practitioners to identify the strengths and weaknesses of different approaches, drive further advancements in the field and ultimately contribute to better patient care and outcome,” Hugging Face wrote in a blog post.


    Image Credits: Hugging Face

    Hugging Face is positioning the benchmark as a “robust assessment” of healthcare-bound generative AI models. But some medical experts on social media cautioned against putting too much stock into Open Medical-LLM, lest it lead to ill-informed deployments.

    On X, Liam McCoy, a resident physician in neurology at the University of Alberta, pointed out that the gap between the “contrived environment” of medical question-answering and actual clinical practice can be quite large.

    Hugging Face research scientist Clémentine Fourrier, who co-authored the blog post, agreed.

    “These leaderboards should only be used as a first approximation of which [generative AI model] to explore for a given use case, but then a deeper phase of testing is always needed to examine the model’s limits and relevance in real conditions,” Fourrier replied on X. “Medical [models] should absolutely not be used on their own by patients, but instead should be trained to become support tools for MDs.”

    It brings to mind Google’s experience when it tried to bring an AI screening tool for diabetic retinopathy to healthcare systems in Thailand.

    Google created a deep learning system that scanned images of the eye, looking for evidence of retinopathy, a leading cause of vision loss. But despite high theoretical accuracy, the tool proved impractical in real-world testing, frustrating both patients and nurses with inconsistent results and a general lack of harmony with on-the-ground practices.

    It’s telling that of the 139 AI-related medical devices the U.S. Food and Drug Administration has approved to date, none use generative AI. It’s exceptionally difficult to test how a generative AI tool’s performance in the lab will translate to hospitals and outpatient clinics, and, perhaps more importantly, how the outcomes might trend over time.

    That’s not to suggest Open Medical-LLM isn’t useful or informative. The results leaderboard, if nothing else, serves as a reminder of just how poorly models answer basic health questions. But neither Open Medical-LLM nor any other benchmark, for that matter, is a substitute for carefully thought-out real-world testing.


    Kyle Wiggers


  • Vana plans to let users rent out their Reddit data to train AI | TechCrunch


    In the generative AI boom, data is the new oil. So why shouldn’t you be able to sell your own?

    From big tech firms to startups, AI makers are licensing e-books, images, videos, audio and more from data brokers, all in the pursuit of training up more capable (and more legally defensible) AI-powered products. Shutterstock has deals with Meta, Google, Amazon and Apple to supply millions of images for model training, while OpenAI has signed agreements with several news organizations to train its models on news archives.

    In many cases, the individual creators and owners of that data haven’t seen a dime of the cash changing hands. A startup called Vana wants to change that.

    Anna Kazlauskas and Art Abal, who met in a class at the MIT Media Lab focused on building tech for emerging markets, co-founded Vana in 2021. Prior to Vana, Kazlauskas studied computer science and economics at MIT, eventually leaving to launch a fintech automation startup, Iambiq, out of Y Combinator. Abal, a corporate lawyer by training and education, was an associate at The Cadmus Group, a Boston-based consulting firm, before heading up impact sourcing at data annotation company Appen.

    With Vana, Kazlauskas and Abal set out to build a platform that lets users “pool” their data — including chats, speech recordings and photos — into data sets that can then be used for generative AI model training. They also want to create more personalized experiences — for instance, daily motivational voicemail based on your wellness goals, or an art-generating app that understands your style preferences  — by fine-tuning public models on that data.

    “Vana’s infrastructure in effect creates a user-owned data treasury,” Kazlauskas told TechCrunch. “It does this by allowing users to aggregate their personal data in a non-custodial way … Vana allows users to own AI models and use their data across AI applications.”

    Here’s how Vana pitches its platform and API to developers:

    The Vana API connects a user’s cross-platform personal data … to allow you to personalize your application. Your app gains instant access to a user’s personalized AI model or underlying data, simplifying onboarding and eliminating compute cost concerns … We think users should be able to bring their personal data from walled gardens, like Instagram, Facebook and Google, to your application, so you can create amazing personalized experience from the very first time a user interacts with your consumer AI application.

    Creating an account with Vana is fairly simple. After confirming your email, you can attach data to a digital avatar (like selfies, a description of yourself and voice recordings) and explore apps built using Vana’s platform and data sets. The app selection ranges from ChatGPT-style chatbots and interactive storybooks to a Hinge profile generator.

    Image Credits: Vana

    Now why, you might ask — in this age of increased data privacy awareness and ransomware attacks — would someone ever volunteer their personal info to an anonymous startup, much less a venture-backed one? (Vana has raised $20 million to date from Paradigm, Polychain Capital and other backers.) Can any profit-driven company really be trusted not to abuse or mishandle any monetizable data it gets its hands on?


    Image Credits: Vana

    In response to that question, Kazlauskas stressed that the whole point of Vana is for users to “reclaim control over their data,” noting that Vana users have the option to self-host their data rather than store it on Vana’s servers and control how their data’s shared with apps and developers. She also argued that, because Vana makes money by charging users a monthly subscription (starting at $3.99) and levying a “data transaction” fee on devs (e.g. for transferring data sets for AI model training), the company is disincentivized to exploit users and the troves of personal data they bring with them.

    “We want to create models owned and governed by users who all contribute their data,” Kazlauskas said, “and allow users to bring their data and models with them to any application.”

    Now, while Vana isn’t selling users’ data to companies for generative AI model training (or so it claims), it wants to allow users to do this themselves if they choose — starting with their Reddit posts.

    This month, Vana launched what it’s calling the Reddit Data DAO (decentralized autonomous organization), a program that pools multiple users’ Reddit data (including their karma and post history) and lets them decide together how that combined data is used. After joining with a Reddit account, submitting a request to Reddit for their data and uploading that data to the DAO, users gain the right to vote alongside other members of the DAO on decisions like licensing the combined data to generative AI companies for a shared profit.

    It’s an answer of sorts to Reddit’s recent moves to commercialize data on its platform.

    Reddit previously didn’t gate access to posts and communities for generative AI training purposes. But it reversed course late last year, ahead of its IPO. Since the policy change, Reddit has raked in over $203 million in licensing fees from companies including Google.

    “The broad idea [with the DAO is] to free user data from the major platforms that seek to hoard and monetize it,” Kazlauskas said. “This is a first and is part of our push to help people pool their data into user-owned data sets for training AI models.”

    Unsurprisingly, Reddit — which isn’t working with Vana in any official capacity — isn’t pleased about the DAO.

    Reddit banned Vana’s subreddit dedicated to discussion about the DAO. And a Reddit spokesperson accused Vana of “exploiting” its data export system, which is designed to comply with data privacy regulations like the GDPR and California Consumer Privacy Act.

    “Our data arrangements allow us to put guardrails on such entities, even on public information,” the spokesperson told TechCrunch. “Reddit does not share non-public, personal data with commercial enterprises, and when Redditors request an export of their data from us, they receive non-public personal data back from us in accordance with applicable laws. Direct partnerships between Reddit and vetted organizations, with clear terms and accountability, matters, and these partnerships and agreements prevent misuse and abuse of people’s data.”

    But does Reddit have any real reason to be concerned?

    Kazlauskas envisions the DAO growing to the point where it impacts the amount Reddit can charge customers for its data. That’s a long ways off, assuming it ever happens; the DAO has just over 141,000 members, a tiny fraction of Reddit’s 73-million-strong user base. And some of those members could be bots or duplicate accounts.

    Then there’s the matter of how to fairly distribute payments that the DAO might receive from data buyers.

    Currently, the DAO awards “tokens” — cryptocurrency — to users corresponding to their Reddit karma. But karma might not be the best measure of quality contributions to the data set — particularly in smaller Reddit communities with fewer opportunities to earn it.
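    The allocation described above amounts to a pro-rata split by karma. A hypothetical sketch (the figures are invented; Vana's actual token mechanics aren't spelled out in the article) makes the fairness concern concrete:

```python
def distribute(payment, karma_by_user):
    """Split a licensing payment among members in proportion to karma."""
    total = sum(karma_by_user.values())
    return {user: payment * karma / total
            for user, karma in karma_by_user.items()}

# Invented example: a $1,000 payout across three members.
shares = distribute(1000.0, {"alice": 600, "bob": 300, "carol": 100})
```

Under this scheme a member like "carol", who may post high-quality data in a small community where karma is scarce, receives a tenth of what "alice" does regardless of her data's actual training value, which is exactly the proxy problem the paragraph above raises.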

    Kazlauskas floats the idea that members of the DAO could choose to share their cross-platform and demographic data, making the DAO potentially more valuable and incentivizing sign-ups. But that would also require users to place even more trust in Vana to treat their sensitive data responsibly.

    Personally, I don’t see Vana’s DAO reaching critical mass. The roadblocks standing in the way are far too many. I do think, however, that it won’t be the last grassroots attempt to assert control over the data increasingly being used to train generative AI models.

    Startups like Spawning are working on ways to allow creators to impose rules guiding how their data is used for training while vendors like Getty Images, Shutterstock and Adobe continue to experiment with compensation schemes. But no one’s cracked the code yet. Can it even be cracked? Given the cutthroat nature of the generative AI industry, it’s certainly a tall order. But perhaps someone will find a way — or policymakers will force one.


    Kyle Wiggers


  • It’s not just Jamie Dimon and Wall Street. Local bank branches have big AI ambitions


    The pandemic accelerated changes at big banks, where Chase and Wells Fargo already have branches that look more like lounges than banks. But it’s not just Wall Street-sized banks where AI is disrupting the way things work.

    Small, independent branches are following suit, and experts and executives say they’ll use their small size and agility to their advantage. The local bank branch, with its traditional teller windows and long lines, will transform into an AI-infused, customer-centric financial services center, aiming to beat the big banks on the service that AI will allow them to provide customers.

    “As a small bank, your only value proposition is service. Nothing is proprietary anymore,” said Christopher Naghibi, executive vice president and CEO of Irvine, California-based First Foundation Bank, which has 43 branches in five states and just over $10 billion in assets. Naghibi helped shepherd First Foundation from a single branch in 2007 to its size today.

    Naghibi envisions community bank branches with fewer employees and more AI. The employees would be freed to help customers reach their financial goals and not be stuck answering basic questions about recent transactions and account information.

    “The teller line, as we see it today, will eventually die,” he said.

    Naghibi isn’t alone among bank CEOs contemplating the AI future for financial workers and customer interactions.

    Jamie Dimon, the veteran chairman and CEO of JPMorgan Chase, has written about artificial intelligence in his annual shareholder letters dating back to 2017. But his latest letter, released on Monday, was notable not only for his AI predictions — he wrote it could be as transformational as the printing press, the steam engine, electricity, computing and the internet — but also how he thinks the technology could impact the jobs of the bank’s more than 310,000 employees.

    “Over time, we anticipate that our use of AI has the potential to augment virtually every job, as well as impact our workforce composition,” Dimon wrote. “It may reduce certain job categories or roles, but it may create others as well.”

    Many of JPMorgan’s AI ambitions are taking place behind the scenes rather than at the teller window — it now has more than 2,000 AI and machine learning employees and data scientists working on 400 applications including fraud detection, marketing and risk controls, Dimon said. The bank is also exploring the use of generative AI in software engineering, customer service and ways to boost employee productivity.

    For smaller banks, the customer interaction may be the critical application, with AI freeing a bank’s resources from answering routine questions.

    “This will be at the forefront of how we engage in service,” Naghibi said. “You can ask AI, ‘Hey, did this happen? Did this check clear? How many payments have I made to this person?’ You’ll get answers directly from AI.”

    Customers will be able to come in 24/7 using special access technology and pay bills by touchscreen, send a wire at midnight, and see transactions updated in real time. “Effectively, a small bank’s branch will be a wall of screens,” he said.

    Security will improve at transformed branches as paper money becomes less plentiful and more locked into machines. The AI will bring a lot more security to branches also, with plenty of cameras, biometrics used for access, and PIN codes a thing of the past. It will also help in more extreme scenarios. “If someone has a weapon, AI can automatically see that it is a weapon, sense it, and prevent a problem,” Naghibi said.

    Jackie Verkuyl, chief administrative officer of the eight-branch BAC Community Bank in Stockton, California, a commercial and consumer bank with over $800 million deposits, says implementation of generative AI is already well underway and transforming the small bank. “The AI is getting smarter every day,” she said.

    But while the corner bank will become an AI-infused financial services center, Verkuyl says generative AI will bring the same services to phones, far beyond the capability of current apps. BAC uses an app called Smart Alac (an acronym for All Access Connection), developed by San Francisco-based Agent IQ, which answers customer questions and matches them with a BAC banker who becomes their assigned point of contact. “This allows community and regional banks to provide self-service AI and have a relationship-based banking experience; every customer has a primary point of contact,” said Slaven Bilac, CEO of Agent IQ, a AI-powered customer support platform.

    AI distills all the questions that customers are asking Smart Alac and provides a report to Verkuyl, allowing her to tailor the experience more. “We get lots and lots of questions about debit cards, so we created a whole menu that customers can help themselves to,” she said.  

    “Chase and Wells Fargo’s advantage over BAC is the amount of data they have. We can provide AI benefits without large amounts of know-how from BAC’s team,” Bilac said.

    Not everyone in the industry is convinced.

    The way a bank controls and shares large amounts of data with AI will be critical to effective transformation, according to Ken Tumin, a senior analyst at LendingTree. Banks have to give AI access to enough data to be effective, from account disclosures to frequently asked questions. “Unless a bank is committed to generating and maintaining high quality and comprehensive data, the use of AI in customer service will likely result in more customers being aggravated than pleased,” he said.

    The Independent Community Bankers of America, a trade group for small banks, doesn’t think AI can outshine the human element in a relationship. While AI will be a significant factor, “it will never match the local knowledge and personal relationships that are crucial to helping a first-time homebuyer get a mortgage or helping a small business or farm finance its operations,” said ICBA assistant vice president and regulatory counsel Mickey Marshall.

    But bankers like Naghibi believe AI will allow small banks to become more involved in their communities, and in effect, more human.

    “Right now, getting branch managers to go out into the community and get business is tough. We are not a large, important bank; people are not going to come to us. You have to go out and build relationships,” Naghibi said. “If generative AI is in place, you as a branch manager should be going to get business.”

    Multiple human and tech-centered connections serve as “touchpoints” to the consumer, Naghibi said, and “the more touchpoints the bank has in their financial lives, the more we can be involved in their lives. As a community bank, that is where the edge is.”

    “Community banking needs to change; every single one of my clients has my mobile number,” he added. “People don’t want untouchable and unreachable. Making local bankers more accessible is the promise of AI.” 


  • TCL’s first original movie is an absurd-looking, AI-generated love story


    Many major tech companies, particularly those that operate in the TV hardware business, have dipped their toes into original content. Although TCL has had its own free, ad-supported TV (FAST) channels for a while, it is late to that party. Not for much longer though, as the company is set to release its first special, a short romance movie, on TCLtv+ this summer. There’s just one slight hitch: TCL is using generative AI to make original content for its platform, and early signs do not bode well.

    The company has released the first trailer for Next Stop Paris, which it’s calling “the first AI-powered love story.” TCL used human writers, as well as actors for motion capture and voice performances. While it has artists in the US, Canada, UK and Poland working on the project, it relied heavily on generative AI.

    “I am excited by this opportunity to differentiate us with original programming. AIGC [artificial intelligence generated content] for us is the beginning,” said Chris Regina, TCL’s chief content officer. “It’s a new approach and it makes sense coming from a tech and hardware company that that’s where we’re going to start.”

    The plot of Next Stop Paris, such as it is, sees a young woman going on her honeymoon to Paris alone after her fiancé ran off with someone in their wedding party. She meets a stranger on the train and the pair explore the French capital together.

    TCL is hoping that original content can help draw viewers to TCLtv+ and help build a brand identity for the company. While it’s not entirely fair to judge a film based on a trailer, the Next Stop Paris clip gives a terrible first impression for both the project and TCLtv+.

    The look of the characters changes throughout, from a moderately realistic style to the hyperrealism we often see from the likes of Midjourney, and they project all of the emotion of a pair of dead fish. Lip syncing is almost non-existent and the characters walk in a very unnatural way.

    The trailer feels like the worst kind of fever dream. Saying this looks like garbage would be an insult to garbage. If “content is king,” as Regina put it, Next Stop Paris looks like a pauper.

    The Hallmark Channel gets a lot of flak for its romance movies and romcoms, but at least there’s an earnestness and high level of care behind the network’s output, which does a lot to fill a gap in the theatrical slate. TCL is trying to muscle into that space too.

    “There’s an audience there that’s watching our service and we see a hole in the marketplace with theatrical rom-coms not as prevalent,” Regina said. “They’re a guilty pleasure. You get under a blanket and watch in front of your TV set. So that’s the driver.” On top of that, TCL plans to make its original content shoppable and have AI-generated “characters in our shows that can be brand ambassadors and influencers for advertisers.”

    Thankfully, TCL isn’t only working on AI-generated guff. “We are looking at doing traditional content. So movies, scripted shows, unscripted content, specials,” Regina, who wrote Next Stop Paris with TCL chief creative officer Daniel Smith, said. “The next thing we have brewing isn’t AI at all.” That’s good, because whatever’s next can’t look much worse than Next Stop Paris.


    Kris Holt


  • OpenAI makes ChatGPT ‘more direct, less verbose’ | TechCrunch


    ChatGPT, OpenAI’s viral AI-powered chatbot, just got a big upgrade.

    OpenAI announced today that premium ChatGPT users — customers paying for ChatGPT Plus, Team or Enterprise — can now leverage an updated and enhanced version of GPT-4 Turbo, one of the models that powers the conversational ChatGPT experience.

    This new model (“gpt-4-turbo-2024-04-09”) brings with it improvements in writing, math, logical reasoning and coding, OpenAI claims, as well as a more up-to-date knowledge base. It was trained on publicly available data up to December 2023, in contrast to the previous edition of GPT-4 Turbo available in ChatGPT, which had an April 2023 cut-off.

    “When writing with ChatGPT [with the new GPT-4 Turbo], responses will be more direct, less verbose and use more conversational language,” OpenAI writes in a post on X.

    The ChatGPT update — which follows the GA launch on Tuesday of new models in OpenAI’s API, notably GPT-4 Turbo with Vision, which adds image understanding capabilities to the normally text-only GPT-4 Turbo — arrives after an unflattering week for OpenAI.

    Reporting from The Intercept revealed that Microsoft pitched OpenAI’s DALL-E text-to-image model as a battlefield tool for the U.S. military. And, according to a piece in The Information, OpenAI recently fired two researchers — including an ally of chief scientist Ilya Sutskever, who was among those who pushed for the ouster of CEO Sam Altman late last year — for allegedly leaking information.

    Kyle Wiggers

  • A super-intelligent AI doomsday is not where futurists see the world going

    The widespread arrival of generative artificial intelligence has prompted alarm from many quarters. A recent U.S. government-funded study warned of “uncontrollable” consequences from AI. There are catastrophic concerns over AI-powered cyberattacks and the potential loss of jobs as AI replaces tasks.

    But doom is only one interpretation of AI.

    According to experts paid to predict the future, the arrival of AI is more likely than not to offer a roadmap out of humanity’s worst impulses and create a better, more equitable world. That’s the rosy scenario outlined in a recent survey by Tata Consultancy Services, which measured the AI views of 21 futurists worldwide.

    “We are now at a point in time where science and technology can enable the advancement of humanity in a way we have not seen in a long time,” said Frank Diana, managing partner and principal futurist at Tata. “We are in a place we haven’t been since the second industrial revolution,” he said, predicting that AI’s widespread arrival will herald innovation in transportation, energy, medicine, and communication.  

    This view is a world away from that of some prominent tech leaders who have darkly warned that AI will overtake human intelligence within a few years. In Silicon Valley itself, there is a deep split between techno-optimists and doomsayers.

    Diana says the doomsday scenarios distract and undermine the technology’s potential.

    “I think, honestly, the conversation around conscious robots and artificial superintelligence gets in the way,” Diana said. “If AI is managed correctly, we will instead talk about all the great things AI can do for humanity.”   

    He said today’s often negative view of AI in the popular imagination has roots in the 1970s when Hollywood shifted towards more ominous themes that matched the country’s mood. But before that, he said, technology was viewed as something that could one day deliver utopia.

    Author and futurist Bernard Marr, who was not involved in the Tata survey, echoed the more optimistic thinking.

    “I see all the amazing benefits AI can bring and I see it every day. I believe AI is the most powerful technology humans have ever had access to,” Marr said, a power he believes can be used to bulldoze inequities and challenges in health, education, and climate change.

    “We are a very long way from AI becoming sentient, if ever. But AI is very, very good at doing things that in the past only humans could do,” Marr said. “The mundane is a waste of our power as humans. AI will allow us to focus on the amazing power that makes us human,” he added.

    Rather than lying awake at night worrying about robots taking over the planet, Marr sees AI’s role evolving into that of a constant co-pilot.

    “AI will make doctor and patient relationships much better,” Marr said, describing how the insurance and regulatory paperwork that bogs doctors down now will be taken over by AI, freeing up the practitioner to spend more time with patients. “I don’t see AI as anything scary; all the systems being developed are not working against humans but are making us better.”

    Given AI’s power, Marr said, regulations, laws, and safeguards are necessary to prevent abuse.

    “But already you are starting to see that happen,” Marr said, referring to recent legislation by the EU.

    So why the widespread fear? When people talk about sentient AI, they usually turn immediately to the ominous. A sentient AI, however, could also be benevolent or values-neutral, but that is not the AI people usually imagine.

    The reason people fear AI lies in our very humanity, said Kelsey Latimer, a Florida-based clinical psychologist who specializes in anxiety disorder. She said that humans are hard-wired to brace themselves for the worst.

    “From an evolutionary point of view, we are primed to see the negative and scary things so that we could see the predators coming toward us and respond,” Latimer said. If we expect the worst and things turn out well, no harm is done. If we expect the best and things turn out badly, we are left unprepared for the consequences.

    Futurists like Diana and Marr predict the consequences of AI will be positive ones.

    “With the use of AI, the passion and the creativity that we as humans can do will start to shine through,” Diana said.
