

No longer flying under the radar, Cohere AI is ramping up its large language model (LLM) enterprise efforts.

Cohere today announced a new effort to bring its LLM technology to conversational AI leader LivePerson, in an approach that aims to reduce bias, improve explainability and limit the risk of AI hallucinations. The effort could have a profound impact on how LLMs are safely and responsibly deployed in enterprise environments.

Cohere is based in Toronto, Canada, and its founding team has roots in the Google Brain group that helped kick off the generative AI revolution with transformers. Aidan Gomez, the CEO of Cohere, is one of the co-authors of the original research paper on transformers, “Attention Is All You Need.”



LivePerson is no lightweight in the world of AI either; a pioneer in the conversational AI space, it helps enterprises build bots that can understand user inquiries. The basic idea behind the partnership is for LivePerson to adapt Cohere’s LLMs with extended training to provide generative AI that can support enterprise deployments, responding to inquiries and actually executing tasks as well.

“One of the areas that’s most exciting in the large language modeling space is dialogue. And it is the conversational models, I think, that’ve been the thing that really just blew this technology into the mainstream,” Gomez told VentureBeat.

What the combination of LivePerson and Cohere brings to LLMs

Gomez explained that, as part of the partnership, LivePerson will be using Cohere’s LLMs and adapting them to fit enterprise deployment requirements for its customers. He emphasized that the training and model tuning occur in a private environment where LivePerson’s data is kept secure and isn’t mixed with Cohere’s general models.

LivePerson is certainly no stranger to the world of LLMs; the company has been using the technology since 2018, according to chief scientist Joe Bradley.

“We’re no stranger to this technology, but the way we look at this is not to go build one large language model that does everything, but more to build a constellation of models that solve different pieces of the problem,” Bradley told VentureBeat.

Having multiple LLMs is important, as it can help reduce risk by avoiding dependence on a single model. With the addition of Cohere’s LLMs into LivePerson, the idea is to help agents decide what next steps to take in a conversation. Those next steps could include more text or suggestions; they could also include actual operations or actions that the LLM can take to support a user inquiry.
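Neither company has published the mechanics of this next-step selection, but a minimal sketch of the general pattern might look like the following. The `call_llm` stub, the action names and the JSON format are illustrative assumptions, not LivePerson’s or Cohere’s actual implementation.

```python
"""Illustrative sketch: an LLM-assisted "next step" decision for an agent.

The model is asked to propose either a reply suggestion or a concrete
action; the application then dispatches accordingly. The `call_llm` stub,
action names and output format are hypothetical stand-ins.
"""
import json


def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a hosted LLM call (e.g., an adapted
    # Cohere model). Returns a canned response so the sketch runs.
    return json.dumps({
        "type": "action",
        "name": "check_order_status",
        "arguments": {"order_id": "A1234"},
    })


# Actions the conversational platform is allowed to execute on behalf of
# the agent. These names are invented for the example.
ACTIONS = {
    "check_order_status": lambda args: f"Order {args['order_id']} shipped on Friday.",
    "issue_refund": lambda args: f"Refund issued for order {args['order_id']}.",
}


def next_step(conversation: str) -> str:
    """Ask the model for the next step, then either suggest text or run an action."""
    prompt = (
        "Given the conversation below, reply with JSON of the form "
        '{"type": "reply", "text": ...} or '
        '{"type": "action", "name": ..., "arguments": {...}}.\n\n'
        + conversation
    )
    step = json.loads(call_llm(prompt))
    if step["type"] == "reply":
        return f"Suggested reply: {step['text']}"
    handler = ACTIONS.get(step["name"])
    if handler is None:
        return "Model proposed an unknown action; escalate to a human agent."
    return handler(step["arguments"])


if __name__ == "__main__":
    print(next_step("Customer: Where is my order A1234?"))
```

The point of the pattern is that the model’s output is treated as a proposal: free-form text becomes a suggestion for the agent, while a recognized action name is dispatched to real business logic.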

Bradley said LivePerson is also looking at using Cohere to help safeguard conversations that a chatbot generates. The basic idea is to fine-tune the Cohere models to help make sure that the statements that are coming out of the LivePerson system are factual and accurate.

The plan is for LivePerson to first use Cohere internally to improve its own platform as an initial test case. The goal thereafter is to extend the Cohere integration into end-user enterprise deployments, providing a new set of capabilities to help organizations deploy the most accurate models possible.

No need to hallucinate, thanks to retrieval augmented generation 

The process of fine-tuning and customizing an LLM is something that Gomez referred to as an adaptation.

Gomez explained that Cohere handles adaptations with a combination of supervised learning and reinforcement learning in a continuous process. He noted that the system uses a reinforcement learning from human feedback (RLHF) loop to help train the model.

A key part of the training process is making sure that AI explainability is front and center. Gomez said Cohere has a number of techniques for helping provide AI explainability, including one called retrieval augmented generation.

“With retrieval augmented generation, you’re generating in the same way that a large language model would, but you’re asking the model to cite sources,” Gomez said.

As such, when the LLM generates a response, it cites that response back to the corpus of knowledge it has. The idea is to produce much more explainable outputs, where humans can go one step deeper and diagnose anything that was generated, because citations are provided.

Explainability is also critical to limiting the risk of AI hallucinations, which are particularly problematic for enterprise applications. An AI hallucination is when an LLM provides an inaccurate response.

“That [retrieval augmented generation] solves the hallucination problem, because now the model can’t just say something without grounding, without citing a source,” Gomez said. “Now the model has to make reference to something it needs to ostensibly justify its answer in a way that humans can verify.”
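As a rough illustration of the retrieval augmented generation pattern Gomez describes, the following sketch retrieves passages from a small corpus, asks the model to answer only from those passages and to cite them by number, and returns the citations alongside the answer so a human can verify the grounding. The toy corpus, the word-overlap scoring and the `call_llm` stub are assumptions made for the example; Cohere’s production retrieval is not public.

```python
"""Illustrative sketch of retrieval augmented generation with citations.

Passages are retrieved from a tiny in-memory corpus, numbered, and placed
in the prompt; the model is instructed to answer only from them and to
cite them as [1], [2], ... so a reviewer can trace each claim back to a
source. `call_llm` is a hypothetical stand-in for the model call.
"""

CORPUS = [
    "Returns are accepted within 30 days of delivery with a receipt.",
    "Standard shipping takes 3-5 business days within the US.",
    "Gift cards are non-refundable and never expire.",
]


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[tuple[int, str]]:
    """Rank passages by naive word overlap with the query and keep the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        enumerate(corpus),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]


def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for the generation call; a real system would
    # send the prompt to an LLM and return its text.
    return "Returns are accepted within 30 days of delivery [1]."


def answer_with_citations(query: str) -> dict:
    sources = retrieve(query, CORPUS)
    numbered = "\n".join(f"[{n}] {text}" for n, (_, text) in enumerate(sources, start=1))
    prompt = (
        "Answer the question using only the numbered sources below, and "
        "cite each claim with its source number in brackets.\n"
        f"{numbered}\n\nQuestion: {query}"
    )
    return {"answer": call_llm(prompt), "sources": numbered}


if __name__ == "__main__":
    result = answer_with_citations("How long do I have to return an item?")
    print(result["answer"])
    print("Cited sources:\n" + result["sources"])
```

Because every claim in the output points back to a numbered source, a reviewer can check each citation against the corpus rather than taking the model’s word for it, which is the verification property Gomez describes.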

More partnerships ahead for Cohere

As Cohere continues to grow and compete against its rivals, including industry giant OpenAI, a key focus will be on pushing out LLM technologies for enterprise use cases. Gomez said a great way for Cohere to push its LLM technology into enterprises is to work closely with organizations like LivePerson.

Overall, the push into the enterprise is in some ways easier than it has ever been, as awareness of what LLMs can do is higher than ever before, thanks in no small part to the staggering success of ChatGPT.

“Cohere is nearly four years old now and, in most conversations that I have had, even as recently as six months ago, I’d spend the first 30 minutes explaining the technology — what it is and why it’s important,” Gomez said. “Now that’s completely changed, and everyone has used the technology themselves and has first-party experience.”


Sean Michael Kerner

