Choosing the Right Dialogue Solution

Your VIP is more than just a pretty face

Hello, I’m James, CXO at Rapport. This series of articles is designed to provide high-level guidance to the uninitiated on how to get the most out of the Rapport platform.

Every aspect of the Virtual Interactive Personality (VIP) experience you create using Rapport is critical. To ensure the best possible outcomes for your organization and its audiences, the various pieces need to come together in a way that’s greater than the sum of their parts. Luckily, Rapport makes this incredibly easy in a no-code, plug’n’play, mix-and-match way. 

The face and the voice are naturally the most tangible components in creating a relatable character. Still, the success of the engagement will of course depend heavily on how quickly and accurately your VIP can answer questions, or guide users through processes to a successful conclusion. 

As such, the brain you choose, and how well it can convey its knowledge, is paramount. 

Broadly speaking, there are two main approaches you can take to power the conversation: LLMs (AI-based) and Branching Dialogue Solutions (rules-based). 

In this article, we’ll explore the definitions and the pros and cons of each option. 

Branching Dialogue Solutions

These are essentially decision trees that rely on predefined question-and-answer pairs. The VIP drives the conversation with questions, and the human participant’s answers advance the conversation to the appropriate next stage. Try this example of a branching dialogue solution, where the conversation is guided and very much on rails. In this case, the VIP will only use the specific words and phrases we’ve authored, but users can reply in natural language that doesn’t need to precisely match the wording of the options presented on screen.
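To make the decision-tree idea concrete, here is a minimal, purely illustrative sketch in Python. The node names, fields, and keyword matching are assumptions for illustration only; they are not Rapport’s actual data format, and a real system would use a proper intent classifier rather than substring matching.

```python
# Illustrative branching dialogue tree (hypothetical structure, not a
# Rapport format): each node holds the VIP's authored line plus the
# intents that advance the conversation to a next node.
DIALOGUE_TREE = {
    "start": {
        "say": "Would you like help with billing or technical support?",
        "next": {"billing": "billing", "support": "support"},
    },
    "billing": {"say": "I can pull up your latest invoice.", "next": {}},
    "support": {"say": "Let's run through a few quick checks.", "next": {}},
}

def advance(node_id: str, user_reply: str) -> str:
    """Map a free-text reply onto one of the authored branches."""
    node = DIALOGUE_TREE[node_id]
    reply = user_reply.lower()
    for intent, next_id in node["next"].items():
        # The reply doesn't need to match the on-screen wording exactly;
        # a simple keyword check stands in for a real intent classifier.
        if intent in reply:
            return next_id
    return node_id  # no match: stay on the current node and re-prompt
```

For example, `advance("start", "I need help with billing")` returns the `"billing"` node, whose authored line the VIP then speaks verbatim, which is exactly the "on rails" property described above.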

LLMs

AI is the overarching field that encompasses the development of intelligent systems capable of performing tasks that typically require human intelligence. It includes subfields like machine learning (ML), computer vision, robotics, and natural language processing (NLP). 

LLMs (Large Language Models) are a category of AI models trained on massive amounts of data, enabling them to understand and generate natural language. Some well-known examples include OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude.

You can fine-tune LLMs to optimize their handling of specific tasks. Simple prompt engineering can go a long way in shaping a Rapport experience, whereby you write instructions to define the character, their communication style, and their field of expertise. However, other, more elaborate methods can be employed to achieve higher levels of accuracy. These can include uploading documents, scraping websites, referencing videos, and importing other materials.
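As a sketch of what simple prompt engineering looks like in practice, the snippet below assembles a system prompt from the three ingredients mentioned above. The function name, field names, and example character are hypothetical, for illustration only; they are not a Rapport API.

```python
# Hypothetical prompt-engineering sketch: compose a system prompt from
# the character, communication style, and field of expertise.
def build_system_prompt(character: str, style: str, expertise: str) -> str:
    return (
        f"You are {character}. "
        f"Communicate in a {style} tone. "
        f"Your field of expertise is {expertise}. "
        "If a question falls outside your expertise, say so briefly."
    )

prompt = build_system_prompt(
    character="Ava, a virtual concierge for a boutique hotel",
    style="warm, concise",
    expertise="hotel amenities, local restaurants, and bookings",
)
```

The resulting text would be passed to the LLM as its system instructions, shaping every response without any retraining of the underlying model.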

You can interact with an out-of-the-box, uncustomized instance of ChatGPT, here.

Hybrids

Of course, there are also sensible ways of combining LLMs and branching dialogue solutions. For example, a branching solution could keep the conversation on track to its intended goal, while an LLM generates the optimal wording for each question. More typically, it may be appropriate to switch between the “on-rails” nature of a branching dialogue solution for certain topics, and an LLM to pull in information on topics outside the primary domain, perhaps where factual accuracy and control over the specific response are less critical.
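The switching behavior described above can be pictured as a small router. The sketch below is an assumption-laden illustration: the topic list and keyword-based detection are placeholders for whatever classification a production system would actually use.

```python
# Illustrative hybrid router (assumption: topic detection by keyword;
# a real deployment would use an intent classifier instead).
ON_RAILS_TOPICS = {"pricing", "warranty", "returns"}

def route(user_message: str) -> str:
    """Pick which engine answers: the scripted branching flow for
    high-stakes, in-domain topics, or the LLM for everything else."""
    words = set(user_message.lower().split())
    if words & ON_RAILS_TOPICS:
        return "branching"  # factual accuracy and control matter here
    return "llm"            # open-ended chat outside the primary domain
```

With this shape, a question about pricing stays on the authored rails, while small talk or off-domain questions fall through to the generative model.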

So, which approach is right for you? 

Branching dialogue solutions follow predefined scripts and are suitable for use cases where accuracy is crucial. They are ideal for maintaining on-topic conversations, ensuring factual correctness, and handling structured data. They are commonly used in regulated industries and scenarios like product configurators (e.g. exploring which options to specify on a new car, or choosing a mobile phone tariff).

LLM-based conversations may offer a more engaging user experience due to their generative capabilities. However, they come with higher risks related to content accuracy.

1. Structured Data and Factual Accuracy

  • Branching dialogue solutions: If your use case involves structured data (e.g. product specifications in an e-commerce scenario, legal information) and factual accuracy is critical (e.g. compliance in regulated industries such as healthcare and finance), consider using branching dialogue solutions. They enforce strict rules and keep conversations on track.

  • LLMs: LLMs can handle structured data to some extent, but their strength lies in generating contextually relevant responses. If you need both accuracy and natural language fluency, consider a hybrid approach: use branching dialogue solutions for aspects of the conversation that depend on structured data, and LLMs for free-flowing dialogue. This could be a better option in simulation-based training and education settings, where discussion is key to understanding a topic from a broader range of perspectives.

2. Engagement and User Experience

  • Branching dialogue solutions: While branching dialogue solutions lack the creativity of LLMs, they excel in maintaining focus. If your goal is to guide users through specific processes (e.g. troubleshooting, order placement), branching dialogue solutions ensure clarity and adherence to predefined paths.

  • LLMs: For engaging, companion-style experiences, LLM-based solutions shine. They can hold dynamic conversations, understand context, and provide creative responses. Businesses looking to enhance customer interactions or create virtual companions may prefer this approach.

3. Integration and Deployment

  • Branching dialogue solutions: Implementing branching dialogue solutions requires upfront design and development. However, they offer stability and predictability. Consider them when you need a robust, well-defined conversational flow.

  • LLMs: LLM-based experiences can utilize existing systems, databases, and CRMs, and they adapt to user input dynamically. The trade-off is that the more parameters the experience depends on, the more design and testing effort it will take to keep responses reliable.

Conclusion

In summary, choose your conversational approach based on your priorities: balancing accuracy, engagement, and integration. Whether you opt for LLMs, branching dialogue solutions, or a combination of both, remember that the right choice depends on your specific use case and business goals. As technology evolves, we’ll likely see these approaches converge, creating even more powerful AI systems that can both understand and act with human-like intelligence. 🤖🔍

For example, in the real world, the mark of a good consultative conversation is that it daisy-chains questions and answers from both parties. Each party needs to show interest in the other to reach a level of understanding that makes the relationship worthwhile and establishes rapport. This is important even if the underlying reason for the interaction is transactional, such as a teacher-student relationship.

Ultimately, whichever route you take, Rapport gives you all the flexibility and control you need, thanks to an ever-increasing number of integrations with dialogue solutions. Sign up here if you’d like to try it for free! 
