To make Leolani work, and to make conversational analytics work in general, we need to start with a real-time data science solution. Conversational analytics is the data science we draw on to analyse customer conversations in voice and text, such as a chat, and potentially respond to them automatically using an algorithm. The solution is not a single algorithm but an entire group of them, combining image, sound, text and context into one meaningful dialogue. These techniques are built on a real-time platform containing pieces of data that are picked up and enriched by the relevant algorithm. To bring this all together, we need a ‘brain’ that collects, merges and leverages all the enriched pieces of data. In the Netherlands, for example, KPN (an Underlined partner) has developed exactly this kind of infrastructure, called ROCX’R. VU University has developed its own platform for the scientific applications of Leolani, and major players like Microsoft will certainly follow.

Designing this architecture, and the way it interacts with data science algorithms, requires a different line of thinking to a classic analytical approach. Classic analytics usually works on a composite data set, rather than stand-alone sets of image-only or text-only data. In these new data science architectures, a specific algorithm can pick up one of these data files, enrich it and put it back again. The ‘brain’ then brings all these components together and displays them in the right context.
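To make the pattern concrete, here is a minimal sketch of such an architecture: self-contained enrichment algorithms per modality, and a ‘brain’ that collects and merges their output into one context. All names here (DataPiece, Brain, enrich_text and so on) are hypothetical illustrations, not the actual ROCX’R or Leolani APIs.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DataPiece:
    """One piece of data on the real-time platform."""
    modality: str                       # "image", "sound", "text", ...
    payload: str
    enrichments: dict = field(default_factory=dict)

# Each modality has its own self-contained algorithm, so each one
# can be swapped out or optimised independently of the others.
def enrich_text(piece: DataPiece) -> DataPiece:
    # Placeholder for a real sentiment model.
    piece.enrichments["sentiment"] = (
        "positive" if "thanks" in piece.payload else "neutral"
    )
    return piece

def enrich_image(piece: DataPiece) -> DataPiece:
    # Placeholder for a real vision model.
    piece.enrichments["objects"] = ["face"]
    return piece

ENRICHERS: dict[str, Callable[[DataPiece], DataPiece]] = {
    "text": enrich_text,
    "image": enrich_image,
}

class Brain:
    """Collects enriched pieces and merges them into one dialogue context."""
    def __init__(self) -> None:
        self.pieces: list[DataPiece] = []

    def ingest(self, piece: DataPiece) -> None:
        # Pick up the piece, let the relevant algorithm enrich it,
        # then put it back into the shared context.
        enricher = ENRICHERS.get(piece.modality)
        if enricher:
            piece = enricher(piece)
        self.pieces.append(piece)

    def merged_context(self) -> dict:
        merged: dict = {}
        for piece in self.pieces:
            merged.update(piece.enrichments)
        return merged

brain = Brain()
brain.ingest(DataPiece("text", "thanks for your help"))
brain.ingest(DataPiece("image", "frame_001"))
print(brain.merged_context())
# → {'sentiment': 'positive', 'objects': ['face']}
```

The design choice worth noting is that the brain knows nothing about how each enricher works internally; it only routes pieces and merges results, which is what lets each algorithm be improved on its own.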
It is precisely this context that plays a major role in a digital conversation: what does the artificial intelligence (AI) algorithm actually know about your services, your customers and their needs? In these modern architectures, each data science algorithm for image, sound, text or customer experience is self-contained, and each can be optimised independently. This is why a bot like Leolani learns much faster than it would through a classical analytics approach.