The Marketing Technologist.

We talk about analytics, code, data science and everything related to marketing technology. Backed by the tech geeks of Greenhouse Group.

Optimizing a conversational medium

If you’re on a site called The Marketing Technologist, chances are that you’ve ‘talked’ to a device or AI before. All the big tech players have some form of assistant: Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana and Google’s Home are all available already, this year Samsung is set to launch Bixby, and even Nokia is joining the fun with their own Viki. If you’re more of a typer than a talker, Facebook (and others) have paved the way for anyone to create a chatbot. Conversational interfaces are here to stay according to the internet, and I think so too. This poses an interesting question for conversion optimization: how can you optimize a self-learning bot?

Optimizing bots by server-side testing communications

While machine learning should pick up on conversational details, like which attitude works best for a specific user and what the best response to them is, A/B testing a response for just one user makes no sense: every conversation will probably be unique, so your test group has a size of one. Luckily, most, if not all, of the responses your fancy virtual assistant gives you are generated on a server somewhere and sent to the device, because the device itself doesn’t have enough computing power to recognize what you’ve said, work out what you meant, and formulate a specific response.

Because most of the answers are formulated on the server, that is the logical place to trigger a test: create a variation on the standard answer and serve the variant to 50% of the people who ask that question (or a similar question with the same applicable answer). I’m pretty sure this already happens in the internal workings of your assistant, since it’s learning how to communicate with you effectively.
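As a minimal sketch of that server-side split, the snippet below buckets users deterministically (by hashing their ID) so each person always hears the same variant across a conversation. The intent name, variant texts, and function names are all hypothetical; a real assistant backend would look very different.

```python
import hashlib

# Hypothetical response variants for one intent; the texts are illustrative.
RESPONSES = {
    "purchase_prompt": {
        "control": "Would you like to purchase this product?",
        "variant": "This TV is in stock. Shall I order it for you?",
    },
}

def assign_bucket(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user: same user + experiment -> same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF  # map hash prefix to [0, 1]
    return "control" if fraction < split else "variant"

def respond(user_id: str, intent: str) -> str:
    """Pick the response text for this user's bucket."""
    bucket = assign_bucket(user_id, intent)
    return RESPONSES[intent][bucket]
```

Hashing instead of random assignment matters here: a user who gets a different phrasing every time they ask the same question would experience the bot as inconsistent, and the test results would be muddied.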

Complementing a virtual personal assistant's learning curve

But what happens when you want to buy a product online? Say you’ve asked your Alexa for a TV and narrowed down your filtering options. Everything so far has happened on your initiative: you want the TV, and then Alexa asks you, “Would you like to purchase this product?”. I’d argue you would want to test multiple versions of this sentence, as it’s essentially a call to action (CTA). Would it make sense for Alexa to mention the price? Mention the product name? Change the whole structure of the sentence, or add an upselling element? Answering those questions requires A/B testing the server side of your assistant.
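Those CTA variations could be expressed as simple templates on the server, filled in with whatever the assistant already knows about the product. The variant texts and field names below are assumptions for illustration, not anything a real Alexa skill exposes.

```python
# Hypothetical CTA variants for the purchase prompt, from plain to upselling.
CTA_VARIANTS = [
    "Would you like to purchase this product?",
    "Would you like to buy the {product}?",
    "The {product} costs {price}. Shall I order it?",
    "Shall I order the {product}? I can add a matching soundbar too.",
]

def render_cta(variant_index: int, product: str, price: str) -> str:
    """Fill a CTA template; variants without placeholders ignore the extras."""
    return CTA_VARIANTS[variant_index].format(product=product, price=price)
```

Each template maps to one of the questions above: mention nothing, mention the product, mention the price, or upsell.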

There are a few tools out there that offer server-side testing for websites, and I’m pretty sure they will move into this space eventually, if they haven’t already. Conversational interfaces offer great new insights and opportunities, and by adding a server-side testing process we can complement a virtual assistant’s learning curve. It will learn not only how to understand you, but also how to add a dash of unobtrusive marketing to the conversation.
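The testing process also needs a feedback loop: count how often each variant was heard and how often it led to a purchase, then compare conversion rates. A minimal in-memory sketch (a real setup would log to an analytics backend; all names here are made up):

```python
from collections import defaultdict

# Hypothetical tally of impressions and conversions per variant.
results = defaultdict(lambda: {"shown": 0, "converted": 0})

def record(variant: str, converted: bool) -> None:
    """Log one CTA impression and whether the user went on to purchase."""
    results[variant]["shown"] += 1
    if converted:
        results[variant]["converted"] += 1

def conversion_rate(variant: str) -> float:
    """Conversions per impression; 0.0 if the variant was never shown."""
    shown = results[variant]["shown"]
    return results[variant]["converted"] / shown if shown else 0.0
```

With enough conversations logged, the winning phrasing can simply become the new default, which is exactly the "complementing the learning curve" idea above.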