Voice and language are humans' innate communication tools. Throughout the history of human-computer interaction, however, we have asked users to adapt to the machine: first keyboards, then mouse clicks, then touch screens operated with a stylus, and finally smartphones that let users tap with their fingers.

In 2011, Gartner predicted that 85% of consumer interactions would be handled without a human by 2020. Today we have already implemented online services that replaced 90% of human interactions in less than a year, improving customer satisfaction and reducing customer claims. Users start out shy but get used to non-human interactions relatively quickly.

"You should just be able to message a business in the same way that you message a friend," Mark Zuckerberg.

Chatbots and virtual agents can deliver personalised, contextual customer engagement while saving live agents time and effort and increasing their productivity.

And now we can capture a new piece of user information: emotion. Sentiment analysis can be run in real time on both text and tone of voice. Taking into account the satisfaction level identified by a sentiment analysis tool, chatbots and voice assistants can adapt their language, change the response, or even redirect the call to a real person.
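
To make the routing idea concrete, here is a minimal sketch of sentiment-based escalation. It assumes a sentiment score in [-1, 1]; the toy lexicon scorer below is only a stand-in for whatever sentiment analysis service the platform actually uses, and all names are hypothetical.

```python
# Minimal sketch: route a user message based on a sentiment score in [-1, 1].
# The lexicon scorer below is a toy stand-in for a real sentiment-analysis tool.

NEGATIVE = {"angry", "terrible", "useless", "cancel", "complaint"}
POSITIVE = {"thanks", "great", "perfect", "helpful"}

def score_sentiment(text: str) -> float:
    """Crude lexicon-based score: +1 fully positive, -1 fully negative."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    hits = [1 for w in words if w in POSITIVE] + [-1 for w in words if w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

def route_message(text: str, escalation_threshold: float = -0.3) -> str:
    """Answer with the bot, soften the tone, or hand off to a live agent."""
    score = score_sentiment(text)
    if score <= escalation_threshold:
        return "handoff_to_live_agent"           # user is frustrated: escalate
    if score < 0:
        return "bot_reply_with_empathetic_tone"  # adapt the language
    return "bot_reply_standard"

if __name__ == "__main__":
    print(route_message("This is useless, I want to cancel"))  # handoff_to_live_agent
    print(route_message("Thanks, that was helpful"))           # bot_reply_standard
```

In production the threshold and the tone adjustments would be tuned per channel, but the decision flow stays the same: score, adapt, or escalate.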

And, on top of that, device fragmentation is a problem again: Amazon Echo (Alexa), Google Home, Apple HomePod, Lenovo Smart Display, Samsung Bixby, the Harman Kardon Invoke (Microsoft LUIS), and more are all on the market, and each comes with its own cognitive engine, NLP stack, and AI platform.

So a voice architecture is a must-have: cognitive-engine agnostic, multichannel and omnichannel, suitable for both voice and text, integrated with corporate back ends, and offering sentiment analysis, analytics, and customization. Does this exist? Yes: eVA, the everis virtual agent platform, solves exactly this.
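
One way to read "cognitive-engine agnostic" is a thin adapter layer: every channel talks to the same interface, and each NLP provider is wrapped behind it. The sketch below illustrates that idea only; the class and method names are hypothetical and are not the actual eVA API.

```python
# Sketch of a cognitive-engine-agnostic layer: channels call one interface,
# and each NLP/AI provider is wrapped in its own adapter behind it.
# All names here are hypothetical, not the actual eVA API.
from abc import ABC, abstractmethod

class CognitiveEngine(ABC):
    """Common contract every provider adapter must implement."""
    @abstractmethod
    def detect_intent(self, utterance: str, session_id: str) -> dict: ...

class DialogflowAdapter(CognitiveEngine):
    def detect_intent(self, utterance: str, session_id: str) -> dict:
        # Here you would call the Google Dialogflow SDK.
        return {"engine": "dialogflow", "intent": "demo", "text": utterance}

class LuisAdapter(CognitiveEngine):
    def detect_intent(self, utterance: str, session_id: str) -> dict:
        # Here you would call the Microsoft LUIS REST API.
        return {"engine": "luis", "intent": "demo", "text": utterance}

class VirtualAgent:
    """Channel-independent front door: the same code serves voice and text."""
    def __init__(self, engine: CognitiveEngine):
        self.engine = engine

    def handle(self, utterance: str, session_id: str) -> dict:
        return self.engine.detect_intent(utterance, session_id)

# Swapping providers is a one-line change; the channels never see the difference.
agent = VirtualAgent(DialogflowAdapter())
print(agent.handle("What is my account balance?", session_id="abc-123"))
```

The design choice is the familiar adapter pattern: it keeps channel code, back-end integrations, and analytics independent of whichever cognitive engine sits behind a given device.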