Canadian and US insurers have a lot on their plates this year. They're grappling not only with extreme weather and substantial underwriting losses from motor vehicle claims, but also with rising customer expectations and an onslaught of fintech disruptors. These disruptors are spurring lots of activity in insurance digital labs, insurance venture capital arms, and […]

It won’t be an easy march, though, once we get to the nitty-gritty details. For example, I heard through the grapevine that when Starbucks looked at the voice data they collected from customer orders, they found a few million unique ways to order. (For those in the field, I’m talking about unique user utterances.) This is to be expected given the wild combinations of latte vs. mocha, dairy vs. soy, grande vs. trenta, extra-hot vs. iced, room vs. no-room, for-here vs. to-go, snack variety, spoken accent diversity, etc. The AI practitioner will soon curse all these dimensions before taking a deep learning breath and getting to work. I feel, though, that given practically unlimited data, deep learning is now good enough to overcome this problem, and it is only a matter of a couple of years until we see these TODA solutions deployed. One technique to watch is the Generative Adversarial Network (GAN). Roughly speaking, a GAN plays an iterative game of counterfeiting real data, getting caught by a police neural network, improving its counterfeiting skill, and rinse-and-repeating until, given enough data and iterations, it can pass as your Starbucks order-taker.
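To make that counterfeiter-vs.-police game concrete, here is a minimal sketch of a GAN training loop on toy one-dimensional data, assuming PyTorch; the network sizes, learning rates, and toy distribution are illustrative choices, not anything from the Starbucks example.

```python
# Minimal GAN sketch: a generator ("counterfeiter") learns to imitate a toy
# data distribution while a discriminator ("police") learns to catch fakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Real data: samples from a 1-D Gaussian the generator must learn to imitate.
def real_batch(n):
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # counterfeiter
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # police
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    # 1) Train the discriminator to accept real samples and reject fakes.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator; rinse and repeat.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, generated samples should cluster around the real mean (~4.0).
print(G(torch.randn(1000, 8)).mean().item())
```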
Do the nature of our services and the size of our customer base warrant an investment in a more efficient, automated customer-service response? How can we offer a more streamlined experience without (necessarily) adding costly human resources? Amtrak’s website receives over 375,000 daily visitors, and the company wanted a solution that gave users instant access to online self-service.
Sentiment analysis in machine learning uses language analytics to determine the attitude or emotional state of the person the system is speaking to in a given situation. This has proven difficult for even the most advanced chatbots, which struggle to interpret certain questions and comments from context. Developers are building these bots to automate a wider range of processes in an increasingly human-like way and to continue to develop and learn over time.
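As a rough illustration of what machine-learned sentiment analysis involves, here is a minimal sketch assuming scikit-learn and a tiny hand-labeled toy dataset; production systems train on far larger corpora and richer models, and the example texts below are invented.

```python
# Minimal ML sentiment classifier: bag-of-words features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples (illustrative only).
train_texts = [
    "I love this service, the agent was wonderful",
    "great experience, very helpful and fast",
    "this is terrible, I waited an hour for nothing",
    "awful support, I am cancelling my account",
]
train_labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["the support team was wonderful and fast"]))  # likely 'positive'
print(model.predict(["I waited an hour, this is awful"]))          # likely 'negative'
```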
A chatbot (sometimes referred to as a chatterbot) is programming that simulates the conversation or "chatter" of a human being through text or voice interactions. Chatbot virtual assistants are increasingly being used to handle simple, look-up tasks in both business-to-consumer (B2C) and business-to-business (B2B) environments. The addition of chatbot assistants not only reduces overhead costs by making better use of support staff time, but also allows companies to provide a level of customer service during hours when live agents aren't available.
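The "simple, look-up tasks" mentioned above can be as basic as matching a customer message to a stock answer. Here is a minimal sketch of that idea in Python; the keywords, FAQ entries, and fallback message are purely illustrative and not drawn from any real deployment.

```python
# Minimal look-up chatbot: keyword matching against a small FAQ table.
FAQ = {
    "hours": "Our support line is open 9am-5pm, Monday through Friday.",
    "refund": "Refunds are processed within 5-7 business days.",
    "password": "You can reset your password from the account settings page.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    # Hand off to a human when no keyword matches.
    return "Sorry, I didn't catch that. A live agent will follow up during business hours."

print(reply("How do I get a refund?"))
print(reply("What are your hours?"))
```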
Chatbots are gaining popularity, and numerous chatbots are being developed and launched on different chat platforms. Multiple chatbot development platforms are available, such as Dialogflow, Chatfuel, ManyChat, IBM Watson, Amazon Lex, and the Microsoft Bot Framework, which let you create your own chatbots with relative ease. If you are new to the chatbot development field and want to jump…
In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published,[7] which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise: