Kunze recognises that chatbots are the vogue subject right now, saying: “We are in a hype cycle, and rising tides from entrants like Microsoft and Facebook have raised all ships. Pandorabots typically adds up to 2,000 developers monthly. In the past few weeks, we've seen a 275 percent spike in sign-ups, and an influx of interest from big, big brands.”

These are just a few of the most inspirational chatbot startups from the last year, with numerous others around the globe currently receiving acclaim for how quickly and innovatively they are using AI to change the world. With development becoming more intuitive and accessible to people all over the world, we can expect to see more startups using new technology to solve old problems.
Back in April, National Geographic launched a Facebook Messenger bot to promote their new show about Albert Einstein's work and personal life. Developed by 360i, the charismatic Einstein bot reintroduced audiences to the scientific figure in a more intimate setting, inviting them to learn about the lesser-known aspects of his life through a friendly, natural conversation with the man himself.

For example, say you want to purchase a pair of shoes online from Nordstrom. You would have to browse their site and look around until you found the pair you wanted. Then you would add the pair to your cart and go through the motions of checking out. But if Nordstrom had a conversational bot, you would simply tell the bot what you’re looking for and get an instant answer. You would be able to search within an interface that actually learns what you like, even when you can’t coherently articulate it. And in the not-so-distant future, we’ll even have similar experiences when we visit retail stores.
The classification score produced identifies the class with the highest term matches (accounting for commonality of words), but this approach has limitations. A score is not the same as a probability: it tells us which intent is most like the sentence, not how likely it is to be a genuine match. That makes it difficult to apply a threshold for which classification scores to accept. The highest score from this type of algorithm only provides a relative ranking; the winning classification may still be inherently weak. The algorithm also doesn’t account for what a sentence is not, only what it is like. You might say this approach doesn’t consider what makes a sentence not a given class.
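To make that limitation concrete, here is a minimal sketch of this kind of term-match scoring, using hypothetical intents and training sentences. Each class simply sums the words it shares with the input, down-weighted by how common each word is across all classes:

```python
from collections import Counter

# Hypothetical training sentences grouped by intent class.
training = {
    "greeting": ["hi there", "hello how are you", "good morning"],
    "goodbye": ["see you later", "bye bye", "have a good day"],
    "hours": ["what time do you open", "when are you open today"],
}

# Count how often each word appears across all classes, so very common
# words contribute less to any single class's score.
word_totals = Counter(
    word
    for sentences in training.values()
    for sentence in sentences
    for word in sentence.split()
)

def score(sentence, class_sentences):
    # Sum 1/commonality for every input word that also appears in the class,
    # so words shared by many classes ("you", "good") count for less.
    class_words = {w for s in class_sentences for w in s.split()}
    return sum(1.0 / word_totals[w] for w in sentence.lower().split() if w in class_words)

sentence = "good morning are you open"
scores = {cls: score(sentence, sents) for cls, sents in training.items()}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
# e.g. [('greeting', 2.25), ('hours', 1.25), ('goodbye', 0.75)]
```

The winning class merely has the highest score relative to the others; nothing in that number says how likely the match is to be correct, which is exactly why choosing an acceptance threshold is so awkward.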
The challenge in programming a chatbot lies in assembling its recognitions sensibly. Precise recognitions for specific questions are supplemented by global recognitions that refer to only a single word and can serve as a fallback (the bot roughly recognises the topic, but not the exact question). Some chatbot platforms support this by allowing priority ranks to be assigned to individual answers. Chatbots are usually built in development environments that allow questions to be categorised, answers to be prioritised and recognitions to be managed[5][6]. Some also allow a conversational context to be designed, based on recognitions and possible follow-up recognitions (“Would you like to learn more about this?”). Once the knowledge base has been built, the bot is refined in as many training conversations as possible with users from the target group[7]. Faulty recognitions, gaps in recognition and missing answers can be identified this way[8]. The development environment usually provides analysis tools so that conversation logs can be evaluated efficiently[9]. A good chatbot achieves an average recognition rate of more than 70 percent of questions in this way, and is then accepted by most users as an entertaining conversational partner.
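As a rough illustration, here is a minimal sketch of that setup, using hypothetical rules and regular expressions rather than any particular chatbot platform's API: precise recognitions answer specific questions, a single-keyword recognition acts as a topic-level fallback, and a priority rank decides which answer wins.

```python
import re

# (priority, pattern, answer) -- the highest priority rank wins when several rules match.
rules = [
    (10, r"\bopening hours\b.*\bsaturday\b", "We are open from 9 to 14 on Saturdays."),
    (10, r"\bopening hours\b.*\bsunday\b",   "We are closed on Sundays."),
    (1,  r"\bopening hours\b",               "Which day are you asking about?"),   # global fallback: topic only
    (0,  r".",                               "Sorry, I did not understand that."), # catch-all
]

def answer(user_input: str) -> str:
    text = user_input.lower()
    matches = [(priority, reply) for priority, pattern, reply in rules
               if re.search(pattern, text)]
    return max(matches)[1]   # rule with the highest priority rank wins

print(answer("What are your opening hours on Saturday?"))  # precise recognition
print(answer("Tell me about your opening hours."))         # topic-level fallback
print(answer("Do you sell shoes?"))                        # recognition gap -> catch-all
```

Training conversations then reveal which inputs fall through to the catch-all, pointing to faulty recognitions, recognition gaps and missing answers.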

If AI struggles with fourth-grade science question answering, should AI be expected to hold adult-level, open-ended chit-chat about politics, entertainment, and weather? It is thus encouraging to see that Microsoft’s Satya Nadella did not give up on Tay after its debacle, and that Amazon’s Jeff Bezos is sponsoring an Alexa social chatbot competition. I love this quote from Jeff:
AI, blockchain, chatbot, digital identity, etc. — there’s enough emerging technology in financial services to fill a whole alphabet book. And it’s difficult not to get swept off your feet by visions of bionic men, self-executing smart contracts, and virtual assistants that anticipate our every need. Investing in emerging technology is one of the main […]

Chatbots are used in a variety of sectors and built for different purposes. There are retail bots designed to pick and order groceries, weather bots that give you the weather forecast for the day or week, and friendly bots that just talk to people in need of a friend. The fintech sector also uses chatbots to make consumers’ inquiries and applications for financial services easier. A small business lender in Montreal, Thinking Capital, uses a virtual assistant to provide customers with 24/7 assistance through Facebook Messenger. A small business hoping to get a loan from the company need only answer key qualification questions asked by the bot in order to be deemed eligible to receive up to $300,000 in financing.
Chatbots can reply instantly to any question. The waiting time is ‘virtually’ zero (see what I did there?). Even if a real person eventually shows up to fix the issue, the customer gets engaged in the conversation, which can help you build trust. The problem can be better diagnosed, and the chatbot can perform some routine checks with the user. This saves time for both the customer and the support agent. That’s a lot better than just waiting around for a representative to arrive.
Chatbots such as ELIZA and PARRY were early attempts at creating programs that could at least temporarily fool a real human being into thinking they were having a conversation with another person. PARRY's effectiveness was benchmarked in the early 1970s using a version of a Turing test; testers only made the correct identification of human vs. chatbot at a level consistent with making a random guess.
The plugin aspect to Chatfuel is one of the real bonuses. You can link up to all sorts of different services to add richer content to the conversations that you're having. This includes linking up to Twitter, Instagram and YouTube, as well as being able to request that the user share their location, serve video and audio content, and build out custom attributes that can be used to segment users based on their inputs. This last part is a killer feature.
Chatbots are often used online and in messaging apps, but are also now included in many operating systems as intelligent virtual assistants, such as Siri for Apple products and Cortana for Windows. Dedicated chatbot appliances are also becoming increasingly common, such as Amazon's Alexa-powered Echo devices. These chatbots can perform a wide variety of functions based on user commands.
The trained neural network is less code than a comparable algorithm, but it requires a potentially large matrix of “weights”. In a relatively small sample, where the training sentences have 150 unique words and 30 classes, this would be a 150x30 matrix. Imagine multiplying a matrix of this size 100,000 times to establish a sufficiently low error rate. This is where processing speed comes in.
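Here is a rough sketch of what that cost looks like, using synthetic data and a single weight layer for simplicity rather than any particular network architecture; the sizes match the example above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_classes, n_sentences = 150, 30, 300          # sizes from the example above

X = rng.random((n_sentences, n_words))                  # bag-of-words style inputs (synthetic)
y = rng.integers(0, n_classes, size=n_sentences)        # synthetic class labels
W = 0.01 * rng.standard_normal((n_words, n_classes))    # the 150x30 weight matrix

for step in range(100_000):                             # ~100,000 passes over the weights
    logits = X @ W                                      # (300, 150) @ (150, 30) multiply
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)           # softmax over the 30 classes
    probs[np.arange(n_sentences), y] -= 1               # cross-entropy error signal
    W -= 0.01 * (X.T @ probs) / n_sentences             # nudge every weight, every step
```

Every one of those iterations touches all 4,500 weights, which is why the matrix-multiplication speed of the hardware, not the amount of code, tends to dominate training time.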
Consider why someone would turn to a bot in the first place. According to an upcoming HubSpot research report, of the 71% of people willing to use messaging apps to get customer assistance, many do it because they want their problem solved, fast. And if you've ever used (or possibly profaned) Siri, you know there's a much lower tolerance for machines to make mistakes.
For as long as I can remember, email has been a fundamentally important channel for a large majority of businesses. The ability to market products directly through a channel that scales up to an incredibly high ceiling is very attractive. The only problem is that it's costing more and more money to acquire email addresses from potential customers, and the engagement from email is getting worse and worse.
ALICE – which stands for Artificial Linguistic Internet Computer Entity, an acronym that could have been lifted straight out of an episode of The X-Files – was developed and launched by creator Dr. Richard Wallace way back in the dark days of the early Internet in 1995. (As you can see in the image above, the website’s aesthetic remains virtually unchanged since that time, a powerful reminder of how far web design has come.) 
Being an early adopter of a new channel can provide enormous benefits, but that comes with equally high risks. This is amplified within marketplaces like Amazon. Early adopters within Amazon's marketplace were able to focus on building a solid base of reviews for their products - a primary ranking signal - which meant that they'd create huge barriers to entry for competitors (namely because they were always showing up in the search results before them).
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of cue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY'). Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
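A minimal sketch of that mechanism, with hypothetical cue words beyond the 'MOTHER' example above, shows how little machinery is actually involved:

```python
# Hypothetical cue words and canned responses in the spirit of ELIZA.
RULES = {
    "MOTHER": "TELL ME MORE ABOUT YOUR FAMILY",
    "ALWAYS": "CAN YOU THINK OF A SPECIFIC EXAMPLE",
    "SORRY":  "PLEASE DON'T APOLOGISE",
}
DEFAULT = "PLEASE GO ON"   # keeps the conversation moving when nothing matches

def eliza_reply(user_input: str) -> str:
    words = user_input.upper().split()
    for cue, response in RULES.items():
        if cue in words:           # purely superficial keyword matching
            return response
    return DEFAULT

print(eliza_reply("My mother made me come here"))  # -> TELL ME MORE ABOUT YOUR FAMILY
print(eliza_reply("I feel fine today"))            # -> PLEASE GO ON
```

Nothing here models meaning at all, yet the canned follow-ups are open-ended enough that a charitable human reader can interpret them as understanding.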
Tay, an AI chatbot that learns from previous interactions, caused major controversy after being targeted by internet trolls on Twitter. The bot was exploited and, after 16 hours, began to send extremely offensive tweets to users. This suggests that although the bot learnt effectively from experience, adequate protection was not put in place to prevent misuse.[56]