Conversational bots “live” online and give customers a familiar experience, similar to engaging an employee or a live agent, and they can offer that experience in higher volumes. Conversational bots offer scaling, the capability to perform equally well under an expanding workload, in ways that humans can’t, helping businesses reach customers in ways they couldn’t before. For one, businesses can maintain a 24/7/365 online presence through conversational bots.
Sentiment analysis in machine learning uses language analytics to determine the attitude or emotional state of the person the bot is speaking to in any given situation. This has proven difficult for even the most advanced chatbots, which often cannot interpret certain questions and comments from context. Developers are creating these bots to automate a wider range of processes in an increasingly human-like way and to continue to develop and learn over time.
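As a rough illustration of the idea only, the sketch below scores a message against small hand-written word lists; real sentiment analysis relies on trained language models rather than fixed lists, and the word lists and function here are invented for the example.

```python
import re

# Toy sentiment lexicons (invented for illustration, not a real model).
POSITIVE = {"great", "thanks", "love", "perfect", "helpful"}
NEGATIVE = {"broken", "refund", "angry", "terrible", "useless"}

def sentiment(message: str) -> str:
    """Return a rough sentiment label for a user message."""
    words = re.findall(r"[a-z]+", message.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("This is terrible, I want a refund"))  # -> negative
```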
Efforts by servers hosting websites to counteract bots vary. Servers may choose to outline rules on the behaviour of internet bots by implementing a robots.txt file: this file is simply text stating the rules governing a bot's behaviour on that server. Any bot that does not follow these rules when interacting with (or 'spidering') any server should, in theory, be denied access to, or removed from, the affected website. If the only rule implementation by a server is a posted text file with no associated program/software/app, then adhering to those rules is entirely voluntary – in reality there is no way to enforce those rules, or even to ensure that a bot's creator or implementer acknowledges, or even reads, the robots.txt file contents. Some bots are "good" – e.g. search engine spiders – while others can be used to launch malicious attacks, most notably in political campaigns.[2]
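To show how a cooperative bot can honour these rules, the sketch below uses Python's standard urllib.robotparser module to check a page against a site's robots.txt before crawling it; the bot name and URLs are placeholders.

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt (URL is a placeholder).
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# A cooperative bot checks the rules before spidering a page;
# nothing in the protocol forces it to obey the answer.
if rp.can_fetch("ExampleBot", "https://example.com/private/report.html"):
    print("Allowed to crawl this page")
else:
    print("robots.txt asks bots not to crawl this page")
```

The check is entirely on the bot's side, which is exactly the enforcement gap described above.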

Back to our earlier example, if a bot doesn’t know the word trousers and a user corrects the input to pants, the bot will remember the connection between those two words in the future. The more words and connections that a bot is exposed to, the smarter it gets. This process is similar to that of human learning. Our capacity for memory and synthesis is part of what makes us unique, and we’re teaching our best tricks to bots.
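One highly simplified way to picture this kind of learning is a vocabulary map that grows whenever a user corrects the bot. The sketch below is a toy illustration under that assumption, not a description of how any particular bot stores its vocabulary.

```python
# Toy illustration: remember user corrections as synonym links.
synonyms: dict[str, str] = {}

def learn_correction(unknown_word: str, known_word: str) -> None:
    """Record that an unfamiliar word maps to one the bot already knows."""
    synonyms[unknown_word] = known_word

def normalize(word: str) -> str:
    """Replace a word with its learned equivalent, if any."""
    return synonyms.get(word, word)

learn_correction("trousers", "pants")
print(normalize("trousers"))  # -> pants
```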

Some bots communicate with other users of Internet-based services, via instant messaging (IM), Internet Relay Chat (IRC), or another web interface such as Facebook Bots and Twitterbots. These chatterbots may allow people to ask questions in plain English and then formulate a proper response. These bots can often handle many tasks, including reporting weather, zip-code information, sports scores, converting currency or other units, etc.[citation needed] Others are used for entertainment, such as SmarterChild on AOL Instant Messenger and MSN Messenger.


We also need to extract the specific pieces of information in the request (we will call them entities), for example the answers to questions like when?, where?, and how many?, which correspond to extracting datetime, location, and number information from the user request respectively. Here datetime, location, and number are the entities. In the weather example above, the entities can be ‘datetime’ (information provided by the user) and ‘location’ (note that location need not be an explicit input from the user; if nothing is specified, it will default to the user's location).
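As a minimal sketch of entity extraction under these assumptions, a rule-based approach might look like the following; the patterns, the default location, and the function name are invented for the example, and production systems typically use trained NLU models rather than regular expressions.

```python
import re

def extract_entities(request: str, default_location: str = "user's current city") -> dict:
    """Pull simple datetime, location, and number entities out of a request."""
    entities = {}

    # datetime: match a few common day words (toy patterns, not exhaustive).
    day = re.search(r"\b(today|tomorrow|tonight|monday|tuesday|sunday)\b", request, re.I)
    if day:
        entities["datetime"] = day.group(1)

    # location: look for "in <Place>"; fall back to the user's own location.
    place = re.search(r"\bin ([A-Z][a-z]+)", request)
    entities["location"] = place.group(1) if place else default_location

    # number: first standalone integer, e.g. the answer to "how many?".
    number = re.search(r"\b(\d+)\b", request)
    if number:
        entities["number"] = int(number.group(1))

    return entities

print(extract_entities("What will the weather be like in Paris tomorrow?"))
# -> {'datetime': 'tomorrow', 'location': 'Paris'}
```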
Lack of contextual awareness. Not everyone has all of the data that Google has, but chatbots today lack even the awareness that we expect them to have. We assume that chatbot technology will know our IP address, browsing history, and previous purchases, but that is just not the case today. I would argue that many chatbots even lack a basic connection to other data silos that would improve their ability to answer questions.
Three main reasons are often cited for this reluctance. The first is the human side: companies think users will be reluctant to engage with a bot. The other two have more to do with bots’ expected performance: skepticism that bots will be able to appropriately incorporate history and context to create personalized experiences, and a belief that they won’t be able to adequately understand human input.
There is a general worry that a bot can’t understand the intent of the customer. Bots are first trained with actual data. Most companies that already have a chatbot have logs of past conversations. Developers use those logs to analyze what customers are trying to ask and what they mean. With a combination of machine learning models and purpose-built tools, developers match the questions customers ask to the most suitable answers. For example, “Where is my payment receipt?” and “I have not received a payment receipt” mean the same thing. The developers’ strength lies in training the models so that the chatbot can connect both of those questions to the correct intent and, as an output, produce the correct answer. If no extensive data is available, data from different APIs can be used to train the chatbot.
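One common way to bootstrap this kind of matching, sketched here under the assumption that a small set of labeled example questions from the logs is available (the intent names and example phrases are invented), is to vectorize the questions with TF-IDF and return the intent of the nearest labeled example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A few labeled example questions, e.g. mined from past conversation logs.
examples = [
    ("Where is my payment receipt?", "get_receipt"),
    ("I have not received a payment receipt", "get_receipt"),
    ("How do I cancel my order?", "cancel_order"),
]

vectorizer = TfidfVectorizer()
example_vectors = vectorizer.fit_transform([text for text, _ in examples])

def match_intent(question: str) -> str:
    """Return the intent of the most similar labeled example."""
    question_vector = vectorizer.transform([question])
    similarities = cosine_similarity(question_vector, example_vectors)[0]
    best = similarities.argmax()
    return examples[best][1]

print(match_intent("Haven't gotten my receipt yet"))  # -> get_receipt
```

A production system would train a proper classifier on far more examples, but even this nearest-example approach maps both phrasings of the receipt question to the same intent.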
The classic historic early chatbots are ELIZA (1966) and PARRY (1972).[10][11][12][13] More recent notable programs include A.L.I.C.E., Jabberwacky and D.U.D.E (Agence Nationale de la Recherche and CNRS 2006). While ELIZA and PARRY were used exclusively to simulate typed conversation, many chatbots now include functional features such as games and web searching abilities. In 1984, a book called The Policeman's Beard is Half Constructed was published, allegedly written by the chatbot Racter (though the program as released would not have been capable of doing so).[14]