
Bots, Agents and Databases

Have you started working with (or supporting) automation systems that include bots? Specifically, chat bots? I have to admit I have mixed feelings about this whole thing. I’m that person who calls the cable company and starts with “I’m technical. I’ve done the test stuff. I know what the issue is, please send help.” All of this to avoid the usual “please reset your modem and count to 10 with me…” routine. Nothing drives me nuts faster, then, than when they STILL want to do those things “just in case.”

I kind of feel like that will be the approach of bots, at least until we get really good at them. I don’t want to have that basic conversation; I want to be able to reasonably describe my situation, what I’ve done and what I need, and move on. This means a lot of natural-language work, and a lot of database and information work on the back end to keep things fluid and intelligent as the bots work with real people. While the basics are really good already, it’s easy to see that we’ll have to solve some data, storage and query issues along the way as well. Think of it as real-time, hyper-specific reporting.

We’ve been testing them on our video platform too. We can get to some really great responses on most inquiries, which I suppose is the goal. But where it all breaks down is when things get incredibly specific. The database might not know the inferred question quite yet, or might not recognize the particular keywords someone uses to describe things.

“I can’t hear anything”

“I have no audio”

“There’s no sound”

Each of these (and more, clearly) has to be learned by the system. These are stupid-simple examples, but what we’ve seen is that teaching phrases is interesting, while teaching keywords is more successful… so far. It’s almost like pulling back a level in understanding the user’s question and matching that abstraction against the database of FAQs, rather than looking for specific things to look up. This has also led to some intriguing changes and tweaks to indexing and reporting on our side, trying to recognize an issue in real time so we can jump on anything that might be going on. After all, if you can’t recognize that the three questions above all point at the same thing, you can’t realize you have an issue. It’ll just seem like three different inquiries.
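
To make that concrete, here’s a rough sketch in Python of what “teaching keywords” and rolling the questions up into one issue might look like. The keyword table, intent names and extra entries here are all invented for illustration; the real services do this with trained language models rather than a dictionary lookup, but the shape of the problem is the same.

    # Rough sketch: map whatever the user typed to an abstracted intent by
    # keyword, then count intents so three different phrasings surface as
    # one issue. All names and keywords here are hypothetical.
    from collections import Counter

    # Hand-taught keywords -> abstract intent (the "pull back a level" idea)
    KEYWORD_INTENTS = {
        "hear": "no_audio",
        "audio": "no_audio",
        "sound": "no_audio",
        "buffering": "playback_stall",
        "frozen": "playback_stall",
    }

    def infer_intent(utterance: str) -> str:
        """Return the first intent whose keyword appears in the utterance."""
        text = utterance.lower()
        for keyword, intent in KEYWORD_INTENTS.items():
            if keyword in text:
                return intent
        return "unrecognized"

    recent_questions = [
        "I can't hear anything",
        "I have no audio",
        "There's no sound",
    ]

    # Three different inquiries roll up to one issue we can jump on.
    issue_counts = Counter(infer_intent(q) for q in recent_questions)
    print(issue_counts)  # Counter({'no_audio': 3})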

We’ve been playing with the Azure Bot Service, which is really quite good about learning and recognizing things. But every now and then we hit some odd non-recognition event, and we tend to get gun-shy about really rolling it out. Finding out what’s going on with the misses is just as important as getting it right; it’s the only way we’ll learn how to train the engine more accurately and avoid the frustration. I love it when it works, which is “usually”: it’s so quick and straightforward. But when it misses, it’s aggravating, and then I start cussing about “why’d they have to use a stupid automated process anyway?!?” (you can see I’m really mature about my analysis).
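
One thing that helps on the “finding out what’s going on with the misses” front is simply capturing every unrecognized or low-confidence answer somewhere a human can review it later. A minimal sketch in plain Python, not tied to the Azure or AWS SDKs, with a made-up threshold and file name:

    # Rough sketch: log the misses so they can be reviewed and fed back into
    # training. The confidence floor and log path are invented examples.
    import csv
    from datetime import datetime, timezone

    MISS_LOG = "bot_misses.csv"   # hypothetical review queue
    CONFIDENCE_FLOOR = 0.6        # below this, treat the match as a miss

    def record_if_miss(utterance: str, intent: str, confidence: float) -> None:
        """Append unrecognized or low-confidence utterances for later review."""
        if intent == "unrecognized" or confidence < CONFIDENCE_FLOOR:
            with open(MISS_LOG, "a", newline="") as f:
                csv.writer(f).writerow(
                    [datetime.now(timezone.utc).isoformat(),
                     utterance, intent, confidence]
                )

    # Example: a phrasing the keyword table hasn't learned yet.
    record_if_miss("the movie is totally silent", "unrecognized", 0.0)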

In the Azure sense, and even in AWS’ approach, the database is not really a concern of the process. But in the “let everyone get access to what they’re allowed so they can do their own thing” sense, it’s really a direction each of us is going to have to support. We have to figure out ways the information can be used. Figure out how to present it. Determine the best way to go get it. Sound vaguely familiar? And all of this just to support bots.

I do think this is something that is here to stay, so it’s a great thing to be considering now, to be prepared for when the time comes to help the bot deal with people like me who get impatient quickly.