Source: How we changed our recruitment process using a Chatbot

I have been working in the marketing and business development team of my company for a little over a year now. As a retail consultancy harnessing the power of big data, our growth in recent years has been phenomenal: we are closing on 30% client growth this financial year. This growth, however, has precipitated a perpetual problem: we are always stretched to capacity. Like many companies in our sector, finding staff who are both experienced and able to adapt to the latest technology is difficult and frustrating. And within the incestuous world of retail, searching for new faces is even more tiresome.

It means that, as a company, we invest heavily in graduates, hiring fresh-faced students who are eager to learn but lack decent industry experience. Although they are a great investment, the vast number of applicants for junior roles often results in graduate recruitment days that yield candidates who are either not a good fit, or who drop out of the process once they learn more about the job: a frustrating but unavoidable consequence of recruiting.

Or is it?

I knew we had a job to be done: a graduate recruitment process that attracts better, more suitable candidates who are genuinely interested in applying for the role. But we had no clear way of improving the process. Well, until we looked at chatbots.

The Perspective

I had been building chatbots in the music and events industry for around 6 months before trying to apply their potential to our recruitment process. And the one key learning that I took with me was the lens through which companies and businesses need to look at chatbots. This was a revelation that one of my co-founders at Dialect, Cameron, can be credited for finding. He began looking at chatbots as automated digital employees, rather than a new marketing or sales channel — essentially, instead of hiring a person to do a job, you hire a chatbot.

So, with this new insight, I began building our newest recruitment employee…

The Planning

The key focus when planning was the job to be done, receiving more suitable applicants, and then building outwards. Essentially, we wanted to filter the massive pool of potential graduates into a smaller pool of desirable applicants:

The next thing I did was look at our current recruitment process to find out which parts were not working, or where the cost of hiring an employee to ease the problem was too high. It looked a bit like this:

As you will see, there are flaws in this process:

  1. The job spec is competing in an arena against loads of other jobs when posted on a job site
  2. It is not shown to a targeted audience; it relies only on keyword searches
  3. The application process is one way, meaning we only have their CVs as reference until we interview
  4. The traditional job spec is rigid and does not sell the job well, meaning we lose potential applicants whose interest fades after the interview

With a traditional solution, such as an intern or recruitment employee, these problems can be overcome, but at a cost. I was sure that a chatbot created on Facebook Messenger, used in conjunction with tools within the Facebook platform, would also suffice, potentially producing a quicker, more responsive experience. This is how we would overcome the previous hurdles:

  1. The job spec would only be posted on our company website, with no other jobs to compete against
  2. Instead of relying on job site searches, we would create a Facebook advert and target it to the desired demographic — recent graduates
  3. A conversational interface would provide a two-way application process, providing the desired information that is not displayed in a CV
  4. A chatbot can explain the job role, the company and other key information, answering specific questions to filter through only the most suitable candidates

This is how the applicant journey would change:

Using this new approach, the obstacles that were prevalent previously can be overcome without hiring another physical employee. Applying a conversational interface to the entire application process enables fluidity and suits the wide range of variables which come into play. With this basis, I began building the chat flows into the bot.

The Building

The build process is based around 3 fundamental stages of the job we are trying to do, filtering for suitable candidates:

1. Discovering if the candidate fits the basic job spec

2. Educating the candidate about the role

3. Finding out if the candidate is still interested

These 3 stages would represent the building blocks for the chat flows, and would funnel candidates like this:
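The three stages above can be sketched as a simple sequential filter. This is a minimal illustration in Python with invented candidate fields; the real bot tracks these as Chatfuel attributes during conversation, not as dictionaries:

```python
# Hypothetical sketch of the three-stage funnel; stage names and candidate
# fields are illustrative, not the actual bot's data model.

def funnel(candidates):
    """Run candidates through the three filtering stages in order."""
    stage1 = [c for c in candidates if c["fits_spec"]]         # 1. basic job spec
    stage2 = [c for c in stage1 if c["understood_role"]]       # 2. educated about the role
    stage3 = [c for c in stage2 if c["still_interested"]]      # 3. confirmed interest
    return stage3

applicants = [
    {"name": "A", "fits_spec": True,  "understood_role": True,  "still_interested": True},
    {"name": "B", "fits_spec": True,  "understood_role": True,  "still_interested": False},
    {"name": "C", "fits_spec": False, "understood_role": False, "still_interested": False},
]
survivors = [c["name"] for c in funnel(applicants)]  # only "A" survives all three stages
```

The point of the sketch is that each stage only ever narrows the pool, which is exactly what makes the final applications higher quality.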

The creation and implementation of the bot was performed using Chatfuel — in my experience the most feature-heavy bot building platform currently available. It enabled me to begin with a completely blank canvas that could be customised and moulded during testing to create the most effective experience possible.

Stage 1:

The first stage of building was about questioning; finding out basic information about each candidate, enabling us to proceed and weed out those that didn’t fit. This meant asking:

  • Are you at university?
  • Have you graduated?
  • When do you graduate?
  • What university did you attend?
  • Do you have any relevant industry experience?

These brief but simple questions allowed us to initially filter applicants and begin building a profile about their current position and availability. It looked something like this:

As you can see, Jacob is a student studying at university; however, he doesn’t graduate until 2018, which doesn’t match our spec. We were also able to ask whether we could contact him again in a year once he has graduated, allowing us to build a pool of ‘potential’ candidates for the future. Matty was not at university, nor had he gained a degree or any industry experience, so he was not a fit. He was filtered out, meaning one less unsuitable application than the traditional process would have produced.
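The screening logic behind these two outcomes can be sketched roughly as follows. The field names and the target graduation year are my own assumptions for illustration; the real flow asks these questions one message at a time:

```python
# Illustrative sketch of the Stage 1 screening logic; field names and the
# hiring year are assumptions, not taken from the real bot.
TARGET_GRAD_YEAR = 2017  # assumed hiring year

def screen(candidate):
    """Classify a candidate as 'fit', 'future' (contact later), or 'reject'."""
    if candidate.get("graduated") or candidate.get("graduation_year") == TARGET_GRAD_YEAR:
        return "fit"
    if candidate.get("at_university"):
        # e.g. Jacob: still studying, graduates too late; keep for next year
        return "future"
    if candidate.get("industry_experience"):
        return "fit"
    # e.g. Matty: no degree, no relevant experience
    return "reject"

jacob = {"at_university": True, "graduation_year": 2018}
matty = {"at_university": False, "graduated": False, "industry_experience": False}
```

The ‘future’ branch is what builds the pool of potential candidates to re-contact after graduation.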

The advantage of the chatbot’s conversational interface is its unique ability to collect real-time data and provide customised, personalised responses based on user input, just like conversing with a real person. Essentially, the chatbot could use the candidates’ inputs to generate varying outputs. The aspiration of many in the AI and machine learning community, and what many of the big players in the industry are on the brink of achieving, is a bot that serves a unique user experience based on personal data. Chatfuel’s AI is far less sophisticated than that, but with enough time and testing, impressive results can be achieved.

The first personalisation was based around the question: ‘Which university did you attend?’ Because we were recruiting for the role of Graduate Client Manager (GCM), I was able to use our current GCM staff profiles to create varied and personal responses. This is how it worked:

The range of alumni in our Client Management team meant we were able to serve 12 different answers to candidates depending on which university they entered. You can see that Josh attended Goldsmiths, a university we have never recruited a GCM from before; whereas Doug attended Manchester, the alma mater of one of our GCMs, Hannah. I was able to serve a few key skills and qualities that are vital in this role, before asking whether the applicant was similar: another opportunity to funnel out unsuitable graduates.
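At its core, this personalisation is a lookup from university to a current team member, with a generic fallback. This is a minimal sketch; apart from Hannah and Manchester (mentioned above), the mapping entries would come from the real staff profiles:

```python
# Hypothetical university-to-alumni mapping used to personalise the bot's reply.
# Only the Manchester/Hannah pairing comes from the text; the rest is omitted.
ALUMNI = {
    "manchester": "Hannah",
    # ...the other universities our Client Management team attended
}

def university_reply(university):
    """Return a personalised reply if a GCM studied there, else a fallback."""
    name = ALUMNI.get(university.strip().lower())
    if name:
        return f"Great news! {name}, one of our GCMs, also studied at {university.title()}."
    return "We've never hired a GCM from there before. You could be the first!"
```

With 12 universities in the map, the same two templates cover every candidate while still feeling personal.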

Stage 2:

Stage 2 required us to provide a brief description of the role and the company. The outcome we wanted from this section was to discover whether the candidate was still interested in applying after being introduced to the job in more depth. This meant receiving an affirmation of intent, whilst providing a realistic and insightful description in a short space.

Unlike roles that require experience and skill, graduate positions rely much more on the personality and fundamental skills of a candidate. Companies understand and accept that young people straight out of university require investment in time and education; therefore the qualities and skills that graduates need to be successful are more closely tied to their personalities. This is why we wanted to know if our candidates had good attention to detail, could work under the pressure of tight deadlines, and enjoyed solving complex problems: skills that are innate, not taught.

Here is how Stage 2 looked for the applicants:

As you can see, this was an opportunity to upsell the role, conveying a tone of voice that cannot be communicated on traditional job sites. At this stage of the conversation, the candidate could really get a feel for the company’s mission and the environment they would be entering if successful. Finally, we once again confirm the candidate’s intent to apply, funnelling out those who have lost interest.

Stage 3:

By Stage 3, our cohort of potential applicants has been filtered to the point where any candidate who applies meets the criteria of our basic job spec, and without any more questioning, could apply for the role. However, in most cases, candidates have their own concerns and queries about jobs they apply for; concerns that can be the difference between applying and not applying. The next step then would require us to answer those potential questions through an autonomous piece of software, rather than through a human.

To help you understand the complexity and scope of this challenge, and all the potential questions and answers that can accumulate around a job application, I have produced a diagram showing the basic structure through which I attempted to overcome this problem:

To help quantify the mountain I had to climb to ensure each and every candidate was satisfied enough to apply for the role: I created 20 categories of Frequently Asked Questions, based on the most common questions we receive from applicants. Within each category there could be anywhere from 1 to 10 different questions, each with very different answers, and on top of that, a multitude of different ways of asking each question. For example, taking ‘Probation FAQs’, a relatively narrow category, the variations could be:

  • Will I have a probation period?
  • How long is my probation period?
  • Will I have to take a test at the end of my probation period?
  • What happens if I fail my probation?
  • What do I need to do to pass my probation successfully?

Now, think of how many ways you can ask these questions. Let’s take ‘how long is my probation period?’ and break it down into different possibilities:

  • How long is my probation period?
  • What length is my probation period?
  • What’s the probation period?
  • Is there a long probation period?
  • How many months is my probation period?
  • When will my probation period end?

These are just a few of the top-level variations, discounting the regional accents and dialects that shape linguistic styles across the UK. Even the variance between the words ‘my’ and ‘the’ instantly multiplies the possible combinations within one category: that’s the versatility of language!
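To make the collapsing of variations concrete, here is a minimal sketch of how several surface forms can map onto one intent using simple keyword matching. The keyword set is my own illustration; Chatfuel’s AI rules work on similar matched-phrase principles but are configured in its interface, not in code:

```python
import re

# Hypothetical keyword rule for the 'probation length' intent; the keyword
# set is an assumption for illustration, not Chatfuel's actual matching.
PROBATION_LENGTH_KEYWORDS = {"long", "length", "months", "end", "when"}

def matches_probation_length(question):
    """True if the question mentions probation and a duration-style keyword."""
    words = set(re.findall(r"[a-z']+", question.lower()))
    return "probation" in words and bool(words & PROBATION_LENGTH_KEYWORDS)
```

A handful of keywords catches most of the duration phrasings listed above, while questions from other sub-categories (passing, failing) fall through to their own rules.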

However, when it comes to teaching a machine how to make sense of all these variations, the task is monumental. It meant that no matter how much time and effort I put into teaching the bot all the different variations, I could never completely cover every angle. There needed to be a threshold for how detailed a question could be, a point where we say ’no more’ — just like when you ask a person a question and they say ‘I don’t know’.

In order to reach this threshold, instead of starting with the questions, I began with the answer. The answer, unlike the questions, has limits. It is the only reference point we can tangibly use to create a workable and realistic automated response to all of these questions, without spending hours and hours actually inputting the variations. So, instead of building outwards, I built inwards. This is how it looked:

When we overlay the probation example into this process, we really see how this helped:

Because we settled on a top-level answer that covers the 4 ‘sub-categories’ of question, I was able to say ‘this is what candidates need to know about probation; anything else can wait’, i.e. this amount of information is enough for them to apply for the role. Of course, the answer is variable, but with the help of our in-house recruiter, we were able to cover all the relevant points connected to the role in one answer.
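The ‘build inwards’ idea reduces to a many-to-one mapping: every sub-category of probation question resolves to the same top-level answer. The sub-category names and the answer text below are placeholders, not the real copy written with our recruiter:

```python
# Sketch of the answer-first ('build inwards') approach: one top-level answer
# covers every sub-category of probation question. All text is placeholder.
PROBATION_ANSWER = (
    "Placeholder: the single recruiter-approved summary covering length, "
    "review, passing and failing of the probation period."
)

SUB_CATEGORIES = ["length", "review", "passing", "failing"]  # assumed names

# Every sub-category resolves to the same answer: enough detail to apply,
# anything more can wait for the interview.
FAQ = {sub: PROBATION_ANSWER for sub in SUB_CATEGORIES}
```

Because the answer has limits while the questions do not, the answer is the only tractable place to start.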

This is quite an abstract process, and from here, I still had to spend time building out all the variations of question that covered the probation topic. As the standard AI in platforms like Chatfuel develops, the creative input will diminish. I assume that one day I will only need to input basic categories to achieve the same results, with the AI harnessing data from the internet to fill in all the gaps.

To help you see how this worked in reality, and the wide range of questions I was able to cover using this process, here are some screen examples:

In addition to supporting completely ad-hoc questioning, I saw an opportunity to alleviate many of the potential friction points the bot could encounter by building guides at the end of the chat flow. Utilising the ‘quick reply’ feature in Chatfuel, I was able to build FAQ buttons with the most popular categories as headings. The aim was to reduce organic questions and show the candidate that other information was available at the touch of a button. Here is what it looked like to use:
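Under the hood, a quick-reply menu like this corresponds to a simple payload in the Messenger Send API; Chatfuel generates it for you, so the sketch below, with assumed category names, is only to show the shape:

```python
# Sketch of the Messenger Send API message a quick-reply FAQ menu compiles to.
# Category names are assumptions; Chatfuel builds this payload behind the scenes.
FAQ_CATEGORIES = ["Probation", "Salary", "Training", "Location"]

def faq_quick_replies(text="Anything else you'd like to know?"):
    """Build a Messenger message with one quick-reply button per FAQ category."""
    return {
        "text": text,
        "quick_replies": [
            {"content_type": "text", "title": cat, "payload": f"FAQ_{cat.upper()}"}
            for cat in FAQ_CATEGORIES
        ],
    }
```

Each button’s `payload` routes the candidate straight into the matching FAQ block, bypassing free-text matching entirely.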

The Results:

I’ve spoken a lot about how and why I implemented a chatbot into our recruitment process, and I’ve shown how recruiting a chatbot should, in theory, solve our job to be done. However, the important aspect is the end result and how it actually changed our ability to recruit great grads.

Because we wanted to test the uplift of the chatbot on a quantitative level, we ran a control campaign alongside the bot campaign whereby candidates interacted with the ad and were prompted to send their CV and covering letter to our recruitment email address. The control candidates were then judged suitable for interview based on their CV alone — I wanted to compare the two funnels to see which one produced the most suitable candidates.

In terms of tangible results after the first two weeks, we had 14 applications from the control group, with 8 worthy candidates. That’s 57% of applicants being invited to attend our next graduate recruitment day. To put this in perspective, 57% is a good number for us. Our average graduate recruitment day is made up of 12–14 potential candidates from job sites, with typically only 1 or 2 reaching the next stage, so to receive 8 worthy candidates from the control group was awesome!

The bot ads yielded a total of 15 conversions — candidates that engaged with the ad and then began a conversation with the bot. Of those 15, 10 completed their conversations (they didn’t just disengage midway through). Of the 10 that completed chat flows, 6 were judged as worthy candidates and were given the chance to apply. We received 5 applications, all of which were from worthy candidates.

The significance here is quality. The job we wanted to do was improve the quality of the candidates whose CVs and applications we received, which, as you can see, was achieved. Although a third of candidates disengaged with the bot and did not complete a conversation, we only received applications from those we deemed suitable.
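For readers who like the arithmetic spelled out, the rates behind the two funnels work out as follows (all figures taken directly from the text above):

```python
# Recomputing the two-week test figures quoted above as rates.
control_apps, control_worthy = 14, 8
bot_started, bot_completed, bot_worthy, bot_applied = 15, 10, 6, 5
bot_worthy_apps = 5  # every application that came through the bot was worthy

control_rate = control_worthy / control_apps     # ~0.57, the 57% quoted above
completion_rate = bot_completed / bot_started    # two thirds finished the chat
bot_app_quality = bot_worthy_apps / bot_applied  # 1.0: all bot applications worthy
```

The bot produced fewer applications than the control group, but a 100% worthy-application rate against the control’s 57%, which is exactly the quality-over-quantity trade the funnel was designed to make.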

The bot has now been live just over 1 month (we have paused the ad campaigns but left the bot running on our Facebook page) and we have seen 74 unique users engage in conversation with our new digital employee.

I’m sure you’re dying to know whether or not we have a new Graduate Client Manager. And yes, we do; three, in fact! Unfortunately, I cannot say that all three came through the bot: one came from the bot, another from the control group, and one from our traditional recruitment process. So any recruiters out there, don’t worry, your jobs are safe (for the moment!)

The Conclusion:

The biggest piece of advice and learning that you and your company can take from this project relates to the gap I managed to fill by employing a chatbot: bots can fill certain gaps just as well as a human, improving the process along the way. We could have allocated expensive human resources to the problem, which would likely have solved it; instead, we utilised the potential of a bot to solve the problem in a new way, and made genuine progress.

Of course, not every company is lucky enough to have a bot developer on staff, but as bot platforms reduce the level of expertise needed to build one, bots will become ever more accessible. You can see from my experience that the inclusion of a conversational interface was the real difference between quality and quantity. I believe that as businesses get the hang of harnessing big data to create more personalised, individual interactions with their customers, bots will play a massive role in enhancing this quality. In addition, what I’ve demonstrated here is the application of a bot on top of Facebook’s vast data reserve, which we also utilised as a recruitment tool. It is these two resources working in tandem, bots and data, that provided a real change in our process.

So, next time you’re thinking about a problem, ask yourself: could a bot do this? Chances are, it probably could!

If you want to check out the chatbot from this project, here is a link: m.me/more2careers