‘Facts are stubborn things, but statistics are more pliable’

Hannah and Will have reputations that go before them. Hannah is a senior manager in a global insurance firm. Each year, she dismisses the reminders that pop up on her computer screen telling her that time is running out. Time is running out for her to complete her data protection computer-based training; time is running out before she must achieve an 80% pass mark on her anti-money-laundering assessment test after completing her e-learning; time is running out before she must demonstrate her understanding of the legal frameworks, company policies and procedures relating to bribery and corruption. Time is running out, yet she must also meet the objectives of her marketing role. So she dismisses these reminders right up until the last moment. And at the last moment, what does she do? She does what she knows so many others do. She skips the training and goes straight to the assessment tests. If she passes, she reasons, she will save herself the time and effort of flicking through a seemingly endless number of e-learning screens. If she fails, nothing much is lost except a little more time; she can complete the training and re-take the tests.

And because Hannah is an intelligent person, and because the tests are multiple-choice, she doesn't find it difficult to pass with flying colours. She demonstrates to her employer that she knows the Data Protection Act was passed in 1998 (not 1992 or 1996), and that there are eight data protection principles (not three or seven). She is aware that the third stage in the money-laundering process is integration (not combination or assimilation) and that the maximum sentence for 'tipping off' a suspected money-launderer is 5 years (not 6 or 10). And Hannah correctly guesses that under the provisions of the Bribery Act, which she correctly deduces was passed in 2010 (though not implemented immediately), her firm's only defence in law is to demonstrate it has adequate procedures in place to prevent bribery and corruption.

Will is equally clever at the way he plays the regulatory training game. A middle manager in a retail bank, he too finds time running out. With an important product development role, he has tight deadlines to meet, and he must also complete his regulatory training programme within strict timescales. He follows the e-learning training for Fraud Prevention and quickly moves on to the assessment test (virtually the same as the previous year's, though since the questions are selected at random from a question bank, some are new to him). He correctly guesses that the Fraud Act was passed in 2006 (not 2001 or 2009), that one of its offences is abuse of position (not abuse of power or abuse of systems), and that according to the National Fraud Authority in its Annual Fraud Report of June 2013, the loss to the UK economy from fraud was estimated at £52 billion (not £30 billion, nor £45 billion).

To all intents and purposes, everyone is satisfied.  Hannah and Will have completed their regulatory training for another year and have demonstrated their competence.  It will be several months before they are bothered again by pop-up screens reminding them to complete their regulatory training.  And Hannah’s global insurance firm and Will’s retail bank are happy with the statistics: they can report that 96% of their employees completed their regulatory e-learning programmes and 94% of those employees passed their assessments.

Because they have the statistics, both Hannah’s and Will’s firms are able to demonstrate they have training programmes in place and that the majority of their employees pass their assessments.  Their firms may even have policies in place to deal with those who fail.

But in spite of these statistics, the fact that both Hannah and Will have passed these memory tests provides no assurance, either to their firms or to the regulators, that they know what to do in their job roles: how to minimise risk to customers or to the firm, what to do if they suspect fraudulent activity, or how to protect their customers' personal information.

With the time and effort that goes into developing regulatory training and assessment tests, how have things reached the point where employees bypass training yet pass assessments that test their memory or, more likely, their ability to make informed guesses?

Of the many answers to this question, one stands out. Over the last ten years, increasing numbers of firms have shifted from face-to-face training to e-learning. This has been a sensible shift, not least because e-learning is one of the most cost-efficient ways of delivering regulatory training: it permits thousands of employees to complete their training without leaving their desks and, with minimal annual updating, the training can be repeated year after year. But when we create assessment tests on e-learning platforms, our tendency is to use multiple-choice questions, which means we must give away the answers to those being tested. This in itself does not make multiple-choice assessments bad. What makes them bad is the constraints we place on their construction.

For example, if we determine that every multiple-choice question in a test must have one absolutely right answer while the others are absolutely wrong, we create a constraint that is unlike real life, since in real life there are rarely occasions when we must make a decision that is either absolutely right or absolutely wrong. Often, in regulated environments as much as in non-regulated ones, we must make decisions using our judgement: what we believe to be the best thing to do given the circumstances in which we find ourselves. Sometimes there will be no doubt and our judgement will be right; sometimes it will be wrong. But often it will be neither: we will make good decisions, and perhaps some less good decisions, without these being absolutely wrong.

Multiple-choice questions that demand answers be absolutely right or absolutely wrong prevent us from asking questions that require an employee to think, to draw on their knowledge and experience, or to use their judgement, much as they would have to do in their day-to-day job roles. And as soon as we exclude judgement from multiple-choice tests, we force designers to restrict the range of questions they might ask. It is no surprise, then, that designers turn to the rules, regulations, legal jargon, processes and policies that permit questions with only absolutely right or absolutely wrong answers, which is why we end up with questions that test whether employees remember the dates of the Data Protection, Fraud and Bribery Acts, or the maximum penalty for 'tipping off': all interesting, but none of it protects our customers or our firms.


If those responsible for training tomorrow's generation of doctors were similarly forced to restrict themselves to constructing questions with absolutely right or absolutely wrong answers, our lives would be seriously put at risk. Consider, for example, a trainee doctor faced, in a multiple-choice question, with a patient who presents with a headache. The patient might have a tumour; in all probability s/he doesn't, since the proportion of headaches attributable to tumours is relatively low. Either way, the trainee doctor could not say with any certainty that the patient does or does not have a tumour. The doctor would need to gather more data, hence a follow-up question that begins: 'Which of the following would you ask the patient in order to clarify your diagnosis?'

Neither could those training doctors ask questions in multiple-choice tests such as:

‘What is the most likely diagnosis for the following symptoms?’, or

‘Which of the following medications is the most likely cause of …?’

since the term ‘likely’ implies neither an absolutely right nor an absolutely wrong answer, merely the most probable one. Important tests are therefore omitted because the question cannot be designed to satisfy the spurious criterion that the answer must be wholly correct or wholly incorrect. And rarely in real life do we make unconnected decisions: often, one decision affects another, and multiple-choice assessments fail to replicate this.

Surely the world of financial services, like the world of medicine, contains some 'probables' as well as some 'absolutely rights' and 'absolutely wrongs'? If we accept this premise, then we need to find other ways of using e-learning platforms to assess what Hannah and Will have learned through their regulatory training. One powerful way of doing this is to make regulatory training less about reading screens of bullet points and more about designing experiential learning. Using this 'business simulation' approach, we put employees into real-life situations they might encounter day to day in their job roles: situations that demand decisions based on the knowledge and experience they have acquired and, crucially, on their judgement. If each decision Hannah and Will take in the simulation is scored, training and competence teams gain a record of where employees have taken strong decisions that protect customers and the firm, and of weaknesses across the firm's employees that need to be addressed. This approach will surely be more enjoyable for employees completing their regulatory training and, as important, will provide firms with more meaningful data about strengths and about the areas for development on which further training should focus.
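To make the idea concrete, here is a minimal sketch, in Python, of how such a simulation might score graded decisions rather than mark them absolutely right or wrong. The scenario text, the 0–2 weighting scale and the competency labels are all invented for illustration; they are not drawn from any particular firm's training programme.

```python
# Hypothetical sketch: scoring judgement-based decisions in a compliance
# simulation. Options carry graded scores rather than right/wrong marks,
# and each decision is tagged with a competency so weaknesses can be
# aggregated across a firm's employees.
from collections import defaultdict

# Each decision point offers options scored on a 0-2 scale:
# 2 = strong judgement, 1 = acceptable, 0 = weak (none "absolutely wrong").
SCENARIO = [
    {
        "competency": "fraud-awareness",
        "prompt": "A long-standing customer asks to move a large sum to a "
                  "new overseas account with an unusual sense of urgency.",
        "options": {
            "A": ("Process the transfer; the customer is well known.", 0),
            "B": ("Process it but note your concern in the file.", 1),
            "C": ("Pause the transfer and raise an internal referral.", 2),
        },
    },
    {
        "competency": "data-protection",
        "prompt": "A caller asks you to confirm another customer's address "
                  "so they can 'return some post'.",
        "options": {
            "A": ("Confirm the address as a goodwill gesture.", 0),
            "B": ("Decline, and offer to forward the post yourself.", 2),
            "C": ("Ask the caller to put the request in writing.", 1),
        },
    },
]

def score_simulation(choices):
    """Tally an employee's graded decisions by competency.

    `choices` maps decision index -> chosen option key, e.g. {0: "C", 1: "B"}.
    Returns the total plus a per-competency breakdown, the kind of record a
    training and competence team could use to target further training.
    """
    per_competency = defaultdict(lambda: [0, 0])  # [points scored, available]
    for i, decision in enumerate(SCENARIO):
        _, points = decision["options"][choices[i]]
        bucket = per_competency[decision["competency"]]
        bucket[0] += points
        bucket[1] += 2  # maximum available per decision
    total = sum(scored for scored, _ in per_competency.values())
    return total, dict(per_competency)

if __name__ == "__main__":
    total, breakdown = score_simulation({0: "C", 1: "C"})
    print(f"Total: {total}")
    for competency, (scored, available) in breakdown.items():
        print(f"  {competency}: {scored}/{available}")
```

Aggregated across hundreds of employees, a per-competency breakdown like this is precisely the data the article argues for: it shows where judgement is strong and where it is weak, rather than simply reporting who passed a memory test.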
