Assessing non-technical competence (or more likely, not)
Welcome to the second blog from the new SPIRE / For Good Measure team! We hope to bring you our impressions and reactions to emerging issues in psychometrics and testing, specifically tuned to Canadian regulation!
One issue that has gained our attention over the last decade, at least, is the foregrounding of so-called non-technical knowledge and skills in the definition of professional competence. This can be seen in the increasing prevalence of role-based competency profiles in which the majority of roles can be considered non-technical (like this, this, and of course, this). There are good reasons for this emphasis, as it is clear that competent performance as a professional relies heavily on these skills. There is likely published data on this somewhere, and we have at least heard from many of our clients that complaints about registrants are disproportionately about these non-technical skills rather than the technical ones usually associated with the profession.
As justified, or even laudable, as this more expansive definition is, there are under-appreciated consequences to using these profiles as the basis for entry-to-practice examinations. I'm going to focus on just one here: examination multidimensionality. (I'll discuss standards for some of these roles at entry to practice in a later blog.) To jump to the punch line: because technical and non-technical skills are substantively distinct, a single score on an examination comprising both is not only inadequate to characterize candidate performance; a single passing score for such an exam also risks accepting candidates who do not meet minimum standards in each domain.
A moment’s reflection makes this risk clear. On an exam split equally between technical and non-technical competencies, with a passing score of 70% overall, it is theoretically possible to score 100% on the non-technical portion and 40% on the technical portion and still pass the exam (or vice versa). That is an extreme example, of course, but the logic holds for 95/45, 90/50, 85/55… are we uncomfortable yet? And this is happening, right now.
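If you like to see the arithmetic laid out, here is a minimal sketch. All of the numbers are hypothetical, assuming an exam weighted 50/50 between the two halves:

```python
# Illustrative arithmetic only: a hypothetical exam split 50/50 between
# technical and non-technical items, with a 70% overall passing score.
OVERALL_PASS = 70  # percent

def overall_score(technical: float, non_technical: float) -> float:
    """Equal-weight average of the two halves of the exam."""
    return (technical + non_technical) / 2

# Every one of these lopsided profiles averages to exactly 70% and passes.
for tech, non_tech in [(40, 100), (45, 95), (50, 90), (55, 85)]:
    total = overall_score(tech, non_tech)
    print(f"technical={tech}%  non-technical={non_tech}%  "
          f"overall={total}%  pass={total >= OVERALL_PASS}")
```

Each pair averages to exactly the passing score, so a candidate can clear the exam while falling well short on one half.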
You might be thinking that this same logic applies to any two topics, like Assessment and Treatment, which routinely get summarized together without concern. You'd be right, except for the multidimensionality I mentioned above. It can be demonstrated statistically (and perhaps psychologically) that skills like Assessment and Treatment are more likely to be acquired together than Assessment and Communication, or Treatment and Practice Management. It's like assessing Algebra and Shakespeare on the same high school quiz and summarizing that quiz with one score; what students know of each subject is impossible to determine. The key assumption behind using one score to summarize performance on an examination is that the content specifications of that examination measure one thing, that is, that the exam is unidimensional. (Parenthetically, I believe this is one reason why US-based exams are generally more technically defined than their Canadian equivalents.) If an exam is not unidimensional, one score is a poor summary.
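For readers who want to see what "demonstrated statistically" could look like, here is a toy simulation. Everything in it is made up for illustration: Assessment and Treatment scores are generated from one shared latent ability, while Communication reflects a separate, independent one, so the first pair correlates strongly and the cross-dimension pair does not:

```python
# Toy simulation (all numbers hypothetical) of multidimensionality:
# two topics driven by the same latent ability correlate strongly,
# while a topic on a separate dimension barely correlates with either.
import random

random.seed(7)
N = 5_000

def correlation(xs, ys):
    """Pearson correlation, computed from scratch for transparency."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

technical_ability = [random.gauss(0, 1) for _ in range(N)]
soft_ability = [random.gauss(0, 1) for _ in range(N)]  # independent factor

def noise():
    return random.gauss(0, 0.5)

assessment = [t + noise() for t in technical_ability]
treatment = [t + noise() for t in technical_ability]
communication = [s + noise() for s in soft_ability]

print(f"r(Assessment, Treatment)     = {correlation(assessment, treatment):.2f}")
print(f"r(Assessment, Communication) = {correlation(assessment, communication):.2f}")
```

Under these assumptions the within-dimension correlation comes out high while the cross-dimension correlation hovers near zero, which is exactly the pattern that makes a single total score a poor summary.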
Fortunately, there's a fairly straightforward solution that not only addresses this vulnerability but also embodies an important principle for regulators to communicate: minimum standards need to be met for competency clusters as well as for the test overall. Under this approach, candidates would have to meet not only the overall passing score but also separate passing scores for the technical and non-technical competencies. Note that the laws of probability alone dictate that the passing scores for subareas must be set lower than for the exam overall; otherwise fewer people would pass simply because there are multiple hurdles to clear, each based on a shorter, less reliable subtest.
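A quick Monte Carlo sketch makes that last point concrete. The numbers here are hypothetical: every simulated candidate has the same 75% true proficiency on every item of a 100-item exam split into two 50-item halves, and item responses are independent draws, so each half-score carries sampling error:

```python
# Monte Carlo sketch (hypothetical numbers) of why subarea passing scores
# must sit below the overall passing score: applying the same 70% cut to
# each noisy half-score fails more candidates than one overall hurdle does.
import random

random.seed(1)
N_CANDIDATES = 10_000
ITEMS_PER_HALF = 50
P_CORRECT = 0.75   # assumed true proficiency on every item
CUT = 0.70         # 70% passing score

def half_score() -> float:
    """Proportion correct on one 50-item half, with sampling error."""
    correct = sum(random.random() < P_CORRECT for _ in range(ITEMS_PER_HALF))
    return correct / ITEMS_PER_HALF

compensatory = conjunctive = 0
for _ in range(N_CANDIDATES):
    tech, non_tech = half_score(), half_score()
    overall = (tech + non_tech) / 2
    compensatory += overall >= CUT                   # one overall hurdle
    conjunctive += tech >= CUT and non_tech >= CUT   # 70% on each half

print(f"compensatory pass rate: {compensatory / N_CANDIDATES:.1%}")
print(f"conjunctive pass rate:  {conjunctive / N_CANDIDATES:.1%}")
```

Even though every simulated candidate has identical true ability, the conjunctive rule fails noticeably more of them, because each shorter half-test gives measurement error two chances to drag a candidate below the cut. That is why the subarea cuts have to be set lower than the overall cut.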
I understand the overall motivation to include more non-technical competencies in the definition of professional competence. And ultimately, it’s a regulator’s decision about how to define their profession so that they can best protect the public. I hope this blog underscores that unless we get the psychometrics right, including more non-technical competencies in licensing exams could undermine public protection, not reinforce it.
I’d love to hear your thoughts!