Introduction
The medical education community has come to agree that direct measures
of physician ability need to replace time-in-training as the main
indicators of physician competence.1, 2 Under the
banner of competency-based education (CBE), the fundamental idea is that
trainees will develop knowledge, skills, and behaviours at different
rates, and that only through measurement of trainee abilities with
reference to standards of achievement (i.e., milestones) can it be
determined when a learner is competent enough to progress into
independent practice.3 In the surgical specialties,
where many specific competencies are concerned with precision technical
skills, this shift has been served by the development of a number of
effective assessment approaches, including the Objective Structured
Assessment of Technical Skills (OSATS),4 the Global
Operative Assessment of Laparoscopic Skills (GOALS),5 and the McGill
Inanimate System for Training and Evaluation in
Laparoscopic Surgery (MISTELS).6 While each of these
tools is psychometrically robust, all still rely on subjective
appraisals from qualified experts, which becomes increasingly
problematic as the time required to complete assessments encroaches
ever deeper into the schedules of clinician-educators.3
As a consequence, the medical education community has explored the use
of measurements from computerized systems, which have the ability to
provide objective information to evaluators about a procedure, as a
potential avenue to improving the process of assessing technical skills.
This approach to assessment encompasses a wide variety of technologies,
which are capable of measuring a number of surrogates of performance
quality. In general, these types of measurements provide rich digitized
metrics about the outcomes of a clinical performance. For instance,
technologies that measure forces have been used to reveal the tensile
strength of surgical knots,7, 8 the consistency and
accuracy of acupuncture needling,9 and one’s surgical
expertise in bone-drilling tasks.10, 11 However, in
addition to outcomes, computerized measures can also provide objective
assessments about the efficiency with which a skill is performed.
Efficiency is often an important perspective on skill performance, as
its optimization can have meaningful impacts on patient safety and
hospital operations, including reducing patient exposures to
radiation12, 13 and the potential for
infection,14 improving the patient-to-patient flow of
the operating theatre,15, 16 and protecting the
physician from fatigue.17, 18
Historically, the efficiency of clinical performance has usually been
inferred through measurements of the time it takes to complete a
procedure,19-21 but more recently, medical educators
and researchers have turned to kinematic measurements derived from
motion capture analyses to assess technical skill
efficiency.22-25 Motion capture is used widely in a
number of industries (e.g., filmmaking, video game development, the
military, and sports) and for an array of purposes (e.g., gait
analysis, facial recognition, and computer animation). The process
involves affixing markers to a performer’s body, hands, or tools during
a performance. These markers emit signals (e.g., electromagnetic,
optoelectric, inertial, or acoustic) that allow their positions to be
recorded several times a second, permitting the determination of many
measures, including the total distance traveled by the performer’s limbs,
the kinematic characteristics (i.e., displacement, velocity,
acceleration) of the clinical movements, and the trial-by-trial spatial
variability with which procedures are performed.
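As a minimal illustration of how such kinematic measures might be derived, the sketch below computes path length, speed, and acceleration from regularly sampled 3D marker positions. It assumes clean, noise-free data at a fixed sampling rate; the function name, array layout, and units are illustrative rather than drawn from any particular capture system.

```python
import numpy as np

def kinematics(positions, fs):
    """Derive basic kinematic measures from sampled 3D marker positions.

    positions: array of shape (n_samples, 3), in metres
    fs: sampling frequency in Hz
    """
    dt = 1.0 / fs
    displacement = np.diff(positions, axis=0)   # frame-to-frame displacement vectors
    step_lengths = np.linalg.norm(displacement, axis=1)
    path_length = step_lengths.sum()            # total distance traveled
    velocity = displacement / dt                # m/s for each frame interval
    speed = np.linalg.norm(velocity, axis=1)
    acceleration = np.diff(velocity, axis=0) / dt  # m/s^2
    return path_length, speed, acceleration

# A marker moving at a constant 0.1 m/s along x, sampled at 100 Hz for 1 s
fs = 100
t = np.arange(0, 1, 1 / fs)
pos = np.column_stack([0.1 * t, np.zeros_like(t), np.zeros_like(t)])
path, speed, accel = kinematics(pos, fs)
# path ≈ 0.099 m, speed ≈ 0.1 m/s throughout, acceleration ≈ 0
```

In practice, recorded positions are noisy and are typically low-pass filtered before differentiation, a point that matters for the system-level decisions discussed later in this commentary.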
One particular measure of procedural efficiency derived from motion
capture techniques that has become exceedingly popular in the surgical
education literature is a count of the “number of movements”
made by the practitioner during a technical skill
performance.22, 26, 27 The conceptual idea is that the
technique performed with fewer movements is smoother, better planned,
and more efficient, and thus more indicative of an expert clinician.
Indeed, construct and concurrent validation studies have revealed the “number of movements” metric to differentiate expert and novice
performances in a way that aligns with the ratings provided by
subjective assessment scales.26-28 To enact this
measure as part of a competency-based technical skill education program,
one may envision requiring learners to reduce their performances to
below certain “number of movements” milestones in a
simulation-based context before moving on to new entrusted activities in
the criterion clinical environment. Although this approach to assessment
has some appeal for its ability to ensure a certain degree of skill
efficiency, above and beyond skill accuracy, before a learner
progresses, it is not without potential pitfalls. Specifically, the
outcomes of motion capture and analysis are highly dependent on
system-level decisions made by the assessor prior to
data collection. Across cohorts of trainees, inconsistency in
these decisions may have a major impact on the ability to distinguish
learners’ capabilities with respect to standards of competence, or even
on our ability to set standards at all. Moreover, the relationship
between the number of movements and expertise is not necessarily
linear. As such, it is important that educators remember that strategic
approaches to learning and the contexts of performance can have
significant influence on the number of movements a trainee performs
while practicing.
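To make the dependence on system-level decisions concrete, consider a hypothetical movement-counting rule in which a “movement” is any contiguous run of samples whose speed exceeds an assessor-set threshold. The threshold values and speed profile below are invented for illustration and are not drawn from any published system; the point is only that the same performance yields different counts under different settings.

```python
import numpy as np

def count_movements(speed, threshold):
    """Count movements as contiguous runs where speed exceeds a threshold (m/s)."""
    moving = speed > threshold
    # A movement starts wherever 'moving' switches from False to True
    starts = np.flatnonzero(moving[1:] & ~moving[:-1])
    return len(starts) + int(moving[0])

# Synthetic speed profile: two clear movements separated by a slow drift
speed = np.array([0.0, 0.3, 0.3, 0.0, 0.02, 0.02, 0.0, 0.3, 0.3, 0.0])

count_movements(speed, threshold=0.05)  # drift ignored -> 2 movements
count_movements(speed, threshold=0.01)  # drift counted -> 3 movements
```

Under this toy rule, raising or lowering a single parameter changes the performance’s score, which is precisely why consistent, agreed-upon settings matter when counts are compared against milestones across trainees or programs.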
In this commentary, I consider the processes of counting and
interpreting “number of movements” data from the perspectives
of kinesiology and engineering science, which have contributed heavily
to the rigorous application of motion analysis methods. In doing so, my
goal is to advocate for consensus standards for motion capture in
clinical assessment, which would allow accurate and appropriate metrics
to be determined and ensure that measures of efficiency are applied
consistently in competency assessment across surgical education
programs.