Simulator Training vs. Proficiency-based Training
Abstract and Introduction
Abstract
Objective: We hypothesized that novices will perform better in the operating room after simulator training to automaticity compared with traditional proficiency-based training (the current standard training paradigm).
Background: Simulator-acquired skill translates to the operating room, but the skill transfer is incomplete. Secondary task metrics reflect the ability of trainees to multitask (automaticity) and may improve performance assessment on simulators and skill transfer by indicating when learning is complete.
Methods: Novices (N = 30) were enrolled in an IRB-approved, blinded, randomized, controlled trial. Participants were randomized into an intervention (n = 20) and a control (n = 10) group. The intervention group practiced on the FLS suturing task until they achieved expert levels of time and errors (proficiency), were tested on a live porcine fundoplication model, continued simulator training until they achieved expert levels on a visual-spatial secondary task (automaticity), and were retested on the operating room (OR) model. The control group participated only during testing sessions. Performance scores were compared within and between groups during testing sessions.
Results: Intervention group participants achieved proficiency after 54 ± 14 repetitions and automaticity after an additional 109 ± 57 repetitions. Participants achieved better scores in the OR after automaticity training [345 (range, 0–537)] than after proficiency-based training [220 (range, 0–452); P < 0.001].
Conclusions: Simulator training to automaticity takes more time but is superior to proficiency-based training, as it leads to improved skill acquisition and transfer. Secondary task metrics that reflect trainee automaticity should be implemented during simulator training to improve learning and skill transfer.
Introduction
In the last decade, the traditional surgeon-training paradigm has undergone a significant shift. The Halstedian apprenticeship model seems outdated for the needs of today's surgical trainees and the demands of the current health care environment. In an effort to improve training, surgical educators and national societies have embraced simulation, following the example of other industries such as aviation. New simulators and curricula have been developed that enable training outside the operating room and enhance resident performance before patient encounters.
These efforts have been fueled by the evidence provided in the literature regarding effective transfer of simulator-acquired skill to the clinical environment. Furthermore, to maximize simulation training effectiveness, a proficiency-based training paradigm in which learners are required to achieve expert-derived performance goals has been suggested. This type of training, which according to several experts is ideal for training on simulators, is tailored to individual needs and ensures acquisition of uniform skill by learners. Nevertheless, although proficiency-based curricula have proven to be effective in improving operative performance, we have previously demonstrated that simulator-trained learners uniformly outperform control subjects but do not reach expert performance in the operating room (OR). We postulate that the root of this incomplete skill transfer is that we do not reliably detect when learning is complete on the simulator because of incomplete metrics of performance, and this is unmasked in the demanding environment of the OR. Hence, although proficiency-based simulator training is effective, it may not foster optimal skill acquisition.
Most current simulation curricula use the traditional metrics of time and errors for performance feedback and assessment. Global rating scales are also used for performance assessment but rely on the subjective opinion of the assessor and are difficult to use during simulator training, as it is not feasible to have an expert rater present during the training of multiple learners. Other performance metrics such as motion recordings have also been demonstrated to be valid in distinguishing individuals with different skill levels, but their value during simulator training is poorly understood. In a previous study from our group that examined the value and relationship of time and motion efficiency performance metrics during proficiency-based simulator training, we demonstrated that time was the more robust metric, as the motion metrics (path length and smoothness) were achieved more easily than the time goal by the majority of trainees. Importantly, the aforementioned metrics do not provide a complete picture of the attentional demands required by the primary task, the effort the performer had to invest, and the quality of the learning that occurred. It is well known that although 2 performers may produce equal results on time and accuracy measurements, they may have substantial differences in workload, attention demands, and physiologic parameters that reflect differences in learning, true skill level, and experience.
One of the main characteristics that distinguish skilled performers (experts) from novices is their ability to engage in certain activities without requiring significant attentional resources. To describe this characteristic, psychologists first used the term automaticity. Many habitual or highly practiced motor acts can be performed automatically, leaving enough spare attentional capacity for engagement in multiple activities. Evidence of automaticity has been used in the motor skill literature to identify skilled performers and confirm learning by novices. The attainment of automaticity has been mapped to specific areas of the brain that differ from those used by novices unfamiliar with a particular task. In general, automaticity is achieved through repeated practice on tasks with consistently mapped characteristics. A common procedure to measure automaticity has been the use of a secondary task that assesses spare attentional capacity when the main task is being performed.
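The dual-task logic described above can be sketched in code. This is a minimal, hypothetical illustration, not the study's actual scoring method: it assumes spare attentional capacity is estimated by comparing secondary-task accuracy measured alone (baseline) with accuracy measured while the primary task is performed concurrently. All function and variable names here are illustrative.

```python
# Hypothetical sketch of the dual-task (secondary task) paradigm:
# spare attentional capacity is inferred from how much secondary-task
# performance drops when the primary task is performed at the same time.

def dual_task_decrement(baseline_correct, baseline_total,
                        concurrent_correct, concurrent_total):
    """Return (baseline accuracy, concurrent accuracy, relative drop).

    A small drop suggests the primary task has become automatic;
    a large drop suggests it still consumes attentional resources.
    """
    baseline_acc = baseline_correct / baseline_total
    concurrent_acc = concurrent_correct / concurrent_total
    decrement = (baseline_acc - concurrent_acc) / baseline_acc
    return baseline_acc, concurrent_acc, decrement

# Illustrative numbers: a trainee detects 45/50 visual targets at rest
# but only 30/50 while simultaneously suturing on the simulator.
base, conc, drop = dual_task_decrement(45, 50, 30, 50)
print(f"baseline {base:.0%}, concurrent {conc:.0%}, decrement {drop:.1%}")
```

Under this framing, "training to automaticity" means continuing practice until the decrement shrinks to expert-derived levels, rather than stopping when time and error goals are first met.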
We have previously used a visual-spatial secondary task for performance assessment on simulators and have demonstrated that the metrics obtained from this task were more sensitive to subtle performance differences between skilled individuals than the traditional metrics of time and accuracy. More recently, we also demonstrated that although novices achieve proficiency in laparoscopic suturing after relatively short training periods (based on time and errors), the attainment of automaticity (based on secondary task measures) requires significantly longer training intervals. These findings support the argument that the performance metrics currently used on simulators do not adequately reflect skilled performance, and they call for the incorporation of more sensitive methods such as the secondary task. They also call for a study that evaluates the effectiveness of such metrics in addressing the incomplete transfer of simulator-acquired skill.
We, therefore, hypothesized in this study that novices who learn laparoscopic suturing on simulators would perform better in the OR after training to expert levels of secondary task performance, time, and errors (automaticity) compared with training to expert levels of time and errors alone (proficiency).