Summary of Sitzmann and Ely’s Meta-Analytic Examination of the Effectiveness of Computer-Based Simulation Games
October 21, 2010
Those of us in the serious games industry inherently know that an interactive experience is more rewarding than, say, watching a video. However, one of the very first questions we face when talking to new clients, customers, interested passers-by, friends, family, you name it, is, 'Does this stuff really work?' And 'If it works as well as you say it does, then how does it work, and why is it effective?'
For years researchers have been trying to prove the effectiveness of serious games and simulations but have somehow always fallen short, whether through a lack of publicity, a lack of budget for ROI studies, or a lack of permission to publish their findings due to corporate confidentiality. The serious games research field, despite good intentions, has been pretty disjointed and lacklustre: that is, until now.
On October 20th, 2010, Science Daily published a summary of a University of Colorado Denver Business School study that 'found those trained on video games do their jobs better, have higher skills and retain information longer than workers learning in less interactive, more passive environments.' This exciting research project, to be published in the winter edition of Personnel Psychology, used meta-analytic techniques to examine the instructional effectiveness of computer-based simulation games relative to comparison groups on a comprehensive set of training outcomes.
Due to the lack of consensus on definitions of serious games, simulations, and game-based learning, Sitzmann coined her own term and definition for the research: simulation games are 'instruction delivered via personal computer that immerses trainees in a decision-making exercise in an artificial environment in order to learn the consequences of their decisions.' Sitzmann thoroughly analysed 65 studies and data from 6,476 trainees, and focused her review exclusively on studies that compared post-training outcomes for simulation game and comparison groups. Learners were undergraduate students in 77% of samples, graduate students in 12%, employees in 5%, and military personnel in 6%. However, it is worth noting that she also states that the effectiveness of simulation games relative to a comparison group did not significantly differ across undergraduate, graduate, employee, or military populations, signifying relevance across all end-user groups.
Sitzmann focused in particular on moderators of effective simulation games, including the entertainment value, the activity level of the simulation game group, the access level to the simulation game, the use of the simulation game as the sole instructional method, and the activity level of the comparison group. From these moderators she produced the following nine hypotheses:
H1: Post-training self-efficacy will be higher for trainees in the simulation game group than the comparison group.
H2-H4: Post-training declarative knowledge (H2), post-training procedural knowledge (H3), and retention of the training material (H4) will be higher for trainees in the simulation game group than the comparison group.
H5: The entertainment value of the simulation game will moderate learning from simulation games; relative to the comparison group, trainees will learn more from simulation games that are high rather than low in entertainment value.
H6: The activity level of the instruction in the simulation game will moderate learning from simulation games; relative to the comparison group, trainees will learn more from simulation games that actively engage trainees in learning rather than passively conveying the instructional material.
H7: Whether trainees have unlimited access to the simulation game will moderate learning from simulation games; relative to the comparison group, trainees will learn more from simulation games when they have unlimited access to the simulation game than when access to the simulation game is limited.
H8: Whether simulation games are embedded in a program of instruction will moderate learning from simulation games; relative to the comparison group, trainees will learn more from simulation games that are embedded in a program of instruction than when they are the sole instructional method.
H9: The activity level of the comparison group will moderate learning; relative to trainees taught with simulation games, the comparison group will learn more when they are taught with active rather than passive instructional methods.
Sitzmann’s research found supporting evidence for all but one of her hypotheses. She found that ‘self-efficacy, declarative knowledge, procedural knowledge, and retention results all suggest that training outcomes are superior for trainees taught with simulation games relative to the comparison group.’ Interestingly, H5, on the inclusion of ‘entertainment elements’, was not supported: there was no difference in learning between simulation games with high entertainment value and those with low entertainment value.
For me, as a serious game designer, three strong messages come out of this research. Firstly, serious games/simulation games need to be designed so that they engage the end user in the subject matter: this does not mean watching a video, it means actively involving the user in the decision-making process around the content. This supports the notion that serious games and simulations are effective not because they use the technology, but because they apply good design principles of engagement and participation in learning, which leads me onto another blog post about why serious games work, but that’s coming soon!
Secondly, end users should be given autonomy over their access to the content: allowing continued access to the simulation games seemed to dramatically improve users’ confidence with the content.
And finally, serious games need to be implemented in a blended learning environment: studies that included a mix of training before and after the game activities produced better results than those that used games as standalone training applications. It seems end users benefit from up-front training in the required knowledge; they can then practise their skills in the game, and then debrief and transfer to on-the-job training.
I do have to question Sitzmann’s figures for the cost of developing simulation games, however: she puts the price somewhere between $5 and $20 million, yet many effective applications have been developed for significantly less than that, including, I imagine, quite a few of the pieces she references in her research.
Overall this paper looks very exciting and has provided what seems to be solid evidence for what we have believed for some time: that serious games have the potential, if designed appropriately, to add significant value to training and education.
To conclude, I will leave you once again with the main findings of the study:
Overall, declarative knowledge was 11% higher for trainees taught with simulation games than a comparison group; procedural knowledge was 14% higher; retention was 9% higher; and self-efficacy was 20% higher.
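For readers curious what 'meta-analytic techniques' means in practice, the core step is pooling per-study effect sizes into a single weighted estimate, with more precise studies weighted more heavily. Here is a minimal sketch of a fixed-effect, inverse-variance pooling in Python; the effect sizes and variances below are hypothetical illustrations, not data or code from Sitzmann's paper:

```python
# Illustrative fixed-effect meta-analysis: pool per-study effect sizes
# (e.g. Cohen's d comparing simulation game vs. comparison groups)
# into an inverse-variance weighted mean. All numbers are made up.

def pooled_effect(effects, variances):
    """Return (pooled mean effect, standard error) under a fixed-effect model."""
    weights = [1.0 / v for v in variances]          # precise studies weigh more
    total = sum(weights)
    mean = sum(w * d for w, d in zip(weights, effects)) / total
    se = (1.0 / total) ** 0.5                       # SE of the pooled estimate
    return mean, se

# Hypothetical studies: each pair is (effect size d, variance of d)
effects = [0.30, 0.45, 0.10, 0.60]
variances = [0.02, 0.05, 0.01, 0.04]

d_bar, se = pooled_effect(effects, variances)
print(f"pooled d = {d_bar:.3f}, SE = {se:.3f}")
```

A real meta-analysis like Sitzmann's also corrects for artifacts such as sampling error and tests moderators (the hypotheses H5 to H9 above are moderator tests), but the weighted pooling is the heart of the method.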
For anyone wanting to review the whole paper you can find it here: A Meta-Analytic Examination of the Effectiveness of Computer-Based Simulation Games