Summary of Sitzmann and Ely’s Meta-Analytic Examination of the Effectiveness of Computer-Based Simulation Games

Those of us in the serious games industry inherently know that an interactive experience is more rewarding than, say, watching a video. However, one of the very first questions we are faced with when talking to new clients, customers, interested passers-by, friends, family, you name it, is, ‘Does this stuff really work?’ And ‘If it works as well as you say it does, then how does it work, and why is it effective?’

For years researchers have been trying to prove the effectiveness of serious games and simulations but have somehow always fallen short, whether through a lack of publicity, a lack of budget for ROI studies, or a lack of permission to publish their findings due to corporate confidentiality. The serious games research field, despite good intentions, has been fairly disjointed and lacklustre: that is, until now.

On October 20th, 2010, Science Daily published a summary of a University of Colorado Denver Business School study that ‘found those trained on video games do their jobs better, have higher skills and retain information longer than workers learning in less interactive, more passive environments.’ This exciting research project, to be published in the winter edition of Personnel Psychology, used meta-analytic techniques to examine the instructional effectiveness of computer-based simulation games relative to a comparison group on a comprehensive set of training outcomes.

Due to the lack of consensus on definitions of serious games, simulations and game-based learning, Sitzmann coined her own term and definition for the research: simulation games are ‘instruction delivered via personal computer that immerses trainees in a decision-making exercise in an artificial environment in order to learn the consequences of their decisions’. Sitzmann thoroughly analysed 65 studies and data from 6,476 trainees, focusing her review exclusively on studies that compared post-training outcomes for simulation game and comparison groups. Learners were undergraduate students in 77% of samples, graduate students in 12%, employees in 5%, and military personnel in 6%. It is worth noting, however, that she states the effectiveness of simulation games relative to a comparison group did not significantly differ across undergraduate, graduate, employee, or military populations, signalling relevance across all end-user groups.
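
As a quick aside for anyone unfamiliar with how a meta-analysis actually pools results: each study’s simulation-game-versus-comparison difference is converted into a standardised effect size, and those effect sizes are then averaged, with more precise studies carrying more weight. The sketch below is only a minimal, generic illustration of that idea; the studies, effect sizes, variances and weighting scheme are hypothetical stand-ins, not Sitzmann’s actual data or procedure.

```python
# Minimal, generic illustration of meta-analytic pooling (inverse-variance weighting).
# The studies, effect sizes (d) and variances below are hypothetical stand-ins,
# NOT data or methods taken from Sitzmann's paper.

studies = [
    {"name": "study_a", "d": 0.45, "var": 0.04},
    {"name": "study_b", "d": 0.20, "var": 0.02},
    {"name": "study_c", "d": 0.60, "var": 0.09},
]

# More precise studies (smaller variance) get larger weights in the pooled estimate.
weights = [1.0 / s["var"] for s in studies]
pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"Pooled effect size d = {pooled_d:.2f} (standard error = {pooled_se:.2f})")
```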

Sitzmann particularly focused on moderators of effective simulation games, including the entertainment value, the activity level of the simulation game group, the level of access to the simulation game, the use of the simulation game as the sole instructional method, and the activity level of the comparison group. From these moderators she produced the following nine hypotheses:

H1: Post-training self-efficacy will be higher for trainees in the simulation game group than the comparison group.

H2-H4: Post-training declarative knowledge (H2), post-training procedural knowledge (H3), and retention of the training material (H4) will be higher for trainees in the simulation game group than the comparison group.

H5: The entertainment value of the simulation game will moderate learning from simulation games; relative to the comparison group, trainees will learn more from simulation games that are high rather than low in entertainment value.

H6: The activity level of the instruction in the simulation game will moderate learning from simulation games; relative to the comparison group, trainees will learn more from simulation games that actively engage trainees in learning rather than passively conveying the instructional material.

H7: Whether trainees have unlimited access to the simulation game will moderate learning from simulation games; relative to the comparison group, trainees will learn more from simulation games when they have unlimited access to the simulation game than when access to the simulation game is limited.

H8: Whether simulation games are embedded in a program of instruction will moderate learning from simulation games; relative to the comparison group, trainees will learn more from simulation games that are embedded in a program of instruction than when they are the sole instructional method.

H9: The activity level of the comparison group will moderate learning; relative to trainees taught with simulation games, the comparison group will learn more when they are taught with active rather than passive instructional methods.

Sitzmann’s research found supporting evidence for all but one of her hypotheses. She found that ‘self-efficacy, declarative knowledge, procedural knowledge, and retention results all suggest that training outcomes are superior for trainees taught with simulation games relative to the comparison group.’ Interestingly, H5 was the exception: the inclusion of ‘entertainment elements’ did not prove to be significant, with no difference in learning between simulation games with high entertainment value and those with low entertainment value.

The three strongest messages that come out of this research for me, as a serious game designer, are these. Firstly, serious games/simulation games need to be designed in a way that engages the end user in the subject matter. This does not mean watching a video; it means actively involving the user in the decision-making process around the content. This provides evidence for the notion that serious games and simulations are not effective because they use the technology, but because they use good design principles of engagement and participation in learning (which leads me on to another blog post about why serious games work, but that’s coming soon!).

Secondly, end users should be given autonomy over their access to the content: allowing continued access to the simulation games seemed to dramatically improve users’ confidence around the content.

And finally, serious games need to be implemented in a blended learning environment; studies that included a mix of training before and after the game activities produced better results than those where the games were used as standalone training applications. It seems end users require training beforehand on the knowledge required, they can then practise their skills in the game, and then debrief and transfer to on-the-job training.

I do have to question Sitzmann’s finding, however, on the costs of developing simulation games, which she puts somewhere between $5 and $20 million. A lot of effective applications have been developed for significantly less than that, including, I imagine, quite a few of the pieces she references in her research.

Overall this paper looks very exciting and has provided what seems to be some solid evidence for what we have believed for some time: that serious games have the potential, if designed appropriately, to add significant value to training and education.

To conclude, I will leave you, once again, with the main findings of the study:

Overall, declarative knowledge was 11% higher for trainees taught with simulation games than a comparison group; procedural knowledge was 14% higher; retention was 9% higher; and self-efficacy was 20% higher.
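
To make those figures a little more tangible, here is a rough back-of-the-envelope reading. It assumes the percentages are relative differences in average outcomes, which is my simplification for illustration only; the paper derives them from standardised effect sizes, so the comparison-group score of 60 and the resulting numbers below are purely hypothetical.

```python
# Purely illustrative: assumes the reported percentages are relative differences
# in average outcome scores. The baseline score of 60 is a made-up example value.
comparison_mean = 60.0
improvements = {
    "declarative knowledge": 0.11,
    "procedural knowledge": 0.14,
    "retention": 0.09,
    "self-efficacy": 0.20,
}

for outcome, pct in improvements.items():
    game_mean = comparison_mean * (1 + pct)
    print(f"{outcome}: comparison {comparison_mean:.0f} vs. simulation game ~{game_mean:.1f}")
```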


For anyone wanting to review the whole paper, you can find it here: A Meta-Analytic Examination of the Effectiveness of Computer-Based Simulation Games

10 Responses to Summary of Sitzmann and Ely’s Meta-Analytic Examination of the Effectiveness of Computer-Based Simulation Games

  1. Pingback: Tweets that mention Summary of Sitzmann and Ely’s Meta-Analytic Examination of the Effectiveness of Computer-Based Simulation Games « Pixelearning's Blog -- Topsy.com

  2. Davin Pavlas says:

    Interesting article, but I have some concerns with how the meta rolled up different kinds of virtual environments, games, simulations, etc. into a single group they’re simply calling “simulation games.” There are some real differences in the kinds of processes that are taking place in the different tools of the meta’s studies. Extrapolating all that to serious games seems misleading. The field may be too new to provide a meta of pure game studies, which may have influenced the decision of the authors.

    However, some acknowledgment of this potential issue would have been nice. The authors say that it’s difficult or impossible to categorize these different environments as sims, games, etc. but don’t actually make reference to the issues that arise from this collapsing of categories. Indeed, the description of “simulation games” provided is strikingly similar to a generic virtual environment definition.

    Nonetheless, interesting work that should be useful as a review to point people to when the question of learning efficacy arises.

    • Helen says:

      Davin,

      Great point around the different types of ‘simulation games’ included in the study. However, we must acknowledge that there isn’t a great amount of publicly available published research around efficacy just now, so as a first step, collapsing them together into one category seems like a logical approach. After all, those of us in the serious games industry, simulations industry, etc. all propose to create immersive and engaging products that are more effective than traditional elearning.

      It would be very interesting and helpful for further research to see the ‘simulation games’ they reviewed so we could begin to look for commonalities across the most effective; then we could begin to break the categories down into subgroups.

      As for the definition they provide sounding like that of a generic virtual world, I would have to disagree with you there. In my opinion a virtual world provides the framework for a ‘simulation game’ to take place but is not itself a game. Virtual worlds (and I’m thinking of Second Life mostly) don’t ‘immerse trainees in a decision-making exercise in order to learn the consequences of their decisions’. Virtual worlds by their nature are open-ended environments shaped by the users themselves. As soon as you start putting in rules, decision-making points, and cause and effect (win states), the virtual world becomes shaped by an external force (i.e. an instructional or game designer) and moves towards becoming a game.

      • Davin Pavlas says:

        Helen,

        Given some elbow grease, you could probably take a look at the types of tools used, as all the meta-included articles have an asterisk in the references. But to do any sort of analysis, it’d basically become a re-tread of the meta work. Perhaps this is a better topic for a future meta, when the literature is a little more ripe.

        I actually didn’t mean for “virtual environment” to evoke “virtual world” — rather, I meant it to mean the sort of overarching group all of these tools belong to.

  3. Rob says:

    I’m wondering why data to support the effectiveness of serious games has been so hard to come by if this meta-analysis was able to come up with enough evidence to generate significant results… why not point to the original studies to which this meta-analysis refers? Having said that, it is nice to have a summary article which spans a range of studies to point to…

    • Helen says:

      Thanks for the comment, Rob. I agree it would be great to have the list of studies they reviewed as part of the meta-analysis! I also agree with you and can’t understand why it has taken so long for someone to do this. There are specific organisations across the globe dedicated to looking into serious games, but perhaps they are looking too far into the future rather than focusing on what is important for the industry right now?

      From a business perspective this research is very significant. Passing on individual pieces of research to our customers is a time-consuming activity and often has limited impact. A study that looks across a variety of genres, learning outcomes and audiences, showing in the main efficacy and increased retention, is worth more to us than individual studies alone! 🙂

  4. Doug Nelson says:

    Thanks for posting this research. I’ve read your overview, but not the original document.

    I’m not sure this is such good news from a sales (or selling) perspective. I envision a conversation with a client in which I point to this research to show that we’ll likely achieve outcomes that are approximately 15% better using a serious game. And then the client says “Great… and it’ll only cost 15% more than the alternative solution?”

    At which point I get to smile and say, “Well, no, actually according to this research it’ll cost between $5M – $10M to replicate those outcomes.” Somehow, I don’t think I’ll get the sale 🙂

    Clearly, her prices for simulations are way off: we’ve put together effective serious games for much, much less than her figures. But our custom games are *much* more expensive than a PowerPoint-based page-turner in Articulate. Certainly more than 15% more….

    So I wonder what the games were being compared to, and I wonder how we justify the premiums that games cost if we’re only showing 9% – 15% improvements in learning. Thoughts?

    — Doug

    • Helen says:

      Hi Doug,

      Comparing serious games to PowerPoints is not comparing like for like. Yes, they are cheaper to create, but that doesn’t mean they will be of any use. You could create 50 PowerPoints (number plucked from the air) and not come close to the effectiveness of a well-designed serious game. So comparing the cost of one PPT to a serious game is a non-starter, and that’s the first thing to point out to your clients!

      The research compares serious/simulation games to a variety of training methods, including f2f and other approaches, and what was found was that the higher the level of active participation in the learning, the more effective the intervention was.

      A 15% increase is very significant from an L&D perspective when you take into account the reduction in training time, the cost of mistakes, and also a newly motivated learner who is more willing to try out more courses. I’m extrapolating there, but in general the reaction of our clients who have seen this research has been ‘wow!’ And in fact some ROI studies we have completed with big blue-chip clients, which we unfortunately cannot discuss in detail due to corporate confidentiality, have seen a much lower percentage rate of effectiveness but very, very good ROI figures.

      Yes, the costs noted in the research for the development are way off; it’s more likely to be in the tens of thousands rather than millions (unless we get lucky!). But once you have this training piece, you can use it over and over again with as many employees as you need, thereby retaining the effectiveness levels across multiple groups: one advantage over f2f delivery alone.

      This research is not perfect and I don’t think anyone who reads it will think that, but it is, in my opinion, a very exciting step forward for the serious games industry.

  5. Richard says:

    Doug

    I know Helen has responded above, but I thought you might be interested in another response I received from a LinkedIn group.

    Richard: This looked interesting, and had me wondering why the differences were not larger than this! But after looking at the report, it seems to come down to active instructional methods (computerized tutorials, completing assignments, and conducting laboratory experiments):

    “The comparison group learned more than the simulation game group when they were taught with active instructional methods… the simulation game group learned more than the comparison group when the (latter) was taught with passive instructional methods … Computerized tutorials were much more effective than simulation games, and hands on practice was slightly more effective … Simulation games were more effective than lecture and reading.”

    So basically, passive instruction (lectures and reading in particular) falls short, and active instructional methods win overall. Games fell in between passive methods and active methods.

    Hope that was useful. Richard

  6. Pingback: Crossroads: Serious Games for Business « PIXELearning's Blog – Serious Games and more
