When does it make sense to design a learning game to answer a research question, and then test it with rigorous scientific methodology? Let’s consider why that approach worked well for Re-Mission, but not so well for other kinds of serious games (especially those intended for use in a school setting).
Tate1 describes a process, “Rational Game Design,” that worked well to address a research question regarding cancer treatment. This process emphasized iterative product optimization through formative research, as in biomedical targeted-drug development:
“Empirical testing is the best way to resolve conflict: When in doubt, collect data — the right answer is the one that best changes the target behavior in teenaged cancer patients.” (page 31)
“On many occasions, conflicting visions emerged in efforts to synthesize fun game play, cancer biology, and behavioral science. In those cases, player-focused data collection provided the basic recipe for choosing the right answer. Empirical demonstrations that ‘that’s what kids want, and it works’ played a major role in helping health professionals embrace a video game based on shooting, stool softener, and a sassy back-talking protagonist.” (page 33)
That description, along with the research analysis in the accompanying Kato2 paper, makes it clear that the success of such a design process is predicated on measurement: quantitative and qualitative, baseline and outcome. The feasibility of such measurement is determined by the theoretical framework, the intended primary outcome, and the constraints of the research setting. The case of Re-Mission was particularly well suited to the measurement requirements of this design process: it employed a behaviorist approach; the intended primary outcome was easily measured (consistency in performing self-treatment protocols); and the setting allowed for self-paced, occasional gameplay (an hour or less each week) over a few months, with periodic measurement. Furthermore, the informal, low-risk context of the gameplay made it easy to conduct a randomized controlled trial: the study participants had the freedom to spend one hour a week playing (or choosing not to play) a computer game that was entertaining and, in the intervention group, possibly therapeutic.
Contrast that case with studies of other sorts of serious games, especially academic learning games focusing on higher-cognitive or constructivist objectives: the intended outcomes may not be uniform or easily measurable; the setting is often intensive, requiring prolonged and coordinated gameplay over a shorter timespan (perhaps only a few weeks), which is not conducive to measuring gradually emergent or long-term effects; and formal accountability requirements (e.g. test scores) hamper the ability to randomly assign participants, introduce confounding factors, and may even stymie the definition of a “control” group. The difficulties and delays in measurement are also likely to stretch out the product iteration cycle. Little wonder that, in Video Games and Learning (2011), Squire rails against the “gold standard” (randomized controlled trials) for game-based learning studies in schools, given the theoretical framework and the sorts of outcomes that he advocates.
1 Tate, Haritatos, and Cole (2009). Hopelab Approach to Re-Mission. International Journal of Learning and Media, 1, 29–35.
2 Kato, Cole, Bradlyn, and Pollock (2008). A Video Game Improves Behavioral Outcomes in Adolescents and Young Adults With Cancer: A Randomized Trial. Pediatrics, 122, e305–e317.