This article by Kristen DiCerbo, Distinguished Learning Games Scientist at GlassLab, is cross-posted here from Pearson’s Research & Innovation Network blog.
Institute of Play is today releasing a white paper called Psychometric Considerations in Game-based Assessment, a work led by Bob Mislevy to which I contributed.
While there is much passionate debate around the role of high-stakes standardized testing, one thing that has developed from it is a set of rigorous statistical methods around measurement. We have tools for equating two different forms of a test, and for equating tests taken at different times, so we can compare scores across administrations. We have models that combine lots of individual observations (item scores, in the traditional testing world) into estimates of student achievement that take into account our error in measurement. We have ways of designing activities that allow us to gather evidence about student skills.
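To make the second of these concrete, here is a minimal sketch of how a simple measurement model can combine item-level observations into an ability estimate with an explicit standard error. This uses a Rasch (one-parameter IRT) model, one of the simpler members of this family; the item difficulties, function names, and example values here are illustrative assumptions, not anything from the white paper itself.

```python
import math

def rasch_ability(responses, difficulties, iters=20):
    """Estimate a test-taker's ability (theta) under a Rasch model.

    responses:    list of 0/1 item scores
    difficulties: known item difficulty parameters (same length)

    Returns (theta, standard_error), found by Newton-Raphson on the
    log-likelihood. Difficulties are assumed already calibrated.
    """
    theta = 0.0
    for _ in range(iters):
        # Probability of a correct response to each item at the current theta
        probs = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        # Gradient of the log-likelihood: observed score minus expected score
        gradient = sum(x - p for x, p in zip(responses, probs))
        # Test information: more informative items -> more precise estimate
        information = sum(p * (1.0 - p) for p in probs)
        theta += gradient / information
    # Standard error of measurement shrinks as information grows
    se = 1.0 / math.sqrt(information)
    return theta, se

# Hypothetical example: five items of increasing difficulty
theta, se = rasch_ability([1, 1, 1, 0, 0], [-1.0, -0.5, 0.0, 0.5, 1.0])
```

The point of the sketch is the last two quantities: the estimate comes with an attached standard error, which is exactly the "error in measurement" that these models keep visible rather than hiding in a raw score.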
As we look for new ways to “do assessment” that address some of the concerns with our current assessment regimes, we should be careful not to throw away these useful tools. As I have written previously, I do a lot of work on the idea of gathering data from students’ everyday interactions in digital learning environments to understand what they know and can do. Specifically, I have been part of the GlassLab collaboration that released SimCityEDU this fall.
This paper stems from that work. It walks through building an assessment argument in a game, maintaining many of the strong measurement-related (psychometric) considerations that are necessary to make good assessments. What do we think of as evidence? How is that captured and uncovered? How do we move from what we observe in a game to an estimate of a student’s skill?
It also addresses some interesting questions that games pose, such as how to measure a skill while it is changing, and how to interpret results from multiple attempts at the same activity.
The paper closes with an introduction to the idea of Evidence-Centered game Design (ECgD), which is the merging of Evidence-Centered Design (from the assessment world) with principles of game design. This is a really nice section for those who are thinking about what it takes to design games that will gather valid and reliable evidence.
News | February 6th, 2014