
A Critical Review of Digital Game-Based Learning: Effect of Instructions and Feedback on Motivation

A Quick Introduction:

This is my first critical review of research. The article reviewed here is cited below. It was written in Microsoft Word with proper formatting, which did not translate over to blog format. I have done my best to recreate that formatting below.


Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156–167.


Problem

1. Identify the clarity with which this article states a specific problem to be explored.

While the authors’ intentions are only vaguely stated in the first portion of the article, their specific aims become much more apparent as the reader progresses, appearing in the introduction to each experiment. The problems explored in the two experiments serve different and separate purposes, but both use the same game in their implementation. The first experiment studied learning outcomes when different instructions were given; namely, whether students were asked to play the game for learning or for entertainment. The second experiment examined whether feedback in the quiz at the end of the game influenced the type of learning strategy participants used. Overall, the authors’ approach to presenting their specific problem was disorganized and unclear for a large portion of the article.


2. Comment on the need for this study and its educational significance as it relates to this problem.

This study addresses two small but important details in the realm of digital game-based learning. While the abstract claims to address learning and motivation in digital games, the experiments confront only two small pieces of the game experience, namely instructions and questionnaire feedback, and do not assess the game itself whatsoever. These data are useful because both details are commonly overlooked in typical game research: instructions from the supervisor are often not specifically scripted, and the effect of quiz feedback is rarely examined. However, these details are generally overlooked because there are larger and more important questions in digital game-based learning that have not yet been addressed, most of which concern the actual content of the game, something this study does not examine at all. The research in this article contributes evidence on a few finer details of research design, particularly the use of feedback and the wording of instructions. It does not make the significant impact on the broader topic of digital game-based learning that the early sections of the article would suggest.


3. Comment on whether the problem is “researchable.” That is, can it be investigated through the collection and analysis of data?

The problems stated in each experiment are absolutely researchable, as they are both specific and measurable. However, the larger problem of learning and motivation in digital game-based learning is a much more daunting task. Though the sample sizes were small and few demographic details about the participants were reported, the participants provided enough data for the researchers to analyze their findings and determine whether their original hypotheses were correct. The authors did a good job assessing the outcomes of two types of instructions and the usefulness of immediate feedback to learning, but overall these two details do not contribute much to the knowledge base of digital game-based learning.


Theoretical Perspective and Literature Review

4. Critique the author’s conceptual framework.

The authors lay out a framework through the introduction and literature review that decidedly does not match the information presented in the rest of the article. The introduction of the paper places an emphasis on digital gameplay and its popularity. The review of literature delves into rules for DGBL, motivational studies, and the benefits of gameplay over traditional media. The final portion of the literature review covers using instructions to impact the learning effectiveness of games, which is the only part of the literature that is assessed in this study’s experiments. The topic of the second experiment, the effect of feedback on learning during DGBL, is not introduced at all until the beginning of the section on experiment 2. This is problematic because, from the way the literature is presented, the reader expects a broader overview of digital game-based learning than the research here provides. If the authors had stated their intention to examine a few of the finer points of instruction and feedback, the framework of the paper and the reader’s expectations would be more closely aligned.

In my opinion, this conceptual framework is not effective for conveying the importance of the authors’ research. The information presented in the introduction and the majority of the literature review paints a picture of digital game-based learning overall as both a popular topic in educational technology and an entertaining way for children to learn. While the final part of the literature review ties the research of experiment 1 to the topics covered at the beginning of the article, it is not sufficient to support the design of the two experiments that follow. Even in the introduction to experiment 2, where the variables examined are explained, knowledge of correct response appears to have been chosen almost at random from a list of factors shown to be influential in a few studies. The general discussion at the end of the article does a nice job of pulling together results from both studies, but it does not completely make up for the lack of support in earlier parts of the article.


5. How effectively does the author tie the study to relevant theory and prior research? Are all cited references relevant to the problem under investigation?

The theories presented at the beginning of the paper appear relevant to popular topics in educational research today, but the scope of the paper changes as the literature review progresses. In this light, many of the sources cited to support points made earlier in the literature review and introduction do not directly support the research done in experiments 1 and 2. As a specific example, Mayer and Johnson (2010) were cited for their four rules of DGBL environments. However, when the reader reaches the methodology section of each experiment, it quickly becomes apparent that these four rules are not fully implemented in this ‘game.’ The first principle is a set of rules or constraints, which are likely featured in the introductory portion of the game. The second principle, dynamic responses to learners’ actions, is conspicuously absent from the game used in this study, as the game follows a fixed, mostly instructional format that does not change based on learners’ interactions with it. The third principle is appropriate challenges that promote self-efficacy in learners, which may be present in the questionnaire portion of this game but is absent from the game itself. The fourth, a gradual, learning outcome-oriented increase in difficulty, is entirely left out of this game.

This last principle in particular is a key feature of effective digital games, as it keeps pace with the individual learner. If students fully understand the material, they do not have to spend time continuously reviewing it and can move on to something new and more engaging. For struggling students, the game can slow down and spend more time on what they may be missing before moving on. The difficulty of the game in this study stays consistent throughout and is the same for every participant. This comparison calls into question the entire premise of the experiments: how can the authors claim to be studying digital game-based learning when the methods used do not measure up to the requirements for a DGBL environment?

The research and sources that are most relevant are contained in the sections about the experiments themselves, where the authors attempt to justify and explain each study at the same time. Experiment 1 draws on four specific studies relevant to the chosen variable, and experiment 2 contains many sources discussing other types of feedback and their usefulness, but only four sources on feedback generally and knowledge of correct response specifically. Much of the relevant research appears in the general discussion at the end, where the authors compared their findings to other similar studies. The sources presented there both help to validate the research hypotheses and support the findings, conclusions, and inferences drawn from the research conducted in this article.


6. Does the literature review conclude with a brief summary of the literature and its implications for the problem investigated?

The literature review does not conclude with a brief summary of the literature, but with descriptions of incidental, surface, and deep learning. It also briefly reviews cognitive load and germane cognitive load. These types of learning are important knowledge for educational researchers generally, but they have limited application to the two main focuses of this article: the effects of instructions and of feedback. Learning types and cognitive load are briefly mentioned in the general discussion, but there is no summary of the literature or roll-up of its implications for the problems being investigated.

I believe including this overview would have greatly improved the effectiveness of the literature review and the introductory portions of the article. An overview of what the literature implies for the experiments would set the tone for experiments 1 and 2, instead of the reader being surprised by their contents upon arriving at the sections devoted to each. This overview could incorporate much of the opening content of each experiment section, as the beginning of each reads like a miniature literature review. After including those studies, a roll-up of all the topics covered in the literature, as well as their interactions, would help the reader understand why so many elements were included in the literature review.


7. Evaluate the clarity and appropriateness of the research questions or hypotheses.

The hypotheses remain unclear throughout much of the article. While most articles state their central research questions near the beginning, usually in the abstract, introduction, or literature review, the authors do not introduce all of their research questions until the seventh page of the article. In fact, I would argue that the central hypothesis of the article is not stated until the first sentence of the general discussion at the end. The authors’ intentions would be much clearer if this central statement were put forward much closer to the beginning of the paper. It is important that readers understand the authors’ intentions for the article so that they can properly evaluate the arguments and research presented.

When reading the introduction and the first portions of the literature review, it appears that this article will systematically demonstrate digital game-based learning’s benefits for motivation and learning. The following portion of the literature review appears to define and review the motivational benefits of DGBL and raises the question of how to use it to get learners into a ‘flow’ state. The next research question posed concerns the benefits of DGBL in comparison to traditional media. None of these research questions are explicitly answered in the experiments presented in this paper.


Research Design and Analysis

8. Critique the appropriateness and adequacy of the study’s design in relation to the research questions or hypotheses.

There were significant shortcomings in the design of these experiments. First, and most importantly, the ‘game’ used in this study was poorly designed as a digital game-based learning environment. Not only did the game fail to meet the requirements for a DGBL environment, it was also created without the most motivating aspects of video games. It does not logically make sense to try to determine how motivating digital games are using a game that includes none of the features shown to be motivating in other studies of DGBL. There was no learner autonomy (except perhaps choosing which module to watch first), no customization to the learner, and no sense of competition. Even the aesthetic of the game was unmotivating: the scenes (several are pictured in the article) used muted colors and stationary figures, and focused on ailments of the elderly, despite being targeted at participants ages 18–26. This information would not be entertaining or relevant for the learners participating in these experiments.

While changing the communicated purpose and adding knowledge of correct response feedback may have an effect on learner engagement and motivation, neither of these is likely to be the most significant aspect of learning and motivation in digital game-based learning. To assess a question of this magnitude, the game used in these experiments should have combined the best aspects of video games with the best aspects of learning theory in order to actually identify the conditions under which DGBL is most effective.


9. Critique the adequacy of the study’s sampling methods (e.g., choice of participants) and their implications for generalizability.

Each experiment used a different group of participants, which makes sense considering that the same game with the same questions was used for both experiments; participants who had been part of experiment 1 would obviously already know the information presented in the activity. The researchers made sure that the groups in each experiment were demographically equivalent, which strengthens the study’s ability to generalize across demographic differences. However, the sample sizes of both experiments were quite small (46 and 44, respectively), and the results therefore may not generalize to the public. Demographics other than age and years in school were not presented and may also have had an effect on learner responses.

One of the larger issues with this group of participants was the screening out of students who already had knowledge of these diseases. I understand why this was done, to show greater improvement between the pre- and post-tests, but I believe it contradicts the overall goals of DGBL. Part of the allure of game-based learning is the ability to customize the experience around the learner and keep pace with the learner where difficulty is concerned. This game, as previously noted, does neither of those things and is therefore not truly a DGBL environment. This method of participant screening draws attention to the lack of sophistication in this learning experiment, and the results cannot be widely generalized, as the sample is now limited not only to one age group, but to students in that age group lacking specific knowledge of diseases of the elderly.


10. Critique the adequacy of the study’s procedures and materials (e.g., interventions, interview protocols, data collection procedures).

This study used several techniques to measure participants’ performance, including a pre-test, a computer simulation, a content quiz, and a series of questionnaires. Of these materials, the computer simulation and content quiz seem to have been the only digitally based parts of the procedure; the pre-test and all questionnaires were completed on paper. Participants sat in a room divided into six booths and remained in their respective booths for the entire experiment.

For an experiment on digital game-based learning, the methods and materials used here were a bit rudimentary. ASTRA, the ‘game’ used in this study, featured both out-of-date graphics and very limited tools for interaction. The researchers conducted over half of the experiment on paper, which removes participants from the digital learning mindset that is the driving force behind this study. Not only does this choice make little sense, it also undermines the validity of their findings about students’ interaction with digital media. While the questions asked in the questionnaires were valuable to the overall purpose of the study, there is no reason that both the questionnaires and the pre-test could not have been administered via computer.

There are many metrics that can be pulled from computer programs to determine how much students have learned, yet none of them are utilized in this study. Measuring how long students viewed the simulations, how quickly they clicked through screens, and how long it took them to answer the content quiz at the end would have been valuable for determining participants’ level of interaction with the digital learning system.
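
To make this concrete, here is a minimal sketch of the kind of interaction logging I have in mind. Everything here is hypothetical: the article gives no indication that ASTRA recorded anything like this, and the class and method names are my own invention for illustration.

```python
import time


class InteractionLogger:
    """Hypothetical logger for simple timing metrics in a digital lesson.

    Not part of ASTRA; a sketch of instrumentation the study could have used.
    """

    def __init__(self):
        self.events = []  # list of (metric_name, value_in_seconds) pairs
        self._screen_opened_at = None

    def screen_opened(self, screen_id):
        # Mark the moment a simulation screen becomes visible.
        self._screen_opened_at = time.monotonic()

    def screen_closed(self, screen_id):
        # Time spent on a screen approximates how carefully it was viewed.
        elapsed = time.monotonic() - self._screen_opened_at
        self.events.append((f"view_time:{screen_id}", elapsed))

    def quiz_answered(self, question_id, shown_at):
        # Latency between a question appearing and the answer being submitted.
        self.events.append((f"answer_latency:{question_id}",
                            time.monotonic() - shown_at))


# Example usage: time a learner viewing one module screen.
log = InteractionLogger()
log.screen_opened("module_1_intro")
time.sleep(0.1)  # stand-in for the learner reading the screen
log.screen_closed("module_1_intro")
print(log.events)
```

Even timestamps this simple would distinguish a participant who skimmed every screen from one who studied them carefully, which the study’s paper-based measures cannot do.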


11. Critique the appropriateness and quality (e.g., reliability, validity) of the measures used.

The measures used in this study were the most clearly thought-through part of the experiment. Rather than using more qualitative methods like interviews, the researchers had participants evaluate their learning, mastery, and performance goals via a questionnaire. Intrinsic motivation was also assessed, while extrinsic motivation was left out. The authors did not provide the specific questions asked in the surveys, leaving the reader wondering what was asked to determine the levels of these goals and motivations.

It is difficult to determine appropriateness and quality without knowing the exact nature of the questions asked in the pre-test, the content test, or the questionnaires on goals and motivation. There are several biases potentially at play that were not considered here and may have affected participants’ overall ratings, particularly in a Likert-scale format. In particular, response bias could easily have affected these results: a participant who is aware of being in an experiment may try to make sure the researcher gets the responses they are looking for. If participants were saying what they thought the researcher wanted to hear (perhaps motivated by a better grade or a reward for participating), there is a chance they answered questions more positively than they actually felt. For example, a participant may report enjoying the experience and learning more than they actually did in hopes of earning their reward, whatever that may be.

High reliability is not particularly important to this study, as it is simply exploring participants’ depth of learning and analysis in a digital learning environment. If the authors were trying to validate this framework for use in a larger population or to determine exact levels of response to DGBL overall, the reliability of these experiments would need to be established more rigorously. As it is, this was a small exploratory study and does not need to focus intensely on the consistency of its results across time, situations, and researchers.


Interpretation and Implications of Results

13. Critique the author’s discussion of the methodological and/or conceptual limitations of the results.

The authors cover in their discussion several possibilities for further study and several places where their results may have been flawed or subject to interference. The first limitation mentioned was the lack of effect that the instructional change had on participants’ ability to memorize the information; they admit this may be because participants were never told to memorize it. While the authors cite several studies with which their results are consistent, they do not explain in depth how their results support these theories or studies. For example, the authors claim that adding feedback to digital game-based learning allows students to memorize information more easily, which in turn supports cognitive load theory and the notion of germane cognitive load. Unfortunately, beyond briefly defining these theories in the literature review, there is no thorough examination of the implications of these results for each theory, or of the limits of their applicability.


14. How consistent and comprehensive are the author’s conclusions with the reported results?

The authors presented several conclusions in the general discussion portion of the article that relate directly to the results of the study. The results portions of the article were incredibly dense: they mostly contained output from statistical analyses and included no graphs or diagrams, only statistical descriptors such as means, variances, and standard deviations. For a reader with limited knowledge of statistical tests, these portions would be virtually unreadable, as there are no visual aids or descriptions of the statistical tests used.

The conclusions section does a better job of explaining the experiments’ results than the actual results sections do. While the results section of experiment 2 claims that there is a better response to KCR feedback, the general discussion tempers this claim by noting that most participants had very little exposure to the KCR feedback: the average score on the content quiz was 12/16, so on average only 4 questions prompted the computer to give KCR feedback. The general discussion relates the results of the experiments to several studies previously mentioned in the literature, although it occasionally lacks details or specific links to those studies. For example, the authors cite a study by Moos and Azevedo (2006) demonstrating that learner planning involves “recycling goals in working memory and activating prior knowledge,” and then claim that this supports their guess that the entertainment instruction made participants less frightened of failure. This is a large leap in logic, and the reader is sure to have a hard time following the authors’ argument here.


15. How well did the author relate the results to the study’s theoretical base?

This is a particularly pertinent question, as the theoretical base of the study presented in the literature does not appear to have a direct relation to the experiments until the reader reaches the general discussion at the end of the paper. There, many of the components of the literature review finally make an appearance in connection with the experiments. Meanwhile, the theories presented at the outset, such as the many potential benefits attributed to the entertainment aspect of video games, the comparison of digital games with other media, and the popularity of video games, are discussed heavily as part of the article’s theoretical base but are never mentioned again in the rest of the article.

One of the main research questions (as stated in the conclusion) is to determine whether or not deep learning is compatible with serious games, yet the authors never take the time to define what a ‘serious game’ entails. They set up the parameters for a digital game-based learning environment, but as discussed in previous answers here, the game used does not meet the requirements for a DGBL environment. Therefore, though the authors claim their results support deep learning in serious games, the reader has no way of knowing whether this is the case without further reading of the cited articles.


16. In your view, what is the significance of the study, and what are its primary implications for theory, future research, and practice?

In my view, this study brings to light the importance of the instructions given to participants in a research study, as well as the benefit of knowledge of correct response feedback for the memorization of content. Both of these details are often either overlooked or simply not taken into account during the course of research, and this study demonstrates that both may have an effect on participants’ learning outcomes. As far as answering the research questions it presents, however, this study does not do an adequate job. There are many reasons for this, from the disorganized review of literature to the poor design of the game used as the primary tool in the study. Its results are not generalizable, both because not enough detail is presented to thoroughly understand what participants were being asked to assess and because the sample sizes were small.

Several questions raised throughout this article require further research. First and most importantly, the main question of whether or not digital game-based learning is educationally sound is a clear topic for further research that was not thoroughly investigated in this study. Building on the study at hand, further research should examine instructional methods for games that DO meet the requirements for a DGBL environment, and the effects of KCR feedback should be studied in a scenario where more difficult questions about the content are posed to students. I believe a fundamental step in the right direction would be to first build a game that reflects the aspects of video games participants actually enjoy: something entertaining, truly interactive, relatable, and customizable to the individual experience. While the game is played, it can collect metrics about the player; the amount of time spent playing and the way questions were answered can then be combined to build a more complete picture of what participants learned. While this study provides a foundation to build from, much further research is needed to answer questions about deep learning in DGBL.


#Edu800 #CriticalReview #CRR #DGBL #Games #Instruction #Details

