Improving the Multiple Choice Question (MCQ) Format

Summary and Literature Review.

The use of Multiple-Choice Questions for course assessment has been controversial for many decades because of substantial evidence of their disadvantages. Some of these include:

  • the inability of the instructor to see the reasoning behind a student’s answer choice;
  • student guessing obscuring evidence of whether a student understands the topic;
  • the inability of the instructor to gauge how much of the topic a student understands; and
  • students feeling there is less to study because of the MCQ format.

Current studies aimed at improving the format tackle these issues in several ways: some examine hybrid MCQ approaches, others the appropriate number of options per question, and still others the language and syntax used in writing the questions.

Research examining hybrid approaches to MCQ exams also varies. In addition to her own study, Kottke (2001) cited several studies (Dodd & Leal, 1988; Nield & Wintre, 1986) that experimented with giving students the opportunity to explain each of their answer choices.

Other research has focused on the trend toward building “confidence-level” question types into the traditional MCQ format (Klymkowsky, Taylor, & Spindler, 2006; Swartz, 2006; Wisner & Wisner, 1997). Swartz cited earlier work in this area (Hassman & Hunt, 1994; Bruno, 1986; Bruno, Holland, & Ward, 1988).

In her article, Swartz also illustrates this type with the following example:

1) 1 + 2 = ?

   A. 2.717      B. 3          C. 3.141
   D. A or B     E. B or C     F. A or C
   G. I don’t know.

Full credit is given for the correct answer (B), and partial credit is given to students answering D, E, or G.
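As a concrete illustration, a minimal scoring sketch for this item follows. The point values (1.0 and 0.5) and the mapping of the paired options to their component answers are assumptions for illustration only, not Swartz’s actual rubric.

```python
# Hypothetical partial-credit scoring for the confidence-level item above.
# Point values and the option pairings are illustrative assumptions.

CORRECT = "B"
PAIRED = {"D": {"A", "B"}, "E": {"B", "C"}, "F": {"A", "C"}}
DONT_KNOW = "G"

def score(choice: str, full: float = 1.0, partial: float = 0.5) -> float:
    """Return the credit earned for one response to the example item."""
    if choice == CORRECT:
        return full                                   # confident, correct answer
    if choice == DONT_KNOW:
        return partial                                # honest "I don't know"
    if choice in PAIRED and CORRECT in PAIRED[choice]:
        return partial                                # hedged pair containing the key
    return 0.0                                        # wrong answer or wrong pair

for option in "ABCDEFG":
    print(option, score(option))    # B earns 1.0; D, E, and G earn 0.5; the rest earn 0
```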

Studies focusing on the language and phrasing of MCQs note that students often have a hard time understanding common disciplinary vocabulary and syntax that faculty are not even conscious of. Turner and Williams (2007) published a study showing that vocabulary test performance “predicted performance on multiple-choice exams more strongly than pre-course knowledge and critical thinking.” Ingram and Nelson (2006) also tested student understanding of MCQ language and cited two additional studies in this area, Pickersgill and Lock (1991) and Cassels and Johnstone (1985).

The full bibliography for the above summary is referenced here.

[youtube 4-QyNE9vg8s 445 364]

Sacramento State faculty member and Faculty Assessment Coordinator Dr. Terry Underwood (Education) discusses some of the inherent difficulties with using the traditional MCQ format.

Links to Further Help and Best Practices

Get Help. Additional article resources on developing and evaluating MCQ format tests include:

More Discussion. Patti Shank, PhD, CPT, a recognized instructional designer for online and blended courses, examined the different types of multiple-choice questions in her four-part article series, Better Multiple Choice Questions.


Comments

Showing 3 Responses

  1. Scott Farrand says:

    I don’t use MC items in my tests, but I have 8 years of experience as a writer for the Entry Level Math (ELM) test for the California State University system.

    When I started teaching I used MC items sparingly, but I noticed something strange. The students who did very well on the rest of a test tended to do well on the MC items, but the students who were earning C’s on the rest of the test tended to do worse on the MC items than if they had simply guessed. The students who failed the rest of the test would generally score on the MC items as though they did guess (and I presume that they did guess). My conclusion is that the distractors I used would capture the errors that C students make. In short, if a student knew a little bit then he or she knew enough to choose a wrong option, but if the student knew less than that he or she guessed and did a little better. So for me MC items were lousy at discriminating between C and D students, which is a critical area for tests.

    In writing items for the ELM exam, it was extremely useful to look at the item statistics. These can tell you how well the item did in several ways. For example, you can look at each of the distractors on an item and find out how well the students who chose that distractor did on the rest of the test, to see whether the distractor was picking off strong students or weak students. You can see whether doing well on that item corresponds with doing well on the rest of the exam. What I learned from the item statistics is that it is extremely difficult to anticipate how easy or hard an item is, and who will make what kind of error. Years of this leads me to not want to use MC items without first piloting them in some way to know more about what they are telling me.

    For example, it often happens that two items appear to be similar in difficulty and to test the same knowledge (for example, if one item is a clone of the other), but the results can be very different. There is always a reason for this, but these reasons tend to be diabolically difficult to see in advance. I believe that no matter how well you think you have accounted for the difficulty of anticipating what is going on with an MC item, the item statistics still leave plenty of room to be astonished by what the item is actually doing.
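The kind of distractor tally described in the comment above can be sketched in a few lines. The record layout, function name, and numbers below are assumptions for illustration, not the ELM program’s actual data or tooling.

```python
# Rough sketch of distractor statistics for one item: for each option, count how
# many students chose it and compute their mean score on the rest of the test.

from collections import defaultdict
from statistics import mean

def distractor_stats(records):
    """records: one dict per student, e.g. {"choice": "C", "rest_score": 31},
    where rest_score is the student's score on the test excluding this item.
    Returns {option: (number_choosing, mean_rest_score)}."""
    by_option = defaultdict(list)
    for r in records:
        by_option[r["choice"]].append(r["rest_score"])
    return {opt: (len(s), round(mean(s), 1)) for opt, s in sorted(by_option.items())}

# A distractor whose takers score high on the rest of the test is
# "picking off strong students" and probably needs revision.
records = [
    {"choice": "B", "rest_score": 45},
    {"choice": "B", "rest_score": 41},
    {"choice": "C", "rest_score": 44},   # strong student caught by a distractor
    {"choice": "A", "rest_score": 20},
]
print(distractor_stats(records))   # {'A': (1, 20), 'B': (2, 43.0), 'C': (1, 44)}
```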

  2. Scott Farrand says:

    I want to broach another topic related to MC items. Although I do not use MC items on tests, I make extensive use of them for homework, where they can serve a very different role.

    On a test, items are used to see what students know and can do. Homework items should help students to learn. These are very different goals, and MC items serve these goals in very different ways.

    The short version of what I am doing this semester in my calculus class is this: I am writing all of the homework problems, and I am making them all MC. The key (the correct answer) is always one of the options (I never use “none of the above” for my homework options) and I do not try to write distractors that capture common errors.

    Students work a problem out, and then look at the options to see whether their answer is the same as any of the given options. If their answer matches one of the options, then they can be pretty sure that they got the right answer and they select that option. If they did not get any of the options I provided, then they know that they made a mistake. Then a miracle occurs: they revisit their work to find their error. This, in a nutshell, is why I love using MC items for homework. With conventional math homework problems, students don’t know when they are making an error and don’t find out until after the work has been submitted and graded. Now they get a cue to think about where their method might have failed them.

    In my calculus classes, students submit their homework solutions using our WebCT system, and I have that rigged so that as soon as they have submitted a solution to a homework problem, WebCT allows them access to a carefully worked-out solution to that problem, so they can learn from it in order to tackle the next problem. I also require that my students hand in a written version of their homework, to try to avoid having students just trade answer keys to the homework problems.

    Typical systems in math classes train students that their goal on a homework assignment is to be done with the assignment. For example, they are often given credit on homework for the attempt. The idea that they should work on a problem until they have gotten it figured out is altogether foreign, in part because they can’t tell whether they have it right until long after they turn it in. The idea that they should learn in the process of doing the homework is certainly desired, but traditionally there is not much in place in the process that supports their learning. Homework typically looks like assessment, not education.

    For lots of good reasons, math teachers eschew the use of MC items on tests. Perhaps this generally bad reputation of multiple choice is why there hasn’t been much consideration given to using MC items on homework. I have found remarkably good results from the use of MC items in a carefully structured homework system. I just thought I would put in a good word for MC items, when used appropriately.
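A small sketch of the self-check step described in the comment above (work the problem, then compare the result to the listed options) might look like this. The option values, tolerance, and function name are assumptions for illustration only, not part of the commenter’s actual setup.

```python
# Sketch of the homework self-check: does the student's worked-out answer
# match any of the listed options? If not, that is the cue to find the error.

import math

def matching_option(my_answer, options, tol=1e-6):
    """Return the letter of the option matching my_answer, or None if no match."""
    for letter, value in options.items():
        if math.isclose(my_answer, value, abs_tol=tol):
            return letter
    return None

options = {"A": 2.717, "B": 3.0, "C": 3.141}
my_answer = 1 + 2                     # the student's worked-out result
letter = matching_option(my_answer, options)
if letter is None:
    print("No option matches -- revisit your work to find the error.")
else:
    print(f"Your answer matches option {letter}; select it.")
```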

  3. Mark Stoner says:

    I like Scott’s idea of using MC items for homework. I use MC quizzes to prod students to study basic reading material. (They respond to the evaluation code in ways that other means of encouragement don’t, it seems.) However, I will consider making the quizzes “homework” and see if I can get more learning outcome from the process. The MC item structure does allow us to focus attention, create ways of seeing problems, etc. that could get me a bit further in facilitating student transformation of thinking (i.e. learning). Thanks, Scott.
