Writing Test Items, Protecting Your Investment

One approach to maintaining the validity of a test is to create test items that are less likely to be stolen. The literature is full of books and articles outlining how to develop good test items, but not all of the recommended practices are equally effective at preventing piracy and cheating. By adopting some of the practices outlined here, you can reduce both measurement costs (the loss of test score reliability and test utility when items are compromised) and replacement costs (the time and expense of creating new items, plus potential lost revenue if the exam must be pulled until a new one is released). This article provides a sampling of best practices in item construction that will help you protect the investment made during the item development phase by extending the shelf life of those items. For more information on developing tests with test security in mind, look for a chapter by Jim Impara, Ph.D., and David Foster, Ph.D., in a forthcoming book edited by Downing and Haladyna, scheduled for release by Erlbaum in 2005.


  • Add Verbiage: A popular item-writing guideline is to avoid excess verbiage. Following this guideline may contribute to the validity of an item during field testing and the initial release of the test. However, lean items are easy to memorize and pirate, so their validity will tend to decrease over time, creating the need to develop new content. Adding verbiage and complexity that is relevant to the skill or construct being tested makes an item harder to pirate and also allows the measurement of higher-level concepts by requiring the examinee to separate the relevant information from the irrelevant.
  • Develop Multiple Correct Answer Choices: Traditional item-writing guidelines have suggested that multiple-choice items should have a single correct response. This makes memorizing the correct answer easier than if the examinee had to select multiple correct answers. In some instances a single correct answer choice may be appropriate, but in others it is important to measure the examinee’s ability to classify information or identify an exhaustive list of options. These skills can be tested by a series of multiple-choice questions, or they can be measured more efficiently by a single multiple-choice question with multiple correct answers. In the latter case, the examinee is cued about the number of options to select, and the item is scored as right (selected all of the correct responses) or wrong (missed at least one of the correct responses). Another alternative is the multiple true-false question, in which the examinee is given a stem and asked to indicate which of the response choices are true and which are false; each choice is scored right or wrong. Both formats and their correct answers are extremely difficult to memorize (see the scoring sketch after this list).
  • Create Novel Material: Another strategy is to create novel material or methods of testing when measuring higher-level concepts. Ways to create novel material include writing new scenarios instead of using textbook examples, incorporating graphics and tables, and paraphrasing textbook or training material unless the content is procedural.
  • Include Alternative Item Formats: Several item formats that have surfaced with the advent of computer-based testing (CBT), along with some that have been around for decades, can help enhance the security of your test. Some of the most popular formats include fill-in-the-blank, short answer, drag-and-drop, point-and-click and performance items with an interactive computer interface.
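
To make the scoring rules described above concrete, here is a minimal sketch in Python (the function names and item content are hypothetical illustrations, not from the article) of all-or-nothing scoring for a multiple-response item and per-option scoring for a multiple true-false item:

```python
from typing import Dict, FrozenSet

def score_multiple_response(selected: FrozenSet[str], key: FrozenSet[str]) -> int:
    """All-or-nothing scoring: 1 point only if the examinee's selections
    exactly match the set of correct options, otherwise 0."""
    return 1 if selected == key else 0

def score_multiple_true_false(responses: Dict[str, bool], key: Dict[str, bool]) -> int:
    """Each true/false judgment about the stem's options is scored
    independently; the item yields one point per correct judgment."""
    return sum(1 for option, answer in responses.items() if key.get(option) == answer)

# Hypothetical usage:
mr_key = frozenset({"A", "C", "D"})
print(score_multiple_response(frozenset({"A", "C", "D"}), mr_key))  # 1: all correct options selected
print(score_multiple_response(frozenset({"A", "C"}), mr_key))       # 0: missed a correct option

mtf_key = {"option1": True, "option2": False, "option3": True}
print(score_multiple_true_false(
    {"option1": True, "option2": True, "option3": True}, mtf_key))  # 2 of 3 judgments correct
```

Because the multiple-response item is counted right only when the selected set exactly matches the key, an examinee who has merely memorized one correct option gains nothing, which is what makes these formats harder to pirate.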


The cost to redevelop item pools is substantial, so any of these practices that maintain the validity of items over time are worthwhile. To determine whether CBT-based alternative item formats are appropriate, weigh the additional cost of creating or purchasing the software to author, publish and score these items.


Cyndy Fitzgerald, Ph.D., is co-founder and senior security director at Caveon Test Security (www.caveon.com) and a member of the Association of Test Publishers. Address any test security questions or recommendations to Cyndy via e-mail at cfitzgerald@certmag.com.
