The advent of computer-based testing, the ability to deliver and score tests electronically, has opened a prime opportunity to expand the types of questions that can be presented in a testing environment.
Computer-based test designers and developers commonly refer to questions as test items. This classification works well in a computerized testing environment and establishes a common language among testing professionals. Test items can be further categorized into item types. One with which everyone should be familiar is the multiple-choice item type. Computer-based testing, however, enables us to provide a variety of item types.
Not all tests include all item types, and different item types can be used to test for different cognitive skills or knowledge levels. Interestingly, very few item types actually test an individual’s ability to perform a task.
One consideration when deciding what item types to use is the technical capability of the test engine. Some test engines have limited ability to effectively render some items, and overall performance must be considered.
The certification program I support uses a variety of item types. We classify exams into two categories that dictate which item types are used: traditional and case study.
With a traditional exam, each question is independent of the other questions on the exam. The questions are presented in groups based on the functional area covered by the question and in random order within the group. The groups are presented in random order within the exam. There is usually one time limit for answering all the questions in the exam.
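The ordering scheme for a traditional exam can be sketched in a few lines of Python. This is only an illustration of the randomization described above; the function and data names are hypothetical, not taken from any real test engine.

```python
import random

def order_exam(groups):
    """Order a traditional exam: groups appear in random order,
    and questions appear in random order within each group.

    `groups` maps a functional-area name to its list of question IDs.
    """
    area_names = list(groups)
    random.shuffle(area_names)        # random group order within the exam
    ordered = []
    for area in area_names:
        questions = groups[area][:]   # copy so the question bank is untouched
        random.shuffle(questions)     # random question order within the group
        ordered.extend(questions)
    return ordered

exam = order_exam({
    "Installation": ["Q1", "Q2", "Q3"],
    "Security": ["Q4", "Q5"],
})
```

Note that every question still appears exactly once; only the presentation order changes from one test taker to the next.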
On a case study exam, questions are related to a scenario description, known as a case study. There are several case studies on each exam. Typically, there is a time limit associated with each case study on the exam.
Here are examples of the types of items that can be supported in a computerized testing environment:
- Multiple-Choice: The test taker selects the correct answer or answers from a list of answer choices. This question type is equivalent to the multiple-choice format used on paper-and-pencil-based exams.
- Hot Area: The test taker indicates the correct answer by selecting one or more elements within a graphic. Selectable elements are marked with a border and are shaded when the test taker moves the mouse over them.
- Drag-and-Drop: The test taker completes a diagram in the work area. In a drag-and-drop question, the test taker needs to move sources from the source area into targets in the diagram in the work area. Each target is indicated by a gray box.
- Active Screen: The test taker manipulates a screenshot representation of an element of a product. It is very much like a simulation of one dialog box or a single page of a wizard. The test taker indicates the correct answer by changing one or more aspects of the screenshot.
- Build List and Reorder: The test taker indicates the correct answer by building an answer list. In this question type, the test taker builds a list by dragging the appropriate source objects to the answer list and then placing them in the correct order based on criteria defined in the question.
- Create a Tree: The test taker creates a tree structure and indicates the correct answer by dragging source nodes to the correct locations in the answer tree. Nodes consist of text and possibly a small icon.
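One practical consequence of this variety is that different item types need different scoring rules. As a rough sketch, here is how an engine might score two of the item types above; the function names are hypothetical and real test engines will differ.

```python
def score_multiple_choice(selected, key):
    """A multiple-choice item is correct when the selected choices
    exactly match the answer key; selection order does not matter."""
    return set(selected) == set(key)

def score_build_list(answer_list, key):
    """A build-list-and-reorder item is correct only when the right
    sources appear in the right order, so order must be compared."""
    return list(answer_list) == list(key)

# Same choices, different order: full credit for multiple choice,
# no credit for build list and reorder.
mc_result = score_multiple_choice(["B", "A"], ["A", "B"])
bl_result = score_build_list(["B", "A"], ["A", "B"])
```

The distinction is simply whether the item's answer is a set or a sequence, which is why the two scoring functions cannot be collapsed into one.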
All these item types effectively support knowledge-based testing. It’s the computerized testing environment that makes it feasible to incorporate these types of items into tests.
Moving along the spectrum from knowledge-based testing to performance testing, item types such as simulations, emulations and live application scenarios are more appropriate. These types of items validate an individual’s ability to perform a task.
Simulations mimic a subset of the behavior of a software application or operating system. The test taker performs a task to achieve the end state as described in the item stem.
Emulations provide a controlled live environment in which the test taker performs a task as defined by a scenario. In live application testing, the test taker typically has full control of the environment and performs tasks as defined by a scenario.