Document Type


Date of Degree

Spring 2010

Degree Name

PhD (Doctor of Philosophy)

Degree In

Psychological and Quantitative Foundations

First Advisor

David F. Lohman


Ability tests play an important role in the assessment programs of many schools. However, the inferences about ability made from such tests presume that students understand the tasks they are attempting. Task familiarity can vary by student as well as by format. By design, nonverbal reasoning tests use formats that are intended to be novel. The popularity of nonverbal reasoning tests has increased substantially in recent years because of the growing number of English-language learner (ELL) students in many U.S. school districts. Nonverbal tests are thought to eliminate the need for language in test items and to reduce cultural content. Formats on these tests are also assumed to be equally novel for all students. However, in at least one large study, researchers found substantial differences between the average performance of ELL and non-ELL Hispanic students on three of the most widely used nonverbal tests. Although these differences might reflect real variation in cognitive development, they may also reflect differences in knowledge of test formats and the testing practices used in U.S. schools.

In this study, I hypothesized that the score gaps between ELL and non-ELL students might, in part, be due to differences in test familiarity and that providing directions that include more practice and feedback might attenuate these differences. I drew from the research on universal design, dynamic assessment, and cross-cultural testing to develop three different types of directions with practice items. I then compared the effects of these three types of test directions on students completing a nonverbal figure analogies test. Figure analogies tests are generally among the best measures of reasoning abilities and are known to be quite difficult for young students. All directions were provided using video with English and Spanish audio and minor animations to concretize the instructions. The three types of directions were nonverbal-dynamic directions, verbal-dynamic directions, and a control condition that used standard test directions. The nonverbal-dynamic directions presented four practice problems that sampled the range of items on the test. Oral instructions and feedback were minimal. The verbal-dynamic directions presented the same four practice problems with more in-depth description and feedback. These directions also described useful strategies for solving items. The standard test directions presented two sample problems with minimal instruction and feedback.

The sample consisted of 882 students in 40 first- and second-grade classrooms in 8 schools. A hierarchical linear model was used to control for similarity among students nested in classrooms and schools and to account for the assignment of treatment (type of directions) at the classroom level. The model included tests for main effects and interactions among treatment, ELL status, and grade. Results indicated that providing additional practice (the nonverbal-dynamic directions) led to small gains in performance, but that the more extensive set of directions (verbal-dynamic directions) was effective only for high-ability students. Contrary to the hypotheses, there was no interaction of ELL status with treatment. An unexpected finding was that the use of teacher-read directions instead of video-based directions led to better performance for second-grade students. I conclude that test directions are an important means of improving test familiarity in young students, but that excessive standardization and lengthening of the directions may hinder performance. I also conclude that the choice of practice items and the feedback provided are crucial considerations in the design of test directions.


Keywords

Cognitive Abilities, English language learner, Test Construction


Pagination

2, ix, 174 pages


Includes bibliographical references (pages 114-125).


Copyright 2010 Joni Marie Lakin