Document Type


Date of Degree

Spring 2016

Degree Name

PhD (Doctor of Philosophy)

Degree In

Psychological and Quantitative Foundations

First Advisor

Welch, Catherine

Second Advisor

Dunbar, Stephen

First Committee Member

Yarbrough, Don

Second Committee Member

Ansley, Timothy

Third Committee Member

Seaman, Walter


The purpose of this research is to provide information about the psychometric properties of technology-enhanced (TE) items and the effects these items have on the content validity of an assessment. Specifically, this research investigated the impact that the inclusion of TE items has on the construct of a mathematics test, the technical properties of these items, and the influence these item types have on test characteristics. An empirical dataset was used to investigate the impact of including TE items on a multiple-choice (MC) assessment: the Iowa End-of-Course Algebra I (IEOC-A) assessment. The sample included 3,850 students from the state of Iowa who took the IEOC-A assessment in the spring of 2012. The base form of the IEOC-A assessment consisted of 30 MC items. Sixty TE items were developed and aligned to the same blueprint as the MC items. These items were appended in sets of five to the base form, resulting in 12 different test forms, which were randomly assigned to students during the spring administration window.

Several methods were used to form a more complete understanding of the content characteristics and technical properties of TE items. This research first examined whether adding TE items to an established MC exam had an effect on the construct of the test. The factor analysis confirmed a two-factor model comprising latent factors of MC and TE items, indicating that TE items may add a new dimension to the test. Following these findings, a more thorough analysis of the item pool was conducted, and IRT analyses were performed to investigate item information, test information, and relative efficiency. These analyses indicated that students may perform differently on MC and TE items; in this particular item pool, that difference may manifest as an additional, perhaps unintended, construct on the exam. Additionally, TE items may perform differently depending on the ability level of the student: specifically, TE items may provide more information, and measure the construct more efficiently, than MC items at higher levels of ability. Finally, the quantity of TE items included on a test has the potential to affect the relative efficiency of the instrument, underscoring the importance of selecting items that reinforce the purpose and uses of the test.
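The IRT quantities named above (item information, test information, relative efficiency) have standard definitions; as an illustration only, and assuming a two-parameter logistic (2PL) model, which the abstract does not specify, they can be written as:

```latex
% 2PL item response function for item $i$ with discrimination $a_i$
% and difficulty $b_i$, at ability level $\theta$:
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}

% Item information function (Fisher information contributed by item $i$):
I_i(\theta) = a_i^2 \, P_i(\theta) \bigl(1 - P_i(\theta)\bigr)

% Test information: the sum of item information over the items on a form:
I(\theta) = \sum_{i=1}^{n} I_i(\theta)

% Relative efficiency of form $A$ compared with form $B$ at ability $\theta$:
RE(\theta) = \frac{I_A(\theta)}{I_B(\theta)}
```

Under these definitions, a finding that TE items provide more information at higher ability levels corresponds to the TE items' $I_i(\theta)$ curves peaking at larger values of $\theta$, and a relative efficiency greater than 1 at high $\theta$ for forms with more TE items.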

Public Abstract

The development of the Common Core State Standards (CCSS), Race to the Top legislation, and enhancements in computer-based testing have served as a motivation for the creation of item formats that rely on technology and new ways of measuring what students know and can do. Technology-enhanced (TE) items are becoming increasingly popular in K-12 standardized assessment, yet there is little research about the properties of these item formats. The practice of adding TE items to test forms with traditional multiple-choice items deserves more research, especially given the enthusiasm this practice has generated among practitioners. This research addresses this gap in the literature by providing information about the psychometric properties of TE items and the effects these items have on the construct validity of an assessment.


Technology-Enhanced (TE) items, TE


ix, 174 pages


Includes bibliographical references (pages 166-174).


Copyright 2016 Ashleigh Crabtree