ETD Guide/Universities/The Assessment and Measurement Process

This chapter of the Guide is intended for those who have responsibility for implementing an institutional or departmental ETD program. Creating a successful assessment and measurement plan requires planning, creating goals and objectives, choosing types of measures, and deciding what types of data to collect.


Some campuses form an ETD committee or team, consisting of representatives from the Graduate School, the faculty, the library, the computing center, and other relevant units on campus. As part of their overall planning for the development of an ETD program, the committee should make explicit their goals for assessment and measurement of the program and put in place mechanisms to collect data related to the goals. Data could be gathered from web statistics packages, from student surveys, or from interviews, depending on what type of information meets the assessment's goals.

Several units of the university—including academic departments, the graduate school, information technology services, and the library—are involved in any ETD program. In developing an assessment plan, it is useful to look for convergence. Are there particular things that would be useful for several of these constituents to know? This will leverage the value of the assessment and may allow for joint funding and implementation, thereby spreading the costs and the work.

During the planning process, the ETD committee may focus on a variety of issues, including:

  • the impact that the ETD program is having on the institution’s reputation;
  • the degree to which the ETD program is assisting the institution in developing a digital library; and
  • the benefits to and concerns of students and advisors participating in the ETD program.

Once the focus of the assessment and measurement activities is identified, the ETD committee should assign responsibility for development and implementation to another team. This team should include assessment and measurement experts from institutional units such as an institutional planning office, a survey research institute, or an instructional assessment office.

Creating Goals and Objectives

A necessary prerequisite to assessment is a clear understanding of the ETD project’s goals. Each institution’s assessment plan should match the goals and objectives of the institutional ETD program, which may have a broader scope than the simple production of electronic content.

If an ETD program does not have clearly defined goals, an excellent resource is the NDLTD web site. This site includes the goals of the NDLTD, which can be adapted to local needs. These goals include:

  • Improving graduate education
  • Increasing availability of student research
  • Lowering costs of submission and handling of theses and dissertations
  • Empowering students
  • Empowering universities
  • Advancing digital library technology

Within each of these broad areas, many types of measures can be developed to help evaluate whether the ETD program is succeeding.

Choosing Types of Measures

Institutions have many choices in what they measure in an ETD program. After aligning the assessment goals with the institution’s goals, the assessment plan must describe what types of measures are needed for various aspects of the program. McClure describes a number of categories of measures, including those that focus on extensiveness, efficiency, effectiveness, service quality, impact, and usefulness. (Charles R. McClure and Cynthia L. Lopata, Assessing the Academic Networked Environment, 1996, p. 6)

An extensiveness measure collects data on such questions as how many departments within the university are requiring ETDs or how many ETD submissions are made each year. This type of data lends itself to comparison, both as trend data for the individual institution and in comparisons to peer institutions.
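Extensiveness measures of this kind reduce to simple counting over submission records. As a minimal sketch, assuming a hypothetical list of submission records with department and year fields (the field names and data below are placeholders, not a real institutional dataset), the yearly totals and departmental spread could be tallied like this:

```python
from collections import Counter

# Hypothetical ETD submission records: (student_id, department, year).
# All names and values here are illustrative placeholders.
submissions = [
    ("s001", "Physics", 2007),
    ("s002", "Physics", 2008),
    ("s003", "History", 2008),
    ("s004", "Biology", 2008),
    ("s005", "History", 2009),
    ("s006", "Physics", 2009),
    ("s007", "Biology", 2009),
]

# Extensiveness: how many ETD submissions are made each year,
# and how many distinct departments are represented each year.
per_year = Counter(year for _, _, year in submissions)
depts_per_year = {
    year: len({dept for _, dept, y in submissions if y == year})
    for year in per_year
}

for year in sorted(per_year):
    print(year, per_year[year], depts_per_year[year])
```

Yearly totals computed this way can be kept as trend data for the institution or lined up against figures reported by peer institutions.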

An efficiency measure collects data on how the life-cycle costs of ETDs compare to those of print theses and dissertations. For example, Virginia Tech includes such a comparison on its Electronic Thesis and Dissertation Initiative web site.

Effectiveness measures examine the degree to which the objectives of a program have been met. For example, if an objective of the institution’s ETD program is to empower students to convey a richer message through the use of multimedia and hypermedia, data can be collected that displays the proportion and number of ETDs employing such techniques by year. If an objective of the program is to improve students’ understanding of electronic publishing issues, the institution can measure such understanding prior to and after the student produces an ETD.
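The multimedia example above amounts to computing, per year, the number and proportion of ETDs flagged as using such techniques. A minimal sketch, assuming hypothetical (year, uses_multimedia) records drawn from a submission database (the data below is illustrative only):

```python
# Hypothetical records: (year, uses_multimedia). Real data would come
# from the institution's ETD submission system.
etds = [
    (2008, False), (2008, True), (2008, False),
    (2009, True), (2009, True), (2009, False), (2009, True),
]

# Effectiveness: count and proportion of ETDs employing multimedia,
# broken out by year.
by_year = {}
for year, multimedia in etds:
    total, count = by_year.get(year, (0, 0))
    by_year[year] = (total + 1, count + (1 if multimedia else 0))

for year in sorted(by_year):
    total, count = by_year[year]
    print(f"{year}: {count}/{total} ({count / total:.0%})")
```

A rising proportion over successive years would be one piece of evidence that the "richer message" objective is being met.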

Measures of service quality examine whether students are receiving the training and follow-up assistance they need.

Deciding What Data to Collect

Frequently, discussions of how to assess electronic information resources are limited to defining ways of counting such things as searches, downloads, and hits. These measures are certainly useful, but they provide a limited view of the overall value of electronic information resources.

Many vital questions cannot be answered with statistics about searches, downloads, and hits: Can users access more information than in the past due to availability of information resources online? Has the availability of electronic information resources improved individuals’ productivity and quality of research? Has the availability saved them time?

Collection of data for assessment should be designed to answer some of these questions, to address the educational goals embedded in an ETD program, and to gauge whether or not those goals have been achieved.

Decisions about data collection are also informed by the institutional mission and goals. For example, if the institution is interested in increased visibility both nationally and internationally, then statistics on downloads of ETDs by country and institutional IP address could be useful. Examining who is using an institution's ETDs by country and by amount of use would also be a valuable gauge of impact. As recommender systems develop, this area may grow in importance.
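Download-by-country statistics of this sort are typically derived from web server access logs. The sketch below makes several assumptions: the log lines are simplified to "IP PATH" pairs (real logs record much more), and the IP-to-country mapping is a stand-in for a geolocation database that web statistics packages would normally supply:

```python
from collections import Counter

# Hypothetical, simplified access-log lines: "IP PATH".
# Both the addresses and the paths are illustrative placeholders.
log_lines = [
    "198.51.100.7 /etds/2009/smith.pdf",
    "203.0.113.9 /etds/2009/smith.pdf",
    "198.51.100.8 /etds/2008/jones.pdf",
    "192.0.2.44 /etds/2009/smith.pdf",
]

# Stand-in for a geolocation lookup; a real assessment would use a
# geolocation database or a web statistics package's built-in reports.
country_of = {
    "198.51.100.7": "US", "198.51.100.8": "US",
    "203.0.113.9": "AU", "192.0.2.44": "DE",
}

# Aggregate downloads by country of the requesting IP address.
downloads_by_country = Counter(
    country_of.get(line.split()[0], "unknown") for line in log_lines
)
print(downloads_by_country.most_common())
```

The same aggregation could be run over institutional IP ranges to see which peer institutions are reading an institution's ETDs.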

There are a number of questions related to users that are possible targets for assessment. These include:

  • Are students achieving the objectives of the ETD program?
  • Are students using tools such as Acrobat appropriately and efficiently?
  • Has the availability of student work increased?
  • Do students have an increased understanding of publishing issues, such as intellectual property concerns?

In addition, a number of questions related to student satisfaction could be addressed in data collection plans. These may include:

  • Were students satisfied with the training or guidance they received to assist them with producing an ETD?
  • Did the availability of their dissertation on the web assist them in getting a job?
  • Are they using the technology and electronic authoring skills they learned in their current work?

Usefulness to students may stem from the availability of their ETD on the web, which can help employers gauge their area of research and the quality of their output. Availability may also lead employers to contact students about openings that require a particular skill set. And students may find that the skills of preparing an ETD, and the framework of issues associated with it, such as intellectual property, are useful in their places of employment after graduation.

The usefulness of an ETD program to students, faculty, and others may be an important factor to measure in order to gather data that can be conveyed to administrators and funding agencies. This data might best be collected six months to a year after the completion of the ETD.

Finally, the ability for faculty and students around the world to easily examine the dissertation and thesis output of a particular department may provide a new dimension to rankings and ratings of graduate departments. Monitoring national ratings in the years before and after implementation of an ETD program could be useful, although an ETD program may be only one factor in any change in ranking or rating.

Next Section: Measuring Production and Use of ETDs: Useful Models

Last modified on 18 June 2009, at 16:35