Computer-based Testing: Secure, efficient, cost-effective

Office of Digital Learning
Faculty/Instructor: Craig Zilles, U Illinois
Digital Innovations & Tools | Online Assessments & Rapid Feedback

Problem
The University of Illinois' engineering classes enroll 800-1,000 students per semester. Pencil-and-paper exams presented multiple operational challenges:

  1. Getting all students in the same place at the same time
  2. Scheduling conflict exams and conflict-conflict exams
  3. Generating new exams each semester
  4. The labor required to print, proctor, and grade exams

Because of these challenges, faculty typically held only one or two midterms per semester plus a final, even though research clearly indicates that frequent testing improves student learning.

Motivated by both operational and pedagogical concerns, U of I sought an assessment strategy that would scale to large populations without reducing quality and would offer students frequent testing with rapid, reliable results.

Solution
The solution was a computer-based testing facility (CBTF). At its core is PrairieLearn, a highly flexible assessment platform custom-built at U of I that lets instructors randomize question parameters and auto-grade a wide range of question types, including numeric and symbolic answers. Using Docker containers, PrairieLearn can also compile and run student code in numerous programming languages.
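
To make the parameter randomization concrete, here is a minimal sketch of what a question generator can look like. It follows the general shape of a PrairieLearn question's server.py, where a generate function fills in randomized parameters and the correct answer, but the specific question, parameter names, and value ranges below are illustrative assumptions, not an actual U of I question.

    import random

    # Illustrative sketch of a PrairieLearn-style question generator
    # (the shape of a question's server.py). The circuit question,
    # parameter names, and value ranges here are hypothetical.
    def generate(data):
        # Randomize parameters so each student sees a different variant.
        r1 = random.randint(2, 10)  # resistor values, in kilohms
        r2 = random.randint(2, 10)
        data["params"]["r1"] = r1
        data["params"]["r2"] = r2
        # Store the correct answer so the platform can auto-grade it.
        data["correct_answers"]["r_parallel"] = round(r1 * r2 / (r1 + r2), 2)

    # e.g. data = {"params": {}, "correct_answers": {}}; generate(data)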

The testing facility is a secured computer lab, proctored 12 hours a day, 7 days a week. With 330 seats running at 70% occupancy, it is one of the most heavily utilized spaces on campus. Students use lab computers to take exams, compile code, use CAD tools and MATLAB, and so on. Controlled networking prevents access to the internet. Proctors physically monitor the lab for academic-integrity infractions, and security cameras provide additional oversight. A full-time coordinator hires and manages the proctors and liaises with faculty running exams; additional staff handle software development.

With student laptop ownership on the rise and computer lab use on the decline, existing labs are excellent candidates for conversion to testing facilities. Computer labs have the advantage of already being wired for the power and networking a CBTF needs. U of I purchased very little new furniture for its CBTF; most came from pre-existing computer labs and campus surplus.

Not counting space, lighting, or electricity, expenses are about $190K per year, mostly for staffing. The CBTF runs roughly 100,000 exams per year, so the cost works out to about $1.90 per exam. That is less than third-party online proctoring services, and it is competitive with paper: a multi-page exam can cost up to $1 per exam just to print, before counting the labor to proctor and grade it.

Benefit to Faculty
Instead of building new exams every semester, faculty incrementally assemble question banks. Each year, questions can be improved and the banks extended. Questions can be tagged, making re-use easier.

Faculty can see students' responses and view the performance of a specific student, including how many attempts they made at each question. If a student wishes to go over their exam during office hours, faculty can pull up their responses. Faculty can also see the average score for any given question or for an entire test.

Freed from conducting and grading exams, TAs and faculty can devote more time to student interaction and to re-introducing active-learning pedagogies into large-enrollment classes.

With computer-based testing, many classes now run 50-minute exams every two weeks. In the off weeks, students are offered an optional second-chance test with partial score replacement.
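
The source does not specify the replacement formula, and each course sets its own. As a purely hypothetical illustration, a partial-score-replacement rule might blend the two attempts while guaranteeing the retake can never lower the recorded score:

    # Hypothetical partial-score-replacement rule; the 50/50 blend is an
    # assumption, not U of I's actual policy.
    def combined_score(first_score, retake_score, retake_weight=0.5):
        blended = (1 - retake_weight) * first_score + retake_weight * retake_score
        return max(first_score, blended)  # the retake can only help

    print(combined_score(70, 90))  # 80.0
    print(combined_score(70, 50))  # 70 (a weak retake cannot lower the score)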

Exam procedure
Students self-schedule exams: they receive an email with a link to a reservation page showing available times and make their choice. On the day of the exam, the student receives a reminder email. The student arrives ten minutes before the exam with their university ID.

Students store their belongings before entering the room. They are not allowed to bring in paper but are provided with blank scratch paper (the paper color is changed regularly so students cannot pass off crib sheets as scratch paper).

Once checked in, the student is assigned a random seat. Many different exams run concurrently, and students are generally interwoven, so neighboring students are usually taking entirely different exams.

The student logs into the computer with their NetID, navigates to the exam, and waits for it to start. The student answers the questions in whatever order they choose, and the exam is graded interactively. Many exams allow multiple attempts at a question, awarding 90% credit for a correct answer on the second try, 80% on the third, and so on, or whatever scale the faculty decides. When a student enters an answer, they can save it or have it graded immediately. Students usually work through the exam linearly, checking each answer as they go.
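
The declining-credit scale is easy to express precisely. A minimal sketch, assuming the 10-points-per-extra-attempt decrement used in the example above (faculty can choose any schedule):

    # Declining credit per attempt: 100% on the first try, 90% on the
    # second, 80% on the third, etc. Step size and floor are whatever the
    # faculty decides; these defaults match the example in the text.
    def credit(attempt, step=10, floor=0):
        return max(100 - step * (attempt - 1), floor)

    print([credit(n) for n in (1, 2, 3, 4)])  # [100, 90, 80, 70]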

The proctors enforce time limits; students leave knowing their score.

In rare cases where a student has an issue (their computer didn't work, MATLAB crashed, etc.), they must raise the concern before leaving the CBTF: the student reports the problem from their perspective, the proctor reports it from the proctor's perspective, and both reports go to the faculty member, who makes the final judgment.

TAs and instructors are not present during the exam, and proctors do not answer domain-specific questions. However, most exams allow students to report questions that may be flawed. Typically, the first students to take an exam are the strongest, so a 'broken' question will be encountered first by strong students, who will report it. An instructor can monitor the exam during its first few hours, fix the question, give the reporting students full points, and the remaining bulk of students never encounter the problem.

Because the CBTF allows asynchronous exam taking, early exam-takers could conceivably share information with later ones. To counteract this, randomization makes each exam unique: every question on an exam is randomly chosen from a pool of versions. Data from U of I suggest that at least three versions of each question are enough to make cheating more time-consuming than simply learning the material; polling early exam-takers for all possible versions would take longer than studying for the exam.
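
As a hypothetical sketch of the per-student draw: each question slot has a pool of versions, and one version is chosen at random for each student. Seeding by student ID is an assumption here, used so an individual's exam is reproducible:

    import random

    # Hypothetical sketch: draw one version per question pool per student.
    def draw_exam(student_id, question_pools):
        rng = random.Random(student_id)  # seed for a stable per-student draw
        return [rng.choice(pool) for pool in question_pools]

    pools = [["q1_v1", "q1_v2", "q1_v3"],
             ["q2_v1", "q2_v2", "q2_v3"]]
    print(draw_exam("netid123", pools))  # e.g. ['q1_v3', 'q2_v1']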

It is possible that a question pool is not balanced, i.e., a professor may have inadvertently made one or more versions in a pool harder than the others. To detect this, PrairieLearn generates statistics on the rate of correct answers for each version of a question. If a specific version is out of calibration with the others in the pool (significantly more incorrect answers), students who were unfairly impacted can be compensated, and instructors can re-balance the pool in the next iteration of the exam.
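
A minimal sketch of this calibration check: compute each version's correct-answer rate and flag versions that fall well below the pool average. PrairieLearn computes such statistics itself; the data layout and threshold below are assumptions for illustration:

    # Hypothetical calibration check: `results` maps each version of a
    # question to its graded outcomes (1 = correct, 0 = incorrect).
    def flag_uncalibrated(results, threshold=0.15):
        rates = {v: sum(r) / len(r) for v, r in results.items()}
        mean_rate = sum(rates.values()) / len(rates)
        return [v for v, rate in rates.items() if mean_rate - rate > threshold]

    results = {"v1": [1, 1, 0, 1], "v2": [1, 0, 1, 1], "v3": [0, 0, 1, 0]}
    print(flag_uncalibrated(results))  # ['v3'] is notably harder than v1/v2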

Student Response
Student response has been overwhelmingly positive. Computer science and electrical engineering students in particular prefer computer-based tests for several reasons:

  1. Writing code on a computer is easier than writing code on paper.
  2. Computer scientists are used to compiling code; knowing whether code runs is helpful information.
  3. Students rarely use pencils, and writing by hand for two hours can be exhausting; most prefer typing.

With improved efficiency for both students and faculty, computer-based testing is a viable alternative to paper exams.

To learn more, watch Dr. Zilles' xTalk presentation, or read MIT student Kevin Shao's blog post.
