
Craig Zilles Demonstrates How to Administer Effective Tests

Kevin Shao, MIT '23

Having just gone through my first round of midterms, I appreciated how quick the grading turnaround was at MIT. Instead of waiting weeks for feedback on assessments, as is often the case elsewhere, my graded exam was returned to me within a day. Achieving this speed, however, requires a large number of teaching staff and consumes much of their valuable time.

In the system imagined and implemented by Craig Zilles, the tradeoff between rapid feedback and conserving teaching resources no longer poses a problem. In his xTalk on October 7, Zilles described the Computer-Based Testing Facility (CBTF), where test questions are randomly generated for each student and feedback is automatic and instantaneous. Under this system, administering a test is nearly effortless, allowing professors and teaching staff to devote their time more effectively. Moreover, the system is not just a theory: Zilles has been running it for several years.

Zilles’ inspiration came from necessity: at the University of Illinois, where he is an Associate Professor, many lower-level engineering classes enroll hundreds of students, making proctoring exams, handling conflicts, and grading extremely difficult. As a result, these classes offer infrequent testing opportunities, often just a midterm and a final, which has been shown to detract from student learning.

The CBTF consists of two parts: the physical computer lab and the innovative software, PrairieLearn. In designing the computer lab, Zilles held security as a primary concern: multiple proctors, backed by security cameras, supervise the lab; seating is randomized and interleaved; and Internet access is restricted. Using a computer lab also gives students flexibility, since exams can be offered asynchronously, allowing each student to schedule the time slot that suits them best.

The PrairieLearn software, however, gives the CBTF its true advantage over traditional testing. It generates a unique, randomized exam for each student, drawing from a bank of problems on the topics the professor wishes to cover. The auto-grader checks students’ answers, giving instantaneous feedback and optionally allowing additional attempts for partial credit. Because results are instantaneous, students can also decide to take a second-chance exam to earn points back. All in all, the entire process, from delivering the exam to grading it, is completely automated, allowing the teaching staff to use their time more effectively.
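To give a flavor of the idea, here is a minimal sketch of how per-student question generation and auto-grading with retry-based partial credit might work. It is purely illustrative: the function names, data layout, and partial-credit policy are my own assumptions, not PrairieLearn's actual interface.

import random

def generate_question(seed):
    # Each student's seed produces a different instance of the same question template.
    rng = random.Random(seed)
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    return {"prompt": f"What is {a} x {b}?", "answer": a * b}

def grade(question, submitted, attempt):
    # Auto-grade a submission; wrong answers score zero.
    if submitted != question["answer"]:
        return 0.0
    # Full credit on the first try, reduced credit on retries (an assumed policy).
    return max(0.0, 1.0 - 0.25 * (attempt - 1))

q = generate_question(seed=20231007)
print(q["prompt"])
print(grade(q, q["answer"], attempt=2))  # 0.75 under the assumed retry penalty

Because grading is just a function of the student's submission, feedback can be shown the moment an answer is entered, which is what makes the instantaneous results and second-chance exams described above practical at scale.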

Having been in use for several years, Zilles’ system has begun to show impressive results. When comparing two classes with the same final – one that employed the CBTF and one that didn’t – the class that used the CBTF, and was thus able to test frequently, saw far fewer failing grades and often a significant increase in ‘A’ grades.

Zilles encourages professors everywhere to adopt this method, and his reasons and results are compelling. As a student, I would be thrilled to take tests in a smoothly run CBTF, even if it did mean taking exams bi-weekly.


Kevin Shao is a first-year student at MIT.
