Teaching and Assessing Programming Language Tracing
Author
Nelson, Gregory Lloyd
Abstract
Learning to write programs is hard, but many learners fail to acquire even basic program reading skills, such as mentally tracing a program to predict its behavior. This dissertation argues that a new theory of programming language knowledge, one that includes mappings from syntax to semantics and their nested combinations, can serve as the basis for more granular tools for learning and more precise assessments of that knowledge.

First, I created a new theory of basic programming language knowledge: knowing the mapping from token-level syntax to the semantics encoded in a programming language interpreter's execution paths, along with nested combinations of those paths. Drawing on recent research in psychology, I proposed that humans can learn this knowledge via causal inference. I used this theory to design a new reading-first spiral curriculum for learning programming that directly shows and teaches program tracing, without program writing. I implemented this curriculum in PLTutor, an interactive textbook. In a comparative study against an interactive writing tutorial (Codecademy), I found initial evidence of improved learning gains on the SCS1: PLTutor's average learning gains were 60% higher than Codecademy's. Moreover, no PLTutor students failed their midterm, versus more than 10% of the Codecademy group.

Second, I created a new formative assessment for program tracing, with precise questions systematically generated to cover that knowledge, including nested combinations. An evaluation with 31 people found per-question error and guessing rates generally within desirable thresholds for educational assessment, and a roughly 70% success rate in targeting precise feedback for learning.

Finally, I created differentiated assessments, a new genre of assessment question designed to diagnose more precisely whether a learner's difficulty lies with an advanced topic or with prerequisite program tracing skills.
I led a collaboration to 1) empirically show that existing advanced-topic assessments depend on both advanced and prerequisite knowledge, and have difficulty precisely differentiating between the two, and 2) create example questions and design guidelines for more precise differentiated assessments.

Together, this thesis advances teaching and assessment for basic program tracing skills, and it concretely raises the possibility that other skills in computing might be taught better by asking what a theory of the skill might be, how to teach it at a useful level of granularity, and how to assess it more precisely, both directly and as a dependency when assessing advanced skills. The thesis concludes with a vision of programming education that includes reading and learning from great works of code and the meaningful human stories of their creation.
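To make the assessment idea concrete, the sketch below (an illustration, not code from the dissertation) generates simple program-tracing questions in the spirit described above: each question shows a tiny snippet exercising one syntax-to-semantics mapping, or a nested combination of two, and asks the learner to predict the printed output. The snippet list and the `make_question` helper are hypothetical names chosen for this example.

```python
import io
import contextlib

def make_question(snippet):
    """Build a tracing question; the answer key is the snippet's actual printed output."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(snippet, {})  # run the snippet in a fresh namespace to derive the answer
    prompt = "Predict the output of:\n" + snippet
    return prompt, buf.getvalue()

# One snippet per construct, plus a nested combination of the two,
# mirroring the "nested combinations" coverage idea.
snippets = [
    "x = 3\nprint(x + 1)",                    # assignment + arithmetic
    "for i in range(2):\n    print(i)",       # loop
    "for i in range(2):\n    print(i + 10)",  # nested: loop + arithmetic
]

for s in snippets:
    prompt, answer = make_question(s)
    print(prompt)
    print("Answer:", repr(answer))
```

Because the answer key is derived by executing the snippet rather than being hand-written, the question bank can be generated systematically over a grammar of constructs, which is one plausible way to achieve the coverage of mappings and their nestings that the assessment requires.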