About Me

I am a postdoctoral fellow at Penn State University’s Center for Humanities and Information. My research centers on the intersection of technology and language, which I approach both as a historian of technology and as a programmer. I have written and lectured on topics ranging from seventeenth-century mathematics to expressions of emotion on social media. I have also done extensive work on text mining as a research methodology and on the use of artificial neural networks to generate and analyze text.

In 2018, I received a PhD in English from the Graduate Center, City University of New York. I also have an M.A. in English from NYU and studied literature, mathematics, and computer science at the undergraduate level. Before starting my PhD, I worked for five years as a data analyst and computer programmer.

The Distance Machine shows how the words in a passage from Moby-Dick went in and out of common use over a period of two centuries.

If you want to get a sense of what my programming work is like, a good place to start is the Distance Machine. The Distance Machine is a web-based program (which you can try in your browser right now) that identifies words in historical texts that were uncommon at particular points in time. The idea is to find the opposite of archaisms—words that seem ordinary to us now, but that had not yet become common at the time when a text was written. I published an article about the project in American Literature; the source code is on GitHub.

I’ve also made some fun things like a poetry generator that can create rhymed verse, a depoeticizer that makes texts more banal, and a JavaScript-based simulation of an algorithmic poetry generator from 1677. My most recent creative-critical project is A Hundred Visions and Revisions, a program that uses a neural language model to rewrite a poem to be about a different topic.

There is a longer list of my digital projects here; you can also view a list of my major publications or my full CV.

My forthcoming book, tentatively titled Language and the Rise of the Algorithm, gives a history of the idea of the algorithm from the sixteenth century to the present. It focuses on four very different attempts to extend the power of symbolic computation into new areas: G. W. Leibniz’s calculus ratiocinator, which is often taken as a precursor of the modern computer; a universal algebra scheme Nicolas de Condorcet designed during the French Revolution; the nineteenth-century logic system that later developed into Boolean logic; and the early programming language ALGOL. Based on these cases, my book shows that the modern idea of the algorithm is implicated in a centuries-old debate about how to draw the line between computation and communication. I argue that recent developments in machine learning, especially the emergence of large language models like GPT-3 and BERT, will require a rethinking of this division.