Abstract
An increasing number of large humanities data sets are becoming available, and new tools and methods are required to analyze them. There is a risk that statisticians and humanists will fail to recognize the historical contingency of such data. Without appropriate methods, algorithmic analysis of large humanities data sets can be used only heuristically, to augment current understanding and prompt new questions and angles of analysis, but not to make strong empirical claims. This work develops Bayesian nonparametric models that allow researchers to ask longitudinal questions of large humanities data sets with confidence that they have corrected for pre-existing bias derived from the received tradition.
| Original language | English |
| --- | --- |
| Place of Publication | Christchurch, N.Z. |
| Publication status | Published - 2015 |