Monthly Tidings – January 2019

OK, I’m a little behind schedule. These things happen.

What I’m reading:

Tressie McMillan Cottom: THICK. This book is A Lot but it’s important. Cottom is a sociologist, writer, and speaker — a proper Public Intellectual, and the book is a collection of somewhat autobiographical essays largely about American culture and the Black* experience. (At least, that’s my best stab at a summary so far. I’m only two chapters down, and I started in the middle.) It’s so good, folks (and I’m so glad it’s essays; it’s a lot easier for me to digest shorter works these days).

(Cottom uses lowercase-b black and I should probably follow her example, but I finally got myself trained to assume capital-B, following the argument in this 2014 NYT op-ed, and habits are hard to learn and unlearn. I’m doing my homework on this question now.)

What I’m working on:

Figuring out an accountability cohort/plan/scheme/secret society that will help keep me on track with the writing projects I aim to get out the door this year. If I manage to get something off the ground, I’ll probably natter about it here.

(YES I know I am doing the “clean your desk when you have a deadline looming” thing, but I am as God made me.)

What I’m listening to:

Either my Pandora station of “classical Muzak” (think Barber’s Adagio for Strings and a lot of Ralph Vaughan Williams) or dance pop. Do I get nerd points if the music’s in a language I don’t speak?

What I learned recently:

As a profession, historians are much, much stuffier than librarians.

2018 – What Is and What “Should” Be

As I do most years, I got to the middle of January and thought “I think I have a blog floating around… somewhere. I should probably update it so that people will see that I am a Serious Librarian who does Serious Things.” The joke’s on me, however; I am very seldom a Serious Librarian (nor, as I’ll get into later, am I a librarian at all these days), and I would rank most of my day-to-day work as only moderately serious. I do think a lot about accountability these days, though—accountability to myself and to my cobbled-together little peer groups—and so I will give blogging yet another Girl Scout try.

Why do anything online in the name of accountability? Twitter has no need to verify what I or any other librarian did in a year or a month or a day. Facebook isn’t thanking me for logging on once every month or so to lock down my privacy settings again and post admonishments to my college friends about keeping a skeptical mind when reading the news. Still, these kinds of blogs, social media posts, and letters to the editor can, in the aggregate, improve our profession: by being transparent about what we do and how and why we do it, whenever we can, we help ourselves, our peers, and our future peers be the best for and to each other.

That’s a long-winded and sort of pompous way to get to the point, which is: as of July 2018, I no longer have “librarian” in my title, and I no longer have a job as part of a library staff (though I am still within the same organization and do eventually report up to a senior manager who happens to be a librarian). I am 85% super cool with this, 10% ambivalent, and about 3% existential angst to the tune of “but I went to library school to be a librarian!” (The other 2% is inescapable workplace nonsense—the cafeteria doesn’t have my favorite kind of ginger ale, people don’t change the water in the tea kettle, etc. etc.)

I am 85% cool with it because my job is really cool. I was at a conference earlier this year put on by some of my awesome departmental colleagues for the community of instructors who (primarily) teach economics in the college/university classroom. The woman next to me had come in a few minutes late and I handed her my program, handouts, etc., knowing I could always run out and get more. When the session we were in ended, we introduced ourselves. When I explained that no, I wasn’t an economics professor, but a librarian who worked with our online resources, particularly our economic history library, she exclaimed “That is the coolest job!” I basked in that for weeks; if a non-librarian thought my job was cool, it might actually be cool! (Whether or not you think an economics professor is capable of recognizing cool is another matter.)

In my new job, I get to think about a lot of very librarian-y topics and to use knowledge of a lot of the things I enjoyed studying most in library school: metadata, human-information behavior, scholarly communication, web design. (Cool stuff!) To account for that 10% of meh, I also get to (have to?) deal with topics that I mostly learned in my pre-library school days and which I would happily avoid if possible: privacy and copyright, reputation management, branding, workplace politics.

What I don’t get to do a lot of is the stuff I suspect many librarian folks dream of (at least those who dream of library school as a path to specialized reference work at a venerable research university, anyway): original research, student instruction, and rich, in-depth reference work. (I sometimes get the sense that the dream librarian job is somewhere between Robin Williams in Dead Poets Society and Rachel Weisz in The Mummy. To be fair, a job that was both of those things would indeed be a very cool job.) I get to do some of those Serious Academic Librarian tasks sometimes, and I enjoy the work immensely. The mismatch between what I dreamed of doing, however—being the kind of only-in-the-movies brilliant and effortlessly tweed-chic academic librarian—and what I actually do in a job that more than pays the bills is what causes that 3% existential angst.

In the era of endemic burnout, is 3% angst worth it? Almost certainly. Maybe reducing the 3% to a friendlier 2% is an achievable goal. How much of that angst comes from the dream I’m not quite realizing, and how much is societally imposed? How much of my fear of missing out, or my sense that I’m not doing “enough,” comes from someone else’s image of what a librarian does, what an educated person should do, or what would make my grandparents proud? Would shedding those assumptions about work “success” make me happier?

Maybe instead I should be aiming to turn my 85% to a 90% EVERYTHING IS AWESOME quotient. Or maybe I should break the 85% cool into something more nuanced: 40% awesome because I’m using skills I worked hard to get, 30% exciting because I’m learning new and interesting things, 15% satisfying because I’m becoming a better version of myself. Though I don’t know what the path forward is, I do know I want to be more thoughtful and deliberate about understanding what I do and how I feel about it.

This blog might help me do that. It might not. To be accountable to myself in an honest and authentic way, I’ll post if it helps, and not if it doesn’t. Wish me luck, friends; let’s start as we mean to go on.


Monthly Tidings, March 2018

Ahh, daylight saving time! When as a society we go “wait, how is it spring already?” and scramble either to catch up on our New Year’s resolutions or to hide all evidence that they ever existed.

(Note: I was going to call this “monthly tidbits” but when drafting this on my phone my spellchecker decided “tidings” was better; I can’t say I disagree.)

What I’m Reading

Dan Rothstein and Luz Santana, Make Just One Change: Teach Students to Ask Their Own Questions (Harvard Education Press, 2011)

I participated in a dynamite session led by Lori Donovan and Lynne Bland at last year’s American Association of School Librarians conference that showcased this book’s teaching technique, which aims to get students to ask their own questions about a topic instead of just responding to teacher questions. It’s a fairly short book, but is really rich with examples and explanations of why and how this technique works and can tie in to more traditional research methods and information literacy.

Dana Thomas, Deluxe: How Luxury Lost Its Luster

This book had been recommended to me a number of times, and when I finally found a secondhand copy in my local indie bookstore I grabbed it. Deluxe came out in August 2007, just before the beginning of the Great Recession, and is a weirdly dated snapshot of the particular excesses of the early 2000s. It has already answered a number of burning questions I have had, like “why do they expect me to pay $400 for this bag when the stitching looks like that?” and “do my sunglasses really need a giant designer logo cutout, even though it makes the glasses less effective at blocking the sun?”

What I’m Working On

I just started compiling the bibliography for a new for-fun project on data history, which I’m pretty pumped about.

What I’m Listening To

Janelle Monáe’s glorious new singles (be forewarned: neither of those links is particularly safe for work)

What I Learned Recently

How deeply bizarre and openly corrupt Russian professional hockey is.


Helpful Links for Tonight’s Text Encoding Lesson

Recommended text editors: Notepad++, TextPad, TextWrangler (for Mac)

TEI Lite:

A transcribed letter fully encoded in TEI XML:

Letter from John Adams to Abigail Adams, 3 May 1789:

Letter from John Adams to Abigail Adams, 1 May 1789:

About the transcription of the Adams letters: 

Civil War Love Letters: 
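Since the linked examples aren’t reproduced here, below is a skeletal sketch of what a TEI Lite-encoded letter might look like. The header text, dateline, and date are placeholders for illustration, not a real transcription of either Adams letter:

```xml
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt>
        <title>Letter from John Adams to Abigail Adams, 1 May 1789</title>
      </titleStmt>
      <publicationStmt><p>Unpublished classroom exercise.</p></publicationStmt>
      <sourceDesc><p>Transcribed from the manuscript.</p></sourceDesc>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <opener>
        <dateline>New York, <date when="1789-05-01">May 1, 1789</date></dateline>
        <salute>My dearest Friend</salute>
      </opener>
      <p><!-- transcribed body of the letter goes here --></p>
      <closer>
        <signed>John Adams</signed>
      </closer>
    </body>
  </text>
</TEI>
```

The `<teiHeader>` carries metadata about the transcription itself, while `<opener>`, `<p>`, and `<closer>` capture the structure of the letter as a document.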

Project Proposal: Mapping Chinatowns

One of the recurring themes of our semester so far has been the “why” of digital projects – yes, you can put a historical idea into a digital medium, but why would you? How does it change or enhance what you’re trying to discover or show about the topic in question?


As part of a research project for a course on the 19th century American West, I discovered that, beyond merely having a Chinatown (known locally as “Hop Alley”), St. Louis had had one of the earliest Chinese communities in a large American urban center not on the Pacific coast. I produced a digital map (above) with some simple color coding to try to represent the spread of the Chinese-American population, but the lack of movement and interactivity, the weak tie between the map and the rest of the research presentation, and the complexity of the data made the facts — and the importance of the facts — much less clear.

For my digital history project, therefore, I’m going to use the University of Virginia’s VisualEyes platform and editing tools to turn my static map into something that can better show movement over time, and which will tie individual points together with images and descriptions of what was happening when.

Even though this follows a more traditional linear narrative (passage through time), like a research paper or chronology, my goal is that being able to watch a more literal representation of the spread of these communities will allow viewers to see how the Chinese population in 19th century America moved not only with the railroads, but with other demographic and economic shifts.

stretching the humanities brain

Richard White’s excellent “What is Spatial History?” argues convincingly that digital (specifically spatial) history isn’t just a technique for making pretty representations of research, but rather

It is a means of doing research; it generates questions that might otherwise go unasked, it reveals historical relations that might otherwise go unnoticed, and it undermines, or substantiates, stories upon which we build our own versions of the past.

However, as I discussed in my last post, I have some concerns about how scholars (and scholars-in-training) learn these new techniques: not just how to do them, but how to do them well, as experts.

Take me, for example. I’m a reasonably well-educated person, with a good academic background in the humanities and strong computing skills, and yet when I encounter some of these discussions of digital humanities techniques, I really struggle to understand what’s going on. Neither my undergrad days nor my graduate studies trained me in statistical analysis or statistical software, nor did they give me reason to do any programming. Does breaking into digital humanities work (to build something, as Stephen Ramsay demands) mean spending my (very limited) free time teaching myself new skills, or does it require even more years of schooling? If I only work on part of a project, am I contributing substantively, or am I myself just a tool?

When Ben Schmidt says “most humanists who do what I’ve just done—blindly throwing data into MALLET—won’t be able to give the results the pushback they deserve”[1] he is warning us that these tools are, like other research methods, fallible. Yet when we draw specious or simply wrong conclusions in traditional historical research, we can catch ourselves, or our advisors/peer reviewers/editors can catch us. Megan Brett encourages us: “Don’t be afraid to fail or to get bad results, because those will help you find the settings which give you good results.”[2]

But how do we learn what a bad result looks like? Some are obvious, but the really tricky ones buried in the data may be invisible to the undertrained eye – and everyone in this field seems to have an undertrained eye.

Furthermore, there’s an old computer science cliché: Garbage In, Garbage Out. In other words, no matter how elegant your program, if you put in bad data (or bad rules, or bad metadata), your output is going to be crap.

This article on data analysis techniques used to research the 1918 flu epidemic demonstrates the difficulty of getting good data into a digital system:

Human beings recognize tone. Algorithms are better suited to sifting through data in search of keywords—like “influenza” and “kissing.” But “when we see a word or something being highlighted with an algorithm, we don’t know what it means,” says Mr. Ramakrishnan.

Mr. Ewing came armed with a set of “tone categories” to focus on: Were newspaper reports alarming, reassuring, factual? The group talked through the analysis that members wanted to do. “Our goal was to mimic it in an algorithm,” Mr. Ramakrishnan says.[3]

How do you make ‘tone’ mathematically analyzable? This isn’t a rhetorical question; these kinds of syntactical and semantic classifications underlie good text analysis (as Brett points out, to do topic modeling, you have to prepare your textual corpus for analysis and already have a good understanding of what’s in it). If you’re working with a programmer instead of programming things yourself, will important content get missed? If you do the programming, will important analysis get missed? After all, encoding “aboutness” is hard – just ask any library cataloguer or metadata librarian (like me!). Encoding the nuance of human language is even harder.
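As a toy illustration of what “mimicking tone in an algorithm” can mean in practice, here is a minimal keyword-counting sketch in Python. The tone categories come from the article quoted above, but the keyword lists and the tie-breaking logic are invented for illustration; a real project would derive them from the corpus and from the historians’ close reading:

```python
# Hypothetical keyword lists for a crude tone classifier; a real project
# would build these from the corpus, not make them up.
ALARMING = {"epidemic", "deadly", "panic", "spreading", "fatal"}
REASSURING = {"mild", "contained", "improving", "control"}

def tone_score(text):
    """Tag a snippet of newspaper text by counting tone keywords."""
    # Normalize: lowercase and strip surrounding punctuation from each word.
    words = {w.strip(".,;:!?\"'").lower() for w in text.split()}
    alarming = len(words & ALARMING)
    reassuring = len(words & REASSURING)
    if alarming > reassuring:
        return "alarming"
    if reassuring > alarming:
        return "reassuring"
    return "factual"  # default when neither tone dominates

print(tone_score("The deadly epidemic is spreading; panic grips the city."))  # alarming
print(tone_score("Officials say the outbreak is mild and well contained."))   # reassuring
```

Even this trivial sketch shows where the human judgment hides: every choice of keyword, normalization rule, and tie-breaker encodes an interpretive decision that the algorithm cannot make for you.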

Digital humanities and digital history are growing fields for a reason: when these tools work, and when they’re used well, they can give us insights into the human experience that are simply unreachable with traditional methods. But the skills needed to use these tools well are not innate to the humanist’s methods of analysis. (Nor to the computer scientist, the statistician, or the librarian, for that matter.) If we as scholars want to realize the full potential of the digital humanities, we’re going to have to stretch our brains even further than our PhD studies and our traditional research already ask us to.

[1] Ben Schmidt, “When You Have a MALLET, Everything Looks Like a Nail,” Sapping Attention (November 2, 2012),

[2] Megan Brett, “Topic Modeling: A Basic Introduction,” Journal of Digital Humanities 2 (Winter 2012),

[3] Jennifer Howard, “Big-Data Project on 1918 Flu Reflects Key Role of Humanists,” Chronicle of Higher Education, February 27, 2015,

what is the economic model of the digital humanities?


I ask this question a bit tongue-in-cheek, but I think a more serious version of the question is one that has not yet been adequately answered. Who does the work that produces digital humanities output? This is the conundrum underlying most questions of evaluation (particularly for tenure credit) of digital humanities work, of much of the altmetrics conversation, and to some extent the question of what training our advanced graduate students require.

To be sure, there are other important questions that are continuously being asked of DH: what are we doing? To what end? Particularly in history, is the tool that we use for our project contributing substantively to the scholarship it purports to represent?

Yet as I read the short introduction to DH from the book Digital_Humanities,[1] which speaks of DH projects in terms of project management, deliverables, and development cycles, I can’t help but wonder: who is responsible for this work? Who is teaching history PhDs to write technical documentation? To evaluate, choose, and implement metadata standards? To do database administration? Are those instructors departmental faculty? Does the single-apprenticeship model of scholarly training still hold up? Is coursework on proper LAMP implementation taught the same semester as paleography?

Or, alternatively, does DH methodology allow the humanities to borrow more efficiently from the model used in the physical sciences, with small armies of grad students and postdocs doing “grunt work” (text encoding or map reading) in support of a principal investigator’s research and output?

As to the expert option, if professionals (programmers, information architects, designers, and publishers are all possible collaborators identified by Roy Rosenzweig[2]) are used, by whom are they paid? For whom do they work? Does the future of humanities research depend on the grant-funded research structure? What is the scholar’s role in the production of a high-tech undertaking that requires professionals to develop it? If they’re merely writing content, why not write a book?

[1] Anne Burdick, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp. Digital_Humanities. The MIT Press, 2012.

[2] Douglas Seefeldt and William G. Thomas III, “What Is Digital History? A Look at Some Exemplar Projects” (2009), 3.