Apr 20, 2017

‘Technology-Enhanced Retention’ and Other Ed-Tech Interventions

I can always count on Audrey Watters to join words together that get at something that’s been brewing somewhere beyond my own language motors:

And I’ll say something that people might find upsetting or offensive: I’m not sure that “solid research” would necessarily impress me. I don’t actually care about “assessments” or “effectiveness.” That is, they’re not interesting to me as a scholar. My concerns about “what works” about ed-tech have little to do with whether or not there’s something we can measure or something we can bottle as an “outcome”; indeed, I fear that what we can measure often shapes our discussions of “effect.”

Arguments around “outcomes,” “assessments,” and “effectiveness” bother me because they tend to be reductive and self-serving. They’re reductive because they require us to place measuring sticks on students that don’t take their perspective into account. And they’re self-serving because anything you choose to measure can be optimized for, providing an easy escape from the question of whether we’re measuring the right thing: “Sure we are, just look at how much {thing we are measuring} has improved!”

At the same time, I do have a bias toward practical, hands-on education. “What is practical?” strikes me as a tough question, but I still personally prefer it to “How do we measure effectiveness?”

Audrey finishes this talk with a real doozy that will likely ring in my ears for a long time:

My concern, I think – and I repeat this a lot – is that we have substituted surveillance for care. Our institutions do not care for students. They do not care for faculty. They have not rewarded those in it for their compassion, for their relationships, for their humanity.

Adding a technology layer on top of a dispassionate and exploitative institution does not solve anyone’s problems. Indeed, it creates new ones. What do we lose, for example, if we more heavily surveil students? What do we lose when we more heavily surveil faculty? The goal of these technology-enhanced efforts, I fear, is compliance, not compassion and not curiosity. So sure, some “quantitative metrics” might tick upward. But at what cost? And at what cost to whom?