As a tenured professor deeply immersed in the confluence of digital media and posthuman philosophy, my life's work has largely revolved around deciphering the intricate web of technology, identity, and human experience. Before becoming an academic, I worked in the ecommerce sector for two decades and spent five years at a nonprofit organization that helped K-12 schools better integrate educational technology. The impetus for this chapter comes from a profoundly personal place: a purposeful sense of self that draws from my academic background, my professional background, and my inherent interest in the subject at hand.

The journey we’ll undertake in the following pages isn’t just a scholarly expedition; it’s also an exploration of my own evolving understanding of how technology can both empower and marginalize, illuminate and obfuscate. In this sense, the chapter serves as a dual lens: one that presents a specific subject matter through the filter of academic rigor, and another that invites you to understand how my own experiences and intellectual journeys have shaped this presentation.

My hope is that the ensuing discussions will not only add to your knowledge base but will also inspire you to consider your own positionality—your unique vantage point formed by your experiences, background, and education. Just as I have connected my own life story to this area of study, I encourage you to discover your own connections, contradictions, and curiosities as we delve deeper into the complexities of this intriguing subject.

As I've taught about the impacts of big data and artificial intelligence (AI) over the years, I find myself frequently running headfirst into one formulation or another of the above quote, which I've obviously made up rather than cited exactly. Or perhaps it's more accurate to say I've been trained on a large set of data consisting of responses to concerns about privacy, processed those through my neural network, and generated some predictive text that looks a lot like what most people say, much like any good large language model (LLM) would do as part of a generative AI process. Either way, developing an approach to teaching about data that cuts through the apathy associated with this quote, or one like it, has become a central focus of my pedagogy. Why exactly should we spend our precious time on this planet thinking about, or even caring about, ideas as abstract and hard to regulate as these?

As it turns out, there are quite a few good reasons. The challenge is that these reasons are buried in layers of legal and bureaucratic jargon that, frankly, make it all sound quite boring. Comedian John Oliver described this best when discussing the intricacies of net neutrality and cable companies on his show Last Week Tonight:

Oh my god! How are you still so dull? And that’s the problem. The cable companies have figured out the great truth of America. If you want to do something evil, put it inside something boring. Apple could put the entire text of Mein Kampf inside the iTunes user agreement and you’d just go, “Agree, agree, agree. What? Agree, agree.” (Oliver, 2014)

Oliver goes on to distill the issue of net neutrality, explaining it in detail while also making it funny. While I would love to do something like that in every class session I teach, the amount of content I have to produce each semester while teaching four courses far exceeds what someone like Oliver produces for his show, and he has an entire team of writers helping him. Nonetheless, I've worked hard over the years to find ways to make the big-picture questions about data and society both personal and interesting to students. Let's explore why this matters.