Silicon Valley has lost some of its shine in recent months, what with the “fake news” and the bots and the hacks and the hate speech. All the promises about the democratization of information and power ring a little hollow nowadays.
I’d say they rang a little hollow all along. Of course that’s what I’d say. I’ve been saying it for years now.
There’s a new tale that’s being told with increasing frequency these days, in which tech industry executives and employees come forward – sometimes quite sheepishly, sometimes quite boldly – and admit that they have regrets, that they’re no longer “believers,” that they now recognize their work has been damaging to individuals and to society at large, that they were wrong.
These aren’t apologies as much as they’re confessions. These aren’t confessions as much as they’re declarations – that despite being wrong, we should trust them now that they say they’re right.
Rick Hess published Amplify CEO Larry Berger’s confession this week on his blog at Education Week. “Until a few years ago,” Berger writes, “I was a great believer in what might be called the ‘engineering’ model of personalized learning, which is still what most people mean by personalized learning.” But it doesn’t work. In Berger’s terms, we have neither the map of what students need to learn, nor the “optimal learning object” for each waypoint on the map, nor the assessments to gauge whether students have learned.
Earlier this year, former AltSchool employee Paul France penned a number of blog posts, in which he described having made similar revelations: Silicon Valley’s vision for personalized learning, something to be achieved through customized assignments called “playlists,” simply wasn’t effective. France felt “burnt out,” and despite the massive amounts of data that AltSchool collected, “we didn’t have the data to back up our approach,” to substantiate the claims that any of this improved students’ learning.
France writes as though he’s the first person to have discovered that the surveillance pedagogy of AltSchool might be a problem. “When I began working in Silicon Valley, personalized learning was very new,” he states matter-of-factly. “No one really knew what it meant, and as a result, it led to us having unrealistic expectations for what we could really achieve in the classroom and what was actually best for kids.” “This just in,” France writes in another article, “Technology isn’t necessary to personalize learning.”
“This just in.”
Many of these recent narratives, as France’s articles perfectly exemplify, don’t just posit that problems exist (and have long existed) with these technologies. The confessors claim to have discovered the problems in the first place. They do not appear to have read any history. So it is easy for them to declare: no one could have possibly known. And now that these enlightened few do seem to know, they declare loudly that they’re here to inform and warn the rest of us.
Among the best known of these cautionary storytellers: Tristan Harris, a former Google employee and now “the closest thing Silicon Valley has to a conscience,” as one Atlantic article described him. “No one has done more,” Wired argued last year, to help explain the powerful lure of smartphone technology. What he’s done, to be clear: make a lot of high-profile media appearances, including a TED Talk on “How a handful of tech companies control billions of minds every day” – that is, how the tech industry uses behavioral design to attract and manipulate our attention. Earlier this month, Harris was in the news again when he and other ex-Facebook and Google employees announced they were launching the Center for Humane Technology, a lobbying group that will promote Harris’s “anti-tech addiction” message.
Maya Ganesh has criticized how this frames both the problem and the solution of “tech addiction”:
The Center for Humane Technology constructs the problem in terms of addiction and therefore as one of individual attention. And while they acknowledge the importance of lobbying Congress and hardware companies (Apple and Microsoft will set us free as if they don’t lock us into digital ecosystems and vie for our attention?), they emphasize a focus on individual action be that of tech workers, or users. By invoking ‘addiction’ as a metaphor, they see the problem as being about individual attention, and eventually, individual salvation. Naming the co-founder of the Center, Harris, as the ‘conscience’ of Silicon Valley evokes a similar emphasis on individual rather than community, political, or structural dimensions to the attention economy and its dismantling, or restructuring.
And I think there’s something about all these confessional narratives (and their hopes, I think, of becoming redemption narratives) that is also deeply intertwined with individual rather than structural change. These stories rarely situate themselves in history, for example, and as such really cannot offer much insight into how or why or even when things might’ve “gone wrong.” They rarely situate themselves among other thinkers or scholars (or activists or “users”). They are individual realizations, after all.
So then, I have to wonder: why should we trust these revelations (or revelators) to guide us moving forward? Why not trust those of us who knew it was bullshit all along and who can tell you the whole history of a bad idea?