Notes on what I've been reading

Marginalia 17: when your boss is a robot

“How hard”, asked Josh Dzieza at The Verge last February, “will the robots make us work?”

While we’ve been watching the horizon for the self-driving trucks, perpetually five years away, the robots arrived in the form of the supervisor, the foreman, the middle manager.
...for workers, what look like inefficiencies to an algorithm were their last reserves of respite and autonomy, and as these little breaks and minor freedoms get optimized out, their jobs are becoming more intense, stressful, and dangerous.

Dzieza, like many others, identifies Amazon.com as one of the most dystopian “algorithmically managed” workplaces, but it's certainly not the only example. Callum Cant, on a recent episode of Paris Marx's Tech Won't Save Us podcast, talked about his book Riding for Deliveroo and what he learned from his experience working for the company in the UK, a job driven entirely by an opaque algorithm communicating via an impassive mobile app. Can't maintain the pace it sets? No more 'drops' for you.

In more “highly skilled” workplaces, the algorithms are, ironically, less sophisticated, and are still used mostly to launder management decisions rather than to replace bosses outright. In University.xlsx, Andrew Brooks and Tom Melick write:

[The university as spreadsheet] allows for some puzzling promises, such as a commitment to research without researchers and a dedication to teaching without teachers. Feedback is encouraged but never enters the spreadsheet itself.
...While the algorithms that work on Excel spreadsheets might remain relatively simple operations when compared with the machinic systems that sort and stratify massive data sets into perceptible patterns, it is important not to lose sight of their complicated effects. In the workplace, the classification systems that organise the structuring of data in the spreadsheet are determined by managers and productivity consultants, and to many extents dissolve into the daily tasks of management like sugar in tea. Similarly, the problems in need of solving or the forecasts in need of generating have been identified by the same players. Despite the appearance of scientific objectivity, the spreadsheet is always a product of judgement: some things enter the spreadsheet while others are discarded; some things are assigned value while others are dismissed as worthless.

All of this must be a shock to TechCrunch's Danny Crichton, who in 2014 heralded the dawning of a new age of worker liberation and happiness, declaring that “Algorithms Are Replacing Unions As The Champions of Workers”, and calling out fast-food delivery and university workers specifically as likely beneficiaries. It's hard to tell whether Crichton is extraordinarily credulous, or merely suffers from the myopia common to Silicon Valley vulture capitalists and their cheerleaders in technology “journalism”. Either way, he could hardly have been more spectacularly wrong. Just four years later, the tech workers who write the algorithms directing so many other workers were so fed up that they went on strike themselves, over company culture and management rather than pay or hours.

Crichton was certainly aiming at the right target; he was just wildly off-base about how to hit it:

Perhaps most importantly, [under algorithmic management via platform capitalism] workers have the ability to develop their own personalities and brands, an issue that has deeply resonated with me in the past. One of the most insidious ways that employers prevent workers from advancing in their careers is preventing them from having their own voice and being recognized for their accomplishments.

But far from freeing workers to express themselves, algorithmic management has precisely the opposite effect. Dzieza writes about an application used in call centre work to measure and rank workers based on their “empathy”:

Workers say these systems are often clumsy judges of human interaction. One worker claimed they could meet their empathy metrics just by saying “sorry” a lot. Another worker at an insurance call center said that Cogito’s AI, which is supposed to tell her to express empathy when it detects a caller’s emotional distress, seemed to be triggered by tonal variation of any kind, even laughter.

This “affective computing” technology is the subject of Frank Pasquale's article More than a feeling. He's not a fan:

Much of affective computing is less about capturing existing emotional states than positing them.
...If institutions buy into these sorts of assumptions, engineers will continue making such machines that try to actualize them, cajoling customers and patients, workers and students, with stimuli until they react with the desired response — what the machine has already decided certain emotions must look like.

This literally inhuman oversight, far from allowing workers to “have their own voice and be recognized for their accomplishments”, does exactly the opposite:

Angela, the worker struggling with Voci, worried that as AI is used to counteract the effects of dehumanizing work conditions, her work will become more dehumanizing still.

“Nobody likes calling a call center,” she said. “The fact that I can put the human touch in there, and put my own style on it and build a relationship with them and make them feel like they’re cared about is the good part of my job. It’s what gives me meaning,” she said. “But if you automate everything, you lose the flexibility to have a human connection.”

The AI simply doubles down on all the terrible things about life under corporate control. Ingrid Burrington, talking to Inhabit: Territories about capitalism, supply chains, the COVID-19 pandemic and Jenny Odell, cuts to the heart of it:

One of the things that Jenny Odell gets across very well is that doing nothing is not about actually just stopping, or being useless or being lazy. It’s about being really clear about what you actually want and doing that thing instead of the thing you think you’re supposed to do, or the thing that meets someone else’s expectations.

Callum Cant ended his interview by outlining what on-demand food delivery really is:

What is a service like [Deliveroo]? Functionally what is its core concrete nature? Well it's really care. What these platforms do is largely provide people who are exhausted from work, hungover, too depressed to go out to the shops, caring for children, with food quickly to their door.
...It's a care service that should be prioritised for people who need care. The actual use value here is “provide hot food to people who need it”... this should be one modality of a universal food service.

And Cant has a beautiful vision of what it could be:

Using food as a care service, providing it universally on a de-commodified basis, in delivery form to people who can't leave the house, in canteen form to those who can, and using that as a basis to rebuild our society.

Sounds pretty great to me.


Marginalia is an email and web newsletter about things that made me think over the last month – articles, books, podcasts, and perhaps from time to time some videos. It comes out on the first Monday of every month. You can subscribe by following @share@marginalia.hugh.run on any ActivityPub platform (e.g. Mastodon) or via email using the form below.

You might also enjoy my weekly newsletter Libraries & Learning Links of the Week, or my irregular blog Information Flaneur.