
[Thoughts] What is wrong with how we understand organizational (Social Media companies') behavior

Or, where the Cuban Missile Crisis meets tech decision making and governance

It is time to move on from simplistic assumptions when explaining organizational behavior. These generalized, unconscious biases make us ineffective at understanding the incentives and decision-making paths tech companies actually display, and therefore more prone to finding the wrong solutions to the problems of free speech, moderation, and AI ethics. To build an alternative, more intentional version of these assumptions, I've decided to go back in history (both to my past as a political scientist and to historical events), because part of the path toward that understanding has already been charted for us.

In 1962, at what some consider the closest the world has ever been to nuclear war, the USSR set up ballistic missiles in Cuba in response to the US deploying its own in Turkey and Italy (and to the failed Bay of Pigs invasion). After the month and four days that the crisis lasted, scholars rushed to find explanations for why both states had acted the way they did (why did the USSR put missiles in Cuba? Why did the US react by blockading Cuba? Why were the missiles withdrawn?). One of them, Harvard professor Graham Allison, understood the value of getting this right better than anyone, writing his seminal book “Essence of Decision” in 1971 to try to explain why the events unfolded the way they did.

In general, when we look at the behavior of companies like Facebook, or at foreign policy decisions, we focus on a particular outcome (e.g., why did Facebook not do anything to fix AI that ranked inflammatory content? Why did Japan decide to attack Pearl Harbor?). This outcome can be understood in a myriad of ways depending on our analytical framework. However, most tech writing on content moderation uses a very specific framing: why did company X take decision Y? First, they fix the unit of analysis and circumscribe it to the company's choices alone. Next, they focus their attention on certain concepts, specifically the general goals and objectives of the company (making more money, holding more power, etc.). And finally, they invoke a pattern of inference: if the company acted in a certain way, it must be because it has a goal of the type they outlined before. Explanations thus revolve essentially around calculating the rational thing for a company to do in a certain situation, given specified objectives. A prime example of this is Karen Hao's MIT Technology Review article, where she writes that “everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth.” Although this assumption seems fair in the context of a company subject to a capitalist model where the incentive is always to make more profit, saying this implies that: (1) this is the ultimate motive behind Zuckerberg's decisions; (2) this was his intended outcome all along; (3) there aren't internal (political) forces and standardized processes contributing to the way things turn out; and (4) Zuckerberg has full control over everything that is presented to him. In short, it assumes an individual has far more power than they might actually have.
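To make that pattern of inference concrete, here is a deliberately crude sketch (my own caricature, not anything from Allison or Hao) of the rational actor logic. The actions and the "growth" scores are invented placeholders:

```python
# A caricature (mine, not Allison's or Hao's) of the rational actor
# inference pattern: assume one unitary actor and one objective, then
# "explain" whatever the organization did as the action that maximizes
# that objective. All actions and scores below are invented placeholders.

def rational_actor_explanation(actions, objective):
    """Return the action that maximizes the assumed objective; the
    framing then claims this is *why* the organization acted."""
    return max(actions, key=objective)

actions = ["rank_engaging_content", "downrank_inflammatory_content"]
growth_score = {"rank_engaging_content": 1.0, "downrank_inflammatory_content": 0.4}

print(rational_actor_explanation(actions, growth_score.get))
# -> rank_engaging_content: the observed behavior is "explained" by construction
```

Notice how the conclusion is baked into the premise: pick the objective first, and any observed action can be rationalized as its maximizer.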

So, while this model might bear fruit, its simplification obscures some of the most interesting aspects of (tech) decision making. In fact, according to Allison, it neglects the power of bureaucracy (the maker of policy is not one calculating decision maker but rather a conglomerate of large organizations and actors) and also the power of internal political bargaining among individuals with different preferences and goals within an organization. In other words, the “rational actor” approach is simplistic and can be harmful to our quest to make these companies behave differently. Moreover, in offering or accepting rational actor explanations of technology companies' (or any organization's, really) behavior, we assume they can be understood by analogy with the acts of individuals (the classic “what are the CEO's/President's/State's interests?” question, rooted in the idea that “a corporation is an individual”).

In “Essence of Decision” (1971), Allison proposes two alternative frameworks to try to capture the more complex reality of decision making within big organizations. The first is what he calls the “organizational process model” (also called the “bureaucratic” model), where (1) organizations are considered black boxes that obscure standard processes and highly differentiated decision-making structures across the organization; and (2) large acts result from innumerable, often conflicting smaller actions by individuals at various levels, in the service of a variety of only partially compatible conceptions of rational and organizational goals. In other words, decision making is a function of bureaucratic patterns of behavior.

If we were to apply this framing to Facebook's decision making, we would have to ask: from what organizational context and processes did these decisions (e.g., content moderation, AI implementation) emerge? On this view, some decisions can absolutely be explained as outputs of standard patterns of behavior. For example, for any content moderation decision it is valuable to understand that Facebook has a double-review system called Cross Check, by which high-profile pages get an extra mistake-prevention layer. Or, more generally, that content moderation (for the most part) happens in an orderly fashion (flowing from outsourcing partners to internal teams), following global Community Standards enforcement. This is the vision Facebook tries to make sure prevails, because it lands better with the public: “we apply our standards through standard procedures.” The process must be (or at least appear) fair, legitimate, and understandable to the public. There is, however, much to be understood about the actual channels (is the content reviewed by a human, a machine, or both? If there is more than one review, which teams are involved?) and operational procedures (how are the reviews done? What do reviewers look at, and why?) that lead to the final decisions being made. This is where the value of this model shines, as the sketch below suggests.
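As a thought experiment, here is a minimal sketch of what this "decision as standard operating procedure" view might look like. Every name, routing rule, and outcome below is an assumption for illustration; Facebook's actual pipeline is not public:

```python
# A minimal, hypothetical sketch of the "organizational process" view:
# the moderation decision is the output of fixed routing rules and
# standard procedures, not of a single calculating decision maker.
# Every name, rule, and outcome below is an illustrative assumption;
# Facebook's actual pipeline is not public.

from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    text: str
    high_profile_author: bool  # e.g., a page on a Cross Check-style list

def automated_review(post: Post) -> str:
    """Layer 1: a classifier flags likely violations (stubbed here)."""
    banned_terms = {"spam-example"}  # placeholder, not real Community Standards
    return "flag" if any(term in post.text for term in banned_terms) else "allow"

def outsourced_human_review(post: Post) -> str:
    """Layer 2: frontline reviewers apply written enforcement guidelines (stub)."""
    return "remove"

def internal_escalation_review(post: Post) -> str:
    """Extra mistake-prevention layer for high-profile pages (stub)."""
    return "escalate-to-policy-team"

def moderate(post: Post) -> str:
    # The "decision" is whatever falls out of the standard operating
    # procedure: every post flows through the same ordered channels.
    if automated_review(post) == "allow":
        return "allowed"
    decision = outsourced_human_review(post)
    if post.high_profile_author:  # Cross Check-style double review
        decision = internal_escalation_review(post)
    return decision

print(moderate(Post("page-123", "a spam-example link", high_profile_author=True)))
# -> escalate-to-policy-team
```

The point of the sketch is that "Facebook decided X" dissolves into "post Y took path Z through the pipeline": understanding the channels explains the output.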

While the bureaucratic model might be great for understanding 99% of Facebook's content moderation decisions, there is the thorny 1% of high-profile decisions (e.g., de-platforming Trump, flagging (or not) fake news on high-profile politicians' pages, etc.) that cannot be explained by understanding standard modes of operation. This is where the final framework, the political bargaining model, comes into play. This framework lets us wrestle with the idea that decisions come from internal negotiations between players in an organization (Mark Zuckerberg vs. Monika Bickert vs. the Oversight Board, etc.). Indeed, in this model we should be looking at what kind of bargaining among players yielded the critical decisions and actions (e.g., did Joel Kaplan really use his influence to stop misinformation flags on right-wing accounts in the US?). The concepts that structure this model are the perceptions, motivations, positions, power, and strategic maneuvers of the players in question. In fact, the key value of this model lies in identifying “the game” in which an issue will arise, the relevant players, and their relative power and skill. However, the idea that an organization as large and powerful as Facebook (or any government) might take decisions based on unstructured power dynamics is frightening, because it means that decisions important for democracy and free speech online are subject to events far more random than we would like.
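One crude way to picture the difference from the other two models: here the outcome is a resultant of the game, not anyone's choice. The toy model below is mine, not Allison's (he offers no formal model), and all players, positions, and weights are invented:

```python
# A toy illustration (mine, not Allison's) of the political bargaining
# model: the outcome is a resultant of players' positions weighted by
# their power in this specific "game", rather than any one actor's goal.
# All players, positions, and weights are invented for illustration.

from dataclasses import dataclass

@dataclass
class Player:
    name: str
    position: float  # preference on a 0 (keep content up) .. 1 (take it down) axis
    power: float     # influence in this particular game (not fixed across games)

def bargained_outcome(players: list[Player]) -> float:
    """Power-weighted average of positions: a crude 'resultant' of bargaining."""
    total_power = sum(p.power for p in players)
    return sum(p.position * p.power for p in players) / total_power

game = [
    Player("CEO", position=0.2, power=0.5),
    Player("Policy lead", position=0.8, power=0.3),
    Player("Oversight body", position=0.9, power=0.2),
]
print(bargained_outcome(game))  # -> 0.52: no single player's preferred outcome
```

Even in this caricature, the result (0.52) matches nobody's position, which is exactly the unsettling property of the model: change one player's power and the "policy" shifts, with no change in anyone's stated goals.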

Interestingly, though, none of these models by itself explains the full picture. Allison said it himself: relying on any of the models separately may render incomplete and misleading analyses. While the rational actor model was useful for understanding states' behavior in the foreign policy arena, it fell short in helping us see the real power dynamics behind the decisions being made. We should demand that tech investigators and journalists strive to supplement the rational actor model with the visions of these two other models to build a full picture. Yes, this will be messier and more complex, but it will be far closer to the truth of how these companies function.

Indeed, while the rational actor and bureaucratic models might offer some sense of comfort by leading us to believe that decisions are governed by rational interests or standard organizational behavior, the truth is that policy and decision-making processes are far more complex than that, as the political bargaining framework shows. Understanding (1) the key decision points and players; (2) path dependencies; and (3) organizational and personal goals will bring us closer to figuring out what is wrong with how we govern and understand tech. Because ultimately, the better we understand their essence of decision making, the better we will be able to predict their behavior in reaction to new legislation or civil society pressure.

Most fundamentally, though, it is probable that “the essence of ultimate decision remains impenetrable to the observer – often, indeed, to the decider himself ... There will always be the dark and tangled stretches in the decision-making process – mysterious even to those who may be most intimately involved.” In other words, it is very possible that we grant organizations and individuals much more explanatory power than they actually deserve. Accepting this is not easy. We crave ways to simplify the world in order to cope with it. Worse, we can't even trust companies or individuals to be rational or to follow standard procedures. Reality is messy, and embracing that complexity can help us do better governance, regulation, and ethics in the world of tech, by helping us devise the right behavioral incentives (i.e., the ones that really matter for high-level decision making).

See you in another adventure,

W.

#thoughts
