
[Thoughts] How manufactured ignorance is clouding our judgement on tech governance

or how social media companies benefit from keeping their algorithms and data processing a black box

(Header image: Ikon Images / Getty Images)

1. What is manufactured ignorance (agnogenesis) and why should we care?

This fancy Greek coinage mixes agnōsis, the word for ignorance or ‘not knowing’, with ontology, which is philosopher-speak for the study of the nature of being. Coined by Robert Proctor and Iain Boal in 1995, the term captures Proctor’s claim that “ignorance is not just the not-yet-known, it’s also a political ploy, a deliberate creation by powerful agents who want you ‘not to know’.” In fact, some of the original manufactured ignorance studies focused on tobacco companies’ public relations campaigns, whose strategy was to seed doubt against the prevailing epistemological paradigms and scientific evidence in order to establish controversy.

According to Proctor, ignorance can be “produced or maintained in diverse settings, through mechanisms such as deliberate or inadvertent neglect, secrecy and suppression, document destruction, unquestioned tradition, and myriad forms of inherent (or avoidable) culturopolitical selectivity”. He broadly characterizes three types of ignorance: (1) native state ignorance (what is yet to be discovered); (2) lost realm ignorance (a product of inattention or sociological selection); and (3) strategic ploy ignorance (manufactured with some specific goal). That being said, ignorance is not inherently evil, even when manufactured; I’ll get to that in the final section of the post. However, the public should care about strategic ploys by organizations (governments or private corporations) to create public ignorance when their stakes may be at odds with the public interest.

This post deals with the tactics of manufactured ignorance and information obfuscation that social media companies seem to leverage in order to fend off criticism, avoid answering tough questions, and escape effective regulation.

2. Agnogenesis: you can’t fight what you don’t know

“Proctor found that ignorance spreads when firstly, many people do not understand a concept or fact and secondly, when special interest groups – like a commercial firm or a political group – then work hard to create confusion about an issue.” – BBC, Georgina Kenyon, 6th January 2016

A quick glance over social media companies’ newsrooms will land you article upon article dedicated to “explaining” failures and features of content moderation processes. The truth is that these posts are riddled with (1) unverifiable claims, because the public has no access to the critical data required to pass accurate judgement; and (2) sleight-of-hand arguments that try to fabricate a ‘balanced debate’.

Two current examples illustrate these points. First, the latest controversy for Facebook relates to COVID-19 and vaccine misinformation. Their response to President Biden’s accusation is contained in an article called “Moving Past the Finger Pointing”, a forceful piece in which Facebook tries to defend itself from the claim that it has contributed to people dying by feeding them misinformation. As Carl T. Bergstrom points out, “it would be nice to be able to rely on facts. Unfortunately, Facebook has exclusive access to the data we need to know what the facts are, and is not forthcoming with those facts”. Similar claims have been made by experts in the field here. In short, it is impossible to fairly judge a problem if the companies keep the public from accessing critical data.

Second, Robert Proctor explains that ignorance can often be propagated under the guise of balanced debate. There are fitting examples of this in “It Takes Two to Tango” by Nick Clegg. In the article, the ‘balance routine’ allows the spin-doctor to claim there are two sides to the story, that ‘experts disagree’, that ‘causality is unclear’. Clegg states that algorithms are not solely to blame for integrity issues, that people are also responsible for what they see. However, as Elinor Carmi has pointed out: “Unlike the way he portrays the tango as a partnership between two dancers on equal footing, in fact in most styles of the dance there is a leader and a follower. […] The Facebook tango is also not with two dancers on equal footing.” Likewise, Clegg’s article tries to ‘balance’ the debate, for instance by posing the rhetorical question: “Perhaps it is time to acknowledge it is not simply the fault of faceless machines?”, all the while avoiding the core subject of algorithmic responsibility.

Of course, agnogenesis can take other shapes. For example, it has been reported that Sheryl Sandberg downplayed Facebook’s role in the January 6th Capitol attack in her public commentary. In Proctor’s taxonomy, this would classify as deliberate secrecy. Another case is the internal selectivity on Information Operations highlighted by Sophie Zhang, which illustrates a situation “where organizational amnesia may be as important as institutional memory”.

Regardless of the reasoning companies use to communicate this way (paternalism, internal bureaucratic tensions, etc.), there seems to be a pattern of manufactured ignorance that crushes efforts to create commons, redistribute power, or democratize and effectively govern and regulate tech platforms. The knowledge vacuum these companies create serves those in power, shielding them from actual public accountability. In this scenario, the public and lawmakers are left disarmed.

3. Is fabricated ignorance opposed to democratizing efforts?

In a general sense, the argument of an ill-informed (or even misinformed) public makes an easy case for those who believe that governance should be left to an oligarchy. The government of the enlightened few, or the “government of philosophers” as Plato would have it, has proven in practice to devolve into tyrannies that produce sub-optimal societal outcomes. In short, the few in power have strong incentives to make sure they retain that power. Fabricated ignorance is one of the conduits through which a body can keep operating at will, because its members are the only ones “who know better”. On the flip side, public ignorance also perpetuates the vicious cycle of companies and tech zealots being able to say that “users/politicians don’t know what they are talking about”.

In addition, agnogenesis as a strategy for dealing with tech governance and PR issues seems at odds with achieving any sort of elegant (win-win) solution to the main integrity problems plaguing social media platforms. Fabricated ignorance as a defense mechanism, allowing companies plausible deniability, blame-shifting, or even defense by obscurity, is the antipode of a more democratic and fair use of technology. In simple terms, the asymmetry of information makes it impossible for the public to (1) be literate enough to understand the issues, (2) advocate for feasible solutions, and (3) rally behind the key issues and solutions.

To make matters worse, in an effort to counterfeit legitimacy, companies have made moves to disclose some data on enforcement. However, in most cases there is no way for the public and its institutions to audit any of it. Some would argue that this drains the data of value, or that it does more harm than good to the governance ecosystem overall. Facebook’s Community Standards Enforcement Report is one of these initiatives. An expert panel commissioned by Facebook itself has highlighted public scrutiny issues with this report.

These contextless disclosures foster further public ignorance. Companies throw complicated definitions and big numbers at the public that cannot be easily interpreted, thereby creating a false sense of “legitimacy” (the sketch below shows how easily such numbers mislead without context). This obscurity makes productive public debate effectively impossible, because either (1) everyone relies on numbers that have little meaning, or (2) the important methodological discussions (what type of data and context are required for content moderation accountability, for example) receive so little attention that journalists have no incentive to report on them effectively.
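As a concrete illustration, here is a minimal Python sketch, using entirely invented numbers rather than any company’s actual figures, of how the same raw count of violating content sounds negligible or alarming depending on the denominator chosen for the press release:

```python
# Illustrative only: every number below is invented, not any company's actual figure.
violating_views = 250_000       # hypothetical views of violating content
total_views = 500_000_000       # hypothetical views of ALL content in the period
topic_views = 5_000_000         # hypothetical views within the sensitive topic alone

# Same numerator, two denominators, two very different headlines:
prevalence_overall = violating_views / total_views   # "only 0.050% of all views!"
prevalence_in_topic = violating_views / topic_views  # "5.000% of views on this topic"

print(f"vs. all views:   {prevalence_overall:.3%}")
print(f"vs. topic views: {prevalence_in_topic:.3%}")
```

Without knowing which denominator a report uses, and why, the public cannot tell which headline is honest; that choice is exactly the kind of methodological context these disclosures omit.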

4. Some fissures in the agnotology strategy: the push for knowledge is the first step for real democratization

Agnogenesis disguised as transparency might be the most harmful form of manufactured ignorance, because it gives a further false sense of knowledge to an already deceived public. In general, what is needed seems to be transparency-for-auditability and transparency-for-accountability. Couple that information with independent watchdogs dedicated to advocacy and research, and we would be closer to creating a much healthier version of our current tech governance ecosystem. Alas, social media companies have no incentive to put themselves in the crosshairs of the public.

However, the Oversight Board – an interesting effort at self-regulation – has pushed Facebook to be more transparent in several ways that align with the goal of “airing the truth”. Through their investigations and case recommendations they have indeed forced the company to explain previously obscured parts of its content moderation processes.

This governance effort seems to fall short on the specifics of accountability because its bylaws do not (1) require Facebook to comply with the Board’s recommendations, leaving the public and the Board at the mercy of a corporate framing of the issues at hand; (2) establish transparent ways for the Oversight Board to have actual supervision of, or access to, a larger array of Facebook data (think classifier features, precision and recall, sketched below, although those might be revealed voluntarily by Facebook); or (3) necessarily allow the Oversight Board to disclose this key information to the public in any format. To call this body an actual “oversight” institution, I’d like to see its investigative and public disclosure powers enhanced.
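For readers unfamiliar with those metrics, the sketch below (plain Python, operating on a hypothetical audit sample; no such public dataset exists today, which is precisely the point) shows what computing a moderation classifier’s precision and recall would look like if enforcement decisions and independent ground-truth labels were ever disclosed:

```python
# A minimal sketch of the precision/recall metrics mentioned above, assuming
# hypothetical access to (a) the classifier's enforcement decisions and
# (b) ground-truth labels from independent human review.

def precision_recall(decisions: list[bool], ground_truth: list[bool]) -> tuple[float, float]:
    """decisions[i]: classifier flagged item i; ground_truth[i]: item i truly violates policy."""
    true_pos = sum(d and g for d, g in zip(decisions, ground_truth))
    flagged = sum(decisions)
    violating = sum(ground_truth)
    precision = true_pos / flagged if flagged else 0.0    # of what was removed, how much deserved it?
    recall = true_pos / violating if violating else 0.0   # of what deserved removal, how much was caught?
    return precision, recall

# Hypothetical audit sample of 8 items:
decisions    = [True, True, True, False, False, True, False, False]
ground_truth = [True, True, False, True,  False, True, True,  False]
p, r = precision_recall(decisions, ground_truth)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.75, recall=0.60
```

Nothing here is technically hard; what is missing is not the math but the data, which is why disclosure powers matter more than methodology.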

The operation of the Oversight Board raises the question of what the standard should be. In practice, it is helpful to look at what our oldest public institutions do. The difference between government-manufactured ignorance and the private-company variety is that government has to follow clear rules established through democratic decision making (at least in democratic countries). For example, classification, in spite of its imperfections, imposes certain safeguards on secrecy and thus on government-manufactured ignorance. In stark contrast, social media companies are not subject to special rules or governance around what is accessible to the public: rules governing what counts as private company data, when information should be “declassified”, what the people can require in the public interest, and so on. Because these companies are private corporations, by default everything gets treated as “secret”, manufacturing crippling public ignorance. Therefore, there is no certainty that the public will ever have authoritative data on some of the main issues affecting our social fabric. We currently rely on studies done with small portions of data relinquished by the companies, or on user testing and experiences, which tend to have less external explanatory power. In other words, it is very hard to draw general, authoritative conclusions from them; we live in a world of anecdotes.

Moreover, there are no rules governing the public’s access to data beyond what companies believe to be your “personal information”. For instance, there is no Freedom of Information Act (FOIA) request that a normal citizen can file with a company. There is no rule by which a company must declassify data after a certain number of years. In this world, we have to settle for companies creating insurmountable asymmetries of information in the name of “trade secrets”. To be clear, I am not advocating for companies to give up their intellectual property, the same way we don’t expect States to relinquish documents that contain raison d’État. However, we could definitely do better. Governance around transparency and manufactured ignorance matters. In fact, lawmakers, instead of trying to solve the integrity issues directly, should start by pushing effective ways of dealing with public ignorance. Some ideas: (1) create mechanisms for companies to disclose data that allows auditing the behavior of their algorithms (news feed, content classification and enforcement, etc.); (2) give the public the ability to file and appeal information requests related not only to their personal data but also to matters of public interest, including content moderation treatment; (3) create or foster an ecosystem of NGO (or even government-backed) watchdogs that audit the data, publish reports and inform lawmakers; ideally this government body would work hand in hand with the Oversight Board; and (4) create mechanisms for reparations when investigations find wrongful acting by companies.

In several countries around the globe there are oversight bodies with legal power to access the internal data of companies operating in key industries; classic examples of this type of governance include pensions, banking and energy. A body of this kind would be a good first step in the USA, seat of the headquarters of most social media companies, in replacement of the FTC. However, in the future, international initiatives should be established to consider countries in the periphery. No effort to disrupt manufactured ignorance will be complete without access for those who are marginalized and given less power.

5. Final Words

It is clear that agnogenesis arguments lend themselves to paranoia. Yet regardless of the intentionality behind this ignorance, it clearly exists in ways that hold us back. In this context, the current imperfect antidote for manufactured ignorance seems to be whistleblowing. This has been the case for government-created ignorance through classification: think of Snowden, WikiLeaks, Chelsea Manning, etc. It has also been true for social media, where the greatest opportunities for public knowledge have been provided by whistleblowers and/or journalists (think, for example, of the Dispatches investigation on content moderation or the more recent COVID vaccine hesitancy scores). They give us some of the key pieces of the wicked-problem mosaic that is content moderation at large. Thus, if companies want to keep some sort of control over what is disclosed, I believe it is in their best interest to stop pretending that “things are too complicated” or that they would be revealing “trade secrets”. Comms teams need to start explaining, and integrity teams within the companies need to start disclosing key relevant data and creating ways for people to audit them. It surely is easier to keep us under “the weight of the night”, as the Chilean “founding father” Diego Portales put it, but it is also clearly worse overall.

Is agnogenesis always bad? Not necessarily. Manufactured ignorance might be helpful in some cases, specifically in “perfect theory” scenarios. A relevant thought experiment is John Rawls’ ‘veil of ignorance’: the only way for people to write just rules for a society would be to make sure they know everything they need to do their job but absolutely nothing about their own position in said society. This ignorance would, in theory, make decision makers take positions that benefit society as a whole instead of their own interests. It is a paradigm we could explore extending to policy-making in the content moderation world: how to leverage manufactured ignorance for governance and fairer outcomes.

Clegg himself has said that “a better understanding of the relationship between the user and the algorithm is in everyone’s interest”. He is right. But now we know he may be saying it for the wrong reasons.

See you in the next adventure,

parra-yagnam

P.S. It is clear that we could dedicate a whole article to the business of manufactured ignorance that surfaces in the form of mis/disinformation. It would be valuable to start talking about agnotology instead of misinformation. If enough folks request it, I might take a stab at this.

#thoughts

Robert Proctor and Londa Schiebinger (2008), Agnotology: The Making and Unmaking of Ignorance, Stanford University Press.
