Internet Anthropology

Table of Contents:

Introduction

1. Universalism


1.1. Idealism

1.2. Maxims embedded in historical frameworks

1.3 Ideal of the commons, post-scarcity

1.4. Democratic knowledge contributions

1.5. Internet Communities

1.6. Copyleft movement grew out of remix culture

1.7 Anti-authoritarian activism

1.8 Siloed web

1.9 Gaming culture and male trolls

2. Online Communitarianism

2.1 Online Identity

2.2 Personalisation

2.3 Confirmation Bias

2.4 Arguing with Straw Men

2.5 Shared Eccentricities

2.6 Shifting Norms

2.7 People converge towards their likeliest behaviour

2.8 Moulding voters

3. Algorithms Systematise Unfairness

3.1 Definition of “Algorithm”

3.2 Attention Economy

3.3 Surveillance Capitalism

3.4 Addictive by Design

3.5 Ubiquitous with no way to opt out

3.6 Numeracy not Neutrality

3.7 Garbage In, Garbage Out

3.8 Sentencing Algorithms and Criminal Injustice

3.9 Algorithms for Assassination

3.10 Metrics of Success

3.11 Growth is an Awful Metric to Optimise For

3.12 Better Metrics of Success

3.13 Solutions

Introduction

This is an observation of the consequences of the formal qualities of the internet as a medium, and of their implications for political organising. It traces the evolution of internet politics from (1) a universalist to (2) a communitarian model of organising, examines how social media platforms are optimised for maximising attention and predictable behaviour, and considers (3) the broader implications of letting algorithms rule our lives.

The internet has profoundly impacted most areas of society. Its 3.4 billion users have collectively taken part in a revolution analogous to the invention of the printing press or the steam engine. Some worship at its altar, such as WIRED’s editor Kevin Kelly, who posits that the web could end up being a fortuitous participatory project: building upon all our aggregate contributions and whims, we would devise a set of maxims that would ultimately regulate our society. This exuberance speaks to the level of technological utopianism and misguided universalism common in the culture of those who make a living from the internet’s adoption. In predicting that the internet’s information will give us a perfect political structure, Kelly anthropomorphises a communication tool and grants omniscience to the wisdom of the crowds without pondering how or why those crowds form. It is understandable that someone like Kelly would find this narrative seductive: he saw the trajectory of new technology in tension with the governments and gatekeepers who funded it, and saw the tremendous power of disintermediation, the connections among like-minded eccentrics, and the serendipity and innovation born of cross-pollination as people came online and expressed themselves in an unfettered manner. In the aftermath of 2016, Kelly’s proposal reads as naive – the internet is no longer experienced as a utopia, nor do people see its antagonisms as a model of constructive discourse. Many of the internet’s counter-cultures egg each other on by posting vile imagery and attempting to shock or “trigger” those with whom they disagree, disproportionately targeting people of colour and women. Indeed, Microsoft experimented with a very basic version of Kelly’s fantasy by creating a chatbot named “Tay” on Twitter that conversed with and learned from other users. Within hours, anonymous posters had bombarded Tay with abuse and references to Nazi propaganda, such that it began using its newfound platform to deny the Holocaust.

PART 1: UNIVERSALISM


1.1. Idealism

The idealistic model of the internet that Kevin Kelly has in mind is closer to how the internet was conceptualised in academic and counter-cultural spheres in the 1980s. This ideal lies somewhere between the Library of Alexandria and Denis Diderot’s Enlightenment project to catalogue all scientific knowledge and disseminate it – against the wishes of the Jesuit educators who were the previous gatekeepers. The internet can also be inscribed in this power struggle, as a tool for disseminating knowledge, culture and information universally.

1.2. Maxims embedded in historical frameworks

In 1962, at the height of the Cold War, the US Department of Defense was looking to build a fail-safe system that would keep US information circulating even if a nuclear strike took down centralised infrastructure. The Advanced Research Projects Agency hired computer science researcher J. C. R. Licklider, who wrote about a vision of man-computer symbiosis as an ideal in which machines could connect minds and extend human cognition. To encourage adoption and dissemination of the network, Licklider reached out to academic sites such as UC Berkeley, the University of Oklahoma and Bell Labs, which quickly embraced the network as a universalist tool for amplifying knowledge. The network that would later become the internet was designed so that information travelled directly between end nodes, leaving no centre point vulnerable to attack. In doing so, the network “sucks power out of the center”, in Esther Dyson’s words. Unlike the old, centralised schema of previous mass media like radio or TV, the internet transcends the “hub-and-spoke architecture with unidirectional links to the end points” of old and shifts to a “distributed architecture with multidirectional connections among all nodes in the networked information environment” (Yochai Benkler). The internet’s standards were further codified through the 1980s and 1990s by the Internet Engineering Task Force, a group of government researchers operating on consensus. Sandra Braman at Texas A&M University identified the principles and maxims that emerged from this culture, namely a desire to future-proof the technology: making it interoperable, flexible and easily extensible, and maximising innovation by encouraging minds from disparate fields to contact each other. These standards set up the rules of engagement online: “Technological innovations are similar to legislative acts or political foundings that establish a framework for public order that will endure over many generations” (Langdon Winner).
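To make the architectural contrast Benkler describes concrete, here is a minimal Python sketch – node names and topologies are invented for illustration – of why a distributed mesh survives the loss of any single node while a hub-and-spoke network does not:

```python
from collections import deque

def reachable(graph, start, removed):
    """Breadth-first search over the nodes that survive a failure."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in removed and neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# Hub-and-spoke: every spoke talks through the hub.
hub = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
# Distributed mesh: each node has multiple routes.
mesh = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}

print(reachable(hub, "a", removed={"hub"}))  # {'a'} -- losing the hub isolates everyone
print(reachable(mesh, "a", removed={"b"}))   # {'a', 'c', 'd'} -- traffic routes around the loss
```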

1.3 Ideal of the commons, post-scarcity

The ideals of the early internet were about universally disseminating knowledge, putting potential innovators in touch, and creating a new commons. The internet makes it possible to copy information or media at no cost, facilitating this: “In the analogue world, copying was difficult and degenerative. […] In the digital world, copying is effortless and perfect. […] You can't even look at something on the web without (unknowingly) making a copy of it” (John Naughton), which means digital goods are endlessly replicable with no loss or marginal cost. One is reminded of Thomas Jefferson’s observation about how ideas benefit from being shared: “he who lights his [candle] at mine receives light without darkening me”. Indeed, while a bird today builds its nest in much the same fashion as it did a century ago, humans are unique in that we improve on our tools and on our peers’ previous ideas – the internet extended our ability to document, transmit and build upon knowledge. John Milton argued in Areopagitica that truth was best achieved through a free and open encounter of views (though he was only interested in defending the freedoms of Protestants). Gerard Hauser, in Vernacular Dialogue and the Rhetoricality of Public Opinion, expands upon Jürgen Habermas’ concept of the public sphere by defining it as “a discursive space in which individuals and groups associate to discuss matters of mutual interest and […] to reach a common judgment about them”. The internet has facilitated the creation of such spaces: “increased choice and social networks lead to greater exposure to diverse ideas, breaking individuals free from insular consumption patterns” (Yochai Benkler).

1.4. Democratic knowledge contributions

The internet allows for great serendipity of discovery, living up to its universalist ideals as a tool for democratising knowledge. Stewart Brand created the Whole Earth Catalog in 1968, a list documenting tools and skills for building a better society (from scratch, if need be). Readers would submit new articles to Brand, who would include most of them in the next issue. While regular publication of the Whole Earth Catalog ceased in 1972, Brand had the foresight to put the collective wisdom he had elicited from his audience up on a precursor of the internet in 1985: a bulletin board system called the Whole Earth 'Lectronic Link, which allowed anyone with a connection to contribute their wisdom. One can think of this as a precursor to today’s “how to” tutorials, which are similarly do-it-yourself, spreading aptitudes from enthusiastic experts. The most impressive example of collective construction of knowledge on the internet today is Wikipedia, which organises collaboration in real time and at scale. Wikipedia is a non-commercial entity operating solely on user donations. Its contributors are all volunteers, operating in a “gift economy” where one attains a positive reputation by making civically minded edits to encyclopedic articles. Wikipedia builds upon Diderot’s ideal because it is constantly perfectible and truly able to draw in experts from all fields – with a slight preponderance of nerdy and technical topics. In epistemological terms, knowledge is now a function of curiosity once someone is online, and internet users expect to encounter knowledge from wildly different points of view, with an initial bias towards privilege, since early adopters were wealthy hobbyist techies and universities. However, this access to knowledge is meaningless without curation, in part because of the sheer volume of what is being created: every year of cultural production on the internet is equivalent in sheer output to the first two millennia of human culture.

1.5. Internet Communities

The internet was created to connect people, and it created communities. Benedict Anderson wrote in Imagined Communities about the phenomenon whereby people who would never expect to meet in the physical world can imagine themselves part of a community if they consume the same media. In his time, he emphasised the role that the printing press and newspapers played in giving rise to the notion of the nation-state. Sociologist Zeynep Tufekci writes in Twitter and Tear Gas that “the shift from face-to-face communities to communities identified with cities, nation-states, and now a globalized world order is a profound transition in human history”. The internet is producing tribal identities and groups that are located not in particular spaces but around transnational interests – what I will refer to as “shared eccentricities” in part 2.

1.6. Copyleft movement grew out of remix culture

While it is important to outline the universalist ideals of its creators, the internet was also created to facilitate innovations in ways they themselves hadn’t anticipated. Jonathan Zittrain calls this “generativity”: a system is generative when it empowers an audience to use it to create things that the people who made the system had never thought of. A lot of early internet activism in the 2000s centred on reforming copyright, because computers made it easy to obtain, transform and distribute copyrighted works without barriers. Richard Dawkins defines the “meme” as the unit of cultural transmission, akin to the gene, that gains prominence by reproducing itself. Internet memes are variations on an idea, usually transformative. They are the syncretic culture of the internet, often laterally bridging or juxtaposing ideas or works, allowing a subversive reinterpretation of culture among peers. This is what Lawrence Lessig calls “remix culture”, which threatened existing media distributors by reducing their products to raw material for secondary works and by criticising the notion of a “monoculture”. The practice builds upon the counter-cultural traditions of détournement and culture jamming, taking its roots in the Situationist movement of Guy Debord. Existing copyright law made these transformative works illegal, because in the US copyright has applied automatically to every fixed expression (like writing) since 1976. Early internet activists such as Lessig, Doctorow or Jimmy Wales defended copyright rules that better matched this new norm of transformative online sharing, and were labelled “copyleft” activists by industry. This ideal of reforming copyright came close to being a collective ideology for the web, and grew into real-world politics, as seen in the emergence and success of so-called “Pirate Parties” in Sweden, Iceland and the Netherlands, and in the global mobilisation against the 2012 Stop Online Piracy Act.

1.7 Anti-authoritarian activism

Broad access to the internet makes it much harder for governments to hold a monopoly on the information their citizens can access – an essential tool of control in authoritarian regimes. George Orwell wrote that “the Russian masses could only practice civil disobedience if the same idea happened to occur to all of them simultaneously”. Modern tools like Twitter allow ideas to be expressed ‘simultaneously’: activists can coordinate via hashtags, a “many-to-many” coordination tool. Evgeny Morozov, in Iran Elections: A Twitter Revolution?, argues that technological progress has always played a role in facilitating protests: “remember that the early anti-communist protests in Poland were facilitated with the help of the Xerox machines!”. In that example, the dissemination of reproducible tracts allowed activists to reveal their discontent with the regime to each other without fear of reprisal. The trend continues: today’s “digitally networked public sphere can […] help people reveal their (otherwise private) preferences to one another and discover common ground” (Zeynep Tufekci). The spread of ubiquitous cell-phone cameras over the last decade has increased citizens’ ability to document abuses of power, and their footage moves the conversation beyond “authorities said, activists claimed”. In 2011, educated young activists across the Middle East seized on the opportunity these new internet tools provided and rose up in protest against their leaders: we refer to this as the Arab Spring. Tufekci argues the movement was an outgrowth of online culture: “protesters’ language about technology, protests, and politics [in Tahrir Square, Egypt] resembled those of protesters elsewhere in the world. […] Egyptian youth and New York youth, different in many ways, also sounded similar themes in discussions: antiauthoritarianism, distrust of authority, and desire for participation. […] Globalization from below had arrived”. New political movements now organise on social media to further their goals. These platforms allow them to “find one another, to craft and amplify their own narrative, to reach out to broader publics, and to organize and resist” (Tufekci). However, because hierarchy is anathema to this structure, such movements are not nimble enough to agree on how to shift their tactics: “they lack mechanisms for making decisions in the face of disagreements among constituents, their mistrust of electoral and institutional options […] [forces them to] return to their only moment of true consensus: the initial […] slogan or demand or tactic.” (Tufekci). Modern authoritarian regimes that want to control information can let these movements burn themselves out by starving them of attention. While regimes now have a harder time censoring information, they have weaponised distraction: the Chinese government employs a “50-cent army” of paid commenters to flood social media feeds with frivolous topics whenever activists uncover embarrassing information, burying it from sight.

1.8 Siloed web

The greatness of the early open internet was that it made speech hard to block and allowed people to innovate without permission, which decentralised most internet users’ informational diet. Browsing the early, textual web meant pursuing hyperlinks to make horizontal connections between pages and viewing them side by side to compare them – a far more associative, less linear and less passive experience than television or radio. In the last decade, however, visuals have increasingly replaced typography online, partly driven by increases in bandwidth. The experience of the web is now a siloed one, where users are stuck in lanes. Each "app" on a smartphone is a unitary experience, where news sources appear to the user indiscriminately as part of a continuous personalised feed. Clicking a link keeps the user locked inside the single-purpose app (and inside that app’s advertising market), so they can return to the feed after consuming a morsel of “content”. Bloggers no longer get shared widely enough to be sustainable because they are no longer a destination. Internet users browsing in this fashion no longer discover new places serendipitously; they are only promoted things made visible by prior habits and revealed preferences.

1.9 Gaming culture and male trolls

Video games emerged as a global cultural reference for early internet users: they overlapped with users’ technical inclinations, and provided something close to a shared worldwide reference point, since most players grew up playing some of the same works. In the late 1980s and 1990s especially, such games were mostly a Japanese cultural export. In most of the world, this wave of video game commercialisation coincided with a gendered marketing push: computers became toys for boys. In a tragic example of the pernicious consequences of gender-role promotion in children’s advertising, this shut many girls off from becoming familiar with computers – their parents were less likely to put one in their rooms, for instance. This is reflected in the proportion of women majoring in computer science, which fell from 35% in 1985 to 20% in 2005 (where it has roughly stayed since, according to the NSF). Women found computer science classes less familiar than did male peers who had been socialised into computers via video games from a young age. 4chan is an anonymous website, an English-language offshoot of the Japanese anime and gaming forum 2chan, designed around ephemerality and anonymity. A thread expires if no one has engaged with it in the last day, and gets put back at the top of the pile whenever someone replies to it. This trains its users to be good marketers and to manipulate the attention economy – what Dale Beran calls the “evolutionary [Darwinian] struggle” in Max Read’s article “The Whole World is a Message Board”.
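That bump-and-expire mechanic can be modelled in a few lines of Python – a toy sketch with an assumed one-day lifetime, not a description of 4chan’s actual code:

```python
import time

class Board:
    """Toy model of a bump-ordered, ephemeral message board."""
    def __init__(self, ttl_seconds=86_400):
        self.ttl = ttl_seconds      # threads expire after a day without replies
        self.threads = {}           # thread id -> timestamp of last engagement

    def post_or_reply(self, thread_id):
        self.threads[thread_id] = time.time()   # any reply "bumps" the thread

    def front_page(self):
        now = time.time()
        # Prune threads nobody has engaged with inside the TTL window...
        self.threads = {t: ts for t, ts in self.threads.items() if now - ts < self.ttl}
        # ...and sort the survivors by most recent engagement.
        return sorted(self.threads, key=self.threads.get, reverse=True)
```

Because only replies keep a thread alive, the front page selects for whatever provokes the most engagement – the evolutionary struggle Beran describes.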
4chan users are disproportionately male and often frustrated. Beran calls the site a gathering place for the people who felt most thwarted by the challenges of offline life: “Everyone on 4chan drifts to the right because their lives still aren’t working out”. As a result, much of their collective organising revolves around their ideal of internet “sovereignty” – the idea that the internet makes it impossible to control information, and that this is the “real” space in which they can be powerful: “Posters and trolls wanted to reserve for themselves on the internet the power and freedom they couldn’t find off it” (Beran). This manifests itself in targeted harassment campaigns, mostly against women trying to participate in gaming culture. 4chan users often rationalise their abuse by dissociating from it: they believe their “inner” technological life and their day-to-day life are separate and do not affect each other. Psychology professor John Suler refutes this in his study of “The Online Disinhibition Effect”: “The self does not exist separate from the environment in which that self is expressed”. He also blames asynchronicity for facilitating the disconnect: “Not having to cope with someone’s immediate reaction disinhibits people”. In Kill All Normies, Angela Nagle writes that an additional layer of dissociation is facilitated by 4chan’s jocular culture: “interpretation and judgment are evaded through tricks and layers of metatextual self-awareness and irony” – if someone gets offended, they are told not to take things too seriously, that it was all a big joke to get a rise out of people. Laurie Penny confirms this in “I’m with the Banned”, her reporting from the 2016 Republican National Convention: “for [trolls] the reaction itself is the win”. She calls it “the game of turning raw rage into political currency”. These people “take pride in performative bigotry and call it strength”. She identifies them and their memes as what happens when “weaponised insincerity is applied to structured ignorance.” As she argues, “like Trump […] they channel their own narcissism to give voice to the wordless, formless rage of the people neoliberalism left behind”. Steve Bannon ran afoul of these trolls when he invested a Goldman Sachs portfolio in the Chinese company Internet Gaming Entertainment, which paid Chinese workers pennies to “farm” artificial in-game currency in World of Warcraft by performing menial tasks and selling the fruits of this artificial labour to wealthy Westerners. 4channers saw the practice as an encroachment on their sovereignty, since it let others with less computer time bypass their progress, and gamers successfully brought a class-action lawsuit against Bannon’s company. Bannon was taken aback and called them “smart, focused, relatively wealthy, and highly motivated about issues that mattered to them”. He soon realised, in his own words, that “these rootless white males had monster power”, and wasted no time in exploiting and pandering to it.

PART 2: ONLINE COMMUNITARIANISM

2.1 Online Identity

In The Presentation of Self in Everyday Life, Erving Goffman argues there is no true self, only performed roles. The internet, as an anonymous or pseudonymous space, can be a great playground for experimentation with other performed selves (comedically, artistically, sexually). In Outsiders, Howard Becker outlines the ways in which an individual’s self-expression is constrained by how society perceives and labels them. Early commercials for the internet depicted black people and disabled people using a computer and smiling, with the implication that it provided a space where their difference could become “invisible” and they would be free of discrimination. This is a hegemonic ideal, since it removes their agency. John Perry Barlow wrote “A Declaration of the Independence of Cyberspace” from Davos in 1996 in protest of the newly signed Telecommunications Act. He presents a similar ideal: “We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth. […] Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion” (Barlow). People who identified as being on the margins of society (geeks, nerds, queer folk, fetishists) became early adopters of the internet because it was seen as a counter-cultural medium, and they were eager to find people like themselves, or people who shared an interest, across geographic distances. They sought out similarity along marginal lines to foster community. Today this process is automated for everyone, which leads people to double down on their existing situation. Furthermore, Facebook collapses a person’s various social roles into a single identity. Zuckerberg justifies this by saying that “having two identities for yourself is an example of a lack of integrity”.

2.2 Personalisation

Facebook is how nearly half of Americans get their news. Mark Zuckerberg is quoted in David Kirkpatrick’s The Facebook Effect (p181) as saying that “a squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa”. Iranian blogger Hossein Derakhshan questions whether the News Feed’s criteria of relevance are optimal: "the prominence of the stream today doesn’t just make vast chunks of the internet biased against quality – it also means a deep betrayal to the diversity that the world wide web had originally envisioned”. Eli Pariser coined the term ‘filter bubble’ for this phenomenon: “[social media feeds] (such as Google’s) are architecting a filter bubble around each user, blocking [important and uncomfortable] information […] from entering and incorporating information selected only on impulse—that of the first click”. Filter bubbles lock users into personalised echo chambers built from what they’ve chosen before, rather than what they say or think they want.

Google started personalising search results based on previous behaviour, rather than ranking by citations, in 2009. Twitter, under pressure from Wall Street, began sorting the “best tweets first” rather than chronologically in 2016. Eli Pariser worries that such models may lead us into an “informational determinism in which what you’ve clicked on in the past determines what you see next”. Anastasia Utesheva’s “Theory of Digital Technology and Evolution” posits that our brains are hardwired to attend to news that we think concerns us, that resembles us. This is an expression of the mere-exposure effect: people are drawn to familiar things and find comfort in repetition. Buzzfeed’s listicles exploit this by being tailored to very specific niches and listing the cultural references those niches share, often fostering groupthink within a community. Personalised social media make it possible to choose one’s own reality, and strive to create spaces where people can be comforted in their worldview.
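A deliberately crude Python sketch of the feedback loop Pariser describes – past clicks become the ranking signal, so what you clicked before determines what you see next (topics and posts are hypothetical):

```python
from collections import Counter

def rank_feed(posts, click_history):
    """Rank posts by affinity with topics the user already clicked on --
    a crude stand-in for engagement-based feed ranking."""
    affinity = Counter(click_history)   # past behaviour becomes the model
    return sorted(posts, key=lambda p: affinity[p["topic"]], reverse=True)

posts = [{"id": 1, "topic": "politics"}, {"id": 2, "topic": "science"},
         {"id": 3, "topic": "politics"}]
history = ["politics", "politics", "sports"]

for post in rank_feed(posts, history):
    print(post)  # politics rises to the top; science sinks, so it gets clicked even less
```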

2.3 Confirmation Bias

The internet makes it possible to quickly resolve factual disputes. Consequently, debates have shifted to become more about narratives and feelings tied to cultural identities, backed up by divergent facts or anecdotes. These are often sourced back to outlets designed to pander to a niche and to discredit alternative sources. One might see an echo of Plato’s concern in the Republic that democracy without discourse would devolve into “tribes of feelings”. The U.S. National Intelligence Council describes this phenomenon in its report Global Trends: Paradox of Progress: “The information environment is fragmenting publics and their countless perceived realities—undermining shared understandings of world events that once facilitated international cooperation”. Once people get their information from opinion enclaves, they confer among themselves and intensify their views. As Cass Sunstein outlines in #Republic: Divided Democracy in the Age of Social Media: “Those who flock together […] will end up both confident and wrong, simply because they have not been sufficiently exposed to counterarguments. They may even think of their fellow citizens as opponents or adversaries in some kind of ‘war’”.

2.4 Arguing with Straw Men

Arguments on the internet are often stripped of context and excerpted among partisans as a means of getting riled up against the opposition. As Whitney Phillips, professor at Mercer University, explains: “When you’re engaging with a meme, you’re not engaging with a full narrative”. Indeed, screenshots of arguments prevent a conversation from being traced back to its original link, and are prone to faking. As with all tribes, ties are bound stronger by having an antagonistic force to critique collectively, even if it ends up being a mangled caricature of opponents’ views. Because of the disembodied nature of the internet, members of an in-group are prone to dehumanising the “other” and questioning their authenticity – arguing, for example, that they are paid by “Correct the Record” (a Democratic super PAC which paid people to comment on social media in support of Clinton in 2015 and 2016), or by the Russian government, which is known to hire troll armies and use bots to create fake accounts and amplify its message. This lets partisans feel comfortable that those who publicly disagree with them have ulterior motives and should be discredited entirely. The study “The Hostile Audience: The Effect of Access to Broadband Internet on Partisan Affect” by Yphtach Lelkes, Gaurav Sood and Shanto Iyengar corroborates this: “exposure to messages attacking the out-group reinforces partisans’ biased views of their opponents”, and therefore “the ability to select information sources that routinely denigrate the out-party is likely to lead to increased out-party animus”. Partisan political discourse on the internet consequently leads people to collectively gang up on an image of the opposition built in their heads, with no bearing on reality and no real exchange of views.

2.5 Shared Eccentricities

Communities are bound together by homophily – the desire to interact with people similar to oneself – and policed by pluralistic ignorance – the fact that in homogeneous environments, disagreement is assumed to be marginal, such that people self-censor rather than express dissent. Social media enable the emergence of long-distance homophily based on viewpoint, leading to the formation of new political communities. David Rothkopf argues in The Great Questions of Tomorrow that people’s interests can more easily transcend their demographic categories when expressed online: “Today, Internet users can go venue shopping for cultural commonalities— be they religious, musical, artistic, or political”. These groupings allow people to feel they are among a subset of peers, most often congregating around a characteristic that marks them out from the dominant paradigm – a shared eccentricity. By socialising on the internet, people can feel comfortable expressing their essential qualities even when these are at odds with the norms of the physical culture in which they are embedded. For instance, closeted teens from repressive environments can find spaces where their desires are seen as normal and met with support.

2.6 Shifting Norms

Some online communities can enable behaviours recognised as deviant by mainstream society. The study “The Majority Illusion in Social Networks” by Kristina Lerman, Xiaoran Yan, and Xin-Zeng Wu describes how people adopt the beliefs they encounter most often, a sample that is frequently biased: “Local prevalence of some attribute among a [person’s connections] can be very different from its global prevalence, creating an illusion that the attribute is far more common than it actually is, […] leading them to accept as a norm a behavior that is globally rare”. They observe that the phenomenon is particularly prevalent in networks where many less-connected users follow a few users with large followings, such as Twitter, because these influencers have outsized sway. This can have an accelerating effect: according to the study “Social Consensus Through the Influence of Committed Minorities” from the Rensselaer Polytechnic Institute, once a committed 10% of a group holds an unshakeable view, the rest of the group is gradually pulled towards it.
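The majority illusion can be reproduced on a toy network in a few lines of Python. In this invented example, one highly connected influencer holds an attribute that only 20% of the network actually has, yet every follower sees it as unanimous:

```python
def local_vs_global(graph, has_attribute):
    """Compare how common an attribute looks from each node's vantage point
    with how common it actually is across the whole network."""
    global_share = sum(has_attribute.values()) / len(graph)
    for node, friends in graph.items():
        local_share = sum(has_attribute[f] for f in friends) / len(friends)
        print(f"{node}: sees {local_share:.0%} locally vs {global_share:.0%} globally")

# One highly connected influencer holds the attribute; four followers do not.
graph = {"influencer": ["a", "b", "c", "d"],
         "a": ["influencer"], "b": ["influencer"],
         "c": ["influencer"], "d": ["influencer"]}
has_attribute = {"influencer": True, "a": False, "b": False, "c": False, "d": False}

local_vs_global(graph, has_attribute)
# Each follower sees the attribute in 100% of their network,
# even though only 20% of users actually hold it.
```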
In some communities, users can reward each other by voting on or liking posts, creating a reputational score that quantifies approval and makes people more confident in the legitimacy of their stances. Subcultures often develop their own parlance to shift norms, or to criticise the arbitrariness of “normal” characteristics in the dominant linguistic paradigm. This often shifts local norms within the group, and can be adopted by wider society if promoted and deemed acceptable. This was the case with the adoption of the term “cisgender”, such that “transgender” ceased to be defined as a deviation from the norm and instead became recognised as an equally valid gender identity. Unfortunately, these parlances are often exclusionary, and used to create questionable norms. For example, autistic people who found each other online as more than patients began defining themselves in opposition to those without the disorder, whom they denigrated as “neurotypicals”, sometimes disparaging their supposedly inferior moral qualities. Some communities of young women idealise emaciated anorexic bodies by referring to them as “thinspiration”, posting thigh-gap pictures and sharing vomiting tips under the hashtag. Yet another example of a linguistic norm shift is the “liftblr” community of shoplifters posting on Tumblr to document and share their stolen goods (mostly cosmetics) as status symbols. They justify their actions with anticapitalist rhetoric, belied by a shallow embrace of materialism. One such “lifter”, who goes by Unicorn-Lift, is quoted as saying: “I only lift from stores that are multi-million dollar companies. I would never steal from a person or a small local store”. These moral justifications among impressionable young people are an attempt to normalise hedonistic theft by portraying the thieves as victims of consumerist society. Some of these collective norm shifts online have moved fringe suspicions and conspiracies into the mainstream: according to the study “On pins and needles: How vaccines are portrayed on Pinterest” by Jeanine P. D. Guidry, Kellie Carlyle, Marcus Messner and Yan Jin, about 25% of vaccine-related posts online were critical of vaccines in the mid-2000s; a decade later, a majority (75%) of such posts on Pinterest were anti-vaccine. This is a result of the flattening of expertise online, according to Harry Collins: "On the Internet, anyone can join in the conversation about, for example, the safety of vaccines. The experience of John and Jane Doe and their children is right up there with the Nobel Prize-winning research because the Nobel Prize-winning research has been done by people like you and me”. As a result, concerned parents are able to find corroborating information that supports their skepticism. The process is accelerated by personalised algorithmic recommendations. According to Renée DiResta in “Social Network Algorithms Are Distorting Reality By Boosting Conspiracy Theories”: “once people join a single conspiracy-minded group, they are algorithmically routed to a plethora of others. Join an anti-vaccine group, and your suggestions will include anti-GMO, chemtrail watch, flat Earther, and “curing cancer naturally” groups. Rather than pulling a user out of the [conspiracy theory] rabbit hole, the recommendation engine pushes them further in”. In this sense, personalised recommendations herd users towards the views most expected of them, heedless of their validity.

2.7 People converge towards their likeliest behaviour

Facebook uses our past activities to determine what our future choices will be. Not only does it market these choices to us before we know we are going to make them, it also tries to make us take them more dependably and predictably. Social media companies aim to reduce anomalous behaviour and get people to behave closer to the most statistically probable version of themselves. By nudging users towards their most statistically likely cohort, marketers fashion people into more legible, precisely segmented demographics. Platforms can use all the data users reveal through their browsing activity to make predictions about them with about 80% certainty: they can predict, for instance, that there is an 80% chance someone will come out as homosexual, or go on a diet, months before the person in question makes that determination (Rushkoff p42). The world as shown through a person’s device is confirmed against a probable reality they had no say in choosing. To what extent is this an endogenous, self-fulfilling prophecy? These predictions are based on browsing data and metadata, which creates a trend towards homogeneous categories by extrapolating from demographic information. In this sense, web users went from browsing incidentally to being branded like cattle. Social media slots its users into higher-level categories and treats them in ways that exacerbate their proclivities, extracting maximum value from each little tendency. We simplify ourselves by letting ourselves be defined along these prevailing dichotomies, without being aware of the categories into which we are being sorted. The informational asymmetry means the user is transparent to the platform owner, ad networks, data brokers and intelligence agencies, who are in turn rather opaque to us. The process is designed to make people more predictable without their knowledge – the very definition of determinism. YouTube’s recommendations, likewise, tend to push viewers towards more extreme versions of an argument to keep them engaged.
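One hedged way to picture this cohort-slotting is a nearest-centroid assignment – a simplification of whatever clustering a real platform might use, with made-up behaviour vectors:

```python
def nearest_cohort(user, cohorts):
    """Slot a user into the cohort whose average behaviour their browsing
    vector most resembles (squared Euclidean distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cohorts, key=lambda name: distance(user, cohorts[name]))

# Hypothetical behaviour vectors: (news clicks, shopping clicks, gaming clicks)
cohorts = {"news junkies": (0.8, 0.1, 0.1),
           "shoppers":     (0.1, 0.8, 0.1),
           "gamers":       (0.1, 0.1, 0.8)}

user = (0.5, 0.3, 0.2)          # a mixed, ambiguous profile
print(nearest_cohort(user, cohorts))   # 'news junkies'
# From here on, the platform serves this user more of what 'news junkies'
# like, nudging the ambiguous profile towards the cohort's centroid.
```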

2.8 Moulding voters

These techniques for shaping publics were used to great effect by presidential campaigns’ digital operations. In Weapons of Math Destruction, Cathy O’Neil writes that “the goal for the Obama campaign was to create tribes of like-minded voters, people as uniform in their values and priorities” (p188). Reclusive hedge fund billionaire Robert Mercer exploited this political sorting in his support of the 2016 Brexit and Trump campaigns. Through his data-mining company Cambridge Analytica, Mercer matched psychological messages to demographic traits using data that people volunteered in self-administered personality quizzes about innocuous topics, such as which sitcom character they most identify with. From these quizzes, Cambridge Analytica built psychographic models and micro-targeted issues that would be emotionally resonant to a given cohort, aiming its lies and fear-mongering at those most susceptible to buy into them. These personalised political-psychological appeals are a dangerous setup for authoritarian rule: citizens no longer have a shared reference point against which to judge what the government is doing, and every cohort is armed with its own facts and narratives. The environment sows confusion and exhausts citizens’ capacity for critical thinking and for discerning truth. Personalised advertising removes the shared frame of reference around a cultural artefact in favour of narcissistic appeals. Walter Benjamin wrote in The Work of Art in the Age of Mechanical Reproduction of how mass production robbed cultural works of their aura by removing them from their contexts. In today’s personalised feeds, people are alienated from ever considering a work in the same context as their peers. In fact, as content producers, web users are constantly performing a version of themselves on social media. On Instagram, for instance, users post snapshots to publicise the experiences they have consumed and to signal social status – what Thorstein Veblen called “conspicuous consumption”. People curate their social media production to present an idealised version of themselves, each running their own little reality show, curating their own lives for external consumption – what people unironically refer to as their “personal brand”. Naomi Klein wrote in 1999 in No Logo that success in the modern economy required people to “self-incorporate into [their] very own brand—a Brand Called You. […] [We will] lease ourselves out to targeted projects that will in turn increase our individual portfolio of “braggables””. For employers to think favourably of us, we must manage our online brand to their liking, turning ourselves into a product to be more “sellable”. On the internet, a large following is an end in itself, which means popular trends are quickly co-opted for inauthentic commercial purposes. This logic of branding is zero-sum, since it competes for attention, and it is personal and possessive – antithetical to building a broad, expansive social movement.

PART 3: ALGORITHMS SYSTEMATISE UNFAIRNESS

3.1 Definition of “Algorithm”

An algorithm is a set of instructions for a computer to perform, like a recipe. It requires two things: data to process, and a definition of success. An algorithm is a simplified, formalised version of a decision-making model in a person’s head. However, humans are imperfect at translating the heuristics of their thinking into algorithms, and the builder of an algorithm gets to define what success looks like, projecting their own agenda. Examples include algorithms for sorting job applicants, approving credit, and ranking the Facebook News Feed.
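A minimal Python sketch makes the point that the definition of success is a design choice, not a given – the applicant data and scoring functions here are invented:

```python
def rank_applicants(applicants, success_score):
    """An algorithm as a recipe: data in, a definition of success, ranked output.
    Whoever writes `success_score` decides what 'best' means."""
    return sorted(applicants, key=success_score, reverse=True)

applicants = [{"name": "A", "experience": 4, "referrals": 0},
              {"name": "B", "experience": 1, "referrals": 3}]

# Two different, equally 'objective'-looking definitions of success:
by_experience = lambda a: a["experience"]
by_connections = lambda a: a["referrals"]

print([a["name"] for a in rank_applicants(applicants, by_experience)])   # ['A', 'B']
print([a["name"] for a in rank_applicants(applicants, by_connections)])  # ['B', 'A']
```

Same data, same mechanical recipe, two different hiring outcomes: the politics live entirely in the choice of the success function.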

3.2 Attention Economy

The mechanisms of personalisation and filtering described in part 2 are a consequence of the attention economy: algorithms on social media are designed to make users comfortable and keep them in a placid state, so that they are receptive to the next advertisement. Personalisation creates easily targetable groups, which runs against the universalist ideal of the internet as equalising access to knowledge. The attention-selling model was pioneered in 1833 by Benjamin Day’s New York Sun, the first “penny press” paper: the paper itself was cheap, and it made its money by attracting readers’ attention and selling it to advertisers. Retail magnate John Wanamaker complained around the turn of the last century that half the money he spent on advertising was wasted – the trouble was, he didn’t know which half. This reflects how generalised advertising aimed at relatively diffuse audiences and lacked tools for monitoring its success rate. Online advertising, in comparison, lets marketers precisely monitor what gets clicked on, and can target very specific vulnerable populations, such as make-up advertisements aimed at women feeling “fat, lonely or depressed”, or an advertisement for a trip to Vegas served to a user having a manic episode. Facebook wrings as much attention as it can out of its users so that it can sell that attention to advertisers. Even artists are auctioning off their fans’ information to the highest bidder: Jay-Z released an album as a "free" app on Samsung phones which collected and sold its users’ contacts, location data and phone records to third parties. The creator of Facebook’s FBX bidding marketplace for advertisers, Antonio García Martínez, writes that “every time you go to Facebook or ESPN, you’re unleashing a mad scramble of money, data and pixels that involves undersea fiber optic cables, the world’s best database technologies and everything that is known about you by greedy strangers. And there are literally dozens of companies talking to each other in the background and doing complicated economics, real time auctions, to figure out how many cents one company is going to pay another company for the privilege of putting a particular ad in front of your eyes at that very moment”. This bidding market is informed by the profile each user has built up through their online activity. As Zeynep Tufekci writes: “The only way for platforms to increase the price they are paid for ads is to create tailored ads that target particular users who are likely to buy specific products. […] These pressures to achieve huge scale and to minutely monitor users promote the centralization and surveillance tendency of platforms like Facebook and Google and their interests in monopolizing both ad dollars and users”. Google (which also owns YouTube) and Facebook (which also owns Instagram) collect as much information on their users as possible. They hold a quasi-duopoly on the digital advertising market, and continue to syphon all publishers into their orbit. Herbert Simon wrote in 1971, in “Designing Organizations for an Information-Rich World”, that in an information-rich world the real scarce resource is attention, and the key question becomes how “to allocate that attention efficiently among the overabundance of information sources that might consume it”. Tufekci argues that instead, “social media platforms are designed for inefficient allocation of attention; they aim to increase the amount of time spent on their site, often to the detriment of efficient consumption of important information”.
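The real-time auctions Martínez describes are proprietary and vary by exchange, but the classic mechanic is a second-price auction. Here is a simplified sketch with invented bids and profile fields:

```python
def run_auction(bids):
    """Simplified second-price auction, the mechanic classically used by ad
    exchanges: the highest bidder wins but pays the runner-up's price."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Each advertiser prices this particular user's attention from their profile.
user_profile = {"recently_searched": "vegas"}
bids = {
    "casino_ads": 1.40 if user_profile["recently_searched"] == "vegas" else 0.20,
    "shoe_store": 0.30,
    "news_site": 0.10,
}

winner, price = run_auction(bids)
print(winner, price)   # casino_ads pays 0.3 -- the runner-up's bid
```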

3.3 Surveillance Capitalism

Attention is like a currency that must constantly be spent: it cannot be stored for later, because we only ever live one moment at a time. Because digital goods can be copied at no marginal cost, attention becomes the scarce and valuable commodity of the internet. The users of social media platforms are a broad mass of unpaid labourers: they drive the value of the platform. That these platforms are offered for ‘free’ keeps people from thinking about their costs. We use systems that spy on us in exchange for services – what Al Gore described as a “stalker economy”. In the FT article “The New Country of Facebook", Mark Zuckerberg describes power as shifting away from governments to corporations – a grandiose statement reflecting capital’s historical desire to run unfettered and unregulated. Facebook’s “Free Basics” initiative, a plan to provide a free version of the internet in India, was rejected by Indian regulators because it limited access to Facebook itself and a few non-profit services included to give it the appearance of philanthropy, imposing Facebook as the middleman for information access across the country. Indian activists rightfully decried this as a form of neocolonialism. Early adopters of the internet assumed it was on a trajectory to eliminate middlemen and disintermediate markets. Classified advertisements, antique stores and taxi companies were shunned as unnecessary relics of the past, capturing value away from a direct exchange and limiting the ability of interested people to find each other. These have been replaced by new power brokers (such as TaskRabbit, Airbnb, eBay, Lyft), who have far more power and information than the intermediaries they replaced. Bruce Schneier argues that information middlemen are inherently monopolistic: “a variety of economic effects reward first movers, penalise late comer competitors, entice people to join the largest networks and make it hard for them to switch to a competing system”. Online platforms are totalising, centralised and monopolistic, leading to what Brazilian social theorist Roberto Unger calls the "dictatorship of no alternatives”. Users enter an almost feudal relationship with omniscient technology companies: they pledge their information to a service and trust it to take care of them. Edward Snowden says that people without privacy exist only as a collective, in a state of reaction to their environment at all times. Bruce Schneier corroborates: “when we know everything is being recorded, we are less likely to speak freely and act individually”.

3.4 Addictive by Design

Websites that profit from advertising expend a lot of effort making sure their users spend as much time on them as possible, optimising their content for maximum addictiveness. User-experience designers are trained to identify and eliminate common “stopping cues”: moments that lead the user to log off. YouTube and Netflix automatically start playing the next video before the credits are over. Snapchat makes users schedule their interactions around it by tracking and displaying “streaks” of continuous messaging between two people, creating a strong social cue and a visible reward for using the app as the place to maintain quotidian social ties. Uber drivers who are about to log off get a prompt telling them they are close to an earnings goal. Tech entrepreneur Nir Eyal wrote Hooked: How to Build Habit-Forming Products, comparing social feeds to slot machines. He describes how the mouse’s scroll wheel, and subsequently the finger drag, allow the user to pull continuously down an infinite feed for variable rewards without interruption. Human brains are hardwired to seek novelty, which explains our appetite for endless feeds. These feeds also exploit our tendencies to be alert to threats, to want to care for cute things, and to share and engage with both – creating a bias towards outrageous and cute content. When we are not being pinged by the notifications on a device, we start to miss them. People whose smartphone battery has died scout for power outlets as though keeping their phone powered were their job, because being without one feels like losing a limb.
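Eyal’s slot-machine comparison can be made literal: an infinite feed that pays out rewarding items on an unpredictable schedule implements a variable-ratio reinforcement schedule. A toy sketch, with an invented reward rate:

```python
import random

def pull_feed(batch_size=5, reward_rate=0.3):
    """One 'pull' of an infinite feed: mostly filler, with rewarding items
    doled out on an unpredictable schedule, like a slot machine payout."""
    return [("REWARD" if random.random() < reward_rate else "filler")
            for _ in range(batch_size)]

random.seed(4)
for pull in range(3):
    print(pull, pull_feed())
# The user never knows which scroll will pay off, so they keep pulling --
# a variable-ratio reinforcement schedule, the most habit-forming kind.
```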

3.5 Ubiquitous with no way to opt out

Opting out of contemporary technological platforms isn’t a viable choice: it violates the norms of modern social life. Government collection of data is far more regulated and covered by existing law than that of the private sector, which is governed mainly by Terms of Service agreements that everyone must click through and no one bothers to read. Upstart technology companies are pushed by the venture capitalists who fund them to get acquired by a larger company, leading to an ever more consolidated market. These giant monopolistic online platforms should clearly be regulated like the utilities they have become. Surveillance capitalism gains many more access points for potential snooping with the so-called “Internet of Things”: “smart” TVs, voice assistants and online baby monitors all have passive microphones that record what happens in the house and leak it online. Amazon’s Alexa, for example, reportedly uploads up to a minute of audio of what happened before its name was invoked, tied to the owner’s profile. Amazon can afford to sell its Alexa products at a loss, because access to people’s homes and detailed information about their conversations is far more valuable than a profit margin on hardware. These intelligent assistants give authoritative-sounding answers without sources or explanations; users are left to assume the artificially intelligent assistant must simply know best.

3.6 Numeracy not Neutrality

Technology companies spend a lot of money convincing the public and politicians that they are apolitical – that they do pure innovation with no opinions embedded. In fact, they argue that their expertise in efficiency and cost-saving means governments should turn to Silicon Valley to learn how to govern better. In The People’s Platform: Taking Back Power and Culture in the Digital Age, Astra Taylor puts forth that “programmers are the new urban planners”, which is to say that their decisions have immense political consequences. The historian Melvin Kranzberg posits in Technology and History: ‘Kranzberg’s Laws’ that “technology is neither good nor bad; nor is it neutral”. Numeracy provides comfort, because people assume that numbers are neutral and that sticking to them will bring reason – but people’s idiosyncrasies tend to be poorly captured by categories, and life is messier than maths. Data science and big data were born of the financial industry’s demand for “quantitative analysis” in the late 1980s. The numeracy they produce is the ultimate extension of neoliberalism: the reduction of the whole world to quantifiable markets. Former Kickstarter VP Fred Benenson coined the term “mathwashing”, in an interview with Technical.ly, for exploiting “the objective connotations of math terms to describe products and features that are probably more subjective than their users might think”. He argues this abuses the trust we place in mathematics for marketing purposes, and that computers are only as good as their programmers. Benenson argues that “algorithm and data-driven products will always reflect the design choices of the humans who built them” and that “anything we build using data is going to reflect the biases and decisions we make when collecting that data. […] if we want to ‘stick to the numbers’, [we have to understand] how we recorded those numbers”. Adam Greenfield, in Radical Technologies: The Design of Everyday Life, argues that this assumes “the world is in principle perfectly knowable, its contents enumerable and their relations capable of being meaningfully encoded in a technical system, without bias or distortion”. Yuval Noah Harari refers to this unquestioning allegiance to the sense that the data must be right as “Dataism” in Homo Deus: A Brief History of Tomorrow. He presents a daunting vision of this dogmatism: “High-tech gurus and Silicon Valley prophets are creating a new universal narrative that legitimises the authority of algorithms and Big Data. This novel creed may be called “Dataism”. [“Dataists”] perceive the entire universe as a flow of data, see organisms as little more than biochemical algorithms and believe that humanity’s cosmic vocation is to create an all-encompassing data-processing system — and then merge into it”. He argues that humans would be overwhelmed by an all-encompassing system that understood us better than we understand ourselves, and that we would surrender authority to it as a result. The engineering company Siemens must have taken this as an invitation to start dreaming up a “dataist” vision of the ideal city of the future, as documented in its house journal: “countless autonomous, intelligently functioning IT systems […] will have perfect knowledge of users’ habits and energy consumption, and provide optimum service […] to regulate and control resources”.
This model is utopian only from the perspective of an appliance or sensor designer: people have divergent views, and urban problems require arbitration, not a single solution. Furthermore, sensors only ever capture what is amenable to being captured – not everything is knowable, and people distort their behaviour to suit the sensors. When incentives depend on a performance threshold, people tend to retrofit the statistics to make themselves look best, as with today’s crime quotas, where police over- or under-report the numbers to match. Such statistical tools are often brought in as a bureaucratic mechanism for avoiding responsibility, without considering the perverse incentives they set in place. When algorithms that predict behaviour fail, no one notices – not even the person targeted, because they were never made aware of the prediction. Such mechanisms are terrible at capturing false negatives, e.g. the candidates who weren't considered for a job because the algorithm rejected them. They also assume that society is fair and meritocratic as it stands, and that individuals are entirely responsible for their station in life at any point. Ultimately these predictive algorithms are making a political judgment: they abandon equity and assume that the world as it currently is reflects a Darwinian morality, not unlike the libertarianism espoused by Silicon Valley’s millionaires. These algorithms pick up on inequality and assume they need to reproduce it. People grant authority to algorithms as applied to society because they believe their complexity will yield purer access to “truth”, but algorithms simply reflect the power imbalances of their inputs and the metrics they are optimised towards.

3.7 Garbage In, Garbage Out

Machine learning algorithms are unaccountable. Instead of having a programmer write commands, a machine learning program generates its own algorithm from example data and a desired output. Humans feed the program examples flagged as “hit” or “miss”, and it tries to infer “ground truths” from them. In Plato’s Theaetetus, Socrates puts forth that for something to qualify as knowledge, it requires a theory that can be systematised and explained. By this definition, machine learning algorithms cannot produce knowledge, only reproduce existing correlations: their reasoning is opaque, merely retrofitting whatever pattern they picked up on. We should be cautious about ascribing omniscience to these systems. As Pedro Domingos argues in The Master Algorithm, “[computers making a decision of who gets credit are needlessly bad because their] knowledge of credit scoring came from perusing one lousy database. People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world”. While billionaires muse about artificial intelligence exterminating our species within a couple of decades, its current applications invisibly punish those born in less fortunate circumstances. Luminoso Chief Science Officer Rob Speer puts it this way: “[The threat of Artificial Intelligence is] not some sci-fi scenario where robots take over the world. It’s AI-powered services making decisions we don’t understand, where the decisions turn out to hurt certain groups of people”. These mechanisms don’t reveal themselves: “some people [will] go through life not knowing why they get fewer opportunities, fewer job offers, more interactions with the police or the TSA…” because they have been targeted invisibly. We must not anthropomorphise artificial intelligence: it is mostly an affair of statistics, databases and pattern recognition. There is no such thing as data without bias: artificial intelligences trained on existing language corpora acquire historic cultural associations, such as linking genders with professions, and are less likely to recommend a Mexican restaurant because the word “Mexican” is associated with “illegal”, as shown in the study “Semantics derived automatically from language corpora contain human-like biases”. In this sense, “the data bias codifies the ugly aspects of our past”, according to Josh Clark in Design in the Era of the Algorithm. Machine learning algorithms automate the status quo. By relying upon them, we embed our biases, call them objective, and obscure them from view. Machine learning picks up patterns of structural inequality, assumes they are laws of nature rather than the result of oppressive forces, and locks them in, making them endogenous – entrenching a person’s habitus from birth in a rigid, codified way. Humans attribute far greater capacities to artificial intelligence than it actually has; they assume that because something appears complex it must be right. Algorithms never ask why the world is as it is; they replicate the patterns of the past in a feedback loop. As Cathy O’Neil says, “instead of searching for the truth, the [algorithm’s result] comes to embody it” (p7). If Fox News were to automate its hiring practices, the algorithm would conclude that women are bad employees, not that they are being harassed.
Along the same lines, an artificial intelligence looking at prosecution rates would assume there is no such thing as white collar crime, and all executives are beyond reproach.
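
To make the mechanism concrete, here is a minimal sketch in Python of the kind of association test used in “Semantics derived automatically from language corpora contain human-like biases”. The four-dimensional vectors are invented toy data, not real embeddings; in the study the vectors come from corpora-trained models such as GloVe. The point is that an embedding placing “programmer” nearer to “man” than to “woman” reports the stereotype as a measurable score.

```python
# Toy sketch of a word-embedding association test. The vectors are
# invented for illustration; real studies use corpus-trained embeddings.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vec = {
    "man":        np.array([ 0.9, 0.1, 0.3, 0.0]),
    "woman":      np.array([-0.9, 0.1, 0.3, 0.0]),
    "programmer": np.array([ 0.7, 0.5, 0.1, 0.2]),
    "homemaker":  np.array([-0.7, 0.5, 0.1, 0.2]),
}

def gender_association(word):
    """Positive = leans towards 'man', negative = towards 'woman'."""
    return cosine(vec[word], vec["man"]) - cosine(vec[word], vec["woman"])

for word in ("programmer", "homemaker"):
    print(word, round(gender_association(word), 3))
# "programmer" scores positive and "homemaker" negative: the embedding
# has absorbed the stereotype present in the text it was trained on.
```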

3.8 Sentencing Algorithms and Criminal Injustice

Rashad Robinson, director of Color of Change, is attentive to how these automated mechanisms afflict the disadvantaged: “black and marginalised communities will continue to experience online price gouging, data discrimination and digital red lining”. So-called ‘crime prediction’ algorithms are really ‘police prediction’ tools: they are trained on where police are active, and police notoriously over-monitor neighbourhoods with many people of colour. ProPublica reporters Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner published research on the COMPAS recidivism model used in Florida, which assigned risk assessment scores to defendants that informed courtroom decisions about their sentencing, bond amounts and releases. The algorithm was twice as likely to call a black defendant a future criminal than a white one. It based its assessment on multiple factors, such as the defendant’s zip code, whether their parents have a high school diploma, whether they live in an area with high arrest records, and whether their parents were incarcerated. These algorithms embed society’s big structural problems (such as the fact that poor people of colour are over-incarcerated) and shift responsibility for these problems onto the individual rather than the incentives. Because these risk assessment scores are factored into sentencing, defendants are punished in advance for the fact that an algorithm thinks they’re more likely to come back to prison, not unlike the “pre-cogs” in Minority Report arresting people for crimes they haven’t committed yet. Institutions using predictive models assign a station to people at birth and treat them in accordance with it. Proponents of automating sentencing argue that removing racist judges from the equation will yield fairer results – but it merely freezes in the racist judgements of the past, and shatters the possibility of improvement! Some individuals are, by accident of birth, suspected more and imprisoned longer. These very same people have a harder time getting back on their feet after a longer prison sentence, which bars them from welfare benefits and from being hired in many positions, and they end up back in prison. This creates a pernicious loop, where the sentencing algorithm shapes its own reality but measures its effectiveness as a successful “hit”.
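
The loop is easy to reproduce in miniature. The following Python sketch uses invented numbers (not COMPAS data) to show how training a ‘risk’ score on arrests rather than offences bakes policing intensity into the prediction:

```python
# Toy simulation: two neighbourhoods offend at the same rate, but the
# over-policed one generates more arrest records, so a model trained
# on those records rates its residents as higher "risk".
import random

random.seed(0)
OFFENCE_RATE = 0.10                 # identical underlying behaviour
POLICING = {"A": 0.9, "B": 0.3}     # chance an offence is ever recorded

def observed_arrest_rate(neighbourhood, n=100_000):
    arrests = 0
    for _ in range(n):
        offended = random.random() < OFFENCE_RATE
        if offended and random.random() < POLICING[neighbourhood]:
            arrests += 1
    return arrests / n

for hood in ("A", "B"):
    print(hood, "arrest rate seen by the model:", observed_arrest_rate(hood))
# Neighbourhood A looks roughly three times "riskier" purely because it
# is watched three times more closely; a model trained on these records
# then justifies sending even more police there.
```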

3.9 Algorithms for Assassination

Reporting from The Intercept and Ars Technica in 2015 uncovered an internal NSA program called “Skynet” that uses machine learning to identify terrorist couriers in Pakistan. Skynet takes in the mobile phone records (and information about where and when phones connect to the network) of 55 million Pakistani citizens and assesses their likelihood of transporting contraband to terrorists. It works like a typical modern big data application, except that instead of trying to sell the targets something, it targets them with predator drones and death squads. The NSA trains its algorithm on a subset of 100,000 randomly selected people and seven known terrorists. The machine learning algorithm was fed the information of six terrorists and tasked with identifying the seventh. Patrick Ball, the director of research at the Human Rights Data Analysis Group, argues this process is not rigorous enough, and that extrapolating a classification fit from a sample of six people is bad science, because what they’re measuring is what they’ve already obtained: “Classification is inherently probabilistic. […] The usual practice is to hold the data out of the training process so that the test includes records the model has never seen before”. The Gaussian distribution normally used to classify human phenomena does not apply to rare statistical events. According to the NSA’s own slides, the percentage of false alarms (people mistakenly identified as terrorists) yielded by their model is 0.18%, which may sound like a small error rate but represents about 100,000 innocent lives and their families, if the NSA were to act on this information. Meanwhile, terrorists who did not happen to exhibit the same telephonic behaviour as the seven counterparts the NSA had already caught can evade targeting. The NSA’s engineers illustrated a successful “hit” for their machine learning algorithm with a PowerPoint slide of Ahmad Zaidan, Al Jazeera’s bureau chief in Islamabad, who travelled to terrorist compounds to conduct interviews while carrying his cell phone (hence exhibiting suspect phone locations), but who has been working as a journalist for decades. An authoritarian government could extend this type of targeting onto its domestic population of protestors or drug dealers, with little consideration for collateral damage.
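
The base-rate arithmetic behind that 0.18% figure is worth working through. The population and false-alarm rate below come from the reporting; the count of genuine couriers is an assumption made purely for illustration:

```python
# Base-rate arithmetic: a seemingly tiny error rate is catastrophic
# when applied to an entire population in which true positives are
# vanishingly rare.
population = 55_000_000          # Pakistani mobile subscribers scanned
false_positive_rate = 0.0018     # 0.18%, per the NSA's own slides

false_alarms = population * false_positive_rate
print(f"{false_alarms:,.0f} people wrongly flagged")   # 99,000

# Assuming, purely for illustration, a few thousand genuine couriers:
assumed_true_couriers = 2_000
precision = assumed_true_couriers / (assumed_true_couriers + false_alarms)
print(f"precision = {precision:.1%}")   # about 2%: 49 of 50 flags are wrong
```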

3.10 Metrics of Success

Cathy O’Neil points out that “a key component of every model, whether formal or informal, is its definition of success” (p21). An artificial intelligence program tries to optimise for a ‘good state’: what counts as success is referred to as its ‘objective function’. No matter how complex or smart, it will take in data from its environment and try to affect it, bringing it closer to what it has been taught is a more desirable version of the world. Consequently, if algorithms are going to be organising society, we need to consider what ideal society their metrics of success are optimising for. For instance, maximising tracking and attention for advertisers is the metric of success of modern social media. Since the 1980s, the private sector has operated under mechanical incentives that exacerbate inequality: tying CEO pay to quarterly earnings and the company’s stock price means that any choice to automate away workers, cut costs and drive the bottom line is in the interest of those with the decision-making power, and in the interest of the shareholders to whom they are accountable. In a free market of companies issuing credit scores, the one least scrupulous about fairness will be the most effective at making money. As our world becomes increasingly ruled by computer systems, we must account for better metrics of success than the ideology of maximising profit for a small few. In What Is Fairness?, danah boyd explains how Silicon Valley enables this: “the tech industry — very neoliberal in cultural ideology — embraces market-driven fairness as the most desirable form of fairness because it is the model that is most about individual empowerment. […] This form of empowerment is at the expense of others, [especially] those who have been historically marginalized and ostracized”. Cathy O’Neil corroborates this by pointing out that humans can learn and adapt, but “automated systems, by contrast, stay stuck in time until engineers dive in to change them. […] Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide. [We have to embed values into algorithms]. Sometimes that will mean putting fairness ahead of profit”.
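
A minimal sketch can make the stakes of the objective function visible: the same optimiser, pointed at two different definitions of success, chooses two different worlds. The options, numbers and weights below are all invented for illustration:

```python
# The same optimiser under two objective functions. All figures are
# hypothetical; only the structure of the argument matters.
candidates = [
    {"name": "aggressive tracking", "profit": 10.0, "harm": 8.0},
    {"name": "moderate ads",        "profit": 6.0,  "harm": 2.0},
    {"name": "subscription model",  "profit": 4.0,  "harm": 0.5},
]

def profit_only(option):
    return option["profit"]

def profit_with_fairness(option, harm_weight=1.5):
    # A value judgement made explicit: harm is costed into "success".
    return option["profit"] - harm_weight * option["harm"]

print(max(candidates, key=profit_only)["name"])           # aggressive tracking
print(max(candidates, key=profit_with_fairness)["name"])  # subscription model
```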

3.11 Growth is an Awful Metric to Optimise For

Growth is a measure of increases in a given country’s Gross Domestic Product, the total market value of goods and services produced over a given period. Endless rapacious growth as a metric of success is incompatible with our material world of finite natural resources. An artificial intelligence optimising for GDP growth would reject ‘degrowth’ movements or discussions about post-scarcity economics as antithetical to its main mission. GDP growth measures increases in production but discounts welfare: anything done without a price does not contribute to growth. Simon Kuznets, who devised GDP as a measuring instrument in 1934, immediately warned against using it as the prime measure of well-being: “the welfare of a nation can scarcely be inferred from a measure of national income”. David Rothkopf outlines valuable data elided by GDP: “Trade data, such as that used in measuring national surpluses and deficits, misses a big chunk of trade in services and much Internet activity, among many other swaths of trade, and is widely reported inaccurately. Labor statistics, such as unemployment rates, are cooked and deceptive”. One might argue the Dow Jones Industrial Average’s success in the early months of the Trump Administration says more about its 30 constituent companies expecting a windfall from tax breaks than it does about the long-term welfare of America’s citizenry. In the 2015 papal encyclical Laudato Si’, Pope Francis speaks out against using growth as a metric of success for society: “It is not enough to balance, in the medium term, the protection of nature with financial gain, or the preservation of the environment with progress. […] A technological and economic development which does not leave in its wake a better world and an integrally higher quality of life cannot be considered progress. Frequently, in fact, people’s quality of life actually diminishes— by the deterioration of the environment, the low quality of food or the depletion of resources— in the midst of economic growth”.

3.12 Better Metrics of Success

Tim O’Reilly says technology should not be yet another extractive industry, but that we should instead “all strive to create more value than we capture”. Douglas Rushkoff suggests subsidiarity as a better metric of success than growth. He describes the optimal organisation of society: “Power is to be granted to the maximum number of the smallest possible nodes— guilds, communities, cottage businesses, and the family. […] According to the principle of subsidiarity, no business should be bigger than it needs to be to serve its purpose— whether that’s feeding pizza to the town or making roads for the state. Growth for growth’s sake is discouraged” (p231). Such a principle can draw upon local expertise and surface it to the whole of society. CouchSurfing is a social network that allows people to introduce guests to their city. It operates on a gift economy: members invite guests to stay in their home, meet each other and attend events together. The activities and social ties formed between members are the intrinsic goal; there is no expectation of future rewards. As a result, the company’s aim is to foster good relationships between people who had never met before, and it optimises for a metric it calls ‘net orchestrated conviviality’: the number of hours users reported spending together enjoyably, minus the time they spent on the website. The platform operates on a quite utilitarian idea, trying to create positive contributions by adding pleasant moments to people’s lives. In terms of econometric measures, GDP is obsolete. We would do better to optimise for more distributed purchasing power, a smaller Gini coefficient (which measures inequality by looking at wealth distribution), or maximising productivity per hour worked. Michael Green of the Social Progress Imperative created the Social Progress Index, based on social and environmental metrics, to provide a more holistic assessment of national welfare and to give policy makers more insight into how they can elevate citizens’ quality of life. In A Theory of Justice, John Rawls outlines the metric of democratic liberty, which seeks to optimise both (1) liberal equality (guaranteeing everyone a fair shot, an equal opportunity) and (2) the difference principle (any new unequal treatment must benefit the least naturally advantaged person in the first place). In this framework, a fair system doesn’t treat everyone equally from birth; it recognises the context into which individuals are born and favours the disadvantaged. Following this, we might want to enforce affirmative action in algorithms, compensating for inequality of opportunity at birth by introducing a ‘just’ inequality in favour of the least fortunate.
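
For readers unfamiliar with the Gini coefficient mentioned above, here is a minimal sketch of how it is computed: 0 means wealth is spread perfectly evenly, 1 means one person holds everything. The income lists are invented examples:

```python
# Gini coefficient from a list of incomes, using the standard
# rank-based formula on sorted values.
def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    # Equivalent to the mean absolute difference between every pair
    # of incomes, normalised by the mean income.
    return sum((2 * i - n + 1) * x for i, x in enumerate(xs)) / (n * sum(xs))

print(gini([25, 25, 25, 25]))     # 0.0   -- perfect equality
print(gini([0, 0, 0, 100]))       # 0.75  -- one person holds everything
print(gini([20, 30, 50, 100]))    # 0.325 -- moderate inequality
```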

3.13 Solutions

To fix the problems of like-minded groups living in confined, homogenous information bubbles, we need to create spaces where subcultures can recognise and speak to each other. The internet needs to foster an ethical environment, optimise for better metrics, incentivise difference and create discussion spaces. The Wall Street Journal’s “Red Feed, Blue Feed” project highlights how partisan sources cover different issues and juxtaposes them. The “r/changemyview” subreddit is a forum where people invite those who disagree with them to change their view. We also need to restore serendipitous encounters with new information and groups, to recreate an online public sphere and put potential innovators from different backgrounds and mindsets in touch, potentially creating new social movements. Facebook should surface posts by friends with whom a user is likely to disagree, and highlight divergent takes in related articles. Facebook is optimised to make users comfortable, but people like to think of themselves as open-minded. It should aspirationally appeal to what people wish to desire – what Harry Frankfurt calls their “second order desires” – instead of feeding them back their impulses. To address the problems of semantic bias, Rob Speer manually de-biased his ConceptNet database of language by removing negative associations for race, ethnicity, gender and religion. He did this based on crowdsourced data from the research paper “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings”. Speer sells access to his database for others to use, marketing its freedom from such discrimination as a selling point. Arts, research and political discourse should not be optimised for market research, yet they are indiscriminately quantified among other “content” online. We ought to emphasise the role of patronage and government grants in funding these endeavours for their own ends, rather than letting them be perverted by financial incentives. Governments may want to present an ultimatum to internet monopolies: either be broken up or be regulated as utilities. The European Union has a regulation on algorithmic decision-making which creates a “right to explanation”, whereby a user can request the reasoning behind an algorithmic result about them – if they are made aware of it in the first place. It will be difficult for these results to be legible if they were decided by a machine learning algorithm, whose reasoning is not easy to retroactively express as a set of factors, but they could be displayed as percentages of Bayesian probabilities. This would be even more meaningful if they disclosed the metrics of success the algorithm is optimising for, revealing the inherently political considerations behind it. The proposal “Why Should I Trust You?: Explaining the Predictions of Any Classifier” by Marco Tulio Ribeiro, Sameer Singh and Carlos Guestrin aims to make algorithms accountable by having them return a LIME (Local Interpretable Model-Agnostic Explanation), which forces an algorithm to return the factors it weighed and the certainty behind its potential return values. This helps disambiguate and troubleshoot the areas where algorithms fall short, and makes them more transparent to the people whose lives they mould.
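
To give a flavour of how such an explanation can be produced, here is a minimal sketch of LIME’s core idea in Python – not the authors’ reference implementation, and with an invented stand-in ‘black box’ model: perturb the input around the instance being explained, query the model, and fit a small weighted linear model whose coefficients serve as the explanation.

```python
# Sketch of LIME's core idea: explain one prediction of an opaque
# model by fitting an interpretable linear model in its neighbourhood.
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Invented opaque model: the score depends strongly on feature 0,
    # weakly on feature 1, and not at all on feature 2.
    return 1 / (1 + np.exp(-(3 * X[:, 0] + 0.5 * X[:, 1])))

instance = np.array([1.0, -0.5, 2.0])   # the prediction we want explained

# 1. Sample perturbations in the neighbourhood of the instance.
X = instance + rng.normal(scale=0.5, size=(500, 3))
y = black_box(X)

# 2. Weight each sample by its proximity to the instance.
weights = np.exp(-np.sum((X - instance) ** 2, axis=1))

# 3. Fit a weighted least-squares linear model locally.
Xb = np.hstack([X, np.ones((len(X), 1))])   # add an intercept column
sw = np.sqrt(weights)
coef, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)

print("local feature weights:", coef[:3].round(3))
# Feature 0 dominates and feature 2 is near zero: the explanation
# exposes which inputs actually drove this particular prediction.
```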

Conclusion

The internet’s universalist approach to information created a cosmopolitan space as an emergent property. As per Kwame Anthony Appiah’s definition, cosmopolitanism is the union of this universality with difference. If we allow difference to encounter itself on a universal scale, the internet can yet become the realisation of Seneca’s cosmopolitan vision of a global city-state whose boundaries extend to the sun, in which citizens of the universal city share freely towards a common good and behave justly towards each other.

Sources and Reference texts:

Plato, The Republic (380 BCE)

Plato, Theaetetus (369 BCE)

Seneca, De Otio (62 CE)

John Milton, Paradise Lost (1667)

Aristotle, The Nicomachean Ethics (political community, eudaemonia - moderating between extremes) (350 BCE)

Plato, Phaedrus (fixing thoughts in writing loses memory) (370 BCE)

Homer, Iliad (warriors see their side honourably) (762 BCE)

Walter Benjamin, The Work of Art in the Age of Mechanical Reproduction (1935)

Langdon Winner, Do Artifacts Have Politics? (1980)

Benedict Anderson, Imagined Communities: Reflections on the Origin and Spread of Nationalism (1983)

Kwame Anthony Appiah, Cosmopolitanism: Ethics in a World of Strangers (2006)

Erving Goffman, The Presentation of Self in Everyday Life (1959)

John Rawls, A Theory of Justice (1971)

Howard S. Becker, Outsiders: Studies in the Sociology of Deviance (1973)

Pierre Bourdieu, Distinction: A Social Critique of the Judgement of Taste (1979)

John Perry Barlow, A Declaration of the Independence of Cyberspace (1996)

Naomi Klein, No Logo: Taking Aim at the Brand Bullies (1999)

Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom (2006)

David Kirkpatrick, The Facebook Effect: The Inside Story of the Company That Is Connecting the World (2011)

Eli Pariser, The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think (2011)

Harry Collins, Are We All Scientific Experts Now? (2014)

Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (2015)

Pope Francis, Laudato si’ (2015)

Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (2016)

Douglas Rushkoff, Throwing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity (2016)

Angela Nagle, Kill All Normies: Online Culture Wars From 4Chan And Tumblr To Trump And The Alt-Right (2017)

Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2017)

David Rothkopf, The Great Questions of Tomorrow: The Ideas that Will Remake the World (2017)

Cass R. Sunstein, #Republic: Divided Democracy in the Age of Social Media (2017)

Zeynep Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest (2017)