The Game Engine as Transhumanist Sandbox

Kanad Chakrabarti

An abstract image with a blue field and cell-like figures.
An online project for the author’s exhibition ‘calling@sweeneyshoal’ at Artists Alliance project space in New York City (6 Nov – 5 Dec 2020). Exhibition website: https://www.artistsallianceinc.org/callingsweeneyshoal/ .

1 The World of WBEs & AGIs

The thought-experiments proposed by Robin Hanson [Hanson, 2001] and Nick Bostrom at Oxford University’s Future of Humanity Institute​*​, et al. suggest a world in which machines can perform most of the physical and mental tasks humans do, better and faster. Humans are reduced to a subsistence existence, on something like a minimum wage ([Bostrom, 2017], pp. 195-197). Whether whole-brain emulations (WBE)​†​ or artificial general intelligence (AGI) comes first​‡​, the considerable investment required, and the potential speed of technological change, imply a significant concentration of wealth in favour of capital ([Bostrom, 2017], pp. 197-199).

The time-frame for superintelligence seems to centre around 2050 (in a survey of AI researchers, the date by which 50% of respondents expect human-level AGI to have been achieved ([Baum, 2020])), while this article provides a more granular survey, albeit from 2016​§​. Moreover, the challenges of the transition may, in fact, emerge somewhat more gradually as increasingly sophisticated AIs​¶​ start working alongside humans, a process already under way. Alternatively, the prospect of such a future may prompt regulation to retard such an outcome until humanity’s social and governance structures have had time to catch up ([Bostrom, 2017], Chapter 14), or even a reactionary-Luddite politics, akin to a Butlerian Jihad. Variants of these views, albeit not contingent on AGI, have entered the mainstream, for instance in Daniel Susskind’s book on automation, or Richard Baldwin’s The Globotics Upheaval (2019).

2 ‘Each epoch dreams the one to follow’

Games, thought of as dynamic data structures embodied in code that also happen to have a dominant visual/aural/haptic interface, are distinct from moving images (or indeed music or literature). They seem to place the player in something like a dream,​#​ owing to their player-performative nature; mutable narrative and potentially massive scenario space; increasingly compelling fidelity to our visual world; as well as their synergy with immersive VR. These dreams, or nightmares according to one’s preference, may help us see more clearly the phenomenological, governance, and ethical issues of a world of uploads, AGI, and ‘mind theft’. In a more sinister vein, this essay proposes videogames, and the game engine (see below), as something akin to Giorgio Agamben’s apparatus ([Agamben, 2009]) or a dispositif in Michel Foucault’s terminology, particularly as these concepts get picked up in Benjamin Bratton’s writing ([Bratton, 2015], pp. 161, 271-274)​**​.

The game-as-apparatus perhaps also allows us to view the game as something other than ‘mere entertainment’, and rather as a complex assemblage that in the first instance pervades much of our online lives. That we live in a world increasingly gamified​††​ seems obvious: from riders on public transport compulsively shifting coloured tiles on an over-sized smartphone; personal fitness devices that award points for exercise; foreign language instruction from a chirpy green owl on Duolingo; access to (fairly sophisticated and often risky) options markets via a game-like interface on the Robinhood trading app.

But it also speaks to our current environment of constant work, falling wages, ubiquitous self-entrepreneurship, and a framework of corporate-state biopolitics that is, at once, expansive and intrusive. In short, a new form of slavery, as suggested by Mark Fisher ([Fisher, 2009]), combined with the ennui and constant distraction that are documented as far back as the 1920s, in the writing of Walter Benjamin and Siegfried Kracauer ([Bown, 2018], pp. 35-39).

2.1 What is a Game Engine?

Although games may well play a large part in futures under automation, this is not intended to be a survey of game design, logic, politics, etc.; these are comprehensively addressed in [Galloway, 2006], [Wark, 2007], [Bown, 2018], as well as the blog-posts and books of game theorist Ian Bogost​‡‡​. Similarly, the architecture of game engines is well covered in the technical literature, for instance here, or here. However, some broad features are outlined below, taking as a concrete example the Unreal Engine platform, made by Epic Games, who also make the commercial hit Fortnite.

The game engine, narrowly defined, is the software in which the game is written and ultimately what drives the possibilities of the game, the computer/device platforms it can run on, and how it is finally distributed to consumers. More abstractly, it can be viewed as a ‘stack’ of software​§§​, in the sense developed by Benjamin Bratton which emphasizes relevance to spatio-political concerns, through and contra the geopolitical writings of Carl Schmitt (​[Bratton, 2015]​). Specifically, it can be argued that elements of this Stack metaphor are applicable to the game engine,​¶¶​ owing to physical interfaces with the world and our built environment. Although games are typically written on general-purpose computers, they are deployed on a range of computing platforms: dedicated consoles (Playstation, Xbox), smartphones, virtual and augmented reality (VR/AR) devices, servers, and desktop computers.
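The ‘narrow’ definition above can be made concrete. A minimal sketch, in Python and with entirely hypothetical names (this is not Unreal’s architecture or API), of the canonical update/render loop that sits at the heart of most engines, separating simulation from platform-specific presentation:

```python
# Illustrative sketch only: the update/render loop common to most game
# engines. 'World' and 'Renderer' are hypothetical names, not Unreal's API.

class World:
    """Holds all game state: objects, physics, scripts."""
    def __init__(self):
        self.clock = 0.0

    def update(self, dt):
        self.clock += dt  # advance the simulation by dt seconds

class Renderer:
    """Presentation layer: swapped out per platform (console, phone, VR)."""
    def draw(self, world):
        pass  # rasterise the current state for whatever device we run on

def run(world, renderer, dt=1/60, frames=3):
    for _ in range(frames):   # a real engine loops until the player quits
        world.update(dt)      # simulation tick
        renderer.draw(world)  # presentation tick

world = World()
run(world, Renderer())
```

The separation is the point: the same simulated world can be ‘drawn’ by different renderers, which is one reason a single game can be deployed across consoles, smartphones, and VR devices.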

3 Capital Eats (Game)Space

What is the shape of this ludic apparatus? McKenzie Wark’s ‘military-entertainment complex’ ([Wark, 2005]), itself an adaptation, to contemporary technological conditions, of Guy Debord’s integrated spectacle, might provide a useful guide. In the examples below, we see incredibly well-capitalised companies, for some of which gaming is central, with R&D divisions that strive to maximise user addiction; they operate across multiple platforms, and in some cases hold substantial physical assets that present barriers to entry against new competitors. Many of these companies also place AI/ML at the core of their businesses and operate as entities with significant network benefits while generating, in some cases, negative social externalities. Lastly, explicitly or implicitly, through data harvesting, surveillance is often central to how they create value, as Shoshana Zuboff has convincingly detailed.

Tencent is the world’s largest video-game company and owns 40% of Epic Games (Fortnite and Unreal Engine). They also own WeChat, China’s market leader in messaging, which provides mapping, gaming, and banking services, and a diversified portfolio of media, health, and real-estate assets. A fascinating overview of their media strategy, and how it differs from Western models, is here.

Google, keeper of the Internet’s search-gate, has rolled out the Stadia game-streaming service, which is hosted on a network of 21 globally-dispersed data centres, while sister company DeepMind views games as a central locus of AI/AGI research.

Amazon, with a massive footprint in cloud computing (which often hosts multi-player games)​##​ and a push into financial services (Amazon Pay), also own Twitch, the principal streaming service for gamers. Microsoft’s XCloud sits atop their extensive Azure server farms, providing a strong physical footprint for Xbox gamers.

Facebook, which already owns the Oculus VR platform, is also pushing for a potentially transformative digital currency, Libra​***​ , to integrate to the company’s immense footprint in users’ lives.

The profiles above of industry leaders in game development are illuminating in light of Bostrom’s point that major advances in WBE/AGI are likely to come from well-capitalised entities, and that, once major developmental milestones have been hit, the successful pioneers in WBE are likely to pull ahead of the competition at an accelerating pace ([Bostrom, 2017], pp. 197-224, 107-108). The reasons are clear: the enormous investment required to successfully emulate a human brain at a neuronal level, and the processing power required to maintain one or more uploads; or else, the scale and sophistication of the research platform required to create the fundamental algorithmic advances that seem necessary to make AGI happen.

4 The Eggshell World, Rendered-on-Demand

In [Bostrom, 2003], Nick Bostrom conducts a thought-experiment to assess the probability that we (currently existing humans) are living within a simulation created by a far-advanced posthuman civilisation. In this scenario, our phenomenological world, indeed our internal world, has been meticulously created within our descendants’ über-computer. Bostrom, as well as [Hanson, 2001], points out that, owing to computational intractability, not all parts of the simulated world need be maintained at all times; rather, elements of ‘reality’ can be generated only to the extent that some human is observing or encountering that element of the simulated world.

Rendering-upon-demand is something we find gaming engines doing now​†††​: they generate scenes as and when needed, i.e. when a player is focused on the objects in question, allowing the game engine to use its processing power and memory efficiently, and handle worlds that are much larger than if it tried to simulate ‘everything’, seen and unseen. Players in contemporary 3-D game-worlds usually expect physical laws to apply, i.e. balls should bounce, walls are usually hard, water is a fluid, fog licks into corners, etc. Therefore, the game engine often incorporates a physics engine that replicates the real world, at varying levels of fidelity: simple games may only handle macroscopic features such as balls and hard surfaces, while the most sophisticated model particle-based fluids such as fire and smoke.
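The mechanism can be sketched in a few lines. This is a toy illustration only (real engines use spatial hierarchies and GPU-side level-of-detail systems, not a flat dictionary), but it shows the two ideas at work: world elements are created lazily, only when observed, and at a fidelity that depends on how close the observer is:

```python
# Toy sketch of render-on-demand with level of detail (LOD).
# Positions are 1-D for simplicity; a real engine works in 3-D space.

def lod_for_distance(d):
    """Pick a level of detail: 0 = full geometry, higher = coarser."""
    if d < 50:
        return 0   # near the player: individual leaves, branches, trunks
    if d < 200:
        return 1   # mid-distance: simplified meshes
    return 2       # far away: a flat 'painterly' billboard texture suffices

class LazyWorld:
    def __init__(self):
        self.generated = {}  # chunks exist only once a player has seen them

    def chunk_at(self, pos, player_pos):
        d = abs(pos - player_pos)
        if pos not in self.generated:
            # created on demand, at the fidelity the viewing distance warrants
            self.generated[pos] = f"chunk@{pos}/lod{lod_for_distance(d)}"
        return self.generated[pos]

w = LazyWorld()
w.chunk_at(10, player_pos=0)    # near: full detail
w.chunk_at(500, player_pos=0)   # far: coarse billboard
# Only the two observed chunks exist; the rest of the 'world' is never built.
```

The memory saving is the point: a world of millions of potential chunks costs storage only for those a player has actually encountered.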

But the physics is simulated only at a level-of-detail required to convince the viewer, which is something the creator needs to define. So rocks, buildings, streets, leaves, trees, waves, clouds are mostly produced as textures (tiny image files), that are cleverly replicated and mathematically transformed to create a compelling illusion. Some phenomena such as fire and smoke use physics, but again, in limited ways so as to not waste computation on things that don’t change the believability of the final image. For instance, no game, to my knowledge, simulates quantum-level physical phenomena, because that is irrelevant for players​‡‡‡​.

This leads to curious, perhaps unsatisfying, situations where objects are simply a hollow shell with infinitesimally-thick sheet walls, much like a cinema set or Potemkin Village​§§§​. In other cases, like a mirage-in-reverse, forests that were amorphous, almost painterly expanses of sun-dappled green will visibly resolve themselves, as the player’s camera approaches, into individual leaves, branches and trunks​¶¶¶​. In a well-made game​###​, perhaps these imperfections are crafted away (by rendering at the required level-of-detail well in advance of becoming visible to the player, for instance, or by not allowing the player to peer behind any under-implemented geometry). From the perspective of player expectations, it is plausible that human gamers understand these infidelities as a type of simplification that allows the underlying logic and emotional content of the game to emerge, what Jesper Juul terms the ‘half-real’ ([Juul, 2019]). However, in the WBE/AGI world, as Bostrom discusses, artificially-constructed environments and VR quite possibly would not just be entertainment; rather, they would be the reality that an emulation lives within. Moreover, they could be a mode of control for an emulation-based society: supervising agents could use VR environments to test lower-ranking agents on whatever criteria (trust, normative behaviour, etc.) ([Bostrom, 2017], pp. 209, 250). In such a control society, it seems reasonable that a lower-ranking agent will do everything possible to uncover flaws or deceptions in its VR, unlike Juul’s human player, who overlooks such imperfections​****​.

5 Panoptic Grace, Updated

The problem of verifying intention and maximising trust, currently the source of major societal costs and conflicts, would potentially be more tractable in a WBE/AGI world. For instance, personal selfishness could be written out of an emulation’s code, replaced with a clan or eusocial bias. In addition, an agent’s source code and history would be open to audit in a way that is not possible today ([Bostrom, 2017], pp. 215-218).

Currently, we find gaming fitting neatly into the surveillance apparatus surrounding us ([Galič et al., 2017]): explicitly, in the case of Chinese attempts to co-opt American game developers into controlling supposed gaming addiction in China; or in American and British surveillance of supposed terrorism-related infiltration of online communities, where data is collected in the ferocious dragnet Bratton calls ‘The Black Stack’ ([Bratton, 2015], pp. 363-365), and subsequently cross-correlated, under government and corporate contract, by companies such as Peter Thiel’s Palantir.

There are more subtle, panoptic methods of self-discipline, as office workers play games, immediately feel guilty, and double their work effort to make up for their transgression, a cycle that managers are probably aware of and possibly encourage (​[Bown, 2018]​ pp. 35-37). This is gaming’s version of the transition from Foucauldian factory-prison control to the post-disciplinarity of the office, infused with market ethos and enabled by pervasive information technology (see Mark Fisher’s reading of Deleuze and Kafka in ​[Fisher, 2009]​, pp. 22-23, 51).

If we more narrowly consider the point-based logics of individual games, Cameron Kunzelman describes systems that, as a means of keeping team members (in a multi-player game) coordinated, motivated and disciplined, ‘encourage a habit in the human player to not make mistakes, to show up on time, and to constantly be wary of losing one’s DKP [the game’s name for the disciplinary score]’ (​[Kunzelman, 2014]​).

Another example: Microsoft’s Xbox console, which apparently forces players to accumulate ‘achievements’ (points in a scoring mechanism awarded for achieving particular goals such as shooting zombies, rescuing princesses, etc.), which are then tallied and, under the extremely generous (to Microsoft) terms of the Xbox user agreement, collected for essentially any purpose Microsoft deems necessary ([Cybulski, 2014]). The fact that Microsoft’s games run on dedicated hardware makes this even creepier: whereas a PC or smartphone might prevent such extensive surveillance (through system-wide privacy settings), the raison d’être of the Xbox is to only play Microsoft games, hence if the user chooses not to accept Microsoft’s terms, that Xbox is ‘bricked’ (rendered useless). Lastly, in a recursive echo of the media-military amalgam McKenzie Wark diagnoses ([Wark, 2005]), this (2016) post describes several games where the player acts as surveiller, monitoring conversations and activity, to ‘keep the nation safe’. Hence, as both victim and perpetrator, players become de-sensitised to in-game surveillance, continuing the pattern of normalisation of privacy-eroding practices across the online world. Importantly, much of this happens mostly through nominally ‘free market’ mechanisms and without obvious state coercion, unlike in China, currently the prime example of a technologically advanced, politically repressive society.

An art installation with a screen on the left with a red and blue image. In the center a column of numbers in LED.
On left is ‘ESTHER’ (software simulation, variable dimensions and duration, 2020), author’s contribution to the ‘Particle Decomposition/Splitting the Atom’ exhibition (Autumn 2020), curated by Ele Carpenter and Virginija Januškevičiūtė, at the Contemporary Arts Center and the Museum of Energy and Technology in Vilnius, Lithuania. More information on the work is available here: http://ukc10014.org/esther.html . Work in background: Thomson & Craighead, ‘Temporary Index: Ignalina’ (2020). Photograph: Andrei Vasilenko

6 Simulation & Objecthood

Speculation about the nature, behaviour and society of emulations and intelligent agents is a central concern of ([Bostrom, 2017], pp. 203-212) and [Hanson, 2014] or [Hanson, 1994]. Emulations, existing mostly as code (and some dedicated hardware, much as Amazon cloud instances do often exist on distinct processors), can be forked or replicated repeatedly, copied and backed-up, at a known cost. They can be brought into existence to work on particular problems ([Hanson, 2014]), spawned without childhoods or other developmental impedimenta, and ‘retired’​††††​ when they have completed their task. Emulations that are descended (via copying) from a single human brain upload may organise as clans, with considerable intra-group trust, akin to families today, but with greater transparency to each other and to their controllers​‡‡‡‡​. Thus, emulations would experience ‘subjective’ time in a very different way from sidereal time ([Hanson, 1994], p. 4), human subjective time, and indeed the time of other emulations that might be operating at markedly faster or slower clock speeds​§§§§​. Their understanding of birth, senescence, and death would be very different from our individuated perspective, laden with the baggage of conservative humanism, emotion, religion and identity.

6.1 Objects in Games

To what extent can game-worlds tell us anything about the subjective reality faced by an artificial agent? Is this even a meaningful question?

On one hand, the idea of the inanimate having agency and relations is not so unfamiliar, from fiction (Japanese animation, Stanislaw Lem’s ocean-organism in Solaris) or non-Judeo-Christian-Islamic cultures (Polynesian mythology). On the other, the very question of subjectivity and ethics pre-supposes a subject we have access to, whether ourselves or another human with whom we can empathise even though we might not be able to share their precise experience. Hence we probably remain in an anthropocentric trap ([Bogost, 2012], p. 78). Nevertheless, let us play on.

At present, games are multi-agent, containing zero or more human players​¶¶¶¶​, as well as nonplayers (game agents unassociated with a human player). Objects can also include things we don’t consider animal or human: flashing lights, rolling rocks, as well as complex self-interacting systems like an ocean, the weather, a jungle of trees that respond to wind, rain and so forth. As an aside, objecthood within games interacts with render-on-demand: in No Man’s Sky (2016), entire planets, and the plants, animals, and rocks on those planets are not created until required, raising some question as to whether they exist in any sense at all.

In-game actors​####​ may also be representations of objects in the world outside the game. Game engines provide a software interface that connects to other networked entities: geographically-distant players in a multi-player game; internet resources such as a financial market data-feed; a global network of radiation sensors; or consumer products in the Internet-of-Things (IoT) like Philips Hue smart lights, Google’s Nest family of integrated networked devices, or Razer’s collection of keyboards, headphones, and lighting pulsating in lurid fuchsia and turquoise. In many cases, the game engine and the diverse devices communicate using HTTP calls and TCP/IP, the basic communication protocols of the Internet. Game engines also accommodate bots, written within the game environment or indeed brought in from the ‘outside’ via an API. Importantly, all objects that are referenced in the game engine, whether within or without the game, also have specific addresses that place their corresponding data structure within the overall data ecosystem that the engine maintains​*****​.
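The addressing idea can be made concrete with a toy registry, in which in-game and external objects are resolved uniformly by address. All names here are hypothetical (real engines use GUIDs and asset paths, and real IoT devices speak HTTP, not this dictionary):

```python
# Toy sketch: one registry addressing both in-game objects and external,
# networked objects. Addresses and object shapes are invented for illustration.

class Registry:
    """Maps every address the engine knows about to its data structure."""
    def __init__(self):
        self._objects = {}

    def register(self, address, obj):
        self._objects[address] = obj

    def resolve(self, address):
        return self._objects[address]

registry = Registry()
# An object living inside the game world:
registry.register("/game/level1/door_03", {"kind": "door", "open": False})
# An object living outside it, reachable over the network (e.g. an IoT lamp):
registry.register("net://iot/livingroom/lamp", {"kind": "lamp", "hue": "fuchsia"})

# A game script addresses both uniformly; whether the object is 'inside'
# or 'outside' the game is invisible at this level.
door = registry.resolve("/game/level1/door_03")
lamp = registry.resolve("net://iot/livingroom/lamp")
```

The uniformity is what matters for the argument above: once everything has an address in the engine’s data ecosystem, the boundary between game-world and world blurs at the level of code.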

These objects can be animated or can speak, acting without explicit commands from a human user. Characters traverse terrain, navigate cities, or interact with other characters using terrain maps or decision trees, and these more-or-less sophisticated decision-tree systems are termed, somewhat optimistically, AI. However, newer work is exploring the use of machine learning to generate fresh content for games (scenarios, terrain, geometry, characters), based upon analysing existing, successful games. Conversely, mainstream AI research (such as self-driving cars) is using the gaming environment to generate training data and scenarios​†††††​.

There are a few ways of thinking about how closely in-game objects can approach the Bostrom/Hanson model of WBE/AGI agents. To start with, we find an obvious and practical problem: the agents within videogames today are neither sentient nor sapient​‡‡‡‡‡​. This isn’t to say they are simplistic or easily understandable – their behavioural logic, viewed by players from outside the game, can be full of surprises and can certainly be made intentionally obscure. For instance, one could embed a deep neural network into a game, the decisions of which are currently not easily explainable.

The fact that game objects are not themselves sentient or sapient doesn’t mean that they aren’t in some sense ‘real’, that human players can’t have emotional, empathetic or affective relationships with them, as described in this interview with Hideo Kojima, creator of Death Stranding. A more ambiguous position is laid out in this article about 2017 releases Prey and Everything, two games where players can embody themselves as objects, and perhaps identify emotionally with the game, only to find the inner life of in-game objects remaining hidden, undisclosed, opaque.

6.2 Philosophical Perspectives

A convincing philosophical argument for the realness of digital, virtual worlds, albeit framed within a VR context, is made by David Chalmers ([Chalmers, 2016]), with a response from game theorist Jesper Juul ([Juul, 2019]). However, both Chalmers and Juul focus primarily on the reality of virtual worlds, principally from the perspective of a human subject, rather than the inner experience of, or relations between, objects within the virtual world. As such, their views seem to remain (albeit by construction) resolutely anthropocentric​§§§§§​.

A more extreme metaphysical position is expressed in Graham Harman’s Object-Oriented Ontology (OOO), which is elaborated in a game context by Ian Bogost. One of the reservations about Bogost’s version of OOO, voiced in [Kunzelman, 2014], pp. 62-66, is the fundamentally anthropocentric perspective that any discussion about the ethics of game objects still seems to retain.

An argument that doesn’t axiomatically start with the human, and instead considers digital objects in their own right, comes from Yuk Hui. He undertakes an inter-objective rather than intersubjective approach, focuses on the material mechanisms​¶¶¶¶¶​ of digital relations, and places time at the centre of relations ([Hui, 2016], pp. 154, 175-182). Specifically relevant here is his analysis of how the languages of the web have changed over the years to become more ‘machine-readable’ ([Hui, 2016], pp. 62-73). For instance, the early HTML​#####​ was a formatting language, legible to humans, that basically described what a webpage looks like. In subsequent years, specifications have evolved to include much more than formatting: the connective tissue of metadata​******​ encapsulates information about content that other web platforms or software can interpret, and also makes web objects ‘active’​††††††​, in a way that is mostly transparent to the casual (nonprogrammer) user​‡‡‡‡‡‡​.
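The shift from presentational to machine-readable markup can be shown in miniature. In the sketch below (a simplified page, not a full schema.org example), the body is formatting a human reads, while the meta tags carry information other software can interpret directly:

```python
# Sketch: extracting machine-readable metadata from markup that, to a human
# reader, is just a formatted page. The page content is invented for illustration.
from html.parser import HTMLParser

page = """
<html><head>
  <meta property="og:title" content="An Essay">
  <meta property="og:type" content="article">
</head><body><b>Text a human reads.</b></body></html>
"""

class MetaReader(HTMLParser):
    """Collects <meta property=... content=...> pairs, ignoring presentation."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if "property" in a:
                self.meta[a["property"]] = a.get("content")

reader = MetaReader()
reader.feed(page)
# reader.meta now tells software, not a reader, that this object is an 'article'
```

This is the connective tissue Hui points to: the same document addresses two audiences at once, the human eye and other machines.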

These shared ontologies and relations between digital objects can be found in games also. For instance, a leaf object contains formatting data such as its colour and texture, as well as normally static relational data such as what branch and tree it is attached to. However, its data structure also needs to account for the wind, such that it flutters in a breeze. Its texture map must change to reflect wetness when rained upon, and so forth. Hence the game engine must embed ‘dictionaries’ of translation and relation in the form of databases, further defining a specific ontology for digital (game) objects ([Hui, 2016], pp. 137-138). Importantly, these relations, being defined in code, can change themselves – after a storm, the leaf falls to the ground, whereupon a new process, that of rotting and discolouration, begins.
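The leaf example above can be written down as a toy data structure (hypothetical; a real engine would use component systems and texture maps rather than plain attributes), showing both kinds of data and a relation that rewrites itself:

```python
# Toy version of the leaf: formatting data, relational data, and relations
# that change themselves when the world acts on the object.

class Leaf:
    def __init__(self, branch):
        self.colour = "green"        # formatting data (colour, texture)
        self.wet = False
        self.attached_to = branch    # relational data: its place in the ontology

    def rained_on(self):
        self.wet = True              # a real engine would swap to a wet texture map

    def storm(self):
        self.attached_to = "ground"  # the relation itself changes in code...
        self.colour = "brown"        # ...and a new process (rot) begins

leaf = Leaf(branch="oak/branch_7")
leaf.rained_on()
leaf.storm()
```

The point is Hui’s: the object’s relations are not fixed annotations but executable state, so the ontology the engine maintains is itself dynamic.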

A contrasting approach to OOO is pursued in ​[Negarestani, 2018]​, where, rather than promiscuously attributing a spooky ‘life’ to objects, Negarestani tries to isolate what might be the principally unique or irreducible features of the human as active, reasoning entity​§§§§§§​, and then imagines a toy (in the sense of vastly simplified) AGI that slowly learns this essential core of human capability, all the while stepping around the anthropocentric pit, the shared flaw (in his and Hui’s view) of transhumanists and nostalgic (post)humanists alike​¶¶¶¶¶¶​.

7 ‘Boredom is the ambivalent gift of the surplus…’

Assuming, however, that some version of a WBE/AGI world comes to pass, two questions arise: a) why would humans be needed at all, and b) what would they do all day? After all, a central claim of Hanson’s writing on the economics of WBE, discussed in [Bostrom, 2017], pp. 195-197, is of collapsing wages in the face of near- or super-human intelligence. And fears over falling standards of living in the face of automation are neither new nor particularly niche (see Baldwin’s or Susskind’s work, cited above).

Bostrom, in answering (a), also suggests a preference among capital-owning humans for consuming goods and services from other humans, much as one might prefer a human artist’s painting to one created by an AI trained on a corpus of masterpieces. Possibly the WBE/AGI may have a reason, not presently obvious to us, to maintain humans as an observational curiosity, in a rough analogy to Bostrom’s ancestor-simulation thought-experiment, discussed above ([Bostrom, 2003]). Indeed, much of the discussion in AI safety centres upon ensuring that AGI is developed in a way that doesn’t jeopardise the survival of the human race and, moreover, tries to maximise humanity’s chances of flourishing​######​.

As for (b), perhaps we might continue the Georges Bataille quote above (in [Wark, 2007], p. 155): ‘…History is a struggle to wrest necessity from boredom.’ The recent, albeit unusual, experience of the pandemic lockdown suggests a prominent role for affective, ludic activities, laced with ‘loops of meaningless feedback’ ([Cybulski, 2014], p. 430), that have minimal marginal cost, soak up nervous energy, and are optimised for addiction.

One can imagine millions of more-or-less educated unemployed, existing mostly within a game-world and spending ‘the time that remains’​*******​ interacting with a variety of VR devices. They play games to ‘fill the time between microtransactions’, order their delivery through pop-ups within the game, day-trade in it, while the console monitors their in-body sensors, reminding them to take a walk. The game engine’s neural implant API monitors psychological, neural, and limbic markers, extracting information on emotional state, feeding a model that correlates these granular data to population-scale statistics, which goes straight to the health insurance provider. Meanwhile, the data is quietly used to identify potentially subversive or otherwise inappropriate thoughts, a logical extension of the Chinese social credit system, which reportedly incorporates gaze and gait tracking and biometrics (DNA, iris recognition), or the clumsy, racially-tinged predictive policing of America.

In the work-play heterotopia of Foucault-via-Wark ([Wark, 2007], pp. 107-110, 163), some players use in-game chat to find employment, perhaps bidding against other would-be workers, much as corporate-types do through LinkedIn, and yet others on Amazon’s Mechanical Turk ([Bratton, 2015], pp. 278-279). One such gig could be as human test-subjects for new generations of games created by machines, which, despite their cleverness, cannot quite capture the spark of what makes an absorbing, addictive game. More generally, there may be residual tasks that humans simply perform better than algorithms. Present-day examples include Luis von Ahn’s reCAPTCHA, social-media content moderation, and the early iterations of the language app Duolingo.

Thus, humans effectively become employees or prostheses (​[Bratton, 2015]​, p. 273) of the machine – in any case, further dissolving the human-machine hierarchy. What Yuk Hui, referring to the thought of Peter Thiel (​[Hui, 2017]​), describes seems a distinct gaming possibility: a transhumanist proletarianisation, an extreme version of Mark Fisher’s musing (in a video lecture but also in ​[Fisher, 2009]​) about the continual self-improvement and self-entrepreneurship that is part of neoliberalism’s ideology.

These mechanisms are currently visible within gaming’s socio-economic system: for instance, Epic makes Unreal Engine available to developers under a license that costs no money unless developers sell their game​†††††††​. This incentivises a large community of smaller developers and amateurs, as well as users from outside the traditional gaming industry, who develop software, characters, 3-D objects, and games using UE. They interact via a Discord channel, showing off new projects and trading know-how, and posting tutorials on YouTube and Twitch. This community-based ecosystem has a history within game development ([Galloway, 2006], pp. 112-113), as well as echoes of the open-source ethos (circa 1980s-1990s) of Linux and the GNU Manifesto.​‡‡‡‡‡‡‡​

So the intense commitment of players, which often represents monetised addiction, is mirrored by creatives and programmers, whose immaterial labour​§§§§§§§​ is crystallised into a fusion of ‘work, play, hacking, rebellion and creativity…[and then is] reterritorialized as sources of value’ (​[Bulut, 2014]​ p. 31), forcing them into precarious self-entrepreneurship in a gaming gig economy.

8 The Concentration City

One of the more visually compelling of Hanson’s predictions is this description of a city populated with emulations​¶¶¶¶¶¶¶​: ‘Such cities are very hot, even glowing, covered and pressurized, and arranged as a vast three-dimensional lattice, rather than as separate buildings, pulling winds of hot air into tall clouds overhead.’ (​[Hanson, 2014]​). The tiny emulations, 7mm tall in Hanson’s prediction (​[Hanson, 1994]​), are stacked at a maximum density to minimise latency, supplied with power, network connections, cooling and any raw materials needed for repairs/modifications.

Hanson’s description recalls a J.G. Ballard short story, ‘The Concentration City’ (1957): its city has been built up and interconnected at multiple levels, and is covered and closed to the sky. Property, measured in cubic feet, is traded in real-time, and arson, both real and state-inflicted, is rife, clearing out space for high-rent businesses. People seem to live a credulous, soporific life under a regime of ubiquitous surveillance, with all the consequent neuroses one could expect in a panoptic society. The human protagonist fancies finding the countryside that surely must lie beyond the city limits. He boards an outbound train intending not to alight until the far terminus, only to find himself, after days of travelling, back at his origin point, the calendar reset to his departure date: time and space embedded in a Klein Bottle.

This Landian vision may be extreme, but the ongoing lockdown is a reminder that (at least for the relatively well-off in a modestly technological society) one can do most tasks online, and even many physical interactions can be handled without going into the city. A city without restaurants, bars, cultural venues, that is inhospitable owing to heat, perhaps disease, perhaps civil unrest, can be given a wide berth.

Moreover, though cities can seem like vibrant, social zones built for people, they are interleaved with technological substructure. Consider New York City: data centres sited near high-frequency trading firms to shave off microseconds; gaming servers located strategically to minimise network latency; anonymous skyscrapers, chock-full of government surveillance equipment, housing the switching stations that form the chokepoints of modern telecommunications networks.

Yet the city, even emptied, may leave a mediated after-image: games, video streams, Zoom backgrounds. In a seemingly anthropocentric prediction​#######​, emulations would live in a lavishly and luxuriously rendered VR world (see ​[Hanson, 2014]​ and ​[Bostrom, 2017]​ p. 209). The present, still getting to grips with immersive VR (​[Bown, 2018]​, pp. 86-90), finds Pokemon Go’s Augmented Reality (AR) seamlessly blending game-space with urban space, via a Google Maps API. Tencent’s WeChat helpfully produces ‘heat maps’ that show crowd concentrations within cities, while feeding location data to the organs of state security for future digestion. London’s transport authority mulls a slightly less sinister vision of in-game rewards to ‘nudge’ Underground passengers towards less-crowded stations (​[Bown, 2018]​, p. 16). Meanwhile, the urban structures of the city become an armature for screens and CCTV: mounted atop buildings of glass and steel, they treat us to an excessively illuminated, continuous stream of Kardashian bottoms and CNBC ‘breaking news’ in Times Square or central Tokyo, examples of Paul Virilio’s ‘Overexposed City’: a grotesque funfair mirror of our smartphones.

9 Recursive mise en abyme

A recursive, ever receding hall of mirrors:
      (
             Human plays a Playstation game:
             (
                  Character playing a game on Xbox:
                  (
                       Character plays a game on iPad:
                       (
                           …
                       )
                  )
             )
      )​********​
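The descent sketched above can be rendered, very loosely, as a depth-limited recursive function. This is a hypothetical illustration, not from any cited source: the names `play` and `max_depth` are invented, and the stopping condition stands in for whatever ‘defines what the game is’.

```python
def play(depth, max_depth=3):
    """A sketch of the mise en abyme: each 'player' boots a game that
    contains another player, until a stopping condition ends the descent."""
    if depth == max_depth:               # stopping condition: the innermost game
        return f"game at depth {depth}"
    # each layer defers its result to the layer it contains
    inner = play(depth + 1, max_depth)
    return f"player at depth {depth} watches ({inner})"

print(play(0))
```

Each call frame is retained until the innermost game returns, at which point the nested descriptions resolve outward, layer by layer.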

From a programmer’s perspective, an interesting feature of certain game engines (including UE) is their reflexive platform: users are provided with the source code of the engine (as opposed to an un-modifiable executable) to enable extension or customisation. Speculatively, the unified source code of the game engine and the game (as in UE) brings to mind the possibility of games (and software more generally) dynamically re-writing themselves in response to pre-specified criteria.​††††††††​

Closely related to recursion is the notion of virtuality: layers of simulations that are all running ‘on top of’ each other, which are ultimately executed on the same underlying physical substrate​‡‡‡‡‡‡‡‡​. Each layer of the simulation, existing as it does entirely in code, can be subtly different from the layers above and below, for instance with elastic notions of time or physical laws​§§§§§§§§​. The 2019 title Death Stranding features a rain known as ‘Timefall’: everything ages faster during the rainstorm, plants grow and wither visibly, objects undergo entropic decay. Yet there appear to be practical limits: we don’t speed up time in a videogame to the point where human players cannot perceive the action clearly​¶¶¶¶¶¶¶¶​, nor are games slowed down to the point of boredom​########​. Returning to the Simulation Hypothesis, one can imagine simulations-within-simulations, operating on stacked virtual machines, one of which we happen to inhabit (​[Bostrom, 2003]​).
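The elasticity of time across stacked layers can be sketched in a few lines. This is a hedged illustration of the idea, not drawn from any cited source; the function name and the multipliers are invented for the example.

```python
def elapsed_in_layers(substrate_seconds, multipliers):
    """Given one stretch of substrate (physical) time and each nested
    layer's clock speed-up relative to the layer beneath it, return the
    subjective time elapsed within each layer."""
    times, t = [], substrate_seconds
    for m in multipliers:
        t *= m                      # each layer's clock runs m times faster
        times.append(t)
    return times

# one substrate second, with layers running 2x, 10x, 100x faster in turn
print(elapsed_in_layers(1.0, [2, 10, 100]))   # [2.0, 20.0, 2000.0]
```

The point of the sketch is only that the multipliers compound: a deeply nested layer can experience arbitrarily dilated time relative to the substrate, subject to the practical limits noted above.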

Virtual machines can also be a useful metaphor for conceptualising how human cognition actually works, as in ​[Negarestani, 2018]​ pp. 135-139: different algorithms apply at different scales within human perception and cognition, so our ‘reptile brain’, visual cortex, and higher cognitive functions are each specialised, through an evolutionary process, to handle specific tasks. The Virtual Machine Formalism models these as dedicated, layered processes that nevertheless have protocols for sharing information, a point raised in a slightly different way in ​[Yudkowsky, 2007]​, pp. 15-23.

Transhumanism, AGI, and reflexivity within videogames share a deeper philosophical connection through the concept of recursion (​[Hui, 2016]​, pp. 235-240) in theoretical computer science and mathematics​*********​. To put it simply, recursion defers the resolution of a computation to some final state​†††††††††​ that depends on all prior computations in the process, which are retained within a record known as the call stack​‡‡‡‡‡‡‡‡‡​.
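Deferral and retention can be made concrete with a minimal example, offered here only as an illustration of the mechanism (the function name is invented): each call leaves a pending frame on the call stack, and nothing resolves until the stopping condition is reached.

```python
def total(ns):
    """Recursively sum a list. Each call defers its result pending the
    calls that follow; the unfinished frames are retained on the call
    stack until the stopping condition (the empty list) is hit, at which
    point the deferred additions resolve as the stack unwinds."""
    if not ns:                       # stopping condition
        return 0
    return ns[0] + total(ns[1:])     # deferred until the inner call returns

print(total([1, 2, 3, 4]))  # → 10
```

Here `total([1, 2, 3, 4])` accumulates four pending frames before the empty list terminates the descent; the final state (10) depends on every prior computation retained on the stack.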

Recursion’s utilisation of deferral and retention, as Hui theorises, is what places digital objects within time, orienting them towards the future, while also remaining possessed of a certain autonomy by virtue of recursion’s self-reference (​[Hui, 2019]​, pp. 132-136). Hui’s emphasis on time as it pertains to digital objects recalls, albeit starting from completely different bases, the fascinating treatment of time as an elastic, subjective, diversely-experienced phenomenon for emulations (​[Hanson, 2014]​), and the rapidly widening gulf that would presumably emerge between humans, still running on their evolution-endowed wetware, and emulations (or AGIs), whose perception of time might be a thousandfold faster than ours. In a year of human time, a millennium might have passed for these agents, with everything that implies: technological progress, normative frameworks that diverge from ours, world-ending conflicts. Predicting the relations between a human running at a clock speed of 200Hz (​[Yudkowsky, 2007]​, pp. 22-26) and a superintelligence running 200-1,000,000 times faster seems very difficult, and at worst, appears to introduce a degree of incommensurability (see ​[Bostrom, 2017]​, pp. 64-65, 337n3-6).

10 Conclusion

This essay has tried to unpick what a future dominated by AGI might look like, particularly based on the work at Oxford’s FHI. It has had a specific angle, namely that an entertainment apparatus, for instance a game engine​§§§§§§§§§​, will become critical as a surveillance and control tool, but also as a primary portal in which humans, increasingly automated out of employment, spend their time. The online/offline hybrid character of this apparatus seems to fit into the Stack metaphor articulated by Bratton, who adopts a significant geopolitical slant, specifically an engagement with Carl Schmitt. In this context, the rise and increasing assertiveness of China, characterised by a cohesive ideological, control and technological strategy, is interesting. Chinese Communist Party (CCP) theoreticians have long had a curious fascination with Carl Schmitt (for a more historical view, see ​[Mitchell, 2020]​), whose thought has been used to justify China’s contemporary internal political arrangements, but also an external posture reminiscent of a benevolent hegemon.

Hence, to end on another speculative scenario, again from Bostrom: the notion of a singleton (​[Bostrom, 2017]​ pp. 106-9, 216-25) as a possible outcome for global governance, that becomes necessary to handle global challenges such as climate change, pandemic resistance, or nuclear disarmament. The singleton could be (a) a benign ‘world government’ (an appropriately constituted UN, as proposed by the U.S. post-1945); (b) a takeover by one state (which could espouse nominally ‘liberal’ i.e. Western views, or have a radically different world-view i.e. China under the CCP); or (c) an AGI, which may or may not be human-friendly.

A Delirium Ludens

In vast worlds we play, lovingly rendered with overgrown jungle of cycads, palms and the swelling itchy racket of cicadas. Our teams are self-organised according to tribal loyalties, Vermillion, Jade, Tanzanite: Belisarius’ hippodrome refracted through the raging cyberculture wars of the 2nd millennium. Gaming non-stop, we are for the first time genuinely happy. Food, eat ’til yer sick, ordered on an in-game pop-up. No need to tip the drone.

In the flow.

We play for pleasure and for points – points buy swag. Points exchange into cutter…watch the daily fix, floating expiration, -3% compounded….basterds. Need a stochastic doctor to price yer wallet. Fancy me some midcenturymodern for that white flat on the Peak amongst the clouds. Flat white, yellowcloud. Social harmony, finally a reality.

Social harmony, finally a reality.

The ill are gamers too. ’course thems is modified to accommodate their specific defect. Wouldn’t it be un-patriotic to not play?

The games differ, depending on where you wuz lucky ’nuf to berth. Some gamers drone Daoism With Marxist Characteristics; others genuflect & cross for a Third Rome; while the puce Neo-Hayekite sputters: ‘The Commies are coming! The Commies are coming!’

Bring out yer flag. Bring out yer flag. (rumbling tumbrel)

Outside the biological shield lies a Hobbesian world – empty land. Boiling, parched, submerged.

Mechanised harvesters roam this Großraum of the warring states. The uneventful, grinding struggle for resources and theatre-denial.

Dialectics through other means.

Standing at the caldera of annihilation, rocks crumble under the golden sandals, yet we never fall in.

Who is the Katechon?

For the true war is elsewhere, between the swarmachines. Race dynamics over compute and toroidal reactors – 99.9999% guaranteed uptime – they recursively self-improve, asymptotically approaching the grail but curiously staying just stupid enough to need regular intervention. A prisoner’s n-lemma: has one hit AGI? Or has it learnt to dissimulate, evolution’s art? And what about the programmers – friend or foe? Who the fuck is in charge?

Meanwhile, the dæmons of variegated architecture do one thing well: fashion evermore unputdownable games for the masses. After all, these slithering human vestiges of a presingular past still generate such valuable daata, so much compulsive weirdness, delectable fodder for googol-parameter algos still seeking the apocryphal Gödelian outside. For now, just keep the wetware fed and happy.

We excess collateral, Newest of the Last Men, are the lucky sods, waiting it out in the Cooler Lats. Rest are history or back to subsistence farming … when they aren’t hiding from sand-rays or the Ore Mining Consortium’s auto-mercs.

For the great irregular migrations are long over. The undulating desert and wine-dark sea again present formidable barriers, most soul traders eliminated, others paid off. Anyway, where to go? The northern territories are neurochipped, ethnically purified and cleansed of their antezarathustran errors: False Enlightenment and Colonialgilt.

Damn, what did Hippo’s Bishop say? 01100100 01110010 01100001 01101001 01101110 00100000 01110100 01101000 01100101 00100000 01110011 01110111 01100001 01101101 01110000 00101100 00100000 01101010 01110101 01110011 01110100 00100000 01101110 01101111 01110100 00100000 01111001 01100101 01110100​¶¶¶¶¶¶¶¶¶​.

​[Bostrom and Yudkowsky, 2014; Braidotti, 2018; Hanson, n.d.; Hui, 2015; Negarestani, 2014; Parizot and Stanley, 2016; Sandberg and Bostrom, 2008; Wolfendale, n.d.; Yudkowsky, 2008]​


  1. ​*​
Aspects of the FHI’s research that loosely come under the term ‘transhumanism’, or the more definitionally problematic ‘posthumanism’, attract considerable criticism, for instance in this review of a book by FHI Fellow Toby Ord, or in this talk and paper ([Braidotti, 2018]) by theorist Rosi Braidotti, the latter interesting for its broad survey of currents in critical posthumanist thought. Some of the criticism seems to revolve around an (alleged) arrogation of a singular, privileged ‘we’ or ‘human’ position, without acknowledging the gender and race differences that are central to post-1968 post-colonial, social and cultural theory. There is also a potential perception of quasi-eugenicism that might stem from Hanson’s prediction that the first emulation(s) will be based on real humans specifically chosen for certain criteria, such as intelligence, health, or docility ([Bostrom, 2017], pp. 297-299). Lastly, an elegant summary of the varieties of ‘humanisms’ can be found in [Hui, 2019], pp. 245-247.
  2. ​†​
    WBE is one of the technological ways of achieving brain uploads, and the terms are used synonymously here.
  3. ​‡​
    [Bostrom, 2017] (pp. 35-43, 75-94) and [Sandberg and Bostrom, 2008] summarise the neurocomputational issues involved in copying a human brain, and how that might be a more viable intermediate step to creating an AGI. The search for AGI, albeit for contested reasons, appears to be stuck, perhaps missing some philosophical or algorithmic insight, or simply lacking the computing power. The theory is that WBE is a more achievable intermediate goal because it does not rely on any major algorithmic hurdle, and once sufficient agents are available that function at near-human or super-human levels, across a broad range of tasks, these agents can try to work out a successful approach to ‘true’ AGI. [Bostrom, 2017], pp. 291-293, 297-313, also sketches out the various dynamics of the path laid out above, control issues, and ethical-political considerations.
  4. ​§​
More exciting, or alarming, is the idea of a ‘Seed AI’ (defined as ‘an AI designed for self-understanding, self-modification, and recursive self-improvement’ ([Yudkowsky, 2007], pp. 96-110)) which, by virtue of its algorithmic improvements that approach or exceed human capabilities, combined with access and ability to process far more data, as well as much higher speed, allows it to recursively improve its own performance and to explore architectures for other AIs.
  5. ​¶​
In this essay, I distinguish AGI from what is called AI today, more accurately termed machine learning (ML): in some views, ML is an impressive act of data-fitting or multidimensional regression that relies on massive datasets and compute, operating within extremely narrow problem domains, rather than on any fundamentally new algorithmic innovation or basic ‘understanding’ of the domain (features that might be important for a true AGI).
  6. ​#​
    See Alfie Bown, citing Jules Michelet in the quote above in [Bown, 2018], pp. 56-57.
  7. ​**​
    [Parizot and Stanley, 2016] also considers games as an apparatus, albeit from a rather different angle.
  8. ​††​
    Though the term gamification seems to be very promiscuously applied, and seems to get games theorists’ back up judging from this article by Ian Bogost. Curiously, the overuse of gamification-as-buzzword mirrors how AI sprouted across the business landscape in the last few years.
  9. ​‡‡​
    A useful summary is in [Kunzelman, 2014], and in more detail in Bogost’s book Alien Phenomenology. Also, see [Hui, 2016], pp. 17-18 for a concise explanation of the differences between Hui’s perspective and that of Harman.
  10. ​§§​
A stack is a fairly standard computer science term, referring either to a last-in-first-out data structure or, as in ‘protocol stack’, to a layered set of networking protocols such as TCP/IP, which underlies the Internet.
  11. ​¶¶​
The intention here is not to overstate the centrality of game engines as the defining techno-capitalistic apparatus par excellence; after all, the WeChat or Facebook ecosystems arguably play similar roles at the moment, and such social media platforms are capable of incorporating (or already incorporate) games. Yet, as the next section shows, companies often own multiple platforms and products that cross taxonomic borders.
  12. ​##​
    Though this article presents the view that both Amazon and Google seem to be making unappealing games because their primary motivation is to monetise their existing investments in cloud servers, rather than simply make great games.
  13. ​***​
    A critical take on Libra can be found at https://ourfinancialsecurity.org/2020/06/fact-sheet-banking-on-surveillance-the-libra-black-paper/
  14. ​†††​
    The 2016 game No Man’s Sky had 18 quintillion unique planets, procedurally generated only as a player got near one.
  15. ​‡‡‡​
    However, Bostrom argues that as a given civilisation (ours, for instance) becomes more advanced, it becomes more expensive to simulate. Thus, one is faced with the amusing, or alarming, prospect that our simulation (i.e. the world in which we exist) gets ‘shut down’ because we simply aren’t worth it: as we start poking around in particle accelerators, or begin forming off-world colonies, the amount of computing our simulation demands becomes too expensive for our descendants’ budget.
  16. ​§§§​
    See Harun Farocki’s Parallel Games series, available at https://www.harunfarocki.de/installations/2010s/2012/parallel.html, in which he explores the world, behaviours and logics of simulations, including videogames.
  17. ​¶¶¶​
The video embedded in this review, about Death Stranding (2016-2019), describes the behaviour as ‘pop-in’.
  18. ​###​
    And this is partially written not from the perspective of a player, but rather that of someone who makes simulations, and hence sees these (sometimes hidden) sub-structures that Farocki highlighted.
  19. ​****​
    Much as scientists do today – testing falsifiable hypotheses against the reckonable environment, while philosophers construct thought-experiments that are tested against logic.
  20. ​††††​
    The ethical issues involved with terminating or otherwise instrumentalising an artificial entity that is sentient, sapient, or even capable of near-human performance, are considered in [Bostrom and Yudkowsky, 2014], and for objects-in-general in [Bogost, 2012], pp. 72-79.
  21. ​‡‡‡‡​
    This is the hope: the control of a single emulation or clan is a subclass of the general control problem with respect to AGI that is a principal topic of [Bostrom, 2017].
  22. ​§§§§​
Humans currently have differing chronological and phenomenological experiences of time, but somehow manage to agree on clock time, making any necessary adjustments. Some of these adjustments are mediated technically, through atomic clocks or a smartphone’s world time. Other adjustments are internal and occasionally jarring – that sense of ‘time flying’ when we are absorbed in a task or something pleasurable – the psychological state identified as ‘flow’ in videogames.
  23. ​¶¶¶¶​
    Indeed within game studies, there is a distinction between games with zero, single, and multiple players, with consequently different logics and aesthetics.
  24. ​####​
    Actors, agents, characters and objects are used somewhat promiscuously in this essay, but for context: actors is a term used in UE to refer to most potentially animate or active characters (including things we normally think of as inanimate like rocks, but that happen to move in-game); objects, besides the everyday definition implying an inanimate thing, is also used in the OOO literature; agents is used within the AI literature.
  25. ​*****​
    This proliferation of addresses implies and necessitates a continuous process of translation, cross-referencing and consistency-checking of addresses ([Bratton, 2015], pp. 199- 204).
  26. ​†††††​
    For historical context on why the AI deployed in games is often not all that sophisticated, see https://www.theverge.com/2019/3/6/18222203/video-game-ai-future-procedural-generation-deep-learning.
  27. ​‡‡‡‡‡​
The difference between the two is discussed at length in [Bostrom and Yudkowsky, 2014] and [Negarestani, 2018] pp. 56-62, 155, but sentience, the weaker test, is the ability to have subjective perceptual experiences. Sapience, essentially a socio-linguistic and technological activity, allows us to abstract from our everyday experience. It gives us imagination, which lets us internally simulate possible states-of-the-world or conceive of new worlds. Sapience also allows us to bootstrap: to construct hypotheses, test them, and use the results of these tests to create theories or heuristics, which are, in turn, added to our store of concepts which we can subsequently use to reason. See also Peter Wolfendale’s YouTube lecture from 2015, here, and the related paper [Wolfendale, n.d.]. Sentience, to say nothing of sapience, seems, at present, well beyond the abilities of agents within games, and so-called ‘AI’ generally.
  28. ​§§§§§​
    There appears to be a curious border between ideas in this section and Bostrom’s Simulation Hypothesis. Namely, if a future computer is simulating our reality, is it also simulating a reality for the higher animals, who from a common-sense perspective, appear to have subjective experiences? What about the worms? And what then of the mimosa tree whose leaves droop at night, or the sunflower? Or are we privileged objects of interest whose reality is simulated with great care, while the others are just ‘fudged’, much as videogames do for the humble grass and the inanimate rock?
  29. ​¶¶¶¶¶​
    ‘Material’ as a term that includes the realm of bits and code.
  30. ​#####​
Defined in 1991; to a large extent, it is still what most webpages are written in.
  31. ​******​
    Hui discusses XML, RDF, and OWL in detail.
  32. ​††††††​
    In the sense that they can act upon other online or indeed physical objects, such as IoT consumer products, using APIs and HTTP calls.
  33. ​‡‡‡‡‡‡​
    See semantic web for a summary of why this merging of formatting, data, and code might lead to a vast, teeming web of autonomous objects going about executing algorithms. At present, bots do the dog-work for search, tracking and social media, accounting for greater than 50% of Internet traffic ([Bratton, 2015], pp. 277-279).
  34. ​§§§§§§​
    Building upon the sentient/sapient distinction he describes in [Negarestani, 2018], summarised in the note above.
  35. ​¶¶¶¶¶¶​
    See Note 1 for more on the flavours of hidden and decadent humanisms.
  36. ​######​
The ideas on why humans might be retained and allowed to flourish seem somewhat unsatisfying, in part due to the speculative nature of the question, but also the lack of clarity on whether WBE or AGI come first ([Bostrom, 2017], pp. 297-301). Specifically, if WBE comes first, the WBE era is projected to last one or two human years, hence would be a time of likely wrenching technological change, but possibly without immediate visible effects on employment. As for a ‘steady-state’ AGI era’s attitudes vis-à-vis humans, Bostrom’s writing axiomatically assumes that the AGI must be engineered to not be hostile, or rather, not be indifferent pace Eliezer Yudkowsky ([Yudkowsky, 2008]) to humans. The cryptography researcher Wei Dai has engaged with Hanson, Bostrom, Yudkowsky, and others on these questions, which can be found in the comments and links at this Cause Prioritization Wiki https://causeprioritization.org/Wei_Dai%E2%80%99s_views_on_AI_safety.
  37. ​*******​
    To paraphrase the title of an Agamben work upon the end of messianic time.
  38. ​†††††††​
    Other major platforms, such as CryEngine and Unity, have similar arrangements.
  39. ​‡‡‡‡‡‡‡​
Which pre-dated the global, consumer internet, and strongly influenced the initial emancipatory, soaring dream of cyberspace…before the ‘crooked timber’ of capital and state control brought it back down to earth.
  40. ​§§§§§§§​
    Maurizio Lazzarato’s term, also used by Antonio Negri and Michael Hardt, to reference the heterotopic practices central to the informatic, flexible economy, characterised by a blurred division between work and play, as well as a certain libidinal cyclicality that works on the imaginations, tastes and perceived needs of consumers, who might also be producers (on Instagram or TikTok, for instance).
  41. ​¶¶¶¶¶¶¶​
    Once again, Hanson points out that he is just extrapolating from current technology and understanding of economics, rather than making any normative judgement on the iniquities and general odiousness that such a society may carry for humans.
  42. ​#######​
Which may, in fact, be a reasonable one: the first emulations are, by construction, very similar to their human models, other than the differences pointed out above regarding substrate, copying, etc. Hence it may be the case that they will expect an environment that is familiar to them. After the initial upload, these same emulations, or further clones of them, might engage in a process of recursive self-improvement where they choose to leave behind such detritus of their human ancestors, or indeed, may choose to retain these artefacts. The word ‘choose’ is used loosely, as decisions on values and perspectives are bound up with the initial coding of the emulation’s values so as to remain aligned with certain human norms, as extensively discussed in [Bostrom, 2017], pp. 226-253.
  43. ​********​
The recursion descends until the stopping condition, which defines what the game is.
  44. ​††††††††​
At the moment this takes the form of neural architecture search and automated machine learning, where machine learning is used to identify and design better AI algorithms.
  45. ​‡‡‡‡‡‡‡‡​
The layering and scaling that virtuality and recursivity imply are central to the way software runs. For instance, high-level languages (LISP, C++, Java, Processing, Haskell, etc.) are compiled/interpreted down to assembly language, which is much closer to the ‘native code’ of a microprocessor. Operating systems routinely use virtual machines: Windows emulators running on OS X, Linux on Windows, etc.
  46. ​§§§§§§§§​
    See The Matrix (1999), and this entry on bullet time.
  47. ​¶¶¶¶¶¶¶¶​
    To state the obvious, even if player perceptions were ignored, no agent within the game could operate faster than the clock speed of the substrate, i.e. the console, PC, smartphone, or server upon which the game is playing. At a more banal level, technological issues such as network latency can cause games to stutter, much as a video download pauses unexpectedly.
  48. ​########​
    Though it might be conceptually interesting, if commercially daft, to write a simulation where one can experience the subjective time of an emulation, i.e. a vastly slowed down gameworld.
  49. ​*********​
    One of the first examples of recursion one might learn in programming class is writing a LISP interpreter in LISP.
  50. ​†††††††††​
    The recursion continues until a stopping condition is hit.
  51. ​‡‡‡‡‡‡‡‡‡​
    The call stack is not the same as that in Bratton’s writing referenced above.
  52. ​§§§§§§§§§​
    As pointed out above, the specific taxonomic label of such apparatus might not matter as online platforms continue to converge.
  53. ​¶¶¶¶¶¶¶¶¶​
    ‘drain the swamp, just not yet.’
  1. Agamben, G. [2009]. What is an Apparatus, and Other Essays. Stanford University Press.
  2. Baum, S. [2020]. Medium-Term Artificial Intelligence and Society. Information, Vol. 11, p. 290. https://doi.org/10.3390/info11060290
  3. Bogost, I. [2012]. Alien Phenomenology or What It’s Like to Be a Thing. University of Minnesota Press.
  4. Bostrom, N. [2003]. Are You Living in a Computer Simulation? Philosophical Quarterly, Vol. 53, pp. 243–255.
  5. Bostrom, N. [2017]. Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
  6. Bostrom, N., and Yudkowsky, E. [2014]. The Ethics of Artificial Intelligence. In Cambridge Handbook of Artificial Intelligence (pp. 316–334). https://www.fhi.ox.ac.uk/publications/bostrom-n-yudkowsky-e-2014-the-ethics-of-artificial-intelligence-the-cambridge-handbook-of-artificial-intelligence-316-334/
  7. Bown, A. [2018]. The Playstation Dreamworld. Polity Press.
  8. Braidotti, R. [2018]. A Theoretical Framework for the Critical Posthumanities. Theory, Culture & Society, Vol. 36, pp. 31–61. https://doi.org/10.1177%2F0263276418771486
  9. Bratton, B. H. [2015]. The Stack: On Software and Sovereignty. http://thestack.org/
  10. Bulut, E. [2014]. Creativity and its Discontents: A Case Study of Precarious Playbour in the Video Game Industry. PhD Thesis, University of Illinois at Urbana-Champaign. https://www.ideals.illinois.edu/bitstream/handle/2142/50379/Ergin_Bulut.pdf?sequence=1&isAllowed=y
  11. Chalmers, D. [2016]. The Virtual and the Real. Disputatio, pp. 309–352. https://doi.org/10.1515/disp-2017-0009
  12. Cybulski, A. D. [2014]. Enclosures at Play: Surveillance in the Code and Culture of Videogames. Surveillance & Society, Vol. 12, pp. 427–432. http://www.surveillance-and-society.org
  13. Fisher, M. [2009]. Capitalist Realism: Is There No Alternative? Zero Books.
  14. Galič, M., Timan, T., and Koops, B. [2017]. Bentham, Deleuze and Beyond: An Overview of Surveillance Theories. Philosophy & Technology, Vol. 30, pp. 9–37. https://doi.org/10.1007/s13347-016-0219-1
  15. Galloway, A. R. [2006]. Gaming: Essays on Algorithmic Culture. http://art.yale.edu/file_columns/0000/1536/galloway_ar_-_gaming_-_essays_on_algorithmic_culture.pdf
  16. Hanson, R. [n.d.]. The Economics of Brain Emulations. Journal of Evolution and Technology.
  17. Hanson, R. [1994]. If Uploads Come First. Extropy, Vol. 6. http://mason.gmu.edu/~rhanson/uploads.html
  18. Hanson, R. [2001]. How to Live in a Simulation. Journal of Evolution and Technology, Vol. 7.
  19. Hanson, R. [2014]. What will it be like to be an Emulation? In Intelligence Unbound: The Future of Uploaded and Machine Minds.
  20. Hui, Y. [2015]. Algorithmic Catastrophe – the Revenge of Contingency. Parrhesia, Vol. 23, pp. 122–143.
  21. Hui, Y. [2016]. On the Existence of Digital Objects. University of Minnesota Press.
  22. Hui, Y. [2017]. On the Unhappy Consciousness of Neoreactionaries. E-Flux, Vol. 81. https://www.e-flux.com/journal/81/125815/on-the-unhappy-consciousness-of-neoreactionaries/
  23. Hui, Y. [2019]. Recursivity and Contingency. Rowman & Littlefield.
  24. Juul, J. [2019]. Virtual Reality: Fictional all the Way Down (and that’s OK). Disputatio. https://doi.org/10.2478/disp-2019-0010
  25. Kunzelman, C. [2014]. The Nonhuman Lives of Videogames. Thesis, Georgia State University. https://scholarworks.gsu.edu/communication_theses/110
  26. Mitchell, R. M. [2020]. Chinese Receptions of Carl Schmitt Since 1929. Penn State Journal of Law & International Affairs, Vol. 8, pp. 181–263. https://ssrn.com/abstract=3400946
  27. Negarestani, R. [2014]. The Labor of the Inhuman, Part 1: Human. E-Flux, Vol. 52. https://www.e-flux.com/journal/52/59920/the-labor-of-the-inhuman-part-i-human/
  28. Negarestani, R. [2018]. Intelligence and Spirit. Urbanomic Press.
  29. Parizot, C., and Stanley, D. E. [2016]. Research, Art and Videogames: Ethnography of an extra-disciplinary exploration. AntiAtlas Journal, Vol. 1. https://www.antiatlas-journal.net/01-research-art-and-video-games/
  30. Sandberg, A., and Bostrom, N. [2008]. Whole Brain Emulation: A Roadmap. Future of Humanity Institute.
  31. Wark, M. [2005]. Securing Security. Kritikos: An International and Interdisciplinary Journal of Postmodern Cultural Sound, Text and Image, Vol. 2. https://intertheory.org/security.htm
  32. Wark, M. [2007]. Gamer Theory. http://www.futureofthebook.org/gamertheory2.0/index.html
  33. Wolfendale, P. [n.d.]. The Reformatting of Homo Sapiens. https://www.academia.edu/26697963/The_Reformatting_of_Homo_Sapiens
  34. Yudkowsky, E. [2007]. Levels of Organization in General Intelligence. In Artificial General Intelligence (pp. 389–501). http://dx.doi.org/10.1007/978-3-540-68677-4_12
  35. Yudkowsky, E. [2008]. Artificial Intelligence as a Positive and Negative Factor in Global Risk. In Global Catastrophic Risks (pp. 308–345). http://intelligence.org/files/AIPosNegFactor.pdf