
14/05/2020

Your mind cannot be uploaded

Book review

How We Became Posthuman

Virtual Bodies in Cybernetics, Literature, and Informatics

N. Katherine Hayles (1999)

This is one of several reviews I published on Goodreads – though “review” is too charitable; they are really brief notes on what caught my attention while reading the book, or things I ought to remember about it in the future. Enjoy, maybe?

I actually finished this book a few months ago, but I never got around to updating it here, mostly because I dreaded having to write a review for it. In short, that is because, although I mostly loved Hayles’ book and her reflections on embodiment and how it was erased by cybernetics (and later by tech-influenced philosophical thought on the mind as well, such as the whole of functionalism and all the Kurzweils and Wolframs out there), I am really not much interested (to this day at least, which is as much as I can ever say about my intellectual hunger) in literary analysis: no matter how philosophical it might be, I would mostly prefer to read theory, I guess.

Cover for the 1999 edition.

In any case, I read Hayles mostly for the history of cybernetics she presents. One of my interests in the analytic tradition in the philosophy of mind is understanding how anyone could fathom such an absurd idea as “mind uploading” or “conscious machines”. This is partly because of my own path in these studies, I guess: I used to entertain the idea that machines might one day display conscious-like behaviour. Much like Alan Turing, when I learned programming I too saw no good reason not to ascribe them consciousness – if they walk and talk like a duck… It was only when I was introduced to philosophy of mind via Varela, Rosch and Thompson’s critique of internalist, “disembodied”, computation-based cognition in The Embodied Mind: Cognitive Science and Human Experience that I started to reject the idea of conscious machines. Human consciousness is itself far more ineffable and complex than our silly computational models can ever grasp. Funnily enough, Varela did not deny the possibility of conscious machines as I do (he was fond of the potential of Wolfram’s automata for enacting consciousness).

Hayles pinpoints some very specific and interesting moments in the history of the mind-brain sciences in the 20th century in order to exemplify many of the assumptions that have shaped the field since then – such as a type of scientific realism regarding biological neural modelling using computational neural networks. I turn to this example (probably my favourite passage in the book) in the rest of this review, as I think it represents the tone of the book and Hayles’ main argument.

Warren McCulloch, a neurophysiologist and cybernetician, developed in the 1940s one of the first theoretical accounts (or models) of the neuron cell as a basic logical unit. McCulloch observed that neurons’ networked organisation made them “capable of signifying logical propositions” (Hayles, p. 58). For instance, if neuron A receives connections from neurons B and C, and the excitation threshold of A is equal to the excitation of B and C combined, then this setup can be said to instantiate the logical proposition “if B and C, then A” (or B ∧ C → A). A natural-science-philosophical parenthesis: one should point out how much work has already been done here: it is often only after a model has been proposed that interesting science can be done. As Steve Heims (apud Hayles) has said, “it [is] not easy to extrapolate from amorphous pink tissue on the laboratory table to the clean abstractions of the model”. The logician Walter Pitts came later and developed the network implications of McCulloch’s unit neuron – for instance, demonstrating that a neural network [of sufficient complexity, I assume] was able to “calculate any number (that is, any proposition) that can be calculated by a Turing machine”, i.e., that it was Turing-complete. This approach has proved a success by most measures. Today, a lot of neuroscience works exclusively with computational neural modelling. Neural network modelling has also led to huge developments in computer science and artificial intelligence in the last decade, as we all know.
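To make the “unit neuron” concrete, here is a small illustrative sketch of my own (not from Hayles, nor from McCulloch and Pitts’ papers) of such a threshold unit in Python, assuming binary inputs and unit excitatory weights. With A’s threshold set equal to the combined excitation of B and C, the unit behaves exactly as the proposition B ∧ C → A:

```python
# A McCulloch-Pitts-style threshold unit: the neuron fires (outputs 1)
# when the weighted sum of its binary inputs reaches its excitation threshold.

def threshold_unit(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs reaches the threshold, else 0."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Neuron A receives input from B and C, each contributing one unit of excitation.
# A threshold of 2 (the combined excitation of B and C) makes A a logical AND:
# A fires only when both B and C fire, i.e. "if B and C, then A".
for b in (0, 1):
    for c in (0, 1):
        a = threshold_unit([b, c], weights=[1, 1], threshold=2)
        print(f"B={b}, C={c} -> A={a}")

# Lowering the threshold to 1 turns the same unit into a logical OR, and an
# inhibitory (negative) weight yields a NOT – the building blocks from which
# Pitts argued such networks (given feedback loops and memory) could compute
# anything a Turing machine can.
```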

For McCulloch (and many of our contemporaries), his approach was vindicated by history. But what is the nature of models? How can a model of the brain prove anything regarding actual human minds? What does the behaviour of artificial neurons, mere mathematical reductions, prove regarding actual nerve cells – or worse, regarding the human being constituted by such cells? In 1947, a fellow cybernetician, the psychologist H-L Teuber, asked this very question (more elegantly):

“Your robot may become capable of doing innumerable tricks the nervous system is able to do; it is still unlikely that the nervous system uses the same methods as the robot in arriving at what might look like identical results. Your models remain models – unless some platonic demon mediate between the investigators of organic structure and the diagram-making mathematicians.”

McCulloch’s answer is, for me, exemplary of his fellow machine-functionalists to this day:

“I look to mathematics, including symbolic logic, for a statement of a theory in terms so general that the creations of God and man must exemplify the processes prescribed by that theory. Just because the theory is so general as to fit robot and man, it lacks the specificity required to indicate mechanism in man to be the same as mechanism in robot.”

Second natural-science-philosophical parenthesis: you may treat a theory (such as the neuroscientific view of the nerve cell derived from McCulloch) differently depending on whether you believe it describes reality more or less accurately, or whether you take it as only a useful fiction – one that leads to productive science but which purports to bear no resemblance to reality whatsoever. These positions have been called, in the analytic debate in the philosophy of science, realism and anti-realism respectively. However, one need not limit one’s stance to either accepting or rejecting a theory. You may treat what it says as true only as a methodological principle for making scientific observations, and ignore everything about its view of reality once you leave the lab. Thus you need not assume a mind if you are trying to do behavioural psychology (your subjects are just larger pigeons, after all!) – you can act, methodologically, as if a person were nothing but a pretty complex machine, always readily apt for (re)programming like any other, and still, outside the context of scientific observation (say, while you discuss results with your peers and carry on with daily life), act as if everyone has a mind, feelings, and so on (although you might try to manipulate those around you with ‘positive reinforcement’…).

Neural-modelling realism says that whatever brains are made of, they are fundamentally information-processing systems (or computers). Brains capture data from the external world via sensory neurons in the retina or the nose and process this data in their internal wiring, and it is this aspect of neural behaviour (not, say, their chemical behaviour, although in principle that too may fit under a sufficiently loose definition of “computation”) that is relevant for understanding mental phenomena (and definitely not the then-dreaded Freudian psychology and all of its traps leading into subjectivity).

In the terms above, then, McCulloch believed that his theory (“backed” or not by empirical observation – that isn’t the point here) not only explained neural behaviour in a way that was interesting, productive, or apt for the creation of technology (all very true) – it indeed provided us with the actual principles that govern the natural phenomenon under study. It is those principles, and nothing else, that make neural behaviour happen, he claimed. This is, I think, a complex philosophical position, but one which is, as Hayles claims, a type of Platonism. It is not dissimilar, perhaps, to realism about the laws of physics, for which the laws described by physicists are likewise so general that God and man must obey them. That is because, for McCulloch and others today, what “matters” is the organisational structure (or the functional behaviour, if you prefer) of neurons. And by “matter”, I mean what is necessary and sufficient for the emergence of mental phenomena – such as first-person consciousness or “qualia” (that is, if the theorist grants them existence at all and is not an ‘eliminativist’).

Note that the distinction Teuber pointed out – between a model and the real thing, and the former’s insufficiency in mimicking the full reality of the latter – is more at home in the traditional debate in the philosophy of science outlined above. A model is not reality, and we ought to debate how acceptable it is as a description of the latter. However, this distinction fades into the background in McCulloch’s answer: he found a statement “so general” that whatever nature or man creates is a mere “example” of it – or, in contemporary analytic metaphysical terms, an “instantiation” of it. What most essentially defines the ontology of a brain under this view, then, is the very organisation of its component parts, as distinguished from the material substance they are made of. Given that the mind is a byproduct of the brain (a true given for materialists), and the brain is “only” instantiating general principles, it stands to reason (I suppose? I find this just crazy) that whatever medium is made to replicate the same principles (or structure, etc.), such as a silicon robot, shall also replicate whatever else a brain does – such as having a mind. Now it is not so much that the theory is seen as true, as that it is seen as preceding reality itself.

Hayles calls McCulloch’s epistemological (if not rhetorical) strategy a “Platonic backhand”. This move “works by inferring from the world’s noisy multiplicity a simplified abstraction” (Hayles, p. 12): one observes the noise and complexity of reality and theorises that some set of simpler, more understandable, general principles explains what is seen. This basic initial abstraction is of course part of science; the problem begins when the move “circles around to constitute the abstraction as the originary form from which the world’s multiplicity derives” (p. 12). Laws “so general” are thought to have been found that their occurrence (or their modelling, I’d rather say) is itself seen as an occurrence of the phenomena they were supposed to model in the first place.

For Hayles, writing in the more general context of 1990s feminist critique of traditional Western rationality, the above process is one of the points at which the body was “erased” to give way to abstractions such as “bodiless information” (p. 12), or the idea that individuality = brain structure = information patterns. Philosophically, this occurs by foregrounding the importance of the physical organisation of matter, rather than any special substantive quality of matter itself, in bringing about mental phenomena. For many reasons, I find functionalism untenable as a position in the philosophy of mind. Hayles provided me with more reasons, showing how shaky assumptions made by daring theorists can be turned into religious dogma for technologists a generation later, and inform projects such as the Blue Brain Project – which aims to replicate a human brain (and, by materialist-biased extension, a mind) using IBM supercomputers. One can only hope their bots can read Hayles after reaching the singularity.

Image/header credits: scene from Ex Machina (2014).
