Mark Okrent notes an interesting phenomenon: though Heidegger never explicitly engaged with anything close to what we would today recognize as the philosophy of mind, the cognitive sciences, or artificial intelligence, it is nearly obligatory to reference him when one writes on the topic of artificial intelligence. This is largely due to Hubert Dreyfus’s reading of Heidegger. John Searle, by contrast, has interacted extensively with the cognitive sciences, artificial intelligence and the philosophy of mind. Heidegger and Searle are an unlikely pair, but I think that what is implicit in Heidegger is explicit in Searle, especially in his (in)famous Chinese Room Argument.
Okrent draws out the core of the resemblance here:
What is interesting about this is that the deep structure of Heidegger’s argument is similar to the structure of Searle’s Chinese room argument, but the content is entirely different (Searle 1980). Searle argues, one will remember, that programs are syntactic, minds have a semantics, and syntax is insufficient for semantics; so programs are not minds. But Searle’s reason for thinking that syntax is insufficient for a semantics is that he thinks that consciousness is necessary for semantic content, and that syntax is insufficient for consciousness. Heidegger, on the other hand, waives the consciousness requirement. Instead he argues, based on his discussion of the intentionality puzzle, that acting practically for ends is necessary for semantic content, and syntax is insufficient for acting for ends.
There is a crucial difference between the structure of Heidegger’s argument and the structure of Searle’s, however. Since for Searle the semantics of thought is dependent on conscious states, only entities with conscious states could count as thinking, no matter how they behave. But for Heidegger the semantics of thought depends upon being-in-the-world, and being-in-the-world crucially involves a style of action. So it looks as if it might be possible for something to qualify as thinking in virtue of the style of its action alone.
I actually think that there’s more similarity here than Okrent allows, because when you boil it down, both Heidegger and Searle reject the idea that a syntactical engine could have understanding. While the language and context are quite different, at bottom both argue that understanding, consciousness, semantic content and intentionality can only be predicated of a certain kind of thing: for Searle, a conscious agent; for Heidegger, an agent-in-the-world. Perhaps an examination of both of these in a bit more depth will be helpful.
Searle’s Chinese Room is well known enough – no semantics from syntax is the slogan painted on the walls of the room – but interestingly enough, some years after he initially put forward the argument, he noted a deficiency (perhaps ‘deficiency’ is the wrong word: put more precisely, the argument had shown only that ‘semantics is not intrinsic to syntax’). In The Rediscovery of the Mind, he pushes further:
…notions such as computation, algorithm, and program do not name intrinsic physical features of systems. Computational states are not discovered within physics, they are assigned to physics.
This is a different argument from the Chinese room argument, and I should have made it ten years ago, but I did not. The Chinese room argument showed that semantics is not intrinsic to syntax. I am now making the separate and different point that syntax is not intrinsic to physics. (p. 210)
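Searle’s point – that computational description is observer-relative, assigned rather than discovered in the physics – can be glossed in a few lines of Python. This is only my illustrative sketch, not an example of Searle’s; the idea is that one and the same physical bit pattern counts as different symbols depending on the scheme an interpreter brings to it:

```python
# One physical state of memory: two bytes. Nothing in the physics of
# these bits settles which symbols they 'are' -- that depends entirely
# on the interpretive scheme an agent assigns to them.
bits = bytes([0x48, 0x69])

as_text    = bits.decode("ascii")         # under one assignment: the string 'Hi'
as_integer = int.from_bytes(bits, "big")  # under another: the number 18537
```

The bits do not change; only the assignment does. That, roughly, is what it means to say syntax is not intrinsic to physics.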
Syntax, for Searle, is assigned. It is assigned by an assigner, an agent. Thus even to reach the Chinese room’s question – whether semantics can arise from syntax – an agent is already required to assign the syntax in the first place. The agent is thus the fundamental intentional thing-in-the-world (Searle recognizes three kinds of intentionality, but the important kind, intrinsic intentionality, is what we’re thinking about here). Here is where I want to posit two deeper similarities between Heidegger and Searle: first, while Searle, in good analytic fashion, is a representationalist about intentionality and Heidegger is not, both rely deeply on the unintentional; and second, both reject the idea that intentional states, however they cash out, are isolated states or isolated things. This is made explicit in Searle’s notion of the Background, which is a form of know-how:
Intentional states do not function in isolation…I have to have a set of capacities that enable me to cope with the world. It is this set of capacities, abilities, tendencies, habits, dispositions, taken-for-granted presuppositions and “know-how” generally that I have been calling the “Background”, and the general thesis of the Background that I have been presupposing…is that all of our intentional states, all of our particular beliefs, hopes, fears and so on, only function the way they do…against a Background of know-how that enables me to cope with the world. (Mind, Language and Society, pp. 107-108)
The Background, according to Searle, is pre-intentional and non-intentional, and is essential for intentionality. No Background, no intentionality. The agent, then, can’t simply have intentionality as a kind of brute fact, or in virtue of being a certain kind of substance: it is only through (skilfully) coping with and in the world that the agent can be said to have intentionality. Okrent draws out the theme of coping in Heidegger clearly:
…we should understand the semantics of all intentions in terms of their relations with the semantics of “skillful coping.” So the necessary conditions on the possibility of describing an agent as skillfully coping with her environment while following social norms, whatever those conditions might be, are at the same time the necessary conditions on that agent having any intentions whatsoever. That is, nothing can think unless it is being-in-the-world.
The context for the above quote is Heidegger’s rejection of a representational theory of intentionality, but the point remains nonetheless. There is deep concord here between Searle’s Background and Okrent’s reading of Heidegger: both require the agent to be-in-the-world, coping, in order to have any intentionality. Good Old Fashioned Artificial Intelligence (GOFAI), operating largely on ideas of symbolic manipulation using internal representations, stumbled at precisely this point:
…Rodney Brooks, who had moved from Stanford to MIT, published a paper criticizing the GOFAI robots that used representations of the world and problem solving techniques to plan their movements. He reported that, based on the idea that “the best model of the world is the world itself,” he had “developed a different approach in which a mobile robot uses the world itself as its own representation continually referring to its sensors rather than to an internal world model.” Looking back at the frame problem, he says: “And why could my simulated robot handle it? Because it was using the world as its own model. It never referred to an internal description of the world that would quickly get out of date if anything in the real world moved.”
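Brooks’s point can be caricatured in a few lines of Python. The toy corridor world and the function names are my own invention, not Brooks’s code; the sketch just contrasts a controller that trusts a stored world model (which silently goes stale) with a reactive one that re-reads its sensor at every step:

```python
# Toy contrast between a GOFAI-style model-consulting controller and a
# Brooks-style reactive one. The world is a 1-D corridor: True = obstacle.
# All names here are illustrative, not drawn from Brooks's actual robots.

def model_based_step(position, world_model):
    """Consult a stored snapshot of the world. If the real world has
    changed since the snapshot was taken, this walks straight into the
    obstacle -- the stale-internal-model failure Brooks describes."""
    if world_model[position + 1]:   # snapshot says the next cell is blocked
        return position             # stay put
    return position + 1

def reactive_step(position, sense):
    """'The world is its own best model': re-read the sensor every step
    instead of trusting any stored description."""
    if sense(position + 1):         # live reading says the next cell is blocked
        return position
    return position + 1

# The real corridor changes after the model-based agent takes its snapshot.
real_world = [False, False, False, False]
snapshot   = list(real_world)       # internal model, now frozen in time
real_world[2] = True                # an obstacle appears

pos_model    = model_based_step(1, snapshot)              # trusts stale model, collides
pos_reactive = reactive_step(1, lambda i: real_world[i])  # senses live world, stops
```

The model-based agent advances into the now-occupied cell because nothing in its snapshot registered the change; the reactive agent never had a snapshot to go stale.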
Rita Carter notes a similar problem in Exploring Consciousness:
It has turned out that the ‘simplest’ of human achievements – the ability to move around without knocking the furniture; to recognize the expression on another’s face; to know when to talk and when to shut up – are impossible to program in from scratch. The world in which humans engage, especially their social world, offers so many choices of behaviour that it is impossible to equip a conventional computing machine with enough symbols for it to meet every demand. Nor do essential human qualities such as values, humor and emotion seem translatable into symbolic representations. (p. 178)
In short: as Heidegger, Dreyfus and Searle (among others) have all argued, things such as value, significance or relevance can’t be represented. Certainly all the relevant facts can be known and programmed, but it is the conferring of meaning and significance that is the problem, and these brute facts have no intrinsic meaning of their own. This is nothing less than Searle’s Chinese room argument in a more embodied form. From syntax alone (brute facts), semantics (meaning, value) will never arise. Intentionality can only be had by an agent-in-the-world with a Background. Put differently, it is embodiment that gives meaning. Good Old Fashioned A.I. has for this reason largely given way to more embodied kinds of cognitive science, as Michael Wheeler notes:
Heideggerian cognitive science is … emerging right now, in the laboratories and offices around the world where embodied-embedded thinking is under active investigation and development.
What was implicit in Heidegger, and explicit in Searle – that semantics can’t emerge from syntax – has done a good deal to invigorate the field of situated robotics, as well as to overcome, or at the very least throw into sharp relief, some of the deep issues with representational and dis-embodied theories of cognition (this dis-embodied cognition is, more or less, Strong A.I.). In Strong A.I., Searle detects whiffs of dualism, since for Strong A.I. the actual matter of the brain, the brain’s physicality, doesn’t itself matter. While this isn’t Cartesian dualism – a dualism of substances – it’s a dualism nonetheless, perhaps a dualism in principle, since the claim of dis-embodiedness means that ‘what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain’ (Searle, ‘Minds, Brains and Programs’). As unlikely bedfellows as Heidegger and Searle intuitively appear, and as different as their contexts are, both share a deep commitment to a real, embodied agent-in-the-world, and it is just this commitment that has, on the negative side, shown the limits of GOFAI, and on the positive, opened up new expanses for situated and embodied robotics and AI.