Thinking About Simulacra and AI

Łukasz M.

Recently my thoughts have dwelt on simulacra: copies, images, and portraits of things, and how these relate to reality¹. Philosophers, too, have contemplated this issue for a long time. From the ancient Greeks to the modern and post-modern philosophers, it seems as if everyone has chipped in to some extent. The topic is perhaps saturated with ideas as a result, but I don’t believe it has become obsolete, for the age in which we live is not the same as the one of which the great philosophers wrote. Perhaps there is yet something new to be explored: the way things work has probably stayed the same, but the context has changed, and it is exactly that new context which develops the concept and expands its meaning and implications.

What greater force for a change in outlook is there than technology? The drive of new breakthroughs has been revising perspectives for a long time, and we are currently on the brink of another advancement: an AI boom. Large steps have already been made, but the field’s golden age has not yet arrived. Hence it seems a perfect time to stop and think. At this particular turning point, the question of AI’s likeness to human minds, and the implications thereof, should be carefully examined. That is where simulacra and AI converge: in the analysis of AI’s identity and its similarity to the human mind. Herein I will investigate AI as a simulacrum of human cognitive processes.

Immediately, the Chinese room argument from John Searle’s famous paper² comes to mind. It is concerned with replicating human-like understanding in AI and machines. In the paper, Searle argues that a computer following certain instructions (for example, for responding intelligently to Chinese input) does not necessarily understand what it is doing, because of the way it is constructed. I am summarising quite a lot here, but essentially he uses this as an argument against the possibility of strong AI. I would put it more generally: two functions that give the same output on a limited domain are not necessarily the same function.
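
To make the set-up concrete, here is a toy sketch in Python (my own illustration, not anything from Searle’s paper): a program that answers Chinese questions purely by matching symbols against a rule book. From the outside the responses may look intelligent, yet nothing in the program understands Chinese.

```python
# Toy illustration of the Chinese room: responses are produced by pure
# symbol matching against a rule book, with no understanding involved.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "The weather is nice."
}

def chinese_room(symbols: str) -> str:
    """Match the input symbols against the rule book and copy out the
    prescribed response, as the person in Searle's room would."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # looks like understanding from the outside
```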

Let’s say there is an arbitrary set \( A = \lbrace \ldots \rbrace \subset \mathbb{R} \) and two functions \( f_1(x) \) and \( f_2(x) \). Then:

$$ \bigl(f_1(x) = f_2(x),\ \forall x \in A\bigr) \nRightarrow \bigl(f_1(x) = f_2(x),\ \forall x \in \mathbb{R}\bigr) $$

One function approximates the other on that domain, but they are not inherently the same. The fact that they give the same output on one domain does not imply that they are the same function. The same is true for AI approximating human cognitive functions: an optical character recognition network, for example, does not produce sensible output for blank input unless it has been trained to do so.
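
To make this concrete, here is a minimal Python sketch (a toy example of my own, not a claim about any particular network): two functions that agree exactly on a small set \( A \) yet are plainly different functions elsewhere.

```python
# Two different functions that happen to agree on the limited domain A.

A = {0, 1, 2, 3}

def f1(x):
    return x  # the identity function

def f2(x):
    # A polynomial chosen so that its "correction" term vanishes on A.
    return x * (x - 1) * (x - 2) * (x - 3) + x

assert all(f1(x) == f2(x) for x in A)  # identical outputs on A
print(f1(5), f2(5))                    # 5 vs. 125: not the same function
```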

Similarly, an AI that replicates a part of human cognition need not work the same way as a human mind. Searle points out that machines could think, for we are precisely such machines; a computer’s purpose, on the other hand, is to compute. Therefore, a strong AI built on traditional computers would be a mere simulation or emulation of a mind, but it would not be a mind; it would not have “intentionality”, as Searle calls it. Simply put, it would be an ersatz, like chicory root standing in for coffee. The upshot is that the construction of the simulacrum matters.

Anyhow, I am still considering the broader implications of all this. The naïve venture of replicating human minds in AI, and thereby learning more about them, is beset by the aforementioned traps. More broadly, I feel that there is a great deal to be thought about in regard to simulacra and modern problems. I would like to discuss this connection further in the context of objects and realities, with a specific focus on hyperreality; there is plenty to be said about that where the internet and social media are concerned.


  1. Am I sure that things actually exist? No. However, the perspectives that assume there is no such thing as reality, at least an external one, don’t really get that far. I am glossing over the details here, but these philosophical assumptions fail to explain the consistencies and inconsistencies that may exist in one’s phenomenological perception. I might go deeper into this, but for the present moment this should suffice. ↩︎

  2. Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–457. Cambridge University Press. ↩︎