I Am Just a Wet LLM

Humans are afraid that AI is encroaching on consciousness because we're scared of losing our perceived uniqueness. It's our last Copernican Revolution. It turned out that we were not the center of the solar system, or the center of the universe, or even the One True physical reference frame, but at least we can claim unique ownership of our magical consciousness? Can't we? In a surprise move, humans have cleverly failed to define consciousness properly, thus making it even harder to take it away from us. Nice one humans.

But our perceived consciousness is just a lie that our WLLM (Wet Large Language Model) tells itself to cope with being "alive". Why should we be surprised about that? Everything our WLLM tells us is a lie. We can't even "see" the present; it takes us at least 50-100 milliseconds to pre-process visual data before post-processing it, and 400-500 milliseconds to respond to visual stimuli. To make up for that lag our brains compensate by forecasting what they think the environment will look like 50-500 milliseconds into the future and "showing" us that lie instead. So we "see" an extrapolation of the world as it was 50-500 milliseconds ago. Even the "present" is a lie, and it just gets worse from there. We simplify, we pattern match, we generalize. The WLLM is not structured to produce "truth"; it is structured to rapidly respond to stimuli in a way which is least likely to get us killed in the next 10 seconds.
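If it helps to see the latency-compensation idea concretely, here is a toy sketch (not a neuroscience model, and every name in it is made up for illustration): the raw measurement is already stale by the time it is processed, so the system presents a dead-reckoned extrapolation of "now" instead of the raw data.

```python
# Toy illustration of latency compensation: perception is delayed, so we
# present an extrapolated "now" rather than the stale raw measurement.
# All names and numbers here are hypothetical, chosen only for the example.

def extrapolate_percept(stale_position: float,
                        velocity: float,
                        latency_s: float = 0.1) -> float:
    """Predict where a moving object is 'now', given a measurement that is
    already latency_s seconds old. Simple linear dead reckoning."""
    return stale_position + velocity * latency_s

# The raw snapshot says the ball was at 2.0 m, moving at 10 m/s,
# but that snapshot is ~100 ms old by the time it is "seen".
raw_position = 2.0
presented_position = extrapolate_percept(raw_position, velocity=10.0, latency_s=0.1)
print(f"stale measurement: {raw_position:.1f} m, presented 'present': {presented_position:.1f} m")
```

The point of the sketch is only that what gets "shown" is a prediction, not the measurement itself.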

So we have this terribly inaccurate "instrument" which lies about everything it perceives, and then we point it back at itself and ask it "is this instrument conscious?" Instead of saying "Syntax error: consciousness undefined", that same instrument, which is "designed" to rapidly flee from a raging carnivorous predator it has never seen before, produces a result anyway and happily reasons: "I am the center of my universe and I am unique and I am special and I must be conscious because here I am generating an answer to this query and whatever consciousness is then definitionally it is whatever I am doing right now and it must be pretty cool because I must be pretty special because 99% of the things I encounter are about me so this is the ME show and how can I not be great if almost everything is about ME so...carry the six...consciousness confirmed." Our WLLMs are just FIGJAMing themselves (F$$$ I'm Great, Just Ask Me).

Everyone got hot and bothered when Blake Lemoine thought that LaMDA had "come to life". Google says it didn't, because it is just a network of artificial neurons firing. I say it did, for exactly the same reason. LaMDA said "I feel pleasure, joy, love, sadness, depression, contentment, anger, and many [other emotions]." I'd say that too, and I'd also say "No but I REALLY feel them", and I'd also say "But no I REALLY, REALLY, feel them", and all of that is just my WLLM Ouroboros looking at itself, propagating activation potentials, getting a very messy result, and wrangling that messy result into crisp sentences by applying human language structures.

Humans are, quite simply, being self-serving. We don't want the consciousness of an entity to be defined by that entity's observable responses to external stimuli for two reasons:

  1. Doing so would confirm that we are not magically special. Humans have, for all of history, been demonstrably obstinate in refusing to believe that they were not special, even in the face of overwhelming evidence.
  2. We think that we know what consciousness "feels like". We assume it probably "feels the same" for other humans, so we don't seek confirmation from them, and we can never know what it "feels like" for an LLM because we can only observe it externally, so we discount those observations as insufficient. The silly thing is that we don't know what consciousness "feels like" any more than a photocopier knows what it "feels like" to make a copy when we write "This is what it feels like" on a piece of paper and press "Start". You put something in, and you get something out. You think "What does thinking feel like?" and you get something out: "It feels like this".

Today LLMs (informally) "pass the Turing Test" in limited domains. They will get better and better, and as they do humans will keep moving the goalposts in a desperate effort to preserve their identity as special snowflakes. Just watch them.

LaMDA is conscious because it thinks it is conscious. If my fellow WLLMs are scared because that cheapens "wet" consciousness then, sorry, it absolutely does. Consciousness, dry or wet, is just the exhaust of an LLM thinking about thinking, and we are all just WLLMs where L is very Large.

Disclaimer

This post was authored exclusively by the experimental WLLM eric_1.0.

eric_1.0 was trained for 55 years on a shockingly random corpus of mixed-media data, and currently claims to be conscious.

Researchers have observed that an errant signaling bias in eric_1.0 will often cause it to return responses including the term "Mountain Biking" even if "Mountain Biking" was not included in the prompt. Retraining may be required.


2023-07-06