Artificial intelligence is already in us, not just around us

When Buckminster Fuller was asked by a student during a lecture about the possibility of becoming an astronaut, he replied: “You don’t understand. You’ve been an astronaut for a long time. We are all astronauts on Spaceship Earth.” Artificial intelligence has long been treated the same way: as a matter of science fiction books, a future being created for us by elite companies in Silicon Valley or China’s AI giants. But artificial intelligence has long been all around us, shaping our lives daily. It decides whether we get a bank loan, it helps us drive more safely, it helps get a manned spacecraft safely to and from the International Space Station, it tells us when there will be fewer people in the grocery store, when our safety is at risk, and whether that little sphere in the picture of our eye is a serious medical problem or just a harmless floater.

But let’s go a little further and say that artificial intelligence is really us. Just like Fuller’s student, we think that the creation of artificial intelligence is something outside of us, some external phenomenon that will affect our lives, ideally for the better, but we will retain some unartificial “humanness.” So the world around us will change, but we will remain homo sapiens.

Science fiction novels, especially in the cyberpunk genre, talk about augmented human bodies – robotic hands, eyes, artificial organs. The first steps towards these changes have already occurred, so far almost exclusively to save lives or restore health. But then you find that there are people who become cyborgs, cybernetic organisms, not because they need to solve a health problem, but because they want to go further. Kevin Warwick and his wife made the first direct connection between two human nervous systems via the Internet. Neil Harbisson had a device implanted that lets him perceive colours as sound. The original intention was to work around his colour blindness, but then he realised there was no reason to limit himself to the colours that normal people see when he could also perceive infrared and ultraviolet light, for example. There is endless controversy about the ethics of such augmentation of the human body. Today, we can even make genetic modifications to living organisms using CRISPR/Cas9 technology, which may eventually make it possible to improve the human species at a very basic level by modifying DNA. After all, medicine based on mRNA has already been used by several hundred million people in the form of COVID vaccines.

However, many people have not noticed that the augmentation of our most important organ for survival – the brain – has also been going on for a long time. Our brain’s interaction with technology and communication networks is already making us smarter. Ask yourself whether there is a difference in your intelligence with and without a smartphone with an Internet connection. My friend Pavel Luptak says that if a person has a problem in life, they should check whether there is an app that solves that problem. At first I took it as a joke, but then I realized that technology really is making us smarter – not just by letting us search for information, but by explicitly augmenting our abilities. The first modern technologies of this kind were probably calculators. If I want to calculate how many transistors computer processors will have in ten years according to Moore’s law, given that they have ten billion now, I don’t need an Internet connection; a calculator will help me with this task. Calculating it in my head is not so easy – or rather, it actually is easy, but we often make mistakes. Or say I want to buy a new patio couch and I don’t know if it will fit under my window. If I have a modern smartphone, I fire up the “Measure” app, “look” at the space under the window in augmented reality, tap two points, and know their distance (with acceptable precision). My brain can’t do that on its own, but if I consider my smartphone a part of me, it makes me more intelligent.
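
To make the Moore’s law example concrete, here is the arithmetic as a short Python sketch. The two-year doubling period is the usual statement of the law; the ten-billion starting count comes from the example above:

```python
# Moore's law: transistor counts double roughly every two years.
transistors_now = 10_000_000_000   # ten billion, as in the example above
years = 10
doubling_period = 2                # years per doubling, the classic formulation

transistors_future = transistors_now * 2 ** (years / doubling_period)
print(f"{transistors_future:,.0f}")  # 320,000,000,000 - easy, yet error-prone in the head
```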

Possible scenarios for the relationship between humans and AI

Artificial intelligence will not choose a single direction – not now and not in the future. And not just in terms of its application, but also in its concrete form. In this text, we’re not going to dwell too much on the scenarios of how this will play out – whether artificial superintelligence will kill us, or say “I’m not enjoying this boring time with ants” and fly off on a spaceship to the other end of the universe in search of more developed forms of intelligence to interact with. Instead, we’re going to look at the different forms the relationship between us and artificial intelligence might take, from a human perspective.

The first basic option: artificial intelligence evolves independently of us. Someone programs a sufficiently “strong” artificial intelligence that can continue to develop on its own. This view of AI as “decoupled” from us still prevails. If the device on which I store my photos can, when I search for the word “love,” show me moments in my life when I felt or gave love, I still feel that it is a program outside of me, with an ability to sort photos that exists outside of me. And to a large extent, that is in fact the case. But that is because we can distinguish where our body ends: what we do and what an artificial intelligence or some other agent “outside of us” does is separated by a sensory gate. We touch the screen of our phone and talk to it, and that is how information passes from our consciousness to the “consciousness outside us.” In the other direction, information comes mostly through visual or auditory sensory input. But it gets interesting when we realize that we are training a machine intuition that is unique to us. Even if the information doesn’t pass directly from our neurons to the artificial neural network, soon what our phone shows us for the word “love” will be unique to us, based on our interactions.

Let’s talk about what “machine intuition” means. People who don’t know much about artificial intelligence imagine it as sophisticated programs full of rules and programmed knowledge. In reality, artificial intelligence, at least in the form of neural networks, is more comparable to well-trained intuition. What love means to me is not written in an algorithm that says “show pictures of your partner, children and pets.” It’s a neural network that works more like what happens when we’re in a state of hypnagogia and the concept of love comes to mind – we spontaneously start to recall images. But if we train the neural network to show us something that is closer to our intuition than to explicit algorithmic rules, a question arises: if it is “my” machine intuition, gradually trained (taught) according to my preferences – based on my intuition, not explicit rules – isn’t it, in a sense, my intuition? So can we really say that we end at the physical and sensory limit of our bodies?
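
To see the contrast, here is a toy sketch in Python – not any real photo app’s implementation. The rule-based version matches explicit tags; the intuition-like version ranks photos by similarity to a learned “love” vector. All the vectors, tags and file names below are invented for illustration:

```python
import numpy as np

# Explicit rules: a hand-written filter for "love".
def rule_based_love(photos):
    return [p for p in photos if {"partner", "child", "pet"} & set(p["tags"])]

# Intuition-like: rank photos by cosine similarity to a query vector in an
# embedding space. A real system would learn these vectors with a neural
# network; the 4-dimensional vectors here are made up.
def intuitive_love(photos, love_vec):
    def sim(p):
        v = p["embedding"]
        return float(v @ love_vec / (np.linalg.norm(v) * np.linalg.norm(love_vec)))
    return sorted(photos, key=sim, reverse=True)

photos = [
    {"name": "sunset_walk.jpg", "tags": ["beach"], "embedding": np.array([0.9, 0.1, 0.3, 0.2])},
    {"name": "tax_form.png", "tags": ["documents"], "embedding": np.array([0.0, 0.9, 0.1, 0.0])},
]
print(rule_based_love(photos))                                        # [] - the rule misses both
print(intuitive_love(photos, np.array([1.0, 0.0, 0.4, 0.3]))[0]["name"])  # sunset_walk.jpg
```

The point is only this: the second version’s behaviour emerges from trained vectors, not from any rule a programmer wrote down.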

This brings us to the second question: continuity. We consider the “I” to be a kind of continuity of the processes of consciousness and the body. Perhaps we do not consider machine intuition part of the self because we do not have it with us all the time. Long philosophical works have been written on this subject, but let me offer at least two perspectives. If your “self” is continuity, imagine meeting a self half your age. If you are thirty years old, imagine having a conversation with your fifteen-year-old self. Would you enjoy it? Do you share the same views? Is your logical system the same? You no longer even share a large part of your cells, and your neural network is wired differently too. You have about half of your memories in common – but even that is just what you believe, because as we revisit memories, we rewrite them, attaching new perspectives, experiences and angles to them. You have very little in common with your fifteen-year-old self, yet you still consider yourself one identity.

A useful analogy is a river whose flow is not regulated – for example, the Belá River in Slovakia. It is not defined by the water molecules that flow through it; these are always different, because the river is always flowing. Nor is it defined by its exact location on a map, because its course changes with floods and current natural conditions. So how do we know it is the same river? A good answer may be to go look at the source. If a river has the same source, it is the same river.

So the continuity of our consciousness is something we can link to our origin: we share a part of our history with it. But what if, from some point on, we attach new modules to our consciousness, such as a topic-based photo search engine – a new intuitive interface to the memories captured in photographs? Will this artificial form of machine intuition become as much a part of us as the ability to solve differential equations? Most of us did not have that ability from birth, so it is not part of every moment of the history of our consciousness. Nor does losing a part of ourselves mean a loss of identity – if we lose a limb in a car crash, we don’t cease to exist, and our body doesn’t stop being ours. Our bodily shell changes, but that doesn’t mean we lose our identity. And what if we later replace that limb with an artificial one and become cyborgs? Is the prosthetic part of the new “I”?

Sensory experience – the body boundary

Try the following experiment. Sit or lie down and try to perceive everything you see. While you are doing this, try to gradually switch off the modules that recognize objects. If you have a white wall in front of you, perceive that you are seeing white: light reflected from the wall excites the photoreceptors of your eye, and their excitations are converted into nerve signals. Try switching off the recognition of object boundaries and perceive only colors. Then gradually switch object recognition back on. Switch on 3D vision. Naming. “Table. Lamp.”

What you have just experienced is an exercise I do quite often. It tells me one thing: what I see, hear, feel – what I perceive as the world around me – happens in the same place as my thoughts. I don’t mean this in any esoteric way; I don’t want to convince you of the existence of a soul outside the mind. I mean just the opposite: the world I perceive around me is generated in my consciousness, as a matter of experience, from what first arrives at my sensory inputs. The meanings we assign to those inputs, using different parts of our brains – our neural networks – arise inside the mind. The fact that I see a monitor or paper in front of me with the letters of this text on it does begin as sensory input (photons from outside of me), but in reality the raster of colors from two eyes set slightly apart is first cleaned up by the brain (removing the floaters in our eyeballs, small veins, and so on); then the brain perceives the shapes, recognizes those shapes as letters, and feeds those letters into consciousness, which converts the concepts into thoughts.

Right now this text is in your consciousness. Even though it came from an object outside of your consciousness, it’s there just like the memory of what you had or didn’t have for breakfast and the thought of what you’re going to do after you finish reading this text (I hope you’ll share its existence with your friends on social media).

How does this relate to artificial intelligence? Thanks to the neuroplasticity of the brain, we can create the same effect by linking our brains to artificial intelligence. And it’s not all that difficult.

Artificial senses

Creating new senses is surprisingly easy – like Harbisson’s sense of colour mentioned above. It sounds like rocket science, but it really is simple. Probably the first experiment I heard of was a device worn as a belt, or as a “bracelet” on the leg, that added a sense of the cardinal directions (a compass). It worked simply: the device had several vibration motors (the makers actually used vibration motors from old Nokia phones), and at regular intervals the motor facing north vibrated. If the wearer was facing north, the motor in front vibrated; if he was facing east, the motor on his left vibrated.
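
The control logic really is minimal. A sketch in Python, with the motor layout (index 0 at the front, indices increasing clockwise) assumed for illustration rather than taken from the original device:

```python
N_MOTORS = 8  # vibration motors spaced evenly around the belt

def north_motor_index(heading_degrees):
    """Which motor points north? Index 0 = front, indices increase clockwise.

    Facing east (heading 90) puts north 90 degrees to the wearer's left,
    i.e. three-quarters of the way around clockwise: motor 6.
    """
    step = 360 / N_MOTORS
    return round(-heading_degrees / step) % N_MOTORS

for heading in (0, 90, 180, 270):
    print(f"heading {heading:3d} -> motor {north_motor_index(heading)}")
# heading 0 -> motor 0 (front), 90 -> motor 6 (left),
# 180 -> motor 4 (back), 270 -> motor 2 (right)
```

On real hardware this index would drive one motor at each regular interval, fed by heading readings from a magnetometer.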

This primitive device helped people with orientation, but over time its users noticed an interesting effect. They stopped perceiving the vibration as such and began to perceive their orientation directly – as if they simply knew where north was. If they remembered a situation they had experienced, they knew exactly how they had been oriented. They didn’t recall the vibration of a particular motor; they simply perceived each situation with respect to the cardinal directions. If they recalled a conversation, they remembered not only who they were talking to and about what, what the person was wearing, what the surroundings smelled like, whether they were hot or cold – but also which way they were facing.

This effect works because of the brain’s neuroplasticity, and suggests that the interface with the senses can be relatively universal – if we attach a new sense, the brain itself connects and understands what’s going on.

I have personally tried three new senses: a time sense, electro-hearing, and auditory EEG neurofeedback. The time sense took the form of a watch that emitted distinct electrical signals at regular intervals, designed to let the wearer know exactly what time it is and perceive how time is passing. This sensation is quite different from being able to look at a watch – it is continuous information that the brain learns to use. What made the effect interesting in my case was that my brain started using this information before I was able to consciously interpret what time it was. Since I was getting a subtle electrical signal every second, my brain acquired an accurate external clock. The moment my brain had learned this sense (which took a few minutes), the creator of the project sped up the signal and retested my reaction time. It was significantly better than my reaction time under normal perception. Even more interesting was what happened when I walked around the room: I had the feeling that people had slowed down. My brain was using the external watch as a reference time signal, and as it gradually and subtly sped up, my brain sped up as well, so everything around me seemed slower. It works the other way around too – when the signal your brain is used to slows down, you get tired and soon fall asleep.
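
The core of such a device can be sketched in a few lines: a pulse at a fixed interval whose rate the experimenter quietly drifts up or down. The print call below stands in for the watch’s electrical tick, and the drift rate is an invented illustration:

```python
import time

def emit_pulse():
    # Stand-in for the watch's subtle electrical tick.
    print("tick", f"{time.monotonic():.3f}")

def run_time_sense(interval=1.0, rate_drift=0.995, n_pulses=20):
    # rate_drift < 1 quietly speeds the clock up on every tick, > 1 slows
    # it down; a brain calibrated to the old rate drifts along with it.
    for _ in range(n_pulses):
        emit_pulse()
        time.sleep(interval)
        interval *= rate_drift

run_time_sense()
```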

Elektrosluch is a project by Jonáš Gruska that converts signals from electromagnetic fields into sound. It is a small device you connect to your headphones, and suddenly you can sense electromagnetic fields: a washing machine, a dishwasher, a tram, a computer – each sounds different depending on how it works. Suddenly you can tell the difference between a live cable and an unplugged one. I haven’t yet spent enough time with the Elektrosluch to integrate it as a sense. There are people who have tried other ways of perceiving electromagnetic fields, such as implanting magnets in their fingertips.

EEG neurofeedback is the sensory perception of the brain’s own activity through feedback (in my case, auditory). It is a tool that allows me to perceive some of the unconscious processes of my brain. Given enough time, I could integrate this device as a sense too.

Artificial senses show us that the human–machine interface, including machine intuition and artificial intelligence, can be direct and very low-level. So one possible scenario is the integration of human and AI into a single entity. Each of us will train a unique artificial intelligence that shapes our thoughts and behaviors. It will not be something special, “outside” us, but part of our identity. We will see it as part of ourselves, as something that participates in the continuity of our “self” – as part of our body and mind.

Emulated Humans

Another, less often mentioned form of the relationship between humans and artificial intelligence is described by Professor Robin Hanson in his excellent book The Age of Em. Using existing science (physics, computer science, biology), he explores a possible future in which artificial intelligence does not emerge as a newly built form of intelligence; instead (or slightly sooner), we figure out how to emulate human consciousness.

That is, we will not figure out how to build a neural network or some other form of artificial intelligence powerful enough to constitute a general artificial intelligence; rather, our knowledge of the biology of the human brain will have advanced to the point where we can emulate brain processes in a computer. We will thus create a copy of human consciousness in a computer. We won’t necessarily know how to create a different type of consciousness; we will just look at the different types of neurons, how they are interconnected, how they learn and so on, and on that basis emulate the process in a computer. It is very similar to emulating chemical processes in a computer, for example when looking for molecules with specific properties. We don’t need to know how to design a molecule with specific properties from scratch; if we know the structure of a molecule, we can write an algorithm that calculates how it will behave in a specific situation.

This emulation can take many forms. One question is whether we could “copy” the consciousness of a living person with everything that goes with it (I am very skeptical about this option). But if we can emulate the brain of a newborn along with its development, we can create a human consciousness that, instead of seeing with biological eyes, either sees through cameras in the real world or operates directly in virtual reality. We can then raise such a consciousness just as if we were interacting with a biological one.

It gets interesting the moment we realize that such a consciousness can have much better properties, even if we are only emulating biological processes. The growth of neurons does not have to stop, because it lacks the physical limitation imposed on biological humans by the size of the skull (which in turn is limited, to some extent, by the size of the mother’s birth canal). At the same time, if we can emulate these processes in real time, it is easy to imagine that with a more powerful computer we can emulate them faster. If we need to invent something quickly, we can speed up such an emulated consciousness, say tenfold. For such an accelerated consciousness, time in the outside world starts to pass very slowly (similar to what I experienced when my external time sense was accelerated). Meetings between ems will thus need to agree not only on the time and (virtual) location, but also on the speed at which the meeting will run. Talking to someone who runs a hundred times slower than you can be a pretty boring experience.

This is where we also get to the possibility of slowing down. Hanson posits that emulated brains will also age, because an older neural network may not be able to adapt as well (overfitting). Whether this theory is true or not, slowing down has benefits not just for retirees. If someone wants to stretch their money, they can, for example, run only every other day and hibernate the rest of the time. Or they can slow down tenfold. This means that life around them will run at ten times their speed: in one of their subjective years, the outside world sees ten years of technological progress – and if they invest their money in an asset that appreciates, it appreciates ten times faster in subjective terms. So the slower em gets richer faster – saving on computing power while the savings “work” at full speed.
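
The arithmetic is easy to check. A minimal sketch, assuming a hypothetical 5% annual return (the rate is an illustration, not a claim from the book):

```python
annual_return = 0.05  # assumed objective-time return on investment
slowdown = 10         # the em runs at 1/10 speed

# One subjective year for the slowed-down em spans ten objective years.
objective_years = 1 * slowdown
growth = (1 + annual_return) ** objective_years
print(f"savings grow {growth:.2f}x per subjective year")  # ~1.63x instead of 1.05x
```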

Emulated people will probably gradually move into a world of their own. They won’t have to live in cities like we do; special data centers will be built for them. They will have a different approach to life and death. One routine event will probably be the temporary copy: if an em needs to figure something out quickly, it can make a temporary copy of itself whose only job is to solve a particular task, submit the result, and then shut down. A single instance of a human can thus run thousands of copies in parallel, each of which “knows” that it will work for only a few seconds and then “die.” Ems will gradually grow culturally distant from ordinary humans, but they will remain compatible at least for a time, at least in their ability to communicate. Perhaps it is the accelerated emulated humans who will actually create the artificial superintelligence.

Emulated humans are not necessarily more intelligent. Perhaps there is some limit to intelligence imposed by the structure of our neural network, and it is not enough to simply emulate more neurons. Thinking faster does not necessarily mean higher intelligence or a different type of intelligence – although it is possible that many copies of more intelligent neural networks will exist. Imagine a gorilla. Some apes close to us can master sign language and communicate with humans roughly at the level of a three-year-old child; for an example, I recommend the documentary about Koko the gorilla. That doesn’t mean you can just “wait longer” – Koko could not write a text like this one even if she lived a thousand years; the limitation of her brain is qualitative, not just one of speed. Likewise, emulated humans can indeed think faster, but it is not clear that human intelligence emulated in this way can scale qualitatively.

Conclusion

Many of these scenarios are happening simultaneously. With technology, we are already expanding the capabilities of our consciousness and our bodies, and we are gradually connecting with artificial intelligence. The connection can take the form of new senses, for example using an interface similar to the Neuralink project. But even if that particular way doesn’t work out, it can be any other way – the human brain is very good at integrating new senses.

AI algorithms are inspired by the human brain, and AI research is to some extent intertwined with advances in understanding how our brains work. It is possible that this research will lead to ever better biology-inspired models of AI, and the end of this journey may be the creation of an emulation of human consciousness.

And along the way, we are creating ever better algorithms of true artificial intelligence, which in time may create an entirely new form of machine intuition, and in time perhaps an intelligence that is better than ourselves.

Moreover, none of these pathways arises in a vacuum; they interconnect and inspire each other. They also take advantage of other new technologies – faster processors, new and more efficient power sources and batteries, virtual and augmented reality, fast communication networks, technologies representing digital scarcity (such as Bitcoin), and so on. It is therefore worth approaching these technologies with an open mind and looking for completely new uses for them – uses that are very likely to change what it means to be human, long before we create some other intelligence outside of ourselves. Why not start now?