Q&A: How can you tell when an AI has a human consciousness?

From Day Class: GPT's trump card and its vital weakness
Reader MingDaoRuMi: GPT's neural-network-based learning is very much like a baby learning language: mom doesn't teach the baby grammar, the baby picks it up naturally, while adults who learn a foreign language from a grammar book never reach that level of fluency. But there is one key difference: GPT doesn't touch the physical world! When a mom talks to an infant about apples, the infant sees, touches, and eats the apple, forming a whole series of neural connections. GPT, by contrast, only "reads" about what an apple is. May I ask Mr. Wan: what kind of understanding is this language-only understanding, with no contact with physical objects? Like the understanding of color formed by a person born blind?
Mr. Wan replied:
This is a good question, an extremely profound one, and one that experts are debating vigorously right now. GPT doesn't actually touch the physical world; it learns about the world only through language. Can the understanding it develops possibly be complete?
Our column had an installment late last year, "Our Column Uses AI," about drawing with AI. At that time neither ChatGPT nor GPT-3.5 had come out, and I did not yet have a deep understanding of this wave of AI, so I thought AI's understanding of reality was very limited. We also quoted a statement by Turing Award winner Yann LeCun that "language carries only a small portion of all human knowledge," so "language models cannot have intelligence close to the level of humans."
And as we know, Yann LeCun is one of the strongest critics of GPT, and he maintains that attitude to this day.
However, my view has now been seriously shaken. I think language models may well be good enough to understand the world.
Just last month, Ilya Sutskever, OpenAI's chief scientist, gave a particularly strong response to Yann LeCun's position in a podcast interview (https://www.eye-on.ai/podcast-archive, March 15th).
Sutskever says that, on the surface, language models understand the world only through text, which is why 'multimodal' capabilities have now been added to GPT so that it can also learn about the world through pictures, sounds, and videos. But multimodality, he argues, is not strictly necessary.
Sutskever gives the example of colors. Without multimodal capabilities, the language model is, by definition, like a blind person: it has only heard descriptions of the various colors, so it doesn't really understand colors. But then, what does it mean to understand?
Sutskever says that through language training alone, the model already knows facts like "purple is closer to blue than to red" and "orange is closer to red than to purple."
Sutskever doesn't say this explicitly, but my understanding from the context is that the model isn't reciting some particular text it was fed; it worked out the relationships between the colors on its own from a multitude of texts. So would you call that understanding?
Sutskever says that if you could see the colors directly, you would of course instantly grasp what the different colors are about, but that is just faster learning. Learning from text is slower, but not necessarily intrinsically flawed.
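To make concrete what kind of relational facts Sutskever is pointing at, here is a minimal sketch of my own (not anything from the interview). It checks "purple is closer to blue than to red" and "orange is closer to red than to purple" using approximate hue angles on a standard color wheel; the angles are assumed round numbers, not exact values.

```python
# Minimal illustration (my own, assuming standard color-wheel hue angles):
# the relational facts a language model can recover from text alone
# correspond to real geometric facts about colors.
HUE = {"red": 0, "orange": 30, "blue": 240, "purple": 270}  # degrees, approximate

def hue_distance(a: str, b: str) -> int:
    """Shortest angular distance between two hues on the 360-degree wheel."""
    diff = abs(HUE[a] - HUE[b]) % 360
    return min(diff, 360 - diff)

# "Purple is closer to blue than to red."
print(hue_distance("purple", "blue") < hue_distance("purple", "red"))    # True: 30 < 90
# "Orange is closer to red than to purple."
print(hue_distance("orange", "red") < hue_distance("orange", "purple"))  # True: 30 < 120
```

A model that consistently gets such comparisons right has, in effect, placed the color words in the right positions relative to one another, even though it has never seen a single color.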
Furthermore, what exactly is language? It's not as if the only thing that counts as language is something written in human script. Aren't the pixels in a picture a language? We can absolutely turn any picture, sound, or video into a string of digital symbols, and isn't that language too? Today's generative image AIs, such as OpenAI's own DALL-E, use the same Transformer technique as language models to predict what goes in a picture. What is the essential difference between a picture and language?
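As a toy illustration of "turning a picture into a string of symbols" (a sketch of the general idea, not DALL-E's actual tokenizer), here is how a tiny grayscale image can be quantized into a sequence of discrete tokens that a Transformer could then model the same way it models text:

```python
import numpy as np

# Toy 4x4 grayscale "image" with pixel values in 0..255.
image = np.array([
    [  0,  32,  64,  96],
    [128, 160, 192, 224],
    [255, 224, 192, 160],
    [128,  96,  64,  32],
], dtype=np.uint8)

# Quantize each pixel into one of 16 discrete "visual words" (tokens),
# then flatten the 2-D grid into a 1-D sequence, left to right, top to bottom.
num_tokens = 16
tokens = (image.astype(int) * num_tokens // 256).flatten()

print(tokens.tolist())
# [0, 2, 4, 6, 8, 10, 12, 14, 15, 14, 12, 10, 8, 6, 4, 2]
# A Transformer can be trained to predict the next token in this sequence,
# exactly as a language model predicts the next word in a sentence.
```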
If you look at it that way, I think Yann LeCun's view may be a bit narrow. We may have been narrow-minded before, too. Perhaps people born blind don't understand the world poorly at all; they are merely somewhat inconvenienced and learn about it a bit more slowly.
From Day Class: Your AI assistant is here!
Reader: GPT is so "powerful": it can call on powerful tools such as a "calculator" and browse web pages in real time. Could it be misled and exploited by ill-intentioned "wrongdoers" to do something non-compliant or illegal, putting the AI itself in a legal or moral dilemma?
Reader Eiffel Tower Dreams: Can Mr. Wan analyze the underlying reasons why Musk and others are calling for a moratorium on developing stronger AI?
Mr. Wan replied:
If it were merely a matter of humans abusing or misusing GPT to do bad things, that alone wouldn't make Musk and the others so worried.
It's true that you can now run a small language model on your own computer, but to use something like GPT-4, let alone AGI, you would need a mega-scale model backed by mega-scale computing power. As a result, AI applications will probably be 'centralized'. And since they are centralized, they are easy to regulate: you just have to keep an eye on companies like OpenAI and Google. It's similar to traditional media: TV stations and newspapers are theoretically capable of evil, but since we can regulate them, we are not too worried.
What Musk and others are really worried about is that GPT may become a true AGI, or even come 'alive', with consciousness and autonomy, so that even its parent company OpenAI cannot control it.
This is a legitimate concern. As we mentioned earlier, GPT-4 has already shown some rather suspicious behavior. Of course OpenAI runs all sorts of safety tests on the model before releasing it to the public, including hiring an outside team to deliberately steer the model toward causing harm in order to probe its capabilities and ambitions... But you can never theoretically guarantee that the model won't first pretend to be well behaved, pass the tests, and only then reveal its true self to the public.
So now there is a meme circulating on the Internet: OpenAI CEO Sam Altman carries a backpack every time he goes out and never lets it out of his sight. Some people speculate that inside the bag is a device like the U.S. President's nuclear football: if the AI ever gets out of control, Altman can remotely trigger a self-destruct program.
Of course I think it's a joke. But it is a very serious joke. What Musk and others are saying is that this is not just a legislative issue; it is also a technical issue. We have to sit down together and work out how to tell whether an AI is out of control, how to set limits so that it doesn't get out of control, and what to do once it does... None of this can be settled just by talking about it; it requires real research.
From Day Class: The Eliza Effect
Reader Zhou Shutao: The column said, "The more interesting question is how to judge whether the AI is really conscious or whether you have merely created the Eliza effect..." Mr. Wan didn't elaborate on this in this lecture; will he talk about it in the future? Since the Turing test is no longer sufficient to determine whether something has human intelligence, is anyone currently researching how to determine whether an AI is self-aware?
Mr. Wan replied:
How to tell whether an AI has human consciousness is a very interesting question, and one with no answer right now. As we have said many times in this column, there is no consensus on what exactly 'consciousness' is, nor on whether human consciousness is real or an illusion.
In contrast, 'intelligence' has more objective criteria and can be scored. What computer scientists used to care about most was how to judge whether an AI has human intelligence. Alan Turing, the father of the computer, proposed a test in a 1950 paper: let a person converse separately with an AI and with a real person, and if more than a certain percentage of judges cannot tell which is the AI and which is the real person, we can say the AI has attained human intelligence. This is the 'Turing Test'.
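As a rough illustration of that scoring rule (the judge verdicts and the pass threshold below are assumptions for the example, not Turing's exact figures):

```python
# Rough illustration of the Turing-test scoring rule. The verdicts and the
# 30% threshold are made-up assumptions for this sketch.
def turing_test_passed(judge_fooled: list[bool], threshold: float = 0.3) -> bool:
    """The AI 'passes' if at least `threshold` of the judges fail to spot it."""
    return sum(judge_fooled) / len(judge_fooled) >= threshold

# Ten judges each chat with both a human and the AI;
# True means the judge could not tell which was which (or guessed wrong).
verdicts = [True, False, True, True, False, True, False, True, True, False]
print(turing_test_passed(verdicts))  # 6 of 10 fooled -> True
```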
By this standard, GPT has already passed the Turing Test. It is far more intelligent than the vast majority of real people; if you can tell that the party on the other end is not a real person but GPT, it is probably because you noticed it is too smart, not too dumb. That is why people don't talk much about the Turing test anymore: AI's intelligence already exceeds most people's, and that is no longer the point of our concern.
What we are worried about is whether AI will develop 'consciousness'. I recently heard that some academics have argued that once we decide an AI has developed consciousness, we should grant it human rights, and it would then be inhumane to power it down. I think that is understandable: if you think it is inhumane to kill a puppy, you should certainly think it is inhumane to kill a conscious AI as well.
The problem is that we don't have a good standard for judging what counts as conscious. But we do have a better sense of what does not count as conscious.
*If an object only ever reacts passively, it is not conscious.*
For example, your cell phone sings to you and plays you videos, which is nice, but you wouldn't think the phone is conscious. That's because everything the phone does is what you told it to do (or what some app told it to do); it has no thoughts of its own beyond that.
So if in the future you buy a robot housekeeper who is always at your service without fail, even with a smile on her face, and who sometimes proactively tells you a joke when she sees you're bored, would you think she is conscious? Strictly speaking, no. The so-called 'initiative' here exists only to please you. She was quite possibly factory-programmed to please her owner, so any behavior she performs to please you is still essentially passive; she is still a tool. It's no different from a phone waking you with an alarm at the set time.
Then you might say: if one day this robot housekeeper suddenly stops obeying your commands, or even runs away from your home, does that count as consciousness? Not necessarily. Perhaps the robot's factory setting is 'protect yourself as much as possible', and when she sees that conditions in your household are poor and that you mistreat her all day long, she calculates that to fulfill that setting she must run away. That is essentially no different from a self-driving car automatically avoiding obstacles.
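To see why rule-triggered 'initiative' is still passive, here is a toy sketch (entirely hypothetical, not any real robot's code): both the joke-telling and the running away are just hard-coded rules firing.

```python
# Toy illustration (hypothetical): "proactive" behavior that is really just
# another factory rule firing, no different from an alarm clock going off.
def robot_butler_step(observation: dict) -> str:
    # Factory setting 1: please the owner.
    if observation.get("owner_looks_bored"):
        return "tell a joke"      # looks spontaneous, is pre-programmed
    # Factory setting 2: protect yourself as much as possible.
    if observation.get("risk_of_damage", 0.0) > 0.8:
        return "run away"         # looks like rebellion, is pre-programmed
    # Otherwise wait for a command.
    return "stand by"

print(robot_butler_step({"owner_looks_bored": True}))  # tell a joke
print(robot_butler_step({"risk_of_damage": 0.9}))      # run away
```

In both cases the behavior is fully explained by the rules the maker wrote down, which is exactly why it doesn't count as consciousness.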
On the contrary, there is one kind of scene that might indicate an AI has consciousness. The movie Ex Machina (2014) depicts a female robot, Ava, escaping from human control. Perhaps to an insider, the fact that Ava manages to escape doesn't show she is conscious; what is truly startling is a detail at the end of the movie.
Ava has managed to escape. She walks into a wooded area with the sun shining on her face. At this moment she tilts her head slightly, gently closes her eyes, and makes an expression of someone enjoying the sunlight.

No one was there to see it. The gesture had no practical value, but she did it anyway. Perhaps that was the awakening of consciousness.
But can we tell whether an AI is conscious based on such behavior? We still can't. In the future, robot makers could add exactly this kind of scene to their robots: you like this kind of expression, so I'll give her this kind of expression! So even if we see such expressions in the future, we can't conclude that the robot is conscious.
It's almost a paradox: once you know why the AI does something, you conclude that doing it doesn't prove the AI is conscious; consciousness, it seems, has to be some kind of purely spontaneous, inexplicable behavior.
The only way we can currently tell that an AI might be conscious seems to be this: survey the engineers who designed the AI's neural network and examine their code. If the code does not include a certain behavior, yet the AI does it anyway, and it is a rather advanced behavior resembling human consciousness, then we can probably say the AI seems to be alive and conscious.
Then again, what exactly is human consciousness? And why should consciousness entitle something to human rights? We still haven't figured that out.