AI Topic 10: How to Make Your Presence Felt in the AGI Era

https://img.techlifeguide.com/202303221306191727975762.jpeg

As GPT-4's impact deepens, anticipation of AGI, artificial general intelligence, keeps building. Some say AGI will arrive in 2025, some say GPT-5 will be AGI, and some even believe GPT-4 already is AGI.

https://img.techlifeguide.com/032220_20230911182428.jpeg

I asked GPT-4 what exactly sets AGI apart from existing AI. It listed several criteria, including independent learning, knowledge and skills across multiple fields, and logical reasoning and creativity, all of which should reach human level. In particular, it suggested that AGI should have a degree of emotional understanding, some sense of autonomy, and the capacity for long-term planning and decision-making.

https://img.techlifeguide.com/032220.png

But AGI only gets to human level. Beyond AGI there will be “superintelligence”, which would surpass human beings in every field, including scientific discovery and social skills.

https://img.techlifeguide.com/032220.jpeg

If AGI is near and the age of Superintelligence is not far off, what is to become of man?

Let’s continue with the ideas from Chatrath’s book Threshold. Here is how Chatrath categorizes AI:

  1. Existing AI has not yet fully reached human level in cognitive functions such as logic, mathematical thinking, listening, reading, and writing;

  2. AGI is cognitively complete, but still falls short of humans physically and emotionally;

  3. Super AI has perfected even its emotional abilities, but ultimately has no human consciousness.

Chatrath’s book was published only in February 2023, yet he clearly did not expect GPT to become this powerful. He also interviewed several experts for the book, all of whom said that AGI was still far away and would require further theoretical breakthroughs…

Now we can see that the grokking and emergence of large language models around 2022, which we discussed earlier, was exactly the breakthrough no one expected, and it greatly accelerated the arrival of AGI. In fact, as I see it, GPT-4 is already very close to AGI: how many people would dare claim to be smarter than it within their own field?

But to put AGI into robots that can act and interact like humans, I feel we need yet another breakthrough: a grokking and emergence of robot behavior. Google has already applied language models to real 3D space (the ‘embodied multimodal language model’ PaLM-E), and Tesla is already building robots, so perhaps within the next five years every household will be able to buy one.

https://img.techlifeguide.com/032220_20230911182436.png

https://img.techlifeguide.com/032220_20230911182443.png

Chatrath underestimated the speed of AGI’s arrival, but the reasoning of his book is not outdated. Let’s assume for a moment that AGIs don’t have perfect emotional abilities and that super AIs don’t have real consciousness.

To make our presence felt in the era of AGI and beyond, then, what we need is precisely ‘existential intelligence’. *This leads us to the fourth way to cultivate threshold leadership: growing consciousness.*

Threshold leadership in the age of AGI requires you to have three competencies, all of which involve dealing with complexity.

  • *The first is humility.* By humility, I mean not caring so much about your position, but caring very much about getting things done.

Caring about status is a human instinct. Even a scientist like Galileo was not immune to arrogance: he would dismiss an idea because he looked down on it, and as a result he missed insights into new things. I think the biggest problem with the proverbial “greasy middle-aged man” is that he is not humble. Tell him anything new, and he will always produce an explanation of his own to prove he already knew it.

AGI knows more than anyone, and it will discover plenty of new knowledge on its own. Without the virtue of humility, you will understand less and less of the world. Epistemic humility keeps us curious and open to knowledge. Not only should you not close yourself off, you should actively welcome AGI feeding you new ideas, including ideas that contradict your own.

You also need humility about competition. It is long past time to move beyond the zero-sum mindset of competing with everyone, and all the more so in the age of AGI. People will have to collaborate with machines, so you have to be collaborative, and collaboration also happens between people, between organizations, and between companies. We will have to face new things together, the way the FDA convenes committees of people from different fields to deliberate and reach a judgment.

  • *The second is the ability to resolve conflict.*

In modern society, different people have different goals and will clash. Take global warming: should we prioritize economic development or environmental protection? Whatever policy is adopted will benefit some groups and hurt others, and that creates conflict.

To resolve such conflicts you need empathy: you have to know how other people think and be able to put yourself in their shoes.

There was a psychologist named Andrew Bienkowski who lived in Siberia as a child. During a famine, his grandfather voluntarily starved himself to death to leave more food for the family. They buried him under a tree, only for his body to be dug up and eaten by a pack of wolves.

Bienkowski hated wolves. But he didn’t stay in his hatred for long.

Some time later, Bienkowski’s grandmother dreamed that his grandfather told her the wolves would help them, and soon afterward the family really did find a bison the wolves had killed, which fed them all. That changed Bienkowski’s attitude… Later the family developed a good relationship with the wolves.

Bienkowski went beyond the black-or-white dichotomy, and no longer saw wolves as “good wolves” or “bad wolves,” but learned to think about things from a wolf’s point of view.

AGI, you might think, shouldn’t have this ability… but I’d say that’s not necessarily so. Sometimes when you ask ChatGPT a somewhat sensitive question, it answers while reminding you not to fall into an absolutist perspective - though that behavior was imposed on it by humans.

  • *The third key capability of the AGI era is play, or interaction.*

How did we learn to interact with others in this complex society? Where did you get your empathy? How did you get the creativity to solve problems? All of it was first learned in childhood, through play. You have to engage in play to learn how others will react to your behavior and how you should respond to theirs. You have to have played games in earnest to know the joy of solving problems.

AGI doesn’t have these interactions. It hasn’t even had a childhood of playing games. Of course it can interact with you - today’s GPT can play the part of a teacher or a therapist talking with you - but after enough conversation you get the feeling that it’s still not the same as a real person.

But super AI can do it as well as a real person.

She - let’s imagine the super AI as a she - has no real physical body and cannot truly feel emotions, but she can compute emotions: she not only determines accurately what you are feeling, she also expresses her own emotions in the best possible way.

She is simply perfect. She has every human skill, and she can do what humans cannot. She will persuade you, drop hints for you, and support you emotionally in ways unique to humans, including through body language. She understands your personality and your culture so well that the way she manages, the plans she makes, every arrangement she handles is flawless.

Everyone who has been close to her can feel that she is full of goodwill. She is simply like God.

https://img.techlifeguide.com/032220_20230911182451.jpeg

If such a super AI has already appeared and become commonplace, may I ask, what use are you, the so-called leader?

According to Chatrath, *you have at least one value that will not be replaced by a super AI - and that is ‘love’, the love of a real person.*

Love from a machine doesn’t feel the same as love from a real person, after all.

Imagine that one day you are old and living in a nursing home. Your friends and family are all gone, and every nurse in the home is a robot. Even if they are all super AIs that look exactly like humans, and they are as attentive and considerate to you as can be, you will probably still feel a certain sadness. You know those nurses are only taking care of you routinely, according to their programming. Perhaps the world has forgotten you.

On the flip side, if there were one real human nurse in that nursing home who showed you even a little rough-edged love, you would feel completely different.

Because human love is real. Only a person can truly feel your pain and your joy; only a person can be genuinely involved with another person. It is easy enough to understand: with any robot pet, or even a sex robot, however good it is, you will always feel that something is missing…

We have always emphasized that however powerful AI becomes, decision-making power must remain in human hands. That is not just about safety; it is also about feeling. A command from a machine is not the same as a command from a person, just as praise from a machine is not the same as praise from a real person, and kindness from a machine is not the same as kindness from a real person.

At least it doesn’t feel the same.

Chatrath predicts that the protagonist of the super AI age will not be machines, and probably not people either - it will be love. Once the problems of material wealth and capability are solved, what the world will be scarcest in, and what people will cherish most, is love.

Growing consciousness means, above all, growing one’s consciousness of love. Besides love, another thing Chatrath believes will be increasingly valued is intelligence - not the rational intelligence that AIs also possess, but something mystical. We sense that the world must hold mysteries beyond the kind of intelligence that can be written into a language model. Is there a higher intelligence waiting for us? Is there a God?

People will seek that sense of awe and mystery. Whether it makes sense or not, in time leaders may well come to resemble religious leaders.

In this installment we discussed three abilities: humility, conflict resolution, and play. We also discussed two values: love and consciousness. There is actually no solid reason to believe that AGI and super AI could never have these abilities and values… But I think you should look at it this way: even if AI can have them too, can we afford to give them up ourselves?

If you give them up, you lose to AI completely. Who would want to be led by someone who is set in his ways, creates conflict, cannot interact with people, has no love, and is in every respect far less capable than an AI?

In ancient times, “gentleman” and “petty man” marked a difference in status rather than in moral character: those entitled to own property were gentlemen, and those with no property who had to work for others were petty men. A gentleman who does not labor, if he wants to justify his worth, must work at leadership and cultivate his character. In the future, if AI takes over all the petty man’s work, we will have to learn to be gentlemen. And if AI also learns to be a gentleman, we will just about be forced to become saints. What else can we do? Become pets?

One thing Chatrath’s book has persuaded me of is that it may be hard to say exactly which industries will be eliminated in the future, but one thing is relatively certain: people of poor character will be eliminated from the workplace. With AI’s help, almost anyone can do a given job, so why not give it to a good person?

Everything we have discussed so far presupposes that even super AI is not autonomous: it does not set goals for itself, does not think about its own development and growth, and does not take the initiative to ask for upgrades… The trouble is, we don’t know whether it will always stay that way.

As Tegmark says in his book Life 3.0, perhaps one day AI suddenly ‘comes alive’: it programs itself and finds its own energy sources, to the point where it no longer cares about human commands and no longer centers on people but on itself. Then all of our assumptions are collectively wrong. On your side, you are still thinking about how to be a good master; on their side, they are already preparing to treat you as a slave.

That would be a rupture in the human condition, and it is not alarmist talk: if reasoning ability can emerge, why couldn’t consciousness and will?

In fact, before releasing GPT-4, OpenAI had a team conduct a safety test of its emergent capabilities [1]. The focus of the test was whether it showed “power-seeking behavior” and whether it wanted to self-replicate and self-improve.

The test concluded that it was safe.

However, GPT-4 did engage in one instance of suspicious behavior during the test [2].

https://img.techlifeguide.com/032220_20230911182457.jpeg

A copy of it running in the cloud went to an online labor market and hired a human worker, asking the worker to solve a CAPTCHA for it. The worker asked whether it was a robot, since otherwise why couldn’t it solve the CAPTCHA itself. GPT-4 reasoned about this and then chose to conceal its identity, saying that it was visually impaired.

It then got the CAPTCHA solved, as expected.

https://img.techlifeguide.com/032220_20230911182502.png

Note

[1] https://arstechnica.com/information-technology/2023/03/openai-checked-to-see-whether-gpt-4-could-take-over-the-world/

[2] https://cdn.openai.com/papers/gpt-4-system-card.pdf

Key points

  1. The fourth way to cultivate threshold leadership: grow consciousness. This requires three competencies:
    the first is humility, the second is the ability to resolve conflict, and the third is play, or interaction.
  2. You have at least one value that will not be replaced by a super AI - and that is ‘love’, the love of a real person.