AI Topic 16: The Human Use of Human Beings

https://img.techlifeguide.com/202305251319571665770333.jpeg

In the age of automated machines, what use are people? This question was pondered long ago, and an answer was reached that has stood the test of time.

As early as 1950, Norbert Wiener (1894-1964), the father of cybernetics, published The Human Use of Human Beings [1], which argued that the essence of life is information: our mission is to supply additional information to the system. This view of Wiener's directly influenced Claude Shannon (1916-2001), the founder of information theory, who held that the information content of a message is measured by the uncertainty it resolves, that is, by how much choice it represents. Drawing on Shannon's information theory, I wrote an article called "An Information Theory-Based View of Life" [2], arguing that the value of life, in the informational sense, lies in striving for choice, diversity, uncertainty, and freedom.
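Shannon's measure can be made concrete. The entropy of a choice, H = −Σ p·log₂(p), counts how many bits of uncertainty are resolved when one option is picked: the more numerous and evenly weighted the options, the more information the choice carries. A minimal sketch in Python (the function name is my own):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the average uncertainty resolved
    when one outcome is chosen from the given distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin toss carries 1 bit; a foregone conclusion carries 0 bits.
print(entropy([0.5, 0.5]))   # 1.0
print(entropy([1.0]))        # 0.0
# More options, more evenly weighted -> more information per choice.
print(entropy([0.25] * 4))   # 2.0
```

In this sense, a person who always follows the prescribed procedure produces a zero-entropy output; value, informationally, comes from having real alternatives.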

If someone gives you a task and you finish it by following the prescribed procedure step by step, then you are no different from a machine. Only if some accident occurs along the way, and you have to solve the problem in your own way, even according to your own values, leaving your mark on the matter, can you prove that you are a person and not a tool.

You see, these ideas agree with what Wolfram deduced in the previous lecture from computational irreducibility: *the most fundamental role of a human being is to choose the direction of future development.* If I may add: people must ensure that they have a sufficient number of options and a sufficiently wide range of choice.

How does one do all this?

The first step is to constrain the AI. Science fiction writer Isaac Asimov famously proposed the 'Three Laws of Robotics', which state:

First, a robot shall not harm a human being, or stand by while a human being is harmed;

Second, a robot must obey human commands unless the command conflicts with the first law;

Third, a robot may defend itself, provided it does not violate the first or second law.

These three laws seem quite reasonable: first ensure human safety, then ensure that the robot is useful, and also allow the robot to protect itself… So can we just use these three laws to constrain AI? Asimov's idea is elegant, but it is hardly operational.

First of all, what counts as doing "no harm" to humans? If an AI thinks a violent movie will hurt people's feelings, does it have the right to refuse to take part in the filming? Does arresting a criminal in order to save the lives of more innocent people count as harmless? The reality is that there are many moral dilemmas even humans haven't figured out, so how can you expect AI to figure them out?

The bigger problem with the Three Laws of Robotics is that they put the judgment in the hands of the AI. Realistically, no company would do that. In fact, when countries develop AI, the highest-priority application is weapons, such as attack drones or battlefield robots. Are you kidding? This is a Department of Defense program; is it up to you, an AI, to decide whether or not to do harm?

And the most fundamental problem is still Wolfram's computational irreducibility: no rule that can be written down can truly constrain an AI. There will be loopholes in it, and there will be accidents in the future.

Then you might say: granted, "the Tao that can be told is not the eternal Tao," yet human society still has all kinds of laws. That's right. We have the Constitution, for example, and we recognize that the Constitution cannot possibly exhaust every situation the country will encounter in its future development, so we retain a process for amending it. Ideally, the constraints on AI should work the same way: we first formulate a set of provisional, basically workable rules for AI to comply with, then modify and supplement them whenever a new situation arises, dealing with each case as it comes.

But the premise is that when you later tell the AI to change the rules, the AI must actually obey you.

Computational irreducibility means that our control over AI can at best be dynamic: you cannot lay down fixed rules once and for all; you can only keep adjusting as new situations arise. But as we said earlier, AI has its own way of thinking. If we can't understand AI, how can we be sure of controlling it?

Wolfram's judgment is: accept it. The correct attitude is to recognize that AI develops according to its own laws, and to treat AI as you treat nature. We still cannot fully understand nature, and nature occasionally visits disasters on humanity, such as earthquakes and volcanic eruptions, which we can neither control nor predict. Yet after all these years, we have adapted and learned to get along with nature…

AI will surely cause some harm to human beings in the future, just as cars bring traffic accidents; we accept that.

Human civilization has survived despite nature's frequent ravages. Wolfram says the fundamental reason is that a certain balance has been struck, among the various forces of nature and between us and nature. The same will hold for our future relationship with AI: we hope that human power and AI power can always maintain a rough balance, and that AIs can also check and balance one another.

And computational irreducibility supports this picture. There won't be any super-powerful AI reigning supreme in the future, just as there has never been a universal dictatorship in history. There may be localized imbalances in the short term that bring some calamity, but overall everyone will always be able to get by… That's the best we can expect.

Mathematically, AI is bound to be checked and balanced by other AI. But practically, if humans are too weak and AI too strong, then, as in a mythological world, the AIs of various factions become gods walking the earth, and humans can only beg these gods to do things for them. That's not what we want either.

In order to ensure a balance of power, humans must continue to be involved in key jobs in society.

Will AI gradually take our jobs? Historical experience, at least since the Industrial Revolution, says no. The historical experience is that automation always creates more new occupations than it eliminates.

For example, it used to be that every phone call needed a human operator to connect it, and that was a respectable job that employed educated women. So when automated telephone exchanges arrived and operators were no longer needed, did employment in the telephone industry decline? Quite the opposite.

Automatic exchanges made phone calls easier and cheaper, so demand for telephone service grew enormously. The industry as a whole got bigger, and all kinds of jobs immediately appeared, especially jobs that had not existed before. The overall result was that employment in the telephone industry not only did not shrink but grew greatly.

Similar things happened again and again across industries. Take computers: the work of accountants has been automated to a certain extent, so has the number of accountants decreased? No. Computers made financial services more widely available; more people use them, the services have grown more complex, and new regulations and new business models have emerged, so more accountants are needed now than before.

This is true in every industry. Economists have summarized it as a set of laws [3]:

The more automation, the higher the productivity, the cheaper the product, the larger the market share and the more consumers, and the scale of production expands disproportionately; the result is that you need to hire more employees. Automation does displace some jobs, but it creates even more new ones.

Statistical studies show this is true even for unskilled manufacturing workers, who are considered the most vulnerable to automation: they can find new positions. The most automated industries in the U.S. are also the ones with the most job growth. By contrast, companies that failed to automate fully have had to cut employment, either outsourcing production or simply going out of business.

So it would be foolish for any government to say, "I'm afraid of AI taking people's jobs, so I'm going to limit AI development and reject automation." Whichever industry you protect will fall behind, with more expensive products and fewer consumers…

Right now, ChatGPT makes programming and routine writing easier, and AI drawing tools like Midjourney have already led some companies to lay off illustrators. But by the laws of history, these tools will create more jobs.

For example, the "prompt engineer," or "wizard," is a new kind of job that appeared only in the last few months. And since AI paintings are so easy to make, people will demand visual art in every corner of their lives. Where today people hang reproductions of famous paintings on their walls, in the future there may be brand-new, unique paintings that change every half hour. It's easy to imagine we'll need more people who are good at drawing with AI.

Since programming has become easier, every company, even every small group, can ask for its own customized software. And since robots are so capable, why not tear down and rebuild our houses every so often, changing the layout as the household changes?

Computational irreducibility ensures that there will always be a new job waiting for a human to do.

And we must make sure that people do all the high-end jobs, leaving the low-end ones to AI. To do that, our education must keep people strong, which is precisely not what mass education has been training for so far. The goal of mass education has always been to produce instrumental people.

The highest work, according to Wolfram, is the discovery of new possibilities. In science or in art, if you can create new possibilities for mankind, you are the most advanced.

The rest of human occupations should use automation as much as possible. To put it bluntly: whatever AI can do well, you don't need to learn to do; your task is to harness the AI. This mental corner is not easy to turn. Calculators and computers, for example, long ago freed people from calculation, yet we still feel something is missing if a person can't multiply a one-digit number by a two-digit number in their head, or can't work out an integral by hand… In fact, today's students should free their brains to learn more advanced skills.

What counts as more advanced? Combining Wolfram's statements with my own, I'd list these:

  • *'Tuning the force'* [4]. Various automation tools are readily available, but there are so many of them that you need a certain education to know which tools to call for which task. Just as ChatGPT knows to call various plugins, you'd better have a few more tools at your disposal if you want to feel in control of things.

  • *Critical thinking.* Since you have to make choices, you need a basic understanding of how the world works. You have to distinguish fact from opinion, know which conclusions represent current scientific understanding, and which claims aren't even worth discussing. You may also need a certain amount of computational thinking: not that you must program, but you should be good at structured, logical thinking.

  • *Art and philosophy.* These improve your judgment and let you ask good questions. Artistic skill in particular makes you good at understanding others, so you can know what consumers need in today's society, and even imagine new needs.

  • *Leadership.* It need not be leadership over people; you need leadership over at least AI. This includes setting strategic goals, organizing work steps, setting up tests, and so on… Managing AI is a discipline of its own.

  • *Communication and persuasion.* Can you explain a complex idea clearly? Can you get people to buy into your point of view? Can you sell a product? High-end jobs require all of these.

And of all this wisdom, Wolfram argues, *the most central human ability is to decide what you care about, what you want.* Only you can decide that, because the answer comes from your history and your biology. It is also a crucial strategic choice, because if you choose badly, you end up on a bad path.

Qin Chunhua, director of the Peking University Examination Institute, once shared an observation [5]. When he went to Shanghai to interview students, he found that their academic records, artistic specialties, and community service were all exactly alike: seemingly perfect, yet without any distinguishing trait. Most frightening of all, when asked what kind of person they wanted to become, very few could answer.

The United States is actually similar: under homogenized competition, a large number of top students are "excellent sheep" [6].

If these people are not enlightened, they will almost certainly lose out to AI. You are a product of history, a victim of the modern education system, but you can still learn and think independently, and you can make better choices.

To put it bluntly, these are the 'liberal arts' that the ancient aristocracy studied. You can simply think of the AIs as servants and slaves, and all of us as gentlemen and estate owners. What we need to learn is not the skill of labor but the art of leadership, the wisdom of living.

Of course, many nobles in history were very stupid and had their power taken from them in ugly ways… So to be a good noble, you have to learn.

I still say the future society must be one where every individual is like a dragon. The Axial Age of Confucius, Socrates, and the Buddha was 'axial' precisely because advances in agricultural technology freed some people from labor so they could spend the whole day thinking, making society hierarchical and life complicated. Now AI has arrived at the perfect time for us to return to the Axial Age and learn to be sages.

Notes

[1] Norbert Wiener, The Human Use of Human Beings: Cybernetics and Society (Science Metadata Series), Peking University Press, 2010. English original: Norbert Wiener, The Human Use of Human Beings, The Riverside Press (Houghton Mifflin Co.), 1950.

[2] Elite Daily Lessons, Season 2, "An Information Theory-Based View of Life."

[3] Philippe Aghion et al., The Power of Creative Destruction: Economic Upheaval and the Wealth of Nations, 2021.

[4] Elite Daily Lessons, Season 5, "'Tuning the Force': The Ability to Call on Tools."

[5] Qin Chunhua, Peking University Dean Interviews Shanghai Scholars: They’re Like Furniture Made from a Mold, Philosophy Road Public, 2023-04-23.

[6] William Deresiewicz, Excellent Sheep (Chinese edition, 2016).

Highlights

  1. The most fundamental role of human beings is to choose the direction of future development. People must ensure that they have enough options and a wide enough range of choice.
  2. People must coexist with AI, and we must ensure that people do all the high-end work.
  3. The more advanced human skills are 'tuning the force', critical thinking, art and philosophy, leadership, and communication and persuasion.