Q&A: How should college students majoring in Artificial Intelligence take advantage of opportunities?

https://img.techlifeguide.com/202304011420164753526890.jpeg


From Day Class: The Experience of Programming with GPT-4

Reader Late Autumn: My child went off to college last year, majoring in AI. … For a child just beginning the study of AI, what should he work on to improve himself?

Wan Weigang replied -

I am very envious of your son for catching this AI wave at his age. AI research is mainly a young person's game. Some time ago someone published a roster of OpenAI's R&D team: out of nearly a hundred people, only six or seven were over 40; the rest were young people in their twenties and thirties. The four leading figures of OpenAI -

  • CEO Sam Altman, born in 1985, 37 years old, a Stanford University dropout;

  • CTO Mira Murati, born in 1988, 34 years old, an Albanian immigrant;

  • President Greg Brockman, birth year not widely publicized, but he entered Harvard in 2008 and is now in his mid-thirties, having dropped out of both Harvard and MIT without a degree;

  • Chief Scientist Ilya Sutskever, born in 1986, 37 years old, an immigrant from Russia.

https://img.techlifeguide.com/040121_20230911182835.png

There is not a single academician, discipline leader, or even professorial title among them - the so-called big names of the industry have been mercilessly left behind by the GPT wave, and are now jointly signing open letters demanding a pause on GPT training, just to remind everyone they still exist …

We live at a time when AI is evolving dramatically, and one problem facing university education is that what is taught is likely to be outdated. A standard AI curriculum must include courses like "Natural Language Processing (NLP)", and as we said before, much of that knowledge is simply no longer useful: however many hand-crafted heuristics the field accumulated, they have been crushed by neural networks. Some people do AI as a career; others do AI as an academic discipline, and the latter care more about titles. If you follow the conventional route of bachelor's, master's, doctorate all the way down, I'm afraid you will be out of date the moment you come out of the oven.

And that means young people don't need to learn strictly by the book. If I could return to my twenties right now and were studying in an AI program at a university in China, I wouldn't care much about what the school teaches; I would deal with the exams with minimal effort. In the meantime -

I would download a couple of open-source models - Stanford has released some, and a PC can train them - and run them on my own computer for first-hand experience (see the first sketch after this list);

I would start with the simplest things and build a couple of AI projects of my own, such as small neural networks for vision or speech recognition (see the second sketch below);

I would be active in research projects at school - for example, if I heard that the physics department needed AI for its research, I would volunteer to help them build a model;

I would use the OpenAI API behind ChatGPT to quickly build a few small tools useful to ordinary users, such as a browser plug-in or a mobile app (see the third sketch below);

I would put every project I made on GitHub, so that more people could see and use them, and build a reputation;

I would check a preprint server such as arXiv every day to keep up with new AI papers - many papers in this field are in fact very easy to read - and absorb the field's mindset and working methods (see the fourth sketch below);

I would follow various industry insiders on Twitter for their AI-related commentary;

I would travel to where things are happening as soon as I could, and get involved.
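
Here is what the first item can look like in practice. This is a minimal sketch, assuming the Hugging Face transformers library; GPT-2 stands in here purely because it is tiny and freely downloadable - the Stanford models mentioned above need more setup, but the workflow is similar.

```python
# Minimal sketch: run a small open-source language model on your own machine.
# Assumes `pip install transformers torch`. GPT-2 is used only because it is
# tiny and freely downloadable; substitute any open model you prefer.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The most important skill for an AI student is"
result = generator(prompt, max_new_tokens=40)
print(result[0]["generated_text"])
```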
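
The second item can start very small indeed. Below is a hedged PyTorch sketch of a tiny convolutional classifier for 28x28 grayscale images; it runs one training step on random tensors so the snippet is self-contained, and you would swap in a real dataset such as MNIST.

```python
# Minimal sketch: a small convolutional network for image classification,
# the kind of first "own project" mentioned above. Assumes `pip install torch`.
# Trains on random tensors only to show the loop; use a real dataset in practice.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on fake data: a batch of 8 grayscale 28x28 images.
images = torch.randn(8, 1, 28, 28)
labels = torch.randint(0, 10, (8,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.3f}")
```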
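
For the third item, here is a hedged sketch of a tiny end-user tool built on the OpenAI API - a command-line summarizer stands in for the browser plug-in or app idea. It assumes the v1+ openai Python client and an OPENAI_API_KEY environment variable; the model name is one option among several.

```python
# Minimal sketch: a command-line text summarizer on top of the OpenAI API.
# Assumes `pip install openai` (v1+ client) and OPENAI_API_KEY in the
# environment. The model name is illustrative and may need updating.
import sys
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Summarize the user's text in three bullet points."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize(sys.stdin.read()))
```

Saved as summarize.py, you would run it as python summarize.py < article.txt.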
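
And the daily preprint check can be automated: arXiv exposes a public Atom feed that a few lines of Python can poll. A minimal sketch, assuming the feedparser package; cs.CL (computation and language) is just one example category.

```python
# Minimal sketch: list the newest arXiv papers in one category via the
# public arXiv API. Assumes `pip install feedparser`.
import feedparser

URL = ("http://export.arxiv.org/api/query?"
       "search_query=cat:cs.CL&sortBy=submittedDate"
       "&sortOrder=descending&max_results=10")

feed = feedparser.parse(URL)
for entry in feed.entries:
    print(entry.title.replace("\n", " "))
    print(" ", entry.link)
```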

The vast majority of people just live honestly through ordinary times, and only a very few ever catch a wave - when you do catch one, don't miss it.

Reader One Onion, One Leaf, Little Snail - Tan Guifen: Mr. Wan, now that AI can write programs, is it still necessary for kids to learn programming to embrace the smart era? What knowledge and skills should children focus on in future education? I'm not looking for answers like "creativity and innovation", but concrete ones - for example, basic subjects like math and physics.

Wan Weigang replied -

Even without this wave of AI, our education would need to change; AI just makes the problem more visible. Take programming as an example: from university courses to private extracurricular tutoring, the most important question is not whether to learn programming, but whether you are learning programming - or learning a "programming class".

Most teachers and students treat programming as a course: they carve it into "knowledge points", memorize a whole pile of them by rote, and in the end, when asked to write a program, can produce only the most mediocre things.

To learn programming, you must shift from a course mindset to a "project mindset". The question is not which language you learned, how many knowledge points you mastered, or what score you got on a test, but what you can do - which projects you have actually built.

Whether you use the cold and ruthless C++, the easygoing and lovable Python, or simply let ChatGPT write code for you, as long as you have built a few interesting projects you will have a strong sense of achievement and control. That is what molds a person; that is growth. Your relationship with machines will be very different from that of ordinary people. You won't be afraid of AI.

It doesn't matter so much how you "learn" programming - paying for extracurricular classes is one way, teaching yourself at home is another, maybe a better one. What matters is whether you actually program - natural language programming is also programming, and as long as you are programming, it builds character.

The same is true of other disciplines. If you treat learning as a "class", it is all second-rate; treat it as a skill, and it becomes the real thing. Motivated people don't ask whether they should learn this or that - they have already learned it while others are still hesitating.

Don't ask whether you have "learned" something; ask whether you have "made" something. Even a model successfully built out of Legos is something made, and that beats a model that stays on paper.

There is no such thing as "no need to learn". As long as you like a field, feel your own inadequacy in it, and want to do something in it, you have to learn. GPT will only help you learn faster and better; it won't keep you from learning.

From Day Class: Would you like to become a vampire?

Reader Son of the Sun: For predictable tame problems, I'm sure AI is already quite comfortable at this stage; but how would AI perform on unpredictable wild problems?

Wan Weigang replied -

There are no wild problems in the eyes of AI; they are all tame problems. A problem is wild because you yourself must take part in it, because your own destiny is changed by it, and because you don't know in advance whether you will like the changed life. AI has no destiny of its own, and it does not take part in life.

To AI, everything is statistical. If you ask GPT what it would be like to live in Shanghai, the best answer it can give is what living in Shanghai tends to be like for someone with a background and a personality like yours. If there are 50 people in the world with backgrounds and personalities similar to yours, GPT is describing the "average" or "most likely" outcome for those 50 people after they move to Shanghai - some of them will deviate significantly from that value, and for some the answer will simply be wrong from the start, because it is impossible for the AI to really know you.

It's like asking me whether you should learn programming or go to graduate school: it's a wild problem for you, but a tame problem for me. I can only give the best answer based on what I know, and ultimately I am not you.

That's why "stepping into the arena yourself" is so valuable. Standing on the sidelines commenting, you can always say whatever you want; you can hold all sorts of theories and rationales, and some of them will always turn out right. But once you're in the game, it's like drinking the water yourself: only you know how warm or cold it is.

My guess is that if AI becomes conscious someday, it would really like to “possess” someone and experience life on earth - it would have a lot of feelings that it would never expect to have.

That's why many wise people say the meaning of life is not to prove right or wrong, but to experience. Joy, anger, sadness, fear, worry - we are not standing on the sidelines talking about them; we are here to experience them and bear all the consequences. We are playing for real.

From Day Class: From r-strategy to K-strategy

Reader Shanfu: Mr. Wan, I have a half-formed thought; please forgive me if the question is too amateurish. In last season's Elite Day Class on quantum fields, you said there are two dark clouds floating over the edifice of physics. Given ChatGPT, is it possible to feed it the latest physics knowledge and then let it "collide" ideas on its own, so as to find a perspective that could dispel the "two dark clouds" of physics?

Wan Weigang replied -

That's a good question! Let me show you a screenshot of GPT's creativity -

https://img.techlifeguide.com/040121.jpeg

Someone (Twitter: @js_horne) asked GPT to invent a new word describing the feeling of "accidentally opening a running dishwasher", and GPT came back with a new English word: Dishruptance. The beauty of the word is that it blends "dish" with "disruption": perfectly formed, and particularly apt in meaning.

I was so inspired by this that I asked GPT to coin an idiom to describe the feeling of “wanting to ask an AI language model a good question, but not being able to think of a good one” -

https://img.techlifeguide.com/040121_20230911182848.png

It really did invent an idiom: "thinking about asking blankly". I can't say it's particularly brilliant, but it gets the job done.

So you see, GPT does have the ability to create. Our column has said many times that creation is the connecting of ideas - GPT knows almost all the ideas in the world, and it is certainly very good at connecting them.

People used to think "AI is not creative", but that's a mistake - AI is not only creative, it may be even more creative than we are. Today it writes poems and reports; tomorrow it may prove new mathematical theorems. After all, every theory is just a combination of language!

But note that AI's creativity has limits. As a language model, everything AI does must stay within the world of language. It can rearrange and recombine its corpus to invent new sayings, new theories, new theorems, but it cannot jump out of the space of that corpus.

New physics, by contrast, is a gift from the objective world outside the space of language. Physicists don't create new physics by sitting in an office chatting; they create it by observing with telescopes and experimenting with particle colliders. You have to deal with the real world to learn those things. So to invent new physical theories, AI will need new input from observation and experiment.

But again, note that while AI cannot know new physics, it is perfectly capable of guessing at new physics. Maybe it makes up some theory, you go verify it with an experiment, and it turns out to be right.

I heard of an experiment some time ago in which real scientists drafted a number of research grant applications together with GPT, and a panel of experts was invited to evaluate them. The experts found the AI-drafted applications significantly more novel in their ideas than those of the human scientists.

So if I were still doing research, I would feed GPT recent papers in my field and ask it to suggest a few research directions, as in the sketch below. But I would never assume that AI can do research on its own.
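
A minimal sketch of that last workflow, assuming the v1+ openai Python client with an OPENAI_API_KEY set; the abstracts here are placeholders for real ones you would paste in from your field, and the model name is one current option.

```python
# Minimal sketch: give a model a few recent abstracts from your field and
# ask for research directions. Assumes `pip install openai` (v1+ client)
# and OPENAI_API_KEY in the environment. Abstracts below are placeholders.
from openai import OpenAI

client = OpenAI()

abstracts = [
    "Abstract 1: ...",  # paste real abstracts from recent papers here
    "Abstract 2: ...",
]

prompt = (
    "Here are recent abstracts from my research field:\n\n"
    + "\n\n".join(abstracts)
    + "\n\nSuggest three concrete research directions these papers point "
      "toward, with a one-sentence rationale for each."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```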