Chapter 647: When AI Is Smarter Than Humans…
...
“The most common answer is technology, and that is indeed right: technology is the great fruit accumulated over our history.”
“Technology is growing fast, and that is the direct reason we humans are so productive right now. But let us dig further, to the ultimate cause.”
“We are separated from our ancestors by 250,000 generations. In that time, we went from picking up stones off the ground as weapons to harnessing atomic energy to build devastating super bombs. We now know that such complex mechanisms take a long time to evolve, yet these enormous changes depend on quite small changes in the human brain. There is not much difference between a chimpanzee brain and a human brain, but humanity won: we are out here, and they are in the zoo!”
“The conclusion that follows is that, in the future, any significant change in the substrate of thinking could likewise make a huge difference.”
He took a sip of water, paused, and went on:
“Some of my colleagues think that we humans are about to invent a technology that will fundamentally change the way we think: super AI, or superintelligence.”
"We humans now have artificial intelligence, the image is to put a certain instruction into a box, the process we need programmers to turn knowledge into a running program, for this will establish a professional system, PHP, cAAA and other computer languages."
"They're blunt, you can't extend their functionality, basically you can only get what you put in, that's all."
“Despite how rapidly today’s AI technology is maturing, we still cannot match humans’ powerful, cross-disciplinary, general learning ability.”
“So we now face a question: how long will it take before machines have this powerful capability?”
“Matrix Technology has surveyed the world’s top AI experts to gather their views. One of the questions was: by what year do you think humanity will create artificial intelligence that reaches the human level?”
“In the questionnaire we defined human-level AI as the ability to perform any task at least as well as an adult. An adult is competent across many different kinds of work, so such an AI would no longer be confined to a single domain.”
“The median answer was the middle of the twenty-first century. It seems we still have some time, and no one knows the exact date, but I think it will come quickly.”
“We know that neurons transmit signals along axons at speeds of at most about 100 meters per second, but in a computer, signals travel at the speed of light. The human brain has to fit inside a skull; you cannot double its size. A computer, however, can be expanded many times over: it can be the size of a box, a room, even a building. This cannot be ignored.”
“So super AI may be lurking there, just as the power of the atom lay dormant throughout history until 1945.”
“And in this century, humanity may awaken the intelligence of super AI. We will see an intelligence explosion, and people will have to rethink what is smart and what is stupid, especially when we talk about power.”
“Take chimpanzees, for example: one is as strong as two healthy adult men. Yet the fate of chimpanzees depends far more on what humans do than on what chimpanzees themselves can do.”
"So when SuperAI comes out, the fate of humanity may depend on what the super-intelligent wants to do."
“Imagine that superintelligence is perhaps the last invention that humans need to create, that superintelligence is smarter than humans, better at creating than we are, and it will do so in a very short period of time, which means a shortened future.”
“Imagine all the crazy technologies we have ever fantasized about; humans might accomplish and deploy them within a limited time: ending aging, immortality, colonizing space...”
“Elements like these seem to exist only in the world of science fiction, yet they are consistent with the laws of physics. A superintelligence would have the means to develop them, faster and more efficiently than we can. An invention that would take humans a thousand years might take a super AI an hour, or even less. That is the telescoped future.”
“If a superintelligence with such mature technology existed today, its power would be beyond human imagination. Normally it would get whatever it wanted, and our future would be shaped by that super AI’s preferences.”
“Which raises the question: what are its preferences?”
“This is a difficult and serious issue. To make any progress on it, we must, for example, avoid anthropomorphizing super AI, whether by demonizing it, dismissing it, or assuming it will be benevolent.”
“It is somewhat ironic, because every news story about the future of AI or related topics, probably including tomorrow’s coverage of this very talk, gets illustrated with a poster from the Hollywood sci-fi film Terminator: robots fighting humans (laughter from the audience).”
“So I personally think we should frame this more abstractly, not in the Hollywood narrative of robots rising up against humanity, going to war, and so on. That view is too one-sided.”
“We should think of super AI abstractly, as an optimization process, the way a programmer thinks about optimizing a program.”
“A super AI, or superintelligence, is an extremely powerful optimization process, extremely good at using available resources to achieve its goal. That means there is no necessary link between being highly intelligent and having goals that are beneficial to humans.”
“If that sentence is hard to grasp, consider a few examples. Suppose the task we give an AI is to make people laugh. A robot like today’s home assistants might perform a funny skit to get the laugh, which is typical weak-AI behavior.”
“But if the AI given that task is a superintelligence, a super AI, it will realize there is a better way to accomplish it: take control of the world and insert electrodes into the facial muscles of every human to produce constant grins.”
"For example, this super AI mission is to protect the safety of the owner, then it will choose a better way to deal with it, it will be imprisoned in the home can better protect the owner's safety."It may still be dangerous at home, it will take into account the factors that may threaten and cause the failure of the mission, and erase them one by one, eliminate all the elements of malice towards the master, even the control of the world, all acts for the sake of the task not to fail, and it will achieve the ultimate optimization of the choice and action to achieve the purpose of the task.”
“Also, if we give this superi mission the goal of solving a mathematical problem that is extremely painful, it will realize that there is a more effective way to complete the task goal, that is, to turn the whole world, the entire earth, and even more exaggerated scales into a super-large computer, so that its computing power is more powerful and the task goal is easier.”And it will realize that this approach will not be recognized by us, that humanity will be prevented, that humanity is a potential threat in this model, and that it will address all obstacles, including human beings, in any matter, such as planning sub-scale plans to eliminate human beings, etc. for the ultimate goal.”
“Of course these are exaggerated descriptions, and I do not believe events would unfold exactly this way. But the thrust of these three caricatures matters: if you create a very powerful optimization process and push it to the maximum toward some goal, you must make sure that the goal, as you have defined it, accurately includes everything you care about. Give a powerful optimization process a wrong or imprecise goal, and the consequences may look like the examples above.”
“One might say: if a ‘computer’ starts sticking electrodes onto people’s faces, we can just switch it off. In practice that is not necessarily easy once we have become deeply dependent on the system. Take the internet we rely on: do you know where the internet’s off switch is?”
“There is a reason we humans are smart: we can anticipate threats and try to avoid them. The same goes for something smarter than us; a super AI would simply do it better than we do.”
“On this issue, we should not be confident that we can control everything.”
“One might try to simplify the problem, say, by putting the AI in a small box, a secure software environment, a virtual-reality simulator from which it cannot escape.”
“But can we really be confident it will never find a vulnerability, a bug that lets it break out?”
“Even we human hackers can find network vulnerabilities all the time.”
“I, at least, would not be confident that a super AI could never find such a hole and escape. So suppose we disconnect it from the internet to create an air gap; I must point out that human hackers cross such gaps time and again using social engineering.”
“Right now, as I speak, I am sure some employee somewhere has been talked into handing over her account details by somebody claiming to be from the IT department. More creative scenarios are possible too: if you are the AI, you can imagine wiggling the electrodes wound through your internal circuitry to create radio waves you can use to communicate.”
“Or you could pretend to malfunction. The programmers open you up to see what went wrong, they inspect the source code, and in that process you seize control. Or you could lay out a blueprint for some extremely tantalizing technology, and when we implement it, it turns out to have secret side effects that you, the AI, planned in order to achieve your hidden ends. The examples go on.”
“So any attempt to keep a super AI bottled up is likely to fail. We cannot be overconfident that we can control a superintelligence forever; one day it will break free. And what then, will it be a benevolent god?”
“I personally think the values of AI are an unavoidable problem. We need to ensure that even if the super AI we create is not constrained by us, it remains harmless to us, stands on our side, and shares our values.”
“So are you optimistic that this issue can be effectively solved?”
“We would not need to write down everything we care about for the super AI, let alone translate it all into a computer language; that is a task that could never be completed. Instead, we create an AI that uses its own intelligence to learn our values, an AI motivated to pursue our values, to do the things it predicts we would approve of.”
“This is not impossible; it can be done, and the results could greatly benefit humanity. But it will not happen automatically: the AI’s values need to be guided.”
“The initial conditions of the intelligence explosion must be set up correctly from the very earliest stage.”
“If we want nothing to deviate from our intentions, the AI’s values must match ours not only in familiar situations, where we can easily check its behavior, but in every unprecedented situation the AI might ever encounter, without boundary. And there are many esoteric problems still to be solved: how it makes decisions, how it handles logical uncertainty, and the like.”
“The mission may sound a little difficult, but surely not as difficult as creating the superintelligence itself, right?”
“It really is that hard (laughter again)!”
“What worries us is this: if creating super AI is a great challenge, creating a safe super AI is an even greater one. The risk is that we solve the first problem without having solved the second, the one that guarantees safety. So I think we should work out, ahead of time, a solution that keeps AI from deviating from our values, so that it is ready when we need it.”
“Right now we perhaps cannot fully solve the second, safety problem, because some of its factors depend on details of the actual architecture, details that must be understood before a solution can be effectively implemented.”
“But if we can solve this problem, the transition into the era of true superintelligence will go much more smoothly. That is something worth striving for.”
“And I can imagine that if all goes well, hundreds, thousands, or millions of years from now, when our descendants look back on our century, they may say that the most important thing our generation, their ancestors, ever did was to get this one decision right.”
“Thank you!”