Max Tegmark is a well-known physicist and co-founder of the Future of Life Institute, whose motto reads: "Technology is giving life the potential to flourish like never before... or to self-destruct. Let's make a difference!" Now, in "Life 3.0," he examines a development that cannot be ignored: the evolution of artificial intelligence. He stresses that we should think seriously about the risks involved, lest we inadvertently squander humanity's cosmic endowment.

Robotic bees (RoboBees) could be used for artificial pollination, but may have unpredictable environmental effects. Thierry Falise/LightRocket via Getty
Full disclosure first: Tegmark and I are collaborators and share a literary agent. In 2014, together with physicists Stephen Hawking and Frank Wilczek, we published an article in the Huffington Post on humanity's blind optimism about superintelligent machines. Ostensibly a commentary on Wally Pfister's dystopian AI film "Transcendence", it was in fact a call for the AI community to take the risks of artificial intelligence seriously. So I am unlikely to reject outright the premise of "Life 3.0": that, depending on the decisions humans make now, life may spread throughout the universe and "flourish for billions of years", or it may not. The possibilities are at once enticing and weighty.
The book's title refers to the third stage in the history of evolution. Life spent almost 4 billion years perfecting its hardware (bodies) and software (the capacity to generate behaviour). Over the past hundred thousand years or so, learning and culture have allowed humans to adapt and control their own software. In the coming third stage, life's software and hardware will both be redesigned. This may sound like transhumanism, which advocates redesigning the human body and brain, but Tegmark's focus is on AI, which supplements mental capacities through external devices.
Tegmark weighs risks and benefits in turn. Short-term risks include an arms race in autonomous weapons and a sharp decline in employment. The AI community is broadly united in condemning the creation of machines that can decide to kill humans, but the employment question remains contested. Many predict an economic upside: as in past industrial revolutions, AI will create new jobs even as it eliminates old ones.
Tegmark imagines two horses discussing the rise of the internal combustion engine in 1900. One predicts that "horses will get new jobs, as has always been the case... just as when the wheel and the plough were invented." For most horses, however, the "new job" was becoming pet food. Tegmark's analysis is compelling, and economists including Paul Krugman share his concern. Yet the question remains open: what economic state should humanity aim for once most of what we currently call work is done by machines?
The long-term risks bear on human survival itself. The book's fictional prelude describes one scenario in which superintelligence might emerge. Tegmark then walks through the possible fates of humanity: quasi-utopia, enslavement, or annihilation. That we do not know how to steer towards the better futures means we have not thought seriously enough about why making AI more capable might be a bad thing.
Computer pioneer Alan Turing raised one possibility in 1951: that we humans could at best hope to keep machines in a subservient position. He was expressing the general unease of creating something smarter than ourselves. Restricting the development of AI to allay that unease may be neither feasible nor desirable. The most interesting parts of "Life 3.0" argue that the real problem lies in the potential consequences of misaligned objectives.
Norbert Wiener, the founder of cybernetics, wrote in 1960: "We had better be quite sure that the purpose put into the machine is the purpose which we really desire." Or, as Tegmark puts it, we do not yet know how to instil in a superintelligent AI an ultimate goal that is neither undefined nor destined to lead to humanity's demise. In my view, this is both a technical problem and a philosophical one, demanding all the intellectual resources we can muster.
Only after answering these questions can we reap the rewards, such as expansion into the universe, powered perhaps by exotic technologies: Dyson spheres (which capture the energy of a star), accelerators built around black holes, or Tegmark's theoretical "sphalerizer" (like a diesel engine, but fuelled by quarks and a billion times more efficient). The laws of physics set the upper limits of such expansion, and if one simply savours the science, it is hard to argue with these analyses. Perhaps one day we will expand the biosphere by "about 32 orders of magnitude", only to be disappointed that cosmic expansion may limit us to settling a mere 10 billion galaxies. We can already sympathize with the anxieties of future generations as "the threat of dark energy ripping cosmic civilizations apart motivates massive cosmic engineering projects".
The book concludes with an account of how the Future of Life Institute has worked to bring these issues into mainstream thinking about AI; Tegmark has contributed much here. He is, of course, not the only one sounding the alarm. In scope, "Life 3.0" is most similar to Nick Bostrom's 2014 book "Superintelligence" (Oxford University Press). Unlike Bostrom, however, Tegmark does not try to prove that the risk is unavoidable; he also sidesteps deep philosophical analysis, instead leaving it to readers to judge which scenarios are more likely or more appealing.
Although I strongly recommend both books, I suspect that Tegmark's will not provoke the overreaction common among AI researchers: a defensive retreat into arguments for indifference. A typical example runs: we do not worry about remote, species-ending possibilities such as black holes materializing in low Earth orbit, so why worry about superintelligent AI? The answer: if physicists were working on making such black holes, wouldn't we ask them whether it was safe to do so?
The Economist put the essential point succinctly: "The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking." "Life 3.0" is far from the last word on AI and the future, but it sketches, with a fascinating brushstroke, the serious thinking we need to do.