Xu Yingjin on "A Brief History of Artificial Intelligence": Can Artificial Intelligence Really Make Philosophy Go Away?


Recently, Mr. Nick published a new book, "A Brief History of Artificial Intelligence" (Posts and Telecom Press, December 2017). Since I have been working on the philosophy of artificial intelligence for the past decade, I naturally bought a copy as soon as it appeared. In this review I want to focus on Chapter 9, "Philosophers and Artificial Intelligence," which was written mainly to settle accounts with philosophers, especially those philosophers who have had something to say about artificial intelligence. The chapter largely reflects a deep prejudice against philosophy shared by many domestic researchers in science and engineering, namely: this is our turf, and philosophers should stay out.

On the question of whether philosophers are qualified to intervene in scientific issues, as someone trained in both philosophy of science and Western philosophy, I feel I have something to say. I grant that philosophers need not weigh in on every question of science and engineering. A philosopher, at least qua philosopher, has nothing to say about why the airframe of the J-20 fighter adopts a canard configuration. But on questions that scientists themselves cannot settle, such as "can evolutionary theory be applied to psychology?" or "what is the nature of quantum mechanics?", philosophers of psychology, philosophers of biology, and philosophers of physics certainly do have something to say. Many will ask: what qualifies you philosophers to study such questions? The answer is simple: overseas, the philosophers who work on these issues often hold more than one degree. Patricia Churchland, for example, a leading figure in neurophilosophy, has a deep background in neuroscience. Even if you have seen some Chinese scholar with a philosophy degree speak incompetently on a scientific question, it does not follow that the whole enterprise is bankrupt; the truth of the matter may simply be that the real experts in this line of work are not in your circle of friends.

By the same logic, philosophers can certainly speak on artificial intelligence. The reason is simple: symbolic AI and connectionist AI rest on quite different basic conceptions of what intelligence is, which shows that the field itself has no consensus on how artificial intelligence should be done. Listening to what philosophers have to say can therefore do no harm. Some may ask: the problem is that philosophers cannot write a single line of code, so why should we listen to them? Two responses suffice to refute this.

Pollock

First, how do you know that philosophers cannot write programs? John L. Pollock, a heavyweight in the theory of knowledge, developed an inference system called "OSCAR," and the results were published in mainstream artificial intelligence journals. Or take David Chalmers, now a famous philosopher of mind in the Anglo-American world: his doctoral supervisor was Douglas Hofstadter, the well-known AI researcher at Indiana University Bloomington. Having written an AI-related dissertation, could Chalmers really be unable to write programs?

Second, is writing programs really a necessary condition for voicing opinions about AI? Writing concrete code is a low-level operation, comparable to basic marksmanship drills in the military. Ask yourself: did Mao Zedong defeat Chiang Kai-shek's millions of troops because of his strategic talent, or because he was a crack shot? The answer is obviously the former. The relation of philosophy to the low-level operations of artificial intelligence is analogous to the relation between Mao Zedong's strategic thinking and tactical actions such as shooting.

Chalmers

But as I hinted above, Mr. Nick evidently does not hold philosophers in such high regard as I do. In Chapter 9 he singles out three philosophers who have crossed paths with artificial intelligence and criticizes them one by one. The first is Hubert Dreyfus, the phenomenologist who drew on Heidegger's intellectual resources to criticize symbolic AI. The second is John Searle, the philosopher of language who tried to refute the possibility of strong AI with his "Chinese Room" argument. The third is Hilary Putnam, the analytic pragmatist who tried to establish semantic externalism with his "brain in a vat" thought experiment. From the standpoint of argumentation, however, Nick's discussion clearly runs the risk of incomplete induction: can these three philosophers really represent the general view of philosophy on artificial intelligence? Chalmers and Pollock, mentioned above, do not appear at all. And take Searle's "Chinese Room" thought experiment, which the author does discuss: he seems to have ignored a basic fact about it, namely that among the hundreds of responses one can find, most of the English-language philosophical literature is critical of Searle. In that case, isn't it rather biased to treat Searle's view as typical of philosophers?

Searle

Besides the problem of incompleteness, Mr. Nick's second problem is this: does he really understand the work of the philosophers he criticizes? Take Putnam, who was in fact a very capable mathematician. His research on the arithmetical hierarchy is cited in standard literature on computability theory, and the Davis-Putnam algorithm familiar from the computer science literature also embodies Putnam's efforts (the algorithm later evolved into the DPLL algorithm). It is true that Putnam grew hostile to artificial intelligence in his later years, but his early work on "multiple realizability" in effect supplied the basic vocabulary for framing the debate over strong artificial intelligence. In Nick's telling, however, Putnam's merits as a fellow traveler of AI are erased, and what remains is a clumsy, comic-strip caricature of a scientific layman.
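For readers curious about the algorithm just mentioned: the Davis-Putnam line of work is the ancestor of modern SAT solving. The following is only my own minimal sketch of the DPLL procedure (unit propagation plus branching); the function and variable names are illustrative, not drawn from the original papers or from Nick's book.

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL sketch. Clauses are frozensets of nonzero ints:
    a positive int is a variable, a negative int its negation.
    Returns a satisfying (partial) assignment dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    clauses = list(clauses)
    # Unit propagation: a one-literal clause forces that literal.
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        simplified = []
        for c in clauses:
            if lit in c:
                continue            # clause satisfied, drop it
            if -lit in c:
                c = c - {-lit}      # literal falsified, shrink clause
                if not c:
                    return None     # empty clause: contradiction
            simplified.append(c)
        clauses = simplified
    if not clauses:
        return assignment           # every clause satisfied
    # Branch on some variable from a remaining clause.
    var = abs(next(iter(clauses[0])))
    for choice in (var, -var):
        result = dpll(clauses + [frozenset([choice])], dict(assignment))
        if result is not None:
            return result
    return None
```

The "Davis-Putnam" half of the name refers to the original 1960 resolution-based procedure; the backtracking search sketched above is the later Davis-Putnam-Logemann-Loveland (DPLL) refinement the review alludes to.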

Putnam

Even more misunderstandings appear in the author's treatment of Dreyfus. The author seems to disdain the real intellectual background of Dreyfus's critique, namely Heidegger's philosophy, regarding it as having no algorithmic cash value and as fit only for peddlers of snake oil. Frankly, on this point I am not one hundred percent opposed to Nick's criticism of Heidegger: as someone working in the Anglo-American analytic tradition, I too am sometimes driven mad by Heidegger's mode of expression. Unlike Mr. Nick, however, I do not doubt that Heidegger's philosophy touches on some genuinely important things, even though my views on how to make those insights clearer do not entirely agree with the mainstream of Heideggerian circles. My own positive view is that if one could "translate" Heidegger's thought into clear language, his insights would be much more easily absorbed by workers in the empirical sciences.

So how is this "translation" to be done? Put very roughly, a basic thesis of Heidegger's phenomenology is that the Western philosophical tradition has concerned itself with "beings" rather than with "Being" itself, and that his own new philosophy aims to re-expose this forgotten "Being." I admit that this is Heideggerian jargon at its most opaque, but it is not something that cannot in principle be made clear. Let me now try to explain it in plain language.

The so-called "beings" are whatever can be clearly objectified in linguistic representation: propositions, truth values, subjects, and objects are all beings in this sense. "Being" itself, by contrast, is precisely what resists objectification in linguistic representation, such as the vague background knowledge you rely on when using a metaphor or getting a joke. Can you enumerate the background knowledge needed to get a joke the way you can count your ten fingers? Can you draw a sharp boundary between background knowledge and non-background knowledge? Here lies the trouble for traditional AI: real human intelligent activity depends on just this kind of knowledge that cannot be made fully explicit, while programmers cannot write programs without making things explicit. This constitutes an enormous tension between the phenomenological character of human experience and the mechanical presuppositions of programming.

Some will say: why should a machine care about phenomenological experience? Artificial intelligence is not human cloning; why can't it ignore how people experience the world? To this rather shallow question the following answer suffices. Why do we build artificial intelligence in the first place, if not to assist humans? Suppose you need a porter robot to help you move house. Don't you want it to understand your commands? Consider, for example: "Hey, robot Ajay, move that thing over here, then take that other thing over there." Obviously this command is full of indexical expressions whose reference can only be fixed in a concrete context. Under such circumstances, how could you not expect the robot to share contextual awareness with you? How could you tolerate a robot that lives, as it were, on another spatiotemporal scale? And since such robots must have human-like contextual awareness, the basic structures of human phenomenological experience revealed by Heidegger's philosophy must, in some sense, also apply to genuine artificial agents.

Heidegger

Some will also say: then how do we find an algorithmic structure to implement these Heideggerian insights? How, for instance, is the "possibility structure of existence" to be described at the level of algorithms? If you cannot say, doesn't that show the talk is empty? But note: this demand should be addressed to AI researchers, not to philosophers. To put it another way, Heidegger's philosophy can be said to enrich the "user expectations" that human users bring to artificial intelligence, and the burden of realizing those expectations falls on the shoulders of AI workers. It is as if the military asks an aircraft developer to build a stealth fighter: the task of designing such an aircraft belongs to the developer, not to the military. You cannot turn around and blame the user for not understanding the technical details and therefore being unqualified to state "user requirements," any more than you can fault a military representative for not knowing every detail of aircraft design and deny him any say in the specification of a military aircraft. So if, like Mr. Nick, we dismiss the Heideggerians with one blow merely because they offer no algorithmic formulation to back up their claims, then by the same reasoning we could dissolve every consumer-rights organization in the world, since what do consumers know about technical details? Precisely because the conclusion of this reductio is absurd, we may turn the argument around: Mr. Nick has shifted a responsibility that belongs on the shoulders of AI researchers onto philosophers. Passing the buck like this is hardly fair.

Dreyfus

The main point of Dreyfus's critique of artificial intelligence is this: even if AI researchers subjectively want nothing to do with philosophy, objectively they always and unconsciously presuppose some philosophical position, and precisely because they lack philosophical training, the positions they unconsciously adopt are often rather crude. For example, the basic idea behind Minsky's frame theory is something Husserl had already played out long before, and something Husserl's student Heidegger had already criticized. Mr. Nick, however, rejects this comment. In his view, philosophers are simply narcissists who imagine that everyone else's ideas derive from their own; Minsky could perfectly well have arrived at the frame idea independently of Husserl, and in that case invoking Husserl's name is entirely unnecessary.

In my opinion, Mr. Nick has here once again seriously misread a philosopher. Dreyfus certainly did not mean that Minsky designed frames because he had read Husserl. He meant rather that a certain kind of mistaken thinking circulates so widely in the Western intellectual world that philosophers and engineers alike absorb it without noticing, even if the engineers never learn that philosophers entertained similar ideas. And precisely because philosophers state such mistaken ideas more concisely and systematically, discussing the issue at the philosophical level brings out the problem most clearly.

Husserl

Of course, my own support for Dreyfus has its limits; in a sense I am more radical than he is. I agree with his criticism of so-called symbolic AI, but I cannot share his warmth toward neural network technology. The reason is that a neural network can neither switch flexibly between different problem domains (a system trained to play Go, for instance, cannot be used directly to trade stocks), nor handle the systematicity and productivity of syntax (because past statistics cannot predict novel combinations of meaning). On this point the cognitive scientist Zenon Pylyshyn and the recently deceased philosopher Jerry Fodor published a famous critique in 1988 (Fodor was a celebrated philosopher of cognitive science, yet Mr. Nick's book scarcely mentions his name). In other words, even if I accepted the literal feasibility of the phrase "Heideggerian AI," my estimate of its prospects would be even more pessimistic than Dreyfus's.

Fodor

Beyond his misreading of Dreyfus, Mr. Nick's misunderstandings of certain other great philosophers, such as the later Wittgenstein, are astonishing. He believes, for example, that Terry Winograd's "blocks world" approach is close to the later Wittgenstein's philosophy of language. Anyone with even slight knowledge of the history of analytic philosophy has to laugh at this conclusion. The later Wittgenstein's "circle of friends" were the ordinary-language philosophers in the line of Austin and Strawson, whose favorite move was to drag tidy syntactic analyses back into the messy swamp of everyday language, and who showed a marked alienation from anything axiomatic. Given the clearly axiomatic coloring of the blocks-world programming, it would be relatively more plausible to regard that approach as an early analogue of Wittgenstein's Tractatus Logico-Philosophicus. From this one can see that although the author may be familiar with certain gossip about Wittgenstein's life, he has certainly not understood the Philosophical Investigations, and certainly has not read my own book "Mind, Language and Machine: A Dialogue between Wittgenstein's Philosophy and Artificial Intelligence" (People's Press, October 2013).

"Mind, Language and Machine: A Dialogue between Wittgenstein's Philosophy and Artificial Intelligence"

Speaking of Mr. Nick's misunderstandings of philosophy, I also want to mention his neglect of cognitive science, lest readers think I am being too "philosophy-centric." In fact, cognitive science was born in the West at almost the same time as the Dartmouth Conference; 1956 was the "twin birth year of artificial intelligence and cognitive science." Yet throughout the book Mr. Nick has remarkably little to say about cognitive science. For example, the work on "bounded rationality" by the AI pioneer Herbert Simon has a threefold significance spanning artificial intelligence, cognitive psychology, and economics; otherwise the old gentleman would hardly have won both the Turing Award and the Nobel Prize in Economics. Yet the author seems indifferent to Simon's work in this area (economists, take note: it is not only us philosophers whom Mr. Nick ignores). Fortunately, I myself do not reject cognitive science and economics the way Nick rejects philosophy; readers who want to understand this intellectual background may wish to consult my popular book "Cognitive Preconceptions" (Fudan University Press, December 2015).

"Cognitive Preconceptions"

Moreover, because Mr. Nick neglects the connection between cognitive science and artificial intelligence, the overall structure of his book is quite scattered. He devotes great attention to machine theorem proving while crowding out other important topics such as Bayesian networks (the work of Judea Pearl, inventor of the Bayesian network and a Turing Award winner, is likewise ignored). His discussion of neural network technology also slights the recent development of deep learning in this direction: he mentions some related techniques only when discussing "AlphaGo," and gives no proper introduction to Geoffrey Hinton's work on deep learning. As for the Turing machine, the foundation of basic computational theory, its introduction is relegated to Chapter 10, which is like a Japanese teacher who teaches the most difficult honorifics in the first lesson and waits until the tenth lesson to teach the elementary fifty sounds of the kana syllabary. Of course, the book also makes distinctive contributions: some of the details disclosed in Chapter 4 about Japan's Fifth Generation Computer project, for instance, cannot be found in ordinary Chinese-language sources. If only the rest of the book were of the same quality!

Finally, I want to make two summary comments. First, philosophy is certainly relevant to artificial intelligence, even though in fact few philosophers are equipped to engage with AI topics. But the relevance of the two is first of all a normative claim, not a factual one, and the latter cannot yield the former. From "late-Qing China had very few experts in Western affairs" you cannot infer "late-Qing China did not need experts in Western affairs." By the same token, the author's observation, drawn from literature statistics, that "the current philosophical literature has little overlap with the artificial intelligence literature" cannot support the conclusion that "artificial intelligence does not need philosophers to intervene."

Second, readers who really want a systematic understanding of the history of the interaction between artificial intelligence and cognitive science should still read books written by philosophers of cognitive science, because their training spans philosophy and cognitive science and thus more easily avoids narrow disciplinary bias. As reading in this vein, besides shamelessly recommending my own "Mind, Language and Machine," I also want to recommend the famous work by the British philosopher of cognitive science Margaret Boden, "Mind as Machine: A History of Cognitive Science," which is a must-read (unfortunately there is still no Chinese translation). Incidentally, the author of that book has a multidisciplinary background in computer science, medicine, and philosophy, and is a major figure who has had personal relationships with many of the big names in the history of artificial intelligence. A reader who compares Boden's book with Nick's will, I am afraid, immediately see a difference in quality of the sort that separates the J-20 from the J-7. Yet Mr. Nick's book does not mention a single one of Boden's 1,631 pages.

"Mind as Machine: A History of Cognitive Science"

I acknowledge that my view of "letting philosophy into artificial intelligence" is not the mainstream voice in Chinese public opinion today. Behind the mainstream voice on AI there is probably the push of capital, and there has always been enormous tension between capital's eager expectation of returns and the "slow" working style of philosophers who deliberate over and over. Yet perhaps for just this reason I feel all the more that philosophers need to speak up. The steadfastness of all who stand against the wind rests on their confidence that the wind will change. I do not lack that confidence.

(The author of this article served as Chinese chair of the "Philosophy of Artificial Intelligence" session at the 2018 World Congress of Philosophy.)
