The Threat to Human Intelligence From Today's AI
Rensselaer Polytechnic Institute (RPI)
Academic Editor: Gordana Dodig Crnkovic

Abstract:

Reflections upon the nature of today’s Artificial Intelligence (AI), on the one hand, and human intelligence (HI), on the other, are now intertwined, catalyzed by the stunning increase in the breadth and speed of artificial agents produced (mostly) by for-profit corporations, and used by the many humans who pay for the privilege of using (the larger, more intelligent(?) versions of) these agents. Unfortunately, because of the particular nature of this AI, this reflection constitutes an attack on HI. This is so because the basis for today’s ascension of AI is a form of venerated machine learning (ML) inconsistent with a large part of what, undeniably, distinguishes HI, and has done so since the dawn of recorded history. Specifically, the sub-type of ML known as “deep learning” (DL) is fast reducing the received conception of HI to the sub-human level, while at the same time audaciously raising the banner of “foundation models” over today’s AI to signal the subsuming of all of HI (see, e.g., Agüera y Arcas & Norvig 2023). Humans have long been distinguished by their ability to create and assess chains of logical reasoning—deductive, analogical, inductive, abductive—over internally stored, structured, declarative knowledge, in order to produce conclusions, themselves structured and declarative. But AI qua DL literally has no such knowledge in the first place: there is nowhere in an artificial deep neural network (DNN) where such knowledge resides (see, e.g., Russell & Norvig 2020; and for a lively, lucid treatment of the issue, see Wolfram 2023). Ironically, none of the corporations such as OpenAI that sell and promote DL could survive for a week without humans therein employing logical reasoning on a broad scale. Genuine HI, for a simple but telling example, has for millennia consisted in part in being able to (i) memorize the algorithm of long multiplication over two whole numbers, (ii) apply it to compute the function of multiplication over, for instance, 23 and 5 (to yield 115), and (iii) certify as correct and gauge, logically, how feasible this algorithm is in the general case. AI qua DL does not have even this intelligence, since no algorithms whatsoever are stored in a DNN; and as a matter of settled science, DNNs only approximate the computing of functions. Hence, as HI is increasingly identified with DL (and human minds identified with DNNs), HI comes to be regarded as something dramatically lesser than it is. In short, today’s DNN-based generative AI (GenAI), itself painfully illogical (see, e.g., Arkoudas 2023a, 2023b), portrays HI as constitutionally illogical. This is a growing cancer that philosophy as a field must uncover, and seek to heal, or at least mitigate. Our ability to engage in logical reasoning is in large part what sets us apart from all other natural species, and this hallmark, while under attack today, must be protected. The need to do so is all the more dire because Artificial General Intelligence (AGI), taken to have HI as its initial target (before aiming at superhuman intelligence), is defined as being devoid of logical reasoning (see, e.g., the entirely statistical Legg & Hutter 2007).

I end by (briefly) offering and defending a course of action to address the threat to HI.
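To make the abstract’s long-multiplication example concrete, the following is a minimal sketch in Python (the function name long_multiply and the second test input are illustrative assumptions, not part of the abstract) of what it means to store and apply the schoolbook algorithm, in contrast to a DNN, which at best approximates the multiplication function.

```python
# Sketch of the abstract's point about long multiplication: a stored, explicit
# algorithm computes the function exactly for every input, whereas a trained
# DNN only approximates it. Names and the extra test case are illustrative.

def long_multiply(a: int, b: int) -> int:
    """Grade-school long multiplication over two whole numbers.

    Each digit of b multiplies a, shifted by that digit's place value;
    the partial products are then summed -- the procedure a schoolchild
    memorizes, applies, and can certify as correct in the general case.
    """
    if a < 0 or b < 0:
        raise ValueError("whole numbers only in this sketch")
    total = 0
    shift = 0  # current place value (0 = ones, 1 = tens, ...)
    while b > 0:
        digit = b % 10                    # rightmost remaining digit of b
        partial = a * digit               # one row of the schoolbook layout
        total += partial * (10 ** shift)  # shift the row into place and add
        b //= 10
        shift += 1
    return total

# The worked example from the abstract: 23 x 5 = 115.
assert long_multiply(23, 5) == 115

print(long_multiply(23, 5))         # -> 115
print(long_multiply(98765, 43210))  # still exact, far from the worked example
```

The point of the sketch is that the stored procedure is exact for every pair of whole numbers and can be certified as correct by ordinary logical reasoning, whereas a network trained on example products can only approximate the function and offers no such certificate.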

References

Agüera y Arcas, B. & Norvig, P. (2023) “Artificial General Intelligence is Already Here” Noema, October.

Arkoudas, K. (2023a) “ChatGPT is No Stochastic Parrot. But It Also Claims That 1 is Greater than 1” Philosophy & Technology 36.3: 1–19.

Arkoudas, K. (2023b) “GPT-4 Can’t Reason: Addendum” Medium.

https://medium.com/@konstantine_45825/gpt-4-cant-reason-addendum-ed79d8452d44

Legg, S. & Hutter, M. (2007) “Universal Intelligence: A Definition of Machine Intelligence” Minds and Machines 17.4: 391–444.

Russell, S. & Norvig, P. (2020) Artificial Intelligence: A Modern Approach (New York, NY: Pearson).

Wolfram, S. (2023) “What is ChatGPT Doing … and Why Does It Work?”

https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work

Philosophical Issues

Such issues are myriad in the present case. The nature of intelligence—artificial, natural/human, supernatural/superhuman—must be engaged. A brief intellectual history, rooted in philosophy, must be provided for context. The growing “science of intelligence” within the fields of AI and AGI must be analyzed philosophically. The analysis of how humans learn must be compared with the analysis of how ML systems (DL in particular, and reinforcement learning (RL)) “learn.” Substantiation, by argument, of how today’s “GenAI” seriously limits intelligence in both AI and HI, and threatens the latter, must be provided. The presentation/paper concludes with a proposed solution for improving the current state of affairs.

Keywords: Artificial Intelligence; Human Intelligence; GenAI; Machine Learning; Logic; Logic-Based AI

 
 