The following is inspired by Rich Sutton’s March 13, 2019 post, The Bitter Lesson.
One of the fundamental quests of human beings has been to understand themselves: what is intelligence, what is consciousness, is there free will, how do thoughts and actions flow from the self, and so on. One of the many ways of investigating these questions is through AI. I don’t know if there is a branch of Philosophy that “understands-things-by-implementation”, a kind of “learning-by-doing Philosophy of Mind”. This implicit goal made it natural for Cognitive Science to join hands with AI. AI, for its part, is happy to think that brain/mind-inspired algorithms will help it reach the goal of “intelligent machines” faster. This is also pragmatic: the only working prototype of an “intelligent machine” is us, so we might as well subject ourselves to reverse engineering! The only catch is that our hardware is neurons while AI runs on silicon. The former is analog, dynamical, and non-linear; the latter is, for the most part, binary, stationary, and stored-program-driven. Obviously the strategies of human minds, such as representation of the invariances in the world, redundancy, memories as attractors of a dynamical system, one-shot learning, etc., are suited to “neural” hardware. Problem solving by depth-controlled search, statistical learning from huge caches of representative examples, multi-criteria optimization, constraint satisfaction, stochastic search, etc. might be more suitable for silicon systems.
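To make that contrast a little more concrete, here is a minimal sketch of one of those silicon-friendly strategies, depth-controlled (depth-limited) search. The toy graph, the goal test, and the function name are hypothetical illustrations of mine, not anything taken from Sutton’s post or from a CogSci model.

```python
# A minimal sketch of depth-controlled search: depth-limited DFS over a
# small, hypothetical state graph. The graph, start state, and goal are
# placeholders chosen only to illustrate the "silicon-friendly" style of
# problem solving mentioned above.

def depth_limited_search(state, goal, successors, limit, path=None):
    """Return a path from state to goal if one exists within `limit` moves, else None."""
    if path is None:
        path = [state]
    if state == goal:
        return path
    if limit == 0:
        return None                      # depth budget exhausted
    for next_state in successors(state):
        if next_state in path:           # avoid cycles
            continue
        result = depth_limited_search(next_state, goal, successors,
                                      limit - 1, path + [next_state])
        if result is not None:
            return result
    return None


if __name__ == "__main__":
    # Hypothetical toy graph: nodes are labels, edges are legal moves.
    graph = {
        "A": ["B", "C"],
        "B": ["D"],
        "C": ["D", "E"],
        "D": ["F"],
        "E": ["F"],
        "F": [],
    }
    print(depth_limited_search("A", "F", lambda s: graph[s], limit=3))
    # -> ['A', 'B', 'D', 'F'] (a solution found within the depth budget)
```

The point of the sketch is only that the depth limit is an explicit, externally imposed control knob, which is exactly the kind of strategy that fits a stored-program machine rather than a “neural” substrate.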
What I see is that we are hitting a wall in AI with the human-mind analogy, and this is precisely what is good for Cognitive Science and Cognitive Psychology! None of their theories of cognition has any working notion of what it means for a human being to be “conscious”. Take any theory, for example working memory and the information bottleneck imposed by the limit on how many chunks of information you can hold in your working memory buffer. This theory does NOT need any working definition of consciousness or self-consciousness! Consequently, the same theory would hold for animals and humans alike, modulo the complexity of the human system in terms of the number of neurons and their connectivity. The closest CogPsych has come is to throw “attention” into the scheme of things. Who possesses this attention and who directs it is not yet clear!
So it appears that while CogSci goes back to the drawing board, AI is better left to its own devices! As Rich Sutton suggests, we are better off letting machines learn and discover for themselves than incorporating the strategies by which humans discover knowledge, or inserting human-discovered knowledge into machines. We can extend Rich Sutton’s logic beyond search and learning combined with (ever-increasing) computational resources, that is, self-learning computational behemoths, as the foundations of machine intelligence on the way to human-level capabilities. Remember, machines evolve in human minds, but biological systems evolve in the real world. We need to let machines evolve in the real world as well to see what is perhaps possible for them. It is not clear what pressures forced living beings to develop consciousness, and then conscious animals to develop self-consciousness. This is probably the direction to think in: understand those pressures and impose similar scenarios on machine intelligence so that it develops its own version of consciousness (C) and self-awareness (SA). We should not expect the emerging machine versions of C and SA to be anything human-like, but we do need to characterize them well enough that we do not find ourselves superseded at some point!