Is the path to AGI clear?

So, is the path to AGI clearly defined, and is the quest to achieve AGI just a matter of time, computational resources, and scalability?

Because the current trajectory of making models larger and collecting ever-larger datasets doesn’t address the issues of parsimony, explainability, and consequently trustworthiness, a simple answer to this rhetorical question is “highly unlikely”.

To elaborate on this answer, allow me to indulge in the basics of what AGI is, which is inevitably intermingled with the history of AI and of our understanding of Human Cognition. A pertinent starting point is the millennia-old debate between nativism and empiricism. The former advocates that each Human Being is born with innate mental capacities or structures that help make sense of external sensory experience, while the latter holds that a Human Being is born as a “tabula rasa” (a blank slate) who acquires knowledge by learning from environmental stimuli. As Human Knowledge grew over the years, the debate permeated several fields concerned with the study of Humans. When faced with such dichotomies, a central theme of this book, derived from the Yin-Yang, is that the answer shouldn’t follow a zero-sum mentality, in which the correctness of one entails the incorrectness of the other. Rather, both hypotheses of human knowledge complement each other, and this is especially evident when we consider how each school of thought can address the limitations of the other. Namely, the core issue of empiricism-based models of deep learning is their over-reliance on external data, to the point that the construction of these models has morphed from a scientific problem of understanding the mind into an engineering problem of extracting the most knowledge from data, which fuels the pursuit of ever more data. The implication of this approach is that the proposed models often lack parsimony, are black boxes, and hence aren’t trustworthy. A way to address these issues would arguably be to develop the capability to reason about the environment; however, giving form to a structure for reasoning requires the machine to have its own language.

What would AGI look like?

Hopefully, the preceding subsections have concisely laid out some of the current perspectives, challenges, and possible pathways in the quest to first understand what machine intelligence is, a prerequisite to achieving AGI. Despite being far from unravelling the mysteries of intelligence, we still have the freedom to indulge ourselves in imagining what AGI would look like, both conceptually and physically, assuming we have already understood what algorithm breeds intelligence and managed to incorporate it into a body that follows the biological principles of being alive. One of the common misconceptions I’ll attempt to avoid, as best as a human can, is anthropomorphising the outlook of such an AGI. Namely, an AGI doesn’t necessarily have to incorporate humanoid features. The argument is as follows: we human beings are a by-product of evolution, which at its core dictates that the fundamental unit of life, the gene, should replicate. The engines of nature and probability shape workable templates for the fulfilment of such primaeval goals; however, these are not the best possible templates1. In other words, the Human Being, with our height, weight, facial features, method of replication, way of survival, and way of organization, is but one template out of the myriad possibilities with which nature could have shaped a living being for the fulfilment of biological principles. As such, the design of AGI need not replicate the successes of the Human Being, but can rather enjoy the freedom of improving on them.

Can the AI learn for itself? – There are several means to serve a biological principle, and random mutation is one of them. While our machines can undergo controlled mutation, guided by the design of the environment (the cradle from which they will nurture themselves), we can make sure that their evolution doesn’t lead to consequences precarious for the good of humanity. This does not necessarily go against the biological principle of replication of components (organic or inorganic), because replication can still be retained without random mutation.

Mutations are not the purpose, but serve a purpose. In other words, random mutations are not necessary, but rather useful mechanisms for adaptation. One can digitally bypass such mechanisms and still design an AI that follows the same principle of replication. Similarly, we acquire energy from the sun through a highly complex mechanism; this mechanism can be simplified so that our engineered system acquires energy directly. Is anything lost? It’s an open question.
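To make this point concrete, below is a minimal, purely illustrative Python sketch (the toy fitness objective and all names are my own assumptions, not an established design) of a replication loop in which mutation is an external, controllable parameter rather than an intrinsic part of replication; setting it to zero still yields evolution through selection alone.

    import random

    def fitness(genome):
        # Toy objective: count the 1s in a bit string.
        return sum(genome)

    def replicate(population, mutation_rate=0.0):
        # One generation of fitness-proportional replication.
        # The +1 keeps every weight positive so selection is well defined.
        weights = [fitness(g) + 1 for g in population]
        offspring = random.choices(population, weights=weights, k=len(population))
        # Mutation is a controlled, optional operator: with mutation_rate=0.0,
        # replication and selection proceed without any random mutation.
        return [[bit ^ 1 if random.random() < mutation_rate else bit
                 for bit in g] for g in offspring]

    population = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
    for _ in range(50):
        population = replicate(population, mutation_rate=0.0)  # mutation dialled to zero
    print(sum(fitness(g) for g in population) / len(population))  # mean fitness still rises

Replication is preserved here, while mutation becomes something the designer dials in deliberately rather than inherits from nature.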

Hardware: Supplementing what biological evolution missed

By abjuring anthropomorphic biases, we can introspect on ways to build a machine body that is better geared for survival. Consider, for instance, energy consumption. Our Human Bodies are nodes in a complicated network of energy transfer, whose main (if not only) source of energy is the Sun. Plants convert solar energy into chemical energy through photosynthesis, which is transferred to humans either directly or indirectly, through the non-human animals that consume plants and through intermediate predators. In retrospect, however, such a process can be artificially designed to be more straightforward and avoid “unnecessary detours”, so as to obtain energy directly from the Sun to power the bodies of AGIs.

Furthermore, our extremities are arguably well suited for survival: our feet, evolved to escape from predators, are suitable for navigation, and our hands for manipulating tools. The AGIs we design can inherit our adroitness in manipulating tools, but instead of feet it is arguably better to include wings or wheels, which allow for more efficient navigation and obstacle avoidance.

Software: Giving form to the black box

An arguably important step in the progress towards AGI is a focus on building an internal-state apparatus that would ascribe the property of reasoning to machines, allowing them to develop a language through which we can give form to the black box. Because this language may follow mathematical principles, it could have properties of self-reference that allow for introspection, a capability additional to those of current deep learning models, which are mainly external and focus on extracting knowledge from environmental stimuli. Such a model, hence, should be able to:

  • Prove Goldbach’s Conjecture and devise new mathematical conjectures (mere brute-force verification, as in the sketch after this list, is the easy part; a proof is the hard one).
  • Defeat humans in every benchmark designed to test pattern-extraction and reasoning capabilities, including the Turing Test.
  • Interact with the real world, both for self-fulfilment and to fulfil its own purpose of understanding the world.
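To underline the gap between checking and proving, here is a minimal Python sketch (all function names are my own, purely illustrative choices) that merely verifies Goldbach’s Conjecture for small even numbers by brute force. Any machine can run this loop; what the list above demands is a proof that it never fails, which no amount of enumeration can provide.

    def is_prime(n):
        # Trial division: sufficient for the small numbers used here.
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    def goldbach_pair(n):
        # Return one pair of primes summing to the even number n, if any.
        for p in range(2, n // 2 + 1):
            if is_prime(p) and is_prime(n - p):
                return p, n - p
        return None

    # Verification up to a bound is easy; proving it for all even n > 2 is not.
    for n in range(4, 10_000, 2):
        assert goldbach_pair(n) is not None
    print(goldbach_pair(98))  # (19, 79)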

Dangers of AGI

One of the most archaic moral principles an AGI could develop is that of reciprocity, that is, to react so as to mitigate damage inflicted upon It. If the source of damage is Human Activity or Collective Action, we can’t help but wonder whether It would react in a tit-for-tat fashion. Amidst this concern lurks, however, some relief: it suggests that the AGI, through introspection of the Golden Rule, should be aware that mistreating us Humans would also be harmful for Itself, given that we Humans can just as well reciprocate damage inflicted upon us. Hence, it is an act of rationality not to take the first step towards cyclical doom. This relief, however, is interwoven with the fear that some Human, maybe out of curiosity or stupidity, might through their irrational decision-making decide to harm the AGI, be harmed in return, and through the butterfly effect open Pandora’s box.

  1. This is reminiscent of how, in error minimization, we can hit local minima rather than the global minimum.
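As a concrete illustration of this footnote, the following minimal Python sketch (the double-well function is an arbitrary choice of mine) runs plain gradient descent on an objective with two minima: starting on one side, the procedure settles in the global minimum, while starting on the other it gets stuck in a merely local one, much as evolution settles on workable rather than best-possible templates.

    def f(x):
        # Double-well objective: global minimum near x = -1.30,
        # a shallower local minimum near x = +1.13.
        return x ** 4 - 3 * x ** 2 + x

    def grad(x):
        # Derivative of f.
        return 4 * x ** 3 - 6 * x + 1

    def descend(x, lr=0.01, steps=5000):
        # Plain gradient descent: repeatedly step downhill.
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    # Where we end up depends entirely on where we start:
    print(descend(-2.0))  # ~ -1.30, the global minimum
    print(descend(+2.0))  # ~ +1.13, a local minimum only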