How to Design and Build an Artificially-Intelligent Brain…(But Why?)

Introduction: Hints From Cognitive Load Theory and Evolutionary Psychology

Cognitive Load Theory attempts to understand how human working memory processes and stores information entering the brain through the sensory channels (Sweller, 1988). Research has shown that human working memory imposes limits on the amount of information that can be stored (Baddeley & Hitch, 1974; Cowan, 2010; Miller, 1956) as well as the length of time it can be stored (Peterson & Peterson, 1959). According to this theory, working memory therefore acts like a filter for sensory data, allowing it to be pre-processed before being passed on to long-term memory for longer-term encoding and storage (Bartlett, 1932).

It was generally assumed that all sensory information is treated in a similar manner by working memory. However, in 2012 a theoretical paper by Paas and Sweller suggested that socially-relevant knowledge might be processed differently by working memory than culturally-relevant knowledge (Paas & Sweller, 2012). These authors were citing research by David Geary, an eminent educational evolutionary psychologist, who had suggested that socially-relevant knowledge is easier for humans to learn than culturally-relevant knowledge (Geary, 2002; Geary, 2007). Geary labeled socially-relevant information “biologically primary” and culturally-relevant information “biologically secondary.” Many of the examples given by Geary involve the common kinds of early learning experienced by all children, including face recognition, walking, and learning a first language. According to Geary, the brain’s response to biologically primary information is a response to an innate need of the human animal to survive. Another way to look at this is that the drive to survive is the forcing function for life, and that this innate desire is the “kernel” from which all subsequent learning takes place. Using this premise as the basis for further discussion, a theoretical view of how ontological models are built from this kernel within the human brain is put forward, and a case is made for why artificially intelligent systems must follow the same path to learning.

An Ontological Model for Learning in the Human Brain

Although it is not known exactly how the brain receives, processes, and stores information, knowing “why” it stores and assigns importance (relative weight) to each piece of information could go a long way toward understanding how an artificially-intelligent reasoning machine might be constructed.

If we start with a kernel that forms the primary rule for our learning ontology, the need to survive, then we can start to understand why the very first actions a baby takes upon exiting the womb are to immediately seek safety and comfort. Crying is designed to alert the mother to the presence of the new being and draw her near; the desire for the mother’s touch, the warmth of her skin, the sound of her voice, and eating are all immediate needs that assure the survival of the new being. Obviously these actions/reactions were not taught to the new brain but were built within it as it developed in the womb. Having experienced the safety and comfort of the womb, the new brain already knows to seek what safety it can find in its new environment outside the womb. Without belaboring the next stages of human child development, it can be seen that an ontological decision tree is being formed within the memory of the newly formed brain, one that starts with a single immutable rule: the being must survive at all costs.

Although the beginning of life depends on nature to supply this basic rule, the environment quickly intercedes, causing different levels of importance to be placed on different aspects of the new information being collected by the child and stored in its brain. The degree to which the primary rule is met or not met in different situations over time places different weightings on the importance of the information that forms the basis of the child’s future actions/reactions to situations.

If we accept this reasoning, then it becomes easy to visualize how a purely ontological model of information storage and weighting in the brain could be formed. If one were able to follow the dendritic path from the primary rule throughout a child’s development to adulthood, then it would be easy to predict what action/response a person would have to any particular situation later in life (Figure 1).

Figure 1: The Spark of Life Acts as a Forcing Function for Growth After Birth, Giving Purpose and Direction to the Building of the Cognitive Ontology

If we understand that this simple dendritic growth is actually taking place at an amazing rate, it is easy to see why the structure quickly starts to look like the formation of ice crystals in the brain, as shown in Figure 2, getting denser and more complex with every passing second.

Figure 2: The stored information in the human brain quickly becomes a very dense dendritic structure

Remember that every sound, touch, vibration, and sight that occurs within the range of any of our senses is recorded by the brain, even if we are not paying attention to them at the moment. The brain still receives and processes those signals to accomplish its primary function, i.e., to continually evaluate the environment around the being for danger. This adds yet another degree of complexity to the storage of information, because pathways from the aural, touch, and visual sensors in the human body all go directly to different parts of the brain. The result is not only a dense dendritic structure but a distributed one as well.

In this model, intelligence and common sense are simply a function of the amount of information collected and the completeness and strength of the interconnections, while personality is forged by the sum total of the weights contained within the structure.
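To make the model concrete, a minimal sketch of one node of such a weighted ontology is given below in Python. All of the names (OntologyNode, survival_weight, strongest_response) are invented for illustration; this is a toy rendering of the idea, not a proposed implementation.

    # Toy sketch of the weighted ontology described above. All names are
    # hypothetical; the structure only illustrates the idea of a decision
    # tree rooted in a single primary rule.

    class OntologyNode:
        """One stored piece of experience, weighted by its bearing on survival."""

        def __init__(self, label, survival_weight=0.0):
            self.label = label                      # the stored information
            self.survival_weight = survival_weight  # importance relative to the primary rule
            self.children = []                      # later experiences branching from this one

        def add_experience(self, label, survival_weight):
            child = OntologyNode(label, survival_weight)
            self.children.append(child)
            return child

        def strongest_response(self):
            """Follow the highest-weighted path down the tree: a crude stand-in
            for predicting a person's response to a situation."""
            node = self
            while node.children:
                node = max(node.children, key=lambda c: c.survival_weight)
            return node.label

    # The kernel: a single immutable primary rule from which all branches grow.
    root = OntologyNode("strive to survive", survival_weight=1.0)
    comfort = root.add_experience("seek warmth and touch", survival_weight=0.9)
    comfort.add_experience("cry to draw the mother near", survival_weight=0.8)
    print(root.strongest_response())  # -> cry to draw the mother near

In this rendering, “intelligence” would scale with the number of nodes and interconnections, while “personality” would be the particular pattern of weights.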

At any point in time a complex situational input state exists that causes a specific response (output state, decision, reaction, etc.) from the artificial brain, making its operation approximate an extremely complex, multi-dimensional, but finite state machine. It is the sheer number of possible output states that suggested to this author that a more meaningful name for this architectural model would be an “infinite state machine.” It is estimated that billions of neural connections fire in response to sensory inputs at any point in time. Even if only a million connections were involved, the total number of possible states would be 2^1,000,000, a number more than 300,000 digits long! If we increase the exponent to billions, it is difficult even to imagine how many digits this number (the number of possible finite states) would have. While this is not an infinite number of finite states, there are certainly enough to justify the moniker.
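As a quick check on that arithmetic (a standard logarithm identity, not anything specific to this model), the number of decimal digits of 2^N is floor(N × log10 2) + 1, with log10 2 ≈ 0.30102999566:

    digits(2^1,000,000)     = floor(1,000,000 × 0.30102999566) + 1 = 301,030
    digits(2^1,000,000,000) = floor(1,000,000,000 × 0.30102999566) + 1 = 301,029,996 (about 301 million digits)

So even the million-connection case is already far beyond any attempt to enumerate the states explicitly.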

It would be nice if these were the only influences on the development of common sense and personality, but unfortunately genetics and chemical balances within the body and brain also give rise to emotions, which add dimensions that modify the weighting functions within the brain in real time. If we neglect those influences for a moment, however, we have a model of how an artificial intelligence could be created.

If we imagine for a moment that we will someday be able to match the storage capacity of a normal brain using artificial means, then this model provides a way to “grow” an artificially intelligent decision-making machine. One would simply provide the primary rule of “strive to survive” and allow the artificial structure to collect experiences and weight them in the same manner. Now you just add some sensory support to collect those inputs (sensors) and some mechanical support to transport the AI brain, and voilà! An artificial being.
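As a toy illustration of what “collecting experiences and weighting them” could look like, here is a sketch in the same spirit as the node example above. The update rule (nudge a weight up when an experience serves survival, down when it does not) is an assumption invented for this example, not an established learning algorithm.

    # Toy sketch of "growing" the structure from experience. The update rule
    # below is an illustrative assumption, not an established algorithm.

    def weight_experience(prior_weight, survival_outcome, learning_rate=0.1):
        """Nudge a stored weight toward how well the experience served the
        primary rule; survival_outcome is +1 (helped survival) or -1 (hurt it)."""
        new_weight = prior_weight + learning_rate * survival_outcome
        return max(0.0, min(1.0, new_weight))  # keep weights in [0, 1]

    # Each new experience re-weights the information already stored.
    memory = {"loud noise means danger": 0.5, "warmth means safety": 0.5}
    experiences = [("loud noise means danger", +1),
                   ("warmth means safety", +1),
                   ("loud noise means danger", +1)]
    for label, outcome in experiences:
        memory[label] = weight_experience(memory[label], outcome)
    print(memory)  # weights drift toward whatever has served survival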

Cognitive Load Theory and the Artificial Brain

Based upon the previous discussion it is easy to see why working memory acts as a filter for the brain. With so many computations taking place each second, the human brain could quickly be overwhelmed. Remember that in addition to the billions and billions of computations each second required to respond to sensory inputs, autonomic processes are constantly at work as well, keeping blood circulation, respiration, nerve responses, and the other functions going that keep the body and brain alive. In addition, recent work by Bevilacqua (Bevilacqua, 2018) suggests that there is yet another level of survival processing constantly taking place within the brain. The brain is constantly receiving and processing billions of bits of information that fall outside of immediate attention. The protective nature of this processing is evidenced by the fact that the brain alerts a human more readily to biological information than non-biological information (Bevilacqua, in work). In addition, Bevilacqua’s work has shown that the level of vigilance of this processing can be reduced by fooling the brain into believing that danger is not present, through the artificial introduction of certain forms of non-biological movement outside of direct attention (Bevilacqua, 2018).

If the human brain finds it necessary to add a “processing governor” that protects the human from processing overload by maintaining total processing at a vigilance level that maximizes the probability of survival, will scientists want to design a similar construct for the artificial brain?
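For what such a governor could look like in code, here is a sketch under loose assumptions. The salience scores, the vigilance threshold, and the capacity limit are all invented for illustration and make no claim about how the brain actually gates its inputs:

    # Illustrative sketch of a working-memory "processing governor": pass only
    # the most survival-relevant inputs on to deeper processing. The scores,
    # threshold, and capacity are invented for illustration.

    def governor(sensory_inputs, vigilance=0.6, capacity=4):
        """Keep at most `capacity` inputs whose salience clears the vigilance
        threshold; everything else stays at background-level processing."""
        salient = [s for s in sensory_inputs if s["salience"] >= vigilance]
        salient.sort(key=lambda s: s["salience"], reverse=True)
        return salient[:capacity]

    inputs = [
        {"signal": "biological motion at the edge of vision", "salience": 0.90},
        {"signal": "steady hum of an air conditioner",        "salience": 0.20},
        {"signal": "sudden loud noise",                       "salience": 0.95},
        {"signal": "page of text in the foreground",          "salience": 0.70},
    ]
    for item in governor(inputs):
        print(item["signal"])  # only high-salience signals reach working memory

Bevilacqua’s observation about non-biological movement would correspond, in this sketch, to something that lowers the effective vigilance parameter in real time.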

At first glance you might think that because the artificial brain is connected to an artificial being, we (and the artificial brain) shouldn’t care whether it survives; after all, it is just a machine. The next section examines the implications of building an artificial brain that is based upon a primary rule other than the “strive to survive” criterion that forms the basis of our theoretical model of the human brain.

Gender Differences and the Artificial Brain

Earlier in this paper, recent work by Bevilacqua (Bevilacqua, xxx) was referenced suggesting that there is a gender dependence in the way the brain handles cognitive load. This research is supported by a newer meta-analysis recently completed by a group of cognitive load theorists (Castro-Alonso, Wong, Adesope, Ayres, & Paas). That meta-analysis went back several years to re-examine 46 previous studies and found that Bevilacqua’s contention was indeed correct.

Another thing to consider when building an artificial brain is gender. It is known that physical differences exist between the brains of males and females. These differences include cranial volume, the percentage of gray and white matter (Gur et al. 1999; Joel et al. 2015; Ruigrok et al. 2013; Solms and Turnbull 2002), the number of neurons contained in the cerebral cortex, anatomical structure, and chemical makeup (Cosgrove et al. 2007; Zaidi 2010). These differences are found across cultures, suggesting that they are evolutionary in nature. Will an artificial brain need a gender identity to adequately protect itself? With research showing that the brains of males and females process information differently, which features will be adopted for the artificial brain? And why will it even be important? The importance lies in the fact that these differences were brought about by evolutionary differences in the roles of males and females in society. As child-bearers, females tended to act as nurturers to children and as food gatherers, another task that could be accomplished while watching children. Males tended to be hunters, needing to be unburdened from children so that they could travel the long distances necessary to hunt. As such, the vigilance processing spoken of previously is performed differently in males and females. The reason that becomes important will become evident after our discussion of why artificial brains will need to be programmed using the same primary rule as that of humans.

Defining the Primary Rule of an Artificial Brain

We began this paper by explaining that in humans the primary rule “strive to survive” forms a kernel upon which all subsequent learning is accomplished. Let’s imagine that we are building an artificially intelligent brain and we decide to define its primary learning rules as Isaac Asimov’s Three Laws of Robotics, i.e.,

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

First and foremost, these rules do not provide a forcing function or REASON to learn. Secondly, without a primary rule there are no secondary rules, and hence they provide no basis for building an ONTOLOGICAL STRUCTURE against which learning can be measured. One can’t collect information in an artificial brain without a direction, or set of primary rules, that guides where and how connections are made within the ontological structure. If the information is just collected and stored, the AI brain would be nothing more than a very expensive database.

These rules could easily be rewritten in a form that would satisfy these two requirements. For example, “A robot (an artificial brain) must always strive to protect humanity.” But what would the outcome be? A small amount of thinking will quickly lead the reader to the conclusion that even if a human didn’t want to be protected, whether because that human didn’t feel threatened or because an action was detrimental to his/her own good, that person would be forced to comply by the stronger, smarter entity. Some additional thinking through the possibilities should quickly bring the reader to the conclusion that only by giving artificial minds the same primary rule that motivates learning in humans can we be confident that AI-based entities will not eventually take over and imprison humanity for its own good. But herein lies the problem. Using the “strive to survive” rule will eventually lead a race of AI-based entities to have to eradicate humans in order to survive themselves. It is the inevitable conclusion of a Gedanken experiment on the subject.

Conclusion

So what are we left with? Either a decision to continue down a path towards imprisonment, or one towards eradication? Knowing that these choices are the inevitable end of our quest to build a real artificial intelligence, why would scientists continue to insist on moving in that direction? Like the Manhattan Project, some may believe that there is a greater good to be obtained from the development of real AI. This author believes that the problem lies in how we implement AI. Since the first human picked up a stick to throw at a squirrel, the human race has been trying to enhance its own reach and abilities in an attempt to outdistance the danger to its “self.” This is still a noble goal, one that does not build a new intelligence but builds a capability that enhances our own. The distinction becomes cloudy at some point, and one wonders how close to the edge of the precipice of extinction man will be willing to go for the sake of satisfying the constant driving need to follow the strive-to-survive criterion. We know we will do it, because man is the only animal that purposely puts himself in danger for fun. This fact alone should convince the reader that man will continue to strive to reach this goal. Now is the time to decide how to best use this new tool we are developing, not after it’s too late and some future generation has to watch from the solitude of a prison cell as our new masters self-replicate while the human race slides into extinction.

Summary

In this paper the author has attempted to lay the groundwork for how to design and build an artificially intelligent brain by defining a data-driven, ontological model of the brain that is based upon a primary forcing criterion of “strive to survive.” The questions of gender differences in the human brain, and of the need for an information-input “governor” to prevent cognitive overload from reducing the effectiveness of the brain-sensor system in protecting the human at all times under most conditions, are discussed. Finally, the author questions whether we should be pursuing this goal at all, explaining that although it may seem counterintuitive, man’s constant survival instinct to “strive to survive” ultimately leads him to walk along the edge of the precipice between safety and danger in a constant quest for a final place of safety among the stars.
