The ALMA project addresses the profound challenges involved in machine learning of ethics and cultures within world models (more extensive information in ALMA Deliverable D5.5 of Work Package 5).
Background
The concept of world models goes back to at least the 1960s. For example, when considering evolutionary psychology, John Bowlby wrote that: “the achievement of any set goal requires that an animal is equipped so that it is able to perceive certain special parts of the environment and to use that knowledge to build a map of the environment that, whether it be primitive or sophisticated, can predict events relevant to any of its set-goals with a reasonable degree of reliability. To call our knowledge of the environment a map is, however, inadequate because the word conjures up merely a static representation of topography. What an animal requires is something more like a working model of its environment. If an individual is to draw up a plan to achieve a set-goal, not only must he have some sort of working model of his environment, but he must also have some working knowledge of his behavioral skills and potential”[1]. In other words, an animal requires a model of itself in its environment in order to be able to make goal-related plans and predictions within that environment.
Further development of the world models concept took place in the 1970s in relation to psycho-social transitions. In particular, Colin Murray Parkes wrote “the assumptive world is the only world that we know and it includes everything we know or think we know. It includes our interpretation of the past and our expectations of the future, our plans and our prejudices. Any or all of these may need to change as a result of changes in the life space.” He also opined that the assumptive world comprises not only a model of the world as it is but also models of the world as it might be including representations of probable situations, ideal situations or dreaded situations [2]. Parkes went on to propose that there can be three types of change in world models. One type of change is that a world model may be abandoned and cease almost completely to influence behaviour. Another type of change is that a world model may be modified or abandoned incompletely so that it continues to influence behaviour in major or minor ways for long or short periods of time. A third type of change, according to Parkes, is that a world model may be retained in encapsulated form independently of the new model and as an alternative determinant of behaviour [3]. Subsequently, in the 1980s, Parkes wrote about the capacity of the central nervous system to organize the most complex impressions into internal models of the world, which enable us to recognize and understand the world [4].
Thus, from the 1960s to the 1980s, work on world models spanned evolutionary psychology and social psychology, and moved towards neuroscience. In the 1990s and 2000s, work focused on the self in world models. A notable work of the 1990s was Thomas Metzinger’s book Subjekt und Selbstmodell [5], which was followed by his papers in the 2000s in journals such as Progress in Brain Research [6]. In the 2010s, research into models of self in the world sought to describe interactions between agents and their environments in terms of first principles: in particular, the reduction of uncertainty about self survival in changing environments, as measured in terms of entropy [7,8]. This combination of physics and mathematics in the ongoing work of Karl Friston facilitates consideration of world models that encompass both natural and artificial agents [9,10,11].
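The free-energy formulation mentioned above can be illustrated with a minimal numerical sketch. The following toy example is our own illustration, not taken from the cited papers (the two-state model and all variable names are assumptions): it shows how variational free energy upper-bounds surprise (negative log evidence of sensory states), so that minimizing free energy reduces an agent's uncertainty about its situation.

```python
import numpy as np

# Toy generative model with two hidden states (e.g. "safe", "unsafe").
prior = np.array([0.7, 0.3])        # p(s): prior beliefs about hidden states
likelihood = np.array([0.9, 0.2])   # p(o=1 | s): probability of the observation per state

# Model evidence p(o=1) and surprise -log p(o=1)
evidence = np.sum(likelihood * prior)
surprise = -np.log(evidence)

def free_energy(q):
    """Variational free energy for approximate posterior beliefs q over hidden states."""
    joint = likelihood * prior      # p(o=1, s)
    return np.sum(q * (np.log(q) - np.log(joint)))

# The exact posterior minimizes free energy, making the bound on surprise tight.
posterior = likelihood * prior / evidence
assert np.isclose(free_energy(posterior), surprise)

# Any other belief incurs strictly higher free energy:
# F = surprise + KL(q || p(s|o)), and the KL divergence is non-negative.
q_other = np.array([0.5, 0.5])
assert free_energy(q_other) > surprise
```

The design point is simply that an agent which adjusts its beliefs (and, in the full framework, its actions) to minimize free energy thereby keeps its surprise, and hence its long-run entropy over sensory states, low.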
ALMA
Research in Work Package 5 of the ALMA project addresses gaps in previous research by others, with the aim of facilitating machine learning of ethics and cultures within world models. An analysis of interrelationships between personal world models, organizational world models, and machine learning world models is provided in Fox, S. (2022) Human-artificial intelligence systems: How human survival first principles influence machine learning world models. Systems, 10(6), 260. In Fox, S. (2021) Psychomotor predictive processing. Entropy, 23(7), 806, the ecological perspective of self survival in environments is applied with a focus on the changing self. In Fox, S. (2022) Behavioral ethics ecologies of human-artificial intelligence systems. Behavioral Sciences, 12(4), 103, the same perspective is applied with a focus on changing environments. The relationship between these two perspectives can be summarized in the following simple practical example.
Consider, for example, a human truck driver who is paid nothing unless deliveries are made on time. From the perspective of the changing self, truck drivers’ long hours of continuous sitting and high repetition of a narrow range of movements are far removed from the hunter-gatherer lifestyles in which human psychomotor functioning evolved. Furthermore, human psychomotor functioning evolved for occasional stress, such as encountering a wild animal. It did not evolve for long periods of stress that cannot be alleviated by fight, flight, or freeze responses, such as repeatedly encountering traffic jams that reduce the time available to make a delivery on schedule and so be paid the money necessary to survive.
From the normative ethical perspective of what should be done ideally, it is the choice of society that there should be road speed limits that vary in accordance with population density and other factors that can increase potential for grave traffic accidents. This normative perspective is operationalized through environmental changes such as the introduction of traffic lights, speed cameras, and in-vehicle artificial intelligence that monitors the actions of truck drivers including their eye movements.
However, from the perspective of behavioral ethics, which entails situation-specific interactions between moral motivation and ethical temptation, truck drivers who are not paid unless they make deliveries on time can experience weakening moral motivation to obey traffic regulations and increasing ethical temptation to drive above speed limits and through traffic signals as they turn red. This can happen when the time available to make deliveries is reduced by road works, and as self-control decreases during long periods of stress marked by repeatedly having to slow down or stop while the time remaining to reach the delivery destination dwindles.
This simple example illustrates that machine learning of ethics and cultures within world models could be relatively straightforward for culture-bound social choices of normative ethics, such as vehicle speed limits in relation to population density. This could be sufficient for the realization of normative ethics if artificial intelligence could undertake fully autonomous driving in all situations. However, human truck drivers are needed to take care of the so-called first mile and last mile of deliveries, where there is too much task variation for AI to deal with. Accordingly, machine learning has to encompass human behavioral ethics. Even in the preceding simple example, this is very challenging, because interactions between moral motivation and ethical temptation are mediated by the dynamic influence of psychomotor functioning on internal models of self in the world.
Overall, it is important to recognize that people are psychomotor beings rather than sensorimotor machines. Hence, while it may already be quite practical to map human motor functioning with sensors, it is a far more complex problem to identify generalizable interrelationships between human motor functioning and varying human psychological states in changing environments. The concept of world models provides a structure for addressing this complex problem. An important issue in the ongoing scientific debate about world models is how to formalize interfaces between the self (a.k.a. internal state) and the environment (a.k.a. external state). In this debate, the term blanket is used to describe such interfaces. ALMA has contributed to the debate with research reported in Fox, S. (2022) Practical implications from distinguishing between Pearl blankets and Friston blankets. Behavioral and Brain Sciences, 45, E194. This paper explains that, while much remains to be researched about the nature of interfaces between internal states (self) and external states (environment), it is nonetheless possible to take some practical actions based on what is already known. Future work in ALMA’s Work Package 5 will continue with this science-to-practice perspective. This is important for ALMA because it is an EU Future and Emerging Technologies (FET) project, and FET projects carry out multidisciplinary frontier research in order to develop proofs of concept that have the potential to be converted into commercial/industrial practice.
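The Pearl-blanket notion referred to above has a precise graph-theoretic definition: the Markov blanket of a node in a Bayesian network consists of its parents, its children, and its children's other parents, and conditioning on the blanket screens the node off from the rest of the network. A minimal sketch can compute this directly (the example network relating internal, sensory, active, and environmental states is our own hypothetical illustration, not taken from the cited paper):

```python
def markov_blanket(node, parents):
    """Pearl's Markov blanket of `node`: its parents, children, and co-parents.

    `parents` maps each node of a directed acyclic graph to the set of its parents.
    """
    children = {n for n, ps in parents.items() if node in ps}
    co_parents = set().union(*(parents[c] for c in children)) if children else set()
    return (parents.get(node, set()) | children | co_parents) - {node}

# Hypothetical network: internal states couple to the environment
# only via sensory and active states.
dag = {
    "environment": set(),
    "sensory": {"environment"},
    "internal": {"sensory"},
    "active": {"internal"},
    "next_environment": {"environment", "active"},
}

blanket = markov_blanket("internal", dag)
# The internal state is screened off from the environment by its blanket,
# which here comprises the sensory and active states.
```

In this sketch the blanket of the internal state is the set of sensory and active states, which mirrors the interface role that blankets play in the world-models debate discussed above.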
By Stephen Fox from VTT
MORE INFORMATION ABOUT ALMA:
For any questions, please contact the ALMA project team.
References
[1] Bowlby, J. 1969. Attachment and Loss, Vol. 1: Attachment. London, Hogarth Press.
[2] Parkes, C.M. 1971. Psycho-social transitions: A field for study. Social Science and Medicine, 5(2), 101-115.
[3] Parkes, C.M. 1975. What becomes of redundant world models? A contribution to the study of adaptation to change. British Journal of Medical Psychology, 48: 131-137.
[4] Parkes, C.M. 1988. Bereavement as a psychosocial transition: Processes of adaptation to change. Journal of Social Issues, 44(3), 53-65.
[5] Metzinger, T. 1993. Subjekt und Selbstmodell. Paderborn: Schöningh.
[6] Metzinger, T. 2008. Empirical perspectives from the self-model theory of subjectivity: A brief summary with examples. Progress in Brain Research, 168, 215–245.
[7] Friston, K.J. and Frith, C., 2015. A duet for one. Consciousness and Cognition, 36: 390-405.
[8] Bruineberg, J., Rietveld, E., Parr, T., van Maanen, L. and Friston, K.J. 2018. Free-energy minimization in joint agent-environment systems: A niche construction perspective. Journal of Theoretical Biology, 455, 161–178.
[9] Linson, A., Clark, A., Ramamoorthy, S., Friston, K., 2018. The active inference approach to ecological perception: general information dynamics for natural and artificial embodied cognition. Frontiers in Robotics and AI, 5: 21.
[10] Friston, K., Moran, R.J., Nagai, Y., Taniguchi, T., Gomi, H. and Tenenbaum, J. 2021. World model learning and inference. Neural Networks, 144, 573-590.
[11] Ramstead, M.J., Sakthivadivel, D.A., Heins, C., Koudahl, M., Millidge, B., Da Costa, L., Klein, B. and Friston, K.J., 2022. On Bayesian Mechanics: A Physics of and by Beliefs. arXiv preprint arXiv:2205.11543.