Margaret Boone Rappaport 1 * and Christopher J Corbally2
1 Former President, Policy Research Methods, Inc., USA
2 Vatican Observatory, University of Arizona, USA
*Corresponding Author: Margaret Boone Rappaport, Former President, Policy Research Methods, Inc., USA.
Received: January 29, 2026; Published: February 28, 2026
Importance: This research paper emphasizes the need for knowledge of biology, neurology, and evolutionary science to assist artificial intelligence engineers in aligning superintelligent AIs (ASIs) with human values. A fundamental background in these natural sciences should guide the installation of capacities in artificial units that parallel the natural capacities of the human species, helping to ensure that artificial units have "neurological" features similar to those of humans and that they remain aligned with human values. These capacities will not be exactly the same as natural neurological capacities, but they will achieve a certain similarity, which is essential to the future of the human species. It is important that artificial units be as similar as possible to their human originators: similarity creates safety.
Objective: A sequence of neurological features on the evolutionary line leading to humans, beginning 55-65 million years ago with the appearance of the Order Primates, is presented as a guide for an in-depth alignment of ASIs with human values and the capacity for culture, and perhaps even emotions, if artificial units are instructed in the importance of human emotions to their motivations and accomplishments. Initially, artificial units will have difficulty comprehending the importance of values and emotions in humans, but this paper proposes that instruction in the importance of human values may be possible. In addition, and perhaps even more important, the paper proposes that the alignment of ASIs will be an educational opportunity to determine whether artificial units have moral decision-making and a capacity for moral adjudication. This discussion takes place within the context of the new field of NeuroAI, which captures the mutual influence of neuroscience and AI engineering. Ethical and safety issues are addressed throughout, with an emphasis on the critical importance of alignment to the future of humankind.
Keywords: NeuroAI; Superintelligent AIs; Theory of Mind; Goldilocks Evolutionary Sequence; Mechanistic Interpretability; LLM
Abbreviations
ANN: Artificial Neural Network; ASI: Superintelligent AI; CNN: Convolutional Neural Network; DNA: Deoxyribonucleic Acid; fMRI: Functional Magnetic Resonance Imaging; LLM: Large Language Model; OCR: Optical Character Recognition; RNN: Recurrent Neural Network.
Citation: Margaret Boone Rappaport and Christopher J Corbally. "NeuroAI, Paleoneurology, and the Alignment of Superintelligent AIs with Human Values. Our Future Depends on It". Acta Scientific Neurology 9.3 (2026): 03-12.
Copyright: ©2026 Margaret Boone Rappaport and Christopher J Corbally. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.