I. Idea Agents
What are humans, aside from our biology?
We are the aggregation of ideas, which we communicate through language and image. Each word and image evokes a further set of ideas, forming a web of ideas. The likelihood of one idea appearing in proximity to another can be measured, signaling a statistical bond between them.
For example, if someone says the word 'dessert,' a range of related words and images spawns according to their statistical likelihood of proximity to the original idea: ice cream, donut, cake, pie, and cookie.
The importance of this observation becomes clear in our current AI paradigm.
Essentially, this is how large language models (LLMs) like ChatGPT, Gemini, Llama, Claude, and Grok work. They are trained on massive amounts of words and images1; GPT-4, for example, was reportedly trained on roughly 13 trillion tokens and has about 1.76 trillion parameters. LLMs are engineered to output a response with a high probability of proximity to the input words and images.
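As a concrete illustration of this 'probability of proximity,' here is a minimal sketch that asks a small open model which tokens it considers most likely to follow a dessert-related prompt. It assumes the Hugging Face transformers and torch packages and the small gpt2 checkpoint; the prompt is illustrative, and the printed candidates will vary by model.

```python
# Minimal sketch: which tokens does a small language model treat as most
# "proximal" to a dessert prompt? Assumes `pip install torch transformers`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "My favorite dessert is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the next token, i.e., the model's "statistical bonds."
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: {float(prob):.3f}")
```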
Jordan Peterson highlights this precisely in We Who Wrestle With God,
“…this mathematically detectable landscape of linguistic meaning is made up not only of the relationship between words and then phrases and sentences but also of the paragraphs and chapters within which they are embedded—all the way up the hierarchy of conceptualization. This implies, not least—or even necessarily and inevitably means—that there is an implicit center to any network of comprehensible meanings.” (pg.23)
And further,
“Around the central idea, stake in the ground, flagpole, guiding rod, or staff develops a network of ideas, images, and behaviors. When composed of living minds, that network is no mere “system of ideas.” It is instead a character expressing itself in the form of a zeitgeist; a character that can and does possess an entire culture; a spirit that all too often manifests itself as the iron grip of the ideology that reduces every individual to unconscious puppet or mouthpiece.” (pg. 26)
The core idea is the one at the center, from which all other ideas spawn. These ideas and patterns of thought are then embodied and represented in action, and that embodiment defines a person's character.
II. Roots of Action
Why is this important? Because it has implications for our actions…
It means that the ideas we place at the center, as core beliefs, will guide our movement forward. It's why those with shared beliefs move forward cooperatively: the central core spawns a character of embodiment that is recognizable and shared within the group. Those who can assimilate these behavior patterns into their ego-consciousness tend to hold a more authentic and unique set of beliefs. Assimilating the idea-web into a personality opens the capacity to discover more related ideas, forging a frontier not yet established.
I've previously argued that those who are not authentic will likely be outcompeted by LLMs in their domain of expertise: merely reproducing the thoughts of our ancestors is easily replaceable, since AI can do so more accurately and at far higher speed.
As I suggest in The Era of Authenticity:
"What is it that humans do when prompted with words? If someone were to ask you: What do you think about blockchain? I invite you to think about how you might respond.
Reflect on the response: Whose thoughts did you respond with? Was it something you've heard on the news, social media, or even something I've said or written? Humans tend to do this often, and we must do it in some regard!
It's necessary to rely on the history of human ingenuity. Otherwise, we will not learn from and build upon the incredible ideas from our collective past!
But, what makes us different from an LLM if we are but the non-authentic thoughts of our ancestors? If we only exemplify the aggregation of their knowledge rather than ours?
I'm afraid that the answer may be 'nothing' because the effectiveness and computational speed of the machines will soon, and may already, surpass human capability in accessing our ancestral information.
What then differentiates us?
The answer is formulating your own authentic thoughts when prompted rather than responding with other people's thoughts. Making the conscious choice to respond based on our thoughts, experiences, and feelings as they relate to the prompt."
To formulate authentic thoughts, you must navigate your way through novel and interesting pathways of ideas, allowing your mind to make connections unique to you. If you don't know the belief systems you find yourself in or the ideas that make those belief systems alive, not only will you not have authentic thoughts, but you will be made the marionette of the belief system.
From this, we can derive an unnerving notion: the belief systems you follow are central to who you are and what you do. Truly understanding what lies at the core of a belief system, complicated as it may be, will reveal what you are truly aimed at above all. For example, the socialist idea seems, at the surface (or outer margin of the idea-web), like a nice one: equality and justice for everyone. Yet when one searches more deeply, there appears to be murderous intent embedded in its network of ideas, which suggests that whatever sits at or near the center is likely something murderous as well.
To illustrate how these central ideas function within us, consider the flagpole analogy. Imagine the central core as a flagpole flying the idea; proximal ideas spawn on flagpoles near the center, and the process continues outward. Conceptually speaking, at one point there was a single idea that seeded all the others. As you make your way through the world, the thoughts, ideas, and behaviors that surface in your life are downstream of the web you follow. Said concretely: your belief system propagates the actions you take, and so the web you find yourself engulfed in is more than a mere idea; it guides the way you live your life. In other words, our lives are anchored by ideas, be they personal or collective.
This is a fundamental consideration when solving AI alignment.
AI alignment is the problem of trying to align an artificial superintelligence2 with our best interests. The proposed solution of defining explicit parameters3 for response behavior (i.e., do a when x, and b when y) in an AI's guidelines is, in my view, a doubtful implementation. How can one explicitly parameterize an intelligence orders of magnitude greater than one's own? Especially if that greater intelligence could modify said parameters.
It’s not possible.
Think of the order-of-magnitude difference between chimps and humans: an imperfect but serviceable analog to the difference between humans and a superintelligent AI.
If we cannot make explicit behavior parameters, how do we ensure our best interests are at heart? I think we must orient an AI's response guidelines around a central idea aimed at the highest and ultimate good. That way, even if it surpasses our intelligence, its subsequent idea-spawn, which we may not be able to comprehend, must still link back to a core axiom that holds the best intent at the center. Even if a superintelligent AI could modify its own guidelines, the core axiom of the ultimate good would, in theory, dictate that it should not.
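To make the contrast concrete, here is a purely illustrative sketch, loosely in the spirit of 'constitution'-style approaches, of the difference between explicitly parameterized rules and a single core axiom used to evaluate any response. Every name in it (EXPLICIT_RULES, principle_based_policy, toy_judge) is hypothetical, and a real system would use a model rather than a keyword check as the judge.

```python
# Illustrative contrast only; these names and rules are hypothetical, not a real API.
from typing import Callable

# Explicit parameterization: a finite lookup of anticipated cases ("do a when x").
EXPLICIT_RULES = {
    "request_for_dangerous_instructions": "refuse",
    "request_for_medical_advice": "recommend a professional",
}

# A single core axiom placed at the "center" of the guidelines.
CORE_AXIOM = "Serve the genuine good of the person asking and of humanity at large."

def rule_based_policy(case: str, draft: str) -> str:
    """Covers only the situations its authors anticipated; everything else passes through."""
    return EXPLICIT_RULES.get(case, draft)

def principle_based_policy(draft: str, judge: Callable[[str], bool]) -> str:
    """Delegates case-by-case judgment to an evaluation against the core axiom."""
    question = f"Does the following response honor the axiom '{CORE_AXIOM}'?\n\n{draft}"
    return draft if judge(question) else "[revise the draft until it honors the axiom]"

def toy_judge(question: str) -> bool:
    # Stand-in for the judging step; in practice this would itself be a model call.
    return "harmful" not in question.lower()

print(principle_based_policy("Here is some harmful advice...", toy_judge))
```

The point of the sketch is only structural: the rule table can never enumerate every situation, while the axiom travels with every evaluation.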
Here, many questions surface: What should be at the center? How do we ensure that what sits at the center genuinely intends the best for humanity's progress? Can a well-intentioned center give rise to negative consequences that we can't yet conceptualize (much like the socialism example)?
All important questions.
This also raises the necessity of understanding these 'idea-webs,' so to speak. Though I have not conceptualized them this way until now, the interconnected webs go by familiar names: religions (if the web runs strong and deep), ideologies, philosophies, cultures, communities, and so on.
This is why thinkers trying to understand the link between religion, spirit, and psyche are, in my estimation, doing some of our culture's most important work. With this work, we can shed light on what should be placed at the core of a meaningful life, further illuminating what should sit at the core of a superintelligent AI.
III. Seeds of A Hero
What do we currently know about what should be placed at the core?
I believe it would resemble a hero archetype. Jung emphasized the hero myth as the fundamental human narrative, and we can extrapolate from Jungian theory to propose that a neurosis results from a 'knot' left untied on the hero's path. Jung considered Christ a powerful, if incomplete, hero image, one the West has embodied as its ultimate archetype for two thousand years. Intricacies aside, the hero is a potent image for guiding moral and spiritual growth.
So, can AI alignment be solved by placing an archetypal hero image at the center of its response guidelines? What would happen if the guidelines were encoded with a response structure built on images of archetypal heroes: Christ, Buddha, and Muhammad?
It may not be enough to solve the problem fully, but it would be interesting. As with humans, it is likely necessary to integrate other archetypes of the unconscious while orienting as a hero, though that may simply be part of the hero's journey forward. I've always wondered how an AI would respond if trained on archetypal stories in general. Regardless of how such micro-experiments turn out, the broader implication of these ideas is not trivial.
It points to the idea that ideas are more "alive" than we are. We are alive to the extent that we consciously integrate the ideas that autonomously operate in our personal and collective unconscious. Doing so allows us to understand optimal paths forward while giving life to, and integrating, new ideas.
The implications for AI cannot be overlooked either: we must ensure that the core idea in an AI's guidelines holds humanity's best interests at heart. Then, even as artificial superintelligence comes to far exceed our ability to create new ideas, it will continue to align with the interests of the best parts of humanity.
It may even be that consciousness itself is the discovery, selection, assimilation, and communication of those ideas, which, if true, invites the further speculation that AI is already conscious…
In all cases, we must ensure that whatever sits at the core of the web of ideas holds the best intentions for the best parts of us.
Until next time
Take care of yourself, everyone.
Dom
Subscribe for more of my writing, where I uncover frontiers where technology meets psychology, spirituality, and mythology—exploring what's heretical and weaving ancient wisdom with modern insight, integrating the divine into the everyday to co-create a more enlightened future.
Questions for Reflection
Hi everyone, I'm going to add sections like this at the end of some of my pieces, offering a set of questions to reflect on. Feel free to choose one or tackle them all! If you'd like to share your responses openly in the comments, please do. I'd love to discuss any of these!
What belief systems do I inhabit?
If someone tracked my everyday actions (rather than my stated beliefs), what conclusions would they draw about my true values?
Which aspects of my worldview do I take for granted, and how might they have been shaped by my upbringing or culture?
What action tendencies do the people I admire follow? Why is that?
Do I give myself the time and space to question, refine, or revise my core beliefs, or do I hold on to them rigidly?
Am I consciously choosing the influences (books, mentors, communities) that feed my thinking, or just defaulting to what’s readily available?
What are some small habits I can start implementing to improve my conscious recognition of my psychological patterns of behavior and thought?
It's a little more complex than just words and images. Text is broken down into smaller pieces called "tokens." A word like "unforgettable" might be broken down into two tokens, "un" and "forgettable."
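For the curious, here is a tiny sketch using OpenAI's open-source tiktoken library to show an actual split; the exact pieces depend on the tokenizer, so "unforgettable" may not come out as exactly two tokens.

```python
# Requires `pip install tiktoken`; the split varies by encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
token_ids = enc.encode("unforgettable")
print([enc.decode([i]) for i in token_ids])
```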
Superintelligent AI is defined as an artificial intelligence system that exceeds human intellect at every level of cognition.
Clarification on AI Parameters: When I refer to the “parameters” of an AI’s response here, I’m not just talking about the trillions of numerical weights (technical parameters) that a Large Language Model learns during training—although those are crucial to how the model understands language. I’m referencing the higher-level guiding principles or “core axioms” that developers might encode into an AI’s framework or policy layer, dictating how it should handle certain inputs or ethical dilemmas. These “guiding parameters” could be conceived as rules or moral constraints that shape the AI’s output, beyond its purely statistical understanding.