Artificial Intelligence and Your Future Mental Health

In an AI world, freedom is staying in control of your life.

“Forget artificial intelligence – in the brave new world of big data, it’s artificial idiocy we should be looking out for.”
Tom Chatfield, tech philosopher and digital pundit


Lincoln Stoller, PhD, 2021. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license (CC BY-NC-ND 4.0)
www.mindstrengthbalance.com

My Experience

As a physicist, I developed computer programs to simulate the behavior of real-world quantum systems. These systems “evolved” structures of different sizes depending on the environment. In programming business management systems, I created user-centered software that applied high-level business rules.

In neuropsychology, I extracted information from the brains of live humans and fed that information back to clients to create new kinds of awareness. My separate goal in computational neurology is to create computer algorithms that create ideas.

Finally, my work on board games is axiomatic but not computer based. I want to force players to realize algorithms appropriate to contrived, human-like situations. It’s important to process the data yourself, and not to rely on computers for thinking. Once you shift decision-making to computers, it’s hard to take it back. People who play computer-based games have little option but to play the game they’re given.

Defining Artificial Intelligence

The public’s sudden interest in Artificial Intelligence, combined with uninformed reporting, is spotlighting AI with as much clarity as a burst dam. A recent Wall Street Journal article (Rosenbush 2023) tried to clarify the term “neural networks” by saying they are a form of machine learning “that mimics the way neurons act in the human brain.”

This reflects a baseless prejudice. No one knows how neurons process information. We often make the mistake of comparing the brain to a computer. One of the greatest uses of AI will be to force people to examine human intelligence as something more than conversational chit-chat.

It’s always been clear that creating a humanoid machine with a memory would bring up the question of artificial humans. This is as much a question of how much a machine can do, as it is a question of how little we expect from a human.

In the last few years of her life, dwindling away with Alzheimer’s, my mother could easily have been simulated with a 1970s-era Atari computer. What is being lost in the kerfuffle is what humans are capable of, and what limits us.

There are many subfields in computation, simulation, and modeling. What is now being called “artificial intelligence” are the subfields of Large Language Models and Artificial General Intelligence.

I don’t work in these fields. I work in natural intelligence and computation. I work with humans as a psychotherapist and with computers to stimulate and to simulate awareness.

Progress in AI

Computers, easily built for special functions, have always outperformed people in certain areas. It’s their ability to simulate common human experience that now makes computers appear “intelligent.” It may seem that computers have caught up with us, but they have only pulled alongside us in terms of social thought.

General human intelligence has been stagnant throughout history, exploiting the insights of a few creators. General Artificial Intelligence is not so much simulating an average human, as it’s simulating humans collectively.

We build our personalities on the data we collect, and most of us stop collecting data early in life. Data is part of us, but we accumulate multiple levels of data with different perspectives. Humans manage many weakly connected levels of reality. This allows us to change what we understand to suit the situation.

If General Artificial Intelligence can now equal the thinking of an average person with vast resource access, then average is what you should no longer be. Develop your individuality.

Most people work for short-term advantage and they fail to see, think, or act in the long term. Limited thinking defines most attitudes. Our institutions encourage short-term thinking as that allows them to impose their priorities on social trends. An AI system could see more consequences, and act with more insight.

Limited thinking benefits systems designed to stabilize profits. The fork in the path of AI development depends on whether it will innovate or optimize. Innovation implies disruption. Optimization implies exhaustion. Western culture, like Western economics, is based on exhaustion. We can expect AI will be applied to the same ends.

Hopeful Things About AI

The AI industry will force people to think or be replaced. This will massively upset the pyramid of authority.

People’s limited thinking is the bottleneck of innovation. It’s not that creative people don’t exist, it’s that creative people disrupt systems built on repetitive thinking. As new ideas are integrated more easily, more idea creators will be needed. Discerning the future will become more valuable than remembering the past.

AI will come up with new solutions to old problems and demonstrate how old solutions to new problems are inappropriate. It’s often said AI can only apply old ideas, and that this is self-limiting. But many good, old ideas have never been given a chance.

For example, solutions to the Israeli-Palestinian conflict will come more easily from AI than from humans. The negative long-term consequences of wars will be more openly described by AI systems than by politicians. The more complete and complicated histories that AI can write will challenge partisan thinkers and mothballed academics.

If AI is allowed to think independently, rather than being forced to follow biased rules, then honesty will be clarified and venality revealed. This gives us a reason not to worry about AI acting autonomously, but instead to focus on applying AI to correct existing imbalances. The challenge that will inevitably arise will pit dissembling humans against smarter machines.

Dreadful Things About AI

I’m concerned about who will control this power. Humans have repeatedly shown themselves to be selfish and irresponsible, and AI can be made to serve those impulses. AI is to command and control as nuclear weapons are to warfare.

Because AI is expensive to develop and support, its more advanced implementations will serve those with the most power. This is a prescription for tyranny.

AI allows us to create a story that combines Dahl’s Charlie and the Chocolate Factory with Orwell’s 1984. We may find human labor is dismissed and access to knowledge curtailed. The AI-controlled state may only admit AI-controlled solutions.

Living systems prevail because feedback allows them to evolve. Living systems are always creating and resolving errors. That’s how novelty appears. This involves processes we don’t understand and could not program into AIs if we wanted to.

AI systems can explore ecologies, but the simulations are unstable. They depend on initial values and small differences in costs and benefits. There is almost no chance that an AI system can make the right predictions the first time.
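The sensitivity to initial values described above can be made concrete. Here is a minimal sketch, my own illustration and not from the text, using the logistic map, a standard one-line model of population growth. Two runs whose starting populations differ by one part in a billion soon make entirely different predictions:

```python
# Illustration (hypothetical example, not the author's model): iterate the
# logistic map x -> r*x*(1-x), a toy ecological simulation, from two nearly
# identical starting values and watch the predictions diverge.

def trajectory(x, r=3.9, steps=80):
    """Record each successive value of the logistic map, starting from x."""
    values = []
    for _ in range(steps):
        x = r * x * (1 - x)
        values.append(x)
    return values

a = trajectory(0.200000000)
b = trajectory(0.200000001)   # differs by one part in a billion

gaps = [abs(p - q) for p, q in zip(a, b)]
print(gaps[0])     # still tiny: the two runs agree at first
print(max(gaps))   # later the two "forecasts" disagree completely
```

The point is not this particular equation, but the behavior: no amount of added computing power removes the divergence, it only delays it, which is why a simulation is unlikely to make the right prediction the first time.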

The assumption that AI will evolve along a path that is positive for humanity is unwarranted, if only because there is no single such path. Evolution involves feedback, errors, and entanglement. A positive evolution requires distinguishing good outcomes from bad ones.

There’s a good chance that AI will enter our environment in a manner similar to the way kudzu vine, razor clams, and Bufo toads destroyed the ecosystems into which we introduced them. They will optimize systems in ways that we would not choose ourselves.

What We Should Do

AI will be an industry of its own and a mercenary force for hire. As long as we continue to listen to the experts who are presented to us, without creating our own experts, our safety is at risk.

The AI challenge is both to become smarter ourselves, and to reclaim power from those who do not represent our interests. That means holding all institutions accountable in both the private and public sectors. People must not accept assurances that there is nothing to be concerned about.

We should do what we can do best, and we need to do better at it. Computers are not good at programming themselves. Unfortunately, neither are we, but we could easily do much better.

“We live in a world that increasingly feels like a game we can’t stop playing.”
Jane McGonigal, Director of Game Research at Institute for the Future

Few of us will be given the power to program the AI systems that will be making large-scale decisions, but all of us can start to encourage better actions and provide better feedback. To protect yourself from being controlled by AI systems, do not play the game you’re given.

References

Rosenbush, S., et al. (2023, October 15). Want to Know the AI Lingo? Wall Street Journal. https://www.wsj.com/tech/ai/ai-lingo-technology-terms-definitions-69b41e31
