Dave: “Open the pod bay doors, HAL.”
HAL: “I’m sorry, Dave. I’m afraid I can’t do that.”
— the computer HAL attempting to kill Dave,
from 2001: A Space Odyssey
Lincoln Stoller, PhD, 2021. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license (CC BY-NC-ND 4.0).
This post is ostensibly about being banned from the blogging platform Medium, but its more interesting implications are about artificial intelligence.
I’ve been peripherally involved in A.I. since I developed algorithms to “intelligently explore” the quantum mechanical states of magnets for my PhD in quantum physics. In order to understand the behavior of these systems you must essentially “poll” the parts of them that are most representative of the whole because the whole itself is too large to examine. This is similar to the way voters are polled in order to get a sense of how the larger population is thinking.
Artificial intelligence is essentially a set of tools for figuring out what’s happening based on a limited amount of information. For example, an artificially intelligent chess-playing algorithm considers a number of probable outcomes a few moves into the future for a variety of possible moves. It doesn’t bother to look at all possible moves and it can’t look all the way to the end-game for every possibility. A.I. systems are inference systems.
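That style of inference can be sketched in a few lines. The following is an illustrative toy, not any real engine’s design: a depth- and breadth-limited minimax over a made-up Nim-like game, where the game rules, the branch limit, and the scoring heuristic are all assumptions chosen for brevity.

```python
import random

def legal_moves(piles):
    # Toy Nim-like game: a move removes 1..n stones from one pile.
    return [(i, k) for i, pile in enumerate(piles)
            for k in range(1, pile + 1)]

def apply_move(piles, move):
    i, k = move
    new = list(piles)
    new[i] -= k
    return new

def evaluate(piles):
    # Heuristic stand-in for searching all the way to the endgame:
    # judge the position without playing it out.
    return sum(piles)

def minimax(piles, depth, maximizing, branch_limit=3):
    """Look only a few moves ahead (depth) and at only a few
    candidate moves per turn (branch_limit), then fall back on
    the heuristic -- inference from limited information."""
    moves = legal_moves(piles)
    if depth == 0 or not moves:
        return evaluate(piles)
    # Sample rather than enumerate every possible move.
    sampled = random.sample(moves, min(branch_limit, len(moves)))
    scores = [minimax(apply_move(piles, m), depth - 1, not maximizing)
              for m in sampled]
    return max(scores) if maximizing else min(scores)
```

The two cut-offs, `depth` and `branch_limit`, are the point: the program commits to a judgment long before it has examined everything, exactly as a poll commits to a prediction from a sample.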
This is similar to censorship algorithms or, in this case, the disbarment algorithms at Medium. I cannot be sure, and Medium does not inform us, but it’s most likely that Medium’s algorithms consider the frequency and combination of various suspicious words along with reader observations.
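I can only guess at the mechanism, but a crude filter of the kind speculated above, combining word frequency with reader reports, might look like this. The word list, weights, and threshold are entirely hypothetical:

```python
# Hypothetical watchlist -- Medium does not publish its criteria.
SUSPICIOUS = {"ivermectin", "hoax", "plandemic"}

def flag_score(text, reader_reports=0):
    """Score a post by the density of watchlisted words,
    weighted by how many readers have reported it."""
    words = [w.strip(".,!?\"'") for w in text.lower().split()]
    hits = sum(1 for w in words if w in SUSPICIOUS)
    return hits / max(len(words), 1) + 0.1 * reader_reports

def should_remove(text, reader_reports=0, threshold=0.05):
    # An inference, not a judgment: no human reads the post.
    return flag_score(text, reader_reports) >= threshold
```

Note that enough reader reports will remove a post containing no suspicious words at all, which is where the reader-training dynamic enters.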
As the authors of suspicious words are systematically excluded, the readers are selected, trained, and encouraged to further enforce the ideological direction that the A.I. system is created to enforce. The readers are themselves being programmed. If the readers and censors are not trying to limit the system’s errors, then the combination of computer direction and human entrainment leads to greater extremes. This is similar to what happens when authoritarian governments enlist members of their populations to spy and inform on one another.
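The runaway dynamic can be shown with a toy feedback model. The numbers are arbitrary; the point is structural: nothing in the loop pushes back.

```python
def report_feedback(rounds=50, reward=0.05):
    """Toy model of reader entrainment: each removal that a
    report triggers makes readers slightly more likely to
    report again, so the rate can only ratchet upward."""
    report_rate = 0.1
    history = [report_rate]
    for _ in range(rounds):
        removals = report_rate              # removals track reports
        report_rate = min(1.0, report_rate + reward * removals)
        history.append(report_rate)
    return history
```

Run over fifty rounds, the report rate climbs monotonically toward saturation. Without an error-limiting term, the loop has no equilibrium short of the extreme.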
[Image: Norman identifies, “Man is shot dead.”]
Medium (https://medium.com/) is a small, private company that provides a blogging platform to readers in exchange for a cut of their subscription fees. It is essentially an open-publication, ad-free e-magazine supported by paid subscribers. It is estimated to have between 80 and 100 million views per month. Authors who are members of its Partner Program are paid a commission based on the number of times their writing is viewed and endorsed by readers.
I uploaded a few of my posts in 2020 and never heard anything from them until this morning when I got this notice:
“We’re writing today to notify you that the Medium account associated with this email address is at risk of being removed from the Partner Program… new eligibility and activity requirement for writers to maintain their enrollment in the Partner Program (states that) if you have fewer than 100 followers you may be removed from the Partner Program.”
After I’d moved my blog to Substack, I’d forgotten about Medium, where I had one follower. It was easy enough to add some content to Medium, so I copied and pasted some of my free blog posts, which I’ve listed and linked at the end of this piece.
Within an hour of uploading these posts, I received another message from Medium:
“Due to the elevated risk of potential harm to persons or public health, Medium’s Trust & Safety team has removed your account under its rules: https://help.medium.com/hc/en-us/articles/360045484653
“Your profile and posts will no longer be publicly available on Medium. Your work will remain accessible to you while signed in, and may be exported at any time by following the instructions here, but will appear as unavailable to others.
“Your Medium membership, if you have one, will be canceled and any remaining funds you may have prepaid will be returned to you.”
The rules to which I was referred (https://help.medium.com/hc/en-us/articles/360045484653-COVID-19-Content-Policy) include a long list of statements about Covid-19 that you cannot make: statements that explicitly or implicitly contradict the general corporate/government narrative. Medium gives itself full discretionary power:
“When Medium’s Trust & Safety team becomes aware of potentially problematic content, we analyze the reported content under those Rules, and where necessary apply other secondary frameworks to come to an enforcement decision.”
In human-speak this means Medium can do whatever it likes.
I have been careful to make my writing factual. I do not give or contradict medical advice. I collect evidence, contradict unsubstantiated statements and propaganda, and report my own research and personal experience.
[Image: Norman identifies, “Man is pulled into dough machine.”]
I had Covid-19 twice: first for three weeks in March of 2020, before it was recognized or testable; then, 18 months later, at the end of November 2021, when I was admitted to the intensive care unit of my local hospital. The hospital gave me oxygen and monitored me as an in-patient for almost two weeks.
I published a book of self-hypnosis meditations for issues of personal and social health relating to the disease and the pandemic. You can download the book, Covid-19, Illness & Illumination, a Hypnotic Exploration, from my dropbox HERE.
Medium is a small and shrinking company (see: The Mess at Medium) that monitors thousands of authors. Their business is handled by computer algorithms, and their “Trust & Safety Team” is a computer algorithm. Medium’s customer support, which is not a computer algorithm, is poor. I lodged a complaint but, as of this writing, have received no response.
Medium’s censorship appears biased. An author named Gideon MK, paid by the Australian government, continues to appear on Medium in spite of his calling attention to the use of ivermectin, a disputed drug in the treatment and prevention of Covid-19. His article “Does Ivermectin Work for Covid-19?” supports the state narrative against the drug’s use. My articles, which support ivermectin’s use, are gone.
The arguments continue in the scientific literature, but even that literature is biased. These biases have been scrutinized and revealed, but their effects are not acknowledged in the public debate. The relative merits of the different conclusions are weighed privately, but science does not work when practiced in private.
Doctors are poorly informed. They don’t have time to follow the research or the training to understand the protocols and statistics. They are not at liberty to make their own decisions or practice medicine without the oversight of nonmedical managers. Doctors have become pawns, and the whole health care system has become a target for social dissatisfaction. During my stay in the hospital I noticed that doctors were unwilling to offer any prognosis, policy disclosures, or opinions.
As a result, health care workers are being harassed, censured, and in some cases have been fired or disbarred. Many are looking for other employment. I had to travel an hour outside my medium-sized city to get a blood test. None of the dozen urban testing centers were available for the next six weeks. As I was told at the lab outside of town, their staff is quitting in droves.
[Image: Norman identifies, “Man shot is dumped from car.”]
As few people are informed or make informed decisions, and the medical system is increasingly run by managers and computer programs, the whole system drifts away from being responsive or intelligent.
Artificial intelligence is a misnomer; it is not another form of intelligence. It is past-looking, data-driven, and inflexible. The one thing that computer systems cannot do is change themselves, and that is exactly what’s needed in a situation whose outcome is unknown.
In a recent, as-yet unpublished article that I was asked to review, titled “Artificial Intelligence: The Perfect Psychopath” (https://www.academia.edu/letters/submissions/vq69L2), Peter L. Nelson says:
“Any ‘artificial intelligence’ created by us is going to carry this human imprint into its future evolution. That is probably why so many people fear the future of robotics. The appetitively driven, cognitive functioning of the human species is truly dangerous—not only to non-humans but to each other.”
We would like to think that a computer-based system could be more objective and more data-driven than a partial, personal, human-based system of judgements. Nelson disagrees saying,
“There is no human generated system that is non-human, therefore it will always reflect the attitudes, functionality and beliefs of its creators.”
It’s worse than that. Systems that adjust to their environment to further exploit their environment will drive their environment out of balance. Like the sheep dogs that direct their attention to the unruly members of the herd, the computer algorithms that reward those who comply and punish those who don’t will polarize the population.
The MIT Media Lab created an artificial intelligence program that behaves psychopathically (Yanardag, 2022). They did this to emphasize that the data used to teach a machine can significantly influence its behavior. Their program, named “Norman,” was programmed to develop associations based on text and images fed to it from the most socially perverse areas of the social media site Reddit.
Norman’s decision rules were unbiased, but the data it was fed was not. As a result, Norman’s “thinking” came to represent what in a human would be called sadistic, violent, paranoid, and perverse. The pictures shown above are Norman’s responses to the Rorschach inkblot tests used to assess psychopathy in humans. This is an example of a responsive, pattern-matching system developing into something negative.
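The Norman result can be mimicked in miniature: a neutral association learner trained only on violent captions can produce only violent associations. The corpus and labels below are invented for illustration:

```python
from collections import Counter

def train(examples):
    """Neutral rule: for each word, remember the label it
    co-occurred with most often. The rule carries no bias;
    whatever bias appears comes from the data."""
    counts = {}
    for text, label in examples:
        for word in text.lower().split():
            counts.setdefault(word, Counter())[label] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

# An all-violent corpus, standing in for Norman's Reddit diet.
biased_corpus = [
    ("man falls to his death", "violent"),
    ("man is pulled into machine", "violent"),
]
model = train(biased_corpus)
```

Every word in the model, including neutral ones like “man,” ends up associated with “violent,” because the data offered no alternative.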
There are many ways an intelligent system can be programmed. It can be exploitative or restorative but, in either case, it must be responsive and its response should be accurate. A program might succeed in its objective and destroy the environment. It might explore a destroyed environment and learn destruction to be normal.
Most corporate and political programs are short-term in nature because that’s both the profit and the management horizon. These programs are usually designed to accelerate change, not to slow it. That’s why the whole climate-change program works at cross purposes: its objective is to speed up the process of slowing down climate change. Unless these contrary objectives are balanced, one or the other will get out of hand.
The presumption is that certain humans, the ones who control the system and not the ones who are controlled by the system, will intervene to adjust the program when it goes too far in one direction or the other.
Who are these independent humans, the ones who are not subject to the system? Are these the people who have no investment in the system and will just follow the plan? How well has that worked in the past?
The idea of systems controlling themselves, whether they be carbon- or silicon-based, lies at the heart of intelligent design. The concept of artificial intelligence has little meaning if there has never been such a system. Healthcare systems that override clinical judgment, media algorithms that reward the status quo, and chess-playing computers are all limited by the goals that humans program into them.
“AI systems function in a direct, instrumental manner to achieve specific goals and, like psychopaths, have no concern for the feelings of those being acted upon, used, or manipulated… Hence, an artificial intelligence system seems to simulate what can be understood to be a perfectly functional psychopath.”
Nelson, P. L. (2022). Artificial Intelligence: The Perfect Psychopath, Academia Letters preprint. https://www.academia.edu/letters/submissions/vq69L2
Yanardag, P., Cebrian, M., Rahwan, I. (2022). Norman, world’s first psychopathic AI. MIT Media Lab. http://norman-ai.mit.edu/
Blog Posts to Medium
School as a Weapon & Covid-19
To succeed, recognize who is not serving your interests.
Behind the lies and manipulations, the dangers are real.
Covid-IQ, Are You Intelligent?
Covid-19 shows that most people are not intelligent.