Excerpt: Strategy & Philosophy, part two
We began this chapter with the philosopher Peter Singer and his utilitarian approach to discriminating among individuals and species. Other philosophers use their studies of consciousness in much the same way. They consider species to be more or less near to Homo sapiens in terms of consciousness, and derive conclusions or principles from their observations. They’ve even begun to do it with things they’ve created.
Say you are one of the leaders in artificial intelligence (AI). Maybe you contributed to breakthroughs in machine learning, with its underlying neural network structure meant to mimic the functioning of the human brain. Or to the uncanny follow-on, generative AI software, with its massive large language models (LLMs) trained on just about everything on the internet. Like so many creators before you, you stand back in awe at your creation and dream. Dream big. In this case, of artificial general intelligence, or AGI, whereby the software appears to be sentient. To have consciousness.
But you are basically a software programmer or a mathematician. The strategist in you, on entering this new territory, should turn to philosophers for guidance.
If the philosopher is a materialist, then naturally they will be predisposed to believing that consciousness can arise from human activity. If, on the other hand, they are a dualist, then maybe not.
“When thinking about the mind, past philosophers have tended either to accept some sort of materialism—roughly the view that consciousness is a physical state of the brain—or to take the dualist outlook that mental and physical things, minds and brains, were irreducibly different. Over the past century, materialism has gained an upper hand among philosophers, and it is certainly the common working-assumption of psychologists, neuroscientists and artificial-intelligence researchers.”*
* Anonymous review of John Searle’s “The Mystery of Consciousness” (New York Review of Books, 1997), in “...and minds?” The Economist (December 13, 1997): 11-12.
UC Berkeley’s Searle considers himself a materialist, though not without controversy: he argues that philosophers face a scientific problem of consciousness, in that science has yet “to show how neural structures produce consciousness, and nobody can be sure in advance that it will fail.”*
* Ibid.
What materialists also have to contend with are the very real subjective experiences that people have of the world. How does one go about explaining subjectivity from an objective, rational, scientific standpoint?
Take the example of hallucinations. Some people hallucinate under duress or sensory deprivation, others under the influence of chemical stimulants such as alcohol or drugs. Amputees feel the presence of phantom limbs. Those who lose loved ones see their ghosts. And when generative AI software makes gross errors, it is said to hallucinate.
Tufts University’s Daniel Dennett contends that hallucinations are one of the ways we humans cope with the deluge of information and stimuli encountered during a typical day. In a sense, humans evolved to hallucinate as a strategy for processing information. A nifty way of explaining the snafus of ChatGPT, Bard, DALL-E, or Stable Diffusion.
“A cursory review of the literature on hallucinations certainly does suggest that there is something of an inverse relation between strength and frequency - as well as between strength and credibility… One of the endemic features of hallucination reports is that the victim will comment on his or her rather unusual passivity in the face of the hallucination. Hallucinators usually just stand and marvel… It is likely… that this passivity is not an inessential feature of hallucination but a necessary precondition for any moderately detailed and sustained hallucination to occur…
“The key element in our various explanations of how hallucinations and dreams are possible at all was the theme that the only work that the brain must do is whatever it takes to assuage epistemic hunger - to satisfy ‘curiosity’ in all its forms… The world provides an inexhaustible deluge of information bombarding our senses, and when we concentrate on how much is coming in, or continuously available, we often succumb to the illusion that it all must be used, all the time… If our brains can just satisfy all our particular epistemic hungers as they arise, we will never find grounds for complaint. We will never be able to tell, in fact, that our brains are provisioning us with less than everything that is available in the world.”*
* Dennett, D., “Consciousness Explained,” Little, Brown (1991): pp. 8, 16.
Although Searle and Dennett differ in their underlying beliefs about consciousness, each of them offers strategists (as do most leading philosophers) an array of ways of thinking about intractable problems. Problems both tangible and intangible. Problems both subjective and objective. Learning how to think like a philosopher is powerful.