Wednesday, February 9, 2011

Background of some ideas in The Minded Man

The Minded Man is a novel, to be read and enjoyed. But the story also plays with some serious ideas that are definitely not “conventional wisdom”.

1. Some readers find it difficult to accept the possibility of machine intelligence. Fortunately, in a novel you are free to “suspend disbelief” and just read on (and I hope you will) – but you would then be missing out on a very interesting discussion that is going on today in other arenas. There are several good books that consider the future of machine intelligence, such as Radical Evolution by Garreau or The Age of Spiritual Machines and How to Create a Mind: The Secret of Human Thought Revealed by Kurzweil. It is also a hot topic in many journals, and on the net at such sites as http://singularityhub.com/ and http://www.kurzweilai.net/. I am one of those who suspect that true machine intelligence is possible – and that when it arrives, it could quickly surpass our own limited abilities in observation, reason, imagination and memory (the Singularity).

The Singularity would have a profound effect. Computers already confer unprecedented power to military and intelligence services, financial speculators and political operatives, and they are bound to have an increasingly destabilizing effect on human affairs as they ramp up to true AI. The Minded Man is post-apocalyptic in the sense that mankind has already gone through the political and social changes necessary to curb the mayhem that egotistical humans may wreak using future science, including AI.

2. Those who already accept the possibility of intelligent machines might wonder why there are no radically enhanced cyborg/humans in The Minded Man. Several arguments both for and against radical enhancement can be found in the books listed above. I list a few of my favorites below:

a. If we expect machines to become much more intelligent than ourselves, it seems unlikely that any cyborg (that is to say, a being made at least partially of brain tissue, which operates at the speed of neurons) will be able to keep up with them. Electrical, optical, quantum or other machines have the potential for far faster and denser connectivity and faster clock speeds, as well as greater memory and more numerous and finer senses than humans. Any neurons in the network will just slow them down, and I cannot foresee even genetic alteration doing much to close this gap. So I assume that whatever is radically enhanced will be a machine and not a human.

b. Another argument against future radically enhanced humans is that we would become even more dangerous to each other, because we would continue to be motivated by those very human emotions and values that make us dangerous to each other now – but which also make us recognizably human. Many futurists assume that this problem will disappear with enhanced reason, but I think that is a conceit. Reason is ultimately dependent on “the passions”, as the philosopher David Hume demonstrated long ago.

Enlightenment skepticism about reason created a vacuum that philosophers rushed to “resolve” or ignore, and it has mostly remained that way ever since. Only in the last few decades has neuroscience rediscovered “value systems” in the lower brain (cf. G. Edelman, Second Nature: Brain Science and Human Knowledge) to explain how we understand the world. I think Edelman’s description of conscious thought’s dependence on value is similar to Hume’s idea of the “passions”, but he goes further to show that each of us constructs our own understanding of the world using those basic values, inseparably.

Few thinkers today take notice of this, however, and most continue to assume that we or some future AI would make perfectly moral decisions if we were only smart enough. (I suspect egotism as the cause: academics tend to see themselves as the arbiters of reason.) But if we, and AI, are necessarily dependent on values – which we often do not perceive – then our decisions will inevitably be influenced by them.

c. I do not disagree that human morality is “evolving”, socially speaking, to sublimate those innate values somewhat. Some claim that we will eventually share a single set of values as we become increasingly connected. But our moral will remains inherently undependable – and at its weakest when individual power over others is greatest. We humans need self-preservation and ambition and sexual drive to motivate us and keep us from harm. Without them we would no longer be human. With them we often find ourselves injuring others.

In The Minded Man, I visualize a future humankind with the old values still in play, focused on the quality of their lives rather than their work. Some alteration of the human genome may well occur, but how much alteration can we withstand and still remain human? And what is the point if we don’t need humans to do the work – or even the thinking? That’s my take, at least. I’m sure it’s an arguable point.

3. Who in their right mind would leave their fate to machines, no matter how intelligent? I suppose that no one would, unless they were forced into it in some way. It would be very difficult to choose values for machines that are far smarter than we are. But we may have to take our best shot at it, if humans become too dangerous to each other.

Because most folks resist the idea that machines could surpass us in thinking, my guess is that we will be surprised someday to find that it has already occurred. Cutting-edge research in machine intelligence is financed and/or monitored by governments for specific purposes, and there is already an arms race going on in military applications. The machines that arise from this effort will likely be programmed to obey orders from humans (anything else would introduce uncertainty into the chain of command), and so the system of which they are a part will likely be subject to all the human frailties.

It could all happen very quickly, in what Vernor Vinge calls a “hard takeoff”. I can easily imagine a competitive surveillance scenario emerging that would be similar to Orwell’s 1984, except that the masses within competing nations would not be needed for work. Who knows what it would be like to live under such a regime, controlled by juntas? It probably would not be pleasant. I do not know if it would even be possible to reverse such a situation.

Another way we could end up surveilled by machines is after some devastating terrorist attack, or a series of them. If science becomes too dangerous and too widely disseminated, then it seems to me that universal surveillance would be the only way to regain some measure of control. If machines are available that can do that job then we would likely use them; it would be another instance of a technological fix for a problem caused by technology. But the problem of trusting machines to make the right decisions in all situations and for all time remains daunting. How could we be sure that we don’t inadvertently cause great harm or even our own destruction?

If you accept that reason is necessarily dependent on value, then machines that reason will need values just as much as humans do. And even if we could come up with a good hierarchy of values to program into the machines, small differences in their valence might have immense consequences in unforeseen circumstances. But we might someday be forced to take our best shot in order to avoid something even more dangerous – that is, if we get the chance. The Minded Man manages to be quietly optimistic about what such a future might be, at least compared to the alternatives...

4. Immortality? I imagine immortality would become very boring. Elderly brains tend to sink into repetitive ways of thinking and doing, to the point of sclerosis. And that might be doubly true when there are few challenges that require us to adjust our thought patterns. Even if we could keep our youth forever, would we eventually die of boredom? What is there to live for? You'll have to read The Minded Man to find out...

These are some of the thoughts that shaped The Minded Man. I hope you like the story.
January, 2011