The Intelligence Revolution (LONG)

TTI is known for its intellectuals. This is a place for thinkers to gather and exchange quotes, thoughts, or other topics that might not appeal to the average gamer.
Cashel Kinnane
Posts: 13
Joined: Thu Nov 04, 2004 2:56 pm

The Intelligence Revolution (LONG)

Post by Cashel Kinnane »

This is an essay I was halfway through writing. This site has finally given me a forum on which to post it, which inspired me to finish it. Let me know what you think :)

THE INTELLIGENCE REVOLUTION

"There has been no significant change in human DNA in the last ten thousand years, but it is likely that we will be able to completely redesign it in the next thousand."
-- Stephen Hawking, "The Universe In A Nutshell"

With a few lines in his recent 'theoretical physics for the layperson' book, Stephen Hawking makes a piercing analysis of what the future holds in store for the human race. It's as exciting as it is terrifying, and chillingly probable.

Imagine a world where humans eschew their bodies, where we outgrow the confines of our brains and develop immortal sentience in amniotic vats or quantum super-computers. Envision a future where genetic increases in intelligence compound themselves, where smart science breeds even smarter scientists. The vast majority of our current science fiction describes a future where humans are largely unchanged; genetic engineering takes a back seat to warp drives and pulse rifles. Yet today, genetics is one of the most progressive scientific disciplines. We are on the cusp of spring-boarding our evolution -- exponentially. We're an eyeblink away from what can accurately be called an intelligence revolution.

Inevitable Genetics

On an evolutionary timescale, our scientific progress has been lightning swift. We've gone from alchemy and bloodletting to mapping the human genome in mere centuries. The next step is clear: The uninhibited modification of human life. We're about to take evolution into our own hands, with unfathomable consequences.

Moralists are facing a losing battle against genetic science; it's simply too exciting and too full of potential for improving human life to remain checked by treaties and regulations for long. Unless we establish a totalitarian world order, we must face the fact that unscrupulous genetic engineering won't merely be a part of our future -- it will define it.

Western civilization as a whole sees genetic engineering as dangerous and morally ambiguous; the fact that it is a complex and expensive science has kept it in check these past decades. But we're already seeing the inevitable decay of this caution. It will begin with the curing of disease, and from that genetic science all the rest will spring. Certainly, the US will not begin breeding super-soldiers pre-emptively, but what happens when its enemies begin doing so? Political leaders will see potential in genetic science; they'll see a fountain of immortality or a holy grail for increasing the attributes of their heirs and successors.

But developing physical traits is not the real issue. Super strong soldiers are one thing, but the true (and thus far largely ignored by popular science fiction) consequence of genetic engineering will be an exponential increase in human intelligence. Will the conservative western world stand idly by while its enemies breed tactical super-geniuses, military savants and legions of Einsteins to theorize and invent weapons of unparalleled mass destruction?

Obviously not. Genetic engineering is perhaps a century or two away at most, and the world had best be ready.

Consequences

The true consequence of genetic tinkering has been mentioned: An exponential increase in human intelligence.

Imagine a team of four scientists who genetically engineer four humans to be slightly more intelligent than they are. These genetically engineered humans grow up and become smarter scientists. This second generation is now even MORE capable of increasing human intelligence, leading towards a third generation of even MORE brilliant humans. This third generation creates a fourth generation, putting all of their genetically engineered smarts into further augmenting brain power.

While the rate of this growth cannot be estimated with any compelling accuracy, the form it will take certainly can. There are multiple patterns of mathematical growth that may apply. Logarithmic growth, where the rate of expansion is in a state of constant deceleration, applies when the agent behind the growth becomes less effective over time. Linear growth assumes the agent behind the growth remains constant. Given that the intelligence of the scientists themselves is also increasing, both of these patterns are only applicable if we assume some limiting factor, some barrier that cannot be overcome. But the results of this growth, of increased intelligence, can be applied to any barrier that approaches. The size of our brains, for example, may be a limit now. But when our scientists achieve four-digit IQs? Can one even comprehend the solutions they will devise?
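
To make the difference between these patterns concrete, here is a minimal sketch in Python. Every number in it -- the baseline IQ of 100, the per-generation gains, the ten-generation horizon -- is an arbitrary assumption chosen only to expose the shape of each curve, not an empirical claim:

import math

# Toy model of three candidate growth patterns for "mean IQ per
# generation". All parameters are illustrative assumptions.

START_IQ = 100.0
GENERATIONS = 10

def logarithmic(n):
    # Decelerating agent: each generation adds less than the one before.
    return START_IQ + 30.0 * math.log(n + 1)

def linear(n):
    # Constant agent: each generation adds the same fixed amount.
    return START_IQ + 10.0 * n

def exponential(n):
    # Recursive agent: each generation's gain scales with the intelligence
    # of the generation doing the engineering (assumed 10% per generation).
    return START_IQ * 1.10 ** n

for n in range(GENERATIONS + 1):
    print(f"gen {n:2d}: log {logarithmic(n):6.1f} "
          f"linear {linear(n):6.1f} exp {exponential(n):6.1f}")

Only the exponential column is still accelerating at generation ten; stretch the horizon to fifty generations and the exponential scientist clears a five-digit IQ while the linear one has barely reached 600.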

An impressive possibility is that genetically augmented intelligence may be one of the few truly unlimited exponential growth processes we can conceive of. It is purely recursive: we increase our intelligence, which makes us better able to increase our intelligence, which in turn further augments our ability. Look at any graphical representation of exponential growth; the end result almost always defies belief, regardless of the initial conditions. I present this story to further emphasize the true consequences of an exponential rate of growth:

Once upon a time, long ago, there was a king who ruled a prosperous land. Poverty was unknown there and every person was gainfully employed, so the sight of a beggar making his way along Main Street caused quite a stir in the capital. The king demanded to see this strange man. When brought before him, the beggar revealed that he indeed had no possessions nor any money for the purchase of food. The king magnanimously offered all-you-can-eat meals for the rest of the week and clean clothes so that the beggar could continue his journey to the next land. Surprisingly, the beggar declined the royal offer and asked instead for a modest favor. The king demanded to know what the wish was. The beggar humbly requested a single grain of rice on the first day, two on the second, four on the third, and so on -- each day doubling the previous day's contribution.

The king looked through the window at the overflowing granaries and had almost accepted when his grand vizier, remembering something he had learnt in Elements of Numbers, advised his highness to reconsider. To calculate the implications of the wish, the vizier pulled out a dusty abacus to perform the exponential calculations. He fumbled with it for a while but could not express the magnitude of the numbers involved, because he ran out of beads. The king, growing impatient with his vizier over such a simple wish from a poor man, officially granted the beggar's request. Little did he know that he had sounded the death knell of his reign.

The next day the beggar came to claim his grain of rice. The townsfolk laughed and said he should have taken the king's kind offer of a full meal instead of a measly grain of rice. On the second day he was back for his two grains. A week later, he brought a teaspoon for the 128 grains that were due to him. In two weeks it had become a non-negligible half a kilo. By the end of the month it had grown to a whopping 35 tons, and a few days later the king had to declare bankruptcy. That is all it took to bring down the kingdom.

-- Adapted by Raju Varghese, Intellisoft, Inc
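
As a quick sanity check on the story's figures, a few lines of Python reproduce them. The mass of a grain of rice (assumed here at roughly 0.03 g) is the only outside number; the "35 tons" in the story implies a slightly heavier grain, but the order of magnitude holds:

# Check of the figures in the rice story. The mass per grain (~0.03 g)
# is an assumption; real grains vary from roughly 0.02 to 0.04 g.

GRAMS_PER_GRAIN = 0.03

def grains_on_day(day):
    # One grain on day 1, doubling every day thereafter.
    return 2 ** (day - 1)

def cumulative_grains(day):
    # 1 + 2 + 4 + ... + 2^(day-1) = 2^day - 1
    return 2 ** day - 1

for day in (8, 14, 30):
    total_kg = cumulative_grains(day) * GRAMS_PER_GRAIN / 1000.0
    print(f"day {day:2d}: {grains_on_day(day):>11,} grains owed that day, "
          f"{total_kg:,.1f} kg handed over in total")

# day  8:         128 grains owed that day, 0.0 kg handed over in total
# day 14:       8,192 grains owed that day, 0.5 kg handed over in total
# day 30: 536,870,912 grains owed that day, 32,212.3 kg handed over in total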

It is possible to posit limits, much as the carrying capacity of the Earth will eventually cap our exponential population growth. Cranial size, for one, limits how large a brain can grow -- the birth canal of the human female is only so large. But if we double our intelligence (which, biologically speaking, should be feasible), is it not likely that we will devise some means of bringing a baby to term outside the womb? And if, upon doing that, we augment intelligence by a factor of ten, can we not assume that cybernetic implants, quantum technologies and the like will breach any barriers thrown up by the further limits of our imperfect biology?

And from there, our intelligence continues to grow.

The Likelihood of Star Trek (or EVE)

The likelihood of 'normal' humans existing for much longer is not high. There will be a surge in genetic experimentation just as there is currently a surge in nuclear proliferation. Even those with idealistic opinions of the 'moral West' must concede that we will be forced to initiate genetic experimentation to keep up with our myriad enemies. And when we do, the cycle begins. How smart must a scientist be before he can amplify a human's memory? Create a neural uplink to teach a toddler calculus, Matrix-style? Apply quantum entanglement to bring about infinite and immediate communication of all information? Bridge the sociological gaps within mankind, eliminating secrets and the imperfect nature of human communication?

But that is only what we can conceive. When humans grow so smart that we, today, are ants by comparison, what will be possible? We cannot apply our antiquated morals or inhibitions to the far future: gigantic, pulsating brains in vats, simultaneously controlling armies of robots, may seem morally repugnant now, but what if the day comes when victimless morality and the sanctity of the human form are seen as relics of a bygone era? The Borg of the Star Trek franchise may be the closest science fiction has ever come to the true outcome of our scientific meddling -- the loss of individuality. Perhaps, one day, we will eschew the physical altogether, becoming beings of contained energy, developing omniscience and omnipotence that today we cannot fathom. Perhaps, in the future, God will become a reality -- and we will be He.

Are We Alone?

This theory has natural implications for the existence of alien life. The argument posed here is that the intelligence revolution is a natural consequence of evolved intelligence. It can be boiled down to a single tenet: with intelligence comes the power to augment intelligence. It is a recursive statement with boundless repercussions. To believe that any intelligent civilization would blissfully disregard it is illogical. Thus, we are either alone in the universe, or it is fundamentally impossible to cross the interstellar void (physically or by communication). Personally, I lean towards the former. Here's why:

The Annihilation Conclusion

Imagine a world where everyone is unfathomably brilliant, where the secrets of the atom and the complexities of quantum physics are not only understood by the average toddler, but where the average toddler can innovate and extrapolate from that understanding. Today, our four-year-olds play with Lego. Tomorrow, they might play with reactors. Limitless intelligence brings with it a limitless capacity to destroy. Perhaps, before we address the fundamentals of human intelligence, we must first address the fundamentals of human nature. Without conditioning our species to wield this power responsibly, we will no doubt burn ourselves out in an exponential flash of chaotic potential (which, as I see it, has already begun). We must condition ourselves, learn from ourselves, and shape our wants and needs into something compatible with an unfathomable ability to bring them about.

If you can count your money, you don't have enough.
Joe diNimiki
Posts: 20
Joined: Mon Oct 25, 2004 2:20 pm

Post by Joe diNimiki »

I believe that the capacity to destroy develops hand in hand with the capacity to protect. Some centuries ago, people believed there was no protection against the first guns - and now, there is no protection against nuclear bombs.

But in the Middle Ages, there also was no weapon strong enough to destroy a fortress.

There is no way to say whether the weapon or the protection will be stronger, but in any case, remember that if Man is spread through space, it is unlikely that any weapon will ever be strong enough to destroy the whole of Mankind.
Gage
Posts: 7
Joined: Wed Nov 03, 2004 9:42 pm

Post by Gage »

I liked your essay. This should interest you

http://www.ugcs.caltech.edu/~phoenix/vi ... -sing.html

Vernor Vinge on the singularity, similar concept to yours.

enjoy.

oh and

Joe diNimiki, the USA is working on an anti-ballistic missile (ABM) project to shield itself from nuclear ICBMs. This threatens to unbalance world politics. MAD (mutually assured destruction) won't hold anymore...
Veni, Vidi, Vici
Cashel Kinnane
Posts: 13
Joined: Thu Nov 04, 2004 2:56 pm

Post by Cashel Kinnane »

Gage wrote:I liked your essay. This should interest you

http://www.ugcs.caltech.edu/~phoenix/vi ... -sing.html

Vernor Vinge on the singularity, similar concept to yours.
I don't know if I should be pleased that someone else came up with the same idea I did, or irritated that my essay had already been written ;)

Thanks :)

If you can count your money, you don't have enough.
Gage
Posts: 7
Joined: Wed Nov 03, 2004 9:42 pm

Post by Gage »

You should be pleased; Vernor Vinge has a PhD in pure mathematics. =D Thinking on the same wavelength as him can only be good news
Veni, Vidi, Vici
Cashel Kinnane
Posts: 13
Joined: Thu Nov 04, 2004 2:56 pm

Post by Cashel Kinnane »

Hmm, this has interesting repercussions for objectivist theory, too -- how can one tie morality to productivity when mankind becomes less and less productive in comparison with technological super-machines? Objectivism, as I see it, works when it's man vs. man. But when it becomes man vs. machine, and the machines keep getting better, will objectivism remain a beneficial philosophy for the betterment of mankind?

If you can count your money, you don't have enough.
Gage
Posts: 7
Joined: Wed Nov 03, 2004 9:42 pm

Post by Gage »

Perhaps we can integrate these super-machines into ourselves, bettering humans. We could very well become Cyberneticus Sapiens, or something like that ;)
Veni, Vidi, Vici
Raaz Satik
Taggart Director
Posts: 2026
Joined: Wed Aug 25, 2004 2:40 pm

Post by Raaz Satik »

Gage wrote:Joe diNimiki, the USA is working on an anti-balistic missle project (ABM) to be used to shield themselves from nuclear ICBM's. This threatens to unbalance world politics. MAD (mutually assured destruction) wont hold ground anymore...
And in return the Russians are developing a missile to negate the ABMs.
The warhead of this intercontinental ballistic missile, developed at the Moscow Institute of Thermal Engineering, is unique in its capability to change course and altitude in order to effectively overcome successive anti-missile defense lines.
<<Source>>
Joe diNimiki
Posts: 20
Joined: Mon Oct 25, 2004 2:20 pm

Post by Joe diNimiki »

No machine has a mind like man's. The lazy man is less productive than a machine, but the industrious man is much more productive than the best machine in the world - because he comes up with new things.
Cashel Kinnane
Posts: 13
Joined: Thu Nov 04, 2004 2:56 pm

Post by Cashel Kinnane »

Sorry Joe, I missed your previous post:
Joe diNimiki wrote:I believe that the capacity to destroy develops hand in hand with the capacity to protect. Some centuries ago, people believed there was no protection against the first guns - and now, there is no protection against nuclear bombs.

But in the Middle Ages, there also was no weapon strong enough to destroy a fortress.

There is no way to say whether the weapon or the protection will be stronger, but in any case, remember that if Man is spread through space, it is unlikely that any weapon will ever be strong enough to destroy the whole of Mankind.
But our ability to destroy has consistently outpaced our ability to defend against destruction. Personal armor was effective in the Middle Ages; today it is useless. Strong walls were effective in the past; now they cannot hope to stand up against nuclear weaponry. And it is only in the last century that mankind has developed the power to destroy the entire planet -- and there's no defense against that, either.

You do make a good point, though -- if mankind can spread throughout the universe, it is highly unlikely that a weapon would be developed to destroy us all.

But that could just be our twenty-first-century naivete speaking, and may well be the sort of statement that people millennia from now look back upon and snicker at.
Joe diNimiki wrote:No machine has a mind like man. The lazy man is less productive than a machine, but the industrious man is much more productive than the best machine in the world - because he comes up with new things.
You're right... for now. But what happens if we unlock this mysterious source of innovation and learn how to reproduce it with circuitry and electricity? I don't think anyone can claim with any authority that machines will never achieve the same level of sophistication as the human mind.

If you can count your money, you don't have enough.