
Why Transhumanism Needs More Positive Science Fiction – Article by Rykon Volta


Rykon Volta


In the modern Age of Accelerating Returns, more commonly known as the Information Age, technological growth is accelerating at an unprecedented rate. Never before has technological growth shown itself so clearly to the human race. As noted by famous futurist Ray Kurzweil, technological growth is not merely exponential; it follows a double exponential curve.

One famous example of this exponential growth that you might be familiar with, if you are into the world of tech, is of course Moore’s Law, but in The Singularity is Near, Kurzweil demonstrates that other technological fields, including medicine, have been accelerating as well. Ray Kurzweil shows that technology has actually been accelerating since before the Stone Age, although a man in the Roman Empire would not have noticed any ramifications of progress, considering that his grandchildren would not live in a very different society from the one he and his grandfather inhabited. For the first time in recorded history, we commonly think about where we will be in 100 years, where we will be in 50 years, and now even where we will be in a decade as technology progresses through the 21st Century. If Ray Kurzweil is right, machines will attain sentience, and AI, or artificial intelligence, will become greater than human intelligence, resulting in a hypothetical event known as an “intelligence explosion” or “technological singularity”. After this point, machines will be much smarter than average human beings and will be able to carry on progress much faster than we can even begin to comprehend with our natural brains.
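The difference between simple exponential growth and the double exponential growth Kurzweil describes can be made concrete with a short sketch. The numbers below are illustrative only, not Kurzweil’s data: in the simple case a quantity doubles every period, while in the double exponential case the growth factor itself doubles as well.

```python
# Illustrative sketch: exponential vs. double exponential growth.
# All starting values and rates here are hypothetical, chosen only
# to show the shape of the two curves.

def exponential(start, periods, factor=2):
    """Value after each period when the quantity doubles every period."""
    values = [start]
    for _ in range(periods):
        values.append(values[-1] * factor)
    return values

def double_exponential(start, periods, rate=2, rate_factor=2):
    """Value after each period when the growth factor itself doubles."""
    values = [start]
    for _ in range(periods):
        values.append(values[-1] * rate)
        rate *= rate_factor  # the pace of progress accelerates too
    return values

simple = exponential(1, 5)          # [1, 2, 4, 8, 16, 32]
double = double_exponential(1, 5)   # [1, 2, 8, 64, 1024, 32768]
```

After only five periods, the double exponential curve is already three orders of magnitude ahead, which is why progress on such a curve goes almost unnoticed early on and then appears sudden.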

In the wake of the recognition of these future possibilities, many science-fiction authors and scriptwriters have created a plethora of media warning us that AI and future genetic augmentation pose existential threats to the human race. Examples that now dominate the mainstream media include Terminator, 2001: A Space Odyssey, The Matrix, and many more that warn us that AI might kill us all. Gattaca expresses the great fear of an unfair, elitist society in a genetically enhanced world, where a man who was born naturally is unable to pursue his dream career because he wasn’t born with genetic modifications. In parallel, people demonize the idea of genetic modification by ruthlessly attacking GMOs and claiming that they are bad for us, when GMOs have in fact alleviated famine in some parts of the world through higher yields. People are always fearful of what they do not understand.

In the Golden Age of Science Fiction, a period during the mid-20th Century when many sci-fi works hit the stage, spreading optimism and futurism, science fiction had a brighter outlook on the future. Isaac Asimov imagined future Spacer societies and a Galactic Empire in his Robot Series and Foundation Series. Gene Roddenberry took us on fantastic voyages across the stars aboard the Enterprise alongside Captain James T. Kirk and Spock. Other authors likewise inspired visionaries to look brightly toward the future as the Space Race sent the first humans to the Moon.

Today, we have, in a way, a form of cultural stagnation. While some still see the future in an optimistic light, it seems much more popular today to look at the future as a dystopia, and New Age movements everywhere act as if demonizing technology were some kind of “morally right” position. Despite the trends of growth continuing to accelerate, mainstream culture seems to be propagating more fear of the future than hope and inspiration. Why are we doing this? While I agree that dystopian sci-fi has its place and that we should indeed analyze and contemplate existential risks in our future so that we might steer clear of them, progress is going to happen, and we are going to try everything we can to “play god”, as the enemies of transhumanism like to say transhumanists are doing. To them, of course, I say, “Were we not created in God’s image? Did God not give the Earth to mankind? Were we not meant to achieve our full potential, to subdue the Earth and conquer it, bending it to our will?” Indeed, this phrase in Genesis seems to be divine permission to modify our bodies and accelerate a brighter future. However, this is mainly an appeal to my fellow religious folks who may be averse to progress. We are not playing God because, quite honestly, God would not even make that possible. We are just using our God-given talents to hack our own genetic code and modify the machinery of our initial, still quite wonderful creation. To those Christians who say that we are insulting God and telling him, “You didn’t make me good enough”, the beauty of mankind is that we were in fact created with the ability to modify ourselves. Don’t modify yourself with the intention of insulting your creator, but with the intention of becoming closer to your creator. Why would he give us the ability for self-modification if he didn’t intend for us to use it? It’s like saying that we shouldn’t work out because self-improvement is some kind of blasphemy against God.
Do you really believe God wants us to intentionally limit ourselves from our full potential?

Others may fear the coming of AI as a usurping of humanity’s place as the apex predator on this planet, and they may be afraid of a Skynet scenario in which a rampant AI destroys us all. I argue that the solution is to merge ourselves with the machines, allowing us to direct our own evolution. Ray Kurzweil and many other singularitarians would make the same argument. By enhancing our own bodies and replacing our cells with nanobots, upgrading our brains to the point where neural signals travel at light speed, we will be able to keep up with AI in the evolutionary arms race to come. You can choose to live in fear in the face of the coming Singularity and be left behind in its wake, or you can step boldly and bravely forward into the new world it will create, surpassing all your physical, mental, and morphological limitations and leaving your mortality behind entirely.

As I have written before, mainstream media is overwhelmingly sending out negative signals and warnings about the future, painting into the memespace, or ideaspace, of mainstream culture the notion that technology is a negative influence and that it should be contained and controlled. Society is largely crying for a return to the caves because many people are fearful of what they don’t understand. This trend needs to cease. People need to see that the light of the future is much brighter than they think. AI is coming, the technological Singularity is coming, and it is going to be better than anyone can imagine. This is a call to arms: artists and sci-fi writers who see the ramifications of the future and how it can create an abundant, prosperous utopia, I urge you to write science fiction that portrays AI not in a negative but in a positive light. Show AI in a benevolent form, and show how it can aid humanity in its future quest for survival. Show how it can solve global problems like hunger and global warming and cure disease. Write stories that put the Neo-Luddites in their place and show that the pseudo-religious zeal of anti-progress-minded people is ultimately a negative factor holding us back from creating a better world in the long run. Know and understand that the content in the mainstream media has a huge effect on the minds of the people, and indeed much of culture is shaped by what is put out there and consumed by the masses. Transhumanism needs more positive science fiction to help gain support for the movement and to inspire the next generation of scientists and inventors to design the future we all desire!

Rykon Volta is the author of the novel Arondite, Book I of The Artilect Protocol Trilogy. Arondite is available on Amazon in hard-copy and Kindle formats here. Visit Rykon Volta’s website here.

Watch the U.S. Transhumanist Party Virtual Enlightenment Salon of July 19, 2020, when Rykon Volta was the guest of honor and discussed science fiction, his novel Arondite, and the ideas surrounding it with the U.S. Transhumanist Party Officers.


Review of Ray Kurzweil’s “How to Create a Mind” – Article by Gennady Stolyarov II


Gennady Stolyarov II


How to Create a Mind (2012) by inventor and futurist Ray Kurzweil sets forth a case for engineering minds that are able to emulate the complexity of human thought (and exceed it) without the need to reverse-engineer every detail of the human brain or of the plethora of content with which the brain operates. Kurzweil persuasively describes the human conscious mind as based on hierarchies of pattern-recognition algorithms which, even when based on relatively simple rules and heuristics, combine to give rise to the extremely sophisticated emergent properties of conscious awareness and reasoning about the world. How to Create a Mind takes readers through an integrated tour of key historical advances in computer science, physics, mathematics, and neuroscience – among other disciplines – and describes the incremental evolution of computers and artificial-intelligence algorithms toward increasing capabilities – leading toward the not-too-distant future (the late 2020s, according to Kurzweil) during which computers would be able to emulate human minds.

Kurzweil’s fundamental claim is that there is nothing which a biological mind is able to do, of which an artificial mind would be incapable in principle, and that those who posit that the extreme complexity of biological minds is insurmountable are missing the metaphorical forest for the trees. Analogously, although a fractal or a procedurally generated world may be extraordinarily intricate and complex in its details, it can arise on the basis of carrying out simple and conceptually fathomable rules. If appropriate rules are used to construct a system that takes in information about the world and processes and analyzes it in ways conceptually analogous to a human mind, Kurzweil holds that the rest is a matter of having adequate computational and other information-technology resources to carry out the implementation. Much of the first half of the book is devoted to the workings of the human mind, the functions of the various parts of the brain, and the hierarchical pattern recognition in which they engage. Kurzweil also discusses existing “narrow” artificial-intelligence systems, such as IBM’s Watson, language-translation programs, and the mobile-phone “assistants” that have been released in recent years by companies such as Apple and Google. Kurzweil observes that, thus far, the most effective AIs have been developed using a combination of approaches, having some aspects of prescribed rule-following alongside the ability to engage in open-ended “learning” and extrapolation upon the information which they encounter. Kurzweil draws parallels to the more creative or even “transcendent” human abilities – such as those of musical prodigies – and observes that the manner in which those abilities are made possible is not too dissimilar in principle.
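The hierarchical pattern-recognition idea can be illustrated with a toy sketch; this is my own simplified stand-in, not Kurzweil’s actual algorithm, and the recognizer names and threshold values are purely hypothetical. A higher-level recognizer fires when enough of its lower-level child patterns are detected, which is one reason partly obscured text can still be read.

```python
# Toy illustration of hierarchical pattern recognition (hypothetical,
# not Kurzweil's algorithm): a higher-level recognizer fires when a
# sufficient number of its child patterns are present in the input.

def make_recognizer(name, children, threshold):
    """Return a recognizer that fires when >= threshold child patterns fire."""
    def recognize(observed):
        hits = sum(1 for child in children if child in observed)
        return name if hits >= threshold else None
    return recognize

# Lower level: letter features observed directly. Higher level: a word
# recognizer built on top of them, tolerant of one missing letter, much
# as the brain reads partly obscured words.
apple = make_recognizer("APPLE", ["A", "P", "L", "E"], threshold=3)

print(apple({"A", "P", "L", "E"}))  # fires on the full pattern
print(apple({"A", "P", "L"}))       # still fires with one feature missing
print(apple({"A"}))                 # too little evidence: None
```

The point of the sketch is only the structural one made in the book: sophisticated recognition can emerge from layers of simple, individually fathomable rules.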

With regard to some of Kurzweil’s characterizations, however, I question whether they are universally applicable to all human minds – particularly where he mentions certain limitations – or whether they only pertain to some observed subset of human minds. For instance, Kurzweil describes the ostensible impossibility of reciting the English alphabet backwards without error (absent explicit study of the reverse order), because of the sequential nature in which memories are formed. Yet, upon reading the passage in question, I was able to recite the alphabet backwards without error upon my first attempt. It is true that this occurred more slowly than the forward recitation, but I am aware of why I was able to do it; I perceive larger conceptual structures or bodies of knowledge as mental “objects” of a sort – and these objects possess “landscapes” on which it is possible to move in various directions; the memory is not “hard-coded” in a particular sequence. One particular order of movement does not preclude others, even if those others are less familiar – but the key to successfully reciting the alphabet backwards is to hold it in one’s awareness as a single mental object and move along its “landscape” in the desired direction. (I once memorized how to pronounce ABCDEFGHIJKLMNOPQRSTUVWXYZ as a single continuous word; any other order is slower, but it is quite doable as long as one fully knows the contents of the “object” and keeps it in focus.) This is also possible to do with other bodies of knowledge that one encounters frequently – such as dates of historical events: one visualizes them along the mental object of a timeline, visualizes the entire object, and then moves along it or drops in at various points using whatever sequences are necessary to draw comparisons or identify parallels (e.g., which events happened contemporaneously, or which events influenced which others). 
I do not know what fraction of the human population carries out these techniques – as the ability to recall facts and dates has always seemed rather straightforward to me, even as it challenged many others. Yet there is no reason why the approaches for more flexible operation with common elements of our awareness cannot be taught to large numbers of people, as these techniques are a matter of how the mind chooses to process, model, and ultimately recombine the data which it encounters. The more general point in relation to Kurzweil’s characterization of human minds is that there may be a greater diversity of human conceptual frameworks and approaches toward cognition than Kurzweil has described. Can an artificially intelligent system be devised to encompass this diversity? This is certainly possible, since the architecture of AI systems would be more flexible than the biological structures of the human brain. Yet it would be necessary for true artificial general intelligences to be able not only to learn using particular predetermined methods, but also to teach themselves new techniques for learning and conceptualization altogether – just as humans are capable of today.

The latter portion of the book is more explicitly philosophical and devoted to thought experiments regarding the nature of the mind, consciousness, identity, free will, and the kinds of transformations that may or may not preserve identity. Many of these discussions are fascinating and erudite – and Kurzweil often transcends fashionable dogmas by bringing in perspectives such as the compatibilist case for free will and the idea that the experiments performed by Benjamin Libet (that showed the existence of certain signals in the brain prior to the conscious decision to perform an activity) do not rule out free will or human agency. It is possible to conceive of such signals as “preparatory work” within the brain to present a decision that could then be accepted or rejected by the conscious mind. Kurzweil draws an analogy to government officials preparing a course of action for the president to either approve or disapprove. “Since the ‘brain’ represented by this analogy involves the unconscious processes of the neocortex (that is, the officials under the president) as well as the conscious processes (the president), we would see neural activity as well as actual actions taking place prior to the official decision’s being made” (p. 231). Kurzweil’s thoughtfulness is an important antidote to commonplace glib assertions that “Experiment X proved that Y [some regularly experienced attribute of humans] is an illusion” – assertions which frequently tend toward cynicism and nihilism if widely adopted and extrapolated upon. It is far more productive to deploy both science and philosophy toward seeking to understand more directly apparent phenomena of human awareness, sensation, and decision-making – instead of rejecting the existence of such phenomena contrary to the evidence of direct experience. 
Especially if the task is to engineer a mind that has at least the faculties of the human brain, then Kurzweil is wise not to dismiss aspects such as consciousness, free will, and the more elevated emotions, which have been known to philosophers and ordinary people for millennia, and which only in the 20th century did it become fashionable in some circles to disparage. Kurzweil’s only vulnerability in this area is that he often resorts to statements that he accepts the existence of these aspects “on faith” (although it does not appear to be a particularly religious faith; it is, rather, more analogous to “leaps of faith” in the sense that Albert Einstein referred to them). Kurzweil does not need to do this, as he himself outlines sufficient logical arguments to be able to rationally conclude that attributes such as awareness, free will, and agency upon the world – which have been recognized across predominant historical and colloquial understandings, irrespective of particular religious or philosophical flavors – indeed actually exist and should not be neglected when modeling the human mind or developing artificial minds.

One of the thought experiments presented by Kurzweil is vital to consider, because the process by which an individual’s mind and body might become “upgraded” through future technologies would determine whether that individual is actually preserved – in terms of the aspects of that individual that enable one to conclude that that particular person, and not merely a copy, is still alive and conscious:

Consider this thought experiment: You are in the future with technologies more advanced than today’s. While you are sleeping, some group scans your brain and picks up every salient detail. Perhaps they do this with blood-cell-sized scanning machines traveling in the capillaries of your brain or with some other suitable noninvasive technology, but they have all of the information about your brain at a particular point in time. They also pick up and record any bodily details that might reflect on your state of mind, such as the endocrine system. They instantiate this “mind file” in a morphological body that looks and moves like you and has the requisite subtlety and suppleness to pass for you. In the morning you are informed about this transfer and you watch (perhaps without being noticed) your mind clone, whom we’ll call You 2. You 2 is talking about his or her life as if s/he were you, and relating how s/he discovered that very morning that s/he had been given a much more durable new version 2.0 body. […] The first question to consider is: Is You 2 conscious? Well, s/he certainly seems to be. S/he passes the test I articulated earlier, in that s/he has the subtle cues of becoming a feeling, conscious person. If you are conscious, then so too is You 2.

So if you were to, uh, disappear, no one would notice. You 2 would go around claiming to be you. All of your friends and loved ones would be content with the situation and perhaps pleased that you now have a more durable body and mental substrate than you used to have. Perhaps your more philosophically minded friends would express concerns, but for the most part, everybody would be happy, including you, or at least the person who is convincingly claiming to be you.

So we don’t need your old body and brain anymore, right? Okay if we dispose of it?

You’re probably not going to go along with this. I indicated that the scan was noninvasive, so you are still around and still conscious. Moreover your sense of identity is still with you, not with You 2, even though You 2 thinks s/he is a continuation of you. You 2 might not even be aware that you exist or ever existed. In fact you would not be aware of the existence of You 2 either, if we hadn’t told you about it.

Our conclusion? You 2 is conscious but is a different person than you – You 2 has a different identity. S/he is extremely similar, much more so than a mere genetic clone, because s/he also shares all of your neocortical patterns and connections. Or should I say s/he shared those patterns at the moment s/he was created. At that point, the two of you started to go your own ways, neocortically speaking. You are still around. You are not having the same experiences as You 2. Bottom line: You 2 is not you.  (How to Create a Mind, pp. 243-244)

This thought experiment is essentially the same one as I independently posited in my 2010 essay “How Can I Live Forever?: What Does and Does Not Preserve the Self”:

Consider what would happen if a scientist discovered a way to reconstruct, atom by atom, an identical copy of my body, with all of its physical structures and their interrelationships exactly replicating my present condition. If, thereafter, I continued to exist alongside this new individual – call him GSII-2 – it would be clear that he and I would not be the same person. While he would have memories of my past as I experienced it, if he chose to recall those memories, I would not be experiencing his recollection. Moreover, going forward, he would be able to think different thoughts and undertake different actions than the ones I might choose to pursue. I would not be able to directly experience whatever he chooses to experience (or experiences involuntarily). He would not have my ‘I-ness’ – which would remain mine only.

Thus, Kurzweil and I agree, at least preliminarily, that an identically constructed copy of oneself does not somehow obtain the identity of the original. Kurzweil and I also agree that a sufficiently gradual replacement of an individual’s cells and perhaps other larger functional units of the organism, including a replacement with non-biological components that are integrated into the body’s processes, would not destroy an individual’s identity (assuming it can be done without collateral damage to other components of the body). Then, however, Kurzweil posits the scenario where one, over time, transforms into an entity that is materially identical to the “You 2” as posited above. He writes:

But we come back to the dilemma I introduced earlier. You, after a period of gradual replacement, are equivalent to You 2 in the scan-and-instantiate scenario, but we decided that You 2 in that scenario does not have the same identity as you. So where does that leave us? (How to Create a Mind, p. 247)

Kurzweil and I are still in agreement that “You 2” in the gradual-replacement scenario could legitimately be a continuation of “You” – but our views diverge when Kurzweil states, “My resolution of the dilemma is this: It is not true that You 2 is not you – it is you. It is just that there are now two of you. That’s not so bad – if you think you are a good thing, then two of you is even better” (p. 247). I disagree. If I (via a continuation of my present vantage point) cannot have the direct, immediate experiences and sensations of GSII-2, then GSII-2 is not me, but rather an individual with a high degree of similarity to me, but with a separate vantage point and separate physical processes, including consciousness. I might not mind the existence of GSII-2 per se, but I would mind if that existence were posited as a sufficient reason to be comfortable with my present instantiation ceasing to exist. Although Kurzweil correctly reasons through many of the initial hypotheses and intermediate steps leading from them, he ultimately arrives at a “pattern” view of identity, with which I differ. I hold, rather, a “process” view of identity, where a person’s “I-ness” remains the same if “the continuity of bodily processes is preserved even as their physical components are constantly circulating into and out of the body. The mind is essentially a process made possible by the interactions of the brain and the remainder of the nervous system with the rest of the body. One’s ‘I-ness’, being a product of the mind, is therefore reliant on the physical continuity of bodily processes, though not necessarily an unbroken continuity of higher consciousness.” (“How Can I Live Forever?: What Does and Does Not Preserve the Self”) If only a pattern of one’s mind were preserved and re-instantiated, the result may be potentially indistinguishable from the original person to an external observer, but the original individual would not directly experience the re-instantiation.
It is not the content of one’s experiences or personality that is definitive of “I-ness” – but rather the more basic fact that one experiences anything as oneself and not from the vantage point of another individual; this requires the same bodily processes that give rise to the conscious mind to operate without complete interruption. (The extent of permissible partial interruption is difficult to determine precisely and open to debate; general anesthesia is not sufficient to disrupt I-ness, but what about cryonics or shorter-term “suspended animation”?) For this reason, the pursuit of biological life extension of one’s present organism remains crucial; one cannot rely merely on one’s “mindfile” being re-instantiated in a hypothetical future after one’s demise. The future of medical care and life extension may certainly involve non-biological enhancements and upgrades, but in the context of augmenting an existing organism, not disposing of that organism.

How to Create a Mind is highly informative for artificial-intelligence researchers and laypersons alike, and it merits revisiting as a reference for useful ideas regarding how (at least some) minds operate. It facilitates thoughtful consideration of both the practical methods and more fundamental philosophical implications of the quest to improve the flexibility and autonomy with which our technologies interact with the external world and augment our capabilities. At the same time, as Kurzweil acknowledges, those technologies often lead us to “outsource” many of our own functions to them – as is the case, for instance, with vast amounts of human memories and creations residing on smartphones and in the “cloud”. If the timeframes of arrival of human-like AI capabilities match those described by Kurzweil in his characterization of the “law of accelerating returns”, then questions regarding what constitutes a mind sufficiently like our own – and how we will treat those minds – will become ever more salient in the proximate future. It is important, however, for interest in advancing this field to become more widespread, and for political, cultural, and attitudinal barriers to its advancement to be lifted – for, unlike Kurzweil, I do not consider the advances of technology to be inevitable or unstoppable. We humans maintain the responsibility of persuading enough other humans that the pursuit of these advances is worthwhile and will greatly improve the length and quality of our lives, while enhancing our capabilities and attainable outcomes. Every movement along an exponential growth curve is due to a deliberate push upward by the efforts and minds of the creators of progress, using the machines they have built.

Gennady Stolyarov II is Chairman of the United States Transhumanist Party. Learn more about Mr. Stolyarov here.

This article is made available pursuant to the Creative Commons Attribution 4.0 International License, which requires that credit be given to the author, Gennady Stolyarov II (G. Stolyarov II).