
Gennady Stolyarov II Interviews James Strole Regarding RAAD Fest 2019 and Life-Extension Advocacy


James Strole
Gennady Stolyarov II
Johannon Ben Zion

On Tuesday, July 16, 2019, U.S. Transhumanist Party Chairman Gennady Stolyarov II invited James Strole of the Coalition for Radical Life Extension and People Unlimited to discuss the upcoming RAAD Fest 2019 in Las Vegas on October 3-6, 2019 – the fourth RAAD Fest in history and the first in a new venue. Mr. Stolyarov and Mr. Strole discussed the importance of unity in the transhumanist and life-extensionist movements, as well as the opportunities for education and inspiration that RAAD Fest will offer to those who wish to live longer and healthier. They also addressed audience questions and were briefly joined by Johannon Ben Zion, Chairman of the Arizona Transhumanist Party. Watch the interview on YouTube here.

Become a member of the U.S. Transhumanist Party / Transhuman Party for free, no matter where you reside. Apply here in less than a minute.

Watch some of the U.S. Transhumanist Party’s prior appearances at RAAD Fests in 2017 and 2018 below.

RAAD Fest 2017

The U.S. Transhumanist Party – Pursuing a Peaceful Political Revolution for Longevity – August 11, 2017

RAAD Fest 2018

The U.S. Transhumanist Party: Four Years of Advocating for the Future – Gennady Stolyarov II at RAAD Fest 2018 – September 21, 2018

Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018 – September 21, 2018

U.S. Transhumanist Party Meeting at RAAD Fest 2018 – September 22, 2018

Andrés Grases Interviews Gennady Stolyarov II on Transhumanism and the Transition to the Next Technological Era – September 23, 2018

Register for RAAD Fest 2019 here

Empowering Human Musical Creation through Machines, Algorithms, and Artificial Intelligence – Essay by Gennady Stolyarov II in Issue 2 of the INSAM Journal


Gennady Stolyarov II

Note from Gennady Stolyarov II, Chairman, United States Transhumanist Party / Transhuman Party: For those interested in my thoughts on the connections among music, technology, algorithms, artificial intelligence, transhumanism, and the philosophical motivations behind my own compositions, my peer-reviewed paper, “Empowering Human Musical Creation through Machines, Algorithms, and Artificial Intelligence”, has been published in Issue 2 of the INSAM Journal of Contemporary Music, Art, and Technology. This is a rigorous academic publication that is also freely available and sharable via a Creative Commons Attribution Share-Alike license – just as academic works ought to be – so I was honored by the opportunity to contribute my writing. My essay features discussions of Plato and Aristotle, Kirnberger’s and Mozart’s musical dice games, the AI-generated compositions of Ray Kurzweil and David Cope, and the recently completed “Unfinished” Symphony of Franz Schubert, whose second half was made possible by an AI / human collaboration between Huawei’s AI and composer Lucas Cantor. Even Conlon Nancarrow, John Cage, Iannis Xenakis, and Karlheinz Stockhausen make appearances in this paper. Look in the bibliography for YouTube and downloadable MP3 links to all of my compositions that I discuss, as this paper is intended to be a multimedia experience.

Music, technology, and transhumanism – all in close proximity in the same paper and pointing the way toward the vast proliferation of creative possibilities in the future as the distance between the creator’s conception of a musical idea and its implementation becomes ever shorter.

You can find my paper on pages 81-99 of Issue 2.

Read “Empowering Human Musical Creation through Machines, Algorithms, and Artificial Intelligence” here.

Read the full Issue 2 of the INSAM Journal here.

Abstract: “In this paper, I describe the development of my personal research on music that transcends the limitations of human ability. I begin with an exploration of my early thoughts regarding the meaning behind the creation of a musical composition according to the creator’s intentions and how to philosophically conceptualize the creation of such music if one rejects the existence of abstract Platonic Forms. I then explore the transformation of my own creative process through the introduction of software capable of playing back music in exact accord with the inputs provided to it, while enabling the creation of music that remains intriguing to the human ear even though the performance of it may sometimes be beyond the ability of humans. Subsequently, I describe my forays into music generated by earlier algorithmic systems such as the Musikalisches Würfelspiel and narrow artificial-intelligence programs such as WolframTones and my development of variations upon artificially generated themes in essential collaboration with the systems that created them. I also discuss some of the high-profile, advanced examples of AI-human collaboration in musical creation during the contemporary era and raise possibilities for the continued role of humans in drawing out and integrating the best artificially generated musical ideas. I express the hope that the continued advancement of musical software, algorithms, and AI will amplify human creativity by narrowing and ultimately eliminating the gap between the creator’s conception of a musical idea and its practical implementation.”

Kindness, the Greatest Temperer of Hubris – Article by Hilda Koehler


Hilda Koehler

In light of the increasingly alarming reports on climate catastrophe that have been released in the past few months, more and more transhumanists are taking up the gauntlet and putting climate-change solutions on their political agenda. Sadly, the transhumanist movement hasn’t exactly been well-received by the environmentalist movement. Environmentalists such as Charles Eisenstein have blamed “scientism” and excessive faith in the scientific materialist worldview as being primarily responsible for the overexploitation of the natural world. Other environmentalists are hostile towards the transhumanist imperative to find a cure for biological aging, arguing that curing aging will further exacerbate resource scarcity (a common criticism which LEAF has dealt with so extensively that they have a page dedicated to it).

It probably doesn’t help that a handful of transhumanists are very vocally “anti-nature”. One of transhumanism’s primary goals is to knock down the fallacious appeals to nature which are propped up against the pursuit of radical human lifespan extension or cyborgification. However, we could perhaps present these ideas in a more palatable manner.

Environmentalists and bioconservatives are fond of claiming that transhumanism is the apogee of human hubris. They claim that transhumanism’s goals to overcome humanity’s biological limits are inseparable from the rapacious greed that has driven developed economies to violate the natural world to a point of near-collapse. Deep Greens go so far as to call for a total renunciation of the technological fruits of civilization, and a return to a hunter-gatherer lifestyle. Radical environmentalists claim that a return to Luddism is the only thing that can save humanity from pillaging the natural world to a point where it becomes utterly uninhabitable. But I would argue that the either-or split between human progress through technological advancement and compassion towards non-human life is a false dichotomy.

Drawing on David Pearce’s hedonistic imperative, I will argue that transhumanism and environmentalism aren’t necessarily at loggerheads with each other. You could even say that transhumanism entails a benevolent stewardship of nature, and that care for all non-human life is a logical extension of human exceptionalism. If the core imperative of our movement is to minimize suffering caused by biological limitations, that should apply to minimizing non-human suffering as well.

Benevolent stewardship: the Aristotelian mean between Deep Green Luddism and Radical Transhumanist Anti-Naturism

I don’t think I’ve ever met somebody whose ideas have so radically changed my views on existential teleology and the natural world as quickly as David’s have. What I love about David’s hedonistic imperative and his involvement in the Reducing Wild Animal Suffering (RWAS) movement is how radically his ideology reframes the idea of human exceptionalism.

“Human exceptionalism” is generally seen as a bad thing, and with good reason. For the better part of human civilisation’s history, humans have been exceptionally bad – exceptionally bad to ethnic minorities who didn’t have guns or cannons, exceptionally bad to women by depriving them of equal status to men and of bodily autonomy, and exceptionally bad to all the animals humans have needlessly slaughtered or whose habitats they have obliterated. Human beings stand out as exceptionally intelligent amongst the animal kingdom, and they also stand out for using that intelligence in extremely innovative ways to amass vast amounts of resources for their “in” groups by brutally exploiting “out” groups in the most unimaginably vile ways.

But the hedonistic imperative puts a new spin on “human exceptionalism”. The hedonistic imperative is the great Uncle Ben lesson for humanity. With our exceptional intelligence comes great responsibility – responsibility not just to currently marginalized ethnic groups, genders, and social classes within humanity, but to non-human species, too. If we have the intelligence to turn humanity into a planet-ravaging force, then we have the intelligence to find a way to repair the damage humans have done.

The hedonistic imperative movement has also been credited with helping to convert a growing number of transhumanists to veganism, and to supporting planet-saving initiatives.

Aristotle is best known for describing virtue as the golden mean between two vices. I wouldn’t go so far as to call Deep Green environmentalism or radically anti-naturist transhumanism “vices”, but I would say that the hedonistic imperative manages to gel the most effective aspects of both schools of thought while avoiding the practical blind spots of both.

Deep Green environmentalists like Charles Eisenstein tend to promulgate the idea of nature’s sacredness as entailing an acceptance of natural malaises. These include death due to biological aging, but a logical extension of this is that it is immoral for human beings to intervene in nature and prevent animals from harming each other, since it is part of the “natural order”. Radically anti-naturist transhumanists tend to view anything natural as automatically inferior to whatever man-made alternatives can be technologically manufactured. While we shouldn’t accept invocations of naturalism prima facie, this view isn’t quite tenable, for primarily practical reasons. It would probably be extremely unwise to replace all the organic trees in the world with man-made synthetic ones, because the Earth’s biosphere is an exceedingly complex system that even our best biologists and geologists still do not fully understand. Likewise, we cannot rely solely on carbon-capture technology or geoengineering to be the ultimate solutions to the ongoing climate crisis. Much more still needs to be invested in reforestation and in the restoration of currently endangered animal and plant species which have been afflicted by habitat loss or resource depletion.

Homo Deus: Already Here

For all the utter destruction that humanity has wrought over the past 10,000 years, we can’t overlook the great capabilities we hold as stewards of nature. Say what you will about humanity, but we’re literally the only species on Earth that has evolved to a point where we can use science to resurrect the dodo bird, the woolly mammoth, and the pterodactyl. And we could do the same for all the other species we’ve driven to extinction. Perhaps those will be the reparations we pay to the animal kingdom for the damage previously done.

Humanity is also the only species in existence that actually has the power to contradict the forces of natural selection and stop natural suffering in its tracks. We just choose not to because we can’t be bothered. I had never in my life thought about how powerful the implications of this were until I listened to David speak about it. We are the only species with the requisite technological power to end hunger, disease, and infant mortality amongst animals, if we so choose.

Basically put: we’re already gods and goddesses.

We are literally gods in the eyes of animals.

But many humans have chosen to emulate the very worst behaviours of the Old Testament Biblical God rather than being the kind of God all human civilizations have long hoped would care for them kindly.

One of Ben Goertzel’s major life goals is to create the most benevolent possible AI nanny, programmed to watch over humanity, make us immortal, and create a post-scarcity condition where all of our physical needs can be met through the application of nanotechnology. Ben acknowledges the importance of deliberately programming an AI to be as benevolent and compassionate as possible, because at present, everyone and their mother is preparing for a possible Terminator scenario in which AI goes rogue and decides that it is under no obligation to be kind to its human creators.

If you would like to know exactly how badly an indifferent or uncompassionate posthuman AI could treat us, you need only look at how badly humans treat chickens and cows. You would only have to look up YouTube videos of desperate orangutans feebly trying to push aside construction cranes that are in the midst of pulverising the trees in which they reside.

And it wasn’t too long ago that humans treated different races of human beings in a similar fashion (although they weren’t slaughtered for consumption).

A posthuman ultra-intelligent AI inflicting the same treatment on humans in developed industrial economies might just be karma coming to pay what’s long been due.

“The benevolent AI god who will resurrect the dead and keep us prosperous forever” is the one wild fantasy which transhumanist forums are constantly salivating over. But why should we expect the AI god to be so merciful to us when humans are not showing even a fraction of that expected mercy to the elephants, cows, and salmon alive today?

Gandhi said, “Be the change you want to see in the world.” Pearce and the RWAS movement crank this imperative up a notch:

“Be the ultra-intelligent, highly-evolved benevolent steward whom you’d like to see overseeing the well-being and survival of your species.”

The New Narrative of Human Exceptionalism

At their core, the primary messages of Deep Green environmentalism and the transhumanist hedonistic imperative aren’t so different. Both movements say that the narrative of Man as the Mighty Colonizer must now come to an end. Charles Eisenstein and Jason Godesky propose we get there by restoring Animism as the overarching religious paradigm of global society, and by returning to a more hunter-gatherer-like lifestyle.

Julian Savulescu argues that we should nip the problem in its biological bud by using biotechnological intervention to delete the human genes that predispose us to excessive aggression towards “out” groups, excessive resource hoarding, and rape. For reasons I’ve explained in detail elsewhere, I tend to side more with Savulescu. But put aside the means, and you’ll realise that the Deep Greens and the more pacifist-humanitarian transhumanists are both proponents of the same end.

One reason why I tend more towards siding with Savulescu and Pearce is that I think forsaking technological advancement would be a mistake. If transhumanism is about transcending our biological limitations through the application of technology, it follows that the shortcomings of primate-based moral psychology shouldn’t be an exception. As leading primatologist Richard Wrangham points out in his often-cited Demonic Males, our primate ancestors evolved to wage war on hominids from other “out” groups and to be predisposed towards hyper-aggression and selfishness as a means of surviving on the resource-scarce savannah. And our neurobiological hardwiring hasn’t changed significantly since then. One of Savulescu’s favorite arguments is that had genetic moral editing been available earlier, we’d probably have averted the climate catastrophe altogether. Savulescu sees the climate catastrophe as a glaring symptom of our still-dominant monkey brains’ failure to consider the long-term consequences of short-term consumer-capitalist satisfaction.

Furthermore, renouncing the fruits of technology and modern medicine would make us far less effective stewards of the animal world. If we go back to a hunter-gatherer existence, we’ll be renouncing the technology needed to resurrect both long-extinct and recently extinct species. Another major goal of the RWAS movement is to use CRISPR gene-editing to help reduce the propensity towards suffering in wild animals, and to engage in fertility regulation. Pearce claims that we might even be able to make natural carnivorism and mating-season-induced violence obsolete by using gene-editing in various aggression-prone species. While we’re at it, we could edit the physiological basis for craving meat out of human beings, since our primate ancestors evolved to be omnivorous. Or we could at the very least try to create a future where all of our meat is lab-grown or made from plant-based substitutes.

It’s also worth noting that human beings are the only species on the planet to have found out about the ultimate fate of life on Earth. We’ve very, very recently discovered that the planet’s habitability has an expiry date, and that the Sun will eventually turn into a red giant and fry the Earth into an inhospitable wasteland. Given that human beings are the only species with the necessary intelligence to engage in space travel and colonization, the survival of every single non-human species on the planet falls into our hands. The sole hope for the perpetuation of non-human species lies in future humans setting up colonies on habitable planets outside our solar system, and taking all of Earth’s animal species with us. Again, this isn’t something we can achieve if we renounce technological progress.


Yuval Noah Harari’s Homo Deus has become a staple read for many in the transhumanist movement. But in the eyes of the world’s animals, we have already become all-powerful gods, who can dole out exploitative cruelty or interventional mercy on a whim. The criticisms of the Deep Green environmentalist movement are increasingly forcing techno-utopians to confront this question: exactly what kind of gods and goddesses will we continue to be to the non-humans of the Earth? If we are going to reconceptualize human exceptionalism from being associated with exceptional human greed and exploitation to being based on exceptional human wisdom and interventionary benevolence, we need to heed the words of both Savulescu and Eisenstein, and pursue a different human narrative. We’re generally kinder towards women, ethnic minorities, sexual minorities, and the working class than we were three hundred years ago, so there is hope that we’re steadily changing course towards a more altruistic track. If every great moral school of thought has an overarching axiom, the one that defines the hedonistic imperative should be this: “Treat less sentient animals the way you would like the posthuman AI god to treat you and your family.”

Hilda Koehler is a fourth-year political science major at the National University of Singapore. She is a proud supporter of the transhumanist movement and aims to do her best to promote transhumanism and progress towards the Singularity.

Augmented Democracy: A Radical Idea to Fix Our Broken Political System Using Artificial Intelligence – Presentation and Announcement by César Hidalgo


César Hidalgo

Editor’s Note: Is AI the future of politics? The U.S. Transhuman(ist) Party features this TED talk, in both English and Spanish, by César Hidalgo, Director of MIT’s Collective Learning group, where he presents the idea of Augmented Democracy – a system to automate and enhance democracy by empowering citizens to create personalized AI representatives to aid in legislative decision-making.

Mr. Hidalgo has launched a contest with cash prizes where participants are encouraged to submit proposals to explore new ways to practice democracy and direct participation in collective decision-making using AI. Below, you can find a statement from Mr. Hidalgo and a link to the contest. We encourage members of the USTP, and non-members, to look into this opportunity to participate and collaborate in building a more just future!  

~ Dinorah Delfin, Director of Admissions and Public Relations, United States Transhuman(ist) Party, March 27, 2019


Source: TED2018 


Source: TED en Español

“Imagine that instead of having a (human) representative that represents you and a million other people, you could have a representative (AI) that represents only you, with your nuanced political views […] liberal on some […] and conservative on others.” – César Hidalgo


What I Learned a Week After Publishing a Talk about Augmented Democracy

Last week I released a talk presenting the idea of Augmented Democracy. Since then, I have been looking at people’s reactions to understand how this idea fits the larger context. Here are three things I would like to highlight:

First, the idea was received much better than I expected. I received many encouraging emails and replies. This honestly surprised me. I’ve noticed that the idea was received especially well in South America and among young people. In fact, it appears that for many people, the idea of augmenting government through data and A.I. technologies seems natural. Of course, people imagine this differently, and some are quick to paint a doomsday scenario. But I think that this is an idea that may be flying under the radar, because the people who are activated by it do not align neatly along the left-right axis of politics. As such, they do not have the shared political identity that is key to left-righters, and hence go undetected. That may change as post-millennials come of age, and it may catch many people by surprise.

Second, despite the talk receiving a large number of views, surprisingly few people visited the FAQ. This is interesting, because it leads to a funny but also important contradiction. Many critical comments were phrased as rhetorical questions of the form: “But how would you do that?!” Yet all of the rhetorical questions I’ve seen so far were answered in the FAQ. What is funny here is that the talk is about the use of technologies to help people augment their cognitive capacities by, for instance, reading text they don’t have time for. Yet the people skeptical about the idea are also people who did not read the text. Of course, this does not mean that there are no questions missing from the FAQ (I have many of these); what it means is that, in the comments I’ve seen, I’ve yet to encounter a question that was not in the FAQ.

Third, going forward my focus – on this front – will be on the Augmented Democracy prize. What I want to do next is to encourage people to imagine future user interfaces and systems of technologically augmented democracy. For that, I am giving up to USD 20,000 in prizes. If I get fewer than 100 proposals, I will give away two team prizes of USD 4,000 and two individual prizes of USD 1,000. If I receive more than 100 proposals, I will open two more team prizes and two more individual prizes. So in the coming days, I will start sharing links directly to the prize page. If you know students, creatives, designers, artists, scientists, and writers, please help me share the prize-related posts.




In Defense of Resurrecting 100 Billion Dead People – Article by Hilda Koehler


Hilda Koehler

Editor’s Note: The U.S. Transhumanist Party / Transhuman Party has published this manifesto by Hilda Koehler to bring attention to a prospect for a more distant future – the technological resurrection of those who have already died. This idea has been posited by such proto-transhumanist thinkers as the Russian Cosmist Nikolai Fyodorov and is involved to various degrees in transhumanist projects such as cryonics, the creation of mindfiles, brain preservation, and the pursuit of various approaches toward mind uploading. There also arise various philosophical dilemmas as to the identities of such hypothetically resurrected individuals. Would they indeed be continuations of the original individuals’ lives, or, rather, close replicas of those individuals, with similar memories and patterns of thinking but distinct “I-nesses” which would come into being upon “resurrection” instead of continuing the “I-nesses” of the original individuals? For a more detailed exploration of this question, please see the essay “How Can I Live Forever?: What Does and Does Not Preserve the Self” (Gennady Stolyarov II, 2010). Nonetheless, even if a “resurrected” individual is a distinct person from the original, it may be valuable to have that person’s memories and patterns of thinking and acting available in the future. However, the question of the continuity of identity is crucial for addressing the issues of justice raised in the article by Ms. Koehler. For example, if a “resurrected” individual is not the same person as the original, it would not appear to be justified to hold that individual responsible for any transgressions committed by the original, previously deceased individual. Thoughts on these and other relevant questions and ideas are welcome in the comments for this article.

~ Gennady Stolyarov II, Chairman, United States Transhumanist Party / Transhuman Party, March 24, 2019

One of the long-term goals of the transhumanist movement is the physical resurrection of every single human being who has passed away since the beginning of Homo sapiens as a species. This would entail using highly advanced technology to resurrect approximately 100 billion people. This sounds implausible. This sounds absolutely mad. But I would argue that it still has to be done: it is not merely a potential project humanity might consider, but an absolutely imperative goal. In my argument below, I will explain some of the reasons why humanity should treat the scientific resurrection of every deceased human being in history as an imperative long-term goal.

If there’s no afterlife, we have to make one for ourselves.

Unless there is some completely unforeseen breakthrough in science providing conclusive evidence that human consciousness can survive outside the brain, it is safe to say that developments in neuroscience have very much discredited all religious notions of the afterlife. If you take an agnostic position about the afterlife and claim that there is still a possibility that a physically-manifested afterlife could exist out there and one day be scientifically proven, fair enough. But I personally believe that we have a higher likelihood of finally being able to travel to a parallel universe only to discover that it is entirely inhabited by sentient Pikachus or clones of Brad Pitt.

An unfortunate position which currently plagues the modern atheist community is one of existential nihilism. The vast majority of atheists acknowledge that the afterlife does not physically exist.

But that’s defying the laws of nature!

And since when has something being unnatural stopped us from recognizing and utilizing its beneficial aspects? Birth control is unnatural; so is laser eye surgery. So are motor vehicles, and so is all of modern medicine. At this point I would like to remind all our readers that there are people out there adamantly trying to stop their children from being vaccinated against measles on the grounds that vaccination is “unnatural”. Perhaps one day our descendants, living in an age when technologically-enabled resurrection is as common as Botox shots or bypass surgeries are today, will look back at us in condescending amusement.

You have a personal stake in it; so does everyone you love. If you had the option to be revived and continue living indefinitely after your initial demise, would you choose it?

You might ask, “What value is there in resurrecting a random Chinese peasant from the 15th century?” but one day in the far future, our descendants who actually have the viable technology to execute this may ask the same of you and your family.

It’s the economy, bruh.

Consider this final practical implication of the mass technological resurrection of 100 billion deceased people: it is going to require a lot of manpower and a lot of resources to carry out. And it is going to be a very long-term process from start to finish. One of the biggest concerns amongst economists right now is the possibility that artificial intelligence will leave the vast majority of the human population unemployed, or underemployed. Imagine the vast number of jobs that could be created if the governments of the world collaborated to undertake a massive resurrection project. We would not just need scientists and engineers to complete the biological process. A major implication our future descendants will have to deal with is the moral re-education of those who lived in more backward societies or time periods. Imparting modern notions of racial and gender equality to the vast majority of people born before the 1900s is going to be no mean feat. So will educating them about the major historical events and technological advancements that have taken place since their passing.

The ultimate reparative justice

The current run-up to the 2020 US presidential elections has reignited the debate about whether or not African-Americans should receive reparations as a form of compensation for the injustices done to their ancestors during the Transatlantic Slave Trade. Shashi Tharoor caused an international stir with his claims that Britain has a moral obligation to pay reparations to India for the economic damage and loss of lives caused by the ravages of British colonialism. However, I would now like to propose an even more radical solution to the question of reparative justice for historical systemic injustices. What if we resurrected all 25 million slaves who were captured and trafficked during the Transatlantic Slave Trade, and then awarded compensation to each one of them? What if we resurrected all 26 million Russians killed during the Nazi invasion of the USSR and offered personal compensation to them, along with the satisfying knowledge that the Nazis were the losers at the end of World War II? Zoltan Istvan has remarked that he himself has Jewish acquaintances who would be happy to see Hitler get resurrected if only to see him get officially tried in court and sentenced (presumably to an exceptionally harsh prison sentence like 6 million years of hard labor). Through resurrecting victims of past injustices, we could pursue a direct form of reparative justice and give them the peace of mind they have been waiting decades, centuries, or even millennia to receive.

Hilda Koehler is a fourth-year political science major at the National University of Singapore. She is a proud supporter of the transhumanist movement and aims to do her best to promote transhumanism and progress towards the Singularity.

James Hughes’ Problems of Transhumanism: A Review (Part 3) – Article by Ojochogwu Abdul



Ojochogwu Abdul

Part 1 | Part 2 | Part 3 | Part 4 | Part 5

Part 3: Liberal Democracy Versus Technocratic Absolutism

“Transhumanists, like Enlightenment partisans in general, believe that human nature can be improved but are conflicted about whether liberal democracy is the best path to betterment. The liberal tradition within the Enlightenment has argued that individuals are best at finding their own interests and should be left to improve themselves in self-determined ways. But many people are mistaken about their own best interests, and more rational elites may have a better understanding of the general good. Enlightenment partisans have often made a case for modernizing monarchs and scientific dictatorships. Transhumanists need to confront this tendency to disparage liberal democracy in favor of the rule by dei ex machina and technocratic elites.” (James Hughes, 2010)

Hughes’ series of essays exploring problems of transhumanism continues with a discussion of the tension between liberal democracy and technocratic absolutism as it exists, or may arise, within the transhumanist movement. As Hughes demonstrates, this problem of socio-political preference between liberalism and despotism turns out to be just one more of the transhumanist contradictions inherited from the movement’s roots in the Enlightenment. Liberalism, an idea which gained much of its vitality during the Enlightenment, developed as an argument for human progress. Hughes re-presents the central thesis, cogently articulated in J.S. Mill’s On Liberty: “if individuals are given liberty they will generally know how to pursue their interests and potentials better than will anyone else. So, society generally will become richer and more intelligent if individuals are free to choose their own life ends rather than if they are forced towards betterment by the powers that be.” This, essentially, was the Enlightenment’s ground for promoting liberalism.

Read More Read More

Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018



Gennady Stolyarov II
Ray Kurzweil

The Stolyarov-Kurzweil Interview has been released at last! Watch it on YouTube here.

U.S. Transhumanist Party Chairman Gennady Stolyarov II posed a wide array of questions for inventor, futurist, and Singularitarian Dr. Ray Kurzweil on September 21, 2018, at RAAD Fest 2018 in San Diego, California. Topics discussed include advances in robotics and the potential for household robots, artificial intelligence and overcoming the pitfalls of AI bias, the importance of philosophy, culture, and politics in ensuring that humankind realizes the best possible future, how emerging technologies can protect privacy and verify the truthfulness of information being analyzed by algorithms, as well as insights that can assist in the attainment of longevity and the preservation of good health – including a brief foray into how Ray Kurzweil overcame his Type 2 Diabetes.

Learn more about RAAD Fest here. RAAD Fest 2019 will occur in Las Vegas during October 3-6, 2019.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentation by Gennady Stolyarov II at RAAD Fest 2018, entitled, “The U.S. Transhumanist Party: Four Years of Advocating for the Future”.

Advocating for the Future – Panel at RAAD Fest 2017 – Gennady Stolyarov II, Zoltan Istvan, Max More, Ben Goertzel, Natasha Vita-More



Gennady Stolyarov II
Zoltan Istvan
Max More
Ben Goertzel
Natasha Vita-More

Gennady Stolyarov II, Chairman of the United States Transhumanist Party, moderated this panel discussion, entitled “Advocating for the Future”, at RAAD Fest 2017 on August 11, 2017, in San Diego, California.

Watch it on YouTube here.

From left to right, the panelists are Zoltan Istvan, Gennady Stolyarov II, Max More, Ben Goertzel, and Natasha Vita-More. With these leading transhumanist luminaries, Mr. Stolyarov discussed subjects such as what the transhumanist movement will look like in 2030, artificial intelligence and sources of existential risk, gamification and the use of games to motivate young people to create a better future, and how to persuade large numbers of people to support life-extension research with at least the same degree of enthusiasm that they display toward the fight against specific diseases.

Learn more about RAAD Fest here.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentations of Gennady Stolyarov II and Zoltan Istvan from the “Advocating for the Future” panel.

The Singularity: Fact or Fiction or Somewhere In-Between? – Article by Gareth John


Gareth John

Editor’s Note: The U.S. Transhumanist Party features this article by our member Gareth John, originally published by IEET on January 13, 2016, as part of our ongoing integration with the Transhuman Party. This article raises various perspectives about the idea of technological Singularity and asks readers to offer their perspectives regarding how plausible the Singularity narrative, especially as articulated by Ray Kurzweil, is. The U.S. Transhumanist Party welcomes such deliberations and assessments of where technological progress may be taking our species and how rapid such progress might be – as well as how subject to human influence and socio-cultural factors technological progress is, and whether a technological Singularity would be characterized predominantly by benefits or by risks to humankind. The article by Mr. John is a valuable contribution to the consideration of such important questions.

~ Gennady Stolyarov II, Chairman, United States Transhumanist Party, January 2, 2019

In my continued striving to disprove the theorem that there’s no such thing as a stupid question, I shall now proceed to ask one. What’s the consensus on Ray Kurzweil’s position concerning the coming Singularity? [1] Do you as transhumanists accept his premise and timeline, or do you feel that a) it’s a fiction, or b) it’s a reality but not one that’s going to arrive anytime soon? Is it as inevitable as Kurzweil suggests, or is it simply millenarian daydreaming in line with the coming Rapture?

According to Wikipedia (yes, I know, but I’m learning as I go along), the first use of the term ‘singularity’ in this context was made by Stanislav Ulam in his 1958 obituary for John von Neumann, in which he mentioned a conversation with von Neumann about the ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’. [2] The term was popularised by mathematician, computer scientist and science fiction author Vernor Vinge, who argues that artificial intelligence, human biological advancement, or brain-computer interfaces could be possible causes of the singularity. [3]  Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain. [4]

Kurzweil predicts the singularity to occur around 2045 [5] whereas Vinge predicts some time before 2030 [6]. In 2012, Stuart Armstrong and Kaj Sotala published a study of AGI predictions by both experts and non-experts and found a wide range of predicted dates, with a median value of 2040. [7] Discussing the level of uncertainty in AGI estimates, Armstrong stated at the 2012 Singularity Summit: ‘It’s not fully formalized, but my current 80% estimate is something like five to 100 years.’ [8]

Speaking for myself, and despite the above, I’m not at all convinced that a Singularity will occur, i.e., one singular event that effectively changes history forever from that precise moment forward. From my (admittedly limited) research on the matter, it seems far more realistic to think of the future in terms of incremental steps made along the way, leading up to major, diverse changes (plural) in the way we as human beings – and indeed all sentient life – live; but try as I might, I cannot get my head around all of these occurring in one near-simultaneous Big Bang.

Surely we have plenty of evidence already that the opposite will most likely be the case? Scientists have been working on AI, nanotechnology, genetic engineering, robotics, et al., for many years and I see no reason to conclude that this won’t remain the case in the years to come. Small steps leading to big changes maybe, but perhaps not one giant leap for mankind in a singular convergence of emerging technologies?

Let’s be straight here: I’m not having a go at Kurzweil or his ideas – the man’s clearly a visionary (at least from my standpoint) and leagues ahead when it comes to intelligence and foresight. I’m simply interested as to what extent his ideas are accepted by the wider transhumanist movement.

There are notable critics (again leagues ahead of me in critically engaging with the subject) who argue against the idea of the Singularity. Nathan Pensky, writing in 2014 says:

It’s no doubt true that the speculative inquiry that informed Kurzweil’s creation of the Singularity also informed his prodigious accomplishment in the invention of new tech. But just because a guy is smart doesn’t mean he’s always right. The Singularity makes for great science-fiction, but not much else. [9]

Other well-informed critics have also dismissed Kurzweil’s central premise, among them Professor Andrew Blake, managing director of Microsoft Research Cambridge, Jaron Lanier, Paul Allen, Peter Murray, Jeff Hawkins, Gordon Moore, Jared Diamond, and Steven Pinker, to name but a few. Even Noam Chomsky has waded in to deny categorically that such an event is possible. Pinker writes:

There is not the slightest reason to believe in the coming singularity. The fact you can visualise a future in your imagination is not evidence that it is likely or even possible… Sheer processing is not a pixie dust that magically solves all your problems. [10]

There are, of course, many more critics, but there are also many supporters, and Kurzweil rarely lets a criticism pass without a fierce rebuttal. Indeed, new interdisciplinary academic fields have been founded in part on the presupposition that the Singularity will occur in line with Kurzweil’s predictions (along with other phenomena that pose the possibility of existential risk). Examples include Nick Bostrom’s Future of Humanity Institute at Oxford University and the Centre for the Study of Existential Risk at Cambridge.

Given the above, and returning to my original question: how do transhumanists, taken as a whole, rate the possibility of an imminent Singularity as described by Kurzweil? Good science or good science fiction? For Kurzweil, it is the pace of change – exponential growth – that will result in a runaway effect, an intelligence explosion, in which smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence. [11] The only way for us to participate in such an event would be to merge with the intelligent machines we are creating.

And I guess this is what is hard for me to fathom. We are creating these machines with all our mixed-up, blinkered, prejudicial, oppositional minds, aims, and values. We as human beings, however intelligent, are an absolutely necessary part of the picture that I think Kurzweil sometimes underestimates. I’m more inclined to agree with Jamais Cascio when he says:

I don’t think that a Singularity would be visible to those going through one. Even the most disruptive changes are not universally or immediately distributed, and late followers learn from the dilemmas of those who had initially encountered the disruptive change. [12]

So I’d love to know what you think. Are you in Kurzweil’s corner, waiting for that singular moment in 2045 when the world as we know it stops for an instant… and then restarts in a glorious new utopian future? Or do you agree with Kurzweil’s timeline but harbour serious fears that the whole ‘glorious new future’ may not be on the cards, and that we’ll all be obliterated by the newborn AGI’s capriciousness or by gray goo? Or are you a moderate, maintaining that a Singularity, while almost certain to occur, will pass unnoticed by those awaiting it? Or do you think it’s all so much baloney?

Whatever your view, I’d really value your input and would love to hear your thoughts on the subject.


1. As stated below, the term Singularity was in use before Kurzweil’s appropriation of it. But as shorthand I’ll refer to his interpretation and predictions relating to it throughout this article.

2. Carvalko, J, 2012, ‘The Techno-human Shell-A Jump in the Evolutionary Gap.’ (Mechanicsburg: Sunbury Press)

3. Ulam, S, 1958, ‘Tribute to John von Neumann’, Bulletin of the American Mathematical Society, 64, #3, part 2, p. 5

4. Vinge, V, 2013, ‘Vernor Vinge on the Singularity’, San Diego State University. Retrieved Nov 2015

5. Kurzweil R, 2005, ‘The Singularity is Near’, (London: Penguin Group)

6. Vinge, V, 1993, ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, originally in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129

7. Armstrong S and Sotala, K, 2012 ‘How We’re Predicting AI – Or Failing To’, in Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster (Pilsen: University of West Bohemia)

8. Armstrong, S, ‘How We’re Predicting AI’, from the 2012 Singularity Conference

9. Pensky, N, 2014, article taken from Pando.

10. Pinker S, 2008, IEEE Spectrum: ‘Tech Luminaries Address Singularity’.

11. Wikipedia, ‘Technological Singularity’. Retrieved Nov 2015.

12. Cascio, J, ‘New FC: Singularity Scenarios’ article taken from Open the Future.

Gareth John lives in Cardiff, UK and is a trainee social researcher with an interest in the intersection of emerging technologies with behavioural and mental health. He has an MA in Buddhist Studies from the University of Bristol. He is also a member of the U.S. Transhumanist Party / Transhuman Party. 



Thank you for the thoughtful article. I’m emailing to comment on the blog post, though I can’t tell when it was written. You say that you don’t believe the singularity will necessarily occur the way Kurzweil envisions, but it seems like you slightly mischaracterize his definition of the term.

I don’t believe that Kurzweil ever meant to suggest that the singularity will simply consist of one single event that will change everything. Rather, I believe he means that the singularity is when no person can make any prediction past that point in time when a $1,000 computer becomes smarter than the entire human race, much like how an event horizon of a black hole prevents anyone from seeing past it.

Given that Kurzweil’s definition isn’t an arbitrary claim that everything changes all at once, I don’t see how anyone can really argue with whether the singularity will happen. After all, at some point in the future, even if it happens much slower than Kurzweil predicts, a $1,000 computer will eventually become smarter than every human. When this happens, I think it’s fair to say no one is capable of predicting the future of humanity past that point. Would you disagree with this?

Even more important is that although many of Kurzweil’s predictions about when certain products will become commercially available to the general public have not come true, all the evidence I’ve seen about the actual trend of the law of accelerating returns seems to be exactly spot on. Maybe this trend will slow down, or stop, but it hasn’t yet. Until it does, I think the law of accelerating returns, and Kurzweil’s singularity, deserve the benefit of the doubt.



Rich Casada

Hi Rich,
Thanks for the comments. The post was written back in 2015 for IEET, and represented a genuine ask from the transhumanist community. At that time my priority was to learn what I could, where I could, and not a lot’s changed for me since – I’m still learning!

I’m not sure I agree that Kurzweil’s definition isn’t a claim that ‘everything changes at once’. In The Singularity is Near, he states:

“So we will be producing about 10²⁶ to 10²⁹ cps of nonbiological computation per year in the early 2030s. This is roughly equal to our estimate for the capacity of all living biological human intelligence … This state of computation in the early 2030s will not represent the Singularity, however, because it does not yet correspond to a profound expansion of our intelligence. By the mid-2040s, however, that one thousand dollars’ worth of computation will be equal to 10²⁶ cps, so the intelligence created per year (at a total cost of about $10¹²) will be about one billion times more powerful than all human intelligence today. That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.” (Kurzweil 2005, pp. 135-36, italics mine).

Kurzweil specifically defines what the Singularity is and isn’t (a profound and disruptive transformation in human capability) and gives a more-or-less precise prediction of when it will occur. A consequence of that may be that we will not ‘be able to make any prediction past that point in time’; however, I don’t believe this is the main thrust of Kurzweil’s argument.
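The arithmetic in the quoted passage can be sanity-checked in a few lines of Python. This is only a back-of-the-envelope sketch: every input figure (cps per $1,000 of hardware, annual spending on computation, the estimate for all biological human intelligence) is Kurzweil’s own number from the quotation, not independent data.

```python
# Figures taken directly from the quoted passage (Kurzweil 2005, pp. 135-36):
unit_cost = 1e3          # $1,000 buys one computer in the mid-2040s
cps_per_unit = 1e26      # calculations per second that $1,000 buys
annual_spend = 1e12      # ~$10^12 spent on computation per year
all_human_cps = 1e26     # Kurzweil's estimate for all biological human intelligence

units_per_year = annual_spend / unit_cost             # 10^9 such computers per year
cps_created_per_year = units_per_year * cps_per_unit  # 10^35 cps created per year

# Ratio of machine intelligence created per year to all human intelligence:
ratio = cps_created_per_year / all_human_cps
print(f"{ratio:.0e}")  # prints 1e+09 -- "about one billion times more powerful"
```

The ratio of 10⁹ confirms that the “one billion times more powerful” figure does follow from Kurzweil’s premises, whatever one thinks of the premises themselves.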

I do, however, agree with what you appear to be postulating (correct me if I’m wrong): that a better definition of a Singularity might indeed simply be ‘when no person can make any prediction past that point in time.’ And, like you, I don’t believe it will be tied to any set point in time. We may be living through a singularity as we speak. There may be many singularities (although, worth noting again, Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence as opposed to other technologies, writing, for example, that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine.” (Kurzweil 2005, p. 9)).

So, having said all that, and in answer to your question of whether there is a point beyond which no one is capable of predicting the future of humanity: I’m not sure. I guess none of us can really be sure until, or unless, it happens.

This is why I believe having the conversation about the ethical implications of these new technologies now is so important. Post-singularity might simply be too late.


Review of Ray Kurzweil’s “How to Create a Mind” – Article by Gennady Stolyarov II


Gennady Stolyarov II

How to Create a Mind (2012) by inventor and futurist Ray Kurzweil sets forth a case for engineering minds that are able to emulate the complexity of human thought (and exceed it) without the need to reverse-engineer every detail of the human brain or of the plethora of content with which the brain operates. Kurzweil persuasively describes the human conscious mind as based on hierarchies of pattern-recognition algorithms which, even when based on relatively simple rules and heuristics, combine to give rise to the extremely sophisticated emergent properties of conscious awareness and reasoning about the world. How to Create a Mind takes readers through an integrated tour of key historical advances in computer science, physics, mathematics, and neuroscience – among other disciplines – and describes the incremental evolution of computers and artificial-intelligence algorithms toward increasing capabilities – leading toward the not-too-distant future (the late 2020s, according to Kurzweil) during which computers would be able to emulate human minds.

Kurzweil’s fundamental claim is that there is nothing which a biological mind is able to do, of which an artificial mind would be incapable in principle, and that those who posit that the extreme complexity of biological minds is insurmountable are missing the metaphorical forest for the trees. Analogously, although a fractal or a procedurally generated world may be extraordinarily intricate and complex in its details, it can arise on the basis of carrying out simple and conceptually fathomable rules. If appropriate rules are used to construct a system that takes in information about the world and processes and analyzes it in ways conceptually analogous to a human mind, Kurzweil holds that the rest is a matter of having adequate computational and other information-technology resources to carry out the implementation. Much of the first half of the book is devoted to the workings of the human mind, the functions of the various parts of the brain, and the hierarchical pattern recognition in which they engage. Kurzweil also discusses existing “narrow” artificial-intelligence systems, such as IBM’s Watson, language-translation programs, and the mobile-phone “assistants” that have been released in recent years by companies such as Apple and Google. Kurzweil observes that, thus far, the most effective AIs have been developed using a combination of approaches, having some aspects of prescribed rule-following alongside the ability to engage in open-ended “learning” and extrapolation upon the information which they encounter. Kurzweil draws parallels to the more creative or even “transcendent” human abilities – such as those of musical prodigies – and observes that the manner in which those abilities are made possible is not too dissimilar in principle.

With regard to some of Kurzweil’s characterizations, however, I question whether they are universally applicable to all human minds – particularly where he mentions certain limitations – or whether they only pertain to some observed subset of human minds. For instance, Kurzweil describes the ostensible impossibility of reciting the English alphabet backwards without error (absent explicit study of the reverse order), because of the sequential nature in which memories are formed. Yet, upon reading the passage in question, I was able to recite the alphabet backwards without error upon my first attempt. It is true that this occurred more slowly than the forward recitation, but I am aware of why I was able to do it; I perceive larger conceptual structures or bodies of knowledge as mental “objects” of a sort – and these objects possess “landscapes” on which it is possible to move in various directions; the memory is not “hard-coded” in a particular sequence. One particular order of movement does not preclude others, even if those others are less familiar – but the key to successfully reciting the alphabet backwards is to hold it in one’s awareness as a single mental object and move along its “landscape” in the desired direction. (I once memorized how to pronounce ABCDEFGHIJKLMNOPQRSTUVWXYZ as a single continuous word; any other order is slower, but it is quite doable as long as one fully knows the contents of the “object” and keeps it in focus.) This is also possible to do with other bodies of knowledge that one encounters frequently – such as dates of historical events: one visualizes them along the mental object of a timeline, visualizes the entire object, and then moves along it or drops in at various points using whatever sequences are necessary to draw comparisons or identify parallels (e.g., which events happened contemporaneously, or which events influenced which others). 
I do not know what fraction of the human population carries out these techniques – as the ability to recall facts and dates has always seemed rather straightforward to me, even as it challenged many others. Yet there is no reason why the approaches for more flexible operation with common elements of our awareness cannot be taught to large numbers of people, as these techniques are a matter of how the mind chooses to process, model, and ultimately recombine the data which it encounters. The more general point in relation to Kurzweil’s characterization of human minds is that there may be a greater diversity of human conceptual frameworks and approaches toward cognition than Kurzweil has described. Can an artificially intelligent system be devised to encompass this diversity? This is certainly possible, since the architecture of AI systems would be more flexible than the biological structures of the human brain. Yet it would be necessary for true artificial general intelligences to be able not only to learn using particular predetermined methods, but also to teach themselves new techniques for learning and conceptualization altogether – just as humans are capable of today.

The latter portion of the book is more explicitly philosophical and devoted to thought experiments regarding the nature of the mind, consciousness, identity, free will, and the kinds of transformations that may or may not preserve identity. Many of these discussions are fascinating and erudite – and Kurzweil often transcends fashionable dogmas by bringing in perspectives such as the compatibilist case for free will and the idea that the experiments performed by Benjamin Libet (that showed the existence of certain signals in the brain prior to the conscious decision to perform an activity) do not rule out free will or human agency. It is possible to conceive of such signals as “preparatory work” within the brain to present a decision that could then be accepted or rejected by the conscious mind. Kurzweil draws an analogy to government officials preparing a course of action for the president to either approve or disapprove. “Since the ‘brain’ represented by this analogy involves the unconscious processes of the neocortex (that is, the officials under the president) as well as the conscious processes (the president), we would see neural activity as well as actual actions taking place prior to the official decision’s being made” (p. 231). Kurzweil’s thoughtfulness is an important antidote to commonplace glib assertions that “Experiment X proved that Y [some regularly experienced attribute of humans] is an illusion” – assertions which frequently tend toward cynicism and nihilism if widely adopted and extrapolated upon. It is far more productive to deploy both science and philosophy toward seeking to understand more directly apparent phenomena of human awareness, sensation, and decision-making – instead of rejecting the existence of such phenomena contrary to the evidence of direct experience. 
Especially if the task is to engineer a mind that has at least the faculties of the human brain, then Kurzweil is wise not to dismiss aspects such as consciousness, free will, and the more elevated emotions, which have been known to philosophers and ordinary people for millennia, and which it became fashionable in some circles to disparage only in the 20th century. Kurzweil’s only vulnerability in this area is that he often resorts to statements that he accepts the existence of these aspects “on faith” (although it does not appear to be a particularly religious faith; it is, rather, more analogous to “leaps of faith” in the sense that Albert Einstein referred to them). Kurzweil does not need to do this, as he himself outlines sufficient logical arguments to be able to rationally conclude that attributes such as awareness, free will, and agency upon the world – which have been recognized across predominant historical and colloquial understandings, irrespective of particular religious or philosophical flavors – indeed actually exist and should not be neglected when modeling the human mind or developing artificial minds.

One of the thought experiments presented by Kurzweil is vital to consider, because the process by which an individual’s mind and body might become “upgraded” through future technologies would determine whether that individual is actually preserved – in terms of the aspects of that individual that enable one to conclude that that particular person, and not merely a copy, is still alive and conscious:

Consider this thought experiment: You are in the future with technologies more advanced than today’s. While you are sleeping, some group scans your brain and picks up every salient detail. Perhaps they do this with blood-cell-sized scanning machines traveling in the capillaries of your brain or with some other suitable noninvasive technology, but they have all of the information about your brain at a particular point in time. They also pick up and record any bodily details that might reflect on your state of mind, such as the endocrine system. They instantiate this “mind file” in a morphological body that looks and moves like you and has the requisite subtlety and suppleness to pass for you. In the morning you are informed about this transfer and you watch (perhaps without being noticed) your mind clone, whom we’ll call You 2. You 2 is talking about his or her life as if s/he were you, and relating how s/he discovered that very morning that s/he had been given a much more durable new version 2.0 body. […] The first question to consider is: Is You 2 conscious? Well, s/he certainly seems to be. S/he passes the test I articulated earlier, in that s/he has the subtle cues of becoming a feeling, conscious person. If you are conscious, then so too is You 2.

So if you were to, uh, disappear, no one would notice. You 2 would go around claiming to be you. All of your friends and loved ones would be content with the situation and perhaps pleased that you now have a more durable body and mental substrate than you used to have. Perhaps your more philosophically minded friends would express concerns, but for the most part, everybody would be happy, including you, or at least the person who is convincingly claiming to be you.

So we don’t need your old body and brain anymore, right? Okay if we dispose of it?

You’re probably not going to go along with this. I indicated that the scan was noninvasive, so you are still around and still conscious. Moreover your sense of identity is still with you, not with You 2, even though You 2 thinks s/he is a continuation of you. You 2 might not even be aware that you exist or ever existed. In fact you would not be aware of the existence of You 2 either, if we hadn’t told you about it.

Our conclusion? You 2 is conscious but is a different person than you – You 2 has a different identity. S/he is extremely similar, much more so than a mere genetic clone, because s/he also shares all of your neocortical patterns and connections. Or should I say s/he shared those patterns at the moment s/he was created. At that point, the two of you started to go your own ways, neocortically speaking. You are still around. You are not having the same experiences as You 2. Bottom line: You 2 is not you.  (How to Create a Mind, pp. 243-244)

This thought experiment is essentially the same one as I independently posited in my 2010 essay “How Can I Live Forever?: What Does and Does Not Preserve the Self”:

Consider what would happen if a scientist discovered a way to reconstruct, atom by atom, an identical copy of my body, with all of its physical structures and their interrelationships exactly replicating my present condition. If, thereafter, I continued to exist alongside this new individual – call him GSII-2 – it would be clear that he and I would not be the same person. While he would have memories of my past as I experienced it, if he chose to recall those memories, I would not be experiencing his recollection. Moreover, going forward, he would be able to think different thoughts and undertake different actions than the ones I might choose to pursue. I would not be able to directly experience whatever he chose to experience (or experiences involuntarily). He would not have my ‘I-ness’ – which would remain mine only.

Thus, Kurzweil and I agree, at least preliminarily, that an identically constructed copy of oneself does not somehow obtain the identity of the original. Kurzweil and I also agree that a sufficiently gradual replacement of an individual’s cells and perhaps other larger functional units of the organism, including a replacement with non-biological components that are integrated into the body’s processes, would not destroy an individual’s identity (assuming it can be done without collateral damage to other components of the body). Then, however, Kurzweil posits the scenario where one, over time, transforms into an entity that is materially identical to the “You 2” as posited above. He writes:

But we come back to the dilemma I introduced earlier. You, after a period of gradual replacement, are equivalent to You 2 in the scan-and-instantiate scenario, but we decided that You 2 in that scenario does not have the same identity as you. So where does that leave us? (How to Create a Mind, p. 247)

Kurzweil and I are still in agreement that “You 2” in the gradual-replacement scenario could legitimately be a continuation of “You” – but our views diverge when Kurzweil states, “My resolution of the dilemma is this: It is not true that You 2 is not you – it is you. It is just that there are now two of you. That’s not so bad – if you think you are a good thing, then two of you is even better” (p. 247). I disagree. If I (via a continuation of my present vantage point) cannot have the direct, immediate experiences and sensations of GSII-2, then GSII-2 is not me, but rather an individual with a high degree of similarity to me, possessing a separate vantage point and separate physical processes, including consciousness. I might not mind the existence of GSII-2 per se, but I would mind if that existence were posited as a sufficient reason to be comfortable with my present instantiation ceasing to exist.

Although Kurzweil correctly reasons through many of the initial hypotheses and intermediate steps leading from them, he ultimately arrives at a “pattern” view of identity, with which I differ. I hold, rather, a “process” view of identity, where a person’s “I-ness” remains the same if “the continuity of bodily processes is preserved even as their physical components are constantly circulating into and out of the body. The mind is essentially a process made possible by the interactions of the brain and the remainder of the nervous system with the rest of the body. One’s ‘I-ness’, being a product of the mind, is therefore reliant on the physical continuity of bodily processes, though not necessarily an unbroken continuity of higher consciousness.” (“How Can I Live Forever?: What Does and Does Not Preserve the Self”) If only a pattern of one’s mind were preserved and re-instantiated, the result might be indistinguishable from the original person to an external observer, but the original individual would not directly experience the re-instantiation.
It is not the content of one’s experiences or personality that is definitive of “I-ness” – but rather the more basic fact that one experiences anything as oneself and not from the vantage point of another individual; this requires the same bodily processes that give rise to the conscious mind to operate without complete interruption. (The extent of permissible partial interruption is difficult to determine precisely and remains open to debate; general anesthesia is not sufficient to disrupt I-ness, but what about cryonics or shorter-term “suspended animation”?) For this reason, the pursuit of biological life extension of one’s present organism remains crucial; one cannot rely merely on one’s “mindfile” being re-instantiated in a hypothetical future after one’s demise. The future of medical care and life extension may certainly involve non-biological enhancements and upgrades, but in the context of augmenting an existing organism, not disposing of that organism.

How to Create a Mind is highly informative for artificial-intelligence researchers and laypersons alike, and it merits revisiting as a reference for useful ideas regarding how (at least some) minds operate. It facilitates thoughtful consideration of both the practical methods and the more fundamental philosophical implications of the quest to improve the flexibility and autonomy with which our technologies interact with the external world and augment our capabilities. At the same time, as Kurzweil acknowledges, those technologies often lead us to “outsource” many of our own functions to them – as is the case, for instance, with vast amounts of human memories and creations residing on smartphones and in the “cloud”. If the timeframes of arrival of human-like AI capabilities match those described by Kurzweil in his characterization of the “law of accelerating returns”, then questions regarding what constitutes a mind sufficiently like our own – and how we will treat those minds – will become ever more salient in the proximate future. It is important, however, for interest in advancing this field to become more widespread, and for political, cultural, and attitudinal barriers to its advancement to be lifted – for, unlike Kurzweil, I do not consider the advances of technology to be inevitable or unstoppable. We humans retain the responsibility of persuading enough other humans that the pursuit of these advances is worthwhile and will greatly improve the length and quality of our lives, while enhancing our capabilities and attainable outcomes. Every movement along an exponential growth curve is the result of a deliberate push upward by the minds of the creators of progress, using the machines they have built.

Gennady Stolyarov II is Chairman of the United States Transhumanist Party. Learn more about Mr. Stolyarov here.

This article is made available pursuant to the Creative Commons Attribution 4.0 International License, which requires that credit be given to the author, Gennady Stolyarov II (G. Stolyarov II).