Tag: Ray Kurzweil

Brent Nally Interviews U.S. Transhumanist Party Chairman Gennady Stolyarov II at Sierra Sciences



On October 12, 2019, U.S. Transhumanist Party Chairman Gennady Stolyarov II spoke with Brent Nally at the venerable Sierra Sciences headquarters in Reno, Nevada. They discussed developments in the U.S. Transhumanist Party, fitness, health, and longevity – among other subjects.

Watch the interview here.

Become a member of the U.S. Transhumanist Party / Transhuman Party here for free, no matter where you reside.

Show Notes by Brent Nally

3:15 Brent Nally’s RAADfest 2019 YouTube playlist: https://www.youtube.com/playlist?list=PLGjySL94COVSO3hcnpZq-jCcgnUQIaALQ

4:24 Go to RAADfest primarily to network: https://www.raadfest.com/

6:28 People Unlimited website: https://peopleunlimitedinc.com/

6:30 The Coalition for Radical Life Extension website: https://www.rlecoalition.com/

7:20 We need to increase our healthspan in order to increase our lifespan.

9:01 Watch Bill Andrews and Gennady discussing transhumanism and radical life extension: https://www.youtube.com/watch?v=U7GJrVBp8FQ

13:35 Gennady just ran the Lakeside Marathon at Lake Tahoe the day before, October 11, 2019.

19:30 Do whatever type of exercise you enjoy. Don’t push yourself to the point of exhaustion where you’re throwing up, getting injured, or not having fun, because that’s bad for your telomeres.

24:10 Audit your own thoughts daily as a meditation to recognize your limiting beliefs.

26:15 How Gennady became Chairman of the U.S. Transhumanist Party

29:35 How the 9 USTP presidential candidates competed using a “ranked preference” approach

32:07 The 3rd version of the Transhumanist Bill of Rights currently contains 43 articles: https://transhumanist-party.org/tbr-3/

34:27 https://transhumanist-party.org/

35:40 We should be more concerned with ideas rather than people. We’re in an “ideas” economy.

36:28 Politicians are more followers than leaders.

40:27 3 core ideals of USTP: https://transhumanist-party.org/values/

41:40 Gennady shares more details about how his role as chairman may evolve.

44:28 Positives and negatives of centralization and decentralization

49:15 Sophia the AI robot: https://www.hansonrobotics.com/sophia/

51:30 Truth is sometimes stranger than fiction. Try to educate others about transhumanism.

55:30 Ray Kurzweil points out that technology growth and stock market growth are separate. Here’s Brent talking to Ray in February 2011: https://www.youtube.com/watch?v=3TuVn…

57:37 Join the USTP: https://transhumanist-party.org/membership/

58:40 USTP Twitter: https://twitter.com/USTranshumanist; USTP LinkedIn: https://www.linkedin.com/company/19118856/; Gennady’s online magazine – The Rational Argumentator: http://www.rationalargumentator.com/index/

1:01:14 Gennady, Bill Andrews, and Brent ran about 8.4 miles the next morning above Carson City, NV on the Upper Clear Creek Trail.

1:02:15 Don’t forget to subscribe, like, comment on this video, and share on your social media accounts!

Empowering Human Musical Creation through Machines, Algorithms, and Artificial Intelligence – Essay by Gennady Stolyarov II in Issue 2 of the INSAM Journal



Note from Gennady Stolyarov II, Chairman, United States Transhumanist Party / Transhuman Party: For those interested in my thoughts on the connections among music, technology, algorithms, artificial intelligence, transhumanism, and the philosophical motivations behind my own compositions, I have had a peer-reviewed paper, “Empowering Human Musical Creation through Machines, Algorithms, and Artificial Intelligence”, published in Issue 2 of the INSAM Journal of Contemporary Music, Art, and Technology. This is a rigorous academic publication but also freely available and sharable via a Creative Commons Attribution Share-Alike license – just as academic works ought to be – so I was honored by the opportunity to contribute my writing. My essay features discussions of Plato and Aristotle, Kirnberger’s and Mozart’s musical dice games, the AI-generated compositions of Ray Kurzweil and David Cope, and the recently completed “Unfinished” Symphony of Franz Schubert, whose second half was made possible by the AI / human collaboration between Huawei and composer Lucas Cantor. Even Conlon Nancarrow, John Cage, Iannis Xenakis, and Karlheinz Stockhausen make appearances in this paper. Look in the bibliography for YouTube and downloadable MP3 links to all of my compositions that I discuss, as this paper is intended to be a multimedia experience.

Music, technology, and transhumanism – all in close proximity in the same paper and pointing the way toward the vast proliferation of creative possibilities in the future as the distance between the creator’s conception of a musical idea and its implementation becomes ever shorter.

You can find my paper on pages 81-99 of Issue 2.

Read “Empowering Human Musical Creation through Machines, Algorithms, and Artificial Intelligence” here.

Read the full Issue 2 of the INSAM Journal here.

Abstract: “In this paper, I describe the development of my personal research on music that transcends the limitations of human ability. I begin with an exploration of my early thoughts regarding the meaning behind the creation of a musical composition according to the creator’s intentions and how to philosophically conceptualize the creation of such music if one rejects the existence of abstract Platonic Forms. I then explore the transformation of my own creative process through the introduction of software capable of playing back music in exact accord with the inputs provided to it, while enabling the creation of music that remains intriguing to the human ear even though the performance of it may sometimes be beyond the ability of humans. Subsequently, I describe my forays into music generated by earlier algorithmic systems such as the Musikalisches Würfelspiel and narrow artificial-intelligence programs such as WolframTones and my development of variations upon artificially generated themes in essential collaboration with the systems that created them. I also discuss some of the high-profile, advanced examples of AI-human collaboration in musical creation during the contemporary era and raise possibilities for the continued role of humans in drawing out and integrating the best artificially generated musical ideas. I express the hope that the continued advancement of musical software, algorithms, and AI will amplify human creativity by narrowing and ultimately eliminating the gap between the creator’s conception of a musical idea and its practical implementation.”

Video of Cyborg and Transhumanist Forum at the Nevada State Legislature – May 15, 2019



Watch the video containing 73 minutes of excerpts from the Cyborg and Transhumanist Forum, held on May 15, 2019, at the Nevada State Legislature Building.

The Cyborg and Transhumanist Forum at the Nevada Legislature on May 15, 2019, marked a milestone for the U.S. Transhumanist Party and the Nevada Transhumanist Party. This was the first time that an official transhumanist event was held within the halls of a State Legislature, in one of the busiest areas of the building, within sight of the rooms where legislative committees met. The presenters were approached by dozens of individuals – a few legislators and many lobbyists and staff members. The reaction was predominantly either positive or at least curious; there was no hostility and only mild disagreement from a few individuals. Generally, the outlook within the Legislative Building seems to be in favor of individual autonomy to pursue truly voluntary microchip implants. The testimony of Anastasia Synn at the Senate Judiciary Committee on April 26, 2019, in opposition to Assembly Bill 226, is one of the most memorable episodes of the 2019 Legislative Session for many who heard it. It certainly affected the outcome for Assembly Bill 226, which was subsequently further amended to restore the original scope of the bill and only apply the prohibition to coercive microchip implants, while specifically exempting microchip implants voluntarily received by an individual from the prohibition. The scope of the prohibition was also narrowed by removing the reference to “any other person” and applying the prohibition to an enumerated list of entities who may not require others to be microchipped: state officers and employees, employers as a condition of employment, and persons in the business of insurance or bail. These changes alleviated the vast majority of the concerns within the transhumanist and cyborg communities about Assembly Bill 226.

 

From left to right: Gennady Stolyarov II, Anastasia Synn, and Ryan Starr (R. Nicholas Starr)

This Cyborg and Transhumanist Forum comes at the beginning of an era of transhumanist political engagement with policymakers and those who advise them. It was widely accepted by the visitors to the demonstration tables that technological advances are accelerating, and that policy decisions regarding technology should only be made with adequate knowledge about the technology itself – working on the basis of facts and not fears or misconceptions that arise from popular culture and dystopian fiction. Ryan Starr shared his expertise on the workings and limitations of both NFC/RFID microchips and GPS technology and explained that cell phones are already far more trackable than microchips ever could be (based on their technical specifications and how those specifications could potentially be improved in the future). U.S. Transhumanist Party Chairman Gennady Stolyarov II introduced visitors to the world of transhumanist literature by bringing books for display – including writings by Aubrey de Grey, Bill Andrews, Ray Kurzweil, Jose Cordeiro, Ben Goertzel, Phil Bowermaster, and Mr. Stolyarov’s own book “Death is Wrong” in five languages. It appears that there is more sympathy for transhumanism within contemporary political circles than might appear at first glance; it is often transhumanists themselves who overestimate the negativity of the reaction they expect to receive. Nobody picketed the event or even called the presenters names; transhumanist ideas, expressed in a civil and engaging way – with an emphasis on practical applications that are here today or due to arrive in the near future – will be taken seriously when there is an opening to articulate them.

The graphics for the Cyborg and Transhumanist Forum were created by Tom Ross, the U.S. Transhumanist Party Director of Media Production.

Become a member of the U.S. Transhumanist Party / Transhuman Party free of charge, no matter where you reside.

References

• Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018
• “A Word on Implanted NFC Tags” – Article by Ryan Starr
• Assembly Bill 226, Second Reprint – the version of the bill that passed the Senate on May 23, 2019
• Amendment to Assembly Bill 226 to essentially remove the prohibition against voluntary microchip implants
• Future Grind Podcast
• Synnister – Website of Anastasia Synn

James Hughes’ Problems of Transhumanism: A Review (Part 5) – Article by Ojochogwu Abdul



Part 1 | Part 2 | Part 3 | Part 4 | Part 5

Part 5: Belief in Progress vs. Rational Uncertainty

The Enlightenment, with its confident efforts to fashion a science of man, was archetypal of the belief, and the quest, that humankind will eventually achieve lasting peace and happiness. In what some interpret as a reformulation of Christianity’s teleological salvation history in which the People of God will be redeemed at the end of days and with the Kingdom of Heaven established on Earth, most Enlightenment thinkers believed in the inevitability of human political and technological progress, secularizing the Christian conception of history and eschatology into a conviction that humanity would, using a system of thought built on reason and science, be able to continually improve itself. As portrayed by Carl Becker in his 1933 book The Heavenly City of the Eighteenth-Century Philosophers, the philosophes “demolished the Heavenly City of St. Augustine only to rebuild it with more up-to-date materials.” Whether this Enlightenment humanist view of “progress” amounted merely to a recapitulation of the Christian teleological vision of history, or if Enlightenment beliefs in “continual, linear political, intellectual, and material improvement” reflected, as James Hughes posits, “a clear difference from the dominant Christian historical narrative in which little would change until the End Times and Christ’s return”, the notion, in any case, of a collective progress towards a definitive end-point was one that remained unsupported by the scientific worldview. The scientific worldview, as Hughes reminds us in the opening paragraph of this essay within his series, does not support historical inevitability, only uncertainty. “We may annihilate ourselves or regress,” he says, and “Even the normative judgment of what progress is, and whether we have made any, is open to empirical skepticism.”

Here we are introduced to a conflict that has existed, at least since the Enlightenment, between a view of progressive optimism and one of radical uncertainty. Building on the Enlightenment’s faith in the inevitability of political and scientific progress, the idea of an end-point salvation moment for humankind fuelled all the great Enlightenment ideologies that followed, flowing down, as Hughes traces, through Comte’s “positivism” and Marxist theories of historical determinism to neoconservative triumphalism about the “end of history” in democratic capitalism. Communists envisaged that end-point as a post-capitalist utopia that would finally resolve the class struggle, which they conceived as the true engine of history. This vision also contained the 20th-century project to build the Soviet Man, a being of extra-human capacities, for as Trotsky had predicted, after the Revolution, “the average human type will rise to the heights of an Aristotle, a Goethe, or a Marx. And above this ridge new peaks will rise.” For 20th-century free-market liberals, by contrast, this End of History had arrived with the final triumph of liberal democracy, with the entire world bound to be swept along in its course. Events, though, especially so far in the 21st century, appear to prove this view wrong.

Moreover, this belief in the historical inevitability of progress has, as Hughes convincingly argues, also always been locked in conflict with “the rationalist, scientific observation that humanity could regress or disappear altogether.” Enlightenment pessimism, or at least realism, has, over the centuries, proven a stubborn constraint on Enlightenment optimism. Hughes, citing Henry Vyberg, reminds us that there were, after all, even French Enlightenment thinkers within that same era who rejected the belief in linear historical progress and proposed historical cycles or even decadence instead. That aside, contemporary commentators like John Gray would even argue that the Enlightenment’s own efforts in the quest for progress unfortunately issued in, for example, the racist pseudo-science of Voltaire and Hume, while all endeavours to establish the rule of reason have resulted in bloody fanaticisms, from Jacobinism to Bolshevism, which equaled the worst atrocities attributable to religious believers. Horrors like racism and anti-Semitism, in Gray’s verdict, “…are not incidental defects in Enlightenment thinking. They flow from some of the Enlightenment’s central beliefs.”

Even Darwinism’s theory of natural selection was, according to Hughes, “suborned by the progressive optimistic thinking of the Enlightenment and its successors to the doctrine of inevitable progress, aided in part by Darwin’s own teleological interpretation.” The problem, however, is that the scientific worldview finds no support for “progress” in the theory of natural selection, only that humanity, as Hughes plainly states, “like all creatures, is on a random walk through a mine field, that human intelligence is only an accident, and that we could easily go extinct as many species have done.” Gray, for example, rebukes Darwin, who wrote: “As natural selection works solely for the good of each being, all corporeal and mental endowments will tend to progress to perfection.” Natural selection, however, does not work solely for the good of each being, a fact Darwin himself elsewhere acknowledged. Nonetheless, it has continually proven rather difficult for people to resist the impulse to identify evolution with progress, and an extended downside of this attitude is the equally hard-to-resist temptation to invoke evolution in rationalizing views as dangerous as Social Darwinism and practices as horrible as eugenics.

Many skeptics therefore hold, rationally, that scientific utopias and promises to transform the human condition deserve the deepest suspicion. Reason is but a frail reed; all gains of moral and political progress are and will always remain subject to reversal; and civilization could eventually just collapse. Historical events and experiences have therefore caused faith in the inevitability of progress to wax and wane over time. Hughes notes that among several millenarian movements and New Age beliefs, faith that the world is headed for a millennial age can still be found, just as it exists in techno-optimist futurism. Nevertheless, he makes us see that “since the rise and fall of fascism and communism, and the mounting evidence of the dangers and unintended consequences of technology, there are few groups that still hold fast to an Enlightenment belief in the inevitability of conjoined scientific and political progress.” Within the transhumanist community, however, such faith in progress is still held by many, marking one camp in the continuation of the Enlightenment-bequeathed conflict between transhumanist optimism and views of a radically uncertain future.

As on several occasions in the past, humanity is currently being spun yet another “End of History” narrative: one of a posthuman future. Yuval Harari, for instance, argues in Homo Deus that emerging technologies and new scientific discoveries are undermining the foundations of Enlightenment humanism, although as he proceeds with his presentation he also proves himself unable to avoid one of the defining tropes of Enlightenment humanist thinking, i.e., the deeply entrenched tendency to conceive of human history in teleological terms: fundamentally as a matter of collective progress towards a definitive end-point. This time, though, our era’s glorious “End of History” salvation moment is to be ushered in, not by a politico-economic system, but by a nascent techno-elite based in Silicon Valley, a cluster steeped in a tech-utopianism which has at its core the idea that the new technologies emerging there can steer humanity towards a definitive break-point in our history, the Singularity. Among believers in this coming Singularity, transhumanists, having inherited the tension between Enlightenment convictions in the inevitability of progress and, in Hughes’ words, the “Enlightenment’s scientific, rational realism that human progress or even civilization may fail”, now struggle with a renewed contradiction. And here the contrast Hughes intends to portray gains sharpness: transhumanists today are “torn between their Enlightenment faith in inevitable progress toward posthuman transcension and utopian Singularities” on the one hand, and, on the other, their “rational awareness of the possibility that each new technology may have as many risks as benefits and that humanity may not have a future.”

The risks of new technologies, even those that do not threaten the survival of humanity as a species, may yet have an undesirable impact on the mode and trajectory of our extant civilization. Henry Kissinger, in his 2018 article “How the Enlightenment Ends”, expressed his perception that technology, which is rooted in Enlightenment thought, is now superseding the very philosophy that is its fundamental principle. The universal values proposed by the Enlightenment philosophes, as Kissinger points out, could be spread worldwide only through modern technology, but at the same time, such technology has ended or accomplished the Enlightenment and is now going its own way, creating the need for a new guiding philosophy. Kissinger argues specifically that AI may spell the end of the Enlightenment itself, issuing grave warnings about an AI-led technological revolution whose “culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.” By way of analogy to how the printing press allowed the Age of Reason to supplant the Age of Religion, he proposes that the modern counterpart of this revolutionary process is the rise of intelligent AI that will supersede human ability and put an end to the Enlightenment. Kissinger further outlines his three areas of concern regarding the trajectory of artificial intelligence research: AI may achieve unintended results; in achieving intended goals, AI may change human thought processes and human values; and AI may reach intended goals but be unable to explain the rationale for its conclusions. Kissinger’s thesis, of course, has attracted both support and criticism from different quarters. Reacting to Kissinger, Yuk Hui, for example, in “What Begins After the End of the Enlightenment?” maintained that “Kissinger is wrong—the Enlightenment has not ended.” Rather, “modern technology—the support structure of Enlightenment philosophy—has become its own philosophy”, with the universalizing force of technology becoming itself the political project of the Enlightenment.

Transhumanists, as mentioned already, reflect the continuity of some of these contradictions between belief in progress and uncertainty about the human future. Hughes shows us nonetheless that there are some interesting historical turns suggesting further directions that this mood has taken. In the 1990s, Hughes recalls, “transhumanists were full of exuberant Enlightenment optimism about unending progress.” As an example, Hughes cites Max More’s 1998 Extropian Principles, which defined “Perpetual Progress” as “the first precept of their brand of transhumanism.” Over time, however, as Hughes relates, More himself has had cause to temper this optimism, recasting this driving principle as one of “desirability” – more a normative goal than a faith in historical inevitability. “History”, More would say in 2002, “since the Enlightenment makes me wary of all arguments to inevitability…”

Rational uncertainty among transhumanists hence makes many of them refrain from arguing for the inevitability of transhumanism as a matter of progress. Further, there are indeed several possible factors which could deter the transhumanist idea and drive for “progress” from translating into reality: a neo-Luddite revolution, a rising preference for rural life, mass disenchantment with technological addiction and an increased tendency toward digital detox, nostalgia, disillusionment with modern civilization and a “return-to-innocence” counter-cultural movement, neo-Romanticism, a pop-culture allure of and longing for a Tolkien-esque world, cyclical thinking, conservatism, traditionalism, etc. The alternative, backlash, and antagonistic forces are myriad. Even within transhumanism, the anti-democratic and socially conservative Neoreactionary movement, with its rejection of the view that history shows inevitable progression towards greater liberty and enlightenment, is gradually (and rather disturbingly) gaining a contingent. As another point for rational uncertainty, Hughes discusses the three critiques – futurological, historical, and anthropological – of transhumanist and Enlightenment faith in progress that Philippe Verdoux offers, in which the anthropological argument holds that “pre-moderns were probably as happy or happier than we moderns.” After all, Rousseau, himself a French Enlightenment thinker, “is generally seen as having believed in the superiority of the ‘savage’ over the civilized.” Perspectives like these could stir anti-modern, anti-progress sentiments in people’s hearts and minds.

Further demonstrating why transhumanists must not be obstinate about the idea of inevitability, Hughes refers to Greg Burch’s 2001 work “Progress, Counter-Progress, and Counter-Counter-Progress”, in which Burch expounded on the Enlightenment and transhumanist commitment to progress as a commitment “to a political program, fully cognizant that there are many powerful enemies of progress and that victory was not inevitable.” Moreover, the possible failure to realize the goals of progress might not even result from the actions of “enemies” in that antagonistic sense of the word, for there is also the scenario, as the 2006 movie Idiocracy depicts, of a future dystopian society based on dysgenics, one in which, going by expectations and trends of the 21st century, the most intelligent humans reproduce less and eventually fail to have children while the least intelligent reproduce prolifically. Through the process of natural selection, generations are created that collectively become increasingly dumber and more virile with each passing century, leading to a future world plagued by anti-intellectualism, bereft of intellectual curiosity, social responsibility, and coherent notions of justice and human rights, and manifesting several other traits of cultural degeneration. This remains a possibility for our future world.

So while for many extropians and transhumanists perpetual progress was an unstoppable train, in response to which “one either got on board for transcension or consigned oneself to the graveyard”, other transhumanists, Hughes comments, especially in response to certain historical experiences (the 2000 dot-com crash, for example), have seen reason to increasingly temper their expectations about progress. In Hughes’s appraisal, while some transhumanists “still press for technological innovation on all fronts and oppose all regulation, others are focusing on reducing the civilization-ending potentials of asteroid strikes, genetic engineering, artificial intelligence and nanotechnology.” Some realism is hence needed to keep in constant check the excesses of contemporary secular technomillennialism as contained in some transhumanist strains.

Hughes presents Nick Bostrom’s 2001 essay “Analyzing Human Extinction Scenarios and Related Hazards” as one influential example of this anti-millennial realism, a text in which Bostrom, after outlining scenarios that could either end the existence of the human species or have us evolve into dead-ends, addressed not just how we can avoid extinction and ensure that there are descendants of humanity, but also how we can ensure that we will be proud to claim them. Subsequently, Bostrom has gone on to produce work on “catastrophic risk estimation” at the Future of Humanity Institute at Oxford. Hughes seems to favour this approach, for he takes care to indicate that it has also been adopted as a programmatic focus for the Institute for Ethics and Emerging Technologies (IEET), which he directs, as well as for the transhumanist non-profit, the Lifeboat Foundation. Transhumanists who listen to Bostrom, as we can deduce from Hughes, are being urged to take a more critical approach concerning technological progress.

With the availability of this rather cautious attitude, a new tension, Hughes reports, now plays out between eschatological certainty and pessimistic risk assessment, mainly in the debate over the Singularity. For the likes of Ray Kurzweil (2005), representing the camp of technomillennial, eschatological certainty, the accelerating trendlines towards a utopian merger of enhanced humanity and godlike artificial intelligence are unstoppable, a claim Kurzweil supports by referring to the steady exponential march of technological progress through (and despite) wars and depressions. Dystopian and apocalyptic predictions of how humanity might fare under superintelligent machines (extinction, inferiority, and the like) are, in the assessment of Hughes, but minimally entertained by Kurzweil, since to the techno-prophet we are bound to eventually integrate with these machines into apotheosis.

The IEET platform has thus taken on the responsibility of serving as a site for teasing out this tension between technoprogressive “optimism of the will and pessimism of the intellect,” as Hughes echoes Antonio Gramsci. On the one hand, Hughes explains, “we have championed the possibility of, and evidence of, human progress. By adopting the term “technoprogressivism” as our outlook, we have placed ourselves on the side of Enlightenment political and technological progress.” And yet on the other hand, he continues, “we have promoted technoprogressivism precisely in order to critique uncritical techno-libertarian and futurist ideas about the inevitability of progress. We have consistently emphasized the negative effects that unregulated, unaccountable, and inequitably distributed technological development could have on society” (one feels tempted to call out Landian accelerationism at this point). Technoprogressivism, the guiding philosophy of IEET, serves as a principle which insists that technological progress needs to be consistently conjoined with, and dependent on, political progress, whilst recognizing that neither is inevitable.

In bringing the essay towards a close, Hughes mentions his own and a number of other IEET-led technoprogressive publications, among them that of Verdoux, who, despite his futurological, historical, and anthropological critique of transhumanism, still goes on to argue for transhumanism on moral grounds (free from the language of “Marxism’s historical inevitabilism or utopianism, and cautious of the tragic history of communism”), and “as a less dangerous course than any attempt at “relinquishing” technological development, but only after the naive faith in progress has been set aside.” Unfortunately, however, the “rational capitulationism” to the transhumanist future that Verdoux offers, according to Hughes, is “not something that stirs men’s souls.” Hughes hence, while admitting our need “to embrace these critical, pessimistic voices and perspectives”, calls on us to likewise heed the need to “also re-discover our capacity for vision and hope.” This need for optimism that humans “can” collectively exercise foresight and invention, and peacefully deliberate our way to a better future, rather than yielding to narratives that would lead us into the traps of utopian and apocalyptic fatalism, has been one of the motivations behind the creation of the “technoprogressive” brand. The brand, Hughes presents, has been of help in distinguishing “Enlightenment optimism about the “possibility” of human political, technological and moral progress from millennialist techno-utopian inevitabilism.”

Presumably informed by this technoprogressive philosophy, the new version of the Transhumanist Declaration, adopted by Humanity+ in 2009, indicated a shift from some of the language of the 1998 version and conveyed a more reflective, critical, realistic, utilitarian, “proceed with caution” and “act with wisdom” tone with respect to the transhumanist vision for humanity’s progress. This version of the declaration, though relatively sobered, remains inspiring nonetheless. Hughes closes the essay with a reminder of our need to stay aware of the diverse ways in which our indifferent universe threatens our existence, of how our growing powers come with unintended consequences, and of why mindfulness in all our actions remains the best approach for navigating our way towards progress in a radically uncertain future.

In conclusion, following Hughes’ objectives in this series, it can be suggested that more studies of the Enlightenment (European and global) are desirable, especially for their potential to furnish us with a richer understanding of a number of problems within contemporary transhumanism that sprout from its roots deep in the Enlightenment. Interest and scholarship in Enlightenment studies, fortunately, seem to be experiencing a revival, and with increasing diversity of perspective, thereby presenting transhumanism with a variety of paths through which to explore and gain context for connected issues. Seeking insight into some foundations of transhumanism’s problems could take several paths: examining internal contradictions within the Enlightenment; following the approach of Max Horkheimer and Theodor Adorno’s “Dialectic of Enlightenment”; assessing opponents of the Enlightenment, as found, for example, in Isaiah Berlin’s notion of the “Counter-Enlightenment”; investigating a rather radical strain of the Enlightenment, as presented in Jonathan Israel’s “Radical Enlightenment”; and grappling with the nature of the relationships between transhumanism and the other heirs of both the Enlightenment and the Counter-Enlightenment today. Again, and significantly, serious attention needs to be paid, now and going forward, to jealously guarding transhumanism against ultimately falling into the hands of the Dark Enlightenment.


Ojochogwu Abdul is the founder of the Transhumanist Enlightenment Café (TEC), is the co-founder of the Enlightenment Transhumanist Forum of Nigeria (H+ Nigeria), and currently serves as a Foreign Ambassador for the U.S. Transhumanist Party in Nigeria. 

Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018



The Stolyarov-Kurzweil Interview has been released at last! Watch it on YouTube here.

U.S. Transhumanist Party Chairman Gennady Stolyarov II posed a wide array of questions for inventor, futurist, and Singularitarian Dr. Ray Kurzweil on September 21, 2018, at RAAD Fest 2018 in San Diego, California. Topics discussed include advances in robotics and the potential for household robots, artificial intelligence and overcoming the pitfalls of AI bias, the importance of philosophy, culture, and politics in ensuring that humankind realizes the best possible future, how emerging technologies can protect privacy and verify the truthfulness of information being analyzed by algorithms, as well as insights that can assist in the attainment of longevity and the preservation of good health – including a brief foray into how Ray Kurzweil overcame his Type 2 Diabetes.

Learn more about RAAD Fest here. RAAD Fest 2019 will occur in Las Vegas during October 3-6, 2019.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentation by Gennady Stolyarov II at RAAD Fest 2018, entitled, “The U.S. Transhumanist Party: Four Years of Advocating for the Future”.

The Singularity: Fact or Fiction or Somewhere In-Between? – Article by Gareth John



Editor’s Note: The U.S. Transhumanist Party features this article by our member Gareth John, originally published by IEET on January 13, 2016, as part of our ongoing integration with the Transhuman Party. This article raises various perspectives about the idea of technological Singularity and asks readers to offer their perspectives regarding how plausible the Singularity narrative, especially as articulated by Ray Kurzweil, is. The U.S. Transhumanist Party welcomes such deliberations and assessments of where technological progress may be taking our species and how rapid such progress might be – as well as how subject to human influence and socio-cultural factors technological progress is, and whether a technological Singularity would be characterized predominantly by benefits or by risks to humankind. The article by Mr. John is a valuable contribution to the consideration of such important questions.

~ Gennady Stolyarov II, Chairman, United States Transhumanist Party, January 2, 2019


In my continued striving to disprove the theorem that there’s no such thing as a stupid question, I shall now proceed to ask one. What’s the consensus on Ray Kurzweil’s position concerning the coming Singularity? [1] Do you as transhumanists accept his premise and timeline, or do you feel that a) it’s a fiction, or b) it’s a reality but not one that’s going to arrive anytime soon? Is it as inevitable as Kurzweil suggests, or is it simply millenarian daydreaming in line with the coming Rapture?

According to Wikipedia (yes, I know, but I’m learning as I go along), the first use of the term ‘singularity’ in this context was made by Stanislav Ulam in his 1958 obituary for John von Neumann, in which he mentioned a conversation with von Neumann about the ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’. [2] The term was popularised by mathematician, computer scientist and science fiction author Vernor Vinge, who argues that artificial intelligence, human biological advancement, or brain-computer interfaces could be possible causes of the singularity. [3]  Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain. [4]

Kurzweil predicts the singularity to occur around 2045 [5] whereas Vinge predicts some time before 2030 [6]. In 2012, Stuart Armstrong and Kaj Sotala published a study of AGI predictions by both experts and non-experts and found a wide range of predicted dates, with a median value of 2040. [7] Discussing the level of uncertainty in AGI estimates, Armstrong stated at the 2012 Singularity Summit: ‘It’s not fully formalized, but my current 80% estimate is something like five to 100 years.’ [8]

Speaking for myself, and despite the above, I’m not at all convinced that a Singularity will occur, i.e. one singular event that effectively changes history for ever from that precise moment moving forward. From my (admittedly limited) research on the matter, it seems far more realistic to think of the future in terms of incremental steps made along the way, leading up to major diverse changes (plural) in the way we as human beings – and indeed all sentient life – live, but try as I might I cannot get my head around these all occurring in a near-contemporary Big Bang.

Surely we have plenty of evidence already that the opposite will most likely be the case? Scientists have been working on AI, nanotechnology, genetic engineering, robotics, et al., for many years and I see no reason to conclude that this won’t remain the case in the years to come. Small steps leading to big changes maybe, but perhaps not one giant leap for mankind in a singular convergence of emerging technologies?

Let’s be straight here: I’m not having a go at Kurzweil or his ideas – the man’s clearly a visionary (at least from my standpoint) and leagues ahead when it comes to intelligence and foresight. I’m simply interested as to what extent his ideas are accepted by the wider transhumanist movement.

There are notable critics (again leagues ahead of me in critically engaging with the subject) who argue against the idea of the Singularity. Nathan Pensky, writing in 2014, says:

It’s no doubt true that the speculative inquiry that informed Kurzweil’s creation of the Singularity also informed his prodigious accomplishment in the invention of new tech. But just because a guy is smart doesn’t mean he’s always right. The Singularity makes for great science-fiction, but not much else. [9]

Other well-informed critics have also dismissed Kurzweil’s central premise, among them Professor Andrew Blake (managing director of Microsoft Research, Cambridge), Jaron Lanier, Paul Allen, Peter Murray, Jeff Hawkins, Gordon Moore, Jared Diamond, and Steven Pinker, to name but a few. Even Noam Chomsky has waded in to categorically deny the possibility of such a singularity. Pinker writes:

There is not the slightest reason to believe in the coming singularity. The fact you can visualise a future in your imagination is not evidence that it is likely or even possible… Sheer processing is not a pixie dust that magically solves all your problems. [10]

There are, of course, many more critics, but there are also many supporters, and Kurzweil rarely lets a criticism pass without a fierce rebuttal. Indeed, new interdisciplinary academic fields have been founded in part on the presupposition of the Singularity occurring in line with Kurzweil’s predictions (along with other phenomena that pose the possibility of existential risk). Examples include Nick Bostrom’s Future of Humanity Institute at Oxford University and the Centre for the Study of Existential Risk at Cambridge.

Given the above, and returning to my original question: how do transhumanists taken as a whole rate the possibility of an imminent Singularity as described by Kurzweil? Good science or good science fiction? For Kurzweil it is the pace of change – exponential growth – that will result in a runaway effect – an intelligence explosion – where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence. [11] The only way for us to participate in such an event will be by merging with the intelligent machines we are creating.
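As an aside on the “exponential growth” mechanic behind this argument, the underlying arithmetic is easy to sketch. The short Python example below uses placeholder numbers throughout – an assumed starting capacity, target capacity, and doubling time, none of them Kurzweil’s actual figures – simply to show why any quantity that doubles at a steady rate crosses even an enormously higher threshold after a modest number of doublings:

```python
import math

# Illustrative only: the starting value, target, and doubling time below are
# hypothetical placeholders, not Kurzweil's actual figures.

def years_to_reach(start: float, target: float, doubling_years: float) -> float:
    """Years for a quantity that doubles every `doubling_years` to grow
    from `start` to `target`."""
    doublings = math.log2(target / start)
    return doublings * doubling_years

if __name__ == "__main__":
    # Suppose $1,000 buys 1e10 "units" of computation today (placeholder),
    # the target is 1e16 units (placeholder stand-in for brain-scale capacity),
    # and price-performance doubles every 1.5 years (placeholder).
    print(round(years_to_reach(1e10, 1e16, 1.5), 1))  # -> 29.9 years
```

The point is not the particular numbers but the shape of the curve: under sustained doubling, the gap between “far below the threshold” and “far above it” is crossed within a few decades.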

And I guess this is what is hard for me to fathom. We are creating these machines with all our mixed-up, blinkered, prejudicial, oppositional minds, aims, and values. We as human beings, however intelligent, are an absolutely necessary part of the picture that I think Kurzweil sometimes underestimates. I’m more inclined to agree with Jamais Cascio when he says:

I don’t think that a Singularity would be visible to those going through one. Even the most disruptive changes are not universally or immediately distributed, and late followers learn from the dilemmas of those who had initially encountered the disruptive change. [12]

So I’d love to know what you think. Are you in Kurzweil’s corner waiting for that singular moment in 2045 when the world as we know it stops for an instant… and then restarts in a glorious new utopian future? Or do you agree with Kurzweil but harbour serious fears that the whole ‘glorious new future’ may not be on the cards and we’re all obliterated in the newborn AGI’s capriciousness or gray goo? Or, are you a moderate, maintaining that a Singularity, while almost certain to occur, will pass unnoticed by those waiting? Or do you think it’s so much baloney?

Whatever your view, I’d really value your input and would be glad to hear your thoughts on the subject.

NOTES

1. As stated below, the term Singularity was in use before Kurzweil’s appropriation of it. But as shorthand I’ll refer to his interpretation and predictions relating to it throughout this article.

2. Carvalko, J, 2012, ‘The Techno-human Shell-A Jump in the Evolutionary Gap.’ (Mechanicsburg: Sunbury Press)

3. Ulam, S, 1958, ‘Tribute to John von Neumann’, Bulletin of the American Mathematical Society, 64, #3, part 2, p. 5

4. Vinge, V, 2013, ‘Vernor Vinge on the Singularity’, San Diego State University. Retrieved Nov 2015

5. Kurzweil R, 2005, ‘The Singularity is Near’, (London: Penguin Group)

6. Vinge, V, 1993, ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, originally in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129

7. Armstrong S and Sotala, K, 2012 ‘How We’re Predicting AI – Or Failing To’, in Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster (Pilsen: University of West Bohemia) https://intelligence.org/files/PredictingAI.pdf

8. Armstrong, S, ‘How We’re Predicting AI’, from the 2012 Singularity Conference

9. Pensky, N, 2014, article taken from Pando. https://goo.gl/LpR3eF

10. Pinker S, 2008, IEEE Spectrum: ‘Tech Luminaries Address Singularity’. http://goo.gl/ujQlyI

11. Wikipedia, ‘Technological Singularity’; Retrieved Nov 2015. https://goo.gl/nFzi2y

12. Cascio, J, ‘New FC: Singularity Scenarios’ article taken from Open the Future. http://goo.gl/dZptO3

Gareth John lives in Cardiff, UK and is a trainee social researcher with an interest in the intersection of emerging technologies with behavioural and mental health. He has an MA in Buddhist Studies from the University of Bristol. He is also a member of the U.S. Transhumanist Party / Transhuman Party. 


HISTORICAL COMMENTS

Gareth,

Thank you for the thoughtful article. I’m emailing to comment on the blog post, though I can’t tell when it was written. You say that you don’t believe the singularity will necessarily occur the way Kurzweil envisions, but it seems like you slightly mischaracterize his definition of the term.

I don’t believe that Kurzweil ever meant to suggest that the singularity will simply consist of one single event that will change everything. Rather, I believe he means that the singularity is when no person can make any prediction past that point in time when a $1,000 computer becomes smarter than the entire human race, much like how an event horizon of a black hole prevents anyone from seeing past it.

Given that Kurzweil’s definition isn’t an arbitrary claim that everything changes all at once, I don’t see how anyone can really argue with whether the singularity will happen. After all, at some point in the future, even if it happens much slower than Kurzweil predicts, a $1,000 computer will eventually become smarter than every human. When this happens, I think it’s fair to say no one is capable of predicting the future of humanity past that point. Would you disagree with this?

Even more important is that although many of Kurzweil’s predictions are untrue about when certain products will become commercially available to the general public, all the evidence I’ve seen about the actual trend of the law of accelerating returns seems to be exactly spot on. Maybe this trend will slow down, or stop, but it hasn’t yet. Until it does, I think the law of accelerating returns, and Kurzweil’s singularity, deserve the benefit of the doubt.

[…]

Thanks,

Rich Casada


Hi Rich,
Thanks for the comments. The post was written back in 2015 for IEET, and represented a genuine ask from the transhumanist community. At that time my priority was to learn what I could, where I could, and not a lot’s changed for me since – I’m still learning!

I’m not sure I agree that Kurzweil’s definition isn’t a claim that ‘everything changes at once’. In The Singularity is Near, he states:

“So we will be producing about 10^26 to 10^29 cps of nonbiological computation per year in the early 2030s. This is roughly equal to our estimate for the capacity of all living biological human intelligence … This state of computation in the early 2030s will not represent the Singularity, however, because it does not yet correspond to a profound expansion of our intelligence. By the mid-2040s, however, that one thousand dollars’ worth of computation will be equal to 10^26 cps, so the intelligence created per year (at a total cost of about $10^12) will be about one billion times more powerful than all human intelligence today. That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.” (Kurzweil 2005, pp. 135-36, italics mine).

Kurzweil specifically defines what the Singularity is and isn’t (a profound and disruptive transformation in human capability), and gives a more-or-less precise prediction of when it will occur. A consequence of that may be that we will not ‘be able to make any prediction past that point in time’; however, I don’t believe this is the main thrust of Kurzweil’s argument.

I do, however, agree with what you appear to be postulating (correct me if I’m wrong): that a better definition of a Singularity might indeed simply be ‘when no person can make any prediction past that point in time.’ And, like you, I don’t believe it will be tied to any set point in time. We may be living through a singularity as we speak. There may be many singularities (although, worth noting again, Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence as opposed to other technologies, writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine.” (Kurzweil 2005, p. 9)).

So, having said all that, and in answer to your question of whether there is a point beyond which no one is capable of predicting the future of humanity: I’m not sure. I guess none of us can really be sure until, or unless, it happens.

This is why I believe having the conversation about the ethical implications of these new technologies now is so important. Post-singularity might simply be too late.

Gareth

Review of Ray Kurzweil’s “How to Create a Mind” – Article by Gennady Stolyarov II



How to Create a Mind (2012) by inventor and futurist Ray Kurzweil sets forth a case for engineering minds that are able to emulate the complexity of human thought (and exceed it) without the need to reverse-engineer every detail of the human brain or of the plethora of content with which the brain operates. Kurzweil persuasively describes the human conscious mind as based on hierarchies of pattern-recognition algorithms which, even when based on relatively simple rules and heuristics, combine to give rise to the extremely sophisticated emergent properties of conscious awareness and reasoning about the world. How to Create a Mind takes readers through an integrated tour of key historical advances in computer science, physics, mathematics, and neuroscience – among other disciplines – and describes the incremental evolution of computers and artificial-intelligence algorithms toward increasing capabilities – leading toward the not-too-distant future (the late 2020s, according to Kurzweil) during which computers would be able to emulate human minds.

Kurzweil’s fundamental claim is that there is nothing which a biological mind is able to do, of which an artificial mind would be incapable in principle, and that those who posit that the extreme complexity of biological minds is insurmountable are missing the metaphorical forest for the trees. Analogously, although a fractal or a procedurally generated world may be extraordinarily intricate and complex in their details, they can arise on the basis of carrying out simple and conceptually fathomable rules. If appropriate rules are used to construct a system that takes in information about the world and processes and analyzes it in ways conceptually analogous to a human mind, Kurzweil holds that the rest is a matter of having adequate computational and other information-technology resources to carry out the implementation. Much of the first half of the book is devoted to the workings of the human mind, the functions of the various parts of the brain, and the hierarchical pattern recognition in which they engage. Kurzweil also discusses existing “narrow” artificial-intelligence systems, such as IBM’s Watson, language-translation programs, and the mobile-phone “assistants” that have been released in recent years by companies such as Apple and Google. Kurzweil observes that, thus far, the most effective AIs have been developed using a combination of approaches, having some aspects of prescribed rule-following alongside the ability to engage in open-ended “learning” and extrapolation upon the information which they encounter. Kurzweil draws parallels to the more creative or even “transcendent” human abilities – such as those of musical prodigies – and observes that the manner in which those abilities are made possible is not too dissimilar in principle.
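To make the notion of hierarchical pattern recognition slightly more concrete, here is a minimal, purely illustrative sketch in Python: a toy two-level hierarchy in which hand-coded lower-level recognizers map groups of “stroke” features to letters, and a higher-level recognizer maps the resulting letter sequence to a word. The patterns, feature names, and structure are invented for this example and are not Kurzweil’s actual model, which involves vast numbers of learned, probabilistic recognizers in the neocortex; the sketch only shows how simple recognizers can feed their outputs upward to recognizers operating at a higher level of abstraction.

```python
# Toy two-level hierarchy of pattern recognizers (illustrative only;
# the patterns and features below are invented, not Kurzweil's model).

# Level 1: recognize letters from simple "stroke" features.
LETTER_PATTERNS = {
    ("vertical", "vertical", "crossbar"): "H",
    ("slant-left", "slant-right", "crossbar"): "A",
    ("horizontal", "vertical"): "T",
}

# Level 2: recognize words from sequences of recognized letters.
WORD_PATTERNS = {
    ("H", "A", "T"): "HAT",
}

def recognize_letters(stroke_groups):
    """Map each group of strokes to a letter, or '?' if unrecognized."""
    return tuple(LETTER_PATTERNS.get(tuple(group), "?") for group in stroke_groups)

def recognize_word(letters):
    """Map a recognized letter sequence to a word-level pattern, if any."""
    return WORD_PATTERNS.get(letters)

if __name__ == "__main__":
    strokes = [
        ["vertical", "vertical", "crossbar"],
        ["slant-left", "slant-right", "crossbar"],
        ["horizontal", "vertical"],
    ]
    letters = recognize_letters(strokes)            # ('H', 'A', 'T')
    print(letters, "->", recognize_word(letters))   # ('H', 'A', 'T') -> HAT
```

In Kurzweil’s account, of course, the recognizers are learned rather than hand-coded, operate probabilistically, and stack through many more levels, but the flow of information from low-level features to progressively more abstract patterns is the same basic idea.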

With regard to some of Kurzweil’s characterizations, however, I question whether they are universally applicable to all human minds – particularly where he mentions certain limitations – or whether they only pertain to some observed subset of human minds. For instance, Kurzweil describes the ostensible impossibility of reciting the English alphabet backwards without error (absent explicit study of the reverse order), because of the sequential nature in which memories are formed. Yet, upon reading the passage in question, I was able to recite the alphabet backwards without error upon my first attempt. It is true that this occurred more slowly than the forward recitation, but I am aware of why I was able to do it; I perceive larger conceptual structures or bodies of knowledge as mental “objects” of a sort – and these objects possess “landscapes” on which it is possible to move in various directions; the memory is not “hard-coded” in a particular sequence. One particular order of movement does not preclude others, even if those others are less familiar – but the key to successfully reciting the alphabet backwards is to hold it in one’s awareness as a single mental object and move along its “landscape” in the desired direction. (I once memorized how to pronounce ABCDEFGHIJKLMNOPQRSTUVWXYZ as a single continuous word; any other order is slower, but it is quite doable as long as one fully knows the contents of the “object” and keeps it in focus.) This is also possible to do with other bodies of knowledge that one encounters frequently – such as dates of historical events: one visualizes them along the mental object of a timeline, visualizes the entire object, and then moves along it or drops in at various points using whatever sequences are necessary to draw comparisons or identify parallels (e.g., which events happened contemporaneously, or which events influenced which others). I do not know what fraction of the human population carries out these techniques – as the ability to recall facts and dates has always seemed rather straightforward to me, even as it challenged many others. Yet there is no reason why the approaches for more flexible operation with common elements of our awareness cannot be taught to large numbers of people, as these techniques are a matter of how the mind chooses to process, model, and ultimately recombine the data which it encounters. The more general point in relation to Kurzweil’s characterization of human minds is that there may be a greater diversity of human conceptual frameworks and approaches toward cognition than Kurzweil has described. Can an artificially intelligent system be devised to encompass this diversity? This is certainly possible, since the architecture of AI systems would be more flexible than the biological structures of the human brain. Yet it would be necessary for true artificial general intelligences to be able not only to learn using particular predetermined methods, but also to teach themselves new techniques for learning and conceptualization altogether – just as humans are capable of today.

The latter portion of the book is more explicitly philosophical and devoted to thought experiments regarding the nature of the mind, consciousness, identity, free will, and the kinds of transformations that may or may not preserve identity. Many of these discussions are fascinating and erudite – and Kurzweil often transcends fashionable dogmas by bringing in perspectives such as the compatibilist case for free will and the idea that the experiments performed by Benjamin Libet (that showed the existence of certain signals in the brain prior to the conscious decision to perform an activity) do not rule out free will or human agency. It is possible to conceive of such signals as “preparatory work” within the brain to present a decision that could then be accepted or rejected by the conscious mind. Kurzweil draws an analogy to government officials preparing a course of action for the president to either approve or disapprove. “Since the ‘brain’ represented by this analogy involves the unconscious processes of the neocortex (that is, the officials under the president) as well as the conscious processes (the president), we would see neural activity as well as actual actions taking place prior to the official decision’s being made” (p. 231). Kurzweil’s thoughtfulness is an important antidote to commonplace glib assertions that “Experiment X proved that Y [some regularly experienced attribute of humans] is an illusion” – assertions which frequently tend toward cynicism and nihilism if widely adopted and extrapolated upon. It is far more productive to deploy both science and philosophy toward seeking to understand more directly apparent phenomena of human awareness, sensation, and decision-making – instead of rejecting the existence of such phenomena contrary to the evidence of direct experience. Especially if the task is to engineer a mind that has at least the faculties of the human brain, then Kurzweil is wise not to dismiss aspects such as consciousness, free will, and the more elevated emotions, which have been known to philosophers and ordinary people for millennia, and which it became fashionable to disparage in some circles predominantly in the 20th century. Kurzweil’s only vulnerability in this area is that he often resorts to statements that he accepts the existence of these aspects “on faith” (although it does not appear to be a particularly religious faith; it is, rather, more analogous to “leaps of faith” in the sense that Albert Einstein referred to them). Kurzweil does not need to do this, as he himself outlines sufficient logical arguments to be able to rationally conclude that attributes such as awareness, free will, and agency upon the world – which have been recognized across predominant historical and colloquial understandings, irrespective of particular religious or philosophical flavors – indeed actually exist and should not be neglected when modeling the human mind or developing artificial minds.

One of the thought experiments presented by Kurzweil is vital to consider, because the process by which an individual’s mind and body might become “upgraded” through future technologies would determine whether that individual is actually preserved – in terms of the aspects of that individual that enable one to conclude that that particular person, and not merely a copy, is still alive and conscious:

Consider this thought experiment: You are in the future with technologies more advanced than today’s. While you are sleeping, some group scans your brain and picks up every salient detail. Perhaps they do this with blood-cell-sized scanning machines traveling in the capillaries of your brain or with some other suitable noninvasive technology, but they have all of the information about your brain at a particular point in time. They also pick up and record any bodily details that might reflect on your state of mind, such as the endocrine system. They instantiate this “mind file” in a morphological body that looks and moves like you and has the requisite subtlety and suppleness to pass for you. In the morning you are informed about this transfer and you watch (perhaps without being noticed) your mind clone, whom we’ll call You 2. You 2 is talking about his or her life as if s/he were you, and relating how s/he discovered that very morning that s/he had been given a much more durable new version 2.0 body. […] The first question to consider is: Is You 2 conscious? Well, s/he certainly seems to be. S/he passes the test I articulated earlier, in that s/he has the subtle cues of becoming a feeling, conscious person. If you are conscious, then so too is You 2.

So if you were to, uh, disappear, no one would notice. You 2 would go around claiming to be you. All of your friends and loved ones would be content with the situation and perhaps pleased that you now have a more durable body and mental substrate than you used to have. Perhaps your more philosophically minded friends would express concerns, but for the most part, everybody would be happy, including you, or at least the person who is convincingly claiming to be you.

So we don’t need your old body and brain anymore, right? Okay if we dispose of it?

You’re probably not going to go along with this. I indicated that the scan was noninvasive, so you are still around and still conscious. Moreover your sense of identity is still with you, not with You 2, even though You 2 thinks s/he is a continuation of you. You 2 might not even be aware that you exist or ever existed. In fact you would not be aware of the existence of You 2 either, if we hadn’t told you about it.

Our conclusion? You 2 is conscious but is a different person than you – You 2 has a different identity. S/he is extremely similar, much more so than a mere genetic clone, because s/he also shares all of your neocortical patterns and connections. Or should I say s/he shared those patterns at the moment s/he was created. At that point, the two of you started to go your own ways, neocortically speaking. You are still around. You are not having the same experiences as You 2. Bottom line: You 2 is not you.  (How to Create a Mind, pp. 243-244)

This thought experiment is essentially the same as the one I independently posited in my 2010 essay “How Can I Live Forever?: What Does and Does Not Preserve the Self”:

Consider what would happen if a scientist discovered a way to reconstruct, atom by atom, an identical copy of my body, with all of its physical structures and their interrelationships exactly replicating my present condition. If, thereafter, I continued to exist alongside this new individual – call him GSII-2 – it would be clear that he and I would not be the same person. While he would have memories of my past as I experienced it, if he chose to recall those memories, I would not be experiencing his recollection. Moreover, going forward, he would be able to think different thoughts and undertake different actions than the ones I might choose to pursue. I would not be able to directly experience whatever he chooses to experience (or experiences involuntarily). He would not have my ‘I-ness’ – which would remain mine only.

Thus, Kurzweil and I agree, at least preliminarily, that an identically constructed copy of oneself does not somehow obtain the identity of the original. Kurzweil and I also agree that a sufficiently gradual replacement of an individual’s cells and perhaps other larger functional units of the organism, including a replacement with non-biological components that are integrated into the body’s processes, would not destroy an individual’s identity (assuming it can be done without collateral damage to other components of the body). Then, however, Kurzweil posits the scenario where one, over time, transforms into an entity that is materially identical to the “You 2” as posited above. He writes:

But we come back to the dilemma I introduced earlier. You, after a period of gradual replacement, are equivalent to You 2 in the scan-and-instantiate scenario, but we decided that You 2 in that scenario does not have the same identity as you. So where does that leave us? (How to Create a Mind, p. 247)

Kurzweil and I are still in agreement that “You 2” in the gradual-replacement scenario could legitimately be a continuation of “You” – but our views diverge when Kurzweil states, “My resolution of the dilemma is this: It is not true that You 2 is not you – it is you. It is just that there are now two of you. That’s not so bad – if you think you are a good thing, then two of you is even better” (p. 247). I disagree. If I (via a continuation of my present vantage point) cannot have the direct, immediate experiences and sensations of GSII-2, then GSII-2 is not me, but rather an individual with a high degree of similarity to me, yet with a separate vantage point and separate physical processes, including consciousness. I might not mind the existence of GSII-2 per se, but I would mind if that existence were posited as a sufficient reason to be comfortable with my present instantiation ceasing to exist. Although Kurzweil correctly reasons through many of the initial hypotheses and intermediate steps leading from them, he ultimately arrives at a “pattern” view of identity, with which I disagree. I hold, rather, a “process” view of identity, where a person’s “I-ness” remains the same if “the continuity of bodily processes is preserved even as their physical components are constantly circulating into and out of the body. The mind is essentially a process made possible by the interactions of the brain and the remainder of the nervous system with the rest of the body. One’s ‘I-ness’, being a product of the mind, is therefore reliant on the physical continuity of bodily processes, though not necessarily an unbroken continuity of higher consciousness.” (“How Can I Live Forever?: What Does and Does Not Preserve the Self”) If only a pattern of one’s mind were preserved and re-instantiated, the result might be indistinguishable from the original person to an external observer, but the original individual would not directly experience the re-instantiation. It is not the content of one’s experiences or personality that is definitive of “I-ness” – but rather the more basic fact that one experiences anything as oneself and not from the vantage point of another individual; this requires the same bodily processes that give rise to the conscious mind to operate without complete interruption. (The extent of permissible partial interruption is difficult to determine precisely and open to debate; general anesthesia is not sufficient to disrupt I-ness, but what about cryonics or shorter-term “suspended animation”?) For this reason, the pursuit of biological life extension of one’s present organism remains crucial; one cannot rely merely on one’s “mindfile” being re-instantiated in a hypothetical future after one’s demise. The future of medical care and life extension may certainly involve non-biological enhancements and upgrades, but in the context of augmenting an existing organism, not disposing of that organism.

How to Create a Mind is highly informative for artificial-intelligence researchers and laypersons alike, and it merits revisiting as a reference for useful ideas regarding how (at least some) minds operate. It facilitates thoughtful consideration of both the practical methods and the more fundamental philosophical implications of the quest to improve the flexibility and autonomy with which our technologies interact with the external world and augment our capabilities. At the same time, as Kurzweil acknowledges, those technologies often lead us to “outsource” many of our own functions to them – as is the case, for instance, with vast amounts of human memories and creations residing on smartphones and in the “cloud”. If the timeframes of arrival of human-like AI capabilities match those described by Kurzweil in his characterization of the “law of accelerating returns”, then questions regarding what constitutes a mind sufficiently like our own – and how we will treat those minds – will become ever more salient in the proximate future. It is important, however, for interest in advancing this field to become more widespread, and for political, cultural, and attitudinal barriers to its advancement to be lifted – for, unlike Kurzweil, I do not consider the advances of technology to be inevitable or unstoppable. We humans maintain the responsibility of persuading enough other humans that the pursuit of these advances is worthwhile and will greatly improve the length and quality of our lives, while enhancing our capabilities and attainable outcomes. Every movement along an exponential growth curve is due to a deliberate push upward by the efforts of the minds of the creators of progress, using the machines they have built.

Gennady Stolyarov II is Chairman of the United States Transhumanist Party. Learn more about Mr. Stolyarov here

This article is made available pursuant to the Creative Commons Attribution 4.0 International License, which requires that credit be given to the author, Gennady Stolyarov II (G. Stolyarov II). 

Is the Soul Digital or Analogue? – Article by C. H. Antony

Is the Soul Digital or Analogue? – Article by C. H. Antony

C. H. Antony


I am probably not the ideal Transhumanist; I do believe that I have a soul, that it is more the essence of me than the sum of my neurons and how they interact with each other to create my thoughts, and that it is an extremely fragile thing. Should I die and preserve myself to be revived at a later date, I fear that I would never know of the success or failure of that endeavor. That a living breathing thinking person who acts like me and reasons like me will rejoin society is not in question; I only wonder that I might miss it as my essence passes on into some other form of existence… or worse – not. I do not believe that a digital substrate will, in fact, carry my soul on uninterrupted.

I want to explore the question of the soul for a moment. In The Singularity is Near (2005), Ray Kurzweil stated that the calculations per second of the human brain are in the vicinity of 10 to the 14th power, based on the assumption, and rightly so, that each neuron in the brain could be considered a digital on/off or 1/0. Around six years ago, we began seeing articles describing microtubules in the axons of neuronal cells that seemed to have quantum properties I freely admit to not understanding. I cheerfully invite anyone to correct me on this, but it seems that while the neuron either fires or doesn’t as it communicates with the neighboring cell, the microtubule seems to exist in a sort of Schrödinger-like state of possibilities – like a multiplexing wire that might convey one piece of information at a particular combination of wattage, voltage, and resistance, then convey a completely different set of instructions with another combination of the same. It seems to me that if every neuron operates in a digital on/off state, then 10^14 computations per second (CPS) is a plausible figure given the average number of neuronal cells in the human brain. But if that figure is horribly wrong because of what we now know of the activity within the axon, then the superposition state of neural activity might very well be the essence of our consciousness – and, if interrupted, it could be lost, and what remains would be something else, only a comfort to those we would have left behind.
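As a rough illustration of where a figure of that order of magnitude can come from – the specific inputs below (neuron count and a generous per-neuron firing rate) are my own assumptions, not Kurzweil’s published derivation – a back-of-the-envelope calculation looks like this:

```python
# Back-of-the-envelope estimate of brain "computations per second" (CPS),
# treating each neuron as a binary on/off element as described above.
# The inputs are illustrative assumptions, not Kurzweil's exact figures.

neurons = 1e11            # rough count of neurons in a human brain
max_firing_rate_hz = 1e3  # generous upper bound on firings per neuron per second

cps = neurons * max_firing_rate_hz  # binary on/off events per second
print(f"Estimated CPS: {cps:.0e}")  # ~1e+14
```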

I agree that an entirely biological existence is not only a seriously limiting factor in our future development, but also something we are destined to outgrow. However, I would say that my ideal manifestation of this is a seamless combination of man and machine. Medical technology could eliminate all the senescence we suffer, to the point where the next logical step is enhancement over a timeless organic form. I, for one, would hate to live for hundreds of years and gather all the knowledge and experience of those times only to die because of some future equivalent of a drunk driver. That in itself is good enough reason to fortify my existence any way I can. If that means that my body must be replaced with an artificial one, so be it. But I want to keep my squishy, limited, fragile brain! I want my cake and to eat it, gleefully, with a nearly indestructible form that doesn’t need the cake, won’t get fat from it, and still lets me enjoy the flavors and textures as I do now. I want to enjoy all the many hedonistic joys freely and with even greater precision than my limited biological form can experience.

I believe we’re seeing this very trend emerge and that the collective instinct of man is far more ready to accept an enhanced human/cyborg than uploading oneself to a purely artificial substrate. Evidence of this can be seen in the amazing promise of Elon Musk’s Neuralink project, the recent X-Prize challenge for a robot avatar, and the many remarkable advancements in prosthetic limbs and organs. As I previously stated, medical technology will soon overcome senescence, allowing our tissues to go on indefinitely; so curing our brains of degeneration, enhancing them with a neural mesh, and going about our lives in perfected cybernetic bodies akin to Ghost in the Shell: Altered Architecture is probably a pretty good direction in which to steer ourselves as Transhumanists. It’s also the most likely Next Step, if you will, considering how well society is conditioned for these themes. I would certainly feel more comfortable with my own enhanced mind in a perfect and durable body that can be easily upgraded and modified as the centuries pass.

So now I ask the members of this community to bring their thoughts here. What is your ideal existence?

C. H. Antony is a member of the U.S. Transhumanist Party. He may be contacted here

Will We Build the Future, or Will the Future Build Us? – Article by Arin Vahanian

Will We Build the Future, or Will the Future Build Us? – Article by Arin Vahanian

Arin Vahanian


There is an idea or perception, bandied about among the general public, that unstoppable technological forces are already upon us like a runaway train, threatening to derail our way of life and everything we have ever known, and that there is nothing we can do about it.

However, I would like to offer some hope and at the same time dispel this seemingly apocalyptic scenario.

There appear to be two main schools of thought when we discuss the future: the Ray Kurzweil school, which states that the future will evolve as it will and that we will reach the Singularity by a certain date, and the Peter Thiel school, which says that the future won’t be built unless we build it.

I would like to build upon Mr. Thiel’s idea by saying that the future will indeed be built, but unless we, as a society, a human race, and a world, join forces to build a future we would like to live in and which reflects our values, the future we end up with may not be one we are completely comfortable with.

Thus, this is a call to action not only for those who are actively involved in the fields of technology, science, and engineering, but for all people around the world, because the sum of our collective actions will decide the fate of the world and the future we live in. Whether or not we want to admit it, all of us are, on some level, responsible for how the world develops every day.

I urge those of you who may have resigned yourselves to the idea that there is nothing you can do to help change the trajectory of the world to take a look with new eyes. There is always something all of us can do, because every day we are interacting with others, building relationships, helping to create products, working on resolving problems that affect humanity, contributing to the success of an organization, company, or family, and performing actions that help the world develop, no matter on how small a scale that might be.

Everyone on Earth has a role to play in the creation of our future. That is what you are here for – to help fulfill your personal vision and mission while also contributing to the development of the world. That is how important you are.  

So the next time someone remarks that the writing is on the wall and that we should just accept that we have no say in how the world evolves, please remember that we are all architects of our own future, which hasn’t even been written yet. How it will be written depends on the actions every one of us takes every day. Therefore, the question we should be asking ourselves every day is, what kind of future will we build? And then, of course, after answering this question, we should not waste any time in building that future we have envisioned.

Arin Vahanian is Director of Marketing for the U.S. Transhumanist Party.

Beginners’ Explanation of Transhumanism – Bobby Ridge and Gennady Stolyarov II

Beginners’ Explanation of Transhumanism – Bobby Ridge and Gennady Stolyarov II


Bobby Ridge
Gennady Stolyarov II


Bobby Ridge, Secretary-Treasurer of the U.S. Transhumanist Party, and Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, provide a broad “big-picture” overview of transhumanism and major ongoing and future developments in emerging technologies that present the potential to revolutionize the human condition and resolve the age-old perils and limitations that have plagued humankind.

This is a beginners’ overview of transhumanism – which means that it is for everyone, including those who are new to transhumanism and the life-extension movement, as well as those who have been involved in it for many years – since, when it comes to dramatically expanding human longevity and potential, we are all beginners at the beginning of what could be our species’ next great era.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside.

See Mr. Stolyarov’s presentation, “The U.S. Transhumanist Party: Pursuing a Peaceful Political Revolution for Longevity”.

In the background of some of the video segments is a painting now owned by Mr. Stolyarov, from “The Singularity is Here” series by artist Leah Montalto.