Browsed by
Tag: Ray Kurzweil

Why Transhumanism Needs More Positive Science Fiction – Article by Rykon Volta

Rykon Volta


In the modern Age of Accelerating Returns, more commonly known as the Information Age, technological growth is accelerating at an unprecedented rate. Never before in the history of humanity has technological growth shown itself so clearly to the human race. As noted by famous futurist Ray Kurzweil, the trend of exponential growth in technology follows a double exponential curve.

One famous example of this exponential growth, which you might be familiar with if you are into the world of tech, is of course Moore’s Law, but in The Singularity Is Near, Kurzweil demonstrates that other technological fields, including medicine, have been accelerating as well. Kurzweil shows that technology has actually been accelerating since before the Stone Age, although a man in the Roman Empire would not have noticed any ramifications of progress, since his grandchildren would not live in a society very different from the one he and his grandfather inhabited. For the first time in recorded history, we are commonly thinking about where we will be in 100 years, where we will be in 50 years, and now even where we will be in a decade as technology progresses through the 21st Century. If Ray Kurzweil is right, machines will attain sentience, and AI, or artificial intelligence, will surpass human intelligence, resulting in a hypothetical event known as an “intelligence explosion” or “technological singularity”. After this point, machines will be much smarter than average human beings and will be able to carry on progress much faster than we can even begin to comprehend with our natural brains.

In the wake of the recognition of these future possibilities, many science-fiction authors and scriptwriters have created a plethora of media warning us that AI and future genetic augmentation pose existential threats to the human race. Examples that now dominate the mainstream media include Terminator, 2001: A Space Odyssey, The Matrix, and many more stories in which AI might kill us all. Gattaca expresses the great fear of an unfair, elitist society in a genetically enhanced world, where a man who was born naturally is unable to get his dream career because he wasn’t born with genetic modifications. In parallel, people demonize the idea of genetic modification by ruthlessly attacking GMOs and claiming that they’re bad for us, when GMOs have in fact alleviated famine in some parts of the world through higher yields. People are always fearful of something they do not understand.

In the Golden Age of Science Fiction, a period during the mid-20th Century that saw many sci-fi works hitting the stage, spreading optimism and futurism, science fiction had a brighter outlook on the future. Isaac Asimov imagined future Spacer societies and a Galactic Empire in his Robot Series and Foundation Series. Gene Roddenberry took us on fantastic voyages across the stars in the Enterprise alongside Captain James T. Kirk and Spock. Other authors inspired visionaries to have a brighter outlook on the future as the Space Race sent the first humans to the Moon.

Today, we have, in a way, a form of cultural stagnation. While some still see the future in an optimistic light, it seems much more popular today to look at the future as a dystopia, and New Age movements all over the place act as if demonizing technology were some kind of “morally right” position. Despite the trends of growth continuing to accelerate, mainstream culture seems to be propagating more fear of the future than hope and inspiration. Why are we doing this? While I agree that dystopian sci-fi has its place and that we should indeed analyze and contemplate existential risks in our future so that we might steer clear of them, progress is going to happen, and we are going to try everything we can to “play god”, as the enemies of transhumanism like to say transhumanists are trying to do. To them, of course, I say, “Were we not created in God’s image? Did God not give the Earth to mankind? Were we not meant to achieve our full potential, to subdue the Earth and conquer it, bending it to our will?” Indeed, this phrase in Genesis seems to be divine permission to modify our bodies and accelerate a brighter future. However, this is mainly an appeal to my fellow religious folks who may be averse to progress. We are not playing God because, quite honestly, God would not even make that possible. We are just using our God-given talents to hack our own genetic code and modify the machinery of our initial, still quite wonderful creation. To those Christians who say that we are insulting God and telling him “You didn’t make me good enough”, the beauty of mankind is that we were in fact created with the ability to modify ourselves. Don’t modify yourself with the intention of insulting your creator, but with the intention of becoming closer to your creator. Why would he give us the ability for self-modification if he didn’t intend for us to use it? It’s like saying that we shouldn’t work out because self-improvement is some kind of blasphemy against God.
Do you really believe God wants us to intentionally limit ourselves from our full potential?

Others may fear the coming of AI as a usurping of humanity’s place as the apex predator on this planet, and they may be afraid of a Skynet scenario in which a rampant AI destroys us all. I argue that the solution is to merge ourselves with the machines, allowing us to direct our own evolution. Ray Kurzweil and many other singularitarians would make the same argument. By upgrading our own bodies and replacing our cells with nanobots, enhancing our brains to the point where neural signals travel at light speed, we will be able to keep up with AI in the evolutionary arms race to come. You can choose to live in fear in the face of the coming Singularity, getting left behind in its wake, or you can step boldly and bravely forward into the new world that it will create, surpassing all your physical, mental, and morphological limitations and fully ending your mortality.

As I have written before, mainstream media is overwhelmingly sending out negative signals and warnings about the future, painting into the memespace, or ideaspace, of mainstream culture the notion that technology is a negative influence and that it should be contained and controlled. Society is largely crying for a return to the caves because many people are fearful of what they don’t understand. This trend needs to cease. People need to see that the light of the future is much brighter than they think. AI is coming, the technological Singularity is coming, and it is going to be better than anyone can imagine. This is a call to arms: artists and sci-fi writers who see the ramifications of the future and how it can create an abundant, prosperous utopia, I urge you to write science fiction that portrays AI not in a negative but in a positive manner. Show AI in a benevolent form, and show how it can aid humanity in its future quest for survival. Show how it can solve global problems like hunger and global warming and cure disease. Write stories that put the Neo-Luddites in their place and show that the pseudo-religious zeal of anti-progress-minded people is ultimately a negative factor holding us back from creating a better world in the long run. Know and understand that the content in the mainstream media has a huge effect on the minds of the people; indeed, much of culture is shaped by what is put out there and consumed by the masses. Transhumanism needs more positive science fiction to help gain support for the movement and to inspire the next generation of scientists and inventors to design the future we all desire!

Rykon Volta is the author of the novel Arondite, Book I of The Artilect Protocol Trilogy. Arondite is available on Amazon in hard-copy and Kindle formats here. Visit Rykon Volta’s website here.

Watch the U.S. Transhumanist Party Virtual Enlightenment Salon of July 19, 2020, when Rykon Volta was the guest of honor and discussed science fiction, his novel Arondite, and the ideas surrounding it with the U.S. Transhumanist Party Officers.

Forecasting Whole-Brain Connectomics – A Kurzweilian Approach – Article by Dan Elton

Daniel C. Elton, Ph.D.


Editor’s Note: In this article, U.S. Transhumanist Party Director of Scholarship Dr. Daniel C. Elton describes the recent advances in mapping the connectomes of various organisms, as well as the technological advances that would be needed to achieve effective human whole-brain emulation. Given extensive discussion of these subjects among U.S. Transhumanist Party members, including at the Virtual Enlightenment Salon of September 27, 2020, with Kenneth Hayworth and Robert McIntyre, it is fitting for the U.S. Transhumanist Party to feature this systematic exploration by Dr. Elton into what has been achieved in the field of connectomics already and what it would practically take for human whole-brain emulation to become a reality. As Dr. Elton convincingly illustrates, this possibility is still several decades away, but some steady progress has been made in recent years as well.

~ Gennady Stolyarov II, Chairman, United States Transhumanist Party, March 7, 2021


The connectome of an organism is a map of all of its neurons and their connections. This may be thought of as a graph with the neurons as nodes and synaptic connections as edges. Here we take the term ‘connectome’ to refer both to this graph and to the underlying electron microscopy images of the neurons, which contain much more information. Even so, successfully simulating an organism’s brain will require more information than the connectome alone provides. Retrieving a detailed scan of an entire brain and mapping all the neurons is a prerequisite for whole-brain emulation. In their landmark 2008 paper, “Whole Brain Emulation: A Roadmap”, transhumanists Anders Sandberg and Nick Bostrom construct a detailed “technology tree” showing the prerequisite technologies for realizing whole-brain emulation:

Tech tree from Sandberg & Bostrom, 2008

In this article, we focus on the “scanning” component along with part of the “translation” component, namely neuronal tracing. By plotting technological progress on a logarithmic plot, similar to how Kurzweil does, we attempt to forecast how many decades away we are from being able to scan an entire human brain (and trace/segment all neurons to determine the connectome). Of course, while Kurzweilian projections have been known to hold (most famously for Moore’s Law), we caution that the start of a logistic function can look like an exponential function. In other words, exponential trends can and often do plateau. As any investment advisor would say, “past returns are no guarantee of future results”.

The complete connectome of the nematode worm (Caenorhabditis elegans) was published in 1986. A complete set of images of the fruit fly (Drosophila melanogaster) brain was published in 2018. However, not all of the neurons and their connections have yet been segmented or traced. In January 2020, researchers published the connectome of the central brain of the fruit fly, containing 25,000 neurons, which to my knowledge is the largest connectomics dataset published to date.

I thought it would be fun/interesting to plot the progress of connectomics over time and try to extrapolate out any trend observed. So, I did a literature search for all studies to date which either traced or segmented neurons and marked out synapses in electron microscopy data:

[1] D. D. Bock, et al., Network anatomy and in vivo physiology of visual cortical neurons, Nature 471 (7337) (2011) 177–182. [link]
[2] K. L. Briggman, M. Helmstaedter, W. Denk, Wiring specificity in the direction-selectivity circuit of the retina, Nature 471 (7337) (2011) 183–188. [link]
[3] D. J. Bumbarger, M. Riebesell, C. Rödelsperger, R. J. Sommer, System-wide rewiring underlies behavioral differences in predatory and bacterial-feeding nematodes, Cell 152 (1-2) (2013) 109–119. [link]
[4] C.-Y. Lin, et al., A comprehensive wiring diagram of the protocerebral bridge for visual information processing in the Drosophila brain, Cell Reports 3 (5) (2013) 1739–1753. [link]
[5] S.-y. Takemura, et al., A visual motion detection circuit suggested by Drosophila connectomics, Nature 500 (7461) (2013) 175–181. [link]
[6] M. Helmstaedter, K. L. Briggman, S. C. Turaga, V. Jain, H. S. Seung, W. Denk, Connectomic reconstruction of the inner plexiform layer in the mouse retina, Nature 500 (7461) (2013) 168–174. [link]
[7] N. Kasthuri, et al., Saturated reconstruction of a volume of neocortex, Cell 162 (3) (2015) 648–661. [link]
[8] A. A. Wanner, et al., 3-dimensional electron microscopic imaging of the zebrafish olfactory bulb and dense reconstruction of neurons, Scientific Data 3 (1) (2016). [link]
[9] K. Ryan, Z. Lu, I. A. Meinertzhagen, The CNS connectome of a tadpole larva of Ciona intestinalis (L.) highlights sidedness in the brain of a chordate sibling, eLife 5 (2016). [link]
[10] S.-y. Takemura, et al., A connectome of a learning and memory center in the adult Drosophila brain, eLife 6 (2017). [link]
[11] K. Eichler, et al., The complete connectome of a learning and memory centre in an insect brain, Nature 548 (7666) (2017) 175–182. [link]
[12] C. S. Xu, et al., A connectome of the adult Drosophila central brain (preprint). [link]
[13] L. K. Scheffer, et al., A connectome and analysis of the adult Drosophila central brain, eLife 9 (2020). [link]
[14] J. S. Phelps, et al., Reconstruction of motor control circuits in adult Drosophila using automated transmission electron microscopy, Cell 184 (3) (2021) 759–774.e18. [link]

Next I plotted most of the data for the number of neurons versus the date of publication:

Next, I did linear regression on the (year, log(# neurons)) data, which is equivalent to fitting an exponential function to the data. (The reason for fitting the data this way is to avoid the bias that occurs when fitting an exponential function with least-squares regression, which leads to the larger values on the y-axis being fit more accurately than smaller ones.) After doing the linear regression, I extrapolated it forward in time.
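As a rough sketch, that fitting procedure looks like the following. The neuron counts and years here are illustrative approximations loosely drawn from the studies listed above, not the exact dataset behind the published plot, so the projected dates this sketch prints will not match the article's figures; the point is the method.

```python
import numpy as np

# Illustrative (publication year, neurons reconstructed) pairs, loosely
# based on the studies listed above -- NOT the exact published dataset.
years = np.array([1986, 2011, 2013, 2015, 2017, 2020])
neurons = np.array([302, 1.5e3, 1.0e3, 1.7e3, 2.0e3, 2.5e4])

# Fitting a line to (year, log10(neurons)) is equivalent to fitting an
# exponential, and avoids the least-squares bias toward large y-values.
slope, intercept = np.polyfit(years, np.log10(neurons), 1)

def projected_year(target_neurons):
    """Year at which the fitted exponential reaches target_neurons."""
    return (np.log10(target_neurons) - intercept) / slope

# Extrapolate to a full fruit-fly brain (~140,000 neurons) and a
# mouse brain (~70 million neurons).
print(projected_year(1.4e5))
print(projected_year(7.0e7))
```

Fitting in log space, rather than calling a nonlinear exponential fitter, is exactly the bias-avoidance trick described in the paragraph above.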

The projection for the fruit-fly connectome (2024) seems about right. If anything, we may see it slightly sooner. It will be interesting to see how much longer it will take before we have physically realistic models of the fruit fly and fruit-fly behavior. U.S. Transhumanist Party member Logan T. Collins has advocated for building biophysically and behaviorally realistic models of insects to better understand nervous systems. For one thing, interesting neuroscience experiments may be performed on a simulated “virtual fly” much faster and more easily than on a real fly (for instance, certain neurons may be removed or manipulated, and the effects on the virtual fly’s behavior observed). A project to produce the mouse-brain connectome is underway, and again, the extrapolated date — 2033 — seems plausible if the funding for the project continues. Beyond that, though, I have very little idea how plausible the projections are!

Here are some numbers that show the challenges just with scanning the entire brain (not to mention segmenting/tracing all the neurons accurately!).

Assuming an isotropic voxel size of 20 nm, it is estimated that storing the images of an entire human brain would require 175 exabytes of storage. It seems we are approaching hard drives that cost about 1.5 cents per gigabyte. Even at such remarkably low prices, it would still cost about $2.6 billion to store all those images!
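That figure is easy to sanity-check (assuming decimal units, i.e., 1 EB = 10^9 GB):

```python
exabytes = 175
gigabytes = exabytes * 1e9      # decimal units: 1 EB = 10^9 GB
cost_per_gb = 0.015             # ~1.5 cents per gigabyte
total_cost = gigabytes * cost_per_gb
print(total_cost / 1e9)         # ~2.6 (billion dollars)
```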

The volume of the human brain is about 1.2 x 10^6 cubic millimeters. The Zeiss MultiSEM contains either 61 or 91 electron beams, which scan a sample in parallel. According to a Zeiss video presentation from April 8, 2020, it can scan a 1×1 mm area at 4 nm resolution in 6.5 minutes. Assuming a slice thickness of 20 nm, a single such machine would require 742,009 years to scan the entire brain!
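That number can be reproduced with a quick back-of-the-envelope calculation, assuming one 6.5-minute pass per 1 mm² tile per 20 nm slice and a 365-day year:

```python
brain_volume_mm3 = 1.2e6        # human brain volume, cubic mm
tile_area_mm2 = 1.0             # one 1 mm x 1 mm field per scan
slice_thickness_mm = 20e-6      # 20 nm slices, expressed in mm
minutes_per_tile = 6.5          # 1x1 mm at 4 nm resolution (Zeiss MultiSEM)

tiles = brain_volume_mm3 / (tile_area_mm2 * slice_thickness_mm)
total_minutes = tiles * minutes_per_tile
years = total_minutes / (60 * 24 * 365)   # 365-day year
print(round(years))                       # -> 742009
```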

X-ray holographic nano-tomography might be the path forward …


Dan Elton, Ph.D., is Director of Scholarship for the U.S. Transhumanist Party. You can find him on Twitter at @moreisdifferent, where he accepts direct messages. If you like his content, check out his website and subscribe to his newsletter on Substack.

Charlie Kam – 2020 U.S. Presidential Candidate Endorsed by the U.S. Transhumanist Party

On June 11, 2020, the United States Transhumanist Party (USTP) endorsed Charlie Kam to run for the office of President of the United States in the 2020 General Election. Mr. Kam was the USTP’s endorsed Vice-Presidential candidate from October 5, 2019, through June 11, 2020. By the rules of succession, and as confirmed by the USTP Officers, Mr. Kam has been endorsed to carry the USTP Presidential ticket forward for the remainder of the 2020 election season.

Charlie Kam is CEO of a software company that creates interactive, life-like digital avatars of humans.

He was the sponsor, organizer, and Master of Ceremonies of the highly successful 3-day TransVision 2007 conference, bringing together over 30 scientists and celebrities from around the world to the Natural History Museum of Chicago to present and discuss technologies that interact with humans for a better future. The event included celebrity icon William Shatner; Director of Engineering at Google, Ray Kurzweil; Emmy-nominated and award-winning actor Ed Begley Jr; Founder of Sirius Satellite Radio and United Therapeutics, Martine Rothblatt; Founder of the X PRIZE Foundation and the Zero Gravity Corporation, Peter Diamandis; Robotics designer, David Hanson; Chief Science Officer of SENS Research Foundation, Aubrey de Grey; CEO of the Alcor Foundation, Max More; Chairperson of Humanity+, Natasha Vita-More; Award-winning author and MIT Cognitive Scientist, Marvin Minsky; Founder of the Future of Humanity Institute, Nick Bostrom; and many others. It was at this conference that the idea for Singularity University was first conceived by Ray Kurzweil and Peter Diamandis. It was also where Martine Rothblatt met with David Hanson and went on to create the world’s first interactive sentient robot, BINA48.

Charlie was one of the executive producers of the film about Ray Kurzweil, Transcendent Man.

He has also composed and sung many futuristic songs, some of which have been featured in films, including the aforementioned Transcendent Man and the docu-drama The Singularity Is Near, starring Pauley Perrette.

He has created many music videos for his songs about life extension, including his most famous one, “I Am the Very Model of a Singularitarian”, which is based on Ray Kurzweil’s book, The Singularity Is Near.

Recently, he has been working with Ray Kurzweil promoting Ray’s latest book Danielle and will be helping to promote Ray’s upcoming book, The Singularity Is Nearer, in 2020.

Brent Nally Interviews U.S. Transhumanist Party Chairman Gennady Stolyarov II at Sierra Sciences

Gennady Stolyarov II
Brent Nally


On October 12, 2019, U.S. Transhumanist Party Chairman Gennady Stolyarov II spoke with Brent Nally at the venerable Sierra Sciences headquarters in Reno, Nevada. They discussed developments in the U.S. Transhumanist Party, fitness, health, and longevity – among other subjects.

Watch the interview here.

Become a member of the U.S. Transhumanist Party / Transhuman Party here for free, no matter where you reside.

Show Notes by Brent Nally

3:15 Brent Nally’s RAADfest 2019 YouTube playlist: https://www.youtube.com/playlist?list=PLGjySL94COVSO3hcnpZq-jCcgnUQIaALQ

4:24 Go to RAADfest primarily to network: https://www.raadfest.com/

6:28 People Unlimited website: https://peopleunlimitedinc.com/

6:30 The Coalition for Radical Life Extension website: https://www.rlecoalition.com/

7:20 We need to increase our healthspan to increase our lifespan.

9:01 Watch Bill Andrews and Gennady discussing transhumanism and radical life extension: https://www.youtube.com/watch?v=U7GJrVBp8FQ

13:35 Gennady just ran the Lakeside Marathon at Lake Tahoe the day before, October 11, 2019.

19:30 Do whatever type of exercise that you enjoy. Don’t push yourself to the point of exhaustion where you’re throwing up or getting injured or not having fun because that’s bad for your telomeres.

24:10 Audit your own thoughts daily as a meditation to recognize your limiting beliefs.

26:15 How Gennady became Chairman of the U.S. Transhumanist Party

29:35 How the 9 USTP presidential candidates competed using a “ranked preference” approach

32:07 43 articles are currently in the 3rd version of the Transhumanist Bill of Rights: https://transhumanist-party.org/tbr-3/

34:27 https://transhumanist-party.org/

35:40 We should be more concerned with ideas rather than people. We’re in an “ideas” economy.

36:28 Politicians are more followers than leaders.

40:27 3 core ideals of USTP: https://transhumanist-party.org/values/

41:40 Gennady shares more details about how his role as chairman may evolve.

44:28 Positives and negatives of centralization and decentralization

49:15 Sophia the AI robot: https://www.hansonrobotics.com/sophia/

51:30 Truth is sometimes stranger than fiction. Try to educate others about transhumanism.

55:30 Ray Kurzweil points out that technology growth and stock market growth are separate. Here’s Brent talking to Ray in February 2011: https://www.youtube.com/watch?v=3TuVn…

57:37 Join the USTP: https://transhumanist-party.org/membership/

58:40 USTP Twitter: https://twitter.com/USTranshumanist; USTP LinkedIn: https://www.linkedin.com/company/19118856/; Gennady’s online magazine – The Rational Argumentator: http://www.rationalargumentator.com/index/

1:01:14 Gennady, Bill Andrews, and Brent ran about 8.4 miles the next morning above Carson City, NV on the Upper Clear Creek Trail.

1:02:15 Don’t forget to subscribe, like, comment on this video, and share on your social media accounts!

Empowering Human Musical Creation through Machines, Algorithms, and Artificial Intelligence – Essay by Gennady Stolyarov II in Issue 2 of the INSAM Journal

Gennady Stolyarov II


Note from Gennady Stolyarov II, Chairman, United States Transhumanist Party / Transhuman Party: For those interested in my thoughts on the connections among music, technology, algorithms, artificial intelligence, transhumanism, and the philosophical motivations behind my own compositions, I have had a peer-reviewed paper, “Empowering Human Musical Creation through Machines, Algorithms, and Artificial Intelligence”, published in Issue 2 of the INSAM Journal of Contemporary Music, Art, and Technology. This is a rigorous academic publication but also freely available and sharable via a Creative Commons Attribution Share-Alike license – just as academic works ought to be – so I was honored by the opportunity to contribute my writing. My essay features discussions of Plato and Aristotle, Kirnberger’s and Mozart’s musical dice games, the AI-generated compositions of Ray Kurzweil and David Cope, and the recently completed “Unfinished” Symphony of Franz Schubert, whose second half was made possible by the Huawei / Lucas Cantor AI-human collaboration. Even Conlon Nancarrow, John Cage, Iannis Xenakis, and Karlheinz Stockhausen make appearances in this paper. Look in the bibliography for YouTube and downloadable MP3 links to all of my compositions that I discuss, as this paper is intended to be a multimedia experience.

Music, technology, and transhumanism – all in close proximity in the same paper and pointing the way toward the vast proliferation of creative possibilities in the future as the distance between the creator’s conception of a musical idea and its implementation becomes ever shorter.

You can find my paper on pages 81-99 of Issue 2.

Read “Empowering Human Musical Creation through Machines, Algorithms, and Artificial Intelligence” here.

Read the full Issue 2 of the INSAM Journal here.

Abstract: “In this paper, I describe the development of my personal research on music that transcends the limitations of human ability. I begin with an exploration of my early thoughts regarding the meaning behind the creation of a musical composition according to the creator’s intentions and how to philosophically conceptualize the creation of such music if one rejects the existence of abstract Platonic Forms. I then explore the transformation of my own creative process through the introduction of software capable of playing back music in exact accord with the inputs provided to it, while enabling the creation of music that remains intriguing to the human ear even though the performance of it may sometimes be beyond the ability of humans. Subsequently, I describe my forays into music generated by earlier algorithmic systems such as the Musikalisches Würfelspiel and narrow artificial-intelligence programs such as WolframTones and my development of variations upon artificially generated themes in essential collaboration with the systems that created them. I also discuss some of the high-profile, advanced examples of AI-human collaboration in musical creation during the contemporary era and raise possibilities for the continued role of humans in drawing out and integrating the best artificially generated musical ideas. I express the hope that the continued advancement of musical software, algorithms, and AI will amplify human creativity by narrowing and ultimately eliminating the gap between the creator’s conception of a musical idea and its practical implementation.”

Video of Cyborg and Transhumanist Forum at the Nevada State Legislature – May 15, 2019

Gennady Stolyarov II
Anastasia Synn
R. Nicholas Starr


Watch the video containing 73 minutes of excerpts from the Cyborg and Transhumanist Forum, held on May 15, 2019, at the Nevada State Legislature Building.

The Cyborg and Transhumanist Forum at the Nevada Legislature on May 15, 2019, marked a milestone for the U.S. Transhumanist Party and the Nevada Transhumanist Party. This was the first time that an official transhumanist event was held within the halls of a State Legislature, in one of the busiest areas of the building, within sight of the rooms where legislative committees met. The presenters were approached by dozens of individuals – a few legislators and many lobbyists and staff members. The reaction was predominantly either positive or at least curious; there was no hostility and only mild disagreement from a few individuals. Generally, the outlook within the Legislative Building seems to be in favor of individual autonomy to pursue truly voluntary microchip implants. The testimony of Anastasia Synn at the Senate Judiciary Committee on April 26, 2019, in opposition to Assembly Bill 226, is one of the most memorable episodes of the 2019 Legislative Session for many who heard it. It has certainly affected the outcome for Assembly Bill 226, which was subsequently further amended to restore the original scope of the bill and only apply the prohibition to coercive microchip implants, while specifically exempting microchip implants voluntarily received by an individual from the prohibition. The scope of the prohibition was also narrowed by removing the reference to “any other person” and applying the prohibition to an enumerated list of entities who may not require others to be microchipped: state officers and employees, employers as a condition of employment, and persons in the business of insurance or bail. These changes alleviated the vast majority of the concerns within the transhumanist and cyborg communities about Assembly Bill 226.

From left to right: Gennady Stolyarov II, Anastasia Synn, and Ryan Starr (R. Nicholas Starr)

This Cyborg and Transhumanist Forum comes at the beginning of an era of transhumanist political engagement with policymakers and those who advise them. It was widely accepted by the visitors to the demonstration tables that technological advances are accelerating, and that policy decisions regarding technology should only be made with adequate knowledge about the technology itself – working on the basis of facts and not fears or misconceptions that arise from popular culture and dystopian fiction. Ryan Starr shared his expertise on the workings and limitations of both NFC/RFID microchips and GPS technology, explaining that cell phones are already far more trackable than microchips ever could be (based on their technical specifications and how those specifications could potentially be improved in the future). U.S. Transhumanist Party Chairman Gennady Stolyarov II introduced visitors to the world of transhumanist literature by bringing books for display – including writings by Aubrey de Grey, Bill Andrews, Ray Kurzweil, Jose Cordeiro, Ben Goertzel, Phil Bowermaster, and Mr. Stolyarov’s own book “Death is Wrong” in five languages. It appears that there is more sympathy for transhumanism within contemporary political circles than might appear at first glance; it is often transhumanists themselves who overestimate the negativity of the reaction they expect to receive. But nobody picketed the event or even called the presenters names; transhumanist ideas, expressed in a civil and engaging way – with an emphasis on practical applications that are here today or due to arrive in the near future – will be taken seriously when there is an opening to articulate them.

The graphics for the Cyborg and Transhumanist Forum were created by Tom Ross, the U.S. Transhumanist Party Director of Media Production.

Become a member of the U.S. Transhumanist Party / Transhuman Party free of charge, no matter where you reside.

References

• Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018

• “A Word on Implanted NFC Tags” – Article by Ryan Starr

• Assembly Bill 226, Second Reprint – This is the version of the bill that passed the Senate on May 23, 2019.

• Amendment to Assembly Bill 226 to essentially remove the prohibition against voluntary microchip implants

• Future Grind Podcast

• Synnister – Website of Anastasia Synn

James Hughes’ Problems of Transhumanism: A Review (Part 5) – Article by Ojochogwu Abdul

Ojochogwu Abdul


Part 1 | Part 2 | Part 3 | Part 4 | Part 5

Part 5: Belief in Progress vs. Rational Uncertainty

The Enlightenment, with its confident efforts to fashion a science of man, was archetypal of the belief that humankind would eventually achieve lasting peace and happiness. In what some interpret as a reformulation of Christianity’s teleological salvation history, in which the People of God would be redeemed at the end of days and the Kingdom of Heaven established on Earth, most Enlightenment thinkers believed in the inevitability of human political and technological progress, secularizing the Christian conception of history and eschatology into a conviction that humanity would, using a system of thought built on reason and science, be able to continually improve itself. As portrayed by Carl Becker in his 1933 book The Heavenly City of the Eighteenth-Century Philosophers, the philosophes “demolished the Heavenly City of St. Augustine only to rebuild it with more up-to-date materials.” Whether this Enlightenment humanist view of “progress” amounted merely to a recapitulation of the Christian teleological vision of history, or if Enlightenment beliefs in “continual, linear political, intellectual, and material improvement” reflected, as James Hughes posits, “a clear difference from the dominant Christian historical narrative in which little would change until the End Times and Christ’s return”, the notion, in any case, of a collective progress towards a definitive end-point was one that remained unsupported by the scientific worldview. The scientific worldview, as Hughes reminds us in the opening paragraph of this essay within his series, does not support historical inevitability, only uncertainty. “We may annihilate ourselves or regress,” he says, and “Even the normative judgment of what progress is, and whether we have made any, is open to empirical skepticism.”

We are thus introduced to a conflict that has existed, at least since the Enlightenment, between a view of progressive optimism and one of radical uncertainty. Building on the Enlightenment’s faith in the inevitability of political and scientific progress, the idea of an end-point salvation moment for humankind fuelled all the great Enlightenment ideologies that followed, flowing down, as Hughes traces, through Comte’s “positivism” and Marxist theories of historical determinism to neoconservative triumphalism about the “end of history” in democratic capitalism. Communists envisaged that end-point as a post-capitalist utopia that would finally resolve the class struggle, which they conceived as the true engine of history. This vision also contained the 20th-century project to build the Soviet Man, a being of extra-human capacities, for as Trotsky had predicted, after the Revolution, “the average human type will rise to the heights of an Aristotle, a Goethe, or a Marx. And above this ridge new peaks will rise.” For 20th-century free-market liberals, by contrast, the End of History had arrived with the final triumph of liberal democracy, with the entire world bound to be swept along in its course. Events, though, especially so far in the 21st century, appear to prove this view wrong.

This belief in the historical inevitability of progress, moreover, as Hughes convincingly argues, has always been locked in conflict with “the rationalist, scientific observation that humanity could regress or disappear altogether.” Enlightenment pessimism, or at least realism, has over the centuries proven a stubborn constraint on Enlightenment optimism. Hughes, citing Henry Vyverberg, reminds us that there were, after all, French Enlightenment thinkers within that same era who rejected the belief in linear historical progress and proposed historical cycles or even decadence instead. That aside, contemporary commentators like John Gray would even argue that the Enlightenment’s own efforts on the quest for progress issued in, for example, the racist pseudo-science of Voltaire and Hume, while all endeavours to establish the rule of reason have resulted in bloody fanaticisms, from Jacobinism to Bolshevism, which equaled the worst atrocities attributable to religious believers. Horrendous phenomena like racism and anti-Semitism, in Gray’s verdict, “are not incidental defects in Enlightenment thinking. They flow from some of the Enlightenment’s central beliefs.”

Even Darwinism’s theory of natural selection was, according to Hughes, “suborned by the progressive optimistic thinking of the Enlightenment and its successors to the doctrine of inevitable progress, aided in part by Darwin’s own teleological interpretation.” The problem, however, is that the scientific worldview finds no support for “progress” in the theory of natural selection, only that humanity, Hughes plainly states, “like all creatures, is on a random walk through a mine field, that human intelligence is only an accident, and that we could easily go extinct as many species have done.” Gray, for example, rebukes Darwin, who wrote: “As natural selection works solely for the good of each being, all corporeal and mental endowments will tend to progress to perfection.” Natural selection, however, does not work solely for the good of each being, a fact Darwin himself elsewhere acknowledged. Nonetheless, it has continually proven difficult for people to resist the impulse to identify evolution with progress, and equally difficult to resist the extended temptation to invoke evolution in rationalizing views as dangerous as Social Darwinism and practices as horrible as eugenics.

Many skeptics therefore hold, rationally, that scientific utopias and promises to transform the human condition deserve the deepest suspicion. Reason is but a frail reed; all gains of moral and political progress are and will always remain subject to reversal; and civilization could, eventually, simply collapse. Historical events and experiences have therefore caused faith in the inevitability of progress to wax and wane over time. Hughes notes that the faith that the world is headed for a millennial age can still be found among several millenarian movements and New Age beliefs, just as it exists in techno-optimist futurism. Nevertheless, he makes us see that “since the rise and fall of fascism and communism, and the mounting evidence of the dangers and unintended consequences of technology, there are few groups that still hold fast to an Enlightenment belief in the inevitability of conjoined scientific and political progress.” Within the transhumanist community, however, many still hold such a faith in progress, marking one camp in the continuation of this Enlightenment-bequeathed conflict between transhumanist optimism and views of future uncertainty.

As on several occasions in the past, humanity is currently being spun yet another “End of History” narrative: one of a posthuman future. Yuval Harari, for instance, argues in Homo Deus that emerging technologies and new scientific discoveries are undermining the foundations of Enlightenment humanism, although as he proceeds he also proves unable to avoid one of the defining tropes of Enlightenment humanist thinking: the deeply entrenched tendency to conceive human history in teleological terms, fundamentally as a matter of collective progress towards a definitive end-point. This time, though, our era’s “End of History” salvation moment is to be ushered in not by a politico-economic system but by a nascent techno-elite based in Silicon Valley, USA, a cluster steeped in a predominant tech-utopianism which has at its core the idea that the new technologies emerging there can steer humanity towards a definitive break-point in our history, the Singularity. Among believers in this coming Singularity, transhumanists, having inherited the tension between Enlightenment convictions in the inevitability of progress and, in Hughes’ words, “Enlightenment’s scientific, rational realism that human progress or even civilization may fail”, now struggle with a renewed contradiction. And here the contrast Hughes intends to portray gains sharpness, for transhumanists today are “torn between their Enlightenment faith in inevitable progress toward posthuman transcension and utopian Singularities” on the one hand and, on the other, their “rational awareness of the possibility that each new technology may have as many risks as benefits and that humanity may not have a future.”

The risks of new technologies, even if they do not necessarily threaten the survival of humanity as a species, may yet have an undesirable impact on the mode and trajectory of our extant civilization. Henry Kissinger, in his 2018 article “How the Enlightenment Ends”, expressed his perception that technology, which is rooted in Enlightenment thought, is now superseding the very philosophy that is its fundamental principle. The universal values proposed by the Enlightenment philosophes, as Kissinger points out, could be spread worldwide only through modern technology, but at the same time such technology has outgrown the Enlightenment and is now going its own way, creating the need for a new guiding philosophy. Kissinger argues specifically that AI may spell the end of the Enlightenment itself, and he issues grave warnings about the consequences of an AI-led technological revolution whose “culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.” Drawing an analogy to how the printing press allowed the Age of Reason to supplant the Age of Religion, he proposes that the modern counterpart of that revolutionary process is the rise of intelligent AI, which will supersede human ability and put an end to the Enlightenment. Kissinger further outlines three areas of concern regarding the trajectory of artificial-intelligence research: AI may achieve unintended results; in achieving intended goals, AI may change human thought processes and human values; and AI may reach intended goals but be unable to explain the rationale for its conclusions. Kissinger’s thesis, of course, has attracted both support and criticism from different quarters.
Reacting to Kissinger, Yuk Hui, for example, in “What Begins After the End of the Enlightenment?” maintained that “Kissinger is wrong—the Enlightenment has not ended.” Rather, “modern technology—the support structure of Enlightenment philosophy—has become its own philosophy”, with the universalizing force of technology becoming itself the political project of the Enlightenment.

Transhumanists, as mentioned already, reflect the continuity of some of these contradictions between belief in progress and uncertainty about the human future. Hughes shows us nonetheless that there are some interesting historical turns suggesting further directions this mood has taken. In the 1990s, he recalls, “transhumanists were full of exuberant Enlightenment optimism about unending progress.” As an example, Hughes cites Max More’s 1998 Extropian Principles, which defined “Perpetual Progress” as “the first precept of their brand of transhumanism.” Over time, however, More himself has had cause to temper this optimism, recasting the principle as one of desirability: more a normative goal than a faith in historical inevitability. “History”, More would say in 2002, “since the Enlightenment makes me wary of all arguments to inevitability…”

Rational uncertainty among transhumanists hence makes many of them refrain from arguing for the inevitability of transhumanism as a matter of progress. Further, there are indeed several possible factors which could deter the transhumanist idea and drive for “progress” from translating into reality: a neo-Luddite revolution, a turn and rise in preference for rural life, mass disenchantment with technological addiction and increased recourse to digital detox, nostalgia, disillusionment with modern civilization and a “return-to-innocence” counter-cultural movement, neo-Romanticism, a pop-culture allure and longing for a Tolkien-esque world, cyclical thinking, conservatism, traditionalism, etc. The alternative, backlash, and antagonistic forces are myriad. Even within transhumanism, the anti-democratic and socially conservative Neoreactionary movement, with its rejection of the view that history shows inevitable progression towards greater liberty and enlightenment, is gradually (and rather disturbingly) gaining a contingent. Hughes discusses, as another point for rational uncertainty, the three critiques (futurological, historical, and anthropological) of transhumanist and Enlightenment faith in progress that Philippe Verdoux offers, in which the anthropological argument holds that “pre-moderns were probably as happy or happier than we moderns.” After all, Rousseau, himself a French Enlightenment thinker, “is generally seen as having believed in the superiority of the ‘savage’ over the civilized.” Perspectives like these could stir anti-modern, anti-progress sentiments in people’s hearts and minds.

Demonstrating further why transhumanists must not be obstinate about the idea of inevitability, Hughes refers to Greg Burch’s 2001 work “Progress, Counter-Progress, and Counter-Counter-Progress”, in which Burch expounded on the Enlightenment and transhumanist commitment to progress as a commitment “to a political program, fully cognizant that there are many powerful enemies of progress and that victory was not inevitable.” Moreover, a failure to realize the goals of progress might not even result from the actions of “enemies” in the antagonistic sense of the word, for there is also the scenario, depicted in the 2006 movie Idiocracy, of a future dystopian society based on dysgenics: one in which, extrapolating from trends of the 21st century, the most intelligent humans reproduce least and eventually fail to have children, while the least intelligent reproduce prolifically. Through natural selection, successive generations collectively become dumber and more virile with each passing century, leading to a future world plagued by anti-intellectualism, bereft of intellectual curiosity, social responsibility, and coherent notions of justice and human rights, and manifesting several other traits of cultural degeneration. This remains a possibility for our future world.

So while for many extropians and transhumanists perpetual progress was an unstoppable train, responding to which “one either got on board for transcension or consigned oneself to the graveyard”, other transhumanists, Hughes comments, especially in response to certain historical experiences (the 2000 dot-com crash, for example), have increasingly tempered their expectations about progress. In Hughes’s appraisal, while some transhumanists “still press for technological innovation on all fronts and oppose all regulation, others are focusing on reducing the civilization-ending potentials of asteroid strikes, genetic engineering, artificial intelligence and nanotechnology.” Some realism hence needs to be in place to keep in constant check the excesses of the contemporary secular technomillennialism contained in some transhumanist strains.

Hughes presents Nick Bostrom’s 2001 essay “Analyzing Human Extinction Scenarios and Related Hazards” as one influential example of this anti-millennial realism: a text in which Bostrom, after outlining scenarios that could either end the existence of the human species or have us evolve into dead-ends, addressed not just how we can avoid extinction and ensure that there are descendants of humanity, but also how we can ensure that we will be proud to claim them. Bostrom has subsequently produced work on “catastrophic risk estimation” at the Future of Humanity Institute at Oxford. Hughes seems to favour this approach, for he makes a point of indicating that it has also been adopted as a programmatic focus for the Institute for Ethics and Emerging Technologies (IEET), which he directs, as well as for the transhumanist non-profit the Lifeboat Foundation. Transhumanists who listen to Bostrom, we can deduce from Hughes, are being urged to take a more critical approach to technological progress.

With the availability of this rather cautious attitude, a new tension, Hughes reports, now plays out between eschatological certainty and pessimistic risk assessment. This has taken place mainly in the debate over the Singularity. For the likes of Ray Kurzweil (2005), representing the camp of technomillennial, eschatological certainty, the pattern of accelerating trendlines towards a utopian merger of enhanced humanity and godlike artificial intelligence is unstoppable, a claim Kurzweil supports by pointing to the steady exponential march of technological progress through (and despite) wars and depressions. Dystopian and apocalyptic predictions of how humanity might fare under superintelligent machines (extinction, inferiority, and the like) are, in Hughes’s assessment, but minimally entertained by Kurzweil, since to the techno-prophet we are bound eventually to integrate with these machines into apotheosis.

The platform, IEET, has thus taken on the responsibility of serving as a site for teasing out this tension between technoprogressive “optimism of the will and pessimism of the intellect,” as Hughes echoes Antonio Gramsci. On the one hand, Hughes explains, “we have championed the possibility of, and evidence of, human progress. By adopting the term “technoprogressivism” as our outlook, we have placed ourselves on the side of Enlightenment political and technological progress.” And yet on the other hand, he continues, “we have promoted technoprogressivism precisely in order to critique uncritical techno-libertarian and futurist ideas about the inevitability of progress. We have consistently emphasized the negative effects that unregulated, unaccountable, and inequitably distributed technological development could have on society” (one feels tempted to call out Landian accelerationism at this point). Technoprogressivism, the guiding philosophy of the IEET, thus serves as a principle insisting that technological progress must be consistently conjoined with, and dependent on, political progress, while recognizing that neither is inevitable.

In charting the essay towards a close, Hughes mentions his own and a number of other IEET-led technoprogressive publications, among whose authors we have Verdoux, who, despite his futurological, historical, and anthropological critique of transhumanism, still argues for transhumanism on moral grounds (free from the language of “Marxism’s historical inevitabilism or utopianism, and cautious of the tragic history of communism”), and “as a less dangerous course than any attempt at “relinquishing” technological development, but only after the naive faith in progress has been set aside.” Unfortunately, however, the “rational capitulationism” to the transhumanist future that Verdoux offers is, according to Hughes, “not something that stirs men’s souls.” Hughes, therefore, while admitting our need “to embrace these critical, pessimistic voices and perspectives”, calls on us likewise to heed the need to “also re-discover our capacity for vision and hope.” This optimism that humans “can” collectively exercise foresight and invention, and peacefully deliberate our way to a better future, rather than yielding to narratives that would lead us into the traps of utopian and apocalyptic fatalism, has been one of the motivations behind the creation of the “technoprogressive” brand. The brand, Hughes notes, has helped to distinguish “Enlightenment optimism about the “possibility” of human political, technological and moral progress from millennialist techno-utopian inevitabilism.”

Presumably, upon this technoprogressive philosophy, the new version of the Transhumanist Declaration, adopted by Humanity+ in 2009, indicated a shift from some of the language of the 1998 version, and conveyed a more reflective, critical, realistic, utilitarian, “proceed with caution” and “act with wisdom” tone with respect to the transhumanist vision for humanity’s progress. This version of the declaration, though relatively sobered, remains equally inspiring nonetheless. Hughes closes the essay with a reminder on our need to stay aware of the diverse ways by which our indifferent universe threatens our existence, how our growing powers come with unintended consequences, and why applying mindfulness on our part in all actions remains the best approach for navigating our way towards progress in our radically uncertain future.

In conclusion, following Hughes’ objectives in this series, it can be suggested that more studies of the Enlightenment (European and global) are desirable, especially for their potential to furnish us with richer insight into a number of problems within contemporary transhumanism that sprout from its roots deep in the Enlightenment. Interest and scholarship in Enlightenment studies, fortunately, seem to be experiencing a revival, with increasing diversity of perspective, thereby presenting transhumanism with a variety of paths through which to explore and gain context for connected issues. Seeking insight into some foundations of transhumanism’s problems could take the path, among others: of examining internal contradictions within the Enlightenment, as in the approach of Max Horkheimer and Theodor Adorno’s “Dialectic of Enlightenment”; of assessing opponents of the Enlightenment as found, for example, in Isaiah Berlin’s notion of the “Counter-Enlightenment”; of investigating a rather radical strain of the Enlightenment as presented in Jonathan Israel’s “Radical Enlightenment”; and of grappling with the nature of the relationships between transhumanism and the other heirs, both of the Enlightenment and of the Counter-Enlightenment, today. Again, and significantly, serious attention needs to be paid, now and going forward, to guarding transhumanism against ultimately falling into the hands of the Dark Enlightenment.


Ojochogwu Abdul is the founder of the Transhumanist Enlightenment Café (TEC), is the co-founder of the Enlightenment Transhumanist Forum of Nigeria (H+ Nigeria), and currently serves as a Foreign Ambassador for the U.S. Transhumanist Party in Nigeria. 

Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018

Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018


Gennady Stolyarov II
Ray Kurzweil


The Stolyarov-Kurzweil Interview has been released at last! Watch it on YouTube here.

U.S. Transhumanist Party Chairman Gennady Stolyarov II posed a wide array of questions for inventor, futurist, and Singularitarian Dr. Ray Kurzweil on September 21, 2018, at RAAD Fest 2018 in San Diego, California. Topics discussed include advances in robotics and the potential for household robots, artificial intelligence and overcoming the pitfalls of AI bias, the importance of philosophy, culture, and politics in ensuring that humankind realizes the best possible future, how emerging technologies can protect privacy and verify the truthfulness of information being analyzed by algorithms, as well as insights that can assist in the attainment of longevity and the preservation of good health – including a brief foray into how Ray Kurzweil overcame his Type 2 Diabetes.

Learn more about RAAD Fest here. RAAD Fest 2019 will occur in Las Vegas during October 3-6, 2019.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentation by Gennady Stolyarov II at RAAD Fest 2018, entitled, “The U.S. Transhumanist Party: Four Years of Advocating for the Future”.

The Singularity: Fact or Fiction or Somewhere In-Between? – Article by Gareth John

The Singularity: Fact or Fiction or Somewhere In-Between? – Article by Gareth John

Gareth John


Editor’s Note: The U.S. Transhumanist Party features this article by our member Gareth John, originally published by IEET on January 13, 2016, as part of our ongoing integration with the Transhuman Party. This article raises various perspectives about the idea of technological Singularity and asks readers to offer their perspectives regarding how plausible the Singularity narrative, especially as articulated by Ray Kurzweil, is. The U.S. Transhumanist Party welcomes such deliberations and assessments of where technological progress may be taking our species and how rapid such progress might be – as well as how subject to human influence and socio-cultural factors technological progress is, and whether a technological Singularity would be characterized predominantly by benefits or by risks to humankind. The article by Mr. John is a valuable contribution to the consideration of such important questions.

~ Gennady Stolyarov II, Chairman, United States Transhumanist Party, January 2, 2019


In my continued striving to disprove the theorem that there’s no such thing as a stupid question, I shall now proceed to ask one. What’s the consensus on Ray Kurzweil’s position concerning the coming Singularity? [1] Do you as transhumanists accept his premise and timeline, or do you feel that a) it’s a fiction, or b) it’s a reality but not one that’s going to arrive anytime soon? Is it as inevitable as Kurzweil suggests, or is it simply millenarian daydreaming in line with the coming Rapture?

According to Wikipedia (yes, I know, but I’m learning as I go along), the first use of the term ‘singularity’ in this context was made by Stanislaw Ulam in his 1958 obituary for John von Neumann, in which he mentioned a conversation with von Neumann about the ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’. [2] The term was popularised by mathematician, computer scientist, and science-fiction author Vernor Vinge, who argues that artificial intelligence, human biological advancement, or brain-computer interfaces could be possible causes of the singularity. [3] Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain. [4]

Kurzweil predicts the singularity to occur around 2045 [5] whereas Vinge predicts some time before 2030 [6]. In 2012, Stuart Armstrong and Kaj Sotala published a study of AGI predictions by both experts and non-experts and found a wide range of predicted dates, with a median value of 2040. [7] Discussing the level of uncertainty in AGI estimates, Armstrong stated at the 2012 Singularity Summit: ‘It’s not fully formalized, but my current 80% estimate is something like five to 100 years.’ [8]

Speaking for myself, and despite the above, I’m not at all convinced that a Singularity will occur, i.e. one singular event that effectively changes history for ever from that precise moment moving forward. From my (admittedly limited) research on the matter, it seems far more realistic to think of the future in terms of incremental steps made along the way, leading up to major diverse changes (plural) in the way we as human beings – and indeed all sentient life – live, but try as I might I cannot get my head around these all occurring in a near-contemporary Big Bang.

Surely we have plenty of evidence already that the opposite will most likely be the case? Scientists have been working on AI, nanotechnology, genetic engineering, robotics, et al., for many years and I see no reason to conclude that this won’t remain the case in the years to come. Small steps leading to big changes maybe, but perhaps not one giant leap for mankind in a singular convergence of emerging technologies?

Let’s be straight here: I’m not having a go at Kurzweil or his ideas – the man’s clearly a visionary (at least from my standpoint) and leagues ahead when it comes to intelligence and foresight. I’m simply interested as to what extent his ideas are accepted by the wider transhumanist movement.

There are notable critics (again leagues ahead of me in critically engaging with the subject) who argue against the idea of the Singularity. Nathan Pensky, writing in 2014, says:

It’s no doubt true that the speculative inquiry that informed Kurzweil’s creation of the Singularity also informed his prodigious accomplishment in the invention of new tech. But just because a guy is smart doesn’t mean he’s always right. The Singularity makes for great science-fiction, but not much else. [9]

Other well-informed critics have also dismissed Kurzweil’s central premise, among them Professor Andrew Blake, director of Microsoft Research Cambridge, Jaron Lanier, Paul Allen, Peter Murray, Jeff Hawkins, Gordon Moore, Jared Diamond, and Steven Pinker, to name but a few. Even Noam Chomsky has waded in to categorically deny the possibility. Pinker writes:

There is not the slightest reason to believe in the coming singularity. The fact you can visualise a future in your imagination is not evidence that it is likely or even possible… Sheer processing is not a pixie dust that magically solves all your problems. [10]

There are, of course, many more critics, but there are also many supporters, and Kurzweil rarely lets a criticism pass without a fierce rebuttal. Indeed, new interdisciplinary academic institutions have been founded in part on the presupposition of the Singularity occurring in line with Kurzweil’s predictions (along with other phenomena that pose the possibility of existential risk). Examples include Nick Bostrom’s Future of Humanity Institute at Oxford University and the Centre for the Study of Existential Risk at Cambridge.

Given the above, and returning to my original question: how do transhumanists taken as a whole rate the possibility of an imminent Singularity as described by Kurzweil? Good science or good science-fiction? For Kurzweil it is the pace of change – exponential growth – that will result in a runaway effect – an intelligence explosion – where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence. [11] The only way for us to participate in such an event will be by merging with the intelligent machines we are creating.
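The claim about pace is, at bottom, quantitative: capability that doubles at a fixed rate compounds into enormous gains in surprisingly few steps. A minimal sketch (purely illustrative; the doubling rate and the millionfold threshold are arbitrary assumptions of mine, not Kurzweil’s figures) shows why exponential growth feels like a “runaway”:

```python
# Toy model of compounding capability growth (illustrative only).
# Each "generation" doubles capability; the question is how many
# generations a millionfold improvement actually takes.

def generations_until(threshold, capability=1.0, growth=2.0):
    """Count generations until capability first reaches the threshold."""
    n = 0
    while capability < threshold:
        capability *= growth
        n += 1
    return n

# A steady 2x-per-generation process reaches a millionfold gain in
# only 20 generations -- the intuition behind "runaway" growth.
print(generations_until(1_000_000))  # prints 20
```

The point of the toy model is only that linear intuition badly underestimates compounding: the first ten doublings yield roughly a thousandfold gain, and the next ten multiply that by a thousand again.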

And I guess this is what is hard for me to fathom. We are creating these machines with all our mixed-up, blinkered, prejudicial, oppositional minds, aims, and values. We as human beings, however intelligent, are an absolutely necessary part of the picture that I think Kurzweil sometimes underestimates. I’m more inclined to agree with Jamais Cascio when he says:

I don’t think that a Singularity would be visible to those going through one. Even the most disruptive changes are not universally or immediately distributed, and late followers learn from the dilemmas of those who had initially encountered the disruptive change. [12]

So I’d love to know what you think. Are you in Kurzweil’s corner, waiting for that singular moment in 2045 when the world as we know it stops for an instant… and then restarts in a glorious new utopian future? Or do you agree with Kurzweil’s timeline but harbour serious fears that the ‘glorious new future’ may not be on the cards, and that we’ll all be obliterated by the newborn AGI’s capriciousness or by gray goo? Or are you a moderate, maintaining that a Singularity, while almost certain to occur, will pass unnoticed by those living through it? Or do you think it’s so much baloney?

Whatever your position, I’d really value your input on the subject.

NOTES

1. As stated below, the term Singularity was in use before Kurzweil’s appropriation of it. But as shorthand I’ll refer to his interpretation and predictions relating to it throughout this article.

2. Carvalko, J, 2012, ‘The Techno-human Shell-A Jump in the Evolutionary Gap.’ (Mechanicsburg: Sunbury Press)

3. Ulam, S, 1958, ‘Tribute to John von Neumann’, Bulletin of the American Mathematical Society, 64, #3, part 2, p. 5

4. Vinge, V, 2013, ‘Vernor Vinge on the Singularity’, San Diego State University. Retrieved Nov 2015

5. Kurzweil R, 2005, ‘The Singularity is Near’, (London: Penguin Group)

6. Vinge, V, 1993, ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, originally in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129

7. Armstrong S and Sotala, K, 2012 ‘How We’re Predicting AI – Or Failing To’, in Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster (Pilsen: University of West Bohemia) https://intelligence.org/files/PredictingAI.pdf

8. Armstrong, S, ‘How We’re Predicting AI’, from the 2012 Singularity Conference

9. Pensky, N, 2014, article taken from Pando. https://goo.gl/LpR3eF

10. Pinker S, 2008, IEEE Spectrum: ‘Tech Luminaries Address Singularity’. http://goo.gl/ujQlyI

11. Wikipedia, ‘Technological Singularity’, Retrieved Nov 2015. https://goo.gl/nFzi2y

12. Cascio, J, ‘New FC: Singularity Scenarios’ article taken from Open the Future. http://goo.gl/dZptO3

Gareth John lives in Cardiff, UK and is a trainee social researcher with an interest in the intersection of emerging technologies with behavioural and mental health. He has an MA in Buddhist Studies from the University of Bristol. He is also a member of the U.S. Transhumanist Party / Transhuman Party. 


HISTORICAL COMMENTS

Gareth,

Thank you for the thoughtful article. I’m emailing to comment on the blog post, though I can’t tell when it was written. You say that you don’t believe the singularity will necessarily occur the way Kurzweil envisions, but it seems like you slightly mischaracterize his definition of the term.

I don’t believe that Kurzweil ever meant to suggest that the singularity will simply consist of one single event that will change everything. Rather, I believe he means that the singularity is when no person can make any prediction past that point in time when a $1,000 computer becomes smarter than the entire human race, much like how an event horizon of a black hole prevents anyone from seeing past it.

Given that Kurzweil’s definition isn’t an arbitrary claim that everything changes all at once, I don’t see how anyone can really argue with whether the singularity will happen. After all, at some point in the future, even if it happens much slower than Kurzweil predicts, a $1,000 computer will eventually become smarter than every human. When this happens, I think it’s fair to say no one is capable of predicting the future of humanity past that point. Would you disagree with this?

Even more important is that although many of Kurzweil’s predictions about when certain products will become commercially available to the general public have proven untrue, all the evidence I’ve seen about the actual trend of the law of accelerating returns seems to be spot on. Maybe this trend will slow down, or stop, but it hasn’t yet. Until it does, I think the law of accelerating returns, and Kurzweil’s singularity, deserve the benefit of the doubt.

[…]

Thanks,

Rich Casada


Hi Rich,
Thanks for the comments. The post was written back in 2015 for IEET, and represented a genuine ask from the transhumanist community. At that time my priority was to learn what I could, where I could, and not a lot’s changed for me since – I’m still learning!

I’m not sure I agree that Kurzweil’s definition isn’t a claim that ‘everything changes at once’. In The Singularity is Near, he states:

“So we will be producing about 10^26 to 10^29 cps of nonbiological computation per year in the early 2030s. This is roughly equal to our estimate for the capacity of all living biological human intelligence … This state of computation in the early 2030s will not represent the Singularity, however, because it does not yet correspond to a profound expansion of our intelligence. By the mid-2040s, however, that one thousand dollars’ worth of computation will be equal to 10^26 cps, so the intelligence created per year (at a total cost of about $10^12) will be about one billion times more powerful than all human intelligence today. That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.” (Kurzweil 2005, pp. 135-36, italics mine).

Kurzweil specifically defines what the Singularity is and isn’t (a profound and disruptive transformation in human capability), and he offers a more-or-less precise prediction of when it will occur. A consequence of that may be that we will not ‘be able to make any prediction past that point in time’; however, I don’t believe this is the main thrust of Kurzweil’s argument.
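For what it’s worth, the arithmetic in the passage quoted above does check out. Here is a quick back-of-the-envelope restatement (all figures are Kurzweil’s estimates from The Singularity is Near, not mine; the variable names are just for illustration):

```python
# Kurzweil's mid-2040s figures, restated as a sanity check.
human_capacity_cps = 1e26      # his estimate for all biological human intelligence
cps_per_1000_dollars = 1e26    # projected capability of $1,000 of hardware by the mid-2040s
annual_spend_dollars = 1e12    # his assumed ~$10^12 per year spent on computation

# Number of thousand-dollar units bought per year, and total nonbiological cps.
units_bought = annual_spend_dollars / 1_000             # 10^9 units
nonbio_cps_per_year = units_bought * cps_per_1000_dollars  # 10^35 cps

# Ratio of yearly nonbiological computation to all human intelligence.
ratio = nonbio_cps_per_year / human_capacity_cps
print(f"{ratio:.0e}")  # prints 1e+09
```

Dividing the yearly nonbiological total by the estimated capacity of all human brains recovers exactly the “one billion times more powerful” figure in the quote, so the quarrel is with the estimates and the definition, not the multiplication.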

I do, however, agree with what you appear to be postulating (correct me if I’m wrong): that a better definition of a Singularity might indeed simply be ‘when no person can make any prediction past that point in time.’ And, like you, I don’t believe it will be tied to any set point in time. We may be living through a singularity as we speak. There may be many singularities (although, worth noting again, Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence as opposed to other technologies, writing, for example, that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine” (Kurzweil 2005, p. 9)).

So, having said all that, and in answer to your question of whether there is a point beyond which no one is capable of predicting the future of humanity: I’m not sure. I guess none of us can really be sure until, or unless, it happens.

This is why I believe having the conversation about the ethical implications of these new technologies now is so important. Post-singularity might simply be too late.

Gareth

Review of Ray Kurzweil’s “How to Create a Mind” – Article by Gennady Stolyarov II

Gennady Stolyarov II


How to Create a Mind (2012) by inventor and futurist Ray Kurzweil sets forth a case for engineering minds that are able to emulate the complexity of human thought (and exceed it) without the need to reverse-engineer every detail of the human brain or of the plethora of content with which the brain operates. Kurzweil persuasively describes the human conscious mind as based on hierarchies of pattern-recognition algorithms which, even when based on relatively simple rules and heuristics, combine to give rise to the extremely sophisticated emergent properties of conscious awareness and reasoning about the world. How to Create a Mind takes readers through an integrated tour of key historical advances in computer science, physics, mathematics, and neuroscience – among other disciplines – and describes the incremental evolution of computers and artificial-intelligence algorithms toward increasing capabilities – leading toward the not-too-distant future (the late 2020s, according to Kurzweil) during which computers would be able to emulate human minds.

Kurzweil’s fundamental claim is that there is nothing which a biological mind is able to do, of which an artificial mind would be incapable in principle, and that those who posit that the extreme complexity of biological minds is insurmountable are missing the metaphorical forest for the trees. Analogously, although a fractal or a procedurally generated world may be extraordinarily intricate and complex in its details, it can arise on the basis of carrying out simple and conceptually fathomable rules. If appropriate rules are used to construct a system that takes in information about the world and processes and analyzes it in ways conceptually analogous to a human mind, Kurzweil holds that the rest is a matter of having adequate computational and other information-technology resources to carry out the implementation. Much of the first half of the book is devoted to the workings of the human mind, the functions of the various parts of the brain, and the hierarchical pattern recognition in which they engage. Kurzweil also discusses existing “narrow” artificial-intelligence systems, such as IBM’s Watson, language-translation programs, and the mobile-phone “assistants” that have been released in recent years by companies such as Apple and Google. Kurzweil observes that, thus far, the most effective AIs have been developed using a combination of approaches, having some aspects of prescribed rule-following alongside the ability to engage in open-ended “learning” and extrapolation upon the information which they encounter. Kurzweil draws parallels to the more creative or even “transcendent” human abilities – such as those of musical prodigies – and observes that the manner in which those abilities are made possible is not too dissimilar in principle.

With regard to some of Kurzweil’s characterizations, however, I question whether they are universally applicable to all human minds – particularly where he mentions certain limitations – or whether they only pertain to some observed subset of human minds. For instance, Kurzweil describes the ostensible impossibility of reciting the English alphabet backwards without error (absent explicit study of the reverse order), because of the sequential nature in which memories are formed. Yet, upon reading the passage in question, I was able to recite the alphabet backwards without error upon my first attempt. It is true that this occurred more slowly than the forward recitation, but I am aware of why I was able to do it; I perceive larger conceptual structures or bodies of knowledge as mental “objects” of a sort – and these objects possess “landscapes” on which it is possible to move in various directions; the memory is not “hard-coded” in a particular sequence. One particular order of movement does not preclude others, even if those others are less familiar – but the key to successfully reciting the alphabet backwards is to hold it in one’s awareness as a single mental object and move along its “landscape” in the desired direction. (I once memorized how to pronounce ABCDEFGHIJKLMNOPQRSTUVWXYZ as a single continuous word; any other order is slower, but it is quite doable as long as one fully knows the contents of the “object” and keeps it in focus.) This is also possible to do with other bodies of knowledge that one encounters frequently – such as dates of historical events: one visualizes them along the mental object of a timeline, visualizes the entire object, and then moves along it or drops in at various points using whatever sequences are necessary to draw comparisons or identify parallels (e.g., which events happened contemporaneously, or which events influenced which others). 
I do not know what fraction of the human population carries out these techniques – as the ability to recall facts and dates has always seemed rather straightforward to me, even as it challenged many others. Yet there is no reason why the approaches for more flexible operation with common elements of our awareness cannot be taught to large numbers of people, as these techniques are a matter of how the mind chooses to process, model, and ultimately recombine the data which it encounters. The more general point in relation to Kurzweil’s characterization of human minds is that there may be a greater diversity of human conceptual frameworks and approaches toward cognition than Kurzweil has described. Can an artificially intelligent system be devised to encompass this diversity? This is certainly possible, since the architecture of AI systems would be more flexible than the biological structures of the human brain. Yet it would be necessary for true artificial general intelligences to be able not only to learn using particular predetermined methods, but also to teach themselves new techniques for learning and conceptualization altogether – just as humans are capable of today.

The latter portion of the book is more explicitly philosophical and devoted to thought experiments regarding the nature of the mind, consciousness, identity, free will, and the kinds of transformations that may or may not preserve identity. Many of these discussions are fascinating and erudite – and Kurzweil often transcends fashionable dogmas by bringing in perspectives such as the compatibilist case for free will and the idea that the experiments performed by Benjamin Libet (that showed the existence of certain signals in the brain prior to the conscious decision to perform an activity) do not rule out free will or human agency. It is possible to conceive of such signals as “preparatory work” within the brain to present a decision that could then be accepted or rejected by the conscious mind. Kurzweil draws an analogy to government officials preparing a course of action for the president to either approve or disapprove. “Since the ‘brain’ represented by this analogy involves the unconscious processes of the neocortex (that is, the officials under the president) as well as the conscious processes (the president), we would see neural activity as well as actual actions taking place prior to the official decision’s being made” (p. 231). Kurzweil’s thoughtfulness is an important antidote to commonplace glib assertions that “Experiment X proved that Y [some regularly experienced attribute of humans] is an illusion” – assertions which frequently tend toward cynicism and nihilism if widely adopted and extrapolated upon. It is far more productive to deploy both science and philosophy toward seeking to understand more directly apparent phenomena of human awareness, sensation, and decision-making – instead of rejecting the existence of such phenomena contrary to the evidence of direct experience. 
Especially if the task is to engineer a mind that has at least the faculties of the human brain, Kurzweil is wise not to dismiss aspects such as consciousness, free will, and the more elevated emotions, which have been known to philosophers and ordinary people for millennia, and which it became fashionable in some circles to disparage only in the 20th century. Kurzweil’s only vulnerability in this area is that he often resorts to statements that he accepts the existence of these aspects “on faith” (although it does not appear to be a particularly religious faith; it is, rather, more analogous to “leaps of faith” in the sense that Albert Einstein referred to them). Kurzweil does not need to do this, as he himself outlines sufficient logical arguments to be able to rationally conclude that attributes such as awareness, free will, and agency upon the world – which have been recognized across predominant historical and colloquial understandings, irrespective of particular religious or philosophical flavors – indeed actually exist and should not be neglected when modeling the human mind or developing artificial minds.

One of the thought experiments presented by Kurzweil is vital to consider, because the process by which an individual’s mind and body might become “upgraded” through future technologies would determine whether that individual is actually preserved – in terms of the aspects of that individual that enable one to conclude that that particular person, and not merely a copy, is still alive and conscious:

Consider this thought experiment: You are in the future with technologies more advanced than today’s. While you are sleeping, some group scans your brain and picks up every salient detail. Perhaps they do this with blood-cell-sized scanning machines traveling in the capillaries of your brain or with some other suitable noninvasive technology, but they have all of the information about your brain at a particular point in time. They also pick up and record any bodily details that might reflect on your state of mind, such as the endocrine system. They instantiate this “mind file” in a morphological body that looks and moves like you and has the requisite subtlety and suppleness to pass for you. In the morning you are informed about this transfer and you watch (perhaps without being noticed) your mind clone, whom we’ll call You 2. You 2 is talking about his or her life as if s/he were you, and relating how s/he discovered that very morning that s/he had been given a much more durable new version 2.0 body. […] The first question to consider is: Is You 2 conscious? Well, s/he certainly seems to be. S/he passes the test I articulated earlier, in that s/he has the subtle cues of becoming a feeling, conscious person. If you are conscious, then so too is You 2.

So if you were to, uh, disappear, no one would notice. You 2 would go around claiming to be you. All of your friends and loved ones would be content with the situation and perhaps pleased that you now have a more durable body and mental substrate than you used to have. Perhaps your more philosophically minded friends would express concerns, but for the most part, everybody would be happy, including you, or at least the person who is convincingly claiming to be you.

So we don’t need your old body and brain anymore, right? Okay if we dispose of it?

You’re probably not going to go along with this. I indicated that the scan was noninvasive, so you are still around and still conscious. Moreover your sense of identity is still with you, not with You 2, even though You 2 thinks s/he is a continuation of you. You 2 might not even be aware that you exist or ever existed. In fact you would not be aware of the existence of You 2 either, if we hadn’t told you about it.

Our conclusion? You 2 is conscious but is a different person than you – You 2 has a different identity. S/he is extremely similar, much more so than a mere genetic clone, because s/he also shares all of your neocortical patterns and connections. Or should I say s/he shared those patterns at the moment s/he was created. At that point, the two of you started to go your own ways, neocortically speaking. You are still around. You are not having the same experiences as You 2. Bottom line: You 2 is not you.  (How to Create a Mind, pp. 243-244)

This thought experiment is essentially the same one as I independently posited in my 2010 essay “How Can I Live Forever?: What Does and Does Not Preserve the Self”:

Consider what would happen if a scientist discovered a way to reconstruct, atom by atom, an identical copy of my body, with all of its physical structures and their interrelationships exactly replicating my present condition. If, thereafter, I continued to exist alongside this new individual – call him GSII-2 – it would be clear that he and I would not be the same person. While he would have memories of my past as I experienced it, if he chose to recall those memories, I would not be experiencing his recollection. Moreover, going forward, he would be able to think different thoughts and undertake different actions than the ones I might choose to pursue. I would not be able to directly experience whatever he chooses to experience (or experiences involuntarily). He would not have my ‘I-ness’ – which would remain mine only.

Thus, Kurzweil and I agree, at least preliminarily, that an identically constructed copy of oneself does not somehow obtain the identity of the original. Kurzweil and I also agree that a sufficiently gradual replacement of an individual’s cells and perhaps other larger functional units of the organism, including a replacement with non-biological components that are integrated into the body’s processes, would not destroy an individual’s identity (assuming it can be done without collateral damage to other components of the body). Then, however, Kurzweil posits the scenario where one, over time, transforms into an entity that is materially identical to the “You 2” as posited above. He writes:

But we come back to the dilemma I introduced earlier. You, after a period of gradual replacement, are equivalent to You 2 in the scan-and-instantiate scenario, but we decided that You 2 in that scenario does not have the same identity as you. So where does that leave us? (How to Create a Mind, p. 247)

Kurzweil and I are still in agreement that “You 2” in the gradual-replacement scenario could legitimately be a continuation of “You” – but our views diverge when Kurzweil states, “My resolution of the dilemma is this: It is not true that You 2 is not you – it is you. It is just that there are now two of you. That’s not so bad – if you think you are a good thing, then two of you is even better” (p. 247). I disagree. If I (via a continuation of my present vantage point) cannot have the direct, immediate experiences and sensations of GSII-2, then GSII-2 is not me, but rather an individual with a high degree of similarity to me, but with a separate vantage point and separate physical processes, including consciousness. I might not mind the existence of GSII-2 per se, but I would mind if that existence were posited as a sufficient reason to be comfortable with my present instantiation ceasing to exist.  Although Kurzweil correctly reasons through many of the initial hypotheses and intermediate steps leading from them, he ultimately arrives at a “pattern” view of identity, with which I differ. I hold, rather, a “process” view of identity, where a person’s “I-ness” remains the same if “the continuity of bodily processes is preserved even as their physical components are constantly circulating into and out of the body. The mind is essentially a process made possible by the interactions of the brain and the remainder of nervous system with the rest of the body. One’s ‘I-ness’, being a product of the mind, is therefore reliant on the physical continuity of bodily processes, though not necessarily an unbroken continuity of higher consciousness.” (“How Can I Live Forever?: What Does and Does Not Preserve the Self”) If only a pattern of one’s mind were preserved and re-instantiated, the result may be potentially indistinguishable from the original person to an external observer, but the original individual would not directly experience the re-instantiation. 
It is not the content of one’s experiences or personality that is definitive of “I-ness” – but rather the more basic fact that one experiences anything as oneself and not from the vantage point of another individual; this requires the same bodily processes that give rise to the conscious mind to operate without complete interruption. (The extent of permissible partial interruption is difficult to determine precisely and open to debate; general anesthesia is not sufficient to disrupt I-ness, but what about cryonics or shorter-term “suspended animation”?) For this reason, the pursuit of biological life extension of one’s present organism remains crucial; one cannot rely merely on one’s “mindfile” being re-instantiated in a hypothetical future after one’s demise. The future of medical care and life extension may certainly involve non-biological enhancements and upgrades, but in the context of augmenting an existing organism, not disposing of that organism.

How to Create a Mind is highly informative for artificial-intelligence researchers and laypersons alike, and it merits revisiting as a reference for useful ideas regarding how (at least some) minds operate. It facilitates thoughtful consideration of both the practical methods and more fundamental philosophical implications of the quest to improve the flexibility and autonomy with which our technologies interact with the external world and augment our capabilities. At the same time, as Kurzweil acknowledges, those technologies often lead us to “outsource” many of our own functions to them – as is the case, for instance, with vast amounts of human memories and creations residing on smartphones and in the “cloud”. If the timeframes of arrival of human-like AI capabilities match those described by Kurzweil in his characterization of the “law of accelerating returns”, then questions regarding what constitutes a mind sufficiently like our own – and how we will treat those minds – will become ever more salient in the proximate future. It is important, however, for interest in advancing this field to become more widespread, and for political, cultural, and attitudinal barriers to its advancement to be lifted – for, unlike Kurzweil, I do not consider the advances of technology to be inevitable or unstoppable. We humans maintain the responsibility of persuading enough other humans that the pursuit of these advances is worthwhile and will greatly improve the length and quality of our lives, while enhancing our capabilities and attainable outcomes. Every movement along an exponential growth curve is due to a deliberate push upward by the minds of the creators of progress, using the machines they have built.

Gennady Stolyarov II is Chairman of the United States Transhumanist Party. Learn more about Mr. Stolyarov here

This article is made available pursuant to the Creative Commons Attribution 4.0 International License, which requires that credit be given to the author, Gennady Stolyarov II (G. Stolyarov II).