
A Dialogue on the Simulation Interview with Dan Faggella: A Case for Responsible Stewardship – Article by Dinorah Delfin

Dinorah Delfin

Voice by Terence McKenna

Last week I published an article, “Programmatically Generated Everything: The Intelligence/Love Paradox”, in response to an interview with Dan Faggella for Allen Saakyan’s Simulation series.

Dan’s thoughtful response to my article was highly stimulating, as he mindfully elaborated on and critiqued my thoughts. I am so grateful for the opportunity to have this exchange of insights – Thank you!

Dan hypothesizes that human civilization might be heading towards a future dominated by “substrate digital monopolies” and, as a result, becoming more disconnected from real human interactions, nature, and a truer sense of reality.

From this exchange, we expressed agreement on three fundamental areas:

1. A future controlled by substrate digital monopolies is one we don’t want, and therefore should mitigate.

2. We need to agree on a global set of values to enable responsible and sustainable technological development.

3. Emerging technologies such as virtual realities (VR), brain-computer interfaces or brain-machine interfaces (BMI), and artificial general intelligence (AGI) will benefit humanity and all sentient life, if used responsibly.

1. Predicting undesirable futures

“For the record, this world of substrate monopolies is not something I hope for, strive for, or wish for. Rather, I consider it likely (read the full essay on the matter). It’s a hypothesis – and more than anything – a warning against a kind of power conflict that I fear. Such virtual worlds could be amazing and beneficial, but the conflict of controlling the substrate is a reality I foresee to be likely, not a reality I foresee to be preferable.” — Dan Faggella

Given our global market dynamics, sadly, substrate digital monopolies are likely to happen. The greater the disconnect between meaningful human interactions and nature, the longer we’ll perpetuate the destructive narcissism and lack of care driving society to an accelerating ecological and moral crisis. Could it be that a greater disconnect from a Truer sense of Self and of Reality could put at odds the stability of all natural systems – including intelligent life itself?

Though I embrace the idea that we, technological creatures, have an inherent right to have full sovereignty over our individual evolution and senescence through the means of science and technologies, I’m also aware that many things can and will probably go wrong if we don’t properly educate ourselves or have the right policies in place.

How could we redirect the destructive forces of a global arms race for digital dominance towards a thriving technological era of creative and ecological flourishing? What would it take for market forces to adopt systems that are in service of all stakeholders?

By understanding where we come from – historically, biologically, and energetically – we will be better equipped at thinking and solving problems holistically and sustainably.

2. On the issue of context and values

During my years in Business School at Baruch College N.Y., a class that had the most impact on my education was “Social Entrepreneurship”.

As we become more dependent on automated processes, we ought to also devise outlets for people to participate in Creative Processes that involve both the fulfillment of drives and pleasures and the accumulation of virtues and sound moral values. The better we are at mastering Self-Leadership, the better we will be at designing social systems that operate under the Highest Ethical Standards.

How can businesses and corporations profit from making the world a better place? How can we inspire young entrepreneurs to find True meaning in Life so that our drives and intentions are aligned with maintaining a more Harmonious Universal Ecosystem?

“Most of what we believe to be moral tenets and insights are contextual”, says Dan. I agree. This is a postmodern sensibility based on the universal natural principle of Relativism, which holds that everything is essentially an opinion and nothing can be iron truth.

Some things are more truthful than others, however – for example, claims supported by empirical evidence. Context isn’t fixed, and a set of values need not be static. (The USTP’s Transhumanist Bill of Rights, for example, is a living document which can be amended via votes by U.S. Transhumanist Party members.)

In response to last week’s article, fellow USTP Officer, Ryan Stevenson, shared the following observations:

“I’ve been thinking a lot about the issues you raise about the relationship between human intelligence/reason/technology and goodness recently, and what you’ve written here is really insightful. It seems that it’s rather easy to forget that technology isn’t a panacea, and there must be (as you put it) ‘humane’ intentions guiding its use.

A number of the distinctions you put forward reminded me of an early Christian philosopher, Maximus the Confessor. Maximus was one of the first individuals in Western thought to grapple with the nature of technology and, unlike his fellow Christians, saw its potential for making human beings more human.  Obviously, his thought exists in a theistic context, but maybe Maximus belongs in our list of Transhumanist forebears. If you’re interested, here’s an article that touches on some of his philosophy dealing with techne…

I also support advocating for a proactive policy with regards to BMI and VR at international bodies like the UN.  Building a transorganizational coalition with other groups would be a good step forward – labor and time-intensive, but important and doable.  It would be great to get that conversation going.” 

Maximus the Confessor’s theistic approach, an early precursor to critical and ethical Transhumanism, is a compelling reminder that one of the most fundamental uses of technology is to help humans become More Humane. In the article Ryan shared, it is argued that technology is meant to assist us in “stewarding creation across the cosmos” and that the tradition of “natural law reasoning” can help “ground a global ethic for sustainable and integral development”.

Could natural law reasoning based on empirical evidence be a viable tool to articulate norms and ideas of universal understanding?

Growing up as a Christian, for example, I often wondered about the meaning of the “holy spirit”. Devoid of mythology, the concept is one of transition towards self-awareness – Ape-to-Human – the idea of “self” becoming apparent to “sinless” primates whose awareness might have increased from introducing bone marrow and mushrooms to their diet, leading to a miraculous evolutionary transition from fearful subjects of nature’s will to responsible masters and designers of our individual and collective destiny.

The un-learning of one’s convictions to exercise novelty and expanded perspectives (the dissolution of limiting beliefs) is one of the most challenging of human endeavors, but it is the only way to true freedom and collective harmony.

3. Love and god as universal natural phenomena; not as romantic ideas of love, or a culture’s perception of a “moral” “entity”.

“In an age of powerful technology, it becomes poignantly obvious that while a personal and social ethic remain necessary—albeit altered to reflect emerging understandings of personhood and relationality—they are also increasingly not sufficient. If as humanity, as one global culture, we are to order complex ecological changes effected through human (and possibly even non-human) agency and manipulation, natural law reasoning must be more profoundly cosmological. This implies, that natural law must consider as much as possible, the ‘total ecology’ in view of its finality as New Creation, but also our human obligation to steward the flourishing of creation in all its rich, inter-dependent diversity. This ultimately is what Laudato Sì calls for when it promotes an ‘ecological conversion’ for an authentic integral flourishing.” — Nadia Delicata, “Homo Technologicus and the Recovery of a Universal Ethic: Maximus the Confessor and Romano Guardini”, 2018.

Concepts of Order and Chaos are as deeply ingrained in Quantum Mechanics as in Theological, or Natural Law, reasoning. Quantum theory suggests there are many dimensions to reality, and the Noosphere has been referred to as a natural phenomenon of “transhuman consciousness emerging from the interactions of human minds.”

Love exists in many forms as the inherent “sacred”/“intelligent” programming driving natural systems towards reproduction and survival. Positive feelings like joy, love, care, trust, or instinctual arousal and mating, for example, all produce chemical reactions and high vibrational frequencies in the brain, linked to growth and a strong immune system. Stress, depression, anxiety, which make the body sick and susceptible to degenerative diseases, are linked to lower vibrational brain frequencies.

While we envision a future where sentient life blooms into higher forms of understanding and expression, I believe it is safe to say that the Transhuman Era we desire is one that encourages humanity to be more caring and to think more holistically.

Dinorah Delfin is an Artist and the Director of Admissions and Public Relations for the U.S. Transhumanist Party / Transhuman Party. 


Become a Member of the U.S. Transhumanist Party. 

Become a Foreign Ambassador

Transhumanist Bill of Rights

U.S. Transhumanist Party Constitution 


Programmatically Generated Everything: The Intelligence/Love Paradox – Article by Dinorah Delfin

Dinorah Delfin

“To Love or Not To Love?” Illustration by Dinorah Delfin

Yesterday, while still excited about Elon Musk’s recent announcement that Neuralink’s first human trials will start next year, I came across an interview by media host and science communicator Allen Saakyan with Dan Faggella, founder of Emerj (a market research platform focused on AI), and I’d love to share some thoughts!

Mr. Faggella is deeply passionate about Virtual Reality, Brain-Machine Interfaces, and Artificial Intelligence. Like myself, he is driven by a desire to address ethical questions regarding the application of emerging advanced technologies in everyday life. He has established communications with the UN to discuss sustainable plans for the future and to avoid a potentially destructive global arms-race dynamic towards a digital monopoly.

Mr. Faggella talks about two crucial questions for humanity to address: what is the trajectory of human intelligence (what is the point of it), and how do we get there without killing each other?

I believe the role of Human Intelligence is to become ever more complex (whatever form this may take) so that LIFE has a better chance to spread across the Universe. I also believe in a mathematical, fractal, and cyclical reality. Our connection and relationship to the macro and microcosms is no accident, it is physics. We have had a basic understanding of the world of atoms, and beyond, for thousands of years, not from digital computers, but from observation and exploring Nature. 

Recently, I’ve been studying Dante’s Divina Commedia, and I have learned that the central issue in the poem is the role of Intelligence and Reason. This complex and multi-dimensional literary masterpiece centers on the idea of “conversion” – the main character realizing he wasn’t part of the solution, but the problem. The poem illustrates a stark contrast between the faculty of knowing and the faculty of choosing: if we don’t know things, it is hard to make the right choices, but even if we know the right things, we don’t always make the right choices (though humans usually strive to lower this probability).

In the interview, Dan and Allen talk about how to ensure that the Transhumanist transition happens in a way that maximizes humanity’s potential towards indefinite permanence. Mr. Faggella offers several responses: “The best possible scenario would be through uploading or some degree of Brain-Machine Interface.” … “The very idea that cognitively enhanced people would get together and agree on things on earth is completely inviable.” … “Our expansion of vastly greater degrees of creativity and capabilities have to happen in the virtual world because if it happens in the physical world, we are killing each other.” … “People don’t want what they say they want, they just want the fulfillment of their drives.”

Is the future Mr. Faggella illustrates – a society driven by the “fulfillment of drives” controlled by “substrate digital monopolies” – one we want? This future, as Mr. Faggella remarks, has nothing to do with the accumulation of virtues or the preservation of what makes us not just Humans, but Humane – e.g. Reciprocal Love. 

Without LOVE there is no reason for LIFE to continue. Literally. Love relates more to Intelligent Creation than to Intelligence. Feelings of love, or high vibrational frequencies, enable GROWTH, complexity, creativity, and a strong immune system. Low vibrational frequencies are linked to stress and high anxiety levels which make the body sick and susceptible to all kinds of degenerative diseases. Love exists in many forms throughout all living systems – As above so below.

Digital technology, just like any tool, is meant to help humans maximize our innate capabilities, and Planet Earth is to be regarded as one of our Most precious legacies and sources of wisdom. All sentient beings matter, and Humans, in particular, want to be in full control of our lives and destinies.

Our physical, carbon-based reality isn’t a perception, but a re-interpretation of ideas. Technologies like VRs and BMIs are best used to enhance our physical reality and our relationships with other beings. Humans desire to safeguard our Individual Psyches and Sovereignty over our Individual Consciousnesses and Physical Expression. Let’s aim towards a future where we use advanced technologies not only to teach our bodies to leverage, on-demand, the power of universal wisdom to heal and regenerate themselves, but also to leverage this universal wisdom to design systems that will protect all sentient beings from any suffering.

In the interview, Mr. Faggella talks about establishing Sustainable Development Goals with the United Nations for humanity “to get on the same page on what is after people (posthumans), and to figure out a way to have a non-arm-race global dynamic to get there”. As an active member of the Transhumanist Movement and an Officer of the United States Transhumanist Party, I’d like to kindly extend an invitation to discuss these very important and pressing topics and, along with Mr. Faggella, get involved with the UN and similar organizations.

There are many reasons to be optimistic about the future – there is no limit to where our imagination can take us. Dante’s Divina Commedia is, after all, a reminder that Epic Transpersonal Meta-Narratives can also come with happy endings.

Here is a link to Dan Faggella’s Interview:

Dinorah Delfin is an Artist and the Director of Admissions and Public Relations for the U.S. Transhumanist Party / Transhuman Party. 


Become a Member of the U.S. Transhumanist Party. 

Become a Foreign Ambassador

Transhumanist Bill of Rights

U.S. Transhumanist Party Constitution 

Highlights #1 – First Virtual Debate Among U.S. Transhumanist Party Presidential Candidates – July 6, 2019

Rachel Haywire
Johannon Ben Zion
Charles Holsopple
Moderated by Gennady Stolyarov II

Watch highlights from the first virtual debate among U.S. Transhumanist Party / Transhuman Party (USTP) candidates for President of the United States, which took place on Saturday, July 6, 2019, at 3 p.m. U.S. Pacific Time.

Candidates Rachel Haywire, Johannon Ben Zion, and Charles Holsopple provided their introductory statements and discussed how their platforms reflect the Core Ideals of the USTP.

This highlights reel was created by Tom Ross, the USTP Director of Media Production. Watch the full 3-hour debate here.

Learn about the USTP candidates here.

View individual candidate profiles:

Johannon Ben Zion
Rachel Haywire
Charles Holsopple

Join the U.S. Transhumanist Party / Transhuman Party for free, no matter where you reside. Apply in less than a minute here.

Those who join the USTP by August 10, 2019, will be eligible to vote in the Electronic Primary on August 11-17, 2019.


First Virtual Debate Among U.S. Transhumanist Party Presidential Candidates – July 6, 2019

Rachel Haywire
Johannon Ben Zion
Charles Holsopple
Moderated by Gennady Stolyarov II

The first virtual debate among U.S. Transhumanist Party / Transhuman Party candidates for President of the United States took place on Saturday, July 6, 2019, at 3 p.m. U.S. Pacific Time.

Candidates Rachel Haywire, Johannon Ben Zion, and Charles Holsopple discussed how their platforms reflect the Core Ideals of the USTP and also answered selected questions from the public.

Learn about the USTP candidates here.

View individual candidate profiles:

Johannon Ben Zion
Rachel Haywire
Charles Holsopple

Join the U.S. Transhumanist Party / Transhuman Party for free, no matter where you reside. Apply in less than a minute here.

Those who join the USTP by August 10, 2019, will be eligible to vote in the Electronic Primary on August 11-17, 2019.

Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018


Gennady Stolyarov II
Ray Kurzweil

The Stolyarov-Kurzweil Interview has been released at last! Watch it on YouTube here.

U.S. Transhumanist Party Chairman Gennady Stolyarov II posed a wide array of questions for inventor, futurist, and Singularitarian Dr. Ray Kurzweil on September 21, 2018, at RAAD Fest 2018 in San Diego, California. Topics discussed include advances in robotics and the potential for household robots, artificial intelligence and overcoming the pitfalls of AI bias, the importance of philosophy, culture, and politics in ensuring that humankind realizes the best possible future, how emerging technologies can protect privacy and verify the truthfulness of information being analyzed by algorithms, as well as insights that can assist in the attainment of longevity and the preservation of good health – including a brief foray into how Ray Kurzweil overcame his Type 2 Diabetes.

Learn more about RAAD Fest here. RAAD Fest 2019 will occur in Las Vegas during October 3-6, 2019.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentation by Gennady Stolyarov II at RAAD Fest 2018, entitled, “The U.S. Transhumanist Party: Four Years of Advocating for the Future”.

Advocating for the Future – Panel at RAAD Fest 2017 – Gennady Stolyarov II, Zoltan Istvan, Max More, Ben Goertzel, [CENSORED]


Gennady Stolyarov II
Zoltan Istvan
Max More
Ben Goertzel

Gennady Stolyarov II, Chairman of the United States Transhumanist Party, moderated this panel discussion, entitled “Advocating for the Future”, at RAAD Fest 2017 on August 11, 2017, in San Diego, California.

Watch it on YouTube here.

From left to right, the panelists are Zoltan Istvan, Gennady Stolyarov II, Max More, Ben Goertzel, and . With these leading transhumanist luminaries, Mr. Stolyarov discussed subjects such as what the transhumanist movement will look like in 2030, artificial intelligence and sources of existential risk, gamification and the use of games to motivate young people to create a better future, and how to persuade large numbers of people to support life-extension research with at least the same degree of enthusiasm that they display toward the fight against specific diseases.

Learn more about RAAD Fest here.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentations of Gennady Stolyarov II and Zoltan Istvan from the “Advocating for the Future” panel.

The Singularity: Fact or Fiction or Somewhere In-Between? – Article by Gareth John

Gareth John

Editor’s Note: The U.S. Transhumanist Party features this article by our member Gareth John, originally published by IEET on January 13, 2016, as part of our ongoing integration with the Transhuman Party. This article raises various perspectives about the idea of technological Singularity and asks readers to offer their perspectives regarding how plausible the Singularity narrative, especially as articulated by Ray Kurzweil, is. The U.S. Transhumanist Party welcomes such deliberations and assessments of where technological progress may be taking our species and how rapid such progress might be – as well as how subject to human influence and socio-cultural factors technological progress is, and whether a technological Singularity would be characterized predominantly by benefits or by risks to humankind. The article by Mr. John is a valuable contribution to the consideration of such important questions.

~ Gennady Stolyarov II, Chairman, United States Transhumanist Party, January 2, 2019

In my continued striving to disprove the theorem that there’s no such thing as a stupid question, I shall now proceed to ask one. What’s the consensus on Ray Kurzweil’s position concerning the coming Singularity? [1] Do you as transhumanists accept his premise and timeline, or do you feel that a) it’s a fiction, or b) it’s a reality but not one that’s going to arrive anytime soon? Is it as inevitable as Kurzweil suggests, or is it simply millenarian daydreaming in line with the coming Rapture?

According to Wikipedia (yes, I know, but I’m learning as I go along), the first use of the term ‘singularity’ in this context was made by Stanislaw Ulam in his 1958 obituary for John von Neumann, in which he mentioned a conversation with von Neumann about the ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’. [2] The term was popularised by mathematician, computer scientist, and science fiction author Vernor Vinge, who argues that artificial intelligence, human biological advancement, or brain-computer interfaces could be possible causes of the singularity. [3] Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain. [4]

Kurzweil predicts the singularity to occur around 2045 [5] whereas Vinge predicts some time before 2030 [6]. In 2012, Stuart Armstrong and Kaj Sotala published a study of AGI predictions by both experts and non-experts and found a wide range of predicted dates, with a median value of 2040. [7] Discussing the level of uncertainty in AGI estimates, Armstrong stated at the 2012 Singularity Summit: ‘It’s not fully formalized, but my current 80% estimate is something like five to 100 years.’ [8]

Speaking for myself, and despite the above, I’m not at all convinced that a Singularity will occur, i.e., one singular event that effectively changes history forever from that precise moment forward. From my (admittedly limited) research on the matter, it seems far more realistic to think of the future in terms of incremental steps made along the way, leading up to major diverse changes (plural) in the way we as human beings – and indeed all sentient life – live, but try as I might I cannot get my head around these all occurring in a near-simultaneous Big Bang.

Surely we have plenty of evidence already that the opposite will most likely be the case? Scientists have been working on AI, nanotechnology, genetic engineering, robotics, et al., for many years and I see no reason to conclude that this won’t remain the case in the years to come. Small steps leading to big changes maybe, but perhaps not one giant leap for mankind in a singular convergence of emerging technologies?

Let’s be straight here: I’m not having a go at Kurzweil or his ideas – the man’s clearly a visionary (at least from my standpoint) and leagues ahead when it comes to intelligence and foresight. I’m simply interested as to what extent his ideas are accepted by the wider transhumanist movement.

There are notable critics (again leagues ahead of me in critically engaging with the subject) who argue against the idea of the Singularity. Nathan Pensky, writing in 2014, says:

It’s no doubt true that the speculative inquiry that informed Kurzweil’s creation of the Singularity also informed his prodigious accomplishment in the invention of new tech. But just because a guy is smart doesn’t mean he’s always right. The Singularity makes for great science-fiction, but not much else. [9]

Other well-informed critics have also dismissed Kurzweil’s central premise, among them Professor Andrew Blake, managing director of Microsoft Research Cambridge, Jaron Lanier, Paul Allen, Peter Murray, Jeff Hawkins, Gordon Moore, Jared Diamond, and Steven Pinker, to name but a few. Even Noam Chomsky has waded in to categorically deny the possibility of such an event. Pinker writes:

There is not the slightest reason to believe in the coming singularity. The fact you can visualise a future in your imagination is not evidence that it is likely or even possible… Sheer processing is not a pixie dust that magically solves all your problems. [10]

There are, of course, many more critics, but there are also many supporters, and Kurzweil rarely lets a criticism pass without a fierce rebuttal. Indeed, new interdisciplinary academic fields have been founded partly on the presupposition of the Singularity occurring in line with Kurzweil’s predictions (along with other phenomena that pose the possibility of existential risk). Examples include Nick Bostrom’s Future of Humanity Institute at Oxford University and the Centre for the Study of Existential Risk at Cambridge.

Given the above, and returning to my original question: how do transhumanists taken as a whole rate the possibility of an imminent Singularity as described by Kurzweil? Good science or good science-fiction? For Kurzweil it is the pace of change – exponential growth – that will result in a runaway effect – an intelligence explosion – where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence. [11] The only way for us to participate in such an event would be by merging with the intelligent machines we are creating.

And I guess this is what is hard for me to fathom. We are creating these machines with all our mixed-up, blinkered, prejudicial, oppositional minds, aims, and values. We as human beings, however intelligent, are an absolutely necessary part of the picture that I think Kurzweil sometimes underestimates. I’m more inclined to agree with Jamais Cascio when he says:

I don’t think that a Singularity would be visible to those going through one. Even the most disruptive changes are not universally or immediately distributed, and late followers learn from the dilemmas of those who had initially encountered the disruptive change. [12]

So I’d love to know what you think. Are you in Kurzweil’s corner, waiting for that singular moment in 2045 when the world as we know it stops for an instant… and then restarts in a glorious new utopian future? Or do you agree with Kurzweil but harbour serious fears that the whole ‘glorious new future’ may not be on the cards and that we’ll all be obliterated by the newborn AGI’s capriciousness or gray goo? Or are you a moderate, maintaining that a Singularity, while almost certain to occur, will pass unnoticed by those awaiting it? Or do you think it’s all so much baloney?

Whatever your position, I’d really value your input and would love to hear your views on the subject.


1. As stated below, the term Singularity was in use before Kurzweil’s appropriation of it. But as shorthand I’ll refer to his interpretation and predictions relating to it throughout this article.

2. Carvalko, J, 2012, ‘The Techno-human Shell-A Jump in the Evolutionary Gap.’ (Mechanicsburg: Sunbury Press)

3. Ulam, S, 1958, ‘Tribute to John von Neumann’, Bulletin of the American Mathematical Society, 64, #3, part 2, p. 5

4. Vinge, V, 2013, ‘Vernor Vinge on the Singularity’, San Diego State University. Retrieved Nov 2015

5. Kurzweil R, 2005, ‘The Singularity is Near’, (London: Penguin Group)

6. Vinge, V, 1993, ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, originally in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129

7. Armstrong S and Sotala, K, 2012 ‘How We’re Predicting AI – Or Failing To’, in Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster (Pilsen: University of West Bohemia)

8. Armstrong, S, ‘How We’re Predicting AI’, from the 2012 Singularity Conference

9. Pensky, N, 2014, article taken from Pando.

10. Pinker S, 2008, IEEE Spectrum: ‘Tech Luminaries Address Singularity’.

11. Wikipedia, ‘Technological Singularity; Retrieved Nov 2015.

12. Cascio, J, ‘New FC: Singularity Scenarios’ article taken from Open the Future.

Gareth John lives in Cardiff, UK and is a trainee social researcher with an interest in the intersection of emerging technologies with behavioural and mental health. He has an MA in Buddhist Studies from the University of Bristol. He is also a member of the U.S. Transhumanist Party / Transhuman Party. 



Thank you for the thoughtful article. I’m emailing to comment on the blog post, though I can’t tell when it was written. You say that you don’t believe the singularity will necessarily occur the way Kurzweil envisions, but it seems like you slightly mischaracterize his definition of the term.

I don’t believe that Kurzweil ever meant to suggest that the singularity will simply consist of one single event that will change everything. Rather, I believe he means that the singularity is when no person can make any prediction past that point in time when a $1,000 computer becomes smarter than the entire human race, much like how an event horizon of a black hole prevents anyone from seeing past it.

Given that Kurzweil’s definition isn’t an arbitrary claim that everything changes all at once, I don’t see how anyone can really argue with whether the singularity will happen. After all, at some point in the future, even if it happens much slower than Kurzweil predicts, a $1,000 computer will eventually become smarter than every human. When this happens, I think it’s fair to say no one is capable of predicting the future of humanity past that point. Would you disagree with this?

Even more important is that although many of Kurzweil’s predictions about when certain products will become commercially available to the general public have not come true, all the evidence I’ve seen about the actual trend of the law of accelerating returns seems to be exactly spot on. Maybe this trend will slow down, or stop, but it hasn’t yet. Until it does, I think the law of accelerating returns, and Kurzweil’s singularity, deserve the benefit of the doubt.



Rich Casada

Hi Rich,
Thanks for the comments. The post was written back in 2015 for IEET, and represented a genuine ask from the transhumanist community. At that time my priority was to learn what I could, where I could, and not a lot’s changed for me since – I’m still learning!

I’m not sure I agree that Kurzweil’s definition isn’t a claim that ‘everything changes at once’. In The Singularity is Near, he states:

“So we will be producing about 10^26 to 10^29 cps of nonbiological computation per year in the early 2030s. This is roughly equal to our estimate for the capacity of all living biological human intelligence … This state of computation in the early 2030s will not represent the Singularity, however, because it does not yet correspond to a profound expansion of our intelligence. By the mid-2040s, however, that one thousand dollars’ worth of computation will be equal to 10^26 cps, so the intelligence created per year (at a total cost of about $10^12) will be about one billion times more powerful than all human intelligence today. That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.” (Kurzweil 2005, pp. 135-36, italics mine).
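As an aside, the arithmetic inside that quoted passage is internally consistent – a few lines of Python, using only the figures Kurzweil himself states (his estimates, not independent data), recover the “one billion times” claim:

```python
# Sanity check on the arithmetic in Kurzweil's passage.
# All figures below are Kurzweil's own estimates, not independent data.

human_capacity_cps = 1e26   # his estimate of all biological human intelligence
cost_per_computer = 1e3     # $1,000
cps_per_computer = 1e26     # what $1,000 buys by the mid-2040s, per the quote
annual_spend = 1e12         # ~$10^12 spent on computation per year

computers_per_year = annual_spend / cost_per_computer        # 10^9 machines
nonbio_cps_per_year = computers_per_year * cps_per_computer  # 10^35 cps

ratio = nonbio_cps_per_year / human_capacity_cps
print(f"Nonbiological computation created per year is about {ratio:.0e} "
      f"times all human intelligence")
```

That ratio of roughly a billion is exactly the factor Kurzweil cites for 2045.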

Kurzweil specifically defines what the Singularity is and isn’t (a profound and disruptive transformation in human capability), and he gives a more-or-less precise prediction of when it will occur. A consequence of that may be that we will not ‘be able to make any prediction past that point in time’; however, I don’t believe this is the main thrust of Kurzweil’s argument.

I do, however, agree with what you appear to be postulating (correct me if I’m wrong): that a better definition of a Singularity might indeed simply be ‘when no person can make any prediction past that point in time.’ And, like you, I don’t believe it will be tied to any set point in time. We may be living through a singularity as we speak. There may be many singularities (although, worth noting again, Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence, as opposed to other technologies, writing for example that, “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine.” (Kurzweil 2005, p. 9)).

So, having said all that, and in answer to your question of whether there is a point beyond which no one is capable of predicting the future of humanity: I’m not sure. I guess none of us can really be sure until, or unless, it happens.

This is why I believe having the conversation about the ethical implications of these new technologies now is so important. Post-singularity might simply be too late.


U.S. Transhumanist Party Chairman Gennady Stolyarov II Interviewed by Nikola Danaylov of Singularity.FM


Gennady Stolyarov II
Nikola Danaylov

On March 31, 2018, Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, was interviewed by Nikola Danaylov, a.k.a. Socrates, of Singularity.FM. A synopsis, audio download, and embedded video of the interview can be found on Singularity.FM here. You can also watch the YouTube video recording of the interview here.

Apparently this interview, nearly three hours in length, broke the record for the length of Nikola Danaylov’s in-depth, wide-ranging conversations on philosophy, politics, and the future.  The interview covered both some of Mr. Stolyarov’s personal work and ideas, such as the illustrated children’s book Death is Wrong, as well as the efforts and aspirations of the U.S. Transhumanist Party. The conversation also delved into such subjects as the definition of transhumanism, intelligence and morality, the technological Singularity or Singularities, health and fitness, and even cats. Everyone will find something of interest in this wide-ranging discussion.

The U.S. Transhumanist Party would like to thank its Director of Admissions and Public Relations, Dinorah Delfin, for the outreach that enabled this interview to happen.

To help advance the goals of the U.S. Transhumanist Party, as described in Mr. Stolyarov’s comments during the interview, become a member for free, no matter where you reside. Click here to fill out a membership application.

California Transhumanist Party Leadership Meeting – Presentation by Newton Lee and Discussion on Transhumanist Political Efforts



Newton Lee
Gennady Stolyarov II
Bobby Ridge
Charlie Kam

The California Transhumanist Party held its inaugural Leadership Meeting on January 27, 2018. Newton Lee, Chairman of the California Transhumanist Party and Education and Media Advisor of the U.S. Transhumanist Party, outlined the three Core Ideals of the California Transhumanist Party (modified versions of the U.S. Transhumanist Party’s Core Ideals); the forthcoming book “Transhumanism: In the Image of Humans”, which he is curating and which will contain essays from leading transhumanist thinkers in a variety of realms; and possibilities for outreach, future candidates, and collaboration with the U.S. Transhumanist Party and Transhumanist Parties in other States. U.S. Transhumanist Party Chairman Gennady Stolyarov II contributed by providing an overview of the U.S. Transhumanist Party’s current operations and possibilities for running or endorsing candidates for office in the coming years.

Visit the website of the California Transhumanist Party:

Read the U.S. Transhumanist Party Constitution:

Become a member of the U.S. Transhumanist Party for free:

(If you reside in California, this would automatically render you a member of the California Transhumanist Party.)

We Must Unite for an International Ban on AI Weaponry; A Real Solution to Survive the Singularity Along with What Lies Beyond – Article by Bobby Ridge


Bobby Ridge

I urge the United States Transhumanist Party to support an international ban on the use of autonomous weapons, and to support government subsidies and alternative funding for AI-safety research – funding very similar to Elon Musk’s efforts. As Max Tegmark has noted, Elon Musk’s $10M donation to the Future of Life Institute helped put out 37 grants to run a global research program aimed at keeping AI beneficial to humanity.

Biologists fought hard to pass the international ban on biological weapons, so that biology would be known as it is today: a science that cures diseases, ends suffering, and makes sense of the complexity of living organisms. Similarly, the community of chemists united and achieved an international ban on the use of chemical weapons. Scientists conducting AI research should follow their predecessors’ wisdom and unite to achieve an international ban on autonomous weapons! It is sad to say that we are already losing this fight. The Kalashnikov Bureau weapons-manufacturing company announced that it has recently invented an unmanned ground vehicle (UGV), which field tests have already shown to perform at a better-than-human level. China recently began field-testing cruise missiles with AI and autonomous capabilities, and a few companies are getting very close to having AI autopilots capable of controlling the flight envelope at hypersonic speeds. (Amir Husain: “The Sentient Machine: The Coming Age of Artificial Intelligence”)

Even though, in 2015 and 2016, the US government spent only $1.1 billion and $1.2 billion on AI research, respectively, according to Reuters, “The Pentagon’s fiscal 2017 budget request will include $12 billion to $15 billion to fund war gaming, experimentation and the demonstration of new technologies aimed at ensuring a continued military edge over China and Russia.” While these autonomous weapons are already being developed, the UN Convention on Certain Conventional Weapons (CCW) could not even agree on a definition of autonomous weapons after four years of meetings, despite expressing dire concern about their spread. They decided to put off the conversation for another year, but at the pace technology is advancing, we may not have another year to postpone a definition and solutions. Our species must advocate and emulate the 23 Asilomar AI Principles, which over 1,000 expert AI researchers from around the globe have signed.

In only the last decade or so, there has been a combined investment of trillions of dollars in an AI race by the private sector – Google, Microsoft, Facebook, Amazon, Alibaba, Baidu, and other tech titans – along with whole governments, such as China, South Korea, Russia, Canada, and, only recently, the USA. These investments are aimed mainly at making AI more powerful, not safer! Yes, the intelligence and sentience of an artificial superintelligence (ASI) will be inherently uncontrollable. As a metaphor, humans controlling the development of an ASI will be like an ant trying to control humans’ development of a NASA space station on top of its colony. Before we get to that point – at which, hopefully, this issue will be solved by a brain-computer interface – we can get close to making the development of artificial general intelligence (AGI) and weak ASI safe by steering AI research efforts toward solving the alignment problem, the control problem, and other problems in the field. This can be done with proper funding from the tech titans and governments.

“AI will be the new electricity. Electricity has changed every industry and AI will do the same but even more of an impact.” – Andrew Ng

“Machine learning and AI will empower and improve every business, every government organization, philanthropy, basically there is no institution in the world that cannot be improved by machine learning.” – Jeff Bezos

ANI (artificial narrow intelligence) and AGI (artificial general intelligence) by themselves have the potential to alleviate an incomprehensible amount of suffering and disease around the world, and in the next few decades the hammer of biotechnology and nanotechnology will likely come down to cure all diseases. If the trends in information technologies continue to accelerate, which they certainly will, then in the next decade or so an ASI will be developed. This God-like intelligence will migrate into space for resources and scale to an intragalactic size. To reiterate old news: to keep up with this new being, we are likely to connect our brains to it via brain-computer interface.

“The last time something so important like this has happened was maybe 4.2 billion-years-ago, when life was invented.” – Juergen Schmidhuber

Due to the independent assortment of chromosomes during meiosis, you had roughly a 1-in-70-trillion chance of being born. Now multiply that by the probability introduced by crossing over, a process with orders of magnitude more possible outcomes than 70 trillion. Then multiply that by the probability of random fertilization (the chance of your parents meeting and copulating). Then multiply whatever that number is by similar probabilities for all of our ancestors across hundreds of millions of years – ancestors that also survived asteroid impacts, plagues, famine, predators, and other perils. You may be feeling pretty lucky already, but on top of all that, science and technology are about to prevent and cure any disease we may come across, and we will see this new intelligence emerge in laboratories all around the world. Any linguistic description of how spectacularly LUCKY we are to be alive right now, experiencing this scientific revolution, will fall abysmally short of how lucky we truly are. AI experts, Transhumanists, Singularitarians, and all others who understand this revolution have an obligation to give every person an educated option to pursue, should they desire to take part in indefinite longevity, augmentation into superintelligence, and whatever lies beyond the Singularity 10-30 years from now.
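The “1 in 70 trillion” figure is easy to verify: it follows from independent assortment alone, before crossing over and random fertilization are even factored in.

```python
# Where the "1 in 70 trillion" figure comes from: independent assortment
# alone gives each parent 2^23 possible chromosome combinations per gamete
# (humans have 23 chromosome pairs), so a given couple could produce
# 2^23 * 2^23 genetically distinct offspring before even counting
# crossing over or the odds of the couple meeting at all.

combinations_per_parent = 2 ** 23               # ~8.4 million possible gametes
unique_offspring = combinations_per_parent ** 2

print(f"{unique_offspring:,}")  # 70,368,744,177,664 – about 70 trillion
```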

There are many other potential sources of existential threats: synthetic biology, nuclear war, the climate crisis, molecular nanotechnology, totalitarianism-enabling technologies, supervolcanoes, asteroids, biowarfare, reckless human modification, geoengineering, etc. A mistake in only one of these areas could cause our species to go extinct – which is the definition of an existential risk. Science created some of these existential risks, and only Science will prevent them. Philosophy, religion, complementary and alternative medicine, and any other proposed scientific demarcation will not solve these existential risks, nor the myriad of individual suffering and death that occurs daily. In light of these developments, Carl Sagan’s priceless wisdom has become even more palpable than before: “we have arranged a society based on Science and technology, in which no one understands anything about Science and technology, and this combustible mixture of ignorance and power sooner or later is going to blow up in our faces.” The best chance we have of surviving the next 30 years, and whatever lies beyond the Singularity, is by transitioning to a Science-Based Species. A Science-Based Species follows Dr. Steven Novella’s recent advocacy, which calls for a transition from Evidence-Based Medicine to Science-Based Medicine. Dr. Novella and his team understand that “the best method for determining which interventions and health products are safe and effective is, without question, good science.” Why claim this only for medicine? I propose a K-12 educational system that teaches the PROCESS of Science.
Only when the majority of ~8 billion people are scientifically literate, and when public reason is guided by non-controversial scientific results and methods, will we be capable of managing these scientific tools – tools that could take our jobs, cause incomprehensible levels of suffering, and kill us all; tools that are currently in our possession; and tools that continue to become more powerful and to democratize, dematerialize, and demonetize at an exponential rate. I cannot stress enough that ‘scientifically literate’ means that people are adept at utilizing the PROCESS of Science.

Bobby Ridge is the Secretary-Treasurer of the United States Transhumanist Party. Read more about him here.


Tegmark, M. (2015). Elon Musk donates $10M to keep AI beneficial.

Husain, A. (2018). Amir Husain: “The Sentient Machine: The Coming Age of Artificial Intelligence” | Talks at Google. Talks at Google.

Tegmark, M. (2017). Max Tegmark: “Life 3.0: Being Human in the Age of AI” | Talks at Google. Talks at Google.

Conn, A. (2015). Pentagon Seeks $12-$15 Billion for AI Weapons Research.

BAI 2017 Conference. (2017). Asilomar AI Principles.

Ng, A. (2017). Andrew Ng – The State of Artificial Intelligence. The Artificial Intelligence Channel.

Bezos, J. (2017). Gala2017: Jeff Bezos Fireside Chat. Internet Association.

Schmidhuber, J. (2017). True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo. TEDx Talks.

Kurzweil, R. (2001). The Law of Accelerating Returns. Kurzweil Accelerating Intelligence.


Sbmadmin. (2008). Announcing the Science-Based Medicine Blog.