
From Darwinian Greed to Altruistic Greed: the Strangest Period So Far in Our Planet’s History – Article by Sarah Lim

Sarah Lim

We are smack-dab in the middle of what might be the oddest period of our planet’s history thus far. The last 200 years have seen more rapid technological and scientific advancement than all the 3.5 billion prior years of life on Earth combined. And that technological progress is set to accelerate even further within our lifetimes. In the span of my grandmother’s life, humanity has put a man on the Moon, and now we’re having serious discussions about Moon bases and terraforming Mars to start a colony there. Within my own life thus far, I’ve gone from using a box-shaped dial-up computer in my kindergarten years to learning about the exponential progress made in quantum computing and the invention of a material that could potentially serve as a non-organic substrate into which human thoughts could be downloaded.

I think that John M. Smart is essentially correct in the theories he puts forth in his evolutionary-developmental (“EvoDevo”) transcension hypothesis. There seems to be a kind of biological Moore’s law that applies to human intelligence. If you chart the developments in human evolution from 200,000 years ago to the present, the jump from hunting and gathering to civilization occurred at an immensely fast rate. And the subsequent jump from pre-scientific civilization to the contemporary technological age has been the most astronomical one thus far. With that astronomical jump in humanity’s technological progress has come an incredible leap in humanity’s moral progress.

The irony of our strange epoch

One of the most ironic aspects of the current climate crisis that I like to point out is this: thank goodness it is happening now, and not in the 1500s. That seems like a rather ironic, even flippant, thing to say. But thank goodness that the two greatest existential threats to all sentient life on Earth, the existence of nuclear weapons of mass destruction (WMDs) and global warming, are occurring in the 21st century, a period in which democracy is the most common political model across the globe. Public protests such as those led by Extinction Rebellion and Greta Thunberg’s climate strike movement have proliferated worldwide. Can you imagine what would have happened if a climate catastrophe of this order had occurred a thousand years ago, when monarchies were the default political model? Can you imagine tyrannical monarchies across the globe, with kings and lords as the primary stakeholders in climate-destroying corporations? It doesn’t seem likely that Greta Thunberg and her ilk would have made much progress in pushing for a pro-climate-action zeitgeist under a regime where criticizing the reigning monarch automatically meant decapitation.

Furthermore, we’re extremely fortunate to be living in an era where science is advancing fast enough to pioneer carbon-capture technology and, more recently, geoengineering as a viable solution. To paraphrase Michio Kaku: the dinosaurs got wiped out by a meteor, but they didn’t have advanced technology that could detect and disintegrate meteors long before they enter the Earth’s orbit; that’s something current human beings can work on building. The same is true of the current scramble by climate engineers to churn out anti-pollution and temperature-lowering technologies.

How the technological pursuit of a post-scarcity world encourages altruism and egalitarianism

I often write about how the last 150 years of global society have seen an exponential jump in the perpetuation of universal human rights, and that’s because it’s nothing short of amazing. Most of the world’s major civilizations, which had politically and economically subjugated women, ethnic minorities, and the working class for the past 6,000 years, seemingly had a change of heart overnight. It’s no coincidence that the proliferation of universal civil rights and the criminalization of interpersonal violence against women and minorities coincided with the post-Industrial Revolution era. As resource scarcity has been drastically reduced in the contemporary technological era, so, too, has the Darwinian impetus towards the domination and subjugation of minority groups.

We have shifted from a violent Darwinian greed, in the form of the colonization of minority groups, to a kind of altruistic greed. Altruistic greed is characterized by an unabating desire for ever-higher qualities of life, but ones which can be made widely available to the masses. The clearest example of this is the advent of modern healthcare, beginning with the mass administration of vaccinations for diseases like polio. As Steven Pinker points out, infant mortality rates and deaths from childbirth have plummeted throughout the world in the last 50 years. Across the world, the proliferation of technological infrastructure has made public transport systems faster and safer than they ever were before. Altruistic greed is a major driving force for many in the transhumanist community. Most transhumanists are advocates of making radical life extension and cutting-edge medical therapies affordable and accessible to everyone. The fundamental driving principle behind transhumanism is that humanity can transcend its biological limitations through rapid technological advancement, but the benefits reaped must be made as accessible as possible.

A reason often cited by nihilists who say that we should accept human extinction is that human beings hold the glaring track record of being the most gut-wrenchingly cruel of all the species on Earth. This is empirically and philosophically indisputable. No other species shares a historical laundry list of genocide campaigns, slavery, rape, domestic abuse, and egregious socio-economic inequality on par with human beings.

But in the post-World War II era, something miraculous happened. We became kind and peaceful, and this impetus towards kindness and peace proliferated globally. After 10,000 years of treating women as the property of their husbands, it became possible for women to be voted into positions of power across the globe, and marital rape became criminalized in an increasing number of countries. After 10,000 years of holding corporal punishment as an essential part of child-rearing in nearly every human society, an increasing number of democracies have begun to enact child-abuse laws against striking children.

We still have a long way to go.

Sweatshop labor exploitation and the sex trafficking of women remain major human-rights issues today. But an increasing number of international law bodies and humanitarian groups are cracking down on them and fighting to eradicate them permanently. They are no longer seen as “business as usual” practices, essential parts of human society which shouldn’t cause anyone to bat an eye, despite the fact that slavery has been a staple institution of nearly every civilization for the last ten millennia.

There are, of course, many aspects of ethical progress in which human beings are still lagging sorely behind, besides human trafficking. Although wars are far less common and less glamorized than they were in millennia past, conflicts still rage on in the Congo, and dictatorial regimes still exist. Income inequality is now greater than at any other time in human history. Another of the great ironies of the contemporary technological era is that we now produce enough food to feed 10 billion people, yet 795 million people in the world still suffer from malnutrition. As much as 40% of all the food we produce is wasted.

The exploitation of animals and the thoughtless destruction of their habitats is one respect in which humanity has actually backslid in terms of ethical progress over the last 70 years. Since the Industrial Revolution and the explosion of the human population, humans have drastically reduced the Earth’s natural biomass, and one million species now face the threat of extinction due to human industrial activity.

Nevertheless, one hopes that Steven Pinker is essentially correct in his assessment of humanity’s rapid moral growth over the last 200 years. It is not necessarily the case that primates are inherently more predisposed to cruelty than all other species. Rape, infanticide, and the killing of rival males during mating season are common amongst many species of birds, reptiles, and mammals, as David Pearce points out. It’s just that human beings have the capacity to inflict exponential amounts of damage on other humans and animals because of our exceptional intelligence. Intelligence makes exploitation possible. Human intelligence has allowed us to exploit other human beings and sentient beings for millennia. But human intelligence is also what has enabled us to radically improve healthcare, longevity, and universal human rights across the globe.

The long history of suffering endured by sentient life on Earth is why the far-flung topic of technological resurrection is a major point of discussion amongst transhumanists. We believe that all sentient creatures which have endured considerable physical suffering, man-made or naturally inflicted, deserve a second shot at life in the name of humanitarian justice.

There’s still much room for progress.

At present we seem to be entering a bottleneck era where we might have to drastically reduce our currently excessive consumption of the Earth’s resources, in light of the current climate crisis. The good news is that a growing number of us are realizing the looming existential threat of climate change and doubling down on combating it, as I’d mentioned earlier. The even better news is that an increasing number of bioethicists, particularly in the transhumanist movement, are now touting a permanent solution to the worst of humanity’s selfish, overly aggressive monkey-brain impulses. This seems to be just in the nick of time, given that this coincides with an era where humanity has access to nuclear arms capable of obliterating all life on Earth with the press of a Big Red Button.

My biggest hope for humanity is not only that our exponential technological progress will persist, but that our ethical and altruistic progress will continue in tandem with it. We have reached a stage of technological development where the forces of nature have become almost entirely subjugated, and our own impetus towards aggression has become the single greatest existential threat. It could be that every sufficiently advanced alien civilization capable of exploiting all the natural resources on its home planet, or of inventing WMDs, is eventually forced to cognitively recondition itself towards pacifism and altruism.

There is an ongoing debate in the existential-risk movement about whether or not SETI or METI could be unintentionally endangering all life on Earth by attempting to make contact with alien civilizations several orders of magnitude more advanced than ours. The analogy commonly cited is how the first European explorers of the Americas massacred scores of indigenous tribespeople who didn’t have guns. But the opposite could also be true. It could be that once other alien civilizations achieve a post-scarcity global economy, the neurobiological Darwinian impetus to colonize less developed groups gets steadily replaced by an altruistic impetus to ensure the survival and flourishing of all sentient species on that planet. We can’t tell for sure until we meet another alien species. But on our part, we’ve yet to ride out the tidal wave of the strangest period of Earth’s history. As we take our next steps forward into a radically different phase of human civilization, we gain an ever greater ability to control our own development as a species. Here’s to Pinker’s hope that we’re going in the right direction, and will do our best to head that way indefinitely.

Sarah Lim is a fourth-year political science major at the National University of Singapore. She is a proud supporter of the transhumanist movement and aims to do her best to promote transhumanism and progress towards the Singularity.

In Support of “Unfit for the Future”: When the Vessel is Unfit for the Task – Article by Sarah Lim

Sarah Lim

This essay has been submitted for publication to the Journal of Posthuman Studies.

This essay is written in support of the ideas presented by Julian Savulescu and Ingmar Persson in their book Unfit for the Future: The Need for Moral Enhancement. I will argue that Savulescu and Persson’s arguments for moral bioenhancement should be given more serious consideration, on the grounds that moral bioenhancement will most likely be humanity’s best chance at ensuring its future ethical progress, since our achievements in rapid ethical progress thus far have been highly contingent on economic progress and an increasing quality of life. As a vehicle for ethical progress, this is becoming increasingly untenable as the world enters a new period of resource scarcity brought about by the ravages of climate change. This essay will also respond to some of the claims against human genetic enhancement, and transhumanism in general, made by critic John Gray. Finally, the concluding remarks of this essay will examine a possible long-term drawback to moral bioenhancement which has not yet been raised by Savulescu’s critics – namely, that genetically altering future human beings to be less aggressive could unintentionally result in their becoming complacent to the point of lacking self-preservation.

Maslow and Malthus

Ethical philosophers in Steven Pinker’s camp may argue that the consideration of moral bioenhancement is absurd, because moral education has apparently been sufficient to bring forth radical moral progress in terms of civil liberties in the 20th and 21st centuries. The 20th century ushered in never-before-seen progress in the civil rights granted to women, ethnic minorities, LGBT+ people, and the working class. As Pinker points out, crime rates have plummeted over the past 150 years, and so has the total number of wars being fought throughout the world. Savulescu admits that this is a valid point.

However, Savulescu’s main point of contention is that while overall rates of interpersonal violence and warfare are decreasing, rapid advancements in technology have exponentially increased the ability of rogue individual actors to inflict mass harm on others, to a greater extent than at any other point in human history. It takes just one lone Unabomber-type anarchist genetically engineering a strain of smallpox virus in a backyard laboratory to start a pandemic killing millions of innocent people, argues Savulescu. A statistic he frequently cites is that 1% of the overall human population are psychopaths. This means that there are approximately 77 million psychopaths alive today.

I would like to raise a further point in support of Savulescu’s argument. I would argue that the exceptional progress in ethics and civil rights that the developed world has witnessed in the last century has been the result of unprecedented levels of economic growth and vast improvements in the average quality of life. The life spans, health spans, and accessibility of food, medicine, and consumer goods seen in developed economies today would have been an unbelievable utopian dream as little as 250 years ago. One of X Prize Foundation chairman Peter Diamandis’s favorite quips is that our standard of living has increased so exponentially that the average lower-income American has a far higher quality of life than the wealthiest of robber barons did in the 19th century.

As Pinker himself points out, the first moral philosophies of the Axial Age arose when our ancestors finally became agriculturally productive enough to no longer worry about basic survival. Once they had roofs over their heads and sufficient grain stores, they could begin to wax lyrical about philosophy, the meaning of life, and the place of the individual in wider society. Arguably, the same correlation was strongly demonstrated in the post-World War II era in the developed economies of the world. Once the population’s basic needs are not just met, but they are also provided with access to higher education and a burgeoning variety of consumer goods, they’re much less likely to be in conflict with “out” groups over scarce resources. Similarly, incredible advancements in maternal healthcare and birth control played a major role in the socio-economic emancipation of women.

Our ethical progress being highly contingent on economic progress and quality of life should concern us for one major reason – climate change and the resource scarcity that follows it. The UN estimates that the world’s population will hit 9.8 billion by 2050. At the same time, food insecurity and water scarcity are going to become increasingly common. According to UNICEF, 1.3 million people in Madagascar are now at risk of malnutrition, due to food shortages caused by cyclones and droughts. There could be as many as 25 million more children worldwide suffering from climate-change-caused malnutrition by the middle of this century. This is on top of the 149 million malnourished children below 5 years old who, as of 2019, are already suffering from stunted growth.

This is the worst-case scenario that climate-change doomsayers and authors of dystopian fiction about civilizational collapse keep warning us of. There is a legitimate fear that a rapid dwindling of access to food, medical care, and clean water could lead currently progressive developed economies to descend back into pre-Enlightenment levels of barbarism. Looting and black markets for necessities could flourish, while riots break out over access to food and medical supplies. Ostensibly, worsening scarcity could encourage the proliferation of human trafficking, especially of women and girls from desperate families. The idea is often dismissed as wildly speculative alarmist screed by a considerable number of middle-income city dwellers living in developed nations. Food shortages caused by climate change have mostly affected sub-Saharan Africa and India, where they remain far out of sight and out of mind to most people in developed economies.

However, the World Bank estimates that 140 million people could become refugees by 2050 as a result of climate change. These populations will come predominantly from Africa, the Middle East, and South Asia, but it is likely that a significant percentage of them will seek asylum in Europe and America. And developed Western economies will only be spared from the worst effects of climate change for so long. North Carolina has already been afflicted by severe flooding caused by Hurricane Florence in 2018, just as it was affected by Hurricane Matthew, which had struck two years earlier. Climate journalist David Wallace-Wells has gone so far as to claim that a four-degree increase in global temperature by 2100 could result in resource scarcity so severe that it would effectively double the number of wars we see in the world today.

Savulescu argues that the fact that we’ve already let climate change and global income inequality get this bad is itself proof that we’re naturally hardwired towards selfishness and short-term goals.

A Response to John Gray

As one of the most well-known critics of transhumanism, John Gray has said that it is naive to dream that humanity’s future will somehow be dramatically safer, more humane, and more rational than its past. Gray claims that humanity’s pursuit of moral progress will ultimately never see true fruition, because our proclivities towards irrationality and self-preservation will inevitably override our utopian goals in the long run. Gray cites the example of torture, which was formally banned in various treaties across Europe during the 20th century. This hasn’t stopped the US from torturing prisoners of war with all sorts of brutal methods in Afghanistan and Iraq. Gray claims that this is proof that moral progress can be rolled back just as easily as it is made. Justin E. H. Smith makes similar arguments about the inherent, biologically-influenced cognitive limits of human rational thinking, although he does not explicitly criticise transhumanism itself. Savulescu agrees with this assessment of human nature. Throughout their argument, both Savulescu and Persson hammer home the assertion that humans have a much greater predilection towards violence than altruism.

But here Gray is making a major assumption – that future generations of human beings will continue to have the same genetically-predisposed psychology and cognitive capabilities as we currently do. Over millennia, we have been trying to adapt humanity to a task that evolution did not predispose us towards. We’ve effectively been trying to carry water from a well using a colander. We might try to stop the water from leaking out of the colander as best we can by cupping its sides and bottom with our bare palms, but Savulescu is proposing a radically different solution: that we should re-model the colander into a proper soup bowl.

It seems that Gray is overlooking some of the circular reasoning he uses to perpetuate his arguments against transhumanist principles and genetic enhancement. He argues that humanity will never truly be able to overcome our worst proclivities towards violence and selfishness. Yet he simultaneously argues that endeavoring to enhance our cognitive capabilities and our dispositions towards rationality and altruism is a lost cause. Following Gray’s line of reasoning effectively keeps humanity stuck in a catch-22, damned if we do and damned if we don’t. Gray is telling us that we must resign ourselves to never having a proper water-holding vessel, while simultaneously discouraging us from considering the possibility of going to a workshop to weld the holes in our colander shut.

Windows of Opportunity

There is one final reason for which I will argue for greater urgency in considering Savulescu’s proposal seriously. Namely, we currently have a very rare window of opportunity in which to execute it practically. If Gray is right that moral progress can be rolled back more easily than it is made, then he should acknowledge that we need to take full advantage of the current moral progress in developed economies while we still have the chance. Rapid advancements in CRISPR and gene-editing technologies are increasing the practical viability of moral bioenhancement without the consumption of neurotransmitter-altering drugs. Savulescu argues that we need to strike while the iron is hot, while the world economy is still relatively healthy and STEM fields are still receiving billions in funding for research and development.

If nothing else, a rather intellectually sparse appeal to novelty can be made in defence of Savulescu’s proposal. Given that climate change could be the greatest existential risk humanity has ever faced, we should begin considering more radical options to deal with its worst ravages. The limited faculties of rationality and altruism which nature has saddled us with have brought us millennia of warfare, genocide, radical inequality in resource distribution, and sexual violence. We keep saying “never again” after every cataclysmic man-made tragedy, but “again” keeps happening. Now is as good a time as ever to consider the possibility that humanity’s cognitive faculties are themselves fundamentally flawed and inadequate to cope with the seemingly insurmountable challenges that lie ahead of us.

A Possible Future Negative Consequence of Moral Bioenhancement to be Considered

Multiple objections to Savulescu’s proposal have been raised by authors such as Alexander Thomas and Rebecca Bennett. I would like to raise another possible objection to moral bioenhancement, although I myself am a proponent of it. A possible unforeseen consequence of radically genetically reprogramming Homo sapiens to be significantly less selfish and less prone to aggression could be that this will simultaneously destroy our drive for self-improvement. One could argue that the only reason human beings have made it far enough to become the most technologically advanced and powerful species in our solar system is precisely our drive for self-preservation and our insatiable desire for an ever-increasing quality of life. You could claim that if we had remained content to be hunter-gatherers, we would never have reached the level of civilization we’re at now. It’s more likely that we would have gone extinct on the savannah like our fellow hominids.

Our inability to be satisfied with the naturally-determined status quo is the very reason the transhumanist movement itself exists. What happens, then, if we genetically re-dispose Homo sapiens to become more selfless and less aggressive? Could this policy ironically backfire and create future generations of human beings who become complacent about technological progress and self-improvement? Furthermore, what happens if these future generations of morally bioenhanced human beings face new existential threats which require them to act urgently? What happens if they face an asteroid collision or a potential extraterrestrial invasion (although the latter seems far less likely)? We don’t want to end up genetically engineering future generations of human beings who are so devoid of self-preservation that they accept extinction as an outcome to which they should peacefully resign themselves. And if human beings become a space-faring species and end up making contact with a highly advanced imperialist alien species bent on galaxy-wide colonization, our future generations will have to take up arms in self-defence.

This raises the question of whether it might be possible to simultaneously increase the human propensity towards altruism and non-violence towards other human beings, while still preserving the human predisposition towards ensuring our overall survival and well-being. If such a radical re-programming of humanity’s cognitive disposition is possible, it’s going to be a very delicate balancing act. This major shortcoming is one that proponents of moral bioenhancement have not yet formulated a plausible safety net for. Techno-utopian advocates claim that we could one day create a powerful artificial intelligence programme that will indefinitely protect humanity against unforeseen attacks from extraterrestrials or possible natural catastrophes. More serious discussion needs to be devoted to finding possible ways to make moral bioenhancement as realistically viable as possible.


The arguments put forth by Savulescu in Unfit for the Future should be reviewed with greater urgency and thoughtful consideration, and this essay has argued in favour of this appeal. We cannot take for granted the great strides in civil rights made in the last 100 years, which have been heavily dependent on economic development and the growth of the capitalist world economy. As resource scarcity brought about by climate change looms on the near horizon, the very system upon which the 20th and 21st centuries’ great ethical progress has been contingent threatens to crumble. Gray is right in arguing that the human animal is fundamentally flawed and that repeated historical attempts at better models of moral systems have failed to truly reform humanity. And this is where Savulescu proposes a controversial answer to Gray’s resignation to humanity’s impending self-destruction: we must consider reforming the human animal itself. As the field of gene-editing and the development of impulse-controlling neurotransmitter drugs continue to show great promise, world governments and private institutions should begin to view these as viable options for creating a less short-sighted, less aggressive, and more rational Homo sapiens 2.0. There are only so many more global-scale man-made catastrophes that mankind can inflict upon itself and the planet before this radical proposal is finally undertaken as a last resort.

Sarah Lim is a fourth-year political science major at the National University of Singapore. She is a proud supporter of the transhumanist movement and aims to do her best to promote transhumanism and progress towards the Singularity.

The Singularity: Fact or Fiction or Somewhere In-Between? – Article by Gareth John

Gareth John

Editor’s Note: The U.S. Transhumanist Party features this article by our member Gareth John, originally published by IEET on January 13, 2016, as part of our ongoing integration with the Transhuman Party. This article raises various perspectives about the idea of technological Singularity and asks readers to offer their perspectives regarding how plausible the Singularity narrative, especially as articulated by Ray Kurzweil, is. The U.S. Transhumanist Party welcomes such deliberations and assessments of where technological progress may be taking our species and how rapid such progress might be – as well as how subject to human influence and socio-cultural factors technological progress is, and whether a technological Singularity would be characterized predominantly by benefits or by risks to humankind. The article by Mr. John is a valuable contribution to the consideration of such important questions.

~ Gennady Stolyarov II, Chairman, United States Transhumanist Party, January 2, 2019

In my continued striving to disprove the theorem that there’s no such thing as a stupid question, I shall now proceed to ask one. What’s the consensus on Ray Kurzweil’s position concerning the coming Singularity? [1] Do you as transhumanists accept his premise and timeline, or do you feel that a) it’s a fiction, or b) it’s a reality but not one that’s going to arrive anytime soon? Is it as inevitable as Kurzweil suggests, or is it simply millenarian daydreaming in line with the coming Rapture?

According to Wikipedia (yes, I know, but I’m learning as I go along), the first use of the term ‘singularity’ in this context was made by Stanislaw Ulam in his 1958 obituary for John von Neumann, in which he mentioned a conversation with von Neumann about the ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’. [2] The term was popularised by mathematician, computer scientist, and science-fiction author Vernor Vinge, who argues that artificial intelligence, human biological advancement, or brain-computer interfaces could be possible causes of the singularity. [3] Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain. [4]

Kurzweil predicts the singularity to occur around 2045 [5] whereas Vinge predicts some time before 2030 [6]. In 2012, Stuart Armstrong and Kaj Sotala published a study of AGI predictions by both experts and non-experts and found a wide range of predicted dates, with a median value of 2040. [7] Discussing the level of uncertainty in AGI estimates, Armstrong stated at the 2012 Singularity Summit: ‘It’s not fully formalized, but my current 80% estimate is something like five to 100 years.’ [8]

Speaking for myself, and despite the above, I’m not at all convinced that a Singularity will occur, i.e. one singular event that effectively changes history forever from that precise moment forward. From my (admittedly limited) research on the matter, it seems far more realistic to think of the future in terms of incremental steps made along the way, leading up to major, diverse changes (plural) in the way we as human beings – and indeed all sentient life – live. But try as I might, I cannot get my head around all of these occurring in a near-simultaneous Big Bang.

Surely we already have plenty of evidence that the opposite will most likely be the case? Scientists have been working on AI, nanotechnology, genetic engineering, robotics, and so on for many years, and I see no reason to conclude that this won’t remain the case in the years to come. Small steps leading to big changes, maybe, but perhaps not one giant leap for mankind in a singular convergence of emerging technologies?

Let’s be straight here: I’m not having a go at Kurzweil or his ideas – the man’s clearly a visionary (at least from my standpoint) and leagues ahead when it comes to intelligence and foresight. I’m simply interested as to what extent his ideas are accepted by the wider transhumanist movement.

There are notable critics (again leagues ahead of me in critically engaging with the subject) who argue against the idea of the Singularity. Nathan Pensky, writing in 2014, says:

It’s no doubt true that the speculative inquiry that informed Kurzweil’s creation of the Singularity also informed his prodigious accomplishment in the invention of new tech. But just because a guy is smart doesn’t mean he’s always right. The Singularity makes for great science-fiction, but not much else. [9]

Other well-informed critics have also dismissed Kurzweil’s central premise, among them Professor Andrew Blake, managing director of Microsoft Research Cambridge, Jaron Lanier, Paul Allen, Peter Murray, Jeff Hawkins, Gordon Moore, Jared Diamond, and Steven Pinker, to name but a few. Even Noam Chomsky has waded in to categorically deny the possibility of such an event. Pinker writes:

There is not the slightest reason to believe in the coming singularity. The fact you can visualise a future in your imagination is not evidence that it is likely or even possible… Sheer processing is not a pixie dust that magically solves all your problems. [10]

There are, of course, many more critics, but then there are also many supporters, and Kurzweil rarely lets a criticism pass without a fierce rebuttal. Indeed, new interdisciplinary academic fields have been founded in part on the presupposition of the Singularity occurring in line with Kurzweil’s predictions (along with other phenomena that pose the possibility of existential risk). Examples include Nick Bostrom’s Future of Humanity Institute at Oxford University and the Centre for the Study of Existential Risk at Cambridge.

Given the above and returning to my original question: how do transhumanists, taken as a whole, rate the possibility of an imminent Singularity as described by Kurzweil? Good science or good science fiction? For Kurzweil it is the pace of change – exponential growth – that will result in a runaway effect – an intelligence explosion – where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence. [11] The only way for us to participate in such an event would be by merging with the intelligent machines we are creating.

And I guess this is what is hard for me to fathom. We are creating these machines with all our mixed-up, blinkered, prejudicial, oppositional minds, aims, and values. We as human beings, however intelligent, are an absolutely necessary part of the picture that I think Kurzweil sometimes underestimates. I’m more inclined to agree with Jamais Cascio when he says:

I don’t think that a Singularity would be visible to those going through one. Even the most disruptive changes are not universally or immediately distributed, and late followers learn from the dilemmas of those who had initially encountered the disruptive change. [12]

So I’d love to know what you think. Are you in Kurzweil’s corner, waiting for that singular moment in 2045 when the world as we know it stops for an instant… and then restarts in a glorious new utopian future? Or do you agree with Kurzweil but harbour serious fears that the whole ‘glorious new future’ may not be on the cards, and that we’ll all be obliterated by the newborn AGI’s capriciousness or gray goo? Or are you a moderate, maintaining that a Singularity, while almost certain to occur, will pass unnoticed by those waiting? Or do you think it’s so much baloney?

Whatever your position, I’d really value your input and would love to hear your views on the subject.


1. As stated below, the term Singularity was in use before Kurzweil’s appropriation of it. But as shorthand I’ll refer to his interpretation and predictions relating to it throughout this article.

2. Carvalko, J, 2012, ‘The Techno-human Shell-A Jump in the Evolutionary Gap.’ (Mechanicsburg: Sunbury Press)

3. Ulam, S, 1958, ‘Tribute to John von Neumann’, Bulletin of the American Mathematical Society, 64, #3, part 2, p. 5

4. Vinge, V, 2013, ‘Vernor Vinge on the Singularity’, San Diego State University. Retrieved Nov 2015

5. Kurzweil, R, 2005, ‘The Singularity is Near’ (London: Penguin Group)

6. Vinge, V, 1993, ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, originally in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129

7. Armstrong S and Sotala, K, 2012 ‘How We’re Predicting AI – Or Failing To’, in Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster (Pilsen: University of West Bohemia)

8. Armstrong, S, ‘How We’re Predicting AI’, from the 2012 Singularity Conference

9. Pensky, N, 2014, article taken from Pando.

10. Pinker, S, 2008, IEEE Spectrum: ‘Tech Luminaries Address Singularity’

11. Wikipedia, ‘Technological Singularity’. Retrieved Nov 2015.

12. Cascio, J, ‘New FC: Singularity Scenarios’ article taken from Open the Future.

Gareth John lives in Cardiff, UK and is a trainee social researcher with an interest in the intersection of emerging technologies with behavioural and mental health. He has an MA in Buddhist Studies from the University of Bristol. He is also a member of the U.S. Transhumanist Party / Transhuman Party. 



Thank you for the thoughtful article. I’m emailing to comment on the blog post, though I can’t tell when it was written. You say that you don’t believe the singularity will necessarily occur the way Kurzweil envisions, but it seems like you slightly mischaracterize his definition of the term.

I don’t believe that Kurzweil ever meant to suggest that the singularity will simply consist of one single event that will change everything. Rather, I believe he means that the singularity is the point in time beyond which no person can make any prediction: the moment when a $1,000 computer becomes smarter than the entire human race, much like how the event horizon of a black hole prevents anyone from seeing past it.

Given that Kurzweil’s definition isn’t an arbitrary claim that everything changes all at once, I don’t see how anyone can really argue with whether the singularity will happen. After all, at some point in the future, even if it happens much slower than Kurzweil predicts, a $1,000 computer will eventually become smarter than every human. When this happens, I think it’s fair to say no one is capable of predicting the future of humanity past that point. Would you disagree with this?

Even more important is that although many of Kurzweil’s predictions about when certain products will become commercially available to the general public have proven untrue, all the evidence I’ve seen about the actual trend of the law of accelerating returns seems to be exactly spot on. Maybe this trend will slow down, or stop, but it hasn’t yet. Until it does, I think the law of accelerating returns, and Kurzweil’s singularity, deserve the benefit of the doubt.



Rich Casada

Hi Rich,
Thanks for the comments. The post was written back in 2015 for IEET, and represented a genuine ask from the transhumanist community. At that time my priority was to learn what I could, where I could, and not a lot’s changed for me since – I’m still learning!

I’m not sure I agree that Kurzweil’s definition isn’t a claim that ‘everything changes at once’. In The Singularity is Near, he states:

“So we will be producing about 10^26 to 10^29 cps of nonbiological computation per year in the early 2030s. This is roughly equal to our estimate for the capacity of all living biological human intelligence … This state of computation in the early 2030s will not represent the Singularity, however, because it does not yet correspond to a profound expansion of our intelligence. By the mid-2040s, however, that one thousand dollars’ worth of computation will be equal to 10^26 cps, so the intelligence created per year (at a total cost of about $10^12) will be about one billion times more powerful than all human intelligence today. That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.” (Kurzweil 2005, pp. 135-36, italics mine)

Kurzweil specifically defines what the Singularity is and isn’t (a profound and disruptive transformation in human capability) and gives a more-or-less precise prediction of when it will occur. A consequence of that may be that we will not ‘be able to make any prediction past that point in time’; however, I don’t believe this is the main thrust of Kurzweil’s argument.

I do, however, agree with what you appear to be postulating (correct me if I’m wrong) in that a better definition of a Singularity might indeed simply be ‘when no person can make any prediction past that point in time.’ And, like you, I don’t believe it will be tied to any set point in time. We may be living through a singularity as we speak. There may be many singularities (although, worth noting again, Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence as opposed to other technologies, writing, for example, that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine.” (Kurzweil 2005, p. 9)).

So, having said all that, and in answer to your question of whether there is a point beyond which no one is capable of predicting the future of humanity: I’m not sure. I guess none of us can really be sure until, or unless, it happens.

This is why I believe having the conversation about the ethical implications of these new technologies now is so important. Post-singularity might simply be too late.