
James Hughes’ Problems of Transhumanism: A Review (Part 5) – Article by Ojochogwu Abdul



Ojochogwu Abdul


Part 1 | Part 2 | Part 3 | Part 4 | Part 5

Part 5: Belief in Progress vs. Rational Uncertainty

The Enlightenment, with its confident efforts to fashion a science of man, was archetypal of the belief and quest that humankind will eventually achieve lasting peace and happiness. In what some interpret as a reformulation of Christianity’s teleological salvation history, in which the People of God will be redeemed at the end of days and the Kingdom of Heaven established on Earth, most Enlightenment thinkers believed in the inevitability of human political and technological progress, secularizing the Christian conception of history and eschatology into a conviction that humanity would, using a system of thought built on reason and science, be able to continually improve itself. As portrayed by Carl Becker in his 1933 book The Heavenly City of the Eighteenth-Century Philosophers, the Philosophes “demolished the Heavenly City of St. Augustine only to rebuild it with more up-to-date materials.” Whether this Enlightenment humanist view of “progress” amounted merely to a recapitulation of the Christian teleological vision of history, or whether Enlightenment beliefs in “continual, linear political, intellectual, and material improvement” reflected, as James Hughes posits, “a clear difference from the dominant Christian historical narrative in which little would change until the End Times and Christ’s return”, the notion, in any case, of collective progress towards a definitive end-point remained unsupported by the scientific worldview. The scientific worldview, as Hughes reminds us in the opening paragraph of this essay in his series, does not support historical inevitability, only uncertainty. “We may annihilate ourselves or regress,” he says, and “Even the normative judgment of what progress is, and whether we have made any, is open to empirical skepticism.”

Here we are introduced to a conflict that has existed, at least since the Enlightenment, between progressive optimism and radical uncertainty. Building on the Enlightenment’s faith in the inevitability of political and scientific progress, the idea of an end-point, a salvation moment for humankind, fuelled all the great Enlightenment ideologies that followed, flowing down, as Hughes traces, through Comte’s “positivism” and Marxist theories of historical determinism to neoconservative triumphalism about the “end of history” in democratic capitalism. Communists envisaged that end-point as a post-capitalist utopia that would finally resolve the class struggle, which they conceived as the true engine of history. This vision also contained the 20th-century project to build the Soviet Man, one of extra-human capacities, for as Trotsky had predicted, after the Revolution, “the average human type will rise to the heights of an Aristotle, a Goethe, or a Marx. And above this ridge new peaks will rise.” For 20th-century free-market liberals, by contrast, this End of History had arrived with the final triumph of liberal democracy, with the entire world bound to be swept along in its course. Events, though, especially so far in the 21st century, appear to prove this view wrong.

This belief in the historical inevitability of progress, moreover, as Hughes convincingly argues, has always been locked in conflict with “the rationalist, scientific observation that humanity could regress or disappear altogether.” Enlightenment pessimism, or at least realism, has over the centuries proven a stubborn check on Enlightenment optimism. Hughes, citing Henry Vyverberg, reminds us that there were, after all, French Enlightenment thinkers within that same era who rejected the belief in linear historical progress, proposing instead historical cycles or even decadence. That aside, contemporary commentators like John Gray would even argue that the Enlightenment’s own efforts in the quest for progress unfortunately issued in, for example, the racist pseudo-science of Voltaire and Hume, while endeavours to establish the rule of reason have resulted in bloody fanaticisms, from Jacobinism to Bolshevism, which equaled the worst atrocities attributable to religious believers. Horrors like racism and anti-Semitism, in the verdict of Gray, “…are not incidental defects in Enlightenment thinking. They flow from some of the Enlightenment’s central beliefs.”

Even Darwinism’s theory of natural selection was, according to Hughes, “suborned by the progressive optimistic thinking of the Enlightenment and its successors to the doctrine of inevitable progress, aided in part by Darwin’s own teleological interpretation.” The problem, however, is that from the scientific worldview the theory of natural selection provides no support for “progress”, only the recognition that humanity, as Hughes plainly states, “like all creatures, is on a random walk through a mine field, that human intelligence is only an accident, and that we could easily go extinct as many species have done.” Gray, for example, rebukes Darwin, who wrote: “As natural selection works solely by and for the good of each being, all corporeal and mental endowments will tend to progress towards perfection.” Natural selection, however, does not work solely for the good of each being, a fact Darwin himself elsewhere acknowledged. Nonetheless, it has continually proven difficult for people to resist the impulse to identify evolution with progress, and equally difficult to resist the further temptation to invoke evolution in rationalizing views as dangerous as Social Darwinism and acts as horrible as eugenics.

Many skeptics therefore hold, rationally, that scientific utopias and promises to transform the human condition deserve the deepest suspicion. Reason is but a frail reed; all moral and political progress is and will always remain subject to reversal; and civilization could eventually simply collapse. Historical events and experiences have thus caused faith in the inevitability of progress to wax and wane over time. Hughes notes that among several Millenarian movements and New Age beliefs, such faith that the world is headed for a millennial age can still be found, just as it exists in techno-optimist futurism. Nevertheless, he makes us see that “since the rise and fall of fascism and communism, and the mounting evidence of the dangers and unintended consequences of technology, there are few groups that still hold fast to an Enlightenment belief in the inevitability of conjoined scientific and political progress.” Within the transhumanist community, however, many still hold such faith in progress, marking a continuation of the Enlightenment-bequeathed conflict, now manifested as transhumanist optimism set against views of future uncertainty.

As on several occasions in the past, humanity is currently being spun yet another “End of History” narrative: one of a posthuman future. Yuval Harari, for instance, in Homo Deus argues that emerging technologies and new scientific discoveries are undermining the foundations of Enlightenment humanism, although as he proceeds with his presentation he also proves unable to avoid one of the defining tropes of Enlightenment humanist thinking, i.e., the deeply entrenched tendency to conceive human history in teleological terms: fundamentally as a matter of collective progress towards a definitive end-point. This time, though, our era’s “End of History” glorious “salvation moment” is to be ushered in not by a politico-economic system but by a nascent techno-elite based in Silicon Valley, USA, a cluster steeped in a predominant tech-utopianism which has at its core the idea that the new technologies emerging there can steer humanity towards a definitive break-point in our history, the Singularity. Believers in this coming Singularity among transhumanists, having inherited the tension between Enlightenment conviction in the inevitability of progress and, in Hughes’ words, “Enlightenment’s scientific, rational realism that human progress or even civilization may fail”, now struggle with a renewed contradiction. And here the contrast Hughes intends to portray gains sharpness, for transhumanists today are “torn between their Enlightenment faith in inevitable progress toward posthuman transcension and utopian Singularities” on the one hand, and, on the other, their “rational awareness of the possibility that each new technology may have as many risks as benefits and that humanity may not have a future.”

The risks of new technologies, even if not necessarily ones that threaten the survival of humanity as a species, may yet have an undesirable impact on the mode and trajectory of our civilization. Henry Kissinger, in his 2018 article “How the Enlightenment Ends”, expressed his perception that technology, which is rooted in Enlightenment thought, is now superseding the very philosophy that is its fundamental principle. The universal values proposed by the Enlightenment philosophes, as Kissinger points out, could be spread worldwide only through modern technology, but at the same time such technology has ended, or accomplished, the Enlightenment and is now going its own way, creating the need for a new guiding philosophy. Kissinger argues specifically that AI may spell the end of the Enlightenment itself, issuing grave warnings about the consequences of an AI-led technological revolution whose “culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.” By way of analogy to how the printing press allowed the Age of Reason to supplant the Age of Religion, he buttresses his proposal that the modern counterpart of this revolutionary process is the rise of intelligent AI that will supersede human ability and put an end to the Enlightenment. Kissinger further outlines three areas of concern regarding the trajectory of artificial intelligence research: AI may achieve unintended results; in achieving intended goals, AI may change human thought processes and human values; and AI may reach intended goals but be unable to explain the rationale for its conclusions. Kissinger’s thesis, of course, has attracted both support and criticism from different quarters.
Reacting to Kissinger, Yuk Hui, for example, in “What Begins After the End of the Enlightenment?” maintained that “Kissinger is wrong—the Enlightenment has not ended.” Rather, “modern technology—the support structure of Enlightenment philosophy—has become its own philosophy”, with the universalizing force of technology becoming itself the political project of the Enlightenment.

Transhumanists, as mentioned already, reflect the continuity of some of these contradictions between belief in progress and uncertainty about the human future. Hughes shows us nonetheless that some interesting historical turns suggest further directions this mood has taken. In the 1990s, Hughes recalls, “transhumanists were full of exuberant Enlightenment optimism about unending progress.” As an example, Hughes cites Max More’s 1998 Extropian Principles, which defined “Perpetual Progress” as “the first precept of their brand of transhumanism.” Over time, however, More himself has had cause to temper this optimism, recasting this driving principle as one of “desirability”, more a normative goal than a faith in historical inevitability. “History”, More would say in 2002, “since the Enlightenment makes me wary of all arguments to inevitability…”

Rational uncertainty hence makes many transhumanists refrain from arguing for the inevitability of transhumanism as a matter of progress. Further, there are indeed several possible factors which could keep the transhumanist idea and drive for “progress” from translating into reality: a neo-Luddite revolution, a turn towards and rising preference for rural life, mass disenchantment with technological addiction and a growing option for digital detox, nostalgia, disillusionment with modern civilization and a “return-to-innocence” counter-cultural movement, neo-Romanticism, a pop-culture allure and longing for a Tolkien-esque world, cyclical thinking, conservatism, traditionalism, and so on. The alternative, backlash, and antagonistic forces are myriad. Even within transhumanism, the anti-democratic and socially conservative Neoreactionary movement, with its rejection of the view that history shows inevitable progression towards greater liberty and enlightenment, is gradually (and rather disturbingly) growing a contingent. As another point for rational uncertainty, Hughes discusses the three critiques (futurological, historical, and anthropological) that Philippe Verdoux offers of transhumanist and Enlightenment faith in progress, in which the anthropological argument holds that “pre-moderns were probably as happy or happier than we moderns.” After all, Rousseau, himself a French Enlightenment thinker, “is generally seen as having believed in the superiority of the ‘savage’ over the civilized.” Perspectives like these could stir anti-modern, anti-progress sentiments in people’s hearts and minds.

Demonstrating further why transhumanists must not be obstinate about the idea of inevitability, Hughes refers to Greg Burch’s 2001 work “Progress, Counter-Progress, and Counter-Counter-Progress”, in which Burch expounded on the Enlightenment and transhumanist commitment to progress as a commitment “to a political program, fully cognizant that there are many powerful enemies of progress and that victory was not inevitable.” Moreover, failure to realize the goals of progress might not even result from the actions of “enemies” in the antagonistic sense of the word. There is also the scenario, depicted in the 2006 movie Idiocracy, of a future dystopian society based on dysgenics, one in which, extrapolating expectations and trends of the 21st century, the most intelligent humans reproduce least and eventually fail to have children while the least intelligent reproduce prolifically. Through this process of selection, generations collectively become dumber and more virile with each passing century, leading to a future world plagued by anti-intellectualism, bereft of intellectual curiosity, social responsibility, and coherent notions of justice and human rights, and manifesting several other traits of cultural degeneration. This remains a possibility for our future world.

So while for many extropians and transhumanists perpetual progress was an unstoppable train, in response to which “one either got on board for transcension or consigned oneself to the graveyard”, other transhumanists, Hughes comments, especially in response to certain historical experiences (the 2000 dot-com crash, for example), have seen reason increasingly to temper their expectations about progress. In Hughes’s appraisal, while some transhumanists “still press for technological innovation on all fronts and oppose all regulation, others are focusing on reducing the civilization-ending potentials of asteroid strikes, genetic engineering, artificial intelligence and nanotechnology.” Some realism is hence needed to keep in constant check the excesses of contemporary secular technomillennialism as contained in some transhumanist strains.

Hughes presents Nick Bostrom’s 2001 essay “Analyzing Human Extinction Scenarios and Related Hazards” as one influential example of this anti-millennial realism, a text in which Bostrom, after outlining scenarios that could either end the existence of the human species or have us evolve into dead-ends, addresses not just how we can avoid extinction and ensure that there are descendants of humanity, but also how we can ensure that we will be proud to claim them. Bostrom has subsequently produced work on catastrophic risk estimation at the Future of Humanity Institute at Oxford. Hughes seems to favour this approach, for he makes sure to indicate that it has also been adopted as a programmatic focus for the Institute for Ethics and Emerging Technologies (IEET), which he directs, as well as for the transhumanist non-profit the Lifeboat Foundation. Transhumanists who listen to Bostrom, we may deduce from Hughes, are being urged to take a more critical approach to technological progress.

With this more cautious attitude available, a new tension, Hughes reports, now plays out between eschatological certainty and pessimistic risk assessment, mainly in the debate over the Singularity. For the likes of Ray Kurzweil (2005), representing the camp of technomillennial, eschatological certainty, the pattern of accelerating trendlines towards a utopian merger of enhanced humanity and godlike artificial intelligence is unstoppable, a claim Kurzweil supports by pointing to the steady exponential march of technological progress through (and despite) wars and depressions. Dystopian and apocalyptic predictions of how humanity might fare under superintelligent machines (extinction, inferiority, and the like) are, in Hughes’ assessment, only minimally entertained by Kurzweil, since for the techno-prophet we are bound eventually to integrate with these machines into apotheosis.

The IEET, as a platform, has thus taken on the responsibility of serving as a site for teasing out this tension between technoprogressive “optimism of the will and pessimism of the intellect”, as Hughes echoes Antonio Gramsci. On the one hand, Hughes explains, “we have championed the possibility of, and evidence of, human progress. By adopting the term “technoprogressivism” as our outlook, we have placed ourselves on the side of Enlightenment political and technological progress.” And yet on the other hand, he continues, “we have promoted technoprogressivism precisely in order to critique uncritical techno-libertarian and futurist ideas about the inevitability of progress. We have consistently emphasized the negative effects that unregulated, unaccountable, and inequitably distributed technological development could have on society” (one feels tempted to call out Landian accelerationism at this point). Technoprogressivism, the guiding philosophy of the IEET, insists that technological progress must be consistently conjoined with, and dependent on, political progress, whilst recognizing that neither is inevitable.

In charting the essay towards a close, Hughes mentions a number of his own and other IEET-led technoprogressive publications, among them Verdoux’s, who, despite his futurological, historical, and anthropological critique of transhumanism, goes on to argue for transhumanism on moral grounds (free from the language of “Marxism’s historical inevitabilism or utopianism, and cautious of the tragic history of communism”), and “as a less dangerous course than any attempt at “relinquishing” technological development, but only after the naive faith in progress has been set aside.” Unfortunately, however, the “rational capitulationism” to the transhumanist future that Verdoux offers is, according to Hughes, “not something that stirs men’s souls.” Hughes hence, while admitting our need “to embrace these critical, pessimistic voices and perspectives”, calls on us likewise to heed the need to “also re-discover our capacity for vision and hope.” This optimism that humans “can” collectively exercise foresight and invention, and peacefully deliberate our way to a better future, rather than yielding to narratives that would lead us into the traps of utopian and apocalyptic fatalism, has been one of the motivations behind the creation of the “technoprogressive” brand. The brand, Hughes explains, has helped to distinguish “Enlightenment optimism about the “possibility” of human political, technological and moral progress from millennialist techno-utopian inevitabilism.”

Presumably upon this technoprogressive philosophy, the new version of the Transhumanist Declaration, adopted by Humanity+ in 2009, shifted away from some of the language of the 1998 version, conveying a more reflective, critical, realistic, utilitarian, “proceed with caution” and “act with wisdom” tone with respect to the transhumanist vision for humanity’s progress. Though relatively sober, this version of the Declaration remains equally inspiring. Hughes closes the essay with a reminder of our need to stay aware of the diverse ways in which our indifferent universe threatens our existence, of how our growing powers come with unintended consequences, and of why mindfulness in all our actions remains the best approach for navigating towards progress in our radically uncertain future.

In conclusion, following Hughes’ objectives in this series, it can be suggested that more studies of the Enlightenment (European and global) are desirable, especially for their potential to furnish us with a richer understanding of a number of problems within contemporary transhumanism that sprout from its roots deep in the Enlightenment. Interest and scholarship in Enlightenment studies, fortunately, seem to be undergoing a revival, and with increasing diversity of perspective, presenting transhumanism with a variety of paths through which to explore and gain context for connected issues. Insight into the foundations of transhumanism’s problems could be sought, among other paths, through an examination of internal contradictions within the Enlightenment, as in Max Horkheimer and Theodor Adorno’s “Dialectic of Enlightenment”; through an assessment of opponents of the Enlightenment, as found, for example, in Isaiah Berlin’s notion of the “Counter-Enlightenment”; through an investigation of a more radical strain of the Enlightenment, as presented in Jonathan Israel’s “Radical Enlightenment”; and through grappling with the relationships between transhumanism and the other heirs of both the Enlightenment and the Counter-Enlightenment today. Again, and significantly, serious attention must be paid, now and going forward, to guarding transhumanism against ultimately falling into the hands of the Dark Enlightenment.


Ojochogwu Abdul is the founder of the Transhumanist Enlightenment Café (TEC), is the co-founder of the Enlightenment Transhumanist Forum of Nigeria (H+ Nigeria), and currently serves as a Foreign Ambassador for the U.S. Transhumanist Party in Nigeria. 

James Hughes’ Problems of Transhumanism: A Review (Part 2) – Article by Ojochogwu Abdul



Ojochogwu Abdul


Part 1 | Part 2 | Part 3 | Part 4 | Part 5

Part 2: Deism, Atheism and Natural Theology

“The dominant trajectory of Enlightenment thought over the last three hundred years has been towards atheism. Most transhumanists are atheists. But some transhumanists, like many of the original Enlightenment thinkers, are attempting to reconcile naturalism and their religious traditions. Some transhumanists even believe that the transcendent potentials of intelligence argue for a new form of scientific theology.” (James Hughes, 2010)

The Enlightenment was the age of the triumph of science (Newton, Leibniz, Bacon) and of philosophy (Descartes, Locke, Spinoza, Kant, Voltaire, Diderot, Montesquieu). Unlike the Renaissance philosophers, the Enlightenment thinkers ceased searching for validation in the texts of the Greco-Roman philosophers, grounding their work instead more solidly in rationalism and empiricism. Religious tolerance, and skepticism about superstition and Biblical literalism, were also central themes of the Enlightenment. Most of the Enlightenment philosophers of the 17th through the 19th centuries, however, were theists of some sort who, in general, were attempting to reconcile belief in God with rational skepticism and naturalism. There were, of course, atheists among them as well as devout Christians, but if there was a common theological stance and belief about the divine among Enlightenment philosophers, it was probably Deism: a worldview consisting in the rejection of blind faith and organized religion, an advocacy of the discovery of religious truth through reason and direct empirical observation, and a belief that divine intervention in human affairs stopped with the creation of the world.

Deism, as James Hughes accounts, declined in the nineteenth century, gradually replaced by atheist materialism. Nonetheless, the engagement with Enlightenment values continued in liberal strains of Christianity such as Unitarianism and Universalism, united today among some communities as Unitarian Universalism (UU), and hosting congregations with individuals of varying beliefs that range widely to include atheism, agnosticism, pantheism, deism, Judaism, Islam, Christianity, Neopaganism, Hinduism, Buddhism, Daoism, Humanism, and many more.


The Singularity: Fact or Fiction or Somewhere In-Between? – Article by Gareth John


Gareth John


Editor’s Note: The U.S. Transhumanist Party features this article by our member Gareth John, originally published by IEET on January 13, 2016, as part of our ongoing integration with the Transhuman Party. This article raises various perspectives about the idea of technological Singularity and asks readers to offer their perspectives regarding how plausible the Singularity narrative, especially as articulated by Ray Kurzweil, is. The U.S. Transhumanist Party welcomes such deliberations and assessments of where technological progress may be taking our species and how rapid such progress might be – as well as how subject to human influence and socio-cultural factors technological progress is, and whether a technological Singularity would be characterized predominantly by benefits or by risks to humankind. The article by Mr. John is a valuable contribution to the consideration of such important questions.

~ Gennady Stolyarov II, Chairman, United States Transhumanist Party, January 2, 2019


In my continued striving to disprove the theorem that there’s no such thing as a stupid question, I shall now proceed to ask one. What’s the consensus on Ray Kurzweil’s position concerning the coming Singularity? [1] Do you as transhumanists accept his premise and timeline, or do you feel that a) it’s a fiction, or b) it’s a reality but not one that’s going to arrive anytime soon? Is it as inevitable as Kurzweil suggests, or is it simply millenarian daydreaming in line with the coming Rapture?

According to Wikipedia (yes, I know, but I’m learning as I go along), the first use of the term ‘singularity’ in this context was made by Stanislaw Ulam in his 1958 obituary for John von Neumann, in which he mentioned a conversation with von Neumann about the ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’. [2] The term was popularised by mathematician, computer scientist and science fiction author Vernor Vinge, who argues that artificial intelligence, human biological advancement, or brain-computer interfaces could be possible causes of the singularity. [3] Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain. [4]

Kurzweil predicts the singularity to occur around 2045 [5] whereas Vinge predicts some time before 2030 [6]. In 2012, Stuart Armstrong and Kaj Sotala published a study of AGI predictions by both experts and non-experts and found a wide range of predicted dates, with a median value of 2040. [7] Discussing the level of uncertainty in AGI estimates, Armstrong stated at the 2012 Singularity Summit: ‘It’s not fully formalized, but my current 80% estimate is something like five to 100 years.’ [8]

Speaking for myself, and despite the above, I’m not at all convinced that a Singularity will occur, i.e., one singular event that effectively changes history forever from that precise moment forward. From my (admittedly limited) research on the matter, it seems far more realistic to think of the future in terms of incremental steps made along the way, leading up to major diverse changes (plural) in the way we as human beings – and indeed all sentient life – live; but try as I might, I cannot get my head around these all occurring in a near-simultaneous Big Bang.

Surely we have plenty of evidence already that the opposite will most likely be the case? Scientists have been working on AI, nanotechnology, genetic engineering, robotics, et al., for many years and I see no reason to conclude that this won’t remain the case in the years to come. Small steps leading to big changes maybe, but perhaps not one giant leap for mankind in a singular convergence of emerging technologies?

Let’s be straight here: I’m not having a go at Kurzweil or his ideas – the man’s clearly a visionary (at least from my standpoint) and leagues ahead when it comes to intelligence and foresight. I’m simply interested as to what extent his ideas are accepted by the wider transhumanist movement.

There are notable critics (again, leagues ahead of me in critically engaging with the subject) who argue against the idea of the Singularity. Nathan Pensky, writing in 2014, says:

It’s no doubt true that the speculative inquiry that informed Kurzweil’s creation of the Singularity also informed his prodigious accomplishment in the invention of new tech. But just because a guy is smart doesn’t mean he’s always right. The Singularity makes for great science-fiction, but not much else. [9]

Other well-informed critics have also dismissed Kurzweil’s central premise, among them Professor Andrew Blake, managing director of Microsoft Research Cambridge, Jaron Lanier, Paul Allen, Peter Murray, Jeff Hawkins, Gordon Moore, Jared Diamond, and Steven Pinker, to name but a few. Even Noam Chomsky has waded in to categorically deny the possibility. Pinker writes:

There is not the slightest reason to believe in the coming singularity. The fact you can visualise a future in your imagination is not evidence that it is likely or even possible… Sheer processing is not a pixie dust that magically solves all your problems. [10]

There are, of course, many more critics, but there are also many supporters, and Kurzweil rarely lets a criticism pass without a fierce rebuttal. Indeed, new interdisciplinary academic fields have been founded partly on the presupposition of the Singularity occurring in line with Kurzweil’s predictions (along with other phenomena that pose the possibility of existential risk). Examples include Nick Bostrom’s Future of Humanity Institute at Oxford University and the Centre for the Study of Existential Risk at Cambridge.

Given the above and returning to my original question: how do transhumanists taken as a whole rate the possibility of an imminent Singularity as described by Kurzweil? Good science or good science fiction? For Kurzweil it is the pace of change – exponential growth – that will result in a runaway effect, an intelligence explosion, where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence. [11] The only way for us to participate in such an event will be by merging with the intelligent machines we are creating.

And I guess this is what is hard for me to fathom. We are creating these machines with all our mixed-up, blinkered, prejudicial, oppositional minds, aims, and values. We as human beings, however intelligent, are an absolutely necessary part of the picture that I think Kurzweil sometimes underestimates. I’m more inclined to agree with Jamais Cascio when he says:

I don’t think that a Singularity would be visible to those going through one. Even the most disruptive changes are not universally or immediately distributed, and late followers learn from the dilemmas of those who had initially encountered the disruptive change. [12]

So I’d love to know what you think. Are you in Kurzweil’s corner, waiting for that singular moment in 2045 when the world as we know it stops for an instant… and then restarts in a glorious new utopian future? Or do you agree with Kurzweil’s timeline but harbour serious fears that the whole ‘glorious new future’ may not be on the cards, and that we’ll all be obliterated by a newborn AGI’s capriciousness or by gray goo? Or are you a moderate, maintaining that a Singularity, while almost certain to occur, will pass unnoticed by those waiting for it? Or do you think it’s so much baloney?

Whatever your position, I’d really value your input and would love to hear your views on the subject.

NOTES

1. As stated below, the term Singularity was in use before Kurzweil’s appropriation of it. But as shorthand I’ll refer to his interpretation and predictions relating to it throughout this article.

2. Carvalko, J, 2012, ‘The Techno-human Shell-A Jump in the Evolutionary Gap.’ (Mechanicsburg: Sunbury Press)

3. Ulam, S, 1958, ‘Tribute to John von Neumann’, Bulletin of the American Mathematical Society, 64, #3, part 2, p. 5

4. Vinge, V, 2013, ‘Vernor Vinge on the Singularity’, San Diego State University. Retrieved Nov 2015

5. Kurzweil R, 2005, ‘The Singularity is Near’, (London: Penguin Group)

6. Vinge, V, 1993, ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, originally in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129

7. Armstrong S and Sotala, K, 2012 ‘How We’re Predicting AI – Or Failing To’, in Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster (Pilsen: University of West Bohemia) https://intelligence.org/files/PredictingAI.pdf

8. Armstrong, S, ‘How We’re Predicting AI’, from the 2012 Singularity Conference

9. Pensky, N, 2014, article taken from Pando. https://goo.gl/LpR3eF

10. Pinker S, 2008, IEEE Spectrum: ‘Tech Luminaries Address Singularity’. http://goo.gl/ujQlyI

11. Wikipedia, ‘Technological Singularity’. Retrieved Nov 2015. https://goo.gl/nFzi2y

12. Cascio, J, ‘New FC: Singularity Scenarios’ article taken from Open the Future. http://goo.gl/dZptO3

Gareth John lives in Cardiff, UK and is a trainee psychotherapist with an interest in the intersection of emerging technologies with behavioural and mental health. He has an MA in Buddhist Studies from the University of Bristol. He is also a member of the U.S. Transhumanist Party / Transhuman Party. 


HISTORICAL COMMENTS

Gareth,

Thank you for the thoughtful article. I’m emailing to comment on the blog post, though I can’t tell when it was written. You say that you don’t believe the singularity will necessarily occur the way Kurzweil envisions, but it seems like you slightly mischaracterize his definition of the term.

I don’t believe that Kurzweil ever meant to suggest that the singularity will simply consist of one single event that will change everything. Rather, I believe he means that the singularity is when no person can make any prediction past that point in time when a $1,000 computer becomes smarter than the entire human race, much like how an event horizon of a black hole prevents anyone from seeing past it.

Given that Kurzweil’s definition isn’t an arbitrary claim that everything changes all at once, I don’t see how anyone can really argue with whether the singularity will happen. After all, at some point in the future, even if it happens much slower than Kurzweil predicts, a $1,000 computer will eventually become smarter than every human. When this happens, I think it’s fair to say no one is capable of predicting the future of humanity past that point. Would you disagree with this?

Even more important, although many of Kurzweil’s predictions about when certain products will become commercially available to the general public have proved untrue, all the evidence I’ve seen about the actual trend of the law of accelerating returns seems spot on. Maybe this trend will slow down, or stop, but it hasn’t yet. Until it does, I think the law of accelerating returns, and Kurzweil’s singularity, deserve the benefit of the doubt.

[…]

Thanks,

Rich Casada


Hi Rich,
Thanks for the comments. The post was written back in 2015 for IEET, and represented a genuine ask from the transhumanist community. At that time my priority was to learn what I could, where I could, and not a lot’s changed for me since – I’m still learning!

I’m not sure I agree that Kurzweil’s definition isn’t a claim that ‘everything changes at once’. In The Singularity is Near, he states:

“So we will be producing about 10^26 to 10^29 cps of nonbiological computation per year in the early 2030s. This is roughly equal to our estimate for the capacity of all living biological human intelligence … This state of computation in the early 2030s will not represent the Singularity, however, because it does not yet correspond to a profound expansion of our intelligence. By the mid-2040s, however, that one thousand dollars’ worth of computation will be equal to 10^26 cps, so the intelligence created per year (at a total cost of about $10^12) will be about one billion times more powerful than all human intelligence today. That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.” (Kurzweil 2005, pp. 135-36, italics mine).

Kurzweil specifically defines what the Singularity is and isn’t (a profound and disruptive transformation in human capability), and gives a more-or-less precise prediction of when it will occur. A consequence of that may be that we will not ‘be able to make any prediction past that point in time’; however, I don’t believe this is the main thrust of Kurzweil’s argument.

I do, however, agree with what you appear to be postulating (correct me if I’m wrong): that a better definition of a Singularity might indeed simply be ‘when no person can make any prediction past that point in time.’ And, like you, I don’t believe it will be tied to any set point in time. We may be living through a singularity as we speak. There may be many singularities (although, worth noting again, Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence as opposed to other technologies, writing for example that, “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine.” (Kurzweil 2005, p. 9)).

So, having said all that, and in answer to your question of whether there is a point beyond which no one is capable of predicting the future of humanity: I’m not sure. I guess none of us can really be sure until, or unless, it happens.

This is why I believe having the conversation about the ethical implications of these new technologies now is so important. Post-singularity might simply be too late.

Gareth