Gennady Stolyarov II
Part 5: Belief in Progress vs. Rational Uncertainty
The Enlightenment, with its confident efforts to fashion a science of man, was archetypal of the belief that humankind would eventually achieve lasting peace and happiness. In what some interpret as a reformulation of Christianity’s teleological salvation history, in which the People of God will be redeemed at the end of days and the Kingdom of Heaven established on Earth, most Enlightenment thinkers believed in the inevitability of human political and technological progress, secularizing the Christian conception of history and eschatology into a conviction that humanity, using a system of thought built on reason and science, would be able to continually improve itself. As Carl Becker portrayed it in his 1933 book The Heavenly City of the Eighteenth-Century Philosophers, the philosophes “demolished the Heavenly City of St. Augustine only to rebuild it with more up-to-date materials.” Whether this Enlightenment humanist view of “progress” amounted merely to a recapitulation of the Christian teleological vision of history, or whether Enlightenment beliefs in “continual, linear political, intellectual, and material improvement” reflected, as James Hughes posits, “a clear difference from the dominant Christian historical narrative in which little would change until the End Times and Christ’s return”, the notion of a collective progress towards a definitive end-point was, in any case, one that remained unsupported by the scientific worldview. The scientific worldview, as Hughes reminds us in the opening paragraph of this essay in his series, supports not historical inevitability, only uncertainty. “We may annihilate ourselves or regress,” he says, and “Even the normative judgment of what progress is, and whether we have made any, is open to empirical skepticism.”
We are hereby introduced to a conflict that has existed, at least since the Enlightenment, between progressive optimism and radical uncertainty. Building on the Enlightenment’s faith in the inevitability of political and scientific progress, the idea of an end-point, a salvation moment for humankind, fuelled all the great Enlightenment ideologies that followed, flowing down, as Hughes traces, through Comte’s “positivism” and Marxist theories of historical determinism to neoconservative triumphalism about the “end of history” in democratic capitalism. Communists envisaged that end-point as a post-capitalist utopia that would finally resolve the class struggle, which they conceived as the true engine of history. This vision also contained the 20th-century project to build the Soviet Man, one of extra-human capacities, for as Trotsky had predicted, after the Revolution, “the average human type will rise to the heights of an Aristotle, a Goethe, or a Marx. And above this ridge new peaks will rise.” For 20th-century free-market liberals, by contrast, this End of History had arrived with the final triumph of liberal democracy, with the entire world bound to be swept along in its course. Events, though, especially so far in the 21st century, appear to prove this view wrong.
This belief in the historical inevitability of progress, moreover, has, as Hughes convincingly argues, always been locked in conflict with “the rationalist, scientific observation that humanity could regress or disappear altogether.” Enlightenment pessimism, or at least realism, has over the centuries proven a stubborn resistance to, and constraint on, Enlightenment optimism. Hughes, citing Henry Vyverberg, reminds us that there were, after all, French Enlightenment thinkers within that same era who rejected the belief in linear historical progress, proposing instead historical cycles or even decadence. Beyond that, contemporary commentators like John Gray argue that the Enlightenment’s own quest for progress unfortunately issued in, for example, the racist pseudo-science of Voltaire and Hume, while endeavours to establish the rule of reason have resulted in bloody fanaticisms, from Jacobinism to Bolshevism, which equaled the worst atrocities attributable to religious believers. Horrors like racism and anti-Semitism, in Gray’s verdict, “…are not incidental defects in Enlightenment thinking. They flow from some of the Enlightenment’s central beliefs.”
Even Darwinism’s theory of natural selection was, according to Hughes, “suborned by the progressive optimistic thinking of the Enlightenment and its successors to the doctrine of inevitable progress, aided in part by Darwin’s own teleological interpretation.” The problem, however, is that from the scientific worldview the theory of natural selection provides no support for “progress”, only the recognition that humanity, Hughes plainly states, “like all creatures, is on a random walk through a mine field, that human intelligence is only an accident, and that we could easily go extinct as many species have done.” Gray, for example, rebukes Darwin, who wrote: “As natural selection works solely for the good of each being, all corporeal and mental endowments will tend to progress to perfection.” Natural selection, however, does not work solely for the good of each being, a fact Darwin himself elsewhere acknowledged. Nonetheless, it has continually proven difficult for people to resist the impulse to identify evolution with progress, and equally difficult to resist the further temptation to invoke evolution in rationalizing views as dangerous as Social Darwinism and acts as horrible as eugenics.
Many skeptics therefore hold, rationally, that scientific utopias and promises to transform the human condition deserve the deepest suspicion. Reason is but a frail reed; all achievements of moral and political progress are and will always remain subject to reversal; and civilization could eventually just collapse. Historical events and experiences have accordingly caused faith in the inevitability of progress to wax and wane over time. Hughes notes that among several Millenarian movements and New Age beliefs, such faith that the world is headed for a millennial age can still be found, just as it exists in techno-optimist futurism. Nevertheless, he makes us see that “since the rise and fall of fascism and communism, and the mounting evidence of the dangers and unintended consequences of technology, there are few groups that still hold fast to an Enlightenment belief in the inevitability of conjoined scientific and political progress.” Within the transhumanist community, however, many still hold such faith in progress, marking one camp in the continuation of this Enlightenment-bequeathed conflict between transhumanist optimism and views of future uncertainty.
As on several occasions in the past, humanity is currently being spun yet another “End of History” narrative: one of a posthuman future. Yuval Harari, for instance, argues in Homo Deus that emerging technologies and new scientific discoveries are undermining the foundations of Enlightenment humanism, although as he proceeds with his presentation he proves himself unable to avoid one of the defining tropes of Enlightenment humanist thinking: the deeply entrenched tendency to conceive human history in teleological terms, fundamentally as a matter of collective progress towards a definitive end-point. This time, though, our era’s glorious “End of History” salvation moment is to be ushered in not by a politico-economic system but by a nascent techno-elite based in Silicon Valley, USA, a cluster steeped in a predominant tech-utopianism which has at its core the idea that the new technologies emerging there can steer humanity towards a definitive break-point in our history, the Singularity. Among believers in this coming Singularity, transhumanists, having inherited the tension between Enlightenment convictions in the inevitability of progress and, in Hughes’ words, the “Enlightenment’s scientific, rational realism that human progress or even civilization may fail”, now struggle with a renewed contradiction. Here the contrast Hughes intends to portray gains sharpness, for transhumanists today are “torn between their Enlightenment faith in inevitable progress toward posthuman transcension and utopian Singularities” on the one hand, and, on the other, their “rational awareness of the possibility that each new technology may have as many risks as benefits and that humanity may not have a future.”
The risks of new technologies, even if they do not necessarily threaten the survival of humanity as a species, may yet have an undesirable impact on the mode and trajectory of our extant civilization. Henry Kissinger, in his 2018 article “How the Enlightenment Ends”, expressed his perception that technology, which is rooted in Enlightenment thought, is now superseding the very philosophy that is its fundamental principle. The universal values proposed by the Enlightenment philosophes, as Kissinger points out, could be spread worldwide only through modern technology, but at the same time such technology has ended or accomplished the Enlightenment and is now going its own way, creating the need for a new guiding philosophy. Kissinger argues specifically that AI may spell the end of the Enlightenment itself, issuing grave warnings about the consequences of an AI-led technological revolution whose “culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.” By way of analogy to how the printing press allowed the Age of Reason to supplant the Age of Religion, he buttresses his proposal that the modern counterpart of this revolutionary process is the rise of intelligent AI that will supersede human ability and put an end to the Enlightenment. Kissinger further outlines three areas of concern regarding the trajectory of artificial intelligence research: AI may achieve unintended results; in achieving intended goals, AI may change human thought processes and human values; and AI may reach intended goals but be unable to explain the rationale for its conclusions. Kissinger’s thesis, of course, has attracted both support and criticism from different quarters.
Reacting to Kissinger, Yuk Hui, for example, in “What Begins After the End of the Enlightenment?” maintained that “Kissinger is wrong—the Enlightenment has not ended.” Rather, “modern technology—the support structure of Enlightenment philosophy—has become its own philosophy”, with the universalizing force of technology becoming itself the political project of the Enlightenment.
Transhumanists, as already mentioned, reflect the continuation of some of these contradictions between belief in progress and uncertainty about the human future. Hughes shows us, nonetheless, some interesting historical turns suggesting further directions this mood has taken. In the 1990s, Hughes recalls, “transhumanists were full of exuberant Enlightenment optimism about unending progress.” As an example, Hughes cites Max More’s 1998 Extropian Principles, which defined “Perpetual Progress” as “the first precept of their brand of transhumanism.” Over time, however, Hughes relates, More himself came to temper this optimism, recasting this driving principle as one of “desirability”, more a normative goal than a faith in historical inevitability. “History”, More would say in 2002, “since the Enlightenment makes me wary of all arguments to inevitability…”
Rational uncertainty among transhumanists hence makes many of them refrain from arguing for the inevitability of transhumanism as a matter of progress. Further, there are indeed several possible factors which could keep the transhumanist idea of, and drive for, “progress” from translating into reality: a neo-Luddite revolution, a rise in preference for rural life, mass disenchantment with technological addiction and a growing turn to digital detox, nostalgia, disillusionment with modern civilization and a “return-to-innocence” counter-cultural movement, neo-Romanticism, a pop-culture allure of and longing for a Tolkien-esque world, cyclical thinking, conservatism, traditionalism, etc. The alternative, backlash, and antagonistic forces are myriad. Even within transhumanism, the anti-democratic and socially conservative Neoreactionary movement, with its rejection of the view that history shows an inevitable progression towards greater liberty and enlightenment, is gradually (and rather disturbingly) growing a contingent. As another point for rational uncertainty, Hughes discusses the three critiques (futurological, historical, and anthropological) that Philippe Verdoux offers of transhumanist and Enlightenment faith in progress, in which the anthropological argument holds that “pre-moderns were probably as happy or happier than we moderns.” After all, Rousseau, himself a French Enlightenment thinker, “is generally seen as having believed in the superiority of the ‘savage’ over the civilized.” Perspectives like these could stir anti-modern, anti-progress sentiments in people’s hearts and minds.
Demonstrating further why transhumanists must not be obstinate about the idea of inevitability, Hughes refers to Greg Burch’s 2001 work “Progress, Counter-Progress, and Counter-Counter-Progress”, in which Burch expounded on the Enlightenment and transhumanist commitment to progress as a commitment “to a political program, fully cognizant that there are many powerful enemies of progress and that victory was not inevitable.” Moreover, a failure to realize the goals of progress might not even result from the actions of “enemies” in that antagonistic sense of the word, for there is also the scenario, as the 2006 movie Idiocracy depicts, of a future dystopian society based on dysgenics: one in which, extrapolating from trends of the 21st century, the most intelligent humans reproduce less and eventually fail to have children while the least intelligent reproduce prolifically. Through this selection process, successive generations collectively become dumber and more virile with each passing century, leading to a future world plagued by anti-intellectualism, bereft of intellectual curiosity, social responsibility, and coherent notions of justice and human rights, and manifesting several other traits of cultural degeneration. This remains a possibility for our future world.
So while for many extropians and transhumanists perpetual progress was an unstoppable train, in response to which “one either got on board for transcension or consigned oneself to the graveyard”, other transhumanists, Hughes comments, especially in response to certain historical experiences (the 2000 dot-com crash, for example), have increasingly seen reason to temper their expectations about progress. In Hughes’s appraisal, while some transhumanists “still press for technological innovation on all fronts and oppose all regulation, others are focusing on reducing the civilization-ending potentials of asteroid strikes, genetic engineering, artificial intelligence and nanotechnology.” Some realism is hence needed to keep in constant check the excesses of the contemporary secular technomillennialism contained in some transhumanist strains.
Hughes presents Nick Bostrom’s 2001 essay “Analyzing Human Extinction Scenarios and Related Hazards” as one influential example of this anti-millennial realism, a text in which Bostrom, after outlining scenarios that could either end the existence of the human species or have us evolve into dead-ends, addressed not just how we can avoid extinction and ensure that there are descendants of humanity, but also how we can ensure that we will be proud to claim them. Bostrom has subsequently produced work on catastrophic risk estimation at the Future of Humanity Institute at Oxford. Hughes seems to favour this approach, for he makes sure to indicate that it has also been adopted as a programmatic focus for the Institute for Ethics and Emerging Technologies (IEET), which he directs, as well as for the transhumanist non-profit, the Lifeboat Foundation. Transhumanists who listen to Bostrom, we may deduce from Hughes, are being urged to take a more critical approach to technological progress.
With this more cautious attitude available, a new tension, Hughes reports, now plays out between eschatological certainty and pessimistic risk assessment, mainly in the debate over the Singularity. For the likes of Ray Kurzweil (2005), representing the camp of technomillennial, eschatological certainty, the pattern of accelerating trendlines towards a utopian merger of enhanced humanity and godlike artificial intelligence is one of unstoppability, which Kurzweil supports by pointing to the steady exponential march of technological progress through (and despite) wars and depressions. Dystopian and apocalyptic predictions of how humanity might fare under superintelligent machines (extinction, inferiority, and the like) are, in Hughes’s assessment, only minimally entertained by Kurzweil, since for the techno-prophet we are bound eventually to integrate with these machines into apotheosis.
The IEET has thus taken on the responsibility of serving as a site for teasing out this tension between technoprogressive “optimism of the will and pessimism of the intellect,” as Hughes echoes Antonio Gramsci. On the one hand, Hughes explains, “we have championed the possibility of, and evidence of, human progress. By adopting the term ‘technoprogressivism’ as our outlook, we have placed ourselves on the side of Enlightenment political and technological progress.” And yet on the other hand, he continues, “we have promoted technoprogressivism precisely in order to critique uncritical techno-libertarian and futurist ideas about the inevitability of progress. We have consistently emphasized the negative effects that unregulated, unaccountable, and inequitably distributed technological development could have on society” (one feels tempted to call out Landian accelerationism at this point). Technoprogressivism, the guiding philosophy of the IEET, stands as a principle insisting that technological progress needs to be consistently conjoined with, and dependent on, political progress, whilst recognizing that neither is inevitable.
In charting the essay towards a close, Hughes mentions a number of technoprogressive publications by himself and the IEET, among them Verdoux, who, despite his futurological, historical, and anthropological critique of transhumanism, goes on to argue for transhumanism on moral grounds (free of the language of “Marxism’s historical inevitabilism or utopianism, and cautious of the tragic history of communism”), and “as a less dangerous course than any attempt at ‘relinquishing’ technological development, but only after the naive faith in progress has been set aside.” Unfortunately, however, the “rational capitulationism” to the transhumanist future that Verdoux offers is, according to Hughes, “not something that stirs men’s souls.” Hughes hence, while admitting our need “to embrace these critical, pessimistic voices and perspectives”, calls on us likewise to heed the need to “also re-discover our capacity for vision and hope.” This optimism that humans “can” collectively exercise foresight and invention, and peacefully deliberate our way to a better future, rather than yielding to narratives that would lead us into the traps of utopian and apocalyptic fatalism, has been one of the motivations behind the creation of the “technoprogressive” brand. The brand, Hughes notes, has helped to distinguish “Enlightenment optimism about the ‘possibility’ of human political, technological and moral progress from millennialist techno-utopian inevitabilism.”
Presumably upon this technoprogressive philosophy, the new version of the Transhumanist Declaration, adopted by Humanity+ in 2009, shifted away from some of the language of the 1998 version and conveyed a more reflective, critical, realistic, utilitarian, “proceed with caution” and “act with wisdom” tone with respect to the transhumanist vision for humanity’s progress. This version of the declaration, though relatively sobered, remains no less inspiring. Hughes closes the essay with a reminder of our need to stay aware of the diverse ways in which our indifferent universe threatens our existence, of how our growing powers come with unintended consequences, and of why mindfulness in all our actions remains the best approach for navigating towards progress in our radically uncertain future.
In conclusion, following Hughes’ objectives in this series, it can be suggested that more studies of the Enlightenment (European and global) are desirable, especially for their potential to furnish us with a richer understanding of a number of problems within contemporary transhumanism that sprout from its roots deep in the Enlightenment. Interest and scholarship in Enlightenment studies, fortunately, seem to be experiencing a revival, and with increasing diversity of perspective, thereby presenting transhumanism with a variety of paths through which to explore and gain context for connected issues. Seeking insight into some foundations of transhumanism’s problems could thus proceed along several paths: through an examination of internal contradictions within the Enlightenment, in the manner of Max Horkheimer and Theodor Adorno’s “Dialectic of Enlightenment”; through assessing opponents of the Enlightenment as found, for example, in Isaiah Berlin’s notion of the “Counter-Enlightenment”; through investigating a more radical strain of the Enlightenment as presented in Jonathan Israel’s “Radical Enlightenment”; and through grappling with the nature of the relationships between transhumanism and the other heirs, both of the Enlightenment and of the Counter-Enlightenment, today. Again, and significantly, serious attention must be paid, now and going forward, to guarding transhumanism against ultimately falling into the hands of the Dark Enlightenment.
Ojochogwu Abdul is the founder of the Transhumanist Enlightenment Café (TEC), is the co-founder of the Enlightenment Transhumanist Forum of Nigeria (H+ Nigeria), and currently serves as a Foreign Ambassador for the U.S. Transhumanist Party in Nigeria.
Gennady Stolyarov II
The Stolyarov-Kurzweil Interview has been released at last! Watch it on YouTube here.
U.S. Transhumanist Party Chairman Gennady Stolyarov II posed a wide array of questions for inventor, futurist, and Singularitarian Dr. Ray Kurzweil on September 21, 2018, at RAAD Fest 2018 in San Diego, California. Topics discussed include advances in robotics and the potential for household robots, artificial intelligence and overcoming the pitfalls of AI bias, the importance of philosophy, culture, and politics in ensuring that humankind realizes the best possible future, how emerging technologies can protect privacy and verify the truthfulness of information being analyzed by algorithms, as well as insights that can assist in the attainment of longevity and the preservation of good health – including a brief foray into how Ray Kurzweil overcame his Type 2 Diabetes.
Learn more about RAAD Fest here. RAAD Fest 2019 will occur in Las Vegas during October 3-6, 2019.
Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.
Watch the presentation by Gennady Stolyarov II at RAAD Fest 2018, entitled, “The U.S. Transhumanist Party: Four Years of Advocating for the Future”.
Editor’s Note: The U.S. Transhumanist Party features this article by our member Gareth John, originally published by IEET on January 13, 2016, as part of our ongoing integration with the Transhuman Party. This article raises various perspectives about the idea of technological Singularity and asks readers to offer their perspectives regarding how plausible the Singularity narrative, especially as articulated by Ray Kurzweil, is. The U.S. Transhumanist Party welcomes such deliberations and assessments of where technological progress may be taking our species and how rapid such progress might be – as well as how subject to human influence and socio-cultural factors technological progress is, and whether a technological Singularity would be characterized predominantly by benefits or by risks to humankind. The article by Mr. John is a valuable contribution to the consideration of such important questions.
~ Gennady Stolyarov II, Chairman, United States Transhumanist Party, January 2, 2019
In my continued striving to disprove the theorem that there’s no such thing as a stupid question, I shall now proceed to ask one. What’s the consensus on Ray Kurzweil’s position concerning the coming Singularity?  Do you as transhumanists accept his premise and timeline, or do you feel that a) it’s a fiction, or b) it’s a reality but not one that’s going to arrive anytime soon? Is it as inevitable as Kurzweil suggests, or is it simply millenarian daydreaming in line with the coming Rapture?
According to Wikipedia (yes, I know, but I’m learning as I go along), the first use of the term ‘singularity’ in this context was made by Stanislaw Ulam in his 1958 obituary for John von Neumann, in which he mentioned a conversation with von Neumann about the ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’. The term was popularised by mathematician, computer scientist and science fiction author Vernor Vinge, who argues that artificial intelligence, human biological advancement, or brain-computer interfaces could be possible causes of the singularity. Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain.
Kurzweil predicts the singularity to occur around 2045, whereas Vinge predicts some time before 2030. In 2012, Stuart Armstrong and Kaj Sotala published a study of AGI predictions by both experts and non-experts and found a wide range of predicted dates, with a median value of 2040. Discussing the level of uncertainty in AGI estimates, Armstrong stated at the 2012 Singularity Summit: ‘It’s not fully formalized, but my current 80% estimate is something like five to 100 years.’
Speaking for myself, and despite the above, I’m not at all convinced that a Singularity will occur, i.e. one singular event that effectively changes history for ever from that precise moment moving forward. From my (admittedly limited) research on the matter, it seems far more realistic to think of the future in terms of incremental steps made along the way, leading up to major diverse changes (plural) in the way we as human beings – and indeed all sentient life – live, but try as I might I cannot get my head around these all occurring in a near-contemporary Big Bang.
Surely we have plenty of evidence already that the opposite will most likely be the case? Scientists have been working on AI, nanotechnology, genetic engineering, robotics, et al., for many years and I see no reason to conclude that this won’t remain the case in the years to come. Small steps leading to big changes maybe, but perhaps not one giant leap for mankind in a singular convergence of emerging technologies?
Let’s be straight here: I’m not having a go at Kurzweil or his ideas – the man’s clearly a visionary (at least from my standpoint) and leagues ahead when it comes to intelligence and foresight. I’m simply interested as to what extent his ideas are accepted by the wider transhumanist movement.
There are notable critics (again leagues ahead of me in critically engaging with the subject) who argue against the idea of the Singularity. Nathan Pensky, writing in 2014, says:
It’s no doubt true that the speculative inquiry that informed Kurzweil’s creation of the Singularity also informed his prodigious accomplishment in the invention of new tech. But just because a guy is smart doesn’t mean he’s always right. The Singularity makes for great science-fiction, but not much else. 
Other well-informed critics have also dismissed Kurzweil’s central premise, among them Professor Andrew Blake, managing director of Microsoft Research, Cambridge, Jaron Lanier, Paul Allen, Peter Murray, Jeff Hawkins, Gordon Moore, Jared Diamond, and Steven Pinker to name but a few. Even Noam Chomsky has waded in to categorically deny the possibility of such a singularity. Pinker writes:
There is not the slightest reason to believe in the coming singularity. The fact you can visualise a future in your imagination is not evidence that it is likely or even possible… Sheer processing is not a pixie dust that magically solves all your problems. 
There are, of course, many more critics, but there are also many supporters, and Kurzweil rarely lets a criticism pass without a fierce rebuttal. Indeed, new interdisciplinary academic fields have been founded in part on the presupposition of the Singularity occurring in line with Kurzweil’s predictions (along with other phenomena that pose the possibility of existential risk). Examples include Nick Bostrom’s Future of Humanity Institute at Oxford University and the Centre for the Study of Existential Risk at Cambridge.
Given the above, and returning to my original question: how do transhumanists taken as a whole rate the possibility of an imminent Singularity as described by Kurzweil? Good science or good science-fiction? For Kurzweil it is the pace of change, exponential growth, that will result in a runaway effect, an intelligence explosion, where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence. The only way for us to participate in such an event would be by merging with the intelligent machines we are creating.
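The “runaway” intuition here rests on simple compounding, and a toy calculation makes it concrete. The two-year doubling period and the millionfold target below are illustrative assumptions for the sketch, not Kurzweil’s actual figures:

```python
import math

# Toy model of exponential "accelerating returns": capability doubles
# every fixed period. The 2-year doubling period is an assumption chosen
# for illustration, not a figure taken from Kurzweil.
def years_to_reach(target_ratio: float, doubling_period: float = 2.0) -> float:
    """Years until capability has grown by target_ratio, doubling each period."""
    return doubling_period * math.log2(target_ratio)

# Under steady doubling, even a millionfold improvement arrives in ~40 years:
print(round(years_to_reach(1_000_000)))  # 40
```

The point of the sketch is only that exponential growth crosses any fixed threshold surprisingly soon; whether real technological capability actually compounds this smoothly is exactly what the critics below dispute.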
And I guess this is what is hard for me to fathom. We are creating these machines with all our mixed-up, blinkered, prejudicial, oppositional minds, aims, and values. We as human beings, however intelligent, are an absolutely necessary part of the picture that I think Kurzweil sometimes underestimates. I’m more inclined to agree with Jamais Cascio when he says:
I don’t think that a Singularity would be visible to those going through one. Even the most disruptive changes are not universally or immediately distributed, and late followers learn from the dilemmas of those who had initially encountered the disruptive change. 
So I’d love to know what you think. Are you in Kurzweil’s corner, waiting for that singular moment in 2045 when the world as we know it stops for an instant… and then restarts in a glorious new utopian future? Or do you agree with Kurzweil but harbour serious fears that the whole ‘glorious new future’ may not be on the cards, and that we’ll all be obliterated by the newborn AGI’s capriciousness or gray goo? Or are you a moderate, maintaining that a Singularity, while almost certain to occur, will pass unnoticed by those living through it? Or do you think it’s so much baloney?
Whatever your position, I’d really value your input and would love to hear your views on the subject.
1. As stated below, the term Singularity was in use before Kurzweil’s appropriation of it. But as shorthand I’ll refer to his interpretation and predictions relating to it throughout this article.
2. Carvalko, J, 2012, ‘The Techno-human Shell-A Jump in the Evolutionary Gap.’ (Mechanicsburg: Sunbury Press)
3. Ulam, S, 1958, ‘Tribute to John von Neumann’, Bulletin of the American Mathematical Society, 64, #3, part 2, p. 5
4. Vinge, V, 2013, ‘Vernor Vinge on the Singularity’, San Diego State University. Retrieved Nov 2015
5. Kurzweil R, 2005, ‘The Singularity is Near’, (London: Penguin Group)
6. Vinge, V, 1993, ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, originally in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129
7. Armstrong S and Sotala, K, 2012 ‘How We’re Predicting AI – Or Failing To’, in Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster (Pilsen: University of West Bohemia) https://intelligence.org/files/PredictingAI.pdf
8. Armstrong, S, ‘How We’re Predicting AI’, from the 2012 Singularity Conference
9. Pensky, N, 2014, article taken from Pando. https://goo.gl/LpR3eF
10. Pinker S, 2008, IEEE Spectrum: ‘Tech Luminaries Address Singularity’. http://goo.gl/ujQlyI
11. Wikipedia, ‘Technological Singularity; Retrieved Nov 2015. https://goo.gl/nFzi2y
12. Cascio, J, ‘New FC: Singularity Scenarios’ article taken from Open the Future. http://goo.gl/dZptO3
Gareth John lives in Cardiff, UK and is a trainee social researcher with an interest in the intersection of emerging technologies with behavioural and mental health. He has an MA in Buddhist Studies from the University of Bristol. He is also a member of the U.S. Transhumanist Party / Transhuman Party.
Thank you for the thoughtful article. I’m emailing to comment on the blog post, though I can’t tell when it was written. You say that you don’t believe the singularity will necessarily occur the way Kurzweil envisions, but it seems like you slightly mischaracterize his definition of the term.
I don’t believe that Kurzweil ever meant to suggest that the singularity will simply consist of one single event that will change everything. Rather, I believe he means that the singularity is when no person can make any prediction past that point in time when a $1,000 computer becomes smarter than the entire human race, much like how an event horizon of a black hole prevents anyone from seeing past it.
Given that Kurzweil’s definition isn’t an arbitrary claim that everything changes all at once, I don’t see how anyone can really argue with whether the singularity will happen. After all, at some point in the future, even if it happens much slower than Kurzweil predicts, a $1,000 computer will eventually become smarter than every human. When this happens, I think it’s fair to say no one is capable of predicting the future of humanity past that point. Would you disagree with this?
Even more important is that although many of Kurzweil’s predictions about when certain products will become commercially available to the general public have proven untrue, all the evidence I’ve seen about the actual trend of the law of accelerating returns seems to be exactly spot on. Maybe this trend will slow down, or stop, but it hasn’t yet. Until it does, I think the law of accelerating returns, and Kurzweil’s singularity, deserve the benefit of the doubt.[…]
Thanks for the comments. The post was written back in 2015 for IEET, and represented a genuine ask from the transhumanist community. At that time my priority was to learn what I could, where I could, and not a lot’s changed for me since – I’m still learning!
I’m not sure I agree that Kurzweil’s definition isn’t a claim that ‘everything changes at once’. In The Singularity is Near, he states:
“So we will be producing about 10^26 to 10^29 cps of nonbiological computation per year in the early 2030s. This is roughly equal to our estimate for the capacity of all living biological human intelligence … This state of computation in the early 2030s will not represent the Singularity, however, because it does not yet correspond to a profound expansion of our intelligence. By the mid-2040s, however, that one thousand dollars’ worth of computation will be equal to 10^26 cps, so the intelligence created per year (at a total cost of about $10^12) will be about one billion times more powerful than all human intelligence today. That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.” (Kurzweil 2005, pp. 135-36, italics mine).
Kurzweil specifically defines what the Singularity is and isn’t (a profound and disruptive transformation in human intelligence), and gives a more-or-less precise prediction of when it will occur. A consequence of that may be that we will not ‘be able to make any prediction past that point in time’; however, I don’t believe this is the main thrust of Kurzweil’s argument.
I do, however, agree with what you appear to be postulating (correct me if I’m wrong): that a better definition of a Singularity might indeed simply be ‘when no person can make any prediction past that point in time.’ And, like you, I don’t believe it will be tied to any set point in time. We may be living through a singularity as we speak. There may be many singularities. (It is worth noting again, though, that Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence as opposed to other technologies, writing, for example, that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine.” (Kurzweil 2005, p. 9).)
So, having said all that, and in answer to your question of whether there is a point beyond which no one is capable of predicting the future of humanity: I’m not sure. I guess none of us can really be sure until, or unless, it happens.
This is why I believe having the conversation about the ethical implications of these new technologies now is so important. Post-singularity might simply be too late.
There is a perception, bandied about among the general public, that unstoppable technological forces are already upon us like a runaway train, threatening to derail our way of life and everything we have ever known, and that there is nothing we can do about it.
However, I would like to offer some hope and at the same time dispel this seemingly apocalyptic scenario.
There appear to be two main schools of thought when we discuss the future: the Ray Kurzweil school, which holds that the future will evolve as it will and that we will reach the Singularity by a certain date, and the Peter Thiel school, which holds that the future won’t be built unless we build it.
I would like to build upon Mr. Thiel’s idea by saying that the future will indeed be built – but unless we, as a society, a human race, and a world, join forces to build a future we would like to live in and which reflects our values, it may not be one we are completely comfortable with.
Thus, this is a call to action for not only those who are actively involved in the fields of technology, science, and engineering, but all people around the world, because the sum of our collective actions will decide the fate of the world, and the future we live in. Whether we want to admit it, all of us are, on some level, responsible for how the world develops every day.
I urge those of you who may have resigned yourselves to the idea that there is nothing you can do to help change the trajectory of the world to take a look with new eyes. There is always something all of us can do, because every day we are interacting with others, building relationships, helping to create products, working on resolving problems that affect humanity, contributing to the success of an organization, company, or family, and performing actions that help the world develop, no matter on how small a scale that might be.
Everyone on Earth has a role to play in the creation of our future. That is what you are here for – to help fulfill your personal vision and mission while also contributing to the development of the world. That is how important you are.
So the next time someone remarks that the writing is on the wall and that we should just accept that we have no say in how the world evolves, please remember that we are all architects of our own future, which hasn’t even been written yet. How it will be written depends on the actions every one of us takes every day. Therefore, the question we should be asking ourselves every day is, what kind of future will we build? And then, of course, after answering this question, we should not waste any time in building that future we have envisioned.
Arin Vahanian is Director of Marketing for the U.S. Transhumanist Party.
Gennady Stolyarov II
On March 31, 2018, Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, was interviewed by Nikola Danaylov, a.k.a. Socrates, of Singularity.FM. A synopsis, audio download, and embedded video of the interview can be found on Singularity.FM here. You can also watch the YouTube video recording of the interview here.
Apparently this interview, nearly three hours in length, broke the record for the length of Nikola Danaylov’s in-depth, wide-ranging conversations on philosophy, politics, and the future. The interview covered both some of Mr. Stolyarov’s personal work and ideas, such as the illustrated children’s book Death is Wrong, as well as the efforts and aspirations of the U.S. Transhumanist Party. The conversation also delved into such subjects as the definition of transhumanism, intelligence and morality, the technological Singularity or Singularities, health and fitness, and even cats. Everyone will find something of interest in this wide-ranging discussion.
The U.S. Transhumanist Party would like to thank its Director of Admissions and Public Relations, Dinorah Delfin, for the outreach that enabled this interview to happen.
To help advance the goals of the U.S. Transhumanist Party, as described in Mr. Stolyarov’s comments during the interview, become a member for free, no matter where you reside. Click here to fill out a membership application.
Gennady Stolyarov II
Bobby Ridge, Secretary-Treasurer of the U.S. Transhumanist Party, and Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, provide a broad “big-picture” overview of transhumanism and major ongoing and future developments in emerging technologies that present the potential to revolutionize the human condition and resolve the age-old perils and limitations that have plagued humankind.
This is a beginners’ overview of transhumanism – which means that it is for everyone, including those who are new to transhumanism and the life-extension movement, as well as those who have been involved in it for many years – since, when it comes to dramatically expanding human longevity and potential, we are all beginners at the beginning of what could be our species’ next great era.
Become a member of the U.S. Transhumanist Party for free, no matter where you reside.
See Mr. Stolyarov’s presentation, “The U.S. Transhumanist Party: Pursuing a Peaceful Political Revolution for Longevity“.
In the background of some of the video segments is a painting now owned by Mr. Stolyarov, from “The Singularity is Here” series by artist Leah Montalto.
I urge the United States Transhumanist Party to support an international ban on the use of autonomous weapons, and to support government subsidies and alternative funding for AI-safety research – funding very similar to Elon Musk’s efforts. As Max Tegmark recently noted, Elon Musk’s $10M donation to the Future of Life Institute helped fund 37 grants to run a global research program aimed at keeping AI beneficial to humanity.
Biologists fought hard to pass the international ban on biological weapons, so that biology would be known as it is today: a science that cures diseases, ends suffering, and makes sense of the complexity of living organisms. Similarly, the community of chemists united and achieved an international ban on the use of chemical weapons. Scientists conducting AI research should follow their predecessors’ wisdom and unite to achieve an international ban on autonomous weapons! It is sad to say that we are already losing this fight. The Kalashnikov Bureau weapons-manufacturing company announced that it has recently invented an unmanned ground vehicle (UGV), which field tests have reportedly shown to perform at better-than-human levels. China recently began field-testing cruise missiles with AI and autonomous capabilities, and a few companies are getting very close to having AI autopilots capable of controlling the flight envelope at hypersonic speeds. (Amir Husain: “The Sentient Machine: The Coming Age of Artificial Intelligence“)
Even though, in 2015 and 2016, the US government spent only $1.1 billion and $1.2 billion on AI research, respectively, according to Reuters, “The Pentagon’s fiscal 2017 budget request will include $12 billion to $15 billion to fund war gaming, experimentation and the demonstration of new technologies aimed at ensuring a continued military edge over China and Russia.” While these autonomous weapons are already being developed, the UN Convention on Certain Conventional Weapons (CCW) could not even agree on a definition of autonomous weapons after four years of meetings, despite explicitly expressing dire concern about their spread. It decided to put off the conversation for another year, but we all know that, at the pace technology is advancing, we may not have another year to postpone a definition and solutions. Our species must advocate and emulate the 23 Asilomar AI Principles, which over 1,000 expert AI researchers from all around the globe have signed.
In only the last decade or so, there has been a combined investment of trillions of dollars towards an AI race from the private sector – Google, Microsoft, Facebook, Amazon, Alibaba, Baidu, and other tech titans – along with whole governments, such as China, South Korea, Russia, Canada, and only recently the USA. The investments are mainly towards making AI more powerful, but not safer! Yes, the intelligence and sentience of artificial superintelligence (ASI) will be inherently uncontrollable: humans controlling the development of ASI would be like an ant trying to control the human construction of a NASA space station on top of its colony. Before we get to that point – at which, hopefully, this issue will be solved by a brain-computer interface – we can get close to making the development of artificial general intelligence (AGI) and weak ASI safe by steering AI research efforts towards solving the alignment problem, the control problem, and other problems in the field. This can be done with proper funding from the tech titans and governments.
“AI will be the new electricity. Electricity has changed every industry and AI will do the same but even more of an impact.” – Andrew Ng
“Machine learning and AI will empower and improve every business, every government organization, philanthropy, basically there is no institution in the world that cannot be improved by machine learning.” – Jeff Bezos
ANI (artificial narrow intelligence) and AGI (artificial general intelligence) by themselves have the potential to alleviate an incomprehensible amount of suffering and disease around the world, and in the next few decades the hammer of biotechnology and nanotechnology will likely come down to cure all diseases. If the trends of information technologies continue to accelerate, which they certainly will, then in the next decade or so an ASI will be developed. This God-like intelligence will migrate into space for resources and will scale to an intragalactic size. To reiterate old news: to keep up with this new being, we are likely to connect our brains to it via brain-computer interface.
“The last time something so important like this has happened was maybe 4.2 billion-years-ago, when life was invented.” – Juergen Schmidhuber
Due to the independent assortment of chromosomes during meiosis, you had roughly a 1-in-70-trillion chance of existing. Now multiply that by the probability of crossing over – a process with orders of magnitude more possible outcomes than 70 trillion. Then multiply this by the probability of random fertilization (the chance of your parents meeting and copulating). Then multiply whatever that number is by similar probabilities for all your ancestors over hundreds of millions of years – ancestors that also survived asteroid impacts, plagues, famines, predators, and other perils. You may be feeling pretty lucky, but on top of all of that, science and technology are about to prevent and cure any disease we may come across, and we will see this new intelligence emerge in laboratories all around the world. Any linguistic description of how spectacularly LUCKY we are to be alive right now, experiencing this scientific revolution, will fall abysmally short of how lucky we truly are. AI experts, Transhumanists, Singularitarians, and all others who understand this revolution have an obligation to provide every person with an educated option they can pursue if they desire to take part in indefinite longevity, augmentation into superintelligence, and whatever lies beyond the Singularity 10-30 years from now.
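The 70-trillion figure is not rhetorical; it falls out of independent assortment alone. Each parent has 23 chromosome pairs, so each gamete carries one of 2^23 possible chromosome combinations, and a particular egg-sperm pairing is one outcome among (2^23)^2 – before crossing over and random fertilization shrink the odds even further:

```python
# Independent assortment only: each of 23 chromosome pairs contributes one
# of its two homologs to a gamete, so each parent can make 2^23 combinations.
combinations_per_gamete = 2 ** 23            # 8,388,608 per parent
# A specific egg meeting a specific sperm is one outcome among all pairings.
possible_offspring = combinations_per_gamete ** 2
print(possible_offspring)  # 70368744177664 -- roughly 70 trillion
```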
There are many other potential sources of existential risk, such as synthetic biology, nuclear war, the climate crisis, molecular nanotechnology, totalitarianism-enabling technologies, supervolcanoes, asteroids, biowarfare, human modification, geoengineering, etc. A mistake in only one of these areas could cause our species to go extinct – which is the definition of an existential risk. Science created some of these existential risks, and only Science will prevent them. Philosophy, religion, complementary and alternative medicine, and anything else outside the scientific demarcation line will not solve these existential risks, nor the myriad of individual suffering and death that occurs daily. Given these developments, Carl Sagan’s priceless wisdom has become even more palpable than before: “we have arranged a society based on Science and technology, in which no one understands anything about Science and technology, and this combustible mixture of ignorance and power sooner or later is going to blow up in our faces.” The best chance we have of surviving the next 30 years, and whatever lies beyond the Singularity, is by transitioning to a Science-Based Species. A Science-Based Species follows the model of Dr. Steven Novella’s recent advocacy, which calls for a transition from Evidence-Based Medicine to Science-Based Medicine. Dr. Novella and his team understand that “the best method for determining which interventions and health products are safe and effective is, without question, good science.” Why arbitrarily claim this only for medicine? I propose a K-12 educational system that teaches the PROCESS of Science.
Only when the majority of ~8 billion people are scientifically literate, and when public reason is guided by non-controversial scientific results and non-controversial methods, will we be capable of managing these scientific tools – tools that could take our jobs, cause incomprehensible levels of suffering, and kill us all; tools that are currently in our possession; and tools that continue to become more powerful and to democratize, dematerialize, and demonetize at an exponential rate. I cannot stress enough that ‘scientifically literate’ means that people are adept at utilizing the PROCESS of Science.
Bobby Ridge is the Secretary-Treasurer of the United States Transhumanist Party. Read more about him here.
Tegmark, M. (2015). Elon Musk donates $10M to keep AI beneficial. Futureoflife.org. https://futureoflife.org/2015/10/12/elon-musk-donates-10m-to-keep-ai-beneficial/
Husain, A. (2018). Amir Husain: “The Sentient Machine: The Coming Age of Artificial Intelligence” | Talks at Google. Talks at Google. Youtube.com. https://www.youtube.com/watch?v=JcC5OV_oA1s&t=763s
Tegmark, M. (2017). Max Tegmark: “Life 3.0: Being Human in the Age of AI” | Talks at Google. Talks at Google. Youtube.com. https://www.youtube.com/watch?v=oYmKOgeoOz4&t=1208s
Conn, A. (2015). Pentagon Seeks $12 -$15 Billion for AI Weapons Research. Futureoflife.org. https://futureoflife.org/2015/12/15/pentagon-seeks-12-15-billion-for-ai-weapons-research/
BAI 2017 conference. (2017). ASILOMAR AI PRINCIPLES. Futureoflife.org. https://futureoflife.org/ai-principles/
Ng, A. (2017). Andrew Ng – The State of Artificial Intelligence. The Artificial Intelligence Channel. Youtube.com. https://www.youtube.com/watch?v=NKpuX_yzdYs
Bezos, J. (2017). Gala2017: Jeff Bezos Fireside Chat. Internet Association. Youtube.com. https://www.youtube.com/watch?v=LqL3tyCQ1yY
Schmidhuber, J. (2017). True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo. TEDx Talks. Youtube.com. https://www.youtube.com/watch?v=-Y7PLaxXUrs
Kurzweil, R. (2001). The Law of Accelerating Returns. Kurzweil Accelerating Intelligence. Kurzweilai.net. http://www.kurzweilai.net/the-law-of-accelerating-returns
Sagan, C. (1996). Remembering Carl Sagan. Charlierose.com. https://charlierose.com/videos/2625
Sbmadmin. (2008). Announcing the Science-Based Medicine Blog. Sciencebasedmedicine.org. https://sciencebasedmedicine.org/hello-world/
Long ago and very, VERY far away, the universe that we find ourselves in today was born. For billions of years, the universe has been completely inhospitable to life as we understand it. This remains the case today throughout much of the known universe, including right here in what we have come to believe is a “safe haven” for us.
Yes, the Earth that gave birth to us appears to be safe – for now. It is clearly much more hospitable to life than even the space just beyond our atmosphere. Make no mistake, however – danger lurks at every corner. Gamma rays from exploding stars millions of light years away; massive solar flares from our own beloved star; planet-obliterating asteroids; sociopathic world leaders armed to the teeth with megaton warheads; and superintelligent robots gone mad are just a few potential threats to the survival of life on this tiny speck of nothing floating in a vast ocean of cold indifference.
Have no fear, though. One of these things is not like the others. It could be the reverse of what we imagine it to be; it very well could be our saving grace.
“Superintelligence” – a term coined to describe the future of man-made technology – is essentially the natural outcome of the very thing that makes us human: our deep-seated need to understand and master our surroundings. For millions of years now, mankind has made a somewhat steady march towards this goal.
We now stand on the brink of The Singularity – the point in spacetime where humans will be able to integrate the culmination of their millions-of-years-long pursuit of knowledge directly into their physiology. The benefits of this unspeakably beautiful possibility are immeasurable, and yet most of us fear it as much as – or more than – any other event imaginable.
Whether we fear it or not, The Singularity is inevitable. We would not be humans if we collectively decided to stop understanding and mastering our surroundings. If we somehow manage to dodge all of the other doomsday scenarios, we WILL see The Singularity come to fruition.
While it is extremely difficult to speculate what life will be like post-Singularity, it seems silly to think that life will remain as it is now. When every snot-nosed toddler has direct access to computational power exponentially greater than Albert Einstein could have ever hoped to have, diseases – including old age – will be a morbid footnote in the annals of spacetime. When the Neil deGrasse Tysons and Elon Musks of the world have instantaneous access to the accumulated knowledge of the last few hundred years, and the ability to research and master any subject in milliseconds, we will have very few problems.
It is likely that, with our new capabilities, humans will cease to be what we think of as humans altogether. We will be transhumans, a forced step forward in the evolutionary process. Once we transcend evolution, we transcend nature. In essence, we become “supernatural beings” – gods in our own right.
Where does this path lead us, and what does it mean for the “inhospitable” universe out there? Will we conquer “the final frontier” with our god-like powers? This also seems inevitable.
Imagine if we had the ability to “upload” our consciousnesses onto tiny, nearly weightless microchips. The problem of escaping this doomed planet seems ridiculously simple at that point. With no bodies requiring nourishment, being subject to radiation poisoning, or even weighing a spaceship down, it would be extraordinarily easy to transport thousands of sentient beings across vast expanses of spacetime in a vessel no larger than a modern automobile. With no threat of death due to the passage of time, the only real threat at that point is collision with comets and the like.
What would our problems be without any real threat of death? The scarcity of energy comes to mind. Without energy, everything dies. In space, the temperature is near Absolute Zero. This means that energy is NOT abundant, at least not in any form that we’re used to utilizing.
In fact, there is a finite amount of useable energy in the universe. It seems inevitable that our next great mission after The Singularity is to find useable energy throughout the universe and maximize its utility. We will view the destruction of useable energy as the greatest waste imaginable, similar to how we view the wasting of time now. Alas, before The Singularity, we are forced to view time as the most precious resource because we are tragically bound to expiration dates. When time is no longer a hindrance, energy will become vastly more precious.
What then, would be our likely solution to the need to collect and conserve as much energy as possible? Visions of stars wrapped in energy nets, and entire planets converted into generators dance in the head. Eventually, we may even convert entire solar systems, then galaxies, into energy farms to feed our superintelligent descendants. With the conversion of the energy of entire galaxies into fuel for self-evolving gods, all connected through an intergalactic internet which obliterates our notion of a separate “self,” it would not be long before the entire universe is made up of a single, omnipotent, omnipresent being.
It is possible that the end result of the universe is what we call “God.” This is the transposition of the Alpha and the Omega. It is possible that, rather than the universe being created by a supreme being, the opposite is true: the universe could be the womb wherein a supreme being is gestating.
In the end, it seems silly to worry about the “consequences” of our technological advances. Either way, we are destined for eradication in our present form. Whether by our own doing, or simply because the universe is indifferent to our desire to survive, the human race will eventually come to an end. Why not utilize the very essence of our species to escape this fate? What have we really got to lose in the end?
And they all lived happily ever after…
Michael Hanson founded the Transhumanist Research and Support Foundation and endeavors to assist the Transhumanist Party’s efforts in New Hampshire.
Copyright 2017 Live and Live Free Media, LLC
Left-click on the thumbnails below to see a higher-resolution, downloadable image of each painting.
The U.S. Transhumanist Party is pleased to feature art by painter Leah Montalto, inspired by the concept of the Singularity. These paintings were originally exhibited at the Reis Experimental Gallery in Long Island City, NY, during March 23-25, 2016. See the page for the original exhibit, “The Singularity is Here“.
“The paintings are a celebratory valuing of life. They are symbolic of the wonder inherent in the art of creation and building, alluding to the potential for the advancement of civilization. The paintings celebrate the impulse toward reason, innovation, creation, and liberty.” – Leah Montalto
Description from “The Singularity is Here” Exhibit:
In her large-scale paintings, Leah Montalto explores the visible and the invisible, the physical and the metaphysical, envisioning an expansion in both material and inner dimensions. With a masterful deployment of color, and a dynamic sense of motion, her paintings defy convention by simultaneously revealing two paradoxical perspectives.
From one perspective, Montalto’s paintings are classical landscapes, inspired by the tradition of the Hudson River Landscape School. Montalto creates futurist landscapes, taking the viewer on a three-dimensional journey through outer space, and through imagined nanotech creation scenes in scales both massive and miniature. Structures break apart, coalesce, and reform as she evokes the spirit of creation, transformation, and reconstruction.
From a second perspective, Montalto’s paintings are two-dimensional abstract paintings, evoking the physiological effect of gazing at a Tibetan mandala, offering an entry to an internal space of reflection, contemplation, and illumination.
With these paradoxical yet simultaneous perspectives – alternating between three-dimensional space and internal vision, Montalto’s paintings evoke an exhilarating sense of the potential of new worlds and new ways of thinking. Montalto’s paintings present an optimistic view of the future, of transformational possibilities, and of a merging of the material environment and the psyche, and of nature and technology.
Leah Montalto was born in Boston, MA in 1979. She lives and works in New York City and Queens, NY. Montalto holds an MFA from Rhode Island School of Design and a BFA from the Cleveland Institute of Art. She is the recipient of numerous awards, including the National Academy Museum of Fine Art’s Hallgarten Prize for Excellence in Painting, and a New York City Cultural Commission Individual Artist Grant. Exhibitions of her work include shows at the National Academy Museum of Fine Art in New York City, Priska Juschka Fine Art Gallery in New York City, Reis Experimental Gallery in Queens, University of Michigan Gallery in Ann Arbor, and the Korea Biennial. Montalto has taught painting at Sarah Lawrence College, State University of New York at Purchase, Massachusetts College of Art and Design, University of Connecticut, and Rhode Island School of Design.