
The Singularity: Fact or Fiction or Somewhere In-Between? – Article by Gareth John


Gareth John


Editor’s Note: The U.S. Transhumanist Party features this article by our member Gareth John, originally published by IEET on January 13, 2016, as part of our ongoing integration with the Transhuman Party. This article raises various perspectives about the idea of technological Singularity and asks readers to offer their perspectives regarding how plausible the Singularity narrative, especially as articulated by Ray Kurzweil, is. The U.S. Transhumanist Party welcomes such deliberations and assessments of where technological progress may be taking our species and how rapid such progress might be – as well as how subject to human influence and socio-cultural factors technological progress is, and whether a technological Singularity would be characterized predominantly by benefits or by risks to humankind. The article by Mr. John is a valuable contribution to the consideration of such important questions.

~ Gennady Stolyarov II, Chairman, United States Transhumanist Party, January 2, 2019


In my continued striving to disprove the theorem that there’s no such thing as a stupid question, I shall now proceed to ask one. What’s the consensus on Ray Kurzweil’s position concerning the coming Singularity? [1] Do you as transhumanists accept his premise and timeline, or do you feel that a) it’s a fiction, or b) it’s a reality but not one that’s going to arrive anytime soon? Is it as inevitable as Kurzweil suggests, or is it simply millenarian daydreaming in line with the coming Rapture?

According to Wikipedia (yes, I know, but I’m learning as I go along), the first use of the term ‘singularity’ in this context was made by Stanislav Ulam in his 1958 obituary for John von Neumann, in which he mentioned a conversation with von Neumann about the ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’. [2] The term was popularised by mathematician, computer scientist, and science-fiction author Vernor Vinge, who argues that artificial intelligence, human biological advancement, or brain-computer interfaces could be possible causes of the singularity. [3] Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain. [4]

Kurzweil predicts that the singularity will occur around 2045, [5] whereas Vinge predicts some time before 2030. [6] In 2012, Stuart Armstrong and Kaj Sotala published a study of AGI predictions by both experts and non-experts and found a wide range of predicted dates, with a median value of 2040. [7] Discussing the level of uncertainty in AGI estimates, Armstrong stated at the 2012 Singularity Summit: ‘It’s not fully formalized, but my current 80% estimate is something like five to 100 years.’ [8]

Speaking for myself, and despite the above, I’m not at all convinced that a Singularity will occur, i.e., one singular event that effectively changes history forever from that precise moment forward. From my (admittedly limited) research on the matter, it seems far more realistic to think of the future in terms of incremental steps made along the way, leading up to major and diverse changes (plural) in the way we as human beings – and indeed all sentient life – live. But try as I might, I cannot get my head around all of these occurring in a near-simultaneous Big Bang.

Surely we have plenty of evidence already that the opposite will most likely be the case? Scientists have been working on AI, nanotechnology, genetic engineering, robotics, and the like for many years, and I see no reason to conclude that this won’t remain the case in the years to come. Small steps leading to big changes, maybe, but perhaps not one giant leap for mankind in a singular convergence of emerging technologies?

Let’s be straight here: I’m not having a go at Kurzweil or his ideas – the man’s clearly a visionary (at least from my standpoint) and leagues ahead when it comes to intelligence and foresight. I’m simply interested as to what extent his ideas are accepted by the wider transhumanist movement.

There are notable critics (again, leagues ahead of me in critically engaging with the subject) who argue against the idea of the Singularity. Nathan Pensky, writing in 2014, says:

It’s no doubt true that the speculative inquiry that informed Kurzweil’s creation of the Singularity also informed his prodigious accomplishment in the invention of new tech. But just because a guy is smart doesn’t mean he’s always right. The Singularity makes for great science-fiction, but not much else. [9]

Other well-informed critics have also dismissed Kurzweil’s central premise, among them Professor Andrew Blake, managing director of Microsoft Research Cambridge; Jaron Lanier; Paul Allen; Peter Murray; Jeff Hawkins; Gordon Moore; Jared Diamond; and Steven Pinker, to name but a few. Even Noam Chomsky has waded in to categorically deny the possibility of such an event. Pinker writes:

There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible… Sheer processing power is not a pixie dust that magically solves all your problems. [10]

There are, of course, many more critics, but there are also many supporters, and Kurzweil rarely lets a criticism pass without a fierce rebuttal. Indeed, new interdisciplinary academic fields have been founded partly on the presupposition that the Singularity will occur in line with Kurzweil’s predictions (along with other phenomena that pose the possibility of existential risk). Examples include Nick Bostrom’s Future of Humanity Institute at Oxford University and the Centre for the Study of Existential Risk at Cambridge.

Given the above, and returning to my original question: how do transhumanists, taken as a whole, rate the possibility of an imminent Singularity as described by Kurzweil? Good science or good science fiction? For Kurzweil, it is the pace of change – exponential growth – that will result in a runaway effect: an intelligence explosion, where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence. [11] The only way for us to participate in such an event would be by merging with the intelligent machines we are creating.
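The arithmetic behind that runaway intuition is easy to sketch. Below is a minimal toy model – my own illustration, not Kurzweil’s – which assumes each machine generation designs a successor 50% smarter, and that design time shrinks in proportion to the designer’s intelligence; both numbers are arbitrary assumptions:

```python
# Toy model of an "intelligence explosion". All parameters are
# illustrative assumptions, not figures taken from Kurzweil.

intelligence = 1.0          # human-level baseline
years_elapsed = 0.0
improvement_per_gen = 1.5   # each generation is 50% smarter (assumed)
base_design_time = 2.0      # years a human-level designer would need (assumed)

for generation in range(1, 31):
    years_elapsed += base_design_time / intelligence  # smarter => faster design
    intelligence *= improvement_per_gen
    print(f"gen {generation:2d}: {intelligence:12.1f}x human, "
          f"year {years_elapsed:5.2f}")

# Capability grows geometrically while the gap between generations shrinks
# geometrically, so the total elapsed time converges to a finite horizon
# (6 years with these numbers) - the "runaway" that motivates the term.
```

Under those toy assumptions, capability blows past any fixed level before year six; soften either assumption, though, and the explosion flattens into a long, incremental climb – which is precisely the disagreement between Kurzweil and his critics.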

And I guess this is what is hard for me to fathom. We are creating these machines with all our mixed-up, blinkered, prejudicial, oppositional minds, aims, and values. We as human beings, however intelligent, are an absolutely necessary part of the picture that I think Kurzweil sometimes underestimates. I’m more inclined to agree with Jamais Cascio when he says:

I don’t think that a Singularity would be visible to those going through one. Even the most disruptive changes are not universally or immediately distributed, and late followers learn from the dilemmas of those who had initially encountered the disruptive change. [12]

So I’d love to know what you think. Are you in Kurzweil’s corner, waiting for that singular moment in 2045 when the world as we know it stops for an instant… and then restarts in a glorious new utopian future? Or do you agree with Kurzweil but harbour serious fears that the whole ‘glorious new future’ may not be on the cards, and that we’ll all be obliterated by the newborn AGI’s capriciousness or by gray goo? Or are you a moderate, maintaining that a Singularity, while almost certain to occur, will pass unnoticed by those waiting for it? Or do you think it’s all so much baloney?

Whatever your position, I’d really value your input and would love to hear your views on the subject.

NOTES

1. As stated below, the term Singularity was in use before Kurzweil’s appropriation of it. But as shorthand I’ll refer to his interpretation and predictions relating to it throughout this article.

2. Ulam, S, 1958, ‘Tribute to John von Neumann’, Bulletin of the American Mathematical Society, 64, #3, part 2, p. 5

3. Vinge, V, 2013, ‘Vernor Vinge on the Singularity’, San Diego State University. Retrieved Nov 2015

4. Carvalko, J, 2012, ‘The Techno-human Shell – A Jump in the Evolutionary Gap’ (Mechanicsburg: Sunbury Press)

5. Kurzweil, R, 2005, ‘The Singularity is Near’ (London: Penguin Group)

6. Vinge, V, 1993, ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, originally in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129

7. Armstrong, S, and Sotala, K, 2012, ‘How We’re Predicting AI – Or Failing To’, in Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster (Pilsen: University of West Bohemia). https://intelligence.org/files/PredictingAI.pdf

8. Armstrong, S, 2012, ‘How We’re Predicting AI’, from the 2012 Singularity Summit

9. Pensky, N, 2014, article taken from Pando. https://goo.gl/LpR3eF

10. Pinker, S, 2008, IEEE Spectrum: ‘Tech Luminaries Address Singularity’. http://goo.gl/ujQlyI

11. Wikipedia, ‘Technological Singularity’. Retrieved Nov 2015. https://goo.gl/nFzi2y

12. Cascio, J, ‘New FC: Singularity Scenarios’ article taken from Open the Future. http://goo.gl/dZptO3

Gareth John lives in Cardiff, UK and is a trainee social researcher with an interest in the intersection of emerging technologies with behavioural and mental health. He has an MA in Buddhist Studies from the University of Bristol. He is also a member of the U.S. Transhumanist Party / Transhuman Party. 


HISTORICAL COMMENTS

Gareth,

Thank you for the thoughtful article. I’m emailing to comment on the blog post, though I can’t tell when it was written. You say that you don’t believe the singularity will necessarily occur the way Kurzweil envisions, but it seems like you slightly mischaracterize his definition of the term.

I don’t believe that Kurzweil ever meant to suggest that the singularity will simply consist of one single event that will change everything. Rather, I believe he means that the singularity is the point in time past which no person can make any prediction – the point when a $1,000 computer becomes smarter than the entire human race – much like how the event horizon of a black hole prevents anyone from seeing past it.

Given that Kurzweil’s definition isn’t an arbitrary claim that everything changes all at once, I don’t see how anyone can really argue about whether the singularity will happen. After all, at some point in the future, even if it happens much more slowly than Kurzweil predicts, a $1,000 computer will eventually become smarter than every human. When this happens, I think it’s fair to say no one is capable of predicting the future of humanity past that point. Would you disagree with this?

Even more important, although many of Kurzweil’s predictions about when certain products will become commercially available to the general public have not come true, all the evidence I’ve seen about the actual trend of the law of accelerating returns seems to be exactly spot on. Maybe this trend will slow down, or stop, but it hasn’t yet. Until it does, I think the law of accelerating returns, and Kurzweil’s singularity, deserve the benefit of the doubt.

[…]

Thanks,

Rich Casada


Hi Rich,
Thanks for the comments. The post was written back in 2015 for IEET, and represented a genuine ask from the transhumanist community. At that time my priority was to learn what I could, where I could, and not a lot’s changed for me since – I’m still learning!

I’m not sure I agree that Kurzweil’s definition isn’t a claim that ‘everything changes at once’. In The Singularity is Near, he states:

“So we will be producing about 10^26 to 10^29 cps of nonbiological computation per year in the early 2030s. This is roughly equal to our estimate for the capacity of all living biological human intelligence … This state of computation in the early 2030s will not represent the Singularity, however, because it does not yet correspond to a profound expansion of our intelligence. By the mid-2040s, however, that one thousand dollars’ worth of computation will be equal to 10^26 cps, so the intelligence created per year (at a total cost of about $10^12) will be about one billion times more powerful than all human intelligence today. That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.” (Kurzweil 2005, pp. 135-36, italics mine)

Kurzweil specifically defines what the Singularity is and isn’t (a profound and disruptive transformation in human capability) and gives a more-or-less precise prediction of when it will occur. A consequence of that may be that we will not ‘be able to make any prediction past that point in time’; however, I don’t believe this is the main thrust of Kurzweil’s argument.
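As an aside, the arithmetic in that passage is at least internally consistent. Here is a quick sanity check, assuming Kurzweil’s own round figures of roughly 10^16 cps per human brain and 10^10 humans – which is where his 10^26 cps total for all biological human intelligence comes from:

```python
# Sanity check of the Kurzweil passage quoted above.
# Assumed inputs: ~10^16 cps per brain, ~10^10 humans (his round figures).
cps_per_brain = 1e16
num_humans = 1e10
all_human_cps = cps_per_brain * num_humans           # ~1e26 cps

cps_per_1000_dollars = 1e26   # mid-2040s claim, from the quote
yearly_spend_dollars = 1e12   # "total cost of about $10^12", from the quote
yearly_nonbio_cps = (yearly_spend_dollars / 1_000) * cps_per_1000_dollars

print(yearly_nonbio_cps / all_human_cps)  # 1e+09, i.e., "one billion times"
```

So the ‘one billion times’ figure follows directly from his premises; the argument stands or falls on whether those premises hold.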

I do, however, agree with what you appear to be postulating (correct me if I’m wrong): that a better definition of a Singularity might indeed simply be ‘when no person can make any prediction past that point in time’. And, like you, I don’t believe it will be tied to any set point in time. We may be living through a singularity as we speak. There may be many singularities – although, worth noting again, Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence, as opposed to other technologies, writing, for example, that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine.” (Kurzweil 2005, p. 9)

So, having said all that, and in answer to your question of whether there is a point beyond which no one is capable of predicting the future of humanity: I’m not sure. I guess none of us can really be sure until, or unless, it happens.

This is why I believe having the conversation about the ethical implications of these new technologies now is so important. Post-singularity might simply be too late.

Gareth

We Must Unite for an International Ban on AI Weaponry; A Real Solution to Survive the Singularity Along with What Lies Beyond – Article by Bobby Ridge


Bobby Ridge


I urge the United States Transhumanist Party to support an international ban on the use of autonomous weapons and to support government subsidies and alternative funding for AI-safety research – funding very much in the spirit of Elon Musk’s efforts. As Max Tegmark recently noted, Elon Musk’s $10M donation to the Future of Life Institute “helped put out 37 grants to run a global research program aimed at keeping AI beneficial to humanity.”

Biologists fought hard to pass the international ban on biological weapons, so that biology would be known as it is today: a science that cures diseases, ends suffering, and makes sense of the complexity of living organisms. Similarly, the community of chemists united and achieved an international ban on the use of chemical weapons. Scientists conducting AI research should follow their predecessors’ wisdom and unite to achieve an international ban on autonomous weapons! It is sad to say that we are already losing the fight for such a ban. The Kalashnikov Bureau weapons manufacturing company has announced an unmanned ground vehicle (UGV) which, it claims, field tests have already shown to perform at better-than-human level. China has begun field-testing cruise missiles with AI and autonomous capabilities, and a few companies are getting very close to having AI autopilots that control the flight envelope at hypersonic speeds. (Amir Husain: “The Sentient Machine: The Coming Age of Artificial Intelligence“)

Even though in 2015 and 2016 the US government spent only $1.1 billion and $1.2 billion, respectively, on AI research, according to Reuters, “The Pentagon’s fiscal 2017 budget request will include $12 billion to $15 billion to fund war gaming, experimentation and the demonstration of new technologies aimed at ensuring a continued military edge over China and Russia.” While these autonomous weapons are already being developed, the UN Convention on Certain Conventional Weapons (CCW) could not even agree on a definition of autonomous weapons after four years of meetings, despite expressing dire concern about their spread. It decided to put off the conversation for another year, but at the pace technology is advancing, we may not have another year in which to postpone definitions and solutions. Our species must advocate for and emulate the 23 Asilomar AI Principles, which over 1,000 expert AI researchers from all around the globe have signed.

In only the last decade or so, there has been a combined investment of trillions of dollars in an AI race by the private sector – Google, Microsoft, Facebook, Amazon, Alibaba, Baidu, and other tech titans – along with whole governments, such as China, South Korea, Russia, Canada, and, only recently, the USA. The investments are mainly directed towards making AI more powerful, not safer! Yes, the intelligence and sentience of an artificial superintelligence (ASI) will be inherently uncontrollable. As a metaphor, humans controlling the development of ASI will be like an ant trying to control the human construction of a NASA space station on top of its colony. Before we get to that point – at which, hopefully, this issue will be solved by a brain-computer interface – we can get close to making the development of artificial general intelligence (AGI) and weak ASI safe by steering AI research efforts towards solving the alignment problem, the control problem, and other open problems in the field. This can be done with proper funding from the tech titans and governments.

“AI will be the new electricity. Electricity has changed every industry, and AI will do the same, but with even more of an impact.” – Andrew Ng

“Machine learning and AI will empower and improve every business, every government organization, philanthropy, basically there is no institution in the world that cannot be improved by machine learning.” – Jeff Bezos

ANI (artificial narrow intelligence) and AGI (artificial general intelligence) by themselves have the potential to alleviate an incomprehensible amount of suffering and disease around the world, and in the next few decades the hammer of biotechnology and nanotechnology will likely come down to cure all diseases. If the trends in information technologies continue to accelerate, which they certainly will, then in the next decade or so an ASI will be developed. This God-like intelligence will migrate into space for resources and will scale to an intragalactic size. To reiterate old news: to keep up with this new being, we are likely to connect our brains to it via brain-computer interface.

“The last time something so important like this has happened was maybe 4.2 billion years ago, when life was invented.” – Juergen Schmidhuber

Due to the independent assortment of chromosomes during meiosis, you had roughly a 1 in 70 trillion chance of existing. Now multiply that by the probability of your particular pattern of crossing over – a process with orders of magnitude more possible outcomes than 70 trillion. Then multiply by the probability of random fertilization (the chances of your parents meeting and copulating). Then multiply whatever that number is by similar probabilities for all our ancestors across hundreds of millions of years – ancestors that also survived asteroid impacts, plagues, famine, predators, and other perils. You may be feeling pretty lucky already, but on top of all of that, science and technology are about to prevent and cure any disease we may come across, and we will see this new intelligence emerge in laboratories all around the world. Any linguistic description of how spectacularly LUCKY we are to be alive right now, experiencing this scientific revolution, will fall abysmally short of how truly lucky we all are. AI experts, Transhumanists, Singularitarians, and all others who understand this revolution have an obligation to provide every person with an educated option to pursue, if they desire, indefinite longevity, augmentation into superintelligence, and whatever lies beyond the Singularity 10-30 years from now.
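For the curious, that 70-trillion figure falls out of independent assortment alone: each parent can shuffle 23 chromosome pairs into 2^23 ≈ 8.4 million distinct gametes, and a child pairs one gamete from each parent. A quick check:

```python
# Independent assortment only: 23 chromosome pairs per parent, each pair
# contributing one of two chromosomes to a gamete -> 2**23 possible gametes.
gametes_per_parent = 2 ** 23                 # 8,388,608
possible_children = gametes_per_parent ** 2  # one gamete from each parent
print(f"{possible_children:,}")              # 70,368,744,177,664 (~70 trillion)
```

Crossing over and random fertilization, as noted above, only push the odds further out.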

There are many other potential sources of existential threats, such as synthetic biology, nuclear war, the climate crisis, molecular nanotechnology, totalitarianism-enabling technologies, supervolcanoes, asteroids, biowarfare, human modification, geoengineering, etc. A mistake in only one of these areas could cause our species to go extinct, which is the definition of an existential risk. Science created some of these existential risks, and only Science will prevent them. Philosophy, religion, complementary and alternative medicine, and any other proposed scientific demarcation will not solve these existential risks, nor the myriad of individual suffering and death that occurs daily. With these recent advancements, Carl Sagan’s priceless wisdom has become even more palpable than before: “We have arranged a society based on Science and technology, in which no one understands anything about Science and technology, and this combustible mixture of ignorance and power sooner or later is going to blow up in our faces.”

The best chance we have of surviving the next 30 years, and whatever lies beyond the Singularity, is by transitioning to a Science-Based Species. A Science-Based Species is modeled on Dr. Steven Novella’s recent advocacy, which calls for a transition from Evidence-Based Medicine to Science-Based Medicine. Dr. Novella and his team understand that “the best method for determining which interventions and health products are safe and effective is, without question, good science.” Why claim this only for medicine? I propose a K-12 educational system that teaches the PROCESS of Science. Only when the majority of ~8 billion people are scientifically literate, and when public reason is guided by non-controversial scientific results and methods, will we be capable of managing these scientific tools – tools that could take our jobs, cause incomprehensible levels of suffering, and kill us all; tools that are currently in our possession; and tools that continue to become more powerful and to democratize, dematerialize, and demonetize at an exponential rate. I cannot stress enough that ‘scientifically literate’ means that people are adept at utilizing the PROCESS of Science.

Bobby Ridge is the Secretary-Treasurer of the United States Transhumanist Party. Read more about him here.

References

Tegmark, M. (2015). Elon Musk donates $10M to keep AI beneficial. Futureoflife.org. https://futureoflife.org/2015/10/12/elon-musk-donates-10m-to-keep-ai-beneficial/

Husain, A. (2018). Amir Husain: “The Sentient Machine: The Coming Age of Artificial Intelligence” | Talks at Google. Talks at Google. Youtube.com. https://www.youtube.com/watch?v=JcC5OV_oA1s&t=763s

Tegmark, M. (2017). Max Tegmark: “Life 3.0: Being Human in the Age of AI” | Talks at Google. Talks at Google. Youtube.com. https://www.youtube.com/watch?v=oYmKOgeoOz4&t=1208s

Conn, A. (2015). Pentagon Seeks $12-$15 Billion for AI Weapons Research. Futureoflife.org. https://futureoflife.org/2015/12/15/pentagon-seeks-12-15-billion-for-ai-weapons-research/

BAI 2017 conference. (2017). ASILOMAR AI PRINCIPLES. Futureoflife.org. https://futureoflife.org/ai-principles/

Ng, A. (2017). Andrew Ng – The State of Artificial Intelligence. The Artificial Intelligence Channel. Youtube.com. https://www.youtube.com/watch?v=NKpuX_yzdYs

Bezos, J. (2017). Gala2017: Jeff Bezos Fireside Chat. Internet Association. Youtube.com. https://www.youtube.com/watch?v=LqL3tyCQ1yY

Schmidhuber, J. (2017). True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo. TEDx Talks. Youtube.com. https://www.youtube.com/watch?v=-Y7PLaxXUrs

Kurzweil, R. (2001). The Law of Accelerating Returns. Kurzweil Accelerating Intelligence. Kurzweilai.net. http://www.kurzweilai.net/the-law-of-accelerating-returns

Sagan, C. (1996). Remembering Carl Sagan. Charlierose.com. https://charlierose.com/videos/2625

Sbmadmin. (2008). Announcing the Science-Based Medicine Blog. Sciencebasedmedicine.org. https://sciencebasedmedicine.org/hello-world/