We Must Unite for an International Ban on AI Weaponry; A Real Solution to Survive the Singularity Along with What Lies Beyond – Article by Bobby Ridge

Bobby Ridge


I urge the United States Transhumanist Party to support an international ban on the use of autonomous weapons, and to support government subsidies and alternative funding for AI-safety research – funding along the lines of Elon Musk’s efforts. As Max Tegmark recently noted, Musk’s $10M donation to the Future of Life Institute helped fund 37 grants in a global research program aimed at keeping AI beneficial to humanity.

Biologists fought hard to pass the international ban on biological weapons, so that biology would be known as it is today: a science that cures diseases, ends suffering, and makes sense of the complexity of living organisms. The community of chemists likewise united and achieved an international ban on the use of chemical weapons. Scientists conducting AI research should follow their predecessors’ wisdom and unite to achieve an international ban on autonomous weapons. Sadly, we are already losing this fight. The Kalashnikov Bureau announced that it has developed an unmanned ground vehicle (UGV) which, in field tests, has reportedly outperformed human operators. China has begun field-testing cruise missiles with autonomous AI capabilities, and a few companies are close to having AI autopilots control the flight envelope at hypersonic speeds. (Amir Husain: “The Sentient Machine: The Coming Age of Artificial Intelligence“)

Even though in 2015 and 2016 the US government spent only $1.1 billion and $1.2 billion on AI research, respectively, according to Reuters, “The Pentagon’s fiscal 2017 budget request will include $12 billion to $15 billion to fund war gaming, experimentation and the demonstration of new technologies aimed at ensuring a continued military edge over China and Russia.” While these autonomous weapons are already being developed, the UN Convention on Certain Conventional Weapons (CCW) could not even agree on a definition of autonomous weapons after four years of meetings, despite expressing dire concern about their spread. The delegates decided to put the conversation off for another year, but at the pace technology is advancing, we may not have another year to spare. Our species must advocate and embody the 23 Asilomar AI Principles, which over 1,000 expert AI researchers from around the globe have signed.

In only the last decade or so, trillions of dollars have been invested in an AI race by the private sector – Google, Microsoft, Facebook, Amazon, Alibaba, Baidu, and other tech titans – and by governments, including China, South Korea, Russia, Canada, and, only recently, the USA. These investments are aimed mainly at making AI more powerful, not safer! Yes, the intelligence and sentience of an artificial superintelligence (ASI) will be inherently uncontrollable: humans trying to control the development of an ASI would be like ants trying to control humans building a NASA space station on top of their colony. Before we reach that point – at which, hopefully, the issue will be addressed by brain-computer interfaces – we can come close to making the development of artificial general intelligence (AGI) and weak ASI safe by steering AI research toward the alignment problem, the control problem, and the field’s other open problems. This can be done with proper funding from the tech titans and governments.

“AI will be the new electricity. Electricity has changed every industry, and AI will do the same, but with even more of an impact.” – Andrew Ng

“Machine learning and AI will empower and improve every business, every government organization, philanthropy, basically there is no institution in the world that cannot be improved by machine learning.” – Jeff Bezos

ANI (artificial narrow intelligence) and AGI (artificial general intelligence) by themselves have the potential to alleviate an incomprehensible amount of suffering and disease around the world, and in the next few decades biotechnology and nanotechnology will likely cure most diseases. If the trends in information technology continue to accelerate, which they certainly will, then in the next decade or so an ASI will be developed. This God-like intelligence will migrate into space for resources and scale to an intragalactic size. To reiterate old news: to keep up with this new being, we are likely to connect our brains to it via brain-computer interfaces.

“The last time something so important like this has happened was maybe 4.2 billion years ago, when life was invented.” – Juergen Schmidhuber

Due to the independent assortment of chromosomes during meiosis, you had roughly a 1-in-70-trillion chance of existing. Now multiply that by the probability introduced by crossing over, a process with orders of magnitude more possible outcomes than 70 trillion. Then multiply by the probability of random fertilization (the chance of your parents meeting and copulating). Then multiply by similar probabilities for all your ancestors over hundreds of millions of years – ancestors who also survived asteroid impacts, plagues, famines, predators, and other perils. You may already be feeling lucky, but on top of all of that, science and technology are about to prevent and cure the diseases we may come across, and we will see this new intelligence emerge in laboratories all around the world. Any linguistic description of how spectacularly LUCKY we are to be alive right now, experiencing this scientific revolution, will fall abysmally short of how lucky we truly are. AI experts, Transhumanists, Singularitarians, and all others who understand this revolution have an obligation to give every person an educated option to pursue, should they desire to take part in indefinite longevity, augmentation into superintelligence, and whatever lies beyond the Singularity 10-30 years from now.
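The 70-trillion figure above can be recovered from independent assortment alone: humans have 23 chromosome pairs, each gamete receives one chromosome from each pair, so each parent can produce 2^23 distinct gametes, and combining one gamete from each parent gives 2^46 ≈ 70 trillion possibilities. A minimal back-of-the-envelope check (ignoring crossing over, as the paragraph notes):

```python
# Each parent has 23 chromosome pairs; a gamete takes one
# chromosome from each pair, giving 2**23 possible gametes.
gametes_per_parent = 2 ** 23          # 8,388,608

# One gamete from each parent combine at fertilization.
combinations = gametes_per_parent ** 2  # 2**46

print(f"{gametes_per_parent:,} possible gametes per parent")
print(f"{combinations:,} combinations")  # 70,368,744,177,664 ≈ 70 trillion
```

Crossing over and the odds of any particular fertilization only shrink the probability further, which is the paragraph’s point.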

There are many other potential sources of existential threats: synthetic biology, nuclear war, the climate crisis, molecular nanotechnology, totalitarianism-enabling technologies, supervolcanoes, asteroids, biowarfare, reckless human modification, geoengineering, and more. A mistake in only one of these areas could cause our species to go extinct – the definition of an existential risk. Science created some of these existential risks, and only Science can prevent them. Philosophy, religion, complementary and alternative medicine, and other proposed demarcations from science will not solve these existential risks, nor the myriad individual sufferings and deaths that occur daily. With these recent advancements, Carl Sagan’s priceless wisdom has become even more palpable than before: “We have arranged a society based on science and technology, in which no one understands anything about science and technology, and this combustible mixture of ignorance and power, sooner or later, is going to blow up in our faces.” The best chance we have of surviving the next 30 years and whatever lies beyond the Singularity is to transition to a Science-Based Species. A Science-Based Species follows the spirit of Dr. Steven Novella’s recent advocacy, which calls for a transition from Evidence-Based Medicine to Science-Based Medicine. Dr. Novella and his team understand that “the best method for determining which interventions and health products are safe and effective is, without question, good science.” Why claim this only for medicine? I propose a K-12 educational system that teaches the PROCESS of Science.
Only when the majority of ~8 billion people are scientifically literate, and public reason is guided by non-controversial scientific results and methods, will we be capable of managing these scientific tools – tools that could take our jobs, cause incomprehensible levels of suffering, and kill us all; tools that are already in our possession; and tools that continue to grow more powerful and to democratize, dematerialize, and demonetize at an exponential rate. I cannot stress enough that “scientifically literate” means that people are adept at using the PROCESS of Science.

Bobby Ridge is the Secretary-Treasurer of the United States Transhumanist Party. Read more about him here.

References

Tegmark, M. (2015). Elon Musk donates $10M to keep AI beneficial. Futureoflife.org. https://futureoflife.org/2015/10/12/elon-musk-donates-10m-to-keep-ai-beneficial/

Husain, A. (2018). Amir Husain: “The Sentient Machine: The Coming Age of Artificial Intelligence” | Talks at Google. Talks at Google. Youtube.com. https://www.youtube.com/watch?v=JcC5OV_oA1s&t=763s

Tegmark, M. (2017). Max Tegmark: “Life 3.0: Being Human in the Age of AI” | Talks at Google. Talks at Google. Youtube.com. https://www.youtube.com/watch?v=oYmKOgeoOz4&t=1208s

Conn, A. (2015). Pentagon Seeks $12 -$15 Billion for AI Weapons Research. Futureoflife.org. https://futureoflife.org/2015/12/15/pentagon-seeks-12-15-billion-for-ai-weapons-research/
BAI 2017 Conference. (2017). Asilomar AI Principles. Futureoflife.org. https://futureoflife.org/ai-principles/

Ng, A. (2017). Andrew Ng – The State of Artificial Intelligence. The Artificial Intelligence Channel. Youtube.com. https://www.youtube.com/watch?v=NKpuX_yzdYs

Bezos, J. (2017). Gala2017: Jeff Bezos Fireside Chat. Internet Association. Youtube.com. https://www.youtube.com/watch?v=LqL3tyCQ1yY

Schmidhuber, J. (2017). True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo. TEDx Talks. Youtube.com. https://www.youtube.com/watch?v=-Y7PLaxXUrs

Kurzweil, R. (2001). The Law of Accelerating Returns. Kurzweil Accelerating Intelligence. Kurzweilai.net. http://www.kurzweilai.net/the-law-of-accelerating-returns

Sagan, C. (1996). Remembering Carl Sagan. Charlierose.com. https://charlierose.com/videos/2625

Sbmadmin. (2008). Announcing the Science-Based Medicine Blog. Sciencebasedmedicine.org. https://sciencebasedmedicine.org/hello-world/
