
We Must Unite for an International Ban on AI Weaponry; A Real Solution to Survive the Singularity Along with What Lies Beyond – Article by Bobby Ridge


Bobby Ridge


I urge the United States Transhumanist Party to support an international ban on the use of autonomous weapons, and to support subsidies from governments and alternative funding for research into AI safety – funding very similar to Elon Musk’s efforts. As Max Tegmark recently noted, Elon Musk’s $10M donation to the Future of Life Institute helped put out 37 grants to run a global research program aimed at keeping AI beneficial to humanity.

Biologists fought hard to pass the international ban on biological weapons, so that biology would be known as it is today: a science that cures diseases, ends suffering, and makes sense of the complexity of living organisms. Similarly, the community of chemists united and achieved an international ban on the use of chemical weapons. Scientists conducting AI research should follow their predecessors’ wisdom and unite to achieve an international ban on autonomous weapons! Sadly, we are already losing this fight. The Kalashnikov Bureau weapons manufacturing company announced that it has invented an unmanned ground vehicle (UGV), which it claims has already demonstrated better-than-human-level performance in field tests. China recently began field-testing cruise missiles with AI and autonomous capabilities, and a few companies are getting very close to having AI autopilots control the flight envelope at hypersonic speeds. (Amir Husain: “The Sentient Machine: The Coming Age of Artificial Intelligence“)

Even though in 2015 and 2016 the US government spent only $1.1 billion and $1.2 billion, respectively, on AI research, according to Reuters, “The Pentagon’s fiscal 2017 budget request will include $12 billion to $15 billion to fund war gaming, experimentation and the demonstration of new technologies aimed at ensuring a continued military edge over China and Russia.” While these autonomous weapons are already being developed, the UN Convention on Certain Conventional Weapons (CCW) could not even agree on a definition of autonomous weapons after four years of meetings, despite expressing dire concern about their spread. It decided to put off the conversation for another year, but at the pace technology is advancing, we may not have another year to postpone a definition and solutions. Our species must advocate and emulate the 23 Asilomar AI Principles, which over 1,000 expert AI researchers from around the globe have signed.

In only the last decade or so, trillions of dollars of combined investment have poured into an AI race, from private-sector titans such as Google, Microsoft, Facebook, Amazon, Alibaba, and Baidu, and from whole governments, such as China, South Korea, Russia, Canada, and, only recently, the USA. These investments are aimed mainly at making AI more powerful, not safer! Yes, the intelligence and sentience of an artificial superintelligence (ASI) will be inherently uncontrollable: humans trying to control the development of ASI would be like ants trying to control the human development of a NASA space station on top of their colony. Before we reach that point – at which, hopefully, the issue will be solved by brain-computer interfaces – we can come close to making the development of artificial general intelligence (AGI) and weak ASI safe by steering AI research toward solving the alignment problem, the control problem, and the field’s other open problems. This can be done with proper funding from the tech titans and governments.

“AI will be the new electricity. Electricity has changed every industry and AI will do the same but even more of an impact.” – Andrew Ng

“Machine learning and AI will empower and improve every business, every government organization, philanthropy, basically there is no institution in the world that cannot be improved by machine learning.” – Jeff Bezos

ANI (artificial narrow intelligence) and AGI (artificial general intelligence) by themselves have the potential to alleviate an incomprehensible amount of suffering and disease around the world, and in the next few decades, the hammer of biotechnology and nanotechnology will likely come down to cure all diseases. If the trends of information technologies continue to accelerate, which they certainly will, then in the next decade or so an ASI will be developed. This God-like intelligence will migrate into space for resources and scale to an intragalactic size. To reiterate old news: to keep up with this new being, we are likely to connect our brains to it via brain-computer interfaces.

“The last time something so important like this has happened was maybe 4.2 billion-years-ago, when life was invented.” – Juergen Schmidhuber

Due to the independent assortment of chromosomes during meiosis, you had roughly a 1-in-70-trillion chance of existing. Now multiply that by the probability of crossing over – a process with orders of magnitude more possible outcomes than 70 trillion. Then multiply by the probability of random fertilization (the chances of your parents meeting and copulating). Then multiply whatever that number is by similar probabilities for all your ancestors across hundreds of millions of years – ancestors who also survived asteroid impacts, plagues, famines, predators, and other perils. You may be feeling pretty lucky already, but on top of all that, science and technology are about to prevent and cure any disease we may come across, and we will see this new intelligence emerge in laboratories all around the world. Any attempt at a linguistic description of how spectacularly LUCKY we are to be alive right now, experiencing this scientific revolution, will fall abysmally short of how truly lucky we all are. AI experts, Transhumanists, Singularitarians, and all others who understand this revolution have an obligation to provide every person with an educated option to pursue, if they desire, indefinite longevity, augmentation into superintelligence, and whatever lies beyond the Singularity 10-30 years from now.
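The 70-trillion figure can be reproduced from the combinatorics of independent assortment alone. Each parent has 23 chromosome pairs, and each gamete receives one chromosome from each pair, so a back-of-the-envelope sketch (ignoring crossing over and everything else the paragraph multiplies in) looks like:

```python
# Independent assortment: each gamete receives one chromosome from each
# of the 23 pairs, chosen independently, so each parent can produce
# 2^23 genetically distinct gametes from assortment alone.
combinations_per_parent = 2 ** 23          # 8,388,608 possible gametes

# A particular child requires one specific gamete from each parent.
distinct_children = combinations_per_parent ** 2

print(f"{distinct_children:,}")            # 70,368,744,177,664 ≈ 70 trillion
```

Crossing over shuffles segments within each chromosome as well, which is why the true number of possible offspring is far larger still.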

There are many other potential sources of existential risk, such as synthetic biology, nuclear war, the climate crisis, molecular nanotechnology, totalitarianism-enabling technologies, supervolcanoes, asteroids, biowarfare, human modification, geoengineering, etc. A mistake in only one of these areas could cause our species to go extinct – which is the definition of an existential risk. Science created some of these existential risks, and only Science will prevent them. Philosophy, religion, complementary and alternative medicine, and other practices outside the demarcation of science will not solve these existential risks, nor the myriad of individual suffering and death that occurs daily. Carl Sagan’s priceless wisdom has become even more palpable than before: “we have arranged a society based on science and technology, in which no one understands anything about science and technology, and this combustible mixture of ignorance and power sooner or later is going to blow up in our faces.” The best chance we have of surviving the next 30 years and whatever lies beyond the Singularity is by transitioning to a Science-Based Species. The term echoes Dr. Steven Novella’s recent advocacy for a transition from Evidence-Based Medicine to Science-Based Medicine. Dr. Novella and his team understand that “the best method for determining which interventions and health products are safe and effective is, without question, good science.” Why arbitrarily claim this only for medicine? I propose a K-12 educational system that teaches the PROCESS of Science.
Only when the majority of ~8 billion people are scientifically literate, and when public reason is guided by non-controversial scientific results and non-controversial methods, will we be capable of managing these scientific tools – tools that could take our jobs, cause incomprehensible levels of suffering, and kill us all; tools that are already in our possession; and tools that continue to become more powerful and to democratize, dematerialize, and demonetize at an exponential rate. I cannot stress enough that ‘scientifically literate’ means that people are adept at utilizing the PROCESS of Science.

Bobby Ridge is the Secretary-Treasurer of the United States Transhumanist Party. Read more about him here.

References

Tegmark, M. (2015). Elon Musk donates $10M to keep AI beneficial. Futureoflife.org. https://futureoflife.org/2015/10/12/elon-musk-donates-10m-to-keep-ai-beneficial/

Husain, A. (2018). Amir Husain: “The Sentient Machine: The Coming Age of Artificial Intelligence” | Talks at Google. Talks at Google. Youtube.com. https://www.youtube.com/watch?v=JcC5OV_oA1s&t=763s

Tegmark, M. (2017). Max Tegmark: “Life 3.0: Being Human in the Age of AI” | Talks at Google. Talks at Google. Youtube.com. https://www.youtube.com/watch?v=oYmKOgeoOz4&t=1208s

Conn, A. (2015). Pentagon Seeks $12-$15 Billion for AI Weapons Research. Futureoflife.org. https://futureoflife.org/2015/12/15/pentagon-seeks-12-15-billion-for-ai-weapons-research/

BAI 2017 Conference. (2017). Asilomar AI Principles. Futureoflife.org. https://futureoflife.org/ai-principles/

Ng, A. (2017). Andrew Ng – The State of Artificial Intelligence. The Artificial Intelligence Channel. Youtube.com. https://www.youtube.com/watch?v=NKpuX_yzdYs

Bezos, J. (2017). Gala2017: Jeff Bezos Fireside Chat. Internet Association. Youtube.com. https://www.youtube.com/watch?v=LqL3tyCQ1yY

Schmidhuber, J. (2017). True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo. TEDx Talks. Youtube.com. https://www.youtube.com/watch?v=-Y7PLaxXUrs

Kurzweil, R. (2001). The Law of Accelerating Returns. Kurzweil Accelerating Intelligence. Kurzweilai.net. http://www.kurzweilai.net/the-law-of-accelerating-returns

Sagan, C. (1996). Remembering Carl Sagan. Charlierose.com. https://charlierose.com/videos/2625

Sbmadmin. (2008). Announcing the Science-Based Medicine Blog. Sciencebasedmedicine.org. https://sciencebasedmedicine.org/hello-world/

Review of Philip Tetlock’s “Superforecasting” by Adam Alonzi



Adam Alonzi


Alexander Consulting the Oracle of Apollo, Louis Jean Francois Lagrenée, 1789. Oil on canvas.

“All who drink of this treatment recover in a short time, except those whom it does not help, who all die. It is obvious, therefore, that it fails only in incurable cases.”

-Galen

Before the advent of evidence-based medicine, most physicians took an attitude like Galen’s toward their prescriptions: if a remedy did not work, surely the fault lay with the patient. For centuries, scores of revered doctors did not consider putting bloodletting or trepanation to the test. Randomized trials to evaluate the efficacy of a treatment were not common practice, and doctors like Archie Cochrane, who fought to make them part of standard protocol, were met with fierce resistance. Philip Tetlock, author of Superforecasting: The Art and Science of Prediction (2015), contends that the state of forecasting in the 21st century is strikingly similar to that of medicine in the 19th. Initiatives like the Good Judgment Project (GJP), a website that allows anyone to make predictions about world events, have shown that even a discipline largely at the mercy of chance can be put on a scientific footing.

More than once the author reminds us that the key to success in this endeavor is not what you think or what you know, but how you think. For Tetlock, pundits like Thomas Friedman are the “exasperatingly evasive” Galens of the modern era. In the footnotes he lets the reader know he chose Friedman as a target strictly because of his prominence; there are many like him. Tetlock’s academic work comparing random selections with those of professionals led media outlets to publish, and a portion of their readers to conclude, that expert opinion is no more accurate than a dart-throwing chimpanzee. What the undiscerning did not consider, however, is that not all of the experts who participated failed to do better than chance.

Daniel Kahneman hypothesized that “attentive readers of the New York Times…may be only slightly worse” than the experts whom corporations and governments so handsomely recompense. This turned out to be a conservative guess. The participants in the Good Judgment Project outperformed all control groups, including one composed of professional intelligence analysts with access to classified information. This hodgepodge of retired bird watchers, unemployed programmers, and news junkies did 30% better than the “pros.” More importantly, at least to readers who want to gain a useful skill set as well as general knowledge, the managers of the GJP have identified qualities and ways of thinking that separate “superforecasters” from the rest of us. Fortunately, they are qualities we can all cultivate.

While the merits of his macroeconomic theories can be debated, John Maynard Keynes was an extremely successful investor during one of the bleakest periods in international finance. This was no doubt due in part to his willingness to make allowance for new information and his grasp of probability. Participants in the GJP display open-mindedness, an ability and willingness to repeatedly update their forecasts, a talent for neither under- nor over-reacting to new information by putting it into a broader context, and a predilection for mathematical thinking (though those interviewed admitted they rarely used an explicit equation to calculate their answers). The figures they give also tend to be more precise than those of their less successful peers. This “granularity” may seem ridiculous at first. I must confess that when I first saw estimates on the GJP of 34% or 59%, I would chuckle a bit. How, I asked myself, is a single percentage point meaningful? Aren’t we just dealing with rough approximations? Apparently not.
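Granularity pays because forecasts in the GJP are scored with the Brier score: the mean squared difference between the stated probability and what actually happened (0 for “did not occur”, 1 for “occurred”). A minimal sketch, with made-up forecasts, shows why a precise forecaster beats one who always hedges at 50%:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary outcomes.
    0.0 is a perfect score; a constant 50% forecast always earns 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts for four events (1 = the event happened)
granular = [0.91, 0.34, 0.77, 0.12]   # well-calibrated, precise forecaster
hedged   = [0.50, 0.50, 0.50, 0.50]   # "fair chance" on everything
happened = [1, 0, 1, 0]

print(brier_score(granular, happened))   # well below 0.25
print(brier_score(hedged, happened))     # exactly 0.25
```

Because the score is kept over many questions, a forecaster who says 34% rather than “possibly” can actually be held to account, and single percentage points accumulate into measurable skill.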

Tetlock reminds us that the GJP does not deal with nebulous questions like “Who will be president in 2027?” or “Will a level 9 earthquake hit California two years from now?” There are, however, questions that are not, in the absence of unforeseeable Black Swan events, completely inscrutable. Who will win the Mongolian presidency? Will Uruguay sign a trade agreement with Laos in the next six months? These are parts of highly complex systems, but they can be broken down into tractable subproblems.

Using numbers instead of words like “possibly”, “probably”, “unlikely”, etc., seems unnatural. Words give us wiggle room and plausible deniability, and they cannot be put on any sort of record to keep score of how well we’re doing. Still, to some, numbers may seem silly, pedantic, or presumptuous. Yet if the Joint Chiefs of Staff had given Kennedy the exact figure they had in mind (3 to 1) instead of a “fair chance,” the Bay of Pigs debacle might never have transpired. Because they represent ranges of values instead of single numbers, words can be retroactively stretched or shrunk to make blunders seem a little less avoidable. This is good for advisors looking to cover their hides by hedging their bets, but not so great for everyone else.

If American intelligence agencies had presented Congress with the formidable but vincible figure of 70% instead of a “slam dunk,” a disastrous invasion and costly occupation might have been prevented. At this point it is hard to see the invasion as anything but a mistake, but even amidst these emotions we must be wary of hindsight. Still, a 70% chance of being right means a 30% chance of being wrong; it is hardly a “slam dunk.” No one would feel completely at ease if an oncologist told them he was 70% sure the growth was not malignant. There are enormous consequences to sloppy communication. Yet those with vested interests are more than content with this approach when it agrees with them, even if it ends up harming them.

When Nate Silver put the odds of the 2008 election in Obama’s favor, he was panned by Republicans as a pawn of the liberal media. He was quickly reviled by Democrats when he foresaw a Republican takeover of the Senate. It is hard to be a wizard when the king, his court, and all the merry peasants sweeping the stables would not know a confirmation bias from their right foot. To make matters worse, confidence is widely equated with capability. This seems to be doubly true of groups of people, particularly when they are choosing a leader. A mutual-fund manager who tells his clients they will see great returns on a company is viewed as stronger than a Poindexter prattling on about Bayesian inference and risk management.

The GJP’s approach has not spread far — yet. At this time most pundits, consultants, and self-proclaimed sages do not explicitly quantify their success rates, but this does not stop corporations, NGOs, and institutions at all levels of government from paying handsomely for the wisdom of untested soothsayers. Perhaps they have a few diplomas, but most cannot provide compelling evidence for expertise in haruspicy (sans the sheep’s liver). Given the criticality of accurate analyses to saving time and money, it would seem as though a demand for methods to improve and assess the quality of foresight would arise. Yet for the most part individuals and institutions continue to happily grope in the dark, unaware of the necessity for feedback when they misstep — afraid of having their predictions scrutinized or having to take the pains to scrutinize their predictions.

David Ferrucci is wary of the “guru model” of settling disputes. No doubt you’ve witnessed or participated in this kind of whimpering fracas: one person presents a Krugman op-ed to debunk a Niall Ferguson polemic, which is then countered with a Tommy Friedman book, which was recently excoriated by the newest leader of the latest intellectual cult to come out of the Ivy League. In the end both sides leave frustrated. Krugman’s blunders regarding the economic prospects of the Internet, deflation, and the “imminent” collapse of the euro (said repeatedly between 2010 and 2012) are legendary. Similarly, Ferguson, who strongly petitioned the Federal Reserve to reconsider quantitative easing, lest the United States suffer Weimar-like inflation, has not yet been vindicated. He and his colleagues responded in the same way as other embarrassed prophets: be patient, it has not happened, but it will! In his defense, more than one clever person has criticized the way governments calculate their inflation rates…

Paul Ehrlich, a darling of the environmentalist movement, has screeched about the detonation of a “population bomb” for decades. Civilization was set to collapse within 15 to 30 years of 1970. During the interim, 100 to 200 million people would starve to death annually, by the year 2000 no crude oil would be left, the prices of raw materials would skyrocket, and the planet would be in the midst of a perpetual famine. Tetlock does not mention Ehrlich, but, particularly given his persisting influence on Greens, he is as deserving of a place in this hall of fame as anyone else, or more so. Larry Kudlow continued to assure the American people that the Bush tax breaks were producing massive economic growth. This continued well into 2008, when he repeatedly told journalists that America was not in a recession and the Bush boom was “alive and well.” For his stupendous commitment to this contention in the face of overwhelming evidence to the contrary, he was nearly awarded a seat in the Trump cabinet.

This is not to say a mistake should become the journalistic equivalent of a scarlet letter. Kudlow’s slavish adherence to his axioms is not unique, and Ehrlich’s blindness to technological advances is not uncommon, even in an era dominated by technology. By failing to set a timeline or give detailed causal accounts, many believe they have predicted every crash since they learned how to say the word. This is likely because they begin each day with the same mantra: “the market will crash.” Yet through an automatically executed routine of psychological somersaults, they do not see that they were right only once and wrong dozens, hundreds, or thousands of times. This kind of person is much more deserving of scorn than a poker player who boasts about his victories, because the poker player is (likely) at least aware of how often he loses; he is not fooling himself. The severity of Ehrlich’s misfires is a reminder of what happens when someone looks too far ahead while assuming all things will remain the same. Ceteris paribus exists only in laboratories and textbooks.

Axioms are fates accepted by different people as truth, but belief in Fate (in the form of retroactive narrative construction) is a nearly ubiquitous stumbling block to clear thinking. We may be far removed from Sophocles, but the unconscious human drive to create sensible narratives is not peculiar to fifth-century B.C. Athens. A questionnaire given to students at Northwestern showed that most believed things had turned out for the best even when they had not gotten into their first pick. From an outsider’s perspective this is probably not true. In our cocoons we like to think we are in the right place, whether through the hand of fate or through our own choices. Atheists are not immune to this Panglossian habit. Our brains are wired for stories, but the stories we tell ourselves about ourselves seldom come out without distortions. We can gain a better outside view, which allows us to see situations from perspectives other than our own, but only through regular practice with feedback. This is one of the reasons groups are valuable.

Francis Galton asked 787 villagers to guess the weight of an ox hanging in the market square. The average of their guesses (1,197 lbs) turned out to be remarkably close to its actual weight (1,198 lbs). Scott Page has said that “diversity trumps ability.” This is a tad bold, since legions of very different imbeciles will never produce anything of value, but there is undoubtedly a benefit to having a group with more than one point of view. The GJP tested this: teams outperformed lone wolves by a significant margin (23%, to be exact), partially as a result of encouraging one another and building a culture of excellence, and partially through the power of collective intelligence.
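The statistical engine behind Galton’s result is that independent errors largely cancel in the average. A toy simulation of the ox contest (the noise level here is invented purely for illustration) makes the point:

```python
import random

random.seed(7)
TRUE_WEIGHT = 1198  # pounds, the ox's actual weight in Galton's account

# 787 "villagers", each guessing with a large but independent error
guesses = [TRUE_WEIGHT + random.gauss(0, 150) for _ in range(787)]

crowd_estimate = sum(guesses) / len(guesses)
mean_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(round(crowd_estimate))         # lands close to 1198
print(round(mean_individual_error))  # roughly 120: the typical villager is far off
```

The crowd’s error shrinks roughly with the square root of the group size, which is why 787 mediocre guessers can collectively rival an expert; the caveat, as the paragraph notes, is that the errors must be genuinely diverse and independent.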

“No battle plan survives contact with the enemy.”

-Helmuth von Moltke

“Everyone has a plan ’till they get punched in the mouth.”

-Mike Tyson

When Archie Cochrane was told by his surgeon that he had cancer, he prepared for death. Type 1 thinking grabbed hold of him, and he did not doubt the diagnosis. A pathologist later told him the surgeon was wrong. Even the best of us, under pressure, fall back on habitual modes of thinking. This is another reason why groups are useful (assuming all their members do not also panic). Organizations like the GJP and the Millennium Project are showing how well collective intelligence systems can perform. Helmuth von Moltke and Mike Tyson aside, a better motto, substantiated by a growing body of evidence, comes from Dwight Eisenhower: “plans are useless, but planning is indispensable.”

Adam Alonzi is a writer, biotechnologist, documentary maker, futurist, inventor, programmer, and author of the novels A Plank in Reason and Praying for Death: A Zombie Apocalypse. He is an analyst for the Millennium Project, the Head Media Director for BioViva Sciences, and Editor-in-Chief of Radical Science News. Listen to his podcasts here. Read his blog here.