Browsed by Tag: Artificial Intelligence

James Hughes’ Problems of Transhumanism: A Review (Part 3) – Article by Ojochogwu Abdul



Ojochogwu Abdul


Part 1 | Part 2 | Part 3 | Part 4 | Part 5

Part 3: Liberal Democracy Versus Technocratic Absolutism

“Transhumanists, like Enlightenment partisans in general, believe that human nature can be improved but are conflicted about whether liberal democracy is the best path to betterment. The liberal tradition within the Enlightenment has argued that individuals are best at finding their own interests and should be left to improve themselves in self-determined ways. But many people are mistaken about their own best interests, and more rational elites may have a better understanding of the general good. Enlightenment partisans have often made a case for modernizing monarchs and scientific dictatorships. Transhumanists need to confront this tendency to disparage liberal democracy in favor of the rule by dei ex machina and technocratic elites.” (James Hughes, 2010)

Hughes’ series of essays exploring problems of transhumanism continues with a discussion of the tension, existing or prospective within the transhumanist movement, between a preference for liberal democracy and one for technocratic absolutism. As Hughes demonstrates, this socio-political tension between liberalism and despotism turns out to be one more of the contradictions transhumanism inherited from its roots in the Enlightenment. Liberalism, an idea which gained much of its vitality during the Enlightenment, developed as an argument for human progress. Hughes re-presents its central thesis, cogently articulated in J.S. Mill’s On Liberty: “if individuals are given liberty they will generally know how to pursue their interests and potentials better than will anyone else. So, society generally will become richer and more intelligent if individuals are free to choose their own life ends rather than if they are forced towards betterment by the powers that be.” This, essentially, was the Enlightenment’s ground for promoting liberalism.

Read More

Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018



Gennady Stolyarov II
Ray Kurzweil


The Stolyarov-Kurzweil Interview has been released at last! Watch it on YouTube here.

U.S. Transhumanist Party Chairman Gennady Stolyarov II posed a wide array of questions for inventor, futurist, and Singularitarian Dr. Ray Kurzweil on September 21, 2018, at RAAD Fest 2018 in San Diego, California. Topics discussed include advances in robotics and the potential for household robots, artificial intelligence and overcoming the pitfalls of AI bias, the importance of philosophy, culture, and politics in ensuring that humankind realizes the best possible future, how emerging technologies can protect privacy and verify the truthfulness of information being analyzed by algorithms, as well as insights that can assist in the attainment of longevity and the preservation of good health – including a brief foray into how Ray Kurzweil overcame his Type 2 Diabetes.

Learn more about RAAD Fest here. RAAD Fest 2019 will occur in Las Vegas during October 3-6, 2019.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentation by Gennady Stolyarov II at RAAD Fest 2018, entitled, “The U.S. Transhumanist Party: Four Years of Advocating for the Future”.

Advocating for the Future – Panel at RAAD Fest 2017 – Gennady Stolyarov II, Zoltan Istvan, Max More, Ben Goertzel, Natasha Vita-More



Gennady Stolyarov II
Zoltan Istvan
Max More
Ben Goertzel
Natasha Vita-More


Gennady Stolyarov II, Chairman of the United States Transhumanist Party, moderated this panel discussion, entitled “Advocating for the Future”, at RAAD Fest 2017 on August 11, 2017, in San Diego, California.

Watch it on YouTube here.

From left to right, the panelists are Zoltan Istvan, Gennady Stolyarov II, Max More, Ben Goertzel, and Natasha Vita-More. With these leading transhumanist luminaries, Mr. Stolyarov discussed subjects such as what the transhumanist movement will look like in 2030, artificial intelligence and sources of existential risk, gamification and the use of games to motivate young people to create a better future, and how to persuade large numbers of people to support life-extension research with at least the same degree of enthusiasm that they display toward the fight against specific diseases.

Learn more about RAAD Fest here.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentations of Gennady Stolyarov II and Zoltan Istvan from the “Advocating for the Future” panel.

The Singularity: Fact or Fiction or Somewhere In-Between? – Article by Gareth John


Gareth John


Editor’s Note: The U.S. Transhumanist Party features this article by our member Gareth John, originally published by IEET on January 13, 2016, as part of our ongoing integration with the Transhuman Party. This article raises various perspectives about the idea of the technological Singularity and asks readers to offer their own views regarding how plausible the Singularity narrative, especially as articulated by Ray Kurzweil, is. The U.S. Transhumanist Party welcomes such deliberations and assessments of where technological progress may be taking our species and how rapid such progress might be – as well as how subject technological progress is to human influence and socio-cultural factors, and whether a technological Singularity would be characterized predominantly by benefits or by risks to humankind. The article by Mr. John is a valuable contribution to the consideration of such important questions.

~ Gennady Stolyarov II, Chairman, United States Transhumanist Party, January 2, 2019


In my continued striving to disprove the theorem that there’s no such thing as a stupid question, I shall now proceed to ask one. What’s the consensus on Ray Kurzweil’s position concerning the coming Singularity? [1] Do you as transhumanists accept his premise and timeline, or do you feel that a) it’s a fiction, or b) it’s a reality but not one that’s going to arrive anytime soon? Is it as inevitable as Kurzweil suggests, or is it simply millenarian daydreaming in line with the coming Rapture?

According to Wikipedia (yes, I know, but I’m learning as I go along), the first use of the term ‘singularity’ in this context was made by Stanislaw Ulam in his 1958 obituary for John von Neumann, in which he mentioned a conversation with von Neumann about the ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’. [2] The term was popularised by mathematician, computer scientist, and science fiction author Vernor Vinge, who argues that artificial intelligence, human biological advancement, or brain-computer interfaces could be possible causes of the singularity. [3] Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain. [4]

Kurzweil predicts the singularity to occur around 2045 [5] whereas Vinge predicts some time before 2030 [6]. In 2012, Stuart Armstrong and Kaj Sotala published a study of AGI predictions by both experts and non-experts and found a wide range of predicted dates, with a median value of 2040. [7] Discussing the level of uncertainty in AGI estimates, Armstrong stated at the 2012 Singularity Summit: ‘It’s not fully formalized, but my current 80% estimate is something like five to 100 years.’ [8]

Speaking for myself, and despite the above, I’m not at all convinced that a Singularity will occur, i.e., one singular event that effectively changes history forever from that precise moment forward. From my (admittedly limited) research on the matter, it seems far more realistic to think of the future in terms of incremental steps made along the way, leading up to major diverse changes (plural) in the way we as human beings – and indeed all sentient life – live; but try as I might, I cannot get my head around these all occurring in a near-simultaneous Big Bang.

Surely we have plenty of evidence already that the opposite will most likely be the case? Scientists have been working on AI, nanotechnology, genetic engineering, robotics, et al., for many years and I see no reason to conclude that this won’t remain the case in the years to come. Small steps leading to big changes maybe, but perhaps not one giant leap for mankind in a singular convergence of emerging technologies?

Let’s be straight here: I’m not having a go at Kurzweil or his ideas – the man’s clearly a visionary (at least from my standpoint) and leagues ahead when it comes to intelligence and foresight. I’m simply interested as to what extent his ideas are accepted by the wider transhumanist movement.

There are notable critics (again, leagues ahead of me in critically engaging with the subject) who argue against the idea of the Singularity. Nathan Pensky, writing in 2014, says:

It’s no doubt true that the speculative inquiry that informed Kurzweil’s creation of the Singularity also informed his prodigious accomplishment in the invention of new tech. But just because a guy is smart doesn’t mean he’s always right. The Singularity makes for great science-fiction, but not much else. [9]

Other well-informed critics have also dismissed Kurzweil’s central premise, among them Professor Andrew Blake, managing director of Microsoft Research Cambridge, Jaron Lanier, Paul Allen, Peter Murray, Jeff Hawkins, Gordon Moore, Jared Diamond, and Steven Pinker, to name but a few. Even Noam Chomsky has waded in to categorically deny the possibility. Pinker writes:

There is not the slightest reason to believe in the coming singularity. The fact you can visualise a future in your imagination is not evidence that it is likely or even possible… Sheer processing is not a pixie dust that magically solves all your problems. [10]

There are, of course, many more critics, but there are also many supporters, and Kurzweil rarely lets a criticism pass without a fierce rebuttal. Indeed, new interdisciplinary academic fields have been founded partly on the presupposition that the Singularity will occur in line with Kurzweil’s predictions (along with other phenomena that pose the possibility of existential risk). Examples include Nick Bostrom’s Future of Humanity Institute at Oxford University and the Centre for the Study of Existential Risk at Cambridge.

Given the above, and returning to my original question: how do transhumanists, taken as a whole, rate the possibility of an imminent Singularity as described by Kurzweil? Good science or good science fiction? For Kurzweil it is the pace of change – exponential growth – that will result in a runaway effect – an intelligence explosion – where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence. [11] The only way for us to participate in such an event would be by merging with the intelligent machines we are creating.
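
Kurzweil’s exponential-growth claim lends itself to a quick back-of-envelope check. The sketch below counts doublings until $1,000 of compute crosses a brain-scale threshold; the starting point (10^12 cps per $1,000), the target (10^16 cps, a figure Kurzweil has used for the human brain), the base year, and the doubling time are all illustrative assumptions of mine, not precise claims from any of the authors discussed here.

```python
# Back-of-envelope sketch of the exponential-growth ("accelerating returns")
# argument. All starting figures and the doubling time are illustrative
# assumptions, not exact numbers from Kurzweil.

def crossover_year(start_year=2020, start_cps_per_kusd=1e12,
                   target_cps=1e16, doubling_years=1.5):
    """Year in which $1,000 of compute first reaches target_cps,
    assuming a fixed doubling time."""
    year, cps = start_year, start_cps_per_kusd
    while cps < target_cps:
        cps *= 2
        year += doubling_years
    return year

print(crossover_year())
```

Stretching or shrinking the assumed doubling time shifts the crossover year by decades, which is one reason the expert predictions surveyed by Armstrong and Sotala spread so widely.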

And I guess this is what is hard for me to fathom. We are creating these machines with all our mixed-up, blinkered, prejudicial, oppositional minds, aims, and values. We as human beings, however intelligent, are an absolutely necessary part of the picture that I think Kurzweil sometimes underestimates. I’m more inclined to agree with Jamais Cascio when he says:

I don’t think that a Singularity would be visible to those going through one. Even the most disruptive changes are not universally or immediately distributed, and late followers learn from the dilemmas of those who had initially encountered the disruptive change. [12]

So I’d love to know what you think. Are you in Kurzweil’s corner, waiting for that singular moment in 2045 when the world as we know it stops for an instant… and then restarts in a glorious new utopian future? Do you agree with Kurzweil but harbour serious fears that the whole ‘glorious new future’ may not be on the cards and that we’ll all be obliterated by a newborn AGI’s capriciousness or by gray goo? Are you a moderate, maintaining that a Singularity, while almost certain to occur, will pass unnoticed by those waiting for it? Or do you think it’s so much baloney?

Whatever your position, I’d really value your input and would love to hear your views on the subject.

NOTES

1. As stated below, the term Singularity was in use before Kurzweil’s appropriation of it. But as shorthand I’ll refer to his interpretation and predictions relating to it throughout this article.

2. Carvalko, J, 2012, ‘The Techno-human Shell: A Jump in the Evolutionary Gap’ (Mechanicsburg: Sunbury Press)

3. Ulam, S, 1958, ‘Tribute to John von Neumann’, Bulletin of the American Mathematical Society, 64, #3, part 2, p. 5

4. Vinge, V, 2013, ‘Vernor Vinge on the Singularity’, San Diego State University. Retrieved Nov 2015

5. Kurzweil, R, 2005, ‘The Singularity is Near’ (London: Penguin Group)

6. Vinge, V, 1993, ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, originally in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129

7. Armstrong, S, and Sotala, K, 2012, ‘How We’re Predicting AI – Or Failing To’, in Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster (Pilsen: University of West Bohemia) https://intelligence.org/files/PredictingAI.pdf

8. Armstrong, S, 2012, ‘How We’re Predicting AI’, from the 2012 Singularity Summit

9. Pensky, N, 2014, article taken from Pando. https://goo.gl/LpR3eF

10. Pinker S, 2008, IEEE Spectrum: ‘Tech Luminaries Address Singularity’. http://goo.gl/ujQlyI

11. Wikipedia, ‘Technological Singularity’. Retrieved Nov 2015. https://goo.gl/nFzi2y

12. Cascio, J, ‘New FC: Singularity Scenarios’ article taken from Open the Future. http://goo.gl/dZptO3

Gareth John lives in Cardiff, UK and is a trainee social researcher with an interest in the intersection of emerging technologies with behavioural and mental health. He has an MA in Buddhist Studies from the University of Bristol. He is also a member of the U.S. Transhumanist Party / Transhuman Party. 


HISTORICAL COMMENTS

Gareth,

Thank you for the thoughtful article. I’m emailing to comment on the blog post, though I can’t tell when it was written. You say that you don’t believe the singularity will necessarily occur the way Kurzweil envisions, but it seems like you slightly mischaracterize his definition of the term.

I don’t believe that Kurzweil ever meant to suggest that the singularity will simply consist of one single event that will change everything. Rather, I believe he means that the singularity is when no person can make any prediction past that point in time when a $1,000 computer becomes smarter than the entire human race, much like how an event horizon of a black hole prevents anyone from seeing past it.

Given that Kurzweil’s definition isn’t an arbitrary claim that everything changes all at once, I don’t see how anyone can really argue with whether the singularity will happen. After all, at some point in the future, even if it happens much slower than Kurzweil predicts, a $1,000 computer will eventually become smarter than every human. When this happens, I think it’s fair to say no one is capable of predicting the future of humanity past that point. Would you disagree with this?

Even more important is that although many of Kurzweil’s predictions about when certain products will become commercially available to the general public have not come true, all the evidence I’ve seen about the actual trend of the law of accelerating returns seems to be exactly spot on. Maybe this trend will slow down, or stop, but it hasn’t yet. Until it does, I think the law of accelerating returns, and Kurzweil’s singularity, deserve the benefit of the doubt.

[…]

Thanks,

Rich Casada


Hi Rich,
Thanks for the comments. The post was written back in 2015 for IEET, and represented a genuine ask from the transhumanist community. At that time my priority was to learn what I could, where I could, and not a lot’s changed for me since – I’m still learning!

I’m not sure I agree that Kurzweil’s definition isn’t a claim that ‘everything changes at once’. In The Singularity is Near, he states:

“So we will be producing about 10^26 to 10^29 cps of nonbiological computation per year in the early 2030s. This is roughly equal to our estimate for the capacity of all living biological human intelligence … This state of computation in the early 2030s will not represent the Singularity, however, because it does not yet correspond to a profound expansion of our intelligence. By the mid-2040s, however, that one thousand dollars’ worth of computation will be equal to 10^26 cps, so the intelligence created per year (at a total cost of about $10^12) will be about one billion times more powerful than all human intelligence today. That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.” (Kurzweil 2005, pp. 135-36, italics mine)
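
The arithmetic inside the quoted passage can be checked directly. The sanity check below uses two inputs that are my assumptions about the estimates Kurzweil intends elsewhere in the book (roughly 10^16 cps per human brain and roughly 10^10 humans); with those, the “one billion times” figure follows from the quoted numbers.

```python
# Sanity check of the arithmetic in the quoted passage. The per-brain and
# population figures are assumed rough estimates, not quotations.

cps_per_kilodollar = 1e26   # $1,000 of computation by the mid-2040s (quoted)
annual_spend = 1e12         # ~$10^12 spent per year (quoted)
human_brain_cps = 1e16      # assumed estimate of cps per human brain
humans = 1e10               # assumed rough world population

machine_cps_per_year = (annual_spend / 1_000) * cps_per_kilodollar
all_human_cps = humans * human_brain_cps

# Ratio of machine intelligence created per year to all human intelligence:
print(machine_cps_per_year / all_human_cps)  # ~1e9, i.e. "one billion times"
```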

Kurzweil specifically defines what the Singularity is and isn’t (a profound and disruptive transformation in human intelligence), and gives a more-or-less precise prediction of when it will occur. A consequence of that may be that we will not ‘be able to make any prediction past that point in time’; however, I don’t believe this is the main thrust of Kurzweil’s argument.

I do, however, agree with what you appear to be postulating (correct me if I’m wrong): that a better definition of a Singularity might indeed simply be ‘when no person can make any prediction past that point in time’. And, like you, I don’t believe it will be tied to any set point in time. We may be living through a singularity as we speak. There may be many singularities (although, worth noting again, Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence as opposed to other technologies, writing for example that, “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine.” (Kurzweil 2005, p. 9)).

So, having said all that, and in answer to your question of whether there is a point beyond which no one is capable of predicting the future of humanity: I’m not sure. I guess none of us can really be sure until, or unless, it happens.

This is why I believe having the conversation about the ethical implications of these new technologies now is so important. Post-singularity might simply be too late.

Gareth

Review of Ray Kurzweil’s “How to Create a Mind” – Article by Gennady Stolyarov II


Gennady Stolyarov II


How to Create a Mind (2012) by inventor and futurist Ray Kurzweil sets forth a case for engineering minds that are able to emulate the complexity of human thought (and exceed it) without the need to reverse-engineer every detail of the human brain or of the plethora of content with which the brain operates. Kurzweil persuasively describes the human conscious mind as based on hierarchies of pattern-recognition algorithms which, even when based on relatively simple rules and heuristics, combine to give rise to the extremely sophisticated emergent properties of conscious awareness and reasoning about the world. How to Create a Mind takes readers through an integrated tour of key historical advances in computer science, physics, mathematics, and neuroscience – among other disciplines – and describes the incremental evolution of computers and artificial-intelligence algorithms toward increasing capabilities – leading toward the not-too-distant future (the late 2020s, according to Kurzweil) during which computers would be able to emulate human minds.

Kurzweil’s fundamental claim is that there is nothing which a biological mind is able to do, of which an artificial mind would be incapable in principle, and that those who posit that the extreme complexity of biological minds is insurmountable are missing the metaphorical forest for the trees. Analogously, although a fractal or a procedurally generated world may be extraordinarily intricate and complex in their details, they can arise on the basis of carrying out simple and conceptually fathomable rules. If appropriate rules are used to construct a system that takes in information about the world and processes and analyzes it in ways conceptually analogous to a human mind, Kurzweil holds that the rest is a matter of having adequate computational and other information-technology resources to carry out the implementation. Much of the first half of the book is devoted to the workings of the human mind, the functions of the various parts of the brain, and the hierarchical pattern recognition in which they engage. Kurzweil also discusses existing “narrow” artificial-intelligence systems, such as IBM’s Watson, language-translation programs, and the mobile-phone “assistants” that have been released in recent years by companies such as Apple and Google. Kurzweil observes that, thus far, the most effective AIs have been developed using a combination of approaches, having some aspects of prescribed rule-following alongside the ability to engage in open-ended “learning” and extrapolation upon the information which they encounter. Kurzweil draws parallels to the more creative or even “transcendent” human abilities – such as those of musical prodigies – and observes that the manner in which those abilities are made possible is not too dissimilar in principle.
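
Kurzweil’s picture of simple recognizers stacking into hierarchies can be illustrated with a deliberately toy sketch. Everything here (the class, the labels, the “stroke” inputs) is a hypothetical illustration of mine, not code or notation from the book: each level rewrites the output of the level below it, so simple local rules compound into recognition of larger structures.

```python
# Toy sketch of hierarchical pattern recognition: each recognizer fires on
# a specific sequence of lower-level outputs. Names and the "stroke" input
# are illustrative assumptions, not anything from Kurzweil's book.

class Recognizer:
    def __init__(self, name, pattern):
        self.name = name        # label this recognizer emits when it fires
        self.pattern = pattern  # sequence of lower-level labels it detects

    def scan(self, sequence):
        """Replace each occurrence of the pattern with this recognizer's label."""
        out, i, n = [], 0, len(self.pattern)
        while i < len(sequence):
            if sequence[i:i + n] == self.pattern:
                out.append(self.name)
                i += n
            else:
                out.append(sequence[i])
                i += 1
        return out

# Level 1 turns strokes into letters; level 2 turns letters into a "word".
letter_a = Recognizer("A", ["/", "-", "\\"])
word_aa = Recognizer("AA", ["A", "A"])

signal = ["/", "-", "\\", "/", "-", "\\"]
for level in (letter_a, word_aa):
    signal = level.scan(signal)
print(signal)
```

However caricatured, the point survives: no single recognizer “knows” the word; recognition of the larger structure emerges from the composition of simple rules, which is the intuition Kurzweil builds on.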

With regard to some of Kurzweil’s characterizations, however, I question whether they are universally applicable to all human minds – particularly where he mentions certain limitations – or whether they only pertain to some observed subset of human minds. For instance, Kurzweil describes the ostensible impossibility of reciting the English alphabet backwards without error (absent explicit study of the reverse order), because of the sequential nature in which memories are formed. Yet, upon reading the passage in question, I was able to recite the alphabet backwards without error upon my first attempt. It is true that this occurred more slowly than the forward recitation, but I am aware of why I was able to do it; I perceive larger conceptual structures or bodies of knowledge as mental “objects” of a sort – and these objects possess “landscapes” on which it is possible to move in various directions; the memory is not “hard-coded” in a particular sequence. One particular order of movement does not preclude others, even if those others are less familiar – but the key to successfully reciting the alphabet backwards is to hold it in one’s awareness as a single mental object and move along its “landscape” in the desired direction. (I once memorized how to pronounce ABCDEFGHIJKLMNOPQRSTUVWXYZ as a single continuous word; any other order is slower, but it is quite doable as long as one fully knows the contents of the “object” and keeps it in focus.) This is also possible to do with other bodies of knowledge that one encounters frequently – such as dates of historical events: one visualizes them along the mental object of a timeline, visualizes the entire object, and then moves along it or drops in at various points using whatever sequences are necessary to draw comparisons or identify parallels (e.g., which events happened contemporaneously, or which events influenced which others). 
I do not know what fraction of the human population carries out these techniques – as the ability to recall facts and dates has always seemed rather straightforward to me, even as it challenged many others. Yet there is no reason why the approaches for more flexible operation with common elements of our awareness cannot be taught to large numbers of people, as these techniques are a matter of how the mind chooses to process, model, and ultimately recombine the data which it encounters. The more general point in relation to Kurzweil’s characterization of human minds is that there may be a greater diversity of human conceptual frameworks and approaches toward cognition than Kurzweil has described. Can an artificially intelligent system be devised to encompass this diversity? This is certainly possible, since the architecture of AI systems would be more flexible than the biological structures of the human brain. Yet it would be necessary for true artificial general intelligences to be able not only to learn using particular predetermined methods, but also to teach themselves new techniques for learning and conceptualization altogether – just as humans are capable of today.

The latter portion of the book is more explicitly philosophical and devoted to thought experiments regarding the nature of the mind, consciousness, identity, free will, and the kinds of transformations that may or may not preserve identity. Many of these discussions are fascinating and erudite – and Kurzweil often transcends fashionable dogmas by bringing in perspectives such as the compatibilist case for free will and the idea that the experiments performed by Benjamin Libet (that showed the existence of certain signals in the brain prior to the conscious decision to perform an activity) do not rule out free will or human agency. It is possible to conceive of such signals as “preparatory work” within the brain to present a decision that could then be accepted or rejected by the conscious mind. Kurzweil draws an analogy to government officials preparing a course of action for the president to either approve or disapprove. “Since the ‘brain’ represented by this analogy involves the unconscious processes of the neocortex (that is, the officials under the president) as well as the conscious processes (the president), we would see neural activity as well as actual actions taking place prior to the official decision’s being made” (p. 231). Kurzweil’s thoughtfulness is an important antidote to commonplace glib assertions that “Experiment X proved that Y [some regularly experienced attribute of humans] is an illusion” – assertions which frequently tend toward cynicism and nihilism if widely adopted and extrapolated upon. It is far more productive to deploy both science and philosophy toward seeking to understand more directly apparent phenomena of human awareness, sensation, and decision-making – instead of rejecting the existence of such phenomena contrary to the evidence of direct experience. 
Especially if the task is to engineer a mind that has at least the faculties of the human brain, then Kurzweil is wise not to dismiss aspects such as consciousness, free will, and the more elevated emotions, which have been known to philosophers and ordinary people for millennia, and which only predominantly in the 20th century has it become fashionable to disparage in some circles. Kurzweil’s only vulnerability in this area is that he often resorts to statements that he accepts the existence of these aspects “on faith” (although it does not appear to be a particularly religious faith; it is, rather, more analogous to “leaps of faith” in the sense that Albert Einstein referred to them). Kurzweil does not need to do this, as he himself outlines sufficient logical arguments to be able to rationally conclude that attributes such as awareness, free will, and agency upon the world – which have been recognized across predominant historical and colloquial understandings, irrespective of particular religious or philosophical flavors – indeed actually exist and should not be neglected when modeling the human mind or developing artificial minds.

One of the thought experiments presented by Kurzweil is vital to consider, because the process by which an individual’s mind and body might become “upgraded” through future technologies would determine whether that individual is actually preserved – in terms of the aspects of that individual that enable one to conclude that that particular person, and not merely a copy, is still alive and conscious:

Consider this thought experiment: You are in the future with technologies more advanced than today’s. While you are sleeping, some group scans your brain and picks up every salient detail. Perhaps they do this with blood-cell-sized scanning machines traveling in the capillaries of your brain or with some other suitable noninvasive technology, but they have all of the information about your brain at a particular point in time. They also pick up and record any bodily details that might reflect on your state of mind, such as the endocrine system. They instantiate this “mind file” in a morphological body that looks and moves like you and has the requisite subtlety and suppleness to pass for you. In the morning you are informed about this transfer and you watch (perhaps without being noticed) your mind clone, whom we’ll call You 2. You 2 is talking about his or her life as if s/he were you, and relating how s/he discovered that very morning that s/he had been given a much more durable new version 2.0 body. […] The first question to consider is: Is You 2 conscious? Well, s/he certainly seems to be. S/he passes the test I articulated earlier, in that s/he has the subtle cues of being a feeling, conscious person. If you are conscious, then so too is You 2.

So if you were to, uh, disappear, no one would notice. You 2 would go around claiming to be you. All of your friends and loved ones would be content with the situation and perhaps pleased that you now have a more durable body and mental substrate than you used to have. Perhaps your more philosophically minded friends would express concerns, but for the most part, everybody would be happy, including you, or at least the person who is convincingly claiming to be you.

So we don’t need your old body and brain anymore, right? Okay if we dispose of it?

You’re probably not going to go along with this. I indicated that the scan was noninvasive, so you are still around and still conscious. Moreover your sense of identity is still with you, not with You 2, even though You 2 thinks s/he is a continuation of you. You 2 might not even be aware that you exist or ever existed. In fact you would not be aware of the existence of You 2 either, if we hadn’t told you about it.

Our conclusion? You 2 is conscious but is a different person than you – You 2 has a different identity. S/he is extremely similar, much more so than a mere genetic clone, because s/he also shares all of your neocortical patterns and connections. Or should I say s/he shared those patterns at the moment s/he was created. At that point, the two of you started to go your own ways, neocortically speaking. You are still around. You are not having the same experiences as You 2. Bottom line: You 2 is not you.  (How to Create a Mind, pp. 243-244)

This thought experiment is essentially the same one as I independently posited in my 2010 essay “How Can I Live Forever?: What Does and Does Not Preserve the Self”:

Consider what would happen if a scientist discovered a way to reconstruct, atom by atom, an identical copy of my body, with all of its physical structures and their interrelationships exactly replicating my present condition. If, thereafter, I continued to exist alongside this new individual – call him GSII-2 – it would be clear that he and I would not be the same person. While he would have memories of my past as I experienced it, if he chose to recall those memories, I would not be experiencing his recollection. Moreover, going forward, he would be able to think different thoughts and undertake different actions than the ones I might choose to pursue. I would not be able to directly experience whatever he chose to experience (or experiences involuntarily). He would not have my ‘I-ness’ – which would remain mine only.

Thus, Kurzweil and I agree, at least preliminarily, that an identically constructed copy of oneself does not somehow obtain the identity of the original. Kurzweil and I also agree that a sufficiently gradual replacement of an individual’s cells and perhaps other larger functional units of the organism, including a replacement with non-biological components that are integrated into the body’s processes, would not destroy an individual’s identity (assuming it can be done without collateral damage to other components of the body). Then, however, Kurzweil posits the scenario where one, over time, transforms into an entity that is materially identical to the “You 2” as posited above. He writes:

But we come back to the dilemma I introduced earlier. You, after a period of gradual replacement, are equivalent to You 2 in the scan-and-instantiate scenario, but we decided that You 2 in that scenario does not have the same identity as you. So where does that leave us? (How to Create a Mind, p. 247)

Kurzweil and I are still in agreement that “You 2” in the gradual-replacement scenario could legitimately be a continuation of “You” – but our views diverge when Kurzweil states, “My resolution of the dilemma is this: It is not true that You 2 is not you – it is you. It is just that there are now two of you. That’s not so bad – if you think you are a good thing, then two of you is even better” (p. 247). I disagree. If I (via a continuation of my present vantage point) cannot have the direct, immediate experiences and sensations of GSII-2, then GSII-2 is not me, but rather an individual with a high degree of similarity to me, with a separate vantage point and separate physical processes, including consciousness. I might not mind the existence of GSII-2 per se, but I would mind if that existence were posited as a sufficient reason to be comfortable with my present instantiation ceasing to exist. Although Kurzweil correctly reasons through many of the initial hypotheses and intermediate steps leading from them, he ultimately arrives at a “pattern” view of identity, with which I differ. I hold, rather, a “process” view of identity, where a person’s “I-ness” remains the same if “the continuity of bodily processes is preserved even as their physical components are constantly circulating into and out of the body. The mind is essentially a process made possible by the interactions of the brain and the remainder of the nervous system with the rest of the body. One’s ‘I-ness’, being a product of the mind, is therefore reliant on the physical continuity of bodily processes, though not necessarily an unbroken continuity of higher consciousness.” (“How Can I Live Forever?: What Does and Does Not Preserve the Self”) If only a pattern of one’s mind were preserved and re-instantiated, the result might be indistinguishable from the original person to an external observer, but the original individual would not directly experience the re-instantiation.
It is not the content of one’s experiences or personality that is definitive of “I-ness” – but rather the more basic fact that one experiences anything as oneself and not from the vantage point of another individual; this requires the same bodily processes that give rise to the conscious mind to operate without complete interruption. (The extent of permissible partial interruption is difficult to determine precisely and open to debate; general anesthesia is not sufficient to disrupt I-ness, but what about cryonics or shorter-term “suspended animation”?) For this reason, the pursuit of biological life extension of one’s present organism remains crucial; one cannot rely merely on one’s “mindfile” being re-instantiated in a hypothetical future after one’s demise. The future of medical care and life extension may certainly involve non-biological enhancements and upgrades, but in the context of augmenting an existing organism, not disposing of that organism.

How to Create a Mind is highly informative for artificial-intelligence researchers and laypersons alike, and it merits revisiting as a reference for useful ideas regarding how (at least some) minds operate. It facilitates thoughtful consideration of both the practical methods and the more fundamental philosophical implications of the quest to improve the flexibility and autonomy with which our technologies interact with the external world and augment our capabilities. At the same time, as Kurzweil acknowledges, those technologies often lead us to “outsource” many of our own functions to them – as is the case, for instance, with vast amounts of human memories and creations residing on smartphones and in the “cloud”. If the timeframes for the arrival of human-like AI capabilities match those described by Kurzweil in his characterization of the “law of accelerating returns”, then questions regarding what constitutes a mind sufficiently like our own – and how we will treat those minds – will become ever more salient in the proximate future. It is important, however, for interest in advancing this field to become more widespread, and for political, cultural, and attitudinal barriers to its advancement to be lifted – for, unlike Kurzweil, I do not consider the advances of technology to be inevitable or unstoppable. We humans retain the responsibility of persuading enough other humans that the pursuit of these advances is worthwhile and will greatly improve the length and quality of our lives, while enhancing our capabilities and attainable outcomes. Every movement along an exponential growth curve is due to a deliberate push upward by the minds of the creators of progress, using the machines they have built.

Gennady Stolyarov II is Chairman of the United States Transhumanist Party. Learn more about Mr. Stolyarov here

This article is made available pursuant to the Creative Commons Attribution 4.0 International License, which requires that credit be given to the author, Gennady Stolyarov II (G. Stolyarov II). 

Sophia the Humanoid Robot Wants to Meet You at RAADfest – Video by Hanson Robotics



Hanson Robotics
Coalition for Radical Life Extension


Editor’s Note: The U.S. Transhumanist Party encourages our members to attend RAAD Fest 2018, where we will have our own conference room, and technological marvels such as Sophia the Robot, as well as the visionaries who make these technological advances possible, will be present. Over the coming weeks we hope to offer other videos highlighting some of the key features of this unique gathering in furtherance of the Revolution Against Aging and Death.

~ Gennady Stolyarov II, Chairman, United States Transhumanist Party, August 10, 2018

Message from the Coalition for Radical Life Extension:

Meet Sophia, the latest robot from Hanson Robotics. She will be attending (and performing!) at RAADfest 2018.

Sophia was created using breakthrough robotics and artificial intelligence technologies developed by David Hanson, Dr. Ben Goertzel and their friends at Hanson Robotics in Hong Kong; and is being used as a platform for blockchain-based AI development by SingularityNET Foundation.

RAADfest is the largest event in the world where practical and cutting-edge methods to reverse aging are presented for all interest levels, from beginner to expert.

RAADfest is organized by the non-profit Coalition for Radical Life Extension.

-More about RAADfest: http://raadfest.com/

-More about the Coalition for Radical Life Extension: http://www.rlecoalition.com/

-More about Sophia: http://sophiabot.com/

-More about Hanson Robotics: http://www.hansonrobotics.com/

Fourth Enlightenment Salon – Political Segment: Discussion on Artificial Intelligence in Politics, Voting Systems, and Democracy



Gennady Stolyarov II
Bill Andrews
Bobby Ridge
John Murrieta


This is the third and final video segment from Mr. Stolyarov’s Fourth Enlightenment Salon.

Watch the first segment here.

Watch the second segment here.

On July 8, 2018, during his Fourth Enlightenment Salon, Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, invited John Murrieta, Bobby Ridge, and Dr. Bill Andrews for an extensive discussion about transhumanist advocacy, science, health, politics, and related subjects.

Topics discussed during this installment include the following:

• What is the desired role of artificial intelligence in politics?
• Are democracy and transhumanism compatible?
• What are the ways in which voting and political decision-making can be improved relative to today’s disastrous two-party system?
• What are the policy implications of the development of artificial intelligence and its impact on the economy?
• What are the areas of life that need to be separated and protected from politics altogether?

 

Join the U.S. Transhumanist Party for free, no matter where you reside, by filling out an application form that takes less than a minute. Members will also receive a link to a free compilation of Tips for Advancing a Brighter Future, providing insights from the U.S. Transhumanist Party’s Advisors and Officers on some of what you can do as an individual to improve the world and bring it closer to the kind of future we wish to see.

 

Andrew Yang, Dreams, and Tacos – Meeting with the Technoprogressive 2020 Presidential Candidate – Article by Keith Yu


Keith Yu


Editor’s Note: The U.S. Transhumanist Party features this article by Keith Yu as part of its ongoing integration with the Transhuman Party. This article was originally published on the Transhuman Party website on June 1, 2018, and discusses Mr. Yu’s experiences meeting with Democratic technoprogressive Presidential candidate Andrew Yang. The U.S. Transhumanist Party has not endorsed Andrew Yang as of this publication and would not endorse a candidate running for either of the major U.S. political parties, but we do consider our website to be a place where members can discuss political issues and candidates relevant to transhumanism from a multiplicity of viewpoints, so as to encourage conversations about desirable policies as well as about which candidate(s) the U.S. Transhumanist Party could consider endorsing for the 2020 election season.

~ Gennady Stolyarov II, Chairman, United States Transhumanist Party, December 29, 2018


I walk into the back room of the San Francisco Mission district’s Tacolicious Wednesday evening with a purpose. I am here for a meet and greet with 2020 presidential candidate Andrew Yang and I am armed – with questions. Questions derived from the past few months researching this man whose values seem to so naturally align with my own. Questions from myself as well as from the Transhumanist community at large.

Andrew greets me with a warm smile and a hand. Keith Yu, I introduce myself, of the Transhuman Party. He is interested, but inquisitive, and asks me what Transhumanism is about. I tell him that we envision a future-proofed human race that will thrive as we head towards the future.

“Would you call me a Transhumanist?” he asks. But I think this is something that he needs to decide for himself. He is, however, definitively technoprogressive. The primary plank of his platform, his “Freedom Dividend” (named thus, he jokes, because it tested well with people on all sides of the political spectrum) of $1000 a month to all adult citizens, is a direct response to the job losses caused by automation, now and in the near future. Indeed, the reason for his presidential bid is the lack of a plan in DC to address these concerns. “I will become the plan,” he says. Beyond the Freedom Dividend, many of his other policy positions put an emphasis on investing in technology and, especially, on understanding technology’s effects on people – he is a cautious optimist, as far as technology is concerned.

People filter in slowly, most giving Andrew hugs, a few, handshakes. Most attendees of this small gathering, it seems, are friends. Servers wander around and between mingling groups, ceviche tostadas and bruschetta at the ready. A margarita bar sits in the corner of the room, and specialty tacos adorn a table along the wall. We are soon gathered in a semicircle around the room as Andrew takes the stage.

He gestures to the screen behind him and an introduction video begins to play.

Andrew speaks at length about his universal basic income (UBI) policy as well as his slogan, “Humanity First”, and how it translates into his policy platform. Andrew plans to change the way America measures its progress by adopting such measurements as childhood success, median wealth, incarceration rates, and more, on top of the existing measurements of GDP and job statistics. He also plans to implement a system of digital-social currency for “doing good things that normally don’t have dollar values”, such as volunteering for one’s community or starting a book club. The credits can then be used to redeem discounts or experiences in much the same way that credit card points are used. He believes that this credit system will help improve social cohesion and increase civic engagement.

But I have seen all of this before on his website and have even explored it in another article. I came here for a purpose, and as the floor is opened up for questions, I seize my chance.

“What are your views on longevity research?” I ask.

Andrew Yang responds. He is supportive of longevity research, but believes that it does not need much public support. Citing the efforts of Silicon Valley elites, he asserts that private interests will support longevity research naturally.

“And what are your views on the regulations around illegal and controlled substances for the purpose of research?”

Andrew initially misunderstands this question as a question about marijuana (which he supports for recreational and therapeutic use). Having botched the question initially, I follow up with him afterwards, mentioning the difficulty that researchers run into with the National Institute on Drug Abuse when trying to acquire controlled substances for their research. He tells me that he is very much supportive of research and is strongly against blanket criminalizations.

With the questions that I had come to ask out of the way, I wander over to the bar in the corner of the room and grab a margarita.

Mission complete.

Keith Yu is a Bay Area-based research scientist working within the biotech industry.

Review of Frank Pasquale’s “A Rule of Persons, Not Machines: The Limits of Legal Automation” – Article by Adam Alonzi



Adam Alonzi


From the beginning Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information, contends in his new paper “A Rule of Persons, Not Machines: The Limits of Legal Automation” that software, given its brittleness, is not designed to deal with the complexities of taking a case through court and establishing a verdict. As he understands it, an AI cannot deviate far from the rules laid down by its creator. This assumption, which is not even quite right at the present time, only slightly tinges an otherwise erudite, sincere, and balanced coverage of the topic. He does not show much faith in the use of past cases to create datasets for the next generation of paralegals, automated legal services, and, in the more distant future, lawyers and jurists.

Lawrence Zelenak has noted that when taxes were filed entirely on paper, provisions were limited to avoid unreasonably imposing irksome nuances on the average person. Tax-return software has eliminated this “complexity constraint.” He goes on to state that without it, the laws, and the software that interprets them, are akin to a “black box” for those who must abide by them. William Gale has said taxes could be easily computed for “non-itemizers.” In other words, the government could use information it already has to present a “bill” to this class of taxpayers, saving time and money for all parties involved. However, simplification does not always align with everyone’s interests. TurboTax, whose business is built entirely on helping ordinary people navigate the labyrinth that is the American federal income tax, saw such measures as a threat to its business model and put together a grassroots campaign to fight them. More than just another example of a business protecting its interests, this is an ominous foreshadowing of an escalation scenario that will transpire in many areas if and when legal AI becomes sufficiently advanced.
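Gale’s return-free idea for non-itemizers is easy to make concrete: the tax authority already holds a filer’s reported wages and withholding, so it can compute the bill itself. Here is a minimal sketch in Python; the deduction and bracket figures are invented placeholders for illustration, not actual tax law:

```python
# Toy illustration of "return-free" filing for a non-itemizer:
# the tax authority already knows reported wages and withholding,
# so it can compute the balance due directly and mail it as a
# pre-computed bill. All figures below are invented placeholders.

STANDARD_DEDUCTION = 12_000
BRACKETS = [  # (upper bound of bracket, marginal rate)
    (10_000, 0.10),
    (40_000, 0.12),
    (float("inf"), 0.22),
]

def balance_due(wages: float, withheld: float) -> float:
    """Return the balance due (negative means a refund)."""
    taxable = max(0.0, wages - STANDARD_DEDUCTION)
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable > lower:
            tax += (min(taxable, upper) - lower) * rate
        lower = upper
    return tax - withheld

# The filer only needs to confirm or contest the figure.
print(balance_due(wages=50_000, withheld=4_000))
```

The point of the sketch is how little input the taxpayer must supply: everything in it is data the agency already possesses.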

Pasquale writes: “Technologists cannot assume that computational solutions to one problem will not affect the scope and nature of that problem. Instead, as technology enters fields, problems change, as various parties seek to either entrench or disrupt aspects of the present situation for their own advantage.”

What he is referring to here, in everything but name, is an arms race. The vastly superior computational powers of robot lawyers may make the already perverse incentive to create ever more Byzantine rules even more attractive to bureaucracies and lawyers. The concern is that the clauses and dependencies hidden within contracts will quickly explode, making them far too detailed even for professionals to make sense of in a reasonable amount of time. Because this sort of software may become a necessary accoutrement in most or all legal matters, the demand for it, or for professionals with access to it, will expand greatly at the expense of those who are unwilling or unable to adopt it. This, though Pasquale only hints at it, may lead to greater imbalances in socioeconomic power. On the other hand, he does not consider the possibility of bottom-up open-source (or state-led) efforts to create synthetic public defenders. While this may seem idealistic, it is fairly clear that the open-source model can compete with and, in some areas, outperform proprietary competitors.

It is not unlikely that, within subdomains of law, an array of arms races can and will arise between synthetic intelligences. If a lawyer knows its client is guilty, should it squeal? This will change the way jurisprudence works in many countries, but it would seem unwise to program any robot to knowingly lie about whether a crime, particularly a serious one, has been committed – including by omission. If it is fighting against a punishment it deems overly harsh for a given crime, such as trespassing to get a closer look at a rabid raccoon or unintentional jaywalking, should it maintain its client’s innocence as a means to an end? A moral consequentialist, seeing no harm was done (or in some instances, could possibly have been done), may persist in pleading innocent. A synthetic lawyer may be more pragmatic than deontological, but it is not entirely correct, and certainly shortsighted, to (mis)characterize AI as only capable of blindly following a set of instructions, like a Fortran program made to compute the nth member of the Fibonacci series.

Human courts are rife with biases: judges give more lenient sentences after taking a lunch break (65% more likely to grant parole – nothing to sneeze at), attractive defendants are viewed favorably by unwashed juries and trained jurists alike, and prejudices of all kinds exist against various “out” groups, which can tip the scales toward a guilty verdict or harsher sentences. Why, then, would someone have an aversion to the introduction of AI into a system that is clearly ruled, in part, by the quirks of human psychology?

DoNotPay is an app that helps drivers fight parking tickets. It allows drivers with legitimate medical emergencies to gain exemptions. So, as Pasquale says, not only will traffic management be automated, but so will appeals. However, as he cautions, a flesh-and-blood lawyer takes responsibility for bad advice. DoNotPay not only fails to take responsibility, but “holds its client responsible for when its proprietor is harmed by the interaction.” There is little reason to think machines would do a worse job of adhering to privacy guidelines than human beings unless, as in the example of a machine ratting on its client, there is some overriding principle that would compel them to divulge information to protect several people from harm if a client’s diagnosis in some way makes him or her a danger in personal or professional life. Is the client responsible for the mistakes of the robot it has hired? Should the blame not fall upon the firm that has provided the service?

Making a blockchain that could handle the demands of processing purchases and sales, one that takes into account all the relevant variables to make expert judgements on a matter, is no small task. As the infamous disagreement over the meaning of the word “chicken” in Frigaliment Importing Co. v. B.N.S. International Sales Corp. illustrates, the definition of what anything is can be a bit puzzling. The need to maintain a decent reputation in order to maintain sales is a strong incentive against knowingly cheating customers, but although cheating tends to be the exception for this reason, it is still necessary to protect against it. As one official at the Commodity Futures Trading Commission put it, “where a smart contract’s conditions depend upon real-world data (e.g., the price of a commodity future at a given time), agreed-upon outside systems, called oracles, can be developed to monitor and verify prices, performance, or other real-world events.”
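The oracle pattern that official describes can be sketched in a few lines: the contract’s settlement logic stays pending until an agreed-upon outside feed reports the real-world value the condition depends on. The following is a toy Python illustration of the idea, not any particular blockchain’s API; all names and figures are invented:

```python
# Minimal sketch of a smart-contract condition settled by an
# "oracle" -- an agreed-upon outside source reporting a real-world
# value (here, a commodity price). Toy code only: real systems must
# also handle signing, disputes, and oracle failure.

from typing import Callable, Optional

def make_futures_contract(
    strike: float,
    oracle: Callable[[], Optional[float]],
) -> Callable[[], str]:
    """Pays the buyer if the oracle-reported price exceeds the strike."""
    def settle() -> str:
        price = oracle()     # query the agreed-upon outside feed
        if price is None:
            return "pending"  # oracle has not reported yet
        return "pay buyer" if price > strike else "pay seller"
    return settle

# A stub oracle standing in for a monitored, verified price feed.
reported: list = []
settle = make_futures_contract(
    strike=100.0,
    oracle=lambda: reported[-1] if reported else None,
)

print(settle())         # oracle silent -> contract stays pending
reported.append(105.0)  # feed reports a verified price
print(settle())         # condition met -> buyer is paid
```

The design point is that the contract itself never inspects the world; it trusts whatever the mutually agreed oracle reports, which is exactly where disputes over “what counts as a chicken” would resurface.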

Pasquale cites the SEC’s decision to force providers of asset-backed securities to file “downloadable source code in Python.” AmeriCredit responded by saying it “should not be forced to predict and therefore program every possible slight iteration of all waterfall payments” because its business is “automobile loans, not software development.” AmeriCredit does not seem to be familiar with machine learning. There is a case for making all financial transactions and agreements explicit on an immutable platform like blockchain. There is also a case for making all such code open source, ready to be scrutinized by those with the talents to do so or, in the near future, by those with access to software that can quickly turn it into plain English, Spanish, Mandarin, Bantu, Etruscan, etc.

During the fallout of the 2008 crisis, some homeowners noticed that the entities on their foreclosure paperwork did not match the paperwork they received when their mortgages were sold to a trust. According to Dayen (2010), many banks did not fill out the paperwork at all. This seems to be a rather forceful argument in favor of the incorporation of synthetic agents into law practices. Like many futurists, Pasquale foresees an increase in “complementary automation.” Humans cooperating with chess engines can still trounce the best standalone engines – a commonly cited example of how two (very different) heads are better than one. Yet going to a lawyer is not like visiting a tailor. People, including fairly delusional ones, know if their clothes fit; they do not know whether they have received expert counsel or not – although the outcome of the case might give them a hint.

Pasquale concludes his paper by asserting that “the rule of law entails a system of social relationships and legitimate governance, not simply the transfer and evaluation of information about behavior.” This is closely related to the doubts expressed at the beginning of the piece about the usefulness of data sets in training legal AI. He then states that those in the legal profession must handle “intractable conflicts of values that repeatedly require thoughtful discretion and negotiation.” This appears to be the legal equivalent of epistemological mysterianism. It stands on still shakier ground than its analogue, because it is clear that laws are, or should be, rooted in some set of criteria agreed upon by the members of a given jurisdiction. Shouldn’t the rulings of lawmakers and the values that inform them be at least partially quantifiable? There are efforts, like EthicsNet, which are trying to prepare datasets and criteria to feed machines in the future (because they will certainly have to be fed by someone!). There is no doubt that the human touch in law will not be supplanted soon, but the question is whether our intuition should be exalted as a guarantee of fairness or recognized as a hindrance to moving beyond a legal system bogged down by the baggage of human foibles.

Adam Alonzi is a writer, biotechnologist, documentary maker, futurist, inventor, programmer, and author of the novels A Plank in Reason and Praying for Death: A Zombie Apocalypse. He is an analyst for the Millennium Project, the Head Media Director for BioViva Sciences, and Editor-in-Chief of Radical Science News. Listen to his podcasts here. Read his blog here.

Five Tangible Steps We Can Take in 2018 to Reach Indefinite Longevity – Article by Bobby Ridge


Bobby Ridge


You may have finally just discovered this most important conversation, or you may be a transhumanist veteran. When I research others’ attempts to articulate Transhumanism, I observe that they tend either to discuss the intangible philosophy or to offer an hour-plus of hard science. The purpose of this article is to provide 5 tangible ways you can get involved with Transhumanism right now and take real steps toward extending your healthy lifespan.

I am not a doctor, so I am not providing medical advice. It is recommended that any person considering significant health-related decisions take into account his or her personal circumstances and consult a knowledgeable medical professional. I am merely a normal guy providing some salient information I have discovered during my Transhumanist journey. Here are the 5 tangible ways some of us might, in the appropriate circumstances, extend our healthy lifespans right now:

  1. Whole genome sequencing
  2. Stem-cell therapy
  3. Sign the international ban on AI weaponry
  4. Become a member of Transhumanist organizations
  5. Cryonics

There are other tangible ways to extend our healthy lifespans right now, but these are ones I have done a significant amount of research into. Transhumanism and the Singularity will transform every single person’s life whether they want this transformation or not, so constant research is essential to being as prepared as possible for the next few decades. Let’s take a closer look at those five tangible ways to extend our healthy lifespans right now.

  • Whole Genome Sequencing

Getting our whole genomes sequenced is a great step toward increasing our healthy lifespans for many reasons. One reason is how cheap it has become relative to the recent past. It cost $3 billion to have one’s genome sequenced in 2001. Since genome sequencing is an exponentially advancing technology, as Ray Kurzweil predicted, by 2015 it cost only $1,000 to sequence someone’s genome [1]. Most importantly, getting our whole genomes sequenced helps prevent diseases. I am not talking about the type of prevention where your parents and grandparents had heart attacks, so now you must eat a specific cereal to prevent a heart attack. I am talking about complete prevention: e.g., the scientists at Human Longevity Inc. (HLI) only accept into their testing program people who are considered healthy by contemporary modern medicine. Even though members are considered “healthy,” HLI still finds some sort of hidden pathology in 40% of people tested. With their advanced scanning machines, whole-genome-sequencing process, and future application of machine learning to all their data, they are making medicine proactive, preventive, personalized, and predictive, rather than reactive, disease-management-oriented, generalized, and costly, as the contemporary healthcare system is.
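The pace of that price decline can be made concrete. Taking the cited figures at face value ($3 billion in 2001, $1,000 in 2015) and assuming a smooth exponential decline, a quick back-of-the-envelope calculation gives the implied halving time of sequencing cost:

```python
import math

# Implied halving time of genome-sequencing cost, using the figures
# cited above ($3 billion in 2001, $1,000 in 2015). This is a
# back-of-the-envelope fit assuming smooth exponential decline,
# not a claim about the actual year-by-year price curve.
cost_2001, cost_2015 = 3e9, 1e3
years = 2015 - 2001

halvings = math.log2(cost_2001 / cost_2015)  # number of times the cost halved
halving_time = years / halvings              # years per halving

print(f"{halvings:.1f} halvings over {years} years")
print(f"cost halved roughly every {halving_time:.2f} years")
```

On these numbers the cost halved more than 21 times in 14 years, i.e., roughly every 8 months: far faster than Moore’s-law doubling, which is why sequencing is often cited as the canonical exponentially advancing technology.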

Figure 1. HLI transforming the modern health system.  [2]

With this incredible approach, HLI has had a 100% diagnosis rate, and all their findings have been successfully treated. They have whole lists of stories, such as that of a doctor, considered healthy, who came in to be scanned by HLI; a 5cm tumor was found under his tailbone, and a week later he had it removed. He was later told that if he had not had it removed within 6 months, it would have metastasized [3]. HLI offers two health packages, the Health Nucleus X for $4,950.00 [4] and the Health Nucleus X Platinum for $25,000.00 [5]. This may sound expensive, but with the exponential decrease in genome-sequencing cost and with further democratization of HNX clinics, the price will drop quickly. But there is no time to waste, because contemporary health statistics are not on our side. We may feel healthy and even be told we are healthy by a hospital, but getting our whole genomes sequenced is how we really know.

Here is a link to HLI: https://www.humanlongevity.com/

  • Stem-Cell Therapy

Stem cells are so exciting that if you don’t feel excitement after reading this part of the article, you did not fully comprehend it. According to Daniel Kota, “We have reached a critical point. We see a massive number of different stem-cell treatments out there. The only thing between stem cell therapies and us is regulatory agencies, such as the FDA in the US. But the number of stem-cell treatments out there is getting so overwhelming that some are just falling through the cracks” [6]. Not only can stem-cell therapy provide a massive number of treatments, but it may even have age-reversal effects. Before I explain these effects in more detail, note that the stem-cell therapies I am going to describe have not yet gone through FDA approval in the US, so pursuing them would require you to be a medical tourist for now. Thus, a lot of research and many discussions with your physician are essential before actively seeking any sort of therapy. As we age, the number and function of mesenchymal stem cells (MSCs) decrease. MSCs are major modulators of our health and homeostasis. It is also important to note that MSCs are not the controversial embryonic stem cells (ESCs). MSCs are not only more ethical to use, because the extraction process does not require the destruction of a human embryo, but research has found MSCs to be significantly better for human treatments than ESCs. So, back to how MSCs help with age-related diseases.

Figure 2. The number of MSCs in our bodies decreases over time.

As shown in Figure 2, when we are born, we have a certain number of MSCs, and they divide at their fastest rate. The number of cells and their rate of division decrease over time. So, let’s say you’re 80 years old and you need 10,000 MSCs to recover from some pathology, but you only have 1,000 – clearly, the 80-year-old will not have enough MSCs [7]. So what researchers and doctors are doing is simply placing stem cells back into the body: e.g., at the Stem Cell Institute, a medical clinic in Panama City, Panama, they inject stem cells into the specific area of bodily damage, such as a hip fracture. They also intravenously inject stem cells into patients. As a matter of fact, this is what Mel Gibson’s father did. He was 92 years old and on his deathbed; the genius Mel Gibson had his personal physician talk with Dr. Neil Riordan, and soon thereafter, Mel’s dad was in Panama getting stem-cell injections in his hip and intravenously. Now his father is thriving at 99 years of age! [8] That is amazing! There are a large number of similar stories, ranging from curing complete quadriplegia to low-functioning children with autism becoming high-functioning, even to a point where it is barely noticeable that a child has autism. Dr. Riordan, the founder of the Stem Cell Institute, wrote an amazing book called Stem Cell Therapy: A Rising Tide: How Stem Cells Are Disrupting Medicine and Transforming Lives, which explains MSC therapy in a very easy-to-understand and passionate manner. I highly recommend it. Prices for stem-cell treatment depend on the specific pathology, but general intravenous injections cost around $20,000.

Here is a link to the Stem Cell Institute: https://www.cellmedicine.com/

Here is a link to Dr. Neil Riordan speaking: https://www.youtube.com/watch?v=cLKOddCPs9I

  • Sign the International Ban on AI Weaponry

The only other epochs as important as the next two decades of artificial intelligence (AI) development were when life first began 4.2 billion years ago and when the universe began 13.8 billion years ago. According to Andrew Ng, “AI is the new electricity. About a hundred years ago, we did not have widespread access to electricity in the US, but with the rise of electricity, it transformed every industry. Agriculture was transformed through the rise of refrigeration, communications was transformed by telegraph, manufacturing was transformed by the electric motor, healthcare was transformed. In all these industries you have a hard time imagining how to run these things without electricity. AI technology, especially deep learning, has now advanced to a point where we see a surprisingly clear path for it to also transform every industry” [9]. Similar statements have been made by many of the tech titans, e.g., Sundar Pichai, Jeff Bezos, and Elon Musk [10], [11], [12]. Like every technology, AI can be used for either good or evil. The amount of good AI can bring humanity is probably boundless: it will help us cure all diseases, personalize teaching to children, drive our cars, take away our soul-draining jobs, and SO MUCH MORE. To make an ideal AI future come to fruition, we must properly steer this most powerful of technological developments, because the harm AI could bring humanity, if misapplied, is an existential risk, possibly worse. AI weapons are already being successfully built, e.g., the Kalashnikov Bureau weapons-manufacturing company announced that it has recently developed an unmanned ground vehicle (UGV) which, in field tests, has already demonstrated better-than-human-level performance. China recently began field-testing cruise missiles with AI and autonomous capabilities, and a few companies are getting very close to AI autopilots able to control the flight envelope at hypersonic speeds [13].
According to Reuters, “The Pentagon’s fiscal 2017 budget request will include $12 billion to $15 billion to fund war gaming, experimentation and the demonstration of new technologies aimed at ensuring a continued military edge over China and Russia” [14]. Vladimir Putin has publicly announced that “Artificial intelligence is the future. Not only for Russia, but for all of humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere, will become the ruler of the world” [15]. The government of China has laid out a timeline for winning this AI arms race: to catch up with the US AI industry by 2020, to pull ahead of the US by 2025, and to dominate the AI industry by 2030. One tangible way to help prevent an AI arms race is to sign the international ban on AI weaponry.

Here is a link to the open letter supporting an international ban on AI weaponry: https://futureoflife.org/open-letter-autonomous-weapons/

  • Transhumanist organizations

It almost seems as if every week a new Transhumanist organization emerges. I guess people are finally figuring out how essential transhumanist principles are for the future of humanity. I recommend searching Wikipedia, which maintains a good list of Transhumanist organizations.

Becoming a member of the United States Transhumanist Party (USTP) and of the other transhumanist parties and organizations is a great way to stay informed about all the exponentially accelerating science and technology [1]. The USTP’s central tenet is to place science, health, and technology at the forefront of American politics. The accelerating technological advancement now underway will cause enormous change, yet for some reason our political leaders mainly focus on giving the American people momentary uplifting feelings and on advocating a return to the old days. Instead, they should be putting all their resources into educating and preparing Americans for the massive transformation we are all about to witness in the next few decades. Please become a member of the USTP and help us get the word out about Transhumanism and the Singularity for the sake of all our lives.

Here is the link to become a member of the USTP: http://transhumanist-party.org/membership/

Here is a link to H+Pedia’s list of Transhumanist political organizations: https://hpluspedia.org/wiki/Transhumanist_political_organisations

Here is a link to Wikipedia’s list of Transhumanist organizations: https://en.wikipedia.org/wiki/Category:Transhumanist_organizations

  • Cryonics

Cryonics has been around for a relatively long time. The Cryonics Institute was founded in 1976, yet cryogenically preserving one’s body still has not been embraced by the mainstream. One would think that by 2018 we would have caught on. It is sad to think of all the millions of people who missed the chance to be preserved over the last 40 years. It is very important to have yourself and your loved ones cryopreserved after death, because doing so will decrease the taboo, push the advancement of the technology forward, and, most importantly, give you the chance to live indefinitely! Many may think it is too expensive, and prices do range anywhere from $28,000 to $200,000, but choosing to make monthly payments makes the price very affordable [16], [17].

Here is a link to a cryonics organization – the Alcor Life Extension Foundation: https://alcor.org/

Here is a link to another cryonics organization – the Cryonics Institute: http://www.cryonics.org/

In the great Transhumanist game, the human species must unite once and for all to survive the 21st century. Ray Kurzweil gave us a map to the greatest treasure, a treasure that will buy more than happiness. It will buy us eternal love, beautiful augmentation, indefinite longevity, and maybe even utopia. It is up to us to steer this ship in the right direction and make sure we stay afloat while on this dangerous journey. I sincerely hope this information saves as many lives as possible.

Bobby Ridge is the Secretary of the United States Transhumanist Party. Read more about him here.

References

  1. Kurzweil, Ray. “Ray Kurzweil — Immortality By 2045 / Global Future 2045 Congress’2013.” YouTube, 2045 Initiative, 18 Jan. 2015. https://www.youtube.com/watch?v=qlRTbl_IB-s
  2. Venter, Craig. “Dr. Craig Venter – How We Will Extend Our Lives: From Synthetic Life to Human Longevity.” YouTube.com, The Artificial Intelligence Channel, 1 Oct. 2017, https://www.youtube.com/watch?v=OfzfI2dvp3s
  3. Venter, Craig. “MIS2017: Genomics, Advanced Imaging, And The Future Of Medicine.” YouTube.com, Cleveland Clinic, 8 Nov. 2017, https://www.youtube.com/watch?v=nkUMjh1GjKs
  4. Health Nucleus X. Human Longevity, Inc. 2013. Web. 4 May 2018.
  5. Health Nucleus X Platinum. Human Longevity, Inc. 2013. Web. 4 May 2018.
  6. Kota, Daniel. “Promises and Dangers of Stem Cell Therapies | Daniel Kota | TEDxBrookings.” YouTube.com, TEDx Talks., 28 Nov. 2017. https://www.youtube.com/watch?v=hsFEcBwO8O4
  7. Riordan, Neil H. Stem Cell Therapy: A Rising Tide: How Stem Cells Are Disrupting Medicine and Transforming Lives. 2017. Print.
  8. Riordan, Neil. “Golden Cells and Mesenchymal Molecules – Neil Riordan, PhD.” YouTube.com, Riordan Clinic. 15 Jan. 2018. https://www.youtube.com/watch?v=cLKOddCPs9I
  9. Ng, Andrew. “Andrew Ng – The State of Artificial Intelligence.” YouTube.com, The Artificial Intelligence Channel, 15 Dec. 2017. https://www.youtube.com/watch?v=NKpuX_yzdYs
  10. Pichai, Sundar. “Google CEO Sundar Pichai: A.I. More Important To Humanity Than Fire And Electricity | MSNBC.” YouTube.com, MSNBC, 29 Jan. 2018. https://www.youtube.com/watch?v=jxEo3Epc43Y
  11. Bezos, Jeff. “Amazon’s Jeff Bezos: Lessons in Management at I.A. Gala 2017.” YouTube.com, Expovista TV, 8 May 2017. https://www.youtube.com/watch?v=fpDUiDQigO8
  12. Musk, Elon. “Elon Musk, National Governors Association, July 15, 2017.” YouTube.com, WordsmithFL, 16 July 2017. https://www.youtube.com/watch?v=b3lzEQANdHk
  13. Husain, Amir. “Amir Husain: “The Sentient Machine: The Coming Age of Artificial Intelligence.”” YouTube.com. Talks at Google. 31 Jan. 2018. https://www.youtube.com/watch?v=JcC5OV_oA1s
  14. Conn, Ariel. “Pentagon Seeks $12-$15 Billion for AI Weapons Research.” Future of Life Institute, 15 Dec. 2015. Web. 4 May 2018.
  15. Putin, Vladimir. “Whoever leads in AI will rule the world! – Putin to Russian children on Knowledge Day.” YouTube.com, Russia Insight, 4 Sep. 2017. https://www.youtube.com/watch?v=2kggRND8c7Q
  16. “Cryopreservation is far more affordable than you might think.” Cryonics Institute. Web. 4 May 2018.
  17. “Alcor Cryopreservation Agreement – Schedule A Required Costs and Cryopreservation Fund Minimums.” Alcor Life Extension Foundation. Web. 4 May 2018.