
Review of Frank Pasquale’s “A Rule of Persons, Not Machines: The Limits of Legal Automation” – Article by Adam Alonzi



Adam Alonzi


From the beginning, Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information, contends in his new paper “A Rule of Persons, Not Machines: The Limits of Legal Automation” that software, given its brittleness, is ill-suited to the complexities of taking a case through court and reaching a verdict. As he understands it, an AI cannot deviate far from the rules laid down by its creator. This assumption, which is not quite right even at the present time, only slightly tinges an otherwise erudite, sincere, and balanced treatment of the topic. He does not show much faith in the use of past cases to create datasets for the next generation of paralegals, automated legal services, and, in the more distant future, lawyers and jurists.

Lawrence Zelenak has noted that when taxes were filed entirely on paper, provisions were limited so as not to impose unreasonably irksome nuances on the average person. Tax-return software has eliminated this “complexity constraint.” He goes on to state that without it, the laws, and the software that interprets them, are akin to a “black box” for those who must abide by them. William Gale has said taxes could easily be computed for “non-itemizers.” In other words, the government could use information it already has to present a “bill” to this class of taxpayers, saving time and money for all parties involved. However, simplification does not always align with everyone’s interests. TurboTax, whose business is built entirely on helping ordinary people navigate the labyrinth that is the American federal income tax, saw such proposals as a threat to its business model and put together a grassroots campaign to fight them. More than just another example of a business protecting its interests, this is an ominous foreshadowing of an escalation scenario that will play out in many areas if and when legal AI becomes sufficiently advanced.

Pasquale writes: “Technologists cannot assume that computational solutions to one problem will not affect the scope and nature of that problem. Instead, as technology enters fields, problems change, as various parties seek to either entrench or disrupt aspects of the present situation for their own advantage.”

What he is referring to here, in everything but name, is an arms race. The vastly superior computational powers of robot lawyers may make the already perverse incentive to write ever more Byzantine rules still more attractive to bureaucracies and lawyers. The concern is that the clauses and dependencies hidden within contracts will quickly multiply, making them far too detailed even for professionals to make sense of in a reasonable amount of time. Because this sort of software may become a necessary accoutrement in most or all legal matters, demand for it, or for professionals with access to it, will expand greatly at the expense of those unwilling or unable to adopt it. This, though Pasquale only hints at it, may lead to greater imbalances in socioeconomic power. On the other hand, he does not consider the possibility of bottom-up open-source (or state-led) efforts to create synthetic public defenders. While this may seem idealistic, it is fairly clear that the open-source model can compete with, and in some areas outperform, proprietary competitors.

It is not unlikely that, within subdomains of law, an array of arms races will arise between synthetic intelligences. If a lawyer knows its client is guilty, should it squeal? This will change the way jurisprudence works in many countries, but it would seem unwise to program any robot to knowingly lie about whether a crime, particularly a serious one, has been committed – including by omission. If it is fighting against a punishment it deems overly harsh for a given offense – trespassing to get a closer look at a rabid raccoon, say, or unintentional jaywalking – should it maintain its client’s innocence as a means to an end? A moral consequentialist, seeing that no harm was done (or, in some instances, could possibly have been done), may persist in pleading innocent. A synthetic lawyer may be more pragmatic than deontological, but it is not entirely correct, and certainly shortsighted, to (mis)characterize AI as capable only of blindly following a set of instructions, like a Fortran program made to compute the nth member of the Fibonacci sequence.

Human courts are rife with biases: judges hand down more lenient sentences after taking a lunch break (65% more likely to grant parole – nothing to sneeze at), attractive defendants are viewed favorably by unwashed juries and trained jurists alike, and prejudices of all kinds against various “out” groups can tip the scales toward a guilty verdict or a harsher sentence. Why, then, would someone have an aversion to the introduction of AI into a system that is clearly ruled, in part, by the quirks of human psychology?

DoNotPay is an app that helps drivers fight parking tickets. It allows drivers with legitimate medical emergencies to gain exemptions. So, as Pasquale says, not only will traffic management be automated, but so will appeals. However, as he cautions, a flesh-and-blood lawyer takes responsibility for bad advice. DoNotPay not only fails to take responsibility, but “holds its client responsible for when its proprietor is harmed by the interaction.” There is little reason to think machines would do a worse job of adhering to privacy guidelines than human beings – unless, as in the example of a machine ratting on its client, some overriding principle compels them to divulge information to protect others from harm, say because a client’s diagnosis makes that client a danger in his personal or professional life. Is the client responsible for the mistakes of the robot it has hired? Should the blame not fall upon the firm that provided the service?

Making a blockchain that could handle the demands of processing purchases and sales, one that takes into account all the variables relevant to making expert judgments on a matter, is no small task. As the infamous disagreement over the meaning of the word “chicken” in Frigaliment Importing Co. v. B.N.S. International Sales Corp. illustrates, the definition of what anything is can be a bit puzzling. The need to maintain a decent reputation in order to keep making sales is a strong incentive against knowingly cheating customers, but although cheating tends to be the exception for this reason, it is still necessary to protect against it. As one official at the Commodity Futures Trading Commission put it, “where a smart contract’s conditions depend upon real-world data (e.g., the price of a commodity future at a given time), agreed-upon outside systems, called oracles, can be developed to monitor and verify prices, performance, or other real-world events.”
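The oracle pattern the CFTC official describes can be sketched in a few lines. The following is a minimal illustration only, not any real platform’s API; the class names, the symbol, and the settlement rule are all invented for the example. The essential point is that the contract itself never inspects the world directly – it consults the agreed-upon oracle.

```python
from dataclasses import dataclass


@dataclass
class PriceOracle:
    """Stands in for an agreed-upon outside system that reports real-world prices."""
    quotes: dict  # symbol -> latest verified price

    def price(self, symbol: str) -> float:
        return self.quotes[symbol]


@dataclass
class FuturesContract:
    """Pays the holder if the oracle-reported price meets the agreed condition."""
    symbol: str
    strike: float
    notional: float

    def settle(self, oracle: PriceOracle) -> float:
        # The contract trusts the oracle rather than observing the market itself.
        observed = oracle.price(self.symbol)
        return self.notional if observed >= self.strike else 0.0


oracle = PriceOracle(quotes={"WHEAT": 105.0})
contract = FuturesContract(symbol="WHEAT", strike=100.0, notional=5000.0)
payout = contract.settle(oracle)  # 105.0 >= 100.0, so the full notional is paid
```

The fragility Pasquale worries about lives entirely in the oracle: if the parties dispute what “the price of wheat” means (echoes of “chicken”), no amount of on-chain code resolves it.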

Pasquale cites the SEC’s decision to force providers of asset-backed securities to file “downloadable source code in Python.” AmeriCredit responded by saying it “should not be forced to predict and therefore program every possible slight iteration of all waterfall payments” because its business is “automobile loans, not software development.” AmeriCredit does not seem to be familiar with machine learning. There is a case for making all financial transactions and agreements explicit on an immutable platform like a blockchain. There is also a case for making all such code open source, ready to be scrutinized by those with the talents to do so or, in the near future, by those with access to software that can quickly turn it into plain English, Spanish, Mandarin, Bantu, Etruscan, etc.
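To see why the SEC thought waterfall logic could be expressed as code at all, recall what a payment waterfall is: collected cash flows to investor tranches in strict priority order, with senior tranches made whole before juniors see a cent. A toy sketch follows; the tranche names and amounts are invented, and real deals carry far more conditions (triggers, reserve accounts, fees) – which is precisely the “every possible slight iteration” AmeriCredit complained about.

```python
def run_waterfall(cash: float, tranches: list[tuple[str, float]]) -> dict:
    """Distribute available cash to tranches in priority order.

    tranches: (name, amount_owed) pairs, most senior first.
    Returns the amount each tranche actually receives.
    """
    paid = {}
    for name, owed in tranches:
        payment = min(cash, owed)  # senior tranches are made whole first
        paid[name] = payment
        cash -= payment
    return paid


# A deal collects 130 but owes 150 across three tranches:
distribution = run_waterfall(
    130.0,
    [("Senior A", 100.0), ("Mezzanine B", 40.0), ("Equity", 10.0)],
)
# Senior A is paid in full, Mezzanine B takes the shortfall, Equity gets nothing.
```

A filing of even this simple form would let an investor recompute exactly who gets paid under any cash scenario, which is the transparency the SEC was after.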

During the fallout of the 2008 crisis, some homeowners noticed that the entities on their foreclosure paperwork did not match the paperwork they had received when their mortgages were sold to a trust. According to Dayen (2010), many banks did not fill out the paperwork at all. This seems a rather forceful argument in favor of incorporating synthetic agents into law practices. Like many futurists, Pasquale foresees an increase in “complementary automation.” Humans cooperating with chess engines can still trounce the best engines playing alone – a commonly cited example of how two (very different) heads are better than one. Yet going to a lawyer is not like visiting a tailor. People, including fairly delusional ones, know whether their clothes fit. But they do not know whether they have received expert counsel – although the outcome of the case might give them a hint.

Pasquale concludes his paper by asserting that “the rule of law entails a system of social relationships and legitimate governance, not simply the transfer and evaluation of information about behavior.” This is closely related to the doubts expressed at the beginning of the piece about the usefulness of datasets in training legal AI. He then states that those in the legal profession must handle “intractable conflicts of values that repeatedly require thoughtful discretion and negotiation.” This appears to be the legal equivalent of epistemological mysterianism. It stands on still shakier ground than its analogue, because it is clear that laws are, or should be, rooted in some set of criteria agreed upon by the members of a given jurisdiction. Shouldn’t the rulings of lawmakers and the values that inform them be at least partially quantifiable? There are efforts, like EthicsNet, which are trying to prepare datasets and criteria to feed machines in the future (because they will certainly have to be fed by someone!). There is no doubt that the human touch in law will not be supplanted soon, but the question is whether our intuition should be exalted as a guarantee of fairness or seen as a hindrance to moving beyond a legal system bogged down by the baggage of human foibles.

Adam Alonzi is a writer, biotechnologist, documentary maker, futurist, inventor, programmer, and author of the novels A Plank in Reason and Praying for Death: A Zombie Apocalypse. He is an analyst for the Millennium Project, the Head Media Director for BioViva Sciences, and Editor-in-Chief of Radical Science News. Listen to his podcasts here. Read his blog here.

Transhumanism: Contemporary Issues – Presentation by Gennady Stolyarov II at VSIM:17 Conference in Ravda, Bulgaria



Gennady Stolyarov II


Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, outlines common differences in perspectives in three key areas of contemporary transhumanist discourse: artificial intelligence, religion, and privacy. Mr. Stolyarov follows his presentation of each issue with the U.S. Transhumanist Party’s official stances, which endeavor to resolve commonplace debates and find new common ground in these areas. Watch the video of Mr. Stolyarov’s presentation here.

This presentation was delivered by Mr. Stolyarov on September 14, 2017, virtually to the Vanguard Scientific Instruments in Management 2017 (VSIM:17) Conference in Ravda, Bulgaria. Mr. Stolyarov was introduced by Professor Angel Marchev, Sr. – the organizer of the conference and the U.S. Transhumanist Party’s Ambassador to Bulgaria.

After his presentation, Mr. Stolyarov answered questions from the audience on the subjects of the political orientation of transhumanism, what the institutional norms of a transhuman society would look like, and how best to advance transhumanist ideas.

Download and view the slides of Mr. Stolyarov’s presentation (with hyperlinks) here.

Listen to the Transhumanist March (March #12, Op. 78), composed by Mr. Stolyarov in 2014, here.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form here.

Become a Foreign Ambassador for the U.S. Transhumanist Party. Apply here.

A Word on Implanted NFC Tags – Article by Ryan Starr



Ryan Starr


TL;DR – CALM DOWN. No one is forcing you to be chipped and you can’t be tracked or hacked.

So, I’ve seen a lot of people lose their minds over a Wisconsin company, Three Square Market (32Market), implanting NFC tags in its employees. Everyone just stop and take a deep breath. You likely have no actual understanding of what the tag is or how it works, so let me tell you. I got one last year – an xNT, the original implantable NFC tag, from the company Dangerous Things (www.dangerousthings.com). It is exactly the same as what Three Square Market is offering its employees. I know what it is and is not capable of doing. But let’s back up for a second.

First, the company is not forcing any employee to get it. Several companies around the world have offered the same thing (no, they are not the first), and no one has ever been forcibly implanted. Period. EVERYONE I have come across in the biohacking community is vocal about this NOT BEING MANDATORY. It is a choice, and we want to keep it that way. Furthermore, there is a growing political movement that specifically addresses concerns about bodily autonomy and seeks to prevent implants from becoming mandatory.

Now, to the most common concerns I’ve seen:

Can your tag be tracked?

NO. It is not a GPS device or even an active piece of electronics. It is a passive chip and antenna that pulls power from the device used to read it. The tag is the size of a grain of rice, and even if we wanted to cram active electronics in there, we can’t.

Can your tag be hacked?

NO. As I said above, these are passive devices that require power from a reader. To read one, the device essentially has to be placed directly on your tag (typically implanted in the hand) and held still there for several seconds. Also, some readers don’t read very well because of antenna differences. If someone really wanted to steal your stored data, they would have to physically attack you, restrain you, and then read your tag. If that were the case, you would have bigger problems than someone reading your 800 bytes of information. But in the very unlikely event that someone did try to do that to you, don’t worry, because you can password-protect your tag.

So what are they good for?

PRIVACY AND SECURITY. Yes, you read that correctly. When I first saw NFC tags being implanted, I had many of the same privacy concerns that many of you do. But then I started actually researching the technology. NFC tags (implanted or not) can be used to lock and unlock devices and are more secure than a password or a fingerprint. Of course, implanting one means you’ll never lose it, and it will never get stolen. You can unlock your Android phone; unlock doors, safes, and padlocks (with specific NFC-enabled hardware); and, if you’re particularly good with electronics, rig up many Arduino- or Pi-based devices that read and respond to your tag.
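The door-and-padlock use case boils down to a simple check: the reader powers the passive tag, reads an identifier or stored secret, and compares it against an allowlist before driving the lock. Here is a rough sketch of the controller-side logic; the `read_tag_uid` function is a placeholder for whatever your NFC reader’s library actually provides, and the UID shown is made up.

```python
# UIDs of tags allowed to open the lock (example value, not a real tag)
AUTHORIZED_UIDS = {"04:A2:24:5B:91:6E:80"}


def read_tag_uid() -> str:
    """Placeholder for a real reader library, which would block
    until a tag is within range (roughly an inch) and powered up."""
    return "04:A2:24:5B:91:6E:80"


def should_unlock(uid: str) -> bool:
    # The tag is passive: it only answers while the reader is powering it,
    # so this check can only ever run with the tag right at the antenna.
    return uid in AUTHORIZED_UIDS


if should_unlock(read_tag_uid()):
    print("unlock")  # on a real Arduino/Pi build, drive a relay or servo here
```

Note that a bare UID check is the weakest form of this scheme; tags that support password protection or challenge-response make the stored secret much harder to clone.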

There are other cool things you can do. You can store links, digital business cards, Bitcoin wallets, or just generic text. But also understand that this technology is fairly new, and the associated hardware is even newer. This is ground-level development, and because of that we can steer it to ensure privacy and safety for the user. There is no greedy corporation running the industry – only passionate hobbyists who are just as concerned about privacy as you are.

If you want more information, I highly suggest asking someone who actually has an NFC tag, or visiting www.dangerousthings.com.

Ryan Starr (R. Nicholas Starr) is the leader of the Transhumanist Party of Colorado and founder of the Transhumanists of the Sierras.

U.S. Transhumanist Party Support for H.R. 1868, the Restoring American Privacy Act of 2017



Gennady Stolyarov II


The United States Transhumanist Party and Nevada Transhumanist Party support H.R. 1868, the Restoring American Privacy Act of 2017, proposed by Rep. Jacky Rosen of Henderson, Nevada.

This bill, if enacted into law, would undo the power recently granted by S.J. Res. 34 for regional-monopoly Internet Service Providers (ISPs) to sell individuals’ private data – including browsing histories – without those individuals’ consent. For more details, read Caleb Chen’s article on Privacy News Online, “Congresswoman Rosen introduces Restoring American Privacy Act of 2017 to reverse S.J. Res. 34”.

Section I of the U.S. Transhumanist Party Platform states, “The United States Transhumanist Party strongly supports individual privacy and liberty over how to apply technology to one’s personal life. The United States Transhumanist Party holds that each individual should remain completely sovereign in the choice to disclose or not disclose personal activities, preferences, and beliefs within the public sphere. As such, the United States Transhumanist Party opposes all forms of mass surveillance and any intrusion by governmental or private institutions upon non-coercive activities that an individual has chosen to retain within his, her, or its private sphere. However, the United States Transhumanist Party also recognizes that no individuals should be protected from peaceful criticism of any matters that those individuals have chosen to disclose within the sphere of public knowledge and discourse.”

Neither governmental nor private institutions – especially private institutions with coercive monopoly powers granted to them by laws barring or limiting competition – should be permitted to deprive individuals of the choice over whether or not to disclose their personal information.

Individuals’ ownership over their own data and sovereignty over whether or not to disclose any browsing history or other history of online visitation to external entities are essential components of privacy, and we applaud Representative Rosen for her efforts to restore these concepts within United States federal law.

A Transhumanist Opinion on Privacy



Ryan Starr


Privacy is a favorite topic of mine. Maintaining individual privacy is a crucial element of a free society, yet there are many who want to invade it for personal or political gain. As our digital fingerprint becomes part of our notion of self, how do we maintain our personal privacy on an inherently impersonal network of data? Where do we draw the line on what is private, and how do we enforce it? These questions are difficult to answer from a short-term perspective. However, if we look further into the probable future, we can create a plan that helps protect the privacy of citizens today and for generations to come. By taking into account the almost certain physical merger of human biology and technology, the answer becomes clear: our electronic data should be treated as part of our bodily autonomy.

The explosive success of social media has shown that we already view ourselves as partly digital entities. Where we go, what we eat, and who we are with are proudly displayed in cyberspace for eternity. Beyond that, we store unique data about ourselves “securely” on the internet. Bank accounts, tax returns, even medical information are filed away on a server somewhere and specifically identified as us. It is no longer solely what we choose to let people see. We are physical and digital beings, and it is time we view these two sides as one before we take the next step into enhanced humanity.

Subdermal storage of electronic data is here, and its capacity will expand rapidly. Soon we will be able to store far more than just the access codes to our doors. It is hard to speculate exactly what people will choose to keep stored this way, and there may even come a time when what we see and hear is automatically stored this way. But before we go too far into what will be stored, we must understand how this information is accessed at present. These implants are currently based on NFC technology. Near-Field Communication is a method of storing and transmitting data wirelessly within a very short distance. Yes, “wireless” is the key word. It means that if I can connect my NFC tag to my smartphone just by waving my hand close to it (usually within an inch or so), then technically someone else can, too. While current antenna limitations and the discreetness of where a person’s tag is implanted make for a highly secure method of storage, advances in technology will eventually make it easier to access the individual. This is why it is urgent that we develop a streamlined policy for privacy.

The current transhumanist position is that personally collected intellectual property, whether stored digitally or organically, is the property of the individual and, as such, should be protected from unauthorized search and download. The current platform also states that each individual has the freedom to enhance his or her own body as desired, so long as it doesn’t negatively impact others. However, it does not specify what qualifies as a negative impact or how to prevent it. Morphological freedom is a double-edged sword. A person can enhance their ability to access information about themselves, but they can also use that ability to access others. It is entirely feasible that enhancements will be created that allow one person to hack another. And collecting personal data isn’t the only risk. What if the hacking victim has an artificial heart or an implanted insulin pump? The hacker could potentially access the code the medical device is running and change or delete it, ultimately leading to death. Another scenario might be hacking into someone’s enhanced sensory abilities: much as in the novel Ender’s Game, one person could access another to see what they see. This ability could be abused in countless ways, ranging from government surveillance to sexual voyeurism. While this is still firmly within the realm of science fiction, a transhuman society will need to create laws to protect against these person-to-person invasions of privacy.

Now let’s consider mass data collection. Proximity beacons could easily and cheaply be scattered across stores and cities to function as passive collection points much like overhead cameras are today. Retail stands to gain significantly from this technology, especially if they are allowed access to intimate knowledge about customers. Government intelligence gathering also stands to benefit from this capability. Levels of adrenaline, dopamine, and oxytocin stored for personal health analysis could be taken and paired with location data to put together an invasive picture of how people are feeling in a certain situation. Far more can be learned and exploited when discreetly collected biodata is merged with publicly observable activity.

In my mind, these are concerns that should be addressed sooner rather than later. If we take the appropriate steps to preserve personal privacy in all domains, we can make a positive impact that will last into the 22nd century.

***
Ryan Starr is the leader of the Transhumanist Party of Colorado. This article was originally published on his blog, and has been republished here with his permission.