
State of AI 2020 – Article by Pavel Ilin

Pavel Ilin


This summary is based on the State of AI Report 2020, crafted by Nathan Benaich and Ian Hogarth.

The AI industry is very diverse in its applications, and it’s going through a transformation from the magic-wand stage to the plateau of adequate development. Let’s take a look at what is happening in the industry.

Research

We haven’t come up with new super-smart algorithms. Progress in model performance keeps being driven by big computational budgets and huge datasets. Training the GPT-3 language model, with its 175 billion parameters, cost approximately $10 million. At the same time, larger models require less data to achieve the same level of performance. With the deep-learning approach, we are getting close to the point where training costs grow outrageously while the resulting model improvements become incrementally smaller.
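
To see where figures like this come from, here is a back-of-the-envelope sketch using the common approximation of roughly 6 × parameters × training tokens total FLOPs for transformer training; the token count, sustained GPU throughput, and cloud price below are rough assumptions for illustration, not the actual configuration used.

```python
# Back-of-the-envelope estimate of GPT-3-scale training cost.
# Assumes ~6 * params * tokens total training FLOPs; throughput and
# price are rough assumptions, not the actual training configuration.
params = 175e9           # GPT-3 parameter count
tokens = 300e9           # assumed number of training tokens
total_flops = 6 * params * tokens        # ~3.15e23 FLOPs

sustained = 30e12        # assumed sustained throughput per GPU, FLOP/s
gpu_hour_cost = 3.0      # assumed cloud price per GPU-hour, USD

gpu_hours = total_flops / sustained / 3600
print(f"~{gpu_hours:,.0f} GPU-hours, roughly ${gpu_hours * gpu_hour_cost / 1e6:.1f}M")
```

Under these assumptions, the estimate lands in the same ballpark as the roughly $10 million cited above.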

An important fact is that the code base of most artificial intelligence systems remains closed: only 15% of papers publish their code. This raises serious concerns about reproducibility and AI safety. Explainability remains a critical issue for AI safety research; there are promising avenues of exploration, such as Asymmetric Shapley Values, but so far it’s largely unknown how AI systems make decisions.
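
To make the explainability problem concrete, here is a minimal sketch of feature attribution with standard (symmetric) Shapley values via the open-source shap library; the Asymmetric Shapley Values work mentioned above extends this idea by injecting causal knowledge into the attribution, which plain shap does not do. The model and data below are toy assumptions.

```python
# Minimal sketch: attributing a model's predictions to input features
# with (symmetric) Shapley values. Model and data are toy assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X[:, 0] + 0.5 * X[:, 1]       # target depends mostly on features 0 and 1

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)    # exact Shapley values for tree ensembles
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)

# Mean absolute Shapley value per feature gives a global importance ranking;
# features 0 and 1 should dominate, matching how y was constructed.
print("Per-feature attribution:", np.abs(shap_values).mean(axis=0))
```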

Natural language processing (NLP) models successfully simulate common scenes and linguistic patterns, but they fail dramatically at understanding problems, grasping context, and forming knowledge.

Talent

Talented people with skills in math and computer science are the drivers of progress in the AI field. More and more US professors are being recruited by tech companies, which affects the quality of education that US universities can provide. We can already see a decline in the level of entrepreneurship among recent graduates. At the same time, universities are creating AI-related degree programs.

The US keeps its position as the main attractor of talented individuals. China, for example, contributes to the talent pool of AI developers, but after publishing their first results, talented researchers are most likely to move to the US; 90% of international PhD graduates stay and work in US universities and corporations. Demand for AI talent remains much higher than supply, even despite COVID-19’s impact on market growth.

Industry

AI keeps progressing not only at the theoretical and research level. Many real-world applications are already in use, and they are affecting industries in various ways.

New drugs are being designed by AI, and they are already in clinical trials. For example, AI-designed drugs for OCD treatment are out for testing in Japan. AI drug-discovery startups keep raising funds, and big pharma is teaming up with startups on privacy-preserving drug discovery; OpenMined, for example, uses federated learning to work with medical data without exposing it. Viz.ai presented the first product approved by the Centers for Medicare and Medicaid Services in the US: it analyzes tomography scans and alerts specialists, who can then treat patients before they suffer the damage that leads to long-term disability.
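
To illustrate the federated-learning idea behind such privacy-preserving collaborations: each site trains on its own data, and only model parameters, never patient records, leave the premises. The sketch below is a minimal federated-averaging (FedAvg) loop in plain NumPy, not OpenMined’s actual PySyft API; the linear model, synthetic data, and update schedule are simplified assumptions.

```python
# Minimal federated-averaging (FedAvg) sketch: five "hospitals" fit a
# shared linear model; only weights, never raw data, are exchanged.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0, 0.5])

def make_local_data(n):
    """Simulate one hospital's private dataset (never leaves the site)."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

hospitals = [make_local_data(100) for _ in range(5)]
global_w = np.zeros(3)

for _ in range(20):                               # communication rounds
    local_updates = []
    for X, y in hospitals:
        w = global_w.copy()
        for _ in range(10):                       # local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_updates.append(w)                   # only parameters go back
    global_w = np.mean(local_updates, axis=0)     # server averages updates

print("Recovered weights:", np.round(global_w, 3))  # ~ [2.0, -1.0, 0.5]
```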

Progress in self-driving cars stays limited. Only 3 companies in California have permission to test self-driving cars without a safety driver, and self-driving mileage remains microscopic compared to that of human drivers (2,874,950 miles for self-driving cars versus 390,313,739,000 miles for humans). The research and development process remains very expensive: the major companies in this field have raised around $7 billion since July 2019. Tesla chose the approach of gradually adding self-driving features to its cars, but human drivers still remain in the loop. Current approaches based on supervised learning do not perform well enough; to make dramatic breakthroughs, new approaches are required.

Computer vision unlocks faster accident- and disaster-recovery intervention. It also reduces the number of human-hours spent at a microscope, which could accelerate development processes and reduce product costs.

AI drives sales and at the same time reduces costs in supply chains and manufacturing. Robotic process automation and computer vision are the most commonly deployed techniques in the enterprise; speech, natural language generation, and physical robots are the least common. Recently, IBM partnered with the health insurance company Humana, implementing natural language understanding (NLU) software that is already live and handling calls. It does not merely redirect calls to different queues; it can answer basic questions, such as “How much will the copay be to visit a specific specialist?”, without human intervention.
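
Systems like this typically begin with intent classification over the transcribed utterance. The sketch below is a generic, minimal intent classifier built with scikit-learn, not IBM’s actual stack; the intents and training phrases are invented for illustration.

```python
# Minimal intent-classification sketch for call routing (illustrative only).
# A production NLU stack would add speech-to-text and far richer models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training utterances mapped to routing intents.
utterances = [
    "how much is the copay for a dermatologist",
    "what will I pay to see a specialist",
    "is my claim approved yet",
    "what's the status of my claim",
    "I need to change my address",
    "update my contact information",
]
intents = ["copay", "copay", "claim_status", "claim_status",
           "account_update", "account_update"]

router = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
router.fit(utterances, intents)

# Route a new caller's transcribed question to the right queue or answer flow.
print(router.predict(["how much copay to visit a cardiologist"])[0])  # expected: copay
```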

Modern AI requires a lot of computing resources to perform well. Specialized AI hardware keeps progressing, and companies are now presenting the second generations of their products. The Graphcore M2000 offers faster training times to drop the cost of state-of-the-art models. Google’s new TPU v4 delivers up to a 3.7x training speedup over the TPU v3. NVIDIA is not resting either; it has achieved up to 2.5x training speedups with the new A100 GPU versus the V100. Increasing interest in machine-learning operations (MLOps) is a signal that the industry is shifting its focus from how to build models to how to run them.

Despite the COVID-19 pandemic, investments keep coming into the industry. Private funding rounds of greater than $15 million for AI-first companies remain strong.

Politics

Usage of AI for facial recognition tasks is extremely common around the world; around half of the world’s countries allow facial recognition. This has become a recognizable political and ethical problem, especially when use of the technology leads to wrongful arrests. There have been two highly publicized cases of wrongful arrest in the US (probably just the tip of the iceberg). In May 2019, Detroit police arrested Michael Oliver, who was wrongly accused of a felony for supposedly reaching into a teacher’s vehicle, grabbing a cellphone, and throwing it, cracking the screen and breaking the case. In January 2020, Detroit police arrested Robert Williams on suspicion of shoplifting five watches from Midtown’s trendy Shinola store in October 2018. In both cases the charges were dismissed, but the harm was done.

Industry has reacted to these AI mistakes by taking a more thoughtful approach. Microsoft deleted its database of 10 million faces; Amazon announced a one-year pause on letting police use its facial recognition tool, Rekognition; and IBM announced it would sunset its general-purpose facial recognition products. Washington State introduced a requirement that warrants be acquired to run facial recognition scans. ImageNet, a popular image database, is making an effort to reduce the biases in its image collections.

As deep-fake technology produces more and more realistic media, its use is becoming illegal in certain US states. California passed a law, AB 730, aimed at deep fakes; it criminalizes distributing audio or video that gives a false, damaging impression of a politician’s words or actions. Many other state bills have been passed to address different risks. For example, a Virginia law amends the existing criminal law on revenge porn to include computer-generated pornography.

The US government keeps pursuing the implementation of military AI systems. DARPA organized a virtual dogfighting tournament in which various AI systems competed with each other and with a human fighter pilot from the US military.

AI nationalism is on the rise. Countries tend to pursue protectionist policies, scrutinizing acquisitions of AI companies by players from other countries.

Every year, AI plays a more and more noticeable part in our lives. It becomes cheaper, and it learns how to do new things. But we have to remember that, at the moment, AI is still a tool, and there are philosophical and methodological difficulties we have to overcome before it will be possible to deliberate about the potential sentience of AI. It is very important for policymakers to make informed decisions based on how the technology actually works, not on a magical understanding formed by popular sci-fi.

Pavel Ilin is Secretary of the United States Transhumanist Party. 

 

Review of Frank Pasquale’s “A Rule of Persons, Not Machines: The Limits of Legal Automation” – Article by Adam Alonzi

Adam Alonzi


From the beginning, Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information, contends in his new paper “A Rule of Persons, Not Machines: The Limits of Legal Automation” that software, given its brittleness, is not suited to dealing with the complexities of taking a case through court and establishing a verdict. As he understands it, an AI cannot deviate far from the rules laid down by its creator. This assumption, which is not quite right even at the present time, only slightly tinges an otherwise erudite, sincere, and balanced treatment of the topic. He does not show much faith in the use of past cases to create datasets for the next generation of paralegals, automated legal services, and, in the more distant future, lawyers and jurists.

Lawrence Zelenak has noted that when taxes were filed entirely on paper, provisions were limited to avoid unreasonably imposing irksome nuances on the average person. Tax-return software has eliminated this “complexity constraint.” He goes on to state that, without it, the laws, and the software that interprets them, are akin to a “black box” for those who must abide by them. William Gale has said taxes could be easily computed for “non-itemizers”: the government could use information it already has to present a “bill” to this class of taxpayers, saving time and money for all parties involved. However, simplification does not always align with everyone’s interests. TurboTax, whose business is built entirely on helping ordinary people navigate the labyrinth that is the American federal income tax, saw a threat to its business model and put together a grassroots campaign to fight such measures. More than just another example of a business protecting its interests, this is an ominous foreshadowing of an escalation scenario that will transpire in many areas if and when legal AI becomes sufficiently advanced.

Pasquale writes: “Technologists cannot assume that computational solutions to one problem will not affect the scope and nature of that problem. Instead, as technology enters fields, problems change, as various parties seek to either entrench or disrupt aspects of the present situation for their own advantage.”

What he is referring to here, in everything but name, is an arms race. The vastly superior computational powers of robot lawyers may make the already perverse incentive to write ever more Byzantine rules even more attractive to bureaucracies and lawyers. The concern is that the clauses and dependencies hidden within contracts will quickly explode, making them far too detailed even for professionals to make sense of in a reasonable amount of time. Given that this sort of software may become a necessary accoutrement in most or all legal matters, the demand for it, or for professionals with access to it, will expand greatly at the expense of those who are unwilling or unable to adopt it. This, though Pasquale only hints at it, may lead to greater imbalances in socioeconomic power. On the other hand, he does not consider the possibility of bottom-up open-source (or state-led) efforts to create synthetic public defenders. While this may seem idealistic, it is fairly clear that the open-source model can compete with, and in some areas outperform, proprietary competitors.

It is not unlikely that, within subdomains of law, an array of arms races can and will arise between synthetic intelligences. If a lawyer knows its client is guilty, should it squeal? This will change the way jurisprudence works in many countries, but it would seem unwise to program any robot to knowingly lie about whether a crime, particularly a serious one, has been committed – including by omission. If it is fighting against a punishment it deems overly harsh for a given crime, such as trespassing to get a closer look at a rabid raccoon or unintentional jaywalking, should it maintain its client’s innocence as a means to an end? A moral consequentialist, seeing that no harm was done (or, in some instances, could possibly have been done), may persist in pleading innocent. A synthetic lawyer may be more pragmatic than deontological, but it is not entirely correct, and certainly shortsighted, to (mis)characterize AI as only capable of blindly following a set of instructions, like a Fortran program made to compute the nth member of the Fibonacci series.

Human courts are rife with biases: judges give more lenient sentences after taking a lunch break (65% more likely to grant parole, which is nothing to sneeze at); attractive defendants are viewed favorably by unwashed juries and trained jurists alike; and prejudices of all kinds exist against various “out” groups, which can tip the scales toward a guilty verdict or a harsher sentence. Why, then, would someone have an aversion to the introduction of AI into a system that is clearly ruled, in part, by the quirks of human psychology?

DoNotPay is an app that helps drivers fight parking tickets; it allows drivers with legitimate medical emergencies to gain exemptions. So, as Pasquale says, not only will traffic management be automated, but so will appeals. However, as he cautions, a flesh-and-blood lawyer takes responsibility for bad advice; DoNotPay not only fails to take responsibility but “holds its client responsible for when its proprietor is harmed by the interaction.” There is little reason to think machines would do a worse job of adhering to privacy guidelines than human beings unless, as in the example of a machine ratting on its client, there is some overriding principle that would compel them to divulge information to protect people from harm, should a client’s diagnosis in some way mark them as a danger in their personal or professional life. Is the client responsible for the mistakes of the robot it has hired? Should the blame not fall upon the firm that provided the service?

Making a blockchain that could handle the demands of processing purchases and sales, one that takes into account all the relevant variables to make expert judgments on a matter, is no small task. As the infamous disagreement over the meaning of the word “chicken” in Frigaliment Importing Co. v. B.N.S. International Sales Corp. illustrates, the definition of what anything is can be a bit puzzling. The need to maintain a decent reputation to sustain sales is a strong incentive against knowingly cheating customers, but although cheating tends to be the exception for this reason, it is still necessary to protect against it. As one official at the Commodity Futures Trading Commission put it, “where a smart contract’s conditions depend upon real-world data (e.g., the price of a commodity future at a given time), agreed-upon outside systems, called oracles, can be developed to monitor and verify prices, performance, or other real-world events.”
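
To make the oracle pattern in that quote concrete, here is a minimal sketch in Python of a contract whose settlement depends on a price reported by an agreed-upon outside oracle; the class names, price feed, and threshold are invented for illustration, and a real implementation would run on a blockchain platform rather than in ordinary application code.

```python
# Minimal sketch of the smart-contract-plus-oracle pattern (illustrative).
# All names are invented; real contracts would live on-chain.
from dataclasses import dataclass

@dataclass
class PriceOracle:
    """Agreed-upon outside system that monitors and verifies a price."""
    reported_price: float

    def get_price(self) -> float:
        # A real oracle would fetch and verify data from external markets.
        return self.reported_price

@dataclass
class FuturesContract:
    """Pays out if the commodity price exceeds the strike at settlement."""
    strike_price: float
    payout: float

    def settle(self, oracle: PriceOracle) -> float:
        # The contract's condition depends on real-world data via the oracle.
        return self.payout if oracle.get_price() > self.strike_price else 0.0

contract = FuturesContract(strike_price=100.0, payout=10.0)
print(contract.settle(PriceOracle(reported_price=104.5)))  # -> 10.0
```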

Pasquale cites the SEC’s decision to force providers of asset-backed securities to file “downloadable source code in Python.” AmeriCredit responded by saying it “should not be forced to predict and therefore program every possible slight iteration of all waterfall payments” because its business is “automobile loans, not software development.” AmeriCredit does not seem to be familiar with machine learning. There is a case for making all financial transactions and agreements explicit on an immutable platform like blockchain. There is also a case for making all such code open source, ready to be scrutinized by those with the talents to do so or, in the near future, by those with access to software that can quickly turn it into plain English, Spanish, Mandarin, Bantu, Etruscan, etc.

During the fallout of the 2008 crisis, some homeowners noticed that the entities on their foreclosure paperwork did not match the paperwork they received when their mortgages were sold to a trust. According to Dayen (2010), many banks did not fill out the paperwork at all. This seems to be a rather forceful argument in favor of incorporating synthetic agents into law practices. Like many futurists, Pasquale foresees an increase in “complementary automation.” The cooperation of chess engines with humans can still trounce the best standalone AI out there; this is a commonly cited example of how two (very different) heads are better than one. Yet going to a lawyer is not like visiting a tailor. People, including fairly delusional ones, know whether their clothes fit, but they do not know whether they have received expert counsel, although the outcome of the case might give them a hint.

Pasquale concludes his paper by asserting that “the rule of law entails a system of social relationships and legitimate governance, not simply the transfer and evaluation of information about behavior.” This is closely related to the doubts expressed at the beginning of the piece about the usefulness of datasets in training legal AI. He then states that those in the legal profession must handle “intractable conflicts of values that repeatedly require thoughtful discretion and negotiation.” This appears to be the legal equivalent of epistemological mysterianism, and it stands on still shakier ground than its analogue, because it is clear that laws are, or should be, rooted in some set of criteria agreed upon by the members of a given jurisdiction. Shouldn’t the rulings of lawmakers and the values that inform them be at least partially quantifiable? There are efforts, like EthicsNet, which are trying to prepare datasets and criteria to feed machines in the future (because they will certainly have to be fed by someone!). There is no doubt that the human touch in law will not be supplanted soon, but the question is whether our intuition should be exalted as a guarantee of fairness or recognized as a hindrance to moving beyond a legal system bogged down by the baggage of human foibles.

Adam Alonzi is a writer, biotechnologist, documentary maker, futurist, inventor, programmer, and author of the novels A Plank in Reason and Praying for Death: A Zombie Apocalypse. He is an analyst for the Millennium Project, the Head Media Director for BioViva Sciences, and Editor-in-Chief of Radical Science News. Listen to his podcasts here. Read his blog here.