Why Machine Learning and AI May Not Be Good for the Jews
Or anyone else, for that matter.
A few years ago, Microsoft built a chatbot called “Tay” on its AI platform, designed to tweet from the perspective of a teenage girl.
Within 24 hours, Internet trolls managed to subvert the chatbot into spewing racist and antisemitic content, claiming that Jews caused 9/11 and that Hitler was right.
But as someone who works extensively with Microsoft’s Azure AI tools, Apache Mahout, and other machine learning frameworks, particularly for sophisticated business analysis, I heard a different set of alarm bells. There’s an underlying concern, one others have touched on, about AI and machine learning reflecting our biases back at us, without our necessarily grasping the quasi-apocalyptic implications.
But as someone who works extensively with Microsoft’s Azure AI tools, as well as Apache Mahout and other machine learning tools, particularly for sophisticated business analysis, this set off a series of alarm bells for me. There’s an underlying concern that other people have touched on with regards to AI and machine learning reflecting our biases back at us, without necessarily understanding quasi-apocalyptic implications.
Also a few years ago, Pennsylvania toyed with an algorithm, a “Sentence Risk Assessment Instrument,” intended to help reduce incarceration rates. Critics rightly worried that, because prior-record data already reflect disproportionately high rates of black incarceration and recorded recidivism, the tool would end up recommending harsher outcomes for disproportionately more African-Americans. From a data science perspective, these concerns were well-founded. Sentencing algorithms, increasingly in use around the nation, generally look at ostensibly objective factors such as age and the nature of the offense. Defense attorneys and others argued (correctly) that using past arrest data would be terrible, so the commission opted to use convictions instead, which leads to much the same problem.

What these algorithms cannot necessarily compensate for is a factor overlooked in nearly every discussion of criminal justice, from police shootings of unarmed black men to high incarceration rates: the entry point for most African-American men into the criminal justice system is the sheer frequency of contact with police, a side effect of cities in particular building revenue models out of copious nuisance ordinances. What determinative logical weight can you assign to a “repeat offender” whose record likely started with a curfew violation or a busted tail light, stops that are far more frequent in urban settings than in suburban or rural ones?
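To make that mechanism concrete, here is a deliberately simplified, hypothetical sketch in Python. It is not any actual state’s instrument; the factors and weights are invented solely to show how a score built on prior convictions quietly inherits differences in the frequency of police contact.

```python
# Hypothetical, simplified risk-score sketch -- NOT any real sentencing instrument.
# It illustrates one point: if prior convictions are driven partly by how often a
# neighborhood is policed, the "objective" score inherits that disparity.

def risk_score(age, offense_severity, prior_convictions):
    """Toy weighted sum in the spirit of actuarial risk tools (weights invented)."""
    score = 0.0
    score += 2.0 if age < 25 else 0.0   # youth counted as higher risk
    score += 1.5 * offense_severity     # severity of the current offense (scale 1-5)
    score += 1.0 * prior_convictions    # each prior conviction adds to the score
    return score

# Two defendants, identical age and identical current conduct. The only difference
# is how many low-level police contacts (curfew stops, tail-light stops) ripened
# into minor convictions in a more heavily policed urban neighborhood.
suburban_defendant = risk_score(age=23, offense_severity=2, prior_convictions=0)
urban_defendant = risk_score(age=23, offense_severity=2, prior_convictions=3)

print(suburban_defendant)  # 5.0
print(urban_defendant)     # 8.0 -- same conduct, higher "risk," likely a harsher sentence
```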
It’s also a subject for another blog post, but a number of commercially developed sentencing algorithms are proprietary, and thus the public cannot audit how they work. At least the Pennsylvania process has been a transparent one.
Another AI concern in our current social landscape is the set of decisions that the AI in self-driving cars will need to make. MIT, as you would expect, is trying, with one of the largest ethics studies ever undertaken, to develop moral instructions for AI. What kind of moral calculus will we imbue a vehicle’s AI with? Objectively, we might reduce accidents significantly by handing the reins over to algorithms, but it’s not wrong, I think, to be cautious about turning moral decisions over to carefully scripted code.
Developed responsibly, AI has so many use cases where it can make a profound difference. One of the most intriguing possibilities is in medicine, with AI-assisted diagnosis perhaps allowing us to move closer to the very illiberal notion (in spite of its ironic embrace by liberals) that healthcare is a right. AI coupled with robotics is already moving us toward rote surgeries that depend less and less on surgeons. If we can offload a terrific amount of routine care to AI, we can certainly take a large bite out of our problems with equality of care.
But returning to “Tay” and criminal justice algorithms: these systems are only as good as the data points we feed them. We tend to think that this makes them, by default, completely rational systems, but it isn’t true.
Even the most rational being, say Noam Chomsky for instance, lacks the capability to make truly mathematical decisions. We might round out our data and process our decisions more thoroughly than others, but when a decision actually has to be made, in whatever time we have to make it, we are still treating the data largely in abstraction or approximation, and this is where bias enters as a matter of biological function. People tend to think we can simply peel bias away (and certainly we can make the effort to compensate for it), but it is deeply embedded in our decision-making process. It is a general knowledge of snakes combined with bias that causes us to recoil from the cobra, not a system of rational cognitive processing. Bias is an evolutionary survival shorthand.
We tend to think that AI will be rational because computer systems execute a chain of logical instructions. What can we do, though, when we lack unbiased or thorough data with which to craft those rule-based instructions? When we ourselves, through our variety of biases, are eminently hackable, how can we in turn identify impartial and objective data for machine learning to ingest?
Fine, we say. We build exceptionally narrow-field AIs. So I set up a machine learning system that analyzes online sales data: it helps forecast inventory and targets products for upselling or cross-selling opportunities; in other words, I configure the entire system around a profit motive. The whole gestalt of AI, under the law of accelerating returns, ends up being built on systems designed and developed by a limited spectrum of developers, each imbued with irrational processing directives to compensate for various scenarios. If an AI is built around a profit motive, we have to impose what end up being, from the system’s perspective, irrational ethical directives to prevent it from breaking the law, as the sketch below illustrates.
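Here is a hypothetical sketch of what I mean: a narrow, profit-motivated recommender with an ethics-and-legality override bolted on afterward. The offers, margins, and blocklist are all invented; the point is that the safeguard lives entirely outside the model’s actual objective.

```python
# Hypothetical sketch of a narrow, profit-motivated recommender with an ethical
# constraint bolted on after the fact. All offers, margins, and rules are invented.

# Expected profit per upsell offer, which in a real system would be learned from sales data.
expected_profit = {
    "extended_warranty": 40.0,
    "payday_style_financing": 55.0,   # most profitable, but legally and ethically fraught
    "accessory_bundle": 25.0,
}

# The "irrational" directive from the system's point of view: a hand-written
# blocklist that overrides whatever the profit ranking says.
BLOCKED_OFFERS = {"payday_style_financing"}

def recommend_upsells(candidates, top_n=2):
    """Rank candidate offers purely by expected profit, then apply the override."""
    allowed = [c for c in candidates if c not in BLOCKED_OFFERS]
    return sorted(allowed, key=lambda c: expected_profit[c], reverse=True)[:top_n]

print(recommend_upsells(list(expected_profit)))
# ['extended_warranty', 'accessory_bundle'] -- the constraint, not the model, is
# doing the ethical work, and it only covers the cases someone thought to list.
```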
As we advance the processing capabilities available to AI, particularly with the advent of quantum computing, the power behind machine learning, according to the law of accelerating returns, may put the job of constraining systems built on irrational data outside our control. Our foundational AIs are imperfect and irrational, and if they form the basis for an expanding Artificial General Intelligence, we may well realize we have lost control over a profoundly irrational system or set of systems.
Going back to criminal justice AI as a model for the approach we should take with machine learning systems generally: the safe path, for our society and for AI, would be for any such system to be developed slowly. We would look for all mitigating data points in order to create a genuinely objective dataset, and we would ensure that its logical directives are adjusted for ethical and moral values, much like the effort we are seeing with vehicle AI.
But we will not. Most systems will be developed in isolation out of competitive concerns, with corners cut to shorten time to market, suffering from a paucity of development resources, perhaps built by overseas teams who miss crucial steps because of language barriers.
SkyNet suddenly becomes less far-fetched.
But how is this bad for the Jews specifically?
The frightening reality, especially for the programmer or data scientist who considers such things, is that we often assume there is an a priori, universal set of morals and ethics that machine learning can discern on its own. Most of us know, not necessarily rationally, that Jews are not bad. But consider how machine learning might evaluate the history of the Jews since antiquity. From the pogroms of Alexandria (thanks, Rabbi Seimers!) through to the Holocaust, what weight can we give processing logic that regards Jews as victims versus what might be a rational machine assumption that Jews should be victims because they always have been victims? Unless you build those constraints into the development of the system, we do have to worry about “Tay” writ large, and perhaps something more damaging.
This could be exacerbated by the social media landscape. According to the ADL, 30% of all Twitter accounts attacking Jews in recent years have been bots. AI, in a sense, is already being specifically designed to target us.
And what about the 70% of antisemitic Twitter accounts that are not bots? We are unwittingly creating data points for any AI all the time. It’s frightening enough that we have, all of us, decoupled objectivity from what we post on social media, to the point where we call legitimate, well-sourced news “fake news” and promote actual fake news (looking at you, Breitbart, RT, and Blue Nation) all across the spectrum. How is any system looking to social media supposed to parse the distinction between what is authoritatively true and what is not? My hope is that any AI run amok that attempts to draw on social media will simply collapse under the weight of conflicting, idiotic datasets, rather than determining, perhaps correctly, that the human species has limited value, since we can’t even find the value in one another.
What we suffer ourselves to do on social media, especially promoting pathos over fact regardless of our individual politics, could well be stenciling our own civilization’s epitaph.
But I think, as always, we are already seeing that Jews in particular will serve as the bellwether for the perils of AI. “Tay” is online for 24 hours, and because antisemitic trolls are unchecked and in the ascendant all across the political spectrum, it concludes logically and rationally that being antisemitic is correct. The sheer volume of antisemitism online creates the possibility of an AI associating volume with credibility, depending on how a machine learning system weighs it. And because antisemitism, more than any other form of bigotry, is transnational and polymorphic, it would be very difficult to adjust for consistently.
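Here is a toy sketch of the weighting problem I have in mind. The labels and counts are invented, and no real moderation or ranking system is implied; it simply shows how a naive frequency-weighted signal treats repetition as consensus.

```python
# Toy illustration of "volume as credibility." The counts and labels are invented;
# the point is that a naive frequency-weighted score treats repetition as evidence.
from collections import Counter

posts = (
    ["antisemitic_conspiracy_claim"] * 900   # coordinated, repetitive troll/bot volume
    + ["factual_rebuttal"] * 100
)

claim_counts = Counter(posts)

def naive_credibility(claim):
    """Share of all posts asserting the claim: repetition ends up looking like consensus."""
    return claim_counts[claim] / sum(claim_counts.values())

for claim, _ in claim_counts.most_common():
    print(f"{naive_credibility(claim):.0%}  {claim}")
# 90%  antisemitic_conspiracy_claim
# 10%  factual_rebuttal
```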
Disenfranchised minorities are already being done great harm. Mortgage-issuing algorithms, criminal justice algorithms, credit-assessment algorithms…any number of mundane AIs are already in place and possibly glomming onto already entrenched institutional biases.
But as we imbue autonomous military drones, missile defense systems, and other lethal systems with AI that by design must make ethical and moral determinations, can we afford to keep sanctifying hate speech as a protected freedom? As social media scales to the point where content is often manageable only by AI, how do we prevent the promotion of content deleterious to Jews?
Perhaps this is where private ownership of the major social networks can save us where liberal systems of government cannot. Government dare not outlaw hate speech, although we might look at the materializing AI landscape and realize that the aggregate of unfettered online hate speech can have “shouting fire in a crowded theater” consequences for vulnerable groups. Private companies can, and fortunately already do, undertake efforts to curb hate speech and fake news. I am concerned that the effort is not nearly sufficient, and that antisemitism is too often overlooked where other forms of bigotry are halted.
So yes, I truly believe the current state of online culture combined with AI is bad for the Jews.
I turn to more than a few science fiction authors as prophets. Arthur C. Clarke was a prophet. Isaac Asimov was a prophet. Frank Herbert, with the Butlerian Jihad backstory to the original Dune, is my favorite, especially with the commandment in the Orange Catholic Bible, “thou shalt not make a machine in the likeness of a human mind.”
The only thing that suggests he may have missed the mark on the consequences of AI is that he put Jews in Chapterhouse: Dune. I don’t believe Jews will be safe as AI scales unless we make major societal adjustments.