Wednesday, November 11, 2020

Why Machine Learning Won't Generate General AI

It's been a common trope in science fiction that a general AI, that is, an artificial intelligence that can understand the context of what it's looking at, will arise out of our current machine learning tech. (Let's leave the question of consciousness out of this for the moment.)

In his latest Locus column, Cory Doctorow has some thoughts worth reading on why this won't happen.

Machine learning operates on quantitative elements of a system, and quantizes or discards any qualitative elements. And because it is theory-free – that is, because it has no understanding of the causal relationships between the correlates it identifies – it can’t know when it’s making a mistake.

The role this deficit plays in magnifying bias has been well-theorized and well-publicized by this point: feed a hiring algorithm the resumes of previously successful candidates and you will end up hiring people who look exactly like the people you’ve hired all along; do the same thing with a credit-assessment system and you’ll freeze out the same people who have historically faced financial discrimination; try it with risk-assessment for bail and you’ll lock up the same people you’ve always slammed in jail before trial. The only difference is that it happens faster, and with a veneer of empirical facewash that provides plausible deniability for those who benefit from discrimination.

But there’s another important point to make here – the same point I made in “Full Employment” in July 2020: there is no path of continuous, incremental improvement in statistical inference that yields understanding and synthesis of the sort we think of when we say “artificial intelligence.” Being able to calculate that Inputs a, b, c… z add up to Outcome X with a probability of 75% still won’t tell you if arrest data is racist, whether students will get drunk and breathe on each other, or whether a wink is flirtation or grit in someone’s eye.

We don’t have any consensus on what we mean by “intelligence,” but all the leading definitions include “comprehension,” and statistical inference doesn’t lead to comprehension, even if it sometimes approximates it.
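Doctorow's bias point is easy to make concrete. Here's a minimal sketch (my own, not from his column), assuming synthetic data and scikit-learn's LogisticRegression; every feature name and number below is illustrative:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic candidates: true skill, plus a demographic proxy
# that had nothing to do with skill.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # 0 = historically favored, 1 = not

# Historical labels: past hiring favored group 0 regardless of skill.
hired = skill + 2.0 * (group == 0) + rng.normal(scale=0.5, size=n) > 1.0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates, differing only in the proxy feature:
print(model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1])
# The favored-group candidate scores far higher -- the model has
# faithfully learned the discrimination in its training data.

The model is perfectly well behaved on its own terms; the problem is that its terms are the historical record.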
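The theory-free problem can be sketched the same way: two different causal stories can generate statistically indistinguishable data, so the fitted probability tells you nothing about which story is true. Again, a toy illustration of my own, with made-up numbers:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20000

# World A: arrests genuinely follow offending.
offends = rng.random(n) < 0.3
arrest_a = offends & (rng.random(n) < 0.8)

# World B: arrests follow patrol intensity, independent of offending.
patrolled = rng.random(n) < 0.3
arrest_b = patrolled & (rng.random(n) < 0.8)

for feature, arrested in [(offends, arrest_a), (patrolled, arrest_b)]:
    m = LogisticRegression().fit(feature.reshape(-1, 1), arrested)
    print(m.predict_proba([[1]])[0, 1])  # ~0.8 in both worlds

# Nothing in either fit says whether the correlate is a cause or an
# artifact of biased measurement; the statistics are identical.

That is Doctorow's point in miniature: the model can tell you the 75% (or 80%), but not what the number means.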
