Why we are afraid of AI


In Western culture, at least, we have a healthy fear, or an unhealthy fear, I can’t decide, of artificial intelligence. That fear may be justified, but any such justification is a product of our culture, not rational discourse. I say that with certainty because that’s how everything is, as I’m sure you already know. I’m not saying that we shouldn’t be afraid (or that we should). I’m saying that, like almost everything else we claim to believe, we didn’t work it out in a Baconian framework; rather, we came to that belief by the same process of mind we come to most of our beliefs by.

So, where does the cultural trait of fear of AI in Western society come from? The same place all cultural traits come from. Movies. (And other conduits of received knowledge.)

I’m teaching a class on a related topic, and in so doing put together a series of video clips that I thought I’d share. Have a watch, and feel free to discuss. You’ve seen most of these already.

They are in a sort of order.


(Hat Tip Parker)

Comic relief. How hackers meet and fall in love:

Modern consequences. One tipping point among many:

The most annoying man in the world almost certainly being wrong:

What have I missed?



8 thoughts on “Why we are afraid of AI”

  1. I wouldn’t consider the con man Musk to be an informed voice on this.

    I’ll suggest that the immediate problems with AI stem from problems with training the models, ranging from biased data that yields unreliable results to design teams so homogeneous that biased decision-making is baked into the algorithms themselves. You only need to look at the problems with:

    – fairly simple machines, like the one Amazon built to review resumes and recommend candidates: because it was trained on resumes from past successful applicants, who were overwhelmingly male, it learned to penalize resumes associated with women (see the sketch after this list)

    – Northpointe’s COMPAS system, designed to predict whether a convicted criminal is likely to commit more crimes, which produced results incredibly biased against minorities

    – almost all of the algorithms designed to determine whether an applicant should be given a home loan, which are also biased

    – photo recognition systems, which perform terribly on photos of non-Caucasian individuals

    – recommendation engines, systems that should be very good considering the amount of data collected every stinking day, but whose performance is still far from stellar. (Amusing [I think] anecdote: about a month ago I got a music recommendation that said, “Since you like Tom Waits you’ll probably like this from Michael Bublé.” Michael Bublé, the mayonnaise on white bread sandwich of music.)
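
    To make the resume example concrete, here is a minimal sketch of that failure mode, using invented toy data rather than Amazon’s actual system: a screener trained on historically male hires learns to penalize a proxy feature (a resume mentioning “women’s”), even though sex itself is never an input.

      # Invented toy data (not Amazon's system): past hires were mostly
      # male, and the word "women's" on a resume correlates with the
      # applicants who were not hired.
      history = [
          {"mentions_womens": False, "hired": True},
          {"mentions_womens": False, "hired": True},
          {"mentions_womens": False, "hired": True},
          {"mentions_womens": True,  "hired": False},
      ]

      # A naive "model": score a resume feature by the historical hire
      # rate among applicants who had it.
      def hire_rate(flag):
          rows = [r for r in history if r["mentions_womens"] == flag]
          return sum(r["hired"] for r in rows) / len(rows)

      print(hire_rate(False))  # 1.0 -> resumes without "women's" score high
      print(hire_rate(True))   # 0.0 -> the proxy itself is penalized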

    If these things still have problems rooted in data and design, it’s no wonder that systems for autonomous driving, to name one thing, aren’t here yet.

    In short, the primary things to worry about with AI are the people and approaches in place to design it, not AI itself.

    1. Makes sense. AI draws from its surroundings to produce new intelligence. We live in a racist and sexist world. Some of that sexism has been shown to be especially strong in the tech world, so maybe that is also a factor here.

    2. Greg:

      I will have to do more research. It doesn’t make sense to me yet.

      Racism or sexism (to me) has always required an element of subjective intent.

      You have to be doing something to someone (usually bad) because of the color of their skin or their sex in order to be racist or sexist.

      So does AI even know the color of a person’s skin or their sex when it is reviewing online information or responding to chats? If it isn’t even possible to discern such a characteristic from whatever information the AI is using (unless it’s input as a data item), how can it be racist or sexist or any other type of “ist”?

      If AI was given the job of hiring someone by reviewing resumes, and race or sex wasn’t part of the resume, and it ended up hiring more black people than another race, would that be racist? I say no! If applying objective criteria ends up in a racial disparity, that doesn’t necessarily mean racism. It could be something else which is skewing the decision.

      For example, say you are hiring an engineer. It is well known that engineering is more male than female. It is based on choice: for whatever reason, more men choose engineering than women. So say that 3/4 of the resumes submitted are from males and 1/4 from females, but the AI doesn’t know the sex (or the race) of the applicants (lots of firms deliberately screen this info out so as not to take sex or race into account). Well, that skew alone would of course explain more male hires than female hires (if that was the outcome), as the quick simulation below shows.
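
      Here is a minimal simulation of that scenario (the 75/25 split, the pool size, and the random scores are all invented for illustration): a screener that ranks purely on a sex-blind score still ends up hiring roughly three men for every woman, simply because that is the makeup of the pool.

        import random

        random.seed(42)

        # Hypothetical applicant pool: 75% male, 25% female, with a
        # qualification score that is independent of sex.
        applicants = [{"sex": "M" if random.random() < 0.75 else "F",
                       "score": random.gauss(0, 1)}
                      for _ in range(10_000)]

        # The screener never sees "sex"; it ranks purely on "score".
        hired = sorted(applicants, key=lambda a: a["score"], reverse=True)[:500]

        counts = {"M": 0, "F": 0}
        for a in hired:
            counts[a["sex"]] += 1
        print(counts)  # roughly 3:1 male to female, mirroring the pool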

      So I guess I need more to understand how AI can be considered racist.

      Objective outcome differences don’t per se mean racism. It could be racism, or it could have another equally logical explanation not based on an intentional choice about sex or skin color.

      Those are my thoughts anyway.

      Interesting topic!

    3. “We live in a racist and sexist world. Some of that sexism has been shown to be especially strong in the tech world…”

      Yup: the myth that women “choose” not to pursue STEM careers, in all its different flavors, is still widely spewed, but it’s still a myth.

      The difficult thing for people to grasp, whether they’re learning the basics of these algorithms or just looking in from the outside, is that the “models” produced are enormously different from the models they may have seen in a stat class: there is no regression equation, no discriminant functions, no factors, etc. It’s all based on examining [for the big runs] tens of millions of observations with possibly thousands of variables: there’s no way to know how the process uses that data.

      But when an algorithm for purpose X consistently gives unequal predictions for one particular group, even when inputs are essentially the same, that’s a biased [racist, in the jargon of the article] situation, and it’s something you can test for directly, as in the sketch below.
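
      One common way to test for exactly that is a matched-pair audit: feed the model records that are identical except for group membership (and any proxy that travels with it) and count how often the prediction diverges. The model, data, and zip-code proxy below are all invented for illustration.

        # Toy model, invented for illustration: it has "learned" to use
        # zip code, which in this toy world is a proxy for group B.
        def model(record):
            return "deny" if record["zip_code"] == 2 else "approve"

        def flipped_twin(record):
            # Identical applicant, other group, plus the proxy that
            # travels with group membership.
            twin = dict(record)
            twin["group"] = "B" if record["group"] == "A" else "A"
            twin["zip_code"] = 2 if twin["group"] == "B" else 1
            return twin

        records = [{"group": "A", "zip_code": 1, "income": 50_000},
                   {"group": "B", "zip_code": 2, "income": 50_000}]

        flips = sum(model(r) != model(flipped_twin(r)) for r in records)
        print(f"prediction changed for {flips}/{len(records)} matched pairs")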

  2. A couple other discussions:

    https://research.gatech.edu/flawed-ai-makes-robots-racist-sexist

    https://towardsdatascience.com/real-life-examples-of-discriminating-artificial-intelligence-cae395a90070

    MIT Technology Review has a history of good discussions on this, for example:

    https://www.technologyreview.com/2020/12/10/1013617/racism-data-science-artificial-intelligence-ai-opinion/

    Books like

    Weapons of Math Destruction
    Algorithms of Oppression
    Automating Inequality

    are also good reads.
