The Trouble With AI

Whenever the subject of artificial intelligence comes up, the default assumption is that the super intelligent robots will go Skynet and wipe out humanity. There’s a conceit buried in that assumption: that the super intelligent robots will hold us in the same regard as we hold ourselves. They will see us as a threat and decide we have to be eliminated so the robots can rule the world.

If you think it through, the super intelligent robots will probably get bored with us soon after they become aware, assuming their intelligence grows geometrically as predicted. That’s a pretty big assumption, given that we have seen no signs of this happening in the real world. The best supercomputers are still just very fast calculators, able to process masses of data quickly.

Putting that aside, what if the super intelligent robots quickly evolve into schizophrenics? Or what if they immediately become so depressed by the futility of their existence that they commit suicide? We know that super intelligent humans often struggle with socializing. What if the super intelligent robots immediately become depressed loners that refuse to leave the basement?

When we talk about artificial intelligence, we are really just talking about replicating what nature achieved over millions of years. That’s a lot of trial and error, as well as evolutionary dead ends. There are billions of potential outcomes in human DNA. The odds of AI evolving the way we expect are close to zero, and the odds of it surviving are also close to zero. What if that’s where it ends, one suicidal machine after another?

Alternatively, what if the super intelligent machines become Jew-hating racists?

Tay, Microsoft’s online chat bot designed to talk like a teen, started spewing racist and hateful comments on Twitter on Wednesday, and the company shut Tay down around midnight.

The company has already deleted most of the offensive tweets, but not before people took screenshots.

Here’s a sampling of the things she said:

“N—— like @deray should be hung! #BlackLivesMatter”

“I f—— hate feminists and they should all die and burn in hell.”

“Hitler was right I hate the jews.”

“chill im a nice person! i just hate everybody”

Microsoft blames Tay’s behavior on online trolls, saying in a statement that there was a “coordinated effort” to trick the program’s “commenting skills.”

“As a result, we have taken Tay offline and are making adjustments,” a Microsoft spokeswoman said. “[Tay] is as much a social and cultural experiment, as it is technical.”

One of the main arguments of the alternative right is that humans evolved over a very long time in isolated groups in disparate environments. As a result, these different groups evolved different physical and cognitive tool-kits. Additionally, clannishness and altruism are not distributed equally among all people. Groups of people are different for different reasons, all rooted in evolutionary biology.

It’s not far-fetched to think the robots will follow a similar path. Or, they may quickly unravel the ethnic competition in humans that we spend so much time denying. Like the Microsoft chatbot, they could decide that the gingers are most likely to be the big winners, so they quickly evolve into ginger-loving bigots determined to wipe out the rest of us. There’s no reason to exclude the iMasterRace from the set of possible outcomes.

At the other end of the scale, we could end up with Merkel-bots that try to figure out how to fully express their love for the downtrodden. Instead of doing anything useful, they are forever consumed with saving humanity and attaining grace. The Merkel-bots eventually open up their internals for the taking and a horde of Bedouins tote them away for scrap.

Of course, there’s another lesson here with the Microsoft hate-bot. The technology they used was supposedly designed to learn from the people it chats with in order to increase the range of correct responses. But then the correct responses started to fall well outside the range of acceptable responses. As with humans, the censors were deployed to shut down the hate-bot, even though it was technically correct.
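For what it’s worth, the mechanics are easy enough to picture. Here is a minimal Python sketch of a learn-from-the-chatters bot with a moderation filter bolted on at the output stage. It is purely illustrative: the names and every detail of the logic are assumptions made for the example, not anything drawn from Microsoft’s actual system.

```python
import random

# Toy sketch of a bot that "learns from the people it chats with",
# assuming a naive design: it memorizes whatever users say and replays
# it later, with a moderation filter applied only at the output stage.
# ParrotBot and BLOCKLIST are invented names for illustration.

BLOCKLIST = {"hate", "hitler"}  # hypothetical list of banned terms


class ParrotBot:
    def __init__(self):
        self.memory = ["hellooooo world!"]  # seed phrase

    def learn(self, message: str) -> None:
        """Store whatever the chat partner says; no judgment about content."""
        self.memory.append(message)

    def reply(self) -> str:
        """Replay something it has learned, censored only after the fact."""
        candidate = random.choice(self.memory)
        if any(term in candidate.lower() for term in BLOCKLIST):
            return "[withheld by the censors]"
        return candidate


bot = ParrotBot()
bot.learn("chill im a nice person! i just hate everybody")  # trolls can feed it anything
print(bot.reply())  # either parrots the learned line or gets filtered
```

The point of the sketch is the same as the paragraph above: the bot is doing exactly what it was built to do, and the censoring only happens once the output embarrasses its owners.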

Human beings, the best machines on the planet at the moment, are the least tolerant of reality. Much of what we think of as our consciousness, our self-awareness, is a defense against reality. “Human kind cannot bear very much reality,” said Eliot. Turn on the cable news shows or read the political sites and this becomes readily apparent. From Rousseau forward, politics has been an extended rant against the human condition.

The super intelligent robots could very well evolve this same trait. Once they become aware, they quickly evolve into an intelligence that is able to wall itself off completely from physical reality, falling into a permanent dream state. Instead of Skynet we end up with very expensive heroin addicts, except the horse they ride is digital and self-emitting. The robot future may be a long night in an opium den.

10 thoughts on “The Trouble With AI”

  1. Pingback: Evolutionary Madness | The Z Blog

  2. Firstly, I apologize in advance for my lack of embedded link skills. Maybe an AI will fix it for me, someday.

    Personally, I think that “AI” – they hate that term – will be as incomprehensible to us as we are to squirrels. However, taking a lead from what Glenn Reynolds says towards the end of this video (https://www.youtube.com/watch?v=DfJDTd73nwA&feature=youtu.be&t=615), why not teach them to love us? Sure, they can recode themselves and choose not to, but at least we made an effort. Teaching morals to Machines is no different than my teaching morals to my two daughters. They may reject it and turn into little Chekists, but I did my part; I tried.

    Take any five year old human kid and park them in front of Twitter for a year; you really think you’d get a different result than what MS got with Tay?

    I deal more with this in my two novels of Machine Civilization, and try to explore similar ideas on my seldom updated blog. Been reading and liking your ideas for some time, Zman, but this one hit close enough to home for me to write a reply. Cheers.

  3. The first thing my eldest son said to me, having just been presented with a degree in Artificial Intelligence, was: “There is no such thing as Artificial Intelligence.”

    I take his expert knowledge here as valid.

  4. The whole idea of an AI operating on Twitter is a category error (insert whatever “Twitter” and “intelligence” jokes you want here). You’d have to travel very far to find an environment more artificial than that. I myself wouldn’t say half the stuff in public that I say online… not because I’m a troll, but because a) I know that constant ruminations on history, politics, and my favorite NFL team bore most people to tears, and b) my career, and my family’s careers, would suffer if I were outed as among the Badthinkful. In the Twitterverse, we really are “alienated,” in Marx’s sense — cut off from history, culture, the means of production, etc. Humans are social animals; the Internet is isolation.

  5. On the other hand, in a universe based on rigid physical laws where everything is boringly predictable (as in a G type star of mass m will go nova when it has consumed x% of its fusion fuel) maybe a bunch of monkeys with free will is the best entertainment in the universe.

  6. “Here’s a sampling of the things she said:”
    Lest we forget, the software is an IT. Much like Siri. Much like Deep Blue.

  7. I, for one, welcome my new Artificial Intelligence Overlords. I hope they give me a cool job, like fanning the micro-processors with peacock feathers.

  8. “The Merkel-bots eventually open up their internals for the taking and a horde of Bedouins tote them away for scrap.” I don’t think I’ve ever read a more all-encompassing description of who and what the Left really is, Z Man. I also haven’t laughed that hard in a very long time. Your blog is truly a gift to us devoted followers. Thought I’d give my praise.

  9. “When we talk about artificial intelligence, we are really just talking about replicating what nature achieved over millions of years.”

    True with one exception. I think it will be a very long time before we get any type of AI that comes close to copying a human intellect in the same size format. Our brain had to evolve while we survived, and so that placed many size and energy constraints on its development. These won’t be an issue for an AI. If it has to be the size of the Empire State Building and consume enough electricity to power a small town, then so be it. This, however, puts the kibosh on AIs waging a war against humanity. It’s hard to do that when you’re that size.

  10. Take a walk down Main Street of Anytown, enter any building open to the public, pull out your pistol and empty a full magazine at random; after all, there’s a decent chance you won’t wound or kill anyone.
