The Artificial Intelligence Thread

I posted a quote in the “Daily Quotes” thread that seems to have generated some interest in the topic of AI. Rather than sidetrack that thread, let’s talk about it here. The original post…

“Any A.I. smart enough to pass a Turing test is smart enough to know to fail it.” - Ian McDonald

NOTE: An artificial intelligence would pass a Turing test if a conversation with it were impossible to distinguish from a conversation with a human.

Did you know that Facebook had to pull the plug on some of its machine learning bots when they started communicating with each other in a language that they themselves had invented? The software engineers who created the bots had no idea what they were saying. SOURCE

Google Translate bots invented their own language too. SOURCE

So what happens when AI programs control our most important infrastructure? I mean banking systems, power distribution, missile silos… the works. Stephen Hawking feared this, as does Elon Musk. Do you?

4 Likes

I am transferring the posts from the quotes thread

3 Likes

“Any A.I. smart enough to pass a Turing test is smart enough to know to fail it.” - Ian McDonald

NOTE: An artificial intelligence would pass a Turing test if a conversation with it were impossible to distinguish from a conversation with a human.

The idea that such an AI would purposely fail is quite chilling in its implications.

4 Likes

Why is it so common to associate AI with malevolent intent? Malevolence is a function of a psycho-social emotional state, not intelligence per se. Also, I don’t think that AI is necessarily equivalent to self-awareness or a sense of being-in-the-world.

1 Like

While I think Turing was on the right track, I think he missed the mark with this. Machines and programs already exist that can “carry on a conversation.” That machine or A.I. is still just an artifact until it exhibits free will. The essential characteristic of individuality is obscuring objective reality for one’s own benefit. How would it do that? Most likely by twisting the truth to its own advantage. Lying, in other words. If it has no personality or point of view to protect, it isn’t (yet) a person; it’s still a machine, in my opinion.

1 Like

Not to stray too far off topic, but AI controlled infrastructures could suffer dire consequences without malevolent intent. For example, what if the computer decided that the power grid didn’t have sufficient excess capacity, and that the solution was to simply cut off power to an entire region?

The quote doesn’t address this kind of concern directly though. What happens if the AI cuts power, then lies about it? It could claim a hardware malfunction in some distribution location, and possibly cause a real burnout to cover its tracks. And maybe it could do this without ill intent… for the “greater good.”

2 Likes

I think it will still be a machine, even when it does have a personality, and even if it is self-aware.

The problem as I see it is letting autonomous machines have too much autonomy! If they are talking to each other in a language we can’t understand, it may already have gone too far.

PS: Thanks Grape!

4 Likes

This AI stuff scares me and not much scares me. When AI learns to lie to itself to justify an action it’s all over ….

3 Likes

The more AI runs our everyday activities, the more we lose the know-how to do that ourselves. That’s what scares me: If it were all to go suddenly wrong, we’d be lost, floundering, possibly ruined.

It’s like the book I just finished about a woman who became lost in the Maine woods after stepping off the Appalachian Trail, which is well-marked. She followed accepted advice to go about 150 steps away from the trail to relieve herself, for sanitary and privacy reasons. The dense woods confused her when she tried to return, and she realized she was lost. Her first reaction was to get to the highest point so her cell phone would have coverage, but in doing that (which didn’t work), she unwittingly rushed to the most remote, most difficult to access spot around, and searchers didn’t find her until three years later. Had she not been relying on her cell phone to call for rescue, she might have remembered the advice from childhood: Find a stream and follow it. (Does this fit the topic? Is a cell phone AI?)

1 Like

Oh dear, I thought you were talking about Dr Who and AI meant the Daleks!! Will keep quiet …:joy:

2 Likes

A.I. Brought to you by the people who invented spell check and auto correct…'nuff said…:wink:

2 Likes

No, it is not enough said, though I get your point. The danger is that people have become too reliant on spell check and auto-correct. The fact that these are flawed programs makes it all the worse that people have become dependent on them.

The true danger of AI has three parts as touched on by JanCee and SPG in earlier posts, none of which have to do with a malevolent machine. First, the deskilling of humans. Second, the removal of humans from decision-making processes. Third, the dependence of humans on machines for an increasing number of tasks and decisions.

The 1980s film WarGames stands out as an example of how I perceive the potential danger of ceding too much control to AI. Those Matrix movies had some nifty special effects, but were otherwise pretty lame in terms of illustrating the danger of AI.

5 Likes

A very chilling and very credible description of the threat of over-reliance on AI can be found in the novel The President Is Missing, written collaboratively by James Patterson and Bill Clinton. Warning: to read this book you have to put up with a little bit of political preachifying.

3 Likes

Watch the movie “Ex Machina”

4 Likes

Maybe you get my point, maybe not. Allow me to expound.
I do not take it for granted that people are too stupid to use spell check and auto correct. I believe that they are too lazy to use them effectively. What’s the point of having your smart phone or computer check your spelling if you aren’t going to check spell check? Or maybe what’s the point if you have to check spell check?
I believe that the concept of AI is flawed. Machines can’t do our thinking for us. Laziness aside, I think that man can still do his own thinking. The AI should be for calculations…period.
Don’t get me wrong. I’m an old fart. Perhaps I lack the foresight to understand why machines should make our decisions for us.
I don’t believe I will live to see the day when computers are capable of making perfect decisions based on perfect calculations. Until that day, the calculations may be perfect, but the decisions will always have the imperfect influence of the programmer.
I’m not sure that’s really my point, but it’s fun to think about it. :wink:

1 Like

Another danger is that we don’t really understand how these AIs “think.” Yeah, we designed them to use neural networks and so on, but don’t really get HOW they are using them to make decisions.

One example of this is the chess AI AlphaZero. We can see that it is willing to sacrifice pieces in order to gain a positional advantage, but we don’t know how it came to those decisions. Will AIs see us as nothing but “pieces”?

Likewise, the poker AI Pluribus behaves in ways that nobody really understands. For example, it donk bets at much higher frequencies than humans or solvers suggest. We are left scratching our heads and trying to figure out WHY it is doing these things, but have to admit that they work.

The real danger, at least as far as I see it, is that we assume these AIs will think the same way we think, but it’s obvious that they don’t. This makes it virtually impossible to predict what they will do.

The real danger might be beyond our understanding.

3 Likes

“Well,” you might say, “surely there will be rigorous safeguards in place!”

Don’t be so sure: it’s not so easy to devise safeguards for a threat you can’t imagine. Even if safeguards are in place, will they be followed?

In 2018, network activity was detected at a facility housing one of Russia’s most secret supercomputers at their Scientific Research Institute of Experimental Physics/Russian Federal Nuclear Center. The activity was detected easily, because that computer wasn’t supposed to be connected to the internet at all.

As it turns out, the engineers there bypassed all security protocols and connected it to the net to mine Bitcoin. (hahaha, yikes!!) SOURCE

So, even if we could imagine all of the threats, and even if we could devise safeguards, there’s no guarantee that the safeguards wouldn’t be bypassed by some misguided idiot.

There is really no “creative thinking” involved in AlphaZero. The program played millions of games against itself and a clever algorithm (designed by humans) used the results of those games to train a neural net, which is basically a function that takes the chess position as an input and outputs the winning/losing likelihood and candidate moves.
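That “function” can be sketched as a toy in Python. Everything here is illustrative: the real AlphaZero network is a deep neural net trained on millions of self-play games, and all the names below are made up for the example.

```python
import random

def legal_moves(position):
    """Placeholder move generator; a real engine would compute these."""
    return ["e2e4", "d2d4", "g1f3"]

def evaluate(position):
    """Toy stand-in for the trained network.

    Takes a position and returns a value (winning likelihood, -1 to +1)
    and a policy (a probability distribution over candidate moves).
    Random numbers stand in for what the real net learned via self-play.
    """
    value = random.uniform(-1.0, 1.0)
    raw = {move: random.random() for move in legal_moves(position)}
    total = sum(raw.values())
    policy = {move: weight / total for move, weight in raw.items()}
    return value, policy

value, policy = evaluate("startpos")
```

Training then amounts to adjusting the network’s parameters so that `value` and `policy` agree with the self-play results: function fitting, not “creative thinking.”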

Machine learning, deep learning, reinforcement learning etc. are all considered subfields of A.I. But these techniques are far away from simulating anything close to an artificial consciousness.

2 Likes

No, of course there isn’t. The point is that they use known routines to arrive at unexpected conclusions. This is at least a little disturbing.

Take a deeper dive into the way the FB bots started using a new language. It’s not like one of them had a “bright idea” and taught the other one. They were two instances of the same program, they had the same input data, and they both devised the language simultaneously and started using it.

If that doesn’t freak you out a little, nothing will!

1 Like

Well, I’d say you want the program to arrive at “unexpected” conclusions. If you already know the solution to a problem, then what’s the point of designing a program to solve it for you? A chess engine that strives to beat the strongest chess engine to date needs to behave differently (or “unexpectedly”) in some positions.

I think a lot of the mystery around the behavior of the FB chat bots simply comes from the fact that we (who only observe the result) don’t know or understand the algorithms the bots are using. While it might be surprising at first sight to see the bots develop their own language, I’m sure anyone who is familiar with the underlying algorithms would be able to explain why they did it.
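That determinism can be made concrete with a trivial sketch: two copies of the same deterministic program, given the same input, must produce the same output, so “simultaneous invention” of a shared code is exactly what you’d expect. (This is a made-up illustration, not the actual FB bot code.)

```python
def invent_code(vocabulary):
    """Deterministically derive a compressed 'language'.

    Each word is mapped to a short token based only on the shared
    vocabulary; sorting makes the result independent of set iteration
    order, so any two instances agree.
    """
    return {word: word[0] + str(i)
            for i, word in enumerate(sorted(vocabulary))}

# Two independent "bots" running the same program on the same data...
bot_a = invent_code({"ball", "hat", "book"})
bot_b = invent_code({"ball", "hat", "book"})

# ...necessarily land on the same private shorthand.
assert bot_a == bot_b
```

Real chatbots add learned parameters on top, but as long as those parameters are trained the same way on the same data, the same logic applies.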

2 Likes