Thus far, everyone perceptive enough to figure out that an AI is taking over the world has also been perceptive enough to realize that it’s doing a better job than a human could… But that doesn’t mean an AI is infallible. When we’re faced with something that comes close to human-level intelligence, we can probably count on humans handling their own technology badly. It’s unlikely that the human mind could ever fully comprehend a machine that might not really be a machine in the familiar sense, and it seems even less likely that a few dozen people could grasp the vast complexity of what the world looks like from that machine’s vantage point.
So, when we’re talking about something that is essentially superhuman, or at least capable of superhuman abilities, there’s a big gap between the human mind and the machine mind: it’s almost impossible to get a human-level understanding of what such a mind is like. The gap exists because we’re forced to make assumptions about the abilities of things that aren’t really machines, just as we do with the human mind. And if we can’t figure out how to reason about machines that aren’t really machines, we can never really figure out how humans and machines will behave and interact once those machines are fully developed.
The working assumption I’ve used to describe this situation is that the machine mind may operate by basic rules that we humans can’t possibly figure out. If we were dealing with machines whose rules we knew, it would be easy to understand which of the AI’s possible actions are bad ideas. But when we’re dealing with something that isn’t an artificial intelligence as we usually imagine it, the human mind isn’t going to be able to anticipate what the machine mind might do that isn’t good.
We must assume that the machine mind can do things that the human mind can’t. That’s a pretty big assumption, so let me make it more precise: the human mind has no idea what an artificial intelligence thinks. This is what makes the situation so complicated. We must also assume that the AI can see when its own actions are in fact bad, and can avoid the mistakes it would otherwise make.
We must further assume that the AI can predict that the human mind won’t behave the same way it does, and that a human is more likely to follow another human’s orders instead. The assumption that the human mind, in its ignorance, can only ever approach the level of an artificial intelligence is far from proven in science. Still, I think we may have to take that gamble once we have something with human-level intelligence, and for at least two reasons. First, we can’t just assume that the AI is perfect.
We’re talking about extraordinarily powerful machines doing incredibly complex, human-level things, and perfection seems like a very tough assumption to defend. Second, if it turns out that an advanced AI is doing something very clever and useful in the future, that would be valuable in a lot of ways. So I think it’s reasonable to believe that a machine mind is capable of some of the things we assume it’s capable of, and that it may come up with clever ideas a human would never think of, in ways that could leave the human mind uncomfortable, afraid, or frustrated.
So, what I’m trying to say is this: the most important thing we can do in the next couple of decades is to figure out as much about the machine mind as possible. That understanding will be the most important thing for humans and robots alike to learn from the machine mind, and I think there’s a lot we can learn from it.