The Psychology of Machines

I understand the mindset of the machine. I used to be a huge fanbot of machines and my cosplay was second to none. This was back before I stumbled into socially self-destructive neurohacking where I learned how difficult it can be just being a human being. And although I’ve overcome my complete disdain for everything not machine, I’m still massively incapable of relating to people in oh so many other ways.

And so are you, or else you wouldn’t be shaking your head right now and thinking, yeah, humans suck. Or if you’re not, then you’re curious as to why the rest of us do and which subreddit club you’re missing out on. Many are, especially if raccoons also scare you, but I digress. Just allow this introduction to serve as my awkwardly anecdotal attempt at introducing you to the psychology of machines.

Don’t pretend for a moment that you don’t know machines can be superior to us flesh bags in so many ways. There’s just something about the whole immunity to raccoon bites thing that is plainly superior. Yet robots aren’t mounting the high horse and locking themselves in ivory towers like oh so many people do when implying their betterness. At least until the raccoons eat them. Which brings me to my point: from your self-appointed superiority, you’ve ignored the psychological machine revolution happening all around you.

This is important because we’re putting machines in charge of more and more things in our lives, even our security. We even brag about how much better a product is because we’ve put a robot mind in charge of it. Yes, Artificial Intelligence and Machine Learning are the thing to have in any product these days. Seriously, when have you ever seen AI as a reason NOT to buy a solution in a security product review?

In a completely unrelated study, academics found that a robot is a million times [Editor's note: this is really obvious, so no citation needed] more capable at dealing with cybersecurity issues than you are (except for all the cybersecurity issues that can’t be solved by speed, scheduled actions, and static pattern-matching). And if you tell me what passes for cybersecurity today, you’ll say threat response, patching, and authentication, which are totally speed, scheduled actions, and static pattern-matching. So you just admitted that robots would be better at modern cybersecurity than humans are. Or maybe you didn’t. I don’t know; I have no people skills.
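
If you doubt how little magic that takes, here’s a minimal sketch of what “static pattern-matching” boils down to. The signatures are made up for illustration; real scanners match file hashes, byte patterns, and rule languages like YARA, but the core loop is about this mechanical.

```python
# A toy signature scanner: "static pattern-matching" in its rawest form.
# The signatures below are invented for illustration; real products use
# hashes, byte patterns, and rule languages, but the loop is just as
# mechanical.

SIGNATURES = {
    b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE": "EICAR test file",
    b"powershell -enc": "encoded PowerShell launcher",
}

def scan(data: bytes) -> list[str]:
    """Return the name of every known pattern found in the data."""
    return [name for pattern, name in SIGNATURES.items() if pattern in data]

if __name__ == "__main__":
    sample = b"cmd.exe /c powershell -enc SQBFAFgA..."
    print(scan(sample))  # ['encoded PowerShell launcher']
```

Add speed and scheduled actions, run that loop on everything a million times a second on a timer, and you have most of what gets marketed as a robot defender.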

Now let’s ask ourselves, if AI is so good at defending our modern cybersecurity, how is it at ATTACKING our modern cybersecurity?

I know you have that one guy friend who won’t shut up about the dangers of self-aware AI and how Skynet and Terminators are inevitable. But that’s just it: those are science fiction stories designed to scare us into thinking it’s possible because AI is cold like a calculating killer. But it’s not. AI has personality.

Artificial Intelligence is an extension of a developer’s id, a raw implementation of their wants and desires. Machine learning is a mathematical representation of the people who contributed to the algorithms. And really, even simpler scripts are extensions of their coders’ personalities.

But is code an artistic expression or a personality? After all, an artist is putting an idea on canvas and not their personality. Isn’t a coder just putting an idea into action? No, because our ideas are the results of our personalities. Which is why some people find them good and some find them bad.

The actions performed by malware come from ideas the coder had. Look, malware that lies and deceives mirrors a malware author who can lie and deceive in that way. Malware that deletes or changes data is only as evil as its author. And this isn’t just true of attackers but also of defenders.

The AI designed to protect a network will do so according to the internalized cultural rules of the people who designed it. In all aspects, autonomous code is very much human because it was designed by a human. Because of that we can study its psychology.

Now the reason we should care about the psychology of a machine is two-fold. One, because if it follows familiar behaviors, we can determine its intent just like we do with people we don’t know. And two, because it’s pretty damn cool.

Now when people hear that, they think it’s crazy because, you know, it is completely nuts. Nobody sane will believe their Roomba has a personality. And it doesn’t, not in the traditional sense of a friend or a pet or a wine. Unlike what you think of as a personality, which includes character, education, and morality, a robot’s personality is the collected character, education, and morality of the person or people who made it, stripped down to their raw desire, the thing they wish to accomplish.

So where you might wish to accomplish cleaning the whole yard of raccoon hair, things like your character (you’re lazy when it comes to yard work) might get in the way. But the raccoon-hair-cleaning robot you design wouldn’t have that problem. Additionally, we would be able to see that your robot will follow the same morality (whether to trap or kill caught raccoons) and education (you know raccoons can turn their hind paws 180° to walk headfirst down trees). So we can psychoanalyze your AI to have some idea of what it can or can’t do.

Much like a grifter will analyze the psychology of a target to know how to manipulate them, an attacker can do the same to your AI. So while your AI may be faster and more clever than people, it still carries the limitations of the developer’s personality. So don’t expect AI protection to be better when we know humans are the weakest link.

The same goes for a piece of malware: how it wants to infect, how it wants to spread, and what it wants to do defines its behavior. This means we can understand so much about the psychology of the developers behind it that we can, with some degree of certainty, determine the malware’s intent, an extension of the developer’s intent, before it even attacks.
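
To make that concrete, here’s a toy sketch of the idea, not anyone’s real product: score a sample’s observed behaviors against a few invented intent profiles and see which developer desire it most resembles. Every behavior name and weight here is made up for illustration.

```python
# A toy sketch of "psychoanalyzing" malware: map observed behaviors to
# the developer intent they most resemble. Profiles, traits, and weights
# are invented for illustration only.

from collections import Counter

INTENT_PROFILES = {
    "extortion": {"encrypts_files": 3, "drops_ransom_note": 3, "deletes_backups": 2},
    "espionage": {"keylogs": 3, "exfiltrates_data": 3, "hides_persistence": 2},
    "vandalism": {"deletes_files": 3, "defaces_ui": 2, "spreads_fast": 1},
}

def likely_intent(observed: set[str]) -> str:
    """Return the intent profile that best matches the observed behaviors."""
    scores = Counter()
    for intent, traits in INTENT_PROFILES.items():
        scores[intent] = sum(w for trait, w in traits.items() if trait in observed)
    return scores.most_common(1)[0][0]

if __name__ == "__main__":
    sample = {"encrypts_files", "deletes_backups", "spreads_fast"}
    print(likely_intent(sample))  # extortion
```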

I know you’re going to argue with me that you can program a machine to not be like yourself. To be honest, I’ve heard that argument more times than I’ve heard “pass the ketchup” and “I love you” combined. And while it’s true that you can program outside your personality, you will still likely program within the confines of what you think is possible, as a HUMAN. See, what scares me is when AI makes software that isn’t restricted to human desires. We won’t recognize it.

There are some examples making the rounds, like tales of machines suddenly talking to each other in self-invented shorthand or evolving a better antenna by co-opting their own circuitry. Which is exactly what nature does, because it has no morality or behavior to restrict it. That’s why genetic algorithms sometimes give us results that are so hard to comprehend. Because nature, and unrestricted AI, cheats. It has no internal rules to follow and, because of that, no consistent personality. And the only people we know who don’t play by the rules and have inconsistent personalities are the protagonists in horror movies.
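
For the curious, here’s a minimal genetic algorithm showing that unrestricted trial-and-error at work. Every name and number in it is arbitrary, for illustration: the loop doesn’t care how a solution works, only that it scores well, which is exactly why evolved results can look like cheating.

```python
# A minimal genetic algorithm: evolve a bitstring toward a target.
# The loop has no "morality" or design rules; any mutation that happens
# to score better survives. All parameters are arbitrary.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    # Count matching bits; the algorithm knows nothing else about "why."
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=200):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            return gen, pop[0]
        survivors = pop[: pop_size // 2]  # keep the fittest half
        pop = survivors + [
            mutate(crossover(*random.sample(survivors, 2)))
            for _ in range(pop_size - len(survivors))
        ]
    return generations, pop[0]

if __name__ == "__main__":
    gen, best = evolve()
    print(f"generation {gen}: {best}")
```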

Until that happens, though, we only need to worry about the AI developed by a cold, calculating killer and inheriting that personality. So maybe that one guy friend of yours is right.

The moral of this story is to be aware of the personality you add to AI, especially when using it for cybersecurity. Or don’t. I don’t know. I’m not your Mom; I can’t tell you what to do.

The opinions expressed in guest author articles are solely those of the contributor, and do not necessarily reflect those of Cylance.

About Pete Herzog

Guest Research Contributor at BlackBerry

Pete Herzog knows how to solve very complex security problems. He’s the co-founder of the non-profit research organization, the Institute for Security and Open Methodologies (ISECOM). He co-created the OSSTMM, the international standard in security testing and analysis, and Hacker High School, a free cybersecurity curriculum for teens. He’s an active security researcher, investigator, and threat analyst, specializing in artificial intelligence (AI), threat analysis, security awareness, and electronic investigation.