
This Week in Security: AI Bias, E-Stalker Apps, and AI Laughing at You

Automating Bias

With the advent of machine learning and artificial intelligence (AI), we have made amazing progress in having computers do more of our work for us. However, offloading work to computers and algorithms comes with a hidden danger: when decision-making power is handed from people to algorithms, the decisions are suddenly assumed to be correct and immune to bias, even though this is far from the truth.

Not only can algorithms dangerously simplify complicated real-world situations into yes/no decisions or single numbers, but over-confidence in an algorithm’s accuracy can remove any accountability, along with any ability to second-guess a computer’s decision.

One example from nearly two years ago is a ProPublica report on racial bias in a system used to calculate risk scores for people as they are processed through the criminal justice system. ProPublica found that these systems assigned higher risk scores to African-Americans, and that they were widely used, sometimes at every stage of the criminal justice process.
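To make the failure mode concrete, here is a minimal, hypothetical sketch (using scikit-learn and entirely synthetic data, not the actual system ProPublica studied) showing how a risk-score model trained on historically biased labels faithfully reproduces that bias while presenting its output as an objective number:

```python
# Hypothetical illustration with synthetic data: a risk model trained on
# biased historical outcomes learns to reproduce that bias, while its
# output looks like an objective "score" to anyone consuming it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: a protected attribute and a legitimate feature
# that is identically distributed across both groups.
group = rng.integers(0, 2, n)          # stands in for a protected class
prior_offenses = rng.poisson(1.0, n)

# Biased historical labels: past enforcement flagged group 1 more often
# for the same behavior, so the training data itself encodes the bias.
p = 0.10 + 0.05 * prior_offenses + 0.15 * group
labels = rng.random(n) < np.clip(p, 0, 1)

features = np.column_stack([group, prior_offenses])
model = LogisticRegression().fit(features, labels)

# The "objective" risk score now differs by group even when the
# legitimate feature is held fixed.
for g in (0, 1):
    score = model.predict_proba([[g, 1]])[0, 1]
    print(f"group {g}, 1 prior offense -> risk score {score:.2f}")
```

Note that simply dropping the protected attribute from the features is not a fix: in realistic data, correlated proxies such as zip code or employment history let a model recover it anyway.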

Another interesting “gotcha” of AI is adversarial input: an active area of research into the various ways and means of fooling different AI systems. Here is a 3D-printed turtle designed to fool Google’s Inception v3 image classifier into thinking it’s a rifle, and here is a sticker designed to fool the VGG16 neural network into thinking a toaster is the subject of an image, regardless of what else is present.
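Those are physical-world attacks, but the underlying idea is easy to demonstrate digitally. Below is a minimal sketch of the classic fast gradient sign method (FGSM, a simpler attack than the ones behind the turtle and the sticker), written in PyTorch against a pretrained classifier; the model choice and epsilon are illustrative assumptions, not details from the research above:

```python
# Minimal FGSM sketch (PyTorch): nudge each pixel in the direction that
# increases the classifier's loss, producing a nearly identical image
# that the model may misclassify. Model and epsilon are illustrative.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (1x3xHxW)."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # One step in the sign of the gradient w.r.t. the input pixels.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage with a random stand-in image; a real attack would start from a
# correctly classified photo, normalized the way the model expects.
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)
x_adv = fgsm(x, y)
print(model(x).argmax(dim=1).item(), "->", model(x_adv).argmax(dim=1).item())
```

The perturbed image is visually almost indistinguishable from the original, yet the classifier’s answer can flip, which is exactly what makes this class of attack unsettling for any system that acts on classifier output.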

Meanwhile, AI is being swiftly applied to everything that can’t get up and run away from a data scientist: analyzing military drone footage, determining who to search at the border, various aspects of crime-fighting, and secretive police facial recognition programs. While moving decision-making work towards computers and away from humans may appear to remove human bias from important decisions, we risk hard-coding existing bias into unquestionable, unauditable algorithms.

If we’re going to leverage AI in making social decisions, we need to treat that input with a healthy dose of skepticism and context.

Shutting Down E-Stalkers

Stalkers have long been a problem, and they have grown adept at using technology to track their victims. The latest development is the growing proliferation of “dual-use” tracking applications, often dubbed spyware or stalkerware. While marketed as legitimate tools for keeping tabs on children or family members, these apps are all too often used without the tracked person’s knowledge or consent, such as spying on a partner’s private texts in an attempt to uncover suspected cheating.

However, someone has apparently found an alternative solution: repeatedly hacking a stalkerware provider until it shut down.

While this might help the victims who are being tracked without their consent by this particular service, the full problem is social and not easily handled with technical solutions. Learning to spot the tell-tale signs of stalkerware on your smartphone or personal computer is a good start. Even better is knowing how to spot red flags in a relationship that can be warning signs of abusive behaviors, and learning how to reach out to others for emotional support and physical help to get out of bad relationships.
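As one concrete starting point, stalkerware on Android frequently registers itself as a device administrator so the victim can’t casually uninstall it. The rough sketch below (assuming a phone connected over USB with debugging enabled and the stock adb tool on your PATH; this is a triage aid, not a detection product) surfaces apps holding that privilege, plus all user-installed packages, for manual review:

```python
# Rough triage sketch: list Android device administrators (a common
# stalkerware foothold) and all user-installed packages for review.
# Assumes `adb` is installed and the phone has USB debugging enabled.
import subprocess

def adb(*args):
    """Run an adb command and return its stdout as text."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True)
    return result.stdout

# Active device administrators: stalkerware often hides here so it
# can't be uninstalled without first revoking its admin rights.
print("Possible device admins (heuristic filter, review manually):")
for line in adb("shell", "dumpsys", "device_policy").splitlines():
    if "admin" in line.lower() and "/" in line:  # crude but useful filter
        print("  ", line.strip())

# Third-party (user-installed) packages, to compare against what the
# phone's owner actually remembers installing.
print("Installed third-party packages:")
for line in adb("shell", "pm", "list", "packages", "-3").splitlines():
    print("  ", line.replace("package:", "").strip())
```

An unfamiliar package with device-admin rights is a prompt for careful investigation, ideally with support from people the victim trusts, not proof by itself.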

“Alexa, Creep Me Out In The Middle Of The Night”

Tying the previous two stories together, we are now hearing reports of Amazon Alexa units creepily laughing at people for an unknown reason. The working theory is that the unit mistakenly thinks the user said “Alexa, laugh”, but there are also reports of units laughing spontaneously.

No word yet on whether they laugh when nobody is around to hear them, or if Amazon is working on a Poltergeist-as-a-Service (PaaS) offering that is simply being rolled out to select users as a test. I was unsettled enough at the thought of an always-on microphone in my house, but unprompted tauntings are enough for me to flee to the woods.

All that said, keep your ears perked in case your own Alexa unit tries to scare you, since it seems likely Amazon will exorcise the issue before too long.

The Cylance Research and Intelligence Team

About The Cylance Research and Intelligence Team

Exploring the boundaries of the information security field

The Cylance Research and Intelligence team explores the boundaries of the information security field, identifying emerging threats and remaining at the forefront of attack trends. With the insights gained from these endeavors, Cylance stays ahead of the threats.