
The AI Manifesto

FEATURE / 09.11.19 / The Cylance Team

The AI Manifesto Part 1: Understanding the Risks and Ethical Implications of AI-Based Security

We live in a time of rapid technological change, where nearly every aspect of our lives now relies on devices that compute and connect. The resulting exponential increase in the use of cyber-physical systems has transformed industry, government, and commerce; what’s more, the speed of innovation shows no signs of slowing down, particularly as the revolution in artificial intelligence (AI) stands to transform daily life even further through increasingly powerful tools for data analysis, prediction, security, and automation.1

As with past waves of extreme innovation, debates over ethical usage and privacy controls are likely to proliferate as this one crests. So far, the intersection of AI and society has brought its own unique set of ethical challenges, some of which have been anticipated and discussed for many years, while others are just beginning to come to light.

For example, academics and science fiction authors alike have long pondered the ethical implications of hyper-intelligent machines, but it’s only recently that we’ve seen real-world problems start to surface, like social bias in automated decision-making tools, or the ethical choices made by self-driving cars.2, 5

During the past two decades, the security community has increasingly turned to AI and the power of machine learning (ML) to reap many technological benefits, but those advances have forced security practitioners to navigate a proportional number of risks and ethical dilemmas along the way. As the leader in the development of AI and ML for cybersecurity, BlackBerry Cylance is at the heart of the debate and is passionate about advancing the use of AI for good. From this vantage point, we’ve been able to keep a close watch on AI’s technical progression while simultaneously observing the broader social impact of AI from a risk professional’s perspective. 

We believe that the cyber-risk community and AI practitioners bear the responsibility to continually assess the human implications of AI use, both at large and within security protocols, and that together, we must find ways to build ethical considerations into all AI-based products and systems.

This article outlines some of these early ethical dimensions of AI and offers guidance for our own work and that of other AI practitioners.

The Ethics of Computer-Based Decisions

The largest sources of concern over the practical use of AI typically involve the possibility of machines failing at the tasks they are given. The consequences of failure are trivial when that task is playing chess, but the stakes mount when AI is tasked with, say, driving a car or flying a jumbo jet carrying 500 passengers.

In some ways, these risks of failure are no different than those in established technologies that rely on human decision-making to operate. However, the complexity and the perceived lack of transparency that underlie the ways AI makes its decisions heighten concerns over AI-run systems, because they appear harder to predict and understand. Additionally, the relatively short time that this technology has been in widespread use, coupled with a lack of public understanding about how, exactly, these AI-powered systems operate, adds to the fear factor.

The novelty of a computer making decisions that could have fatal consequences scares people, and a large part of that fear revolves around how those systems balance ethical concerns.

Consider a real-world example: Society has become accustomed to car accidents resulting from human error or mechanical failure and, in spite of regulatory and technical improvements to reduce the danger inherent in car accidents, we now accept them without question as part of the overall risk of driving. Accidents caused by AI failures, on the other hand, raise considerably more public alarm than those caused by more traditional types of human or machine-based failure.

Take, for instance, the furor over the first known case of a driverless car killing a pedestrian.4, 8 The computer appears to have determined too late that the car was about to hit a pedestrian, but could it have driven the car off the road to avoid the collision? Did the computer favor its passenger’s safety over the pedestrian’s? What if it had been two pedestrians? What if they were children? What if the computer was faced with the choice of colliding with one of two different pedestrians? What would a human driver do differently from AI-based software when faced with that split-second decision?

Part of the alarm over this accident also results from fears that its cause affects other autonomous vehicles and a wider array of activities linked to AI. For example, did the road conditions make this accident one that no human or computer system could have avoided? Was it a flaw in the AI of this particular navigation system or in all AI-based navigation systems? The AI technology involved in a driverless car is highly complex, making it more difficult to test than the car’s mechanical parts. Do we know enough to adequately quantify the risks before this technology is rolled out on a global scale?

The fatal crash of Lion Air Flight 610 offers another instructive example. The crash appears to have been caused by a mechanical sensor error leading to the airplane’s computer system forcing its nose down. The human pilots appear to have pulled the nose back up repeatedly before losing control.9 The fact that this incident involved a computer making a flawed decision and removing control from the pilots raises concerns beyond those raised by a purely mechanical failure. The tragedy would be the same had it been the result of, say, engine failure, but it would raise different ethical considerations in terms of agency and fault. Moreover, we would presumably be better able to quantify the risk of the accident being repeated in a mechanical failure than in the case of a complex AI system.

Examples like these highlight the importance of ensuring that AI-dependent systems are well-tested and built in ways that are transparent enough to enable an adequate assessment of risk by the end-users of those systems.10 What that means in practice depends to a large extent on the purpose for which AI is being employed.

Careful attention needs to be given to the potential harm that may result from failure at a given task as well as to the complexity of the system and the extent to which that complexity adds to uncertainty in estimates of the probability of failure. Risk professionals will need to consider tradeoffs between transparency and effectiveness, between transparency and privacy, and between the possibility of human override and overall effectiveness of AI decisioning, all of which depend on the contextual use of AI in any given setting.

Privacy and Consent

AI’s rapid adoption and widespread use in recent years also raises considerable privacy concerns. AI systems increasingly depend on ingesting massive amounts of data for training and testing purposes, which creates incentives for companies not only to maintain large databases that may be exposed to theft, but also to actively collect excessive personal information to build the value of those databases.5, 10

It also creates incentives to use such data in ways that go beyond what the data’s owner initially consented to. Indeed, in complex AI systems, it may be hard to know in advance exactly how any given piece of data will be used in the future.5

These concerns are linked to the overall proliferation and indefinite storage of captured data, with an increasing percentage of this data emitted like exhaust from cyber-physical systems such as the Internet of Things (IoT).11, 12 These fears are heightened exponentially by the fact that AI derives the best value from large data sets, and is increasingly able to detect unique patterns that can re-identify data thought to be anonymized.

Concerns are further ratcheted up by the increasing ability of cyber attackers to expose these large data sets that were supposed to be protected — a trend that goes hand-in-hand with the decreasing efficacy of traditional, signature-based security solutions.

Such concerns add new dimensions to data privacy laws that cybersecurity and risk leaders must consider as they help organizations navigate the onboarding of AI. The good news in this case is that AI-powered technology can, in fact, be used to enhance privacy, if installed and correctly configured as part of a company’s overall layered defense strategy. 

In contrast to other analysis tools, AI is often better suited to use and learn from properly anonymized data. Feature hashing, in which the data used to train a machine learning system is first transformed by a hashing algorithm,13, 14 is an irreversible transformation that makes the data worthless for analysis by humans but still usable by AI systems for pattern detection. Feature hashing can also make AI-based analysis more efficient by reducing the dimensionality of the data, making the overall process more protective of privacy than it might otherwise be.
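To make the idea concrete, here is a minimal, hypothetical sketch of the hashing trick in Python. It is not BlackBerry Cylance's implementation; the bucket count and feature names are assumptions chosen purely for illustration. Feature names are mapped through a one-way hash into a fixed-length vector, so the model never needs the original strings:

```python
# A minimal sketch of the hashing trick (illustrative only).
import hashlib

N_BUCKETS = 1024  # assumed dimensionality of the hashed feature space

def hash_features(raw_features: dict[str, float], n_buckets: int = N_BUCKETS) -> list[float]:
    """Project named features into a fixed-length vector via a one-way hash."""
    vector = [0.0] * n_buckets
    for name, value in raw_features.items():
        digest = hashlib.sha256(name.encode("utf-8")).digest()
        index = int.from_bytes(digest[:8], "big") % n_buckets
        # A hash-derived sign reduces bias from collisions (a standard refinement).
        sign = 1.0 if digest[8] % 2 == 0 else -1.0
        vector[index] += sign * value
    return vector

# Hypothetical input: the model sees only bucket indices and values, not the names.
sample = {"api_call:CreateRemoteThread": 3.0, "section_entropy": 7.2}
print(sum(1 for v in hash_features(sample) if v != 0.0), "non-zero buckets")
```

Because only bucket indices and aggregated values survive the transformation, the sensitive strings that named the features (file paths, usernames, and so on) cannot be read back out of the resulting vector.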

Bias and Transparency

Going back to the issue of ethics, the potential for AI systems to exacerbate social inequality through discriminatory or arbitrary decision-making (often caused by the use of limited data sets for training) has also become a recent source of public concern.4, 10 As government agencies and courts increasingly turn to AI-based systems to aid and enhance human decision making, including life-altering decisions such as criminal sentencing and bail determinations, it has become apparent that existing social biases can unintentionally become baked into AI-based systems via their algorithms or in the training data on which these algorithms rely. It is also becoming apparent that some of these AI systems are being made intentionally biased to hide arbitrary or unjust results behind a veneer of objectivity and scientific rigor.

A recent study by ProPublica of AI-based risk assessment scores used for bail decisions in Broward County, Florida illustrates the point.10, 15, 16 By comparing risk scores to defendants’ subsequent conduct, ProPublica showed not only how unreliable the scores were, but also how biased they were against African Americans. The scores erroneously flagged African American defendants as future criminals at nearly twice the rate at which they falsely flagged European American defendants. Importantly, this disparity occurred even though the system did not explicitly ask about race.16

In 2013, U.S. Immigration and Customs Enforcement (ICE) began the nationwide use of an automated risk assessment tool to help determine whether to detain or release non-citizens during deportation proceedings. It initially recommended release in only about 0.6% of cases.17 In 2017, ICE quietly modified the tool to make it recommend detention in all cases. This came to light only through a Reuters investigation of detention decisions in 2018.4, 18

The danger of these types of discriminatory and arbitrary AI usage is only heightened with the spread of AI-based facial recognition tools in law enforcement and other settings, including classrooms and cars.4 A study by researchers at the ACLU and U.C. Berkeley found that Amazon’s facial recognition software incorrectly classified 28 members of Congress as having arrest records. Moreover, the false positive rate was 40% for non-white members compared to 5% for white members. The subfield of affect recognition raises even more concerns.4

One of the clear lessons to be taken from these examples is the importance of making AI-based decision-making systems more transparent to the end-user or administrator charged with purchasing, installing, and supervising these systems. Information about algorithms and training data should be available for inspection on demand, and systems should be able to objectively record and display the logic patterns behind their decisions.10
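One way to ground that requirement is to have the system emit a structured record for every decision it makes. The sketch below is purely illustrative (the field names and helper are assumptions, not any vendor's schema): it captures the model version, a digest of the input, the verdict and score, and the features that contributed most to the result.

```python
# Hypothetical decision-audit record for an AI-based verdict (illustrative only).
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str                      # which trained model produced the verdict
    input_digest: str                       # hash of the input, so the artifact can be matched later
    verdict: str                            # e.g. "malicious" / "benign"
    score: float                            # raw model confidence
    top_features: list[tuple[str, float]]   # features that contributed most to the score
    timestamp: str

def record_decision(model_version: str, raw_input: bytes, verdict: str,
                    score: float, top_features: list[tuple[str, float]]) -> str:
    record = DecisionRecord(
        model_version=model_version,
        input_digest=hashlib.sha256(raw_input).hexdigest(),
        verdict=verdict,
        score=score,
        top_features=top_features,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Append this JSON line to a tamper-evident audit log for later inspection.
    return json.dumps(asdict(record))

print(record_decision("v2.3.1", b"suspicious-binary-contents", "malicious", 0.97,
                      [("api_call:CreateRemoteThread", 0.41), ("packed_section", 0.22)]))
```

With records like these retained, an administrator or auditor can reconstruct why a given verdict was reached long after the fact, which is the kind of inspection on demand the paragraph above calls for.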

In addition, regular auditing is clearly important, as built-in biases may only become apparent as systems are used and the data they collect and store expands. Such audits will require security and risk professionals and AI practitioners to create a bridge between various knowledge domains in order to enable and support effective oversight activities.
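As a concrete illustration of what such an audit might check, the sketch below computes the false positive rate per demographic group from logged decisions and flags the kind of disparity ProPublica documented. The data layout and the disparity threshold are assumptions made for illustration, not a regulatory standard.

```python
# Hypothetical bias-audit check: false positive rate per group (illustrative only).
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_positive: bool, actual_positive: bool)."""
    fp = defaultdict(int)         # flagged, but the actual outcome was negative
    negatives = defaultdict(int)  # all records with a negative actual outcome
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives if negatives[g]}

def flag_disparity(rates, max_ratio=1.25):
    """True if the highest group FPR exceeds the lowest by more than max_ratio."""
    values = sorted(rates.values())
    return bool(values) and values[-1] > max_ratio * values[0]

rates = false_positive_rates([
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
])
print(rates, "disparity flagged:", flag_disparity(rates))
```

Note that an audit like this requires ground-truth outcomes and carefully governed access to group labels, which is one reason it demands collaboration between risk professionals and AI practitioners.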

Guarding Against the Malicious Use of AI

Finally comes the dimension of ethical concern that puts the most fear into the hearts of security professionals and the public alike: the use of AI for malicious purposes. The concerns start with attacks on benign AI systems for malicious purposes, but extend to the strategic use of AI by attackers to subvert cyber defenses.

By gaining access to an AI-based system — or even to the data on which such a system is trained — an attacker can potentially change the way it functions in harmful ways. A world in which everything from cars to heart implants to power grids relies on AI and is connected to a network is one in which cyber attacks become increasingly life-threatening. Additionally, when AI determines the flow of personalized news and other information, malicious actors can undermine societal trust in government and media on a grand scale — a scenario that is all too common today.

One of the largest public concerns surrounding the release of any powerful new technology is that once Pandora’s box has been opened, whether that invention is for the good of mankind or engineered to cause its detriment, there is no putting that new technology back in the box. Once it is out there in the wild, it is here to stay, and whether it will make society better or worse can only be determined by careful and consistent monitoring over time.

AI-based security technology has now reliably proven itself to be more effective than traditional technology (such as antivirus products that rely on human-generated signatures), but so long as security practitioners have access to that cutting-edge technology, so too do people with malicious agendas. 

Preventing the malicious use of AI requires security professionals to double down on their commitment to the fundamentals of security, ensuring the confidentiality, integrity, and availability, or CIA, of AI-based systems. Again, such commitments will require greater levels of transparency into the application of AI at the algorithmic and code level, to ensure that future growth happens in an open and accountable fashion.
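The integrity leg of that commitment can be made concrete with something as simple as pinning cryptographic digests of training data and model artifacts and refusing to load anything that no longer matches. The sketch below is a hypothetical illustration; the file names and manifest format are assumptions, not a description of any specific product's mechanism.

```python
# Hypothetical integrity check for AI training data and model artifacts (illustrative only).
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(artifacts: list[Path], manifest: Path) -> None:
    """Pin the current digests of the listed artifacts."""
    manifest.write_text(json.dumps({str(p): sha256_of(p) for p in artifacts}, indent=2))

def verify_manifest(manifest: Path) -> bool:
    """Return True only if every pinned artifact still matches its recorded digest."""
    expected = json.loads(manifest.read_text())
    return all(sha256_of(Path(p)) == digest for p, digest in expected.items())

# Example usage (hypothetical file names):
#   write_manifest([Path("training_set.parquet"), Path("model.bin")], Path("manifest.json"))
#   assert verify_manifest(Path("manifest.json")), "training data or model was altered"
```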

Additionally, as risk professionals examine systems for the kinds of problems noted above, such as operational failure, privacy, and algorithmic bias, they’ll need to consider how threat actors distort or amplify the risks to achieve their own ends.

Security professionals must also remember that threat actors continually look for ways to apply AI themselves to boost the effectiveness of their attacks. The rise of AI-based cyber attacks like DeepLocker further undermines traditional cybersecurity methods, making it hard to imagine adequate defenses that do not themselves rely on AI.

Risks in AI-Driven Cybersecurity

Back in the late 1890s when the first steam-powered motor cars chugged around the streets at a top speed of 12 miles per hour, nobody would have suspected that just a few decades later, their descendants would make the horse-drawn carriage obsolete.

In contrast, long before the global spread and integration of AI into all walks of life, security professionals recognized that traditional cybersecurity solutions were becoming increasingly ineffective and antiquated. Automated attacks have proliferated, malware production and distribution have advanced, and organizations that rely on cloud computing and networks with numerous endpoints present ever more vulnerable attack surfaces. At the same time, the unchecked and often unregulated growth of the technology sector over the last few decades has exponentially expanded the attack surface of globally connected companies while handing malicious actors increasingly powerful tools.

Fortunately, most security practitioners recognize that AI-fueled cyber attacks are best thwarted by AI-powered security, and they are continually updating their defenses to meet this challenge. Leaders in cybersecurity have likewise acknowledged that effective security for automated systems must itself be driven by AI if defenders are to stay a step ahead of attackers, and they are delivering real-world AI-based solutions for security practitioners to deploy in their environments.

Reducing risk in AI adoption thus requires advances in AI-based cybersecurity, coupled with broader adoption of that technology across industry and government sectors.6 When attackers use AI-based tools to manipulate AI-based cybersecurity into, for example, recognizing benign code or behavior as malicious, they damage both the system that security tool was protecting and the public reputation of AI. In other words, a practical first step toward securing the very future of AI is ensuring that AI-based cybersecurity systems, and any training data they use, are themselves secure.

While so much of the ethical oversight of AI depends on transparency within the security ecosystem, AI-based cybersecurity is yet another area in which transparency may conflict to some extent with the effectiveness of the solutions. The advantages of making code open in this context may be outweighed by the risk of subsequent exploitation by malicious actors; likewise, where training and testing data are supplied, there are obvious privacy concerns around making that data open, as we discuss below.

The stakes in cybersecurity efficacy demand that IT admins and similar industry users be given enough information about the ways their security is implemented and how it has been tested, in order to make informed decisions about their level of risk in granting access to that data.

Building Ethically-Grounded Cybersecurity Organizations

The risk of AI-based cybersecurity technology making unethical decisions is unlikely to be nearly as large as when AI is used to classify real-world human activity, as is occurring right now in China through a controversial experimental social credit system designed to classify people based on their personal and public data.23 Nonetheless, AI-based cybersecurity has the potential to exclude individuals or groups from accessing computer systems in discriminatory or arbitrary ways, most importantly in ways the individuals themselves may not fully understand.

The same lessons that apply to other AI-based systems in this regard therefore also apply to AI-based cybersecurity: That which is not 100% transparent is open to unintentional flaws and misuse. At the same time, AI-based cybersecurity also has the capacity to make other AI-based decision-making systems more secure, thus protecting them from malicious attacks.

AI-driven cybersecurity can be used to enhance privacy for both individuals and corporations, but it also creates incentives for the creators of such systems to collect and use data without informed consent, so the inclination to behave badly must be countered at all times by organizational and technical safeguards. The risk of discriminatory or arbitrary decisions made by AI will always be present as a result of the self-learning capabilities of such systems, and thus they will always require regular human audits to ensure that individuals and groups are not excluded from system use or privacy protections.

At the end of the day, our call to action is clear: AI plays a vital and beneficial role in society and in security, but deploying it in the real world requires careful attention to detail on the part of those who deploy it and a careful balance of openness and transparency on the part of those who create and supply it. While AI-driven security can mount a highly effective defense against cyber attacks as part of a layered defense strategy, care needs to be taken at all times to ensure that systems and training data are sufficiently transparent to allow users and administrators to make informed decisions about acceptable risk levels.

Although many of the points outlined here are largely technical guidelines, they depend on the creation of accountability structures and an ethics-focused organizational culture to ensure that they are implemented effectively.21, 22 

In the next installment of the AI Manifesto, we will look at the ways organizations can hold themselves accountable for better cyber risk assessments and better overall attack defenses. 

NOTE: This article was originally written by Malcolm Harkins and published in the BlackBerry Cylance publication 'Phi Magazine' Issue 2. The online version of Phi is now available for digital download.

References: 

[1] M. Harkins, “The Promises and Perils of Emerging Technologies for Cybersecurity: Statement of Malcolm Harkins,” 2017.

[2] “The AI Now Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term,” 2016.

[3] A. Campolo, M. Sanfilippo, M. Whittaker, and K. Crawford, “AI Now 2017 Report,” 2017.

[4] M. Whittaker, K. Crawford, R. Dobbe, G. Fried, E. Kaziunas, V. Mathur, S. M. West, R. Ricardson, J. Schultz, and O. Schwartz, “AI Now Report 2018,” 2018.

[5] I. A. Foundation, “Artificial Intelligence, Ethics and Enhanced Data Stewardship,” 2017.

[6] Cylance, “The Artificial Intelligence Revolution in Cybersecurity: How Prevention Achieves Superior ROI and Efficacy,” 2018.

[7] Cylance Data Science Team, "Introduction to Artificial Intelligence for Security Professionals." Cylance, 2017.

[8] A. Smith, “Franken-algorithms: the deadly consequences of unpredictable code,” The Guardian, August 30, 2018.

[9] J. Glanz, M. Suhartono, and H. Beech, “In Indonesia Lion Air Crash, Black Box Data Reveal Pilots’ Struggle to Regain Control,” The New York Times, November 27, 2018.

[10] Committee on Oversight and Government Reform, “Rise of the Machines,” Washington, D.C., 2018.

[11] U.N. Global Pulse, “Big Data for Development: Challenges & Opportunities,” 2012.

[12] O. Tene and J. Polonetsky, “Big Data for All: Privacy and User Control in the Age of Analytics,” Northwest. J. Technol. Intellect. Prop., vol. 11, p. xxvii, 2012.

[13] K. Weinberger, A. Dasgupta, J. Attenberg, J. Langford, and A. Smola, “Feature Hashing for Large Scale Multitask Learning,” February 2009.

[14] J. Attenberg, K. Weinberger, A. Smola, A. Dasguptaa, and M. Zinkevich, “Collaborative spam filtering with the hashing trick,” Virus Bulletin, November 2009.

[15] J. Angwin, J. Larson, S. Mattu, and L. Kirchner, “Machine Bias,” ProPublica, May 2016.

[16] J. Larson, S. Mattu, L. Kirchner, and J. Angwin, “How We Analyzed the COMPAS Recidivism Algorithm,” 2016.

[17] M. Nofferi and R. Koulish, “The Immigration Detention Risk Assessment,” Georget. Immgr. Law J., vol. 29, 2014.

[18] M. Rosenberg and R. Levinson, “Trump’s catch-and-detain policy snares many who call the U.S. home,” Reuters,  June 20, 2018.

[19] United States Government, “AI, Automation and the Economy,” December 2016.

[20] D. Acemoglu and P. Restrepo, “The Race between Man and Machine: Implications of Technology for Growth, Factor Shares, and Employment,” Am. Econ. Rev., vol. 108, no. 6, pp. 1488–1542, June 20, 2018.

[21] M. Harkins, "Managing Risk and Information Security," Second. Aspen, 2016.

[22] M. C. Gentile, “Giving Voice to Values,” Stanford Soc. Innov. Rev., 2018.

[23] Rogier Creemers (via China Law Translation), “Planning Outline for the Establishment of a Social Credit System (2014-2020),” 2015.

About The Cylance Team

Cylance’s mission is to protect every computer, user, and thing under the sun. That's why we offer a variety of great tools and resources to help you make better-informed security decisions.