A big part of the job for cyber underwriters is to be curious, to piece together the “what-ifs” and to try to analyze as yet unknown or untested risks. Our role is to look at the ever-evolving technology and a shifting regulatory environment when considering liability implications.
When it comes to how individuals think about, value and protect their data, underwriters have much to consider. Attitudes to the ways in which data is used vary greatly:
- At one end of the scale are people who are willing to part with their personal data so long as there is a clear reward for doing so – the U.K.’s Data Marketing Association (DMA) refers to this group as “data pragmatists.”
- At the other end are people who are resistant to sharing personal data under any circumstances – what the DMA calls “data fundamentalists.”
- Somewhere in between are the “data unconcerned,” those who show no or little concern with the issue of digital privacy and data exchange.
As we all know, data is a commodity – and a valuable one at that. There are, broadly speaking, three types of personal data:
- Volunteered data – data that individuals readily volunteer to third parties, such as name and gender;
- Observed data – data such as location data or browsing history captured by programs and websites, for example; and
- Inferred data – what can be guessed about you from the other two.
Inferred data is, of course, the real money-maker here. After all, as the saying goes, “if you are not paying for it, you are the product.” As consumers, we have become used to being “served” advertising targeted to us based upon our Internet searches, our age group and gender, and who our friends are.
But what if this data was not just used to sell us stuff, but could also be used for good?

Potential Societal Benefits of Data Sharing

Many of us carry loyalty cards for our favorite stores which give retailers information about what we like to eat and drink. This data enables shops to target us with advertising and offers, but these cards can also yield data on healthcare purchases, among other things. Data about how often those suffering from long-term or chronic pain buy pain medication could contribute to health research into the lifestyle predictors of various illnesses, for example.

A group of cellphone operators in India has begun a pilot project with the World Health Organization to identify whether their network data can provide insights into population volume and movement patterns, and whether it can be used to improve planning to control the spread of tuberculosis – one of the biggest killers in the country.

There are all sorts of ways that this type of personal data could be put to powerful use to benefit society; for example, in crime prediction and prevention, or to analyze the impact of floods or wildfires on communities.

But on the flipside, some of us are becoming increasingly concerned about the way our data is used, and what it is used for.
Some are concerned that data analytics firms are harvesting their data to skew election results. Some resent being targeted by advertising. And others fear that their data is not as private as they might hope.
For example, many homes now have voice-enabled digital assistants that answer questions, order shopping, control devices such as light switches or thermostats – and even tell jokes. These assistants can be convenient, useful and entertaining.
But some users have expressed concern about the extent to which the data collected and stored by this technology is kept private.
The makers of digital assistants insist that the devices do not eavesdrop, and that recording is activated only when a “wake word” is spoken. A woman in Portland, Oregon, however, claimed that earlier this year her digital assistant recorded a conversation between her and her husband – on the titillating subject of hardwood flooring – and sent it to a random contact in her husband’s address book. This was explained as a glitch that occurred after the digital assistant was “awoken” by a word similar to its “wake word” and then responded to other words that sounded like commands.
While some of us might write this off as a freakish and thankfully not too sinister occurrence, others might view this as evidence of increasing intrusion into our private lives by organizations that can make use of our data for their own ends.
A study last year by the U.K.’s Information Commissioner’s Office found that only 20% of the U.K. public had trust and confidence in the companies storing their personal information. And only one in ten said they had a good understanding of how their personal data is being used.
The Future of Data Privacy

So what does all this mean for the future of data privacy?

It’s possible to imagine two extreme scenarios. The first is a data free-for-all, in which data is shared willingly and openly in order to reap the potential societal benefits.

At the opposite extreme is a society in which privacy is valued more highly than the benefits of sharing data, where individuals “own” and guard their personal data closely – and governments impose even more stringent data protection requirements and penalties on those that breach them.

The answer is likely to lie somewhere in the middle. But as cyber underwriters we must explore all the possibilities and assess the liabilities that might arise.
The evolution of global data protection regulation will no doubt have a major impact on where we end up. Will regulation continue down the path of treating personal data as proprietary, as the European Union’s new General Data Protection Regulation (GDPR) encourages? Or could it, perhaps in response to changing public opinion or empirical evidence, switch direction and attempt to free up personal data so that we may better harness the potential benefits it could bring?
Increased government intervention and regulation in this area could also up the ante still further on firms’ data security risk management. Stricter reporting requirements and stiffer penalties for data breach would increase companies’ potential liabilities and possibly the reputational risks they face.
And the risk implications go right to the root of how companies operate – if individuals guard their data more closely, or the use of personal data is restricted, companies’ ability to do business could be severely disrupted.
Even if there is a relaxation of the rules around the use of personal data, the risks of that data being breached, leaked or used in ways that individuals or companies did not intend remain.
Cyber liability is an evolving type of insurance. None of us knows for sure how the data privacy landscape might look in five, 10 or 20 years’ time. But as underwriters we will keep our eyes on the ball and continue to monitor developments.
About the Author
James Tuplin is Head of Cyber and TMT – International Financial Lines at AXA XL, a division of AXA, where he is responsible for the underwriting of cyber insurance, and professional indemnity insurance for the technology and media sectors. Mr. Tuplin has more than 15 years’ experience in the insurance industry, having begun his career as a Pension Trustee Liability Underwriter at an MGA in 2003. He then worked at Zurich Global Corporate for 10 years, where he held various underwriting positions specializing in errors and omissions and professional indemnity insurance.
James went on to hold the role of Senior Technology PI and Cyber Underwriter at AGCS London in 2013, before joining QBE as Cyber & TMT Portfolio Manager for Europe a year later. He joined XL Catlin (now AXA XL) in 2017, where he is responsible for all cyber and technology professional indemnity insurance underwritten outside of North America.