On the Economics of Cybersecurity
I believe that having a broader perspective can influence how we approach the world. I'm actually not educated in economics and have very little knowledge about it. But money seems pretty important in this world, so I figured I should learn some basic economic concepts to get through it better.
I have limited experience, but as I frequently do when I want to explore a topic, I turned to YouTube. My interest is in how economics can shape the cybersecurity realm and how much the two are intertwined. And it looks like they are, as most things are. In particular, the lecture "Security Engineering Lecture 5: The Economics of Security" by Ross Anderson turned out to be incredibly useful in answering this question. Ross Anderson was a really important figure: he was Professor of Security Engineering at Cambridge University and Edinburgh University, and he wrote an incredible book called "Security Engineering". This blog post includes the highlights of that lecture. It is packed with information, hidden gems that you need to listen to at least twice to discover how things actually work.
The Intersection of Economics and Cybersecurity
Anderson's lecture reveals that cybersecurity isn't just about technology: it's deeply intertwined with economics, and in particular with how incentives shape behavior. The first topic of analysis is cooperation versus conflict; in its basic form, we want to understand whether or not two or more parties will cooperate. To represent this, we have a tool called game theory, which lets us describe the world through mathematical models. There are, for example, perfect-information games such as chess or Go, and, more interestingly, imperfect-information games such as poker, where you don't know the entire game state or the intentions of the other players.
1. Game Theory in Cybersecurity
During the 20th century, Game Theory was applied to nuclear deterrence. The question was:
How could you persuade the Americans to cut the tens of thousands of nuclear warheads they had if there was a risk that the Russians wouldn't do the same? And how could you persuade the Russians to cut their armaments if the Americans might not do the same?
Game theory could also play a role in understanding cybersecurity dynamics. The examples given here represent a subset of what the lecture covers; for the purpose of this article, I'll write about the two most important thought experiments.
Prisoner's Dilemma
This classic game theory scenario applies to many cybersecurity situations. For example, two companies might both benefit from sharing threat intelligence, but fear giving competitors an advantage. An example of interest to us as we study cybercrime: how do you persuade the police in one country to extradite a cybercriminal who is defrauding people in another country, when that first country's police don't extradite, or don't even investigate, people committing cybercrime at home, let alone against residents of foreign countries?
Police priorities just don't place the welfare of foreign citizens that high on the agenda. So if you're a police force, how do you get collaboration from foreign police forces against criminals? And of course, there are some countries, like Russia and Israel, that won't extradite their nationals at all.
So the prisoner's dilemma is a real problem when we're dealing with cybercrime. Going back to the normal form of the game, we come to a tough but inescapable conclusion: if the game is truly as described, then there isn't any escape.
One way to fix this is to change the game, for example by being merciful in a repeated version of it (see tit-for-tat). Over the past 30 years or so, people have realized that the evolution of strategy explains an awful lot of behavior among people, among corporations, among states, and even among animals.
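To see why the one-shot game is so bleak and how tit-for-tat changes the repeated game, here is a minimal Python sketch. It's my own illustration, not from the lecture, and the payoff values are arbitrary placeholders with the usual ordering (temptation > reward > punishment > sucker's payoff):

```python
# A minimal sketch of the one-shot and iterated Prisoner's Dilemma.
# Payoff numbers are illustrative placeholders, not from the lecture.

PAYOFFS = {  # (my move, their move) -> (my payoff, their payoff)
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # I cooperate, they defect
    ("D", "C"): (5, 0),   # I defect, they cooperate
    ("D", "D"): (1, 1),   # mutual defection: the one-shot equilibrium
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Tit-for-tat against itself sustains cooperation; against a defector it
# loses only the first round and then retaliates.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The point of the sketch is simply that once the game is repeated, being merciful but retaliatory can outperform constant defection.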
Hawk-Dove Game
This model helps explain why some individuals in a given population are more aggressive than others. In cybersecurity, you can use it to model why some actors are more aggressive than others, balancing the potential gains against the risks of conflict.
This turns out to be a reasonably good model of how aggression comes about, and how it settles at a particular level among people, among firms, among states, and of course in the animal kingdom, which is what Maynard Smith, the evolutionary biologist who developed this model, was setting out to try and model.
The algebraic results are really interesting but beyond the scope of this blog post. Still, the Hawk-Dove model suggests that constantly playing Hawk (being aggressive) is not optimal: sometimes the equilibrium involves a mixed strategy (sometimes being aggressive, sometimes not). This indicates that unpredictability can be advantageous in competitive situations. For a deeper dive, I suggest visiting this webpage.
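As a rough illustration of that mixed equilibrium, here is a small Python sketch using the standard Hawk-Dove payoffs (my own illustration, not from the lecture). V is the value of the contested resource, C the cost of a fight, and the numbers are arbitrary:

```python
# Standard Hawk-Dove payoffs; V and C below are arbitrary example values.

def hawk_dove_payoff(me, other, V=2.0, C=6.0):
    """Payoff to `me` against `other`, where each move is "H" or "D"."""
    if me == "H" and other == "H":
        return (V - C) / 2      # fight: share the value, pay the cost
    if me == "H" and other == "D":
        return V                # hawk takes everything
    if me == "D" and other == "H":
        return 0.0              # dove backs down
    return V / 2                # two doves share peacefully

def expected_payoff(strategy, p_hawk, V=2.0, C=6.0):
    """Expected payoff of a pure strategy against a population that plays
    Hawk with probability p_hawk."""
    return (p_hawk * hawk_dove_payoff(strategy, "H", V, C)
            + (1 - p_hawk) * hawk_dove_payoff(strategy, "D", V, C))

# When C > V, the evolutionarily stable mix plays Hawk with probability V/C.
# At that frequency, Hawk and Dove earn the same expected payoff, so neither
# can invade the other.
V, C = 2.0, 6.0
p_star = V / C
print(expected_payoff("H", p_star, V, C))  # 0.666...
print(expected_payoff("D", p_star, V, C))  # 0.666...
```

With these numbers, the stable state is a population (or a single player randomizing) that is aggressive only a third of the time, which is the "unpredictability can pay" intuition above.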
2. Market Forces and Security Failures
In the second part of the lecture, Anderson explains how market dynamics can lead to suboptimal security outcomes: when markets fail, how they fail, and how this affects security.
Classical economic theory, originating with Adam Smith's "invisible hand", suggests that markets function efficiently when numerous buyers and sellers interact, and no single player holds market power. This leads to an equilibrium where supply meets demand, and resources are allocated efficiently.
Economists use Pareto efficiency as a standard for market efficiency. A market is Pareto efficient if no one can be made better off without making someone else worse off. However, this definition doesn't account for fairness, meaning a highly unequal distribution of resources can still be "efficient" by this definition.
For a market to remain efficient, several conditions must be met:
- Rational Actors: All participants must act rationally.
- Property Rights: All goods and services must have clear ownership and be tradable.
- Complete Information: All market participants need access to the same information.
- No Transaction Costs: Buying and selling should occur without friction.
Information is crucial, so a really important factor in the economics of information goods and services markets is this: the marginal cost of information is basically zero. As an example, encyclopedias used to be very expensive; nowadays Wikipedia offers fundamentally all the same information for free. The question then becomes:
How can someone make money out of selling information?
First let's define a few concepts.
Externalities: Externalities are goods or bads that people care about, but that aren't traded because they're typically side effects. An example of a negative externality is a steelworks polluting a fishery downstream. Another is firms and households burning oil, which is basically heating up the planet. A third is insecure IoT devices, which end up being hacked and recruited into botnets.
There are also positive externalities, such as education: we find that if people have one more year of education, that cuts crime by about 2%. Other positive externalities include technical standards, which enable firms to make their products interoperate.
Public Goods: A public good is non-rivalrous and non-excludable, which means that my consumption of the public good doesn't affect your consumption, and I've got no means of stopping you from consuming a public good if I produce it. An example of a public good is scientific knowledge.
Network Externalities: This is where the connection to information industries becomes clearer. The inventor of Ethernet, Bob Metcalfe, came up with what he called Metcalfe's Law: the value of a network is proportional to the square of the number of users. Examples of network externalities are telephones, email and social networks. The problem is that this tends to create monopolies.
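As a toy illustration (mine, not Anderson's) of why network effects tip markets toward a single winner, here is Metcalfe's Law in a few lines of Python; the constant k and the user counts are arbitrary assumptions:

```python
# Metcalfe's Law: network value proportional to the square of the user count.
# The constant k and the example user counts are arbitrary.

def metcalfe_value(users: int, k: float = 1.0) -> float:
    return k * users ** 2

for n in (1_000, 10_000, 100_000):
    print(n, metcalfe_value(n))
# 10x more users gives roughly 100x more value, so the biggest network
# becomes disproportionately attractive and late entrants struggle.
```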
Technical Lock-in: Often in the information goods and services world, if you buy a product, you end up being committed to buying more of it or to spending money on durable complementary assets such as apps. A few examples are iPhone/Android and Office/Google Docs. This makes it more difficult to switch from one vendor to another.
3. The Economics of Information Goods
Cybersecurity products and services have unique economic properties. Low marginal costs, technical lock-in, and network externalities tend to lead to a dominant firm market model, where one firm or a small number of firms can set prices and extract significant money from the market. Given all three factors, monopoly becomes even more likely. Anderson emphasizes how misaligned incentives often lead to security failures:
Vendor vs. User Security: Companies may prioritize rapid market entry over security, leaving users to bear the risk. This is exemplified by Microsoft's "ship it Tuesday and get it right by version three" philosophy in the 1990s. Similarly, many Android phones today are not patched up to date, and Facebook provides users with an illusion of privacy while making most information available to advertisers.
Asymmetric Information: In cybersecurity markets, buyers often can't distinguish between good and bad products, leading to potential "lemons markets." Anderson uses the example of used cars to illustrate this concept: when buyers can't tell the difference between good and bad cars, only the "lemons" (bad cars) end up on the market. In cybersecurity, this can mean companies invest more in marketing than in actual security engineering.
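To make the lemons dynamic concrete, here is a small simulation sketch (my own, not Anderson's): buyers can't observe quality, so they only ever pay the average value of what's on offer, and sellers of above-average products keep withdrawing until mostly lemons remain. The uniform value distribution and the numbers are arbitrary assumptions:

```python
# A toy "market for lemons" applied to security products.
# Buyers can't observe quality, so they pay the average value on offer.
import random

random.seed(0)

# Each product's true value to a buyer; sellers know it, buyers don't.
products = [random.uniform(0, 100) for _ in range(1000)]

round_num = 0
while products:
    round_num += 1
    offer = sum(products) / len(products)      # buyers pay the average value
    # Sellers whose product is worth more than the offer withdraw it.
    remaining = [v for v in products if v <= offer]
    print(f"round {round_num}: offer={offer:.1f}, products left={len(remaining)}")
    if len(remaining) == len(products):
        break                                   # nobody left the market: stable
    products = remaining
```

Each round the offered price falls and the better products leave, which is the unravelling that lets marketing beat genuine security engineering.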
Moral Hazard: The existence of cybersecurity insurance might lead some companies to underinvest in security measures. Anderson gives the example of Volvo drivers having more accidents, possibly because they feel safer and drive more recklessly. In cybersecurity, companies with insurance might take more risks, knowing they're covered.
These economic factors shape how information industries, including cybersecurity firms, develop their business models and generate revenue, often at the expense of optimal security outcomes for users.
4. Measuring Cybercrime
Accurately measuring cybercrime presents significant challenges due to various factors that can skew data. Anderson highlights how different stakeholders may manipulate or misreport cybercrime statistics to serve their interests. For instance, banks might blame customers for fraud to avoid liability, while law enforcement and politicians may underreport to appear more effective. A stark example of this comes from the UK, where from 2005 to 2015, a policy change requiring fraud victims to report to banks first led to a misleading decrease in reported crime rates. This had far-reaching political consequences, allowing politicians to claim success in crime reduction while actually masking the growing issue of cybercrime.
To get a better grasp of the cybercrime landscape, Anderson and his team conducted large surveys in 2012 and 2019. Surprisingly, despite significant technological shifts (from laptops to smartphones, from on-premises servers to cloud computing), the underlying economic patterns of cybercrime remained relatively stable. Their findings categorized cybercrime costs into three main areas:
- Traditional fraud now conducted via computers (e.g., tax and welfare fraud), costing hundreds of dollars per citizen annually.
- Evolved fraud leveraging new technologies (e.g., payment fraud), costing tens of dollars per person yearly.
- Pure cybercrime and related infrastructure (e.g., botnets, antivirus software), costing several dollars per person annually.
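Purely as an illustration of those orders of magnitude (the figures below are placeholders I picked, not Anderson's numbers), scaling the per-citizen costs up to a whole population shows why traditional fraud dominates the total even though "pure" cybercrime gets most of the attention:

```python
# Illustrative arithmetic only: placeholder per-citizen figures chosen to
# match the orders of magnitude above, scaled to a hypothetical population.

population = 60_000_000  # roughly UK-sized, chosen for illustration

cost_per_citizen = {
    "traditional fraud (tax, welfare)": 300.0,  # "hundreds of dollars"
    "evolved fraud (payment fraud)": 30.0,      # "tens of dollars"
    "pure cybercrime + defences": 3.0,          # "several dollars"
}

for category, cost in cost_per_citizen.items():
    print(f"{category}: ${cost * population / 1e9:.1f}B per year")
print(f"total: ${sum(cost_per_citizen.values()) * population / 1e9:.1f}B per year")
```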
The research also revealed a notable exception: ransomware and cryptocurrency-related crime, which have been doubling yearly, reaching $2 billion globally by 2018. This rapid growth is attributed to easily exploitable systems, cryptocurrency-enabled money laundering, and scalable criminal business models like "ransomware as a service." Anderson emphasizes that understanding and measuring cybercrime requires a multidisciplinary approach, combining economics, engineering, and psychology. He also notes the importance of victimization surveys, such as those conducted by the UK's Office of National Statistics, which provide a more accurate picture of cybercrime prevalence than official reports alone.
Conclusion
And that wraps up my takeaways from Prof. Anderson’s incredible lecture. One of my favorite lines from him was this:
Understanding that means understanding the economics and its business model, it means understanding the engineering, and in many cases, it also means understanding the psychology.
In conclusion, what I hope to convey is how much cybersecurity isn't just about tech—it’s this complex mix of economics, engineering, and even human behavior. We’ve touched on how markets shape security, the role of game theory in cyber dynamics, and how to measure cybercrime effectively.
I hope you found this as eye-opening as I did. Thanks for your time!