The End of the Fortress Metaphor

Geoffroy Couprie is a consultant in software security and an independent developer. He teaches development teams how to write safe software.

This is the most seductive approach in IT security. It is also the worst. For more than 20 years now, people have believed that their network is a fortress, protected from the outside world by firewalls, NAT and DMZs. This idea is obsolete, and we must change now.

Twenty years ago, it was still possible to see internal networks left completely open, with every machine directly addressable from the Internet. There were enough IPv4 addresses for everybody, networks were small, life was good. But security was atrocious: TCP stacks were remotely exploitable, worms spread across corporate networks, internal file servers were publicly accessible. So people found the easiest way to secure everything on the cheap: isolate the network from the outside world. There's nothing wrong with that approach: it made sense at the time.

As usual, when someone finds a small, temporary hack instead of fixing things properly, people keep improving it, converging on a local optimum. This led to firewalls on every machine and every network. People discovered that NAT could hide IP addresses, rather than merely allowing address reuse, and mistook it for a security feature. All the nonsense about DMZs and air-gapped networks appeared. Companies were actually selling hardware that would receive packets from one network, physically disconnect from it, connect to another network, then forward the packets. An airgap, yup.

It worked for a time, since many attacks in the 90s relied on remote exploits against operating systems and servers. If nobody can exploit the public face of the network, everything is fine.

(Image: Sysadmin taunting hackers)

Unfortunately, we cannot think that way anymore. Web applications expose too many entry points into your servers. Pivoting from a DMZ server to the internal network is easy, since internal users also access those web applications. The attacker is only one wrong click on a lovingly crafted PDF file away from your network. Why would you concentrate on firewall rules when phishing is so effective?

Once the attacker is inside your network, it is over. They can listen to traffic, elevate their privileges, pivot to another machine, impersonate users, traverse the whole network…

(Image: Traditional IT infrastructure)

The fortress metaphor, where everything behind your firewall is safe and trusted, is dead. Your walls are useful, but not that useful when the attacker can get insiders to help him, willingly or unwittingly.

The goal is not to keep the attacker out of your system. It is to detect the threat, isolate it, trace the attacker's path and heal the system. The attacker may have been in your network for months. How can you be sure he is not there anymore?

There is now a much better metaphor than the fortress: think of your system as a city. The city can have walls, but to function properly, it must let people in and out. You cannot know precisely whether everything in your city is legitimate. Chances are, someone is using a personal USB key. Someone else has connected a WiFi router in their office. People are talking on Facebook, watching porn, using forbidden applications, like modern browsers. You will not be able to catch them all, unless repression is your main tool, and that will not help them work. You want to reduce crime in your city, but you will not eradicate it. You cannot prevent fires, but you can keep them from spreading too far and too fast.

If you assume the attacker is already present on your network, you go from plugging holes in one wall to verifying dependencies and access control between systems. The trusted network approach is flawed; you have to think in terms of authorization from one user, application or machine to another. The attacker will explore your network from one node to the next connected one, from one access level to the one above it, and try to combine them. Defenders think in lists, attackers think in graphs. You must assume that the internal network is as dangerous as the Internet.
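To make "attackers think in graphs" concrete, here is a minimal sketch, not from the original article: machines are nodes, trust relationships are edges, and a breadth-first search finds a chain of pivots from a phished workstation to the data the attacker wants. Every node name and edge here is a hypothetical example.

```python
# Model each machine as a node and each trust relationship (shared
# credentials, open port, web app access) as a directed edge.
# All names below are hypothetical illustrations.
from collections import deque

edges = {
    "workstation":   ["dmz-webapp", "file-server"],   # user browses internal apps
    "dmz-webapp":    ["app-server"],                  # webapp talks to its backend
    "file-server":   ["backup-server"],               # shared admin credentials
    "app-server":    ["database"],                    # db connection string on disk
    "backup-server": ["database"],                    # backups include db dumps
}

def attack_path(start, target):
    """Breadth-first search: shortest chain of pivots from start to target."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no pivot chain exists

print(attack_path("workstation", "database"))
# ['workstation', 'dmz-webapp', 'app-server', 'database']
```

A defender auditing each machine's firewall rules in isolation (a list) can easily miss that these edges compose into a three-hop path; removing any one edge on the path is what breaks the chain.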

This is also why the nonsense around the private cloud has to die. Assuming that servers will be safer on your own network leads to a false sense of security. A system built from scratch to handle the worst of the Internet has a better chance of surviving. What matters is the granularity of access control around data, users and applications. The network is not a security boundary anymore.
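What "the network is not a security boundary" means in practice: every request carries an authenticated identity, and access is granted only by an explicit rule, never by network location. A minimal sketch, with all identities and resources being hypothetical names:

```python
# Deny-by-default authorization keyed on (subject, action, resource).
# Being "inside" the network grants nothing: the caller's IP or subnet
# is never consulted. All names below are illustrative assumptions.

GRANTS = {
    ("billing-app", "read",  "customers-db"),
    ("billing-app", "write", "invoices-db"),
    ("alice",       "read",  "invoices-db"),
}

def authorize(subject: str, action: str, resource: str) -> bool:
    """Allow only if an explicit grant exists; deny everything else."""
    return (subject, action, resource) in GRANTS

assert authorize("billing-app", "read", "customers-db")
assert not authorize("random-internal-host", "read", "customers-db")
```

With this kind of rule in front of every datastore and service, an attacker who lands on an internal machine gains nothing from the network position alone; they still have to steal a specific identity with a specific grant.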
