The Clever Cloud Blog

The Commit Log

Today's top news


Clever Cloud structures itself to support its organic growth

In 2023, Clever Cloud once again made great strides, with a significant increase in its turnover. Having recently passed the 60-employee mark, the company is welcoming new talent to support its development and is expanding its Management Committee.
Company Press

Our journey to a better Clever Cloud

Over two years ago, we decided to strengthen the Clever Cloud team. At that time, our goal was to better support our customers in their growth, respond to their requests and complete the development of new products more efficiently. Discover our evolution.
Company

Our consortium, InfrateX, wins Simpl

The InfrateX Consortium, led by Sopra Steria with NTT Data and including Clever Cloud as a member, has won the major Simpl contract.
Company Press

Clever Cloud announces the appointment of Jean-Baptiste Piacentino as Cloud Diplomat

Nantes, September 18, 2023 - Clever Cloud, a French creator of solutions for deploying and…

Company

How does Matomo work on Clever Cloud?

In a previous post, we explained why you should switch from Google Analytics to Matomo for your audience measurement.
Engineering

Biscuit tutorial

In the previous article, I introduced Biscuit, our authentication and authorization token, and mentioned…

Engineering

Biscuit, the foundation for your authorization systems

After 2 years of development, I am proud to share with you the official release…

Engineering

In Defense of Optimization Work

It is common knowledge that hardware is cheap, and programmers are expensive, and that…

Engineering

Spectre and Meltdown

Yesterday, two issues affecting CPUs were released to the public.

TL;DR: the attacks are…

Engineering

Hot reloading configuration: why and how?

At Clever Cloud, we are working on Sōzu, an HTTP reverse proxy that can…

Engineering

Async, Futures, AMQP, pick three

A few weeks ago, we set out to develop an AMQP client library in…

Engineering

Falling for Rust

If you ever talked to me, or looked at my Twitter feed, you may have…

Engineering

Let your logs help you

We use logs for everything, to track errors, measure performance, keep a journal of how…

Engineering

Security is a process, not a reaction

Wake up. Check the news. There is a new OpenSSL vulnerability, the world is on…
Company

nom 1.0 is here! REJOICE!

nom is a parser combinators library written in Rust that I started about a…

Engineering

The End of the Fortress Metaphor

Geoffroy Couprie is a consultant in software security and an independent developer. He teaches development teams how to write safe software.

The fortress metaphor is the most seductive approach in IT security. It is also the worst. For more than 20 years now, people have believed that their network was a fortress, protected from the outside world by firewalls, NAT and DMZ. This idea is obsolete, and we must change now.

20 years ago, it was still possible to see internal networks totally open, with every machine directly addressable from the Internet. There were enough IPv4 addresses for everybody, the networks were small, life was good. But the security was atrocious: TCP stacks were remotely exploitable, worms were reproducing on corporate networks, internal file servers were publicly available. So people found the easiest way to secure everything on the cheap: isolate the network from the outside world.

There is nothing wrong with that approach: it made sense at the time. As usual when someone finds a small, temporary hack instead of fixing everything, people kept improving it, approaching the local optimum. This led to firewalls on every machine, on every network. People discovered that NAT could hide IP addresses, instead of simply allowing IP reuse, and thought it was a security feature. All the nonsense about DMZs and airgapped networks appeared. Companies were actually selling hardware which would get packets from one network, disconnect (physically) from it, connect to another network, then send the packets. Airgap, yup.

It worked for a time, since a lot of exploits in the 90s focused on remote exploits in operating systems and servers. If you cannot exploit the public face of the network, everything is alright.

Unfortunately, we cannot think that way anymore. Web applications give too many entry points to your servers. Pivoting from a DMZ server to the internal network is easy, since internal users will also access those web applications. The attacker is only one wrong click on a lovingly crafted PDF file away from your network. Why would you concentrate on firewall rules when phishing is so effective?

Once the attacker is in your network, it is over. Listen to traffic, escalate privileges, pivot to another machine, impersonate users, traverse the whole network...

The fortress metaphor, where everything behind your firewall is safe and trusted, is dead. Your walls are useful, but not that much when the attacker can get insiders to help, willingly or unknowingly. The goal is not to keep the attacker out of your system. It is to detect the threat, isolate it, find the attacker's path and heal the system. The attacker may have been in your network for months. How would you be sure he is not there anymore?

There is a much better metaphor than the fortress now. Think of your system as a city. The city can have walls, but to function properly, it should let people enter and leave. You cannot know precisely whether everything in your city is legit. Chances are, someone uses a personal USB key. Someone else connected a WiFi router in their office. People are talking on Facebook, watching porn, using forbidden applications, like modern browsers. You will not be able to catch them, unless repression is your main tool, and this will not help them work. You want to reduce criminality in your city, but you will not eradicate it. You cannot prevent fires, but you can prevent them from spreading too far and too fast.

If you imagine the attacker as already present on your network, you go from plugging holes in one wall to verifying dependencies and access control between systems. The trusted network approach is flawed; you have to think in terms of authorization from one user, application or machine to another. The attacker will explore your network from one node to the next connected one, from one access level to a higher one, and try to combine them. Defenders think in lists, attackers think in graphs.

You must assume that the internal network is as dangerous as the Internet. Assuming that servers will be safer on your own network leads to a false sense of security. This is also why the nonsense around private cloud has to die. A system built from scratch to handle the worst of the Internet has a better chance of surviving. What matters is access control granularity around data, users and applications. The network is not a security boundary anymore.
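
To make "defenders think in lists, attackers think in graphs" concrete, here is a minimal Rust sketch, not from the original article, that models machine-to-machine access as a directed graph and walks it from a single phished workstation. The host names and edges are purely hypothetical.

use std::collections::{HashMap, HashSet, VecDeque};

fn main() {
    // Hypothetical access graph: an edge A -> B means that someone on A can
    // reach B (shared credentials, an open service, a trust relation...).
    let mut access: HashMap<&str, Vec<&str>> = HashMap::new();
    access.insert("workstation", vec!["intranet-wiki", "file-server"]);
    access.insert("intranet-wiki", vec!["db-server"]);
    access.insert("file-server", vec!["backup-server"]);
    access.insert("db-server", vec!["backup-server"]);

    // Breadth-first walk from a single phished workstation: everything
    // reachable from here is what the attacker gets once inside the walls.
    let start = "workstation";
    let mut reachable = HashSet::new();
    let mut queue = VecDeque::from([start]);
    while let Some(node) = queue.pop_front() {
        if reachable.insert(node) {
            for &next in access.get(node).into_iter().flatten() {
                queue.push_back(next);
            }
        }
    }

    println!("compromised entry point: {start}");
    println!("reachable from it: {reachable:?}");
}

A per-host checklist would declare each of these machines individually "behind the firewall"; the graph view shows how the individual accesses compose into a path from one phished laptop all the way to the backups, which is exactly the composition the article warns about.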
Guests

Smalltalk in The Cloud

Geoffroy Couprie is a consultant in software security and an independent developer. After testing…

Guests