This year I visited Brussels with some colleagues to attend the FOSDEM conference (https://fosdem.org) for the first time.
Conference and Organisation
The conference takes place at the Université Libre de Bruxelles (ULB), a campus of somewhat run-down buildings, and can be visited without a ticket or fee. There are many, many talks over two days, averaging about 30 minutes each and held in 29 rooms in parallel. The rooms are rather small and the conference is very well attended, so sometimes it's not possible to get into a room in time, but there is a video stream and recording of every talk, and the Wi-Fi handles them pretty well. There are also a lot of friendly, helpful organizers and volunteers, so if you are awake enough to handle the crowds, the conference becomes quite easy. Hint: even though Belgian beer is very good, that does not mean you can drink more of it without consequences.
Linux and Memory Management
Linux memory management is a topic that comes up very often, especially when dividing physical memory between different virtual machines. There is also a lot of superficial knowledge about it, and people often speak of memory as if it were a few bytes which are either full or empty, and that's it. Well, the reality is far more complex, and I have been struggling for a few years now to get a better understanding of how one can measure, analyze and configure memory usage on Linux. So I was quite happy to see the talk by Chris Down, who told us about the tools and techniques Facebook uses to manage memory at scale. https://fosdem.org/2020/schedule/event/containers_memory_management/
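One concrete example of why the "full or empty" view fails: the kernel's own MemFree counts only completely unused pages, while MemAvailable estimates what could actually be reclaimed (page cache, reclaimable slabs) for new allocations. A minimal sketch, using illustrative sample data in /proc/meminfo format so it runs anywhere:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of values in KiB."""
    info = {}
    for line in text.strip().splitlines():
        key, rest = line.split(":", 1)
        info[key] = int(rest.split()[0])  # first token after the colon is the value in kB
    return info

# Illustrative numbers, not real measurements
SAMPLE = """\
MemTotal:       16384000 kB
MemFree:          512000 kB
MemAvailable:    9216000 kB
Cached:          8192000 kB
"""

info = parse_meminfo(SAMPLE)
# On a busy box MemFree is tiny, yet most of the cache is reclaimable:
print(info["MemAvailable"] - info["MemFree"])  # the kB a naive "free memory" view misses
```

On a real Linux host you would feed it `open("/proc/meminfo").read()` instead of the sample string.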
Strongly related is the question "to swap or not to swap", which I have discussed multiple times in the last few years, always with different outcomes. Chris wrote a long article about swap and why it is still needed on modern systems. https://chrisdown.name/2018/01/02/in-defence-of-swap.html / bit.ly/whyswap
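The practical takeaway from that debate is usually not to disable swap, but to keep it and tune how eagerly the kernel reclaims anonymous pages. As an illustrative sketch (the values are examples, not recommendations from the article):

```ini
# /etc/sysctl.d/99-swap.conf -- illustrative values only
# Keep swap enabled, but bias reclaim towards the page cache
# rather than swapping out anonymous memory early.
vm.swappiness = 10
# Leave the kernel's default heuristic overcommit in place.
vm.overcommit_memory = 0
```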
So I have to say that I still have a lot to learn about memory management, and I am absolutely sure that this knowledge can become crucial in analysing and preventing incidents. Every time I dive into this topic I realise that memory is not a barrel which simply fills up, but a managed system with complex and interdependent rules. This system must be managed as such, and therefore understood, to keep it working correctly. By the time the OOM killer runs and destroys a process, the battle is already lost: the system was mismanaged long before.
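Modern kernels actually expose an early-warning signal long before the OOM killer acts: pressure stall information (PSI) in /proc/pressure/memory, where the "some" line reports the share of wall time at least one task was stalled waiting on memory. A small sketch parsing that line format, using a sample string so it runs on any machine:

```python
def parse_psi(line):
    """Turn a PSI line like 'some avg10=0.34 avg60=0.12 avg300=0.05 total=123456'
    into its kind ('some'/'full') and a dict of float values."""
    kind, *fields = line.split()
    values = {k: float(v) for k, v in (f.split("=") for f in fields)}
    return kind, values

# Illustrative sample in the format the kernel writes to /proc/pressure/memory
kind, v = parse_psi("some avg10=0.34 avg60=0.12 avg300=0.05 total=123456")
print(kind, v["avg10"])  # a rising avg10 is a cue to act before the OOM killer does
```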
Virtualization and Systemd
So we come to the question of how we can ensure that a computer system is managed correctly and its resources are shared wisely. I think we can all agree that the days when a Linux system was a kernel plus a pile of shell scripts starting manually configured applications are over. A modern system consists of a lot of management and monitoring software, and of course it is virtually divided into pieces which do different kinds of work. In this context I want to highlight systemd one more time, because there is still a lot for me to learn about it. Systemd is capable of far more than just replacing SysV init. With systemd, a Linux system is aware that it is not just an OS running on a computer, but a hierarchy of subsystems (devices, processes, services, namespaces, containers). In contrast to Docker and Vagrant, systemd ships with most modern Linux distributions and is highly integrated into the OS, and an operator can use it as a modular system to jail applications, make them react to hardware changes, or manage their resources (see also Linux and Memory Management). I attended a talk about systemd's security features and its ability to abstract the kernel and limit resources and syscalls:
- https://github.com/keszybz/systemd-security-talk/blob/master/jesie%C5%84-systemd-security.pdf — the slides from the talk, which also recommend further documentation
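To make the "modular jail" idea concrete, here is an illustrative hardening drop-in for a hypothetical service (my own example, not taken from the talk), combining cgroup resource limits with sandboxing and syscall filtering:

```ini
# /etc/systemd/system/myapp.service.d/hardening.conf -- illustrative example
[Service]
# Resource limits via cgroups (ties in with the memory management topic above)
MemoryMax=512M
TasksMax=128
# Sandboxing: read-only OS, private /tmp, no privilege escalation
ProtectSystem=strict
PrivateTmp=yes
NoNewPrivileges=yes
# Restrict the service to a curated set of syscalls
SystemCallFilter=@system-service
```

After dropping this in, `systemctl daemon-reload` and a service restart apply the limits without touching the application itself.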
I think of systemd as a well-designed and well-documented foundation for a modern Linux system, one which finally enables us to configure the system from the kernel up to the application with modules that know about each other, all in the same style. And eventually we can get rid of wobbly, half-baked solutions like VirtuozzoLinux or proprietary appliance systems.
Micro- and Unikernels
On Sunday at FOSDEM I wanted to look at some future stuff (beyond Kubernetes). In the autumn of 2018 I attended the MirageOS hack retreat in Marrakesh, Morocco (mirageos.io). Because I already thought of Kubernetes and Docker as overcomplex, bloated, feature-creep software, I really enjoyed the radical approach of compiling only the code that is needed into a unikernel, for a smaller footprint, more speed and better security. Think of it as a statically linked application which comes as a bootable image. I was happy to see a half-day micro-/unikernel track at FOSDEM and realized that there are a lot of projects in this area. I especially want to highlight the two following talks:
- https://fosdem.org/2020/schedule/event/uk_hipperos/ - a real-time, multicore, hierarchical kernel for embedded systems
- https://fosdem.org/2020/schedule/event/uk_unicraft/ - a toolchain to build various unikernels
Unikraft definitely looks like a good starting point for getting into unikernels, because it supports a lot of common tools and languages (MirageOS was kind of hard for me because it is written in OCaml). Anyhow, I believe that unikernels will be the next big thing once Kubernetes and other cloud approaches become too complicated to maintain and, because of the millions of lines of code they waste on abstraction, too slow and too inflexible.
So in the end this FOSDEM visit encouraged me in my strategy to
- learn more about kernel code to understand why a computer behaves the way it does
- use Linux systems as a cluster of software components which are aware of and integrable into each other
- build large-scale applications with the smallest (carbon) footprint possible
Clouds are made of tiny water drops, not fuel tanks.