• 2 Posts
  • 18 Comments
Joined 10 months ago
Cake day: November 15th, 2023

  • The biggest items on the graph are all out-of-bounds accesses, use-after-free bugs and overflows. It is undeniable that memory-safe languages help reduce vulnerabilities; we have known for decades that memory corruption vulnerabilities are both the most common and the most severe ones in programs written in memory-unsafe languages.

    Unsafe Rust also does not turn off every safety feature, and it is much better to have clearly highlighted and isolated pieces of unsafe code, which can be reviewed and tested more easily, than to have the entire codebase suffer from those problems.
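    As a minimal sketch of that isolation (the function and values are made up for the example), an unsafe block can be hidden behind a small safe API, so reviewers only need to audit a few lines instead of the whole program:

```rust
/// A safe wrapper around raw-pointer arithmetic: the only code that needs
/// careful review is the small `unsafe` block, and the bounds check above it
/// is what justifies the pointer access.
fn sum_first_n(values: &[u64], n: usize) -> Option<u64> {
    if n > values.len() {
        return None; // bounds are enforced here, in safe code
    }
    let mut total = 0u64;
    let ptr = values.as_ptr();
    for i in 0..n {
        // SAFETY: `i < n <= values.len()`, so `ptr.add(i)` stays in bounds.
        total += unsafe { *ptr.add(i) };
    }
    Some(total)
}

fn main() {
    let data = [1u64, 2, 3, 4];
    assert_eq!(sum_first_n(&data, 3), Some(6));
    assert_eq!(sum_first_n(&data, 10), None);
}
```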

    I don’t think there is a debate here. Rewriting is a huge effort, but it is a fact that C is prone to memory corruption vulnerabilities and that memory-safe languages are better in that regard.



  • Comfort is the main reason, I suppose. If I mess up the WireGuard config, I need to go to the KVM console even just to debug the tunnel. It also means that if I am somewhere else and need to SSH into the box, I can’t just plug in my Yubikey and SSH from there. It’s a rare occurrence, but still…

    Ultimately I do understand both points of view. The thing is, SSH bots pose no threat once the bare minimum of SSH hardening has been done. Their resource consumption is negligible, so they have no real impact.

    To me the tradeoff is a slight inconvenience vs. a slightly bigger attack surface (in case of CVEs). Ultimately everyone can decide which compromise is acceptable for them, but I would say the choice is not really a big one.


  • Hey, the short answer is yes, you can.

    I would elaborate a little more:

    • First, you have the problem of sourcing the data. In essence, Crowdsec won’t be able to fetch those logs for you dynamically, but it can read them from a file (you can use a quick-and-dirty solution like a sidecar deployment) or from a stream. You can deploy Crowdsec in multiple modes, and you can have many instances that talk to each other. You can also simply have some process tail the pod logs and either send them to a file Crowdsec has access to or serve them as a stream (see https://doc.crowdsec.net/docs/data_sources/intro).
    • The above means that it doesn’t really matter whether you run Crowdsec inside your cluster (it does have a Helm chart) or on the host. Ultimately, all that matters is that Crowdsec has access to your pods’ logs (for example, the logs of your ingress controller).
    • The next piece is the remediation component. What do you want Crowdsec to do once it detects bad IPs? If you just want to add IPs to the firewall, then it might make more sense to run it on the host(s) you use for ingress. If you want to add the IPs to network policies, you can do that too, but you need to develop your own remediation component. I am planning to write a remediation component that will add the IPs to the Hetzner firewall (some other systems are already supported); this would be a way to block the IPs outside your cluster. For the nginx ingress controller there is already a pre-made remediation component.

    In practice I personally would choose a simple setup where the interesting logs are just forwarded (in syslog format, for example) to a single Crowdsec instance. If you have ingress from a single node, I’d run it on the host and ban via the firewall; if you have multiple ingress nodes, I would run it inside the cluster and ban via a load balancer/cloud firewall/whatever you have in front.
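    As a rough sketch of that forwarding idea (the log path, target address and syslog envelope here are all made-up assumptions, not part of any Crowdsec tooling), a tiny sidecar process could tail a pod log file and push each new line to the single Crowdsec box over UDP:

```rust
use std::fs::File;
use std::io::{BufRead, BufReader, Seek, SeekFrom};
use std::net::UdpSocket;
use std::thread::sleep;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // Hypothetical paths/addresses: the ingress controller log being tailed
    // and the machine where the single Crowdsec instance receives syslog.
    let log_path = "/var/log/pods/ingress-nginx/controller.log";
    let syslog_target = "10.0.0.10:514";

    let socket = UdpSocket::bind("0.0.0.0:0")?;
    let mut reader = BufReader::new(File::open(log_path)?);
    // Start from the end of the file so only new lines are forwarded.
    reader.seek(SeekFrom::End(0))?;

    let mut line = String::new();
    loop {
        line.clear();
        let n = reader.read_line(&mut line)?;
        if n == 0 {
            // No new data yet; poll again shortly (inotify and log rotation
            // handling would be needed for anything beyond a sketch).
            sleep(Duration::from_millis(500));
            continue;
        }
        // Minimal RFC 3164-style envelope; <134> = facility local0, severity info.
        let msg = format!("<134>ingress-forwarder: {}", line.trim_end());
        socket.send_to(msg.as_bytes(), syslog_target)?;
    }
}
```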

    In essence, I would spend some time thinking about your preferences. It might take a little while to make the setup clean, but I think you have plenty of flexibility to do what you prefer. Let me know if you want to bounce around some more ideas!


  • Yeah, I know (I mentioned it myself in the post), but realistically there is not much you can do besides upgrading. Unattended upgrades kick in once a day, so you will install the security patches ASAP. There are also virtual patches (Crowdsec has one for that CVE), but they might not be very effective.

    I argue that VPN software presents a smaller attack surface, but the problem (CVEs) still exists for everything you expose.






  • Hypervisors also get escape vulnerabilities every now and then. I would say that on a realistic scale of escape difficulty, a good container (whether it uses Docker or something else doesn’t matter) is a good security boundary.

    If this is not the case, I wonder what your scale extremes are.

    A good container has very little attack surface, since it can have almost no code or tools available, a read-only fs, no user privileges or capabilities whatsoever, and possibly even a syscall filter. Sure, the kernel is shared, but then the only alternative is to split that per application (VM-like), and you just move the problem to hypervisors.

    In the context of the question asked, I think the gains from reducing the attack surface are completely outweighed by the loss of functionality and the waste of resources.




  • Fair question. What I meant is that suggesting that would have made the whole post 10 lines long and not worth writing, so I avoided suggestions that completely change the threat model.

    It’s not that such a choice is useless or gives a bad security posture (although you might have concerns about a monopoly gatekeeping the internet, privacy issues with TLS traffic inspection, etc.); on the contrary, it makes everything I have written about here redundant (and provides more on top, like DDoS protection), because you are outsourcing the security controls.







  • but that also shows that most modern software is poorly written

    Does it? I mean, this is especially annoying with old software, maybe dynamically linked, or PHP, or stuff like that. Modern toolchains (Go, Rust) don’t actually even have this problem. Dependencies are annoying in general; I don’t think it’s a property of modern software.

    Yes, that’s exactly my point. There are many options, yet people stick with Docker and DockerHub (which is everything but open).

    Who are these people? There are tons of registries that people use: GitHub has its own, quay.io, etc. You can also simply publish Dockerfiles and let people build the images themselves. Of course Docker has the edge because it was the first mainstream tool, and it’s still a great choice for single-machine deployments, but it’s far from the only one used. Kubernetes dropped Docker as the default runtime years ago, for example… who are you referring to?

    Yes… maybe we just need some automation/orchestration tool for that. This is like saying that it’s way too hard to download the rootfs of some distro, unpack it and then use unshare to launch a shell in an isolated namespace… Docker, as you said, provides a convenient API, but that doesn’t mean we can’t do the same for systemd.

    But systemd also uses unshare, chroot, etc. They are at the same level of abstraction. Docker (and container runtimes in general) are simply specialized tools, while systemd is not. Why wouldn’t I use a tool that is meant for this when it’s available? I suppose bubblewrap (used by Flatpak) does something similar too, and I am sure there are more.
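    To make that concrete, here is a minimal sketch (using the nix crate, run as root, with a made-up rootfs path) of the kind of primitives both tools build on: unshare into new namespaces, chroot into an unpacked rootfs, and launch a shell. Real runtimes layer seccomp filters, capability drops and cgroups on top of this.

```rust
use nix::sched::{unshare, CloneFlags};
use nix::unistd::{chdir, chroot};
use std::process::Command;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // New mount and UTS namespaces for this process (requires root, or a
    // user namespace set up beforehand).
    unshare(CloneFlags::CLONE_NEWNS | CloneFlags::CLONE_NEWUTS)?;

    // Hypothetical path to a previously unpacked distro rootfs.
    chroot("/srv/rootfs/alpine")?;
    chdir("/")?;

    // Launch a shell "inside the container"; a container runtime would also
    // apply a seccomp profile, drop capabilities and assign a cgroup here.
    Command::new("/bin/sh").status()?;
    Ok(())
}
```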

    Completely proprietary… like QEMU/libvirt? :P

    Right, because organizations generally run QEMU, not VMware, Nutanix and a handful of other proprietary platforms… :)


  • Most of the pro-Docker arguments go around security

    Actually, Docker and the success of containers are mostly due to the ease of shipping code that carries its own dependencies and can be run anywhere. Security is a side effect and definitely not the reason containers took off.

    1) systemd can provide as much isolation as docker containers and 2) there are other container solutions that are at least as safe as Docker and nobody cares about them.

    Yes, and it’s much harder to achieve the same. In systemd you need to use 30 different options to get what containers give you almost instantly and with much less hassle. I gave an example on my blog where I decided to run blocky in systemd rather than in Docker. It’s just less convenient and accessible, harder to debug, and it also relies on each individual user to do it, while with containers a lot gets packed into the image and is therefore harder to mess up.

    Docker isn’t totally proprietary

    There are many container runtimes (CRI-O, Podman, Mirantis, containerd, etc.). Docker is just a convenient API; containers are fully implemented with native Linux features (namespaces, seccomp, capabilities, cgroups), and images follow an open standard (OCI).

    I will avoid commenting on what looks like a rant, but I simply want to remind you that containers are the successor of VMs (“virtualize everything!”), platforms that were completely proprietary and in the hands of a handful of vendors, while containers use only native OS features and are therefore a step towards openness.