I got hacked: My Hetzner server started mining Monero

(blog.jakesaunders.dev)

331 points | by jakelsaunders94 11 hours ago

46 comments

  • 3np 8 hours ago
    > I also enabled UFW (which I should have done ages ago)

    I disrecommend UFW.

    firewalld is a much better pick in current year and will not grow unmaintainable the way UFW rules can.

        firewall-cmd --set-default-zone=block
        firewall-cmd --permanent --zone=block --add-service=ssh
        firewall-cmd --permanent --zone=block --add-service=https
        firewall-cmd --permanent --zone=block --add-port=80/tcp
        firewall-cmd --reload
    
    Configuration is backed by xml files in /etc/firewalld and /usr/lib/firewalld instead of the brittle pile of sticks that is the ufw rules files. Use the nftables backend unless you have your own reasons for needing legacy iptables.

    Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. Depending on your configuration, those firewall rules in OP may not actually do anything to prevent docker from opening incoming ports.

    Newer versions of firewalld give an easy way to configure this via StrictForwardPorts=yes in /etc/firewalld/firewalld.conf.
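
    That is, in /etc/firewalld/firewalld.conf (a sketch - the option needs a firewalld restart to take effect):

        # /etc/firewalld/firewalld.conf
        StrictForwardPorts=yes

        systemctl restart firewalld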

    • dizhn 1 hour ago
      If you can, do not expose ports like "8080:8080"; do "192.168.0.1:8080:8080" instead so the port is bound to a private IP. Then use any old method to expose only what you want to the world.

      In my own use I have 10.0.10.11 on the VM that hosts my docker stuff. It doesn't even have its own public IP, meaning I could actually expose to 0.0.0.0 if I wanted to, but things might change in the future, so it's a precaution. That IP is only accessible via wireguard and by the other machines that share the same subnet, so reverse proxying with caddy on a public IP is super easy.
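
      With plain docker run, the difference is just the bind address ("myapp" being a placeholder image):

          docker run -d -p 8080:8080 myapp             # listens on every interface
          docker run -d -p 10.0.10.11:8080:8080 myapp  # bound to the private IP only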

      • zwnow 19 minutes ago
        Yup the regular "8080:8080" bind resulted in a ransom note in my database on day 1. Bound it to localhost only now.
    • peanut-walrus 15 minutes ago
      Personally I find just using nftables.conf straightforward enough that I don't really understand the need for anything additional. With iptables, it was painful, but iptables has been deprecated for a while now.
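
      For illustration, a minimal /etc/nftables.conf in that spirit (a sketch - adjust ports to taste):

          #!/usr/sbin/nft -f
          flush ruleset
          table inet filter {
            chain input {
              type filter hook input priority 0; policy drop;
              ct state established,related accept
              iif "lo" accept
              tcp dport { 22, 80, 443 } accept
            }
          }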
    • exceptione 8 hours ago

        > Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. 
      
      Like I said in another comment, drop Docker, install podman.
    • gus_ 7 hours ago
      It doesn't matter what netfilter frontend you use if you allow outbound connections from any binary.

      In order to stop these attacks, restrict outbound connections from unknown / not allowed binaries.

      This kind of malware in particular requires outbound connections to the mining pools. Others download scripts or binaries from remote servers, or try to communicate with their C2 servers.

      On top of that, removing exec permissions from /tmp, /var/tmp and /dev/shm is also useful.
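
      E.g. via /etc/fstab (a sketch - note that some package managers expect exec in /tmp during upgrades):

          tmpfs  /tmp      tmpfs  defaults,noexec,nosuid,nodev,size=1g  0 0
          tmpfs  /dev/shm  tmpfs  defaults,noexec,nosuid,nodev          0 0
          # /var/tmp is commonly handled with a noexec bind mount of /tmp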

      • 3abiton 4 hours ago
        Is there an automated way of doing this?
        • 3np 2 hours ago
          Two paths:

          - Configuration management (ansible, salt, chef, puppet)

          - Preconfigured images (NixOS, packer, Guix, atomic stuff)

          For a one-off: pssh

    • PeterStuer 25 minutes ago
      Hetzner has a free firewall service outside of your machine. You can use that as the first line of defence.
    • sph 1 hour ago
      The problem with firewalld is that it has the worst UX of any program I know. Completely unintuitive options, the program itself doesn’t provide any useful help or hints if you get anything wrong, and the documentation is so awful you have to consult the Red Hat manuals, which have thankfully been written for those companies that pay thousands per month in support.

      It’s not like iptables was any better, but it was more intuitive because it spoke about IPs and ports, not high-level arbitrary constructs such as zones and services defined in some XML file. And since firewalld uses iptables/nftables underneath, I wonder why I need a worse, leaky abstraction on top of what I already know.

      I truly hate firewalld.

      • bingo-bongo 39 minutes ago
        Coming from FreeBSD and pf, all Linux firewalls I’ve tried feel clunky _at best_ UX-wise.

        I’d love a Linux firewall configured with a sane config file and I think BSD really nailed it. It’s easy to configure and still human readable, even for more advanced firewall gateway setups with many interfaces/zones.

        I have no doubt that Linux can do all the same stuff feature-wise, but oh god the UX :/
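
        For anyone who hasn't seen it, a tiny pf.conf reads roughly like this ("em0" being a placeholder interface):

            ext_if = "em0"
            set skip on lo
            block in all
            pass out all
            pass in on $ext_if proto tcp to port { 22 80 443 }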

    • Ey7NFZ3P0nzAe 1 hour ago
      You might be interested in ufw-docker: https://github.com/chaifeng/ufw-docker
    • rglover 7 hours ago
      One of those rare HN comments that's just pure gold.
    • skirge 1 hour ago
      Also, docker bypasses ufw.
    • denkmoon 8 hours ago
      I’ll just mention Foomuuri here. It's a bit of a spiritual successor to Shorewall and has firewalld emulation to work with tools compatible with firewalld.
      • 3np 8 hours ago
        Thanks! Would be cool to have it packaged for alpine since firewalld requires D-Bus. There is awall but that's still on iptables and IMO a bit clunky to set up.
      • egberts1 4 hours ago
        Foomuuri is ALMOST there.

        I mean, some payload-over-payload encapsulations like GRE/VXLAN/VLAN or IPsec still need to be written in raw nft when using Foomuuri, but it works!

        But I love the Shorewall approach, and its configuration gracefully encapsulates the Shorewall mechanics.

        Disclaimer: I maintain the vim-syntax-nftables syntax highlighter repo on GitHub.

    • lloydatkinson 8 hours ago
      > Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. Depending on your configuration, those firewall rules in OP may not actually do anything to prevent docker from opening incoming ports.

      This sounds like great news. I followed some of the open issues about this on GitHub and it never really got a satisfactory fix. I found some previous threads on this "StrictForwardPorts": https://news.ycombinator.com/item?id=42603136.

  • esaym 2 hours ago
    So this is part of the "React2Shell" CVE-2025-55182 issue? I find it interesting that this seems to get so little publicity - almost like the issue is normal or expected. And it looks like the affected versions go back a little over a year, so if you've deployed anything with Next.js over the last 12 months, your web app is now probably part of a million-node botnet. And everyone's advice is just "use docker" or "install a firewall".

    I'm not even sure what to say, or think, or even how to feel about the frontend ecosystem at this point. I've been debating leaving the whole "web app" ecosystem as my main employment venture and applying to some places requiring C++. C++ seems much easier to understand than whatever the latest frontend fad is. /rant

    • h33t-l4x0r 36 minutes ago
      I'm hearing about it like crazy because I deployed around 100 Next frontends in that time period. I didn't use server components though so I'm not affected.
      • mnahkies 34 minutes ago
        My understanding of the issue is that even if you don't use server components, you're still vulnerable.

        Unless you're running a static HTML export - e.g. not running the nextjs server, but serving through nginx or similar.

  • tgtweak 9 hours ago
    Just a note - you can very much limit CPU usage on docker containers by setting --cpus="0.5" (or cpus: 0.5 in docker compose) if you expect it to be a very lightweight container. This isolation can help prevent one rowdy container from hitting the rest of the system, regardless of whether it's crypto-mining malware, a DDoS attempt or a misbehaving service/software.
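
    For example (the limits here are illustrative and "myapp" is a placeholder):

        # hard-cap the container at half a core, 512 MB of RAM and 256 pids
        docker run -d --cpus="0.5" --memory="512m" --pids-limit=256 myapp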
    • tracker1 9 hours ago
      Another option is running containers in read-only mode, assuming they support this configuration... it will minimize a lot of potential attack surface.
      • 3eb7988a1663 3 hours ago
        Never looked into this. I would expect the majority of images would fail in this configuration. Or am I unduly pessimistic?
        • hxtk 2 hours ago
          Many fail if you do it without any additional configuration. In Kubernetes you can mostly get around it by mounting `emptyDir` volumes to the specific directories that need to be writable, `/tmp` being a common culprit. If they need to be writable and have content that exists in the base image, you'd usually mount an emptyDir to `/tmp` and copy the content into it in an `initContainer`, then mount the same `emptyDir` volume to the original location in the runtime container.

          Unfortunately, there is no way to specify those `emptyDir` volumes as `noexec` [1].

          I think the docker equivalent is `--tmpfs` for the `emptyDir` volumes.

          1: https://github.com/kubernetes/kubernetes/issues/48912
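
          With plain docker the equivalent ends up as something like ("myapp" being a placeholder image):

              docker run --read-only --tmpfs /tmp:rw,noexec,nosuid,size=64m myapp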

        • flowerthoughts 1 hour ago
          Readonly and rootless are my two requirements for Docker containers. Most images can't run readonly because they try to create a user in some startup script. Since I want my UIDs unique to isolate mounted directories, this is meaningless. I end up having to wrap or copy Dockerfiles to make them behave reasonably.

          Having such a nice layered build system with mountpoints, I'm amazed Docker made readonly an afterthought.

          • subscribed 14 minutes ago
            I like steering docker runs with docker-compose, especially with .env files - easy to store in repositories, easy to customise and have sane defaults.
        • s_ting765 2 hours ago
          Depends on specific app use case. Nginx doesn't work with it but valkey will.
    • freedomben 9 hours ago
      This is true, but it's also easy to set at one point and then later introduce a bursty endpoint that ends up throttled unnecessarily. Always a good idea to be familiar with your app's performance profile but it can be easy to let that get away from you.
    • jakelsaunders94 9 hours ago
      This is a great shout actually. Thanks for pointing it out!
    • fragmede 9 hours ago
      The other thing to note is that docker is, for the most part, stateless. So if you're running something that has to deal with questionable user input (images and video, or more importantly PDFs), a good approach is to stick it on its own VM, cycle the docker container every hour and the VM every 12, and then still be worried about it getting hacked and leaking secrets.
      • Koffiepoeder 57 minutes ago
        If I can get in once, I can do it again an hour later. I'd be inclined to believe that dumb recycling is not very effective against a persistent attacker.
      • tgtweak 2 hours ago
        Most of this is mitigated by running docker in an LXC container (like proxmox does), which grants a lot more isolation than docker on its own - closer in nature to running separate VMs.
    • miladyincontrol 8 hours ago
      Soft and hard memory limits are worth considering too, regardless of container method.
  • danparsonson 9 hours ago
    No firewall! Wow that's brave. Hetzner will let you configure one that runs outside of the box so you might want to add that too, as part of your defense in depth - that will cover you if you make a mistake with ufw. Personally I keep SSH firewalled only to my home address in this way; if I'm out and about and need access, I can just log into Hetzner's website and change it temporarily.
    • danw1979 19 minutes ago
      The only time I have ever had a machine compromised in 30 years of running Linux is when I ran something exposed to the internet on a well known port.

      I know port scanners are a thing but the act of using non-default ports seems unreasonably effective at preventing most security problems.

      • jraph 13 minutes ago
        I do this too, but I think it should only be a defense-in-depth thing; you still need the other measures.
    • tete 7 hours ago
      Firewalls in the majority of cases don't get you much. Yes, they're a last line of defense if you do something really stupid and don't even know where or what your services are configured to listen on, but if you don't, the difference between running a firewall and not is minuscule.

      There are way more important things, like actually knowing that you are running software with a widely known RCE - software that, it seems, doesn't even use established mechanisms to sandbox itself.

      The way the author describes it, docker being the savior appears to be sheer luck.

      • danparsonson 3 hours ago
        The author mentioned they had other services exposed to the internet (Postgres, RabbitMQ) which increases their attack surface area. There may be vulnerabilities or misconfigurations in those services for example.

        Good security is layered.

        • seszett 2 hours ago
          But if they have to be exposed then a firewall won't help, and if they don't have to be exposed to the internet then a firewall isn't needed either, just configure them not to listen on non-local interfaces.
          • spoaceman7777 1 hour ago
            This sounds like an extremely effective foot gun.

            Just use a firewall.

            • seszett 1 hour ago
              I'm not sure what you mean, what sounds dangerous to me is not caring about what services are listening to on a server.

              The firewall is there as a safeguard in case a service is temporarily misconfigured, it should certainly not be the only thing standing between your services and the internet.

    • Nextgrid 9 hours ago
      But the firewall wouldn't have saved them if they're running a public web service or need to interact with external services.

      I guess you can have the appserver fully firewalled and have another bastion host acting as an HTTP proxy, both for inbound as well as outbound connections. But it's not trivial to set up especially for the outbound scenario.

      • danparsonson 8 hours ago
        No you're right, I didn't mean the firewall would have saved them, but just as a general point of advice. And yes a second VPS running opnSense or similar makes a nice cheap proxy and then you can firewall off the main server completely. Although that wouldn't have saved them either - they'd still need to forward HTTP/S to the main box.
        • Nextgrid 8 hours ago
          A firewall blocking outgoing connections (except those whitelisted through the proxy) would’ve likely prevented the download of the malware (as it’s usually done by using the RCE to call a curl/wget command rather than uploading the binary through the RCE) and/or its connection to the mining server.
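
            With ufw, the crude version of that looks something like this (proxy address and port are placeholders):

                ufw default deny outgoing
                ufw allow out to 203.0.113.10 port 3128 proto tcp  # the HTTP proxy
                ufw allow out 53                                   # DNS, if not proxied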
          • denkmoon 8 hours ago
            How many people do proper egress filtering though, even when running a firewall
          • drnick1 3 hours ago
            In practice, this is basically impossible to implement. As a user behind a firewall you normally expect to be able to open connections with any remote host.
            • metafunctor 21 minutes ago
              Not impossible at all with a policy-filtering HTTPS proxy. See https://laurikari.github.io/exfilguard/

              In this model, hosts don’t need any direct internet connectivity or access to public DNS. All outbound traffic is forced through the proxy, giving you full control over where each host is allowed to connect.

              It’s not painless: you must maintain a whitelist of allowed URLs and HTTP methods, distribute a trusted CA certificate, and ensure all software is configured to use the proxy.

    • dizhn 1 hour ago
      I have SSH blocked altogether and use wireguard to access the server. If something goes wrong I can always go to the dashboard and reenable SSH for my IP. But ultimately your setup is just as secure. Perhaps a tiny bit less convenient.
    • jwrallie 8 hours ago
      Password auth being enabled is also very brave. I don’t think fail2ban is necessary personally, but it’s popular enough that it always comes up.
    • figassis 3 hours ago
      Yup. All my servers are behind Tailscale. The only thing I expose is a load balancer that routes tcp (email) and http. That balancer is running docker, fully firewalled (incl. docker bypasses). Every server is behind Hetzner’s firewall in addition to the internal firewall.

      App servers run docker, with images that run a single executable (no os, no shell), strict cpu and memory limits. Most of my apps only require very limited temporary storage so usually no need to mount anything. So good luck executing anything in there.

      Way back in the day, I used to run Wordpress sites. They would get hacked monthly in every possible way. I learned so much, including the fact that often your app is your threat. With Wordpress, every plugin is a vector. Also, the ability to easily hop into an instance and rewrite running code (looking at you, scripting languages, incl. JS) is terrible. This motivated my move to Go. The code I compiled is what will run. Period.

    • 3abiton 4 hours ago
      Honestly fail2ban is amazing. I might do a write-up on the countless attempts on my servers.
      • dizhn 58 minutes ago
        The only way I've envisioned fail2ban being of any use at all is if you gather IPs from one server and apply them to your whole fleet, and I had it running like this for a while. Ultimately I decided that all it does is give you a cleaner log file, since by definition it's working on logs of attacks/attempts that did not succeed. We need to stop worrying about attempts we see in the logs and let software do its job.
  • V__ 10 hours ago
    > The Reddit post I’d seen earlier? That guy got completely owned because his container was running as root. The malware could: [...]

    Is that the case, though? My understanding was that even if I run a docker container as root and the container is 100% compromised, there would still need to be a vulnerability in docker for it to “attack” the host - or am I missing something?

    • d4mi3n 10 hours ago
      While this is true, the general security stance on this is: Docker is not a security boundary. You should not treat it like one. It will only give you _process level_ isolation. If you want something with better security guarantees, you can use a full VM (KVM/QEMU), something like gVisor[1] to limit the attack surface of a containerized process, or something like Firecracker[2] which is designed for multi-tenancy.

      The core of the problem here is that process isolation doesn't save you from whole classes of attack vectors or misconfigurations that open you up to nasty surprises. Docker is great, just don't think of it as a sandbox to run untrusted code.

      1. https://gvisor.dev/

      2. https://firecracker-microvm.github.io/

      • tgsovlerkhgsel 32 minutes ago
        I hear the "Docker is not a security boundary." mantra all the time, and IIRC it was the official stance of the Docker project a long time ago, but is this really true?

        Of course, if you have a kernel exploit you'd be able to break out (this is what gvisor mitigates to some extent), and nothing seems to really protect against rowhammer/memory-timing style attacks (but they don't seem to be commonly used). Beyond this, the main misconfigurations seem to be too-wide volume bindings (e.g. something that allows access to the docker control socket from inside the container, or an obviously stupid mount like mounting your root inside the container).

        Am I missing something?

      • socalgal2 9 hours ago
        That's a really good point... but I think 99% of docker users believe it is a sandbox and treat it as such.
        • freedomben 9 hours ago
          And not without cause. We've been pitching docker as a security improvement for well over a decade now. And it is a security improvement, just not as much as many evangelists implied.
          • fragmede 9 hours ago
            Must depend on who you've been talking to. Docker's not been pitched for security in the circles I run in, ever.
        • TacticalCoder 7 hours ago
          Not 99%. Many people run a hypervisor and then a VM just for Docker.

          Attacker now needs a Docker exploit and then a VM exploit before getting to the hypervisor (and, no, pwning the VM ain't the same as pwning the hypervisor).

          • windexh8er 2 hours ago
            Agreed - this is actually pretty common in the Proxmox realm of hosters. I segment container nodes using LXC, and in some specific cases I'll use a VM.

            Not only does it allow me to partition the host for workloads, but I also get security boundaries as well. While it may be a slight performance hit, the segmentation also makes more logical sense in the way I view the workloads. Finally, it's trivial to template and script, so it's very low maintenance and allows me to kill an LXC and just reprovision it if I need to make any significant changes. And I never need to migrate any data in this model (or very rarely).

          • briHass 4 hours ago
            'Double-bagging it' was what we called it in my day.
        • dist-epoch 9 hours ago
          it is a sandbox against unintentional attacks and mistakes (sudo rm -rf /)

          but will not stop serious malware

      • hsbauauvhabzb 9 hours ago
        Virtual machines are treated as a security boundary despite the fact that with enough R&D they are not. Hosting minecraft servers in virtual machines is fine, but not a great idea if they’re cohosted on a machine that has billions of dollars in crypto or military secrets.

        Docker is pretty much the same but supposedly more flimsy.

        Both have non-obvious configuration weaknesses that can lead to escapes.

        • hoppp 8 hours ago
          Yeah, but why would somebody co-host military secrets or billions of dollars? It's a bit of a stretch.
          • hsbauauvhabzb 8 hours ago
            I think you’re missing the point, which was that high-value targets adjacent to soft targets make escapes a legitimate target, but in low-value scenarios VM escapes aren’t worth the R&D.
            • z3t4 1 hour ago
              but if you can do it at scale it might still be worth it, like owning thousands of machines
    • michaelt 9 hours ago
      First, the attacker just wants to mine Monero with the CPU; they can do that inside the container.

      Second, even if your Docker container is configured properly, the attacker gets to call themselves root and talk to the kernel. It's a security boundary, sure, but it's not as battle-tested as the isolation of not being root, or the isolation between VMs.

      Third, in the stock configuration, processes inside a docker container can use loads of RAM (causing random things to get swapped to disk or OOM-killed), can consume lots of CPU, and can fill your disk up. If you consider denial-of-service an attack, there you are.

      Fourth, there are a bunch of settings that disable the security boundary, and a lot of guides online will tell you to use them. Doing something in Docker that needs to access hot-plugged webcams? Hmm, it's not working unless I set --privileged - oops, there goes the security boundary. Trying to attach a debugger while developing, so you set CAP_SYS_PTRACE? Bypasses the security boundary. Things like that.

    • cyphar 3 hours ago
      You really need to use user namespaces to get this kind of security protection -- running as root inside a container without user namespaces is not secure. Yes, breakouts often require some other bug or misconfiguration but the margin for error is non-existent (for instance, if you add CAP_SYS_PTRACE to your containers it is trivial to break out of them and container runtimes have no way of protecting against that). Almost all container breakouts in the past decade were blocked by user namespaces.

      Unfortunately, user namespaces are still not the default configuration with Docker (even though the core issues that made using them painful have long since been resolved).
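
      For reference, enabling them is a one-line change in /etc/docker/daemon.json followed by a dockerd restart (note it changes UID ownership semantics for existing volumes):

          {
            "userns-remap": "default"
          }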

    • easterncalculus 7 hours ago
      If the container is running in privileged mode, you can just talk through the docker socket to the daemon on the host, spawn a new container with direct access to the root filesystem, and then change anything you want as root.
      • CGamesPlay 4 hours ago
        Notably, if you run docker-in-docker, Docker is probably not a security boundary. Try this inside any dind container (especially devcontainers): docker run -it --rm --pid=host --privileged -v /:/mnt alpine sh

        I disagree with other commenters here that Docker is not a security boundary. It's a fine one, as long as you don't disable the boundary, which is as easy as running a container with `--privileged`. I wrote about secure alternatives for devcontainers here: https://cgamesplay.com/recipes/devcontainers/#docker-in-devc...

        • flaminHotSpeedo 1 hour ago
          Containers are never a security boundary. If you configure them correctly, avoid all the footguns, and pray that there's no container escape vulnerabilities that affect "correctly" configured containers then they can be a crude approximation of a security boundary that may be enough for your use case, but they aren't a suitable substitute for hardware backed virtualization.

          The only serious company that I'm aware of which doesn't understand that is Microsoft, and the reason I know that is because they've been embarrassed again and again by vulnerabilities that only exist because they run multitenant systems with only containers for isolation

    • Nextgrid 9 hours ago
      Container escapes exist. Now the question is whether the attacker has exploited it or not, and what the risk is.

      Are you holding millions of dollars in crypto/sensitive data? Better assume the machine and data is compromised and plan accordingly.

      Is this your toy server for some low-value things where nothing bad can happen besides a bit of embarrassment even if you do get hit by a container escape zero-day? You're probably fine.

      This attack is just a large-scale automated attack designed to mine cryptocurrency; it's unlikely any human ever actually logged into your server. So cleaning up the container is most likely fine.

    • ronsor 10 hours ago
      There would be, but a lot of docker containers are misconfigured or unnecessarily privileged, allowing for escape.

      Also, if you've been compromised, you may have a rootkit that hides itself from the filesystem, so you can't be sure of a file's existence through a simple `ls` or `stat`.

      • miladyincontrol 8 hours ago
        > but a lot of docker containers are misconfigured or unnecessarily privileged, allowing for escape

        Honestly, citation needed. Very rare unless you're literally giving the container access to write to /usr/bin or other binaries the host is running, the ability to reconfigure your entire /etc, access to sockets like docker's, or some other insane level of overreach I doubt even the least educated docker user would do.

        While of course they should be scoped properly, people act like some elusive 0-day container escape will get used on their minecraft server or personal blog that otherwise has sane mounts, non-admin capabilities, etc. You aren't that special.

        • cyphar 3 hours ago
          As a maintainer of runc (the runtime Docker uses), if you aren't using user namespaces (which is the case for the vast majority of users) I would consider your setup insecure.

          And a shocking number of tutorials recommend bind-mounting docker.sock into the container without any warning (some even tell you to mount it "ro" -- which is even funnier since that does nothing). I have a HN comment from ~8 years ago complaining about this.

        • fomine3 6 hours ago
          I've seen many articles with `-v /var/run/docker.sock:/var/run/docker.sock` without scary warning
    • Havoc 10 hours ago
      I think a root container can talk to the docker daemon and launch additional containers... with volume mounts of additional parts of the file system, etc. Not particularly confident about that one though.
      • minitech 10 hours ago
        Unintentional vulnerabilities in Docker and the kernel aside, it can only do that if it has access to the Docker API (usually through a bind mount of the Unix socket). Having access to the Docker API is equivalent to having root on the host.
        • czbond 9 hours ago
          Well $hit. I have been using Docker for installing NPM modules in interactive projects I was testing out. I believed Docker blocked access to the underlying host (my computer).

          Thanks for mentioning it - but now... how does one deal with this?

          • minitech 9 hours ago
            If you didn’t mount docker.sock or any directory above it (i.e. / or /run by default) or run your containers as --privileged, you’re probably fine with respect to this angle. I’d still recommend rootless containers under unprivileged users* or VMs for extra comfort. Qubes (https://www.qubes-os.org/) is good, even if it’s a little clunkier than it could be.

            * but if you’re used to bind-mounting, they’ll be a hassle

            Edit: This is by no means comprehensive, but I feel compelled to point it out specifically for some reason: remember not to mount .git writable, folks! Write access to .git is arbitrary code execution as whoever runs git.

          • 3np 9 hours ago
            As sibling mentioned, unless you or the runtime explicitly mount the docker socket, this particular scenario shouldn't affect you.

            You might still want to tighten things up. Just adding on the "rootless" part - running the container runtime as an unprivileged user on the host instead of root - you also want to run npm/node as unprivileged user inside the container. I still see many defaulting to running as root inside the container since that's the default of most images. OP touches on this.

            For rootless podman, this will run as a user with your current uid and map ownership of mounts/volumes:

                podman run -u "$(id -u)" --userns=keep-id <image>
    • trhway 8 hours ago
      >there still would need to be a vulnerability in docker for it to “attack” the host, or am I missing something?

      Not necessarily a vulnerability per se. A bridged adapter, for example, lets you do a lot - a few years ago there was a story about how a guy got root in a container and, because the container used a bridged adapter, was able to intercept traffic of account info updates on GCP.

    • TheRealPomax 10 hours ago
      Docker containers with root have rootish rights on the host machine too because the userid will just be 0 for both. So if you have, say, a bind mount that you play fast and loose with, the docker user can create 0777 files outside the docker container, and now we're almost done. Even worse if "just to make it work" someone runs the container with --privileged and then makes the terminal mistake of exposing that container to the internet.
      • V__ 10 hours ago
        Can you explain this a bit further? Wouldn't that 0777 file outside docker be still executed inside the container and not on the host?
        • necovek 9 hours ago
          I believe they meant you could create an executable that is accessible outside the container (maybe even as setuid root one), and depending on the path settings, it might be possible to get the user to run it on the host.

          Imagine naming this executable "ls" or "echo" and someone having "." in their path (which is why you shouldn't): as soon as they run "ls" in this directory, they've run compromised code.

          There are obviously other ways to get that executable to run on the host; this is just a simple example.

          • marwamc 9 hours ago
            Another example is they would enumerate your directories and find the names of common scripts and then overwrite your script. Or to be even sneakier, they can append their malicious code to an existing script in your filesystem. Now each time you run your script, their code piggybacks.

            OTOH, if I had written such a script for linux I'd be looking to grab the contents of $(hist) $(env) $(cat /etc/{group,passwd})... then enumerate /usr/bin/, /usr/local/bin/ and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually there.

            The $HOME/.{aws,docker,claude,ssh}

            Basically the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root access container.

            • tracker1 8 hours ago
              If your chosen development environment supports it, look into distroless or empty base containers, and run as --read-only if you can.

              Go and Rust tend to lend themselves to these more restrictive environments a bit better than other options.

    • Onavo 10 hours ago
      Either docker or a kernel level exploit. With non-VM containers, you are sharing a kernel.
  • kachapopopow 1 hour ago
    I find it interesting that the recent trend of moving to self-hosted solutions is sparking this rediscovery of security issues that come with self-hosting. One more time and it will be a cycle!
  • grekowalski 10 hours ago
    Recently, those Monero miners were installing themselves everywhere that had a vulnerable React 19. I had exactly the same problem.
    • tgsovlerkhgsel 28 minutes ago
      I love mining malware - it's reasonably visible and causes almost no damage. Essentially, it's like a bug bounty program that you don't have to manage, doesn't generate costly bullshit reports, and only costs you a few bucks of electricity when a vulnerability is found.

      If you have decent network- or process-level monitoring, you're likely to find it, whereas you might not notice the vulnerable software itself or some stealthier, more dangerous malware exploiting it.

    • qingcharles 9 hours ago
      I had to nuke my Oracle Cloud box that runs my Umami server. It got hit. It was a good excuse to upgrade versions and overhaul all my backup systems, etc. I lost a few hours of data while it was returning 500 errors.
  • croemer 7 hours ago
    Not proofread by a human. It claims more than once that the vulnerability was related to Puppeteer. Hallucination!

    "CVE-2025-66478 - Next.js/Puppeteer RCE)"

    • loloquwowndueo 7 hours ago
      TFA mentions it’s mostly a transcript of a Claude session literally in the first paragraph.
      • themafia 5 hours ago
        That was added as an edit. It does not cover the inaccuracies contained within. It should more realistically say "this article was generated by an LLM and may contain several errors which I didn't bother to find or correct."
      • croemer 1 hour ago
        That doesn't excuse publishing errors.
  • marwamc 8 hours ago
    Hahaha, OP could be in deep trouble depending on what types of creds/data they had in that container. I had replied to a child comment, but I figure it's best to reply to OP.

    From the root container, depending on volume mounts and capabilities granted to the container, they would enumerate the host directories and find the names of common scripts and then overwrite one such script. Or to be even sneakier, they can append their malicious code to an existing script in the host filesystem. Now each time you run your script, their code piggybacks.

    OTOH if I had written such a script for linux I'd be looking to grab the contents of $(hist) $(env) $(cat /etc/{group,passwd})... then enumerate /usr/bin/ /usr/local/bin/ and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually here. The $HOME/.{aws,docker,claude,ssh} Basically the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root access container.

    • cobertos 4 hours ago
      Luckily umami in docker is pretty compartmentalized. All the data lives in the DB, and the DB runs in another container. The biggest thing is the DB credentials. The default config requires no volume mounts, so no worries there. It runs unprivileged with no extra capabilities. IIRC the container doesn't even have bash; a few of the exploits that tried to run weren't able to because the scripts they ran needed bash.

      Deleting and remaking the container will blow away all state associated with it. So there isn't a whole lot to worry about after you do that.

    • jakelsaunders94 8 hours ago
      Nothing in that container luckily, just what Umami needed to run, so no creds at all. Thanks for the info though!
  • tgsovlerkhgsel 55 minutes ago
    Would "user root" without --privileged and excessive mounts have enabled a container escape, or just exposed additional attack surface that potentially could have allowed the attacker to escape if they had another exploit?
    • PlqnK 8 minutes ago
      They would need a vulnerability in containerd or the kernel to escape the sandbox and being root in the sandbox would give them more leeway to exploit that vulnerability.

      But if they do have a vulnerability and manage to escape the sandbox then they will be root on your host.

      Running your processes as an unprivileged user inside your containers reduces the possibility of escaping the sandbox; running the containers themselves as an unprivileged user (rootless podman or docker, for example) reduces the attack surface if they do manage to escape the sandbox.

  • heavyset_go 9 hours ago
    I wouldn't trust that boot image or storage again, I'd nuke it for peace of mind.

    That said, do you have an image of the box or a container image? I'm curious about it.

    • jakelsaunders94 9 hours ago
      Yeah, I did consider just killing it; I'm going to keep an eye on it for a few days with a gun to it, just in case.

      I was lucky in that my DB backups were working, so all my persistence was backed up to S3. I think I could stand up another one in an hour.

      Unfortunately I didn't keep an image no. I almost didn't have the foresight to investigate before yeeting the whole box into the sun!

      • muppetman 7 hours ago
        Enable connection tracking (if it's not already) and keep looking at the conntrack entries. That's a good way to spot random things doing naughty stuff.
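
        With conntrack-tools that's roughly:

            conntrack -L   # dump the current connection tracking table
            conntrack -E   # follow conntrack events live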
  • CGamesPlay 4 hours ago
    I took issue with this paragraph of the article, on account of several pieces of misinformation, presumably courtesy of Claude hallucinations:

    > Here’s the test. If /tmp/.XIN-unix/javae exists on my host, I’m fucked. If it doesn’t exist, then what I’m seeing is just Docker’s default behavior of showing container processes in the host’s ps output, but they’re actually isolated.

    1. Files of running programs can be deleted while the program is running. If the program were trying to hide itself, it would have deleted /tmp/.XIN-unix/javae after it started. The nonexistence of the file is not a reliable source of information for confirming that the container was not escaped.

    2. ps shows program-controlled command lines. Any program can change what gets displayed here, including the program name and arguments. If the program were trying to hide itself, it would change this to display `login -fp ubuntu` instead. This is not a reliable source of information for diagnosing problems.

    It is good to verify the systemd units and crontab, and since this malware is so obvious, it probably isn't doing these two hiding methods, but information-stealing malware might not be detected by these methods alone.

    Later, the article says "Write your own Dockerfiles" and gives one piece of useless advice (using USER root does not affect your container's security posture) and two pieces of good advice that don't have anything to do with writing your own Dockerfiles. "Write your own Dockerfiles" is not useful security advice.

    • 3np 2 hours ago
      > "Write your own Dockerfiles" is not useful security advice.

      I actually think it is. It makes you more intimate with the application and how it runs, and can mitigate one particular supply-chain security vector.

      Agreeing that the reasoning is confused but that particular advice is still good I think.

  • wnevets 10 hours ago
    Sure does seem like the primary outcome of cryptocurrencies being released onto the world has been criminals making money.
    • BLKNSLVR 8 hours ago
      Criminals and the porn industry are almost invariably early adopters of new technologies. For better or worse their use-cases are proof-of-concepts that get expanded and built on, if successful, by more legitimate industries.

      Re: the Internet.

      Re: Peer-to-peer.

      Re: Video streaming.

      Re: AI.

      • lapetitejort 8 hours ago
        What is the average length of time for new tech to escape porn and crime and integrate into real applications? Longer than 15 years?
        • BLKNSLVR 7 hours ago
          Some kind of function of how quickly regulation comes to the technology.
    • nrhrjrjrjtntbt 9 hours ago
      And fast malware detection.
    • dylan604 9 hours ago
      Is that really a surprise though?
      • venturecruelty 9 hours ago
        Not for anyone who doesn't have a financial stake in said fraud, no.
  • gppmad 29 minutes ago
    Well written blog post. Well done, I've learned something new.
  • p0w3n3d 1 hour ago

      $ sudo ufw default deny incoming
      $ sudo ufw default allow outgoing
      $ sudo ufw allow ssh
      $ sudo ufw allow 80/tcp
      $ sudo ufw allow 443/tcp
      $ sudo ufw enable
    
    As a user of iptables, this order makes me anxious. I locked myself out of servers many times by first blocking and then adding exceptions. I can see that this is different here, as the last command commits the rules...
  • seymon 9 hours ago
    What's considered best practice nowadays (in terms of security) for running self-hosted workloads with containers? Daemonless, unprivileged podman containers?

    And maybe updating container images with a mechanism similar to renovate with "minimumReleaseTime=7days" or something similar!?

    • elric 1 hour ago
      As always: never run containers as root. Never expose ports to the internet unless needed. Never give containers outbound internet access. Run containers that you trust and understand, and not random garbage you find on the internet that ships with ancient vulnerabilities and a full suite of tools. Audit your containers, scan them for vulnerabilities, and nuke them from orbit on the regular.

      Easier said than done, I know.

      Podman makes it easier to be more secure by default than Docker. OpenShift does too, but that's probably taking things too far for a simple self hosted app.

    • movedx 8 hours ago
      You’ll set yourself up for success if you check the dependencies of anything you run, regardless of it being containerised. Use something like Snyk to scan containers and repositories for known exploits and see if anything stands out.

      Then you need to run things with as little privilege as possible. Sadly, Docker and containers in general are an anti-pattern here because they’re about convenience first, security second. So the OP should have run the containers as read-only with tight resource limits and ideally IP restrictions on access if it’s not a public service.

      Another thing you can do is use Tailscale, or something like it, to keep things behind a zero-trust, encrypted access model. Not suitable for public services, of course.

      And a whole host of other things.

  • spoaceman7777 1 hour ago
    > "No more exposed PostgreSQL ports, no more RabbitMQ ports open to the internet."

    Yikes. I would still recommend a server rebuild. That is _not_ a safe configuration in 2025, whatsoever. You are very likely to have a much better engineered persistent infection on that system.

    • microtonal 10 minutes ago
      Also, apparently they run an IoT platform for other users on the same host that can not only visualize sensors, but also trigger (mains-powered) devices.

      The right thing to do is to roll out a new server (you have a declarative configuration, right?), migrate pure data (or better, restore it from the latest backup), and take the attacked machine off the internet to do a full audit - both to learn what compromises there are for the future and to inform users of the IoT platform if their data has been breached. In some countries you are even required by law to report breaches. IANAL of course.

  • aborsy 2 hours ago
    If I’m not wrong, a Hetzner VM by default has no firewall enabled. If you are coming from providers with different default settings, that might bite you: containers that you thought were not open to the internet have been open all this time. Two firewalls failed here: docker bypassed ufw, and there was no external firewall either.

    You have to define a firewall policy and attach it to the VM.

  • hughw 7 hours ago
    You can run Docker Scout on one repo for free, and that would alert you that something was using Next.js and had that CVE. AWS ECR has pretty affordable scanning too: 9 cents/image and 1 cent/rescan. Continuous scanning even for these home projects might be worth it.

    [*] https://aws.amazon.com/inspector/pricing/

  • LelouBil 5 hours ago
    Something similar happened to me last year, it was with an unsecured user account accessible over ssh with password authentication, something like admin:admin that I forgot about.

    At least that's what I think happened because I never found out exactly how it was compromised.

    The miner was running as root, and its file was even hidden when I ran ls! So I didn't understand what was happening; it was only after restarting my VPS with a rescue image and mounting the root filesystem that I found out the file I was seeing in the process list did indeed exist.

  • elif 5 hours ago
    This is a perfect example of how honeypots, anti-malware organizations, and blacklists are so important to security.

    Even if you are an owasp member who reads daily vulnerability reports, it's so easy to think you are unaffected.

  • minitech 10 hours ago
    > Here’s the test. If /tmp/.XIN-unix/javae exists on my host, I’m fucked. If it doesn’t exist, then what I’m seeing is just Docker’s default behavior of showing container processes in the host’s ps output, but they’re actually isolated.

      /tmp/.XIN-unix/javae &
      rm /tmp/.XIN-unix/javae
    
    This article’s LLM writing style is painful, and it’s full of misinformation (is Puppeteer even involved in the vulnerability?).
    • jakelsaunders94 10 hours ago
      Yeah fair, I asked claude to help because honestly this was a little beyond my writing skills. I'm real though. Sorry. Will change
      • minitech 10 hours ago
        Seconding what others have said about preferring to read bad human writing. And I don’t want to pick on you – this is a broadly applicable message prompted by a drop in the bucket – but please don’t publish articles beyond your ability to fact check. Just write what you actually know, and when you’re making a guess or you still have open questions at the end of your investigation, be honest about that. (People make mistakes all the time anyway, but we’re in an age where confident and detailed mistakes have become especially accessible.)
      • sincerely 10 hours ago
        Just a data point - I would rather read bad human writing than LLM output
      • croemer 7 hours ago
        It still says Puppeteer in multiple places.
      • seafoamteal 10 hours ago
        Hi Jake! Cool article, and it's something I'll keep in mind when I start giving my self-hosted setup a remodel soon. That said, I have to agree with the parent comment and say that the LLM writing style dulled what would otherwise have been a lovely sysadmin detective work article and didn't make me want to explore your site further.

        I'm glad you're up to writing more of your own posts, though! I'm right there with you that writing is difficult, and I've definitely got some posts on similar topics up on my site that are overly long and meandering and not quite good, but that's fine because eventually once I write enough they'll hopefully get better.

        Here's hoping I'll read more from you soon!

    • jakelsaunders94 10 hours ago
      I fixed it, apologies for the misinformation.
      • 3np 8 hours ago
        It still says:

        > IT NEVER ESCAPED.

        You haven't confirmed this (at least from the contents of the article). You did some reasonable spot checks and confirmed/corrected your understanding of the setup. I'd agree that it looks likely that it did not escape or gain persistence on your host but in no way have you actually verified this. If it were me I'd still wipe the host and set up everything from scratch again[0].

        Also, your part about the container user not being root is still misinformed and/or misleading. The user inside the container, the container runtime user, and whether the container is privileged are three different things that are being talked about as one.

        Also, see my comment on firewall: https://news.ycombinator.com/item?id=46306974

        [0]: Not necessarily drop-everything-you-do urgently, but next time you get some downtime to do it calmly. Recovering like this is a good exercise anyway to make sure you can if you get a more critical situation in the future where you really need to. It will also be less time and work vs actually confirming that the host is uncontaminated.

        • jakelsaunders94 8 hours ago
          I did see your comment on Firewall, and you're right about the escape. It seems safe enough for now. Between the hacking and accidentally hitting the front page of HN it's been a long day.

          I'm going to sit down and rewrite the article and take a further look at the container tomorrow.

          • 3np 8 hours ago
            Hey, thanks for taking the time to share your learnings and engage. I'm sure there are HN readers out there who will be better off for it alongside you!

            (And good to hear you're leaving the LLMs out of the writing next time <3)

      • Eduard 9 hours ago
        I still see Puppeteer mentioned several times in your post and don't understand what that has to do with Umami, nextjs, and/or CVE-2025-66478.
  • xp84 7 hours ago
    I wonder, in a case like this, how hard it would be to "steal" the crypto that you've paid to mine. But I assume these people are probably smart enough that everything is instantly forwarded to their C&C server to prevent that.
  • egberts1 7 hours ago
    This Monero mining also happened with one of my VPSes over at interserv.net, when I forgot to log out of the root console in the web-based terminal and closed its browser tab instead.

    It has since been fixed: Lesson learned.

  • hoppp 8 hours ago
    This nextjs vulnerability is gonna be exploited everywhere because it's so easy. This is just the start.
    • christophilus 7 hours ago
      I didn’t think it was possible for me to dislike nextjs any more, but here we are. It’s the Sharepoint of the JS ecosystem.
  • exceptione 8 hours ago
    The first step I would take is running podman instead of Docker to prevent container escapes. Podman can be run truly rootless and doesn't mess with your firewall. Next I would drop all caps if possible.
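
    A sketch of that direction ("myapp" is a placeholder - add back only the caps the app actually needs):

        podman run --rm --cap-drop=all --security-opt=no-new-privileges \
          --read-only -p 127.0.0.1:8080:8080 myapp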
    • doodlesdev 8 hours ago
      What's the difference between running Podman and running Docker in rootless mode? (Other than Docker messing with the firewall, which apparently OP doesn't know about… yet). I understand Podman doesn't require a daemon, but is that all there is to it, or is there something I'm missing?
      • exceptione 8 hours ago
        The runtime has been designed from the ground up to be run daemonless and rootless. They also have a K8s runtime, that has an extremely small surface, just enough to be K8s compliant.

        But podman has also great integration with systemd. With that you could use a socket activated systemd unit, and stick the socket inside the container, instead of giving the container any network at all. And even if you want networking in the container, the podman folks developed slirp4netns, which is user space networking, and now something even better: passt/pasta.

      • crimsonnoodle58 5 hours ago
        Rootless docker is more compatible than podman, I found. I experienced crash dumps in, say, mssql with podman, but not with rootless docker.

        Also, rootless docker does not bypass ufw like rootful docker does.

  • ryanto 9 hours ago
    Sorry to hear you got hacked.

    I know we aren't supposed to rely on containers as a security boundary, but it sure is great hearing stories like this where the hack doesn't escape the container. The more obstacles the better I guess.

    • DANmode 7 hours ago
      Hacks are humans. For like, ten more minutes anyway.

      If the human involved can’t escalate, the hack can’t.

  • tolerance 10 hours ago
    Was dad notified of the security breach? If not he may want to consider switching hosting providers. Dad deserves a proper LLM-free post mortem.
    • jakelsaunders94 9 hours ago
      Hahaha, I did tell him this afternoon. This is the bloke who has the same password for all his banking apps despite me buying him 1password, though. The imminent threat from RCEs just didn't land.
      • dylan604 9 hours ago
        Buying someone 1Pass, or the like, and calling it good is not enough. People using password managers forget how long it takes to visit every site you use to create that site's record, then update the password to a secure one, and then log out and log back in with the new password to test that it's good. A lot of people who have a password manager bought for them are going to be over it after the second site. Just think about how many videos on TikTok they could have been watching instead.
        • venturecruelty 9 hours ago
          Yeah, mom and I sat down one afternoon and we changed all of her passwords to long, secure ones, generated by 1Password. It was a nice time! It also helped her remember all of the different services she needs to access, and now they're all safely stored with strong passwords. And it was a nice way to connect and spend some time together. :)
  • qingcharles 9 hours ago
    As an aside, if you're using a Hetzner VPS for Umami you might be over-specced. I just cut my Hetzner bill by $4/mo by moving my Umami box to one of the free Oracle Cloud VPS after someone on here pointed out the option to me. Depends whether this is a hobby thing or something more serious, but that option is there.
    • ianschmitz 9 hours ago
      I would pay $4/mo to stay as far away from Oracle as possible
    • angulardragon03 9 hours ago
      All fine and well, but oracle will threaten to turn off your instance if you don’t maintain a reasonable average CPU usage on the free hosts, and will eventually do so abruptly.

      This became enough of a hassle that I stopped using them.

      • treesknees 9 hours ago
        Do you mean if it’s idle, or if it’s maxed out? I’ve had a few relatively idle free-tier VMs with Oracle and I’ve not received any threats of shutoff over the last 3 years I’ve had them online.
      • qingcharles 7 hours ago
        I assumed the same, but as long as you keep a credit card on file apparently they will let you idle it too. I went in and set my max budget at $1/mo and set alerts too, just in case.
    • jakelsaunders94 9 hours ago
      I've got a whole Hetzner EX41 bare metal server, as opposed to a VPS. It's got like 20 services on it.

      But yeah it is massively overspecced. Makes me feel cool load testing my go backend at 8000 requests per second though!

    • spiderfarmer 9 hours ago
      I pay for Hetzner because it’s an EU based, sane company without a power hungry CEO.
    • tgtweak 9 hours ago
      The manageability of having everything on one host is kind of nice at that scale, but yeah you can stack free tiers on various providers for less.
  • pigbearpig 9 hours ago
    You might want to harden those outbound firewall rules as another step. Did the Umami container need the ability to initiate connections? If not, blocking that would eliminate the ability to do the outbound scans.

    It could also prevent something from exfiltrating sensitive data.
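
    One docker-native way to do that is an internal network for containers that never need to initiate outbound connections (a sketch - names are placeholders):

        docker network create --internal backend
        docker run -d --name db --network backend -e POSTGRES_PASSWORD=change-me postgres:16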

  • meisel 10 hours ago
    Is mining via CPU even worthwhile for the hackers? I thought ASICs dominated mining
    • jsheard 10 hours ago
      ASICs do dominate Bitcoin mining but Monero's POW algorithm is supposed to be ASIC resistant. Besides, who cares if it's efficient when it's someone else's server?
    • tgtweak 9 hours ago
      Monero's proof of work (RandomX) is very ASIC-resistant, and although it generates a very small amount of earnings, exploiting a vulnerability like this across thousands or tens of thousands of nodes can add up (8 modern cores running 24/7 on Monero would be in the 10-20c/day per node range). OP's VPS probably generated about $1 for those script kiddies.
      • pixl97 9 hours ago
        Hit 1000 servers and it starts adding up: at 10-20c/day each, that's $100-200/day. Especially if you live somewhere with a low cost of living.
      • asdff 7 hours ago
        So $40 a year? Does that imply all Monero is mined like this, since it's clearly not cost-effective to mine legitimately?
        • beeflet 2 hours ago
          I think so, but it is hard to say. It could also be a lot of people with excess (or stolen) power but their own equipment. I mine myself with surplus solar power.
    • rnhmjoj 9 hours ago
      This is the PoW scheme that Monero currently uses:

      > RandomX utilizes a virtual machine that executes programs in a special instruction set that consists of integer math, floating point math and branches.

      > These programs can be translated into the CPU's native machine code on the fly (example: program.asm).

      > At the end, the outputs of the executed programs are consolidated into a 256-bit result using a cryptographic hashing function (Blake2b).

      I doubt that anyone has managed to create an ASIC that does this more efficiently and cost-effectively than a basic CPU. So, no, probably no one is mining Monero with an ASIC.

    • heavyset_go 9 hours ago
      Yes, for Monero it is the only really viable option. I'd also assume that the OP's instance is one of many victims whose total mining might add up to a significant amount of crypto.
    • edm0nd 9 hours ago
      It's easily worth it, as they are not spending any money on compute or power.

      If they can enslave 100s or even 1000s of machines mining XMR for them, it's easy money, if you set aside the legality of it.

    • minitech 10 hours ago
      Hard for it not to be worthwhile, since it’s free for them. Same automated exploit run across the entire internet.
    • Bender 10 hours ago
      Optimal hardware costs money. Easy to hack machines are free and in nearly unlimited numbers.
    • justinsaccount 10 hours ago
      If the effectiveness of mining is represented as profit divided by the cost of running the infrastructure, then a CPU that someone else is paying for is worth it as long as the profit is greater than zero.
  • eyberg 5 hours ago
    a) containers don't contain

    b) if you want to limit your hosting environment to only the language/program you expect to run, you should provision with unikernels, which enforce it

  • zamadatix 10 hours ago
    I don't use Docker for my containers at home, but I take it from the concern that user namespacing is not employed by them or something?
    • heavyset_go 9 hours ago
      If you're root in a namespace and manage to escape, you can have root privileges outside of it.
      • zamadatix 8 hours ago
        Are you referring to user namespaces and, if so, how does that kind of breakout to host root work? I thought the whole point of user namespaces was that your UID 0 inside the container is UID 100000 or whatever from the perspective of outside the container. Escaping the container shouldn't inherently grant you the ability to change your actual UID in the host's main namespace in that kind of setup, but I'm not sure whether Docker actually leverages user namespaces or not.

        E.g. on my systemd-nspawn setup with --private-users=pick (enables user namespacing) I created a container and gave it a bind mount. From the container it appears like files in the bind mount created by the container namespace's UID 0 are owned by UID 0 but from outside the container the same file looks owned by UID 100000. Inverted, files owned by the "real" UID 0 on the host look owned by 0 to the host but as owned by 65534 (i.e. "nobody") from the container's perspective. Breaking out of the container shouldn't inherently change the "actual" user of the process from 100000 to 0 any more than breaking out of the container as a non-0 UID in the first place - same as breaking out of any of the other namespaces doesn't make the "UID 0" user in the container turn into "UID 0" on the host.
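
        For reference, Docker gates this behind an opt-in daemon setting, userns-remap, in /etc/docker/daemon.json:

            {
              "userns-remap": "default"
            }

        After a daemon restart, UID 0 inside containers maps to a subordinate range from /etc/subuid on the host, much like the nspawn setup above.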

        • heavyset_go 8 hours ago
          Users in user namespaces are granted the capabilities that root has; the namespaces themselves need to be locked down to prevent that. If a user with root capabilities escapes the namespace, they have those capabilities on the host.

          They also expose kernel interfaces that, if exploited, can lead to the same.

          In the end, namespaces are just for partitioning resources; using them as sandboxes can work, but they aren't really sandboxes.
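
          Easy to see from an ordinary shell with util-linux's unshare:

              # an unprivileged user gets uid 0 and a full (namespace-scoped)
              # capability set inside a new user namespace
              unshare --user --map-root-user sh -c 'id; grep CapEff /proc/self/status'
              # -> uid=0(root) ... CapEff: 000001ffffffffff (value varies by kernel)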

  • Computer0 9 hours ago
    Still confused what I am supposed to do to avoid all this.
    • movedx 8 hours ago
      Learning to manage an operating system in full, and having a healthy amount of paranoia, is a good first step.
      • doublerabbit 7 hours ago
        Then, write all your own software to please the paranoia for the next 15 years.

        Next year is the 5th year of my current personal project. Ten to go.

  • mikaelmello 10 hours ago
    This article is very interesting at first, but I once again got disappointed after reading clear signs of AI like "Why this matters" and "The moment of truth", and then the whole thing gets tainted, with signs all over the place.
    • dinkleberg 10 hours ago
      Yeah personally I’d much rather read a poorly constructed article with actually interesting content than the same content put into the formulaic AI voice.
      • venturecruelty 9 hours ago
        Article's been edited:

        >Edit: A few people on HN have pointed out that this article sounds a little LLM generated. That’s because it’s largely a transcript of me panicking and talking to Claude. Sorry if it reads poorly, the incident really happened though!

        For what it's worth, this is not an excuse, and I still don't appreciate being fed undisclosed slop. I'm not even reading it.

  • OutOfHere 9 hours ago
    You're lucky that Hetzner didn't delete your server and terminate your account.
    • croemer 7 hours ago
      With which justification?
      • OutOfHere 6 hours ago
        Cryptocurrency software usage. It is strictly against their policy. Afaik, their policy does not differentiate between voluntary and involuntary use.

        They have done it to others.

  • kopirgan 5 hours ago
    Only lesson seems to be use ufw! (or equivalent)
    • scottyeager 3 hours ago
      As others have mentioned, a firewall might have been useful in restricting outbound connections to limit the usefulness of the machine to the hacker after the breach.

      An inbound firewall can only help protect services that aren't meant to be reachable on the public internet. This service was exposed to the internet intentionally so a firewall wouldn't have helped avoid the breach.

      The lesson to me is that keeping up with security updates helps prevent publicly exposed services from getting hacked.
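
      On Debian/Ubuntu hosts that part is a two-liner (a sketch; containerized apps still need their images re-pulled separately):

          sudo apt install unattended-upgrades
          # enables the periodic job that installs security updates automatically
          sudo dpkg-reconfigure -plow unattended-upgrades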

      • kopirgan 3 hours ago
        Yes thanks for the clarification.
  • guerrilla 10 hours ago
    Whew, load average of 0 here.
  • codegeek 10 hours ago
    tl;dr: He got hacked, but the damage was restricted to one Docker container running Umami (which is built on top of Next.js). Thankfully, he was running the container as a non-privileged, non-root user, which saved him big time: the compromise was contained within the container and could not reach the host filesystem.

    Is there ever a reason someone should run a Docker container as root?

    • d4mi3n 10 hours ago
      If you're using the container to manage stuff on the host, it'll likely need to be a process running as root. I think the most common form of this is Docker-in-Docker style setups where a container is orchestrating other containers directly through the Docker socket.
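
      The classic shape of it, for illustration (the image name is a placeholder):

          # root-equivalent by construction: anything that can talk to the
          # Docker socket can start privileged containers on the host
          docker run -v /var/run/docker.sock:/var/run/docker.sock some-orchestrator-image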
  • nodesocket 9 hours ago
    I also run Umami, but patched once the CVE fix was released. Also, I only expose the tracking JS endpoint and /api/send publicly via Caddy (though /api/send might be enough to exploit the vuln). To actually interact with the Umami UI, I use Twingate (similar to Tailscale) to tunnel into the VPC locally.
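
    Roughly this shape in a Caddyfile (a sketch, assuming Umami's default /script.js tracker path and port 3000):

        analytics.example.com {
            # expose only the tracker script and the collection endpoint
            @public path /script.js /api/send
            handle @public {
                reverse_proxy localhost:3000
            }
            # everything else (admin UI, login, etc.) is refused
            handle {
                respond 403
            }
        }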
  • zrn900 6 hours ago
    Just use Hetzner managed servers? Very high specs, they manage everything, and you can install a lot of languages, apps etc.
  • iLoveOncall 10 hours ago
    > ls -la /tmp/.XIN-unix/javae

    Unless run as root, this could return "file not found" because of missing permissions, and not just because the file doesn't actually exist, right?

    > “I don’t use X” doesn’t mean your dependencies don’t use X

    That is beyond obvious, and I don't understand how anyone would feel safe after reading about a CVE in a widely used technology when they run dozens of containers on their server. I have Docker containers, and as soon as I read the article I went and checked, because I have no idea what technology most of them are built with.
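
    For anyone doing the same check, this is roughly it (a sketch; the image name in the second command comes from the first):

        # map running containers to their images
        docker ps --format '{{.Names}}\t{{.Image}}'
        # then peek at how a given image was built for hints about the stack
        docker image history --no-trunc <image> | head -n 20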

    > No more Umami. I’m salty. The CVE was disclosed, they patched it, but I’m not running Next.js-based analytics anymore.

    Nonsensical reaction.

    • qingcharles 9 hours ago
      Yeah, my Umami box was hit, but the time between the CVE disclosure and my box getting smacked was incredibly low. Umami patched it very quickly. And then patched it again a second time when the second CVE dropped right after.

      Nothing is immune. What analytics are you going to run? If you roll your own you'll probably leave a hole somewhere.

    • Hackbraten 9 hours ago
      > No more Umami. I’m salty.

      But kudos for the word play!

  • whalesalad 10 hours ago
    [flagged]
    • mrkeen 10 hours ago
      Someone mined Monero on my server a few years ago. I was running Jenkins.
  • venturecruelty 9 hours ago
    I still can't believe that there are so many people out here popping boxen and all they do is solve drug sudokus with the hardware. Hacks are so lame now.
  • j45 10 hours ago
    Never expose your server IP directly to the internet, vps or baremetal.
    • palata 10 hours ago
      Unless you need it to be reachable from the Internet, at which point it has to be... reachable from the Internet.
      • j45 6 hours ago
        Public-facing services routed through a firewall or WAF (Cloudflare), always.

        Backend access is trivial with Tailscale, etc.

    • sergsoares 9 hours ago
      Not exposing the server IP is one practice (obfuscation) in a list of several options.

      But that alone would not solve the problem, this being an RCE over HTTP; that is why edge proxy providers like Cloudflare [0] and Fastly [1] proactively added protections to their WAF products.

      Even Cloudflare had an outage trying to protect its customers [2].

      - [0] https://blog.cloudflare.com/waf-rules-react-vulnerability/
      - [1] https://www.fastly.com/blog/fastlys-proactive-protection-cri...
      - [2] https://blog.cloudflare.com/5-december-2025-outage/

    • cortesoft 8 hours ago
      Any server? How do you run a public website? Even if you put it behind a load balancer, the load balancer is still a “server exposed to the internet”
      • j45 6 hours ago
        Public-facing services routed through a firewall or WAF (Cloudflare), always.

        Backend access is trivial with Tailscale, etc.

        The public IP never needs to be used. You can just stick to an internal IP if you really want.

        • cortesoft 6 hours ago
          A firewall is a server, too, though.
    • mrkeen 10 hours ago
      You're going to hate this thing called DNS
      • j45 6 hours ago
        Been running production servers for a long time.

        DNS is no issue. External DNS can be handled by Cloudflare and their WAF. Their DNS service can obfuscate your public IP, or ideally you don't need to use it at all with a Cloudflare tunnel installed directly on the server. This is free.

        Backend access trivial with Tailscale, etc.

        Public IP doesn't always need to be used. You can just leave it an internal IP if you really want.

    • miramba 10 hours ago
      Is there a way to do that and still be able to access the server?
      • m00x 10 hours ago
        Yes, Cloudflare tunnels do this, but I don't think it's really necessary here.

        I use them for self-hosting.

        • doublerabbit 7 hours ago
          That server is still exposed to the internet on a public IP. It's just only known and routed through a 3rd party's castle.
          • j45 6 hours ago
            The tunnel doesn't have to use the public IP inbound; the Cloudflare tunnel dials outbound, and inbound can be entirely locked down.

            If you are using Cloudflare's DNS, they can hide your IP on the DNS record. The server would still have to be locked down, but some folks find ways to tighten that up too.

            If you're using a bare metal server it can be broken up.

            It's fair that it's a 3rd party's castle. At the same time until you know how to run and secure a server, some services are not a bad idea.

            Some people run Pangolin or nginx proxy manager on a cheap VPS, if it suits their use case, which will then connect securely to the server.

            We are lucky that many of these ideas have already been discovered and hardened by people before us.

            Even when I had bare metal servers connected to the internet, I would put a firewall like pfsense or something in between.

      • Carrok 10 hours ago
        Many ways. Using a "bastion host" is one option, with something like wireguard or tinc. Tailscale and similar services are another option. Tor is yet another option.
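
        The WireGuard side of a bastion is only a few lines (a sketch; keys and addresses are placeholders):

            # /etc/wireguard/wg0.conf on the bastion
            [Interface]
            Address = 10.0.0.1/24
            ListenPort = 51820
            PrivateKey = <bastion-private-key>

            [Peer]
            # your laptop; only its tunnel IP is allowed in
            PublicKey = <laptop-public-key>
            AllowedIPs = 10.0.0.2/32

        Bring it up with wg-quick up wg0, then SSH to the tunnel address instead of the public one.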
        • cortesoft 8 hours ago
          The bastion host is a server, though, and would be exposed to the internet.
        • venturecruelty 9 hours ago
          >Never expose your server IP directly to the internet, vps or baremetal.
      • sh3rl0ck 10 hours ago
        Either via a VPN or a tunnel.
      • j45 6 hours ago
        Yes, of course.

        Free way: sign up for a Cloudflare account and use their DNS; they will put their public IP in front of your www.

        Level 2 is installing the Cloudflare tunnel software on your server, so you never need to expose the public IP.

        Backend access, securely? Install Tailscale or Headscale.

        This should cover most web hosting scenarios. If there are additional ports or services, tools like nginx proxy manager (web-based) or others can help. Some people put them on a dedicated VPS as a jump machine.

        This way, using the public IP can be almost optional, and it can be locked down if needed. This is all before running a firewall on it.
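
        The tunnel setup is roughly this (a sketch; the tunnel name and hostname are placeholders, and it assumes an ingress rule in ~/.cloudflared/config.yml pointing the hostname at your local service):

            cloudflared tunnel login
            cloudflared tunnel create my-tunnel
            # publish a hostname that resolves to the tunnel, not your IP
            cloudflared tunnel route dns my-tunnel www.example.com
            cloudflared tunnel run my-tunnel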

      • iLoveOncall 10 hours ago
        Yes, Cloudflare Zero Trust. It's entirely free; I use it for loads of containers on multiple hosts and it works perfectly.
        • j45 6 hours ago
          It's really convenient. I don't love that it's a one-of-one service, but it's a decent enough placeholder.
    • procaryote 10 hours ago
      As in "always run a network firewall" or "keep the IP secret"? Because I've had people suggest both and one is silly.
      • j45 6 hours ago
        A network firewall is mandatory.

        Keeping the IP secret seems like a misnomer.

        It's often possible to lock down the public IP entirely so it accepts no connections except what's initiated from the inside (like the Cloudflare tunnel or otherwise reaching out).

        Something like Cloudflare plus a tunnel on one side, and Tailscale or something to get into it on the other.

        Folks other than me have written decent tutorials that have been helpful.
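
        For the lockdown itself, firewalld's drop zone is about the shortest route (a sketch):

            # refuse all unsolicited inbound traffic; connections initiated
            # from the inside (e.g. a tunnel dialing out) still work
            firewall-cmd --set-default-zone=drop   # applies to runtime and permanent config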