Bugs Rust won't catch

(corrode.dev)

126 points | by lwhsiao 4 hours ago

17 comments

  • collinfunk 1 hour ago
    Hi, I am one of the maintainers of GNU Coreutils. Thanks for the article, it covers some interesting topics. In the little Rust that I have used, I have felt that it is far too easy to write TOCTOU races using std::fs. I hope the standard library gets an API similar to openat eventually.

    I just want to mention that I disagree with the section titled "Rule: Resolve Paths Before Comparing Them". Generally, it is better to make calls to fstat and compare the st_dev and st_ino. However, that was mentioned in the article. A side effect that seems less often considered is the performance impact. Here is an example in practice:

      $ mkdir -p $(yes a/ | head -n $((32 * 1024)) | tr -d '\n')
      $ while cd $(yes a/ | head -n 1024 | tr -d '\n'); do :; done 2>/dev/null
      $ echo a > file
      $ time cp file copy
    
      real 0m0.010s
      user 0m0.002s
      sys 0m0.003s
      $ time uu_cp file copy
    
      real 0m12.857s
      user 0m0.064s
      sys 0m12.702s
    
    I know people are very unlikely to do something like that in real life. However, GNU software tends to work very hard to avoid arbitrary limits [1].

    Also, the larger point still stands, but the article says "The Rust rewrite has shipped zero of these [memory safety bugs], over a comparable window of activity." However, this is not true [2]. :)

    [1] https://www.gnu.org/prep/standards/standards.html#Semantics [2] https://github.com/advisories/GHSA-w9vv-q986-vj7x
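The fstat-and-compare approach mentioned above can be sketched via `std::os::unix::fs::MetadataExt`. This is a hedged sketch, not uutils or coreutils code: `same_file` is a hypothetical helper, and it deliberately takes open handles (so the comparison is an fstat on a pinned inode) rather than paths, which also sidesteps the cost of canonicalizing a 64 K-component path.

```rust
use std::fs::{self, File};
use std::io;
use std::os::unix::fs::MetadataExt;

// Hypothetical helper: two names refer to the same file iff their
// (st_dev, st_ino) pairs match. No path resolution happens here at all;
// both stats are fstat calls on already-open descriptors.
fn same_file(a: &File, b: &File) -> io::Result<bool> {
    let (ma, mb) = (a.metadata()?, b.metadata()?);
    Ok(ma.dev() == mb.dev() && ma.ino() == mb.ino())
}

fn main() -> io::Result<()> {
    fs::write("orig.txt", "x")?;
    fs::hard_link("orig.txt", "link.txt")?; // second name, same inode
    let (a, b) = (File::open("orig.txt")?, File::open("link.txt")?);
    println!("same = {}", same_file(&a, &b)?); // prints "same = true"
    fs::remove_file("orig.txt")?;
    fs::remove_file("link.txt")?;
    Ok(())
}
```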

    • dapperdrake 45 minutes ago
      First of all, thank you for presenting a succinct take on this viewpoint from the other side of the fence from where I am at.

      So how can I learn from this? (Asking very aggressively, especially for Internet writing, to make the contrast unmistakable. And contrast helps with perceiving differences and mistakes.) (You also don’t owe me any of your time or mental bandwidth, whatsoever.)

      So here goes:

      Question 1:

      How come "speed", "performance", race conditions and st_ino keep getting brought up?

      Speed (latency), physically writing things out to storage (sequentially, atomically (ACID), all of HDD NVME SSD ODD FDD tape, "haskell monad", event horizons, finite speed of light and information, whatever) as well as race conditions all seem to boil down to the same thing. For reliable systems like accounting the path seems to be ACID or the highway. And "unreliable" systems forget fast enough that computers don’t seem to really make a difference there.

      Question 2:

      Does throughput really matter more than latency in everyday application?

      Question 3 (explanation first, this time):

      The focus on inode numbers is at least understandable with regards to the history of C and unix-like operating systems and GNU coreutils.

      What about this basic example? Just make a USB thumb drive "work" for storing files (ignoring nand flash decay and USB). Without getting tripped up in libc IO buffering, fflush, kernel buffering (Hurd if you prefer it over Linux or FreeBSD), more than one application running on a multi-core and/or time-sliced system (to really weed out single-core CPUs running only a single user-land binary with blocking IO).

    • s20n 1 hour ago
      Sorry, complete noob here. Why didn't you just cd into $(yes a/ | head -n $((32 * 1024)) | tr -d '\n')? Why do you need to use the while loop for cd?

      EDIT: got it. -bash: cd: a/a/a/....../a/a/: File name too long

      • collinfunk 1 hour ago
        No need to apologize at all. Doing it in one cd invocation would fail since the file name is longer than PATH_MAX. In that case passing it to a system call would fail with errno set to ENAMETOOLONG.

        You could probably make the loop more efficient, but it works well enough. Also, some shells don't allow you to enter directories that deep at all. It doesn't work on mksh, for example.
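The ENAMETOOLONG behavior is easy to reproduce from Rust too. A hedged sketch (the errno value 36 is Linux-specific, so only the failure itself is asserted, not the exact code):

```rust
use std::fs;

fn main() {
    // A component chain far beyond PATH_MAX (typically 4096 bytes on
    // Linux): the kernel rejects the whole path before doing any lookup.
    let too_long: String = "a/".repeat(32 * 1024);
    let err = fs::metadata(&too_long).unwrap_err();
    // On Linux this is ENAMETOOLONG (errno 36); other platforms differ.
    println!("failed with os error {:?}", err.raw_os_error());
}
```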

        • dapperdrake 42 minutes ago
          Facetious reply:

          > However, GNU software tends to work very hard to avoid arbitrary limits [1].

          • Joker_vD 22 minutes ago
            Yes? The quote says "tends to", and you still can cd into that directory, albeit not in a single invocation. Windows has similar limitations [0]; it's just that their MAX_PATH is 260, so it's somewhat more noticeable... and IIRC the hard limit of 32 K for paths is non-negotiable.

            [0] https://learn.microsoft.com/en-us/windows/win32/fileio/maxim...

    • cyberax 34 minutes ago
      To be fair, Vec::set_len bug in Rust was in 2021. And even then it had to be annotated as `unsafe`. It was then deprecated and a linter check was added: https://github.com/rust-lang/rust-clippy/issues/7681
      • Dr_Emann 2 minutes ago
        To be even fair-er, it wasn't actually memory unsafety, it was "just" unsoundness: there was a type that, IF you gave it a weird io reader implementation, would let that implementation see uninit data or expose uninit data elsewhere. But the only readers actually used were well-behaved ones.
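For context, the sound pattern that avoids the set_len hazard is to hand the reader an initialized buffer. A hedged sketch (`read_up_to` is a hypothetical helper, not the API that had the bug):

```rust
use std::io::Read;

// Hypothetical helper: read at most `cap` bytes without ever exposing
// uninitialized memory. `vec![0u8; cap]` zero-initializes, so even a
// misbehaving Read impl can only observe zeroes, never stale heap data.
fn read_up_to(mut r: impl Read, cap: usize) -> std::io::Result<Vec<u8>> {
    let mut buf = vec![0u8; cap];
    let n = r.read(&mut buf)?;
    buf.truncate(n); // keep only what was actually read
    Ok(buf)
}

fn main() {
    let data = read_up_to(&b"hello"[..], 16).unwrap();
    assert_eq!(data, b"hello");
    println!("read {} bytes", data.len()); // prints "read 5 bytes"
}
```

The zeroing costs a memset per read, which is exactly the overhead the unsafe set_len pattern was trying to avoid; clippy's `uninit_vec` lint now flags that pattern.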
  • wahern 1 hour ago
    > What’s notable is that all of these bugs landed in a production Rust codebase, written by people who knew what they were doing

    They knew how to write Rust, but clearly weren't sufficiently experienced with Unix APIs, semantics, and pitfalls. Most of those mistakes are exceedingly amateur from the perspective of long-time GNU coreutils (or BSD or Solaris base) developers, issues that were identified and largely hashed out decades ago, notwithstanding the continued long tail of fixes--mostly just a trickle these days--to the old codebases.

    • pando85 3 minutes ago
      Memory safety catches buffer overflows. CI catches logic bugs. Neither catches the Unix API gotchas nobody documented.
    • nine_k 1 hour ago
      More than that: it seems that Rust stdlib nudges the developer towards using neat APIs at an incorrect level of abstraction, like path-based instead of handle-based file operations. I hope I'm wrong.
      • NobodyNada 46 minutes ago
        Nearly every available filesystem API in Rust's stdlib maps one-to-one with a Unix syscall (see Rust's std::fs module [0] for reference -- for example, the `File` struct is just a wrapper around a file descriptor, and its associated methods are essentially just the syscalls you can perform on file descriptors). The only exceptions are a few helper functions like `read_to_string` or `create_dir_all` that perform slightly higher-level operations.

        And, yeah, the Unix syscalls are very prone to mistakes like this. For example, Unix's `rename` syscall takes two paths as arguments; you can't rename a file by handle; and so Rust has a `rename` function that takes two paths rather than an associated function on a `File`. Rust exposes path-based APIs where Unix exposes path-based APIs, and file-handle-based APIs where Unix exposes file-handle-based APIs.

        So I agree that Rust's stdlib is somewhat mistake-prone; not so much because it's being opinionated and "nudg[ing] the developer towards using neat APIs", but because it's so low-level that it's not offering much "safety" in filesystem access over raw syscalls beyond ensuring that you didn't write a buffer overflow.

        [0]: https://doc.rust-lang.org/std/fs/index.html
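The path-based vs handle-based distinction can be made concrete. A hedged sketch (`read_via_handle` is a hypothetical name; the racy window is only illustrative, nothing swaps the file here):

```rust
use std::fs::{self, File};
use std::io::{self, Read};

// Handle-based access: resolve the path exactly once, then do every
// subsequent operation (fstat, read) on the pinned inode.
fn read_via_handle(path: &str) -> io::Result<(u64, String)> {
    let mut f = File::open(path)?; // the only path resolution
    let len = f.metadata()?.len(); // fstat on the handle
    let mut s = String::new();
    f.read_to_string(&mut s)?;     // read on the same handle
    Ok((len, s))
}

fn main() -> io::Result<()> {
    fs::write("data.txt", "hello")?;
    // Path-based equivalent for contrast: metadata() then read() each
    // re-resolve "data.txt" from scratch, leaving a check-then-use
    // window in between where a parent component could be swapped.
    let _ = fs::metadata("data.txt")?;
    let _ = fs::read("data.txt")?;
    let (len, s) = read_via_handle("data.txt")?;
    println!("len={} contents={}", len, s);
    fs::remove_file("data.txt")
}
```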

      • JuniperMesos 17 minutes ago
        After reading this article, I'm inclined to think that the right thing for this project to do is write their own library that wraps the Rust stdlib with a file-handle-based API, along with one method to get a file handle from a Path; rewrite the code to use that library rather than Rust stdlib methods; and then add a lint check that guards against any use of the Rust standard library file methods anywhere outside of that wrapper.
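A minimal sketch of that wrapper idea (hypothetical names throughout; real code would cover many more operations, and the lint half could be done with clippy's `disallowed-methods` configuration pointed at the `std::fs` path functions):

```rust
use std::fs::File;
use std::io::{self, Read};
use std::path::Path;

// Hypothetical wrapper: the newtype never exposes the inner File or any
// path again, so every further operation is handle-based by construction.
pub struct Handle(File);

impl Handle {
    /// The single sanctioned path -> handle entry point.
    pub fn open<P: AsRef<Path>>(p: P) -> io::Result<Handle> {
        File::open(p).map(Handle)
    }
    /// fstat on the handle, immune to path swaps after open().
    pub fn len(&self) -> io::Result<u64> {
        self.0.metadata().map(|m| m.len())
    }
    pub fn read_to_string(&mut self) -> io::Result<String> {
        let mut s = String::new();
        self.0.read_to_string(&mut s)?;
        Ok(s)
    }
}

fn main() -> io::Result<()> {
    std::fs::write("wrapped.txt", "hi")?;
    let mut h = Handle::open("wrapped.txt")?;
    println!("len={} body={}", h.len()?, h.read_to_string()?);
    std::fs::remove_file("wrapped.txt")
}
```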
    • AlotOfReading 1 hour ago
      Someone once coined a related term, "disassembler rage". It's the idea that every mistake looks amateur when examined closely enough. It comes from people sitting in a disassembler and raging at the high-level programmers who had the gall to, e.g., use conditionals instead of a switch statement inside a function call a hundred frames deep.

      We're looking solely at the few things they got wrong, and not the thousands of correct lines around them.

      • irishcoffee 1 hour ago
        When I read the article I came away with the impression that shipping bugs this severe in a rewrite of utils used by hundreds of millions of people daily (hourly?) isn’t ok. I don’t think brushing the bad parts off with “most of the code was really good!” is a fair way to look at this.

        Cloudflare crashed a chunk of the internet with a rust app a month or so ago, deploying a bad config file iirc.

        Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are.

        • gmueckl 38 minutes ago
          I think that legitimate real-world issues in Rust code should be talked about more often. Right now the language enjoys a reputation that is essentially misleading marketing. It isn't possible to create a programming language that doesn't allow bugs to happen (even with formal verification you can still prove correctness based on a wrong set of assumptions). This weird, kind of religious belief that Rust leads to magically completely bug-free programs needs to be countered and brought in touch with reality IMO.
          • testdelacc1 4 minutes ago
            Is it possible you’ve misunderstood what Rust promises?

            > It isn't possible to create a programing language that doesn't allow bugs to happen

            Yes, that’s true. No one doubts this. Except you seem to think that Rust promises no bugs at all? I don’t know where you got this impression from, but it is incorrect.

            Rust promises that certain kinds of bugs like use-after-free are much, much less likely. It eliminates some kinds of bugs, not all bugs altogether. It’s possible that you’ve read the claim on kinds of bugs, and misinterpreted it as all bugs.

            I’ve had this conversation before, and it usually ends like https://www.smbc-comics.com/comic/aaaah

        • lelanthran 10 minutes ago
          I find it hilarious that this comment is being downvoted.

          Exactly what is the controversial take here?

          > I don’t think brushing the bad parts off with “most of the code was really good!” is a fair way to look at this.

          Nope, this is fine.

          > Cloudflare crashed a chunk of the internet with a rust app a month or so ago, deploying a bad config file iirc.

          Maybe this?

          > Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are.

          Nope, this is fine too.

    • slopinthebag 1 hour ago
      Seems pretty impressive that they rewrote the coreutils in a new language, with so little Unix experience, and managed to do such a good job with so few bugs or vulns. I would have expected an order of magnitude more, at least.

      Shows how good Rust is, that even inexperienced Unix devs can write stuff like this and make almost no mistakes.

      • nine_k 56 minutes ago
        Yes, it's the lack of Unix experience that's terrifying. So many of the mistakes listed are rookie mistakes, like not propagating the most severe errors, or the `kill -1` thing. Why were people who apparently did not have much experience using coreutils assigned to rewrite coreutils?
        • aw1621107 35 minutes ago
          > Why were people who apparently did not have much experience using coreutils assigned to rewrite coreutils?

          From what I understand, "assigned" probably isn't the best way to put it. uutils started off back in 2013 as a way to learn Rust [0] way before the present kerfuffle.

          [0]: https://github.com/uutils/coreutils/tree/9653ed81a2fbf393f42...

          • nineteen999 8 minutes ago
            Yeah, perhaps learning UNIX APIs and Rust at the same time doesn't lead to a drop-in replacement ready to be shipped in major distributions. Who would have thunk it.
        • JuniperMesos 16 minutes ago
          Why is it even possible to represent a negative PID, let alone treat the integer -1 as a PID meaning "all effective processes"? This seems like a mistake (if not a rookie mistake) in the Linux kernel API itself.
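For reference, kill(2) overloads its pid argument: pid == 0 signals the caller's process group, pid == -1 signals every process the caller is permitted to signal, and pid < -1 signals the process group |pid|. A hedged sketch of the guard a utility could apply before the value ever reaches the syscall (`validate_single_pid` is a hypothetical name):

```rust
// Hypothetical guard: a tool that only ever means "signal this one
// process" should reject the magic kill(2) values up front.
fn validate_single_pid(pid: i64) -> Result<u32, String> {
    match u32::try_from(pid) {
        Ok(p) if p > 0 => Ok(p),
        _ => Err(format!(
            "refusing pid {pid}: 0, -1, and negative pids have special kill(2) semantics"
        )),
    }
}

fn main() {
    assert_eq!(validate_single_pid(1234), Ok(1234));
    assert!(validate_single_pid(-1).is_err()); // the "all processes" trap
    assert!(validate_single_pid(0).is_err());  // the "my process group" trap
    println!("guards ok");
}
```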
  • hombre_fatal 54 minutes ago
    One thing that's hard about rewriting code is that the original code was transformed incrementally over time in response to real world issues only found in production.

    The code gets silently encumbered with those lessons, and unless they are documented, there's a lot of hidden work that needs to be done before you actually reach parity.

    TFA is a good list of this exact sort of thing.

    Before you call people amateur for it, also consider it's one of the most softwarey things about writing software. It was bound to happen unless coreutils had really good technical docs and included tests for these cases that they ignored.

  • Joker_vD 29 minutes ago
    > The pattern is always the same. You do one syscall to check something about a path, then another syscall to act on the same path. Between those two calls, an attacker with write access to a parent directory can swap the path component for a symbolic link. The kernel re-resolves the path from scratch on the second call, and the privileged action lands on the attacker’s chosen target.

    It's actually somewhat worse than that, because the attacker with write access to a parent directory can mess with hard links as well... sure, that only messes with the regular files themselves, but there are basically no mitigations. See e.g. [0] and other posts on the site.

    [0] https://michael.orlitzky.com/articles/posix_hardlink_heartac...
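The hard-link variant is easy to demonstrate. A hedged sketch: st_nlink > 1 is the only visible hint that another directory entry aliases the inode, and there is no link to "resolve" away.

```rust
use std::fs;
use std::os::unix::fs::MetadataExt;

fn main() -> std::io::Result<()> {
    fs::write("victim.txt", "secret")?;
    // What an attacker with write access to some directory can do: create
    // a second name for the same inode. No symlink, nothing to resolve;
    // both names ARE the regular file.
    fs::hard_link("victim.txt", "alias.txt")?;
    let m = fs::metadata("alias.txt")?;
    // Both names share (st_dev, st_ino); st_nlink counts the entries.
    println!("nlink={}", m.nlink());
    fs::remove_file("victim.txt")?;
    fs::remove_file("alias.txt")
}
```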

    • sysguest 27 minutes ago
      hmm... maybe a 'write lock' on the directory? though this will become more hairy without timeouts/etc...
  • oconnor663 17 minutes ago
    > The trap is that get_user_by_name ends up loading shared libraries from the new root filesystem to resolve the username.

    That's kind of horrifying. Is there a reliable list somewhere of all the functions that do that? Is that list considered stable?

  • fschuett 57 minutes ago
    Thanks for the list. I like these lists, so I can put them into a .md file, then launch "one agent per file" on my codebase and see if they can find anything similar to the mentioned CVEs.

    Rust won't catch it, but now the agents will.

    Edit: https://gist.github.com/fschutt/cc585703d52a9e1da8a06f9ef93c... for anyone who wants to copy this

  • 9fwfj9r 57 minutes ago
    So it's basically failing on:

    - necessary atomicity for filesystem operations
    - annoying path & string encoding
    - inertia for historical behaviors
  • jolt42 1 hour ago
    I wonder if Rust becomes more popular with AI as Rust can help catch what AI misses, but then if that's the case then what about Haskell, or Lean, or?
    • tayo42 43 minutes ago
      The way Haskell handles memory is weird and can be unpredictable.
  • rvz 1 hour ago
    This is what happens when many people hype a technology that solves a specific class of vulnerabilities but is not designed to prevent others, such as logic errors caused by human / AI error.

    Granted, the uutils authors are well experienced in Rust, but it is not enough for a large-scale rewrite like this and you can't assume that it's "secure" because of memory safety.

    In this case, this post tells us that Unix itself has thousands of gotchas, and re-implementing the coreutils in Rust is not a silver bullet. Even the bugs Unix (and the POSIX standard) has are part of the specification, and can later be revealed as vulnerabilities in practice.

  • micheles 1 hour ago
    > uutils now runs the upstream GNU coreutils test suite against itself in CI. That’s the right scale of defense for this class of bug.

    That's the minimum; it is absurd that they did not start from that!
  • immanuwell 36 minutes ago
    rust promised you memory safety and delivered - but turns out the filesystem doesn't care about your borrow checker, and these 44 cves are the receipt
  • Analemma_ 1 hour ago
    I know nobody's perfect and I'm not asking for perfection, but these bugs are pretty alarming? It seems like these supposed coreutils replacements are being written by people who don't know anything about Unix, and also didn't even bother looking at the GNU tools they are trying to replace. Or at least didn't have any curiosity about why the GNU tools work the way they do. Otherwise they might've wondered about why things operate on bytes and file descriptors instead of strings and paths.

    I hate to armchair general, but I clicked on this article expecting subtle race conditions or tricky ambiguous corners of the POSIX standard, and instead found that it seems to be amateur hour in uutils.

    • lelanthran 1 hour ago
      > It seems like these supposed coreutils replacements are being written by people who don't know anything about Unix, and also didn't even bother looking at the GNU tools they were supposed to be replacing.

      They're a group of people who want to replace pro-user software (GPL) with pro-business software (MIT).

      I don't really want them to achieve their goal.

    • ronjakoi 48 minutes ago
      They are deliberately not looking at coreutils code because the Rust versions are released as MIT and they don't want the project contaminated by GPL. I am not fond of this, personally.
  • SpectreHat 55 minutes ago
    [dead]
  • tokyobreakfast 1 hour ago
    [flagged]
  • marsven_422 1 hour ago
    [dead]
  • slopinthebag 1 hour ago
    I find it interesting how people will criticise Rust for not preventing all bugs, when the alternative languages don't prevent those same bugs nor the bugs rust does catch. If you're comparing Rust to a perfect language that doesn't exist, you should probably also compare your alternative to that perfect language as well right?

    I'd be interested in a comparison with the amount of bugs and CVE's in GNU coreutils at the start of its lifetime, and compare it with this rewrite. Same with the number of memory bugs that are impossible in (safe) Rust.

    Don't just downvote me, tell me how I'm wrong.

    • throawayonthe 15 minutes ago
      i don't think CVEs were a thing at the start of the GNU rewrite
  • Scarbutt 1 hour ago
    But Google told us Rust can only have 0.2 vulnerabilities per million lines of code.
    • Ayaan2004 44 minutes ago
      Windows also told us they will soon focus on Rust to build the operating system
    • mqus 44 minutes ago
      Bug != Vulnerability