My only complaint with this excellent list is that it treats "generics" and "lifetimes" as separate things. There's a reason the lifetime is inside the generic brackets. The code is generic over some lifetimes just as it can be generic over some types.
As a Rust beginner I read lifetimes backwards, thinking <'a> means I'm "declaring a lifetime" which I then use. What that actually declares is a placeholder for a lifetime the compiler will attempt to find wherever that struct or function is used, just as it would attempt to find a valid type for a type generic <T> at the points of usage.
Once I fixed that misconception everything made much more sense. Reminding myself that only the function signature matters, not the actual code, was the other thing I needed to really internalize.
The compiler messages hinder this sometimes, as when the compiler says "X doesn't live long enough" it actually means "using my limited and ever-evolving ability to infer possible lifetimes from your code, I can't find one that I can use here".
This is also (for me, anyway) a common "it's fine but it won't compile" case, where you don't have enough lifetime parameters. In other words, you're accidentally giving two things the same lifetime parameter when it's not actually necessary to require that the compiler come up with a single lifetime that works for both. The compiler error for that does not typically lead you to a solution directly.
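As a minimal sketch of that "placeholder" reading (the `last` function below is made up, not from the article): both `'a` and `T` are placeholders that the compiler fills in at each call site.

    fn last<'a, T>(items: &'a [T]) -> Option<&'a T> {
        items.last()
    }

    fn main() {
        let nums = vec![1, 2, 3];
        let names = vec![String::from("a"), String::from("b")];

        // Here the compiler picks T = i32 and 'a = the borrow of `nums`...
        println!("{:?}", last(&nums));
        // ...and here T = String, with a different concrete lifetime.
        println!("{:?}", last(&names));
    }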
Lifetimes and types are different, but the part where they are generic is the same. I think of it as "who controls/decides the value of this parameter". It's a crucial part of understanding lifetimes, not just a misconception.
It's actually a classic and much-repeated case in Rust education:
    fn pick_first<'a>(x: &'a str, y: &'a str) -> &'a str {
        x // We only actually return x, never y
    }

    fn main() {
        let s1 = String::from("long-lived");
        let result;
        {
            let s2 = String::from("short-lived");
            result = pick_first(&s1, &s2);
        } // s2 dropped here
        println!("{}", result);
    }
The error here is "borrowed value [pointing to &s2] does not live long enough". Of course it does live long enough, it's just that the constraints in the function signature don't say this usage is valid.
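For completeness, one hedged sketch of a way out, assuming the intent really is to only ever return `x`: give the two parameters independent lifetime placeholders so the signature stops tying them together, and the same `main` compiles.

    fn pick_first<'a, 'b>(x: &'a str, y: &'b str) -> &'a str {
        let _ = y; // y is inspected but never returned
        x
    }

    fn main() {
        let s1 = String::from("long-lived");
        let result;
        {
            let s2 = String::from("short-lived");
            result = pick_first(&s1, &s2); // result only borrows from s1
        } // s2 dropped here; nothing borrowed from it escapes
        println!("{}", result);
    }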
Thinking back to being a beginner, I think part of the problem here is that the compiler is overstating its case. With experience, one learns to read this message as "borrowed value could not be proved to live as long as required by the function declaration", but that's not what it says! It asserts that the value in fact does not live long enough, which is clearly not true.
(Edit: having said this, I now realize the short version confuses beginners because of the definition of “enough”. They read it as “does not live long enough to be safe”, which the compiler is not—and cannot be—definitively saying.)
When this happens in a more complex situation (say, involving a deeper call tree and struct member lifetimes as well), you just get this same basic message, and finding the place where you've unnecessarily tied two lifetimes together can be a bit of a hunt.
My impression is that it's difficult or impossible for the compiler to "explain its reasoning" in a more complex case (I made an example at [0] [1]), which is understandable, but it does mean you always get this bare assertion "does not live long enough" and have to work through the tree of definitions yourself to find the bad constraint.

[0] https://play.rust-lang.org/?version=stable&mode=debug&editio...

[1] https://play.rust-lang.org/?version=stable&mode=debug&editio...
Would output along the following lines be an improvement?
    error[E0597]: `buffer` does not live long enough
      --> src/main.rs:63:21
       |
    59 |     let report = {
       |         ------ borrow later stored here
    60 |         let mut buffer = Vec::new();
       |                 ---------- binding `buffer` declared here
    61 |         let ctx = Context {
    62 |             config: &config,
       |             ------ this field and `buffer` are required by `Context` to have the same lifetime
    63 |             buffer: &mut buffer,
       |                     ^^^^^^^^^^^ borrowed value does not live long enough
    ...
    68 |         };
       |         - `buffer` dropped here while still borrowed
       |
    help: consider making different fields in `Context` have independent lifetimes
       |
     4 | struct Context<'a> {
       |                ^^
     5 |     config: &'a Config,
       |              ^^
     6 |     buffer: &'a mut Vec<u8>,
       |              ^^
     7 | }
I'm fairly out of touch with Rust. I think generics and lifetimes are also separate in the sense that only the generics get monomorphised, while lifetimes don't. I.e., you get distinct structs Foo<u32> and Foo<i32>, depending on the (type) argument with which Foo was instantiated (just like it is in C++), but only one Bar<'a> no matter what (lifetime) argument it was "instantiated" with.
You're slightly incorrect. Lifetimes do get "monomorphized" in the sense that multiple concrete lifetimes can be filled in for a given lifetime parameter (that's why they're called parameters), but also, lifetimes are fully erased well before you get to codegen monomorphization, which is what happens with type generics.
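A small sketch of that point with a made-up `Wrapper` type: one struct definition, one compiled type, but two different concrete lifetimes plugged into `'a`.

    struct Wrapper<'a>(&'a str);

    fn main() {
        let outer = String::from("outer");
        let w1 = Wrapper(&outer); // 'a covers most of main here

        {
            let inner = String::from("inner");
            let w2 = Wrapper(&inner); // same Wrapper type, shorter concrete 'a
            println!("{} {}", w1.0, w2.0);
        }
    }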
> It's possible for a Rust program to be technically compilable but still semantically wrong.
This was my biggest problem when I used to write Rust. The article has a small example but when you start working on large codebases these problems pop up more frequently.
Everyone says the Rust compiler will save you from bugs like this, but as the article shows you can compile bugs into your codebase, and when you finally get an unrelated error you have to debug all the bugs in your code. Even the ones in code that was working previously.
> Rust does not know more about the semantics of your program than you do
Also this. Some people absolutely refuse to believe it though.
I think the key idea is that Rust gives you a lot of tools to encode semantics into your program. So you've got a much greater ability for the compiler to understand your semantics than in a language like JavaScript (say) where the compiler has very little way of knowing any information about lifetimes.
However, you've still got to do that job of encoding the semantics. Moreover, the default semantics may not necessarily be the semantics you are interested in. So you need to understand the default semantics enough to know when you need something different. This is the big disadvantage of lifetime elision: in most cases it works well, but it creates defaults that may not be what you're after.
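A hedged sketch of how the elided default can differ from the intent (the `Parser` type is invented for illustration): elision would tie the returned slice to `&self`, while the goal is to return a slice of `line`, so the lifetimes have to be written out.

    struct Parser {
        prefix: String,
    }

    impl Parser {
        // Elided, this would be `fn strip(&self, line: &str) -> &str`, which
        // borrows the result from `&self` and rejects returning part of `line`.
        fn strip<'p, 'l>(&'p self, line: &'l str) -> &'l str {
            line.strip_prefix(self.prefix.as_str()).unwrap_or(line)
        }
    }

    fn main() {
        let p = Parser { prefix: String::from("> ") };
        let quoted = String::from("> hello");
        println!("{}", p.strip(&quoted));
    }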
The other side is that sometimes the semantics you want to encode can't be expressed in the type system, either because the type system explicitly disallows them, or because it doesn't comprehend them. At this point you start running into issues like disjoint borrows, where you know two attributes in a struct can be borrowed independently, but it's very difficult to express this to the compiler.
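A minimal sketch of the disjoint-borrow problem, using an invented `State` struct: direct field borrows are seen as disjoint, but the same access routed through a method borrows all of `self`.

    struct State {
        data: Vec<u32>,
        log: Vec<String>,
    }

    impl State {
        // Borrowing through a method captures all of `self`, not just `log`.
        fn log_mut(&mut self) -> &mut Vec<String> {
            &mut self.log
        }
    }

    fn main() {
        let mut s = State { data: vec![1, 2, 3], log: Vec::new() };

        // Fine: the compiler sees that these two field borrows are disjoint.
        let d = &s.data;
        let l = &mut s.log;
        l.push(format!("len = {}", d.len()));

        // Rejected if uncommented: the method borrow covers all of `s`,
        // so `s.data` cannot be read while `l2` is alive.
        // let l2 = s.log_mut();
        // l2.push(format!("len = {}", s.data.len()));

        // Fine again: the earlier borrows have already ended here.
        s.log_mut().push(String::from("done"));
        println!("{:?} {:?}", s.data, s.log);
    }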
That said, I think Rust gives you more power to express semantics in the type system than a lot of other languages (particularly a lot of more mainstream languages) which I think is what gives rise to this idea that "if it compiles, it works". The more you express, the more likely that statement is to be true, although the more you need to check that what you've expressed does match the semantics you're aiming for.
Of course, if your program compiles, that doesn't mean the logic is correct. However, if your program compiles _and_ the logic is correct, there's a high likelihood that your program won't crash (provided you handle errors and the like: untrusted data coming from outside, allocations that can fail, etc.). In Rust's case, this means that the compiler is much more restrictive, exhaustive and pedantic than others like C's and C++'s.
In those languages, correct logic and getting the program to compile doesn't guarantee you are free from data races or segmentation faults.
Also, Rust's type system being so strong, it allows you to encode so many invariants that it makes implementing the correct logic easier (although not simpler).
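One small, hypothetical example of encoding an invariant in the type system: a `UserId` newtype validated once at the boundary, so the rest of the code never has to re-check it.

    use std::num::NonZeroU32;

    // Hypothetical newtype: a user id that is guaranteed non-zero,
    // validated once at the boundary.
    #[derive(Debug, Clone, Copy)]
    struct UserId(NonZeroU32);

    impl UserId {
        fn new(raw: u32) -> Option<UserId> {
            NonZeroU32::new(raw).map(UserId)
        }
    }

    fn lookup(id: UserId) {
        // Every caller had to go through `UserId::new`, so the non-zero
        // invariant holds here without any re-checking.
        println!("looking up user {}", id.0);
    }

    fn main() {
        match UserId::new(42) {
            Some(id) => lookup(id),
            None => eprintln!("invalid id"),
        }
    }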
>In those languages, correct logic and getting the program to compile doesn't guarantee you are free from data races or segmentation faults.
I don't believe that it's guaranteed in Rust either, despite much marketing to the contrary. It just doesn't sound appealing to say "somewhat reduces many common problems" lol
>Also, Rust's type system being so strong, it allows you to encode so many invariants that it makes implementing the correct logic easier (although not simpler).
C++ has a strong type system too, probably fancier than Rust's or at least similar. Most people do not want to write complex type system constraints. I'm guessing that at most 25% of C++ codebases use complex templates with recursive templates, traits, concepts, `requires`, etc.
Comparing type systems is difficult, but the general experience is that it is significantly easier to encode logic invariants in Rust than in C++.
Some of those things you can do in C++, often with a wild amount of boilerplate (tagged unions, niches, etc.), and some are fundamentally impossible (movable non-null owning references).
C++ templates are more powerful than Rust generics, but the available tools in Rust are more sophisticated.
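As a quick sketch of the "niches" point (a standard-library property, not something from the thread): wrapping a non-nullable pointer type in `Option` costs no extra space, because the null value is reused as the `None` representation.

    use std::mem::size_of;

    fn main() {
        // The null pointer value serves as the None niche, so no tag is added.
        assert_eq!(size_of::<Box<u64>>(), size_of::<Option<Box<u64>>>());
        assert_eq!(size_of::<&u64>(), size_of::<Option<&u64>>());
        println!("Option<Box<u64>> is {} bytes", size_of::<Option<Box<u64>>>());
    }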
Note that while C++ templates are more powerful than Rust generics at being able to express different patterns of code, Rust generics are better at producing useful error messages. To me, personally, good error messages are the most fundamental part of a compiler frontend.
True but you lose out on much of the functionality of templates, right? Also you only get errors when instantiating concretely, rather than getting errors within the template definition.
> but you lose out on much of the functionality of templates, right?
I don't think so? From my understanding what you can do with concepts isn't much different from what you can do with SFINAE. It (primarily?) just allows for friendlier diagnostics further up in the call chain.
You're right but concepts do more than SFINAE, and with much less code. Concept matching is also interesting. There is a notion of the most specific concept that matches a given instantiation. The most specific concept wins, of course.
No, concepts interoperate with templates. I guess if you consider duck typing to be a feature, then using concepts can put constraints on that, but that is literally the purpose of them and nobody makes you use them.
If you aren't instantiating a template, then it isn't used, so who cares if it has theoretical errors to be figured out later? This behavior is in fact used to decide between alternative template specializations for the same template. Concepts do it better in some ways.
> If you aren't instantiating a template, then it isn't used, so who cares if it has theoretical errors to be figured out later?
Just because you aren't instantiating a template a particular way doesn't necessarily mean no one is instantiating a template a particular way.
A big concern here would be accidentally depending on something that isn't declared in the concept, which can result in a downstream consumer who otherwise satisfies the concept being unable to use the template. You also don't get nicer error messages in these cases since as far as concepts are concerned nothing is wrong.
It's a tradeoff, as usual. You get more flexibility but get fewer guarantees in return.
Of course what you are describing is possible, but those scenarios seem contrived to me. If you have reasonable designs I think they are unlikely to come up.
>Just because you aren't instantiating a template a particular way doesn't necessarily mean no one is instantiating a template a particular way.
What I meant is, if the thing is not instantiated then it is not used. Whoever does come up with a unique instantiation could find new bugs, but I don't see a way to avoid that. Likewise someone could just superficially meet the concept requirements to make it compile, and not actually implement the things they ought to. But that's not a problem with the language.
> Of course what you are describing is possible, but those scenarios seem contrived to me. If you have reasonable designs I think they are unlikely to come up.
I suppose it depends on how much faith you place in the foresight of whoever is writing the template as well as their vigilance :P
As a fun (?) bit of trivia that is only tangentially related: one benefit of definition-site checking is that it can allow templates to be separately compiled. IIRC Swift takes advantage of this (polymorphic generics by default with optional monomorphization) and the Rust devs are also looking into it (albeit the other way around).
> Whoever does come up with a unique instantiation could find new bugs, but I don't see a way to avoid that.
I believe you can't avoid it in C++ without pretty significant backwards compatibility questions/issues. It's part of the reason that feature was dropped from the original concepts design.
> Likewise someone could just superficially meet the concept requirements to make it compile, and not actually implement the things they ought to.
Not always, I think? For example, if you accidentally assume the presence of a copy constructor/assignment operator and someone else later tries to use your template with a non-copyable type it may not be realistic for the user to change their type to make it work with your template.
>I suppose it depends on how much faith you place in the foresight of whoever is writing the template as well as their vigilance :P
The actual effects depend on a lot of things. I'm just saying, it seems contrived to me, and the most likely outcome of this type of broken template is failed compilation.
>As a fun (?) bit of trivia that is only tangentially related: one benefit of definition-site checking is that it can allow templates to be separately compiled.
This is incompatible with how C++ templates work. There are methods to separately compile much of a template. If concepts could be made into concrete classes and used without direct inheritance, it might work. But this would require runtime concepts checking I think. I've never tried to dynamic_cast to a concepts type, but that would essentially be required to do it well. In practice, you can still do this without concepts by making mixins and concrete classes. It kinda sucks to have to use more inheritance sometimes, but I think one can easily design a program to avoid these problems.
>I believe you can't avoid it in C++ without pretty significant backwards compatibility questions/issues. It's part of the reason that feature was dropped from the original concepts design.
This sounds wrong to me. Template parameters plus template code actually turns into real code. Until you actually pass in some concrete parameters to instantiate, you can't test anything. That's what I mean by saying it's "unavoidable". No language I can dream of that has generics could do any different.
>Not always, I think? For example, if you accidentally assume the presence of a copy constructor/assignment operator and someone else later tries to use your template with a non-copyable type it may not be realistic for the user to change their type to make it work with your template.
I wasn't prescribing a fix. I was describing a new type of error that can't be detected automatically (and which it would not be reasonable for a language to try to detect). If the template requires `foo()` and you just create an empty function that does not satisfy the semantic intent of the thing, you will make something compile but may not actually make it work.
I don't agree that Rust tools are more sophisticated and they definitely are not more abundant. You just have a language that is more anal up front. C++ has many different compilers, analyzers, debuggers, linting tools, leak detectors, profilers, etc. It turns out that 40 years of use leads to significant development that is hard to rebuild from scratch.
I seem to have struck a nerve with my post, which got 4 downvotes so far. Just for saying Rust is not actually better than C++ in this one regard lol.
> However, if your program compiles _and_ the logic is correct, there's a high likelihood that your program won't crash (provided you handle errors and the like: untrusted data coming from outside, allocations that can fail, etc.).
That is one hell of a copium disclaimer. "If you hold it right..."

Cloudflare used a tool, broke parts of the internet.

Citation needed.
Rust certainly doesn't make it impossible to write bad code. What it does do is nudge you towards writing good code to an appreciable degree, which is laudable compared to the state of the industry at large.
I feel like you're attacking a strawman here. Of course you can write unreliable software in Rust. I'm not aware of anyone who says you can't. The point is not that it's a magic talisman that makes your software good, the point is that it helps you to make your software good in ways other languages (in particular C/C++ which are the primary point of comparison for Rust) do not. That's all.
> The point is not that it's a magic talisman that makes your software good, the point is that it helps you to make your software good in ways other languages (in particular C/C++ which are the primary point of comparison for Rust) do not.
> Some people absolutely refuse to believe it though.
Who says this? I've never seen someone argue it makes it impossible to write incorrect code. If that were the case then there would be no reason for it to have an integrated unit testing system. That would be an absurd statement to make; even if you can encode the entire program spec into the type system, there's always the possibility that the description of a solution is not aligned with the problem being solved.

Rust programs can't know what you want to do, period.

https://qouteall.fun/qouteall-blog/2025/How%20to%20Avoid%20F...

Soundness does not cover semantic correctness. Maybe you want to wipe $HOME.
>They certainly know that you dont want to process garbage data from freed memory.
It depends on what you mean by "freed". Can one write a custom allocator in Rust? How does one handle reading from special addresses that represent hardware? In both of these scenarios, one might read from or write to memory that is not obviously allocated.
Both of those things can be done in Rust, but not in safe Rust, you have to use unsafe APIs that don't check lifetimes at compile time. Safe Rust assumes a clear distinction between memory allocations that are still live and those that have been deallocated, and that you never want to access the latter, which of course is true for most applications.
You can indeed write custom allocators, and you can read to or write from special addresses. The former will usually, and the latter will always, require some use of `unsafe` in order to declare to the compiler: "I have verified that the rules of ownership and borrowing are respected in this block of code".
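A minimal sketch of the hardware case, with a made-up register address: the volatile read has to go through `unsafe`, because the compiler cannot check anything about that memory.

    // The register address below is made up purely for illustration; on real
    // hardware it would come from the device's documentation.
    const STATUS_REG: usize = 0x4000_0000;

    fn read_status() -> u32 {
        // SAFETY: only sound on a target where this address really is a
        // readable register; calling it in a normal process would fault.
        unsafe { std::ptr::read_volatile(STATUS_REG as *const u32) }
    }

    fn main() {
        // We only name the function here instead of calling it, since the
        // address is fictional.
        let _ = read_status as fn() -> u32;
        println!("sketch only; not executed");
    }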
I don't think anyone believes the “if it compiles, it works” phrase literally.

It's just that once it compiles, Rust code will work more often than code in most languages, but that doesn't mean Rust code will automatically be bug-free, and I don't think anyone believes that.

Could've been clearer above.
Yeah, even the official Rust book points this out and, if my memory serves me right (no pun intended), also gives an example in the form of creating a memory leak (not to be confused with memory unsafety).
As "unsafe". An example would be how AMD GPUs some time ago didn't free a program's last rendered buffers, and you could see the literal last frame in its entirety. Fun stuff.
That is not a memory leak though! That's using/exposing an uninitialized buffer, which can happen even if you allocate and free your allocations correctly. Actually leaking the buffer would prevent the memory region from being allocated to another application, and so would in fact prevent that bug from happening.
This is also something that Rust does protect against in safe code, by requiring initialization of all memory before use, or using MaybeUninit for buffers that aren't, where reading the buffer or asserting that it has been initialized is an unsafe operation.
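A small sketch of that `MaybeUninit` point: the buffer starts out uninitialized, safe code cannot read it, and asserting that it has been initialized is the unsafe step.

    use std::mem::MaybeUninit;

    fn main() {
        // The buffer starts out uninitialized; safe code cannot read it yet.
        let mut slot: MaybeUninit<[u8; 16]> = MaybeUninit::uninit();

        // Writing through the safe API initializes it.
        slot.write([0u8; 16]);

        // SAFETY: the write above initialized every byte.
        let buf = unsafe { slot.assume_init() };
        println!("first byte: {}", buf[0]);
    }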
It's a security hole. Rust doesn't prevent you from writing unsafe code that reads it. The bug wasn't that it could be read by a well-conforming language, it was that it was handed off uninitialized to user space at all.
There are definitely people in the ecosystem who peddle sentiments like "Woah, Rust helps so much that I can basically think 'if this compiles, everything will work', and most of the times it does!", and that's the confusing part for many people. Examples found in 30 seconds of searching:

- https://bsky.app/profile/codewright.bsky.social/post/3m4m5mv...

- https://bsky.app/profile/naps62.bsky.social/post/3lpopqwznfs...

If Rust code compiles, it probably has a lower defect rate than corresponding code written by the same team in another language, all else being equal.
I read the comments you linked and don't really think they literally believe Rust is magic. I dunno though I guess I could imagine a vibe coder tacitly believing that. Not saying you're wrong. I just think most people say that tongue in cheek. This saying has been around forever in the Haskell community for decades. Feels like a long running joke at this point
When I’ve said that, I’ve meant that almost the only remaining bugs were bad logic on my part. It’s free from the usual dumb mistakes I would make in other languages.
I don't know the authors of those posts, so I don't want to put words in their mouths, but neither seems to be delusional about the "if it compiles, it works" phrase. The first one qualifies it with "most of the time", and the second one explicitly mentions using type state as a tool to aid correctness...
But I don't doubt there are people who take that phrase too literally, though.
Both examples you linked are people talking casually about the nature of Rust, rather than about the specific rule. That goes very much with your parent commenter's assertion that nobody takes it literally. The first example even starts with 'Most of the time' (this is true, though not guaranteed. I will explain further down). Human languages are imperfect and exaggerations and superlatives are common in casual communication.
But I have not seen any resource or anyone making technical points ever assert that the Rust compiler can verify program logic. That doesn't even make sense - the compiler isn't an AI that knows your intentions. Everybody is always clear that it only verifies memory safety.
Now regarding the 'most of the time' part. The part below is based purely on my experience and your mileage may vary. It's certainly possible to compile Rust programs with logical/semantic errors. I have made plenty. But the nature of C/C++ or similar manually memory-managed languages is such that you can make memory safety bugs quite easily and miss them entirely. They also stay hidden longer.
And while logical errors are also possible, most people write and test code in chunks of sizes small enough where they feel confident enough to understand and analyze it entirely within their mind. Thus they tend to get caught and eliminated earlier than the memory safety bugs.
Now since Rust handles the memory safety bugs for you and you're reasonably good at dealing with logical bugs, the final integrated code tends to be bug-free, surprisingly more often than in other languages - but not every time.
There is another effect that makes Rust programs relatively more bug-free. This time, it's about the design of the code. Regular safe Rust, without any runtime features (like Rc, Arc, RefCell, Mutex, etc.) is extremely restrictive in what designs it accepts. It accepts data structures that have a clear tree hierarchy, and thus a single-owner pattern. But once you get into stuff like cyclic references, mutual references, self references, etc., Rust will simply reject your code even if it can be proven to be correct at compile time. You have three options in that case: use runtime safety checks (Rc, RefCell, Mutex, etc.; this is slightly slower), OR use an unsafe block and verify it manually, OR use a library that does the previous one for you.
Most of the code we write can be expressed in the restricted form that safe Rust allows without runtime checks. So whenever I face such issues, my immediate effort is to refactor the code in such a way. I reach for the other three methods only if this is not possible - and that's actually rare. The big advantage of this method is that such designs are relatively free of the vast number of logical bugs you can make with a non-tree/cyclic ownership hierarchy. (Runtime checks convert memory safety bugs into logical bugs. If you make a mistake there, the program will panic at runtime.) Therefore, the refactored design ends up very elegant and bug-free much more often than in other languages.
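A minimal sketch of the first option (runtime checks), assuming nothing beyond the standard library: the aliasing rule is still enforced, just at run time, and violating it panics instead of failing to compile.

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        // Shared ownership plus interior mutability: the aliasing rules are
        // still enforced, but at run time instead of compile time.
        let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
        let alias = Rc::clone(&shared);

        shared.borrow_mut().push(4);      // fine: the borrow ends immediately
        println!("{:?}", alias.borrow()); // fine: no mutable borrow is active

        let held = shared.borrow();
        // alias.borrow_mut().push(5);    // would panic: already borrowed
        drop(held);
    }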
> "Woah, Rust helps so much that I can basically think 'if this compiles, everything will work', and most of the times it does!"
I think this is a fairly bad example to pick, because the fact that the person says “I can basically think” and “most of the time it does” (emphasis mine) shows that they don't actually believe it will make bug-free programs.
They are just saying that “most of the time” the compiler is very very helpful (I agree with them on that).
Needs a (2020) in the title. I don't think anything major is outdated, but in particular in section 10, one of the desired syntaxes is now supported as an unstable feature but there wasn't any mention of that:
    #![feature(closure_lifetime_binder)]

    fn main() {
        let identity = for<'a> |x: &'a i32| -> &'a i32 { x };
    }
I don't see how it's a misconception to say that a 'static lifetime lives for the life of the program. The author says "it can live arbitrarily long", which by definition must include... the life of the program. Where exactly is the error then?
Because something with 'static lifetime does not in fact live for the entire program.
It just means that it could live until the end of the program, and that case should be considered when dealing with it; there's no guarantee that it will drop earlier. But it may also drop at any time: as long as there are no remaining references to it, it does not need to stay in memory forever.
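A small sketch of that distinction: an owned `String` satisfies the `'static` bound that `thread::spawn` requires, yet it is created at run time and dropped well before the program ends.

    use std::thread;

    fn main() {
        // `String` satisfies a `'static` bound because it owns its data, yet
        // it is created at run time and dropped long before the program ends.
        let owned = String::from("allocated at runtime");

        // thread::spawn requires its closure (and captures) to be 'static.
        let handle = thread::spawn(move || {
            println!("{}", owned);
        }); // `owned` is dropped when the closure finishes, not at program exit

        handle.join().unwrap();
    }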
It's a subtle distinction and it is easy to misinterpret. For instance, Tokio tasks are 'static and it felt wrong initially because I thought it would never drop them and leak memory. But it just means that it doesn't know when they will be dropped and cannot make any promises about it, that's all.

The contagious borrow issue is a common problem for beginners.
> Well yes, but a type with a 'static lifetime is different from a type bounded by a 'static lifetime. The latter can be dynamically allocated at run-time, can be safely and freely mutated, can be dropped, and can live for arbitrary durations.
A 'static lifetime does not live for the rest of the program. It rather is guaranteed to live for as long as anyone is able to observe it. Data allocated in an Rc for example, lives as long as there are references to it. The ref count will keep it alive, but it will in fact still be deallocated once all references are gone (and it cannot be observed anymore).

"life of the program" might imply it needs to begin life at program start. But it can be allocated at runtime, like an example in the list shows. So it's rather "lives until the end of the program", but it doesn't need to start life at the start of the program.
> Others think someone from the Rust (programming language, not video game) development community was responsible due to how critical René has been of that project, but those claims are entirely unsubstantiated.
What is this culture war you're fighting?