Some genius decided that, to make time input convenient, YAML would parse HH:MM:SS as SS + 60×MM + 60×60×HH. So you could enter 1:23:45 and it would give you the correct number of seconds in 1 hour, 23 minutes, and 45 seconds.
They neglected to put a maximum on the number of such sexagesimal places, so if you put, say, six numbers separated by colons like this, it would be parsed as a very large integer.
Imagine my surprise when, while working at a networking company, we had some devices which failed to configure their MAC addresses in YAML! After this YAML config file had been working for literal years! (I believe this was via netplan? It's been like a decade, I don't remember.)
Turns out, if an unquoted MAC address had even a single non-decimal hex digit, it would do what we expected (parse as a string). This is not only by FAR the more common case, but also we had an A in our vendor prefix, so we never ran into this "feature" during initial development.
Then one day we ran out of MAC addresses and got a new vendor prefix. This time it didn't have any letters in it. Hilarity ensued.
(This behavior has thankfully been removed in more recent YAML standards.)
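For concreteness, here's a minimal sketch of both behaviours using PyYAML, which still follows YAML 1.1; the keys and MAC values are made up. Note that not every all-decimal MAC triggers the integer parse (each colon group after the first has to look like a base-60 digit, i.e. 00-59), but plenty do:

    import yaml  # PyYAML resolves scalars with YAML 1.1 rules

    # 1 hour, 23 minutes, 45 seconds -> 5025 seconds
    print(yaml.safe_load("duration: 1:23:45"))
    # expected: {'duration': 5025}

    # An all-decimal MAC that fits the base-60 pattern becomes a huge integer...
    print(yaml.safe_load("mac: 52:54:00:12:34:56"))
    # expected: {'mac': 41135085296}

    # ...while a single hex letter is enough to keep it a string.
    print(yaml.safe_load("mac: 52:54:00:ab:cd:ef"))
    # expected: {'mac': '52:54:00:ab:cd:ef'}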
Perl has a Poland Problem. The customary file extension for Perl files is *.pl. This worked well until Apache introduced content negotiation and the convention to add a language code as file extension. It had index.html.en, index.html.de, for example.
index.html.pl is where the problem started and the reason why the officially recommended file extension for Perl files used to be (still is?) *.plx.
I don't have the Camel book at hand, but Randal Schwartz's Learning Perl 5th edition says:
"Perl doesn't require any special kind of filename or extension, and it's better not to use an extension at all. But some systems may require an extension like plx (meaning PerL eXecutable); see your system's release notes for more information."
It should have been an Apache problem, yes. Not only did it turn out that at least the language-negotiation part of content negotiation wasn't the best idea, but the way Apache handled it was problematic apart from the .pl problem. In the end the Perl community took the issue upon themselves, so historically I'd say it was a Perl problem (of choice).
"The limits of my keyboard mean the limits of my programming language."
If only they had had ⊥ and ⊤ somewhere on their keys to work with Booleans directly while designing the languages. In another branch of history, perchance.[1]
Programming with string templates, in a highly complex and footgun-rich markup language, is one of the things I find most offputting about the DevOps ecosystem.
This is why I generally use Terraform for Kubernetes. It's not perfect, but it's miles better than the various different YAML-templating solutions (Kustomize, Helm) popular in the Kubernetes ecosystem.
Several years ago when I was writing a deployment system for a cloud distributed database, I tried to automate everything with Ansible playbooks and the Ansible "API" (LOL). I pretty quickly gave up on implementing anything but the most trivial logic in templated YAML and switched to Python (wrapping maximally-dumb Ansible playbooks) for everything nontrivial.
Always quote all YAML strings. If your YAML file contains something that isn't a simple value (number, boolean), such as a date, time, IP address, MAC address, country code, phone number, server name, configuration name, etc., then leaving it unquoted is asking for trouble. Just DON'T DO THAT. It's pretty simple.
"Yeah but it's so convenient"
"Yeah but the benefit of yaml is that you don't need quotes everywhere so that it's more human readable"
IMO the proposed solution of StrictYAML + schema is the right one here and what we use extensively for human readable configs. StrictYAML (linked to in the post) is essentially a string-type-only restriction of YAML, so you impose your type coercion on the parsed data structure.
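Roughly what that looks like, if I remember strictyaml's API correctly (the field names here are made up): everything parses as a string unless the schema says otherwise.

    from strictyaml import load, Map, Str, Bool, Int

    doc = "country: no\nlogging: no\nport: 8080\n"

    # No schema: every scalar stays a string, nothing is coerced behind your back.
    print(load(doc).data)
    # expected: {'country': 'no', 'logging': 'no', 'port': '8080'}

    # With a schema you opt in to coercion, field by field.
    schema = Map({"country": Str(), "logging": Bool(), "port": Int()})
    print(load(doc, schema).data)
    # expected: {'country': 'no', 'logging': False, 'port': 8080}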
This has been fixed since 2009 with YAML 1.2. The problem is that everyone uses libyaml (_e.g._ PyYAML _etc._) which is stuck on 1.1 for reasons.
The 1.2 spec just treats all scalar types as opaque strings, along with a configurable mechanism[0] for auto-converting non-quoted scalars if you so please.
As such, I really don't quite grok why upstream libraries haven't moved to YAML 1.2. Would love to hear details from anyone with more info.
It’s silly to have so many keyword synonyms as specified in that earlier regex. I’m also glad we can’t specify numeric literals as roman numerals. KISS
Honestly I’d prefer if “yes” and “no” were the only ways to spell the boolean values. They make sense in pretty much all contexts where booleans are used, whereas “true” and “false” rarely make sense.
In boolean logic, true/false is ubiquitous and well known.
As you can see, if one tries to be cute with it, one will get all sorts of issues.
So at this point it doesn't make sense to use anything else.
The true/false terminology makes sense in boolean logic because you’re dealing with the truth of propositions. However, it does not make sense in the context of a configuration language, where there are no propositions that could be true or false.
It makes sense in the context of a configuration language because virtually 100% of programmers and other technical computer users understand “true” and “false” as the canonical Boolean values, and as far as I know that has always been the case. It never would have made sense to invent different unfamiliar terms like “yes” and “no” because of some niche philosophical distinction between “Boolean logic” and “configuration” that almost nobody in the real world cares about.
The fix is to make conversion user-controllable. If you want to disallow bare scalars except for booleans and numbers or whatever, it's just a little bit of configuration away.
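As a sketch of what "user-controllable" can mean with PyYAML: override the 1.1 bool constructor so only true/false convert and the yes/no/on/off family falls through as strings. The loader name and the exact policy below are my own choices, not anything standard:

    import yaml

    class PickyLoader(yaml.SafeLoader):
        pass

    def strict_bool(loader, node):
        value = loader.construct_scalar(node)
        if value.lower() in ("true", "false"):
            return value.lower() == "true"
        return value  # yes/no/on/off/y/n stay plain strings

    # The YAML 1.1 resolver still tags these scalars as bool;
    # we just decide what to construct from them.
    PickyLoader.add_constructor("tag:yaml.org,2002:bool", strict_bool)

    print(yaml.load("country: no\nlogging: true\n", Loader=PickyLoader))
    # expected: {'country': 'no', 'logging': True}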
"logging: no" could also mean "log in Norwegian", or "log only for the Norwegian region". That's the thing with too many keywords and optional quoting: you can't know.
And for this reason, "logging: false" would be clearer than "logging: no" to represent "I do not want logging".
`false` could be a code for something else just as well as `no`. For example, it could mean that I only want to see logs of false information appearing in the system. The only proper solution is to require quotes around strings.
While JSON is annoying because it lacks some pretty basic features (comments, trailing comma), at least its spec is short. YAML is huuuge - there are way too many ways to do the same thing.
How often do people even encounter this issue?
I have been using YAML for 5+ years and have never had it before.
Further, I use `yamllint` which points this out as a lint issue "truthy value should be one of [false, true]".
I don't recall encountering the norway problem in the wild.
Ansible has a pretty common issue with file permissions, because pretty much every numeric representation of a file mode is a valid number in YAML - and most of them are not what you want.
Sure, we can open up a whole 'nother can of worms about whether we should be programming infrastructure provisioning in YAML, but it's what we have. Chef with Ruby had much more severe issues once people started to abuse it.
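For reference, here is what a YAML 1.1 parser such as PyYAML makes of the common spellings of a file mode; quoting is the only unambiguous option:

    import yaml

    print(yaml.safe_load("mode: 0644"))    # expected: {'mode': 420}    - leading zero means octal
    print(yaml.safe_load("mode: 644"))     # expected: {'mode': 644}    - decimal 644 == 0o1204, not what you meant
    print(yaml.safe_load("mode: '0644'"))  # expected: {'mode': '0644'} - a string, no guessing involved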
Because that’s annoying. YAML is often written and read by humans. If you want a verbose and more regular way to do it, there is always JSON. But JSON is really annoying to deal with for humans, although it is much better than YAML for several applications.
I can't tell if it's irony or not, given the sentiment in this thread, but that is not a declaration of a multiline Description field; that's a field of User named "Description:>-" that happens to be missing its trailing ":".
Seeing that used systematically, rather than just for "risky" fields, makes me want to draw attention to the fantastic remarshal tool[1], which offers a "--yaml-style >" option (and "|" and the rest) that will render YAML fields quoted as one wishes.
I guess sometimes it is out of your control. I work on a workflow manager where users specify their workflows in YAML. So there's little we can do to prevent them from writing things like no, n, or t in a place where it could cause issues like the ones in the article.
I like that in concept, but 1) literally no one does that (prime example: the Kubernetes docs) and 2) it looks much messier with quotes when you know they are unnecessary in 95% of cases.
Being liberal in what you accept, also known as the “robustness principle”, doesn’t mean being ambiguous or surprising about how you accept it. If anything, robustness requires a great deal more precision and clarity (at least with your own reasoning, then with how you communicate what to expect from it).
Postel's Law does not deserve to be nicknamed the Robustness Principle.
Robustness has a meaning and it refers to handling bad inputs gracefully. An example of a lack of robustness is allowing a malicious actor to execute arbitrary code by supplying a datum larger than some buffer limit.
Trying to make sense of invalid inputs and do something with them isn't robustness. It's just example of making an extension to a spec. The extension could be robust or not.
Postel's Law amounts to "have extensions and hacks to handle incorrectly formatted data, rather than rejecting it." So, OK, yes, that entails being robust to certain bad inputs that are outside of the spec but which land onto one of the extensions. It doesn't entail being robust to inputs that fall outside of the core spec and all hacks/extensions.
Cherry picking certain bad inputs and giving them a meaning isn't, by itself, bona fide robustness; robustness means handling all bad inputs without crashing or allowing security to be compromised.
Postel's law isn't about accepting arbitrary invalid inputs. It's about inputs that are technically invalid but the intent is obvious from looking at it, and handling those according to intent.
In a distributed non-adversarial setting, this is exactly what you want for robustness.
The problem, as we've come to realise in the time since Postel's law was formulated, is that there is no such thing as a distributed non-adversarial setting. So I get what you're saying.
But your definition of robustness is too narrow as well. There's more to robustness than security. When Outlook strips out a certificate from an email for alleged security reasons, then that's not robustness, that's the opposite, brokenness: You had one job, to deliver an attachment from A to B, and you failed.
Robustness and security can be at odds. It's quite OK to say, "on so and so occasion I choose to make the system not robust, because the robust solution would not be sufficiently secure".
The only area in which it is acceptable to reason this way is graphical user interfaces. (And only if you've already provided an API for reliable automation, so that nobody has to automate the application through its GUI.) I say graphical because, no, not in command interfaces.
Even in the area of GUIs, new heuristics about intent cause annoyances to the users. But only annoyances and nothing more.
Like for example when you update your operating system, and now the window manager thinks that whenever you move a window so that its title bar happens to touch the top of the screen, you must be indicating the intent to maximize it.
I suppose the ship has sailed now that people are deploying LLMs in this way and that and those things intuit intent. They are like Postel's Law on amphetamines. There is a big cost to it, like warming the planet, and the systems become fragile for their lack of specification.
> When Outlook strips out a certificate from an email for alleged security reasons
I would say it's being liberal in what it accepts, if it's an alternative to rejecting the e-mail for security reasons.
It has taken a datum with a security problem and "fixed" it, so that it now looks like a datum without that security problem.
(I can't find references online to this exact issue that you're referring to, so I don't have the facts. Are you talking about incoming or outgoing? Is it a situation like an expired or otherwise invalid certificate not being used when sending an outgoing mail? That would be "conservative in what you send/do".)
You are spot on in regards to consideration of adversarial context; however, it is instructive to review the nuanced difference between the RFC761 (1980) statement, viz. "be conservative in what you do, be liberal in what you accept from others" and the substitution of "send" for "do" in RFC1122 (1989). The latter is, with hindsight, an error, since it refocused the attention of some rigid thinkers entirely onto protocol mechanics and away from implementation behaviour, despite the commentary beneath that admonishes such a mindset and concurs wholly with your point.
Or to put it otherwise, Postel was right to begin with, albeit perhaps just a little too cryptic, and has been frequently misquoted and misinterpreted ever since.
Pandering to customers will make you a lot of money today but very narrow margins tomorrow. If you’re in startup mentality your bosses may be 100% fine with that. But you will likely be stuck supporting that crap because you didn’t become wealthy in the IPO/merger.
This has little to do with the robustness principle, however mis-stated. It's just shitty design. But if someone was still hell-bent on invoking it, then if anything, it's a straight-up violation of the adjacent words "be conservative in what you do"¹, and further disregards the commentary in RFC1122²:
... assume that the network is
filled with malevolent entities that will send in packets
designed to have the worst possible effect ...
This problem occurs because pyyaml's load() uses the full YAML 1.1 schema. There is another loader class, BaseLoader, that will interpret everything as a string, which is the workaround the article suggests. Just another way to achieve it.
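For reference, the BaseLoader workaround looks roughly like this (the document contents are made up):

    import yaml

    doc = "country: no\nversion: 3.0\nmode: 0644\n"
    print(yaml.load(doc, Loader=yaml.BaseLoader))
    # expected: {'country': 'no', 'version': '3.0', 'mode': '0644'}
    # every scalar comes back as a string, so you apply your own types afterwards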
It’s a bit of a sore spot in the YAML community as to why PyYAML can’t / won’t support YAML 1.2. It was in maintenance mode for a while. YAML 1.2 also introduced breaking changes.
From a SO comment: "As long as you're okay with the YAML 1.1 standard, PyYAML is still perfectly fine, secure, etc. If you want to support the YAML 1.2 spec (released in 2009), you can use ruamel.yaml, which started out as a fork of PyYAML." – CrazyChucky, Mar 26, 2023
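And roughly what switching to ruamel.yaml's newer API looks like; it targets YAML 1.2 by default, so "no" should come back as a string (a sketch, not a tested guarantee):

    from ruamel.yaml import YAML

    yaml = YAML()  # round-trip loader, YAML 1.2 rules unless you ask for 1.1
    data = yaml.load("country: no\nlogging: true\n")
    print(dict(data))
    # expected: {'country': 'no', 'logging': True} - 'no' stays a string under 1.2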
Yeah, it's a problem. I had to put up a PR on a tool I was using because I ran into the Norway problem in YAML I was getting from another team. I did ask them to add quotes, just in case.
A supplier we contracted with (and gave requirements to) asked me what format we wanted the export/import of the data to be in, and I said JSON. It's simple, easy, and can be converted into anything else very easily.
In Lisp, if you want to read text into symbols (e.g. file of words), you just switch to a dedicated package in which those symbols are interned. Then if NIL happens to come up, it will be a symbol named "NIL" in that package, unrelated to the special object.
I reckon if this is really a big concern for anybody, then they are probably writing way too much YAML to begin with. If you're being caught out by things like this and need to debug it, then it maps very cleanly to types in most high level languages and you can generate your YAML from that instead.
Sadly you usually realize you've been writing too much YAML way past the turning point, and it will be a pain to move even a single file to JSON, for instance, when you have a whole process and system that otherwise ingests YAML, including keeping track of why this specific part is JSON and not YAML.
So people work around the little paper cuts, while still hitting the traps from time to time as they forget them.
> generate YAML
I have a hard time finding a situation where I'd want to do that. Usually YAML is chosen for human readability, but here we're already in a higher-level language first. JSON sounds like a more appropriate target most of the time?
> I have been pressured multiple times by Brian Ingerson (one of the authors of the YAML specification) to remove this paragraph, despite him acknowledging that the actual incompatibilities exist. As I was personally bitten by this "JSON is YAML" lie, I refused and said I will continue to educate people about these issues, so others do not run into the same problem again and again. After this, Brian called me a (quote)complete and worthless idiot(unquote).
> In my opinion, instead of pressuring and insulting people who actually clarify issues with YAML and the wrong statements of some of its proponents, I would kindly suggest reading the JSON spec (which is not that difficult or long) and finally make YAML compatible to it, and educating users about the changes, instead of spreading lies about the real compatibility for many years and trying to silence people who point out that it isn't true.
> Addendum/2009: the YAML 1.2 spec is still incompatible with JSON, even though the incompatibilities have been documented (and are known to Brian) for many years and the spec makes explicit claims that YAML is a superset of JSON. It would be so easy to fix, but apparently, bullying people and corrupting userdata is so much easier.
Are there no cases where well-formed JSON could be subject to the problems covered in the article, when parsed by a compliant YAML parser? I'm asking because I know nothing about YAML and not much more about JSON.
Not that I know of. JSON requires strings to be quoted, which is basically the problem here. Of course it's not a great human-writable configuration format (no comments being a huge problem).
I’m just pointing out that it should be very simple to swap a YAML file for a JSON file in any system that accepts YAML
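A quick illustration of why the mandatory quoting saves JSON here: the same document fed to a YAML 1.1 parser keeps its strings as strings (sketched with PyYAML):

    import yaml

    print(yaml.safe_load('{"country": "no", "logging": false}'))
    # expected: {'country': 'no', 'logging': False}
    # the quoted "no" can't be mistaken for a boolean; the bare false still is one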
Configuration files for programs. These tend to be short.
DSLs which are large manifests for things like cloud infrastructure. These tend to be long, they grow over time.
My pet hypothesis is these DSLs exist mostly for neutrality - the vendor can't assume you have Python or something present. But as a user, you can assume just that and gain a lot by authoring in a proper language and generating YAML.
> Configuration files for programs. These tend to be short.
This is where I use YAML and it shines there. IMO easier to read and write by hand than JSON, and short sweet config files don't have the various problems people run into with YAML. It's great.
I can't run the examples right now, but looking at the last "print(template.to_json())" line, it looks like the main use case is JSON?
On cloud infra, yes, having one or two layers of languages is a natural situation. GCP and AWS both accepting (encouraging?) JSON as a subset of YAML makes it a simpler choice as an auto-generation target.
You mention people wanting to author the generated files; I think in other situations tweaking the auto-generated files will be seen as riskier, with potential overwriting issues, so lower readability will be seen as a positive.
That's the point really, you can generate JSON or YAML and it doesn't really matter. If you want to include 100 similar objects in that output, you can use a for loop. You can't do that in plain JSON/YAML.
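Something like this, as a sketch (the names are made up): the loop lives in Python, and the output can be JSON or YAML depending on what the consumer wants. Note that safe_dump quotes the string "no" on the way out, precisely because a YAML 1.1 reader would otherwise turn it back into a boolean:

    import json
    import yaml

    services = [
        {"name": f"worker-{i}", "replicas": 2, "country": "no"}
        for i in range(100)
    ]

    print(json.dumps({"services": services}, indent=2))
    print(yaml.safe_dump({"services": services}, default_flow_style=False))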
I do a lot of Ansible, which needs to run on multiple versions, and their YAML typing is not consistent: whenever I have a variable in a logic statement, I nearly always need to apply the "| bool" filter.
This is likely hair-splitting, but you are far more likely being bitten by the monster amount of variance in jinja2 versions/behaviors than by anything "yaml-y".
For example, yaml does not care about this whatsoever
    - name: skip on Tuesdays
      when: ansible_date_time.weekday != "Tuesday"
but different ansible versions are pretty yolo about whether one needs to additionally wrap those fields in jinja2 mustaches
and another common bug is when the user tries to pass in a boolean via "-e" because those are coerced into string key-value pairs as in
$ ansible -e not_today=true -m debug -a var=not_today all
localhost | SUCCESS => {
"not_today": "true"
}
but if one uses the jinja/python compatible flavor, it does the thing
$ ansible -e not_today=True -m debug -a var=not_today all
localhost | SUCCESS => {
"not_today": true
}
It may be more work than you care for, since sprinkling rampant |bool likely doesn't actively hurt anything, but the |type_debug filter[1] can help if it's behaving mysteriously
Google App Engine used to do this to environment variables defined in YAML. IIRC it would convert the string "true" to "Yes", which was a fun surprise when deploying Java and NodeJS apps.
I will die on the hill that writing out ansible.builtin. is characters of my life I'll never get back, and I refuse to. If it's built in, why do I have to qualify it?!
Also, watch out: 0x644 != 0644 which is the mode you meant
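Concretely, in Python notation:

    >>> 0x644   # hex
    1604
    >>> 0o644   # octal, the permission bits you actually meant
    420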
It's not a coincidence that YAML is a perfect acronym for "yet another migraine looming".
I mean ok it is technically a coincidence but it definitely feels like the direct result of the "what could possibly go wrong" approach the spec writers apparently took
[1] https://en.wikipedia.org/wiki/APL_(programming_language)#/me...
Boolean and propositional logic are not the same.
It's not that bad, because you can explicitly turn that behavior off, but ask me how I know =(
The YAML document from hell (566 points, 2023, 353 comments) https://news.ycombinator.com/item?id=34351503
That's a Lot of YAML (429 points, 2023, 478 comments) https://news.ycombinator.com/item?id=37687060
No YAML (Same as above) (152 points, 2021, 149 comments) https://news.ycombinator.com/item?id=29019361
"Yeah but it's so convenient"
"Yeah but the benefit of yaml is that you don't need quotes everywhere so that it's more human readable"
00,01,02,03,04,05,06,07,OH SHIT
and, setting that aside, the very next paragraph says that this is a legit representation of -2.0 which means something has gone gravely wrong
[0]:https://yaml.org/spec/1.2.2/#chapter-10-recommended-schemas
Don’t use bool at all.
While it has the YAML-like significant whitespace, it looks nice because it doesn't try to be clever.
That sounds like a breaking change that causes old YAML documents to be parsed differently.
The tag schema used is supposed to be modifiable folks!
And why anyone would still be using 1.1 at this point is just forehead palming foolishness.
https://www.theverge.com/2020/8/6/21355674/human-genes-renam...
I've only seen it used for configuration.
Don't ask me why, though; it might have something to do with how it's written like a Python file. No user would want to write their data in YAML format.
Plus, ansible-lint flags that reliably.
See also p95 but the same couple of users always see the p99 time, due to some bug.
Edit: This Stack Overflow link provides more details: https://stackoverflow.com/questions/3790454/how-do-i-break-a...
1: https://github.com/remarshal-project/remarshal#readme and/or $(brew install remarshal)
If you accept crap, then eventually you will receive only crap.
Ouch, no. Dragons be there. Famous last words.
Surely someone at some point thought it was obvious that “No” should mean “false”, and that’s why we’re now in this mess.
[2] https://datatracker.ietf.org/doc/html/rfc1122#page-12
Trying to find a tag-line for it I like, maybe “markdown for config”?
- https://stackoverflow.com/q/75850232
!!bool
https://dev.to/kalkwst/a-gentle-introduction-to-the-yaml-for...
Well that’s disappointing.
I guess software is human text after all.
See https://github.com/cloudtools/troposphere for a great example for AWS CloudFormation.
https://github.com/dhall-lang/dhall-kubernetes
1: https://docs.ansible.com/ansible/11/collections/ansible/buil...
Anything encased in quotes is a string; anything not quoted is a non-string (bool, int, or float).
TLDR: an unquoted hex hash in YAML is fine until it happens to match \d+E\d+, at which point it gets interpreted as a float in scientific notation.
[0]https://www.brautaset.org/posts/yaml-exponent-problem.html
Sadly many libraries still don't support it.
Escaped JSON probably hits that sweet spot, though, by being a bit uglier than YAML but 100 times simpler than XML.