There's no single best way to store information

(quantamagazine.org)

36 points | by 7777777phil 3 hours ago

5 comments

  • bob1029 12 minutes ago
    The best way to store information depends on how you intend to use (query) it.

    The query itself represents information. If you can anticipate 100% of the ways in which you intend to query the information (no surprises), I'd argue there might be an ideal way to store it.

  • andix 3 minutes ago
    It's always Markdown. Markdown is the best way to store information. ;)
  • __MatrixMan__ 1 hour ago
    There are, however, several objectively bad ways. In "Service Model" (a novel that I recommend) a certain collection of fools decides to sort bits by whether it's a 1 or a 0, ending up with a long list of 0's followed by a long list of 1's.
    • Rygian 1 hour ago
      In a similar vein, someone decided that everyone should have subdirectories under home named "Pictures", "Videos", "Music", "Documents", …
    • dsvf 49 minutes ago
      It _does_ open up amazing opportunities for compression though.
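      To make the joke concrete: a fully sorted bit string is described entirely by two run lengths, so the "compression" is total. A toy sketch in Python (function names are made up for illustration):

```python
# Toy illustration of why the sorted-bits "storage scheme" above is
# perfectly compressible: a sorted bit string is fully described by
# its count of 0s followed by its count of 1s.
def compress_sorted(bits: str) -> tuple[int, int]:
    zeros = bits.count("0")
    return zeros, len(bits) - zeros

def decompress(zeros: int, ones: int) -> str:
    return "0" * zeros + "1" * ones

packed = compress_sorted("0000011")
print(packed)               # (5, 2)
print(decompress(*packed))  # 0000011
```

      Recovering the *original* order of the bits is, of course, left as an exercise.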
    • lo_zamoyski 1 hour ago
      That depends on the aim. The purpose of something determines how fitting the means are.

      Also, let us not confuse "relative" with "not objective". My father is objectively my father, but he is objectively not your father.

  • pbreit 1 hour ago
    Postgres is close.
    • imhoguy 1 hour ago
      I would say SQLite is closer; you find it on every phone, browser, and server. I bet SQLite files will still be readable in 2100. And I love Postgres.
    • mjevans 1 hour ago
      Or (real) SQLite for reasonably scaled work.

      I also like (old) .ini / TOML for small (bootstrap) config files / data exchange blobs a human might touch.
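      The .ini case really is about as human-touchable as storage gets; a minimal sketch using the stdlib configparser (section and key names invented for illustration):

```python
# A small .ini-style bootstrap config a human might edit by hand,
# parsed with the stdlib configparser module.
import configparser

config_text = """
[server]
host = 127.0.0.1
port = 8080
debug = true
"""

cfg = configparser.ConfigParser()
cfg.read_string(config_text)
print(cfg["server"]["port"])              # "8080" (raw values are strings)
print(cfg.getboolean("server", "debug"))  # True
```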

      +

      Re: PostgreSQL 'unfit' conversations.

      I'd like some clearer examples of the desired transactions that don't fit well. After thinking about them in the background a bit, I've started to suspect it might be an algorithmic/approach issue, obscured by storage patterns that happen to be enabled by other platforms that work 'at scale' thanks to hardware (up to a point).

      As an example of a pattern that might not perform well under PostgreSQL, consider lock-heavy multiple updates for flushing a transaction atomically, e.g. bank-transaction-clearance-like tasks. If every single double-entry booking requires its own atomic transaction, that clearly won't scale well in an ACID system. Rather, the smaller grains of sand should be combined into a sandstone block: a window of transactions that are processed together and applied during the same overall update. The most obvious approach would be to switch from a no-intermediate-values 'apply deduction and increment atomically' action to a versioned view of the global data state PLUS a 'pending transactions to apply' log/table (either or both can be sharded). At a given moment the pending transactions can be reconciled; for performance, a cache of 'dirty' accounts can store the non-contested available balance.
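      The pending-log-plus-windowed-apply idea can be sketched in a few lines. This is a minimal illustration using stdlib sqlite3 standing in for PostgreSQL; the table and column names are made up, and a real system would also need the versioned view and dirty-account cache described above:

```python
# Sketch of the batched-ledger idea: instead of one ACID transaction
# per double-entry booking, bookings accumulate in a pending log and a
# whole window is applied in a single transaction.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER NOT NULL);
    CREATE TABLE pending (src TEXT, dst TEXT, amount INTEGER);
""")
db.executemany("INSERT INTO accounts VALUES (?, ?)",
               [("alice", 100), ("bob", 50), ("carol", 0)])

def enqueue(src: str, dst: str, amount: int) -> None:
    # Cheap append; no account rows are touched yet.
    db.execute("INSERT INTO pending VALUES (?, ?, ?)", (src, dst, amount))

def apply_window() -> None:
    # Reconcile the whole window in ONE transaction: compute the net
    # effect per account, then a single update pass, then clear the log.
    with db:
        net: dict[str, int] = {}
        for src, dst, amount in db.execute("SELECT src, dst, amount FROM pending"):
            net[src] = net.get(src, 0) - amount
            net[dst] = net.get(dst, 0) + amount
        db.executemany("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                       [(delta, acct) for acct, delta in net.items()])
        db.execute("DELETE FROM pending")

enqueue("alice", "bob", 30)
enqueue("bob", "carol", 20)
enqueue("alice", "carol", 10)
apply_window()
print(dict(db.execute("SELECT id, balance FROM accounts")))
# {'alice': 60, 'bob': 60, 'carol': 30}
```

      Three bookings cost one transaction instead of three, at the price of the window's worth of latency before balances are visible.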

  • kittikitti 20 minutes ago
    Or it's the opposite, where the slowest possible retrieval time is the intended effect, as is the basis of many cryptographic algorithms.
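    Password-based key derivation is the classic case of slow-on-purpose: each lookup costs many hash iterations, so each brute-force guess does too. A minimal stdlib sketch (the password, salt, and iteration count are illustrative only):

```python
# Deliberate slowness as a feature: PBKDF2 makes deriving a key from a
# password cost many HMAC iterations, so guessing attacks pay the same
# price per candidate password.
import hashlib

def derive_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

key = derive_key("hunter2", b"example-salt")
print(len(key))  # 32 bytes (SHA-256 output size)
```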