A Social Filesystem

(overreacted.io)

229 points | by icy 15 hours ago

29 comments

  • swyx 1 hour ago
    > Apps may come and go, but files stay—at least, as long as our apps think in files.

    yes: https://www.swyx.io/data-outlasts-code-but

    all lasting work is done in files/data (can be parsed permissionlessly, still useful if partially corrupted), but economic incentives keep pushing us to keep things in code (brittle, dies basically when one of maintainer|buildtools|hardware substrate dies).

    when standards emerge (forcing code to accept/emit data) that is worth so much to a civilization. a developer ecosystem tipping the incentive scales such that companies like the Googl/Msft/OpenAI/Anthropics of the world WANT to contribute/participate in data standards rather than keep things proprietary is one of the most powerful levers we as a developer community collectively hold.

    (At the same time we should also watch out for companies extending/embracing/extinguishing standards... although honestly outside of Chrome I struggle to think of a truly successful example)

    • danabramov 1 hour ago
      Nice to see you :) I didn't know the "indirection" law, that's funny.
  • theturtletalks 2 hours ago
    POSSE and AT Protocol can be understood as interoperable marketplaces. Platforms like Reddit and Instagram already function this way: the product is user content, the payment is attention, and the platform’s cut is ads or behavioral data. Dan argues that this structure is not inevitable. If social data is treated as something people own and store themselves, applications stop being the owners of social graphs and become interfaces that read from user-controlled data instead.

    I am working on a similar model for commerce. Sellers deploy their own commerce logic such as orders, carts, and payments as a hosted service they control, and marketplaces integrate directly with seller APIs rather than hosting sellers. This removes platform overhead, lowers fees, and shifts ownership back to the people creating value, turning marketplaces into interoperable discovery layers instead of gatekeepers.

  • skybrian 5 hours ago
    This article goes into a lot of detail, more than is really needed to get the point across. Much of that could have been moved to an appendix? But it's a great metaphor. Someone should write a user-friendly file browser for PDS's so you can see it for yourself.

    I'll add that, like a web server that's just serving up static files, a Bluesky PDS is a public filesystem. Furthermore it's designed to be replicated, like a Git repo. Replicating the data is an inherent part of how Bluesky works. Replication is out of your control. On the bright side, it's an automatic backup.
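
    To make that concrete: pulling an account's entire repo as one content-addressed archive is a single HTTP call. (A minimal sketch, assuming the standard com.atproto.sync.getRepo endpoint; the host and DID below are placeholders, not real values.)

        // Replicating a whole repo, git-clone style. Node 18+ (ESM).
        import { writeFile } from "node:fs/promises";

        const pds = "https://bsky.social";     // example PDS host
        const did = "did:plc:example";         // placeholder: substitute a real account DID
        const res = await fetch(`${pds}/xrpc/com.atproto.sync.getRepo?did=${did}`);
        await writeFile("backup.car", Buffer.from(await res.arrayBuffer()));  // CAR = content-addressed archive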

    So, much like with a public git repo, you should be comfortable with the fact that anything you put there is public and will get indexed. Random people could find it in a search. Inevitably, AI will train on it. I believe you can delete stuff from your own PDS but it's effectively on your permanent record. That's just part of the deal.

    So, try not to put anything there that you'll regret. The best you could do is pick an alias not associated with your real name and try to use good opsec, but that's perilous.

    • danabramov 4 hours ago
      My goal with writing is generally to move things out of my head in the shape that they existed in my head. If it's useful but too long, I trust other people to pick what they find valuable, riff on it, and so on.

      >Someone should write a user-friendly file browser for PDS's so you can see it for yourself.

      You can skip to the end of the article where I do a few demos: https://overreacted.io/a-social-filesystem/#up-in-the-atmosp.... I suggest a file manager there:

      >Open https://pdsls.dev. [...] It’s really like an old school file manager, except for the social stuff.

      And yes, the paradigm is essentially "everyone is a scraper".
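
      If you'd rather poke at it from code than a file-manager UI, listing one "directory" of a public repo is a single unauthenticated call (a sketch, assuming the standard com.atproto.repo.listRecords endpoint; the host and handle are just examples):

          // Listing the "files" in one collection of a public repo. Node 18+ (ESM).
          const pds = "https://bsky.social";           // example PDS host
          const repo = "danabra.mov";                  // handle or DID of the account
          const collection = "app.bsky.feed.post";     // the "folder" to list
          const url = `${pds}/xrpc/com.atproto.repo.listRecords` +
            `?repo=${repo}&collection=${collection}&limit=5`;
          const { records } = await (await fetch(url)).json();
          for (const r of records) {
            console.log(r.uri, "->", r.value.$type);   // each record is just a JSON "file"
          }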

      • skybrian 4 hours ago
        Thanks! I saved a link to pdsls. I think there's room for improvement in making the UI user-friendly; maybe I'll try it someday.
        • danabramov 4 hours ago
          The devs are responsive to feedback if you mention @pdsls.dev on Bluesky! I often point out small issues and they get fixed the next day.
    • seridescent 4 hours ago
      > Someone should write a user-friendly file browser for PDS's so you can see it for yourself.

      https://pdsls.dev/ can serve this purpose IMO :) it's a pretty neat app, open source, and is totally client-side

      edit: whoops, pdsls is already mentioned at the end of the article

    • verdverm 47 minutes ago
      Private data will come to ATProto, it's not a finished protocol
    • DustinBrett 5 hours ago
      I think that is the general style of overreacted.io posts.
  • camgunz 19 minutes ago
    I'm skeptical of these kinds of like, self-describing data models. Like, I generally like at proto--because I like IPFS--but I think the whole "just add a lexicon for your service and bickety bam, clients appear" is a leap too far.

    For example, gaze upon dev.ocbwoy3.crack.defs [0] and dev.ocbwoy3.crack.alterego [1]. If you wanted to construct a UI around these, realistically you're gonna need to know wtf you're building (it's a twitter/bluesky clone); there simply isn't enough information in the lexicons to do a good job. And the argument can't be "hey you published a lexicon and now people can assume your data validates", because validation isn't done on write, it's done on read. So like, there really is no difference between this and like, looking up the docs on the data format and building a client. There are no additional guarantees.

    Maybe there's an argument for moving towards some kind of standardization, but... do we really need that? Like are we plagued by dozens of slightly incompatible scrobbling data models? Even if we are, isn't this the job of like, an NPM library and not a globally replicated database?

    Anyway, I appreciate that, facially, at proto is trying to address lock in. That's not easy, and I like their solution. But I don't think that's anywhere near the biggest problem Twitter had. Just scanning the Bluesky subreddit, there's still problems like too much US politics and too many dick pics. It's good to know that some things just never change I guess.

    [0]: https://lexicon.garden/lexicon/did:plc:s7cesz7cr6ybltaryy4me...

    [1]: https://lexicon.garden/lexicon/did:plc:s7cesz7cr6ybltaryy4me...

  • hollowonepl 1 hour ago
    Interesting concept for all new social platforms that already live in federated, distributed environments that share communication protocols and communication data formats.

    I bet it's more difficult to get existing commercial platforms to consider it at all.

    That would make marketing tools for managing social communications and posting across popular social media much easier. Nevertheless, social marketing tools have already invented a similar analogy of their own, just to keep control over content and feedback across instances and networks.

    We still live in a world where some would say Bluesky, some would say Mastodon is the future… while everybody still has Facebook and Instagram, and youngsters TikTok too. Those are closed platforms where only tools to hack them persist, not standards.

  • christophilus 1 hour ago
    I’ve been reading “The Unix Programming Environment”. It’s made me realize how much can be accomplished with a few basic tools and files (mostly plain text). I want to spend some time thinking of what a modern equivalent would look like. For example, what would Slack look like if it was file (and text) oriented and UNIXy? Well, UNIX had a primitive live chat in the form of live inter-user messaging. I’d love to see a move back to simpler systems that composed well.
  • itmitica 42 minutes ago
    To share is to lose control. Once something is shared, it can't be undone. You can't retract a published novel. You can't retract broadcast music or a show. What makes you think you can do it over the internet?
  • skeledrew 3 hours ago
    I've been thinking of this for some time, conceptually, but perhaps from a more fundamental angle. I think the idea of "files" is pretty dated and can be thrown out. Treat everything as data blobs (inspired by PerKeep[0]) addressed by their hashes, and many of the issues described in the article just aren't even a thing. If it really makes sense, or for compatibility's sake, relevant blobs can be exposed through a filesystem abstraction.

    Also, users don't really want apps. What users want are capabilities. So not Bluesky, or YouTube for example, but the capability to easily share a life update with interested parties, or the capability to access yoga tutorial videos. The primary issue with apps is that they bundle capabilities, but many times particular combinations of capabilities are desired, which would do well to be wired together.

    Something in particular that's been popping up fairly often for me is I'm in a messaging app, and I'd like to look up certain words in some of the messages, then perhaps share something relevant from it. Currently I have to copy those words over to a browser app for that lookup, then copy content and/or URL and return to the messaging app to share. What I'd really love is the capability to do lookups in the same window that I'm chatting with others. Like it'd be awesome if I could embed browser controls alongside the message bubbles with the lookup material, and optionally make some of those controls directly accessible to the other part(y|ies), which may even potentially lead to some kind of adhoc content collaboration as they make their own updates.

    It's time to break down all these barriers that keep us from creating personalized workflows on demand. Both at the intra-device level where apps dominate, and at the inter-device level where API'd services do.

    [0] https://perkeep.org/

    • danabramov 3 hours ago
      I'm using "filesystem" more as a metaphor than literally.

      I picked this metaphor because "apps" are many-to-many to "file formats". I found "file format" to be a very powerful analogy for lexicons so I kind of built everything else in the explanation around that.

      You can read https://atproto.com/specs/repository for more technical details about the repository data structure:

      >The repository data structure is content-addressed (a Merkle-tree), and every mutation of repository contents (eg, addition, removal, and updates to records) results in a new commit data hash value (CID). Commits are cryptographically signed, with rotatable signing keys, which allows recursive validation of content as a whole or in part. Repositories and their contents are canonically stored in binary DAG-CBOR format, as a graph of data objects referencing each other by content hash (CID Links). Large binary blobs are not stored directly in repositories, though they are referenced by hash (CID).
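
      To give a concrete feel for the content addressing described above, here's a sketch of deriving a record's CID, assuming the multiformats and @ipld/dag-cbor npm packages (illustrative rather than normative):

          // Content addressing: same JSON in, same CID out.
          import { CID } from "multiformats/cid";
          import { sha256 } from "multiformats/hashes/sha2";
          import * as dagCbor from "@ipld/dag-cbor";

          const record = {
            $type: "app.bsky.feed.post",
            text: "hello world",
            createdAt: "2026-01-01T00:00:00.000Z",
          };
          const bytes = dagCbor.encode(record);            // canonical DAG-CBOR encoding
          const digest = await sha256.digest(bytes);       // sha-256 multihash
          const cid = CID.create(1, dagCbor.code, digest); // CIDv1, dag-cbor codec
          console.log(cid.toString());                     // stable for identical content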

      Re: apps, I'd say AT is actually post-app to some extent because Lexicons aren't 1:1 to apps. You can share Lexicons between apps and I totally can see a future where the boundaries are blurring and it's something closer to what you're describing.

  • Jonovono 5 hours ago
    I can’t remember how many times I’ve read an article and enjoyed it so much and then looked and saw it was written by Dan ;) always a pleasure !
  • motoxpro 4 hours ago
    I've always thought walled gardens are the effect of consumer preferences, not the cause.

    The effect of the internet (everything open to everyone) was to create smaller pockets around a specific idea or culture. Just like you have group chats with different people, that's what IG and Snap are. Segmentation all the way down.

    I am so happy that my IG posts aren't available on my HN or that my IG posts aren't being easily cross-posted to a service I don't want to use like Truth Social. If you want it to be open, just post it to the web.

    I think I don't really understand the benefit of data portability in this situation. It feels like crypto, when people said they wanted to use their Pokemon in-game item in Counterstrike (or any game): how and why would that even be valuable without the context? Same with a Snap post on HN or an HN post on some yet-to-be-created service.

    • dameis 3 hours ago
      >I am so happy that my IG posts aren't available on my HN or that my IG posts aren't being easily cross-posted to a service I don't want to use like Truth Social.

      ATProto apps don't automatically work like this and don't support all types of "files" by default. The app's creator has to build support for a specific "file type". My app https://anisota.net supports both Bluesky "files" and Leaflet "files", so my users can see Bluesky posts, Leaflet posts, and Anisota posts. But this is because I've designed it that way.

      Anyone can make a frontend that displays the contents of users' PDSs.

      Here's an example...

      Bluesky Post on Bluesky: https://bsky.app/profile/dame.is/post/3m36cqrwfsm24

      Bluesky Post on Anisota: https://anisota.net/profile/dame.is/post/3m36cqrwfsm24

      Leaflet post on Leaflet: https://dame.leaflet.pub/3m36ccn5kis2x

      Leaflet post on Anisota: https://anisota.net/profile/dame.is/document/3m36ccn5kis2x

      I also have a little side project called Aturi that helps provide "universal links" so that you can open ATProto-based content on the client/frontend of your choice: https://aturi.to/anisota.net
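
      In code, "designed it that way" mostly comes down to a renderer keyed on each record's $type. (A sketch; app.bsky.feed.post is the real Bluesky post collection, while the Leaflet NSID below is a stand-in I made up for illustration.)

          // A multi-lexicon client: render the "file types" you support, skip the rest.
          type AnyRecord = { $type: string } & Record<string, unknown>;

          function render(record: AnyRecord): string {
            switch (record.$type) {
              case "app.bsky.feed.post":
                return `Bluesky post: ${String(record.text ?? "")}`;
              case "pub.leaflet.document":                   // stand-in NSID
                return `Leaflet post: ${String(record.title ?? "")}`;
              default:
                return `(record type ${record.$type} isn't supported here)`;
            }
          }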

      • verdverm 36 minutes ago
        Except that a lot of the app builders in ATProto seem to think the protocol was designed to make their lives easier when bootstrapping their network from the Bluesky userbase.

        (Imo, that is a perverse interpretation, it's about user choice, which they are effectively taking away from me by auto importing and writing to my Bsky graph)

        re: the debates on reusing follows from Bluesky in other apps instead of their own

    • jrv 3 hours ago
      > I think I don't really understand the benefit of data portability in the situation.

      Twitter was my home on the web for almost 15 years when it got taken over by a ... - well you know the story. At the time I wished I could have taken my identity, my posts, my likes, and my entire social graph over to a compatible app that was run by decent people. Instead, I had to start completely new. But with ATProto, you can do exactly that - someone else can just fork the entire app, and you can keep your identity, your posts, your likes, your social graph. It all just transfers over, as long as the other app is using the same ATProto lexicon (so it's basically the same kind of app).

      • jrowen 3 hours ago
        But what if your entire social graph didn't choose to transfer over as well? What if they don't want to be on that app? What if someone that was very indecent made a compatible app? Would you want your entire Twitter history represented on there?

        For better or worse, I don't think it makes sense to decentralize social. The network of each platform is inherently imbued with the characteristics and culture of that platform.

        And I feel like Twitter is the anomalous poster child for this entire line of thinking. Pour one out, let it go, move on, but I don't think creating generalized standards for social media data is the answer. I don't want 7 competing Twitter-like clones for different political ideologies that all replicate each others' data with different opt-in/opt-out semantics. That sounds like hell.

        • dameis 3 hours ago
          The framing of "portability" is a bit confusing. Your data is not actually "transferring" anywhere, it's always in your PDS. These other apps and clients are just frontends that are displaying the data that is in your PDS. The data is public and open, though private data is in the works and hopefully will arrive in 2026.
          • jrowen 1 hour ago
            The data is not transferring, but the user is. When I sign up for e.g. Twitter, I don't want to sign up for Mastodon, or Bluesky, or Truth Social, or whatever other platform someone might create later. Thus I would not choose to put my data in a PDS. I feel like that would actually leave me with less ownership and control than I have now.

            My point is that I don't believe the separation of frontend and data is desirable for a social network. I want to know that I am on a specific platform that gives me some degree of control and guarantee (to the extent that I trust that platform) over how my data is represented. I don't really have to worry that it's showing up in any number of other places that I didn't sign up for (technically I do since everything public can be scraped of course, but in practice there are safeguards that go out the window when you explicitly create something like a PDS).

          • harvey9 3 hours ago
            This sounds like I need to host my PDS. Easy for me with no public profile but if I was someone famous wouldn't that mean I needed enterprise class hosting?
            • danabramov 2 hours ago
              You don't need to host your own PDS for any of this to work. It works the same way regardless of who hosts your PDS.

              I think what may be confusing you is that Bluesky (the company) acts in two different roles. There's hosting (PDS) and there's an app (bsky.app). You can think of these conceptually as two different services or companies.

              Yes, when you sign up on Bluesky, you do get "Bluesky hosting" (PDS). But hosting doesn't know anything about apps. It's more like a Git repo under the hood.

              Different apps (Bluesky app is one of them) can then aggregate data from your hosting (wherever it is) and show different projections of it.

              Finally, no, if you're famous, you don't need enterprise hosting. Hosting a PDS can be extremely cheap (like $1/mo, maybe?). A PDS doesn't get traffic spikes on viral content because that's amortized by the app (which serves from its own DB).

    • jrowen 3 hours ago
      I agree. I don't understand the driving force here.

      I have all of the raw image files that I've uploaded to Instagram. I can screenshot or download the versions that I created in their editor. Likewise for any text I've published anywhere. I prefer this arrangement, where I have the raw data in my personal filesystem and I (to an extent) choose which projections of it are published where on the internet. An IG follow or HN upvote has zero value to me outside of that platform. I don't feel like I want this stuff aggregated in weird ways that I don't know about.

      • danabramov 3 hours ago
        For me, part of it is that we have no power collectively against products turning their back on users, because coordinating to "export data all at once and then import it into some specific other place" is near-impossible. So this creates a perverse cycle where once you capture enough of the market, competition has very little chance unless it changes the category entirely.

        What AT enables is forking products with their data and users. So, if some product is going down a bad road, a motivated team can fork it with existing content, and you can just start using the new thing while staying interoperable with the old thing. I think this makes the landscape a lot more competitive. I wrote about this in detail in https://overreacted.io/open-social/#closed-social which is another longread but specifically gets into this problem.

        I hear you re: not wanting "weird aggregation", that may just be a matter of taste. I kind of feel like if I'm posting something on the internet, I might as well have it on the open web, aggregatable by other apps.

    • danabramov 3 hours ago
      >The effect of the internet (everything open to everyone) was to create smaller pockets around a specific idea or culture. Just like you have group chats with different people, thats what IG and Snap are. Segmentation all the way down.

      I actually agree with that. See from the post:

      >For some use cases, like cross-site syndication, a standard-ish jointly governed lexicon makes sense. For other cases, you really want the app to be in charge. It’s actually good that different products can disagree about what a post is! Different products, different vibes. We’d want to support that, not to fight it.

      AT doesn't make posts from one app appear in all apps by default, or anything like that. It just makes it possible for products to interoperate where that makes sense. It is up to whoever's designing the products to decide which data from the network to show. E.g. HN would have no reason to show Instagram posts. However, if I'm making my own aggregator app, I might want to process HN stuff together with Reddit stuff. AT gives me that ability.

      To give you a concrete example where this makes sense: Leaflet (https://leaflet.pub/) is a macroblogging platform, but it ingests Bluesky posts to keep track of quotes from the Leaflets on the network, and displays those quotes in a Leaflet's sidebar. This didn't require Leaflet and Bluesky to collaborate; it's just naturally possible.

      Another reason to support this is that it allows products to be "forked" when someone is motivated enough. Since data is on the open network, nothing is stopping a product fork from being perfectly interoperable with the original network (meaning it both sees "original" data and can contribute to it). So the fork doesn't have to solve the "convince everyone to move" problem; it just needs to be good enough to be worth running and growing organically. This makes the space much more competitive. To give an example, Blacksky is a fork of Bluesky that makes different moderation decisions (https://bsky.app/profile/rude1.blacksky.team/post/3mcozwdhjo...) but remains interoperable with the network.

      • skybrian 3 hours ago
        There's also a risk of adversarial cross-site syndication: your stuff can and probably will show up on websites you don't control.

        That's just how it works and I accept the risk.

        People concerned about that probably shouldn't publish on Bluesky. Private chat makes more sense for a lot of things.

  • clnhlzmn 4 hours ago
    Seems similar to remoteStorage [0]. What happened to that anyway?

    [0]: https://remotestorage.io/

    • danabramov 4 hours ago
      This doesn't look similar to me.

      remoteStorage seems aimed at apps that don't aggregate data across users.

      AT aims to solve aggregation, which is when many users own their own data, but what you want to display is something computed from many of them. Like social media or even HN itself.

    • Vinnl 3 hours ago
      remoteStorage is still occasionally getting updates. https://solidproject.org is a somewhat newer, similar project backed by Tim Berners-Lee. (With its own baggage.)

      I think of those projects as working relatively well for private data, but public data is kinda awkward. ATProto is the other way around: it has a lot of infra to make public data feasible, but private data is still pretty awkward.

      It's a lot more popular though, so maybe has a bigger chance of solving those issues? Alternatively, Bluesky keeps its own extensions for that, and starts walling those bits off more and more as the VCs amp up the pressure. That said, I know very little about Bluesky, so this speculation might all be nonsense.

  • ahussain 3 hours ago
    It seems like the biggest downside of this world is iteration speed.

    If the AT Instagram wants to add a new feature (e.g. posts now support video!), can they easily update their "file format"? How do they update it in a way that is compatible with every other company that depends on the same format, without the underlying record becoming a mess?

    • danabramov 2 hours ago
      That's a great question!

      Adding new features is usually not a problem because you can always add optional fields and extend open unions. So, you just change `media: Link | Picture | unknown` to `media: Link | Picture | Video | unknown`.
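
      In TypeScript terms, that kind of compatible evolution looks roughly like this (a sketch with made-up type names, mirroring the open-union idea rather than actual Lexicon syntax):

          // "Open union" evolution: readers built against the old shape already
          // handle members they don't recognize, so adding Video is purely additive.
          type Link    = { $type: "app.example.link"; url: string };
          type Picture = { $type: "app.example.picture"; alt?: string };
          type Video   = { $type: "app.example.video"; durationMs?: number };
          type Unknown = { $type: string };                  // "anything I don't know yet"

          type MediaV1 = Link | Picture | Unknown;           // before
          type MediaV2 = Link | Picture | Video | Unknown;   // after

          // A reader written against V1:
          function renderV1(media: MediaV1): string {
            switch (media.$type) {
              case "app.example.link":    return "a link";
              case "app.example.picture": return "a picture";
              default:                    return "something newer than this app";
            }
          }

          // Feeding it a V2 value still typechecks and degrades gracefully:
          const v2: MediaV2 = { $type: "app.example.video", durationMs: 12000 };
          console.log(renderV1(v2));  // "something newer than this app"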

      You can't remove things, true, so records do get some deprecated fields.

      Re: updating safely, the rule is that you can't change which records a lexicon would consider valid once it's used in the wild. So you can't change whether some field is optional or required; you can only add new optional fields. The https://github.com/bluesky-social/goat tool has a linting command that instantly checks whether your changes pass the rules. In general it would be nice if lexicon tooling matured a bit, but I think with time it should get really good because there's explicit information the tooling can use.

      If you have to make a breaking change, you can make a new Lexicon. It doesn't have to cause tech debt because you can make all your code deal with a new version, and convert it during ingestion.

      • skybrian 7 minutes ago
        Are these just guidelines or is this enforced in some way? I guess readers could validate and skip anything that doesn't match their schema.
  • noelwelsh 5 hours ago
    This, Local-first Software [1], the Humane Web Manifesto [2], etc. make me optimistic that we're moving away from the era of "you are the product" dystopian enshittification to a more user-centric world. Here's hoping.

    [1]: https://www.inkandswitch.com/essay/local-first/

    [2]: https://humanewebmanifesto.com/

    • pegasus 4 hours ago
      Indeed. And we can get inspired and involved in bringing about that better world.
  • geokon 5 hours ago
    This was a nice intro to AT (though I feel it could have been a bit shorter)

    The whole thing seems a bit overengineered, with poor separation of concerns.

    It feels like it'd be smarter to flatten the design and embed everything in the Records, and then other layers can be built on top of that.

    Make every record include the author's public key (or signature?). For anything you need to point at, you'd either just give its hash, or hash + author public key. This way you completely eliminate this goofy filesystem hierarchy. Everything else is embedded in the Record.

    Lexicons/Collections are just a field in the Record. Reverse lookup of a hash to find what it is would also be a separate problem.

    • evbogue 5 hours ago
      Yes. SSB and ANProto do this. We actually can simply link to a hash of a pubkey+signature which opens to a timestamped hashlink to a record. Everything is a hash lookup this way and thus all nodes can store data.
    • danabramov 4 hours ago
      I'm not sure I understand your proposal. Do you mind taking my example (a Twitter post) and showing how it would be stored in your system?
      • geokon 3 hours ago
        Sure, you'd have something like:

            {:record   {:person-key **public-key** 
                        :type :twitter-post
                        :message "My friend {:person-key **danabramov-public-key**} suggested I make this on this HN post {:link **record-of-hn-post-hash**}. Look at his latest post {:link **danabramov-newtwitter-post-hash** :person-key **danabramov-public-key**} it's very cool!"}
            :hash      **hash-of-the-record**
            :signature **signature-by-author**}
        
        So everything is self-contained. The other features you'd build on top of this basic primitive:

        - Getting the @danabramov username would be done by having some lookup service that does person-key->username. You could have several. Usernames can be changed with the service. But you can have your own map if you want, or infer it from GitHub commits :)) There are some interesting ideas about usernames out there. How this is done isn't specified by the Record

        - Lexicon is also done separately. This is some validation step that's either done by a consumer app/editor of the record or by a server which distributes records (could be based on the :type or something else). Such a server can check if you have fewer than 300 graphemes and reject the record if it fails. How this is done isn't specified by the Record

        - Collection.. This I think is just organizational? How this is done isn't specified by the Record. It's just aggregating all records of the same type from the same author I guess?

        - Hashes... they can point at anything. You can point at a webpage or an image or another record (where you can indicate the author). For dynamic content you'd need to point at a webpage that points at a static URL which has the dynamic content. You'd also need to have a hash->content mapping. How this is done isn't specified by the Record

        This kind of setup makes the Record completely decoupled from the rest of the "stack". It becomes much more of an independent, movable "file" (in the original sense that you have at the top) than the interconnected setup you end up with at the end.

        • danabramov 3 hours ago
          I have a few questions:

          - How do you rotate keys? In AT, the user updates the identity document. That doesn't break their old identity or links.

          - When you have a link, how do you find its content? In AT, the URL has identity, which resolves to hosting, which you can ask for stuff.

          - When aggregating, how do you find all records an application can understand? E.g. how would Bluesky keep track of "Bluesky posts". Does it validate every record just in case? Is there some convention or grouping?

          Btw, you might enjoy https://nostr.com/, it seems closer to what you're describing!

          • geokon 3 hours ago
            1. It's an important problem, but I think this just isn't done at the Record layer. Nor can you? You'd probably want to do that on the person-key->username service (which would have some log-in and a way to tie two keys to one username)

            2. In a sense that's also not something you think about at the Record level either. It'd be at a different layer of the stack. I'll be honest, I haven't wrapped my head entirely around `did:plc`, but I don't see why you couldn't have essentially the same behavior, except instead of having these unique DID IDs, you'd just use public keys here: pub-key -> DID magic stuff... and then the rest you can do the same as AT. Or more simply, the server that finds the hashed content uses attached metadata (like the author) to narrow the search

            Maybe there is a good reason the identity `did:plc` layer needs to be baked into the Record, but I didn't catch it from the post. I'd be curious to hear why you feel it needs to be there?

            3. I'm not 100% sure I understand the challenge here. If you have a soup of records, you can filter your records based on the type. You can validate them as they arrive. You send your records to the Bluesky server and it validates them as they arrive.

            • verdverm 41 minutes ago
              2. The point of the PLC is to avoid tying identity to keys, precisely because if identity were tied to keys, losing your keys would mean losing your identity. In reality, nobody wants that as part of the system

              3. The soup means you need to index everything. There is no Bluesky server to send things to, only your PDS. Your DID is how I know what PDS to talk to to get your records
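
              (For reference, that DID-to-PDS lookup is one request. A sketch assuming a did:plc identity and the public plc.directory resolver; the DID is a placeholder.)

                  // DID -> DID document -> PDS endpoint.
                  const did = "did:plc:example";   // placeholder
                  const doc = await (await fetch(`https://plc.directory/${did}`)).json();
                  const pds = (doc.service ?? []).find(
                    (s: { id: string }) => s.id === "#atproto_pds"
                  )?.serviceEndpoint;
                  console.log("this account's records live at:", pds);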

  • yladiz 3 hours ago
    I know this is somewhat covered in another comment, but the concepts described in the post could have been reduced quite a bit, no offense Dan. While I like the writing generally, I would consider writing and then letting it sit for a few days, rereading, and then cutting the chaff (editing). This feels like a great first draft without feedback, and it could have greatly benefited from an editing process. I don't think the argument that you want to put out something for others to take and refine is really a strong one… a bit more time and refinement could have made a big difference here (and given you have a decently sized audience, that's worth keeping in mind).
    • danabramov 3 hours ago
      From my perspective, there is no chaff. I've already read the entire thing from top to bottom over 20 times (as I usually do with my writing), I've done several full edit passes, and I've removed everything inessential that I could find. The rest is what I wanted to be included in this article.

      I know my style is verbose but I try to include enough details to substantiate the argument at the level that I feel confident it fully stands for itself. If others find something useful in it, I trust that they can riff on those bits or simplify.

    • lanyard-textile 2 hours ago
      There is not much actionable here, as well intentioned as your comment is.

      It's like saying this MR could use some work but not citing a specific example.

  • jadbox 2 hours ago
    How do people view AT Protocol vs Nostr? Why choose one over the other? Which has a better chance at replacing X?
  • nonethewiser 4 hours ago
    But how do you get people to actually want this? This stuff is pretty niche even within tech.
    • danabramov 4 hours ago
      Bluesky is not huge, but 40M users is not nothing either. You don't get people to want this, you just try to build better products. The hope is that this enables us all to build better products by making them more interoperable by default. Whether this pans out remains to be seen.
      • demux 2 hours ago
        I also don't think the average user gets the value of the protocol yet. Most of those users were looking for a new, more politically palatable home with the same features as Twitter. The new generation of apps on the protocol will be vital in showing users what's possible. IMO the two most valuable features at a practical level are:

        - social graph portability, which might look like an onboarding experience that bootstraps your community on that app

        - lexicon cross-compatibility, i.e. your data from app A shows up in a contextually relevant spot in app B, or app B writes records that show up in app A. This is pretty key to get right because it might confuse or anger users if they aren't conditioned to expect it.

        Once the average user groks these features though, I'd be surprised if they voluntarily switch back to the standard corpo apps that eventually exit to some company who tries to monetize the shit out of every feature.
    • heyitsaamir 3 hours ago
      I think most people do want this. They want to own their data. If you ask someone who posts on IG whether they or IG should own those posts, they'll tell you it's them.

      The hard problem IMO is how you incentivize companies to adopt this, since walled gardens help reduce competition.

      • __MatrixMan__ 2 hours ago
        We want more control over data that we've created, and more control over data that's about us. I'm not sure either of these concepts align well with "ownership" though. Property and data are concepts that don't mix.

        Language nitpicking aside... you subvert the walls of their gardens and aggregate the walled-off data without the walls, so users face a choice not between:

        - facebook

        - everything else

        but instead between

        - facebook and everything else

        - just facebook

        But that approach only works if we can solve the "data I created" problems in a way that doesn't also require us to acknowledge Facebook's walls.

  • metabagel 5 hours ago
    How does this relate to the SOLID project?

    https://solidproject.org/

    • danabramov 4 hours ago
      I'd say some of the worldview is shared, but the architecture and ethos are very different. Some major differences:

      - AT tries to solve aggregation of public data first. I.e. it has to be able to express modern social media. Bluesky is a proof that it would work in production. AFAIK, Solid doesn't try to solve aggregation, and is focused on private data first. (AT plans private data support but not now.)

      - AT embraces "apps describe their own formats" (Lexicons). Solid uses RDF, which is a very different model. My impression is RDF may be more powerful but is a lot more abstract. Lexicon is more or less like *.d.ts for JSON.

  • jrm4 4 hours ago
    The more I read and consider Bluesky and this protocol, the more pointless -- and perhaps DANGEROUS -- I find the idea.

    It really feels like no one is addressing the elephant in the room: okay, someone who makes something like this is interested in "decentralized" or otherwise bottom-up-ish levels of control.

    Good goal. But then, when you build something like this, you're actually helping build a perfect decentralized surveillance record.

    This is why I say that most of Mastodon's limitations and bugs in this regard (by leaving everything to the "servers") are actually features. The ability to forget and delete et al. is actually important, and this makes that HARDER.

    I'm just kind of like, JUST DO MASTODON'S MODEL, like email. It's better and the kinks are more well thought about and/or solved.

    • danabramov 4 hours ago
      Author here. I think it's fair to say that the AT protocol's model is "everyone is a scraper", including the first party. Which has both bad and good sides. I share your concern here. For myself, I like the clarity of "treat everything you post as scraped" over the "maybe someone is scraping but maybe not" security-by-obscurity. I also like that there is a way for me to at least guarantee that if I intentionally make something public, it doesn't get captured by the container I posted it into.
    • bee_rider 4 hours ago
      This seems like tensions between normal/practical and “opsec” style privacy thinking… Really, we can never be sure anything that gets posted on the internet won’t be captured by somebody outside our control. So, if we want to be full paranoid, we should act like it will be.

      But practically lots of people have spent a long time posting their opinions carelessly on the internet. Just protected by the fact that nobody really has (or had) space to back up every post or time to look at them too carefully. The former has probably not been the case for a long time (hard drives are cheap), and the latter is possibly not true anymore in the LLM era.

      To some extent maybe we should be acting like everything is being put into a perfect distributed record. Then, the fact that one actually exists should serve as a good reminder of how we ought to think of our communications, right?

      • jrv 4 hours ago
        Exactly. Anything that's ever been public on the internet is never really gone anyways, and it's unsafe to assume so. This is similar to publishing a website or a blog post. Plus, from a practical (non-opsec) point of view, you can delete items (posts, likes, reposts, etc.) on ATProto, and those items will disappear from whatever ATProto app you are using - usually even live. You need to dive into the protocol layer to still see deleted items.
      • jrm4 1 hour ago
        Your last point is one that I used to be very strongly in favor of, and today?

        Nooooooooooo. No. No. No.

        It's not going to happen and we shouldn't even consider it. Seriously. This thing we are doing here, which is "connecting people to each other," those forces for MANY will be far more powerful than "let me stop and think about the fact that this is forever." I just don't think we are wired for it, we're wired for a world in which we can just talk?

        I think it's better to try to engineer some specific counter to "everything is recorded all the time" (or, as in here, not try to usher it into existence even more) than to try to say "welp, everything is recorded all the time, better get used to it."

        • bee_rider 52 minutes ago
          It would be nice to engineer a way around this, but I don’t see it. Fundamentally if we want to be able to talk to random people, we’ll have to expect that some might be capturing communications, right?
    • skybrian 4 hours ago
      It's true that Mastodon is somewhat better if you don't want to be found, though it's hardly a guarantee. From a "seeing like a state" perspective, Bluesky is more "legible" and that has downsides.

      But I think there's room for both models. There are upsides to more legibility too. Sometimes we want to be found. Sometimes we're even engaging in self-promotion.

      Also, I'll point out that Hacker News is also very legible. Everything is immutable after the first hour and you can download it. We just live with it.

    • mozzius 4 hours ago
      This is a line of thinking that just supposes we shouldn’t post things on the internet at all. Which, sure, is probably the right move if you’re that concerned about OPSEC, but just because ActivityPub has a flakier model doesn’t mean it isn’t being watched
    • case0x 4 hours ago
      >helping build a perfect decentralized surveillance record

      a record of what? Posts I wish to share with the public anyway?

      • Spivak 3 hours ago
        It's not about the access, it's about the completeness. Imagine this paradigm takes off (I hope it does!), everyone has their own PDS and finally owns their data. Social apps link into their PDS to publish and share data exactly as they're supposed to.

        Well now someone's PDS is a truly complete record of their social activity, neatly organized for anyone who's interested. It's not a security issue, after all the data was public before too, but the barrier to entry is now zero. It's so low that you can just go to stalker.io, put in their handle, and it will analyze their profile and print out a scarily accurate timeline of their activity and location, leveraging AI's geoguessing skills.

        • __MatrixMan__ 3 hours ago
          If that's your threat model, then I think the way forward is to maintain separate identities. There are trade-offs there also of course: fragment yourself too much and the people who trust you will now only trust a portion of what you have to say... unless you have the time and energy to rebuild that trust multiple times.

          Of course that's the same with the web we have today, the only difference is that you get control over which data goes with which identity rather than having that decision made for you by the platform boundaries.

        • dameis 3 hours ago
          That is how it works, but people shouldn't be posting their location or sensitive information publicly if they don't want it exposed like that. That's basic opsec. Private data is currently being worked on for ATProto and will hopefully arrive in 2026.
          • dzaima 2 hours ago
            > people shouldn't be posting their location or sensitive information publicly if they don't want it exposed like that

            They shouldn't, but they still could: accidentally paste in the wrong browser tab; have been stupid when they were 12 years old; have gotten drunk; or a number of other things.

    • iameli 4 hours ago
      what if I want to publish something publicly on the internet though
      • heliumtera 4 hours ago
        Maybe some could want to publish something publicly but anonymously?
        • dameis 3 hours ago
          You can do that already — just don't post under an account that has your real identity
    • skeledrew 3 hours ago
      When it comes to the internet, tech is law. There is no way to publicly share something and maintain control over it. Even on the Fediverse, if either a client or server wants to ignore part of the protocol or model, it can. Like a system message to delete particular posts for anti-surveillance reasons can simply be ignored by any servers or clients that were designed/modified for surveillance. Ultimately the buck lies with the owner of some given data to not share that data in the first place if there's a chance of misuse.
    • plagiarist 4 hours ago
      Shouldn't the ability to forget and delete content that was ever public on the internet be considered fictional anyway?
  • elbci 13 hours ago
    Agreed! Social-media contributions as files on your system: owned by you, served to the app. Just as the .svg specification allows editing in Inkscape or Illustrator, a post on my computer would be portable to Mastodon or Bluesky or a fully distributed p2p network.
  • LoganDark 1 hour ago
    I did a double take at "DID as identity" because Dissociative Identity Disorder shares the same acronym
  • James_K 3 hours ago
    AT Proto seems very overengineered. We already have websites with RSS feeds, which more or less covers the publishing end in a way far more distributed and reliable than what AT offers. Then all you need is a kind of indexer to provide people with notifications and discovery and you're done. But I suppose you can't sell that to shareholders because real decentralised technology probably isn't going to turn as much of a profit as a Twitter knockoff with a vague decentralised vibe to it that most users don't understand or care about.
    • danabramov 3 hours ago
      Why so much cynicism? The people working there genuinely care about this stuff. Maybe you disagree with technical decisions but why start by projecting your fantasies about their motivations?

      RSS is OK for what it does, but it isn't realtime, isn't signed, and doesn't support arbitrary structured data. Whereas AT is signed, works with any application-defined data structures, and lets you aggregate over millions of users in real time with subsecond end-to-end latency.
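
      To give a feel for the realtime part, here's a sketch of tailing the network, assuming Bluesky's Jetstream service (which re-serves the signed firehose as plain JSON); the hostname and event shape are from memory and may differ:

          import WebSocket from "ws";  // npm i ws (newer Node also has a built-in WebSocket)

          // Every new record written to any PDS on the network shows up here within seconds.
          const url = "wss://jetstream2.us-east.bsky.network/subscribe" +
            "?wantedCollections=app.bsky.feed.post";
          const ws = new WebSocket(url);

          ws.on("message", (data) => {
            const evt = JSON.parse(data.toString());
            if (evt.kind === "commit" && evt.commit?.operation === "create") {
              console.log(evt.did, "->", evt.commit.record?.text);
            }
          });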

  • eduction 4 hours ago
    Unpopular opinion: this should be done with XML, not JSON. XML can have types, be self-describing, and be extended (the X in XML).

    That said it’s a very elegant way to describe AT protocol.

    • danabramov 2 hours ago
      I'd be curious to see what that would look like!
  • sneak 5 hours ago
    Losing private keys is much more common than losing domains.
    • danabramov 4 hours ago
      Yes, which is why by default, key management is done by your hosting. You log into your host with login/password or whatever mechanism your host supports.

      Adding your own emergency rotation key in case your hosting goes rogue is supported, but that's a separate thing and not required for normal usage. I'd like this to be more ergonomic though.

  • EGreg 3 hours ago
    As someone who has explicitly designed social protocols since 2011, and who met Tim Berners-Lee and his team when they were building Solid (before he left MIT and got funded to turn it into the for-profit Inrupt), I can tell you that files are NOT the best approach. (And neither is SPARQL, by the way, Tim :) Solid was publishing ACLs, for example, as web resources. Presumably you'd manage all this with CalDAV-type semantics.

    But one good thing did come out of that effort. Dmitri Zagidulin, the chief architect on the team, worked hard at the W3C to get departments together to create the DID standard (decentralized IDs) which were then used in everything from Sidetree Protocol (thanks Dan Buchner for spearheading that) to Jack Dorsey’s “Web5”.

    Having said all this… what protocol is better for social? Feeds. Who owns the feeds? Well that depends on what politics you want. Think dat / hypercore / holepunch (same thing). SLEEP protocol is used in that ecosystem to sync feeds. Or remember scuttlebutt? Stuff like that.

    Multi-writer feeds were hard to do and were abandoned in hypercore, but you can layer them on top of single-writer. That's where you get into joint ownership and consensus.

    ps: Dan, if you read this, visit my profile and reach out. I would love to have a discussion, either privately or publicly, about these protocols. I am a huge believer in decentralized social networking and build systems that reach millions of community leaders in over 100 countries. Most people don’t know who I am and I’m happy w that. Occasionally I have people on my channel to discuss distributed social networking and its implications. Here are a few:

    Ian Clarke, founder of Freenet, probably the first decentralized (not just federated) social network: https://www.youtube.com/watch?v=JWrRqUkJpMQ

    Noam Chomsky, about Free Speech and Capitalism (met him same day I met TimBL at MIT) https://www.youtube.com/watch?v=gv5mI6ClPGc

    Patri Friedman, grandson of Milton Friedman on freedom of speech and online networks https://www.youtube.com/watch?v=Lgil1M9tAXU

    • danabramov 3 hours ago
      To be clear, I'm using files in a relatively loose sense to focus on the "apps : formats are many-to-many" angle. AT does not literally implement a full filesystem. As the article progresses, I restrict some freedoms in the metaphor (no directories except collections, everything is JSON, etc). If you're interested in the actual low-level repository format, it is described here: https://atproto.com/specs/repository
    • demux 2 hours ago
      FYI, the CTO of Bluesky was an early dev of Secure Scuttlebutt
  • catapart 6 hours ago
    yeah yeah yeah, everyone get on the AT protocol, so that the bluesky org can quickly get all of these filthy users off of their own servers (which costs money) while still maintaining the original, largest, and currently only portal to actually publish the content (which makes money[0]). let them profit from a technical "innovation" that is 6 levels of indirection to mimic activity pub.

    if they were decent people, that would be one thing. but if they're going to be poisoned with the same faux-libertarian horseshit that strangled twitter, I don't see any value in supporting their protocol. there's always another protocol.

    but assuming I was willing to play ball and support this protocol, they STILL haven't solved the actual problem that no one else is solving either: your data exists somewhere else. until there's a server that I can bring home and plug in with setup I can do using my TV's remote, you're not going to be able to move most people to "private" data storage. you're just going to change which massive organization is exploiting them.

    I know, I know: hardware is a bitch and the type of device I'm even pitching seems like a costly boondoggle. but that's the business, and if you're not addressing it, you're not fomenting real change; you're patting yourself on the back for pretending we can algorithm ourselves out of late-stage capitalism.

    [0] *potentially/eventually

    • danabramov 4 hours ago
      >that the bluesky org can quickly get all of these filthy users off of their own servers (which costs money)

      That's not correct; actually, hosting user data is cheap. Most users' repos are tiny. Bluesky doesn't save anything by having someone move to their own PDS.

      What's expensive is stuff like video processing and large scale aggregation. Which has to be done regardless of where the user is hosting their data.

      • catapart 2 hours ago
        come on, man, let's be real. you're talking modern, practical application; I'm talking reasonable user buy-in at big boy social media levels. the video hosting IS what I'm talking about being expensive. you think bsky is going to be successful while ignoring the instagram crowd forever? what are we doing here?

        bsky saves the video processing and bandwidth by not hosting that content on bsky. it's a smaller problem, but in a large enough pool, images become heavy, too. and, either way, the egress of that content is expensive if you're doing it for the entire world, instead of letting each individual's computer (their pds) do it.

        I'm happy to admit that text is cheap and bsky isn't looking to offload their data as it stands now. but let's be honest about the long term, which is what my original comment takes aim at.

        • danabramov 1 hour ago
          I still don't think this is correct. The Bluesky app always processes video, whether you're self-hosting or not. The personal data server stores the original blob, but Bluesky's video service will have to pick it up and transcode it (to serve it from a CDN) either way.

          Also, this:

          >let them profit from a technical "innovation" that is 6 levels of indirection to mimic activity pub.

          is also wrong, because AT solves completely different problems. None of the stuff I wrote about in the post can be solved or is being solved by ActivityPub. Like, AP is just message passing. It doesn't help with aggregation, identity, signing, linking cross-application data, any of that.

          • catapart 1 hour ago
            right on, man, that must be the only way transcoding can be done and completely future proof so it will never change to let the user transcode their own damn content. I get it; you're frustrated so you're nitpicking. funny how dang doesn't swoop in to tut tut you for not steelmanning instead of strawmanning.

            in any case, you're completely right about the activity pub comment. that was absolutely mockery and not actually a complaint. artistic license and all that. god forbid we express ourselves. but sure, I can recognize that AT proto is useful in that it provides mechanisms we didn't really have before. that said, it's not novel (just new) and it's not irreplaceable. LIKE I SAID: there's always another protocol.

            any time you want to actually address the point of the comments, I'd be happy to get your take on why it's fine, actually, for the CEO to imply she doesn't have to care what users think as long as they aren't paying her. but if you're not ready to have the real conversation, I'll let you be satisfied with whatever other potshots you want to take at my reasonable indignation.

    • lou1306 6 hours ago
      > until there's a server that I can bring home and plug in with setup I can do using my TV's remote, you're not going to be able to move most people to "private" data storage

      Quite a few Bsky users are publishing on their own PDS (Personal Data Server) right now. They have been for a while. There are already projects that automate moving or backing up your PDS data from Bsky, like https://pdsmoover.com/

      • catapart 1 hour ago
        yeah, I was one of them. developers are not the endgame, though. true social media needs people who are not going to do anything more complicated than "go to website, sign up". there's no world where setting up your own pds is that simple without an organized piece of software to do that kind of thing.

        personally, I could probably get behind recommending something like umbrel[0], if it included something like a "include a pds" option during config. but even that is asking for a lot of mind-share for a non-tech user. it would take a super smooth setup process for that to be realistic. point is, though, I'm not saying it can't be done; I'm saying no one is doing it and what people are doing is not getting the job done for wider adoption.

        [0] https://umbrel.com/ *and, naturally, at this point, I'd prefer they include something that isn't based on AT proto for social publication. I wouldn't mind if they had both, but just an AT proto implementation wouldn't attract me.

      • avsm 5 hours ago
        Microblogging is also the least interesting part of the ATProto ecosystem. I've switched all my git hosting over to https://tangled.org and am loving it, not least of which is that my git server (a 'knot' in Tangled parlance) is under my control as a PDS and has no storage limits!
        • skybrian 5 hours ago
          Is it as easy for other people to read as a Github repo? Want to share?
          • catapart 1 hour ago
            yeah, tangled seems like a pretty well-designed piece of tech. I've never used it, myself, but I did an audit and found that it's not only analogous to github as far as UX, but it also includes features like CI/CD, which other public/social repo servers have struggled with.

            only reason I backed away from it is that when the bsky team had a big "fuck the users" moment, the user purporting to be the tangled founder was happy to cheer them on. so between having to use AT proto, and assuming that the tangled dev doesn't really disagree with bsky's "fuck the users" sentiment, I moved on. but, obviously, whiny moral grandstanding is irrelevant to whether or not someone made a good product. if you've got a use for it, I'd certainly recommend giving it a try!

          • icy 2 hours ago
            Tangled founder here; it's just as easy! For example, here's the entire Tangled codebase monorepo: https://tangled.org/tangled.org/core — you can clone this directly as you would a git repo anywhere else.
      • andai 5 hours ago
        What's a PDS?
  • doctorflan 3 hours ago
    I was hoping this was literally just going to be some safe version of a BBS/Usenet sort of filesharing that was peer-based, kind of like torrents, but just simple and straightforward, with no porn, infected warez, ransomware, crypto-mining, racist/terrorist/nazi/maga/communist/etc. crap, where I could just find old computing magazines, homebrew games, recipes, and things like that.

    Why can’t we have nice things?

    I guess that’s what Internet Archive is for.

  • ninkendo 5 hours ago
    > When great thinkers think about problems, they start to see patterns. They look at the problem of people sending each other word-processor files, and then they look at the problem of people sending each other spreadsheets, and they realize that there’s a general pattern: sending files. That’s one level of abstraction already. Then they go up one more level: people send files, but web browsers also “send” requests for web pages. And when you think about it, calling a method on an object is like sending a message to an object! It’s the same thing again! Those are all sending operations, so our clever thinker invents a new, higher, broader abstraction called messaging, but now it’s getting really vague and nobody really knows what they’re talking about any more.

    https://www.joelonsoftware.com/2001/04/21/dont-let-architect...

    • dang 4 hours ago
      "Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

      https://news.ycombinator.com/newsguidelines.html

    • danabramov 3 hours ago
      Author here! I grew up reading Joel's blog and am familiar with this post. Do you have a more pointed criticism?

      I agree something like "hyperlinked JSON" maybe sounds too abstract, but so does "hyperlinked HTML". But I doubt you see the web as being vague? This is basically the web for data.
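
      To make "hyperlinked JSON" a bit more concrete, this is roughly the shape of a like record sitting in a user's repo (per the app.bsky.feed.like lexicon; the identifiers below are made up). The at:// URI plus CID in `subject` is the link: it points at a record that lives in another user's repo, possibly written by a different app.

          // Roughly the shape of a "like" record stored in a user's repo
          // (app.bsky.feed.like lexicon; the identifiers are made up).
          // The subject's at:// URI + CID is the hyperlink: it points at a
          // record in another user's repo, possibly created by another app.
          const like = {
            $type: "app.bsky.feed.like",
            subject: {
              uri: "at://did:plc:abc123example/app.bsky.feed.post/3kexamplerkey",
              cid: "bafyreiexamplecid",
            },
            createdAt: "2025-01-01T00:00:00.000Z",
          };

          console.log(JSON.stringify(like, null, 2));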

      • ninkendo 1 hour ago
        > Do you have a more pointed criticism?

        Sure.

        After taking the time to re-read the article since I initially posted my (admittedly shallow) dismissal, I realized this article is really a primer/explainer for the AT protocol, which I don't really have enough background in to criticize.

        My criticism is more about the usefulness of saying "what if we treated social networking as a filesystem", which is that this doesn't actually solve any problems or add any value. The idea of modeling a useful thing (social media)[0] as a filesystem is generalizing the not-useful parts of it (i.e. the minutiae of how you actually read/write to it) and not actually addressing any of the interesting or difficult parts of it (how you come up with relevant things to look at, whether a "feed" should be a list of people you follow or suggestions from an algorithm, how you deal with bad actors, sock puppets, the list goes on forever.)

        This is relevant to Joel's blog because of the point he makes about Napster: It was never about the "peer to peer" or "sharing", that was the least interesting part. The useful thing about Napster was that you could type in a song and download it. It would have been popular if it wasn't peer to peer, so long as you could still get any music you wanted for free.

        Modeling social media as a filesystem, or constructing a data model about how to link things together, and hypergeneralizing all the way to "here's how to model any graph of data on the filesystem!" is basically a "huh, that's neat" little tech demo but doesn't actually solve anything. Yes, you can take any graph-like structured data and treat it as files and folders. I can write a FUSE filesystem to browse HN. I can spend the 20 minutes noodling on how the schema should work, what a "symlink" should represent, etc... but at the end of the day, you've just taken data and changed how it's presented.
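
        To be concrete about how thin that presentation layer is, here is roughly all the "repo as a directory tree" view amounts to. This sketch assumes the standard com.atproto.repo.describeRepo and listRecords endpoints; the host and handle are placeholders.

            // Sketch: present a repo as a directory tree. Collections play the
            // role of folders, record keys play the role of files.
            // Standard com.atproto.repo.* endpoints assumed; the host and
            // handle are placeholders. Node 18+ for global fetch.
            const HOST = "https://bsky.social";
            const REPO = "someone.example.com"; // a handle or a DID

            const describe = await fetch(
              `${HOST}/xrpc/com.atproto.repo.describeRepo?repo=${REPO}`
            ).then((r) => r.json());

            for (const collection of describe.collections) {
              console.log(`${collection}/`); // e.g. "app.bsky.feed.post/"
              const { records } = await fetch(
                `${HOST}/xrpc/com.atproto.repo.listRecords?repo=${REPO}&collection=${collection}&limit=5`
              ).then((r) => r.json());
              for (const rec of records) {
                console.log(`  ${rec.uri.split("/").pop()}`); // record key as a "filename"
              }
            }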

        There's no reason for the filesystem to be the "blessed" metaphor here. Why not a SQL database? You can `SELECT * FROM posts WHERE like_count > 100`, how neat! Or how about a git repo? You can represent posts as commits, and each person's timeline as a branch, and ooh then you could cherry-pick to retweet!

        These kinds of exercises basically just turn into nerd-sniping: You think of a clever "what if we treated X as Y" abstraction, then before you really stop to think "what problem does that actually solve", you get sucked into thinking about various implementation details and how to model things.

        The AT protocol may be well-designed, it may not be, but my point is more that it's not protocols that we're lacking. It's a lack of trust, a lack of protection from bad actors, financial incentives that actively harm the experience for users, and the negative effects social media has on people. Nobody's really solved any of this: not ActivityPub, not Mastodon, not Bluesky, not anyone. Creating a protocol that generalizes all of social media so that you can now treat it all homogeneously is "neat", but it doesn't solve anything that you couldn't solve via a simple (for example) web browser extension that aggregated the data in the same way for you. Or bespoke data transformations between social media sites to allow for federation/replication. You can just write some code to read from site A and represent it in site B (assuming sites A and B are willing). Creating a protocol for this? Meh, it's not a terrible idea but it's also not interesting.

        - [0] You could argue whether social media is "useful", let's just stipulate that it is.

        • danabramov 1 hour ago
          I think there was a bit of a communication failure between us. You took the article as a random "what if X was Y" exploration. However, what I tried to communicate was something more like:

          1. The file-first paradigm has some valuable properties. One property is that apps can't lock each other out of the data, so the user can always change which apps they use.

          2. The web social app paradigm doesn't have these properties, and we observe the corresponding problems: we're collectively stuck with specific apps because our data lives inside those apps rather than being saved somewhere under our control.

          3. The question: Is there a way to add the properties of the file-first paradigm (data lives outside apps) to web social apps? And if it is indeed possible, does this actually solve the problems we currently have?

          The rest of the article explores this (with the AT protocol being a candidate solution that attempts to solve exactly this problem). I'm claiming that:

          1. Yes, it is possible to add file-first paradigm properties to web social apps

          2. That is what the AT protocol does (by externalizing data and adding mechanisms for aggregation from a user-controlled source of truth)

          3. Yes, this does solve the originally stated problems: we can see in the demos from the last section that data doesn't get trapped in apps, and that developers can interoperate with zero coordination (a minimal sketch of that read path follows below). And it's already happening; it's not some theoretical thing.
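
          To make the zero-coordination claim concrete, here is a minimal sketch of the read path: resolve a handle to a DID, find the PDS from the DID document, and read whatever collection some other app wrote there. The handle and the collection NSID are made-up examples; the endpoints are the standard identity and repo ones.

              // Sketch: one app reading records that a different app wrote,
              // with no coordination between the two developers.
              // Standard identity/repo endpoints; the handle and the collection
              // NSID are made-up examples. Node 18+ for global fetch.
              const handle = "alice.example.com";
              const collection = "com.example.cookbook.recipe"; // hypothetical third-party lexicon

              // 1. Resolve the handle to a DID.
              const { did } = await fetch(
                `https://public.api.bsky.app/xrpc/com.atproto.identity.resolveHandle?handle=${handle}`
              ).then((r) => r.json());

              // 2. Find the account's PDS from its DID document (did:plc shown here).
              const didDoc = await fetch(`https://plc.directory/${did}`).then((r) => r.json());
              const pds = didDoc.service.find((s: any) => s.id === "#atproto_pds").serviceEndpoint;

              // 3. Read the other app's records straight out of the user's repo.
              const { records } = await fetch(
                `${pds}/xrpc/com.atproto.repo.listRecords?repo=${did}&collection=${collection}`
              ).then((r) => r.json());
              console.log(records.map((r: any) => r.value));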

          I don't understand your proposed alternative with a web extension, but I suspect you're thinking about solving different problems than the ones I'm describing.

          Overall I agree that I sacrificed some "but why" in this article to focus on "here's how". For a more "but why" article about the same thing, you might be curious to look at https://overreacted.io/open-social/.