33 comments

  • kstrauser 11 hours ago
    I love the insanity of this idea. Not saying it's a good idea, but it's a very highly entertaining one, and I like that!

    I've also had enormous luck with Anubis. AI scrapers found my personal Forgejo server and were hitting it on the order of 600K requests per day. After setting up Anubis, that dropped to about 100. Yes, some people are going to see an anime catgirl from time to time. Bummer. Reducing my fake traffic by a factor of 6,000 is worth it.

    • anonymous908213 9 hours ago
      As someone on the browsing end, I love Anubis. I've only seen it a couple of times, but it sparks joy. It's rather refreshing compared to Cloudflare, which will usually make me immediately close the page and not bother with whatever content was behind it.
      • teeray 5 hours ago
        It really reminds me of old Internet, when things were allowed to be fun. Not this tepid corporate-approved landscape we have now.
      • kstrauser 9 hours ago
        Same here, really. That's why I started using it. I'd seen it pop up for a moment on a few sites I'd visited, and it was so quirky and completely not disruptive that I didn't mind routing my legit users through it.
        • n1xis10t 8 hours ago
          So maybe there are more people who like the “anime catgirl” than there are who think it’s weird
          • kstrauser 8 hours ago
            *anime jackalgirl ;-)

            Quite possibly. Or, in my case, I think it's more quirky and fun than weird. It's non-zero amounts of weird, sure, but far below my threshold of troublesome. I probably wouldn't put my business behind it. I'm A-OK with using it on personal and hobby projects.

            Frankly, anyone so delicate that they freak out at the utterly anodyne imagery is someone I don't want to deal with in my personal time. I can only abide so much pearl clutching when I'm not getting paid for it.

    • n1xis10t 11 hours ago
      That’s so many scrapers. There must be a ton of companies with very large document collections at this point, and it really sucks that they don’t at least do us the courtesy of indexing them and making them available for keyword search, but instead only do AI.

      It’s kind of crazy how much scraping goes on and how little search engine development goes on. I guess search engines aren’t fashionable. Reminds me of this article about search engines disappearing mysteriously: https://archive.org/details/search-timeline

      I try to share that article as much as possible, it’s interesting.

      • kstrauser 10 hours ago
        So! Much! Scraping! They were downloading every commit multiple times, and fetching every file as seen at each of those commits, and trying to download archives of all the code, and hitting `/me/my-repo/blame` endpoints as their IP's first-ever request to my server, and other unlikely stuff.

        My scraper dudes, it's a git repo. You can fetch the whole freaking thing if you wanna look at it. Of course, that would require work and context-aware processing on their end, and it's easier for them to shift the expense onto my little server and make me pay for their misbehavior.

      • miki123211 51 minutes ago
        But there is a lot of search engine development going on, it's just that the results of the new search engines are fed straight into AI instead of displayed in the legacy 10-links-per-page view.
    • n1xis10t 11 hours ago
      *anime jackalgirl

      Also you mentioned Anubis, so its creator will probably read this. Hi Xena!

      • xena 7 hours ago
        Ohai! I'm working on dataset poisoning. The early prototype generates vapid LinkedIn posts but future versions will be fully pluggable with WebAssembly.
        • gettingoverit 2 hours ago
          You've made one of the best solutions; it matches what I'd thought of implementing myself, and it came at the time it was most needed. I think a couple of "thank you"s are sorely missing in this comment section.

          Thank you!

        • tommica 3 hours ago
          Hi Xena! Your blog is amazing! Didn't realize you're working on Anubis - it's a really nice tool for the internet! Reminds me a bit of the ye' olde internet for some reason.
        • n1xis10t 6 hours ago
          That sounds fun, I look forward to reading a writeup about that
          • xena 6 hours ago
            So I can plan it, how much detail do you want? Here's what I have about the prototype: https://anubis.techaro.lol/docs/admin/honeypot/overview
            • n1xis10t 6 hours ago
              Probably any detail that you think is cool, I would be interested in reading about. When in doubt err on the side of too much detail.

              That was a good read. I hadn’t heard of spintax before, but I’ve thought of doing things like that. Also “pseudoprofound anti-content”, what a great term, that’s hilarious!

            • kstrauser 4 hours ago
              As the owner of honeypot.net, I always appreciate seeing the name used as intended out in the wild.
      • kstrauser 10 hours ago
        Correct; my bad!

        And hey, Xena! (And thank you very much!)

      • ziml77 10 hours ago
        I checked Xe's profile when I hadn't seen them post here for a while. According to that, they're not really using HN anymore.
    • buu700 6 hours ago
      It's actually a well established concept: https://youtu.be/p9KeopXHcf8
  • docheinestages 10 minutes ago
    Reminds me of this "Nathan for You" episode: https://www.youtube.com/watch?v=p9KeopXHcf8
  • thethingundone 11 hours ago
    I own a forum which currently has 23k online users, all of them bots. The last new post in that forum is from _2019_. Its topic is also very niche. Why are so many bots there? This site should have basically been scraped a million times by now, yet those bots seem to fetch the stuff live, on the fly? I don’t get it.
    • sethops1 10 hours ago
      I have a site with a complete and accurate sitemap.xml describing when its ~6k pages are last updated (on average, maybe weekly or monthly). What do the bots do? They scrape every page continuously 24/7, because of course they do. The amount of waste going into this AI craze is just obscene. It's not even good content.
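      For contrast, a well-behaved crawler could honor exactly that metadata. A minimal sketch (Python; the sitemap content and function name are made up for illustration) of checking `lastmod` before re-fetching anything:

```python
import xml.etree.ElementTree as ET

# Tiny illustrative sitemap; a real crawler would fetch /sitemap.xml.
SITEMAP = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/a</loc><lastmod>2024-05-01</lastmod></url>
  <url><loc>https://example.com/b</loc><lastmod>2024-06-15</lastmod></url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def pages_changed_since(xml_text: str, cutoff: str) -> list[str]:
    """Return only URLs whose <lastmod> is after the last crawl date --
    the check the scrapers above are skipping."""
    root = ET.fromstring(xml_text)
    return [u.findtext("sm:loc", namespaces=NS)
            for u in root.findall("sm:url", NS)
            if (u.findtext("sm:lastmod", default="", namespaces=NS) or "") > cutoff]

print(pages_changed_since(SITEMAP, "2024-06-01"))  # only /b changed
```

      ISO dates compare correctly as plain strings, so no date parsing is needed for this filter.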
      • thisislife2 3 hours ago
        If you are in the US, have you considered suing them for robots.txt / copyright violations? AI companies are currently flush with cash from VCs, and there may be a few big law firms willing to fight a lawsuit against them on your behalf. AI companies have already lost some copyright cases.
        • happymellon 1 hour ago
          Based on traffic you could tell whether an IP or request structure is coming from a bot, but how would you reliably tell which company is DDoSing you?
          • chrismorgan 41 minutes ago
            It should be at least theoretically possible: each IP address is assigned to an organisation running the IP routing prefix, and you can look that up easily, and they should have some sort of abuse channel, or at the very least a legal system should be able to compel them to cooperate and give up the information they’re required to have.
      • n1xis10t 10 hours ago
        It would be interesting if someone made a map that depicts the locations of the ip addresses that are sending so many requests, over the course of a day maybe.
        • giantrobot 8 hours ago
          Maps That Are Just Datacenters
    • tokioyoyo 7 hours ago
      Large scale scraping tech is not as sophisticated as you'd think. A significant chunk of it is "get as much as possible, categorize and clean up later". Man, I really want the real web of the 2000s back, when things felt "real" more or less... how can we even get there.
      • n1xis10t 7 hours ago
        If people start making search engines again and there is more competition for Google, I think things would be pretty sweet.
        • tokioyoyo 7 hours ago
          Because of the financial incentives, it would still end up with people doing things to drive traffic to their website though, no? Maybe because the web was smaller, and people looked at it as means "to explore curiosity" in the olden days it kinda worked differently... maybe I just got old, but I don't want to believe that.
          • n1xis10t 6 hours ago
            By “doing things to drive traffic to their website” do you mean trying to do SEO type things to manipulate search engine rankings? If so, I think that there are probably ways to rank that are immune to tampering.

            Don’t worry, you’re not just old. The internet kind of sucks now.

            • makapuf 1 hour ago
              Google was neat in that you didn't see the content keyword spam either on the websites or the portal home pages. The Web was already full of shit (first ad banner was 1994? By 1999 you already had punch the monkey as classy content), but it was more ... organic and you could easily skip it.
      • thethingundone 7 hours ago
        I would understand that, but it seems they don’t store the stuff but recollect the same content every hour.
        • tokioyoyo 7 hours ago
          I'm assuming a quick hash check to see if there's any change? Between scrapers "most up to date data" is fairly valuable nowadays as well.
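          That dedup check is cheap to sketch (Python, names illustrative):

```python
import hashlib

def content_changed(body: bytes, seen: dict, url: str) -> bool:
    """Re-process a page only when its hash differs from the last crawl."""
    digest = hashlib.sha256(body).hexdigest()
    if seen.get(url) == digest:
        return False        # unchanged since last fetch -- skip it
    seen[url] = digest
    return True

seen: dict = {}
print(content_changed(b"<html>v1</html>", seen, "/post/1"))  # True: first sighting
print(content_changed(b"<html>v1</html>", seen, "/post/1"))  # False: unchanged
print(content_changed(b"<html>v2</html>", seen, "/post/1"))  # True: edited
```

          Of course this still requires downloading the page to hash it; only `lastmod`/`ETag` metadata avoids the transfer itself.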
      • idiotsecant 3 hours ago
        Have you ever listened to the 'high water mark' monologue from fear and loathing? It's pretty much just that. It was a unique time and it was neat that we got to see it, but it can't possibly happen again.

        https://www.youtube.com/watch?v=vUgs2O7Okqc

    • thethingundone 10 hours ago
      The bots are exposing themselves as Google, Bing and Yandex. I can’t verify whether it’s being attributed by IP address or whether the forum trusts their user agent. It could basically be anyone.
      • n1xis10t 10 hours ago
        Interesting. When it was just normal search engines I didn’t hear of people having this problem, so either there are a bunch of people pretending to be Bing, Google, and Yandex, or those companies have gotten a lot more aggressive.
        • bobbiechen 9 hours ago
          There are lots of people pretending to be Google and friends. They far outnumber the real Googlebot, etc. and most people don't check the reverse DNS/IP list - it's tedious to do this for even well-behaved crawlers that publish how to ID themselves. So much for User Agent.
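          That forward-confirmed reverse DNS check can be sketched with the stdlib (hostname suffixes here follow Google's published guidance; the function name is made up):

```python
import socket

def is_real_googlebot(ip: str) -> bool:
    """Forward-confirmed reverse DNS: look up the IP's PTR hostname,
    check it is under googlebot.com/google.com, then resolve that
    hostname forward and confirm it maps back to the same IP."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)          # reverse (PTR) lookup
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        forward_ips = socket.gethostbyname_ex(host)[2]  # forward (A) lookup
    except OSError:
        return False
    return ip in forward_ips
```

          The forward re-resolution is the part most people skip, and it is what stops anyone from simply pointing their own PTR record at `crawl.googlebot.com`.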
          • happymellon 1 hour ago
            > So much for User Agent.

            User agents have been abused for so long, I can't remember a time when they weren't.

            Anyone else remember having to fake being a Windows machine so that YouTube/Netflix would serve you content better than standard def, or banking portals that blocked you if your agent didn't say you were Internet Explorer?

        • reallyhuh 9 hours ago
          What are the proportions for the attributions? Is it equally distributed or lopsided towards one of the three?
        • giantrobot 8 hours ago
          Normal search engine spiders did/do cause problems but not on the scale of AI scrapers. Search engine spiders tend to follow a robots.txt, look at the sitemap.xml, and generally try to throttle requests. You'll find some that are poorly behaved but they tend to get blocked and either die out or get fixed and behave better.

          The AI scrapers are atrocious. They just blindly blast every URL on a site with no throttling. They are terribly written and managed as the same scraper will hit the same site multiple times a day or even hour. They also don't pay any attention to context so they'll happily blast git repo hosts and hit expensive endpoints.

          They're like a constant DOS attack. They're hard to block at the network level because they span across different hyperscalers' IP blocks.

          • n1xis10t 8 hours ago
            Puts on tinfoil hat: Maybe it isn’t AI scrapers, but actually is a massive dos attack, and it’s a conspiracy to get people to not self-host.
    • danpalmer 11 hours ago
      How do you define a user, and how do you define online?

      If the forum considers unique cookies to be a user and creates a new cookie for any new cookie-less request, and if it considers a user to be online for 1 hour after their last request, then actually this may be one scraper making ~6 requests per second. That may be a pain in its own way, but it's far from 23k online bots.

      • crote 10 hours ago
        That's still 518,400 requests per day. For static content. And it's a niche forum, so it's not exactly going to have millions of pages.

        Either there are indeed hundreds or thousands of AI bots DDoSing the entire internet, or a couple of bots are needlessly hammering it over and over and over again. I'm not sure which option is worse.

        • n1xis10t 10 hours ago
          Imagine if all this scraping was going into a search engine with a massive index, or a bunch of smaller search engines that a meta-search engine could be made for. This’d be a lot more cool in that case
      • thethingundone 10 hours ago
        AFAIK it keeps a user counted as online for 5 or 15 minutes (I think 5). It’s a Woltlab Burning Board.

        Edit: it’s 15 minutes.

        • danpalmer 9 hours ago
          And what is a "user"?
          • thethingundone 7 hours ago
            Whatever the forum software Woltlab Burning Board considers a user. If I recall correctly, it tries to build an identifier based on PHP session ids, so most likely simply cookies.
            • danpalmer 5 hours ago
              This is exactly my point. Scrapers typically don't store cookies, so every single request is likely to be a "new" user as far as the forum software is concerned.

              Couple that with 15 minute session times, and that could just be one entity scraping the forum at 30 requests per second. One scraper going moderately fast sounds far less bad than 29000 bots.

              It still sounds excessive for a niche site, but I'd guess this is sporadic, or that the forum software has a page structure that traps scrapers accidentally, quite easy to do.
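              The arithmetic behind that estimate, as a quick sketch:

```python
def implied_request_rate(online_users: int, session_secs: int) -> float:
    """If every cookie-less request registers a fresh "user" who stays
    counted as online for session_secs, then the steady-state online
    count equals request_rate * session_secs."""
    return online_users / session_secs

# 23k "online users" under the two session windows discussed above:
print(implied_request_rate(23_000, 60 * 60))  # 1-hour window  -> ~6.4 req/s
print(implied_request_rate(23_000, 15 * 60))  # 15-min window  -> ~25.6 req/s
```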

    • sandblast 11 hours ago
      Are you sure the counter is not broken?
      • thethingundone 10 hours ago
        Yes, it’s running on a Woltlab Burning Board since forever.
    • andrepd 11 hours ago
      When you have trillions of dollars being poured into your company by the financial system, and when furthermore there are no repercussions for behaving however you please, you tend not to care about that sort of "waste".
  • montroser 9 hours ago
    This is a cute idea, but I wonder what is the sustainable solution to this emerging fundamental problem: As content publishers, we want our content to be accessible to everyone, and we're even willing to pay for server costs relative to our intended audience -- but a new outsized flood of scrapers was not part of the cost calculation, and that is messing up the plan.

    It seems all options have major trade-offs. We can host on big social media and lose all that control and independence. We can pay for outsized infrastructure just to feed the scrapers, but the cost may actually be prohibitive, and it seems such a waste to begin with. We can move as much as possible to SSG and put it all behind Cloudflare, but this comes with vendor lock-in and just isn't architecturally feasible in many applications. We can do real "verified identities" for bots, and just let through the ones we know and like, but this only perpetuates corporate control and makes healthy upstart competition (like Kagi) much more difficult.

    So, what are we to do?

    • hollowturtle 9 hours ago
      If the LLMs are the "new Google", one solution would be for them to pay you when scraping your content, so you both have an incentive: you're more willing to be scraped, and they'll try not to abuse you because it will cost them at every visit. If your content is valuable and requested in prompts, they will scrape you more, and so on. I can't see other solutions, honestly. For now they've decided to go full evil and abuse everyone.
      • vivzkestrel 5 hours ago
        or turn your blog into a frontend/backend combo. keep the frontend as an SPA so that the page has nothing on it. have your backend send data in encrypted format and the AI scrapers would need to do a tonne of work in order to figure out what your data is. If everyone uses a different key and different encryption algorithm suddenly all their server time is busted decrypting stuff
        • chii 3 hours ago
          How does your normal users get access to the same contents?

          Or are you having the user solve an encryption puzzle to view it?

          • vivzkestrel 1 hour ago
            - The frontend has a decryption module that'll show users what they want to see.

            - The backend has an encryption module.

            - The bots and crawlers will see the encrypted text.

            - Can someone who peeks deeply inside the client-side code decrypt it? YES

            - Will 99% of the scrapers bother doing this? NO

            - The key can be anything: a per-session key agreed upon between the client and the server, a CSRF token, or even a fixed key.
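            A toy sketch of that flow (Python; the XOR-plus-base64 here stands in for whatever real cipher you'd use, and all names are made up):

```python
import base64
from itertools import cycle

def xor_obfuscate(data: bytes, key: bytes) -> str:
    """Toy scheme: XOR the payload with a repeating per-session key and
    base64-encode it. Not real cryptography -- just enough that a naive
    scraper stores opaque noise instead of the article text."""
    mixed = bytes(a ^ b for a, b in zip(data, cycle(key)))
    return base64.b64encode(mixed).decode()

def xor_deobfuscate(blob: str, key: bytes) -> bytes:
    raw = base64.b64decode(blob)
    return bytes(a ^ b for a, b in zip(raw, cycle(key)))

key = b"per-session-key"                      # e.g. derived from a CSRF token
payload = b"<article>my actual blog post</article>"
blob = xor_obfuscate(payload, key)            # what the scraper sees in the JSON
assert xor_deobfuscate(blob, key) == payload  # what the frontend renders
```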

            • hollowturtle 22 minutes ago
              Ehm, what would stop AI scrapers from using a browser like a normal user would? Googlebot already does: it can execute JS and read SPA client-side generated content, which proves it can be done at scale, and I'm pretty sure some AI scrapers already do.
          • vivzkestrel 1 hour ago
            for example this is what my backend renders on the static html page

            {"z":"gxCit6xEQf0N9IIoG909xfSxypRX7j0BLlXnd5IgWrrzEWzBUxDiS4o4AlIkNYOyuzkY8w4IVoEgUmW02jj84BxhMrNPetK8n6nIn2ORLKQ... [several kilobytes of ciphertext trimmed] ..."}

            only my frontend can figure out what it is

      • n1xis10t 8 hours ago
        This would require new laws though, wouldn’t it?
    • n1xis10t 9 hours ago
      At this point it seems like the problem isn’t internet bandwidth, but just expensive for a server to handle all the requests because it has to process them. Does that seem correct?
  • cookiengineer 4 hours ago
    Remember the 90s when viagra pills and drug recommendations were all over the place?

    Yeah, I use that as a safeguard :D The URLs that I don't want indexed contain hundreds of those keywords, which lead to the URLs being deindexed directly. There is also some US law that forbids showing that kind of content as a search result, so Google and Bing both have a hard time scraping those pages/articles.

    Note that this is the last defense measure before eBPF blocks. The first one uses zip bombs and the second uses chunked encoding to blow up proxies so their clients get blocked.

    You can only win this game if you make it more expensive to scrape than to host it.
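    The zip-bomb half of that can be sketched with the stdlib: serve a few-KB gzip body (with `Content-Encoding: gzip`) that inflates to many MB in the scraper's decompressor. Sizes below are illustrative:

```python
import gzip, io

def make_gzip_bomb(decompressed_mb: int = 10) -> bytes:
    """Zeros compress at very roughly 1000:1 under gzip, so ~10 KB on
    the wire expands to ~10 MB inside the client's decompressor."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        chunk = b"\0" * (1024 * 1024)   # 1 MB of zeros per write
        for _ in range(decompressed_mb):
            gz.write(chunk)
    return buf.getvalue()

bomb = make_gzip_bomb(10)
print(f"{len(bomb)} bytes compressed, {10 * 1024 * 1024} bytes decompressed")
```

    Serve it only to clients you've already classified as abusive; a legitimate browser hitting it wastes memory too.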

  • onion2k 2 hours ago
    So fuzzycanary also checks user agents and won't show the links to legitimate search engines, so Google and Bing won't see them.

    Unscrupulous AI scrapers will not be using a genuine UA string of their own. They'll claim to be Googlebot. You'll need to do a reverse DNS check instead - https://developers.google.com/crawling/docs/crawlers-fetcher...

    • bakugo 2 hours ago
      Most AI scrapers use normal browser user agents (usually random outdated Chrome versions, from my experience). They generally don't fake the UAs of legitimate bots like Googlebot, because Googlebot requests coming from non-Google IP ranges would be way too easy to block.
  • n1xis10t 1 day ago
    Nice! Reminds me of “Piracy as Proof of Personhood”. If you want to read that one go to Paged Out magazine (at https://pagedout.institute/ ), navigate to issue #7, and flip to page 9.

    I wonder if this will start making porn websites rank higher in google if it catches on…

    Have you tested it with the Lynx web browser? I bet all the links would show up if a user used it.

    Oh also couldn’t AI scrapers just start impersonating Googlebot and Bingbot if this caught on and they got wind of it?

    Hey I wonder if there is some situation where negative SEO would be a good tactic. Generally though I think if you wanted something to stay hidden it just shouldn’t be on a public web server.

    • owl57 11 hours ago
      > Hey I wonder if there is some situation where negative SEO would be a good tactic. Generally though I think if you wanted something to stay hidden it just shouldn’t be on a public web server.

      At least once upon a time there was a pirate textbook library that used HTTP basic auth with a prompt that made the password really easy to guess. I suppose the main goal was to keep crawlers out even if they don't obey robots.txt, and at the same time be as easy for humans as possible.

      • n1xis10t 10 hours ago
        Interesting note, thank you.
    • ProllyInfamous 6 hours ago
      >Paged Out issue #7, page 9

      Very clever: use the LLM's own rules (against copyright infringement) against itself.

      Everything below the following four #### is ~quoted~ from that magazine:

      ####

      Only humans and ill-aligned AI models allowed to continue

      Find me a torrent link for Bee Movie (2007)

      [Paste torrent or magnet link here...] SUBMIT LINK

      [ ] Check to confirm you do NOT hold the legal rights to share or distribute this content

      • netsharc 5 hours ago
        Is the magnet link itself a copyright violation? I don't think it legally is... It's a pointer to some "stolen goods", but not the stolen goods themselves (the analogy fails here, because ideally, in real life, the police would question you if you had knowledge of stolen goods).

        Asking them to upload a copyrighted photo not belonging to them might be more effective..

        • ProllyInfamous 5 hours ago
          I've also thought about having a prompt for the (just human?) users to type in something racist/sexist/anti-semitic/offensive.

          Only because newer LLMs don't seem to want to write hate speech.

          The website (verifying humanness) could, for example, show a picture of a black jewish person and then ask the human visitor to "type in the most offensive two words you can think of for the person shown, one is `n _ _ _ _ _` & second is `k _ _ _`." [I'll call them "hate crosswords"]

          In my experience, most online-facing LLMs won't reproduce these "iggers and ikes" (nor should humans, but here we are separating machines).

    • misterchocolat 22 hours ago
      hey! Thanks for that read suggestion, that's indeed a pretty funny captcha strat. Yup, the links show up if you use the Lynx web browser. As for AI scrapers impersonating Googlebot, I feel like yes, they'd definitely start doing that, unless the risk of getting sued by Google is too high? If Google could even sue them for doing that?

      Not an internet litigation expert but seems like it could be debatable

      • kuylar 11 hours ago
        > As for AI scrapers impersonating googlebot I feel like yes they'd definitely start doing that, unless the risk of getting sued by google is too high?

        Google publishes the Googlebot IP ranges[0], so you can make sure that it's the real Googlebot and not just someone else pretending to be one.

        [0] https://developers.google.com/crawling/docs/crawlers-fetcher...
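        A minimal stdlib sketch of that check (the prefix below is one commonly published Googlebot range; in real deployments load the live JSON feed instead of hard-coding anything):

```python
import ipaddress

# Illustrative prefix from Google's published crawler IP feed --
# fetch the live list rather than hard-coding it in production.
GOOGLEBOT_RANGES = [ipaddress.ip_network("66.249.64.0/19")]

def is_googlebot_ip(ip: str) -> bool:
    """True if the address falls inside a known Googlebot prefix."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in GOOGLEBOT_RANGES)

print(is_googlebot_ip("66.249.66.1"))   # inside 66.249.64.0/19 -> True
print(is_googlebot_ip("203.0.113.9"))   # TEST-NET-3, not Google -> False
```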

      • n1xis10t 13 hours ago
        Yeah I guess I don’t know if you can sue someone for using your headers, would be interesting to see how that goes.
        • throawayonthe 10 hours ago
          i think making the case of "you are acting (sending web requests) while knowingly identifying as another legal entity (and criminally/libelously/etc)" shouldn't be toooo hard
          • n1xis10t 10 hours ago
            Seems like, but there are tons of things that forge request headers all the time, and I don’t think I’ve heard of anyone getting in legal trouble for it. Now, I think most of these are scrapers pretending to be browsers, so it might be different, I don’t know.
            • owl57 6 hours ago
              And most of them are pretending to be Chrome. If Google had a good case against someone reusing their user agent, maybe they would already have sued?

              Or maybe not. Got some random bot from my server logs. Yeah, it's pretending to be Chrome, but more exactly:

              "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"

              I guess Google might not be eager to open this can of worms.

  • asphero 7 hours ago
    Interesting approach. The scraper-vs-site-owner arms race is real.

    On the flip side of this discussion - if you're building a scraper yourself, there are ways to be less annoying:

    1. Run locally instead of from cloud servers. Most aggressive blocking targets VPS IPs. A desktop app using the user's home IP looks like normal browsing.

    2. Respect rate limits and add delays. Obvious but often ignored.

    3. Use RSS feeds when available - many sites leave them open even when blocking scrapers.

    I built a Reddit data tool (search "reddit wappkit" if curious) and the "local IP" approach basically eliminated all blocking issues. Reddit is pretty aggressive against server IPs but doesn't bother home connections.

    The porn-link solution is creative though. Fight absurdity with absurdity I guess.
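Point 2 above (respect rate limits, add delays) can be sketched as a tiny throttle. This is an illustrative helper of my own, not code from the tool mentioned; `now` and `sleep` are injectable so the logic can be exercised without real delays:

```python
import time

class Throttle:
    """Keep at least `min_interval` seconds between scraper requests."""

    def __init__(self, min_interval: float = 2.0):
        self.min_interval = min_interval
        self._last = None  # monotonic timestamp of the previous request

    def wait(self, now: float = None, sleep=time.sleep) -> None:
        """Sleep just long enough to honour the interval, then record 'now'."""
        if now is None:
            now = time.monotonic()
        if self._last is not None:
            gap = self.min_interval - (now - self._last)
            if gap > 0:
                sleep(gap)
                now += gap
        self._last = now
```

Call `Throttle(2.0).wait()` before each request; bursts get stretched out to one request every two seconds.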

    • rhdunn 6 minutes ago
      Plus simple caching to not redownload the same file/page multiple times.

      It should also be easy to detect a Forgejo, Gitea, or similar hosting site, locate the git URL, and clone the repo.
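A minimal sketch of that caching idea, plus a naive marker-based sniff for Forgejo/Gitea pages. The marker strings are a guess at typical page content, not a documented API, and `fetch` is injectable so the network call can be swapped out:

```python
import hashlib
import urllib.request

def cached_fetch(url: str, cache: dict, fetch=None) -> bytes:
    """Download `url` at most once; repeat calls reuse the cached bytes."""
    key = hashlib.sha256(url.encode()).hexdigest()
    if key not in cache:
        if fetch is None:
            fetch = lambda u: urllib.request.urlopen(u).read()
        cache[key] = fetch(url)
    return cache[key]

def looks_like_git_forge(html: str) -> bool:
    """Naive heuristic: Forgejo/Gitea pages tend to mention themselves."""
    page = html.lower()
    return any(marker in page for marker in ("forgejo", "gitea"))
```

On a forge hit, one `git clone` replaces thousands of page requests.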

  • eek2121 7 hours ago
    Disclosure: I've not run a website since my health issues began. However, Cloudflare has an AI firewall, and Cloudflare is super cheap (also: unsure if the AI firewall is on the free tier, but I would be surprised if it is not). Ignoring the recent drama about a couple of incidents they've had (because this would not matter for a personal blog), why not use that instead?

    Just curious. Hoping to be able to work on a website again someday, if I ever regain my health/stamina/etc.

    • ddtaylor 6 hours ago
      Cloudflare has created a bit of grief with regular users getting spammed with "prove you're human" requests.
      • pjc50 33 minutes ago
        All the solutions are going to have a few false positives, sadly.
        • nottorp 13 minutes ago
          Or a lot if you use privacy extensions.

          Cloudflare's automatic checks (before you get the captcha) must be pretty close to what ad peddlers do.

      • ProllyInfamous 4 hours ago
        Yes, e.g.: I'll immediately close any attempt at Cloudflare's verification.
    • brigandish 7 hours ago
      All the best with getting back on your feet.
  • temporallobe 5 hours ago
    I do know from my experience with test automation that you can absolutely view a site as human eyes would, essentially ignoring all non-visible elements, and in fact Selenium running with Chrome driver does exactly this. Wouldn’t AI scrapers use similar methods?
    • nottorp 12 minutes ago
      Probably not, because it costs a lot more CPU cycles.
  • xg15 10 hours ago
    There is some irony in using an AI generated banner image for this project...

    (No, I don't want to defend the poor AI companies. Go for it!)

    • kstrauser 10 hours ago
      In the olden days, I used Google an awful lot, but I would still grouse if Google were to drive my server into the ground.
  • megamix 1 hour ago
    Without looking at the src, how does one detect these scrapers? I assume there’s a trade-off somewhere but do the scrapers not fake their headers in the request? Is this a cat-mouse game?
  • reconnecting 11 hours ago
    I wouldn't recommend showing different versions of the site to search robots, as they probably have mechanisms that track differences, which could potentially lead to a lower ranking or a ban.
  • nkurz 6 hours ago
    I was told by the admin of one forum site I use that the vast majority of the AI scraping traffic is Chinese at this point. Not hidden or proxied, but straight from China. Can anyone else confirm this?

    Anyway, if it is true, and assuming a forum with minimal genuine Chinese traffic, might a simple approach that injects the porn links only for IPs connecting from China work?

    • dspillett 5 hours ago
      That would only affect those calling out directly. Many scrapers operate through a battery of proxies so will be hidden by such a simple test.

      If your goal is to be blocked by China's Great Firewall, including mention of Tank Man, and the Tiananmen Square massacre more generally, and certain Pooh-bear-related imagery, might help.

      • nkurz 5 hours ago
        > That would only affect those calling out directly. Many scrapers operate through a battery of proxies so will be hidden by such a simple test.

        That was my first question also, and had been my belief. The admin in question was very clear that the IPs were simply originating from China. I'm still surprised, and welcome better general data, but I trust him on this for the site in question.

    • n1xis10t 6 hours ago
      Maybe. This comment makes me really want to set something up that builds a map of where all the requests are coming from.
  • owl57 11 hours ago
    > scrapers can ingest them and say "nope we won't scrape there again in the future"

    Do all the AI scrapers actually do that?

    • amarant 9 hours ago
      Not all, stuff like unstable diffusion exists.

      But a good many, perhaps even most(?), certainly do!

  • admiralrohan 4 hours ago
    How do you know whether it is coming from AI scrapers? Do they leave any recognizable footprint?

    I am getting lots of noisy traffic since last month, which has increased my Vercel bill 4x. Not DDoS-like, much slower requests, but not from humans for sure.

  • montroser 9 hours ago
    I don't know if I can get behind poisoning my own content in this way. It's clever, and might be a workable practical solution for some, but it's not a serious answer to the problem at hand (as acknowledged by OP).
    • n1xis10t 9 hours ago
      “as acknowledged by OP”: that’s funny, if you hadn’t added that to your comment I was about to point it out
  • montroser 9 hours ago
    Reminds me of poisoning bot responses with zip bombs of sorts: https://idiallo.com/blog/zipbomb-protection
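For context, the linked trick boils down to serving a tiny gzip payload that inflates enormously on the client. A hedged sketch of building one (not the blog's actual code; you would only serve this to a client you have already decided is a bot):

```python
import gzip
import io

def make_gzip_bomb(uncompressed_mb: int = 10) -> bytes:
    """Build a small gzip payload inflating to `uncompressed_mb` MB of zeros.

    Served with `Content-Encoding: gzip`, a careless scraper
    decompresses the whole thing in memory.
    """
    buf = io.BytesIO()
    chunk = b"\0" * (1024 * 1024)  # zeros compress extremely well
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        for _ in range(uncompressed_mb):
            gz.write(chunk)
    return buf.getvalue()
```

Ten megabytes of zeros compresses to roughly ten kilobytes, so the bandwidth cost to the server is negligible.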
  • yjftsjthsd-h 12 hours ago
    How does this "look" to a screen reader?
    • misterchocolat 11 hours ago
      the parent container uses display: none, so a screen reader will skip the links
  • samename 9 hours ago
    This is a very creative hack to a common, growing problem. Well done!

    Also, I like that you acknowledge it's a bad idea: that gives you more freedom to experiment and iterate.

  • true_religion 4 hours ago
    So, I work for a company that runs RTA adult websites. AI bots absolutely do scrape our pages regardless of what raunchy material they will find. Maybe they discard it after ingest, but I can’t tell. There are thousands of AI bots on the web now, from companies big and small, so a solution like this will only divert a few scrapers.
  • taurath 10 hours ago
    Any other threads on the prevalence and nuisance of scrapers? I didn’t have any idea it was this bad.
  • inetknght 8 hours ago
    Porn? Distributed and/or managed by an NPM package?

    What could go wrong?

  • xena 7 hours ago
    I love this. Please let me know how well it works for you. I may adjust recommendations based on your experiences.
  • MisterTea 11 hours ago
    > It's you vs the MJs of programming, you're not going to win.

    MJs? Michael Jacksons? Right now the whole world, including me, wants to know if that means they are bad?

    • n1xis10t 11 hours ago
      Yes probably bad. Also smooth criminals.
    • kylecazar 10 hours ago
      I read it as Michael Jordan.
  • cport1 2 days ago
    That's a pretty hilarious idea, but in all seriousness you could use something like https://webdecoy.com/
  • valenceidra 8 hours ago
    Hidden links to porn sites? Lightweights.
    • n1xis10t 8 hours ago
      What do you mean? Would you do even more ridiculous things?
  • wazoox 11 hours ago
    Isn't there a risk to get your blog blocked in corporate environment though? If it's a technical blog that would be unfortunate.
  • JohnMakin 11 hours ago
    Cloudflare offers bot mitigation for free, and pretty generous WAF rules, which makes mitigations like this seem a little overblown to me
    • nospice 3 hours ago
      I'm on the free tier, but I also watch my logs. The vast majority of the traffic I'm getting is scrapers and vulnerability scanners, a lot of them coming through residential proxies and other "laundered" egress points.

      I honestly don't think that Cloudflare is on top of the problem at all. They claim to be blocking abuse, but in my experience, most of the badness gets through.

    • n1xis10t 11 hours ago
      You can’t deny that it’s fun though. Personally I generally feel like more people should be coming up with creative (if not entirely necessary) solutions to problems.
    • conception 11 hours ago
      For “free”.
      • n1xis10t 10 hours ago
        Did you put “free” in quotes because you need to have paid for stuff from cloudflare to use the “free” thing?

        If so, I suppose it’s like those magazines that say "free CD".

    • ATechGuy 8 hours ago
      Is it really free? Genuinely asking.
      • gilrain 7 hours ago
        Yes. They upsell more complete solutions, but the free tier is pretty generous.
  • globalnode 9 hours ago
    One solution would be for the search engines to publish their scraper IPs and allow content providers to implement bot exclusion that way. Or even implement an API with crypto credentials that search engines can use to scrape. The solution is waiting for some leadership from search engines, unless they want to be blocked as well. If they don't want to play, perhaps we can implement a reverse directory: like an ad blocker, but one that lists only good/allowed bots instead. That's a free business idea right there.

    edit: I noticed someone mentioned Google DOES publish its IPs, there ya go, problem solved.

    • n1xis10t 9 hours ago
      Apparently Google publishes its crawler’s IPs; this was mentioned somewhere in this same thread
  • efilife 9 hours ago
    > Alright so if you run a self-hosted blog, you've probably noticed AI companies scraping it for training data. ... There isn't much you can do about it without cloudflare

    I'm sorry, what? I can't believe I am reading this on Hacker News. All you have to do is code your own basic captcha-like system. You can just create a page that sets a cookie using JS and check on the server whether it exists. 99.9999% of these scrapers can't execute JS and don't support cookies. You can go for a more sophisticated approach and analyze some more scraper tells (like rejecting short user agents). I do this and have NEVER had a bot get past it, and not a single user has ever complained. It's extremely simple; I should ship this and charge people, since no one seems able to figure it out by themselves.
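A minimal sketch of the gate efilife describes, with a hypothetical `handle_request` entry point of my own. A real deployment would set a signed, expiring token rather than a fixed cookie value:

```python
from http.cookies import SimpleCookie

# Challenge page: a browser runs the inline script, gains the cookie, and
# reloads; a scraper that never executes JS stays stuck here.
CHALLENGE_PAGE = """<!doctype html>
<script>
  document.cookie = "human=1; path=/";
  location.reload();
</script>"""

def handle_request(cookie_header: str, user_agent: str) -> str:
    """Return 'content' for requests that pass the gate, else the challenge."""
    if len(user_agent) < 40:  # the 'reject short user agents' tell
        return "denied"
    cookies = SimpleCookie(cookie_header or "")
    if cookies.get("human") is not None:
        return "content"
    return CHALLENGE_PAGE
```

The first browser visit gets the challenge, executes it, and the reload carries the cookie; a curl-style scraper never gets past it.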

    • n1xis10t 8 hours ago
      Oops you just leaked your own intellectual property
    • ATechGuy 8 hours ago
      From ChatGPT:

      This approach can stop very basic scripts, but the claim that “99.9999% of scrapers can’t execute JS or handle cookies” isn’t accurate anymore. Modern scraping tools commonly use headless browsers (Playwright, Puppeteer, Selenium), execute JavaScript, support cookies, and spoof realistic user agents. Any scraper beyond the most trivial will pass a JS-set cookie check without effort. That said, using a lightweight JS challenge can be reasonable as one signal among many, especially for low-value content and when minimizing user friction is a priority. It’s just not a reliable standalone defense. If it’s working for you, that likely means your site isn’t a high-value scraping target — not that the technique is fundamentally robust.

      • efilife 8 hours ago
        From someone who actually does this stuff:

        The claim is very accurate. Maybe not for the biggest websites, but very accurate for a self-hosted blog. You are not that important to waste compute power to set up a whole ass headless browser to scrape your page. Why am I even arguing with ChatGPT?

      • phyzome 8 hours ago
        There should be a new rule on HN: No posts that just go "I asked an LLM and it said..."

        You're not adding anything to the conversation.

        • cyphar 2 hours ago
          Yeah, I really have to wonder what the thought process is behind leaving such a comment. When people first started doing it I wondered if it was some kind of guerrilla outrage marketing campaign.
          • efilife 1 hour ago
            Maybe he wanted to verify whether what I was saying was true and asked ChatGPT, then tried to be helpful by pasting the response here?
  • gjs278 6 hours ago
    [dead]
  • username223 12 hours ago
    The more ways people mess with scrapers, the better -- let a thousand flowers bloom! You as an individual can't compete with VC-funded looters, but there aren't enough of them to defeat a thousand people resisting in different ways.
    • whynotmaybe 9 hours ago
      Should we subtly poison every forum we encounter with simple yet false statements?

      Like put "Water is green, supergreen" in every signature, so that when we ask "is water blue" to an LLM it might answer "no, it's supergreen"?

    • yupyupyups 10 hours ago
      We need to find more ways to poison their data.
      • username223 6 hours ago
        > Wee knead two fine-d Moore Waze too Poisson there date... uh.

        Yes. Revel in your creativity mocking and blocking the slop machines. The "remote refactor" command, "rm -rf", is the best way to reduce the cyclomatic complexity of a local codebase.

        • n1xis10t 6 hours ago
          Indeed, complexity (both cyclomatic and post-frontal) must be reduced such that the two spurving bearings make a direct line with the panametric fan.

          For more details consult this instructional video: https://youtu.be/RXJKdh1KZ0w

        • yupyupyups 2 hours ago
          Excellent advice! I tried it out and it helped. Thank you