6 comments

  • forgotpwd16 46 minutes ago

      74910,74912c187768,187779
      < [Example 1: If you want to use the code conversion facetcodecvt_utf8to output tocouta UTF-8 multibyte sequence
      < corresponding to a wide string, but you don't want to alter the locale forcout, you can write something like:\237 D.27.21954
      < \251ISO/IECN4950wstring_convert<std::codecvt_utf8<wchar_t>> myconv;
      < std::string mbstring = myconv.to_bytes\050L"Hello\134n"\051;
      ---
      >
      > [Example 1: If you want to use the code conversion facet codecvt_utf8 to output to cout a UTF-8 multibyte sequence
      > corresponding to a wide string, but you don’t want to alter the locale for cout, you can write something like:
      >
      > § D.27.2
      > 1954
      >
      > © ISO/IEC
      > N4950
      >
      > wstring_convert<std::codecvt_utf8<wchar_t>> myconv;
      > std::string mbstring = myconv.to_bytes(L"Hello\n");
    
    It is indeed faster, but the output is messier. And it doesn't handle Unicode, in contrast to mutool, which does. (That probably also explains the big speed boost.)
    • TZubiri 33 minutes ago
      In my experience with parsing PDFs, speed has never been the issue; it has always been a matter of quality.
      • DetroitThrow 1 minute ago
        I tried a small PDF and got a memory error. It's definitely much faster than MuPDF on that file.
    • lulzx 36 minutes ago
      fixed.
      • forgotpwd16 20 minutes ago
        Yeah, sorry for the confusion. When I said Unicode, I meant foreign text rather than (just) the unescaped symbols, e.g. Greek. On one random Greek textbook[0], zpdf's output is (extract | head -15):

          01F9020101FC020401F9020301FB02070205020800030209020701FF01F90203020901F9012D020A0201020101FF01FB01FE0208 
          0200012E0219021802160218013202120222 0209021D0212021D012E013202200222000301FA021A0220021C022002160213012E0222000F000301F90206012C
        
          020301FF02000205020101FC020901F90003020001F9020701F9020E020802000205020A 
          01FC028C0213021B022002230221021800030200012E021902180216021201320221021A012E00030209021D0212021D012E013202200222000301FA021A0220021C022002160213012E0222000F000301F90206012C 
         
          0200020D02030208020901F90203020901FF0203020502080003012B020001F9012B020001F901FA0205020A01FD01FE0208 
          020201300132012E012F021A012F0210021B013202200221012E0222 0209021D0212021D012E013202200222000301FA021A0220021C022002160213012E0222000F000301F90206012C 
        
        It's like this for the entire book. Mutool extracts the text just fine.

        [0]: https://repository.kallipos.gr/handle/11419/15087

      • TZubiri 32 minutes ago
        Lol, but there are 100 competitors in the PDF text extraction space, and some are multi-million-dollar businesses: AWS Textract, ABBYY FineReader, PDFBox. I think you may be underestimating the challenge here.
  • lulzx 3 hours ago
    I built a PDF text extraction library in Zig that's significantly faster than MuPDF for text extraction workloads.

    ~41K pages/sec peak throughput.

    Key choices: memory-mapped I/O, SIMD string search, parallel page extraction, streaming output. Handles CID fonts, incremental updates, all common compression filters.

    ~5,000 lines, no dependencies, compiles in <2s.

    Why it's fast:

      - Memory-mapped file I/O (no read syscalls)
      - Zero-copy parsing where possible
      - SIMD-accelerated string search for finding PDF structures
      - Parallel extraction across pages using Zig's thread pool
      - Streaming output (no intermediate allocations for extracted text)
    
    What it handles:

      - XRef tables and streams (PDF 1.5+)
      - Incremental PDF updates (/Prev chain)
      - FlateDecode, ASCII85, LZW, RunLength decompression
      - Font encodings: WinAnsi, MacRoman, ToUnicode CMap
      - CID fonts (Type0, Identity-H/V, UTF-16BE with surrogate pairs)
    • tveita 2 hours ago
      What kind of performance are you seeing with/without SIMD enabled?

      From https://github.com/Lulzx/zpdf/blob/main/src/main.zig it looks like the help text cites an unimplemented "-j" option to enable multiple threads.

      There is a "--parallel" option, but that is only implemented for the "bench" command.

      • lulzx 2 hours ago
        I have now made parallel extraction the default and added an option to set the number of threads.

        I haven't tested without SIMD.

    • cheshire_cat 2 hours ago
      You've released quite a few projects lately, very impressive.

      Are you using LLMs for parts of the coding?

      What's your work flow when approaching a new project like this?

      • lulzx 1 hour ago
        Claude Code.
      • littlestymaar 1 hour ago
        > Are you using LLMs for parts of the coding?

        I can't talk about the code, but the readme and commit messages are most likely LLM-generated.

        And when you take into account that the first commit happened just three hours ago, it feels like the entire project has been vibe coded.

        • Neywiny 1 hour ago
          Hard disagree. The initial commit was 6k LOC. The author could've spent years on it before committing. Ill-advised, but not impossible.
          • littlestymaar 56 minutes ago
            Why would you make Claude write your commit message for a commit you've spent years working on though?
            • Neywiny 44 minutes ago
              1. Be not good at or a fan of git when coding

              2. Be not good at or a fan of git when committing

              Not sure what the disconnect is.

              Now if it were vibe-coded, I wouldn't be surprised. But benefit of the doubt.

    • jeffbee 1 hour ago
      What's fast about mmap?
    • jonstewart 1 hour ago
      What's the fidelity like compared to Tika?
      • lulzx 1 hour ago
        The accuracy difference is marginal (1-2%) but the speed difference is massive.
  • mpeg 2 hours ago
    Very nice. It'd be good to see a feature comparison: when I use MuPDF it's not really just about speed but about the level of support for all kinds of obscure PDF features, and the accuracy of the built-in algorithms for things like handling two-column pages, identifying paragraphs, etc.

    The licensing is a huge blocker for using MuPDF in non-OSS tools, so it's very nice to see this is MIT.

    Python bindings would be good too.

  • odie5533 2 hours ago
    Now we just need Python bindings so I can use it in my trash language of choice.
  • agentifysh 2 hours ago
    Excellent stuff. What makes Zig so fast?
    • observationist 1 hour ago
      Not being slow: it compiles straight to machine code, isn't interpreted, and has aggressive, opinionated optimizations baked in by default, so it can even be faster than compiled C (under default conditions).

      Contrast that with Python, which is interpreted, has a clunky runtime, minimal optimizations, and all sorts of design choices that add up to slow, redundant performance.

      The price for that performance is giving up safety checks and redundancy: things can go badly wrong.

      A good compromise is LuaJIT: you get some of the same aggressive optimizations in an interpreted language, with better-than-C performance in places, interpreted-language convenience, and access to low-level things that can explode just as spectacularly as in Zig or C, and it's a beautiful language to boot.

      • Zambyte 1 hour ago
        Zig is safer than C under default conditions, not faster. By default it does a lot of illegal-behavior safety checking, such as array and slice bounds checking, numeric overflow checking, and invalid union access checking. These features are disabled by certain (non-default) build modes, or can be explicitly disabled at a per-scope level.

        It may be easier to write code that runs faster in Zig than in C under similar build optimization levels, because writing high-performance C code looks a lot like writing idiomatic Zig code. The Zig standard library offers a lot of structures like hash maps, SIMD primitives, and allocators with different performance characteristics to better fit a given use case. C application code often skips these things simply because they are a lot more friction to use in C than in Zig.

      • agentifysh 1 hour ago
        Will add this to the list. Learning new languages is less of a barrier now with LLMs.
    • AndyKelley 1 hour ago
      It makes your development workflow smooth enough that you have the time and energy to do stuff like all the bullet points listed in https://news.ycombinator.com/item?id=46437289
      • forgotpwd16 39 minutes ago
        >you have the time and energy to do stuff like all the bullet points listed

        Don't disagree, but in this specific case, per the author, the project was made via Claude Code. Although it could just as well be that Zig is a better LLM target; I've noticed many new vibe-coded projects choose Zig.

  • littlestymaar 1 hour ago
    - First commit: 3 hours ago.

    - Commit messages: LLM-generated.

    - README: LLM-generated.

    I'm not convinced that projects vibe coded over the evening deserve the HN front page…

    Edit: and of course the author's blog is also full of AI slop…

    2026 hasn't even started and I already hate it.

    • dmytrish 42 minutes ago
      ...and it does not work. I tried it on ~10 random PDFs, including very simple ones (e.g. a hello world from typst), and it segfaults on every single one.
      • forgotpwd16 35 minutes ago
        Tried a few and it works. Maybe you have an older or newer Zig version than whatever the project targets. (Mine is 0.15.2.)
        • dmytrish 18 minutes ago

             ~/c/t/s/zpdf (main)> zig version
             0.15.2
          
          Sky is blue, water is wet, slop does not work.
    • kingkongjaffa 1 hour ago
      Wait, but why?

      If it's really better than what we had before, what does it matter how it was made? It's literally hacked together with the tools of the day (LLMs). Isn't that the very hacker ethos? Patching stuff together that works in a new and useful way.

      5x speed improvements on pdf text extraction might be great for some applications I'm not aware of, I wouldn't just dismiss it out of hand because the author used $robot to write the code.

      Presumably the thought to make the thing in the first place, and deciding what features to add and not add, was more important than how the code was generated?