Mistral Medium 3.5

(mistral.ai)

204 points | by meetpateltech 2 hours ago

20 comments

  • simjnd 1 hour ago
    I'm not sure what people are on in the comments. It doesn't beat the other models, but it sure competes despite its size.

    GLM 5.1 is an excellent model, but even at Q4 you're looking at ~400GB. Kimi K2.5 is really good too, and at Q4 quantization you're looking at almost ~600GB.

    This model? You can run it at Q4 with 70GB of VRAM. This is approaching consumer level territory (you can get a Mac Studio with 128GB of RAM for ~3500 USD).
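
    Back-of-the-envelope, if you want to sanity-check those sizes (my own sketch; it counts weights only and ignores KV cache and runtime overhead):

      # Rough weight footprint for a dense model at a given quantization.
      # Sketch only: ignores KV cache, activations, and runtime overhead.
      def weight_gb(params_billions: float, bits_per_weight: float) -> float:
          return params_billions * bits_per_weight / 8  # billions of params * bits -> GB

      print(weight_gb(128, 4))   # 64.0 -> roughly the ~70GB Q4 figure once overhead is added
      print(weight_gb(128, 16))  # 256.0 -> unquantized bf16, for comparison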

    For the Claude-pilled people, I don't know if you only run Opus but when I was on the Pro plan Sonnet was already extremely capable. This beats the latest Sonnet while running locally, without anyone charging you extra for having HERMES.md in your repo, or locking you out of your account on a whim.

    Mistral has never been competitive at the frontier, but maybe that is not what we need from them. Having Pareto models that get you 80% of the frontier at 20% of the cost/size sounds really good to me.

    • UncleOxidant 2 minutes ago
      Yeah, you can run it locally if you have enough VRAM, but the reports trickling in are saying about 3 tok/sec. That was on a Strix Halo box, which definitely has the needed VRAM but isn't going to have as high memory bandwidth as a GPU card, and it's going to be similar on a Mac - that's the dilemma... the unified-memory machines have the VRAM, but the bandwidth isn't great for running dense models. A dense model this size is only going to be usefully runnable by the very few people with multiple GPU cards whose memory adds up to about 70GB.
    • gregsadetsky 57 minutes ago
      I didn't know about HERMES.md ... (??) - found information here for others who are curious https://github.com/anthropics/claude-code/issues/53262
      • giancarlostoro 4 minutes ago
        That is insane. If you billed me an extra $200 for a bug in your system, I'd flat out cancel my subscription. If you're not going to credit that back to me, you don't deserve any more of my money. I'm a Claude-first guy, but if you're going to bill me incorrectly, that's on you: own it, fix it.
    • giancarlostoro 8 minutes ago
      > For the Claude-pilled people, I don't know if you only run Opus but when I was on the Pro plan Sonnet was already extremely capable.

      Before February I was able to use Opus on High exclusively on my Max plan, no problem. Now I've shifted to just using Sonnet on High, and yeah, it's pretty capable. I love that, Claude-pilled. ;)

    • Aurornis 36 minutes ago
      > This model? You can run it at Q4 with 70GB of VRAM. This is approaching consumer level territory (you can get a Mac Studio with 128GB of RAM for ~3500 USD).

      The one thing I would want everyone curious about local LLMs to know is that being able to run a model and being able to run a model fast are two very different thresholds. You can get these models to run on a 128GB Mac, but you first need to check whether Q4 retains enough quality (models have different sensitivities to quantization) and how fast it runs.

      For running async work and background tasks the prompt processing and token generation speeds matter less, but a lot of Mac Studio buyers have discovered the hard way that it's not going to be as responsive as working with a model hosted in the cloud on proper hardware.

      For most people without hard requirements for on-site processing, the best use case for this model would be going through one of the OpenRouter hosted providers for it and paying by token.

      > This beats the latest Sonnet while running locally

      Almost every open weight model launch this year has come with claims that it matches or exceeds Sonnet. I've been trying a lot of them and I have yet to see it in practice, even when the benchmarks show a clear lead.

      • zozbot234 32 minutes ago
        Cloud hardware is not inherently more "proper" than what's being proposed here; there's nothing wrong per se about targeting slower inference speeds in an on-prem single-user context.
        • Aurornis 28 minutes ago
          > Cloud hardware is not inherently more "proper" than what's being proposed here

          Cloud hardware can run the original model. Quantization will reduce quality. The quality drop to Q4 is not trivial.

          Cloud hardware is also massively faster in time to first token and token generation speed.

          > there's nothing wrong per se about targeting slower inference speeds in an on-prem single-user context.

          If that's what the user wants and expects, then it's fine.

          Most people working interactively with an LLM would suffer from slower turns.

        • cbg0 27 minutes ago
          Quantization can be very detrimental for some models, and their quality can drop considerably from the posted benchmarks (which are probably at bf16). This is why having considerable RAM can be important.
    • liuliu 17 minutes ago
      The competition is DeepSeek v4 Flash, for a similar size / deployment target.
    • YetAnotherNick 48 minutes ago
      It has a similar SWE-bench score to Qwen 3.6 27B [1]. No one is comparing it to the frontier.

      [1]: There is no other common benchmark in the blog.

    • 2ndorderthought 35 minutes ago
      The point is that it's open weight and is tiny compared to a lot of its competitors. 4 GPUs for world-class performance - sweet!
    • DeathArrow 31 minutes ago
      > This model? You can run it at Q4 with 70GB of VRAM.
      > This beats the latest Sonnet while running locally

      Not sure it will beat Sonnet at Q4.

      > This is approaching consumer level territory (you can get a Mac Studio with 128GB of RAM for ~3500 USD).

      For $3500 I can get 7-8 years of GLM using coding plans, have a faster model and much better code quality.

      • kobalsky 8 minutes ago
        > For $3500 I can get 7-8 years of GLM

        mind sharing where the go-to place is to pay for open models?

    • redrove 1 hour ago
      It's a 128B dense model. Good luck getting more than 3 t/s out of a Mac. It doesn't matter whether it fits or not.
      • zozbot234 35 minutes ago
        You could run it on a single Mac Studio with an M3 Ultra, or on two Mac Studios with M4 Max at higher perf than that. And lightly quantizing this could give us a modern dense model in the ~80GB range, which is a very compelling target.
        • freakynit 31 minutes ago
          Wouldn't matter much still. The M3 Ultra has 819GB/s of unified memory bandwidth. That means the theoretical max token rate is 819/128 ≈ 6.39 t/s. At 80GB (5-bit quantization), it's still only about 10 t/s... far from a good coding experience. Also, these are theoretical maxima; real-world token generation rates would be at least 15-20% lower.
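
          As a sketch of that math (assuming decode is purely memory-bandwidth-bound, i.e. every generated token streams all the weights through memory once):

            # Decode ceiling for a dense model: t/s <= bandwidth / weight size.
            # Sketch only; real-world rates run 15-20%+ below this.
            def max_tps(bandwidth_gbs: float, weights_gb: float) -> float:
                return bandwidth_gbs / weights_gb

            print(max_tps(819, 128))  # ~6.4 t/s on M3 Ultra with ~128GB of weights
            print(max_tps(819, 80))   # ~10.2 t/s at ~80GB (5-bit quant)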
    • freakynit 42 minutes ago
      I was hoping for a lot from it... but this one is not up to that mark. For example, here is its comparison with a 4.7x smaller model, qwen3.6-27b.

      https://chatgpt.com/share/69f239e8-7414-83a8-8fdd-6308906e5f...

      Tldr: qwen3.6-27b, a 4.7x smaller model, has similar performance.

      • r0b05 39 minutes ago
        That's a ChatGPT summary. Actual usage would be a better test.
        • freakynit 35 minutes ago
          yep.. until then, this is good enough since the tests are standard, and the results are numeric and can be compared without any doubt.
      • lostmsu 36 minutes ago
        To be fair, the MoE from Qwen itself had the same "problem": 3.5 122B MoE was the same as or worse than 3.5 27B. Yet to see a 122B 3.6.

        UPD. NVM, Mistral Medium 3.5 is dense. So yes, it is worse in every way.

  • vessenes 1 hour ago
    As always, rooting for these guys — model and national diversity is great. This looks like a solid foundation to build on; hopefully the 3.6/3.7 will dial in more gains. It looks like maybe from the computer use benchmarks that their vision pipeline could use improvement, but that’s just speculation.

    The different results on some benchmarks vibe as if this is a truly independently trained model, not just exfiltrated frontier logs, which I think is also really important - having different weight architectures inside a particular model seems like a benefit on its own when viewed from a global systems-architecture perspective.

  • mtct88 2 hours ago
    It's okay, nothing exceptional, but any news from non US and non Chinese models is still good news.
    • pb7 2 hours ago
      This is the bar for Europe, huh?
      • deaux 1 hour ago
        Where are the competitive models from Singapore, Japan, Taiwan, Korea, Russia, Canada, India, the UK? From anywhere that isn't China or the US?

        There are none. Mistral Small 4 is pareto-competitive in its pricing bracket at $0.15/$0.60, at worst it's second to Gemma 4 26B A4B. The above countries have never had a model that is even close to being so.

        This particular Mistral Medium looks to be uncompetitive at that pricing. I'm surprised it's so expensive given its size. Wonder if we'll see other providers offer it for cheaper.

        But that doesn't mean Mistral has never produced anything useful.

        • argsnd 1 hour ago
          DeepMind, which is headquartered in London, probably had a significant role in the development of the Gemini and Gemma models.

          Yes, it might be a problem that the UK allows companies like this to be bought up by foreign companies.

          • wasfgwp 37 minutes ago
            Without Google's funding, it's not obvious DeepMind would have gone anywhere.

            Unless they moved to the US for funding while keeping a back office in the UK.

            It's strange to expect anything significant to come out of Europe when VCs there are either very risk-averse and/or don't have enough cash to begin with. It's not like government or EU funding can replace that, since it's almost always wasted or misdirected.

        • johndough 1 hour ago
          > Korea

          EXAONE from LG AI Research https://huggingface.co/LGAI-EXAONE

          They had one of the best small models a few months ago and they released a new model just last week.

          There's also HyperCLOVA X (haven't tested it, but maybe it is also good) https://huggingface.co/naver-hyperclovax

          > India

          India has the Sarvam model series, which admittedly are not SotA, but they have pretty good voice capabilities https://huggingface.co/sarvamai

          The UAE (not part of the list above) also has a few noteworthy models: https://huggingface.co/tiiuae

          • deaux 42 minutes ago
            I'm familiar with those models. They're nowhere near competitive. Miles away from Mistral or (obviously) Chinese models.

            > (haven't tested it, but maybe it is also good)

            I have. It is not.

            • johndough 36 minutes ago
              You mentioned "pareto-competitive", and EXAONE certainly was that. The statement that the "above countries have never had a model that is even close to being so" is simply too broad.
          • cyanydeez 39 minutes ago
            They should ask unsloth to follow them. For my use cases locally w/128GB, Qwen3.5-Coder-Next is SOTA.
        • class4behavior 1 hour ago
          Although the Manus decision might change things for AI, Singapore-washing is quite rampant among Chinese companies, so I wouldn't count Singapore as an alternative place of origin.
      • amunozo 1 hour ago
        This is the bar for anybody that's not the frontier labs.
      • locknitpicker 1 hour ago
        > This is the bar for Europe, huh?

        A few months ago China was being criticized left and right for somehow not being able to compete, and once DeepSeek showed up, all the hatred shifted onto how China was actually competing but exploiting unfair competitive advantages.

        Funny how that works.

        Also, aren't the likes of OpenAI burning through over $2 of investment for each $1 of revenue?

        • pb7 1 hour ago
          China is not competing, it is distilling US models. Where are the Chinese models that are blowing US ones out of the water? There aren't any. The US continues to innovate, China replicate, Europe regulate. As is tradition.

          >Also, aren't the likes of OpenAI burning through over $2 of investment for each $1 of revenue?

          Yes, innovation costs money.

          Edit: In response to below, EUV machines use tech licensed from the US, so yes, the US worked on them.

          • 2ndorderthought 10 minutes ago
            I find it funny how people don't realize the technical achievements and papers coming out of deepseek or Alibaba. They are making this whole AI thing sustainable and cheap and available to do at home. That's the future. I should be able to run my own harness and model and never bother with openai or anthropic at all.
          • nickthegreek 1 hour ago
            Two businesses working to get money from the same customers in the same field is competition. Kellogg's is competing with store-brand cereal. People are choosing to use these Chinese AI APIs because they are good enough for some workflows and cheaper. If they didn't exist, the money would go to the frontier labs. There is no world where this would not be defined as competition.
          • tirpen 1 hour ago
            > China is not competing, it is distilling US models

            China are cheating by using data obtained without permission to train their models in an evil commie way!

            They should have done what the US did instead and trained models on data obtained without permission in a fair and freedum way!

            > Where are the Chinese models that are blowing US ones out of the water?

            Kimi K2 blows every US model out of the water in any comparison that includes both cost and performance.

            • 2ndorderthought 31 minutes ago
              Qwen3.6 runs on a single GPU and beats Claude Sonnet, in benchmarks and in real-world tests from humans. Kimi is awesome, but most people won't be able to host it themselves.

              A lot of people are slowly realizing the moat of 1T closed-source models is gone as of the last few weeks. It's going to change the industry. April was a huge month for open models; it'll be interesting to see if that continues.

              This Mistral submission is another nail in the coffin.

              • wasfgwp 29 minutes ago
                > beats Claude Sonnet

                Based on benchmarks which don’t mean that much these days.

                > models is gone as of the last few weeks.

                Yes, that’s exactly what people were saying after every major release for the past year or so. It’s always a couple of weeks away

              • prodigycorp 24 minutes ago
                i run qwen 3.6. you need to drink some settle down juice.
          • locknitpicker 1 hour ago
            > China is not competing, it is distilling US models.

            I think you should check your notes. The likes of Kimi K2 Thinking show up as high as the second best general purpose model currently in existence. It seems they compete just fine.

            If you believe "distilling" is all it takes to put together a model at the top of any synthetic benchmark then I wonder what you would have to say about all US models that greatly underperform in comparison and still manage to be used extensively in professional settings.

            But your argument is an emotional one and not a rational one, isn't it?

            • wasfgwp 31 minutes ago
              > high as the second best general purpose model

              According to benchmarks which are gamed to the extreme these days. Trusting them blindly isn’t exactly rational either. They don’t necessarily translate that well to real world tasks

              It's obviously not "distilling" as such, but there are reasons why Chinese models are consistently several months behind OpenAI/Anthropic.

          • Jackpillar 35 minutes ago
            Theft is quite a slippery-slope argument that's not in your favor in the context of US-based LLMs and how/on what they were trained.
          • sagacity 1 hour ago
            Ah yes, like those EUV machines America and China have worked on.
      • wg0 1 hour ago
        I don't mind the Chinese, but the US under Trump is pretty much a fascist state on ethnic and theological grounds, or soon will be if the electorate doesn't decide otherwise.

        China and the rest of the world have sane leadership.

        • gadders 41 minutes ago
          Yes, China is a much freer and more democratic country than the USA. It's not like you can get a Uyghur killed to order for a new kidney or anything.
          • 2ndorderthought 5 minutes ago
            I would rather support Chinese tech companies than American ones who write manifestos, bomb children, praise WWII Germany, can't stay online, are publicly making weapons for wars I don't support, etc.

            Chinese AI companies are just trying to make money. They are also publicly contributing to advancing the field. We all get to decide, but claiming DeepSeek is involved in genocide is beyond a stretch. Claiming Anthropic and ChatGPT are... actually not so much, given the president was threatening it and enabling it with an ally...

      • saulapremium 35 minutes ago
        You wouldn't happen to be a Trump supporter by any chance, would you?
  • schipperai 13 minutes ago
    With most OSS releases being MoEs, and modern GPUs optimized for MoEs, can somebody with knowledge of the topic explain or speculate why Mistral might have opted for a dense model?
  • Mashimo 59 minutes ago
    Compared to all other hosted LLMs that I have tested, Mistral seems to be the only one with rather strict CSP headers. When you ask it to create a website with some JavaScript library, it will not preview, even though Le Chat offers a canvas mode.

    Sometimes when a new release comes around from any provider, I just want to test it a bit on the web, without paying and without using an agent harness.

    Why are they like this ;_;

    Edit: Christ on a bike it's bad at drawing SVGs https://chat.mistral.ai/chat/23214adb-5530-4af9-bb47-90f5219...

    • 2ndorderthought 29 minutes ago
      I have never wanted, needed or hoped to draw svgs with an LLM. All of the models suck at it, some are just more fun or something.
      • Mashimo 9 minutes ago
        I can't speak for what you consider sucking, but there is a significant difference between Mistral and Kimi or Gemini. I find the others to be usable for my needs.
        • 2ndorderthought 2 minutes ago
          I agree there is a difference, but does that translate to anything? It's not the same set of operations used to write code, and it's kind of useless. I wouldn't waste my power bill ensuring a model I was releasing was good at it.
  • maelito 42 minutes ago
    Given what Vibe already did in the previous versions with codestral-v2, that's great news. Keep up the good work! I don't want to depend on the world's two hungry superpowers.
  • simonw 1 hour ago
    I can't figure out if this is available in the official Mistral API or not.

    Their model listing API returns this:

      {
        "id": "mistral-medium-2508",
        "object": "model",
        "created": 1777479384,
        "owned_by": "mistralai",
        "capabilities": {
          "completion_chat": true,
          "function_calling": true,
          "reasoning": false,
          "completion_fim": false,
          "fine_tuning": true,
          "vision": true,
          "ocr": false,
          "classification": false,
          "moderation": false,
          "audio": false,
          "audio_transcription": false,
          "audio_transcription_realtime": false,
          "audio_speech": false
        },
        "name": "mistral-medium-2508",
        "description": "Update on Mistral Medium 3 with improved capabilities.",
        "max_context_length": 131072,
        "aliases": [
          "mistral-medium-latest",
          "mistral-medium",
          "mistral-vibe-cli-with-tools"
        ],
        "deprecation": null,
        "deprecation_replacement_model": null,
        "default_model_temperature": 0.3,
        "type": "base"
      },
    
    So that has the alias "mistral-medium-latest", but the official ID is "mistral-medium-2508" which suggests it's the model they released in August 2025.

    But... that 1777479384 timestamp decodes to Wednesday, April 29, 2026 at 04:16:24 PM UTC
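
    (Decoding it in Python, for anyone who wants to check:)

      from datetime import datetime, timezone

      # Decode the "created" field from the model listing above.
      print(datetime.fromtimestamp(1777479384, tz=timezone.utc))
      # -> 2026-04-29 16:16:24+00:00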

    So is that the new Mistral Medium?

    • simonw 1 hour ago
      Some poking around in the source code for https://github.com/mistralai/mistral-vibe got me to this:

        curl https://api.mistral.ai/v1/chat/completions \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $(llm keys get mistral)" \
        -d '{
          "model": "mistral-medium-3.5",
          "messages": [
            {"role": "user", "content": "Generate an SVG of a pelican riding a bicycle"}
          ]
        }'
      
      Which did work: https://gist.github.com/simonw/f3158919b18d2c47863b0a5dc257a... - it's pretty disappointing.

      Weird that it doesn't show up in the model list:

        curl https://api.mistral.ai/v1/models \
          -H "Content-Type: application/json" \
          -H "Authorization: Bearer $(llm keys get mistral)" | jq
      • Mashimo 53 minutes ago
        I also did some SVG tests, it's really bad.

        https://chat.mistral.ai/chat/897fbe7d-b1ae-4109-9b29-f3ccc4f...

        • spijdar 44 minutes ago
          Wow. I get that "how well can it make SVGs" isn't the (or a) gold standard for how useful a model is or isn't, but the fact that the Gemma 4 26B A4B I'm running locally can blow it out of the water doesn't give me high confidence in the model. Maybe an unfair comparison, but...
          • Mashimo 28 minutes ago
            It's so bad I don't want to spend the 18 EUR just to test it for a month. It can't even create an SVG of the Facebook logo. There should be plenty of examples of that around.

            Gemini fast could do that in under 5 seconds.

          • 2ndorderthought 23 minutes ago
            It sounds like they focused performance on not drawing SVGs. Which, honestly, makes a lot of sense to me.
            • spijdar 11 minutes ago
              Drawing SVGs isn't something I really care about either, and I think it's still useful to "qualitatively compare" e.g. "Opus's pelican vs GPT's pelican vs GLM's pelican" or whatever the kids are doing.

              But what stands out to me is that it's barely able to draw a "recognizable" pelican at all. The Devstral 2 model even looks slightly better, though maybe I'm splitting hairs: https://simonwillison.net/2025/Dec/9/

          • cyanydeez 36 minutes ago
            I'm curious: are you doing a real apples-to-apples comparison, or are you running a harness that already curates prompts? There's a wide margin in how any of these models respond based on already-loaded context. Most models are pretty much hot garbage until their context is curated appropriately.
            • spijdar 28 minutes ago
              I just copied and pasted each prompt as specified by Mashimo and simonw into a chat interface, using a 4-bit Unsloth quantization of Gemma 4 26B, with the default sampler settings recommended by Google, and a system prompt of "You are a helpful assistant". The results are miles ahead of what the Mistral model output.

              I've gotten a lot of use out of Mistral models, and I imagine this model is pretty good at other things, but it really feels like a 128B parameter dense model should be at least a little better than this.

  • minimaxir 1 hour ago
    It's funny that 128B is now considered Medium. I remember back in the day when 355M parameters was considered medium with GPT-2.
    • speedgoose 1 hour ago
      And GPT-2 1.5B was considered too dangerous to release.

      They were perhaps right.

  • postalcoder 1 hour ago
    This Mistral release really reminds you of the gap between the frontier labs and everyone else.

    Pre-agent, there wasn't always an obvious difference between models. Various models had their charms. Nowadays, I don't want to entertain anything less than the frontier models. The difference in capability is enormous and choosing anything less has a real cost in terms of productivity.

    I've been a big fan of the smaller labs like Mistral and especially Cohere but it's been a while since I've been excited by a release by either company.

    That said, I'm using Mistral's Voxtral realtime daily – it's great.

    • deaux 1 hour ago
      Can't agree at all. The productivity gap between frontier and non-frontier models was much larger just one year ago, let alone two years ago.
      • postalcoder 1 hour ago
        When I was thinking pre-agentic, I was actually thinking more pre-"coding seen as the main use case for these models".
        • deaux 39 minutes ago
          Coding has always been the main real-world business use case since day one. There has been no point since the very first public availability of GPT-3.5 in November 2022 when it wasn't.

          A lot of us have been agentic coding since almost 2 years ago, mid-2024. I have. The productivity gap of "best vs 2nd vs 3rd best model" was biggest back then and has slowly been shrinking ever since.

    • onlyrealcuzzo 1 hour ago
      > Pre-agent, there wasn't always an obvious difference between models. Various models had their charms. Nowadays, I don't want to entertain anything less than the frontier models. The difference in capability is enormous and choosing anything less has a real cost in terms of productivity.

      It's just apples to oranges.

      There is not a clear, across the board, winner on non-agentic tasks between Gemini, ChatGPT, and Claude - the simple chatbot interface.

      But Claude Code is substantially better than Codex which itself is notably better than Gemini-cli.

      In this vein, it should not be surprising that Claude Code is way better than non-frontier models for agentic coding... It's substantially better than other frontier models at specialized agentic tasks.

      • philipbjorge 1 hour ago
        I've been comparing Claude Code and Codex extensively side by side over the past couple of weeks with my favorite prompting framework, superpowers…

        From my perspective, Claude Code is decidedly not better than Codex. They’re slightly different and work better together. I would have no issues dropping CC entirely and using codex 100%.

        If you’re working off of “defaults”, in other words no custom prompting, Claude Code does perform a lot better out of the box. I think this matters, but if you’re a professional software developer, I’d make the case that you should be owning your tools and moving beyond the baked in prompts.

      • postalcoder 1 hour ago
        I think there's a fair amount of evidence that the heavy harnesses actually drag down performance compared to bare harnesses.
      • nothinkjustai 1 hour ago
        CC is not better than Codex, nor is it better than OpenCode, Crush, Pi etc…
    • locknitpicker 1 hour ago
      > Pre-agent, there wasn't always an obvious difference between models. Various models had their charms. Nowadays, I don't want to entertain anything less than the frontier models.

      This is a very naive and misguided opinion. In most tasks, including complex coding tasks, you can hardly tell the difference between a frontier model and something like GPT-4.1. You need to really focus on areas such as context window, tool calling and specific aspects of reasoning steps to start noticing differences. To make matters worse, frontier models are taking a brute-force approach to results, which ends up making them far more expensive to run, both in terms of what shows up on your invoice and how much longer you have to wait to get any semblance of output.

      And I won't even go into the topic of local models.

      • postalcoder 1 hour ago
        > You need to really focus on areas such as context window, tool calling and specific aspects of reasoning steps to start noticing differences.

        This is like saying "the current models and the old models are the same if you ignore every important advance they've made"

  • seb_lz 51 minutes ago
    I'm using mistral-medium-2508 for some text transformation operations. It's giving me better results than mistral-large for my use cases. Looking forward to testing this new model, although I'm not sure if it's really meant to replace the previous medium model, since it's a lot more expensive and is presented more as a coding/agentic model (mistral-medium-2508 was priced at $0.4/$2 per 1M tokens; mistral-medium-3.5 is $1.5/$7.5).
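
    For scale, that repricing works out to a uniform 3.75x on both input and output:

      # Price jump from mistral-medium-2508 to mistral-medium-3.5 (per 1M tokens).
      old_in, old_out = 0.40, 2.00
      new_in, new_out = 1.50, 7.50
      print(new_in / old_in, new_out / old_out)  # 3.75 3.75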
  • syntaxing 47 minutes ago
    This is a very interesting strategy that might pay off. This model is a very good option for enterprise self-hosting. I would argue a lot of companies are VRAM-constrained rather than compute-constrained. You could fit 4-5 running instances on one H100 cluster, where you can only fit 1-2 instances of Kimi K2 or GLM 5.
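
    A rough capacity check (my own sketch, assuming an 8x 80GB H100 node, the ~140GB checkpoint size quoted elsewhere in the thread, and ignoring KV-cache and sharding overhead):

      # How many model copies fit in one 8x H100 (80GB) node? Sketch only:
      # ignores KV cache and tensor-parallel sharding overhead.
      NODE_HBM_GB = 8 * 80  # 640GB of HBM total

      def instances(model_gb: float) -> int:
          return int(NODE_HBM_GB // model_gb)

      print(instances(140))  # 4 copies of a ~140GB Mistral Medium checkpoint
      print(instances(600))  # 1 copy of a ~600GB Kimi K2-class Q4 checkpoint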
  • Alifatisk 15 minutes ago
    A 1000B model, can we call it 1KB model?
  • mark_l_watson 1 hour ago
    I like the idea of Mistral, but the last time I evaluated Mistral Vibe it was really nice for $15/month but not as effective as Gemini Plus with AntiGravity and gemini-cli. I am currently running Gemini Ultra on a 3 month 'special deal' and AntiGravity with Opus 4.7 tokens is pretty much fantastic.

    That said, when I stop spending money on Gemini Ultra, I will give Mistral Vibe another 1-month test.

    I like the entire business model and vibe of Mistral so much more than OpenAI/Anthropic/Google but I also have stuff to get done. I am curious if Mistral Vibe for $15/month is a stable business model (i.e., can they make a profit).

    • danelski 6 minutes ago
      How do you feel about the responsiveness of gemini-cli? I tried it on a paid plan and the 10-minute hang-ups (per step, not the whole plan execution) really break the illusion of performance gains, unless you run it in the background and do something else in the meantime. It's more noticeable when Americans are awake.
    • amunozo 1 hour ago
      I'm testing it right now and it seems very buggy and unstable, just like before.
  • Tepix 1 hour ago
    I use Mistral Le Chat quite a bit.

    One thing in particular I was disappointed in was its bad explanations when I asked about French grammar. It made multiple mistakes where the other models got it right, even Qwen 3.6 27b!

    Anyway, I'm hoping they catch up some more.

    • kubb 1 hour ago
      There's a good chance that they'll catch up. The "AI race" is a race to the bottom, with the leaders blowing huge wads of cash on capabilities that get replicated months later by the competition at a fraction of the cost.

      The only benefit of leading is mindshare. OpenAI is doubling down on that, by investing in communication companies. That's their pathetic attempt at a "moat".

      • pb7 1 hour ago
        They catch up by distilling frontier models. They will eventually figure out how to prevent that from happening. No one has any interest in investing tens of billions if the product can be copied and sold for less.
        • amarcheschi 1 hour ago
          >No one has any interest in investing tens of billions if the product can be copied and sold for less.

          That is what has happened until now though

  • wyre 1 hour ago
    I'm rooting for Mistral. It seems they are making a big bet that smaller models will win over larger ones, and I can see it happening. I was running some simple chat and tool-calling benchmarks for small models, and Mistral Small 4 performed well for its price ($.15/$.60). Seeing this today got me excited, and the benchmarks seem solid compared to much larger models, but it's priced higher than Haiku, 5.4 mini, and all the Chinese models it's comparing itself to. It's not even winning those benches, just being competitive with them, which is great since those models are 5x+ the size, but they are also half the price. Hard to be excited about that.
  • spwa4 2 hours ago
    TLDR: Mistral Medium 3.5, text-only, 128B dense model, 256k context window, modified MIT license. Model is ~140G ...

    https://huggingface.co/mistralai/Mistral-Medium-3.5-128B

    They more or less claim this exceeds Claude Sonnet 3.5 on most things, but is worse than Sonnet 3.6, and exceeds all other open models.

    Oh and they have a cloud service that will code your apps "in the cloud". But, yeah, at this point, so does my cat.

    And, yes, unsloth is on it: https://huggingface.co/unsloth/Mistral-Medium-3.5-128B-GGUF (but 4bit quant is 75G)

    • wolttam 1 hour ago
      Sonnet 4.5 and 4.6*

      There is no way it exceeds “all other” open models - but it does exceed all of Mistral’s past models.

      You can see it getting blown past by GLM 5.1 and Kimi in this.

      Still excited to give it a try

      • 2ndorderthought 19 minutes ago
        It looks like Qwen 3.6 is winning, and is smaller, in the April small-model rollout.
    • pama 1 hour ago
      Unfortunately they only compare to an old set of "all other open models". There are probably over 10 other open models better than it by now.
    • Marciplan 1 hour ago
      You mean Sonnet 4.5 and 4.6 riight
      • spwa4 51 minutes ago
        right
  • Giorgi 1 hour ago
    Oh, they are still a thing?! Completely forgot about Mistral. I am assuming they are still burning through investor money.
    • danelski 0 minutes ago
      I believe they'll get profitable sooner than their frontier competition. Their operating costs seem to be peanuts compared to the providers they're most often compared to, while they have the local advantage of being neither Chinese nor American.
    • sev_verso 1 hour ago
      What's better than Voxtral for locally processed voice input? More competition is always better.
  • amunozo 2 hours ago
    I want to believe it's gonna be good, but after trying GPT-5.5 even the most advanced Chinese models seem depressing.
    • r0b05 2 hours ago
      This is a French model sir
      • spwa4 2 hours ago
        Évidemment

        Funny detail: Google AI (the one they use in search) can't spell évidemment correctly.

        • baq 2 hours ago
          What's French for 'goblin'...?
    • ako 2 hours ago
      Then you’ll be happy to learn it’s not Chinese
      • dotancohen 2 hours ago
        GP is stating that the second best in the field, the Chinese, is so far behind the best in the field, GPT 5.5, that it is not even worth testing anything else.
        • amunozo 1 hour ago
          Thanks for the translation, I did not express it very clearly. Anything that I try is so much worse.
        • Ritewut 1 hour ago
          Is GPT 5.5 the best in the field? I think Opus is still better despite Anthropic's recent stumbling.
    • manishsharan 1 hour ago
      I am not following this obsession with SOTA and benchmark rankings

      I have been using DeepSeek and GLM models with OpenCode, and Codex and Claude, side by side.

      I have not found the Chinese models lacking. I enjoy coding and like to maintain full control of my codebase, and I deeply care about the GoF patterns. So I am very stringent in terms of what I want the LLM to code and how to code it.

      So from my perspective, they are all about the same.

      • amunozo 1 hour ago
        That I agree with, but for more complex autonomous changes the differences are considerable. However, it seems that most models will reach a saturation point at which they are useful for almost everything, and the difference will be in more and more niche and specialized tasks.
    • lava_pidgeon 2 hours ago
      Honestly, it depends on the context whether this performance matters. Mistral is quite cheap.
  • InputName 2 hours ago
    Looks at the first graph. It's SWE-bench Verified, a benchmark OpenAI stopped using two months ago due to contamination.

    Doesn't look too promising. Is there any reason to consider Mistral other than that it's not US?

    • 2ndorderthought 17 minutes ago
      They did not stop using it due to contamination. They said it's flawed and indirectly said Anthropic's results were impossible. It's very possible they are sore losers.
    • tpurves 2 hours ago
      If it's not US and it's within a few percent of SOTA that might be good enough for a lot of people (eg Europeans)
      • NitpickLawyer 2 hours ago
        Gemma has been better for us at EU languages than Mistral (for comparably sized models) :/ so... dunno. What Mistral does well, and where others are lagging behind, is deploying on-prem with their engineers and know-how, offering tuned models for your tasks, and fine-tuning on your own data. (I expect Google to start offering this next.)
        • deaux 44 minutes ago
          It's sad that despite their strength in this for on-prem, they're so behind on it in the cloud. No publicly available cloud SFT at all. Meanwhile Google has been offering that for years - though it remains to be seen if they will for Gemini 3 when it goes GA.

          And on top of that, a range of providers like Fireworks offer it for Chinese models. This seems such an obvious thing for Mistral to offer.

    • amunozo 1 hour ago
      Price and speed.