DeepSeek v4

(api-docs.deepseek.com)

435 points | by impact_sy 2 hours ago

45 comments

  • simonw 1 hour ago
    I like the pelican I got out of deepseek-v4-flash more than the one I got from deepseek-v4-pro.

    Flash: https://gist.github.com/simonw/4a7a9e75b666a58a0cf81495acddf...

    Pro: https://gist.github.com/simonw/9e8dfed68933ab752c9cf27a03250...

    Both generated using OpenRouter.

    For comparison, here's what I got from DeepSeek 3.2 back in December: https://simonwillison.net/2025/Dec/1/deepseek-v32/

    And DeepSeek 3.1 in August: https://simonwillison.net/2025/Aug/22/deepseek-31/

    And DeepSeek v3-0324 in March last year: https://simonwillison.net/2025/Mar/24/deepseek/

    • nsoonhui 1 minute ago
      To me this is the perfect proof that

      1) LLMs are not AGI. Surely if they were, Pro would do better than Flash?

      2) And because of the above, the pelican example is most likely already being benchmaxxed.

    • murkt 19 minutes ago
      DeepSeek pelicans are the angriest pelicans I’ve seen so far.
    • JSR_FDED 1 hour ago
      No way. The Pro pelican is fatter, has a customized front fork, and the sun is shining! He’s definitely living the best life.
      • w4yai 1 hour ago
        yeah. look at these 4 feathers (?) on his bum too.
      • oliver236 50 minutes ago
        a lot of dumplings
    • nickvec 1 hour ago
      The Flash one is pretty impressive. Might be my favorite so far in the pelican-riding-a-bicycle series
    • mikae1 33 minutes ago
      Being a bicycle geometry nerd I always look at the bicycle first.

      Let me tell you how much the Pro one sucks... It looks like a failed Pedersen[1]. The rear wheel intersects with the bottom bracket, so it wouldn't even roll. Or rather, this bike couldn't exist.

      The Flash one looks surprisingly correct, with some wild fork offset and the slackest of seat tubes. It's got some lowrider[2] aspirations with the small wheels, but with longer, Rivendellish[3] chainstays. The seat post has a different angle than the seat tube, so good luck lowering that.

      [1] https://en.wikipedia.org/wiki/Pedersen_bicycle

      [2] https://en.wikipedia.org/wiki/Lowrider_bicycle

      [3] https://www.rivbike.com/

      • simonw 29 minutes ago
        This is an excellent comment. Thanks for this - I've only ever thought about whether the frame is the right shape, I never thought about how different illustrations might map to different bicycle categories.
        • mikae1 16 minutes ago
          Some other reactions:

          I wonder which model will try some more common spoke lacing patterns. Right now there seems to be a preference for radial lacing, which is not super common (but simple to draw). The Flash and Pro ones use 16-spoke rims, which actually exist[1] but are not super common. The Pro model fails badly at the spoke pattern.

          Both bikes have the drive side on the left, which is very very uncommon. That can't exist in the training data.

          [1] https://cicli-berlinetta.com/product/campagnolo-shamal-16-sp...

      • jojobas 13 minutes ago
        The Pedersen looks like someone failed the "draw a bicycle" test and decided to adjust the universe.
    • brutal_chaos_ 15 minutes ago
      What was your prompt for the image? Apologies if this should be obvious.
      • shawn_w 12 minutes ago
        >Generate an SVG of a pelican riding a bicycle

        at the top of the linked pages.

    • EnPissant 3 minutes ago
      This should not be the top comment on every model release post. It's getting tiring.
    • theanonymousone 20 minutes ago
      Where is the GPT 5.5 Pelican?
    • ycui1986 1 hour ago
      I really like the pro version. The pelican is so cute.
    • lobochrome 21 minutes ago
      Why they so angry?
    • whateveracct 43 minutes ago
      [flagged]
      • fastball 41 minutes ago
        It's just Simon Willison (the person you are replying to) who always makes a pelican, as his personal flippant benchmark. It's not that deep.
      • dewey 40 minutes ago
        No benchmark will be perfect, especially if it's public but it's a fun experiment to visually see how these models get better and better.
      • post-it 41 minutes ago
        Why is it so wrong?
      • simonw 33 minutes ago
        Thanks for the "scientific air" remark, that gave me a genuine LOL.
    • catelm 17 minutes ago
      I think the pelican on a bike is known widely enough that it ceases to be useful as a benchmark. There is even a pelican briefly appearing in the promo video of GPT-5, if I'm not mistaken: https://openai.com/gpt-5/. So the companies are apparently aware of it.
  • nthypes 2 hours ago
    https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main...

    Model was released and it's amazing. Frontier level (better than Opus 4.6) at a fraction of the cost.

    • 0xbadcafebee 1 hour ago
      I don't think we need to compare models to Opus anymore. Opus users don't care about other models, as they're convinced Opus will be better forever. And non-Opus users don't want the expense, lock-in or limits.

      As a non-Opus user, I'll continue to use the cheapest fastest models that get my job done, which (for me anyway) is still MiniMax M2.5. I occasionally try a newer, more expensive model, and I get the same results. I have a feeling we might all be getting swindled by the whole AI industry with benchmarks that just make it look like everything's improving.

      • versteegen 33 minutes ago
        Which model's best depends on how you use it. There's a huge difference in behaviour between Claude and GPT and other models, which makes some of them poor substitutes for others in certain use cases. I think the GPT models are a bad substitute for Claude ones for tasks such as pair-programming (where you want to see the CoT and have immediate responses) and writing code that you actually want to read and edit yourself, as opposed to just letting GPT run in the background to produce working code that you won't inspect. Yes, GPT-5.4 is cheap and brilliant but very black-box and often very slow IME. GPT-5.4 still seems to behave the same as 5.1, which includes problems like: doesn't show useful thoughts, can think for half an hour, says "Preparing the patch now" then thinks for another 20 min, gives no impression of what it's doing, reads microscopic parts of source files and misses context, will do anything to pass the tests including patching libraries...
      • ind-igo 38 minutes ago
        Agree with your assessment. I think after models reached around Opus 4.5 level, it's been almost indistinguishable for most tasks. Intelligence has been commoditized; what's important now is the workflows, prompting, and context management. And that is unique to each model.
      • kmarc 42 minutes ago
        This resonates with me a lot.

        I do some stuff with Gemini Flash and Aider, but mostly because I want to avoid locking myself into a walled garden of models, UIs and companies.

      • post-it 39 minutes ago
        What do you run these on? I've gotten comfortable with Claude but if folks are getting Opus performance for cheaper I'll switch.
        • slopinthebag 34 minutes ago
          Try Charm Crush first, it's a native binary. If it's unbearable, try opencode, just with the knowledge your system will probably be pwned soon since it's JS + NPM + vibe coding + some of the most insufferable devs in the industry behind that product.

          If you're feeling frisky, Zed has a decent agent harness and a very good editor.

      • szundi 38 minutes ago
        [dead]
    • onchainintel 1 hour ago
      How does it compare to Opus 4.7? I've been immersed in 4.7 all week participating in the Anthropic Opus 4.7 hackathon and it's pretty impressive even if it's ravenous from a token perspective compared to 4.6
      • greenknight 1 hour ago
        The thing is, it doesn't need to beat 4.7. It just needs to do somewhat well against it.

        This is free... as in you can download it, run it on your systems and finetune it to be the way you want it to be.

        • p1esk 1 hour ago
          Do you think a lot of people have “systems” to run a 1.6T model?
          • CJefferson 40 minutes ago
            To me, the important thing isn't that I can run it, it's that I can pay someone else to run it. I'm finding Opus 4.7 seems to be weirdly broken compared to 4.6, it just doesn't understand my code, breaks it whenever I ask it to do anything.

            Now, at the moment, I can still use 4.6, but eventually Anthropic are going to remove it, and when it's gone it will be gone forever. I'm planning on trying DeepSeek v4, because even if it's not quite as good, I know that it will be available forever; I'll always be able to find someone to run it.

          • applfanboysbgon 1 hour ago
            No, but businesses do. Being able to run quality LLMs without your business, or business's private information, being held at the mercy of another corp has a lot of value.
            • forrestthewoods 58 minutes ago
              What type of system is needed to self host this? How much would it cost?
              • disiplus 44 minutes ago
                Depends on how many users you have and what "production grade" means for you, but ~$500k gets you an 8x B200 machine.
              • p1esk 49 minutes ago
                Depends on how fast you want it to be. I'm guessing a couple of $10k Mac Studio boxes could run it, but probably not fast enough to enjoy using it.
              • fragmede 31 minutes ago
                One GB200 NVL72 from Nvidia would do it. $2-3 million, or so. If you're a corporation, say Walmart or PayPal, that's not out of the question.

                If you want to go budget corporate, 7 x H200 is just barely going to run it, but all in, $300k ought to do it.

                • gloflo 26 minutes ago
                  How many users can you serve with that?
            • choldstare 1 hour ago
              Not really - on prem llm hosting is extremely labor and capital intensive
              • applfanboysbgon 1 hour ago
                But can be, and is, done. I work for a bootstrapped startup that hosts a DeepSeek v3 retrain on our own GPUs. We are highly profitable. We're certainly not the only ones in the space, as I'm personally aware of several other startups hosting their own GLM or DeepSeek models.
        • onchainintel 1 hour ago
          Completely agree, not suggesting it needs to, just genuinely curious. Love that it can be run locally though. Open source LLMs have been punching back pretty hard against proprietary ones in the cloud lately in terms of performance.
        • kelseyfrog 1 hour ago
          What's the hardware cost to running it?
          • bbor 56 minutes ago
            I was curious, and some [intrepid soul](https://wavespeed.ai/blog/posts/deepseek-v4-gpu-vram-require...) did an analysis. Assuming you do everything perfectly and take full advantage of the model's MoE sparsity, it would take:

            - To run at full precision: "16–24 H100s", giving us ~$400-600k upfront, or $8-12/h from [us-east-1](https://intuitionlabs.ai/articles/h100-rental-prices-cloud-c...).

            - To run with "heavy quantization" (16 bits -> 8): "8xH100", giving us $200K upfront and $4/h.

            - To run truly "locally"--i.e. in a house instead of a data center--you'd need four 4090s, one of the most powerful consumer GPUs available. Even that would clock in around $15k for the cards alone and ~$0.22/h for the electricity (in the US).

            Truly an insane industry. This is a good reminder of why datacenter capex since 2023 has eclipsed the Manhattan Project, the Apollo program, and the US interstate system combined...

            • zargon 47 minutes ago
              That article is a total hallucination.

              "671B total / 37B active"

              "Full precision (BF16)"

              And they claim they ran this non-existent model on vLLM and SGLang over a month and a half ago.

              It's clickbait keyword slop filled in with V3 specs. Most of the web is slop like this now. Sigh.

          • redox99 1 hour ago
            Probably like 100 USD/hour
          • slashdave 1 hour ago
            "if you have to ask..."
        • johnmaguire 1 hour ago
          ... if you have 800 GB of VRAM free.
          • inventor7777 1 hour ago
            I remember reading that some new frameworks have been coming out to allow Macs to stream weights of huge models live from fast SSDs and produce quality output, albeit slowly. Apart from that... good luck finding that much available VRAM haha
      • rvz 1 hour ago
        It is more than good enough and has effectively caught up with Opus 4.6 and GPT 5.4 according to the benchmarks.

        It's about 2 months behind GPT 5.5 and Opus 4.7.

        As long as it is cheap to run for the hosting providers and it is frontier level, it is a very competitive model and impressive against the others. I give it 2 years maximum before consumer hardware can run 500B - 800B quantized models.

        It should be obvious now why Anthropic really doesn't want you to run local models on your machine.

        • deaux 59 minutes ago
          Vibes > Benchmarks. And it's all so task-specific. Gemini 3 has scored very well on benchmarks for a very long time but is poor at agentic use cases. A lot of people are preferring Opus 4.6 to 4.7 for coding despite the benchmarks, much more than I've seen before (4.5->4.6, 4->4.5).

          Doesn't mean Deepseek v4 isn't great, just benchmarks alone aren't enough to tell.

        • snovv_crash 1 hour ago
          Given how capable Qwen3.6 27B already is, I think in 2 years consumers will be running models of this capability on current hardware.
        • colordrops 1 hour ago
          What's going to change in 2 years that would allow users to run 500B-800B parameter models on consumer hardware?
    • NitpickLawyer 1 hour ago
      > (better than Opus 4.6)

      There we go again :) It seems we have a release each day claiming that. What's weird is that even deepseek doesn't claim it's better than opus w/ thinking. No idea why you'd say that but anyway.

      Dsv3 was a good model. Not benchmaxxed at all; it was pretty stable where it was. Did well on tasks that were OOD (out of distribution) for benchmarks, even if it was behind SotA.

      This seems to be similar. Behind SotA, but not by much, and at a much lower price. The big one is being served (by DS themselves for now; more providers will come and we'll see the median price) at $1.74 in / $3.48 out / $0.14 cache per 1M tokens. Really cheap for what it offers.

      The small one is at $0.14 in / $0.28 out / $0.028 cache, which is pretty much "too cheap to matter". This will be what people can realistically run "at home", and should be a contender for things like Haiku/Gemini Flash, if it can deliver at those levels.
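
      As a rough sketch of what "too cheap to matter" means per call, here's the cost of one assumed agentic turn at those quoted prices (the 50k-in / 2k-out sizes are just an illustrative guess, and cache pricing is ignored):

          def request_cost(in_tok, out_tok, in_price, out_price):
              # price arguments are $ per 1M tokens
              return in_tok / 1e6 * in_price + out_tok / 1e6 * out_price

          # A fairly heavy agentic turn: 50k tokens in, 2k tokens out (assumed sizes).
          print("Pro:   $%.4f" % request_cost(50_000, 2_000, 1.74, 3.48))  # ~$0.0940
          print("Flash: $%.4f" % request_cost(50_000, 2_000, 0.14, 0.28))  # ~$0.0076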

      • slopinthebag 33 minutes ago
        Anthropic fans would claim God itself is behind Opus by 3-6 months and then willingly be abused by Boris and one of his gaslighting tweets.

        LMAO

        • NitpickLawyer 23 minutes ago
          > Anthropic fans ...

          I have no idea why you'd think that, but this is straight from their announcement here (https://mp.weixin.qq.com/s/8bxXqS2R8Fx5-1TLDBiEDg):

          > According to evaluation feedback, its user experience is better than Sonnet 4.5, and its delivery quality is close to Opus 4.6's non-thinking mode, but there is still a certain gap compared to Opus 4.6's thinking mode.

          This is the model creators saying it, not me.

    • doctoboggan 1 hour ago
      Is it honestly better than Opus 4.6 or just benchmaxxed? Have you done any coding with an agent harness using it?

      If its coding abilities are better than Claude Code with Opus 4.6 then I will definitely be switching to this model.

      • bokkies 5 minutes ago
        Apparently GLM 5.1 and the latest Qwen Coder are as good as Opus 4.6 on benchmarks. So I tried both seriously for a week (GLM Pro using CC, and Qwen using Qwen companion), thinking I could save $80 a month. Unfortunately, after 2 days I had switched back to Max. The problems were speed (slower on both, although Qwen is much faster) and errors (stupid layout mistakes, inserting 2 footers then refusing to remove one, not seeing obvious problems in screenshots, major f-ups of functionality, not being able to view URLs properly, etc.). I'll give DeepSeek a go but I suspect it will be similar. The model is only half the story. I've also been testing GPT 5.4 with Codex and it is very nearly as good as CC... better on long-running tasks running in the background. Not keen on the ChatGPT Codex 'personality', so will stick to CC for the most part.
      • madagang 1 hour ago
        Their Chinese announcement says that, based on internal employee testing, it is not as good as Opus 4.6 Thinking, but is slightly better than Opus 4.6 without Thinking enabled.
        • ibic 13 minutes ago
          In case people wonder where the announcement is (you can easily translate it via browser if you don't read Chinese): https://mp.weixin.qq.com/s/8bxXqS2R8Fx5-1TLDBiEDg

          It's still a "preview" version atm.

        • mchusma 1 hour ago
          I appreciate this, makes me trust it more than benchmarks.
        • deaux 58 minutes ago
          That's super interesting, isn't Deepseek in China banned from using Anthropic models? Yet here they're comparing it in terms of internal employee testing.
    • bbor 1 hour ago
      For the curious, I did some napkin math on their posted benchmarks: it racks up a 20.1 percentage point total difference across the 20 metrics where both were scored, for an average improvement of about 2% (relative, not percentage points). I really can't decide if that's mind-blowing or boring?

      Claude 4.6 was almost 10pp better at answering questions from long contexts ("corpuses" in CorpusQA and "multiround conversations" in MRCR), while DSv4 was a staggering 14pp better at one math challenge (IMOAnswerBench) and 12pp better at basic Q&A (SimpleQA-Verified).

      • Quasimarion 1 hour ago
        FWIW it's also like 10x cheaper.
    • sergiotapia 2 hours ago
      The dragon awakes yet again!
      • kindkang2024 1 hour ago
        There appears a flight of dragons without heads. Good fortune.

        That's literally what the I Ching calls "good fortune."

        Competition, when no single dragon monopolizes the sky, brings fortune for all.

    • rapind 1 hour ago
      Pop?
  • xnx 1 minute ago
    Such a different time now than early 2025, when people thought DeepSeek was going to kill the market for Nvidia.
  • yanis_t 1 hour ago
    Already on OpenRouter. The Pro version is $1.74/M input, $3.48/M output, while Flash is $0.14/M input, $0.28/M output.
    • astrod 56 minutes ago
      Getting 'Api Error' here :( Every other model is working fine.
      • poglet 24 minutes ago
        Try interacting with it through the website, it will give an error and some explanation on the issue. I had to relax my guardrail settings.
    • esafak 1 hour ago
      • 77ko 55 minutes ago
        It's on OR - but currently not available on their Anthropic endpoint. OR, if you read this, pls enable it there! I am using kimi-2.6 with Claude Code, works well, but DeepSeek V4 gives an error:

        Hitting `https://openrouter.ai/api/messages` with model=deepseek/deepseek-v4-pro, OR returns an error because their Anthropic-compat translator doesn't cover V4 yet. The Claude CLI dutifully surfaces that error as "model...does not exist".
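
        In the meantime, OR's regular OpenAI-compatible endpoint should work independently of the Anthropic-compat translator; a minimal sketch (assuming the deepseek/deepseek-v4-pro slug above and an OPENROUTER_API_KEY env var):

            import os
            import requests

            # Call the model via OpenRouter's standard OpenAI-compatible endpoint
            # rather than the Anthropic-compat one the Claude CLI uses.
            resp = requests.post(
                "https://openrouter.ai/api/v1/chat/completions",
                headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
                json={
                    "model": "deepseek/deepseek-v4-pro",
                    "messages": [{"role": "user", "content": "Hello"}],
                },
                timeout=120,
            )
            resp.raise_for_status()
            print(resp.json()["choices"][0]["message"]["content"])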

  • fblp 1 hour ago
    There's something heartwarming about the developer docs being released before the flashy press release.
    • onchainintel 1 hour ago
      Insert obligatory "this is the way" Mando scene. Indeed!
    • necovek 1 hour ago
      Where's the training data and training scripts since you are calling this open source?

      Edit: it seems "open source" was edited out of the parent comment.

      • b65e8bee43c2ed0 41 minutes ago
        doesn't it get tiring after a while? using the same (perceived) gotcha, over and over again, for three years now?

        no one is ever going to release their training data because it contains every copyrighted work in existence. everyone, even the hecking-wholesome safety-first Anthropic, is using copyrighted data without permission to train their models. there you go.

        • necovek 5 minutes ago
          There is an easy fix already in widespread use: "open weights".

          It is very much a valuable thing already; no need to taint it with a false promise.

          Though I disagree that it wouldn't get used if it was indeed open source: I might not do it inside my home lab today, but at least Qwen and DeepSeek would use and build on what e.g. Facebook was doing with Llama, and they might be pushing the open-weights model frontier forward faster.

        • fragmede 18 minutes ago
          it's not a gotcha but people using words in ways others don't like.
      • bl4ckneon 5 minutes ago
        Aww yes, let me push a couple petabytes to my git repo for everyone to download...
        • necovek 5 minutes ago
          An easier thing would be to say "open weights", yes.
  • revolvingthrow 8 minutes ago
    > pricing "Pro" $3.48 / 1M output tokens vs $4.40

    I’d like somebody to explain to me how the endless comments of "bleeding edge labs are subsidizing the inference at an insane rate" make sense in light of a humongous model like v4 pro being $4 per 1M. I’d bet even the subscriptions are profitable, much less the API prices.

    edit: $1.74/M input $3.48/M output on OpenRouter
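
    Rough napkin math on whether $3.48/M output could cover GPU rental at all (every throughput and rental number below is an assumption for illustration, not a measured figure):

        # Break-even sketch. All numbers are illustrative assumptions;
        # real throughput depends on batch size, context length and the serving stack.
        gpu_hourly_rate = 2.50                       # assumed $/hr per rented H200-class GPU
        node_cost_per_hour = gpu_hourly_rate * 8     # 8-GPU node -> $20/hr

        assumed_output_tok_per_sec = 2_000           # aggregate across all batched users
        tokens_per_hour = assumed_output_tok_per_sec * 3600   # 7.2M tokens/hr

        revenue_per_hour = tokens_per_hour / 1e6 * 3.48       # output tokens only
        print(f"cost ~${node_cost_per_hour:.0f}/hr, revenue ~${revenue_per_hour:.0f}/hr")
        # ~$20/hr cost vs ~$25/hr revenue before even counting input-token billing,
        # so the pricing doesn't obviously require a subsidy under these assumptions.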

  • rohanm93 14 minutes ago
    This is shockingly cheap for a near frontier model. This is insane.

    For context, for an agent we're working on, we're using 5-mini, which is $2/1M tokens. This is $0.30/1M tokens. And it's Opus 4.6 level - this can't be real.

    I am uncomfortable about sending user data which may contain PII to their servers in China, so I won't be using this, as appealing as it sounds. I need this to come to a US-hosted environment at an equivalent price.

    Hosting this on my own + renting GPUs is much more expensive than DeepSeek's quoted price, so not an option.

    • fractalf 4 minutes ago
      Right now I'm much more worried about sending data to the US and A... At least there's less chance it will be misused against -me-
  • sidcool 1 hour ago
    Truly open source coming from China. This is heartwarming. I know of the potential ulterior motives.
  • mchusma 1 hour ago
    For comparison, on OpenRouter DeepSeek v4 Flash is slightly cheaper than Gemma 4 31b and more expensive than Gemma 4 26b, but it does support prompt caching, which means for some applications it will be the cheapest. Excited to see how it compares with Gemma 4.
  • punkpeye 2 minutes ago
    Incredible model quality to price ratio
  • zargon 1 hour ago
    The Flash version is 284B A13B in mixed FP8 / FP4 and the full native precision weights total approximately 154 GB. KV cache is said to take 10% as much space as V3. This looks very accessible for people running "large" local models. It's a nice follow up to the Gemma 4 and Qwen3.5 small local models.
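
    A quick sanity check on those numbers (a rough sketch, not from the model card): 284B parameters in ~154 GB implies an average of about 4.3 bits per parameter, which is consistent with a mostly-FP4 mix.

        total_params = 284e9      # quoted parameter count
        weight_bytes = 154e9      # quoted weight size

        print(f"average precision: {weight_bytes * 8 / total_params:.1f} bits/param")  # ~4.3

        # Same parameter count at uniform precisions, for comparison:
        for bits in (4, 8, 16):
            print(f"{bits}-bit: {total_params * bits / 8 / 1e9:.0f} GB")  # 142 / 284 / 568 GB
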
    • sbinnee 1 hour ago
      Price is appealing to me. I have been using gemini 3 flash mainly for chat. I may give it a try.

      input/output: $0.14/$0.28 (whereas Gemini is $0.50/$3)

      Does anyone know why output prices have such a big gap?

  • gardnr 11 minutes ago
    865 GB: I am going to need a bigger GPU.
  • storus 41 minutes ago
    Oh well, I should have bought 2x 512GB RAM MacStudios, not just one :(
  • gbnwl 2 hours ago
    I’m deeply interested and invested in the field but I could really use a support group for people burnt out from trying to keep up with everything. I feel like we’ve already long since passed the point where we need AI to help us keep up with advancements in AI.
    • satvikpendem 31 minutes ago
      Don't keep up. Much like with news, you'll know when you need to know, because someone else will tell you first.
    • wordpad 1 hour ago
      The players barely ever change. People don't have problems following sports; you shouldn't struggle so much with this once you accept that the top spot changes hands.
      • gbnwl 47 minutes ago
        I didn't express this well, but my interest isn't "who is in the top spot"; it's more the why and how of the results the various labs get. This is also magnified by the fact that I'm not only interested in hosted providers of inference but local models as well. What's your take on the best model to run for coding on 24GB of VRAM locally after the last few weeks of releases? Which harness do you prefer? What quants do you think are best? To use your sports metaphor, it's more than following the national leagues; it's also following college and even high school leagues. And the real interest isn't even who's doing well but WHY, at each level.
      • ehnto 1 hour ago
        It is funny seeing people ping pong between Anthropic and ChatGPT, with similar rhetoric in both directions.

        At this point I would just pick the one whose "ethics" and user experience you prefer. The difference in performance between these releases has had no impact on the meaningful work one can do with them, unless perhaps they are on the fringes in some domain.

        Personally I am trying out the open models cloud hosted, since I am not interested in being rug pulled by the big two providers. They have come a long way, and for all the work I actually trust to an LLM they seem to be sufficient.

        • DiscourseFan 1 hour ago
          I find ChatGPT annoying mostly
          • awakeasleep 1 hour ago
            Open settings > personalization. Set it to efficient base style. Turn off enthusiasm and warmth. You’re welcome
    • vrganj 17 minutes ago
      It honestly has all kinda felt like more of the same ever since maybe GPT4?

      New model comes out, has some nice benchmarks, but the subjective experience of actually using it stays the same. Nothing's really blown my mind since.

      Feels like the field has stagnated to a point where only the enthusiasts care.

    • trueno 34 minutes ago
      holy shit im right there with you
  • clark1013 37 minutes ago
    Looking forward to DeepSeek Coding Plan
  • jessepcc 1 hour ago
    At this point 'frontier model release' is a monthly cadence (Kimi 2.6, Claude 4.6, GPT 5.5); the interesting question is which evals will still be meaningful in 6 months.
  • tariky 23 minutes ago
    Anyone tried to make a web UI with it? How good is it? For me, Opus is only worth it because of that.
  • Aliabid94 1 hour ago
    MMLU-Pro:

    Gemini-3.1-Pro at 91.0

    Opus-4.6 at 89.1

    GPT-5.4, Kimi2.6, and DS-V4-Pro tied at 87.5

    Pretty impressive

    • ant6n 58 minutes ago
      Funny how Gemini is theoretically the best -- but in practice all the bugs in the interface mean I don't want to use it anymore. The worst is that it forgets context (and lies about it), and it's very unreliable at reading PDFs (and lies about it). There's also no branching, so once the context is lost/polluted, you have to start projects over and build up the context from scratch again.
  • CJefferson 39 minutes ago
    What's the current best framework to have a 'claude code' like experience with Deepseek (or in general, an open-source model), if I wanted to play?
  • sibellavia 18 minutes ago
    A few hours after GPT5.5 is wild. Can’t wait to try it.
  • luew 15 minutes ago
    We will be hosting it soon at getlilac.com!
  • jdeng 1 hour ago
    Excited that the long-awaited v4 is finally out. But I feel sad that it's not natively multimodal.
  • luyu_wu 2 hours ago
    For those who didn't check the page yet, it just links to the API docs being updated with the upcoming models, not the actual model release.
  • sergiotapia 10 minutes ago
    Using it with opencode, sometimes it generates commands like:

        bash({"command":"gh pr create --title "Improve Calendar module docs and clean up idiomatic Elixir" --body "$(cat <<'EOF'
        Problem
        The Calendar modu...
    
    like it's generating the output but not actually running the bash command, so ultimately not creating the PR. I wonder if it's a model thing, or an opencode thing.
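
    One guess, purely from how the snippet renders: if those inner double quotes around the title are genuinely unescaped in the tool-call arguments (rather than just unescaped in opencode's display), the arguments aren't valid JSON and the harness would have nothing to execute. A tiny sketch of the difference:

        import json

        # Unescaped inner quotes terminate the JSON string early -> parse error,
        # so a strict harness would show the text without running anything.
        bad = '{"command":"gh pr create --title "Improve docs" --body "..."}'
        try:
            json.loads(bad)
        except json.JSONDecodeError as e:
            print("invalid tool-call JSON:", e)

        # With the inner quotes escaped, the same call parses fine.
        good = '{"command":"gh pr create --title \\"Improve docs\\" --body \\"...\\""}'
        print(json.loads(good)["command"])
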
  • mariopt 1 hour ago
    Does DeepSeek have any coding plan?
  • KaoruAoiShiho 1 hour ago
    SOTA on MRCR (or it would've been a few hours earlier... beaten by 5.5). I've long thought of this as the most important non-agentic benchmark, so this is especially impressive. Beats Opus 4.7 here.
  • aliljet 1 hour ago
    How can you reasonably try to get near frontier (at any tok/s at all) on hardware you own? Maybe under $5k in cost?
    • revolvingthrow 39 minutes ago
      For flash? 4 bit quant, 2x 96GB gpu (fast and expensive) or 1x 96GB gpu + 128GB ram (still expensive but probably usable, if you’re patient).

      A Mac with 256 GB memory would run it but be very slow, and so would a 256GB RAM + cheapo GPU desktop, unless you leave it running overnight.

      The big model? Forget it, not this decade. You can theoretically load from SSD but waiting for the reply will be a religious experience.

      Realistically the biggest models you can run on local-as-in-worth-buying-as-a-person hardware are between 120B and 200B, depending on how far you’re willing to go on quantization. Even this is fairly expensive, and that’s before RAM went to the moon.

      • zargon 20 minutes ago
        Flash is less than 160 GB. No need to quantize to fit in 2x 96 GB. Not sure how much context fits in 30 GB, but it should be a good amount.
        • redrove 2 minutes ago
          It seems to be 160GB at mixed FP4+FP8 precision, FYI. Full FP8 is 250GB+. (B)F16 at around double I would assume.
    • awakeasleep 1 hour ago
      The same way you fit a bucket wheel excavator in your garage
      • floam 28 minutes ago
        Very carefully
    • datadrivenangel 38 minutes ago
      A loaded MacBook Pro can get you to the frontier from 24 months ago at ~10-40 tok/s, which is plenty fast enough for regular chatting.
    • 542458 45 minutes ago
      The low end could be something like an eBay-sourced server with a truckload of DDR3 RAM doing all-CPU inference - secondhand server models with a terabyte of RAM can be had for about $1.5K. The TPS will be absolute garbage and it will sound like a jet engine, but it will nominally run.

      The Flash version here is 284B A13B, so it might perform OK with a fairly small amount of VRAM for the active params and all regular RAM for the other params, but I'd have to see benchmarks. If it turns out that works alright, an eBay server plus a 3090 might be the bang-for-buck champ for about $2.5K (assuming you're starting from zero).

    • jdoe1337halo 1 hour ago
      More like 500k
  • namegulf 1 hour ago
    Is there a Quantized version of this?
  • swrrt 1 hour ago
    Any visualised benchmark/scoreboard for comparison between the latest models? DeepSeek v4 and GPT-5.5 seem to be groundbreaking.
  • rvz 1 hour ago
    The paper is here: [0]

    I was expecting the release to be this month [1], since everyone forgot about it and wasn't reading the papers they were releasing, and 7 days later here we have it.

    One of the key points of this model is the optimization DeepSeek made to the residual design of the network's architecture: manifold-constrained hyper-connections (mHC), from this paper [2], which make it possible to train the model efficiently, especially with its hybrid attention mechanism designed for this.

    There was not that much discussion about it here some months ago [3], but again, the paper is a recommended read.

    I wouldn't trust the benchmarks directly, but would wait for others to try it for themselves to see if it matches the performance of frontier models.

    Either way, this is why Anthropic wants to ban open-weight models, and I cannot wait for the quantized versions to drop momentarily.

    [0] https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main...

    [1] https://news.ycombinator.com/item?id=47793880

    [2] https://arxiv.org/abs/2512.24880

    [3] https://news.ycombinator.com/item?id=46452172

    • jeswin 1 hour ago
      > this is why Anthropic wants to ban open weight models

      Do you have a source?

      • louiereederson 43 minutes ago
        More like he wants to ban accelerator chip sales to China, which may be about “national security” or self preservation against a different model for AI development which also happens to be an existential threat to Anthropic. Maybe those alternatives are actually one and the same to him.
  • reenorap 1 hour ago
    Which version fits in a Mac Studio M3 Ultra 512 GB?
  • ls612 1 hour ago
    How long does it usually take for folks to make smaller distills of these models? I really want to see how this will do when brought down to a size that will run on a Macbook.
    • simonw 1 hour ago
      Unsloth often turn them around within a few hours, they might have gone to bed already though!

      Keep an eye on https://huggingface.co/unsloth/models

      Update ten minutes later: https://huggingface.co/unsloth/DeepSeek-V4-Pro just appeared but doesn't have files in yet, so they are clearly awake and pushing updates.

      • EnPissant 0 minutes ago
        Those are quants, not distills.
    • inventor7777 1 hour ago
      Weren't there some frameworks recently released to allow Macs to stream weights from fast SSDs and thus fit way more parameters than what would normally fit in RAM?

      I have never tried one yet but I am considering trying that for a medium sized model.

      • zozbot234 6 minutes ago
        These are more like experiments than a polished release as of yet. And the reduction in throughput is high compared to having the weights in RAM at all times, since you're bottlenecked by the SSD which even at its fastest is much slower than RAM.
      • simonw 1 hour ago
        I've been calling that the "streaming experts" trick, the key idea is to take advantage of Mixture of Expert models where only a subset of the weights are used for each round of calculations, then load those weights from SSD into RAM for each round.

        As I understand it if DeepSeek v4 Pro is a 1.6T, 49B active that means you'd need just 49B in memory, so ~100GB at 16 bit or ~50GB at 8bit quantized.

        v4 Flash is 284B, 13B active so might even fit in <32GB.
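
        Roughly the arithmetic behind those estimates (a sketch that ignores KV cache and any always-resident shared layers):

            # Only the active parameters need to be resident per forward pass.
            models = {
                "v4 Pro   (49B active)": 49e9,
                "v4 Flash (13B active)": 13e9,
            }
            for name, active in models.items():
                for bits in (16, 8, 4):
                    print(f"{name} @ {bits}-bit: ~{active * bits / 8 / 1e9:.0f} GB resident")
            # Matches the ~100 GB / ~50 GB figures for Pro and <32 GB for Flash above.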

        • zozbot234 2 minutes ago
          The "active" count is not very meaningful except as a broad measure of sparsity, since the experts in MoE models are chosen per layer. Once you're streaming experts from disk, there's nothing that inherently requires having 49B parameters in memory at once. Of course, the less caching memory does, the higher the performance overhead of fetching from disk.
        • zargon 27 minutes ago
          > ~100GB at 16 bit or ~50GB at 8bit quantized.

          V4 is natively mixed FP4 and FP8, so significantly less than that. 50 GB max unquantized.

        • inventor7777 1 hour ago
          Ahh, that actually makes more sense now. (As you can tell, I just skimmed through the READMEs and starred "for later".)

          My Mac can fit almost 70B (Q3_K_M) in memory at once, so I really need to try this out soon at maybe Q5-ish.

      • the_sleaze_ 1 hour ago
        Do you have the links for those? Very interested
  • hongbo_zhang 1 hour ago
    congrats
  • dhruv3006 52 minutes ago
    Ah now !
  • creamyhorror 1 hour ago
    [dead]
  • hubertzhang 1 hour ago
    [dead]
  • maryjeiel 1 hour ago
    [dead]
  • minhajulmahib 1 hour ago
    [flagged]
    • polski-g 1 hour ago
      Why did you bother to submit an AI comment?
      • sidcool 1 hour ago
        I suspect you may have replied to a bot. Dead internet theory
  • slopinthebag 40 minutes ago
    OMG

    OMG ITS HAPPENING

  • shafiemoji 1 hour ago
    I hope the update is an improvement. Losing 3.2 would be a real loss, it's excellent.
  • raincole 1 hour ago
    History doesn't always repeat itself.

    But if it does, then in the following week we'll see DeepSeek v4 flood every AI-related online space. Thousands of posts swearing it's better than the latest models OpenAI/Anthropic/Google have but only costs pennies.

    Then a few weeks later it'll be forgotten by most.

    • sbysb 1 hour ago
      It's difficult because even if the underlying model is very good, not having a pre-built harness like Claude Code makes it very un-sticky for most devs. Even at equal quality, the friction (or at least perceived friction) is higher than with the mainstream models.
      • raincole 1 hour ago
        OpenCode? Pi?

        If one finds it difficult to set up OpenCode to use whatever providers they want, I won't call them 'dev'.

        The only real friction (if the model is actually as good as SOTA) is to convince your employer to pay for it. But again if it really provides the same value at a fraction of the cost, it'll eventually cease to be an issue.

        • throwa356262 1 minute ago

              "If one finds it difficult to set up OpenCode to use whatever providers they want, I won't call them 'dev'."
          
          
          I feel the same way. But look at the llama vs llama.cpp post on HN from a few days back and you will see most of the enthusiasts in this space are very non-technical people.
      • cmrdporcupine 1 hour ago
        They have instructions right on their page on how to use claude code with it.
    • slopinthebag 37 minutes ago
      [flagged]