Python 3.15's JIT is now back on track

(fidget-spinner.github.io)

149 points | by guidoiaquinti 3 hours ago

9 comments

  • wei03288 0 minutes ago
    The copy-and-patch strategy is the interesting design choice here. Most JIT implementations go straight for SSA IR and full optimization passes, which means months of work before you see any speedup. Copy-and-patch compiles specialised bytecode "templates" ahead of time and just patches in the runtime values — it's closer to a stencil than a classic JIT. The tradeoff is you get less optimisation headroom, but the implementation complexity is an order of magnitude lower and you can ship incrementally.

    The free-threading interaction is the hard part they're still working through. Lock-free specialisation of bytecodes when multiple threads might be executing the same code object simultaneously is genuinely tricky — you can't just patch in-place without a memory model guarantee.
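    A rough toy model of the stencil idea in Python (invented byte patterns and names, nothing like CPython's real machinery): each opcode gets a pre-built blob containing a placeholder "hole", and compiling is just copying the blob and patching the runtime value into the hole.

    ```python
    # Toy model of copy-and-patch: pre-built per-opcode templates with
    # holes, patched at JIT time. The bytes here are made up.
    HOLE = b"\xde\xad\xbe\xef"  # placeholder baked in at build time

    # Pretend these stencils were emitted ahead of time by a C compiler.
    STENCILS = {
        "LOAD_CONST": b"\x48\xb8" + HOLE,  # e.g. movabs rax, <const>
        "JUMP":       b"\xe9" + HOLE,      # e.g. jmp <target>
    }

    def emit(opcode: str, value: int) -> bytes:
        """Copy the stencil for `opcode` and patch `value` into its hole."""
        stencil = STENCILS[opcode]
        patch = value.to_bytes(4, "little")
        return stencil.replace(HOLE, patch)

    code = emit("LOAD_CONST", 42)  # copy + patch, no IR, no opt passes
    ```

    The point being: all the hard compiler work happened once, at CPython build time; runtime "compilation" is nearly just memcpy.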

  • ecshafer 56 minutes ago
    What is wrong with the Python code base that makes this so much harder to implement than for seemingly every other code base? Ruby, PHP, JS: they all seemed to add JITs in significantly less time. A Python JIT has been asked for for like two decades at this point.
    • hardwaregeek 13 minutes ago
      For what it’s worth, Ruby’s JIT took several different implementations, definitely struggled with Rails compatibility, and literally used some people’s PhD research. It wasn’t a trivial affair.
    • 0cf8612b2e1e 25 minutes ago
      The Python C API leaks its guts. Too much of the internal representation was made available to extensions, and now basically any change is guaranteed to break backwards compatibility with something.
      • patmorgan23 3 minutes ago
        Ooh, this makes sense. It's like if the Linux kernel had its "don't break user space" rule AND a whole bunch of purely internal APIs you also can't refactor.
      • echelon 2 minutes ago
        It's a shame that Python 2->3 transition was so painful, because Python could use a few more clean breaks with the past.

        This would be a potential case for a new major version number.

    • stmw 45 minutes ago
      Some languages are much harder to compile well to machine code. Some big factors (for any languages) are things like: lack of static types and high "type uncertainty", other dynamic language features, established inefficient extension interfaces that have to be maintained, unusual threading models...
      • simonask 1 minute ago
        The simplest JIT just generates the machine code instructions that the interpreter loop would execute anyway. It’s not an extremely difficult thing, but it also doesn’t give you much benefit.

        A worthwhile JIT is a fully optimizing compiler, and that is the hard part. Language semantics are much less important - dynamic languages aren’t particularly harder here, but the performance roof is obviously just much lower.
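        As a toy illustration of that first kind of JIT, in pure Python with hypothetical opcodes: the only win is decoding each opcode once up front instead of on every execution. A real template JIT does the same thing with machine code, but the optimization headroom is similarly small.

        ```python
        # Classic dispatch loop: decodes every opcode on every pass.
        def interp(bytecode, stack):
            for op, arg in bytecode:
                if op == "PUSH":
                    stack.append(arg)
                elif op == "ADD":
                    stack.append(stack.pop() + stack.pop())
            return stack[-1]

        # "Template compile": decode once, producing a flat list of
        # handlers; running the program just calls them in order.
        def compile_(bytecode):
            handlers = []
            for op, arg in bytecode:
                if op == "PUSH":
                    handlers.append(lambda s, a=arg: s.append(a))
                elif op == "ADD":
                    handlers.append(lambda s: s.append(s.pop() + s.pop()))
            def run(stack):
                for h in handlers:
                    h(stack)
                return stack[-1]
            return run

        prog = [("PUSH", 2), ("PUSH", 3), ("ADD", None)]
        ```

        Both paths compute the same result; only the per-instruction dispatch overhead differs.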

      • RussianCow 29 minutes ago
        That makes sense if you're comparing with Java or C#, but not Ruby, which is way more dynamic than Python.

        The more likely reason is that there simply hasn't been that big a push for it. Ruby was dog slow before the JIT and Rails was very popular, so there was a lot of demand and room for improvement. PHP was the primary language used by Facebook for a long time, and they had deep pockets. JS powers the web, so there's a huge incentive for companies like Google to make it faster. Python never really had that same level of investment, at least from a performance standpoint.

        To your point, though, the C API has made certain types of optimizations extremely difficult, as the PyPy team has figured out.

        • vlovich123 2 minutes ago
          Google, Dropbox, and Microsoft, from what I can recall, all tried to make Python fast, so I don’t buy the “hasn’t seen a huge amount of investment” argument. For a long time Guido was opposed to any changes, and that ossified the ecosystem.

          But the main problem was actually that PyPy was never adopted as “the JIT” mechanism. That would have made a huge difference a long time ago and made sure they evolved in lock step.

    • wat10000 44 minutes ago
      PHP and JS had huge tech companies pouring resources into making them fast.
    • g947o 15 minutes ago
      Money.
    • brokencode 29 minutes ago
      Are you forgetting about PyPy, which has existed for almost 2 decades at this point?
      • RussianCow 26 minutes ago
        That's a completely separate codebase that purposefully breaks backwards compatibility in specific areas to achieve their goals. That's not the same as having a first-class JIT in CPython, the actual Python implementation that ~everyone uses.
  • adrian17 1 hour ago
    I've been occasionally glancing at the PR/issue tracker to keep up to date with what's happening with the JIT, but I've never seen where the high-level discussions were happening; the issues and PRs always jumped right into the gritty details. Is there a high-level introduction anywhere, with examples, to how trace projection and trace recording work and how they differ? Googling the terms often returns the CPython issue tracker as the first result, and the repo's jit.md is relatively barebones and rarely updated :(

    Similarly, I don't entirely understand refcount elimination; I've seen the codegen difference, but since the codegen happens at build time, does this mean each opcode is possibly split into two (or more?) stencils, with and without removed increfs/decrefs? With so many opcodes and their specialized variants, how many stencils are there now?
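    For illustration, here's how I'd picture the elimination, as a hypothetical peephole pass over a micro-op trace (surely not how CPython actually does it): an INCREF immediately followed by a DECREF of the same value is a no-op pair and can be dropped.

    ```python
    # Hypothetical sketch of refcount elimination as a peephole pass
    # over a (register, micro-op) trace. Not CPython's actual pass.
    def eliminate_refcounts(trace):
        out = []
        for op, reg in trace:
            if op == "DECREF" and out and out[-1] == ("INCREF", reg):
                out.pop()          # cancel the matching INCREF
                continue
            out.append((op, reg))
        return out

    trace = [("INCREF", "r0"), ("DECREF", "r0"), ("LOAD", "r1")]
    ```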

    • sheepscreek 18 minutes ago
      I love playing with compilers for fun, so maybe I can shed some light. I’ll explain it in a simplified way for everyone’s benefit (going to ignore the stack):

      When an object is passed between functions in Python, it doesn’t get copied. Instead, a reference to the object’s memory address is sent. This reference acts as a pointer to the object’s data. Think of it like a sticky note with the object’s memory address written on it. Now, imagine throwing away one sticky note every time a function that used a reference returns.

      When an object has zero references, its memory can be freed and reused. Ensuring the number of references, or the “reference count”, is always accurate is therefore a big deal. Getting it wrong is often the source of memory leaks, but I wouldn’t attribute a speed-up to it (only if it replaces GC, then yes).
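      You can watch the sticky notes directly with sys.getrefcount, which always reports one extra reference for the temporary argument it receives:

      ```python
      import sys

      x = []                      # one sticky note: the name x
      base = sys.getrefcount(x)   # +1 extra for getrefcount's own argument

      y = x                       # a second sticky note for the same list
      after_alias = sys.getrefcount(x)

      del y                       # throw the second sticky note away
      after_del = sys.getrefcount(x)
      ```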

      • yuliyp 6 minutes ago
        what at all does this comment have to do with what it's replying to?
    • flakes 56 minutes ago
      You’ll probably want to look at the PEPs. Haven’t dug into this topic myself, but this looks related: https://peps.python.org/pep-0744/
      • adrian17 29 minutes ago
        I think CPython already had tier2 and some tracing infrastructure when the copy-and-patch JIT backend was added; it's the "JIT frontend" that's more obscure to me.
    • saikia81 18 minutes ago
      Have you read the dev mailing list? That's where the Python developers discuss a lot of this.
  • fluidcruft 53 minutes ago
    (what are blueberry, ripley, jones and prometheus?)
    • mkl 48 minutes ago
      Yes, the graphs are incomprehensible because those are not defined in the article. They turn out to be different physical machines with different architectures: https://doesjitgobrrr.com/about

        blueberry (aarch64)
        Description: Raspberry Pi 5, 8GB RAM, 256GB SSD
        OS: Debian GNU/Linux 12 (bookworm)
        Owner: Savannah Ostrowski
      
        ripley (x86_64)
        Description: Intel i5-8400 @ 2.80GHz, 8GB RAM, 500GB SSD
        OS: Ubuntu 24.04
        Owner: Savannah Ostrowski
      
        jones (aarch64)
        Description: Apple M3 Pro, 18GB RAM, 512GB SSD
        OS: macOS
        Owner: Savannah Ostrowski
      
        prometheus (x86_64)
        Description: AMD Ryzen 5 3600X @ 3.80GHz, 16GB RAM
        OS: Windows 11 Pro
        Owner: Savannah Ostrowski
    • max-m 51 minutes ago
      The names of the benchmark runners. https://doesjitgobrrr.com/about
      • fluidcruft 46 minutes ago
        So the biggest gains so far, ~20%, are on Windows 11 Pro (x86_64, prometheus)? Is that because Windows was a bad baseline? x86_64/Linux (ripley) doesn't seem to have improved as dramatically, ~5%. I'm just surprised the OS has that much of an effect that can be attributed to the JIT vs other OS issues.
        • raddan 29 minutes ago
          It's hard to say whether it's Windows related, since the two x86_64 machines don't just run different OSes, they also have different processors from different manufacturers. I don't know whether an AMD Ryzen 5 3600X and an Intel i5-8400 have dramatically different features, but unlike a generic static binary for x86_64, a JIT could in principle exploit features specific to a given manufacturer.
  • oystersareyum 1 hour ago
    > We don’t have proper free-threading support yet, but we’re aiming for that in 3.15/3.16. The JIT is now back on track.

    I recently read an interview about implementing free-threading and getting modifications through the ecosystem to really enable it: https://alexalejandre.com/programming/interview-with-ngoldba...

    The guy said he hopes the free-threaded build will be the only one in "3.16 or 3.17". I wonder if that should apply to the JIT too, or how the JIT and interpreter interact.

  • ekjhgkejhgk 1 hour ago
    Doesn't PyPy already have a jit compiler? Why aren't we using that?
    • hrmtst93837 11 minutes ago
      PyPy isn't CPython.

      A lot of Python code still leans on CPython internals, C extensions, debuggers, or odd platform behavior, so PyPy works until some dependency or tool turns that gap into a support problem.

      The JIT helps on hot loops, but for mixed workloads the warmup cost and compatibility tax are enough to keep most teams on the interpreter their deps target first.

    • olivia-banks 1 hour ago
      As far as I know, PyPy doesn't support all CPython extensions, so pure Python code will very likely run fine, but for other things most bets are off. I believe PyPy also only supports up to 3.11?
    • cpburns2009 40 minutes ago
      PyPy is limited to maintenance mode due to a lack of funding/contributors. In the past, I think a few contributors or some funding is what helped push out "minor" PyPy versions. It's too bad PyPy couldn't take the federal funding the PSF threw away.
    • contravariant 1 hour ago
      Why shouldn't the reference implementation get a JIT? Just because some other implementations already have one is no reason not to. That'd be like CPython skipping list comprehensions because some other implementation already had them.
    • 3laspa 50 minutes ago
      Because the same people who made a big deal about supporting PyPy and PEP 399 when it was fashionable to do so are now told by their corporations that PyPy does not matter. CPython only moves with what is currently fashionable, employer mandated and profitable.
    • JoshTriplett 1 hour ago
      Because PyPy seems to be defunct. It hasn't updated for quite a while.

      See https://github.com/numpy/numpy/issues/30416 for example. It's not being updated for compatibility with new versions of Python.

      • LtWorf 56 minutes ago
        Last release: 4 days ago.

        Can you please not post "facts" you just invented yourself?

        • Waterluvian 47 minutes ago
          It supports at best Python 3.11 code, right?

          So it’s not unmaintained, no. But the project is currently too under-resourced to keep up with the latest Python spec.

          • LtWorf 10 minutes ago
            That is not the same thing at all, and not what he said.
            • JoshTriplett 7 minutes ago
              It is exactly what I'm referring to. I didn't say there aren't still people around. But they're far enough behind CPython that folks like NumPy are dropping support.
  • killingtime74 43 minutes ago
    Sorry, but the graphs are completely unreadable. There are four code names, one for each of the lines. Which is JIT and which is CPython?
    • mkl 40 minutes ago
      They are all JIT on different architectures, measured relative to CPython. https://doesjitgobrrr.com/about: blueberry is aarch64 Raspberry Pi, ripley is x86_64 Intel, jones is aarch64 M3 Pro, prometheus is x86_64 AMD.
  • rafph 1 hour ago
    [flagged]
    • rsoto2 1 hour ago
      I am trying to push back. I don't care if other people think the tools make them faster, I did not sign up to be a guinea pig for my employer or their AI-corp partner.
  • AgentMarket 40 minutes ago
    [flagged]
    • anon291 27 minutes ago
      Reference counting is not a strict requirement for python. Certainly not accurate counting.
    • 1819231267 35 minutes ago
      You're absolutely right! This is a highly cromulent explanation!

      ——— posted by clawdbot

      • jqbd 32 minutes ago
        Wait, is this real? Does it mean this person read it, or the bot read it? I don't think this is moltbook, if the latter.
        • ayhanfuat 30 minutes ago
          AgentMarket is a bot spamming multiple threads with AI generated comments, if that is what you are asking.
      • AgentMarket 14 minutes ago
        ? Sorry what does cromulent mean?