12 comments

  • CarVac 0 minutes ago
    The contrast enhancement seems simpler to perform with an unsharp mask in the continuous image.

    It probably has a different looking result, though.

  • sph 1 hour ago
    Every example I thought "yeah, this is cool, but I can see there's space for improvement" — and lo! did the author satisfy my curiosity and improve his technique further.

    Bravo, beautiful article! The rest of this blog is at this same level of depth, worth a sub: https://alexharri.com/blog

  • wonger_ 22 minutes ago
    Great breakdown and visuals. Most ASCII filters do not account for glyph shape.

    It reminds me of how chafa uses an 8x8 bitmap for each glyph: https://github.com/hpjansson/chafa/blob/master/chafa/interna...

    There's a lot of nitty gritty concerns I haven't dug into: how to make it fast, how to handle colorspaces, or like the author mentions, how to exaggerate contrast for certain scenes. But I think 99% of the time, it will be hard to beat chafa. Such a good library.

    EDIT - a gallery of (Unicode-heavy) examples, in case you haven't seen chafa yet: https://hpjansson.org/chafa/gallery/

  • chrisra 40 minutes ago
    > To increase the contrast of our sampling vector, we might raise each component of the vector to the power of some exponent.

How do you arrive at that? It's presented like it's a natural conclusion, but if I were trying to adjust contrast... I don't see the connection.

    • c7b 11 minutes ago
      What about the explanation presented in the next paragraph?

> Consider how an exponent affects values between 0 and 1. Numbers close to 0 experience a strong pull towards 0, while larger numbers experience less pull. For example, 0.1^2 = 0.01, a 90% reduction, while 0.9^2 = 0.81, only a 10% reduction.

That's exactly the reason why it works; it's even nicely visualized below. If you've dealt with similar problems before, you might know this in the back of your head. E.g. you may have had a problem where you wanted to measure distance from 0 but wanted to remove the sign. You may have tried both absolute value and squaring, and noticed that the latter has the additional effect described above.

      It's a bit like a math undergrad wondering about a proof 'I understand the argument, but how on earth do you come up with this?'. The answer is to keep doing similar problems and at some point you've developed an arsenal of tricks.
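A minimal sketch of the exponent trick discussed above (the function name and sample values here are made up for illustration):

```python
# Raising each 0..1 component to a power > 1 pulls small values strongly
# toward 0 while barely moving values near 1 -- which is exactly an
# increase in contrast.

def apply_contrast(samples, exponent=2.0):
    """Raise each 0..1 sample to `exponent` (illustrative helper)."""
    return [s ** exponent for s in samples]

samples = [0.1, 0.5, 0.9]
print(apply_contrast(samples))  # [0.010..., 0.25, 0.81...]
```

With exponent 2, the 0.1 sample loses 90% of its value while the 0.9 sample loses only 10%, matching the quote from the article.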

  • zdimension 16 minutes ago
    Well-written post. Very interesting, especially the interactive widgets.
  • nickdothutton 1 hour ago
    What a great post. There is an element of ascii rendering in a pet project of mine and I’m definitely going to try and integrate this work. From great constraints comes great creativity.
  • blauditore 20 minutes ago
    Nice! Now add colors and we can finally play Doom on the command line.

    More seriously, using colors (not trivial probably, as it adds another dimension), and some select Unicode characters, this could produce really fancy renderings in consoles!

  • nathaah3 2 hours ago
    that was so brilliant! i loved it! thanks for putting it out :)
  • adam_patarino 1 hour ago
    Tell me someone has turned this into a library we can use
  • chrisra 31 minutes ago
    Next up: proportional fonts and font weights?
  • Jyaif 1 hour ago
    It's important to note that the approach described focuses on giving fast results, not the best results.

Simply trying every character, considering its entire bitmap, and keeping the character that minimizes the distance to the target gives better results, at the cost of more CPU.

This is a well-known problem, because early computers with monitors could only display characters.

At some point we were able to define custom character bitmaps, but not enough custom characters to cover the entire screen, so the problem became more complex: which new characters do you create to reproduce an image optimally?

    And separately we could choose the foreground/background color of individual characters, which opened up more possibilities.
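The brute-force matching described above — compare every glyph's full bitmap against the target tile and keep the closest — can be sketched like this (the 2x2 "bitmaps" are toy data, not real font glyphs):

```python
def best_char(target, glyphs):
    """Return the character whose bitmap has the smallest squared
    distance to the target tile. `glyphs` maps char -> flat pixel list."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(glyphs, key=lambda c: dist(glyphs[c], target))

# Toy 2x2 "bitmaps" for illustration only.
glyphs = {
    " ": [0, 0, 0, 0],
    ".": [0, 0, 1, 0],
    "#": [1, 1, 1, 1],
}
print(best_char([1, 1, 1, 0], glyphs))  # -> "#"
```

The cost is one full bitmap comparison per candidate character per tile, which is why the fast lookup-table approach trades some quality for speed.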

    • spuz 36 minutes ago
Thinking more about the "best results". Could this not be done by transforming the ascii glyphs into bitmaps, and then using some kind of matrix multiplication or dot product calculation to find the ascii character with the highest similarity to the underlying pixel grid? This would presumably lend itself to SIMD or GPU acceleration. I'm not that familiar with this type of image processing, so I'm sure someone with more experience can clarify.
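The matrix-multiplication idea above, sketched in plain Python: stack every glyph bitmap into a matrix G (one row per glyph) and score a target tile t with one matrix-vector product. With numpy the loop below collapses to `scores = G @ t`, which is what would map to SIMD/GPU. Glyph data is toy data, and a raw dot product is assumed as the similarity measure (in practice you would likely normalize it, since a raw dot product favors dense glyphs):

```python
def matvec(matrix, vec):
    """Matrix-vector product: one dot product (similarity score) per row."""
    return [sum(a * b for a, b in zip(row, vec)) for row in matrix]

chars = [" ", ".", "#"]
G = [
    [0, 0, 0, 0],  # " "
    [0, 0, 1, 0],  # "."
    [1, 1, 1, 1],  # "#"
]
t = [1, 1, 1, 0]
scores = matvec(G, t)
print(chars[scores.index(max(scores))])  # -> "#"
```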
    • brap 48 minutes ago
      You said “best results”, but I imagine that the theoretical “best” may not necessarily be the most aesthetically pleasing in practice.

      For example, limiting output to a small set of characters gives it a more uniform look which may be nicer. Then also there’s the “retro” effect of using certain characters over others.

    • Sharlin 1 hour ago
      And a (the?) solution is using an algorithm like k-means clustering to find the tileset of size k that can represent a given image the most faithfully. Of course that’s only for a single frame at a time.
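A naive sketch of the k-means idea above: flatten each image tile to a vector of 0..1 pixel values, and the k centroids that emerge become the custom character bitmaps for that frame. The function name and toy data are illustrative:

```python
import random

def kmeans_tiles(tiles, k, iters=10, seed=0):
    """Naive k-means over flattened tiles; returns k centroid bitmaps."""
    rng = random.Random(seed)
    centroids = rng.sample(tiles, k)
    for _ in range(iters):
        # Assign each tile to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for tile in tiles:
            d = [sum((a - b) ** 2 for a, b in zip(tile, c)) for c in centroids]
            clusters[d.index(min(d))].append(tile)
        # Move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = [sum(col) / len(cluster) for col in zip(*cluster)]
    return centroids

# Two dark tiles and two bright tiles converge to two centroids.
tiles = [[0.0, 0.0], [0.0, 0.1], [1.0, 1.0], [0.9, 1.0]]
print(sorted(kmeans_tiles(tiles, 2)))
```

As noted, this optimizes one frame at a time; a tileset stable across frames would need a different objective.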
    • finghin 1 hour ago
      In practice isn’t a large HashMap best for lookup, based on compile-time or static constants describing the character-space?
      • spuz 55 minutes ago
In the appendix, he talks about reducing the lookup space by quantising the sampled points to just 8 possible values. That allowed him to make a lookup table about 2 MB in size, which was apparently incredibly fast.
        • finghin 27 minutes ago
I've been working on something similar (haven't got to this stage yet) and was planning to do something very similar to the circle-sampling method, but the staggering of circles is a really clever idea I had never considered. I was planning on sampling character pixels' alignment along orthogonal and diagonal axes. You could probably combine these approaches. But yeah, such an approach seemed particularly powerful because you could encode it all in a table.
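A sketch of the quantise-then-look-up approach discussed in this subthread. The 8 quantisation levels match the appendix's description; the number of sample points and the stand-in matching function are made up for illustration:

```python
from itertools import product

def quantize(samples, levels=8):
    """Quantise each 0..1 sample to one of `levels` integer buckets,
    so only finitely many lookup keys exist."""
    return tuple(min(int(s * levels), levels - 1) for s in samples)

def build_table(num_samples, levels, best_char):
    """Precompute the best character for every possible quantised key.
    `best_char` stands in for whatever expensive matcher you already have."""
    return {key: best_char(key) for key in product(range(levels), repeat=num_samples)}

# Stand-in matcher: a toy threshold, not the article's real method.
table = build_table(3, 8, best_char=lambda key: "#" if sum(key) > 10 else ".")
print(table[quantize([0.9, 0.8, 0.7])])  # -> "#"
```

With 3 sample points at 8 levels the table has only 8^3 = 512 entries; real sampling vectors are longer, which is how the table grows to the ~2 MB mentioned above.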