If you want to dive into the details, you can copy the "fonted" output into a Unicode analyzer; [0] is an online one that seems to work well.
[0]: https://devina.io/unicode-analyser
Is it? Even emoji, one of the most controversial additions ever, was fully justified with respect to its possible accessibility issues when it was introduced into Unicode.
Like others have already said, it’s an accessibility nightmare. On the other hand, it’s not like this is going away anytime soon – maybe screenreaders could learn to understand and read some such “fonts” (e.g. bold/italic at least)?
Absolutely. The argument that screen readers shouldn't gain a heuristic for identifying this kind of text and normalising it down to pronounceable words is just prescriptivism, in my view.
ALL CAPS, SpOnGeBoB cASe, clap emphasis, and others carry specific meanings in colloquial written language, and the use of other letterlike symbols can too. These should be presented in an accessible form to the user, rather than demanding that people refrain from using them.
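A crude version of that heuristic is already within reach, since Unicode ships compatibility mappings for most of these letterlike symbols. A minimal sketch (the function name is mine, and a real screen reader would need far more nuance than this):

```typescript
// Fold "fancy" letterlike symbols back into plain text before speech.
// NFKC compatibility normalization maps most Mathematical Alphanumeric
// Symbols (𝐛𝐨𝐥𝐝, 𝑖𝑡𝑎𝑙𝑖𝑐, 𝔉𝔯𝔞𝔨𝔱𝔲𝔯, ...) back to ASCII letters.
function toPronounceable(text: string): string {
  return text.normalize("NFKC");
}

console.log(toPronounceable("𝓗𝓮𝓵𝓵𝓸 𝔀𝓸𝓻𝓵𝓭")); // "Hello world"
```

It won't catch everything (SpOnGeBoB case or clap emphasis would need their own handling), but it covers the pseudo-font alphabets.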
That's true, but at some point, intention and accessibility will start to clash.
Like, there used to be that fad/meme of adding as many diacritics and other Unicode appendages to a text as possible. ("Cursed text" or something I think)
The diacritics will stack and turn the characters into monstrosities that will break the page layout and generally make the text look alien and distorted.
It also makes the text hard to read, which is the entire point.
But a screen reader kind of faces a dilemma here: if it ignores the diacritics and just reads the text normally, then the "weirdness" will be missing and the text will appear out of context. To convey it, the reader would have to intentionally read the text in a distorted voice, but that would make it hard to understand and could cause unease and confusion if the distortion starts without warning.
There is also the question of whether we want unexpected tone shifts at all. Like, it would be semantically correct to read all-caps text in a shouting voice, but do we really want screen readers to randomly start shouting?
(Edit: oh right, it was Zalgo, not cursed text)
> • Don't use aria-label or aria-labelledby on any other non-interactive content such as p, legend, li, or ul, because it is ignored.
> • Don't use aria-label or aria-labelledby on a span or div unless it's given a role. When aria-label or aria-labelledby are on interactive roles (such as a link or button) or an img role, they override the contents of the div or span. Other roles besides Landmarks (discussed above) are ignored.
I have often wanted to do exactly this, and was disappointed when I learned aria-label couldn’t be used to replace the value exposed for non-interactive content. I have hunted for other techniques a couple of times, and never been completely satisfied, though things have improved in the last year and a bit.
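The basic technique is roughly this (a minimal sketch; the `visually-hidden` class is just the usual clipping recipe, nothing standardized):

```html
<span>
  <!-- the styled glyphs: visible, but hidden from the accessibility tree -->
  <span aria-hidden="true">𝓯𝓪𝓷𝓬𝔂 𝓽𝓮𝔁𝓽</span>
  <!-- plain text for screen readers, visually hidden with CSS -->
  <span class="visually-hidden">fancy text</span>
</span>
<style>
  .visually-hidden {
    position: absolute;
    width: 1px;
    height: 1px;
    overflow: hidden;
    clip-path: inset(50%);
    white-space: nowrap;
  }
</style>
```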
The `inert` attribute is a recent addition which may exclude the accessibility text from find-in-page (maybe desirable, maybe undesirable, depending on the situation). Firefox and Chromium shipped that refinement of its behaviour in the last year and a half, Safari hasn’t yet (and seems to have reservations about the whole idea <https://bugs.webkit.org/show_bug.cgi?id=269909>).
You can also play with putting the accessibility text in a pseudoelement’s content (e.g. <span data-a11y-text=…><span aria-hidden=true>…</span></span> and [data-a11y-text]::after { content: attr(data-a11y-text); … }), which should these days be exposed in the accessibility tree, but Firefox find-in-page now includes generated content (though you can’t bridge real and generated content), and it wouldn’t surprise me if Chromium eventually followed suit, so I’m not convinced it’s worth the bother, especially if you lose `inert` or have to add an element anyway. But keeping it as an attribute instead of a separate element has some appeal.
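Spelled out, that pseudoelement variant looks something like this (same caveats as above; the generated text still needs to be visually hidden):

```html
<span data-a11y-text="fancy text">
  <span aria-hidden="true">𝓯𝓪𝓷𝓬𝔂 𝓽𝓮𝔁𝓽</span>
</span>
<style>
  [data-a11y-text]::after {
    content: attr(data-a11y-text);
    /* same clipping recipe as before, applied to the pseudoelement */
    position: absolute;
    width: 1px;
    height: 1px;
    overflow: hidden;
    clip-path: inset(50%);
    white-space: nowrap;
  }
</style>
```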
If you want to use such an effect on your own website that’s probably the way to go (although I’d probably try to use real text in HTML and replace it with some CSS magic... or just use a web font).
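That is, keep real text in the DOM and let CSS do the decoration (the font name here is only an example):

```html
<span class="fancy">fancy text</span>
<style>
  /* Screen readers, find-in-page, and copy/paste all see plain text;
     the flourish is purely presentational. */
  .fancy {
    font-family: "Dancing Script", cursive;
    font-weight: 700;
  }
</style>
```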
Social media / forum sites etc. should definitely add this: make a plain-text / accessible (user) name mandatory and a display name optional, and give end users the choice to show the canonical name or the display name.
It also provides a way to post data on the public web in an obfuscated form that a human can read but automated search tools are likely not looking for.
Great method if you have short human-readable information that you don't want AI to train on ;)
I wrote a tiny pipeline to check, and it seems styled Unicode has a very modest effect on an LLM's ability to understand text. This doesn't mean it has no effect in training, but it's not unreasonable to think that, with a wider corpus, a model will learn to represent it better.
Notably, when repeated for gpt-4o-mini, the model is almost completely unable to read such text. I wonder if this correlates to a model's ability to decode Base64.
I removed most count = 1 samples to make the comment shorter.
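For anyone who wants to try a similar check, the core of such a pipeline can be quite small. A hypothetical sketch (`askModel` stands in for whichever LLM API you use; none of this is from the pipeline described above):

```typescript
// Style a sentence with Mathematical Bold letters (U+1D400...), ask a
// model to transcribe it, and score the answer against the original.
type Ask = (prompt: string) => Promise<string>;

function toMathBold(s: string): string {
  return [...s]
    .map((c) => {
      const cp = c.codePointAt(0)!;
      if (c >= "A" && c <= "Z") return String.fromCodePoint(0x1d400 + cp - 0x41);
      if (c >= "a" && c <= "z") return String.fromCodePoint(0x1d41a + cp - 0x61);
      return c;
    })
    .join("");
}

async function canReadStyled(sentence: string, askModel: Ask): Promise<boolean> {
  const styled = toMathBold(sentence);
  const answer = await askModel(`Transcribe this into plain ASCII: ${styled}`);
  return answer.trim().toLowerCase() === sentence.toLowerCase();
}
```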
There was a paper on using adversarial typography to make a corpus "unlearnable" to an LLM [0], finding that some tokens play an important part in recall and obfuscating them with Unicode and SVG lookalikes. If you're interested, I suggest taking a look.
[0] https://arxiv.org/abs/2412.21123
Unicode obfuscation tricks trigger modern content filters faster than you can blink. Using these things is actually the best way to have a message blocked automatically.
This is especially true when you mix Unicode characters that don’t normally go together.
(Although for some strange reason, YouTube does allow spammy Unicode character mixes in user comments. I don’t know why)
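A toy version of the kind of mixed-script check such filters might run, using Unicode script properties (my illustration; no idea how any particular platform actually does it):

```typescript
// Flag text whose letters come from more than one Unicode script,
// e.g. Latin mixed with Cyrillic or Greek look-alikes.
const SCRIPTS = ["Latin", "Cyrillic", "Greek", "Ethiopic"] as const;

function looksMixed(text: string): boolean {
  const seen = new Set<string>();
  for (const script of SCRIPTS) {
    if (new RegExp(`\\p{Script=${script}}`, "u").test(text)) seen.add(script);
  }
  return seen.size > 1;
}

console.log(looksMixed("раypal.com")); // true: Cyrillic "ра" + Latin "ypal"
console.log(looksMixed("paypal.com")); // false
```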
Feel like I should be able to explain this, but I can't. What's the downside of using Unicode? I note some webpages have UTF-8 in the head. Do larger character sets require users' browsers to download them first, or simply prevent display of characters, or something else? If bandwidth is the problem, how large are the files (i.e. how delayed will the site load be)? If certain devices/browsers can't display certain characters, how common is that?
In UTF-8, standard Latin characters are encoded just as they are in ASCII (1 byte each), and all UTF-8 characters are 1-4 bytes. Rendering the characters requires having a font that covers them (for example, Comic Sans doesn't have Chinese characters). A website can rely on the user's installed fonts or specify a font for the client to download in its CSS, but in any case that's orthogonal to the encoding.
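To make the size point concrete (runs in any modern browser or Node):

```typescript
// UTF-8 byte lengths for a plain "C" and two of its look-alikes.
const bytes = (s: string) => new TextEncoder().encode(s).length;

console.log(bytes("C")); // 1 byte  (U+0043, ASCII)
console.log(bytes("ℂ")); // 3 bytes (U+2102, Basic Multilingual Plane)
console.log(bytes("𝕮")); // 4 bytes (U+1D56E, outside the BMP)
```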
A misappropriation of Amharic labeled "Tribal Font" is plainly racist. Amharic is not "tribal" just because it is African. It is a Semitic language whose script was developed in a sophisticated literary tradition with roots in ancient civilization.
Unicode is not supposed to have fonts at all. Unicode defines characters that you can then represent in various fonts. It just so happens that Unicode has many characters that happen to look like the letter "C" (as an example): © for copyright, ℂ for complex numbers (formally called Double-Struck Capital C), etc. The author uses these many variations as a fun way to make "fonts".
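A few of those look-alikes and their actual code points (the Roman numeral and Cyrillic examples are additions of mine):

```typescript
// Five characters that render like "C" but are all distinct code points.
for (const c of ["C", "©", "ℂ", "Ⅽ", "С"]) {
  console.log(c, "U+" + c.codePointAt(0)!.toString(16).toUpperCase().padStart(4, "0"));
}
// C U+0043 (Latin), © U+00A9 (copyright), ℂ U+2102 (double-struck),
// Ⅽ U+216D (Roman numeral hundred), С U+0421 (Cyrillic Es)
```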
LLMs? No: LLMs are too slow for content moderation at scale.
Custom-trained models? Maybe. Are the Unicode characters in the training data?
No Zalgo text?
>Accessibility: Don't Use Fake Bold or Italic in Social Media
https://news.ycombinator.com/item?id=43302835
It'd be great if they used the "look-alike" mapping both ways.
https://en.wikipedia.org/wiki/UTF-8#Description
Apparently some of the variants do...