A couple of friends and I had the idea to make a browser extension that can use AI/NLP to screen comments on sites like HN, reddit, twitter, etc. to detect toxic or negative content (and warn you before you read it, if you have the extension turned on). But we are not sure if there is much of a need there; would this be useful for regular HNers/redditors? Any chance moderators could chime in and let us know their take?
Thank you!
This is a civil platform so I really struggle to express in writing how much I dislike this. However, based on experience there might be a number of consultancies that would pay a lot of money for someone to develop this (or get paid for it, depending on the company). It's so deeply slimy that I suspect it has already happened in many of those places.
For example: an extension that automatically hides HN posts whose title includes words I have manually blacklisted. Not even necessarily for being "toxic", but uninteresting to me.
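The core of that is tiny, for what it's worth. Here's a rough sketch of the matching logic such an extension's content script could run over each post title (all names here are mine, not from any existing extension; the HN selectors in the comment are a guess at the current markup):

```javascript
// Hypothetical core of a title-blacklist extension (all names are mine).
// A content script would call shouldHide() on each story title and hide
// the surrounding row when it returns true.

const BLACKLIST = ["backlash", "slammed", "outrage"]; // user-managed list

function shouldHide(title, blacklist = BLACKLIST) {
  const lower = title.toLowerCase();
  return blacklist.some((word) => lower.includes(word.toLowerCase()));
}

// In a WebExtension content script matched to news.ycombinator.com,
// the wiring would look roughly like:
//   document.querySelectorAll(".titleline").forEach((el) => {
//     const row = el.closest("tr");
//     if (row && shouldHide(el.textContent)) row.style.display = "none";
//   });
```

Plain substring matching is deliberately dumb here; the point is that "uninteresting to me" needs no model at all, just a user-editable list.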
I'll be honest: I've taken myself off of social media. HN is about the only one I can stomach, barely. I appreciate having a source of brief distractions, and toxicity is exactly counter to the point. Even as it is I need to keep myself out of some areas. (Holy cow does discussion of dating bring out the sense of entitlement and misogyny.)
I don't really want to depend on an AI to screen things, and if nothing else, it's a good reminder that I really should limit the amount of time I spend in the kind of vapid extemporanea that social sites bring. But I don't want to reduce it to zero, so I'd probably use a tool that made the vapid extemporanea less unpleasant.
In any case - if you made it I would at least be interested enough to check it out and see what kind of content it blocked.
As an aside I use firefox mobile which has pretty weak extension support. If the extension worked with firefox mobile that would be extra cool.
So we are not inventing how to classify toxicity; rather, we are making the model usable via a browser extension.
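If that's the split, the extension wiring can stay completely independent of whichever model you end up shipping. A minimal sketch of that separation (everything here is mine; `classify` is a stand-in stub, not your actual model):

```javascript
// Sketch of decoupling the extension from the classifier (all names mine).
// `classify` is whatever model the extension bundles or calls out to;
// here it's a trivial stub purely for illustration.

function screenComments(comments, classify, threshold = 0.8) {
  // Return the comments the extension should warn about before showing them.
  return comments.filter((text) => classify(text) >= threshold);
}

// Stub classifier: treats all-caps shouting as "toxic". A real NLP model
// would replace this function without touching screenComments().
const stubClassify = (text) =>
  text === text.toUpperCase() && /[A-Z]/.test(text) ? 0.9 : 0.1;
```

Keeping the threshold a parameter is what makes "the user is fully in control" concrete: the extension exposes the dial, the model just returns scores.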
Have you thought about personalizing the model with input from what a user has liked/upvoted on various forums? Even with that, though, it encourages bubbles and echo chambers.
I'm from a group that thrives on busting of um.. chops. If someone isn't jabbing you, that's when you should be concerned. Don't tell the governor, but a group was hanging out tonight and hairstyles, music choices, fatness, and politics were all fair game. The same goes for online groups, family, and work.
Facebook and Twitter have tried this, and the result was a total failure. Again, it's a very difficult problem, and if your team is able to pull it off, the product is going to be worth more than a browser plugin. Best wishes and much respect for taking on a difficult task.
How's your understanding of Moral Foundations theory?
https://moralfoundations.org/
And what do you think of the Liberty/oppression dimension?
I don't think you do.
> but the idea is for the user to be fully in control,
That's how it starts. Good intentions.
You know the term neoteny? Geeze.
I'm an adult and can easily ignore things I don't want to read.
IMHO this is an awful idea at every level of analysis.
Examples of certain words and phrases which frequently appear in the sort of (low-quality political clickbaity point-scoring nonsense) articles I don't want to see are:
backlash, unacceptable, said on twitter, abhorrent, disgraceful, slammed, ought to be ashamed of him/herself, woke, snowflake, social media post, millennial, boomer, pariah, wave of protest, high horse, bandwagon, outrage