To make abuse, griefing, spamming, and trolling more difficult, we will need to take a multifaceted approach, employing several tools to raise the effort required to spam as high as possible without burdening ordinary or new users. We have discussed several stop-gap solutions in isolation, but it is just as important to bring these tools together and conceptualize how they will work in concert toward the goal of ensuring that the people who are trying to make us mad are always madder than we are.
The better our tools, the madder they will have to be to overcome them, and the more satisfaction our beautiful moderators will take in every swing of the ban hammer.
At the moment, Lemmy only supports banning registered user accounts. We need to consider alternative means of banning people, because closing registration and playing whack-a-mole are both untenable.
IP Bans
Every computer making a connection over the Internet has an IP address. We can identify which IP addresses abuse is coming from and ban them. This will slow down a run-of-the-mill troll, but it is easy enough to circumvent with proxies, VPNs, new DHCP leases, and other means. Nonetheless, it is a simple first step, and can prove useful depending on the nature of the abuse being encountered.
Pros:
- Rather simple to implement.
- IP bans with wildcards or ranges can block out entire problematic ISPs, subnets, or regions.

Cons:
- Easy to circumvent.
- Potential for false positives.
- Many people can share a single IP address (schools, organizations, etc.).
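As a rough sketch of how range bans could work, Python's standard `ipaddress` module already handles CIDR membership tests. The addresses below are illustrative documentation ranges, not real ban entries:

```python
import ipaddress

# Hypothetical ban list: a single address plus a whole subnet.
BANNED_NETWORKS = [
    ipaddress.ip_network("203.0.113.42/32"),   # one persistent troll
    ipaddress.ip_network("198.51.100.0/24"),   # a problematic subnet
]

def is_ip_banned(addr: str) -> bool:
    """Return True if the address falls inside any banned network."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BANNED_NETWORKS)
```

Storing ranges rather than individual addresses keeps the list short and catches trolls who hop between addresses on the same subnet.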
Browser Fingerprint Bans
The evil soulsucking worms in the ad-tech industry have poured a lot of blood, sweat, and adrenochrome into finding innovative new ways of identifying and tracking users, and one of these is browser fingerprinting. Browser fingerprints are generated from a long list of potential heuristics, including the user agent, a survey of installed system fonts, the behavior of non-standard scripting, detection of browser plugins, measurements of computer performance, and so on. Depending on the sophistication of the fingerprinting software, these can become incredibly unique. Eerily unique. It is also possible to use fuzzy hashing algorithms to prevent people from tacking one character onto the user agent to completely alter their browser fingerprint hash.
When abusive accounts are banned, we could place their browser fingerprint hash into a rate-limit or ban list the same way we would with IP addresses. This would require us to capture browser fingerprints from banned users at some point, and compare them at account registration.
Pros:
- More inconvenient to change.
- Will continue to test positive for abuse coming through proxies.

Cons:
- Increases the amount of private data we collect (though it could be anonymized).
- More complex to implement than IP bans.
- Could be circumvented by direct API access.
- Users pay for good privacy practices by potentially getting caught in false positives.
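Real deployments tend to use dedicated fuzzy-hashing schemes (ssdeep-style) for the comparison step, but a minimal sketch of the idea is to normalize the collected traits into a canonical string and score similarity, so a one-character tweak cannot defeat the match. The function names and the threshold below are invented for illustration:

```python
from difflib import SequenceMatcher

def fingerprint_features(user_agent, fonts, plugins):
    """Collapse collected browser traits into one canonical string.
    Lowercasing and sorting mean trivial reordering or case changes
    do not alter the comparison input."""
    tokens = [user_agent.lower()]
    tokens += sorted(f.lower() for f in fonts)
    tokens += sorted(p.lower() for p in plugins)
    return "|".join(tokens)

def is_similar(fp_a, fp_b, threshold=0.9):
    """Treat two fingerprints as the same device when similarity meets
    the (invented) threshold, so padding the user agent with an extra
    character is not enough to dodge a ban."""
    return SequenceMatcher(None, fp_a, fp_b).ratio() >= threshold
```

At registration time, a new account's fingerprint string would be scored against the banned set instead of compared for exact equality.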
Rate Limiting
If users aren't banned, they should be able to post as freely as possible; but as long as it is easy for abusive users to create new accounts, we will have to curtail the actions of new accounts. There are several heuristics we can use for rate limiting:
- IP address
- Browser fingerprint
- Account age
- Account total karma
- Community karma
- Upvote/downvote ratio
- Downvote/comment ratio
- Unverified / no email address
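One way these heuristics could combine is a simple additive allowance, where each trust signal loosens the limit. The function name, thresholds, and numbers below are all invented for illustration, not values from any existing Lemmy configuration:

```python
def posts_per_hour_limit(account_age_days, total_karma, email_verified):
    """Hypothetical additive rate limit: brand-new, unverified,
    zero-karma accounts get the strictest allowance, and each
    trust signal relaxes it."""
    limit = 2                      # strictest default for new accounts
    if email_verified:
        limit += 2
    if account_age_days >= 7:
        limit += 4
    if total_karma >= 50:
        limit += 10
    return limit
```

A scheme like this never hard-blocks a legitimate new user; it only slows them down until they accumulate trust signals.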
In addition to these general heuristics, there are other activities which can be automatically detected to trigger increased rate limits:
- Number of URLs posted
- Repeatedly posting the same URL
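Detecting both triggers could be as simple as scanning a user's recent comment bodies. A sketch, with a deliberately crude URL pattern (a production matcher would need to be more careful):

```python
import re

URL_RE = re.compile(r"https?://\S+")

def url_spam_signals(comments):
    """Count URLs across a user's recent comment bodies and how many
    of them are repeats of an earlier URL."""
    urls = [u for body in comments for u in URL_RE.findall(body)]
    repeats = len(urls) - len(set(urls))
    return {"url_count": len(urls), "repeated_urls": repeats}
```

Either signal crossing a configurable threshold would bump the account into a stricter rate-limit tier.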
There are some activities which shouldn't trigger an automatic ban or rate limit, but which would nonetheless be helpful to automatically flag for moderator review. A key aspect of this would be enabling moderators to configure which actions trigger these reports, so that they remain useful without flooding out other important user-generated reports. Using the community settings feature along with some new columns, we can allow new entries to be modified by moderation teams at will. These may include:
- List of automatic ban URLs and phrases
- List of automatic report URLs and phrases
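A sketch of how those two lists might be consulted, assuming they arrive as per-community settings rows; the phrases here are made up:

```python
# Hypothetical per-community settings: "ban" entries block outright,
# "report" entries only file a report for moderators to review.
AUTO_BAN_PHRASES = ["free crypto giveaway"]
AUTO_REPORT_PHRASES = ["dm me for deals"]

def classify(body: str) -> str:
    """Return the most severe action a comment body triggers."""
    text = body.lower()
    if any(p in text for p in AUTO_BAN_PHRASES):
        return "ban"
    if any(p in text for p in AUTO_REPORT_PHRASES):
        return "report"
    return "ok"
```

Keeping the two lists separate lets mod teams promote a phrase from "report" to "ban" once they are confident it never appears in legitimate posts.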
Mitigating Spam Impact
As good as we are, spammers get through. There are a few ways to mitigate the impact spam has, both on the site and on the mods.
Reducing impact of removed comments
Removed comments still take up screen space and impact comment/post sorting.
Minimize removed comments
When comments are removed, they still clog up screen space and make it seem like the spammers are present, which isn't good. This is particularly noticeable for non-top-level comments, which are still lifted up near the top of posts, depending on the scores of their parents.
When comments are removed, ideally they would become unnoticeable. I recommend collapsing the comments, having them be automatically minimized. It might also help to tone down the banned indicator next to the spammer's username, which catches the eye. (issue #155)
Removed comments have too much weight on their parents' sorting
This is almost the opposite of the issue above. Currently, removed comments seem to be weighted to sink to the bottom of the page. This is good for top-level comments, but bad and abusable for child comments, which unduly drag down their comment tree. Ideally, removed comments wouldn't impact the sorting weight of the comments around them at all.
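To make that concrete, a sort weight could skip a removed comment's own score while still walking its children, so removal neither sinks nor lifts the surrounding tree. This is a hypothetical weighting for illustration, not Lemmy's actual ranking function:

```python
def tree_score(comment):
    """Sum a comment subtree's scores for sorting, treating removed
    comments as weightless so they cannot drag their tree down."""
    own = 0 if comment.get("removed") else comment["score"]
    return own + sum(tree_score(c) for c in comment.get("children", []))
```

With this weighting, a vote-bombed removed reply contributes nothing, while its legitimate siblings and children still count normally.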
Remove votes by banned users from the site
The spammers have turned to vote manipulation to ensure that the posts and comments they want seen float to the top. They commonly run 20-40-account botnets to boost posts and 10-account botnets to hide comments they don't like.
Pavlichenko has given us the insights required to catch and kill their botnets, but we have no way of undoing the damage done to the sorting algorithm.
I propose that votes cast by sitewide-banned users should be discounted from the tabulations. A post that was vote-bombed by 40 alts shouldn't look like it has those 40 votes, nor should those 40 votes count in the sort.
The easiest way, I think, would be to make the site blind to votes by banned users. This way, temp-banned users would still have all their votes cast when they get back; the same goes for wrongfully banned users.
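A sketch of what being blind to banned users' votes could look like at tabulation time, with invented field names. The votes themselves are never deleted, which is what makes a ban reversal automatically restore them:

```python
def visible_score(votes, banned_user_ids):
    """Tabulate a score while ignoring votes cast by currently banned
    accounts. Votes are kept in storage, so lifting a temp ban (or
    reversing a wrongful one) restores them with no extra work."""
    return sum(v["value"] for v in votes if v["user_id"] not in banned_user_ids)
```

In practice this would be a filter in the scoring query rather than application code, but the principle is the same: ban status gates visibility, not storage.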
Otherwise, a less-elegant solution could be to provide a 'remove all votes by user' gundam for mods when viewing a user's profile, much like the planned 'remove all posts/comments by user' gundam. However, this does require extra time, which adds up when banning 60 botnet accounts in an evening, and is likely irreversible.
These are mod tools that would be good to have for fighting spam/system abuse. General tools for everyday moderation unrelated to battling spam [can be found here.](General Tools Issues)
Please consider checking both pages if working on an issue here, in case there are multiple issues that might be best tackled together; for example, both pages speak to changes to the Reports page.
Report log spam
The spammers have turned to spamming the report log. Rich, for a group ostensibly concerned with the community's safety.
Anyway, it is a form of spam that is often missed by mods until there are 2000+ reports to clear. Clearing them takes a long time, and it makes sleepy mods prone to accidentally resolving real reports, which are then gone forever.
I propose a button on the report log that would clear all reports by a given user.
Visual feedback on mod actions
Currently, it can be difficult for mods to tell when their actions have 'worked', which leads to unnecessary refreshing and, occasionally, accidentally incomplete actions. Areas where a visual indicator would be useful include:
- When clicking 'ban' on the 'Ban from all communities I moderate' button
- When clicking 'remove' on posts on a user's profile; comments flip to 'removed by moderator', but there is no such visual indication for posts
Button to remove all posts/comments by user
I know this gundam has been planned/in development for some time, but I would be remiss not to include it here until it exists in all its inevitable glory!