RE: SOP For Community-Driven Identification of Dangerous Persons

Applauding the initiative.

One complication seems to be that people can easily create new accounts and start over if/when their behavior is flagged as problematic. The blockchain allows anonymity and makes new accounts cheap to create, so how can a reputation system identify a problematic source of content? I am not sure it can. Right now we identify the problematic content itself: we can downvote it, which alerts others, hides it by default, and reflects on the author's reputation. The author can always choose to start fresh, of course, but that is not a great option, since a fresh account carries no trust and people are likely to take anything coming from it with a grain of salt. Losing an established reputation is the cost of starting fresh. But how does this help identify problematic content? It seems like a cost that someone with nefarious purposes will be willing to pay.
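To make that trade-off concrete, here is a minimal sketch of how such a downvote-driven visibility rule might work. This is a toy model, not Hive's or Ecency's actual reputation algorithm; the thresholds, weights, and names are assumptions chosen for illustration.

```python
# Toy model of a downvote-driven visibility rule. Illustrative only:
# thresholds and weights are assumptions, not Hive's actual algorithm.
from dataclasses import dataclass

NEW_ACCOUNT_REPUTATION = 0.0  # fresh accounts start untrusted (assumed baseline)
HIDE_THRESHOLD = -5.0         # posts below this score are hidden by default (assumed)

@dataclass
class Account:
    name: str
    reputation: float = NEW_ACCOUNT_REPUTATION

@dataclass
class Post:
    author: Account
    score: float = 0.0  # net votes, weighted by each voter's reputation

def vote(post: Post, voter: Account, up: bool) -> None:
    """Apply a vote. Downvotes lower both the post's score and the
    author's reputation; votes from untrusted accounts carry little weight."""
    weight = max(voter.reputation, 0.1)
    delta = weight if up else -weight
    post.score += delta
    post.author.reputation += 0.1 * delta  # the vote reflects on the author too

def hidden_by_default(post: Post) -> bool:
    """Heavily downvoted content is collapsed, alerting other readers."""
    return post.score < HIDE_THRESHOLD
```

In this model, abandoning a damaged account resets the author to the untrusted baseline, which is exactly the cost of starting fresh described above.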

There is also the case where accounts with established reputation get hacked; they can be desirable targets even when they hold no (liquid) funds, because the hacker can borrow their accumulated trust, even if only temporarily. Since identity theft is far easier in the digital world than in the physical one, building a reputation system that prevents problematic content becomes even more difficult. Unless, of course, we can make it far harder to break into people's accounts and to maintain such control for any length of time, perhaps through better account-recovery systems and the like.
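As for what a better recovery system might look like, below is a hedged sketch of a time-delayed recovery scheme in which a recently replaced key plus a pre-designated recovery partner can oust a hijacker. It is purely illustrative; the 30-day window, the single-partner model, and all names are assumptions, not any existing chain's protocol.

```python
# Sketch of a time-delayed account-recovery scheme that limits how long a
# hijacker can keep control. Purely illustrative: the 30-day window, the
# single recovery partner, and all names are assumptions, not a real protocol.
import time

RECOVERY_WINDOW_SECS = 30 * 24 * 3600  # owner may reclaim within ~30 days (assumed)

class Account:
    def __init__(self, name: str, owner_key: str,
                 recovery_partner: "Account | None" = None):
        self.name = name
        self.owner_key = owner_key
        self.recovery_partner = recovery_partner
        # History of (replaced key, replacement time): a recently replaced
        # key lets the true owner prove they held the account pre-hack.
        self.previous_keys: list[tuple[str, float]] = []

    def change_key(self, new_key: str) -> None:
        """Rotate the owner key, recording the old one with a timestamp."""
        self.previous_keys.append((self.owner_key, time.time()))
        self.owner_key = new_key

    def recover(self, proof_of_old_key: str, approved_by: "Account",
                new_key: str) -> bool:
        """Oust a hijacker: the owner presents a recently replaced key and
        the pre-designated recovery partner co-approves the new key."""
        if approved_by is not self.recovery_partner:
            return False
        now = time.time()
        for old_key, replaced_at in self.previous_keys:
            if old_key == proof_of_old_key and now - replaced_at < RECOVERY_WINDOW_SECS:
                self.owner_key = new_key  # hijacker's key is replaced
                return True
        return False
```

The point is that a hijacker who rotates the key leaves a recoverable trail, so holding a stolen reputation for any length of time becomes much harder.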
