The "spam wave" with the (now famous) Nicole account underlines how poor the moderation tools are on Mastodon. It's currently not possible to automatically quarantine such profiles… #mastoadmin #fediblock
@RGrunblatt you're right! We need to use automated spam detection based on the content of messages and not the origin server. It's critical.
@evan This is where having a basic plug-in backbone could be beneficial for the Mastodon software. Devs could develop spam plug-ins, instance-specific monetization, Online Safety Act plugins meeting requirements for specific locations, etc., without messing with Mastodon's core code #Mastodon #MastoDev
@RGrunblatt
@RGrunblatt @paul nudge a little closer!
@paul @RGrunblatt I hope the Fediscovery work helps.
Another option is putting a proxy in front of the Mastodon server and doing the filtering there.
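A minimal sketch of that proxy idea: a filter sitting in front of Mastodon that drops inbound ActivityPub activities matching spam patterns before they reach the server. The pattern list and the fields checked here are illustrative assumptions, not a real ruleset.

```python
# Sketch: decide whether an incoming ActivityPub payload should be
# dropped by a filtering proxy in front of Mastodon. The single
# pattern below is just the signature of the "Nicole" spam wave.
import json
import re

SPAM_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"fediverse chick",  # illustrative pattern, not a full ruleset
)]

def should_drop(raw_body: bytes) -> bool:
    """Return True if the activity's object text matches a spam pattern."""
    try:
        activity = json.loads(raw_body)
    except ValueError:
        return False  # let Mastodon handle malformed payloads itself
    if not isinstance(activity, dict):
        return False
    obj = activity.get("object")
    if isinstance(obj, dict):
        # Check the note body and summary (CW) if present.
        fields = [obj.get("content", ""), obj.get("summary", "")]
    elif isinstance(obj, str):
        fields = [obj]
    else:
        fields = []
    text = " ".join(f for f in fields if isinstance(f, str))
    return any(p.search(text) for p in SPAM_PATTERNS)
```

In practice this check would be wired into whatever reverse proxy (nginx, a small WSGI app, etc.) already fronts the instance's /inbox endpoint.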
@paul plugins are hard and probably not the right solution for this, but this is why we are working on FASPs (starting with Fediscovery). I wrote about this approach to trust & safety nearly two years ago, and it is still what I want to build: https://renchap.com/blog/post/evolving_mastodon_trust_and_safety/
@evan @RGrunblatt
@renchap @paul @evan @RGrunblatt this is cool, but I'd really like to see something like Reddit's automoderator system where posts/accounts can be held for quarantine based on criteria along the lines of:
IF new_account HAS description MATCHING "fediverse chick" THEN quarantine
Or
IF new_post HAS body MATCHING slur THEN quarantine
I'm not interested in outsourcing moderation because my moderation is the service I'm offering with my instance
Regardless, thanks for all your work!
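A tiny rule engine along the lines of those IF/THEN examples could look like this. This is a hypothetical sketch: the rule fields, event shapes, and the "quarantine" action are made up for illustration and are not part of Mastodon.

```python
# Hypothetical Reddit-automoderator-style rule engine for Mastodon:
# declarative rules that quarantine new accounts or posts.
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    target: str    # "new_account" or "new_post"
    field: str     # e.g. "description" or "body"
    pattern: str   # regex matched against that field
    action: str = "quarantine"

RULES = [
    Rule("new_account", "description", r"fediverse chick"),
    Rule("new_post", "body", r"\bplaceholder_slur\b"),  # stand-in pattern
]

def evaluate(event_type: str, event: dict) -> Optional[str]:
    """Return the action of the first matching rule, else None."""
    for rule in RULES:
        if rule.target != event_type:
            continue
        if re.search(rule.pattern, event.get(rule.field, ""), re.IGNORECASE):
            return rule.action
    return None
```

A moderation queue would then hold anything for which evaluate() returns "quarantine" until an admin reviews it.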
@jenbanim it is not only about outsourcing: you could self-host your own FASP for your instance alone if you choose to. But for spam fighting to be efficient, you need some coordination of data between instances
@paul @evan @RGrunblatt
@renchap @paul @evan @RGrunblatt sharing information is important, but the current inefficiency I'm facing is that twice a day I check the remote accounts tab for usernames matching "Nicole" and suspend them if they have the phrase "fediverse chick" in their bio. Automating that would be a massive time saver
Although I totally understand that the problems I face as a small instance admin are not the same as those of the Mastodon gGmbH, and it makes sense to prioritize the needs of the majority of users
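That twice-daily manual sweep could plausibly be scripted against Mastodon's admin API (the token needs admin scopes). The endpoint names and parameters below follow the admin API docs, but verify them against your Mastodon version; the instance URL and token are placeholders.

```python
# Sketch: automate the "Nicole" sweep using Mastodon's admin API.
# GET /api/v1/admin/accounts lists accounts; POST .../:id/action
# with type=suspend suspends one. Verify against your version.
import json
import re
import urllib.parse
import urllib.request

INSTANCE = "https://example.social"  # placeholder instance URL
TOKEN = "REPLACE_ME"                 # placeholder admin access token

BIO_PATTERN = re.compile(r"fediverse chick", re.IGNORECASE)

def is_spam_account(username: str, bio: str) -> bool:
    """The manual check: username contains 'nicole' and bio matches."""
    return "nicole" in username.lower() and bool(BIO_PATTERN.search(bio))

def sweep() -> None:
    """List remote accounts and suspend any that match the check."""
    req = urllib.request.Request(
        f"{INSTANCE}/api/v1/admin/accounts?remote=true",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        accounts = json.load(resp)
    for acct in accounts:
        # Admin account entities nest the public account (with its bio).
        bio = (acct.get("account") or {}).get("note", "")
        if is_spam_account(acct.get("username", ""), bio):
            data = urllib.parse.urlencode({"type": "suspend"}).encode()
            action = urllib.request.Request(
                f"{INSTANCE}/api/v1/admin/accounts/{acct['id']}/action",
                data=data,
                headers={"Authorization": f"Bearer {TOKEN}"},
            )
            urllib.request.urlopen(action)
```

Run from cron twice a day to match the current routine; a real version should also paginate through the account list.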
@jenbanim honestly it would be much easier for us to only solve problems for our own servers, but we want to have something that works for all admins.
@paul @evan @RGrunblatt
@renchap @paul @evan @RGrunblatt sorry I hope that didn't come off as snarky. Your article specifically mentions the difficulties of Mastodon gGmbH needing to administer multiple instances. Software that is well-suited for everything from single-user instances to six-figure MAUs obviously isn't going to be perfectly tailored for my 70-user instance
I really appreciate the work you do. I hope I made my case for the value of an automod-like tool well
@renchap I like this part, "It also allows anybody to start writing their own implementation, either because they want specific features or want to quickly experiment on an idea."
I was going to invest and open an instance to the public for my region, but Ohio passed its law, and there is no way I can comply under the current Mastodon software.
Right now, the law for Ohio, the seventh-most populous state in the US, the Parental Notification by Social Media Operators Act, is on hold by a judge, but it will require verifiable parental notification and age verification when it goes into effect.
Law:
https://codes.ohio.gov/ohio-revised-code/section-1349.09
Attorney General FAQ
https://www.ohioprotects.org/faq
The AG FAQ reads:
"What does an operator need to do to comply with this law?
Before allowing a child to agree to the terms of service or otherwise register, sign up or create a unique account to access or use the website, service or product, an operator must:
1. Obtain parental consent by doing at least one of the following:
   - Require a parent or legal guardian to sign and return a form consenting to the child’s use or access via postal mail, fax or e-mail.
   - If a monetary transaction is involved, require the parent to use a credit card, debit card or other payment system that provides notification for each separate transaction.
   - Require a parent or legal guardian to call a toll-free telephone number to confirm the child’s use or access.
   - Require a parent or legal guardian to connect via videoconference to confirm the child’s use or access.
   - Verify a parent’s or legal guardian’s identity by checking a government-issued ID.
2. Present a list of features offered by the operator’s website, service or product regarding censoring or content moderation, including any features that can be disabled for a user’s profile. The operator must also provide a link of where these features are listed on the respective website, service or product.
After the operator receives parental consent, is that it?
No. After receiving initial consent, the operator must also send written confirmation to the parent or legal guardian via postal mail, fax or e-mail."