See THIS POST

Notice the 2,000 upvotes?

https://gist.github.com/XtremeOwnageDotCom/19422927a5225228c53517652847a76b

It’s mostly bot traffic.

Important Note

The OP of that post did admit to purposely using bots for that demonstration.

I am not making this post specifically about that post. Rather, we need to collectively organize and find a solution.

Defederation is a nuke-from-orbit approach, which WILL cause more harm than good over the long run.

Having admins proactively monitor their content and communities helps, as does enabling new-user approvals, captchas, email verification, etc. But this does not solve the problem.

The REAL problem

The fediverse is so open that there is NOTHING stopping dedicated bot owners and spammers from…

  1. Creating new instances for hosting bots, and then federating with other servers. (Everything can be fully automated to spin up a new instance in UNDER 15 seconds.)
  2. Hiring kids in Africa and India to create accounts for 2 cents an hour. NEWS POST 1 POST TWO
  3. Lemmy is EXTREMELY trusting. For example, go look at the stats for my instance online… (lemmyonline.com) I can assure you, I don’t have 30k users and 1.2 million comments.
  4. There are no built-in “real-time” methods in the UI for admins to identify suspicious activity from their users; I am only able to fetch this data directly from the database. I don’t think it is even exposed through the REST API.
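On point 4, here is a sketch of the kind of query I mean, run against an in-memory SQLite database with made-up table names (Lemmy’s real schema is PostgreSQL and looks different; this only illustrates the idea of flagging bot-like activity rates):

```python
import sqlite3

# Illustrative schema only; Lemmy's real tables (person, comment_like, etc.)
# differ. The point: flag accounts voting at inhuman rates.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE account (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE vote (account_id INTEGER, cast_at INTEGER);
""")
# One normal user, one account voting at bot-like speed.
conn.execute("INSERT INTO account VALUES (1, 'human'), (2, 'suspect')")
conn.executemany("INSERT INTO vote VALUES (?, ?)",
                 [(1, t) for t in range(0, 3600, 600)]    # 6 votes in an hour
                 + [(2, t) for t in range(0, 3600, 10)])  # 360 votes in an hour

# Flag accounts that cast more than 100 votes within the window.
rows = conn.execute("""
    SELECT a.name, COUNT(*) AS votes_last_hour
    FROM account a JOIN vote v ON v.account_id = a.id
    WHERE v.cast_at >= 0
    GROUP BY a.id
    HAVING COUNT(*) > 100
""").fetchall()
print(rows)  # [('suspect', 360)]
```

Exposing something like this through the UI or REST API would at least let admins who can’t touch the database do the same triage.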

What can happen if we don’t identify a solution

We know Meta wants to infiltrate the fediverse. We know Reddit wants the fediverse to fail.

If a single user with limited technical resources can manipulate content, as was proven above-

What is going to happen when big corpo wants to swing its fist around?

Edits

  1. Removed most of the images identifying instances. Some of those issues have already been taken care of. Also, I don’t want to distract from the ACTUAL problem.
  2. Cleaned up post.
  • HTTP_404_NotFound@lemmyonline.comOP

    What corrective courses of action shall we seek?

    I sent messages to:

    1. https://startrek.website/u/ValueSubtracted (startrek.website)
    2. https://oceanbreeze.earth/u/windocean (oceanbreeze.earth)
    3. https://normalcity.life/u/EuphoricPenguin22 (normalcity.life)

    I blocked / defederated these instances:

    1. https://lemmy.dekay.se/ (appears to just be a spambot server)
    • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world

      Just wanted to point out that according to your stats, unless I don’t understand them well, only 26 bots come from lemmy.world (which has open sign-ups, and uses the “easy to break” (/s) captcha) and 16 from lemmy.ml (which doesn’t have open sign-ups and relies on manual approvals).

      For some perspective, lemmy.world has almost 48k users right now. Speaking of “corrective action” is a bit of a stretch IMO.

      • HTTP_404_NotFound@lemmyonline.comOP

        This post isn’t about lemmy.world, nor am I blaming lemmy.world!

        I am trying to drag in the admins of the big instances to come up with a collective plan to address this issue.

        There isn’t a single instance causing this problem. The bots are distributed amongst normal users, on normal instances.

        With the exception of an instance or two with nothing but bot traffic.

        • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world

          I’m just saying that context and scale matter. If an anti-spam solution is 99% effective, then chances are that on an instance with 100k users you are still going to have around 1k bots that have bypassed it.

          • HTTP_404_NotFound@lemmyonline.comOP

            You’re right. But the problem is:

            At a fediverse level, we don’t really have ANY spam prevention currently.

            Let’s assume that, at an instance level, all admins do their part: enable applicant approvals, captchas, email verification, and EVERY TOOL at their disposal.

            There is NOTHING stopping these bots from just creating new instances, and using those.

            Keep focused on the problem: the problem is a platform-wide lack of the ability to prevent bots.

            I don’t agree with the beehaw approach of bulk defederation; as such, a better solution is needed.

            • Kichae@kbin.social

              The beehaw approach wasn’t “bulk defederation”. They blocked two Lemmy instances they were having trouble with. The bulk of their block list are Mastodon and Pleroma instances well known for trolling other sites and stirring up shit.

              Edit: Autocomplete refuses to accept that I talk a lot about federation and defederating, and is desperately trying to convince me I’m talking about anything else that starts with “de”.

              • HTTP_404_NotFound@lemmyonline.comOP

                https://beehaw.org/instances

                While the majority of the instances on their list do appear potentially quite noisy or otherwise bad, there are quite a few very large, well-known instances on their defederation list.

                For example, a percentage of the individuals IN THIS THREAD are on instances defederated from beehaw.

                • Kichae@kbin.social

                  I didn’t say they blocked few people. I said they blocked few websites.

                  Lemmygrad is full of agitators, and Lemmy.world and SJW have, from my experience, a disproportionate number of people who reject communal solutions to communal issues, while still feeling entitled to access to communal spaces.

                  Meanwhile, other large sites, like Lemmy.ml and kbin.social, and smaller regional sites, such as Midwest.social, Lemmy.ca, and feddit.uk, are federating with them just fine.

                  That doesn’t sound like mass defederating to me.

                  That sounds targeted.

            • o_o@programming.dev

              There is NOTHING stopping these bots from just creating new instances, and using those.

              I read somewhere that mastodon prevents this by requiring a real domain to federate with. This would make it costly for bots to spin up their own instances in bulk. This solution could be expanded to require domains of a certain “status” to allow federation. For example, newly created domains might be blacklisted by default.
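A sketch of what that domain-age gate could look like. The threshold is arbitrary, and a real implementation would fetch the registration date via WHOIS/RDAP rather than receive it as a parameter:

```python
from datetime import datetime, timedelta

# Hypothetical policy: refuse federation with domains younger than 30 days.
# In practice the registration date would come from a WHOIS/RDAP lookup.
MIN_DOMAIN_AGE = timedelta(days=30)

def may_federate(domain_registered: datetime, now: datetime) -> bool:
    """True if the remote domain is old enough to clear the policy."""
    return now - domain_registered >= MIN_DOMAIN_AGE

now = datetime(2023, 7, 1)
print(may_federate(datetime(2023, 1, 1), now))   # True: established domain
print(may_federate(datetime(2023, 6, 29), now))  # False: two days old
```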

              • HTTP_404_NotFound@lemmyonline.comOP

                I read somewhere that mastodon prevents this by requiring a real domain to federate with.

                I remember back in the days of playing world of warcraft- The botters / gold sellers would be banned pretty often.

                However, they would be back the next day botting again, despite having to buy another $50 account.

                The problem was, the profits they were able to make far outweighed the $50 price of entry.

                Likewise, playing Minecraft with trolls/griefers/etc., the same thing would occur. You could ban somebody, and they would just show up with a new account an hour later. In this case there wasn’t even the option of financial gain, just a dedicated troll.

                For example, newly created domains might be blacklisted by default.

                I think that might help, but I don’t think it would be the end-all, be-all solution. Especially since many scammers/bot owners already have dozens, if not HUNDREDS, of domains sitting aside for nefarious purposes.

                • o_o@programming.dev

                  If “botters” are willing to spend >$5 per bot on established instances, then I don’t believe this is a solvable problem. For the fediverse, or for ANY platform, Reddit included. I am perfectly human, and would be hard-pressed to decline a >$150/hour “job” to create accounts on someone’s behalf.

                  Like any other online community, constant vigilant moderation is the only way to resolve this. I don’t see how Lemmy is in any worse position than Reddit so I don’t think we need to be all “doom and gloom” quite yet.

                  As for botters creating their own instances…

                  For example, newly created domains might be blacklisted by default.

                  This is just a start. Federation allows for many techniques to solve this. Perhaps even a “Fediverse Universal Whitelist” with an application process. I’m excited for the possibilities, but again I don’t think it’s quite time to be overly concerned yet. These are solvable problems.

            • fubo@lemmy.world

              Some older federated services, like IRC, had to drop open federation early in their history to prevent abusive instances from cropping up constantly, and instead became multiple different federations with different policies.

              That’s one way this service might develop. Not necessarily, but it’s gotta be on the table.

    • Mutelogic@sh.itjust.works

      It looks like the OP is responsible for the upvote bots (inferred from his edit?). Maybe to prove the original point?

      • HTTP_404_NotFound@lemmyonline.comOP

        That is correct. Please see my revised post; I removed much of the data to help point out the bigger problem we need to solve.

      • HTTP_404_NotFound@lemmyonline.comOP

        That is likely true, and the goal of this post isn’t to look at that one post.

        It’s to discuss what sorts of solutions we can apply to help squash this problem.

        Ideally, solutions that don’t involve mass defederation.

      • HTTP_404_NotFound@lemmyonline.comOP

        Eh, it’s not really a spam instance.

        They are very straightforward about what their instance does: it crossposts Reddit to Lemmy, in that instance’s communities.

        In that case, it’s as simple as: don’t subscribe to it. Don’t subscribe, and it won’t pop up on your feed.

  • o_o@programming.dev

    Honestly, I’m interested to see how the federation handles this problem. Thank you for all the attention you’re bringing to it.

    My fear is that we might overcorrect by becoming too defederation-happy, which is a fear it seems you share. However, I disagree with your assertion that the federation model is riskier than conventional Reddit-like models. Instance owners have just as many tools as Reddit does (more, in fact) to combat bots on their instance. Plus we have the nuke-from-orbit defederation option.

    Since it seems like most of these bots are coming from established instances (rather than spoofing their own), I agree with you that the right approach seems to be for instance mods to maintain stricter signups (captcha, email verification, application, or other original methods). My hope is that federation will naturally lead to a “survival of the fittest” where more bot-ridden instances will copy the methods of the less bot-ridden instances.

    I think an instance should only consider defederation if it’s already being plagued by bot interference from a particular instance. I don’t think defederation should be a pre-emptive action.

    • Lvxferre@lemmy.ml

      Honestly, I’m interested to see how the federation handles this problem.

      Ditto. Perhaps we’re going to see a new solution for an old problem.

  • RoundSparrow@lemmy.ml

    There are no built-in “real-time” methods in the UI for admins to identify suspicious activity from their users; I am only able to fetch this data directly from the database. I don’t think it is even exposed through the REST API.

    The people doing the development seem to have zero concern that all the major servers are crashing with nginx 500 errors on their front pages under routine moderate loads, nothing close to major-website traffic. There is no concern for alerting operators of internal federation failures, etc.

    I am only able to fetch this data directly from the database.

    I too had to resort to this, and published an open-source tool (primitive and inelegant) to try to get something out there for server operators: !lemmy_helper@lemmy.ml

          • RoundSparrow@lemmy.ml

            Yes, thank you. And if you come up with any that cross-reference comments and postings by remote instance better than the ones in lemmy_helper, please share. I’d really like to see if we can get “most recent hour, most recent day” queries, so we can at least see which servers federated data is flowing from.
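Something like this is what I have in mind: a “most recent hour, by remote instance” rollup, sketched against an in-memory SQLite database with illustrative table names (not Lemmy’s actual schema):

```python
import sqlite3

# Illustrative only: count federated comments per remote instance over the
# last hour, so operators can see which servers data is flowing from.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE comment (instance TEXT, received_at INTEGER)")
db.executemany("INSERT INTO comment VALUES (?, ?)", [
    ("lemmy.world", 7190), ("lemmy.world", 7100),
    ("kbin.social", 7000),
    ("stale.example", 100),  # outside the one-hour window
])

now = 7200  # epoch seconds in this toy example
rows = db.execute("""
    SELECT instance, COUNT(*) FROM comment
    WHERE received_at > ? - 3600
    GROUP BY instance
    ORDER BY COUNT(*) DESC
""", (now,)).fetchall()
print(rows)  # [('lemmy.world', 2), ('kbin.social', 1)]
```

A remote instance that normally shows up every hour and suddenly goes quiet is also a cheap signal of internal federation failure.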

  • Tugg@lemmyverse.org

    I don’t have much to add, other than that I am an experienced admin and was dismayed at how vulnerable Lemmy is. Having an option for open registrations with no checks is not great. No serious platform would allow that.

    I don’t know of a bulletproof way to weed out the bad actors, but a voting system that Lemmy can leverage, with a minimum reputation required to stay federated, might work. This would require some changes that I’m not sure the devs can or would make. Without any protection in place, people will get frustrated and abandon Lemmy. I would.

  • Rottcodd@kbin.social

    The place feels different today than it did just a couple of days ago, and it positively reeks of bots.

    I’m seeing far fewer original posts and far more links to karma-farmer quality pabulum, all of which pretty much instantly somehow get hundreds of upvotes.

    The bots are here. And they’re circlejerking.

      • yesdogishere@kbin.social

        how about going through the 4chan approach of nobody cares, everybody spams whatever they like? then the corpos can wallow in their own poo?

  • Fedora@lemmy.haigner.me
    1. Hiring kids in Africa and India to create accounts for 2 cents an hour.

    Heads up that this depends on the size of the operation. Captchas are a solved problem: commercial software exists that can solve captchas automatically, and you migrate from pay-on-demand services to computer-vision software when it’s financially beneficial.

    Computers are cheaper and better at solving captchas than humans at the moment, and it doesn’t look like that’s going to change any time soon. As long as you pay attention to your proxies, it’s rare to see solution attempts fail. Some pay-on-demand services no longer employ people.

  • dedale@kbin.social

    Hello. The post you mentioned was made as a warning, to prove a point: that the fediverse is currently extremely vulnerable to bots.

    User ‘alert’ made the post, then upvoted it with his bots, to prove how easy it is to manipulate traffic, even without funding.

    see:
    https://kbin.social/m/lemmy@lemmy.ml/t/79888/Protect-Moderate-Purge-Your-Sever

    It’s proof that anyone could easily manipulate content unless instance owners take the bot issue seriously.

    • HTTP_404_NotFound@lemmyonline.comOP

      I did update my post shortly before you posted this, to include that, as well as removing a lot of the data for individual instances, as it detracts from the point / problem I am trying to identify.

      The data, however, is quite valuable in exposing that this WILL be a problem for us, especially if we do not identify a solution for it.

  • db0@lemmy.dbzer0.com

    I noticed a lot of instances which were flooded with bots due to open registration. I have most of them defederated for this reason.

    • HTTP_404_NotFound@lemmyonline.comOP

      We need a better solution for this, rather than mass bulk defederation.

      In my opinion, that is going to greatly slow down the spread and influence of this platform. Also, IMO, I think these bots are purposely TRYING to get instances to defederate from each other.

      Meta is pushing its “fediverse” thing. Reddit is trying to squash the fediverse. Honestly, it makes perfect sense that we have bots trying to upvote the idea of getting instances to defederate from each other.

      Once everything is defederated, lots of communities will start to fall apart.

      • db0@lemmy.dbzer0.com

        I agree. This is why I started the Fediseer which makes it easy for any instance to be marked as safe through human review. If people cooperate on this, we can add all good instances, no matter how small, while spammers won’t be able to easily spin up new instances and just spam.

          • db0@lemmy.dbzer0.com

            First we need to populate it. Once we have a few good people guaranteeing new instances regularly, we can extend it to most known-good servers and create a “request for guarantee” pipeline. Instance admins can then leverage it either by using it as a straight whitelist, or more lightly, by monitoring traffic coming from non-guaranteed instances more closely.

            The fediseer just provides a list of guaranteed servers. It’s open ended after that so I’m sure we can find a proper use for this that doesn’t disrupt federation too much.
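To illustrate the guarantee-chain idea (my own sketch of the concept, not the actual Fediseer code; all instance names are examples): an instance only counts as guaranteed if a chain of vouches leads back to a trusted root, so a ring of spam instances vouching for each other gains nothing:

```python
# Sketch of a guarantee chain: reachability from trusted roots over vouches.
def guaranteed(roots: set[str], vouches: dict[str, set[str]]) -> set[str]:
    """Return every instance reachable from the roots via vouch edges."""
    seen, frontier = set(roots), list(roots)
    while frontier:
        for target in vouches.get(frontier.pop(), set()):
            if target not in seen:
                seen.add(target)
                frontier.append(target)
    return seen

vouches = {
    "lemmy.ml": {"lemmy.world", "lemmyonline.com"},
    "lemmy.world": {"sh.itjust.works"},
    # Spam instances vouching for each other gain nothing: no chain
    # connects them back to a trusted root.
    "spam1.example": {"spam2.example"},
    "spam2.example": {"spam1.example"},
}
print(guaranteed({"lemmy.ml"}, vouches))
```

This also answers the brigading worry to a degree: a bad actor’s vouches only matter once someone on a rooted chain has vouched for the bad actor, and that guarantor is accountable.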

              • db0@lemmy.dbzer0.com

                Actually, not just a handful. Everyone can vouch for others, so long as someone else has vouched for them.

                • HTTP_404_NotFound@lemmyonline.comOP

                  One concern: how do we prevent it from being brigaded?

                  Someone vouches for a bad actor, bad actor vouches for more bad actors- then they can circle jerk their own reputation up.

                  Edit-

                  Also, what prevents actors from “downvoting” instances hosting content they just don’t like?

                  ie- yesterday, half of lemmy wanted to defederate sh.itjust.works due to a community called “the_donald” containing a single troll shit-posting. (The admins have since banned the troll and removed that content.) But still, everyone’s knee-jerk reaction was to just defederate. Nuke from orbit.

          • db0@lemmy.dbzer0.com

            For contributing, it’s open source so if you have ideas for further automation I’m all ears.

      • Kichae@kbin.social

        The solution is to choose servers with admins who are enabling bot protections.

        If admins are not using methods to dissuade bot signups, then they’re not keeping their site clean for their users. They’re being a bad admin.

        If they’re not protecting their site against bots, they’re also not protecting the network against bots. That makes them bad denizens of the Fediverse, and the rest of us should take action to protect the network.

        And that means cutting ties with those who endanger it.

        • HTTP_404_NotFound@lemmyonline.comOP

          See the original post. (It may have changed since you read it.)

          I can spin up a fresh instance in UNDER 15 seconds, and be federated with your server in under a minute.

          There is literally nothing that can be done to stop this currently, unless servers completely wall themselves from the outside world, and follow a whitelisting approach. However, this ruins one of the massive benefits of the fediverse.

          • dedale@kbin.social

            I can think of a way to help with the problem, but I don’t know how hard it would be to implement.

            Create some sort of trust score, where instance owners rate other instances they federate with.
            Then the score gets shared in the network. Like some sort of federated whitelisting.
            You would have to be prudent at first, but you wouldn’t have to do the whole task yourself.

            You could even add an “adventurousness” slider, to widen or restrict the network based on this score.
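A rough sketch of how that slider could work. All instance names, the 0-10 rating scale, and the threshold formula here are made up for illustration:

```python
# Sketch of a shared trust score with an "adventurousness" slider.
# Each admin rates peers 0-10; the scores are shared across the network.
def federate_with(scores: dict[str, list[int]], adventurousness: float) -> set[str]:
    """Keep instances whose mean peer rating clears the slider.

    adventurousness=0.0 demands a perfect 10; 1.0 accepts anyone rated.
    """
    threshold = 10 * (1 - adventurousness)
    return {inst for inst, ratings in scores.items()
            if sum(ratings) / len(ratings) >= threshold}

shared_scores = {
    "goodhost.example": [9, 10, 8],
    "newhost.example": [6],        # only one rating so far
    "spamhost.example": [1, 0, 2],
}
print(federate_with(shared_scores, 0.3))  # cautious: only goodhost passes
print(federate_with(shared_scores, 0.8))  # adventurous: newhost passes too
```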

          • Kichae@kbin.social

            Yeah, setting up new instances is a different issue, of course. And there is definitely a lack of tools to help with that as of yet. We need things like rate limiting on new federations or on unusual traffic spikes, and mod queues for posts that get caught up in them. Plus the ability to purge all posts and comments from users on defederated sites.

            Among other things.
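For the rate-limiting piece, a minimal sliding-window sketch (the limits are arbitrary and this is not an existing Lemmy feature):

```python
from collections import deque

# Sketch: cap how many previously unseen instances may federate per window;
# anything over the cap is held for human review instead of auto-admitted.
class NewFederationLimiter:
    def __init__(self, max_new: int, window_secs: int):
        self.max_new, self.window = max_new, window_secs
        self.recent: deque[float] = deque()  # admission timestamps

    def admit(self, now: float) -> bool:
        """True if a new instance may federate right now."""
        while self.recent and now - self.recent[0] >= self.window:
            self.recent.popleft()  # drop admissions outside the window
        if len(self.recent) >= self.max_new:
            return False
        self.recent.append(now)
        return True

limiter = NewFederationLimiter(max_new=2, window_secs=3600)
print([limiter.admit(t) for t in (0, 10, 20)])  # [True, True, False]
print(limiter.admit(3700))  # True: the window has rolled past the first entry
```

The same shape works for the unusual-traffic-spike case; you just count posts per remote instance instead of new federations.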

          • o_o@programming.dev

            There are two worries here:

            1. Bots on established and valid instances. (Should be handled by mods and instance admins, just like on conventional non-federated forums. Perhaps more tooling is required for this; do you have any suggestions? However, I think it’s a little premature to say that federation is inherently more susceptible, or that corrective action is desperately needed right now.)

            2. Bots on bot-created instances. (Could be handled by adding some conditions before federating with instances, such as a unique domain requirement. Not sure what we have in this space yet. This will limit the ability to bulk-create instances. After that, individual bot-run instances can be defederated with if they become annoyances.)

          • Saik0@lemmy.saik0.com

            I can spin up a fresh instance in UNDER 15 seconds, and be federated with your server in under a minute.

            And I can blacklist your instance in less than 5 seconds. We have the answer. Administrators of instances already have the power to apply whatever disposition they want.

            • HTTP_404_NotFound@lemmyonline.comOP

              Quit being a twerp, and work with us.

              And I can blacklist your instance in less than 5 seconds.

              First, you have to IDENTIFY the bad-instances. Have a tool for that? Have a method to filter out good from bad?

              No. You don’t.

              • Saik0@lemmy.saik0.com

                No. You don’t.

                Yes, I do. Because I actually understand how servers work. If you’re just running Lemmy with no understanding of how the internet works, then you’re doing yourself a disservice.

                Edit: Oh I missed this the first time I read it…

                Quit being a twerp, and work with us.

                Yeah, no. I have no interest in working with leeches that don’t understand how to run services, let alone ones that jump straight to ad hominem.

                • HTTP_404_NotFound@lemmyonline.comOP

                  Then seriously, go fuck off back to your server, and don’t come fussing when you get overrun by bots.

                  Wait- why are you even in this conversation? You have two users… and four posts.

  • AnarchoGravyBoat@kbin.social

    @xtremeownage

    I think that one of the most difficult things about dealing with the more common bots, spamming, reposting, etc., is that parsing all the commentary and handling it at a service-wide level is really hard to do, in terms of computing power and sheer volume of content. It seems to me that doing this at the instance level, with user numbers in the tens of thousands, is a heck of a lot more reasonable than doing it on a service with tens of millions of users.

    What I’m getting at is that this really seems like something that could (maybe even should) be built into the instance moderation tools, at least some method of marking user activity as suspicious for further investigation by human admins/mods.

    We’re really operating on the assumption that people spinning up instances are acting in good faith until they prove that they aren’t. I think the first step is giving good-faith actors the tools to moderate effectively, then worrying about bad-faith admins.

    • HTTP_404_NotFound@lemmyonline.comOP

      I think the first step is giving good faith actors the tools to moderate effectively, then worrying about bad faith admins.

      I agree with this 110%

  • Cinner@kbin.social

    Reposting this as a comment, from a reply elsewhere in the thread.

    If anything, there should be SOME centralization that allows other (known, somehow verified) instances to vote to disallow spammy instances from federating, in some way that couldn’t be abused. This may lead to a fork down the road (think BTC vs BCH) due to community disagreements, but I don’t really see any other way this doesn’t become an absolute spamfest. As it stands now, one server admin could fill their own server with spam, and once it starts federating, EVERYONE gets flooded. This also easily creates a DoS of the system.

    Asking instance admins to require CAPTCHA or whatever to defeat spam doesn’t work when the instance admins are the ones creating spam servers to spam the federation.

  • Sibbo@sopuli.xyz

    I really hope that some researchers will get interested in this and develop some cool solutions. Maybe we’ll be lucky and they’ll even implement them in Lemmy.

    • HTTP_404_NotFound@lemmyonline.comOP

      I agree, I think the data is easily there to perform the proper analysis, and there are enough hooks in the platform to apply the results.

  • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one

    This is troubling.

    At least we have the data, though. Hopefully these findings are useful for updating the Fediseer/Overseer, so we can more easily detect bots.

    • HTTP_404_NotFound@lemmyonline.comOP

      I really wish we would have a good data scientist, or ML individual jump in this thread.

      I can easily dig through data, I can easily dig through code- but, someone who could perform intelligent anomaly detection would be a god-send right now.

      • monobot@lemmy.ml

        There are data scientists around, and we are monitoring where this goes.

        Biggest problem I currently see is how to effectively share data but preserve privacy. Can this be solved without sharing emails and IP addresses, or would that be necessary? Maybe securely hashing emails and IP addresses is enough, but that would hide some important data.
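One possible middle ground, sketched below: share keyed hashes instead of raw values, so instances can match the same actor without seeing the address itself. The shared salt and how it would be distributed between admins are hand-waved here:

```python
import hashlib
import hmac

# Sketch: a keyed hash gives every cooperating instance the same stable
# fingerprint for an email/IP without revealing the raw value. The salt
# must stay secret between trusted admins, or dictionary attacks on
# common addresses become trivial; key distribution is not solved here.
SHARED_SALT = b"agreed-between-trusted-admins"  # placeholder value

def fingerprint(value: str) -> str:
    """Keyed SHA-256: same input yields the same tag on every instance."""
    return hmac.new(SHARED_SALT, value.encode(), hashlib.sha256).hexdigest()

a = fingerprint("bot@mailinator.example")
b = fingerprint("bot@mailinator.example")
print(a == b)   # True: the same address matches across instances
print(a[:16])   # the tag itself reveals nothing about the address
```

The trade-off you mention stands: hashing hides patterns like “all signups came from one /24 subnet”, which raw IPs would have exposed.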

        Should that be shared only with trusted users?

        Can we create a dataset where humans identify bots, and then share it with the larger community (like Kaggle), to help us gather ideas?

        There are options, and they will be built; it just can not happen in a few days. People are working non-stop to fix (currently) more important issues.

        Be patient, collect the data and let’s work on solution.

        And let’s be nice to each other; we all have similar goals here.