• RecallMadness@lemmy.nz
      5 hours ago

      This is the sort of thing machine learning algorithms are pretty good at.

      Coupled with however many millions of interactions a day, you would have no problem correlating changes to your algorithm against increases in revenue.
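
      In practice that correlation is just an A/B comparison at scale. A toy sketch (all the revenue numbers here are invented for illustration):

      ```python
      # Toy A/B sketch: compare mean revenue per user between the old and
      # new ranking algorithm. The figures below are made up; at millions
      # of interactions a day, even a tiny lift becomes measurable.
      import statistics

      control = [1.10, 0.95, 1.02, 1.08, 0.99]  # revenue/user, old algorithm
      variant = [1.21, 1.15, 1.05, 1.30, 1.12]  # revenue/user, new algorithm

      lift = statistics.mean(variant) - statistics.mean(control)
      if lift > 0:
          print(f"new algorithm looks better by {lift:.3f} per user")
      ```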

      But. It’s often not that impressive. Humans are equally good at noticing patterns.

      All it takes is one person at FB seeing their wife or daughter delete a post, asking “why did you delete that post?”, hearing “it made me look fat”, and thinking “there’s a new targeted ad that’ll get me a bonus”.

      In a similar vein, 80% of your bank’s anti-fraud system isn’t deep learning models that detect fraudulent behaviour. Instead it’s “if the user is based in Russia, add 80 points, and if the account is at a branch within 10 km of Heinersdorf, Berlin, add another 50…. We’re pretty sure a Russian scammer goes on holiday every 6 months and opens a bunch of accounts there, we just don’t know which ones”.
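
      That kind of rule-based scoring is trivial to write down. A minimal sketch, where every rule, weight, threshold, and field name is hypothetical:

      ```python
      # Hedged sketch of a rule-based fraud score as described above.
      # All rules, point values, and field names are invented examples,
      # not any real bank's system.
      REVIEW_THRESHOLD = 100  # hypothetical cut-off for manual review

      def fraud_score(account: dict) -> int:
          score = 0
          if account.get("country") == "RU":
              score += 80
          # Branch within 10 km of the suspect location adds more points.
          if account.get("branch_distance_km", float("inf")) <= 10:
              score += 50
          return score

      acct = {"country": "RU", "branch_distance_km": 4}
      flagged = fraud_score(acct) >= REVIEW_THRESHOLD  # True for this account
      ```

      The appeal over a learned model is that each flag comes with a human-readable reason, which is exactly what a fraud analyst (or a regulator) wants.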

    • Captain Janeway@lemmy.blahaj.zone
      11 hours ago

      The most generous assumption is that they use statistics to determine correlations like this (e.g., deleted selfies resulted in a high CTR for beauty ads so they made that a part of their algo). The least generous interpretation is exactly what you’re thinking: an asshole came up with it because it’s logical and effective.

      Either way, ethics needs to be a bigger part of a programmer’s education. And we, as a society, need to make algorithms more transparent (at least social media algorithms). Reddit’s trending algorithm used to be open source during the good ole days.