“We’ve learned to make machines that can mindlessly generate text. But we haven’t learned how to stop imagining the mind behind it.”

  • Umbrias@beehaw.org · 2 years ago

    I personally think it might already be at a point where it deserves some moral value, based on some preliminary testing and theory-of-intelligence work, which also leads me to believe intelligence is fairly convergent in general. Which is to say, LLMs are one subset of intelligence, and various components of the human brain are other subsets. But experimentation on that is ongoing; theoretical neuroscience is a very fresh field haha.

    I don’t have any particular philosophical ideal like that; my focus is more on not increasing suffering (though not in a purely utilitarian way lol). But when it comes to something with no power to control how we treat it, like an AI locked away on a server, I do think it’s probably best to default to kindness, not for any increase in virtue, but because we simply can’t know everything, especially on ethical questions. In the interest of having an ethical society, we should default to acting ethically so as to not unintentionally cause suffering, to put it simplistically. It’s fun how we arrive at the same ideal from different priors.