• cum@lemmy.cafe
    24 days ago

    That’s a load of shit lol. Also, there’s absolutely nothing good that can be drawn from these conclusions. All this achieves is giving political pundits some ammo to cry about on their shows.

    • mmhmm@lemmy.ml
      23 days ago

      I agree that the way these conclusions were developed is trash; however, there is real value in understanding the impact alignment has on a model.

      There is a reason public LLMs don’t disclose how to make illegal or patented drugs, and why they shy away from difficult topics like genocide, etc.

      It isn’t an accident; they were aligned by corporations to respect certain views of reality. All an LLM does is barf out a statistically viable response to a prompt. If they are weighted, you deserve to know how.