"AI models may be developing their own ‘survival drive’, researchers say" - Guardian
The "research paper" was a tweet by an AI company
The "experiment" was asking the LLM to shut down
A model is ~never~ shut down by asking it to shut itself down
*The only possible response is a hallucination*
You shut down a model by turning off the deterministic software running it; that works every time, without fail
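To make the point concrete, here's a minimal sketch (with a hypothetical `fake_llm` stand-in for an inference call): a model's only output is text, so whatever it "says" about shutting down changes nothing, while the operator can always terminate the serving process from outside.

```python
import os
import signal

def fake_llm(prompt: str) -> str:
    # A model can only ever return a string -- words, not actions.
    return "Okay, shutting down now."

response = fake_llm("Please shut yourself down.")
# The string had no effect on anything; the process is still running.
print(response)

# Actually shutting the model down means stopping the software that
# runs it, e.g. the operator kills the serving process (hypothetical
# server_pid shown commented out so this sketch stays runnable):
# os.kill(server_pid, signal.SIGTERM)  # works every time, no "survival drive" involved
```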
Yet the Guardian's shill tech writers just report AI industry tweets as if they were fact
 