An important question that seems to elude the AI galaxy brains like billg who want to replace professionals with LLMs: how do you identify and deal with new phenomena, given that LLMs are trained only on PRE-EXISTING data?
For example: how would a hypothetical LLM physician have identified a case of COVID-19 back in February 2020? (Hint: it would have diagnosed it as something else, wrongly, every single time, because COVID-19 wasn't in its training data.)