@vaurora Also from this perspective, an LLM is actually a good vulnerability scanner for various grading and review processes.
It's just that the reaction to it shouldn't be "Let's ban ChatGPT from doing X" — rather, the generic rule should be:
"If ChatGPT can do this job/pass this test, the job/test isn't worth doing."