Story as old as computers:
"ChatGPT spit out erroneous or inappropriate cancer treatment recommendations 1/3 of the time." We saw the same with IBM's "AI"s that were supposed to detect cancer, with all the COVID apps that tried detecting infections via smartphone sensors etc.
Computers do lack lived experience and a connection to the world; they operate on very limited models constructed from sensor data. These abstractions cut away so much information that errors become basically unavoidable. That isn't always a problem: in a different world, a system that helps people filter data could be useful. But when these systems are deployed as "time savers" or for "rationalization" (meaning firing people, or not hiring enough people in the first place), those mistakes have grave consequences.