It’s actually very bad, far worse than humans at what it does. But because people collectively believe it is good at this stuff, everyone focuses on “the singularity” or “it replacing [x job],” and the real risk continues unhindered.
As an expert in this field I can say that’s an inaccurate characterization. As with most things, it lacks the nuance that is vital here. AI is, and can be, smarter, better, and faster than humans at certain tasks; at other tasks it is much worse. Which tasks fall into which category also changes year by year.
This is a good example of the true goal of ML: introduce bias and inefficiency in areas where inefficiency is desirable to capitalism.
This and many other concerns are valid. There are tons of ethical concerns in the here and now, and those concerns don’t look much like the Skynet fears you hear people talking about. Accidentally picking up human biases, prejudice, or inefficiency is a very legitimate problem. That said, it is oddly enough the most human problem in AI. AI essentially has the same problem as humans: it needs to be “raised” (taught) by humans, and in that process the teacher’s faults are often passed on to the student.
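To make that concrete, here’s a toy sketch (purely hypothetical data, stdlib only) of how bias gets passed from teacher to student: a naive model “trained” on historically biased decisions faithfully reproduces the bias, because the bias *is* the pattern in the data.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, qualified, hired).
# Past human decisions favored group "A" over equally qualified "B" candidates.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# "Training": tally the hire rate per group among qualified candidates.
rates = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, qualified, hired in history:
    if qualified:
        rates[group][0] += int(hired)
        rates[group][1] += 1

def predict_hire(group):
    # The learned rule simply mirrors past decisions.
    hires, total = rates[group]
    return hires / total >= 0.5

# Two equally qualified candidates get different outcomes by group:
print(predict_hire("A"))  # True  -- bias in the data...
print(predict_hire("B"))  # False -- ...is now bias in the model
```

Nothing in the “algorithm” is prejudiced; it just optimizes for agreement with its human teachers, faults included. Real systems hide the same failure mode behind far more parameters.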