Lots not to like ... in many ways, their courses and especially their practice exercises look like they hired a couple of college kids to build them over a weekend, then posted them without having an editor look them over.
In multiple courses under the subjects of Python, SQL, Julia, and R, the exercises will misspell a language keyword or some library function / method that is widely used in Data Science. In a couple, the answer is already entered into the question ... just select the choice that matches what they've just shown.
In the "real world" projects, they tend to go beyond what the courses have covered. Yet they're opinionated about which functions / methods you use (and sometimes even the order you use them in). So you do some research, find some functions that produce exactly the desired results, and the project is rejected because your research didn't turn up the particular functions they wanted.
Now, there's lots to like, too. For example, despite having both R-lang and Julia (and Scala) on my to-learn list for years, this was my first hands-on experience with all three.
I did see an article fairly recently reporting that retail stores that replaced human cashiers with automated checkout lanes found customers needed a significant amount of help using the terminals, so the companies couldn't reduce staff as much as expected. The automated lanes also weren't significantly faster, on average, than human-staffed ones.
And customers were getting very frustrated over it.
This doesn't even count the claims about massive increases in theft.
(My personal thing has always been that if I'm going to do work myself that you otherwise would have paid someone else to do, you need to split the savings with me. Well, that and you need to pay for employee retraining.)
Now that the genie is out of the bottle, we already know some organization in some country is going to proceed with developing AI. Thus, regulations meant to kill it are wasted effort. Instead, we'll all be better off if we figure out ways to steer where AI development efforts are heading.
And that includes finding ways to spread the benefits more evenly throughout society, while preventing those who develop and deploy AI-based tools from pushing the costs / harms on others.
"But that sounds like socialism!" I hear you saying. Not at all. Think about the places where AI-based tools are likely to be deployed first: "anywhere that paid humans interact with human customers" is going to be high up on the list. So we have to ensure that the costs aren't borne only by the customers and (former) employees, and that the benefits don't accrue only to the companies that formerly employed the customer service staffers.
That includes requiring that there be a way for a human customer to escape the robot and interact with another human instead.
I disagree. This occurs and has occurred in organizations within different social and economic systems for as long as there have been organizations within societies.
A GNU+Linux bearing nomad migrating across a Windows-centric desert. I save the world from incompetent headquarters IT folks. I invite comment and discussion, but I dislike arguing.