In general, now that we are standardizing on a display tech (LCD/LED flatscreens) that can, in principle, vary its refresh rate freely, why do we put up with frustrations like pulldown and horrors like algorithmic/"AI" "smooth" resampling/"judder correction", when we could just teach the signal chain to know what the correct framerate is and preserve it?
Our HDTVs could just show 24fps video at 24fps (or at 48Hz with each frame shown twice, if that's easier). Nothing really prevents it.
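For concreteness, here is a minimal Python sketch of the cadence arithmetic behind the complaint (the function names and the simple alternating-pulldown model are mine, purely for illustration): on a fixed 60Hz panel, 24fps film needs 3:2 pulldown, so consecutive frames are held for unequal times, whereas a 24Hz or 48Hz refresh holds every frame for exactly the same duration.

```python
import math

def pulldown_hold_times(source_fps=24, panel_hz=60):
    """Per-frame hold times (ms) under a simple alternating pulldown (e.g. 3:2)."""
    refresh_ms = 1000 / panel_hz
    avg = panel_hz / source_fps                    # 2.5 refreshes per frame for 24 -> 60
    pattern = [math.ceil(avg), math.floor(avg)]    # realized as 3, 2, 3, 2, ...
    return [round(n * refresh_ms, 1) for n in pattern]

def native_hold_times(source_fps=24, panel_hz=24):
    """Per-frame hold times (ms) when the panel refresh is a multiple of the source rate."""
    repeats = panel_hz // source_fps               # 1 at 24Hz, 2 at 48Hz
    refresh_ms = 1000 / panel_hz
    return [round(repeats * refresh_ms, 1)]

print(pulldown_hold_times())        # [50.0, 33.3] -> uneven cadence, i.e. judder
print(native_hold_times(24, 24))    # [41.7]       -> every frame held equally
print(native_hold_times(24, 48))    # [41.7]       -> same cadence, frames just doubled
```

The point of the numbers: pulldown alternates 50ms and 33.3ms holds for what should be evenly spaced frames, while a matched (or doubled) refresh gives a uniform 41.7ms per frame with no resampling needed.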