@Unixbigot All strings representable in UTF-8 are representable in UTF-16 and vice versa. Both encode a sequence of Unicode scalar values, i.e. code points in the ranges [0, 0xD7FF] and [0xE000, 0x10FFFF].
Some things falsely call themselves UTF-16 but can encode a superset of this, namely unpaired surrogates: code points in the range [0xDC00, 0xDFFF] not preceded by a value in the range [0xD800, 0xDBFF], and code points in the range [0xD800, 0xDBFF] not followed by a value in the range [0xDC00, 0xDFFF]. (Surrogates are code points but not scalar values, which is exactly why real UTF-16 excludes them.)
This encoding exists because some systems were originally designed around UCS-2 (which encodes scalar values in the range [0, 0xFFFF]) and were later updated to treat their strings as UTF-16, while retaining compatibility with arbitrary UCS-2 strings. It is known as "potentially ill-formed UTF-16" or "WTF-16", and is used by JavaScript, at least some versions of Java, and various Windows APIs.