@ivesen@a1ba Nothing wrong with that one, it's decoding. And BSD got {be,le}{16,32,64}{dec,enc} as well.
The one that's wrong is swapping, especially doing it mindlessly, because then instead of just "okay, it happens here" you end up having to hope the even/odd count of swaps works out.
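For reference, a minimal sketch of what such a decode helper looks like (shift-based, no ifdefs; the name matches the BSD one but this is just an illustration, not the actual libc source):

```c
#include <stdint.h>

/* Decode a 32-bit big-endian value from a byte buffer.
 * No #ifdefs, no swapping: the bytes are read in a defined order,
 * so it is correct on both little- and big-endian hosts. */
static inline uint32_t be32dec(const void *buf)
{
    const uint8_t *p = buf;
    return ((uint32_t)p[0] << 24) |
           ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |
            (uint32_t)p[3];
}
```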
@a1ba@ivesen Yeah, the compiler will optimise it; that said, even just defining a macro for it is probably a good idea so the code is more readable and there are fewer chances of a typo/off-by-one (and doing it ifdef-free at least makes it easy to test).
And well, if the compiler doesn't optimise it… I feel like modern CPUs will :D
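Something like this, say (hypothetical macro name, just to show the ifdef-free shape):

```c
#include <stdint.h>

/* Hypothetical helper macro: decode a 32-bit little-endian value
 * from a uint8_t pointer p. Behaves the same on any host endianness,
 * so it can be unit-tested without an #ifdef in sight. */
#define LE32DEC(p) \
    ( (uint32_t)(p)[0]        | ((uint32_t)(p)[1] << 8)  | \
     ((uint32_t)(p)[2] << 16) | ((uint32_t)(p)[3] << 24) )
```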
@lanodan@ivesen there is always code that already exists and code that has yet to be written. For the former, swapping bytes in place is the way to go (especially since there aren't many big-endian systems these days anyway). For the latter... just do the byte-shifting thing, the compiler will optimize it into a single load anyway (though some people claim not all compilers do this)
@lanodan@a1ba seems the BE->LE method suggested by the blog in question isn't optimized, while the byteswap that's argued against is. will this ever matter? :blobshrug:
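For comparison with the shift-based decode upthread, this is roughly what the byteswap flavour looks like; whether both end up as the same single load (plus a bswap on a mismatched host) depends on the compiler and version, so checking the generated assembly is the real answer:

```c
#include <stdint.h>
#include <string.h>

/* Explicit-byteswap flavour of a big-endian 32-bit load:
 * do a native load, then swap if the host is little-endian.
 * __builtin_bswap32 and the __BYTE_ORDER__ macros are GCC/Clang-isms. */
uint32_t load_be32_bswap(const uint8_t *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);   /* memcpy dodges alignment/aliasing issues */
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    v = __builtin_bswap32(v);
#endif
    return v;
}
```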