Palaiologos (kspalaiologos@fedi.absturztau.be)'s status on Sunday, 15-Dec-2024 18:55:51 JST

@taylan Why does Japanese use weird glyphs to encode their language? Why can't they use the Latin alphabet and settle on writing everything with Romaji all the time, to convenience me, a stranger?
Asian languages have some benefits: for example, because Cantonese (a language also encoded with an inscrutable, unreadable alphabet) has single-syllable digits, native Cantonese speakers on average remember phone numbers better and have a longer digit span in working memory.
https://www.sciencedirect.com/science/article/pii/S0749596X22000766
https://www.npr.org/sections/krulwich/2011/07/01/137527742/china-s-unnatural-math-advantage-their-words
Etc...
I will give this question the benefit of the doubt and assume that it's genuine: the reason is mnemonics (APL glyphs make sense for what they do and follow clear patterns) and context confusion. E.g. to me, & is exclusively address-of or bit-and. If a language used it in some other context, I would be confused and would never recover. Which is why, if you settle on terse syntax, custom operators are the only option.
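To make the clash concrete (my own Haskell sketch, not part of the original post): Haskell's standard library reuses & as reverse application in Data.Function, which reads completely wrong to anyone whose mental model of & comes from C, while a freshly minted operator carries no such baggage.

```haskell
import Data.Function ((&))  -- in Haskell, (&) is reverse application, not bit-and

-- To a C programmer, & means address-of or bitwise AND; here it pipes a
-- value into a function. Exactly the kind of context clash described above.
viaAmpersand :: Int
viaAmpersand = 5 & (+ 1) & (* 2)   -- ((5 + 1) * 2) == 12

-- A custom operator with a fresh spelling imports no meaning from other languages.
(|>) :: a -> (a -> b) -> b
x |> f = f x

viaCustom :: Int
viaCustom = 5 |> (+ 1) |> (* 2)    -- same result, no mental collision with C

main :: IO ()
main = print (viaAmpersand, viaCustom)
```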
Why not words? Out of convenience: scanning and parsing a word takes more effort to register than pattern-matching a glyph, and when used often enough, verbose names become a nuisance. You can ask this question of e.g. the Haskell designers, who thought that `foldl1` is a good function name - why not use the `fold_left` convention like OCaml does? Without closer inspection, the verbose name appears more readable.
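For comparison, a small Haskell sketch of my own; `foldLeftNonEmpty` is a hypothetical verbose synonym defined here purely for illustration and exists in no standard library:

```haskell
-- A hypothetical verbose spelling of Prelude's foldl1, for comparison only.
foldLeftNonEmpty :: (a -> a -> a) -> [a] -> a
foldLeftNonEmpty = foldl1

main :: IO ()
main = do
  print (foldl1 (+) [1, 2, 3, 4])            -- 10, the terse Prelude name
  print (foldLeftNonEmpty (+) [1, 2, 3, 4])  -- 10, the same fold spelled out
```

The verbose name reads fine once; after the hundredth use in a chain of combinators, the terse one is what you want to pattern-match at a glance.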
And finally: APL was developed as a mathematical notation. Mathematics does not use words, but it could. Instead of F = ma, we could say that the force is equal to the mass multiplied by the acceleration - imagine how difficult it would be to derive the Telegrapher's equations with such an inefficient thought model.
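For illustration, one common symbolic form of the Telegrapher's equations, where V and I are the voltage and current along the line and R, L, G, C are the per-unit-length parameters; try stating either line in prose and the point makes itself.

```latex
\frac{\partial V}{\partial x} = -L\,\frac{\partial I}{\partial t} - R\,I
\qquad
\frac{\partial I}{\partial x} = -C\,\frac{\partial V}{\partial t} - G\,V
```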