Conversation
Palaiologos (kspalaiologos@fedi.absturztau.be)'s status on Sunday, 15-Dec-2024 04:37:00 JST:

I see APL as a natural and obvious product of programming language evolution.
Once you start with assembly and get good enough, you notice that you no longer think in terms of opcodes, GOTOs, or registers; instead, you think in variables and high-level control structures (if this, then that). Translating mental pseudocode to assembly becomes tedious.
Then, we saw the advent of higher-level languages like C: you no longer have to spend so much intellectual effort on translating ideas into code, as the compiler can do it better. Further, most programmers can grasp the gist of an idea more easily from structured code than from assembly.
After that, we realised that writing a hash map from scratch in every source file is also an inefficient use of programmer time, since humans find it easy to conceptualise basic data structures like a dictionary. Unfortunately, this is where most programmers plateau nowadays: when writing code, they think in for loops, hash maps, and if statements. The average programmer deludes themselves into believing that they are not bottlenecked by their typing speed or by their skill at translating abstract ideas into code: they see programming as the act of such translation, without regard for the art of reasoning in abstraction.
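(A minimal Haskell sketch of that step - reaching for the library dictionary instead of hand-rolling one. A toy example of mine, names made up:)

```haskell
import qualified Data.Map as Map

-- Counting word occurrences with a ready-made dictionary: the data
-- structure is a concept we grab off the shelf, not something we build.
wordCounts :: [String] -> Map.Map String Int
wordCounts ws = Map.fromListWith (+) [(w, 1) | w <- ws]

-- wordCounts ["a", "b", "a"]  ==  Map.fromList [("a", 2), ("b", 1)]
```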
The CS-savvy programmers discover functional programming. They learn about recursion, which greatly simplifies many structural operations. But at some point they notice that recursion as a concept is powerful but low-level: difficult to debug, difficult to reason about, difficult to pattern-match on, and verbose. At this point, they discover recursion schemes and become efficient at juggling maps, filters, scans, and reduces. These make it easy to perform basic strength reduction and optimisation, to change existing behaviour, and to add components to data-processing pipelines, on top of being rather easy for a skilled programmer to reason about. But many people never get to this stage - see the amazement at "ngn scans".
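(To illustrate - a toy Haskell example of mine, the same running total written twice:)

```haskell
-- With explicit recursion: the accumulator, base case, and looping
-- machinery are all spelled out by hand.
runningTotal :: [Int] -> [Int]
runningTotal = go 0
  where
    go _   []     = []
    go acc (x:xs) = let acc' = acc + x in acc' : go acc' xs

-- As a recursion scheme: scanl1 hides the recursion, leaving only the
-- combining operation in view.
runningTotal' :: [Int] -> [Int]
runningTotal' = scanl1 (+)

-- runningTotal [1, 2, 3, 4] == runningTotal' [1, 2, 3, 4] == [1, 3, 6, 10]
```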
And then the ultimate step of this evolution is noticing that most of these maps that hide recursion deep down are not necessary, and that if only the data were arranged in arrays, the loops could be tucked away inside even the most primitive operations. This is what makes APL relevant and so representative of human thought - it minimises the intellectual effort required to turn abstract ideas into code. It lets programmers focus not on the bread and butter of computer science, but on developing their ideas further and improving as scientists and problem solvers.
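(Again a toy illustration of mine: in APL, +/ prices × quantities sums elementwise products in a few glyphs, with no loop, map, or reduce in sight; the nearest Haskell rendering still has to name its iteration schemes:)

```haskell
-- Total revenue as a dot product. In APL the iteration lives inside
-- the primitives themselves: × is already elementwise, +/ already sums.
revenue :: [Double] -> [Double] -> Double
revenue prices quantities = sum (zipWith (*) prices quantities)

-- revenue [1.5, 2.0] [10, 4]  ==  23.0
```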
But - as I mentioned before - most people stop at the third stage: they can never efficiently reason about abstract ideas or very high-level code, because these are not representative of a mental model that was shoehorned into ALGOL-style programming. They never solve problems abstractly - they think in terms of code, which carries a high degree of cognitive overload due to the need to mentally materialise all the ceremony and magic of inherently verbose mainstream languages. And this is why they never feel bottlenecked by the speed at which they write things.
This is why APL simply doesn't work in a commercial, group setting. Readability in the common sense is a dial you can turn between programmer convenience and efficiency on one end, and how digestible the code is to the average person on the other. Unfortunately, you will be hiring average people, and you will not meet two efficient programmers with the same mental model, so the idea of abstracting away reality doesn't work. You have to agree on the lowest common denominator of abstraction that makes working comfortable for everyone: it's easier to adopt a (simplified) shared common ground than to get everyone to agree on a local zen of code.
But maybe forcing people to agree on the "zen of code" would be a good thing. So-called readable code is meant to simplify onboarding and development, because programmers don't need to spend a long time figuring out how the whole thing works. That may be a bad thing: programmers may delude themselves into thinking that appearing to work is the same as working, and hence introduce subtle bugs into a code base they don't really understand; the upfront familiarity that less approachable code de facto requires would have prevented those bugs. This is a common trend in programming right now. I don't agree with it, because every code base builds up its own non-standard set of primitives in utility classes that take a long time to grasp, while APL is mostly the same and idiomatic regardless of where you use it, owing to the fact that its primitives actually do things.
Bottom line: is becoming more efficient at programming and abstract reasoning through better tooling even a goal for programmers? There is no tangible benefit to being faster specifically at greenfield programming. I find it desirable because I am a young programmer and, first and foremost, a scientist; but I imagine that my more senior colleagues have no reason to chase this - they work on established code bases that have grown hairs and limbs over the years, just like every commercial/long-term project, and they never have the drive to return to greenfield programming.
Wolf480pl (wolf480pl@mstdn.io)'s status on Sunday, 15-Dec-2024 04:36:58 JST:

@kspalaiologos For me, the most important part of readability isn't that a new person on the team can get productive quickly.
For me, it's the ability to debug across abstraction layers.
That when I'm deploying a foo-inator and it doesn't do what I want, I can find its source code on the internet and read it, and even though I'm seeing it for the first time in my life, I'll be able to narrow down where the bug is. And if it's in a library or in the kernel, I can read the source of that too.
Wolf480pl (wolf480pl@mstdn.io)'s status on Sunday, 15-Dec-2024 04:37:46 JST:

@kspalaiologos And so far, I've encountered three categories of code that turned out completely inscrutable to me:
Scala, React, and whatever PipeWire is written in (not sure if it's just GObject, a special evolution of GObject found in GStreamer, or some different beast).
Taylan (Now 18% More Deranged) (taylan@fedi.feministwiki.org)'s status on Sunday, 15-Dec-2024 05:16:06 JST:

@kspalaiologos
I don't get why it doesn't just use ASCII to represent the source code. Why not use words, like a normal programming language?
Taylan (Now 18% More Deranged) (taylan@fedi.feministwiki.org)'s status on Sunday, 15-Dec-2024 18:55:50 JST:

@kspalaiologos
> Why does Japanese use weird glyphs to encode their language? Why can't they use the Latin alphabet and settle on writing everything with Romaji all the time to convenience me, a stranger?
I think it would convenience their own population as well in the long term. Children would become literate more quickly, and have an easier time learning English. For such a developed country, their English seems really bad at the population level.
The Turks did it about 100 years ago: they ditched the Arabic script and adopted a custom Latin alphabet suited to Turkish, similar to romaji. The Philippines did it way earlier, due to Spanish colonization, ditching Baybayin in favor of the Latin script, which works fine for Tagalog, Cebuano, and presumably most if not all of their other languages. Baybayin is being revived by some historians etc., but in general people don't care, because the Latin script works just fine.
> Asian languages have some benefits: e.g. thanks to the fact that Cantonese (a language also encoded with an inscrutable unreadable alphabet) has single-syllable digits, native Cantonese speakers on average remember phone numbers better and have an expanded working memory.
Seems like speculation, so I would question whether it's worth the immense complexity of Hanzi, but I know very little about Chinese, so OK.
> I will give this question the benefit of the doubt and assume that it's genuine: the reason is mnemotechnics (APL glyphs make sense for what they do and have clear patterns) and context confusion. E.g. to me, & is exclusively address-of or bit-and. If a language used it in some other context, I would be confused and would never recover. Which is why, if you settle on terse syntax, custom operators are the only option.
Well, using words would solve that. Address-of, bit-and, etc.
> Why not words? Out of convenience: scanning a word requires more intellectual effort to parse and register than pattern-matching a glyph, and when used often enough, verbose names become a nuisance. You can ask this question of e.g. the Haskell designers, who thought that `foldl1` is a good function name - why not use the `fold_left` convention like OCaml does? Without closer inspection, the latter appears more readable.
I'm very skpetical of this. Words being made up of indvidiual lettres is mailny a leraning aide. Once you've masterd a word, your brain prety much scans it as one token, which is why tpyos often beocme insivible.
(I might have overdone it with the intentional typos in the previous paragraph, but you get the point lol.)
> And finally: APL was developed as a mathematical notation. Mathematics does not use words, but it could. Instead of F = ma, we could say that force is equal to the mass multiplied by the acceleration - imagine how difficult it would be to derive the telegrapher's equations with such an inefficient thought model.
I think the mental model is the same whether you write F = ma or "force = mass × accel."
I think the tradition of single-letter variables probably just stems from 1. maths having evolved from people writing manually on paper and blackboards, and 2. formulae most often being very short, without all that many different variables, compared to lengthy algorithms spanning many function definitions.
It's similar for operators, though it's useful to visually distinguish them from variables, I suppose. (Writing "force = mass × accel" makes it much easier to parse than "force equals mass times accel.") You can do that while still using words, though, like foo(bar) making it clear that foo is the operator, thanks to the ().
In my experience, verbose variable and operator names are no hindrance to writing complex and readable code. I love Scheme (a very maths-adjacent language as well) and it has a tradition of verbosity, with operator names such as:
- call-with-current-continuation
- with-exception-handler
- define-syntax
- fold-left / fold-right
- vector-ref
- let-values
And so on.
(Call-with-current-continuation is a bit of a pathological case though, and is usually shortened to call/cc after all.)
Fun fact: I couldn't grasp how delimited continuations work in Scheme until I read the relevant section of the GNU Guile user manual, which explains them in terms of the more verbose primitives call-with-prompt and abort-to-prompt.
Palaiologos (kspalaiologos@fedi.absturztau.be)'s status on Sunday, 15-Dec-2024 18:55:51 JST:

@taylan Why does Japanese use weird glyphs to encode their language? Why can't they use the Latin alphabet and settle on writing everything with Romaji all the time to convenience me, a stranger?
Asian languages have some benefits: e.g. thanks to the fact that Cantonese (a language also encoded with an inscrutable unreadable alphabet) has single-syllable digits, native Cantonese speakers on average remember phone numbers better and have an expanded working memory.
https://www.sciencedirect.com/science/article/pii/S0749596X22000766
https://www.npr.org/sections/krulwich/2011/07/01/137527742/china-s-unnatural-math-advantage-their-words
Etc...
I will give this question the benefit of the doubt and assume that it's genuine: the reason is mnemotechnics (APL glyphs make sense for what they do and have clear patterns) and context confusion. E.g. to me, & is exclusively address-of or bit-and. If a language used it in some other context, I would be confused and would never recover. Which is why, if you settle on terse syntax, custom operators are the only option.
Why not words? Out of convenience: scanning a word requires more intellectual effort to parse and register than pattern-matching a glyph, and when used often enough, verbose names become a nuisance. You can ask this question of e.g. the Haskell designers, who thought that `foldl1` is a good function name - why not use the `fold_left` convention like OCaml does? Without closer inspection, the latter appears more readable.
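(A minimal example of the function in question, for the curious:)

```haskell
-- foldl1 folds from the left, seeding the accumulator with the first
-- element, so no explicit initial value is needed (it errors on []).
total :: [Int] -> Int
total = foldl1 (+)

-- total [1, 2, 3, 4]  ==  10
```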
And finally: APL was developed as a mathematical notation. Mathematics does not use words, but it could. Instead of F = ma, we could say that force is equal to the mass multiplied by the acceleration - imagine how difficult it would be to derive the telegrapher's equations with such an inefficient thought model.
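(For reference, the standard textbook form of those equations - nothing here is derived in this thread:)

```latex
% Telegrapher's equations for a line with per-unit-length series
% resistance R and inductance L, shunt conductance G and capacitance C:
\frac{\partial V}{\partial x} = -R\,I - L\,\frac{\partial I}{\partial t}
\qquad
\frac{\partial I}{\partial x} = -G\,V - C\,\frac{\partial V}{\partial t}
```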