@taylan Why does Japanese use weird glyphs to encode their language? Why can't they use the Latin alphabet and settle on writing everything with Romaji all the time to convenience me, a stranger?
Asian languages have some benefits: e.g., because Cantonese (a language also written in an inscrutable, unreadable script) has single-syllable digits, native Cantonese speakers remember phone numbers better on average and show a larger working-memory digit span.
I will give this question the benefit of the doubt and assume that it's genuine: the reason is mnemonics (APL glyphs make sense for what they do and follow clear patterns) and context confusion. E.g. to me, & is exclusively address-of or bitwise-and. If a language used it in some other context, I would be confused and I would never recover. Which is why, if you settle on terse syntax, custom operators are the only option.
Why not words? Out of convenience: scanning and registering a word requires more intellectual effort than pattern-matching a glyph, and when used often enough, verbose names become a nuisance. You can ask this question of e.g. the Haskell designers, who thought that `foldl1` was a good function name - why not use the `fold_left` convention like OCaml does? Without closer inspection, it appears more readable.
And finally: APL was developed as a mathematical notation. Mathematics does not use words, but it could. Instead of F = ma, we could say that force is equal to mass multiplied by acceleration - imagine how difficult it would be to derive the telegrapher's equations with such an inefficient thought model.
Once you start with assembly and get good enough, you notice that you no longer think in terms of opcodes, GOTOs or registers; instead, you think in variables and high-level control structures (if this, then that). Translating mental pseudocode to assembly becomes tedious.
Then we saw the advent of higher-level languages like C: you no longer have to spend so much intellectual effort translating ideas into code, as the compiler can do it better. Further, most programmers grasp the gist of an idea more easily from structured code than from assembly.
After that, we realised that writing a hash map from scratch in every source file is also an inefficient use of programmer time, since humans find it easy to conceptualise basic data structures like a dictionary. Unfortunately, this is where most programmers plateau nowadays: when writing code, they think of for loops, hashmaps and if statements. The average programmer deludes themselves that they are not bottlenecked by their typing speed or by their skill at translating abstract ideas into code: they see programming as the act of such translation, with no regard for the art of reasoning in abstraction.
The CS-savvy programmers discover functional programming. They learn about recursion, which greatly simplifies many structural operations. But at some point they notice that recursion as a concept is powerful but low-level: difficult to debug, difficult to reason about, difficult to pattern-match on, verbose. At this point, they discover recursion schemes and become efficient at juggling maps, filters, scans and reduces. These make it easy to do basic strength reduction and optimisation, to change existing behaviour and to add components to data processing pipelines, on top of being rather easy for a skilled programmer to reason about. But many people never get to this stage - see the amazement at "ngn scans". And then the ultimate step of this evolution is noticing that most of these maps that hide recursion deep down are not necessary: if only the data were arranged in arrays, the loops could be tucked away inside even the most primitive operations. This is what makes APL relevant and so representative of human thought - it minimises the intellectual effort required to turn abstract ideas into code. It lets programmers focus not on the bread and butter of computer science, but on developing their ideas further and improving as scientists and problem solvers.
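The progression described here - explicit recursion, then a fold that hides the recursion, then a primitive that hides the loop entirely - can be sketched in a few lines of Python (the list and the summing task are just illustrative):

```python
from functools import reduce

xs = [3, 1, 4, 1, 5, 9]

# Stage 1: explicit recursion over the structure of the list.
def total_rec(lst):
    return 0 if not lst else lst[0] + total_rec(lst[1:])

# Stage 2: the recursion is hidden behind a fold (a recursion scheme).
total_fold = reduce(lambda acc, x: acc + x, xs, 0)

# Stage 3: the loop is tucked away inside the primitive itself.
total_prim = sum(xs)

assert total_rec(xs) == total_fold == total_prim == 23
```

Each step carries the same meaning with less machinery exposed; array languages take stage 3 and make it the default for every operation.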
But, as I mentioned before, most people stop at the third stage: they can never efficiently reason about abstract ideas or very high-level code, because those don't match their mental model, which was shoehorned into ALGOL-style programming. They never solve problems abstractly - they think in terms of code, which carries a high degree of cognitive overhead due to the need to mentally materialise all the ceremony and magic inherent to verbose mainstream languages. And this is why they never feel bottlenecked by the speed at which they write things. This is why APL simply doesn't work in a commercial, group setting. Readability, in the common sense, is a dial you can twist between programmer convenience and efficiency on one end and how digestible the code is to the average person on the other. Unfortunately, you will be hiring average people, and you will never meet two efficient programmers with the same mental model, so the idea of abstracting away reality doesn't work. You have to agree on the lowest common denominator in terms of abstraction level that makes working comfortable for everyone: it's easier to adopt a (simplified) shared mental ground than to get everyone to agree on a local zen of code.
But maybe forcing people to agree on the "zen of code" would be a good thing: so-called readable code is meant to simplify onboarding and development, because programmers don't need to spend a long time figuring out how the whole thing works. That may be a bad thing: programmers may delude themselves into thinking that appearing to work is the same as working, and hence introduce subtle bugs into a code base they don't really understand - the kind of understanding that upfront familiarisation would have de facto required. This is a common trend in programming right now. I don't agree with it, because every code base builds up its own non-standard set of primitives in utility classes that take a long time to grasp, while APL is mostly the same and idiomatic regardless of where you use it, because its primitives actually do things.
Bottom line: is becoming more efficient at programming and abstract reasoning through better tooling even a goal for programmers? There is no tangible benefit to being faster specifically at greenfield programming. I find it desirable because I am a young programmer and first and foremost a scientist, but I imagine that my more senior colleagues have no reason to chase this - they work on established code bases that grew hairs and limbs over the years, just like every commercial/long-term project, and never have the drive to return to greenfield programming.
I could understand if a price tag of 200€ were too expensive. But 100€ for a book this big, really? An A4 page costs ~0.05€ to print; do the maths in your head to notice that the author makes about 0.07€ per page on this book (and probably even less once you add the publisher's cut). Seriously, /that/ is too much? Most people don't understand two things. First, writing a book incurs a non-linear cost, closer to quadratic: if n is the number of pages, it takes O(n^2) time to write, review and edit it all. Second, nobody is actually capable of doing a calculation like this in their head.
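Taking the figures above at face value, a back-of-the-envelope check in Python; note that the ~830-page count is my assumption for illustration, not something stated in the post:

```python
price = 100.00     # retail price in EUR (from the post)
print_cost = 0.05  # printing cost per A4 page in EUR (from the post)
pages = 830        # assumed page count, hypothetical

revenue_per_page = price / pages              # roughly 0.12 EUR/page
margin_per_page = revenue_per_page - print_cost
print(f"margin per page: {margin_per_page:.2f} EUR")  # prints: margin per page: 0.07 EUR
```

At that page count the per-page margin indeed lands near the 0.07€ quoted; a shorter book would make the margin look larger, a longer one smaller.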
The only reason you can buy e.g. JK Rowling's hot air for a few bucks a copy is precisely that publishers did the calculation and figured out that batch-printing and distributing it would be economically feasible. Now take any less popular book, where the publisher couldn't make such an assumption. Jane Hodge's "Shadow Of A Lady" (in the non-mass-market edition) by my calculations ends up at TWICE the cost per page of K. Sayood's book. No reviews about it being bonkers expensive. Go figure.
TL;DR: the average person can't divide two numbers in their head, and writing long-form books is a stupid idea in this wretched economy.
@lanodan That happened with Ubuntu for the longest time in late 2022: they were shipping a version that was broken on Macs. I had to patch it for my CI pipelines.
Palaiologos (kspalaiologos@fedi.absturztau.be)'s status on Thursday, 13-Jun-2024 06:20:56 JST
something makes me feel like i've stagnated as a mathematician and a programmer. i no longer solve problems on paper. i don't write as much code as i used to. the code that i do write usually solves new problems and is rarely difficult from an engineering perspective. this makes me worried. i have noticed that my problem-solving ability has gone down very significantly.
Palaiologos (kspalaiologos@fedi.absturztau.be)'s status on Monday, 13-May-2024 16:04:19 JST
the german business model was based on cheap energy from russia, cheap subcontractors in the eastern eu and steadily growing exports to china. all three are gone by now, but german politicians are still stuck in a world that doesn’t exist anymore. so now, after the whole country has been turned into a smelly coal-burning pit thanks to fake reports about nuclear and understating the coal plant emissions by 200x, there's no going back and germany is sooner or later going to level with eastern european countries.
@nullenvk @puniko they're more complicated, but the basic idea of including/excluding an event and computing the optimal follow-up works for every DP problem.
@puniko thankfully they're not that bad. it's a competitive programming class and, i'm not gonna lie, if your hammer is reduction to a variant of knapsack, every DP problem looks like a nail
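For the curious, the include/exclude idea applied to the 0/1 knapsack looks like this in Python - a minimal sketch, with made-up weights and values:

```python
def knapsack(weights, values, capacity):
    # dp[c] = best total value achievable with remaining capacity c
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # iterate capacity downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            # either exclude the item (dp[c]) or include it (dp[c - w] + v)
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# weights [2, 3, 4], values [3, 4, 5], capacity 5: best is items 1+2 -> 7
print(knapsack([2, 3, 4], [3, 4, 5], 5))  # prints 7
```

Every cell is exactly that binary choice - include the event or exclude it, then take the optimal follow-up - which is the pattern that generalises across DP problems.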
If anything, I would panic that all the "C replacements" statically link everything, wasting space, and that every Electron app I install bundles the same stuff over and over again, so you can't get rid of it and have one runtime per system.