This sounds as if perhaps a programming language akin to natural language (e.g. Inform 7 or "Plain English"), or maybe a multi-lingual environment, could be a better fit for "the concept of Emacs" than Emacs Lisp?
(I mean, Emacs certainly is a complex multi-dimensional beast and there isn't a single concept of it, but maybe that would be an interesting experiment)
My first attempt at what would become #GRASP was in September 2016, when I was working on a structure editor running in the SLAYER framework that I was developing at the time. It wasn't called GRASP yet, though - I called it "Butterfly", because of the XKCD strip about "real programmers", but I didn't get very far.
SLAYER was based on Guile, and I ended up implementing something akin to Racket's Big Bang (which I didn't know about at that time), and I spent considerable effort trying to find a nice way to represent stuff in Scheme (I developed a library for working with lists of interspersed keywords and values, but I was never happy with it).
I think I may have been on a trajectory to build something similar to @disconcision 's Fructure, but I broke my laptop, and it took me a while to recover the latest changes from the disk (which I think eventually happened in 2018; until then, I was working on a small Intel Atom-based netbook that I borrowed from a friend, and on which I wrote "A Pamphlet Against R").
Then I started to look for a nice way of putting rectangles into other rectangles, which took the form of "The Draggable Rectangle Challenge" that I published on Quora (I was a Quora addict at that time)
I wrote a highly idiosyncratic solution in Racket, which nevertheless had its charm, but it ran terribly slowly on my friend's Raspberry Pi, and I never figured out a way to run it on Android.
The editor window title was "GRASP LIMB", where "GRASP" stands - of course - for GRAphical Scheme Programming, and "LIMB" is meant to be a pun on LISP's backronym "Lots of Irritating Superfluous Parentheses"; "LIMB" was meant to stand for "Lots of Intriguing Movable Boxes".
I started figuring out a way to build Android applications - I had to buy an additional 2GB of RAM for my laptop to make Android Studio run, and when (after breaking another laptop) I finally found a way to build Android apps in Termux, without Android Studio, it turned out to be a huge productivity booster for me - I could finally do programming in places where I wouldn't be able to use the laptop, and this setup prompted some of the design of GRASP.
I made the first version on/for the phone in the spring of 2020, mainly on the toilet (and it shows: https://www.youtube.com/watch?v=BmZ39IfElzg), and in 2021 I wrote what I thought would be the "final version" - but in both cases, the development eventually stalled.
I have been working on the current code base of GRASP since the beginning of 2022 (I think I wrote the parser in the fall of 2021), but I created a separate repository for it at the beginning of 2023.
It's a bit frustrating that - despite all the effort that I've put into the development - it still isn't a well-polished app. On the other hand, the experiences I've been having while working on the Advent of Code solutions were rather positive - and although I think there are still a few bugs that I need to fix before I start nagging people to try and use it, to me personally the experience has been rather satisfying, and I've already started thinking about uploading it to F-Droid (which, from what I've read, is a process that will probably take some time)
I will try to write a summary of the development for this month and year (and the plans for the next year) tomorrow, because I need to go to sleep now.
Of all the software in the world, #Emacs has been the greatest source of inspiration for #GRASP
I sometimes try to figure out what exactly Emacs is. As a matter of fact, the accidental interview that I did with Bernard Greenberg earlier this year happened exactly because I was trying to make a YouTube video about "the concept of Emacs". I haven't finished the video - and I don't know if I ever will - so I decided to write this post.
The two obvious non-answers about the essence of Emacs are "text editor" and "operating system", and the closest conceptual relatives are Smalltalk virtual machines.
Emacs didn't begin with Lisp. It began with TECO, and it was MIT students' attempt at creating a working environment that wouldn't take away any power from its users, but that would instead empower them even more.
The Emacs paper by Richard Stallman refers, among others, to Doug Engelbart's NLS/Augment system.
In any case, it seems that Emacs was as much a social movement as it was a text editor.
The early offspring of Emacs were EINE ("EINE Is Not Emacs") and ZWEI ("ZWEI Was EINE Initially") for the Lisp Machines, and Multics Emacs (which was an Emacs).
Greenberg told me that he was very close friends with Daniel Weinreb, and that they were inspiring each other's work. (He also told me he didn't know Richard Stallman very well.)
In any case, Multics Emacs was the first Emacs to use Lisp, and Stallman loved that idea.
The only Emacs that I have ever had the opportunity to use is (and continues to be) GNU Emacs, which Stallman took from Gosling and modified. Gosling was a former user of Multics Emacs, and once he was confined to UNIX, he missed it so much that he decided to recreate it.
Of course, UNIX already had its own editor (developed by Bill Joy), called "ex", an extension of the "ed" editor developed by Ken Thompson. There was a way of running it in "visual mode" on video terminals (as opposed to teletypes) by using the command "vi". I don't know whether Gosling didn't like it or just loved Emacs that much, but he created a crippled implementation of a Lisp-like language called "MockLisp" to mimic some of the capabilities of Multics Emacs.
(Guy Steele, who originally started the TECO Emacs project, later served on a scientific board for Gosling's PhD at CMU)
This is a very twisted story, and it's hard to get a clear-cut idea of what "the essence of Emacs" is, so...
It operates on s-expressions rather than syntax objects, but it supports a pattern language akin to 'macro-by-example' with some extensions, and it also tracks the derivation, so that the expanded code can be 'impanded' back (this feature isn't implemented in the above code, but I had it running on my laptop, and in my Bitbucket repo that Atlassian decided to kill)
On various occasions, I think about C++. I devoted a few years of my life to developing a 3D engine in C++. I learned the language from Stroustrup's book ("The C++ Programming Language"). I also developed a Qt-like widget framework using boost::signals
Also, many examples from the "Game Programming Gems" series used C++, and some of the chapters in that series were devoted to specific language features (such as template metaprogramming)
But only after I decided to embed #Guile #Scheme into my engine - which resulted in rewriting the whole thing in pure C - did I get enough perspective to criticize the language
I have a strong feeling that the popularity of C++ stems from three factors:
- convincing a lot of people that "C++ is a better C, so if you're using C, you might as well use C++"
- a lack of critical thinking among most programmers
- the love of complexity among (usually inexperienced and naively enthusiastic) programmers
I started learning C++ around 2003 (so that was ISO C++98), but - using macros and a GCC extension - I came up with my own range-based for loop.
I don't love the C programming language, but one thing it is fairly good at is modeling what the computer is going to do, and how the system is going to be organized (especially if you work with it at the level of object files)
C++ makes this so much harder, with all of its templates, overloading and name mangling. And its design methodology is really reminiscent of a kid in a toy store, pointing a finger at different toys and saying, "I want this! and that! oh, this is so cool, can I have it as well?"
There seems to be very little thought - and design thinking - in C++ at a very fundamental level. (And I liked how the recent drama blog post pointed out Stroustrup's lack of experience)
But the worst thing, let me tell you, the worst thing is all those proponents of C++ that you meet on the Internet, who have little to no experience with anything else - and even their experience with C++ is somewhat minuscule (so that they don't even know about the existence of weird entities such as "r-value references") - and they appear every now and then to tell you how wrong you are in your criticism of C++, because it's the single best language ever invented, but somehow they are never able to refute your arguments
@lfa Overflow is something that happens during arithmetic operations. When I assign 0x80 to a char variable (assuming it's at least 8 bits wide), that variable contains exactly this value: 0x80 (or the bit pattern 0b10000000). There is no overflow.
The problem arises with the == operator, which widens the variable to the "int" type. If plain char is signed (which is implementation-defined - a problem on its own), the value gets sign-extended: instead of 0x80, it becomes 0xFF80 (with 16-bit int) or 0xFFFFFF80 (with 32-bit int), because that's how widening works with two's complement representation - so the comparison ends up testing -128 against +128 and yields false.
The bottom line is that while there are some valuable traits of natural languages that should definitely be considered with care and attention, programming languages have the potential to be *better* than natural languages in some regards - especially if we take their compositionality to the extreme, with all due seriousness.
I remember asking a question on comp.lang.scheme about a function like "find" which - instead of returning an element satisfying a "predicate" - would return the value of that "predicate". Someone replied that I could use SRFI-1's "any" "quantifier" for this, and then I thought: of course! because it's a generalization of the "or" connective!
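A minimal sketch of the difference, in Guile syntax (assuming SRFI-1 is loaded):

(use-modules (srfi srfi-1))

;; "find" returns the element that satisfies the predicate:
(find even? '(1 2 3 4))   ; => 2

;; "any" returns the value produced by the predicate itself,
;; so it behaves like "or" distributed over a list:
(any (lambda (key) (assq key '((b . 2) (c . 3))))
     '(a b c))
;; => (b . 2)

;; compare with the plain "or" connective:
(or (assq 'a '((b . 2) (c . 3)))
    (assq 'b '((b . 2) (c . 3)))
    (assq 'c '((b . 2) (c . 3))))
;; => (b . 2)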
In the same vein, SRFI-2, written by Oleg Kiselyov, provides and-let*, which is pretty much like Haskell's "do" notation over the Maybe monad (for those of you who Haskell). And while the name "and-let*" is indeed a terrible one (although admittedly it does indicate that it's a combination of the "and" operator with the "let*" form), the operator itself is very useful.
I devised an improved variant of it, described in the SRFI-202 document, which also has the capability of destructuring and of receiving multiple values, and I use it a lot.
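For the record, here's a rough sketch of the basic SRFI-2 form (not the SRFI-202 extensions); the configured-port function and the config alist are made up for the illustration, and in Guile the form comes from the (srfi srfi-2) module:

(use-modules (srfi srfi-2))

;; every clause must yield a true value, otherwise the whole form
;; short-circuits to #f - like "and", but with "let*"-style bindings
(define (configured-port config)
  (and-let* ((entry (assq 'port config))   ; binding clause
             (value (cdr entry))
             ((integer? value))            ; bare test clause
             ((positive? value)))
    value))

(configured-port '((host . "localhost") (port . 8080)))  ; => 8080
(configured-port '((host . "localhost")))                ; => #f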
I don't have enough hands-on experience with Common Lisp to say anything significant about its coding style, but I remember how awkward it felt, when I was working through Norvig's PAIP, that he had to use '((t . t)) to represent empty bindings, because the empty list in Common Lisp is indistinguishable from the false value - and how much more natural it felt when I could use '() to represent empty bindings and #f to represent match failure
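A hypothetical sketch of that convention (not Norvig's actual code, just an illustration), with '() meaning "success, no bindings yet" and #f meaning failure:

;; extend the bindings with var = input, unless var is already
;; bound to something different - in which case the match fails
(define (match-variable var input bindings)
  (cond ((assq var bindings)
         => (lambda (binding)
              (if (equal? (cdr binding) input)
                  bindings
                  #f)))
        (else
         (cons (cons var input) bindings))))

(match-variable '?x 42 '())           ; => ((?x . 42))
(match-variable '?x 42 '((?x . 42)))  ; => ((?x . 42))
(match-variable '?x 43 '((?x . 42)))  ; => #f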
There are a lot of excellent books about #Scheme and #Lisp - my favorite ones are Structure and Interpretation of Computer Programs and Paradigms of AI Programming.
The latter is a book that uses Common Lisp, and the presented programs are written with great style and taste.
When I was working through that book, I was translating some of the examples to Scheme. I was only a beginner, so I dare say I wrote fairly awkward Scheme at that time. It took me many years to develop my current style of programming.
I was starting out with Guile (because I embedded it in the 3D game engine that I was developing at that time), and one of the greatest revelations was the discovery of the (ice-9 match) module, which contained the "match" macro that I later called the Wright-Cartwright-Shinn matcher when I wrote SRFI-200.
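A small taste of that macro, for anyone who hasn't seen it (the simplifier below is just a made-up example):

(use-modules (ice-9 match))

;; destructure an s-expression by its shape:
(define (simplify expression)
  (match expression
    (('+ 0 x) x)
    (('+ x 0) x)
    (('* 1 x) x)
    (('* x 1) x)
    (('* 0 _) 0)
    (_ expression)))

(simplify '(+ 0 (* a b)))   ; => (* a b)
(simplify '(* 0 whatever))  ; => 0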
I rediscovered SICP after I had already been using Guile for some time. When I first encountered that book - a few years earlier - I only skimmed through some initial chapters and thought to myself that I knew all that already, and that the book might be interesting to beginners, but not to me. (Boy, was I wrong.)
And in order to absorb the content of this book, I had to unlearn a lot of things that I had already learned up to that point (mainly from books about C and C++). The fact that the book didn't just pile up more and more language features (hey Bjarne!), but instead carefully accompanied each new feature with a means of program analysis (such as the substitution model or the environment model) was such a breakthrough. (And I find it very sad that maybe, like, 0.001% of programmers seem to understand that.)
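To give a taste of what I mean: the substitution model lets you analyse a program with nothing but pencil and paper (the definitions below are the ones from the first chapter of SICP):

(define (square x) (* x x))
(define (sum-of-squares a b) (+ (square a) (square b)))

(sum-of-squares 3 4)
;; => (+ (square 3) (square 4))   ; substitute the arguments into the body
;; => (+ (* 3 3) (* 4 4))
;; => (+ 9 16)
;; => 25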
It is often said that even though SICP uses Scheme, it is not a book that teaches or promotes Scheme, but that it instead focuses on some fundamental concepts.
Yet all the adaptations of the book to various other programming languages seem to fail at capturing the hallmark of SICP, namely the metacircular evaluator. So instead of, say, presenting a JavaScript evaluator in JavaScript, they build a Scheme interpreter in JavaScript - which kind of misses the point of metacircularity.
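For those who haven't seen it, here's a drastically reduced sketch of the idea (nowhere near the actual SICP evaluator): a tiny subset of Scheme, evaluated in Scheme.

(define (evaluate expression environment)
  (cond ((number? expression)
         expression)
        ((symbol? expression)
         (cdr (assq expression environment)))
        ((eq? (car expression) 'lambda)
         ;; build a host-language closure for the interpreted lambda
         (lambda arguments
           (evaluate (caddr expression)
                     (append (map cons (cadr expression) arguments)
                             environment))))
        (else
         ;; application: evaluate the operator and the operands
         (apply (evaluate (car expression) environment)
                (map (lambda (operand)
                       (evaluate operand environment))
                     (cdr expression))))))

(evaluate '((lambda (x y) (+ x y)) 2 3)
          `((+ . ,+)))
;; => 5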
However - while I would claim that SICP actually promotes Scheme - it does not teach Scheme programming, and definitely not "modern Scheme programming".
One thing that I consider awful about it is the overuse of function aliases for car/cdr. I also don't like SICP's object-orientation.
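By the former I mean the style where every accessor is just another name for car or cdr - roughly like this (a made-up illustration in the spirit of the book's exercises) - as opposed to, say, a proper record type (SRFI-9/R7RS):

(use-modules (srfi srfi-9))

;; SICP style: constructors and accessors as aliases for cons/car/cdr
(define (make-segment start end) (cons start end))
(define (start-segment s) (car s))
(define (end-segment s) (cdr s))

;; a more "modern Scheme" alternative: a proper record type
(define-record-type <segment>
  (segment start end)    ; constructor
  segment?               ; predicate
  (start segment-start)  ; accessors
  (end segment-end))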
I think of a script as a program that only automates what would otherwise be manual interactions of a human operator with the computer (so it does not create any - or many - new layers of representations)
I like cycling, swimming, pizza, ice cream, comics, computer games and Lisp programming in Emacs. Currently I'm developing #GRASP - the GRAphical Scheme Programming environment for Android, Desktop and Terminal - and sharing #SchemeBites and #PlottingScheme