@dougmerritt Thanks ;) Yeah, though it's like Christmas, right: the celebration is on the eve beforehand (and at midnight). Honestly I'm only a couple of pages into an endearingly dodgy text file I found titled "the best snow crash text file.txt", which seemed like the most authentic way to read it.
Forgive the radio silence, everyone. I did do a rewrite of NUD, removing many of the silly things and improving readability, shall we say. #lisp #cdrCoding https://codeberg.org/tfw/nud
Hope to see you at the show tomorrow, in just under 16 hours (I'll toot here an hour or so before going live).
:frantically reads Snow Crash
Xin nian kuai le, everyone (Happy Spring Festival / Happy Chinese New Year)
@dougmerritt @screwtape My friend, the natural language processing expert (UMass Amherst, Watson), also agrees.
I still argue that a multimodal LLM + an IR system like Wolfram Alpha + a full-fledged RDF backing store covering a reasonable percentage of formal knowledge is roughly >= what people actually imagine when they think of AGI. Also, less energy-intensive than pure neural net solutions. He says I'm wrong; I've only done undergrad genetic programming, 20 years ago.
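To make that division of labour concrete, here's a minimal sketch in Python. rdflib is a real library; ask_llm(), the file name, and the hand-written SPARQL are hypothetical placeholders for whatever LLM, knowledge slice, and IR layer you'd actually wire in.

```python
# Sketch of the hybrid: RDF store first, LLM only as a flagged fallback.
# rdflib is real; ask_llm() and the file/query names are placeholders.
from rdflib import Graph

kb = Graph()
kb.parse("formal_knowledge.ttl")  # some reasonable slice of formal knowledge

def ask_llm(question: str) -> str:
    # Hypothetical hook for whatever multimodal LLM you plug in.
    raise NotImplementedError

def answer(question: str, sparql: str) -> str:
    # In a full system the IR layer would map the question to SPARQL;
    # here the query is passed in by hand to keep the sketch short.
    rows = list(kb.query(sparql))
    if rows:
        # Found in the store: cheap, checkable, no neural net involved.
        return "; ".join(" ".join(str(v) for v in row) for row in rows)
    # Not in the store: let the LLM guess, but say so.
    return "[unverified] " + ask_llm(question)
```

Nothing clever, just the split I mean: retrieval where it's possible, generation only where it isn't.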
@dougmerritt This was an after-ELS scramble idea. Sandewall wrote a Lisp program called The Leonardo System from 2005 to 2009, then began writing that open-access book, AICA, from 2010 to 2014. He was an Allegro CL person. To my knowledge, I have the sole patched version of his 2009 software, which had a problem that broke it with modern ( [] ) GNU CLISP. Scraps: https://codeberg.org/tfw/pawn-75 PDFs: https://www.ida.liu.se/ext/aica/
@dougmerritt So, Erik Sandewall https://www.ida.liu.se/ext/aica/ had an AI paradigm called Leonardo Software Individuals (for knowledge-based, cognitive computer programs). Bots need life spans of years and need to be self-awarely kept unique. In practice, for LLMs this would just mean running an LM locally and preserving its KV cache between runs, which you can do, but everyone pretends you can't (rough sketch of the mechanics below). I'd like to add these conditions to current norms.
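Mechanically it isn't much. A rough sketch (not Leonardo itself; the model name and file paths are placeholders): with Hugging Face transformers you keep the past_key_values the model hands back and write them to disk, so the same individual resumes where it left off next session.

```python
# Rough sketch: persist a local LM's KV cache between runs so the same
# "individual" resumes where it left off. Model name and paths are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "some-local-model"   # placeholder
STATE = "individual_state.pt"

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def load_state():
    try:
        return torch.load(STATE, weights_only=False)  # cache saved last run
    except FileNotFoundError:
        return None                                   # a brand-new individual

def step(new_text, past):
    # Only the new tokens are fed; the cache already covers everything prior.
    ids = tok(new_text, return_tensors="pt").input_ids
    out = model(ids, past_key_values=past, use_cache=True)
    torch.save(out.past_key_values, STATE)  # the "life span" lives on disk
    return out.past_key_values

past = load_state()
past = step("Hello again.", past)
```

The persistence is the easy part, in other words; it's the norms around keeping one continuous, unique individual that are missing.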
@dougmerritt No, not at all, I misconstrued your question as being a haunting, guilty thought I felt I needed to explore in the future, which you somehow had access to. It relates to Sandewall's Software Individuals paradigm and the current practice of chatbots.
Er, I meant to just point out that LMs will happily babble authoritatively about having done something (like using a one-time pad) while producing nonsense.