@sun Linux detergent doesn't oxidize your CPU, unlike Rust in Linux - so I wasn't reminded of such detergent in the slightest, and nobody else would be misled either.
Meanwhile, many people are misled immensely by the phrase "the Linux kernel", as few people are there to tell them that Linux is only a kernel; instead they see GNU being referred to as "Linux" a lot, so they figure the phrase has the same meaning as it does for OSes whose kernel doesn't have a particular name, for example "the NT kernel", "the OpenBSD kernel" or "the macOS kernel" (surprisingly, that kernel actually has a not-well-known name - XNU). The assumption they are steered towards is that the OS is Linux and the kernel doesn't have a particular name - if only they knew that Linux is only a kernel and that the OS should be called by its own name.
@leyonhjelm Software doesn't have a "foundation" like a building, as it doesn't face physical limitations - you can build the software equivalent of a huge castle on top of a single instruction.
@phoenix Yes, there is the lifestyle of Linux developers, where you program a proprietary kernel with proprietary software and, surprise surprise, that kernel fails to respect the users' freedom even though it's meant to be GPLv2-only.
(Bonus points for complaining about the wholly free GNU Linux-libre version, which does respect the users' freedom and is actually legal to distribute without instantly losing your license, as it actually complies with the terms of the GPLv2.)
@0 I believed you were referring to my recent comment about wget being usable as a web browser.
My point several months ago was that with bash scripting and wget's --recursive functionality, you could use wget as a web scraper, extract the text and metadata from the pages, shove that into a database and then do search operations on that database - which is not conventional, but is entirely possible to pull off just with GNU software, or with the help of software that is typically available for download on GNU/Linux distributions.
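Something along these lines, for example - a rough sketch, assuming wget and sqlite3 are installed; the URL, file names and table layout here are all made up for illustration:

```
#!/bin/sh
# Rough sketch: crawl, index, search - nothing here is production quality.

# 1. Crawl: mirror a couple of levels of a site into ./crawl/
wget --recursive --level=2 --no-parent --adjust-extension \
     --directory-prefix=crawl https://example.org/

# 2. Index: strip tags crudely with sed and shove the text into an
#    SQLite FTS5 (full-text search) table.
sqlite3 index.db 'CREATE VIRTUAL TABLE IF NOT EXISTS pages USING fts5(url, body);'
find crawl -name '*.html' | while read -r f; do
    # Tag stripping via regex is crude; lynx -dump or similar does it better.
    body=$(sed 's/<[^>]*>//g' "$f" | tr -s '[:space:]' ' ' | sed "s/'/''/g")
    sqlite3 index.db "INSERT INTO pages(url, body) VALUES ('$f', '$body');"
done

# 3. Search the index:
sqlite3 index.db "SELECT url FROM pages WHERE pages MATCH 'your query';"
```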
The hardest part of running a search engine is not the crawling, indexing or searching functionality; it's having the bandwidth and enough IPs to crawl an appreciable amount of the internet, the storage space to hold all the metadata, and enough CPU power to process many text search queries rapidly.
There are of course plenty of search engines on other people's computers that work fine without JavaScript, like searx, 4get, DuckDuckGo and Startpage.
One of my regrets is that I've never been capable of bare-metal programming for any useful purpose. Professionally I'm doing webdev shit and arithmetic in SQL (don't ask).
@KuteboiCoder @sun @amerika SQL is an extremely capable calculator (but don't ask me about the time I had to write an approximation to a transcendental function using its Taylor series, because the SQL engine didn't have the function built in).
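For flavour, here's what that kind of query can look like - a minimal sketch, not the actual one from work, since the engine and function from the story aren't named. This one uses SQLite's recursive CTEs to approximate exp(x) with its Taylor series:

```
sqlite3 <<'SQL'
-- Approximate exp(x) as the sum of x^n / n!.
-- Each term is the previous term times x / (n + 1),
-- so no pow() or factorial function is needed.
WITH RECURSIVE terms(n, term) AS (
    SELECT 0, 1.0                          -- x^0 / 0! = 1
    UNION ALL
    SELECT n + 1, term * 0.5 / (n + 1)     -- x = 0.5, hard-coded for the demo
    FROM terms
    WHERE n < 20                           -- 20 terms is plenty for small x
)
SELECT sum(term) AS exp_of_half FROM terms; -- prints ~1.6487, i.e. e^0.5
SQL
```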