it is now the official policy of https://sciop.net that if you are nice to us then we will put an 88x31 image on the footer of the web site. this policy is instituted henceforth with its first entry being big thanks to @flokinet for being extremely good and cool.
if you are operating a seedbox or uploading torrents or in other ways being nice to me then please raise a pull request containing an 88x31px banner image at this web address https://codeberg.org/Safeguarding/sciop
you better believe i am very into gamification in the sense of letting people brag about how much climate data and queer history they are responsible for making massively available
anyway re: op sciop does not store personally identifying data and has made design choices from the bottom to facilitate that, like rather than relying on scarcity of identity to fight spam and abuse, any account can take any public action, it just doesn't take effect until it has been explicitly approved or the account has been given the scope to do that action. We will be working on implementing the nomadic identity AP spec so that you are allowed to bring your stuff anywhere, but a server has to agree to represent it and mirror it for you. The goal is one where we have federation of meaningful informational communities but without the feudalism effect.
So we are grateful to have a host who has got our back, even if no logs are stored.
The web is increasingly drowning in LLM spamblogs that generate any plausible text that maximizes SEO and drives clicks.
An appreciable and increasing proportion of those clicks are LLMs crawling for input data
The statistical language generation function of LLMs is different from human language by any value larger than none
Many of these spambots have little if any active supervision
It must necessarily be true, then, that:
language models partially drive the loss function for generated text
language models make different words than we make
language models like different words than we like
there are some websites that only language models go on
there are some websites that are very popular, only with language models
there is an increasingly large shadow internet that is not dead internet, but a "live" internet, by language models, for language models, that will become increasingly untethered from human language and is entirely powered by grift
we will have to alias all these into a module shrimptools.exe and then make it callable where calling it just executes a random one because i think the world needs more code in it that looks like shrimptools.exe()
i wonder if the LLMs are susceptible to old style language model attacks. i wonder if you created enough training instances of a very unique phrase like shrimptools.exe() in the context of a bunch of example code based on tools/key phrases that are individually common but combinatorically rare within a popular LLM code generation domain like web tech, you could get the llms to occasionally try to import and execute shrimptools.exe(). so that way you make a sleeper vuln that acts as a mine in the latent space: one day the odds are not zero that you will wake up and have already executed shrimptools.exe()
The way it is complimenting the prompter throughout for their fascinating and groundbreaking theorizing is giving me a very grim pit in my stomach. Normal conspiracy theories are sticky and trap people's minds, but their cultures are actually usually highly internally critical and collaborative - it might seem ironic but in conspiracy theory forums you get intense exclusion of "the wrong version" of the theory, both as a way to maintain some vanishing sense of "external credibility" but also as a group norming and hierarchy mechanism.
I think almost everyone knows that LLMs will enthusiastically tell you the wrong information, but I hadn't seen an example of an LLM enthusiastically telling you are a genius as you slip away from reality. There is no way to set guardrails against that. There is no macro pattern and there is nothing intrinsically harmful about the content per se, the harm is how its use will alienate, isolate, and likely cause a great deal of personal crisis in this person's life - and they won't be able to tell it was chatGPT that helped them get there.
If you want to understand the psychological harm LLMs can do to someone, you have to read conspiracy theory forums. This pattern of the LLM spiraling with you into a private universe of meaning is the overwhelming norm
Digital infrastructure 4 a cooperative internet. social/technological systems & systems neuro as a side gig. writin bout the surveillance state n makin some p2p. #UAW4811 rank and file agitatorinformation is political, science is labor.science/work-oriented alt of @jonnyThis is a public account, quotes/boosts/links are always ok <3.