Honestly, I couldn't care less if some politicians aren't on the FediVerse.
"Oh, you used to lead a country? That's cute."
Y'know, the person who wrote the protocol which begat the world's largest computer network until the mid-1990s is on here.
The person who wrote the reference DNS server is on here too!
Last I checked, DNS gets queried around a billion times an hour by more or less every computer on the Internet.
That isn't all those people did either.
But it would get kind of crazy to name all their individual technical achievements without name-dropping them.
As it is, if you know, you know.
Their follower counts?
Let's just say the individual who was employee #2 at Phoenix, the company that reverse-engineered the IBM PC BIOS and made "clone" PCs possible (IIRC Hewlett-Packard came to them with some hardware that they blessed with their ЯƎ-ed BIOS, as just one example), has fewer than 1000 followers.
That same individual also helped implement DHCP, so you don't need to sit there tweaking your IP, netmask, gateway, and DNS settings every time you change networks? 1.3K followers.
If you know, you know.
If you care about politicians being on here?
Let me know when ANY politician has contributed to anything as widespread as the work of those two, who are already here and will presumably keep toiling in the shadows, doing neat things and releasing code under free and open source licenses, as they have done for decades.
No lobbyists needed.
No billionaires bribing anyone for votes.
I don't think the FediVerse's priorities are off.
I think everyone who is so ignorant of who really co-creates technology, yet claims the FediVerse/Mastodon should be more like Twitter/BlueSky/whatever the heck, is just showing that they have a lot to learn.
We all do!
Good thing ignorance can be conquered! :)
Shipley held a contentious meeting on Monday with scores of #opinion section staffers, who posed tough questions to the #editorial page chief, including appeals for #Bezos to address them.
As recently as last week, according to a person present, Shipley said he sought to talk Bezos out of his decision. Shipley added, “I failed.”
@feld
Part of why I laid out the reasons that way is so you can easily have your spreadsheet sort them and take or leave what you want. The intent is that you can easily pick out only specific kinds and go after them.
As for the ones that do not deserve to be on this list, please tell me which ones, and I will remove them immediately and re-check them at a later date. You can do this in a DM if you want.
Same goes to anyone else who thinks a server is on here mistakenly.
I despise talking about this, but as this is potentially a safety concern for my friends...
X/Twitter is rolling out a new feature which allows people you've blocked to see all your posts again. They can't reply to them, but they can read them all. It's not applied to everyone yet, but some folk are saying it's come into effect for them.
As you were. This will be my one and only twitter related post.
9/ Pagliery:
Michael Cohen is back on the stand. He's in a dark (blue? gray?) suit and a sky blue tie.
He keeps a vague, blank expression. He eyeballs each individual juror as they file in, and he nods at one of them.
As they sit down, he takes a deep breath and sighs.
We begin.
Yeah, it’s just a proof of concept. And because I’m down this rabbit hole, I should elaborate that the real bottleneck for this stuff is what’s called the context limit: how much text the model can comprehend at a given moment, all at once. People in these AI companies are still obsessing over training data quantity, but we’ve legitimately hit diminishing returns on that. We absolutely have not with the context limit.
Right now GPT-4 is at 128K tokens and Claude 2.1 is at 200K tokens. What’s stopping these LLMs from being an automated GM, or an effective lawyer, is that you can’t make requests like: “Taking on board these five megabytes of TTRPG rules text, plus everything on their official forums that amounts to the totality of this gameline, plus everything that took place in your game and all of your past rulings in it, what happens next?”
Or requests like: “Taking on board every applicable law in the City of Denver, including Federal, State, and Local laws, is it illegal to do X?”
What’s stopping those kinds of requests is that you can’t fit all that information into 200K tokens. I say megabytes of text instead of the gigabytes the PDFs consume, because the text form of these documents is much smaller. To put it into perspective, the PDF of the Mage: the Awakening 2e rulebook is 37MB, the text file I extract from it is 1.5MB, and the number of tokens it uses up is about 350K.
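If you want to sanity-check numbers like that yourself, here's a minimal Python sketch using OpenAI's tiktoken library; the file name is hypothetical, and the exact count depends on which tokenizer your model uses.

import tiktoken  # pip install tiktoken

# cl100k_base is the encoding used by GPT-4-era models.
encoding = tiktoken.get_encoding("cl100k_base")

# "mage_2e.txt" is a hypothetical path to the extracted rulebook text.
with open("mage_2e.txt", "r", encoding="utf-8") as f:
    text = f.read()

tokens = encoding.encode(text)
print(f"{len(text)} characters -> {len(tokens)} tokens")

For typical English prose this lands around 4 characters per token, which is how roughly 1.5MB of text comes out near 350K tokens.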
But in time, we’re going to have humongous context windows, for multiple reasons. One, the implementations themselves will be made to use less VRAM per token. Two, more VRAM will become available. In our lifetimes, we might see some ridonculous shit like context limits that can fit literally all of Wikipedia directly into them.
As an aside, if you want to see some of the copes OpenAI offers for the limited context window, read about embeddings.
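The gist of that workaround, sketched very roughly below: instead of stuffing the whole corpus into the context window, you embed chunks of it once, then at query time pull only the most similar chunks into the prompt. The chunk size, model name, and file path here are my assumptions, not anyone's official recipe.

import numpy as np
from openai import OpenAI  # pip install openai numpy

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Hypothetical extracted rulebook text, split into fixed-size chunks.
rulebook_text = open("mage_2e.txt", encoding="utf-8").read()
chunks = [rulebook_text[i:i + 2000] for i in range(0, len(rulebook_text), 2000)]
chunk_vecs = embed(chunks)

query_vec = embed(["What happens when a mage botches a roll?"])[0]
# Dot product works as cosine similarity because these vectors come back unit-length.
scores = chunk_vecs @ query_vec
top_chunks = [chunks[i] for i in np.argsort(scores)[-5:]]
# top_chunks is the handful of passages you'd actually paste into the 200K window.

It works, but it's retrieval, not comprehension: the model only ever sees the slices the similarity search happens to surface, which is exactly why it reads as a cope for a small context window rather than a replacement for a big one.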