NextCloud migration was successfully completed yesterday! Two things made this possible: NextCloud's amazing administrator dashboard, with its active tips to fix problems (including telling you what command to run) and its even-more-amazing documentation; and ChatGPT, which provided a correct solution to a database problem I had personally never encountered before and for which there was only 1, and I mean 1, Google search result.
@mgrahamwoods I suspect there are other organizations out there that need your generosity more (food banks, shelters, etc.). But if you have the spare change, a little to Wikipedia never hurts.
I think the key idea is that the playbook - against Wikipedia, against science standards in classrooms, against scientific input to policy-making - is the same.
Science, as an institution, has been under this attack in a myriad of ways in my lifetime. Massive human failures are generally preceded by a concerted effort to undermine experts and expertise. This is the path many democracies are on now, and they have proven stubbornly resistant to altering that path.
In the original context, this was about attacks on Wikipedia. But this playbook has been applied in a myriad of large-scale ways in just the last 40 years: attacking the teaching of biological evolution; undermining efforts to address acid rain and industrial pollution; denying climate science and the reality of human-induced climate shifts. I could go on.
The bottom line: the playbook is the same. Are we wise enough, as a species, to see the road ahead and thwart it?
I really enjoyed the recent “Citation Needed” newsletter. In particular: “First come the claims of bias, supported by cherry-picked or misrepresented examples. Then the demands for ‘balance’, which in practice mean giving equal weight to fringe views or demonstrably false claims. When these demands are refused, the attacks shift to the platform's legitimacy itself: its funding, its governance, its leaders, and its very right to exist as an independent entity.”
Today is a slow, steady march of upgrades from NextCloud 25. You have to go one major release at a time. So the morning has been 25 -> 26 -> 27 and now 27 -> 28.
This also comes with a steady march of PHP versions. I've transitioned from 7.4 -> 8.1 -> 8.2. I am excited to finally hit 8.3, which I think will come with the NextCloud upgrade from 28 -> 29.
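For the curious, each of those steps boils down to something like the sketch below. This is a minimal illustration only, assuming a stock Debian/Ubuntu-style install at /var/www/nextcloud running as www-data (adjust for your own setup); it is not my exact procedure.

```python
# A minimal sketch of one "major release" step. The install path and
# web user below are assumptions for a typical Debian/Ubuntu layout,
# not necessarily my actual setup.
import subprocess

NEXTCLOUD_DIR = "/var/www/nextcloud"  # assumed install location
WEB_USER = "www-data"                 # assumed web server user

def php(*args):
    """Run a PHP command inside the NextCloud directory as the web user."""
    subprocess.run(["sudo", "-u", WEB_USER, "php", *args],
                   cwd=NEXTCLOUD_DIR, check=True)

# The command-line updater only moves to the next major release in the
# update channel, which is why 25 -> 28 takes several passes like this.
php("updater/updater.phar", "--no-interaction")

# Finish the database migration and leave maintenance mode.
php("occ", "upgrade")
php("occ", "maintenance:mode", "--off")
```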
Finally ... FINALLY ... migrating ownCloud to NextCloud.
I had no idea how shit ownCloud is compared to #NextCloud ... and I loved ownCloud.
Still migrating the database structure to the NextCloud framework, but all database data was already successfully migrated and verified with some basic tests.
You have to migrate from ownCloud 10.3 (I had 10.5) to NextCloud 25, so next I look forward to moving up to a current NextCloud version.
The steady OS upgrade march on my home systems continues! I have now successfully upgraded 4/7 (I undercounted by 1 machine in my previous post ... too many machines!), with only minor stumbles on each one. Nothing that rendered anything unbootable! (so far .... )
I am relieved to have shut down my account on X. I was sad when Twitter died, and what replaced it was not worth it. I have not posted on X in any serious way in well over a year. It was time for two realities to match.
I was always happier in the open social web, and I remain happy to be even more fully committed to it now!
#SNOLAB is hiring for two Research Scientist positions to join the groundbreaking science taking place 2 km underground! The ideal candidates will have a concentration in either #LowBackgroundScience or #QuantumScience.
Apply by January 31st, 2025. For more information, view the full job posting:
It is always terrifying to do an Ubuntu OS upgrade, even when you are pretty sure that everything will work out OK.
That said, 2/3 of tonight's OS upgrades went off with no/few hitches. 1/3 failed before it even started because a package is on the "update deny" list and I need to sort out why. That leaves 3 machines still to go after these. So ... 2/6 complete!
408 is the HTTP code for "Request Timeout", which is why I lean toward the web server, not PeerTube, being the problem. However, I am no web socket or server guru, so where to start tracking the cause is a little unclear to me. The art of problem solving is asking the right question, and I haven't figured out the question.
I have a thought now, though: check the URL returning the timeout and see what my server settings are for that specific path....
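Something like the sketch below is what I have in mind. It is a rough probe only: the base URL is a placeholder and the path is a hypothetical guess at the suspect endpoint, not the real one. The point is just to see whether the 408 tracks upload size/duration at the web server layer rather than anything PeerTube itself does.

```python
# Rough probe: POST increasingly large payloads at the suspect path and
# see where the server starts answering 408. The URL is a placeholder
# (not my instance) and the path is a hypothetical guess -- this only
# exercises the web server / proxy layer, not PeerTube's logic.
import time
import requests

BASE = "https://example.invalid"           # placeholder, not my instance
PATH = "/api/v1/runners/jobs/success"      # hypothetical suspect path

# Sizes in MB chosen to bracket the ~500M point where failures showed up.
for size_mb in (100, 250, 500, 750):
    payload = b"\0" * (size_mb * 1024 * 1024)
    start = time.monotonic()
    try:
        r = requests.post(BASE + PATH, data=payload, timeout=600)
        print(f"{size_mb}M -> HTTP {r.status_code} "
              f"after {time.monotonic() - start:.1f}s")
    except requests.exceptions.RequestException as exc:
        print(f"{size_mb}M -> failed after "
              f"{time.monotonic() - start:.1f}s: {exc}")
```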
Note that it could be a problem for files smaller than 500M as well. That just happens to be the smallest file so far where I have noticed the problem.
Sometimes reuploading the original file and rerunning transcoding succeeds, but it is NOT consistent. It is also not runner-dependent. The same runner can be fine on one job and fail on the next. So weird.
Been having a lot of problems with my #PeerTube instance in the last week. Have not tracked down the issue, though I suspect something in the web server/socket setup that handles transactions between the server and runners on other systems.
The symptom: after transcoding, a runner reports that it "expected 204, got 408" when transacting via HTTP with the main server. It seems time/video-size related, happening with files 500M in size or larger. Not tied to an obvious file size limit...
Spending some time today catching up on the "Dot Social" #podcast via #Kasts. Really enjoying conversations with @evan and @Gargron. Looking forward to the ones with Ryan Barrett and @molly0xfff.
Astrophysicist and #ParticlePhysicist. #SNOLAB Research Group Manager, #Professor at Queen's University. I live and work in #Sudbury, #Ontario, #Canada. I study #neutrinos and hunt for #darkmatter. I study #physics. I am an #author #coder #runner.
2025 Breakthrough Prize in Fundamental Physics co-laureate.
Banner image: the lid that sits atop the tank that will soon hold the PICO-500 superheated bubble chamber dark matter experiment.