> why did Printables not check with the copyright holder before doing the takedown? isn't that a fairly fundamental part of copyright enforcement?
stupid laws and the resulting risk avoidance.
if someone makes a copyright claim to a platform and they do not action it in line with the schedule prescribed by the DMCA, the platform operator is open to being sued by the claimant. now, in *theory* the process is "claim comes in, is vetted by a legal professional, and is actioned". but...
@millihertz ... the legal vetting is prohibitively expensive and onerous for any public platform that accepts user submitted content.
so, as a risk avoidance measure, they treat all claims as valid and process them automatically as long as they contain the minimum necessary information as defined by law, *regardless* of whether or not it is an obviously fraudulent or perjurious claim, because if they reject a single valid claim then they can be sued. so they accept everything.
@millihertz in the case of a fraudulent claim, the onus is then put on the copyright holder to file a counter claim, which then shifts the burden of legal risk away from the platform and onto the counter claimant, since the counter claim is filed under penalty of perjury.
@ryanc ah, looks like it's all Node? that's unfortunate. I don't really want to add that to my toolchain for this blog. I'm trying to keep it ADHD friendly so I can just write and go, no maintenance or services. the entirety of the stack for this right now is one C# console app calling out to pandoc and then SCP'ing the files to the server. I write the blog posts in Markdown (using whatever editor I feel like), run the tool, and the post goes up. super low friction.
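for the curious, the whole pipeline is roughly this shape - a minimal sketch only, with placeholder paths, hostname, and pandoc flags rather than the real setup:

```csharp
using System;
using System.Diagnostics;
using System.IO;

// placeholder directories and host: posts/, out/, user@example.com
Directory.CreateDirectory("out");

foreach (var md in Directory.GetFiles("posts", "*.md"))
{
    var html = Path.Combine("out", Path.GetFileNameWithoutExtension(md) + ".html");
    // markdown -> standalone HTML via pandoc
    Run("pandoc", $"--standalone --from markdown --to html5 --output \"{html}\" \"{md}\"");
}

// push the generated files to the web server in one go
Run("scp", "-r out/. user@example.com:/var/www/blog/");

static void Run(string exe, string args)
{
    using var p = Process.Start(new ProcessStartInfo(exe, args) { UseShellExecute = false })
                  ?? throw new Exception($"failed to start {exe}");
    p.WaitForExit();
    if (p.ExitCode != 0) throw new Exception($"{exe} failed with exit code {p.ExitCode}");
}
```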
@ryanc part of my requirement here is to keep disk usage small since I'm running a fairly small VPS and would like my blog to stay there and not cost me a lot of money.
re: that first service, nope, do they offer it in a standalone form that I can invoke on Windows?
TIL you can closely estimate the quality factor a JPEG was encoded with by looking at the AC table and doing some fairly simple maths.
as part of my blog generator I'm optimising images before publishing, so if I look at a JPEG and can see that it was already compressed below Q=85, I probably won't see enough of a size saving by re-encoding it down to Q=80 to justify the extra perceptual losses of a repeat encoding.
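the estimate itself is roughly the inverse of libjpeg's IJG quality scaling. a minimal sketch, assuming the encoder used the standard Annex K luminance table (most do), and averaging over the whole luma table rather than just the AC terms; the file path is a placeholder:

```csharp
using System;
using System.IO;
using System.Linq;

// standard luminance quantisation table from the JPEG spec (ITU-T T.81, Annex K)
int[] baseLuma =
{
    16, 11, 10, 16,  24,  40,  51,  61,
    12, 12, 14, 19,  26,  58,  60,  55,
    14, 13, 16, 24,  40,  57,  69,  56,
    14, 17, 22, 29,  51,  87,  80,  62,
    18, 22, 37, 56,  68, 109, 103,  77,
    24, 35, 55, 64,  81, 104, 113,  92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103,  99,
};

byte[] jpeg = File.ReadAllBytes("photo.jpg");   // placeholder path
int[] table = null;

// walk the JPEG segments looking for the first DQT (0xFFDB) table with id 0 (luma)
for (int i = 2; i + 4 < jpeg.Length && table == null;)
{
    if (jpeg[i] != 0xFF) break;                 // lost sync; give up
    byte marker = jpeg[i + 1];
    int len = (jpeg[i + 2] << 8) | jpeg[i + 3]; // segment length incl. these two bytes
    if (marker == 0xDB)
    {
        int p = i + 4, end = i + 2 + len;
        while (p < end)
        {
            int precision = jpeg[p] >> 4;       // 0 = 8-bit entries, 1 = 16-bit
            int id = jpeg[p] & 0x0F;
            p++;
            int[] q = new int[64];
            for (int k = 0; k < 64; k++)
            {
                q[k] = precision == 0 ? jpeg[p] : (jpeg[p] << 8) | jpeg[p + 1];
                p += precision == 0 ? 1 : 2;
            }
            if (id == 0) { table = q; break; }
        }
    }
    if (marker == 0xDA) break;                  // start of scan: no more header segments
    i += 2 + len;
}

if (table == null) { Console.WriteLine("no luma quant table found"); return; }

// invert libjpeg's scaling: S = 5000/Q for Q < 50, otherwise S = 200 - 2Q
double scale = Enumerable.Range(0, 64).Average(k => 100.0 * table[k] / baseLuma[k]);
double quality = scale <= 100 ? (200 - scale) / 2 : 5000 / scale;
Console.WriteLine($"estimated quality ≈ {quality:F0}");
```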
for PNG images I attempt an indexed colour encoding, subtractively combine it with the original image data, threshold the luma and chroma deltas to get a per-pixel "different/same" value, sum those, and if less than 1% of the image is affected then I keep it. if not I fall back to max compression RGB888. (I don't use transparency in blog images)
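the keep-or-discard check is roughly this shape. a sketch only: it assumes the indexed re-encode has already been produced (by whatever quantiser) and decoded back to a bitmap, and the thresholds are illustrative rather than the tool's real values:

```csharp
using System;
using System.Drawing;   // System.Drawing.Common; Windows-only, which is fine here

// hypothetical inputs: the original screenshot and the indexed re-encode
using var original = new Bitmap("original.png");
using var indexed  = new Bitmap("indexed.png");

Console.WriteLine(IndexedIsCloseEnough(original, indexed)
    ? "keep the indexed version"
    : "fall back to max-compression RGB888");

static bool IndexedIsCloseEnough(Bitmap a, Bitmap b,
                                 double lumaThresh = 2.0,      // illustrative thresholds,
                                 double chromaThresh = 4.0,    // not the real values
                                 double maxChangedFraction = 0.01)
{
    int changed = 0;
    for (int y = 0; y < a.Height; y++)
    for (int x = 0; x < a.Width; x++)
    {
        Color pa = a.GetPixel(x, y);
        Color pb = b.GetPixel(x, y);

        // BT.601-style luma plus unscaled B-Y / R-Y chroma differences,
        // so "different" roughly tracks perception rather than raw RGB deltas
        double ya = 0.299 * pa.R + 0.587 * pa.G + 0.114 * pa.B;
        double yb = 0.299 * pb.R + 0.587 * pb.G + 0.114 * pb.B;

        if (Math.Abs(ya - yb) > lumaThresh
            || Math.Abs((pa.B - ya) - (pb.B - yb)) > chromaThresh
            || Math.Abs((pa.R - ya) - (pb.R - yb)) > chromaThresh)
        {
            changed++;
        }
    }

    return (double)changed / (a.Width * a.Height) < maxChangedFraction;
}
```

a real implementation would use LockBits rather than GetPixel for speed, but the logic is the same.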
on average this is saving me about 50-60% over whatever random PNGs and JPEGs I'm passing in. most of the savings are coming from PNG optimisation on screenshots.
anyone got a copy of that "I am the keyboard I have an important message... E" processor interrupt meme with the pissed off looking bird laying around? I just remembered it and I can't find a copy.
and this is a 100% known problem. and it's solved using active current balancing on the GPU side. you put shunt resistors in series with the lines, measure the current, and actively shift the current draw in realtime to keep everything balanced. (you can do this with an ideal diode-OR controller, or by separating the high-side feeds into separate sets of VRM phases)
nVidia *has* done this on some prior cards. but they reduced the shunt count here, running stuff in parallel, and this is the result.
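for the non-EEs: the measurement side is just Ohm's law across a small known resistance, with the goal of an equal share per wire. the numbers here are illustrative, not any card's actual shunt values:

```latex
I_i = \frac{V_{\mathrm{sense},i}}{R_{\mathrm{shunt}}}
\qquad \text{e.g. } \frac{4\,\mathrm{mV}}{0.5\,\mathrm{m\Omega}} = 8\,\mathrm{A},
\qquad \text{target: } I_i \approx \frac{I_{\mathrm{total}}}{6} \text{ per 12 V wire}
```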
one note on the der8auer video: he mentions that the power headroom on the 5090 is just 15%, due to the high power draw and only a single connector being used, and states his opinion that they should've gone for two connectors to offer more headroom.
I agree with this in general (15% cuts it too fine), but it's important to contextualise the problem here: assuming perfect sharing, two connectors would give you a headroom of 130%.
the magnitude of current imbalance within the cable is 350%.
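making that arithmetic explicit (taking the 15% figure as given):

```latex
h_1 = \frac{P_{\mathrm{rated}}}{P_{\mathrm{draw}}} - 1 = 0.15
\quad\Rightarrow\quad
h_2 = \frac{2\,P_{\mathrm{rated}}}{P_{\mathrm{draw}}} - 1 = 2(1 + h_1) - 1 = 1.30
```

i.e. 130% headroom even with two perfectly-shared connectors - which a 350% imbalance still blows straight past.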
@ignaloidas @phenidone I have my suspicions that there is a sneaky runaway effect from thermal expansion increasing contact pressure, in part due to the ridiculously high current density at the connector, and that cards with proper current balancing are keeping it in check. but I can't test that theory without hands on the gear, and I don't have the cash right now (not even for buying the same connectors and some cables, unfortunately)
@ignaloidas @phenidone ultimately though I'm firmly of the belief that when the impact of the risk is burning people and their expensive equipment, you need secondary safety controls (active current balancing, lockout when sufficiently bad connections occur) to account for bad connectors or user error. 12VHPWR has problems, but the cards should be protecting against these dangerous failure cases regardless, especially given that it's a known problem.
@ignaloidas @phenidone the 3090 and prior cards manage this fine by balancing across split high-side power domains - if one leg is compromised it won't boot, and you don't get massive imbalances. the Asus ROG 4-series cards added per-line shunts to detect bad connections (they were required to use nvidia's single combined high-side design, so this was the next best thing) and warn the user / refuse to power on to protect against the issue. we have the capability to be safe. nvidia just didn't do it.
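to make the protection concrete, the kind of check being asked for looks roughly like this - pseudo-firmware sketched in C# for readability, with six lines to match 12VHPWR's 12 V wires and illustrative thresholds rather than any vendor's actual values:

```csharp
using System;
using System.Linq;

const int Lines = 6;                    // 12VHPWR has six 12 V supply wires
const double MaxLineCurrentA = 9.5;     // ballpark per-contact rating, illustrative
const double MaxImbalanceRatio = 1.5;   // worst line vs. ideal equal share, illustrative

// in real hardware these come from ADCs across per-line shunts (I = Vsense / Rshunt);
// here they're placeholder readings
double[] lineCurrentsA = { 7.9, 8.1, 8.0, 7.8, 8.2, 8.0 };

double total = lineCurrentsA.Sum();
double idealShare = total / Lines;

bool overCurrent = lineCurrentsA.Any(i => i > MaxLineCurrentA);
bool imbalanced  = lineCurrentsA.Any(i => i > idealShare * MaxImbalanceRatio);
// a near-dead line only means anything once the card is actually under load
bool deadLine    = total > 10 && lineCurrentsA.Any(i => i < idealShare * 0.25);

if (overCurrent || imbalanced || deadLine)
    Console.WriteLine("FAULT: bad connection or imbalance - warn the user, throttle, or refuse to power up");
else
    Console.WriteLine($"OK: {total:F1} A total, worst line {lineCurrentsA.Max():F1} A");
```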