@jonny@forestine "prediction is not primarily a technological means for knowing future outcomes, but a social model for extracting and concentrating discretionary power" is such a good and succinct formulation
[in the faculty meeting] listen, this new technology is here to stay, and it's useless to fight against it. if we want to prepare our students for the jobs of the future, we *must* teach them how to (responsibly and ethically) purchase and consume up to five Taco BellⓇ Naked Chicken Chalupas™ each day
and now, to AT LAST silence your persistent clamoring, a brief thread about game boy tones and tuning. ahem. the frequency of each of the three tone-generating oscillators on the game boy is set with an 11-bit value (eight bits in one memory-mapped register, three in the other). that means there are 2048 possible frequencies. here's a chart with a scatterplot of the frequency of each of the GB's 2048 oscillator values, along with a scatterplot of the frequencies of all 128 possible midi notes
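(if you want to poke at the numbers yourself, here's a rough python sketch of the data behind that chart. it assumes the pulse-channel formula from the pan docs, f = 131072 / (2048 - x), and standard A440 equal temperament for the midi notes; the actual chart may have been generated differently)

```python
# sketch: the two sets of frequencies being compared
# GB pulse channels (per the Pan Docs): f = 131072 / (2048 - x),
# where x is the 11-bit period value (0..2047)
gb_freqs = [131072 / (2048 - x) for x in range(2048)]

# standard MIDI note frequencies: 12-tone equal temperament, A4 (note 69) = 440 Hz
midi_freqs = [440 * 2 ** ((n - 69) / 12) for n in range(128)]

print(f"GB:   {min(gb_freqs):.2f} Hz to {max(gb_freqs):.2f} Hz ({len(gb_freqs)} values)")
print(f"MIDI: {min(midi_freqs):.2f} Hz to {max(midi_freqs):.2f} Hz ({len(midi_freqs)} notes)")
```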
this chart shows some weirdnesses, e.g., that the bottom 35 MIDI notes are too low in frequency for the game boy to generate, and some of the game boy's frequencies are above MIDI range. but for the most part, it looks like the curve of game boy frequencies lines up with and is in "tune" with the "correct" frequencies, which is kind of a cool trick (i'm using MIDI note frequencies as a proxy here for "correct" frequencies, which I know is debatable!)
if we zoom in on the values in the C-3 to C-7 range (two octaves below middle C + two octaves above), you see that the difference in frequency there is pretty minimal (note that this chart is using a linear scale on the Y axis, not log like the previous two). above this range, I think the game boy will sound significantly out of tune with other MIDI-tuned instruments. makes me wonder if some of what we hear as the distinctive "sound" of authentic chiptunes comes from this aberration in tuning!
however! here's a chart showing the difference in frequency between each midi note and the *nearest* tone the game boy can generate. you can see that there's actually a fairly small range of midi notes that the game boy can approximate without sounding "out of tune" (depending on what you consider to be out of tune. no one except whiplash guy is going to notice 2–3 Hz, but 100 Hz obviously isn't going to sound right)
and only now is it occurring to me that i should have graphed the differences as percentages, rather than in absolute hertz. oh well. anyway, that's all, i just thought it was interesting to look at these differences! more on the math and underlying hardware here https://gbdev.io/pandocs/Audio_Registers.html and i used this table http://www.devrs.com/gb/files/sndtab.html to check my work. the end!
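(for anyone who wants to redo that last chart, here's a minimal python sketch of the comparison, under the same assumptions as above: the pan docs pulse-channel formula and A440 equal temperament. i've added cents alongside hertz, since cents are the usual pitch-relative unit and get at the percentage idea)

```python
import math

# sketch: how far each MIDI note is from the nearest tone the GB can produce
gb_freqs = [131072 / (2048 - x) for x in range(2048)]   # pulse-channel frequencies
midi_freqs = [440 * 2 ** ((n - 69) / 12) for n in range(128)]

for n, target in enumerate(midi_freqs):
    nearest = min(gb_freqs, key=lambda f: abs(f - target))
    err_hz = nearest - target
    # cents (100 per semitone) is pitch-relative, so it's comparable across
    # the whole range, unlike the raw hertz difference
    err_cents = 1200 * math.log2(nearest / target)
    print(f"note {n:3d}: {target:9.2f} Hz -> {nearest:9.2f} Hz "
          f"({err_hz:+8.2f} Hz, {err_cents:+7.1f} cents)")
```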
here's the reason I was thinking about all of this. now that I can send values back and forth between game boy software and the microcontroller on my custom game boy cart, i did what anyone in my situation would do: i made a game boy photoresistor theremin
@researchfairy i think a fundamental problem is that computers (especially tablets/phones) nowadays are *designed* to "de-skill," because it's much more difficult to monetize users who, like, actually know how their computers work and have the expectation that they should be able to independently control a computer's function. the culture surrounding computation compounds the problem—i have students who don't believe they CAN learn how computers work, because they're not "that kind of person"
the paper really should be called "People who don't give a shit one way or another react ambivalently to output of billion-dollar machine designed by hucksters to trick people into thinking its outputs are plausible exemplars of textual artifacts in a specified genre" (the study participants were crowd-sourced online and paid less than a living wage)
it falls prey to every fallacy of AI creativity research (and AI research in general), e.g., that "AI" is a monolithic technology, that "AI" is independent of human intention, that "AI"'s telos is to produce artifacts "indistinguishable" from "humans," that the ability to "replicate" certain genres of art (especially genres positioned as highly "creative," like poetry) is a benchmark along that telos, etc.
anybody out there have resources on writing reliable, modular, memory-safe assembly (to the extent that this is even possible)? (i'm especially interested in stuff related to 8-bit retro programming, but will settle for anything relevant)
what's especially infuriating is that this outcome is *totally obvious* to anyone who knows the first thing about language, i.e., that even the tiniest atom of language encodes social context, so of course any machine learning model based on language becomes a social category detector (see Rachael Tatman's "What I Won't Build" https://slideslive.com/38929585/what-i-wont-build) & any model put to use in the world becomes a social category *enforcer* (see literally any paper in the history of the study of algorithmic bias)
i got so angry after reading this paper on LLMs and African American English that i literally had to stand up and go walk around the block to cool off https://www.nature.com/articles/s41586-024-07856-5 it's a very compelling paper, with a super clever methodology, and (i'm paraphrasing/extrapolating) shows that "alignment" strategies like RLHF only work to ensure that it never seems like a white person is saying something overtly racist, rather than addressing the actual prejudice baked into the model
and what's ADDITIONALLY infuriating is some engineer or product team at openai (or whatever) is going to read this paper and think they can "fix" the problem by applying human feedback alignment blalala to this particular situation (or even this particular corpus!), instead of recognizing that there are an infinite number of ways (both overt and subtle) that language can enact prejudice, and the system they've made necessarily amplifies that prejudice