It took a full calendar year of episodes of excruciating, life-disturbing and very classic nerve pain to get a neurology referral that was accepted. Watching TikToks where medical providers all pat each other on the back for making the stupid joke about dropping everything to treat a "farmer" who comes in with pain, I wonder how long it would have taken if I were a different gender.
Sometimes people have given me (well-intentioned) feedback that I can come off as overly precise, careful, scientifically pedantic, I don't know. Sometimes you may find that in how I express myself on social media, because I am in so much goddamn pain. I'm just in so much pain and it's hard.
I bounced off the sharp edge of a very weird citation about programming ability and I was like, "what are they talking about, this claim about being able to predict such a reliable percentage of student outcomes sounds very sus"
- you all never met a goddamn distribution you couldn't turn into a mystical statement about programming ability. Distributions of achievement (again, not ability) can reflect factors that have nothing to do with the domain in which you are capturing them. E.g., would you find this "bimodal success" in any intro class? Soooooo does that mean you're just capturing poverty? First-year struggle? Where is the THEORY? Achievement is actually something we STUDY
You're getting my chaos thoughts on this kind of thing, I know I need to write a deep and thoughtful piece about assessment instead, but it's fking hard when there are so many fallacious beliefs being passed around this industry.
- something having high predictive value for a super basic split (e.g., struggling students vs. those-who-don't-fail-and-bail) isn't that impressive. Lots of tests can have that predictive power without being VALID MEASURES of ability (toy sketch below makes this concrete)
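To make that last point concrete, here is a minimal, purely illustrative simulation. Everything in it is my own assumption for the sketch: the hypothetical `prior_access` confound (a stand-in for poverty / prior exposure / first-year support), the effect sizes, and the pass threshold. It just shows that a screening test can sort a pass/fail split well above chance, and grades can even come out "bimodal," in a simulated world that contains no programming-ability construct at all.

```python
# Toy simulation, illustrative only -- invented numbers, not anyone's real data.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical confound: whether a student arrives with prior exposure/resources.
prior_access = rng.random(n) < 0.5

# The "aptitude test" mostly reflects that same confound, plus noise --
# it never touches any ability construct.
test_score = 2.5 * prior_access + rng.normal(0, 1, n)

# Course outcomes are driven by the same confound (the teaching is held
# equally broken for everyone).
grade = 60 + 25 * prior_access + rng.normal(0, 8, n)
passed = grade >= 70

# A crude cutoff on the test "predicts" the pass/fail split well above chance...
predicted_pass = test_score > np.median(test_score)
print(f"accuracy predicting pass/fail from the test: {(predicted_pass == passed).mean():.2f}")

# ...and the grade distribution comes out bimodal: one hump per group,
# reflecting access, not ability.
counts, _ = np.histogram(grade, bins=20)
print("grade histogram counts:", counts.tolist())
```

The point of the sketch isn't the specific numbers; it's that predictive accuracy on a coarse split is cheap, and by itself says nothing about whether the test is a valid measure of the thing it claims to measure.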
As Cimpian and others have studied re: our beliefs about innate ability, constantly repeating the frame of innate ability, even when arguing about it, makes these beliefs feel commonplace and reasonable. Take note of just how much airtime, how many blog features, how many amplifications work like this gets, while work with millions of learners, global populations, and far more careful interrogation of achievement evidence never does.
The "programming ability" stuff has a waiting audience
Reading this really made me think about how these stories get interpreted and amplified by influential voices.
I know this is old, and has a very vague retraction note at the top, but it's a good example of just how much the poor claims get repeated while the counterevidence doesn't. I do not believe this is an "important phenomenon," nor a discovery. Grandiose characterizes the "retraction" language too. By the way, you can't retract something that was never published. Fuck off.
The number one place I think about this type of reasoning being used was when the US was arguing against public education as a concept. Poverty is an exceptionally strong predictor of outcomes, so people said: why should we waste our time and money letting poor children go to school? This might seem absurd to us now, but it was a very serious, very influential argument.
That was immediately disproven by the fact that EDUCATION INTERVENES ON POVERTY
By the way, Atwood in this 2006 blog quotes a part of the paper that says, if we used this predictive test to only admit students who have no risk of failing, then the failure statistics of CS would transform. It kind of SOUNDS like a good thing.
But this is a very common fallacious argument about education and selection: student "failure risk" is not a static, innate trait we're trying to detect and exclude students based on, it's WHAT WE'RE SUPPOSED TO BE CHANGING WITH EDUCATION
Studying computer science education as if it is a phenomenon that we discovered growing on our shoreline instead of a goddamn field that we are constructing every moment. JFC. Abundant evidence continues to emerge that says how we teach computing is extremely broken. Students' failures in the context of a failing educational experience are not some kind of untroubled measure of ability.
I am going to make a thread that I can just link when people tell me about moving to Canada for science :)
- half my family emigrated to Canada. WITH A FACULTY JOB. You think this was just an easy walk in the park? It was financially precarious, many years of work, and leaving behind an entire life (obviously).
- Canadian science literally relies on NIH funding too. Take a look at how many NIH dollars have involved Canadian faculty & institutions.
@jmeowmeow Jeff, saying this gently and with compassion: these are not the same things, and please don't equate them like this. The frustrations that people have about software methodology are not the same as identity-based oppression like sexism and racism. I know you must know that and believe that, but you are quoting a book literally titled MAN MONTH, one that cites, e.g., case studies of 12 men as generalized evidence about ability that should be applied to all human beings.
I can't believe Ashley and I are literally both writing pieces about gender barriers and beliefs about programming ability right now. Really cool. At the same time sort of sad and maddening how continuously necessary it all is. But also really cool. You can have two completely different PhDs and research areas and the world somewhat inevitably pushes you toward the same issues if you're a person who decides to care and pay attention over your career.
"The Transmitter identified at least 90 publications that cite a version of the dataset through a search on Google Scholar; 25 of those appear in journals published by the Institute of Electrical and Electronics Engineers. “IEEE is aware of this issue, and we are investigating,” an IEEE spokesperson told The Transmitter."
Yeah, this doesn't surprise me and is among the many reasons I tried and then rapidly cancelled an IEEE membership.
For better and for (mostly) worse, some people are buffered and protected in being able to scream out loud, to iterate, to workshop, to roll the dice on hundreds of hot takes, knowing the good ones will accumulate to them and the bad ones will just melt away. We maintain this with our biases.
Something I have explored a lot as well is how uneven consequences are for different people, which is well known in psychology but feels constantly ignored in software, in favor of a fantasy that people are fungible and face the exact same risk calculations.
Psychologist for the humans of tech. Evidence strategy for technical teams at: https://www.catharsisinsight.com/
Co-host at Change, Technically: https://www.changetechnically.fyi/
Author: Psychology of Software Teams (CRC Press, coming 2026)
Seizing the means of scientific production. Quant Psych PhD (but with a love for qual). Chronically underpublished. She/her
Founded: Catharsis Consulting, Developer Success Lab
Neighborhood Cool Aunt of Science