@inthehands interesting, that's not how my field uses "hidden curriculum." I always learned it as the implicit social rules of status and performance, which is not quite the same thing as tacit skill learning
If you have little sense of what typical patterns of change look like over time for these software activity variables, how will you know when you've changed them in a meaningfully distinct way that is lasting and real? For instance, how can you know that the pre/post change you are observing is due to the intervention you claim it's due to? Without real research design, you often cannot know this.
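A toy sketch of what I mean, with completely made-up numbers: a noisy metric plus a selection rule is all it takes to manufacture a convincing pre/post "improvement" out of nothing.

```python
import random

# Purely illustrative simulation, all numbers invented: a "metric" that is
# nothing but noise around a stable baseline. Select whoever looked worst at
# time 1, "intervene," and they look better at time 2 with no real change at
# all. That's regression to the mean, not your intervention working.
random.seed(42)

n = 500
pre = [100 + random.gauss(0, 15) for _ in range(n)]
post = [100 + random.gauss(0, 15) for _ in range(n)]

# "Intervene" on the 50 lowest scorers at time 1.
worst = sorted(range(n), key=lambda i: pre[i])[:50]

pre_avg = sum(pre[i] for i in worst) / len(worst)
post_avg = sum(post[i] for i in worst) / len(worst)
print(f"pre:  {pre_avg:.1f}")   # well below 100
print(f"post: {post_avg:.1f}")  # back near 100 -- a fake "improvement"
```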
Make no mistake about it: software metrics might *sound* simple, or be based on a fairly simple operationalization, but that certainly does not mean they naturally *behave* in simple ways across large numbers of people. This insight was at the heart of our No Silver Bullets analysis of cycle time, and it continues to plague me
We have so many pseudo-studies in software, and given its importance in the world and the size of this industry and the economic powers involved, I really believe we deserve better. Going through symbolic rituals of science doesn't mean you're really generating the evidence that will bring clarity to our decisions.
And very little work is rising to the challenge of moving beyond observational pre/post averages from individuals and designing natural experiments or controlled experiments that truly test the claim that the change is due to our intended intervention.
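Here is a deliberately cartoonish sketch (invented numbers, simulated data) of why the comparison group matters: with one, you can separate a background trend from the effect you actually care about; a naive pre/post average lumps them together.

```python
import random

# Invented numbers, purely illustrative. Everyone drifts up over time
# (a background trend); only the treated group also gets a true effect.
random.seed(0)

def sample(n, mean):
    return [random.gauss(mean, 10) for _ in range(n)]

trend, effect = 5, 3
treat_pre, treat_post = sample(200, 50), sample(200, 50 + trend + effect)
ctrl_pre, ctrl_post = sample(200, 50), sample(200, 50 + trend)

avg = lambda xs: sum(xs) / len(xs)

naive = avg(treat_post) - avg(treat_pre)        # trend + effect (~8)
did = naive - (avg(ctrl_post) - avg(ctrl_pre))  # effect alone (~3)

print(f"naive pre/post estimate:   {naive:.1f}")
print(f"difference-in-differences: {did:.1f}")
```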
As teams gather more software metrics and clamor for crude benchmarks, the industry ends up with a lot of cherry-picked supposed change findings, ascribed to the causes we want to claim, and usually measured with research and statistical approaches that throw away tremendous amounts of important information in the service of simplistic aggregations built to fit business desires, not the structure of the data or a theory of behavioral change.
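A tiny invented example of what that information loss looks like: a team average that says "nothing changed" sitting on top of a distribution that changed completely.

```python
# Invented numbers, purely illustrative: half the team got much faster,
# half got much slower, and the aggregate mean erases the whole story.
pre = [6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
post = [2, 3, 2, 3, 2, 10, 9, 11, 10, 10]  # similar mean, totally new shape

avg = lambda xs: sum(xs) / len(xs)
print(f"pre avg:  {avg(pre):.1f}")   # 6.0
print(f"post avg: {avg(post):.1f}")  # 6.2 -- "no change," supposedly
```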
The inability to understand WHAT generative models are genuinely good at should be studied as its own cognitive bias network, I stg. It's like people are looking for the places where we most obviously have existing solutions.
Reading the designs of these attempts reveals so many foolish mental models on the HUMAN side, mental models that read as pure software bro to me: e.g., arbitrary limits that feel efficient or something (like limiting the # of characters the model gets to consider from the contract)
I want to be crystal clear about something. The people I love most in the world are being directly attacked, threatened, and having their scientific careers destroyed, and if I have the internal bandwidth to share about the experience on here, I expect conversation, not lecturing. I'm not interested in even a fraction of Mastodon reply whining in my replies when I'm talking about science in 2025, not while my wife has to think about her physical safety because she dares to lead diversity-in-STEM programs
Everything is, like, one-directional. "You work and then you wait to be told if the work aligned with business outcomes." Presumably only leadership gets to say whether it does.
Just fascinating how isolated and removed from all product research and UX all this devex work seems
Why are we asking developers "how satisfied are you with your tools" and not "how confident are you that this technological approach is going to improve users' lives" or "do you think you are making meaningful progress toward the goal of helping users"? I think it is just FASCINATING that we want to measure purely individualistic goals for developers when, in my experience, many developers think a lot about prosocial goals and are deeply impacted by them
Why is there all this emphasis on how developers need to be locked into delivering value for the customer, and yet no formalization of "feedback from customers" in all these "developer experience metrics"?
Good thing that system never had any bad effects (like ensuring your cancer will never be caught at an early stage because all you ever get are ten-minute appointments from someone who has to hit a huge weekly patient quota)
Measuring "PRs per dev" (presumably conditioned on the same unit of time even though what that rate is is very unclear to me whenever this measure is proposed) is really giving "number of patients per doctor's day" I can't be the only person who feels this way
@nddev @inthehands also very important to note, as you are probably well aware: just because people may have *different* interaction needs doesn't make those needs lesser or less valuable. I am not an expert in neurodiversity (just care about my many loved ones who are ND), but here is a fascinating recent study showing that social communication is highly complex and powerful among autistic people (as should be obvious imho) and that NTs have their own deficits!
I have been looking at research in software engineering about "motivation," and I find it disappointing. It's not uncommon for work to state (without evidence) that software engineers are noticeably distinct from the general population in how their motivation functions. Why would this be true? On what grounds? Motivation may be distinct in the way that every occupation and professional endeavor creates specific PRACTICES through which it is exercised, but the argument that core *psychology itself* differs?!
Writing a book about the Psychology of Software Teams. Defender of the mismeasured. Co-host at Change, Technically: https://www.changetechnically.fyi/
Studying how developers thrive. I care about how people form beliefs about learning, build coalitional identities, and build strategies for resilience, productivity & motivation. Quant Psych PhD (but with a love for qual). Chronically underpublished.
Founded: Catharsis Consulting, Developer Success Lab
Neighborhood Cool Aunt of Science