Tired: telemetry is a privacy issue.
Wired: telemetry is a consent issue.
Inspired: it is impossible to use telemetry to influence future decisions without implicitly or explicitly performing human experimentation.
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Monday, 05-May-2025 17:00:34 JST
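(A minimal editorial sketch, not from the thread, of what the "consent issue" framing could look like in practice; every name here is hypothetical, and it assumes telemetry is off by default behind an explicit, revocable opt-in.)

```python
# Hypothetical sketch (not from the thread): telemetry that is off by
# default and gated on explicit, revocable consent.
from dataclasses import dataclass, field


@dataclass
class TelemetryClient:
    consent_given: bool = False  # opt-in, never opt-out
    _events: list[dict] = field(default_factory=list)

    def set_consent(self, given: bool) -> None:
        # Consent is revocable; revoking it drops anything already queued.
        self.consent_given = given
        if not given:
            self._events.clear()

    def record(self, name: str, **attrs: str) -> None:
        # Without consent, the observation is never made at all.
        if self.consent_given:
            self._events.append({"event": name, **attrs})
```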
✧✦Catherine✦✧ (whitequark@mastodon.social)'s status on Monday, 05-May-2025 17:00:29 JST
@xgranade by this logic, any OSS development that has a UX element is human experimentation
✧✦Catherine✦✧ (whitequark@mastodon.social)'s status on Monday, 05-May-2025 17:03:33 JST
@xgranade I think this view is deranged. mainly because I've spent a career *not* collecting telemetry, and as a result making changes that are necessarily uninformed (and sometimes quite upsetting)
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Monday, 05-May-2025 17:03:36 JST
@whitequark To be clear, I don't think human experimentation is bad. Rather, it's bad when that experimentation is uncontrolled and nonconsensual.
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Monday, 05-May-2025 17:03:37 JST
@whitequark Yes.
✧✦Catherine✦✧ (whitequark@mastodon.social)'s status on Monday, 05-May-2025 17:10:39 JST
@xgranade I think you're framing very broad categories of human *collaboration* as human *experimentation*, possibly as a way to be inflammatory, which I don't appreciate
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Monday, 05-May-2025 17:10:44 JST
@whitequark There's a lot we understand about human behavior, at both macroscopic and microscopic levels, that comes from experimentation. One-way mirror studies, A/B tests, and pretty much any kind of user testing fall under that, in that you're providing stimuli to a human being and seeing how they respond.
The understanding we glean as a society as a result of those experiments is immensely valuable. What I'm calling out is doing those kinds of experiments by accident.
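(Another purely illustrative sketch with hypothetical names, not anything from the thread: an A/B test is exactly this stimulus-and-response loop, and a consent check is one concrete way to keep the "by accident" experiment from happening.)

```python
# Hypothetical consent-aware A/B assignment; nothing here is from the thread.
import hashlib


def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    # The stimulus: deterministically bucket each user into one variant.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]


def record_outcome(consented: bool, experiment: str, variant: str, outcome: str) -> None:
    # The response: observed only for users who agreed to be studied.
    # Skipping this check is what turns user testing into an uncontrolled,
    # nonconsensual experiment.
    if not consented:
        return
    print(f"{experiment}: variant={variant} outcome={outcome}")
```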
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Monday, 05-May-2025 17:10:45 JST
@whitequark I mean, fair, you can think me deranged if you like... there's not much I can really do about that?
To be very clear on my part, though: none of that is to say that telemetry and the like are wrong. Rather the opposite, in that I wish there were better resources made available to help use it ethically. As you say, it's quite useful and can help make informed and useful decisions.
✧✦Catherine✦✧ (whitequark@mastodon.social)'s status on Monday, 05-May-2025 17:25:35 JST
@xgranade I see
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Monday, 05-May-2025 17:25:41 JST
@whitequark Maybe put differently, the same kinds of arguments and excuses used to justify A/B testing are now used to justify tweaking LLMs "based on user feedback." OpenAI has even admitted that recent versions of 4o encouraged people to think of themselves as religious prophets because of how those models interpreted user feedback.
I posit that understanding why A/B testing can be both useful and harmful is itself useful in deflating OpenAI's arguments.
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Monday, 05-May-2025 17:25:43 JST
@whitequark To wit, I don't think there's an inherent conflict between collaboration and experimentation. Indeed, a participant in an ethically designed and conducted experiment is there on purpose, at least nominally because they agree with the goals of the experiment. That strikes me as quite collaborative, which is awesome in my book.
But also, experimentation is something that can go wrong when done without proper knowledge; I would love for there to be more resources to help.
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Monday, 05-May-2025 17:25:48 JST
@whitequark To be clear about my intentions as well, no, I'm not trying to be inflammatory nor is any of the above intended to incite anything.
Rather, I am incensed that AI chatbots and the like are the logical extreme of the kinds of experimentation that have been normalized in the tech industry, and that that extreme has caused significant harm. I posit that being clear about the roots of those ideas is helpful in understanding the extremes.
✧✦Catherine✦✧ (whitequark@mastodon.social)'s status on Monday, 05-May-2025 17:45:45 JST
@xgranade I think you bring up at least some reasonable points but the way in which you frame them (or framed them to me initially) seems about as non-constructive as it gets, to be honest
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Monday, 05-May-2025 17:45:46 JST
@whitequark I do hope that's something that you don't find deranged, as you say; I value your opinion and expertise in general and on this stuff in particular. Whether you do find it deranged or not, though, that's more or less where I'm coming from?
I think tech as an industry (and yes, that includes my own time in such) has normalized a degree of nonconsensual experimentation that's now causing huge problems.
✧✦Catherine✦✧ (whitequark@mastodon.social)'s status on Monday, 05-May-2025 17:48:50 JST
@xgranade mainly, I do think there's room to improve (like, kilometers of it) in how A/B testing is done, but the way it was presented more or less told me "great, another thing I have to do to satisfy an external gatekeeper while doing primarily unpaid labor" which doesn't inspire enthusiasm for the underlying critique even if it maybe should
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Monday, 05-May-2025 17:48:51 JST
@whitequark I mean, fair; I'm not sure I agree (indeed, or else I wouldn't have said what I did), but I definitely take the critique.
✧✦Catherine✦✧ (whitequark@mastodon.social)'s status on Monday, 05-May-2025 17:56:32 JST
@xgranade okay, then we more or less agree on the substance if not the form. thank you for taking the time to write an explanation
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Monday, 05-May-2025 17:56:41 JST
@whitequark But absent that structure, I absolutely recognize that people are doing experiments on actual humans, both to maximize the profits of unscrupulous corporations and to learn how to better benefit other people — I encourage doing that experimentation with all the empathetic concern and attention to ethical conduct that implies.
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Monday, 05-May-2025 17:56:44 JST
@whitequark And like, if an IRB is just acting as a gatekeeper? That's not helping anyone do things ethically, and it's not helping any potential or actual users. It's concentrating power, and that's it.
I don't know what the right structure is for providing expertise to people trying to do their best while also telling corporations running completely unethical, uncontrolled, and nonconsensual experiments to fuck off.
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Monday, 05-May-2025 17:56:47 JST
@whitequark To be sure, that's why I emphasized from the second toot that I wished there were more resources available — I understand that reading, and I definitely don't intend to encourage more unpaid labor.
It's absolutely fucked up that the companies that do A/B testing don't give a shit about the ethical consequences of such, and the people who *do* give a shit don't have the resources to back that care up with aid from specialized expertise in ethical experiment design.