After hearing about Eric Schmidt's guest lecture in an AI class, I looked up the transcript, and yes, he really did say that if a Silicon Valley entrepreneur were to "illegally steal everybody's music" they would just "hire a whole bunch of lawyers to go clean the mess up." Then I got curious and looked up the course syllabus; based on the topic schedule, the most explicit ethics topic seemed to be "opportunities and risks," for which the guest speaker was... Eric Schmidt. 😕
So let's say that we cultivate the most ethically minded CS graduates ever, and then they go off and work for [insert whatever tech company you find most objectionable] - then what? Will they know what to do? What CAN they do?
I've been asking myself this question for a while. Starting to tackle it... this new paper is a first step.
I couldn't stop thinking about the Meta AI chatbot that made up a human experience in a parenting Facebook group - so I wrote about why online communities are for people, not chatbots. Decades of social computing research tells us that information seeking in online communities is as much about shared experiences and human connection as it is about getting an answer. Maybe generative AI shouldn't be *everywhere* doing *everything*. https://theconversation.com/ai-chatbots-are-intruding-into-online-communities-where-people-are-trying-to-connect-with-other-humans-229473
Here's today's academic hot take. The way we run conferences is absolutely wild. Can you imagine if a company had 100% employee turnover every year, and all you had to rely on was (if you're lucky) some documentation, or reaching out to the person who previously had your job (which you then feel bad about, because they don't work there anymore)?
Also the fact that anyone thinks professors should be event planners.
On a post that has absolutely NOTHING to do with this, two people have been arguing in my comments for three days about whether spreadsheets are databases. The thread is now almost 50 comments.
Oh also I should share the new work out of my lab that was presented at #SIGCSE2024 this year!
"How do Computing Students Conceptualize Cybersecurity? Survey Results and Strategies for Curricular Integration," led by PhD student Noah Cowit, reports on a survey of computing students' preconceptions of cybersecurity and what this might suggest about strategies for integrating security concepts across CS curriculum. https://dl.acm.org/doi/10.1145/3626252.3630869
It is #SIGCSE2024 - the big ACM computer science education conference! Three years ago I made this YouTube video about my thoughts on ethics integration across the computing curriculum - I've had a lot more thoughts since then, but if you've never seen me give a talk on this topic, this is a good introduction! https://youtu.be/LYRKqnLeIDo
I know a handful of famous people From The Internet but 100% the most famous person who follows me on TikTok is (verified) Santa Claus. But like, of COURSE he wants to learn about tech ethics.
As a reminder, if you are a teacher, do not use AI detectors. This piece mirrors what I've heard from a LOT of students on social media (panicked students commenting on my AI videos, with no reason to lie). In my opinion, whatever utility you're getting out of a way to catch cheating is not worth the risk of even one false positive. Especially the potential for *systematically biased* false positives. https://www.thedailybeast.com/ai-written-homework-is-rising-so-are-false-accusations
Last week I was in a book club conversation about Going Infinite, Michael Lewis' new book about Sam Bankman-Fried, and I've been stewing over effective altruism ever since.
A TikTok about Amazon's new $9/month medical service for Prime members has AMAZING comments (in a "laughing through your tears about the U.S. healthcare system" kind of way). A selection:
"I delayed my CPR by 3 days and got a $1 digital credit, can't wait to use it!"
"Got my liver on prime day. Even bought 'used, like new' and saved a bunch."
"I just dropped off my bloodwork at Kohl's."
"Baby came nine months early with same day delivery!"
So you know those science communication publications that write accessible articles about research?
I got an email recently from one saying "We are interested in featuring your research in an upcoming issue of our magazine" and "Would you be available for a 10-minute exploratory phone call?"
I responded by asking what the publication's business model is.
And of COURSE the (very long) response boiled down to "we will charge you $2000 to publish this article."
I find it weird that the press releases and headlines around the new capabilities for ChatGPT say that it can "see" and "hear." We don't tend to describe facial recognition tech as "seeing," or the voice memo app on your phone as "hearing," in this anthropomorphized way.
The comments on my women in computing videos just keep on coming, and today it was someone who says he's been in IT for 30 years and has "never, ever" met a woman who has shown any interest in computers.
I seriously didn't know how to respond except to say that it's okay, not everyone meets women. Because what other explanation could there possibly be?!
This reminder of how many people are fooled by a photoshopped tweet screenshot (and believe some random X user without verifying) is really depressing. Because now (thanks, generative AI!) we need to convince people that if they're scrolling through social media and see an actual video of the CDC director at a press conference, they absolutely cannot believe it's real until they verify the original source of the video. https://www.forbes.com/sites/mattnovak/2023/09/02/no-theres-not-an-ebola-outbreak-at-burning-man/?sh=310308d65dd7
information science prof at university of colorado boulder, social computing / tech ethics researcher, exceptionally minuscule tiktok star, fangirl / geek, she/her