GNU social JP
GNU social JP is a Japanese GNU social server.

Conversation

Notices

  1. Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 00:55:38 JST
    • AJ Sadauskas

    There’s a lot to chew on in this short article (ht @ajsadauskas):
    https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination

    “An AI resume screener…trained on CVs of employees already at the firm” gave candidates extra marks if they listed male-associated sports, and downgraded female-associated sports.

    Bias like this is enraging, but completely unsurprising to anybody who knows half a thing about how machine learning works. Which apparently doesn’t include a lot of execs and HR folks.

    1/

    In conversation Monday, 19-Feb-2024 00:55:38 JST from hachyderm.io permalink

    Attachments

    1. AI hiring tools may be filtering out the best job applicants
      from Charlotte Lytton
      As firms increasingly rely on artificial intelligence-driven hiring platforms, many highly qualified candidates are finding themselves on the cutting room floor.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:03:03 JST
      in reply to
      • AJ Sadauskas

      @ajsadauskas Years ago, before the current AI craze, I helped a student prepare a talk on an AI project. Her team asked whether it’s possible to distinguish rooms with positive vs. negative affect — “That place is so nice / so depressing” — using the room’s color palette alone.

      They gathered various photos of rooms on campus, and manually tagged them as having positive or negative affect. They wrote software to extract color palettes. And they trained an ML system on that dataset.
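
A minimal sketch of that pipeline (a hypothetical reconstruction, not the students' actual code: `palette_features`, `fake_photo`, and the simulated data are all invented for illustration), showing how a source artifact correlated with the labels is exactly what a model latches onto:

```python
# Hypothetical reconstruction of the students' pipeline (all names and data
# here are invented for illustration, not their actual code). Each image is
# reduced to a crude color-palette feature; a trivial "model" then separates
# the classes almost perfectly -- but only because the label is confounded
# with the photo source (vivid publicity shots vs. duller snapshots).
import random

random.seed(0)

def palette_features(pixels):
    """Reduce an image (a list of (r, g, b) tuples) to [vividness, brightness]."""
    n = len(pixels)
    mean = [sum(px[i] for px in pixels) / n for i in range(3)]
    vividness = max(mean) - min(mean)   # crude saturation proxy
    brightness = sum(mean) / 3
    return [vividness, brightness]

def fake_photo(vivid):
    """Simulated photo: publicity shots (vivid=True) have punchier color."""
    base = random.randint(120, 180) if vivid else random.randint(60, 120)
    spread = 80 if vivid else 20
    return [(base + random.randint(0, spread), base, base - 20)
            for _ in range(100)]

# "Happy" rooms came from publicity photos, "sad" rooms from snapshots:
# the hidden confounder baked into the dataset.
X = ([palette_features(fake_photo(vivid=True)) for _ in range(50)]
     + [palette_features(fake_photo(vivid=False)) for _ in range(50)])
y = [1] * 50 + [0] * 50  # 1 = tagged "positive affect"

# Even a one-feature threshold on vividness nails the labels -- this "model"
# has learned the photo source, not anything about the rooms.
threshold = sum(x[0] for x in X) / len(X)
accuracy = sum((x[0] > threshold) == bool(label)
               for x, label in zip(X, y)) / len(y)
print(f"accuracy from vividness alone: {accuracy:.0%}")
```

The fix for a dataset like this is not a better model but cleaner data: photos of both kinds of rooms taken with the same camera and the same processing.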

      2/

      In conversation Monday, 19-Feb-2024 01:03:03 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:05:24 JST
      in reply to
      • AJ Sadauskas

      @ajsadauskas Guess what? Their software succeeded!…at identifying photos taken by Macalester’s admissions department.

      It turns out that all the publicity photos, massaged and prepped for recruiting material, had more vivid colors than the students’ own photos. And they’d mostly used publicity photos for the “happy” rooms and their own photos for the “sad” rooms.

      They’d encoded a bias in their dataset, and machine learning dutifully picked up the pattern.

      Oops.

      3/

      In conversation Monday, 19-Feb-2024 01:05:24 JST permalink

      Haelwenn /элвэн/ :triskell: likes this.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:09:14 JST
      in reply to

      The student had a dilemma: she had to present her research, but the results sucked! the project failed! she was embarrassed! Should she try to fix it at the last minute?? Rush a totally different project?!?

      I nipped that in the bud. “You have a •great• presentation here.” Failure is fascinating. Bad results are fascinating. And people •need• to understand how these AI / ML systems break.

      4/

      In conversation Monday, 19-Feb-2024 01:09:14 JST permalink
      Haelwenn /элвэн/ :triskell: repeated this.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:15:12 JST
      in reply to

      She dutifully gave the talk on the project as is, complete with the rug pull at the end: “Here’s our results! They’re so broken! Look, it learned the bias in our dataset! Surprise!” It got an audible reaction from the audience. People •loved• her talk.

      I wish there had been some HR folks at her talk.

      Train an AI on your discriminatory hiring practices, and guess what it learns? That should be a rhetorical question, but I’ll spell it out: it learns how to infer the gender of applicants.

      5/

      In conversation Monday, 19-Feb-2024 01:15:12 JST permalink
      carl marks repeated this.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:21:24 JST
      in reply to

      An interesting angle I’m sure someone is studying properly: when we feed these tabula rasa ML systems a bunch of data about the world as it is, and they come back puking out patterns of discrimination; can that serve as •evidence of bias• not just in AI, but in •society itself•?

      If training an ML system on a company’s past hiring decisions makes it think that baseball > softball for an office job, isn’t that compelling evidence of hiring discrimination?
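
One way such evidence could be gathered is a counterfactual audit: train a model on past decisions, flip only the gender-proxy feature on otherwise identical CVs, and count how many predictions change. A hypothetical sketch (simulated data; `make_cv` and the tiny perceptron are invented for illustration):

```python
# Hypothetical counterfactual audit (simulated data; make_cv and the tiny
# perceptron are invented for illustration). Historical decisions favored a
# gender-proxy hobby line ("baseball") on otherwise identical CVs; a model
# trained on them inherits that preference, and flipping only the proxy
# feature flips a large share of its predictions.
import random

random.seed(1)

def make_cv():
    skill = random.random()             # genuine qualification, 0..1
    baseball = random.random() < 0.5    # the gender-proxy hobby line
    # Biased historical decision: the hobby counts almost like skill.
    hired = (skill + 0.4 * baseball) > 0.7
    return [skill, float(baseball)], hired

data = [make_cv() for _ in range(2000)]

# Fit a minimal perceptron on the past decisions.
w, b = [0.0, 0.0], 0.0
for _ in range(20):
    for x, label in data:
        pred = (w[0] * x[0] + w[1] * x[1] + b) > 0
        err = float(label) - float(pred)
        w = [w[i] + 0.1 * err * x[i] for i in range(2)]
        b += 0.1 * err

def predict(x):
    return (w[0] * x[0] + w[1] * x[1] + b) > 0

# Audit: identical CV, hobby line flipped. Prediction changes are evidence
# the model (and the decisions it mimics) encode the proxy, not merit.
flips = sum(predict(x) != predict([x[0], 1.0 - x[1]]) for x, _ in data)
print(f"predictions changed by flipping the hobby line: {flips / len(data):.0%}")
```

A high flip rate says the model cannot be responding to qualification, since qualification was held fixed; what remains is the proxy it absorbed from the training labels.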

      6/

      In conversation Monday, 19-Feb-2024 01:21:24 JST permalink
      Haelwenn /элвэн/ :triskell: and Alexandre Oliva like this.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:27:44 JST
      in reply to

      There’s an ugly question hovering over that previous post: What if the men •are• intrinsically better? What if discrimination is correct?? What if the AI, with its Perfect Machine Logic, is bypassing all the DEI woke whatever to find The Actual Truth??!?

      Um, yeah…no.

      A delightful tidbit from the article: a researcher studying a hiring AI “received a high rating in the interview, despite speaking nonsense German when she was supposed to be speaking English.”

      These systems are garbage.

      7/

      In conversation Monday, 19-Feb-2024 01:27:44 JST permalink
      Haelwenn /элвэн/ :triskell: likes this.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:32:30 JST
      in reply to

      I mean, maaaaaaybe AI can help with applicant screening, but I’d need to see some •damn• good evidence that the net effect is positive. Identifying and countering training set bias, evaluating results, teasing out confounders and false successes — these are •hard• problems, problems that researchers work long months and years to overcome.

      Do I believe for a hot minute that companies selling these hiring AIs are properly doing that work? No. No, I do not.

      8/

      In conversation Monday, 19-Feb-2024 01:32:30 JST permalink

      Attachments


    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:36:54 JST
      in reply to

      AI’s Shiny New Thing promise of “your expensive employees are suddenly replaceable” is just too much of a candy / crack cocaine / FOMO promise for business leaders desperate to cut costs. Good sense cannot survive the onslaught.

      Lots of businesses right now are digging themselves into holes that they’re going to spend years climbing out of.

      9/

      In conversation Monday, 19-Feb-2024 01:36:54 JST permalink
      Tokyo Outsider (337ppm) repeated this.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:41:16 JST
      in reply to

      Doing sloppy, biased resume screening is the •easy• part of HR. Generating lots of sort-of-almost-working code is the •easy• part of programming. Producing text that •sounds• generally like the correct words but is a subtle mixture of obvious, empty, and flat-out wrong — that’s the •easy• part of writing.

      And a bunch of folks in businesses are going to spend the coming years learning all that the hard way.

      10/

      In conversation Monday, 19-Feb-2024 01:41:16 JST permalink
    • maya_b (maya_b@hachyderm.io)'s status on Monday, 19-Feb-2024 01:43:45 JST
      in reply to

      @inthehands i.e. you still have to do your homework, and getting something else to do it for you isn't likely to get it right

      In conversation Monday, 19-Feb-2024 01:43:45 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:46:02 JST
      in reply to

      At this point, I don’t think it’s even worth trying to talk down business leaders who’ve drunk the Kool-Aid. They need to make their own mistakes.

      BUT

      I do think there’s a competitive advantage here for companies willing to seize it. Which great candidates are getting overlooked by biased hiring? If you can identify them, hire them, and retain them — if! — then I suspect that payoff quickly outstrips the cost savings of having an AI automate your garbage hiring practices.

      /end

      In conversation Monday, 19-Feb-2024 01:46:02 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:47:34 JST
      in reply to
      • FlowChainSensei

      @flowchainsenseisocial I have not. I take it the relevance is the environment → affect connection?

      In conversation Monday, 19-Feb-2024 01:47:34 JST permalink
    • FlowChainSensei (flowchainsenseisocial@mastodon.social)'s status on Monday, 19-Feb-2024 01:47:35 JST
      in reply to
      • AJ Sadauskas

      @inthehands @ajsadauskas Have you read "Drunk Tank Pink"? #book

      In conversation Monday, 19-Feb-2024 01:47:35 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:51:25 JST
      in reply to
      • Jack Jackson

      @scubbo Indeed, which is why it needs to be studied by some researcher (which to be clear is not me) qualified to investigate the question in a robust way that withstands scrutiny.

      In conversation Monday, 19-Feb-2024 01:51:25 JST permalink
    • Jack Jackson (scubbo@fosstodon.org)'s status on Monday, 19-Feb-2024 01:51:26 JST
      in reply to

      @inthehands interesting proposition - which would, I imagine, be responded to with goalpost-moving or No True Scotsman-ing from True Believers if you actually tried it.

      In conversation Monday, 19-Feb-2024 01:51:26 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:55:19 JST
      in reply to
      • J Miller

      Yeah. The spam arms race is playing out in many spheres, and it feels kind of desperate right now tbh. A defining feature of our present moment.

      From @JMMaok:
      https://mastodon.online/@JMMaok/111953486878595616

      In conversation Monday, 19-Feb-2024 01:55:19 JST permalink

      Attachments

      1. J Miller (@JMMaok@mastodon.online)
        from J Miller
        @inthehands@hachyderm.io Good thread! This is all made even harder by the fact that applicants are simultaneously adopting LLMs. This reduces the effort needed to apply, resulting in larger applicant pools with different signals. Heck, the applicants will start to get advised to change softball to baseball. And in the pantheon of resume lies, that’s trivial. But this shift by applicants also means I can’t entirely blame companies for trying some machine learning.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 01:57:47 JST
      in reply to
      • abreaction

      @abreaction Better? Yes. Sure.

      “Better” in the sense of “fundamentally different by nature?” I really, really doubt that.

      The problems I mention in this post are •intrinsic• problems, baked into the nature of the tech: https://hachyderm.io/@inthehands/111953441495417192 They don’t vanish just because the tech gets better, any more than making a car go faster can make it play the piano.

      In conversation Monday, 19-Feb-2024 01:57:47 JST permalink

      Attachments

      1. Paul Cantrell (@inthehands@hachyderm.io)
        from Paul Cantrell
        I mean, maaaaaaybe AI can help with applicant screening, but I’d need to see some •damn• good evidence that the net effect is positive. Identifying and countering training set bias, evaluating results, teasing out confounders and false successes — these are •hard• problems, problems that research work long months and years to overcome. Do I believe for a hot minute that companies selling these hiring AIs are properly doing that work? No. No, I do not. 8/
    • abreaction (abreaction@mastodon.world)'s status on Monday, 19-Feb-2024 01:57:48 JST
      in reply to

      @inthehands

      They are betting that AI/ML is going to get better. From a historical view of technology, they are probably right.

      I detest the trend as well, but if it replaces basic clerking jobs, that saves people from tedium too.

      In conversation Monday, 19-Feb-2024 01:57:48 JST permalink
    • Sven A. Schmidt (finestructure@mastodon.social)'s status on Monday, 19-Feb-2024 03:04:22 JST
      in reply to

      @inthehands Reminds me of a lesson I learned about 30 years ago in a physics course. In pairs we had to run experiments a full day and then prepare an analysis.

      Our results were garbage. We tried everything to explain the results, all attempts failed. In the end we went in to present our “results” and expected to be roasted.

      On the contrary, our tutor was delighted. Turned out an essential part of the experiment was broken and he praised us for doing all the “false negative” analysis 😮

      In conversation Monday, 19-Feb-2024 03:04:22 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 03:38:53 JST
      in reply to
      • Sven A. Schmidt
      • buherator

      @buherator @finestructure
      That’s beautiful.

      In conversation Monday, 19-Feb-2024 03:38:53 JST permalink
    • buherator (buherator@infosec.place)'s status on Monday, 19-Feb-2024 03:38:54 JST
      in reply to
      • Sven A. Schmidt
      @finestructure @inthehands I heard a legend about a lab exercise at our uni where students were tasked to figure out the contents of a box by electrical measurements on some external connectors. Sometimes the box contained a potato wired up.
      In conversation Monday, 19-Feb-2024 03:38:54 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 04:26:47 JST
      in reply to
      • Dieu

      @hllizi
      Yes, much of the corporate appeal of AI is whitewashing bias.

      In conversation Monday, 19-Feb-2024 04:26:47 JST permalink
    • Dieu (hllizi@hespere.de)'s status on Monday, 19-Feb-2024 04:26:48 JST
      in reply to

      @inthehands maybe it's simply about ridding oneself of the awful decision making. Throwing dice in a way that allows one to convince oneself one's not just rolling dice.

      In conversation Monday, 19-Feb-2024 04:26:48 JST permalink
    • Dana Fried (tess@mastodon.social)'s status on Monday, 19-Feb-2024 04:27:00 JST
      in reply to
      • AJ Sadauskas

      @inthehands @ajsadauskas Amazon tried and scrapped the same approach years ago (for the exact same reason!); this is a well-known story; I have no idea how people can be making the same mistakes again.

      In conversation Monday, 19-Feb-2024 04:27:00 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 04:28:17 JST
      in reply to
      • Dave Mc

      @guigsy
      Classic.

      In conversation Monday, 19-Feb-2024 04:28:17 JST permalink
    • Dave Mc (guigsy@mstdn.social)'s status on Monday, 19-Feb-2024 04:28:18 JST
      in reply to
      • AJ Sadauskas

      @inthehands @ajsadauskas I did a project that put variable speed limits on some highways to help flow. We created a traffic sim to see if it would work elsewhere. We used regression to tweak the driver behaviour model so the simulated drivers behaved as we saw real drivers respond to the variable limits. It seemed to work. Until one run, when someone forgot to turn the signs on and the modelled drivers still acted just as well. We'd made sim'd drivers respond better to congestion, not react to the signs. Doh!

      In conversation Monday, 19-Feb-2024 04:28:18 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 04:32:52 JST
      in reply to
      • Matt McIrvin

      @mattmcirvin Indeed, I ran a successful exercise much along these lines with one of my classes (see student remarks downthread):
      https://hachyderm.io/@inthehands/109479808455388578

      I think there really is a “there” there with LLMs; it just bears close to no resemblance to the wildly overhyped Magic Bean hysteria currently sweeping biz. Generating bullshit does actually have useful applications. But until the dust settles, how much harm will it cause?

      In conversation Monday, 19-Feb-2024 04:32:52 JST permalink

      Attachments

      1. Paul Cantrell (@inthehands@hachyderm.io)
        from Paul Cantrell
        Attached: 3 images OK, trying an experiment with my Programming Languages class! • Have an AI generate some of your writing assignment. • Critique its output. Call BS on its BS. Assignment details in screenshots below. I’ll let you know how it goes. (Here are the links from the screenshots:) Raw AI Text: https://gist.github.com/pcantrell/7b68ce7c5b2e329543e2dadd6853be21 Comments on AI Text: https://gist.github.com/pcantrell/d51bc2d4257027a6b4c64c9010d42c32 (Better) Human Text https://gist.github.com/pcantrell/f363734336e6063f61e451e2658b50a6 #ai #chatgpt #education #writing #highered #swift #proglang
    • Matt McIrvin (mattmcirvin@mathstodon.xyz)'s status on Monday, 19-Feb-2024 04:32:53 JST
      in reply to

      @inthehands I think there is one exception--for a lot of people in creative fields who may have some kind of borderline ADHD condition, getting past the blank page or the digital equivalent is a real struggle. And if there's something that can push them past that step from nothing to something, they'll find it useful.

      There's a powerful temptation to just use version zero, though, especially if you're not the creator but the person paying the creator.

      In conversation Monday, 19-Feb-2024 04:32:53 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 04:34:27 JST
      in reply to
      • Ivan Sagalaev :flag_wbw:

      @isagalaev True. Their stubborn focus on vision over other types of input is also baffling. Tesla’s whole approach to self-driving makes no sense to me; looks like a bottomless money pit from where I sit.

      (Note that Boston Dynamics doesn’t use ML of this type at all, IIRC.)

      In conversation Monday, 19-Feb-2024 04:34:27 JST permalink
    • Ivan Sagalaev :flag_wbw: (isagalaev@mastodon.social)'s status on Monday, 19-Feb-2024 04:34:28 JST
      in reply to

      @inthehands first of all, thank you!

      Now, reading through this thread prompted a related but different thought: the current generation of Tesla's self-driving AI eschews codified decision-making in favor of learning how to drive based purely on humans. Which should obviously be a bad idea if your stated goal is to devise a better-than-human behavior. But everyone is just closing their eyes and saying "well, I guess they know better what they're doing". They don't.

      In conversation Monday, 19-Feb-2024 04:34:28 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 04:35:05 JST
      in reply to
      • FlowChainSensei

      @flowchainsenseisocial Interesting!

      (Bonus points for correct effect / affect usage)

      In conversation Monday, 19-Feb-2024 04:35:05 JST permalink
    • FlowChainSensei (flowchainsenseisocial@mastodon.social)'s status on Monday, 19-Feb-2024 04:35:06 JST
      in reply to
      • AJ Sadauskas

      @inthehands @ajsadauskas The effect of colour on affect.

      In conversation Monday, 19-Feb-2024 04:35:06 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 05:09:01 JST
      in reply to
      • Sven A. Schmidt
      • buherator

      @finestructure @buherator “You should do something to throw them a wrench!” is one of the most common suggestions I get from industry folks about the software project course I teach. And my response is always the same:

      Have you •ever• been on a project that didn’t have spontaneous problems, surprising obstacles, sudden wrinkles? Just make sure they’re doing real work, and all the problems naturally happen on their own.

      In conversation Monday, 19-Feb-2024 05:09:01 JST permalink
    • Sven A. Schmidt (finestructure@mastodon.social)'s status on Monday, 19-Feb-2024 05:09:02 JST
      in reply to
      • buherator

      @buherator @inthehands If I had any faith that it wouldn’t immediately leak and alert students I’d actually break an experiment on purpose as an instructor to teach this particular lesson 🙂

      In conversation Monday, 19-Feb-2024 05:09:02 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 05:11:44 JST
      in reply to
      • Ivan Sagalaev :flag_wbw:

      @isagalaev At least some of the embarrassing Tesla self-driving fails I’ve seen in videos online are situations where cross-checking multiple forms of input (radar, map, etc) would probably have helped a lot.

      In conversation Monday, 19-Feb-2024 05:11:44 JST permalink
    • Ivan Sagalaev :flag_wbw: (isagalaev@mastodon.social)'s status on Monday, 19-Feb-2024 05:11:45 JST
      in reply to

      @inthehands I think their vision layer is okay. It can reliably identify and classify objects and their placement. It's what to do with this information that has always been the problem: you've got this car over there moving that way and that car standing over here. What input do you apply to the pedals and the steering wheel? This part turned out to be harder than vision. And now they're trying to solve it with AI as well. Which just swaps one set of edge cases for another and can't be debugged.

      In conversation Monday, 19-Feb-2024 05:11:45 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 05:13:13 JST
      in reply to
      • Martha Howell

      @MHowell For sure. I mean, the premise is to paint a “whole person” picture that fosters useful conversation in the interview, but I’m sure as often as not things like this become a discrimination vector. Conversely, though, I don’t think it’s possible to scrub enough personal identity characteristics from a resume to prevent discrimination.

      In conversation Monday, 19-Feb-2024 05:13:13 JST permalink
    • Martha Howell (mhowell@mas.to)'s status on Monday, 19-Feb-2024 05:13:14 JST
      in reply to

      @inthehands
      Backing way up, how many jobs require skills that are relevant to a specific sport? (And no, "teamwork" isn't an answer. There are a million non-sports examples of teamwork that can be highlighted in the average person's work history.)

      In conversation Monday, 19-Feb-2024 05:13:14 JST permalink
    • Analog AI (retreival9096@hachyderm.io)'s status on Monday, 19-Feb-2024 05:41:51 JST
      in reply to

      @inthehands You can just buy the same AI, and interview only people whose resumes were rejected.

      In conversation Monday, 19-Feb-2024 05:41:51 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 06:02:06 JST
      in reply to
      • Analog AI

      @Retreival9096
      This is a fairly compelling idea.

      In conversation Monday, 19-Feb-2024 06:02:06 JST permalink
    • sipuliina (sipuliina@mastodontti.fi)'s status on Monday, 19-Feb-2024 06:02:44 JST
      in reply to
      • AJ Sadauskas

      @inthehands @ajsadauskas I don't think any amount of fixing the biases of these systems by "implementing guardrails" or something like that will make things much better. These things simply shouldn't be done with an AI. And it isn't only about race, even though it is a prominent bias. There will always be biases, many of which will be harder to detect than race.

      In conversation Monday, 19-Feb-2024 06:02:44 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 08:19:33 JST
      in reply to
      • Sven A. Schmidt

      @finestructure Fair. There’s a loose analogy to the difference between structured software assignments, carefully designed to create clean conditions where only specific problems occur, and open-ended team projects.

      In conversation Monday, 19-Feb-2024 08:19:33 JST permalink
    • Sven A. Schmidt (finestructure@mastodon.social)'s status on Monday, 19-Feb-2024 08:19:35 JST
      in reply to

      @inthehands Hah, true. These experiments were a bit different though. Sure, you sometimes encountered real problems but the setups were well maintained and by and large you'd get decent results.

      What really lends itself to this kind of “broken experiment" is that you gather the data and can't tell if it's any good until you analyse it later. So you wouldn't be messing with students' data collection, “only” with their analysis.

      In conversation Monday, 19-Feb-2024 08:19:35 JST permalink
    • Bec (beccanalia@mastodon.social)'s status on Monday, 19-Feb-2024 08:21:14 JST
      in reply to
      • AJ Sadauskas

      @inthehands @ajsadauskas

      Hello. May I share this thread on LinkedIn?

      In conversation Monday, 19-Feb-2024 08:21:14 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 08:21:14 JST
      in reply to
      • Bec

      @beccanalia Sure, link away. It’s public, so a link to the first post should make the whole thread visible to anyone, even if they’re not logged in to Mastodon.

      In conversation Monday, 19-Feb-2024 08:21:14 JST permalink
    • StevenSavage (he/him) (stevensavage@sfba.social)'s status on Monday, 19-Feb-2024 08:21:28 JST
      in reply to

      @inthehands in a discussion I saw someone noted that a "removing AI from workflow" consulting company would soon be viable.

      In conversation Monday, 19-Feb-2024 08:21:28 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 08:21:59 JST
      in reply to
      • Amy Worrall

      @amyworrall Yeah, I wondered about that too!

      In conversation Monday, 19-Feb-2024 08:21:59 JST permalink
    • Amy Worrall (amyworrall@mastodon.social)'s status on Monday, 19-Feb-2024 08:22:02 JST
      in reply to
      • AJ Sadauskas
      • Ben Fulton

      @benfulton @inthehands @ajsadauskas Is lacrosse a male associated sport? I think of it as played by schoolgirls from the 1950s…

      In conversation Monday, 19-Feb-2024 08:22:02 JST permalink
    • Ben Fulton (benfulton@fosstodon.org)'s status on Monday, 19-Feb-2024 08:22:03 JST
      in reply to
      • AJ Sadauskas

      @inthehands @ajsadauskas I mean, I've been telling people to change their name to Luke and say they play lacrosse for years.

      In conversation Monday, 19-Feb-2024 08:22:03 JST permalink
    • Catherine Berry (isomeme@mastodon.sdf.org)'s status on Monday, 19-Feb-2024 09:18:14 JST
      in reply to
      • AJ Sadauskas

      @inthehands @ajsadauskas

      It's not even an ML-specific problem. The oldest axiom of computer programming is "Garbage in, garbage out".

      In conversation Monday, 19-Feb-2024 09:18:14 JST permalink
    • Bec (beccanalia@mastodon.social)'s status on Monday, 19-Feb-2024 10:13:52 JST
      in reply to

      @inthehands

      Thank you. I like to ask, or if I don't receive a timely response, at least tell folks what I would like to do / have done re their posts.

      In conversation Monday, 19-Feb-2024 10:13:52 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 10:13:52 JST
      in reply to
      • Bec

      @beccanalia
      That’s gracious. In my case, I treat everything I post here as fully public, but I like the respect you’re bringing to this environment.

      In conversation Monday, 19-Feb-2024 10:13:52 JST permalink
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 11:50:21 JST
      in reply to

      More of Paul’s grumbling on the topic of the AI mania sweeping the business world:
      https://hachyderm.io/@inthehands/111927598622910144

      In conversation Monday, 19-Feb-2024 11:50:21 JST permalink

      Attachments

      1. Paul Cantrell (@inthehands@hachyderm.io)
        from Paul Cantrell
        It’s like all these execs are selling their cows for 3 beans because they heard the fairy tale and now they honestly believe they’re going to climb a giant beanstalk and slay a giant
    • Rich Felker (dalias@hachyderm.io)'s status on Monday, 19-Feb-2024 14:08:53 JST
      in reply to

      @inthehands Epic.

      In conversation Monday, 19-Feb-2024 14:08:53 JST permalink
      Haelwenn /элвэн/ :triskell: likes this.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Monday, 19-Feb-2024 14:18:01 JST
      in reply to
      • skybrian

      @skybrian In the cases of hiring, coding, and writing, there is a point where the number of “leads” is high enough, and the quality low enough, that the cost of screening them is •worse• than starting from scratch. And I think a lot of people are huffing a lot of fumes right now about just how soon that point comes.

      In conversation Monday, 19-Feb-2024 14:18:01 JST permalink
    • skybrian (skybrian@mastodon.social)'s status on Monday, 19-Feb-2024 14:18:02 JST
      in reply to

      @inthehands

      One of these is not like the others. Here’s how I think about it: many processes can be thought of as generating a large number of leads and then screening them to find the good ones. In classic AI this is a generate-and-test algorithm. It’s vital that your testing works or you will get bad answers.

      Using AI for the “generate” phase is not nearly as bad as using it for screening phase, provided that your tests are good. And we do know how to test our code, don’t we?
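
The generate-and-test pattern described above can be sketched in a few lines (a toy illustration of the pattern under invented assumptions, not anyone's actual pipeline): an unreliable generator proposes candidates, a cheap and trusted test screens them, and the final answer is only as good as the test.

```python
# Toy generate-and-test loop (an illustration of the pattern, not anyone's
# actual pipeline): the generator is deliberately unreliable, the test is
# cheap and trusted, and the answer is only as good as the test.
import random

random.seed(2)

def sloppy_generator():
    """Propose candidate factor pairs for 91 -- almost always wrong."""
    return (random.randint(2, 90), random.randint(2, 90))

def is_valid(candidate):
    """The screening test: deterministic and trustworthy."""
    a, b = candidate
    return a * b == 91

# Keep generating until a candidate survives screening.
solution = next(c for c in iter(sloppy_generator, None) if is_valid(c))
print(solution)
```

The asymmetry skybrian points out is visible here: make the generator worse and the loop merely takes longer, but make `is_valid` flaky and the loop happily returns garbage.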

      In conversation Monday, 19-Feb-2024 14:18:02 JST permalink
    • Jeremy List (jeremy_list@hachyderm.io)'s status on Tuesday, 20-Feb-2024 12:13:09 JST
      in reply to

      @inthehands a few years ago my brother's workplace was training an ML model to distinguish microalgae from microscope slides. The model picked up on the correlation between the water source and which microscope was used to photograph it long before it even noticed any of the algae in its training data.

      In conversation Tuesday, 20-Feb-2024 12:13:09 JST permalink
    • Jeremy List (jeremy_list@hachyderm.io)'s status on Tuesday, 20-Feb-2024 12:36:01 JST
      in reply to

      @inthehands even in the hypothetical scenario where someone trains an AI that's actually a net positive when applied to finding candidates: I seriously doubt the training data used would be matching resumés with some group of humans' hiring decisions; which AFAIK is what every existing HR AI was trained on.

      In conversation Tuesday, 20-Feb-2024 12:36:01 JST permalink


GNU social JP is a social network, courtesy of GNU social JP管理人. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.