GNU social JP
GNU social JP is a Japanese GNU social server.
Conversation

Notices

  1. Tom Morris (tommorris@mastodon.social)'s status on Friday, 23-May-2025 08:51:22 JST

    A thing I’ve been thinking about: when someone says “this is a good use case for generative/agentic AI”, that’s usually a sign that the process could be improved.

    Like, people use LLMs to write overly fluffy covering letters for job applications. OK, just have an application form.

    Or people use LLMs to understand errors when coding. Okay, that’s a sign to make the error handling more readable/helpful. E.g. the Rust compiler has pretty excellent errors compared to “syntax error on line 37”.
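    To make the contrast concrete, here is a minimal Python sketch (not from the thread; `parse_port` and its messages are illustrative) of error handling that names the problem and suggests a fix, rather than failing with a bare "syntax error on line 37":

```python
def parse_port(value: str) -> int:
    """Parse a TCP port, failing with a diagnostic rather than a bare error."""
    try:
        port = int(value)
    except ValueError:
        # Say what was expected, not just that parsing failed.
        raise ValueError(
            f"invalid port {value!r}: expected an integer like '8080'"
        ) from None
    if not 0 < port < 65536:
        raise ValueError(f"port {port} out of range: must be between 1 and 65535")
    return port
```

    The design choice is the same one Rust's diagnostics make: the message carries enough context that the user never needs to paste it into an LLM.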

    In conversation about 9 months ago from mastodon.social
    • Rich Felker (dalias@hachyderm.io)'s status on Friday, 23-May-2025 08:55:05 JST
      in reply to Tom Morris

      @tommorris This this this 👆

      The fact that "AI" can successfully "do" X - not deceive someone that it did X, but actually do it - is definitive evidence that X is a bullshit job that shouldn't have needed to be done to begin with.

      Identify and eliminate this bullshit. Don't burn the physical world and the information plane automating it.

      In conversation about 9 months ago
    • Tom Morris (tommorris@mastodon.social)'s status on Friday, 23-May-2025 08:55:58 JST

      Also the classic coding one: using LLMs to generate basic boilerplate code. This is a sign, perhaps, of a lack of maturity of the language ecosystem.

      Outside of university assignments/personal study, why are you reimplementing textbook/boilerplate code rather than it being in stdlib or a trusted annex to stdlib (e.g. Java’s Commons-Lang)? That’s a genuine question to ask.

      Hard to find? Dependency management sucks? Risk of (supply chain) vulns? These are language/community issues worth fixing!
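      Tom's point maps cleanly onto Python, where stdlib modules such as `dataclasses` absorb boilerplate that would otherwise be hand-written or generated. A minimal sketch (`Point` is an illustrative name, not from the thread):

```python
from dataclasses import dataclass

# @dataclass generates __init__, __repr__, and __eq__ from the field
# declarations - roughly twenty lines of boilerplate that nobody (human
# or LLM) has to write, review, or maintain.
@dataclass(frozen=True)
class Point:
    x: float
    y: float
```

      When the boilerplate lives in the stdlib, generating it per-project stops being a job at all.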

      In conversation about 9 months ago
    • Tom Morris (tommorris@mastodon.social)'s status on Friday, 23-May-2025 08:55:59 JST

      The report is long and complicated so people are asking an LLM to summarise it? That sounds like the report author needs to level up their writing ability. Make the executive summary better.

      Students are using AI bots to explain dense material? Okay, there’s an opportunity there for a more entry level textbook (e.g. the Cambridge or Routledge Companion series). Or for more group discussion between students to supplement lectures/assigned reading.

      In conversation about 9 months ago
    • Piers Cawley (pdcawley@mendeddrum.org)'s status on Friday, 23-May-2025 08:56:27 JST
      in reply to Tom Morris

      @tommorris One of the signs of a codebase maintained by developers who care is the amount of boilerplate that's hidden behind domain-specific abstraction.

      Don't generate the fucking boilerplate for me; generate the domain specific stuff I need to eliminate it, or get the fuck out of my way you tedious Clippy++ attempt.

      In conversation about 9 months ago
    • Tom Morris (tommorris@mastodon.social)'s status on Friday, 23-May-2025 08:56:48 JST
      in reply to Piers Cawley

      @pdcawley Funnily enough, one of the examples I've been thinking about is AI generated tests for repetitive CRUD stuff.

      Needing too many AI generated unit tests may be a sign that the language/framework/codebase doesn't have good enough abstractions or type safety etc.

      On a very basic level, if I declare that a function takes Optional[String], and if—big if, mypy!—I trust the type system to enforce it, I don't need an AI to spit out twenty "what if you gave it an int though?" tests.
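      In Python terms (the thread's `Optional[String]` would be spelled `Optional[str]`), a minimal sketch of trusting the checker instead of type-guessing tests; `greeting` is an illustrative name, not from the thread:

```python
from typing import Optional

def greeting(name: Optional[str]) -> str:
    # Under mypy, callers can only pass str or None; there is no
    # "what if it's an int?" case left for a unit test to cover.
    if name is None:
        return "Hello, stranger"
    return f"Hello, {name}"
```

      The only behaviours worth testing are the two the signature admits: a string and `None`.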

      In conversation about 9 months ago
    • Tom Morris (tommorris@mastodon.social)'s status on Friday, 23-May-2025 08:58:03 JST
      in reply to Paco Hope

      @paco The other definition of AI in wide use by the political class is: "I dunno what the hell it is, but $OUR_COUNTRY needs to win at it rather than the evil scheming bastards in $OTHER_COUNTRY."

      In conversation about 9 months ago
    • Paco Hope (paco@infosec.exchange)'s status on Friday, 23-May-2025 08:58:04 JST
      in reply to Tom Morris

      @tommorris Mostly this sounds like the person saying “this is a good case for AI” is accurately sensing something that could be improved.

      But you’ve omitted the case where what needs to be improved is the person speaking. Material is hard to read? The person might need to learn more before they can understand it. The basics might be available and well written but the speaker hasn’t learned them yet.

      Some people have opinions on fields and subjects where they have very little knowledge or expertise, and they think AI will bridge that gap, letting them achieve things in a domain in which they are not yet sufficiently literate or expert.

      Today when someone says they want to use “AI” they mean one of 3 things:
      - predictive machine learning (eg transcription, OCR, translation, etc)
      - generative AI (LLM, image generation, etc)
      - pure made-up computer magic

      In conversation about 9 months ago
    • Rich Felker (dalias@hachyderm.io)'s status on Friday, 23-May-2025 08:58:46 JST
      in reply to Paco Hope

      @tommorris @paco The only way you win is by declaring that the emperor has no clothes.

      (That all the other countries are pouring resources down the drain and burning the planet and that you're not going to do that.)

      In conversation about 9 months ago
    • HiIamInfi (hiiaminfi@mastodon.social)'s status on Friday, 23-May-2025 09:00:47 JST
      in reply to Tom Morris

      @tommorris I think the most promising pitch for me was meeting minutes: automating the process of writing down what was said and narrowing it down to the key bits. But that also falls apart once you accept that 1. meetings without an agenda should be an email, 2. "urgent meetings" are called calls and usually only need a 2-3 sentence summary, and 3. a permanent record of a project will always be better than an endless chain of meeting minutes.

      In conversation about 9 months ago
    • Rich Felker (dalias@hachyderm.io)'s status on Friday, 23-May-2025 09:00:47 JST
      in reply to HiIamInfi

      @hiiaminfi @tommorris That "narrowing it down to key bits" (summarizing) is something "AI" *cannot do*.

      It can pick bits that match patterns for what was important in other people's summaries of other things. But there's no a priori reason to expect that matches what's important in your material to be summarized. And empirically, it often doesn't.

      In conversation about 9 months ago


GNU social JP is a social network, courtesy of the GNU social JP administrator. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.