GNU social JP
GNU social JP is a Japanese GNU social server.
Conversation


  • Falcon Darkstar (falcon@mastodon.falconk.rocks)'s status on Tuesday, 07-Jan-2025 11:45:50 JST

    I will be speaking at the final Shmoocon about eliminating the impact of SQL injection, and not about how AI cannot do pentests. But since the topic keeps coming up anyway I may as well squawk about it.

    So much of the story about AI is this: what if instead of making necessary changes, we found a way to automate digging the hole deeper?

    So it is with this. Scanners of all types, including genAI ones, can find the same old patterns. It is apparently fancy to do it with particular algorithms.
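A toy illustration of that claim (a hypothetical scanner, nothing from the talk): pattern matchers of any sophistication are, at bottom, recognizing shapes like this one.

```python
import re

# Toy pattern-based "scanner": flags string-concatenated SQL --
# the same old pattern every tool, genAI or not, can find.
SQLI_PATTERN = re.compile(r'execute\(\s*["\'].*["\']\s*\+')

code = '''
cur.execute("SELECT * FROM users WHERE name = '" + name + "'")
cur.execute("SELECT * FROM users WHERE name = ?", (name,))
'''

hits = [line for line in code.splitlines() if SQLI_PATTERN.search(line)]
print(len(hits))  # 1 -- only the concatenated query matches
```

The second query, which binds its parameter properly, never trips the pattern; the scanner's entire value is in recognizing the first.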

    Rich Felker repeated this.

    • Falcon Darkstar (falcon@mastodon.falconk.rocks)'s status on Tuesday, 07-Jan-2025 11:46:04 JST

      We should not, in software engineering, try to figure out how to automate the ruinous process of burying each other in boilerplate code and existing patterns. If there is a kernel of commonality among APIs, we should build a clean abstraction.

      In other words, sure your IDE or some chatbot can generate a deep copy for a compiler to consume. But why bother? We all know what a deep copy is. Formally. The language should include a construct for the compiler to generate one.
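As an illustrative sketch of the point (not the author's proposal): Python's standard library already treats deep copying as a derived, generic operation rather than per-type boilerplate.

```python
import copy
from dataclasses import dataclass, field

# Hypothetical example type; the point is that no per-type
# deep-copy code has to be written by hand.
@dataclass
class Node:
    value: int
    children: list = field(default_factory=list)

tree = Node(1, [Node(2), Node(3)])
clone = copy.deepcopy(tree)  # derived generically for any object graph

clone.children[0].value = 99
print(tree.children[0].value)  # prints 2 -- the original is untouched
```

A compiler-level construct, as the post suggests, would go one step further and generate the copy at compile time rather than by runtime reflection.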

    • Falcon Darkstar (falcon@mastodon.falconk.rocks)'s status on Tuesday, 07-Jan-2025 11:46:17 JST

      So it is with that perennial nuisance, the injection vulnerability. All the fuzzers in the world cannot hope to find all of them.

      There are two paths.

      One, keep looking. Markets spring up for discoveries. Make measurements of density, value, and expected loss. Acknowledge they remain forever and keep getting hacked.

      Two, apply the principles we have known since at least the development of Signalling System No. 7 in fucking 1984, and separate the control and data channels properly.
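For SQL, that separation is a solved problem at the interface level. A minimal sketch with Python's sqlite3 (an illustration, not from the talk): the SQL text is the fixed control channel, and untrusted input travels only as a bound parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

attacker_input = "alice' OR '1'='1"  # classic injection payload

# Control channel: the fixed SQL text. Data channel: the bound parameter.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?",
    (attacker_input,),
).fetchall()
print(rows)  # [] -- the payload is inert data, never parsed as SQL
```

No scanner is needed to find injections in this query, because the construction makes them impossible.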

    • Falcon Darkstar (falcon@mastodon.falconk.rocks)'s status on Tuesday, 07-Jan-2025 11:46:38 JST

      So it is also with cryptography vulnerabilities. In the practice of engineering, if you build a boat anchor out of lithium, you will be fired and probably sued. Things come with stated limits and intended purposes.

      You know, like GCM and 4GB of data.

      In no other field does an engineer get to just put random parts together, run a process through it for 5 seconds, dump a bunch of water in and see that it fails, and then sell the assembly at scale. But that is how we build and test software.

    • Falcon Darkstar (falcon@mastodon.falconk.rocks)'s status on Tuesday, 07-Jan-2025 11:46:55 JST

      If you strip away all the jargon about 0days and "IDOR" and whatever, you are left with this.

      Vulnerabilities arise either because we didn't know about a potential threat yet, or because we keep doing the same old shit.

      Structurally, AI is capable of finding the latter.

      The latter should not exist, but we invested everything in trying to find each one individually instead of stamping them out at the root, because developing the practice of software engineering is apparently impossible.

    • Falcon Darkstar (falcon@mastodon.falconk.rocks)'s status on Tuesday, 07-Jan-2025 11:47:12 JST

      AI, as we know it, is fundamentally about clustering, and sometimes about finding additional cluster members in randomness. An LLM does basically this. It cannot model the reason something exists or what might cause loss to the thing's operator. All that it can do is find patterns, or signals, and identify them as members of a cluster, some of which will be labelled bad.

This is what pentesting often is: a reproducible application of known techniques laid out in advance.

    • Falcon Darkstar (falcon@mastodon.falconk.rocks)'s status on Tuesday, 07-Jan-2025 11:47:24 JST

      That work is not useful work. It is rightly automated, but more in the sense of a design rules check. It is tragic that we waste people's time doing that. It is more tragic that this activity has not ceased to uncover vulnerabilities in production systems, because this shows we have learned nothing from the vulnerable patterns.

      Clear, explicit specifications and output encoding help. Building simpler systems about which we can reason helps. An outside consultant critiquing the model helps.
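A minimal sketch of the output-encoding point (illustrative, not the author's code): encode at the boundary, so untrusted text can never be interpreted as markup.

```python
from html import escape

untrusted = '<script>alert("x")</script>'

# Encode at the output boundary: the browser sees text, not markup.
rendered = "<p>{}</p>".format(escape(untrusted))
print(rendered)
# <p>&lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;</p>
```

As with parameterized queries, the encoding is applied uniformly by construction, not rediscovered one finding at a time.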

    • Falcon Darkstar (falcon@mastodon.falconk.rocks)'s status on Tuesday, 07-Jan-2025 11:47:32 JST

      The value is in what the founders of our field did: figure out how to do something cool that you are not supposed to be able to do.

      This is the antithesis of finding additional cluster members.

      The "pentesting" AI fills the role of the person who keeps laughing about finding the same pattern over and over. It is valid, but the solution is not to scale that dude up.


GNU social JP is a social network, courtesy of GNU social JP管理人. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.