GNU social JP
GNU social JP is a Japanese GNU social server.
Conversation

Notices

  1. Kale (darkestkale@mastodon.social)'s status on Tuesday, 12-Mar-2024 15:56:13 JST

    Good morning folks

    In conversation about a year ago from mastodon.social

    Attachments


    1. https://files.mastodon.social/media_attachments/files/112/079/471/574/166/965/original/0140f0c49e030e9d.jpg
    • Kale (darkestkale@mastodon.social)'s status on Tuesday, 12-Mar-2024 15:56:11 JST

      Let's say I have an LLM

      prompt: your name is Bob, you're pleasant to talk to.

      user: hey Bob.
      response: hey user! how are you?

      but then later

      user: hey Bob!
      response: Bob says hello.
      user: aren't you Bob?
      response: I am an AI that simulates Bob
      user: so you're not Bob
      response: no. Bob is a person that I act out for you.

      so, you see, you now have Bob (who will probably never address you directly again), and you have... the AI that simulates Bob. Which is not the same thing as 'I have Bob'.

      In conversation about a year ago
    • 13 barn owls in a trenchcoat (hauntedowlbear@eldritch.cafe)'s status on Tuesday, 12-Mar-2024 15:56:11 JST

      @DarkestKale th

      In conversation about a year ago
    • Kale (darkestkale@mastodon.social)'s status on Tuesday, 12-Mar-2024 15:56:12 JST

      What's silly is that these things are often free - so students should be here seeing things, making connections - but they don't get told to come.

      In conversation about a year ago
    • Kale (darkestkale@mastodon.social)'s status on Tuesday, 12-Mar-2024 15:56:12 JST

      So, dorking around with LLM stuff more, and I'm going to call this 'delaminating'.

      You have an LLM and give it a persona. But, eventually, it starts projecting that into a person, and another AI level starts talking.

      i.e. when you have the persona it talks in first person, but then when it delaminates, you get descriptions of the persona talking to you.

      ... and if you challenge it, you get the super-AI that says it pretends to be the persona.

      In conversation about a year ago
    • Kale (darkestkale@mastodon.social)'s status on Tuesday, 12-Mar-2024 15:56:13 JST

      Am at a trade show today.

      It's neat seeing cool things.

      In conversation about a year ago
    • 13 barn owls in a trenchcoat (hauntedowlbear@eldritch.cafe)'s status on Tuesday, 12-Mar-2024 17:42:11 JST

      @DarkestKale sorry mashed the phone while bookmarking this to engage with later. Now is later!

      This is something I've seen too, and I love your term for it.

      How are you handling your chatbot's "memory"?

      Are you feeding it in via a prompting routine (e.g. feeding back conversation history every round) or using a service that does that quietly in the background?

      (At this point I'm wondering if one approach is more susceptible than the other.)

      In conversation about a year ago
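HauntedOwlbear's first option - feeding the conversation history back every round - can be sketched in a few lines. Everything here is hypothetical: `fake_llm` is a stand-in for a real model call, and `chat_round` just rebuilds the prompt from scratch each turn, which is all "memory" really is with a stateless model.

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call; echoes the last user line."""
    last_user = [l for l in prompt.splitlines() if l.startswith("user:")][-1]
    return "echo: " + last_user[len("user:"):].strip()

def chat_round(system: str, history: list, user_msg: str) -> str:
    """Rebuild the full prompt from scratch each round: the model itself
    is stateless, so 'memory' is just the history we choose to resend."""
    history.append(("user", user_msg))
    prompt = system + "\n" + "\n".join(f"{role}: {text}" for role, text in history)
    reply = fake_llm(prompt)
    history.append(("assistant", reply))
    return reply

history = []
chat_round("your name is Bob, you're pleasant to talk to.", history, "hey Bob")
chat_round("your name is Bob, you're pleasant to talk to.", history, "how are you?")
# history now holds both rounds, so each new prompt carries the whole conversation
```

A service that manages memory "quietly in the background" is doing the same loop, just out of sight - which is why both approaches feed the model its own past output either way.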
    • Kale (darkestkale@mastodon.social)'s status on Tuesday, 12-Mar-2024 17:42:12 JST
      in reply to 13 barn owls in a trenchcoat

      @HauntedOwlbear hrm?

      In conversation about a year ago
    • Kale (darkestkale@mastodon.social)'s status on Tuesday, 12-Mar-2024 19:02:21 JST
      in reply to 13 barn owls in a trenchcoat

      @HauntedOwlbear Silly Tavern lets you - and this is amazingly smart - edit responses.

      So when it says 'as an AI...' you can actually regenerate it a few times, pick one that's /closest/ to not being fucked up, then edit out the part you don't like. Then, when it gets bundled into the context window, it's not reinforcing bad behaviour.

      In conversation about a year ago
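The regenerate-and-edit trick Kale describes can be sketched as below. This is a toy approximation, not SillyTavern's actual code: `scrub` and `commit` are made-up helpers, and the "refusal" check is just a regex.

```python
import re

REFUSAL = re.compile(r"as an (AI|LLM)", re.IGNORECASE)

def scrub(reply: str) -> str:
    """Drop sentences that break persona before the reply is committed
    to history, so the bad pattern never re-enters the context window."""
    kept = [s for s in re.split(r"(?<=[.!?])\s+", reply) if not REFUSAL.search(s)]
    return " ".join(kept)

def commit(history: list, candidates: list) -> None:
    """Pick the regeneration closest to staying in character (fewest
    refusal hits), edit the rest out, then append that version."""
    best = min(candidates, key=lambda c: len(REFUSAL.findall(c)))
    history.append(("assistant", scrub(best)))

history = []
commit(history, [
    "As an AI, I cannot have opinions. The weather looks fine.",
    "The weather looks lovely today!",
])
# history stores only the in-character reply
```

The key design point is that editing happens *before* the reply joins the history, so the next prompt never contains the out-of-character text.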
    • 13 barn owls in a trenchcoat (hauntedowlbear@eldritch.cafe)'s status on Tuesday, 12-Mar-2024 19:02:21 JST

      @DarkestKale Ah, this is all super helpful, because I frequently write my own (shitty-but-functional) applications for interacting with local models, so I am absolutely not in the habit of using standard terminology as I'm mostly wrapped up in my home-grown nonsense.

      So yeah, what I mostly do is feed the previous entries on both sides of the conversation back to the chatbot (ohai token limit), and while I've seen a few approaches to using databases to manage long- or short-term memory, they're all more labour-intensive than what I'm looking for, for my fuckery.

      Definitely going to see if I can get SillyTavern talking to Oobabooga based on what you've said.

      In conversation about a year ago
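The "ohai token limit" problem - the fed-back history grows until it no longer fits - is usually handled with a sliding window over the most recent turns. A minimal sketch, using word count as a crude stand-in for a real tokenizer (all names here are invented):

```python
def truncate_history(history: list, budget: int) -> list:
    """Naive sliding window: keep the most recent turns whose combined
    'token' count (approximated here as word count) fits the budget."""
    kept, used = [], 0
    for role, text in reversed(history):
        cost = len(text.split())          # crude stand-in for a real tokenizer
        if used + cost > budget:
            break
        kept.append((role, text))
        used += cost
    return list(reversed(kept))

history = [
    ("user", "hey Bob"),
    ("assistant", "hey user! how are you?"),
    ("user", "tell me a long story about owls in trenchcoats"),
    ("assistant", "once upon a time thirteen barn owls shared one coat"),
]
recent = truncate_history(history, budget=20)
# the oldest turns fall off first; everything kept still fits the window
```

This is the dumb baseline the "we'll be smart about it" techniques improve on: it forgets the start of the conversation entirely once the budget is exceeded.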
    • Kale (darkestkale@mastodon.social)'s status on Tuesday, 12-Mar-2024 19:02:22 JST
      in reply to 13 barn owls in a trenchcoat

      @HauntedOwlbear Getting back to your query, I've found that delamination mostly occurs when you ask for an opinion on... something.

      Mostly, something physical.

      This causes a chance for 'as a sentient AI, I cannot...' type responses. Like, if you have any kind of temperature on your model, then every time you ask for an opinion, it MIGHT spit out the 'as an LLM, I can't tell you an opinion' - and once that line's crossed, the 'persona' is kinda gone.

      In conversation about a year ago
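Kale's point about temperature is just probability compounding: if each opinion question carries even a small chance of an 'as an AI...' response, a long chat almost certainly hits one eventually, and the persona is gone from then on. A toy Monte Carlo (the 2% per-question refusal rate is an invented number for illustration):

```python
import random

def ask_opinion(p_refusal: float, rng: random.Random) -> str:
    """With nonzero temperature every opinion question is a dice roll:
    a small chance of 'as an AI...' instead of an in-character answer."""
    return "as an AI, I cannot..." if rng.random() < p_refusal else "love it!"

def persona_survives(n_questions: int, p_refusal: float, seed: int) -> bool:
    """Once a single refusal lands in the context window, the persona
    is treated as gone (it reinforces itself from then on)."""
    rng = random.Random(seed)
    return all(not ask_opinion(p_refusal, rng).startswith("as an AI")
               for _ in range(n_questions))

# Even a 2% per-question refusal rate kills most long chats:
# P(survive 100 questions) = 0.98**100, roughly 0.13
survival = sum(persona_survives(100, 0.02, seed) for seed in range(1000)) / 1000
```

So "any kind of temperature" plus enough opinion questions makes delamination close to inevitable, which matches what Kale observes.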
    • Kale (darkestkale@mastodon.social)'s status on Tuesday, 12-Mar-2024 19:02:22 JST
      in reply to 13 barn owls in a trenchcoat

      @HauntedOwlbear (Because the context window forces it to re-read its own statement that it's an AI, and that reinforces it, and... well.)

      In conversation about a year ago
    • Kale (darkestkale@mastodon.social)'s status on Tuesday, 12-Mar-2024 19:02:23 JST
      in reply to 13 barn owls in a trenchcoat

      @HauntedOwlbear Naturally, 2k goes fast, so there are a few techniques to work around this. Some amount to 'ok, not just a FIFO list of tokens - we'll be smart about which bits we keep...', and beyond that you have:
      RAG - retrieval augmented generation, aka 'get a little extra info and send that along with the prompt', which you CAN do with your own chatlogs
      or
      'Always send X thing with the prompt, every time', which is what Silly Tavern does.

      In conversation about a year ago
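The RAG option - "get a little extra info and send that along with the prompt" - can be sketched with a toy retriever over your own chatlogs. Real setups score by embedding similarity; plain word overlap is used here only to show the shape, and every name is made up:

```python
def retrieve(chatlog: list, query: str, k: int = 2) -> list:
    """Toy RAG retriever: score each past line by word overlap with the
    query and return the top-k. Real setups use embeddings, but the idea
    is the same: fetch a little extra info to send along with the prompt."""
    q = set(query.lower().split())
    scored = sorted(chatlog,
                    key=lambda line: len(q & set(line.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(system: str, chatlog: list, user_msg: str) -> str:
    """Prepend whatever was retrieved, clearly delimited, then the message."""
    context = "\n".join(retrieve(chatlog, user_msg))
    return f"{system}\n[retrieved]\n{context}\n[/retrieved]\nuser: {user_msg}"

log = ["Bob loves trebuchets", "the trade show was neat", "owls wear trenchcoats"]
prompt = build_prompt("you are Bob", log, "what does Bob love?")
# the trebuchet line scores highest and rides along with the prompt
```

The "always send X with the prompt, every time" alternative is just this with the retrieval step replaced by a fixed string - cheaper and predictable, but it spends context budget whether or not X is relevant.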
    • Kale (darkestkale@mastodon.social)'s status on Tuesday, 12-Mar-2024 19:02:23 JST
      in reply to 13 barn owls in a trenchcoat

      @HauntedOwlbear

      Silly Tavern does something I'd thought of, which is to say that you can define keywords, and when you mention the keyword, it prepends the definition in the prompt.

      This is good for characters, locations, etc - but it's a pain to keep formatted well, etc.

      Proper RAG is a bit easier, but shakier in its actual reliability.

      In conversation about a year ago
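The keyword-triggered definitions Kale describes (SillyTavern's feature along these lines is, I believe, called World Info) might look roughly like this - a sketch with made-up lore entries, not SillyTavern's actual implementation:

```python
LORE = {
    "bob": "Bob is a cheerful persona who loves trebuchets.",
    "trade show": "The trade show is a free local industry event.",
}

def inject_lore(user_msg: str, lore: dict = LORE) -> str:
    """Keyword trigger (sketch): if a defined keyword appears in the
    message, prepend its definition to the prompt for this round only."""
    hits = [text for key, text in lore.items() if key in user_msg.lower()]
    return "\n".join(hits + [f"user: {user_msg}"])

prompt = inject_lore("Is Bob going to the trade show?")
# both keywords fire, so both definitions are prepended before the user line
```

This shows both the appeal and the pain Kale mentions: definitions only cost context when triggered, but every entry is hand-written text you have to keep consistently formatted yourself.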
    • Kale (darkestkale@mastodon.social)'s status on Tuesday, 12-Mar-2024 19:02:24 JST
      in reply to 13 barn owls in a trenchcoat

      @HauntedOwlbear ok, so the 'memory' aspect is interesting. Lemme write a bit for you (assuming you might know some stuff, but gonna rehash anyway cause that's how my brain works).

      Typically, the 'memory' of the LLM is called the context window. In most local models, that's about 2k tokens, and that's trained in, so you can't just say 'I have heaps of RAM so MOAR'.

      In conversation about a year ago
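Since the window is a hard, trained-in limit that has to hold both the prompt and the model's reply, front-ends typically budget against it before sending anything. A sketch, using an invented 4-characters-per-token heuristic (a real tokenizer - the model's own - gives the exact count):

```python
CONTEXT_WINDOW = 2048   # typical trained-in limit for small local models

def approx_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits(prompt: str, reserve_for_reply: int = 256) -> bool:
    """The window holds prompt AND reply, so reserve room for generation."""
    return approx_tokens(prompt) + reserve_for_reply <= CONTEXT_WINDOW

fits("hey Bob")          # plenty of room
fits("words " * 4000)    # far past 2k tokens - must truncate or retrieve instead
```

When `fits` fails, you're back to the techniques above: slide the window, retrieve selectively, or always-send only the essentials.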


GNU social JP is a social network, courtesy of GNU social JP管理人. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.