Conversation
kaia (kaia@brotka.st)'s status on Thursday, 09-Nov-2023 21:44:34 JST kaia
how to deploy an LLM without ethical filters? asking for me.
ロミンちゃん (romin@shitposter.club)'s status on Thursday, 09-Nov-2023 21:45:47 JST ロミンちゃん
@kaia have some tens of millions of dollars to spare?
kaia (kaia@brotka.st)'s status on Thursday, 09-Nov-2023 21:48:03 JST kaia
@romin no, but I know it's doable running an LLM locally
autism :verified: (jeff@misinformation.wikileaks2.org)'s status on Thursday, 09-Nov-2023 21:53:52 JST autism :verified:
@kaia get the weights without the censors
ロミンちゃん (romin@shitposter.club)'s status on Thursday, 09-Nov-2023 21:57:05 JST ロミンちゃん
@kaia things have improved somewhat in the last year, but it is just not there yet; GPT-4 is light years away from what you can run locally at the moment. You can either install the Python bloat or use llama.cpp. The local models thread on /g/ (warning: coomers) should have enough resources.
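For context on the llama.cpp route mentioned above: it loads models converted to its GGUF file format, whose header starts with the 4-byte magic b"GGUF" followed by a little-endian uint32 version. A minimal sketch (the filename is made up for illustration) of sniffing whether a downloaded file is actually a GGUF model:

```python
import struct

def gguf_version(path):
    """Return the GGUF version number if the file looks like a GGUF
    model (llama.cpp's format), else None.

    GGUF files begin with the magic bytes b"GGUF" followed by a
    little-endian uint32 version field.
    """
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != b"GGUF":
        return None
    (version,) = struct.unpack("<I", header[4:8])
    return version

# Fabricated 8-byte header for demonstration; a real model file
# would be gigabytes and carry tensor data after the header.
with open("fake.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))

print(gguf_version("fake.gguf"))  # prints 3
```

This only checks the magic and version, which is enough to catch downloads that are actually HTML error pages or older-format files before handing them to llama.cpp.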
sirsegv :heart_clockwork: (squidink7@misskey.fryer.net.au)'s status on Thursday, 09-Nov-2023 22:04:51 JST sirsegv :heart_clockwork:
@kaia@brotka.st @romin@shitposter.club Most locally-run LLMs don't have filters, but you'd need to make sure not to get one trained on any ChatGPT (or similar) data, as that would taint it