guizzy (in exile) (guizzy@shitposter.club)'s status on Sunday, 22-Oct-2023 06:08:27 JST

@Moon @TheQuQu There's always the option of chaining LLMs and autoprompting to overcome the limitations of the basic token-prediction design. You can create multiple sets of system prompts, and even different models/finetunes specialized on different aspects of cognition, and have them work out an answer to a prompt together while sharing a database they can query.
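The chaining idea described above could be sketched roughly like this: each "specialist" is a (system prompt, role name) pairing, specialists take turns on the task, and all of them read from and write to a shared store they can query. The `call_model` function here is a hypothetical stub standing in for a real LLM endpoint, and the role names are invented for illustration.

```python
def call_model(system_prompt, user_prompt):
    """Stub for an LLM call; a real setup would hit a model or finetune endpoint."""
    return f"[{system_prompt}] response to: {user_prompt}"

shared_db = {}  # shared memory that every specialist can query and update

# Each specialist gets its own system prompt (and, in a real setup,
# possibly its own model or finetune).
specialists = [
    ("You are a planner: break the task into steps.", "planner"),
    ("You are a critic: check the plan for flaws.", "critic"),
]

def run_chain(task):
    context = task
    for system_prompt, name in specialists:
        # Each specialist sees the current context plus everything
        # written to the shared database so far.
        notes = "; ".join(f"{k}: {v}" for k, v in shared_db.items())
        answer = call_model(system_prompt, f"{context}\nShared notes: {notes}")
        shared_db[name] = answer   # write the result back to the shared store
        context = answer           # chain: the next specialist builds on this output
    return context
```

With a real model behind `call_model`, the same loop would let differently-prompted instances hand work to each other instead of relying on one model's single forward pass.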