@kaia in my experience, models trained on GPT-J (e.g. groovy) give better results compared to the ones trained on LLaMA (e.g. snoozy & vicuna).
I guess this happens because Meta built LLaMA on a SIMULATION of reasoning, which is basically glorified word statistics. 😟
That said, I can confirm that all 7B models suck ass from a straw, to quote the AVGN, but when I tried to load 30B models in gpt4all I had no luck (they don't even show up in the model list).
I'm still trying to figure out how to make the terminal version work on Linux using only gcc/Python, without installing NodeJS or other weird stuff 🤔
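FWIW, if pure Python counts, here's roughly what I'd try (untested sketch, assuming the official gpt4all pip bindings; the model file name below is just an example, swap in one from your download list):

```python
# Untested sketch: the gpt4all pip package runs models straight from the
# terminal with no NodeJS involved (pip install gpt4all).
from gpt4all import GPT4All

# Example model name (assumption) -- use whatever appears in your model
# list; it gets downloaded to ~/.cache/gpt4all/ on first run.
model = GPT4All("ggml-gpt4all-j-v1.3-groovy")

# Generate a completion and print it to stdout.
print(model.generate("Explain what a 7B parameter model is.", max_tokens=128))
```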