@sickburnbro @KuteboiCoder Seems to be an issue with their inability to train standard models to have "self-doubt", causing them to just flatly state info that would otherwise read as illogical to a human mind.
I was reading an article the other day about a new method they're experimenting with that forces the AI to generate a set of candidate answers to a problem and then analyze them before delivering a final response.

Supposedly it produced results that would otherwise require a substantially larger model to achieve, at the cost of some extra seconds of response time.
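For anyone curious, here's a minimal Python sketch of that generate-then-analyze idea (it resembles what the literature calls self-consistency sampling). The generate() helper is hypothetical, standing in for whichever model API you'd actually call:

```python
from collections import Counter

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical stand-in for one sampled model completion.
    A real version would call whatever LLM API you're using."""
    raise NotImplementedError

def self_consistent_answer(prompt: str, n_samples: int = 8) -> str:
    # Sample several independent answers at nonzero temperature
    # so the model explores different reasoning paths.
    candidates = [generate(prompt) for _ in range(n_samples)]
    # Simplest form of the "analyze" step: majority vote over
    # the candidates, so outlier answers get discarded.
    winner, _count = Counter(candidates).most_common(1)[0]
    return winner
```

The extra response time falls out directly from making n_samples model calls instead of one.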