@napocornejo@masto.ai
Yesterday I played with #phi3 #LLM - actually it's a state-of-the-art #SLM - on a cloud #GPU. It runs faster than mistral-nemo on the same GPU while still giving encyclopedic answers.
I haven't yet compared phi3 against a mid-sized or large #Mistral or #Mixtral model. When I do, phi3 just might surprise me.
@icedquinn@blob.cat
kuteboiCoder (kuteboicoder@subs4social.xyz)'s status on Thursday, 21-Nov-2024 10:27:28 JST