Conversation
Notices
iced depresso (icedquinn@blob.cat)'s status on Thursday, 21-Nov-2024 10:27:25 JST iced depresso @napocornejo @KuteboiCoder neither was i, although i don't typically pay attention to LLMs :blobfoxinnocent:
i would like to finagle a text to speech model though.
kuteboiCoder (kuteboicoder@subs4social.xyz)'s status on Thursday, 21-Nov-2024 10:27:28 JST kuteboiCoder @napocornejo@masto.ai
Yesterday I played with #phi3 #LLM - actually it's a state-of-the-art #SLM - on a cloud #GPU. It runs faster than mistral-nemo on the same GPU while still giving encyclopedic answers.
I haven't compared phi3 against a mid-sized or large #Mistral #Mixtral model. When I do, it just might surprise me.
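A rough side-by-side timing like the one described above could be sketched as follows, assuming the models are served locally through Ollama's HTTP API. The host address, endpoint, and model tags here are assumptions for illustration, not details from the thread:

```python
import json
import time
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # assumed default Ollama address


def build_request(model, prompt):
    """Build the JSON payload for a single non-streaming generation."""
    return {"model": model, "prompt": prompt, "stream": False}


def time_generation(model, prompt):
    """Send one prompt to the server and return (elapsed seconds, response text)."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_HOST}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return time.perf_counter() - start, body.get("response", "")


# Usage (requires a running Ollama server with both models pulled):
#   for tag in ("phi3", "mistral-nemo"):
#       secs, answer = time_generation(tag, "What is the capital of Austria?")
#       print(f"{tag}: {secs:.1f}s")
```

Wall-clock time per identical prompt is a crude proxy; tokens generated per second would be a fairer comparison between a small model like phi3 and a larger one like mistral-nemo.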
@icedquinn@blob.cat
Napoleon Cornejo (napocornejo@masto.ai)'s status on Thursday, 21-Nov-2024 10:27:28 JST Napoleon Cornejo @KuteboiCoder @icedquinn Interesting. I wasn't aware of these #phi3 model(s).
Napoleon Cornejo (napocornejo@masto.ai)'s status on Thursday, 21-Nov-2024 10:27:29 JST Napoleon Cornejo You should all try the French Mistral.ai #LLM. Seems powerful.
Talk to it here:
https://chat.mistral.ai/chat