catto (catto@maidsin.space)'s status on Sunday, 08-Dec-2024 20:16:15 JST catto
@kaia some people do this to run large LLMs locally, but typically they would use 3090s or Tesla P40s since 4090s are just uneconomical