Notices by Cave Cattum (fgcallari@mastodon.online)

Cave Cattum (fgcallari@mastodon.online)'s status on Tuesday, 03-Sep-2024 02:40:11 JST

@thedansimonson @inthehands Sadly, safety is not a "feature" amenable to scaling in a short VC-funded development cycle. Fundamentally, safety must be baked into the entire development culture, in an org willing to experiment (and lose money) until your safety-critical widgets are really ready, where "readiness" is decided by a customer with the financial clout to shut you down if you get it wrong, or a government that'll jail you if you lie about performance.
Cave Cattum (fgcallari@mastodon.online)'s status on Tuesday, 03-Sep-2024 02:05:22 JST

@thedansimonson @inthehands But the problem there is conflating "generative AI" with all of machine learning, no? It is quite possible to build reliable (safety-critical) software systems that solve hard problems using machine learning AND do not "hallucinate" anything. But there is no known way to do it cheaply.
Cave Cattum (fgcallari@mastodon.online)'s status on Monday, 27-Nov-2023 04:06:49 JST

@emilygorcenski Fixed it for y'all.