@serapath @simon Yeah, it’s polysemic. It means x to researchers, but y to laypeople who only know of ChatGPT. I honestly haven’t seen/heard anyone IRL immediately jumping into a conversation with “but it’s not actually intelligent!!”. What I have experienced is getting partway into a conversation and having to say it - because it has become obvious the other person DOES think “Intelligence” is human-like decision making.
Notices by Jim Gardner (jimgar@fosstodon.org)
Jim Gardner (jimgar@fosstodon.org)'s status on Monday, 08-Jan-2024 03:00:32 JST
Jim Gardner (jimgar@fosstodon.org)'s status on Sunday, 07-May-2023 03:35:28 JST @simon Hey Simon, I’ve been holding off on using ChatGPT, Bard, etc., even though I think they could be useful. This is because I can see (especially with ChatGPT) the horrible, unethical behaviour the companies are engaging in during their arms race to deploy, deploy, deploy. With all the talk in this leaked doc about open source alternatives, do you know of any LLMs that are “ethically sourced” and available for the average punter to use? I don’t want to be left behind :/
Jim Gardner (jimgar@fosstodon.org)'s status on Sunday, 07-May-2023 03:35:22 JST @simon @resing It *all* feels fundamentally wrong, so long as the results rely on indiscriminate harvesting of people’s work without permission. Literally the only compelling argument I have heard is the “necessary evil” Simon mentions - doing it anyway but making it open source. I just find it sad that this is the position we’re in at all, and worse, how little the majority of people seem to care about provenance and permissions full stop.