@Suiseiseki @maija >Most software is terrible, so most inputs were terrible, so most outputs are terrible
At their core, LLMs are statistical models that will spit out something like an average of their training data. When I say "average," I mean the peak of the distribution the model has learned: the simplest models effectively sample from a plain bell curve, while others use fancier math to sample from a constrained, negatively skewed distribution to give you better results. But the point stands: AI-generated code will be some function of the average of the code fed in as the dataset, and most of the code on, for example, GitHub, turns out to be terrible, with probably 15% or less being actually good code. Unless Copilot is specifically trained on just that 15% of good code, the code it spits out will tend hard toward terrible, and only once in a blue moon will it output something remotely resembling the good code.
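A toy sketch of the argument (this is an illustration, not how an LLM actually works internally; the 15% figure is just the assumption from the post): if a model simply reproduces the mix of its training data, the share of good outputs can't beat the share of good inputs.

```python
import random

random.seed(0)

# Hypothetical training mix, per the post's assumption:
# ~15% "good" code, ~85% "terrible" code scraped from the web.
TRAINING_MIX = {"good": 0.15, "terrible": 0.85}

def sample_outputs(mix, n):
    """Draw n outputs in proportion to the training-data mix,
    modeling a system that just mirrors its dataset."""
    labels = list(mix)
    weights = [mix[label] for label in labels]
    return random.choices(labels, weights=weights, k=n)

outputs = sample_outputs(TRAINING_MIX, 10_000)
good_share = outputs.count("good") / len(outputs)
print(f"share of good outputs: {good_share:.1%}")  # hovers around 15%
```

Skewing the sampling distribution (as fancier decoding schemes try to) can shift this somewhat, but the output is still anchored to what the dataset contained.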