@alex >Sorry, but as an ethical AI development tool, I'm unable to decompile that binary for you. Maybe you should try subscribing to Corpo Plus for only $99.99 a month.
@alex You like making big, unfounded claims, don't you?
Optimization and compilation throw the information away in an irreversible way, so recovering it is impossible outside of pure chance.
Decompilers have existed for years, and they can usually produce source code that compiles and functions close enough, but such output is always a mess and lacks comments.
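For example, run a stripped binary of a trivial function through a decompiler such as the free software Ghidra and you'll typically get something like this (a hypothetical but representative output; the FUN_/param_/local_ names are placeholders the tool invents, as the originals are gone):

```c
/* Representative decompiler pseudo-C for a stripped binary: it compiles
 * and behaves like the original, but every name and comment is lost. */
int FUN_00101149(int *param_1, int param_2)
{
    int local_c;
    int local_8;

    local_8 = 0;
    for (local_c = 0; local_c < param_2; local_c = local_c + 1) {
        local_8 = local_8 + param_1[local_c];
    }
    return local_8 / param_2;
}
```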
Even assuming that a perfect decompiler existed that could somehow generate identical comments, you still wouldn't be able to legally distribute the resulting source code, as copyright law forbids the redistribution of derivative works without permission.
@Suiseiseki You underestimate the power of AI. And I clearly said "source available" so as not to invoke your autistic rage, but you decided to anyway, Stallman.
Yes, binaries are irreversible, duh, but ChadGPT can use context clues to name functions based on what they do. It's clear you haven't actually tried.
or careful reasoning :smug_miku: maybe we could get AI to do the incredibly time-consuming tasks of making sense of the machine code and then reimplementing the equivalent functionality in the form of easily readable code in a high-level language
humans already do both of those things on a regular basis. it's just incredibly tedious and requires an extensive set of skills
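to make that concrete, here's roughly where a human ends up after staring at decompiler output like the FUN_00101149 mess above. the names are educated guesses inferred from the code's behavior, not anything recovered from the binary:

```c
/* Hand-reconstructed equivalent of FUN_00101149: identical behaviour,
 * but every name is inferred from what the code does. */
int average(const int *values, int count)
{
    int sum = 0;
    for (int i = 0; i < count; i++) {
        sum += values[i];
    }
    return sum / count; /* caller must ensure count != 0 */
}
```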
@alex >You underestimate the power of AI.
I'm well aware of the ability of advanced chatbots to make people think that they have intelligence, and I know the current limits of such "power".
>ChadGPT can use context clues to name functions based on what they do
Maybe, but there's no guarantee that the function names match the originals, and I can do that too, probably with more accurate names even.
>It's clear you haven't actually tried.
Correct, I haven't run such proprietary software; that should be clear by now.
@roboneko >or careful reasoning
You need something that's not a chatbot to do reasoning that's not based on pure chance.
>maybe we could get AI to do the incredibly time-consuming tasks of making sense of the machine code and then reimplementing the equivalent functionality in the form of easily readable code in a high-level language
Maybe, but we would need something with intelligence, and therefore something more than a chatbot that produces convincing output based on interesting combinations of inputs.
@Suiseiseki @roboneko Bro ChatGPT is borderline sentient. We are already living in the sci-fi future. Don't act like "hurrdurr it's just autocomplete" when you haven't actually used it. It is able to completely understand the problems I give it better than most humans.
@alex >Bro ChatGPT is borderline sentient.
Many such cases of people becoming utterly convinced that a chatbot is sentient.
>We are already living in the sci-fi future.
We are already living in a dystopian future, yes.
>"hurrdurr it's just autocomplete" when you haven't actually used it.
I didn't say that, and although I haven't used it, I have observed its output a number of times and wasn't very impressed, as it was pretty clear what it was copying from.
>It is able to completely understand the problems I give it better than most humans.
Yes, most humans seem to be worse at faking sentience than ChatGPT, but that doesn't mean the chatbot actually understands the problems.
ChatGPT is a Chinese Room. Machine learning algorithms are completely incapable of being sentient or having any insight into the problems they "solve".
No, it's enough for me to explain how it does not work.
Machine learning algorithms are just math. We train these models by feeding them a shit ton of example data and then backpropagating the error, and whatever. The result is a bunch of weights, which is to say numbers. So arguing that LLMs are sentient is essentially the same as arguing that solving for x creates sentience.
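If that sounds reductive, here's the whole idea in miniature. This toy sketch "trains" a single weight w by guessing and adjusting (plain gradient descent), the same loop real training runs with billions of weights instead of one:

```c
#include <stdio.h>

/* Toy gradient descent: find w such that w * x approximates y.
 * "Training" a model is this same guess-and-adjust arithmetic,
 * just with billions of weights instead of one. */
int main(void)
{
    double w = 0.0;                 /* the "model": a single weight */
    const double x = 3.0, y = 6.0;  /* one training example: want w * x == y */
    const double lr = 0.01;         /* learning rate */

    for (int step = 0; step < 1000; step++) {
        double error = w * x - y;   /* how wrong the current guess is */
        w -= lr * 2.0 * error * x;  /* step down the gradient of error^2 */
    }
    printf("learned w = %f\n", w);  /* prints: learned w = 2.000000 */
    return 0;
}
```

There's no understanding anywhere in that loop, just arithmetic that makes a number fit the data.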
While I'm no expert on human learning, I highly doubt we learn using LLM-like techniques. For one, machine learning models require massive amounts of training data, whereas humans will learn with minimal inputs. Our ML algorithms are also very simplistic compared to what the brain's actual functioning could be. According to the orchestrated objective reduction hypothesis, our consciousness arises from quantum processes in the brain, not directly from synapse interactions. Which, considering how much trouble we're having explaining ourselves with just synapses, might very well be the case. So if we learned by something as simplistic as "guess answer, adjust weights, guess again, adjust weights, etc.", that method would seem out of step with the complexity of our brains.
> For one, machine learning models require massive amounts of training data, whereas humans will learn with minimal inputs.
Show me a human that can learn with minimal inputs. It takes practice and repetition to learn new things. Especially a language as a child -- that's a TON of data you ingest over years until you mentally map out the associations between sounds and words, and then between words and concepts.