Conversation
-
AI is going to be revolutionary, no doubt. One thing I'm already certain of is that you will be working overtime on your composition and reading comprehension skills.
Your ability to phrase your questions to the AI as precisely as possible will become one of your most critical skills.
Surpassed only by your fundamental understanding of your problem domain. For example, when the AI promises "future improvements," you need to immediately filter that claim through concrete scenarios in which those "improvements" can be quantified with basic algebra.
Do not trust the AI when it comes to battery technologies. While it admits that E = mc^2 breaks down at the extremes, it does not admit the same about V = IR; try it. It's fascinating: it truly insists that internal resistance can evolve to zero without the model ever breaking down.
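The internal-resistance point really can be checked with basic algebra. Here is a small sketch (all numbers are illustrative, not from any real cell) of why r = 0 is a nonphysical limit for a battery modeled as an EMF in series with an internal resistance:

```python
# Toy battery model with internal resistance r: terminal voltage under a
# load current I is V = EMF - I*r (Ohm's law). All numbers illustrative.

def terminal_voltage(emf, current, internal_resistance):
    """Voltage measured at the terminals while delivering `current`."""
    return emf - current * internal_resistance

def short_circuit_current(emf, internal_resistance):
    """Current if the terminals are shorted (external resistance ~ 0)."""
    return emf / internal_resistance

emf = 3.7  # volts; a typical Li-ion nominal value
for r in (0.1, 0.01, 0.001):
    print(f"r = {r:>6} ohm -> short-circuit current = "
          f"{short_circuit_current(emf, r):.0f} A")
# As r -> 0 the predicted current grows without bound, which is exactly
# the kind of "falling off the edges" the model ought to admit to.
```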
-
@FourOh-LLC I would not rely on ChatGPT as a research tool. It doesn't really memorize facts, not in any reliable sense. What it and other LLMs do is produce text that appears to complete what you gave them, so "the quick brown fox" might get completed with something like "jumped over the lazy dog."
That's what the algorithm is doing under the hood. The chat program you're using just creates a chat log and submits that as the prompt to complete. So, why am I bringing this up?
Well, it has a habit of making shit up. They call this hallucinating. See, if I were to ask it about a specific topic and ask it to cite the specific studies that support it, it will actually make them up. It will provide beautiful MLA formatting for them and everything, but those studies will be invented out of whole cloth.
This is because it isn't really memorizing facts. What it's doing is producing an answer that looks like it would be a real one based on its training data.
The way to use LLMs is you give the LLM the facts that you need it to interpret in the prompt, and then you tell it to do something with that information. You tell it to take the information at face value. You, the user, must curate the information yourself.
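That "give it the facts yourself" approach can be sketched as a simple prompt template. The function name, wording, and example facts below are hypothetical, not any particular library's API:

```python
def build_prompt(facts, task):
    """Assemble a prompt that hands the model curated facts and tells it
    to take them at face value instead of recalling from training data."""
    fact_lines = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Use ONLY the facts below. Take them at face value; "
        "do not add information from memory.\n\n"
        f"Facts:\n{fact_lines}\n\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    facts=["The device draws 2 A at 5 V.", "The battery is rated 10 Ah."],
    task="Estimate the runtime in hours.",
)
print(prompt)
```

The point is that every fact the model is allowed to use appears verbatim in the prompt, so the user, not the model, is the source of truth.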
-
@FourOh-LLC Yeah: don't. It isn't designed to perform math or logic or any scientific function. It's designed to compose text. It isn't designed to really think. So you might be able to take a series of conclusions that you scribbled down into a text file and have it format them as coherent prose, but it will not be very good at doing whatever thinking you needed to do in order to arrive at those conclusions.
If you want to see what I mean by this, try asking it to generate mathematical proofs. A lot of the time it will do things completely backwards, like assuming the thing you're trying to prove.
But I use it for creative writing purposes. And for that purpose? It's amazing. I can specify the rough ideas I need it to convey, I can specify writing style, I can specify all kinds of things about it, and it will actually turn my little paragraph into about 500 words of stuff that needs minimal editing.
But do not have it do any real thinking for you. It fails in this respect.
-
I plan to use AI for STEM and nothing else - can you please comment on that?
-
@BroDrillard @FourOh-LLC here's the thing: I know how ChatGPT memorizes facts, and oddly enough it isn't the core LLM itself. When you submit something, the prompt you give it gets altered and filtered through various other layers before it hits the actual model. A lot of the time those layers pick out facts to feed into it that you don't notice when you're typing into the little box you see in the chat window.
-
You can get facts out of it. But you don't know which part of its response is true fact and which is hallucination. You have to be able to think it through and verify yourself that the response does in fact make sense. Without that, you could be getting and using(!) something good-looking that's riddled with subtle (or sometimes not so subtle) mistakes.
If you, for example, use it to write programs without expert human supervision, you risk creeping data corruption and incorrect behavior off the main code paths, where it's not immediately noticed by users.
The risk for humanity is society increasingly relying on this crutch (AI) and losing track of reality. "Some bridges/buildings have always collapsed, it's not the AI's fault. No human could do better." And at that point it'll be true because there won't be humans around with the training and experience to do things correctly "by hand", without AI.
-
Wow... creative writing... I love that, although I have no use for it in my profession. But I will give it a try!
-
@FourOh-LLC If you look up OpenAI's API, one thing I have it do is use its functions feature to extract structured data from a given text.
For example, I could feed it a document, like someone's identification paperwork, from any country, and have it pull out their demographic information like their name and their address and identification number and so on. It will return it as a JSON blob.
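The extraction flow described above can be sketched roughly as follows. The schema is in the JSON Schema shape that function definitions use, but the field names and the mock model reply are made-up stand-ins, and no API call is actually made:

```python
import json

# A function/tool definition like the ones OpenAI's functions feature
# accepts. Field names here are hypothetical examples.
extract_person = {
    "name": "extract_person",
    "description": "Pull demographic fields out of an identity document.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "address": {"type": "string"},
            "id_number": {"type": "string"},
        },
        "required": ["name", "id_number"],
    },
}

# In a real call you would pass this schema to the chat API as a tool, and
# the model would return its arguments as JSON; here we parse a stand-in.
mock_model_reply = '{"name": "Jane Doe", "address": "1 Main St", "id_number": "X123"}'
person = json.loads(mock_model_reply)
print(person["name"], person["id_number"])
```

Because the model is constrained to fill in the declared fields, the reply comes back as a predictable JSON blob instead of free prose.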
So the reading comprehension and creative writing capability of these LLMs is actually very impressive. But if I ask it to prove the Pythagorean theorem or do basic arithmetic, there's a decent likelihood that it completely shits the bed.
It's an issue of knowing when to use it and how.
-
@FourOh-LLC If you're using the chat bot more than the API, it might be ideal to view it as an assistant that can't do outside research like looking things up on the internet. An assistant that you keep locked in a closet.
-
Yes, I have been using it to create MySQL upsert statements (INSERT ... ON DUPLICATE KEY UPDATE), which I actually needed at my job. It's also very useful for creating the data tables, joins, and views from a text description, which is the documentation of the database. I always wrote the docs afterwards; now I write them before, the way it should be.
My job involves scrubbing PCB parts and assemblies against all sorts of demands and regulations; I see a huge jump in my future productivity thanks to AI.
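The "INSERT OR UPDATE" statements mentioned above take MySQL's INSERT ... ON DUPLICATE KEY UPDATE form. Here is a minimal, hypothetical generator sketch; the table and column names are made up, and real code should bind values through the driver rather than interpolate strings:

```python
def upsert_statement(table, columns, key_columns):
    """Build a MySQL upsert (INSERT ... ON DUPLICATE KEY UPDATE) skeleton
    with %s placeholders, so values are bound by the driver, never
    pasted into the SQL string (avoiding SQL injection)."""
    col_list = ", ".join(columns)
    placeholders = ", ".join(["%s"] * len(columns))
    # Non-key columns get refreshed from the incoming row on conflict.
    updates = ", ".join(
        f"{col} = VALUES({col})" for col in columns if col not in key_columns
    )
    return (
        f"INSERT INTO {table} ({col_list}) VALUES ({placeholders}) "
        f"ON DUPLICATE KEY UPDATE {updates}"
    )

sql = upsert_statement("parts", ["part_no", "description", "qty"], {"part_no"})
print(sql)
```

The VALUES() form is the classic syntax; newer MySQL versions also offer a row-alias spelling, but the shape of the statement is the same.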