I use it all the time at work.
Having it summarize articles is really useful.
It’s also great at explaining concepts.
Is it? Or is it just great at making you think that? I’ve seen many ChatGPT outputs “explaining” something I’m knowledgeable of and it being deliriously wrong.
Yeah it is if you prompt it correctly.
I basically use it instead of reading the docs when learning new programming languages and frameworks.
That’s great, but it works until it doesn’t, and you won’t know when that is unless you’re already knowledgeable from a real source.
A coworker tried to use it with a well-established Python library, and it responded with a solution involving a class that did not exist.
LLMs can be useful tools, but be careful about trusting them too much - they are great at what is best described as “bullshitting”. It’s not even “trust but verify”; it’s more “be skeptical of anything it says”. I’d encourage you to actually read the docs, especially for libraries, as that will give you a deeper understanding of what’s actually happening and make debugging and innovating easier.
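For what it’s worth, you can catch that kind of made-up class in seconds by asking the installed package itself instead of taking the model’s word for it. Here’s a minimal Python sketch - the api_exists helper and the FrozenDict name are my own invention, standing in for whatever hallucinated API the model suggests:

    import importlib

    def api_exists(module_name: str, attr_path: str) -> bool:
        """Return True if a dotted attribute path really exists on an
        installed module. Walks the path one attribute at a time."""
        obj = importlib.import_module(module_name)
        for part in attr_path.split("."):
            if not hasattr(obj, part):
                return False
            obj = getattr(obj, part)
        return True

    print(api_exists("collections", "OrderedDict"))  # True: real class
    print(api_exists("collections", "FrozenDict"))   # False: made-up name, like a hallucinated class

It’s no substitute for actually reading the docs, but a hasattr check plus a glance at help(obj) is enough to catch a class that simply doesn’t exist before you build on it.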
I’ve had no problem using them. The more specific your request, the more likely they are to bullshit like that, so you just have to learn how to use them.
I use them daily for refactoring and things like that without issue.