Learn with the help of LLMs (AI chatbots), it’s awesome. Just let one generate some code, read it, understand it, and try to make the code better, more beautiful and/or more efficient. Add a feature you miss in the code, and don’t hesitate to ask your LLM follow-up questions — it won’t laugh at stupid questions, it is just great.
However, do keep in mind that LLMs regularly pull language and library features out of thin air that have no counterpart in reality. I’d use LLMs to generate small snippets of code, giving them a small and restricted set of requirements to minimize hallucinations.
Yeah, I’ve encountered that as well (depending on the LLM model). Mostly it is enough to just feed the exception output back into the LLM thread, and it will fix its bugs, or at least can tell you why that exception normally occurs.
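That feedback loop is easy to automate. Here is a minimal sketch in Python, assuming a hypothetical `ask_llm` callable that stands in for whatever chat API you use (that name and interface are my invention, not from any specific library):

```python
import traceback

def run_with_llm_feedback(code: str, ask_llm) -> str:
    """Try to execute LLM-generated code; on failure, feed the full
    traceback back to the model and return its revised code.

    ask_llm is a hypothetical stand-in: any function that takes a
    prompt string and returns the model's reply as a string.
    """
    try:
        exec(code, {})  # run the snippet in an isolated namespace
        return code     # it ran cleanly, keep it as-is
    except Exception:
        tb = traceback.format_exc()
        follow_up = (
            "The code you generated raised this exception:\n"
            f"{tb}\n"
            "Please fix the bug, or explain why this exception occurs."
        )
        return ask_llm(follow_up)
```

In a real session you would do the same thing by hand: paste the traceback into the chat thread and ask for a fix.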