I’ve just released StacktraceGpt. It’s a tool that uses OpenAI’s GPT to give easy-to-understand explanations for the error messages your app produces. It should make fixing problems and learning easier.
Please let me know your thoughts!
Thanks!
I don’t see any benefit to this: stacktraces are either good ones that are clear about their error, or abominations that come from underlying libraries (for example, try passing a negative port to an HTTP client that uses Erlang’s inet under the hood), in which case the bot will help you with nothing, since it won’t understand the context of the question.
Thank you for your feedback! You’re correct that well-formed stack traces often provide clear error information, and that complex ones can be tough. StacktraceGpt aims to augment developers’ understanding, especially for those who might be new to Elixir or unfamiliar with the nuances of certain libraries or the Erlang/OTP ecosystem.
While it may not always deduce the specific context of an error, it can offer insight into the type of error, common causes, and potential fixes. I appreciate your perspective and acknowledge the room for improvement in the tool’s current state.
That sounds very interesting!
Perhaps it might help to provide an example where the explanation actually helps to understand the error, because the current example is kind of strange:
Stacktrace: function Ecto.Multi.new/0 is undefined
Explanation: The error message is stating that the function Ecto.Multi.new/0 is undefined
My apologies for the confusion - the example was indeed not ideal. It was merely set up to provoke a crash; a more representative example would better showcase StacktraceGpt’s potential.
I appreciate your feedback and will work on creating a better demo to underline the library’s usefulness. Thanks for your interest and suggestions!
GPT does better when it’s given more context as input. Right now, the stacktrace_gpt
code doesn’t supply anything besides the stacktrace itself, which is rarely enough to understand the problem.
I’d recommend the same thing for GPT as I’d want when presented with a stacktrace: an answer for “what’s the code at / around the point where the failure happened?”
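To make the suggestion concrete: a stacktrace entry already carries the file and line of the failure, so the source around that point could be pulled out and added to the prompt. A minimal sketch, assuming a hypothetical helper module (this is not part of stacktrace_gpt, just an illustration of the idea):

```elixir
defmodule ContextSnippet do
  # A stacktrace entry is a tuple {module, function, arity, location},
  # where location is a keyword list typically containing :file and :line.
  # This returns the source lines within `radius` lines of the failure.
  def around({_mod, _fun, _arity, location}, radius \\ 3) do
    file = location |> Keyword.fetch!(:file) |> to_string()
    line = Keyword.fetch!(location, :line)

    file
    |> File.read!()
    |> String.split("\n")
    |> Enum.with_index(1)
    |> Enum.filter(fn {_src, n} -> abs(n - line) <= radius end)
    |> Enum.map_join("\n", fn {src, n} -> "#{n}: #{src}" end)
  end
end
```

The resulting numbered snippet could then be appended to the stacktrace text before sending it to the model, which gives GPT something closer to what a human reviewer would ask for.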
Several things that seem like they’ll make doing that challenging:
Appreciate the candid feedback!
StacktraceGpt is a bit basic right now, and there’s definitely room to grow.
I’ll create a couple of enhancement issues on GitHub after this post.
I think you could integrate my compiler Tria, which has a translator from Erlang abstract-form AST to Elixir AST; it can provide the piece of code that was broken:
mfa = {module, function, arity}

# Fetch the module's compiled BEAM object code and extract its abstract code
abstract_code =
  module
  |> Tria.Language.Beam.object_code!()
  |> Tria.Language.Beam.abstract_code!()

# Translate the function's abstract code to Tria, then to Elixir AST
tria = Tria.Language.Beam.tria(abstract_code, mfa)
elixir_ast = Tria.Compiler.ElixirTranslator.from_tria(tria)

# Returns a string representation of the function
elixir_ast
|> Macro.to_string()
|> Code.format_string!()
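Once the failing function’s source is recovered as a string, it could be combined with the raw stacktrace into a richer prompt. A sketch of that last step, with a hypothetical prompt-building helper (the name and shape are assumptions, not part of either library):

```elixir
defmodule PromptBuilder do
  # Combines the raw stacktrace text with the recovered source of the
  # failing function into a single prompt string for the model.
  def build(stacktrace_text, source_code) do
    """
    Explain this Elixir error for a developer.

    Stacktrace:
    #{stacktrace_text}

    Source of the failing function:
    #{source_code}
    """
  end
end
```

With this, the model sees both the symptom (the stacktrace) and the code that produced it, which is exactly the context the earlier reply said was missing.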