StacktraceGpt - use GPT to explain your errors directly in the IEx console

GitHub - alexfilatov/stacktrace_gpt: StacktraceGpt is an Elixir tool that uses OpenAI's GPT to provide human-friendly explanations for your application's error stack traces, simplifying debugging and enhancing your learning experience. :robot:

I’ve just released StacktraceGpt. It’s a tool that uses OpenAI’s GPT to give easy-to-understand explanations for the error messages your app produces. It should make fixing problems and learning easier.

Please let me know your thoughts!
Thanks! :pray:

8 Likes

I don’t see much benefit to this: stack traces are either good ones that are clear about their error, or abominations that come from underlying libraries. For example, try passing a negative port to an HTTP client that uses Erlang’s inet under the hood; in that case the bot will help you with nothing, because it can’t understand the context of the question.

1 Like

Thank you for your feedback! You’re right: well-formed stack traces often provide clear error information, while complex ones can be tough. StacktraceGpt aims to augment developers’ understanding, especially for those who might be new to Elixir or unfamiliar with the nuances of certain libraries or the Erlang/OTP ecosystem.

While it may not always deduce the specific context of an error, it can offer insight into the type of error, common causes, and potential fixes. I appreciate your perspective and acknowledge the room for improvement in the tool’s current state.

3 Likes

That sounds very interesting!

Perhaps it might help to provide an example where the explanation actually helps to understand the error, because the current example is kind of strange:

Stacktrace: function Ecto.Multi.new/0 is undefined
Explanation: The error message is stating that the function Ecto.Multi.new/0 is undefined

1 Like

My apologies for the confusion, the example was indeed not ideal. It was merely set up to provoke a crash; a more representative example would better showcase StacktraceGpt’s potential.
I appreciate your feedback and will work on creating a better demo to underline the library’s usefulness. Thanks for your interest and suggestions!

1 Like

GPT does better when it’s given more context as input. Right now, the stacktrace_gpt code doesn’t supply anything besides the stacktrace itself, which is rarely enough to understand the problem.

I’d recommend the same thing for GPT as I’d want when presented with a stacktrace: an answer for “what’s the code at / around the point where the failure happened?”
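One possible way to gather that context is to read the file and line out of a stacktrace entry and pull the surrounding source lines into the prompt. This is a hypothetical sketch, not part of the stacktrace_gpt API; the module name, the `@radius` of 5 lines, and the assumption that the source file is available locally are all illustrative.

```elixir
# Hypothetical helper: given a stacktrace entry, grab the source lines
# around the failure point so they can be included in the GPT prompt.
defmodule ContextSnippet do
  @radius 5

  def around({_mod, _fun, _arity, meta}) do
    file = to_string(Keyword.get(meta, :file, ""))
    line = Keyword.get(meta, :line, 0)

    if File.exists?(file) and line > 0 do
      file
      |> File.read!()
      |> String.split("\n")
      |> Enum.with_index(1)
      |> Enum.filter(fn {_text, n} -> abs(n - line) <= @radius end)
      |> Enum.map_join("\n", fn {text, n} -> "#{n}: #{text}" end)
    else
      "(source not available)"
    end
  end
end
```

Prefixing each line with its number lets the model refer back to specific lines in its explanation.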

Several things that seem like they’ll make doing that challenging:

  • the file may be too big for the LLM’s context window
  • users may not want to upload proprietary source code to OpenAI’s servers
  • the relevant line may not be the top one from the stacktrace, especially if it’s in stdlib code
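For the first point, one simple mitigation is to cap the amount of source text before sending it. This is an illustrative sketch: the 4-characters-per-token heuristic, the token budget, and the module name are assumptions, not values used by stacktrace_gpt.

```elixir
# Hypothetical guard against over-long prompts: keep the code snippet
# within a rough token budget before sending it to the API.
defmodule PromptBudget do
  @max_tokens 3_000
  @chars_per_token 4

  def truncate(snippet) when is_binary(snippet) do
    max_chars = @max_tokens * @chars_per_token

    if String.length(snippet) <= max_chars do
      snippet
    else
      String.slice(snippet, 0, max_chars) <> "\n# …truncated…"
    end
  end
end
```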
1 Like

Appreciate the candid feedback!
StacktraceGpt is a bit basic right now and there’s definitely room to grow.
I’ll create a couple of enhancement issues on GitHub after this post :heart_eyes:

I think you could integrate my compiler Tria, which has a translator from Erlang’s abstract-form AST to Elixir’s AST. That would let you show the part of the code that broke:

# Build the MFA tuple for the function that crashed
mfa = {module, function, arity}

# Fetch the module's BEAM object code and decompile it to
# Erlang abstract forms
abstract_code =
  module
  |> Tria.Language.Beam.object_code!()
  |> Tria.Language.Beam.abstract_code!()

# Translate the Erlang abstract code to Tria, then to an Elixir AST
tria = Tria.Language.Beam.tria(abstract_code, mfa)
elixir_ast = Tria.Compiler.ElixirTranslator.from_tria(tria)

# Render the Elixir AST as formatted source code
elixir_ast
|> Macro.to_string()
|> Code.format_string!()
# Returns the string representation of the function
1 Like

That’s an interesting idea @hst337, I’ll have a look at your compiler, thanks!