Using ChatGPT to understand some Elixir/Phoenix things

I think ChatGPT is going to be a game-changer for learning new programming languages.

I asked it to help me understand DateTime vs NaiveDateTime – smugly thinking I’d thrown it a curveball – but it actually provided a spot-on answer.
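For reference, the gist of that difference fits in a few lines (my own minimal sketch, nothing project-specific):

```elixir
# NaiveDateTime stores a date and time with no time zone attached;
# DateTime additionally carries time zone information.
naive = ~N[2023-01-15 10:00:00]

# Converting requires explicitly choosing a zone (UTC is always available):
{:ok, utc} = DateTime.from_naive(naive, "Etc/UTC")

utc.time_zone                    # "Etc/UTC"
Map.has_key?(naive, :time_zone)  # false: the struct has no such field
```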

It even works for straightforward questions like: How can I extract the price from a string?
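For what it’s worth, an answer to that question looks roughly like this (my own sketch, assuming prices are formatted like "19.99"):

```elixir
# Extract a price from free text with a regex capture group.
price_from = fn text ->
  case Regex.run(~r/(\d+\.\d{2})/, text) do
    [_full, price] -> {:ok, String.to_float(price)}
    nil -> :error
  end
end

{:ok, 19.99} = price_from.("The total is $19.99 today")
:error = price_from.("no price here")
```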

This is going to significantly cut down my exploration time when building things. It’s not going to build anything for me but it will allow me to cut away the fat and focus on the meat of my problem.

This is incredible!


It’s impressive that a bot can produce this kind of explanation, but I feel like it’s kind of useless, since you can never sort the good from the bad when you’re truly ignorant of the answer…

I asked it some general questions one might have on Elixir, e.g. “What is a GenServer?”, “What is the difference between cast and call in GenServer?”, “I just don’t understand how to share global state in Elixir. Can you help me with that?” and its answers were surprisingly good and accurate (some kind of rewording of the official docs with the KV store example).
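Its answers map onto something like this minimal key-value GenServer (my own sketch, in the spirit of the docs’ KV example): call blocks the caller until the server replies, cast returns immediately.

```elixir
defmodule KV do
  use GenServer

  def start_link(_opts \\ []), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)

  # Synchronous: blocks until the server sends back a reply
  def get(key), do: GenServer.call(__MODULE__, {:get, key})

  # Asynchronous: returns :ok immediately, the server processes it later
  def put(key, value), do: GenServer.cast(__MODULE__, {:put, key, value})

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call({:get, key}, _from, state), do: {:reply, Map.get(state, key), state}

  @impl true
  def handle_cast({:put, key, value}, state), do: {:noreply, Map.put(state, key, value)}
end

{:ok, _pid} = KV.start_link()
:ok = KV.put(:answer, 42)
42 = KV.get(:answer)
```

The “shared global state” question has the same answer: the state lives in one process, and everyone talks to it through these messages.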

It can even understand questions and answer in French :slight_smile:

But if you go for somewhat more specific questions, it starts to spit out long answers that look right but are either not really helpful or not right at all… And the code examples start to fail.

e.g.:

Can you explain parser combinators in Elixir please ?

Sure, I’d be happy to help with that! Parser combinators are a way of building parsers for languages by combining smaller parsers into larger ones. In Elixir, parser combinators are typically used to build parsers for languages such as regular expressions, HTML, and other structured text formats.

Parser combinators work by defining small parsers for individual language elements, such as numbers, strings, and punctuation. These small parsers are then combined using functions such as and, or, and many to build larger parsers for more complex language structures. The resulting parsers can be used to parse text and extract the desired information from it.

Here’s an example of how parser combinators might be used in Elixir to parse a simple arithmetic expression:

# Define a parser for numbers
number = regex(~r/\d+/)

# Define a parser for the `+` and `-` operators
op = regex(~r/[+-]/)

# Combine the number and op parsers to create a parser for arithmetic expressions
expression = number >> op >> number

# Parse an expression and extract the resulting values
{:ok, [lhs, op, rhs]} = expression.parse("2 + 3")

In this example, the number parser is used to parse a sequence of digits, the op parser is used to parse a + or - operator, and the expression parser is used to combine these smaller parsers into a parser for a full arithmetic expression. The expression.parse function is then used to parse a string and extract the resulting values.

I hope that helps! Let me know if you have any other questions about parser combinators in Elixir.

Not wrong (even though there is some odd stuff, like “build parsers for languages such as regular expressions” ?!), but the code example does not work at all and has some weird quirks, like >> instead of a pipe operator… maybe it mixed up the syntax of parser-combinator libraries from other languages? And I like how it makes some kind of OO method call here: expression.parse("2 + 3").
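For comparison, the actual combinator idea can be shown without any library at all: a parser is just a function from input to {:ok, value, rest} or :error, and combinators build bigger parsers from smaller ones. A hand-rolled sketch (real projects would reach for something like NimbleParsec instead):

```elixir
defmodule MiniParsec do
  # A parser is a fun: binary -> {:ok, value, rest} | :error

  # Parse an integer, skipping leading whitespace
  def number do
    fn input ->
      case Integer.parse(String.trim_leading(input)) do
        {n, rest} -> {:ok, n, rest}
        :error -> :error
      end
    end
  end

  # Parse a single + or - operator, skipping leading whitespace
  def op do
    fn input ->
      case String.trim_leading(input) do
        <<c, rest::binary>> when c in [?+, ?-] -> {:ok, <<c>>, rest}
        _ -> :error
      end
    end
  end

  # Sequence combinator: run p1, then run p2 on the remaining input
  def seq(p1, p2) do
    fn input ->
      with {:ok, v1, rest1} <- p1.(input),
           {:ok, v2, rest2} <- p2.(rest1) do
        {:ok, List.flatten([v1, v2]), rest2}
      else
        _ -> :error
      end
    end
  end
end

import MiniParsec
expression = seq(seq(number(), op()), number())

{:ok, [2, "+", 3], ""} = expression.("2 + 3")
```

Note the call is `expression.("2 + 3")` – a plain anonymous function, not the OO-style `expression.parse(...)` ChatGPT invented.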

I also asked it about LiveView, and the answer is… mmh.

Can you tell me about the life-cycle of a LiveView in Phoenix LiveView ?

Yes, I’d be happy to tell you about the life-cycle of a LiveView in Phoenix LiveView. Phoenix LiveView is a framework for building real-time, interactive web applications in the Elixir programming language. LiveViews are a key part of Phoenix LiveView, and they are used to manage the state and rendering of a web page on the server, while handling user interactions and updates in real-time.

The life-cycle of a LiveView in Phoenix LiveView can be divided into three main phases: initialization, rendering, and update.

During the initialization phase, the LiveView is first created on the server and its initial state is set up. This initial state may include data that is passed to the LiveView from the client, such as the current URL or user input. The LiveView then sends the initial HTML for the page to the client and waits for further updates.

In the rendering phase, the LiveView updates its state in response to user interactions and other events. For example, if the user clicks on a button or changes the value of a form input, the LiveView will update its state to reflect the new value. The LiveView also re-renders the HTML for the page to reflect these changes.

In the update phase, the LiveView sends the updated HTML to the client and waits for further updates. The client receives the updated HTML and uses it to update the page in real-time, without needing to refresh the page. The LiveView continues to update its state and re-render the HTML in response to user interactions and other events, and the cycle repeats until the LiveView is terminated.

I hope that helps! The life-cycle of a LiveView in Phoenix LiveView can be a bit complex, but it provides a powerful way to manage the state and rendering of a web page in real-time. If you have any other questions about LiveViews in Phoenix LiveView, feel free to ask and I’d be happy to help.

On the surface it seems good, but… I don’t understand anything about the specifics that it tries to explain; it feels like someone who read the documentation page, got the general idea without really understanding it, and tries to explain it to you without admitting that they didn’t get it.
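For contrast, the real life-cycle maps onto concrete callbacks. A sketch (module and assign names are made up; this only compiles inside a Phoenix project):

```elixir
defmodule MyAppWeb.CounterLive do
  use Phoenix.LiveView

  # mount/3 runs for the initial HTTP render and again when the websocket
  # connects. State lives in the socket assigns, on the server.
  def mount(_params, _session, socket) do
    {:ok, assign(socket, count: 0)}
  end

  # handle_event/3 reacts to client events (phx-click etc.) and updates state.
  def handle_event("inc", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end

  # render/1 re-runs after each state change; only diffs are pushed to the client.
  def render(assigns) do
    ~H"""
    <button phx-click="inc">Count: <%= @count %></button>
    """
  end
end
```

So there aren’t really three vague “phases”: there is mount, then a loop of events updating assigns and diffs being pushed down the wire.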

I think the docs, forum, Discord and stackoverflow are here to stay :grin:


Facebook’s Galactica project tried doing this for scientific papers, but it turns out the generation technique is equally good at generating accurate statements and plausible nonsense:

I’ll be much more impressed when the first AI model comes out that can say, “I don’t know anything about that topic”…


I love that title :003:


I’ll be much more impressed when the first AI model comes out that can say, “I don’t know anything about that topic”…

That would make it instantly recognizable as a bot in the context of an average online discussion :sweat_smile:


As someone pretty new to Elixir who just discovered ChatGPT, I must say it’s crazy how succinct the explanations are. I’ve been copying and pasting some functions from a practice project I’m doing into it to see if the explanations make sense to me, and they’re just so clearly written.
From the basic live generator I gave it:

@impl true
def handle_params(params, _, socket) do
  {:noreply, apply_action(socket, socket.assigns.live_action, params)}
end

defp apply_action(socket, :index, _params) do
  socket
  |> assign(:page_title, "Listing Posts")
  |> assign(:post, nil)
end

In return I got:

This code defines two functions in the Elixir programming language.

The handle_params/3 function is a function that is called by Phoenix LiveView to handle incoming parameters. It takes three arguments: a map of parameters, a session map, and a socket.

The handle_params/3 function does the following:

  1. It calls the apply_action/2 function and passes it the :live_action socket assign and the params map as arguments.
  2. It returns a tuple with the atom :noreply and the result of the apply_action/2 function call.

The apply_action/2 function is a private function that is called by the handle_params/3 function. It takes three arguments: a socket, an action atom, and a map of parameters.

The apply_action/2 function has a single clause that matches the :index action atom. It does the following:

  1. It assigns the string “Listing Posts” to the :page_title socket assign.
  2. It assigns nil to the :post socket assign.
  3. It returns the socket.

I get that it’s probably just reading and rewording the Hexdocs, but it’s very clearly written and I reckon it could have saved me days’ worth of learning if I had discovered it sooner.

Sure, and then it sputters some nonsense, you listen to it, and you end up with a collection of garbage knowledge that will take you a long time to straighten out.


Well it’s not like you can’t check and test things.
It’s pretty hard finding examples and explanations for things as a new person, and I’ve already found some useful information using it. It’s obviously not always correct, but I’ve still learned things I didn’t know before.
It feels like making a really really advanced google search basically.

I think when the hype dies out, we will see what it can practically achieve. I don’t see any value in this system as a learning resource, as it doesn’t have any problem-solving capabilities; it is, as you said, an “advanced google search”.


I get how at the experienced end it will be less appealing, but at the beginner end I wish I had found it sooner.

For example, it took me ages to figure out how routes like this work within HEEx templates

Routes.post_index_path(@socket, :new_index, @example.test, sample)

because it’s kind of just common knowledge once you know it. But figuring it out yourself without any guides took me ages. With AI bots I can just post that, ask how it works, and get something back.


One of the most annoying things with ChatGPT is that it keeps inventing libraries which do not exist (among its “creations”, this morning, ExExpressions, Leex, Yecc, MathEx…)

:leex and :yecc erlang modules actually do exist.


Where? Searching the packages on Hex yields nothing.

They are part of OTP: see the Erlang docs for yecc and leex.


OK, sorry for my mistake. And “ExExpressions” and “MathEx”, any idea?

ChatGPT is awesome. I use it all the time in my coding.

You have to remember that it is “trained” on information from 2021 and before, so it will know Leex and not HEEx. It will also not use the {@example} format and will use <%= %> instead.

It won’t have information from recent updates, but if they are explained in hexdocs you can sort of temporarily train it on new stuff.

The hype aside, search as we know it is dead. I’m sure ChatGPT has Google’s full attention. It can’t deal with certain types of problems (e.g. anything with recent events) but there are a lot of things that it can take a swing at and significantly improve workflows.

The task of writing code seems like a good candidate, since you can verify any solution by checking if it executes properly, so the model has a pretty good feedback loop. It seems like every week someone sends me something about how ChatGPT could basically be doing my job.


That would only happen if problems were clearly defined and, let’s face it, the real world is incredibly messy and imperfect. Even if you were a perfect robot capable of flawlessly executing any task immediately and without rest, a company still couldn’t define the problem clearly for you.

I’ve always had to suss out what the problem actually is and then conjure up a solution that applies to us, and then write the code.


If AIs become very good at coding exactly what the stakeholder asks for… I guess our jobs are safe :smile:

Jokes aside, ChatGPT and similar models are super interesting, but they are not optimized for correctness. They merely (and I realize “merely” is an understatement here) learn to produce plausible text by guessing what text follows from a prompt. This mechanism is surprisingly powerful, but it has no concept of correctness. It is even unable to express uncertainty about the solutions it generates.

Because of that, code produced by ChatGPT is often riddled with subtle bugs, but ChatGPT sounds so confident and plausible that it takes an expert eye to recognize them (sometimes at greater effort than writing the code from scratch). Moreover, it is not true that it’s easy to automatically verify whether a coded solution meets the requirements: in the easy case, it would necessitate that someone writes automated tests for it (so we are back to square one); in the hard case, it again needs an expert review.

That said, I do think that AI can be used to build good assistive tools. Instead of AIs that code/illustrate/write for us, AIs that assist us in such activities, boosting creativity, experimentation, suggesting improvements, etc.