Langchain - An Elixir LangChain-like library for integrating with LLMs like ChatGPT

@brainlid

I have a question; actually, I need your advice on whether this is feasible:

Do you think the logic inside this file (in the demo):

Could it be rewritten using Phoenix Channels? Do you think the whole LiveView code could be replaced by Phoenix Channels plus Dart code in a Flutter mobile application? That is, could the chat interface be written natively in Flutter using the phoenix_socket Dart package?

I am not sure how to expose the display_messages array to Flutter … Or does the phoenix_socket Dart package have limited functionality for this purpose?
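Something like this is what I imagine on the Elixir side, inside the channel module (the event name and field selection are invented for illustration):

# Reply to a client request with the current messages as plain,
# JSON-encodable maps (structs can't be encoded by Jason directly).
def handle_in("get_messages", _params, socket) do
  messages =
    Enum.map(socket.assigns.display_messages, fn msg ->
      %{role: to_string(msg.role), content: msg.content, hidden: msg.hidden}
    end)

  {:reply, {:ok, %{messages: messages}}, socket}
end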

Thank you.

@brainlid

I asked ChatGPT to convert the code to pure Phoenix Channels (no dependency on LiveView), and here is the result. There seems to be more work to do, since I get Jason encoder errors related to %MessageDelta{} (protocol not implemented).

defmodule LangChainDemoWeb.AgentChatChannel do
  use Phoenix.Channel

  alias LangChainDemoWeb.AgentChatLive.Agent.ChatMessage
  alias LangChain.Chains.LLMChain
  # Needed for ChatOpenAI.new! below; missing from the generated code.
  alias LangChain.ChatModels.ChatOpenAI
  alias LangChain.Message
  alias LangChainDemo.FitnessUsers
  alias LangChainDemo.FitnessLogs

  # When a user joins the channel, initialize state
  def join("agent_chat:lobby", _message, socket) do
    current_user = FitnessUsers.get_fitness_user!(1)
    llm_chain = initialize_llm_chain(current_user)

    socket =
      socket
      |> assign(:current_user, current_user)
      |> assign(:llm_chain, llm_chain)
      |> assign(:display_messages, [
        %ChatMessage{
          role: :assistant,
          hidden: false,
          content: "Hello! I'm your personal trainer. How can I help you today?"
        }
      ])

    {:ok, socket}
  end

  # Handle the incoming "validate" event. There are no LiveView forms in a
  # channel, so push the changeset validity and errors as a JSON-encodable map.
  def handle_in("validate", %{"chat_message" => params}, socket) do
    changeset = params |> ChatMessage.create_changeset() |> Map.put(:action, :validate)
    push(socket, "validate_response", %{valid?: changeset.valid?, errors: error_map(changeset)})
    {:noreply, socket}
  end

  # Handle incoming "save" event
  def handle_in("save", %{"chat_message" => params}, socket) do
    case ChatMessage.new(params) do
      {:ok, message} ->
        updated_socket = add_user_message(socket, message.content)
        push(updated_socket, "message_saved", %{content: message.content})
        {:noreply, run_chain(updated_socket)}

      {:error, changeset} ->
        push(socket, "save_error", %{errors: changeset.errors})
        {:noreply, socket}
    end
  end

  # Handle timezone event to update user's timezone
  def handle_in("browser-timezone", %{"timezone" => timezone}, socket) do
    user = socket.assigns.current_user

    socket =
      if timezone != user.timezone do
        {:ok, updated_user} = FitnessUsers.update_fitness_user(user, %{timezone: timezone})
        assign(socket, :current_user, updated_user)
      else
        socket
      end

    {:noreply, socket}
  end

  # Handle an incoming chat delta from async processing. %MessageDelta{} does not
  # implement the Jason.Encoder protocol, so push only its encodable fields.
  def handle_info({:chat_delta, delta}, socket) do
    updated_chain = LLMChain.apply_delta(socket.assigns.llm_chain, delta)
    push(socket, "chat_delta", %{content: delta.content, status: delta.status})
    {:noreply, assign(socket, :llm_chain, updated_chain)}
  end

  # Handle a tool execution message from async processing. As above, push a
  # plain map rather than a struct.
  def handle_info({:tool_executed, tool_message}, socket) do
    updated_chain = LLMChain.add_message(socket.assigns.llm_chain, tool_message)
    push(socket, "tool_executed", %{role: to_string(tool_message.role)})
    {:noreply, assign(socket, :llm_chain, updated_chain)}
  end

  # Handle updated user information
  def handle_info({:updated_current_user, updated_user}, socket) do
    socket =
      socket
      |> assign(:current_user, updated_user)
      |> assign(:llm_chain, LLMChain.update_custom_context(socket.assigns.llm_chain, %{current_user: updated_user}))

    {:noreply, socket}
  end

  # Handle task errors in async processing
  def handle_info({:task_error, reason}, socket) do
    push(socket, "task_error", %{reason: reason})
    {:noreply, socket}
  end

  # Initialize the LLMChain. The :live_view_pid key name is kept for compatibility
  # with the demo's custom functions; in this module it is the channel process pid.
  defp initialize_llm_chain(current_user) do
    LLMChain.new!(%{
      llm: ChatOpenAI.new!(%{model: "gpt-4", temperature: 0, request_timeout: 60_000, stream: true}),
      custom_context: %{live_view_pid: self(), current_user: current_user},
      verbose: false
    })
  end

  # Run the LLM chain asynchronously, streaming results back to this channel process.
  defp run_chain(socket) do
    chain = socket.assigns.llm_chain
    channel_pid = self()

    callback_fn = fn
      %LangChain.MessageDelta{} = delta -> send(channel_pid, {:chat_delta, delta})
      %LangChain.Message{role: :tool} = message -> send(channel_pid, {:tool_executed, message})
      _other -> :ok
    end

    Task.start(fn ->
      case LLMChain.run(chain, while_needs_response: true, callback_fn: callback_fn) do
        {:ok, _updated_chain, _last_message} -> :ok
        {:error, reason} -> send(channel_pid, {:task_error, reason})
      end
    end)

    socket
  end

  # Add user message to chain
  defp add_user_message(socket, user_text) do
    updated_chain = LLMChain.add_message(socket.assigns.llm_chain, Message.new_user!(user_text))
    assign(socket, :llm_chain, updated_chain)
  end

  # Convert changeset errors into a JSON-encodable map of error messages.
  defp error_map(changeset) do
    Ecto.Changeset.traverse_errors(changeset, fn {msg, opts} ->
      Enum.reduce(opts, msg, fn {key, value}, acc ->
        String.replace(acc, "%{#{key}}", to_string(value))
      end)
    end)
  end
end
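If pushing the structs directly is preferred, another option would be implementing the Jason protocol for the library struct; a sketch (the field list is a guess):

# Sketch: make %LangChain.MessageDelta{} JSON-encodable so it can be pushed
# over a channel as-is. Only a subset of its fields is included here.
defimpl Jason.Encoder, for: LangChain.MessageDelta do
  def encode(delta, opts) do
    delta
    |> Map.take([:content, :role, :status, :index])
    |> Jason.Encode.map(opts)
  end
end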

@brainlid

I managed to do it! Now this tool can be used from a Flutter-based chatbot application over Phoenix Channels. Here is the GitHub gist; it would be nice to get some tips for code improvements. Thank you. Looking forward to continuing to contribute to this great project.

1 Like

@brainlid
Having some issues implementing the streaming example using {:langchain, "~> 0.3.0-rc.0"} and Ollama local models.

Using the sample Getting Started Livebook code and OpenAI, everything works fine. When I switch over to llama3 through Ollama, I get the following error:

** (FunctionClauseError) no function clause matching in LangChain.Utils.fire_streamed_callback/2

Any help is much appreciated, and it looks like a great library so far!

Here’s the code:

alias LangChain.Chains.LLMChain
alias LangChain.MessageDelta
alias LangChain.Message
alias LangChain.ChatModels.ChatOllamaAI
alias LangChain.ChatModels.ChatOpenAI

handler = %{
  on_llm_new_delta: fn _model, %MessageDelta{} = data ->
    # We received a chunk of the response
    IO.write(data.content)
  end,
  on_message_processed: fn _chain, %Message{} = data ->
    IO.puts("")
    IO.puts("")
    IO.inspect(data.content, label: "COMPLETED MESSAGE")
  end
}

{:ok, updated_chain, response} =
  # The same chain works fine with OpenAI by swapping in this config:
  #
  # %{
  #   llm: ChatOpenAI.new!(%{model: "gpt-4", stream: true, callbacks: [handler]}),
  #   callbacks: [handler]
  # }
  %{
    llm:
      ChatOllamaAI.new!(%{
        endpoint: "http://localhost:11434/api/chat",
        model: "llama3:latest",
        stream: true,
        callbacks: [handler]
      }),
    callbacks: [handler]
  }
  |> LLMChain.new!()
  |> LLMChain.add_messages([
    Message.new_system!("You are a helpful assistant."),
    Message.new_user!("Write a haiku about the capital of the United States")
  ])
  |> LLMChain.run()

updated_chain.last_message.content

The full error message is below, if that helps:

** (FunctionClauseError) no function clause matching in LangChain.Utils.fire_streamed_callback/2    
    
    The following arguments were given to LangChain.Utils.fire_streamed_callback/2:
    
        # 1
        %LangChain.ChatModels.ChatOllamaAI{
          endpoint: "http://localhost:11434/api/chat",
          mirostat: 0,
          mirostat_eta: 0.1,
          mirostat_tau: 5.0,
          model: "llama3:latest",
          num_ctx: 2048,
          num_gqa: nil,
          num_gpu: nil,
          num_predict: 128,
          num_thread: nil,
          receive_timeout: 300000,
          repeat_last_n: 64,
          repeat_penalty: 1.1,
          seed: 0,
          stop: nil,
          stream: true,
          temperature: 0.8,
          tfs_z: 1.0,
          top_k: 40,
          top_p: 0.9,
          callbacks: []
        }
    
        # 2
        %LangChain.Message{
          content: nil,
          processed_content: nil,
          index: nil,
          status: :complete,
          role: :assistant,
          name: nil,
          tool_calls: [],
          tool_results: nil
        }
    
    Attempted function clauses (showing 2 out of 2):
    
        def fire_streamed_callback(model, data) when is_list(data)
        def fire_streamed_callback(model, %LangChain.MessageDelta{} = delta)
    
    (langchain 0.3.0-rc.0) lib/utils.ex:103: LangChain.Utils.fire_streamed_callback/2
    (elixir 1.17.2) lib/enum.ex:987: Enum."-each/2-lists^foreach/1-0-"/2
    (langchain 0.3.0-rc.0) lib/utils.ex:161: anonymous fn/5 in LangChain.Utils.handle_stream_fn/3
    (finch 0.19.0) lib/finch/http1/conn.ex:346: Finch.HTTP1.Conn.receive_response/8
    (finch 0.19.0) lib/finch/http1/conn.ex:131: Finch.HTTP1.Conn.request/8
    (finch 0.19.0) lib/finch/http1/pool.ex:60: anonymous fn/10 in Finch.HTTP1.Pool.request/6
    (nimble_pool 1.1.0) lib/nimble_pool.ex:462: NimblePool.checkout!/4
    #cell:fm7cmhkax2o3kitd:48: (file)

2 Likes

Thanks for writing this library, it seems really cool. Does this library work well in practice for apps requiring sub-second time-to-first-token?

1 Like

Make sure you pass mode: :while_needs_response to the LLMChain.run() function, like this: LLMChain.run(mode: :while_needs_response).

1 Like

Thanks for the response. I tried what you recommended and still receive the same error

This seems related to either the adapter or the AI engine you are using. There is a debug flag for LLMChain; set it to true to see full error messages. I also recommend inspecting the source code of the adapter, or of LLMChain in general; the answer may be there. You could also try asking Claude, which can be helpful.
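If the debug flag meant here is the verbose option on LLMChain, enabling it looks like this (the model config is illustrative):

alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOllamaAI

# verbose: true makes the chain log full details of requests and errors.
%{llm: ChatOllamaAI.new!(%{model: "llama3:latest", stream: true}), verbose: true}
|> LLMChain.new!()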

1 Like

Does this library work well in practice for apps requiring sub-second time-to-first-token?

Yes. It supports streaming.

2 Likes

Hi @jszod,

It’s entirely possible that there is an issue with the Ollama support. That adapter came from a community contribution and I haven’t been using it myself.

1 Like

Hi @brainlid ,

Is there some documentation somewhere on how to use message processors?

I’m trying to use the new json_response: true and json_schema: _ fields for ChatOpenAI, but I’m not sure what to do.
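What I have so far looks roughly like this (the schema is invented for illustration and pieced together from the docs, so it may well be wrong):

alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message
alias LangChain.MessageProcessors.JsonProcessor

llm =
  ChatOpenAI.new!(%{
    model: "gpt-4o",
    json_response: true,
    # Illustrative schema; shape it to your own data.
    json_schema: %{
      "name" => "reply",
      "schema" => %{
        "type" => "object",
        "properties" => %{"answer" => %{"type" => "string"}},
        "required" => ["answer"]
      }
    }
  })

# JsonProcessor should parse the reply into last_message.processed_content.
# (Return shape per v0.3.0.)
{:ok, chain} =
  %{llm: llm}
  |> LLMChain.new!()
  |> LLMChain.message_processors([JsonProcessor.new!()])
  |> LLMChain.add_message(Message.new_user!("Reply in JSON with an \"answer\" key."))
  |> LLMChain.run()

chain.last_message.processed_content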

1 Like

We figured it out.

@brainlid I have another question: I am not sure what the purpose of PromptTemplates is.

For instance, when sending a prompt you can just interpolate values into a string, or build the string in many different ways. What is the use case where a PromptTemplate is required?

Thank you.

1 Like

Hi @lud! Glad you figured out the Message Processor.

For PromptTemplates, yes, you can use string interpolation to build prompts too. It’s based on the original LangChain, but the idea is this: you can create the prompts and set up the chain without having the additional user/context-specific information. Then, when it’s time to run the chain and you have the data, it can be inserted.

This is handy if you want to load a prompt from a YAML file, a database, or somewhere else, at a point when none of the specific use-case information is available.

But really, you can do whatever makes the most sense for your situation.
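A small illustration of that separation (the template text and values here are made up):

alias LangChain.PromptTemplate

# Define the prompt up front, before any user-specific data exists.
template =
  PromptTemplate.from_template!(
    "You are a personal trainer. The client's name is <%= @name %> and their goal is <%= @goal %>."
  )

# Later, when the data is available, fill it in.
PromptTemplate.format(template, %{name: "Ana", goal: "running a 5k"})
#=> "You are a personal trainer. The client's name is Ana and their goal is running a 5k."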

1 Like

Ah yes, thanks! We have regular functions that return the prompts, but indeed, I can see the use case now. And actually, I think I could make use of it.

Thank you, and merry Christmas :slight_smile:

1 Like

Elixir LangChain v0.3.0 is released! :tada: This library makes it much easier to integrate your Elixir application with one or more LLMs. It’s an adapter that shields your app, making it easy to swap out which LLM you target for that next feature.

Recent additions include:

  • OpenAI o1 support
  • Bumblebee Llama 3.x function calling
  • Anthropic token caching
  • built-in fallback to another preferred LLM (see the sketch after this list)
  • and more!
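For example, the new fallback support can be used roughly like this (the model names are illustrative; the with_fallbacks option is per the v0.3.0 changelog):

alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.{ChatOpenAI, ChatAnthropic}
alias LangChain.Message

# If the primary model errors, the chain is retried against the fallback model.
%{llm: ChatOpenAI.new!(%{model: "gpt-4o"})}
|> LLMChain.new!()
|> LLMChain.add_message(Message.new_user!("Hello!"))
|> LLMChain.run(with_fallbacks: [ChatAnthropic.new!(%{model: "claude-3-5-sonnet-latest"})])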

Hire me! :grin: I’m looking for full time, US-based employment with or without AI!

11 Likes

I wish you all the best. Thank you.

3 Likes

Is OpenRouter supported by using the OpenAI model? I believe OpenRouter’s API is OpenAI-compatible.

2 Likes

I’d like to say how much of a pleasure this library is to work with. Highly productive for RAG!

2 Likes

I’d love to have more explicit support for RAG or a Livebook demo or something to help people connect how it works and get started.

I’d love help in that direction!

Is OpenRouter supported by using the OpenAI model? I believe OpenRouter’s API is OpenAI-compatible.

I don’t know! If you try it out (by changing the ChatOpenAI endpoint), let others know. :slight_smile:
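An untested sketch of that endpoint change (the URL and model name come from OpenRouter’s docs, not from this library):

alias LangChain.ChatModels.ChatOpenAI

# Untested: point ChatOpenAI at OpenRouter's OpenAI-compatible endpoint.
ChatOpenAI.new!(%{
  endpoint: "https://openrouter.ai/api/v1/chat/completions",
  model: "openai/gpt-4o",
  api_key: System.fetch_env!("OPENROUTER_API_KEY")
})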

1 Like