Langchain - An Elixir LangChain-like library for integrating with LLMs like ChatGPT

just came across this: GitHub - bitcrowd/rag: A library to make building performant RAG (Retrieval Augmented Generation) systems in Elixir easy.

2 Likes

Wow! Great find! I just wish there was an example of how to get started.

However, I found this in the docs!

https://hexdocs.pm/rag/Rag.Generation.LangChain.html

@spec generate_response(Rag.Generation.t(), LangChain.Chains.LLMChain.t()) ::
  Rag.Generation.t()

Showing that it’s designed to work with Elixir LangChain. Sweet!
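Based purely on that typespec, calling it would presumably look something like this (a hedged sketch; `generation` stands for a prepared `Rag.Generation` struct and `chain` for a configured `LLMChain`):

    # Takes a Rag.Generation and an LLMChain, returns an updated Rag.Generation.
    generation = Rag.Generation.LangChain.generate_response(generation, chain)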

1 Like

Hey, so this library is still under development and definitely not ready yet. As a result, the code on GitHub has changed significantly compared to version 0.1.0 on Hex.

I started it this way, using langchain as a means to generate responses in a model- and provider-agnostic way.
After some time I realized it should be the other way around: I want people to be able to use rag without langchain for simple one-off generations.
I also want it to be possible to integrate rag with langchain, for ongoing conversations or any of the other features langchain gives you.
But rag doesn’t need to know about langchain to be integrated: it can simply be used to retrieve information and prepare the next message in langchain, as sketched below.
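To illustrate that direction, a minimal sketch, not the library’s actual API: `MyApp.Rag.retrieve/1` is a hypothetical stand-in for whatever retrieval pipeline you build with rag, and the `LLMChain.run/1` return shape assumes langchain v0.3:

    alias LangChain.Chains.LLMChain
    alias LangChain.Message

    query = "How do I configure the vector store?"

    # rag retrieves supporting context without knowing anything about langchain.
    context = MyApp.Rag.retrieve(query)

    # Prepare the next message for an ongoing langchain conversation.
    prompt = """
    Context:
    #{context}

    Question: #{query}
    """

    {:ok, updated_chain} =
      chain
      |> LLMChain.add_message(Message.new_user!(prompt))
      |> LLMChain.run()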

1 Like

I agree. For that reason, I’ve been unclear on whether or how to integrate RAG into LangChain. I’m pretty sure the Python/JS libraries combine them, but each has distinct value separate from the other. But I do want RAG to be easy to use with LangChain, because I want it for myself as well!

2 Likes

I am working with the Vertex AI API and just came across Elixir LangChain. It seems to be the way to get support for all inference endpoints. I can see that for Google Gemini and Vertex it looks for `google_ai_key`. That works with Google AI Studio, where you can generate a long-lived API key that can be stored as a secret, but Vertex AI only works with a token generated by gcloud auth, which is time-bound, and in my current implementation I have to regenerate one every time I get a 401. How can we pass the api_key that I generate when making a call to the LLM using langchain, to bypass this limitation @brainlid?

@darnahsan The API key can be set in an ENV var when it doesn’t change, or it can be passed in with each request using the chat model’s config.
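Something like this, for example (a sketch under assumptions: `MyApp.GCloud.fetch_access_token!/0` is a hypothetical helper wrapping `gcloud auth print-access-token`, and the model is assumed to accept `:api_key` the same way the other chat models do):

    alias LangChain.ChatModels.ChatVertexAI

    # Per request: pass a freshly generated token in the chat model's config.
    llm =
      ChatVertexAI.new!(%{
        model: "gemini-1.5-pro",
        api_key: MyApp.GCloud.fetch_access_token!()
      })

    # Or app-wide in config/runtime.exs: config values may be zero-arity
    # functions, so the token is resolved when a request is made.
    # config :langchain, google_ai_key: fn -> MyApp.GCloud.fetch_access_token!() end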

Thanks @brainlid, I managed that by going through the code and found that Vertex AI doesn’t have support for file URLs, which was recently added to Google AI in langchain_ex. I have opened a PR to add file URL support to Vertex AI; it’s up for review.
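Once merged, using it might look roughly like this (hedged: the `:file_url` content type and field names are assumed from the Google AI implementation, and the URL is made up):

    alias LangChain.Message
    alias LangChain.Message.ContentPart

    # Attach a file by URL to a user message alongside a text part.
    message =
      Message.new_user!([
        ContentPart.text!("Summarize this document."),
        ContentPart.new!(%{type: :file_url, content: "https://example.com/report.pdf"})
      ])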

1 Like

PRs to improve the docs by exposing functionality that isn’t obvious are also welcome. :slight_smile:

@brainlid can you direct me to the repo for the docs? I would like to add docs on how to work with Vertex AI (with file URL). We have the main branch running in prod for us and it is working well. I have documented parts of it in a blog post: https://medium.com/@darnahsan/bridging-the-ai-gap-simplifying-llm-development-in-elixir-with-langchain-ex-fa1efc4bbe9d

1 Like

Hi @darnahsan!

Thanks for writing about your successes with the library!

The docs are built from the library itself using module and function docs.
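In other words, doc improvements are ordinary code PRs against the library repo: HexDocs is generated by ExDoc from in-source attributes like these (module and function names below are invented for illustration):

    defmodule LangChain.SomeModule do
      @moduledoc """
      Module-level docs, rendered on HexDocs.
      """

      @doc """
      Function-level docs, rendered on HexDocs.
      """
      def some_function, do: :ok
    end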

Also just a note that the library is called “langchain” but the blog post refers to it as “langchain_ex”.

But I’ll happily take expanded docs or guides!

Thanks!

I have created a PR for the docs and updated the blog as well to reflect the correct package name.

2 Likes

Thanks @darnahsan! Merged!

2 Likes

I have created a PR to add native tool support to Vertex AI, bringing it on par with the Google AI implementation.
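That should allow the same tool-calling flow already used with Google AI (a sketch assuming langchain v0.3’s `Function` and `LLMChain.add_tools/2`; the tool itself is made up):

    alias LangChain.Function
    alias LangChain.Chains.LLMChain

    # Define a tool the model can call natively.
    weather =
      Function.new!(%{
        name: "get_weather",
        description: "Returns the current weather for a city.",
        function: fn %{"city" => city}, _context ->
          {:ok, "It is sunny in #{city}."}
        end
      })

    chain = LLMChain.add_tools(chain, weather)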

1 Like

Hey guys, have any of you been able to make LangChain work with OpenRouter? It exposes an OpenAI-compatible API, but all I’m getting is null. I’m pretty sure it’s hitting the right endpoint with the right API key, because those were problems at first, but I got past them.

This is what I got:

ash_ai_demo/config/runtime.exs

config :langchain, openai_key: fn -> System.fetch_env!("OPENROUTER_API_KEY") end

ash_ai_demo/lib/ash_ai_demo/chat/message/changes/respond.ex

      %{
        llm:
          ChatOpenAI.new!(%{
            stream: true,
            model: "google/gemini-2.0-flash-exp:free",
            endpoint: "https://openrouter.ai/api/v1/chat/completions",
            custom_context: Map.new(Ash.Context.to_opts(context))
          })
      }
      |> LLMChain.new!()

ash_ai_demo/lib/ash_ai_demo/chat/conversation/changes/generate_name.ex

      %{
        llm:
          ChatOpenAI.new!(%{
            model: "google/gemini-2.0-flash-exp:free",
            endpoint: "https://openrouter.ai/api/v1/chat/completions",
            custom_context: Map.new(Ash.Context.to_opts(context))
          }),
        verbose: true
      }
      |> LLMChain.new!()

logs:

[error] Received error from API: "Expected string, received null"
[error] Error during chat call. Reason: %LangChain.LangChainError{type: nil, message: "Expected string, received null", original: nil}

I followed all the instructions I could find, but I’m stuck.