LangChain - An Elixir LangChain-like library for integrating with LLMs like ChatGPT

LangChain is short for Language Chain. An LLM, or Large Language Model, is the “Language” part. This library makes it easier for Elixir applications to “chain” or connect different processes, integrations, libraries, services, or functionality together with an LLM.

LangChain is a framework for developing applications powered by language models. It enables applications that are:

  • Data-aware: connect a language model to other sources of data
  • Agentic: allow a language model to interact with its environment

The main value props of LangChain are:

  1. Components: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy to use, whether you are using the rest of the LangChain framework or not.
  2. Off-the-shelf chains: a structured assembly of components for accomplishing specific higher-level tasks

Off-the-shelf chains make it easy to get started. For more complex applications and nuanced use-cases, components make it easy to customize existing chains or build new ones.

Announcement post for the initial release that explains the library and gives an overview:

Library on GitHub:

The library helps to integrate an Elixir app with an LLM (Large Language Model) like ChatGPT. It includes Livebook notebooks for easy experimental play. It also supports defining your own functions and exposing them to the LLM so it can call them, extending your app's functionality in interesting new ways.
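As a rough sketch of what exposing a function looks like (module names follow the library's docs; exact options vary by version, and locate_user is a made-up example):

  alias LangChain.Function
  alias LangChain.Chains.LLMChain
  alias LangChain.ChatModels.ChatOpenAI

  # A custom function the LLM is allowed to call. The callback receives
  # the LLM-provided arguments plus an app-supplied context.
  locate_user =
    Function.new!(%{
      name: "locate_user",
      description: "Returns the city the current user is in.",
      function: fn _args, _context ->
        # A real app would look this up from session or account data.
        "Portland, Oregon"
      end
    })

  %{llm: ChatOpenAI.new!(%{model: "gpt-4"})}
  |> LLMChain.new!()
  |> LLMChain.add_functions(locate_user)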

33 Likes

OMG 👍

3 Likes

Published a DEMO project for using the Elixir LangChain library.

  • writes the messages to a SQLite database
  • can edit/delete messages
  • can resubmit a conversation after making changes
  • can cancel an in-progress message
  • message deltas are streamed in (see the streaming sketch below)
  • uses LiveView’s new Async Operations

Check it out!
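For a taste of the streaming piece, here's a hedged sketch (the callback option name and run/2 return shape differ across library versions):

  alias LangChain.ChatModels.ChatOpenAI
  alias LangChain.Chains.LLMChain
  alias LangChain.{Message, MessageDelta}

  # With stream: true the reply arrives as MessageDelta chunks, which the
  # demo forwards to the LiveView as they come in.
  %{llm: ChatOpenAI.new!(%{model: "gpt-4", stream: true})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Suggest a warm-up routine."))
  |> LLMChain.run(
    callback_fn: fn
      %MessageDelta{content: content} -> IO.write(content || "")
      %Message{} -> IO.puts("")
    end
  )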

12 Likes

This is interesting. Given that Ex is a functional language and LangChain is a pretty Java-y mess of a library, I can only see improvements out of the gate.

I’ve spent a bunch of time working with LLMs recently and have just been writing functions to encapsulate the various steps in my pipelines. What’s the main benefit of adopting LangChain in a functional language?

I’ve actually been inspired by a Python library I saw recently that makes it easy to define your LLM calls as functions and have been thinking about how to write a macro in Ex that does this, with the end result being single functions you can call to interact with the LLM. (GitHub - jackmpcollins/magentic: Seamlessly integrate LLMs as Python functions)
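To make the idea concrete, a hypothetical sketch of such a macro (not from any existing library; llm_call/1 is a placeholder for the actual API request):

  defmodule LLMFn do
    # Generates a function whose body fills a prompt template from a
    # keyword list of bindings and sends the result to the LLM.
    defmacro defprompt(name, template) do
      quote do
        def unquote(name)(bindings) do
          prompt =
            Enum.reduce(bindings, unquote(template), fn {key, val}, acc ->
              String.replace(acc, "{#{key}}", to_string(val))
            end)

          LLMFn.llm_call(prompt)
        end
      end
    end

    # Placeholder: wire this up to ChatOpenAI or another client.
    def llm_call(prompt), do: {:ok, "LLM response to: " <> prompt}
  end

  defmodule Demo do
    import LLMFn
    defprompt :summarize, "Summarize the following text: {text}"
  end

  Demo.summarize(text: "Elixir is a functional language...")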

1 Like

This is super awesome, thank you! I might be missing it, but is there a way to return the :usage tokens for the LLM from the completions endpoint?

# from the OpenAI api
"usage": { "prompt_tokens": 5, "completion_tokens": 5, "total_tokens": 10 } }

OpenAI API token usage doc

1 Like

Yes, the usage tokens could be returned, but they aren't currently. I want to add support for a couple of other LLMs first, see how they report usage, and find what's common among them.

2 Likes

I updated the DEMO project to include an Agent example. In this case, it’s an AI Personal Fitness Trainer! I created a YouTube video about it and wrote up an overview in a blog post.

4 Likes

Turns out ChatGPT doesn’t know anything about the current date or day of the week! When your application needs that, how can we solve it? See how we can make our AI apps more useful by making them date aware!
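One simple fix (a minimal sketch; the prompt wording is just an example) is to inject today's date into the system message before starting the chain:

  alias LangChain.Message

  # Format today's date, including the day of the week.
  today = Date.utc_today() |> Calendar.strftime("%A, %B %d, %Y")

  system =
    Message.new_system!("You are a helpful assistant. Today's date is #{today}.")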

3 Likes

Awesome work!

Agree that the LangChain abstractions are sometimes easier and sometimes more trouble than they're worth.

Also agree that FP will clean up a lot.

But I think it’s going to fall into sprawling spaghetti without an abstracted structure, at least when I think about dozens of people using it and collaborating across projects.

I was wondering if you’d be willing to look at my buddy’s project, which is intended to be a graph builder (and reconfiguration system) and serialization system. It can be run on multiple runtimes and the development could be ported as well. It’s not FP today, but not bad.

If you like it, would you consider the DAG abstraction layer as a means of interoperability?

1 Like

Thanks for the great work @brainlid

Assuming models will likely have different APIs and features in the future, does it make sense to include the raw model responses (or at least the uncommon attributes)?

Like @f0rest8, I needed to calculate usage tokens, and if we had the raw responses, I could easily do the mapping myself.

Looks like a great library, giving it a try.

@brainlid how do you set the OpenAI api_key when working within a Livebook?

I have the OpenAI API key set as an environment variable in my Livebook, but I can’t set it in the map when using ChatOpenAI.new!

I must be missing something simple. Thanks in advance for the help.

You can set a secret OPENAI_KEY in Livebook; then you must give your current notebook access to it in the Secrets tab.
After that you can read the secret, which gets an LB_ prefix, and pass it to ChatOpenAI.new!:

  alias LangChain.ChatModels.ChatOpenAI
  alias LangChain.Chains.LLMChain

  # Livebook secrets are exposed as environment variables with an LB_ prefix
  api_key = System.get_env("LB_OPENAI_KEY")

  %{
    llm:
      ChatOpenAI.new!(%{
        model: "gpt-4",
        api_key: api_key
      })
  }
  |> LLMChain.new!()
1 Like

Yeah, I tried that exact same thing, and for some reason the api_key in the map is always nil.

I can retrieve the key from the environment, but when I set it in the map it ends up nil.

Thanks for the help. I’ll keep trying to figure out what should be a very easy thing to do.

Today, with zero code changes, it works! I shut down and restarted the server a few times, and now it’s good to go. Thanks for the help.

1 Like

Amazing work! Many thanks to all contributors!

By combining this library with Ollama, I can transparently use the GPU of my MBP (Apple Silicon). Interacting with LLMs from Elixir applications becomes very efficient and local-only (I dislike sending sensitive context to cloud services).
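In case it helps others, a minimal sketch of the local setup (assuming the ChatOllamaAI chat model that newer versions of the library ship, with Ollama running on its default port; "llama2" is just an example model):

  alias LangChain.ChatModels.ChatOllamaAI
  alias LangChain.Chains.LLMChain
  alias LangChain.Message

  # Everything stays on the local machine; Ollama serves the model.
  %{llm: ChatOllamaAI.new!(%{model: "llama2"})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Summarize my meeting notes."))
  |> LLMChain.run()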

1 Like