LangChain is short for Language Chain. An LLM, or Large Language Model, is the “Language” part. This library makes it easier for Elixir applications to “chain” or connect different processes, integrations, libraries, services, or functionality together with an LLM.
LangChain is a framework for developing applications powered by language models. It enables applications that are:
Data-aware: connect a language model to other sources of data
Agentic: allow a language model to interact with its environment
The main value props of LangChain are:
Components: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
Off-the-shelf chains: a structured assembly of components for accomplishing specific higher-level tasks
Off-the-shelf chains make it easy to get started. For more complex applications and nuanced use-cases, components make it easy to customize existing chains or build new ones.
Announcement post for the initial release that explains and gives an overview:
Library on Github:
The library helps to integrate an Elixir app with an LLM (Large Language Model) like ChatGPT. It includes Livebook notebooks for easy experimental play. This supports defining your own functions to expose to an LLM which it can call, allowing your app to extend the functionality in interesting new ways.
This is interesting. Given that Ex is a functional language and LangChain is a pretty Java-y mess of a library, I can only see improvements out of the gate.
I’ve spent a bunch of time working with LLMs recently and have just been writing functions to encapsulate the various steps in my pipelines. What’s the main benefit of adopting LangChain in a functional language?
I’ve actually been inspired by a Python library I saw recently that makes it easy to define your LLM calls as functions and have been thinking about how to write a macro in Ex that does this, with the end result being single functions you can call to interact with the LLM. (GitHub - jackmpcollins/magentic: Seamlessly integrate LLMs as Python functions)
Yes, the usage tokens could be returned, but they aren’t currently. I want to add support for a couple of other LLMs and see more examples of how they work and what’s common among them.
I updated the DEMO project to include an Agent example. In this case, it’s an AI Personal Fitness Trainer! I created a YouTube video about it and wrote up an overview in a blog post.
Turns out ChatGPT doesn’t know anything about the date or day of the week! When that’s needed for your application, how can we solve it? See how we can make our AI apps more useful when they are date aware!
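One simple way to make a chat date-aware (just a rough sketch, not necessarily the exact approach from the video) is to interpolate today’s date into the system message:

# A minimal sketch: put today's date into the system message so the model
# can reason about "today", "tomorrow", etc. The wording is only an example.
alias LangChain.Message

today = Calendar.strftime(Date.utc_today(), "%A, %B %d, %Y")

system_message =
  Message.new_system!("You are a helpful fitness assistant. Today is #{today}.")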
Agree that the LangChain abstractions are sometimes easier and sometimes more trouble than they are worth.
Also agree that FP will clean up a lot.
But I think it’s going to fall into sprawling spaghetti without an abstracted structure, at least when I think about dozens of people using it and collaborating across projects.
I was wondering if you’d be willing to look at my buddy’s project, which is intended to be a graph builder (and reconfiguration system) and serialization system. It can be run on multiple runtimes, and the development could be ported as well. It’s not FP today, but not bad.
If you like it, would you consider the DAG abstraction layer as a means of interoperability?
Assuming models will likely have different APIs and features in the future, does it make sense to include the raw model responses (or only include the uncommon attributes)?
Like @f0rest8 I was in need of calculating usage tokens, and if we had the raw responses, I could easily do the mapping myself.
You can set a secret OPENAI_KEY in Livebook; then you must give your current Livebook access to it in the Secrets tab.
After that you can read the secret (it is exposed with an LB_ prefix) and pass it to ChatOpenAI.new!
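For example, in a Livebook cell (a rough sketch; the secret name OPENAI_KEY and the model name are just examples):

# Read the Livebook secret (exposed to the runtime as the LB_-prefixed env var)
# and pass it to the chat model.
alias LangChain.ChatModels.ChatOpenAI

api_key = System.fetch_env!("LB_OPENAI_KEY")

llm = ChatOpenAI.new!(%{model: "gpt-4o-mini", api_key: api_key})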
By combining this library with Ollama, I can transparently use the GPU of my MBP (Apple Silicon). Interacting with LLMs within Elixir applications becomes very efficient and local-only (I dislike reaching cloud services with sensitive context).
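For anyone curious, pointing the chain at a local Ollama server looks roughly like this (the ChatOllamaAI module name and options reflect my reading of the docs and may differ across library versions; "llama3.1" is just an example model):

# A rough sketch, assuming a local Ollama server on the default port.
alias LangChain.ChatModels.ChatOllamaAI

llm =
  ChatOllamaAI.new!(%{
    model: "llama3.1",
    endpoint: "http://localhost:11434/api/chat"
  })

# Use `llm` anywhere you would otherwise pass a ChatOpenAI struct,
# e.g. as the :llm option when building an LLMChain.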
Amazing work! Many thanks @brainlid and all contributors!
I am starting to use this library and so far it looks great. I have a minor question: what would be the best way to handle alternative logic based on user input? I mean in code. For example, in my application, different logic should be executed depending on whether the user A. wants to order food, or B. wants to eat at a restaurant.
Also, how do I provide it contextual data (actual data retrieved from tables in the DB) based on user choices during the conversation? i.e. how do I hook into an ongoing chat and feed in DB data?
A RoutingChain is a good tool for changing how you handle different contexts. It’s a basic prompt and chain that looks at the user’s input and determines “Do they want to order food to be delivered? Or do they want to eat at a restaurant?” The LLM looks at the prompt you provide and the user’s message to make the decision.
The result is to select a new, purpose-built chain. You can then run the same message from the user on the selected chain.
To access databases, typically that’s done through a Tool. You define the tools (aka functions) that the LLM should have access to. If the LLM decides to call the tool, the library handles it and executes your Elixir code. Your function’s result is returned to the LLM for it to act on.
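For example, a tool that fetches menu data might look roughly like this (the schema, helper module, and return shape are illustrative; check the LangChain.Function docs for the options supported by your version):

# A minimal sketch of exposing a DB lookup as a tool the LLM can call.
alias LangChain.Function

menu_lookup =
  Function.new!(%{
    name: "get_menu_items",
    description: "Returns the current menu items for a restaurant.",
    parameters_schema: %{
      type: "object",
      properties: %{restaurant_id: %{type: "integer"}},
      required: ["restaurant_id"]
    },
    function: fn %{"restaurant_id" => id}, _context ->
      # Hypothetical application code that queries your database.
      items = MyApp.Menus.list_items(id)
      {:ok, Enum.map_join(items, "\n", & &1.name)}
    end
  })

You then attach the function to your chain so the LLM can decide when to call it.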
Ok, I made some progress. The documentation needs to be updated to this:
selected_route =
  RoutingChain.new!(%{
    llm: ChatOpenAI.new!(%{model: "gpt-4o-mini", stream: false}),
    input_text: "I need help with JAVA programming language",
    routes: routes,
    default_route: PromptRoute.new!(%{name: "DEFAULT", chain: llm_chain})
  })
  |> RoutingChain.evaluate()
i.e. using RoutingChain.new! and llm: ChatOpenAI.new!(..)
I have made it work and can see how to implement it before the chat starts, e.g. using buttons (click handlers) to determine the input_text to provide to RoutingChain.evaluate().
But would it be possible to do the routing inside an ongoing chat (changing the chain inside an ongoing chat)? If you can give me some direction that would be great. Thank you.
But would it be possible to do the routing inside an ongoing chat (changing the chain inside an ongoing chat)?
The way I handle this is to manage the displayed chat separately from the one sent to the LLM. Just as we don’t want to display the system setup or other prompts to the user, those messages are sent to the LLM but not shown in the UI.
More advanced LLMs could handle a routing prompt inside a conversation, but the simpler LLMs cannot.
To me, it makes the most sense to perform the routing operation as a standalone chat with the LLM. You run it on the side, then use the result to define your next prompt. You don’t have to switch to another chain; you may only want the result of the LLM’s topic choice.
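Roughly, something like this (a sketch only; routes, fallback_chain, latest_user_message, and build_prompt_for/1 are placeholders for your own application code, and the module aliases may vary slightly by library version):

# Routing "on the side" during an ongoing chat: evaluate the latest user
# message with a RoutingChain, but keep the main chat chain as-is and use
# only the chosen route's name to shape the next prompt.
alias LangChain.Chains.RoutingChain
alias LangChain.Routes.PromptRoute
alias LangChain.ChatModels.ChatOpenAI

selected_route =
  RoutingChain.new!(%{
    llm: ChatOpenAI.new!(%{model: "gpt-4o-mini", stream: false}),
    input_text: latest_user_message,
    routes: routes,
    default_route: PromptRoute.new!(%{name: "DEFAULT", chain: fallback_chain})
  })
  |> RoutingChain.evaluate()

# The ongoing chat chain is untouched; only the topic choice is used.
next_prompt = build_prompt_for(selected_route.name)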