LangChain is short for Language Chain. An LLM, or Large Language Model, is the “Language” part. This library makes it easier for Elixir applications to “chain” or connect different processes, integrations, libraries, services, or functionality together with an LLM.
LangChain is a framework for developing applications powered by language models. It enables applications that are:
- Data-aware: connect a language model to other sources of data
- Agentic: allow a language model to interact with its environment
The main value props of LangChain are:
- Components: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
- Off-the-shelf chains: a structured assembly of components for accomplishing specific higher-level tasks
Off-the-shelf chains make it easy to get started. For more complex applications and nuanced use-cases, components make it easy to customize existing chains or build new ones.
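As a rough sketch of what a chain looks like in the Elixir library (module names follow the published docs, but the exact return shape of `run/1` has varied across versions, and running it requires a configured OpenAI API key):

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

# Build a chain around an OpenAI chat model and run a single user message.
{:ok, updated_chain} =
  %{llm: ChatOpenAI.new!(%{model: "gpt-4"})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Name an Elixir web framework."))
  |> LLMChain.run()

# The assistant's reply is available on the returned chain.
updated_chain.last_message.content
```

The pipeline style fits Elixir naturally: each step takes the chain and returns an updated chain, so composing messages, functions, and model calls is just a series of transformations.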
An announcement post for the initial release explains the library and gives an overview:
Library on GitHub:

The library helps to integrate an Elixir app with an LLM (Large Language Model) like ChatGPT. It includes Livebook notebooks for easy experimental play. This supports defining your own functions to expose to an LLM which it can call, allowing your app to extend the functionality in interesting new ways.
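Exposing one of your app's functions to the model looks roughly like this (a sketch based on the library's `LangChain.Function` module; the name, description, and returned string here are illustrative, and the callback's exact return contract has changed across versions):

```elixir
alias LangChain.Function

# A hypothetical function the LLM is allowed to call during a chain run.
hello_world =
  Function.new!(%{
    name: "hello_world",
    description: "Returns a friendly greeting from the host application.",
    function: fn _args, _context ->
      # App-specific logic goes here; the result is returned to the LLM as text.
      "Hello from the Elixir app!"
    end
  })
```

The function is then attached to a chain so the model can request it mid-conversation, letting the LLM trigger real behavior in your application.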
Published a DEMO project for using the Elixir LangChain library.
- writes the messages to a SQLite database
- can edit/delete messages
- can resubmit a conversation after making changes
- can cancel an in-progress message
- message deltas are streamed in
- uses LiveView’s new Async Operations
Check it out!
This is interesting. Given that Elixir is a functional language and the original LangChain is a pretty Java-y mess of a library, I can only see improvements out of the gate.
I’ve spent a bunch of time working with LLMs recently and have just been writing functions to encapsulate the various steps in my pipelines. What’s the main benefit of adopting LangChain in a functional language?
I’ve actually been inspired by a Python library I saw recently that makes it easy to define your LLM calls as functions and have been thinking about how to write a macro in Ex that does this, with the end result being single functions you can call to interact with the LLM. (GitHub - jackmpcollins/magentic: Seamlessly integrate LLMs as Python functions)
This is super awesome, thank you! I might be missing it, but is there a way to return the `:usage` tokens for the LLM from the completions endpoint?
```
# from the OpenAI API
"usage": {"prompt_tokens": 5, "completion_tokens": 5, "total_tokens": 10}
```
OpenAI API token usage docs
Yes, the usage tokens could be returned, but they aren’t currently. I want to add support for a couple of other LLMs first and see more examples of how others report usage and what’s common among them.
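In the meantime, if you have access to the decoded response body, pulling the usage out is just a pattern match. A minimal sketch on a plain map (the `UsageExample` module and its shape are hypothetical, not part of the library):

```elixir
# Hypothetical helper for extracting token usage from a decoded
# OpenAI chat completions response body (a plain map after JSON decoding).
defmodule UsageExample do
  def token_usage(%{"usage" => usage}) do
    %{
      prompt: usage["prompt_tokens"],
      completion: usage["completion_tokens"],
      total: usage["total_tokens"]
    }
  end
end

response = %{
  "usage" => %{"prompt_tokens" => 5, "completion_tokens" => 5, "total_tokens" => 10}
}

UsageExample.token_usage(response)
# => %{prompt: 5, completion: 5, total: 10}
```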
I updated the DEMO project to include an Agent example. In this case, it’s an AI Personal Fitness Trainer! I created a YouTube video about it and wrote up an overview in a blog post.
Turns out ChatGPT doesn’t know anything about the current date or day of the week! When your application needs that, how can we solve it? See how we can make our AI apps more useful by making them date-aware!
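One common fix is to inject the current date into the system message so the model can reason about "today", "tomorrow", and so on. A minimal sketch using the library's `Message` module (assuming `Message.new_system!/1` as documented; the wording of the prompt is illustrative):

```elixir
alias LangChain.Message

# Format today's date in a human-readable way and bake it into the
# system message so the model knows what "today" means.
today = Date.utc_today() |> Calendar.strftime("%A, %B %d, %Y")

system_message =
  Message.new_system!("You are a helpful assistant. Today's date is #{today}.")
```

Because the date is interpolated at chain-construction time, each new conversation automatically gets the current date without any model-side changes.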