InstructorLite – structured outputs for LLMs, a tinkering-friendly fork of Instructor

Hello,

I’m excited to introduce InstructorLite – a fork of the Instructor package.

Instructor brought the very idea of structured LLM prompting to the Elixir ecosystem. InstructorLite aims to distill it into a composable set of simple tools that people can build on.

What’s new in InstructorLite 0.1.0 compared to Instructor 0.0.5?

  • More granular API for more flexibility (a basic call is sketched after this list)
  • Anthropic adapter
  • Reduced source code complexity
  • All streaming capabilities removed
  • Bring-your-own JSON schema as an official policy. The schema converter is still there, but it’s best-effort
  • Llamacpp adapter now uses JSON schema instead of GBNF grammar
  • OpenAI adapter is reimplemented using the brand-new Structured Outputs instead of JSON mode (see the Introduction to Structured Outputs guide in the OpenAI Cookbook)
  • Fixed Ecto 3.12 compatibility
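
Here’s a rough sketch of what a basic call looks like. Option names in the snippet are illustrative, so see the Hex docs for the exact API:

```elixir
# Rough sketch of a basic call; option names may differ slightly from the docs
{:ok, result} =
  InstructorLite.instruct(
    %{
      messages: [
        %{role: "user", content: "John Doe is forty two years old"}
      ]
    },
    # ad-hoc schema; an Ecto schema module also works as the response_model
    response_model: %{name: :string, age: :integer},
    adapter: InstructorLite.Adapters.OpenAI,
    adapter_context: [api_key: System.fetch_env!("OPENAI_API_KEY")]
  )

# result => %{name: "John Doe", age: 42}
```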

It’s not a drop-in replacement, but it’s very close. Feel free to report any issues you run into if you decide to give it a try!

Hex: instructor_lite
Migration guide: Migrating from Instructor
Repo:

6 Likes

Thank you for sharing this!

Can the library be used to call Ollama locally? Is there any configuration required?

I didn’t test it with Ollama for 0.1.0, so I don’t think any of the built-in adapters will work with it out of the box. It should be pretty straightforward to write your own adapter though! And I’ll definitely look into it in the future.
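
For a rough idea of what that involves, here’s an untested sketch of an Ollama adapter hitting the local /api/chat endpoint with format: "json". The callback names are a best guess at the adapter behaviour and may not match it exactly, and it assumes Req and Jason as HTTP/JSON dependencies, so treat it as a starting point only:

```elixir
# Untested sketch: callback names are assumed, Req/Jason are assumed deps
defmodule MyApp.OllamaAdapter do
  @behaviour InstructorLite.Adapter

  @url "http://localhost:11434/api/chat"

  @impl true
  def initial_prompt(params, opts) do
    # Describe the expected JSON shape in a system message
    schema = Keyword.get(opts, :json_schema, %{})

    system = %{
      role: "system",
      content: "Respond only with JSON matching this schema: #{Jason.encode!(schema)}"
    }

    Map.update(params, :messages, [system], &[system | &1])
  end

  @impl true
  def send_request(params, _opts) do
    body = Map.merge(params, %{model: "llama3.1", format: "json", stream: false})

    case Req.post(@url, json: body) do
      {:ok, %{status: 200, body: resp_body}} -> {:ok, resp_body}
      {:ok, response} -> {:error, response}
      {:error, reason} -> {:error, reason}
    end
  end

  @impl true
  def parse_response(%{"message" => %{"content" => content}}, _opts) do
    # The model was asked for JSON, so the message content should decode directly
    Jason.decode(content)
  end

  @impl true
  def retry_prompt(params, _resp_params, errors, _response, _opts) do
    retry = %{
      role: "user",
      content: "The previous answer was invalid: #{inspect(errors)}. Respond with valid JSON."
    }

    Map.update(params, :messages, [retry], &(&1 ++ [retry]))
  end
end
```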

1 Like

Appreciate your work! I just returned to a project using Instructor today and bumped into the compatibility issue with Ecto 3.12. Tried the fork that attempted to fix it but it didn’t work (bummer that instructor doesn’t seem to be maintained anymore!). But migrating to instructor_lite was a breeze and everything works great now.

2 Likes

so cool!

1 Like

Released version 0.2.0 (Changelog) with a couple of bug fixes and a simpler way to switch to the less strict JSON mode with the OpenAI adapter.

1 Like

I looked into Ollama, and the implementation would depend heavily on the underlying open-source model, so I don’t believe it makes sense to implement a universal adapter. However, I went ahead and created an example cookbook on how to implement your own Ollama adapter. It pretty much includes a functional adapter for the llama3.1 model.
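
Once you have an adapter module, using it should just be a matter of passing it via the adapter option, roughly like this (the module name below is a placeholder, not the cookbook’s):

```elixir
# Hypothetical usage of a hand-rolled Ollama adapter
InstructorLite.instruct(
  %{messages: [%{role: "user", content: "John Doe is forty two years old"}]},
  response_model: %{name: :string, age: :integer},
  adapter: MyApp.OllamaAdapter
)
```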

1 Like