ReqLLM - Composable LLM client built on Req

Alright! Thank you.

Hi @mikehostetler, is it possible to configure the base URLs for providers so I can change them in the test environment? I’m looking to use moxinet (GitHub: johantell/moxinet, a Mox-style HTTP mocking library for Elixir that replaces external services with a local test server for reliable, realistic tests), as we’re already using it and would prefer not to introduce new testing strategies.

Yes - the easiest way is to pass options to the generate_text or stream_text functions.

The provider base_url will be overridden for that request. api_key follows this pattern as well.

Here’s an example:

test "mocked chat happy path", %{base_url: base_url} do
  assert {:ok, response} =
           ReqLLM.generate_text("openai:gpt-4o-mini", "ping",
             base_url: base_url,
             api_key: "test"
           )

  assert ReqLLM.Response.text(response) == "pong"
end
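
Presumably stream_text takes the same per-request overrides; a rough sketch (the exact return shape isn’t shown above, so treat it as an assumption):

test "mocked streaming uses the same overrides", %{base_url: base_url} do
  # Same per-request base_url/api_key overrides as generate_text above;
  # the {:ok, _} return shape is an assumption for this sketch.
  assert {:ok, _response} =
           ReqLLM.stream_text("openai:gpt-4o-mini", "ping",
             base_url: base_url,
             api_key: "test"
           )
end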

Ideally, from my perspective, someone would open a PR for Elixir LangChain to build on top of this and make it easier to BYO LLM.

We built our own LangChain.ChatModels.ChatModel, but it was pretty painful to figure out and debug; having a Req plugin as a path would have been fantastic.

The entire problem space is deceptively complex.

Turns out that Req only works for non-streaming calls, so under the covers stream_text uses Finch … (yeah, I know, I really tried).

I have some content queued up on what went into this - so keep an eye out for that.

Are you talking about downloading a stream or something more complex? Because Req has support for streams!
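
For context, streaming a response body with Req looks roughly like this; a minimal sketch with a placeholder URL, nothing to do with ReqLLM’s internals:

# Minimal sketch of Req response-body streaming via the :into option.
# The URL is a placeholder; each chunk is handed to the function as it
# arrives instead of being buffered into resp.body.
Req.get!("https://example.com/stream",
  into: fn {:data, chunk}, {req, resp} ->
    IO.inspect(byte_size(chunk), label: "chunk bytes")
    {:cont, {req, resp}}
  end
)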

More complex: it’s the process architecture behind streaming with SSE.

Happy to share the research if you’re really interested, but Req can’t do it.
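
Very roughly, the shape of the problem looks like the sketch below: Finch streams raw chunks into a callback, and the SSE framing plus handing events off to the consuming process has to be built on top. Names here are hypothetical, not ReqLLM’s actual code.

# Hypothetical sketch, not ReqLLM's actual implementation: stream an SSE
# endpoint with Finch and send each parsed event to the calling process.
# Assumes a Finch pool was started elsewhere, e.g. Finch.start_link(name: MyFinch).
defmodule SSESketch do
  def stream(url, owner) do
    request = Finch.build(:post, url, [{"accept", "text/event-stream"}], "{}")

    Finch.stream(request, MyFinch, "", fn
      {:status, _status}, buffer ->
        buffer

      {:headers, _headers}, buffer ->
        buffer

      {:data, chunk}, buffer ->
        # SSE events are separated by a blank line; keep the trailing
        # partial event in the buffer for the next chunk.
        [rest | events] = String.split(buffer <> chunk, "\n\n") |> Enum.reverse()
        for event <- Enum.reverse(events), do: send(owner, {:sse_event, event})
        rest
    end)
  end
end

Finch.stream/5 blocks the caller, so in practice this runs under its own Task or GenServer, and that supervision and backpressure story is where it gets hairy.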

Yes please! I’m actually very interested. I’m using Req to test my MCP server’s SSE streams, and I’m always looking for more flexibility.

A DM is fine if you don’t want to pollute the thread.

Thanks! I’ve been trying this with generate_object/4 but couldn’t find a way to get it to work. Would you be open to a PR that would allow overriding it at the configuration level, e.g.

config :req_llm,
  open_ai: [base_url: "http://localhost"]

# or

config :req_llm, ReqLLM.Providers.OpenAI,
  base_url: "http://localhost"

If so, I’d gladly try to help out 🙂
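
For what it’s worth, resolving such a per-provider override on the request path could be as simple as reading the application env; a hypothetical sketch, not current ReqLLM behavior:

# Hypothetical: how a per-provider base_url override from application
# config could be resolved at request time. Not current ReqLLM behavior.
base_url =
  :req_llm
  |> Application.get_env(:open_ai, [])
  |> Keyword.get(:base_url, "https://api.openai.com/v1")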

A PR is very welcome! This should work (the model catalog allows overrides), but if it’s not working for you, that’s a bug.
