Bumblebee is out: GPT2, Stable Diffusion, and More in Elixir (and LiveBook)

We are glad to announce that a variety of neural network models are now available to the Elixir community via the Bumblebee project.

We have implemented several models, from GPT2 to Stable Diffusion, in pure Elixir, and you can download pre-trained parameters for these models directly from Hugging Face.

To run your first machine learning model in Elixir, all you need is three clicks, thanks to our integration between Livebook and Bumblebee. You can also easily embed and run these models within existing Elixir projects.
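As a rough sketch of what embedding looks like, here is a minimal image-classification example based on the Bumblebee 0.1 API shown later in this thread (the model name and option values are illustrative; weights are downloaded from Hugging Face on first run):

```elixir
# Suitable for an IEx session or a Livebook cell.
Mix.install([
  {:bumblebee, "~> 0.1"},
  {:exla, "~> 0.4"}
])

# Load the model and its featurizer from the Hugging Face hub.
{:ok, model_info} = Bumblebee.load_model({:hf, "microsoft/resnet-50"})
{:ok, featurizer} = Bumblebee.load_featurizer({:hf, "microsoft/resnet-50"})

# Build a serving that runs classification, compiled with EXLA.
serving =
  Bumblebee.Vision.image_classification(model_info, featurizer,
    top_k: 1,
    defn_options: [compiler: EXLA]
  )

# `image` is assumed to be an {height, width, 3} Nx tensor of pixels.
Nx.Serving.run(serving, image)
```

Within a larger application you would typically start the serving under a supervisor and call `Nx.Serving.batched_run/2` instead, as the Phoenix demo later in this thread does.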

Watch the video by José Valim covering all of these topics and features:

Full details:

53 Likes

Nice!! :tada:

Love the image classification from inside a Phoenix app!!

2 Likes

Santa is early this year :smiley: Looks really amazing, I’m excited to play around with this. I love the way Livebook makes all of this so approachable for us that wanna learn and jump on the AI/ML bandwagon :slight_smile:

6 Likes

Wonderful! I have a few use cases for this specifically at work. The demo video was very well done!

1 Like

@chrismccord is the source of that demo available somewhere? I would like to show off elixir to my colleagues :smiley:

Found it: GitHub - chrismccord/single_file_phx_bumblebee_ml

5 Likes

Is there anything I can do to make this run on the GPU on an M1 Mac? For instance running Stable Diffusion, since it is a bit slow :sweat_smile: I don't know where to begin or what to search for in order to answer this myself :slight_smile:

6 Likes

I’m having the same issue…

1 Like

Unfortunately Apple Silicon GPU support is still rudimentary everywhere (including in the Python ecosystem) except in Apple's own tooling. Apple did provide an XLA backend for CoreML some time ago, but IIRC it was closed source and it did not catch on.

We have an issue open to explore this a bit more, but CoreML has limitations, such as being unable to support f64 floats (a.k.a. doubles), among others.

So we will see.

TL;DR: CPU only on Apple Silicon. If someone wants to work on a CoreML backend for Nx though, it would be great!

10 Likes

Thank you Jose!

1 Like

Thanks for the update Jose :slight_smile:

I did what I could :cry::sweat_smile:

11 Likes

I was figuring out how to run existing models, because I am no expert in machine learning.

I looked at how others are doing it (TensorFlow Serving, etc.) and wished Elixir had something like that.

Lo and behold: Bumblebee. :sunglasses:

Machine learning serving from the comfort of Elixir.

How are people so awesome?!?

Thank you @josevalim.

4 Likes

No surprise, image classification would be a great addition to image, but for some reason I can't fathom, starting up Nx.Serving is failing with:

06:30:13.408 [notice] Application image exited: exited in: Image.Application.start(:normal, [])
    ** (EXIT) exited in: GenServer.call(EXLA.Client, {:client, :host, [platform: :host]}, :infinity)
        ** (EXIT) no process: the process is not alive or there's no process currently associated with the given name, possibly because its application isn't started

Which suggests there is a failure starting EXLA.Client.

If I don’t configure EXLA then the app starts fine. But no surprise - the binary backend isn’t suited to this kind of work and I lose patience after 5 minutes of a test run :slight_smile:

Any help would be appreciated - I’m pretty sure this is a stupid user error. Here’s the relevant info: (Mac ARM, OTP 25, Elixir 1.14)

# Image.Application
defmodule Image.Application do
  @moduledoc false
  use Application

  def start(_type, _args) do
    Supervisor.start_link(
      [
        {Nx.Serving, serving: Image.Classification.serving(), name: Image.Serving, batch_timeout: 100}
      ],
      strategy: :one_for_one
    )
  end
end

# Image.Classification
defmodule Image.Classification do
  alias Vix.Vips.Image, as: Vimage

  def serving(model \\ "microsoft/resnet-50", featurizer \\ "microsoft/resnet-50") do
    {:ok, model_info} = Bumblebee.load_model({:hf, model})
    {:ok, featurizer} = Bumblebee.load_featurizer({:hf, featurizer})

    Bumblebee.Vision.image_classification(model_info, featurizer,
      top_k: 1,
      compile: [batch_size: 10],
      defn_options: [compiler: EXLA]
    )
  end

  def classify(%Vimage{} = image) do
    with {:ok, binary} <- Image.to_nx(image) do
      Nx.Serving.batched_run(Image.Serving, binary)
    end
  end
end

# mix.exs
  defp deps do
    [
       ...
      # Nx for interchange and
      # Bumblebee for image classification
      if(otp_release() >= 24, do: [
        {:nx, "~> 0.4.1", optional: true},
        {:bumblebee, "~> 0.1.0", optional: true},
        {:exla, "~> 0.4.1", optional: true}
      ]),
      ...
     ]
     |> List.flatten()
     |> Enum.reject(&is_nil/1)
  end

# dev.exs
config :nx,
  default_backend: EXLA.Backend

3 Likes

It looks like the EXLA client process died; usually this happens when startup fails, like when there is a linking error or something in the NIF. Are there any additional error messages before you hit this point?

1 Like

@seanmor5 thanks for the assist. No other errors. I tried starting it manually from iex and perhaps the following helps a little?

Interactive Elixir (1.14.0) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> Supervisor.start_link([{Nx.Serving, serving: Image.Classification.serving(), name: Image.Serving, batch_timeout: 100}])
** (exit) exited in: GenServer.call(EXLA.Client, {:client, :host, [platform: :host]}, :infinity)
    ** (EXIT) no process: the process is not alive or there's no process currently associated with the given name, possibly because its application isn't started
    (elixir 1.14.0) lib/gen_server.ex:1027: GenServer.call/3
    (exla 0.4.1) lib/exla/backend.ex:147: EXLA.Backend.client_and_device_id/1
    (exla 0.4.1) lib/exla/backend.ex:44: EXLA.Backend.from_binary/3
    (bumblebee 0.1.0) lib/bumblebee/conversion/pytorch/loader.ex:71: Bumblebee.Conversion.PyTorch.Loader.object_resolver/1
    (unpickler 0.1.0) lib/unpickler.ex:828: Unpickler.resolve_object/2
    (unpickler 0.1.0) lib/unpickler.ex:818: anonymous fn/2 in Unpickler.finalize_stack_items/2
    (elixir 1.14.0) lib/map.ex:924: Map.get_and_update/3
    iex:1: (file)
1 Like

Just to try to dive one level further I manually started EXLA.Client without error but of course then Nx.Serving won’t start:

iex(2)> Supervisor.start_link([EXLA.Client], strategy: :one_for_one)
{:ok, #PID<0.18669.0>}
iex(3)> Supervisor.start_link([{Nx.Serving, serving: Image.Classification.serving(), name: Image.Serving, batch_timeout: 100}])
2022-12-10 12:58:37.980550: I tensorflow/compiler/xla/pjrt/tfrt_cpu_pjrt_client.cc:214] TfrtCpuClient created.
                                                                                                              ** (exit) exited in: GenServer.call(EXLA.Defn.LockedCache, {:lock, {#Function<88.82845959/1 in EXLA.Backend.reshape/2>, [{{:f, 32}, {2048000}, [nil]}]}}, :infinity)
    ** (EXIT) no process: the process is not alive or there's no process currently associated with the given name, possibly because its application isn't started
    (elixir 1.14.0) lib/gen_server.ex:1027: GenServer.call/3
1 Like

Oh, I didn't realize you were starting it up in this way. EXLA spins up a few processes (see nx/application.ex at main · elixir-nx/nx · GitHub) that you will need to start in order to work with it.

2 Likes

Though all of these should be started when you start the serving, based on your code.

1 Like

That was just a test to see if EXLA.Client would start. My configuration is, as best I can tell, exactly the same as that in the demo Phoenix app. It seems EXLA.Client can start (just as a test). But when starting Nx.Serving as a supervised child, the error in the original message occurs. I'm definitely stuck.

Just to make sure, I force-compiled exla again, with no errors:

kip@Kips-MacBook-Pro image % mix deps.compile exla --force
==> exla
Unpacking /Users/kip/Library/Caches/xla/0.4.1/cache/download/xla_extension-aarch64-darwin-cpu.tar.gz into /Users/kip/Development/image/deps/exla/cache
c++ -fPIC -I/Users/kip/.asdf/installs/erlang/25.1/erts-13.1/include -Icache/xla_extension/include -O3 -Wall -Wno-sign-compare -Wno-unused-parameter -Wno-missing-field-initializers -Wno-comment -shared -std=c++17 -w -DLLVM_ON_UNIX=1 c_src/exla/exla.cc c_src/exla/exla_nif_util.cc c_src/exla/exla_client.cc -o cache/libexla.so -Lcache/xla_extension/lib -lxla_extension -flat_namespace -undefined suppress
install_name_tool -change bazel-out/darwin_arm64-opt/bin/tensorflow/compiler/xla/extension/libxla_extension.so @loader_path/xla_extension/lib/libxla_extension.so -change bazel-out/darwin-opt/bin/tensorflow/compiler/xla/extension/libxla_extension.so @loader_path/xla_extension/lib/libxla_extension.so cache/libexla.so
Compiling 21 files (.ex)
Generated exla app
1 Like

I noticed you have exla as an optional dep. I don't remember off the top of my head, but optional deps may not be automatically started. Try Application.ensure_all_started(:exla) before the call site.
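For concreteness, here is a sketch of how that suggestion could be applied to the Image.Application module posted earlier in the thread (a hypothetical adaptation, not a confirmed fix):

```elixir
defmodule Image.Application do
  @moduledoc false
  use Application

  def start(_type, _args) do
    # Because :exla is declared as an optional dependency, it may not be
    # started automatically, so start it (and its dependencies) explicitly
    # before building the serving, which needs the EXLA processes running.
    {:ok, _apps} = Application.ensure_all_started(:exla)

    Supervisor.start_link(
      [
        {Nx.Serving,
         serving: Image.Classification.serving(),
         name: Image.Serving,
         batch_timeout: 100}
      ],
      strategy: :one_for_one
    )
  end
end
```

`Application.ensure_all_started/1` is idempotent, so calling it here is safe even if the application was already started elsewhere.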

2 Likes