How to use Membrane WebRTC for audio only?

Hey guys, I'm quite new to Membrane. I want to use Membrane WebRTC to handle a WebRTC call (audio only).

Here's my pipeline:

  @impl true
  def handle_init(_ctx, opts) do
    spec =
      # receive audio from the ingress peer
      child(:webrtc_source, %Membrane.WebRTC.Source{
        signaling: opts[:ingress_signaling]
      })
      |> via_out(:output, options: [kind: :audio])
      # make sure the stream is Opus-encoded and paced in real time
      |> child(%Membrane.Transcoder{output_stream_format: Membrane.Opus})
      |> child(:audio_realtimer, Membrane.Realtimer)
      |> via_in(:input, options: [kind: :audio])
      # forward the audio to the egress peer
      |> child(:webrtc_sink, %Membrane.WebRTC.Sink{
        signaling: opts[:egress_signaling]
      })

    {[spec: spec], %{}}
  end

And here's my channel code:

  @impl true
  def join("call:" <> signaling_id, %{"call_id" => call_id, "role" => role} = payload, socket) when role in ["ingress", "egress"] do
    PhoenixSignaling.register_channel(signaling_id)
    signaling = PhoenixSignaling.Registry.get(signaling_id)
    c_pid = CallHandler.new(call_id)
    if role == "egress" do
      CallHandler.add_egress(c_pid, signaling_id, signaling)
    else
      CallHandler.add_ingress(c_pid, signaling_id, signaling)
    end
    socket = assign(socket, :signaling_id, signaling_id)
    {:ok, socket}
  end


  @impl true
  def handle_in(signaling_id, msg, socket) do
    IO.inspect({:receive_frombrows, signaling_id, msg})
    # msg = Jason.decode!(msg)
    PhoenixSignaling.signal(signaling_id, msg)
    {:noreply, socket}
  end

  @impl true
  def handle_info({:membrane_webrtc_signaling, _pid, msg, _metadata}, socket) do
    IO.inspect({:receive_signal, socket.assigns.signaling_id, msg})
    push(socket, socket.assigns.signaling_id, msg)
    {:noreply, socket}
  end

All my peers successfully complete the handshake, but media won't flow to my peers.
There are no errors in my code either. Did I make a mistake in my pipeline?


Hello @OctopusRage!
Could you tell us where and how you spawn your pipeline (i.e. what function you use to spawn it) and what the CallHandler.add_ingress/CallHandler.add_egress functions look like?
I also assume that the browser is sending only audio, is that right?

One tiny thing I see is that a more recommended approach would be to create a signaling with a given ID using PhoenixSignaling.new/1 in the place where you spawn your pipeline, instead of fetching it with the private PhoenixSignaling.Registry.get/1 function, but I don't think that's the root of the problem.
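
For illustration, something along these lines in the place where you start the pipeline (the pipeline module and variable names below are just placeholders, so treat it as a sketch rather than exact code):

alias Membrane.WebRTC.PhoenixSignaling

# create the signaling structs yourself instead of fetching them from the registry
ingress_signaling = PhoenixSignaling.new(ingress_signaling_id)
egress_signaling = PhoenixSignaling.new(egress_signaling_id)

Membrane.Pipeline.start_link(MyPipeline,
  ingress_signaling: ingress_signaling,
  egress_signaling: egress_signaling
)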

Best wishes!

Hey @varsill,
yes, the browser only sends audio.

This is what callhandler.ex looks like:

defmodule Piqeons.Rtc.CallHandler do
  alias ExWebRTC.SessionDescription
  alias PiqeonsWeb.Endpoint
  alias Piqeons.Rtc.PeerHandler2
  alias Membrane.WebRTC.Signaling
  alias Membrane.WebRTC.PhoenixSignaling
  alias Piqeons.Rtc.CallSpv
  alias Piqeons.Pipelines.RtcPipe
  use GenServer

  def new(call_id) do
    pid = CallSpv.get_call(call_id)

    if pid do
      pid
    else
      {:ok, pid} = CallSpv.spawn_call(call_id)
      pid
    end
  end

  def start_link(call_id, opts) do
    GenServer.start_link(__MODULE__, call_id, opts)
  end

  @impl true
  def init(call_id) do
    {:ok,
     %{
       call_id: call_id,
       ingress: %{
         signaling: nil,
         connected: false
       },
       egress: %{
         signaling: nil,
         connected: false
       },
       started: false
     }}
  end


  def add_egress(pid, peer_id, signaling \\ nil) do
    signaling = if signaling do
      signaling
    else
      # currently unused
      {p_pid, signaling} = PeerHandler2.new(peer_id)
      PeerHandler2.register_call_pid(p_pid, pid)
      signaling
    end

    IO.inspect({:egress, peer_id})
    GenServer.call(pid, {:add_egress, signaling})
  end

  def add_ingress(pid, peer_id, signaling \\ nil) do
    signaling = if signaling do
      signaling
    else
      # currently unused
      {p_pid, signaling} = PeerHandler2.new(peer_id)
      PeerHandler2.register_call_pid(p_pid, pid)
      signaling
    end

    GenServer.call(pid, {:add_ingress, signaling})
  end


  @impl true
  def handle_call({:add_egress, signaling}, _, state) do
    state = %{state | egress: %{signaling: signaling, connected: true}}

    if !state.started and state.ingress.signaling do
      state = start_pipeline(state)
      {:reply, state.egress.signaling, state}
    else
      {:reply, state.egress.signaling, state}
    end
  end

  @impl true
  def handle_call({:add_ingress, signaling}, _, state) do
    state = %{state | ingress: %{signaling: signaling, connected: true}}

    if !state.started and state.egress.signaling do
      state = start_pipeline(state)
      {:reply, state.ingress.signaling, state}
    else
      {:reply, state.ingress.signaling, state}
    end
  end


  @impl true
  def handle_info(msg, state) do
    IO.inspect(msg)
    {:noreply, state}
  end

  defp start_pipeline(state) do
    Task.start(fn ->
      Membrane.Pipeline.start_link(RtcPipe,
        ingress_signaling: state.ingress.signaling,
        egress_signaling: state.egress.signaling
      )
    end)

    %{state | started: true}
  end
end

The current state of how I tested it: I open 2 different tabs, one tab with the ingress peer and the other with the egress peer.

Hi @varsill,
I finally succeeded in implementing my code! Thanks for the insight.
The problem was in my JS code: my mistake was assuming that the ingress peer can do both record and play, and that the egress peer can do both as well.

So my other question is: is it possible to send the audio stream back to the ingress peer, or do I have to do it in a hacky way?

Hi once again!

Great to hear that you managed to make it work!

So my other question is: is it possible to send the audio stream back to the ingress peer?

Though the browser supports using a single PeerConnection to handle both egress and ingress tracks, the Membrane elements (Membrane.WebRTC.Source and Membrane.WebRTC.Sink) allow data transfer in only one direction. So to transfer audio back to the browser that generated it, you need to create a new signaling and use it with Membrane.WebRTC.Sink, just as you would to transfer data to any other peer.
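
As a rough sketch, your handle_init could look something like the snippet below when sending the received audio straight back to the same browser. The :loopback_signaling option and the :webrtc_loopback_sink name are made up here; in practice it would be a separate PhoenixSignaling created for that browser, which also needs to open a second RTCPeerConnection on it to receive the audio:

  @impl true
  def handle_init(_ctx, opts) do
    spec =
      child(:webrtc_source, %Membrane.WebRTC.Source{
        signaling: opts[:ingress_signaling]
      })
      |> via_out(:output, options: [kind: :audio])
      |> child(%Membrane.Transcoder{output_stream_format: Membrane.Opus})
      |> child(:audio_realtimer, Membrane.Realtimer)
      |> via_in(:input, options: [kind: :audio])
      # a separate signaling created only for the return path; the ingress browser
      # opens a second RTCPeerConnection on this signaling to receive the audio
      |> child(:webrtc_loopback_sink, %Membrane.WebRTC.Sink{
        signaling: opts[:loopback_signaling]
      })

    {[spec: spec], %{}}
  end

If you want to keep sending the audio to the egress peer at the same time, you would additionally have to branch the stream (for example with a tee element) so that each sink gets its own copy.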

A side question - are you sure you really need to send the audio to the server and then send it back to the browser? Couldn't it be handled internally in the browser?

Yes, I'm sure I'll need it, because I would like to replace the ingress peer with a connection to another server's peer instead of a browser, and that requires media to flow without opening a new peer connection.

I see, so as I mentioned before - each of the Membrane.WebRTC.Source and Membrane.WebRTC.Sink elements supports either inbound or outbound traffic for a single PeerConnection, but not both at the same time.

However, it's possible to write a Membrane component based on ex_webrtc that handles both inbound and outbound tracks with a single PeerConnection - you can take a look at how they did it here: membrane_rtc_engine/ex_webrtc/lib/ex_webrtc_endpoint.ex at master · fishjam-cloud/membrane_rtc_engine · GitHub.

Unfortunately, it's more complicated than handling just a single type of track.
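
Just to give an idea of the overall shape of such a component, here is a very rough, non-working skeleton. It uses Membrane.Endpoint rather than the Bin-based approach from the linked engine code, and everything that makes it actually work (negotiation, track management, RTP (de)payloading) is left out:

defmodule MyApp.BidirectionalWebRTC do
  # Skeleton only: a single element owning one ExWebRTC.PeerConnection
  # and exposing pads for both directions.
  use Membrane.Endpoint

  alias ExWebRTC.PeerConnection

  def_input_pad :input, accepted_format: _any, availability: :on_request
  def_output_pad :output, accepted_format: _any, availability: :on_request, flow_control: :push

  @impl true
  def handle_init(_ctx, _opts) do
    # one peer connection handles both inbound and outbound tracks
    {:ok, pc} = PeerConnection.start_link()
    {[], %{pc: pc}}
  end

  @impl true
  def handle_info({:ex_webrtc, _pc, _msg}, _ctx, state) do
    # here the real component would react to negotiation and track events
    # coming from ex_webrtc and forward inbound media as buffers on its output pads
    {[], state}
  end
end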


Thanks for the insight @varsill, I'll take a look.
