Process audio data from browser

I have a Phoenix application that I have set up to create an RTP session with the LiveView server. LiveView currently handles the signaling part, and I am getting the RTP packets in my LiveView. :grinning:

Now, I am trying to figure out the best way to process the media packets. I see two options.

  1. Use Membrane with a custom source element that takes the RTP/Opus packets, then build a pipeline on top of it to do whatever processing needs to be done (see the sketch after this list).
  2. Use Boombox (this would be my preference), but I don’t see how to create the input that Boombox needs. The WebRTC input takes a signaling server (which I have already taken care of), and I’m not sure how to get the RTP packets into Boombox so it can produce the output. Perhaps I could use Membrane.WebRTC.SignalingChannel to handle the signaling part and then pass that to Boombox. However, I have not figured out how the SignalingChannel gets used. Could I start it in my LiveView and pass the signaling messages from the client to the SignalingChannel?
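For option 1, this is roughly the shape I had in mind: a minimal push-mode source that the LiveView feeds with each RTP packet it receives, and that re-emits the packets as buffers for downstream RTP/Opus elements. This is only a sketch assuming Membrane Core 1.x conventions; `MyApp.RTPPacketSource` and the `{:rtp_packet, binary}` message shape are names I made up.

```elixir
defmodule MyApp.RTPPacketSource do
  # Hypothetical push-mode source fed with RTP packets via send/2.
  use Membrane.Source

  def_output_pad :output,
    accepted_format: %Membrane.RemoteStream{type: :packetized},
    flow_control: :push

  @impl true
  def handle_init(_ctx, _opts), do: {[], %{}}

  @impl true
  def handle_playing(_ctx, state) do
    # Downstream elements (RTP depayloader, Opus decoder, ...) parse the
    # actual RTP headers; here we only declare that we emit packetized data.
    format = %Membrane.RemoteStream{type: :packetized, content_format: Membrane.RTP}
    {[stream_format: {:output, format}], state}
  end

  @impl true
  def handle_info({:rtp_packet, binary}, _ctx, state) when is_binary(binary) do
    # Each packet the LiveView forwards becomes one buffer on the output pad.
    {[buffer: {:output, %Membrane.Buffer{payload: binary}}], state}
  end
end
```

The LiveView would then `send(source_pid, {:rtp_packet, packet})` for every packet it receives; how it obtains `source_pid` (e.g. the element notifying its parent pipeline, or a Registry) is left out of the sketch.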

Hi, the Membrane way will work, but the SignalingChannel way won’t. The SignalingChannel only proxies WebRTC session negotiation messages; the RTP packets don’t go through it, they go directly to the Boombox/WebRTC source. What you would need is Boombox accepting plain RTP. We’re working on that right now, but we also decided it’s high time we paid off some tech debt in Membrane’s RTP implementation, so it may take a month or two.
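To make that split concrete: with the WebRTC input, the LiveView creates the channel and only relays negotiation messages through it, while Boombox exchanges media with the browser directly. A rough sketch, assuming `SignalingChannel.new/0`, `register_peer/1` and `signal/2` roughly as in the membrane_webrtc_plugin docs (check the current docs for exact names and message formats), with `recording.mp4` as a placeholder output:

```elixir
alias Membrane.WebRTC.SignalingChannel

# Created by the LiveView (or a process it supervises).
signaling = SignalingChannel.new()

# Boombox negotiates the WebRTC session through `signaling`, but receives
# the media over its own peer connection with the browser; the RTP packets
# never pass through the LiveView or the SignalingChannel.
{:ok, _task} =
  Task.start_link(fn ->
    Boombox.run(input: {:webrtc, signaling}, output: "recording.mp4")
  end)

# The LiveView registers itself as the other peer and only relays
# SDP offers/answers and ICE candidates between the browser and the channel:
SignalingChannel.register_peer(signaling)
# SignalingChannel.signal(signaling, message_from_browser)
# ...and forwards messages received from the channel back to the browser.
```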
