VoIP with Phoenix Framework

I'm using Phoenix for my chat and it works perfectly. I would like to add VoIP calls to my project. Is there a way to add this feature to my Phoenix server, or do I need to use other technologies?

Is your chat a website? If so, maybe you can use WebRTC?

Sorry, I forgot a detail. The chat works only on Android and iOS…

Do you need the ability to call actual phone numbers, or is this just going to be voice chat between people on your chat service?

I'm open to both possibilities, but without phone numbers it will be easier for me. I just need voice communication like Skype or Hangouts.

You can use WebRTC on iOS too [0]. That will allow for p2p communication.

There are also ways to achieve a client-server architecture (like in Discord). You can use Janus [1] or Chromium's WebRTC stack… Or you can try to implement some parts of WebRTC in Elixir: there are quite a few STUN/SDP libraries in Erlang on GitHub, and there has been some DTLS support since Erlang/OTP 18. The only problem I see is SRTP (maybe you can use libsrtp [2] with dirty schedulers?).

[0] https://cocoapods.org/pods/WebRTC
[1] https://github.com/meetecho/janus-gateway
[2] https://github.com/cisco/libsrtp
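To give a sense of what those STUN libraries deal with, here is a minimal sketch (in Python, as a language-neutral illustration) of building an RFC 5389 STUN Binding Request header. The field values come from the RFC; the function name is made up for the example.

```python
import os
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value defined in RFC 5389

def binding_request() -> bytes:
    """Build a 20-byte STUN Binding Request header (no attributes)."""
    msg_type = 0x0001               # Binding Request
    msg_length = 0                  # no attributes follow the header
    transaction_id = os.urandom(12) # random 96-bit transaction ID
    return struct.pack("!HHI", msg_type, msg_length, MAGIC_COOKIE) + transaction_id

packet = binding_request()
print(len(packet))  # 20
```

A client sends this over UDP to a STUN server, which replies with the client's public address and port — the basic building block for NAT traversal.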


Thanks for the info, but isn't it possible to use the socket connection in the Phoenix framework to do the VoIP call?

Sure you can, but you'd need to re-implement the WebRTC-style machinery on the client; then you could use normal WebSockets.

WebRTC uses its own protocols and style. Your server can work with them too, but they don't work over the Phoenix socket connection, since WebRTC doesn't use WebSockets. You can always run a side server next to Phoenix, built into the same system.

Phoenix is entirely capable of it; the WebRTC standard is what won't fit over it. But if you want to forgo the WebRTC standard and make something else up, feel free.


You can use AudioToolbox on iOS to encode voice into AAC and send it over the WebSocket to Phoenix, which will then resend these packets to everyone else connected (to a channel?).

The problem with this approach is that you would have to think about things that have already been solved in WebRTC, like packet loss concealment, echo cancellation, automatic gain control, noise reduction, and noise suppression (just copied these from the webrtc.org website).

Also, WebRTC uses UDP as its transport, not TCP like WebSockets do, and UDP might be a better fit for VoIP.
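A rough sketch of what "resend these packets to everyone else" implies on the wire: each binary frame needs at least a sender ID and a sequence number so receivers can demultiplex streams and detect loss. The framing below is invented purely for illustration, not part of any real protocol.

```python
import struct

HEADER = struct.Struct("!IH")  # 4-byte sender ID, 2-byte sequence number

def frame(sender_id: int, seq: int, aac_payload: bytes) -> bytes:
    """Prepend a tiny header so receivers can demultiplex and order packets."""
    return HEADER.pack(sender_id, seq) + aac_payload

def unframe(data: bytes):
    """Split a framed packet back into (sender_id, seq, payload)."""
    sender_id, seq = HEADER.unpack_from(data)
    return sender_id, seq, data[HEADER.size:]

msg = frame(42, 7, b"\xff\xf1fake-aac")
print(unframe(msg))  # (42, 7, b'\xff\xf1fake-aac')
```

The server could then rebroadcast each frame unmodified; clients use the sender ID to pick the right decoder and the sequence number to notice gaps.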

Thanks guys for all this information, I will definitely take a look at all your advice.
Thanks

"Ability to use UDP as its transport." Why? Because UDP hole punching isn't always a successful solution.

Voice over TCP is possible, but there's a reason WebRTC was needed in addition to something like WebSockets.

This is how I would do it: I would definitely open up a Phoenix WebSocket and do all the negotiation of my voice connection over it.

Of course, if it’s a native app then you don’t need WebRTC because you can manipulate UDP sockets on your own and use the native libraries for things like gain control.

Either way, there isn't an easy way to "make voice just work". There are really just too many things that go into it, and even WebRTC is ultimately just a protocol for opening a UDP connection to a client.
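The negotiation mentioned above is just structured messages relayed over the socket. Something like this made-up JSON envelope, pushed through a Phoenix channel, would be enough to exchange an offer/answer (the message shape and field names here are illustrative assumptions, not a standard).

```python
import json

def signal(kind: str, sdp: str, to: str) -> str:
    """Encode a hypothetical signaling message for voice-call negotiation."""
    return json.dumps({"type": kind, "sdp": sdp, "to": to})

def handle_signal(raw: str) -> dict:
    """Decode and sanity-check an incoming signaling message."""
    msg = json.loads(raw)
    if msg["type"] not in ("offer", "answer", "candidate"):
        raise ValueError(f"unknown signal type: {msg['type']}")
    return msg

raw = signal("offer", "v=0 ...", "user:123")
print(handle_signal(raw)["type"])  # offer
```

The server's only job here is relaying: it forwards the offer to the callee, the answer back to the caller, and any candidate addresses both ways, then gets out of the path once the voice connection is up.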

For the same reason WebRTC uses it, I guess: timeliness is preferred over reliability in real-time communications.

Or have I misunderstood what you meant?

Sorry, I'm awful about getting ahead of myself and conveying only part of my thought process.

I meant that WebRTC doesn't always use UDP, because UDP hole punching doesn't always work.

It's also why central voice servers have become more prevalent, even if just as fallback solutions. It turns out some firewalls can't complete peer <-> peer UDP connections with each other, but can with a different type of firewall.
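The core mechanic of hole punching is both peers sending to each other's address at roughly the same time. A localhost sketch of that step (no NAT involved here, so this only shows the simultaneous-send mechanic, not a real hole punch — in practice each peer would learn the other's public address via a signaling server first):

```python
import socket

# Two UDP sockets standing in for two peers.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
b.bind(("127.0.0.1", 0))

# In a real hole punch, both peers send at the same time so each NAT
# creates an outbound mapping that lets the other peer's packet in.
a.sendto(b"hello from a", b.getsockname())
b.sendto(b"hello from b", a.getsockname())

msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
print(msg_at_b.decode())  # hello from a
print(msg_at_a.decode())  # hello from b
a.close()
b.close()
```

When certain NAT types (e.g. symmetric NATs) are on both ends, those mappings never line up, which is exactly why the fallback relay servers mentioned above exist.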
