Our WebRTC implementation is growing. Its core is the Membrane RTC Engine, an SFU library that comes with four client SDKs — in Kotlin, Swift, React and TypeScript. On top of that, we have built an example, open-source video conferencing system that is available online. All of these components are gradually making their way into production.
In the meantime, interest in our WebRTC implementation has grown significantly. More and more questions are being asked on our Discord server, and a growing number of new contributors are adding features and implementing missing parts of the WebRTC standard. We have also seen an increase in people trying to use the Membrane RTC Engine in their own applications.
Although keeping all WebRTC-related repositories in the membraneframework organization makes it easy to showcase the new domain Membrane can handle, it also has several drawbacks. First of all, it is hard for newcomers to discover all the related pieces, as the whole membraneframework organization contains about 100 repositories. The WebRTC repositories also differ in nature from the rest, which mostly consists of plugins that cannot be used on their own. Another drawback is that we cannot directly compare our WebRTC implementation to similar solutions, as Membrane itself is a framework. Therefore, we decided that it is a good time to move all WebRTC-related repositories into a new organization.
To start off, the organization is going to be called Jellyfish. Settling on the final branding might take us some time, but with so much exciting work ahead of us, we don’t want to wait any longer.
The work we are talking about is the new standalone media server. It can be thought of as a multimedia bridge for building different types of multimedia systems. For example, it will be a perfect choice for a real-time video conferencing system, a broadcasting solution, or both at the same time. The unique feature of our media server is its ability to combine different multimedia protocols. For example, one can stream video from a CCTV camera to the server via RTSP, convert it to WebRTC, and send it to a web application. In principle, there are no limitations.
Initially, our standalone server is going to focus on WebRTC and HLS, but we are planning to add more protocols in the future (RTMP, RTSP, etc.). The server will be provided both as precompiled binaries and as Docker images, so it will be easy to install and run on your OS (except Windows, of course :)).
Last but not least, all the knowledge about how the media server works — in particular, how to implement server-side WebRTC features like simulcast, or how to convert WebRTC to HLS — will be published on GitHub as an open book that everyone can contribute to.