So, to explain how I actually want the voice chat application to behave:
A user logs in and is redirected to a page. There aren't really any elements on that page except a logout button and a couple of pictures; I don't plan to add anything interactive for now. When the user logs in and enters the page, WebRTC asks for permission to use the microphone (I've already implemented this part).
Once the user allows microphone access, they start broadcasting audio.
Now, the behavior I actually want: when another user logs in, the audio between the two can only be heard if a condition is met (I use in-game locations for that condition). If a third user logs in and they're all spaced at the boundary distance, this should happen:
(User locations: A – B – C)
- A will only hear B
- B will hear A and C
- C will only hear B
- (again, just in case: A will not hear C)
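The rule above boils down to a simple pairwise distance check. Here's a minimal sketch, assuming 2D in-game coordinates and a made-up `HEARING_RANGE` threshold (both are my assumptions, not from the game):

```javascript
// Hypothetical hearing range in game units.
const HEARING_RANGE = 10;

function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Two users can hear each other only if they are within range.
function canHear(a, b) {
  return distance(a, b) <= HEARING_RANGE;
}

// The A – B – C example: A and C are each exactly in range of B,
// but twice the range apart from each other.
const A = { x: 0, y: 0 };
const B = { x: 10, y: 0 };
const C = { x: 20, y: 0 };
// canHear(A, B) and canHear(B, C) are true; canHear(A, C) is false.
```

Since the relation is symmetric, checking it once per pair is enough to decide both directions of audio.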
I think I could just make one channel and let all users join it, but I suspect that becomes a problem with multiple users: they'd receive each other's data streams even when they can never hear each other (the A and C example above), so they'd download unnecessary data.
I guess I could also make a per-user channel (sounds plausible) and then stream the data (mixed on the server's side, I guess?), but I don't know how to actually implement that, or whether I even should.
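One way I imagine avoiding those unnecessary streams: treat the set of peer connections as something derived from user positions, and diff it against the currently open connections as users move. A sketch of just that bookkeeping (pure logic, no actual WebRTC calls; the key format, `HEARING_RANGE`, and the function names are all my own invention):

```javascript
const HEARING_RANGE = 10; // hypothetical threshold in game units

function inRange(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y) <= HEARING_RANGE;
}

// positions: { userId: { x, y } }
// current: Set of "idA|idB" keys (ids sorted, so each pair appears once)
// Returns which peer connections should be opened and which closed.
function diffConnections(positions, current) {
  const desired = new Set();
  const ids = Object.keys(positions).sort();
  for (let i = 0; i < ids.length; i++) {
    for (let j = i + 1; j < ids.length; j++) {
      if (inRange(positions[ids[i]], positions[ids[j]])) {
        desired.add(`${ids[i]}|${ids[j]}`);
      }
    }
  }
  return {
    open: [...desired].filter((k) => !current.has(k)),
    close: [...current].filter((k) => !desired.has(k)),
  };
}
```

With the A–B–C layout above this would say to open A|B and B|C but never A|C, so A and C exchange no audio data at all. Whether the actual connections are browser-to-browser (mesh) or go through a server (SFU-style) is exactly the kind of decision I'm asking about.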
I've never done anything with WebRTC, so there's a high probability that neither approach is correct.
I'm not really asking for spoon-fed code, just some recommendations, ideas, or pointers.