[OpenSIPS-Users] MediaProxy loading issues - I think I need some tuning here

Jock McKechnie jock.mckechnie at gmail.com
Wed Apr 4 15:10:05 CEST 2012


Thank you, Saúl, for your swift reply.

I'm running a Deb Wheezy/Sid (unstable) release to keep up with the
latest dependencies for MediaProxy's build - although, I admit, I'm
using a build package from a few months ago.

I've got iptables v1.4.12.2 running, with MediaProxy 2.5.1 (according
to the dpkg information after the debuild), so I'm slightly behind the
release with the descriptor-leak fix.
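As a sanity check, dotted version strings should be compared numerically rather than as plain text, since "1.4.12.2" sorts after "1.4.13" alphabetically. A minimal sketch; `parse_version` is an illustrative helper, and 2.5.2 is only a placeholder for whichever later release carries the fix:

```python
# Compare dotted version strings field by field, as numbers,
# so that "1.4.12.2" correctly sorts below "1.4.13".
def parse_version(s: str) -> tuple:
    return tuple(int(part) for part in s.split("."))

# The installed build versus a hypothetical later release that
# carries the descriptor-leak fix (placeholder version number):
installed = parse_version("2.5.1")
fixed = parse_version("2.5.2")
print(installed < fixed)  # True: still behind the fix
```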

Whatever is going wrong, the load on the box was clearly not right,
so it would be unfair of me to make any assumptions about how well
MediaProxy works until I've got this sorted out.

Thank you, again.

 - JP


On Wed, Apr 4, 2012 at 1:46 AM, Saúl Ibarra Corretgé
<saul at ag-projects.com> wrote:
> Hi Jock,
>
> What MediaProxy version are you running?
>
> On Apr 3, 2012, at 10:50 PM, Jock McKechnie wrote:
>
>> Greetings all;
>>
>> We have several mediaproxy systems running in small scale production
>> (~50-100 calls concurrently) and have been very pleased with the
>> results. We find that we have to restart the relay/dispatcher machines
>> daily to keep them ticking over (they tend to get lost on their own
>> after a few days runtime), but this is a minor inconvenience.
>>
>
> What do you mean by "get lost on their own"?
>
>> Until today. Today I tried moving one of our small carrier circuits
>> over to it and gee whiz did all sorts of exciting things happen. I
>> have our systems set up with an initial OpenSIPS/media-dispatcher
>> running on a VM (public IP). This dispatcher speaks to a blade server
>> which is running a single media-relay instance.
>>
>> Under light load all is well. When the load starts ramping up (800+
>> calls), however, things start going a bit pear-shaped. I end up with
>> massive numbers of entries like this in the relay's syslog:
>>     Cannot use port pair 53378/53379
>> This appears to bog the whole relay down to the point where it's
>> using 100% of the core. Even after turning the calls back off, the
>> -relay remains at 100%, continues to dump more 'Cannot use port
>> pair' notices into rsyslog, and is impossible to stop normally
>> because it's so tied up. rsyslog itself didn't show much load in
>> 'top', so although it was clearly being hammered by the -relay, I
>> don't think rsyslog was the bottleneck here.
>>
>
> There was a very nasty bug, triggered by an API change in iptables, that caused socket descriptors to be leaked and led to exactly this situation. What version of iptables are you using? (iptables -V).
>
>> I guess my first question is, what am I doing wrong here to cause it
>> to be pushing literally tens of thousands of these errors?
>>
>> And then, next, how do I best tune mediaproxy to handle larger loads?
>> I was thinking I could run several -relays on a single blade, as they
>> appear to be single-threaded, and therefore multiple forks would
>> spread the load across the machine properly... but I'm not even sure
>> if -relay can use a different conf file from the default.
>>
>
> Yes, MediaProxy is single-threaded, but the actual relaying of packets happens in *kernel space*, not in that single thread. Thus, you shouldn't run more than one relay on a single box, and that's why it's not even supported. If one box isn't enough, just add another one with another instance of the MediaProxy relay :-)
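Scaling out that way would just mean pointing a second box's relay config at the same dispatcher; a sketch, with a placeholder address for the new box:

```
[Relay]
dispatchers=dispatcher.public.ip.address
relay_ip=second.relay.public.ip.address
port_range=50000:60000
log_level=WARNING
```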
>
>> The dispatcher, which as I said lives on the OpenSIPS VM, looks like this:
>> [Dispatcher]
>> socket_path=/tmp/dispatcher.sock
>> listen=dispatcher.public.ip.address
>> management_use_tls=no
>> log_level=WARNING
>>
>> The relay, on a Dell M610 blade, looks like:
>> [Relay]
>> dispatchers=dispatcher.public.ip.address
>> relay_ip=relay.public.ip.address
>> port_range=50000:60000
>> log_level=WARNING
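For what it's worth, that port_range bounds the relay's concurrent capacity. Treating the range as half-open and assuming each relayed stream consumes one even/odd RTP/RTCP port pair (my assumption about the accounting, not something stated in the thread), a quick check:

```python
def max_port_pairs(start: int, end: int) -> int:
    # Each RTP stream uses an even RTP port plus the odd RTCP port
    # directly above it, so a half-open range of ports holds
    # (end - start) // 2 even/odd pairs.
    return (end - start) // 2

# port_range=50000:60000 from the relay config above:
print(max_port_pairs(50000, 60000))  # 5000 port pairs
```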
>>
>> Any suggestions would be gratefully received;
>>
>
>
> Regards,
>
> --
> Saúl Ibarra Corretgé
> AG Projects
>
>
>
>
> _______________________________________________
> Users mailing list
> Users at lists.opensips.org
> http://lists.opensips.org/cgi-bin/mailman/listinfo/users


