[OpenSIPS-Users] Dialog distribution
Mariana Arduini
marianarduini at gmail.com
Wed Mar 14 14:51:44 CET 2012
Hi Bogdan,
I got what you said about how the edge server should work; we were thinking
of something like that. Thanks a lot for pointing out the important issues to
be solved.
Back to the question on how to get an OpenSIPS instance to "see" dialogs
created by another OpenSIPS instance... what direction would you suggest we
take to achieve this? Would it be something like creating a new Dialog Module
based on a distributed key-value store, using one of the cache_db modules?
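Just to make the idea concrete, this is roughly what I imagine at script
level (only a sketch, assuming a Redis cachedb backend already configured
under an illustrative "dlg" group; the key naming and stored value are purely
made up, not a real dialog-module replacement):

    if (is_method("INVITE") && !has_totag()) {
        # whichever instance creates the dialog publishes some state
        # under the Call-ID
        cache_store("redis:dlg", "dlg_$ci", "state=confirmed");
    }
    if (has_totag()) {
        # any other instance handling a sequential request can look it up
        if (cache_fetch("redis:dlg", "dlg_$ci", $avp(dlg_state)))
            xlog("dialog state for $ci: $avp(dlg_state)\n");
    }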
Thanks again!
Mariana.
On Mon, Mar 12, 2012 at 3:21 PM, Bogdan-Andrei Iancu <bogdan at opensips.org> wrote:
>
> Hi Mariana,
>
> In your description, I identify 2 separate issues:
>
> 1) preserving the ongoing call - when the edge server learns that
> instance 1 is down, it should take care of re-routing the sequential
> requests through a different core server; as I guess you do record-routing
> on both edge and core, I assume the sequential requests are Route driven;
> if so, after doing loose_route() for a sequential request, the edge should
> check whether the destination just set by loose_route() is still active
> (let's say there is an extension of the LB module to tell if a peer is up
> or down); if the destination pointed to by the Route hdr is down, the edge
> simply routes the call to another core proxy - of course, this assumes
> that the core proxy accepts sequential requests carrying the Route IP of
> one of the other core servers (you need to alias the IPs of the other core
> proxies) - at least this will make the core system receive, accept and
> route sequential requests for calls established through other core
> servers.
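>
> Something along these lines at the edge, maybe (a rough sketch only;
> lb_dst_is_up() stands for that hypothetical LB extension, and the
> fallback core address is just an example):
>
>     if (has_totag()) {
>         loose_route();
>         # $dd / $dp now hold the destination taken from the Route hdr
>         if (!lb_dst_is_up("$dd", "$dp")) {
>             # the core proxy in the Route hdr is down - push the request
>             # to another core proxy (which must alias the dead one's IP)
>             $du = "sip:10.0.0.12:5060";
>         }
>         t_relay();
>         exit;
>     }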
>
> 2) handling failure events for ongoing calls - by the nature of SIP, once
> the call is established, nothing more happens at SIP level that would let
> you "see" whether the call is still up or not. This first issue can be
> addressed by in-dialog probing, for example, or by SST. A second issue is
> what to do next - ok, you noticed that core proxy 1 is down and you have 4
> calls going through it; considering that the routing info is in the Route
> headers (which are learned by the end devices), there is not much you can
> do to force the dialogs to go through a different server, other than what
> I described in 1).
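>
> For the probing part, a minimal sketch on the core proxies, assuming the
> dialog module's in-dialog pinging support (the exact flag letters and
> parameter names should be checked against the dialog module docs of the
> version you run):
>
>     loadmodule "dialog.so"
>     # seconds between in-dialog OPTIONS pings
>     modparam("dialog", "options_ping_interval", 30)
>
>     route {
>         if (is_method("INVITE") && !has_totag()) {
>             # "P" = OPTIONS-based in-dialog pinging, "B" = terminate the
>             # dialog (send BYE) when it is detected as dead or expired
>             create_dialog("PB");
>         }
>     }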
>
> Regards,
> Bogdan
>
>
> On 03/12/2012 05:18 PM, Mariana Arduini wrote:
>
> Hi Bogdan,
>
> Thank you for pointing the possible issues in our solution :)
>
> The initial idea is to use load balancers running as an HA set. Our
> OpenSIPS instances will connect 2 network domains. We'll need a load
> balancer in front of each of the network domains, as in this figure:
> http://s7.postimage.org/a5ex6dqgr/arch.jpg
>
> Besides running the load_balance module, the edge servers would detect
> when instance 1 goes down and "transfer" all dialogs to another instance.
> I'm aware that this is not implemented in the current OpenSIPS code, but
> keeping all established calls and being able to handle sequential requests
> on them is a requirement in our project, so we'll have to find out how to
> do this. First thoughts include making some changes to the load_balance
> module, but so far we don't have a defined strategy. By the way, would you
> have any clues on this?
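>
> To make the starting point concrete, the edge side would look roughly
> like this (a sketch only; the group id, resource name and DB URL are
> illustrative):
>
>     loadmodule "load_balancer.so"
>     modparam("load_balancer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")
>     # OPTIONS probing, so a core instance going down is detected
>     modparam("load_balancer", "probing_interval", 30)
>     modparam("load_balancer", "probing_method", "OPTIONS")
>
>     route {
>         if (is_method("INVITE") && !has_totag()) {
>             if (!load_balance("1", "pstn")) {
>                 send_reply("503", "Service Unavailable");
>                 exit;
>             }
>             t_relay();
>             exit;
>         }
>     }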
>
> Considering everything I've read on this users list and the way the
> load_balance module works (i.e. it just relays all sequential requests to
> the same server, whatever its status), I feel the current OpenSIPS
> implementation is not concerned with losing established, ongoing calls
> when one of the instances fails, beyond the usual loss of early-state
> calls. Is this the common approach in SIP systems, or is there any plan to
> improve this in OpenSIPS in the next releases?
>
> Thanks a lot again!
> Mariana
>
>> On Mon, Mar 12, 2012 at 9:08 AM, Bogdan-Andrei Iancu <bogdan at opensips.org> wrote:
>>
>>> Hi Mariana,
>>>
>>> Well, before considering how OpenSIPS can "see" the dialogs created by
>>> another OpenSIPS instance, you should consider how the sequential
>>> requests will get to the other instance.
>>> Let me explain:
>>> (1) the dialog is created through instance X with IP1, so the call is
>>> record-routed with IP1
>>> (2) instance X goes down, but you have another instance Y up and
>>> running with IP2
>>> (3) considering that sequential requests try to go to IP1, how do you
>>> make them be routed (at IP level) to IP2, where instance Y is running?
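>>>
>>> Assuming something (the edge, or a virtual-IP failover) gets those
>>> packets to IP2 at all, instance Y also has to accept requests whose
>>> Route hdr still carries IP1 - a minimal sketch, with illustrative
>>> addresses, using the core alias parameter:
>>>
>>>     # opensips.cfg on instance Y
>>>     listen=udp:10.0.0.2:5060   # IP2, Y's real address
>>>     alias=10.0.0.1:5060        # IP1, so loose_route() matches Route hdrs built by X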
>>>
>>> Regards,
>>> Bogdan
>>>
>>>
>>>
>>> On 03/09/2012 09:18 PM, Mariana Arduini wrote:
>>>
>>> Hello Bogdan,
>>>
>>> The problem is that sequential requests in that dialog would not be
>>> delivered to the end user, since the server would have gone down. BYE
>>> and re-INVITE messages wouldn't be relayed, affecting billing and
>>> features like call hold and call transfer. Also, we wouldn't be able to
>>> release media gateway resources.
>>>
>>> Despite all this, it sounds like you consider this not to be an
>>> appropriate solution. If so, what other direction would you suggest?
>>>
>>> Thanks for your help!
>>> Mariana.
>>>
>>> On Fri, Mar 9, 2012 at 2:12 PM, Bogdan-Andrei Iancu <
>>> bogdan at opensips.org> wrote:
>>>
>>>> Hello Mariana,
>>>>
>>>> Currently there is no way to share dialog state between 2 running
>>>> instances of OpenSIPS. Probably this will be available in the next
>>>> versions (1.9 maybe?).
>>>>
>>>> But my question is: how come you have a scenario where requests of the
>>>> same dialog end up on different servers? Maybe you should consider
>>>> fixing that part.
>>>>
>>>> Regards,
>>>> Bogdan
>>>>
>>>>
>>>> On 03/09/2012 02:36 PM, Mariana Arduini wrote:
>>>>
>>>> Hello,
>>>>
>>>> I've been searching a lot on how to have more than one OpenSIPS
>>>> instance handling messages from the same dialog (for example, the
>>>> initial request goes to server #1 and the sequential requests go to
>>>> server #2, in case server #1 goes down). I've tried pointing db_url to
>>>> the same database on both servers, setting the db_mode parameter to 1
>>>> (flush all dialog data into the DB immediately) and setting
>>>> db_update_period to a smaller value than the default, but it didn't
>>>> work, except when we stop server #1 smoothly. Even then, some header
>>>> translations we should do were not performed.
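>>>>
>>>> For reference, what I tried looks roughly like this on both servers
>>>> (the DB URL is just an example):
>>>>
>>>>     loadmodule "dialog.so"
>>>>     # same database on both servers
>>>>     modparam("dialog", "db_url", "mysql://opensips:opensipsrw@db-host/opensips")
>>>>     # flush all dialog data into the DB immediately
>>>>     modparam("dialog", "db_mode", 1)
>>>>     # seconds, lower than the default
>>>>     modparam("dialog", "db_update_period", 10)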
>>>>
>>>> I'm supposed to find out how a distributed key-value store like Redis
>>>> can be useful for that. I've seen the example in the Key-Value
>>>> Interface Tutorial, but I have no idea how to transfer dialog values,
>>>> flags and other dialog state information from the database to a
>>>> key-value store. Would it be something like having a whole new dialog
>>>> module that uses a distributed cache_db instead? Sounds hard to
>>>> accomplish...
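>>>>
>>>> From the tutorial, the key-value side by itself seems to be only this
>>>> (the group name, address and keys are illustrative), which is why I
>>>> don't see how to get the whole dialog state into it:
>>>>
>>>>     loadmodule "cachedb_redis.so"
>>>>     modparam("cachedb_redis", "cachedb_url", "redis:kv://redis-host:6379/")
>>>>
>>>>     # in the routing script, any instance pointing at the same Redis
>>>>     # server sees the same keys:
>>>>     cache_store("redis:kv", "mykey_$ci", "some_value");
>>>>     cache_fetch("redis:kv", "mykey_$ci", $avp(val));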
>>>>
>>>> Is this
>>>> http://lists.opensips.org/pipermail/users/2012-February/020657.html
>>>> supposed to do what I need? Is there any example of its use anywhere?
>>>> What I got from it is just profile distribution; I don't see how it
>>>> could allow all dialog state to be shared...
>>>>
>>>> Thanks a lot for any pointer or help.
>>>> Mariana.
>>>>
>>>>
> --
> Bogdan-Andrei Iancu
> OpenSIPS Founder and Developer
> http://www.opensips-solutions.com
>
>