[OpenSIPS-Users] [RFC] Distributed User Location
Rudy
rudy at dynamicpacket.com
Tue Apr 16 15:34:29 CEST 2013
Bogdan,
As David described, we use a similar memcached setup with script glue, and I
think these setups are becoming fairly common when working at larger scales.
What would be great is for the distributed location / registrar modules to do
most, if not all, of this internally. Depending on what flags you pass to
save(), it would add all the Path headers and whatnot for the distributed
location to work properly. In complement, when calling lookup(), it should
handle both locally registered users and distributed users registered on
another proxy, by directing the request to the appropriate OpenSIPS instance.
A working example of how these modules would function from the script (please
note this is pseudo-code):
<edge proxy 1>
save("user1@domain")   => registration gets pushed into the distributed DB;
                          a Path header with the IP of edge proxy 1 is added
lookup("user1@domain") => lookup done in the local cache (if not found, check
                          the distributed DB); user found, request sent to the
                          user1@domain AOR (the Path is local)

<edge proxy N>
lookup("user1@domain") => lookup done in the distributed DB; request sent to
                          edge proxy 1 via Path

<core proxy Y>
lookup("user1@domain") => lookup done in the distributed DB; request sent to
                          edge proxy 1 via Path
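The pseudo-code above can be mocked up as a small runnable sketch. This is only
an illustration of the intended save()/lookup() semantics: a plain dict stands
in for the distributed DB, and every name here is hypothetical, not an actual
OpenSIPS API.

```python
# Mock-up of the proposed distributed save()/lookup() behavior.
# A plain dict stands in for the distributed DB; names are hypothetical.

dist_db = {}      # shared AOR -> {contact, path} mapping (the "dist db")
local_cache = {}  # bindings held by this node only
LOCAL_NODE = "edge-proxy-1"

def save(aor, contact):
    """Store the binding locally and publish it to the distributed DB,
    recording this node as the Path."""
    local_cache[aor] = contact
    dist_db[aor] = {"contact": contact, "path": LOCAL_NODE}

def lookup(aor, node=LOCAL_NODE):
    """Prefer the local cache; otherwise consult the distributed DB and
    route via the recorded Path (the node that owns the registration)."""
    if node == LOCAL_NODE and aor in local_cache:
        return ("local", local_cache[aor])
    entry = dist_db.get(aor)
    if entry is None:
        return ("not-found", None)
    return ("via-path:" + entry["path"], entry["contact"])

save("user1@domain", "sip:user1@10.0.0.5:5060")
print(lookup("user1@domain"))                       # edge proxy 1: local hit
print(lookup("user1@domain", node="edge-proxy-N"))  # other node: via Path
```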
Thanks in advance,
--Rudy
Dynamic Packet
Toll-Free: 888.929.VOIP ( 8647 )
On Mon, Apr 15, 2013 at 10:28 PM, David Sanders <dsanders at pinger.com> wrote:
> I concur with Adrian: I don't think geo-distribution is as important for SIP
> signaling as it is for the media flow. The rest of this e-mail does not
> address geo-distribution.
>
> Regarding your last e-mail, Bogdan: we do something similar to the system
> you describe, but in our case the 'distributed level' is a fault-tolerant
> memcache setup. We have multiple OpenSIPS boxes which store registrations
> locally, and a key exists in memcache which specifies which box has the AORs
> for a given user.
>
> With multiple AORs for a user, we have some routing script logic which uses
> forced path headers and forwards any new registrations for a user to the
> OpenSIPS which has their other AORs. This addresses the NAT issue which Vlad
> mentions in the first e-mail in this thread, as all clients communicate with
> the IP that they register at. Some extra scripting logic is used to start
> keepalives on that OpenSIPS and also to allow INVITEs to be relayed through
> the OpenSIPS that a specific client registered at (say the first AOR exists
> on osipsA and a client registers to osipsB, the AORs would live on osipsA
> but all traffic for the second client is routed through osipsB).
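The routing decision described above could be sketched roughly as follows. A
dict stands in for the shared memcache, and the box names and Path format are
illustrative assumptions, not the actual setup:

```python
# Sketch of the AOR-ownership check: a shared key-value store maps each
# user to the box holding its AORs; a REGISTER landing on another box is
# forwarded there with a forced Path header so traffic for this client
# keeps flowing through the box it actually registered at.

aor_owner = {}  # shared map: user -> OpenSIPS box holding that user's AORs
THIS_BOX = "osipsB"

def handle_register(user, client_contact):
    # first registration for this user claims ownership for this box
    owner = aor_owner.setdefault(user, THIS_BOX)
    if owner == THIS_BOX:
        # store locally; keepalives also run from this box
        return ("store-local", client_contact)
    # forward to the owning box, forcing a Path header with our address
    return ("forward", owner, "Path: <sip:%s;lr>" % THIS_BOX)

print(handle_register("alice", "sip:alice@198.51.100.7"))  # first AOR: local
aor_owner["bob"] = "osipsA"                                # bob lives on osipsA
print(handle_register("bob", "sip:bob@203.0.113.9"))       # forwarded with Path
```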
>
> Muhammad mentions using 302s to redirect clients to the correct node.
> However, in my experience not all clients correctly process 302s. One of the
> driving factors for our current setup is that it is invisible to the client,
> as our main client UA did not properly handle 302s.
>
> I like the idea of storing the AORs in some distributed db, since it allows
> for OpenSIPS boxes to come and go without losing AORs. I can imagine a use
> case where someone is running OpenSIPS on Amazon EC2 and adds extra
> instances at peak load to handle more CPS. With AORs stored on a distributed
> db the OpenSIPS instances can be spun up and down at will. There is still
> the NAT issue, but this could be eliminated by relaying all SIP traffic
> through a client facing OpenSIPS instance that handles keepalives. At that
> point the OpenSIPS instances behind this box would be decoupled from the
> clients and their IPs wouldn't matter, as the clients only see the first
> box. This would also work with the relay-script mechanism we currently use
> due to path headers.
>
> - David
>
>
> On 4/9/13 1:51 AM, Adrian Georgescu wrote:
>>
>> I have been running servers distributed across different countries, as part
>> of the same service, for years, and nobody complained about the latency of
>> signaling, but they did complain about media. This idea of geo-distribution
>> is more about media-path optimization and automatic recovery in case of
>> connectivity failures in one location, rather than pointing a user from
>> country X to server Y. Any distribution model that is not driven by a
>> formula, the way Chord is, is not deterministic. This means that the
>> mapping must be manually provisioned and changed as nodes come and go. You
>> cannot deterministically geo-locate to the same server unless you hardwire
>> this as a setting or in a database; it is not self-learning, and in case of
>> a server failure it must be changed manually.
>> Signaling-wise, geo-location is less of a problem. If it takes a few
>> seconds to set up a call, that is not critical; media relays, however, need
>> to use the shortest path, so an algorithm to allocate the relay closest to
>> the calling party is much more useful. However, without an async core,
>> reserving a relay that is 1 second away RTT-wise is a killer for the
>> overall CPS. Whatever you try to do without an async core, any distribution
>> of resources hits the performance problem of the child process being
>> blocked while processing a request, and the farther away the database or
>> relay, the worse it gets. I think that addressing the async issue will
>> automagically create a multitude of solutions for better distribution and
>> load balancing.
>>
>> Adrian
>>
>> On Apr 9, 2013, at 10:17 AM, Bogdan-Andrei Iancu wrote:
>>
>>> Hi,
>>>
>>> Putting together what you said and what Adrian and Muhammad said :
>>>
>>> Actually we may have a distributed USRLOC for 2 purposes: geo
>>> distribution and load distribution - how they are approached is a bit
>>> different.
>>>
>>> But first let's look at the common part (for the 2 cases): IMHO, in
>>> both cases we should have the SIP part (OpenSIPS) storing the actual full
>>> registration in a certain location (via USRLOC), and an upper,
>>> distributed layer to keep a mapping between users (AORs) and the
>>> location(s) they are registered with. So:
>>>     - local level - OpenSIPS doing classic registrations (a node)
>>>     - distributed level - some other tool to keep (in a distributed
>>> fashion) the mapping of AORs to the nodes
>>>
>>> Now, here comes the difference.
>>>
>>> If you do geo distribution, you want to keep the registration as close as
>>> possible to the user. So the registration will be kept on the OpenSIPS
>>> node which was contacted by the user. In this case Chord does not work (at
>>> the distributed level), as it has its own algorithm to distribute data
>>> across nodes; in our case we want to control the distribution and to say
>>> what data/registration stays on what node/OpenSIPS.
>>>
>>> If you do load distribution, you want to balance all received
>>> registrations across all existing nodes/OpenSIPS instances - in this case
>>> a Chord-like approach will help (as it will do the load distribution for
>>> you).
>>>
>>>
>>> As I see the solution: have the 2 layers (local and distributed) built
>>> into OpenSIPS, and additionally be able to use different algorithms to do
>>> the mapping between registrations and OpenSIPS nodes.
>>>
>>>
>>> Is the above a good approach?
>>>
>>> Regards,
>>>
>>> Bogdan-Andrei Iancu
>>> OpenSIPS Founder and Developer
>>> http://www.opensips-solutions.com
>>>
>>>
>>> On 04/05/2013 04:45 PM, Rudy wrote:
>>>>
>>>> Everyone,
>>>>
>>>> Before we get too off-topic, I think the goal should be to design
>>>> something truly distributed. This would be more like what Adrian
>>>> suggested and less like a super node / slave node scenario. The nodes
>>>> should be able to coordinate amongst themselves, again, similar to the
>>>> docs Adrian shared.
>>>>
>>>> One thing we will need is a consistent hashing algorithm. Adrian
>>>> suggested Chord; another that works well for us in our implementations is
>>>> Ketama. Either way, it needs to provide consistent hashing, so that
>>>> additions/removals of nodes do not change the location of the home proxy
>>>> of each registered user.
>>>>
>>>> http://en.wikipedia.org/wiki/Consistent_hashing
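A minimal consistent-hash ring of the kind mentioned above (Chord/Ketama
style) can be sketched in Python. The virtual-node count and the hash choice
(MD5) are illustrative assumptions, not what Ketama mandates:

```python
# Minimal consistent-hash ring: each node is hashed to many points
# ("virtual nodes") on a ring, and an AOR maps to the first node point
# at or after its own hash. Adding or removing a node only remaps the
# keys falling into that node's arcs.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=100):
        self.ring = []  # sorted list of (point, node)
        for node in nodes:
            self.add(node, vnodes)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node, vnodes=100):
        for i in range(vnodes):
            bisect.insort(self.ring, (self._hash("%s#%d" % (node, i)), node))

    def node_for(self, aor):
        points = [p for p, _ in self.ring]
        idx = bisect.bisect(points, self._hash(aor)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["sip1.mydomain.com", "sip2.mydomain.com", "sip3.mydomain.com"])
home = ring.node_for("user1@mydomain.com")  # deterministic home proxy
ring.add("sip4.mydomain.com")               # most users keep their home proxy
```

Because each node owns many small arcs of the ring, adding a fourth node only
remaps the users whose hashes fall into its arcs; the home proxy of the
majority of registered users is unchanged.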
>>>>
>>>> Thanks in advance,
>>>> --Rudy
>>>> Dynamic Packet
>>>> Toll-Free: 888.929.VOIP ( 8647 )
>>>>
>>>>
>>>> On Fri, Apr 5, 2013 at 9:39 AM, Muhammad Shahzad<shaheryarkh at gmail.com>
>>>> wrote:
>>>>>
>>>>> Well, I am not very familiar with the internals of OpenSIPS, i.e. its
>>>>> core and modules and how they interact with each other. But as an
>>>>> abstract idea, I suggest that both the Base Node and the Super Node
>>>>> should be OpenSIPS modules. No change to the standard registrar or
>>>>> usrloc modules is actually needed.
>>>>>
>>>>> In the Super Node module, we will have:
>>>>>
>>>>> 1. one DB table to store base node addresses, for monitoring the Event.
>>>>> 2. one DB table to store the data received from the Event; let's call it
>>>>> the "Event Table".
>>>>> 3. one process to manage the "Event Table", pretty much the same way the
>>>>> location table is managed by the usrloc module.
>>>>> 4. some scripting functions for opensips.cfg, to look up in the "Event
>>>>> Table" and do a SIP redirect.
>>>>> 5. some MI functions to manually manage the base node table and the
>>>>> event table.
>>>>>
>>>>> In the Base Node module, we will have:
>>>>>
>>>>> 1. module parameters to define the address of the Super Node and the
>>>>> event advertise socket (the Super Node will connect to this socket to
>>>>> receive events).
>>>>> 2. a process to monitor the usrloc table, such that as soon as a new
>>>>> user registers, it advertises this to the event socket.
>>>>> 3. some scripting functions for opensips.cfg: to send the call to the
>>>>> Super Node if the lookup function (from the registrar module) fails, and
>>>>> a reply route to handle the SIP redirect and send the call to the
>>>>> destination base node returned by the Super Node.
>>>>> 4. some MI functions etc.
>>>>>
>>>>> Thank you.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Apr 5, 2013 at 1:37 PM, Bogdan-Andrei
>>>>> Iancu<bogdan at opensips.org>
>>>>> wrote:
>>>>>>
>>>>>> Hello Muhammad,
>>>>>>
>>>>>> Your approach is the correct one (from a SIP perspective), IMHO. But
>>>>>> there are questions on the implementation side too - e.g. is the "Super
>>>>>> Node" just storage, or should it have SIP capabilities? How much of
>>>>>> this behavior should be hardcoded in the registrar + usrloc modules?
>>>>>>
>>>>>> Best regards,
>>>>>>
>>>>>> Bogdan-Andrei Iancu
>>>>>> OpenSIPS Founder and Developer
>>>>>> http://www.opensips-solutions.com
>>>>>>
>>>>>>
>>>>>> On 04/05/2013 04:57 AM, Muhammad Shahzad wrote:
>>>>>>
>>>>>> Well, at 5 o'clock in the morning, while thinking on this topic, the
>>>>>> only thing ringing in my mind is a mechanism similar to an IP-to-IP
>>>>>> gateway. Here is the main concept.
>>>>>>
>>>>>> 1. We have a number of SIP servers running, say sip1.mydomain.com,
>>>>>> sip2.mydomain.com ... sipN.mydomain.com, each serving the domain
>>>>>> mydomain.com. A SIP client A can select any one of these servers
>>>>>> through a DNS look-up (or whatever way is possible) and register to
>>>>>> that server. Let's call these servers Base Nodes.
>>>>>>
>>>>>> 2. Upon successful registration of client A to server
>>>>>> sip1.mydomain.com, this Registrar Node fires an Event, to which a
>>>>>> back-end SIP server (let's call it the Super Node) can subscribe. This
>>>>>> event will only contain the following things:
>>>>>>
>>>>>> a) the user part of all Contact URIs of client A, with their expiry;
>>>>>> b) the Registrar Node information, e.g. its IP address + port;
>>>>>> c) the SIP domain of client A (in case of a multi-domain setup).
>>>>>>
>>>>>> 3. The Super Node stores this information in some DB back-end
>>>>>> (memcache, redis, mysql, etc.). This is a sort of back-to-back register
>>>>>> process, but without SIP or authentication, since that has already been
>>>>>> handled on the Base Node anyway. The Super Node only needs to know
>>>>>> which user is registered on which Base Node, e.g. user 1001 is
>>>>>> registered on node sip1.mydomain.com, user 1203 is registered on
>>>>>> sip6.mydomain.com, and so on.
>>>>>>
>>>>>> 4. When a SIP client B tries to send an INVITE, MESSAGE or SUBSCRIBE to
>>>>>> SIP client A, the SIP request will arrive at the Base Node that client
>>>>>> B is currently registered with, say sip2.mydomain.com. This node will
>>>>>> first do a local look-up for the location of client A. Upon failure, it
>>>>>> will forward the request to the Super Node, which will do a look-up in
>>>>>> the Event database, find that client A is registered on node
>>>>>> sip1.mydomain.com, and send a SIP redirect response (302) to the
>>>>>> requesting Base Node. Now the request's source node knows the address
>>>>>> of the request's destination node, where it will send the request next,
>>>>>> and the two, while acting as independent SIP servers, establish the SIP
>>>>>> session between caller and callee. This should work regardless of
>>>>>> whether both nodes serve the same or different SIP domains.
>>>>>>
>>>>>> 5. The Super Node will also give us a global presence view of all
>>>>>> users currently registered to all Base Nodes, which may be useful for
>>>>>> management, monitoring, etc.
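Steps 2-4 above can be condensed into a toy sketch. The function and table
names below are invented for illustration and do not correspond to any
existing OpenSIPS module:

```python
# Toy model of the Base Node / Super Node flow: base nodes advertise
# registrations to the super node, which answers failed local lookups
# with the registrar's address (the payload of the 302 redirect).

event_table = {}  # super node's view: user -> base node holding the AOR

def advertise(user, base_node):
    """Step 2: a base node fires a registration event to the super node."""
    event_table[user] = base_node

def route_request(user, local_location):
    """Step 4: try the local location table first; on failure, consult
    the super node and answer 302-style with the owning base node."""
    if user in local_location:
        return ("deliver-local", local_location[user])
    target = event_table.get(user)
    if target is None:
        return ("404", None)
    return ("302-redirect-to", target)

advertise("1001", "sip1.mydomain.com")
# an INVITE for 1001 arrives on sip2, which has no local binding
print(route_request("1001", {}))  # redirected to sip1.mydomain.com
```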
>>>>>>
>>>>>> Pros:
>>>>>> 1. Completely independent of network topology and SIP.
>>>>>> 2. Will work seamlessly for multiple and federated domains.
>>>>>> 3. Scalable in every direction.
>>>>>> 4. Minimal overhead for session establishment. Once the super node
>>>>>> returns the destination base node address in a SIP redirect response,
>>>>>> the session will be established directly between the source and
>>>>>> destination base nodes. Further optimizations are possible, e.g. a base
>>>>>> node can cache the destination base node returned by the super node for
>>>>>> any particular user, avoiding queries to the super node for recurring
>>>>>> SIP sessions.
>>>>>>
>>>>>> Cons:
>>>>>> 1. Well, the key problem I can foresee is of course the Event database
>>>>>> size and speed, as it will hold information on every user registered to
>>>>>> every Base Node. I suggest an in-memory cache DB such as Redis would be
>>>>>> ideal for this storage.
>>>>>> 2. The bandwidth consumed by Event transport. We can apply compression
>>>>>> and batch events into queues as an optimization.
>>>>>>
>>>>>> Comments and suggestions are highly welcome.
>>>>>>
>>>>>> Thank you.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Apr 4, 2013 at 2:05 PM, Vlad Paiu<vladpaiu at opensips.org>
>>>>>> wrote:
>>>>>>>
>>>>>>> Hello all,
>>>>>>>
>>>>>>> We would like to get suggestions and help on the matter of
>>>>>>> distributing the user location information.
>>>>>>> Extending the User Location with built-in distributed support is not
>>>>>>> straightforward; it is not about simply sharing data, as it is really
>>>>>>> SIP-dependent and network-limited.
>>>>>>>
>>>>>>> While now, by using the OpenSIPS trunk, it is possible to just share
>>>>>>> the actual usrloc info (by using the db_cachedb module and storing the
>>>>>>> information in a MongoDB cluster), you can encounter real-life
>>>>>>> scenarios where just sharing the info is not enough, like:
>>>>>>>      - NAT-ed clients, where only the initial server that received the
>>>>>>> REGISTER has the pin-hole open, and is thus the only server that can
>>>>>>> relay traffic back to the respective client
>>>>>>>      - the user has a SIP client that only accepts traffic from the
>>>>>>> server IP it is currently registered against, and would thus reject
>>>>>>> direct traffic from other IPs (due to security reasons)
>>>>>>>
>>>>>>> We would like to implement a truly general solution for this issue,
>>>>>>> and would appreciate your feedback on it. We would also appreciate it
>>>>>>> if you could share the needs you would have from such a distributed
>>>>>>> user location feature, and the real-life scenarios in which you would
>>>>>>> use such a feature.
>>>>>>>
>>>>>>>
>>>>>>> Best Regards,
>>>>>>>
>>>>>>> --
>>>>>>> Vlad Paiu
>>>>>>> OpenSIPS Developer
>>>>>>> http://www.opensips-solutions.com
>>>>>>>
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Users mailing list
>>>>>>> Users at lists.opensips.org
>>>>>>> http://lists.opensips.org/cgi-bin/mailman/listinfo/users
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Mit freundlichen Grüßen
>>>>>> Muhammad Shahzad
>>>>>> -----------------------------------
>>>>>> CISCO Rich Media Communication Specialist (CRMCS)
>>>>>> CISCO Certified Network Associate (CCNA)
>>>>>> Cell: +49 176 99 83 10 85
>>>>>> MSN: shari_786pk at hotmail.com
>>>>>> Email: shaheryarkh at googlemail.com
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
>
>
>