[OpenSIPS-Devel] Location table structure - account_contact_idx

Donat Zenichev donat.zenichev at gmail.com
Thu Oct 17 03:26:19 EDT 2019


Good morning.
I'm terribly sorry, but I have one more question for the OpenSIPS community.

I've started running into trouble when working with MySQL in write-back
mode, and I keep getting this error:
"CRITICAL:db_mysql:wrapper_single_mysql_stmt_execute: driver error (1062):
Duplicate entry 'XXXXXXXXXXX' for key 'PRIMARY' "

This only happens in the following case:
1. Subscriber A registers at instance 1;
2. Subscriber A suddenly goes offline without sending a de-registration;
2.1. At this point we still have a location record for subscriber A in the
(shared) MySQL database;
3. Subscriber A comes back online and sends a REGISTER to instance 2;
3.1. Instance 2 gets error 1062:
                "CRITICAL:db_mysql:wrapper_single_mysql_stmt_execute:
driver error (1062): Duplicate entry 'XXXXXXXXXXX' for key 'PRIMARY' "
3.2. The old location record (saved by instance 1) is still in the location
table;
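
In SQL terms, this is roughly what seems to happen (a sketch only; the
column names here are simplified/hypothetical, just to illustrate the 1062
error):

```sql
-- Instance 2's write-back flush effectively attempts an INSERT for a
-- contact whose row (written earlier by instance 1) is still present:
INSERT INTO location (username, domain, contact, expires)
VALUES ('alice', 'example.com', 'sip:alice@192.0.2.10:5060',
        NOW() + INTERVAL 3600 SECOND);
-- MySQL rejects it:
-- ERROR 1062 (23000): Duplicate entry '...' for key 'PRIMARY'
```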

The logic for registrations (as well as for re-registrations) is as
follows:
- get a subscriber authorized
- delete old location record with: remove("location", "$tu");
- save a new location with:
    if (!save("location")) { sl_reply_error(); }
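
For reference, the relevant part of the routing script looks roughly like
this (a simplified sketch; the authorization checks are elided):

```
if (is_method("REGISTER")) {
    # ... subscriber authorization happens here ...

    # drop whatever contact bindings we had for this AOR
    remove("location", "$tu");

    # and store the fresh binding from this REGISTER
    if (!save("location")) {
        sl_reply_error();
    }
}
```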

For some reason remove() doesn't work in write-back mode, while in sql-only
mode it's fine.
Could it be that the save() action was flushed to MySQL earlier than the
remove() action? (when I was re-registering on instance 2)
If that is true, it makes sense why I get duplicate entries.

Do I understand correctly that I shouldn't use a shared DB for all OpenSIPS
instances working in the cluster?
And does this mean that I need to set "skip_replicated_db_ops = 1" when
using "write-back" mode with a shared MySQL DB?
I came to this conclusion after reading this bug report:
https://github.com/OpenSIPS/opensips/issues/1365
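
If that's the right direction, I assume the usrloc setup would look
something like this (a sketch only; the db_url and values are my guesses
from the docs and that issue, not a tested configuration):

```
loadmodule "usrloc.so"
modparam("usrloc", "db_url", "mysql://opensips:password@db-host/opensips")
modparam("usrloc", "db_mode", 2)                  # write-back
modparam("usrloc", "skip_replicated_db_ops", 1)   # don't mirror replicated ops to the shared DB
```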

Many thanks in advance to the OpenSIPS community for any hints and advice.

On Thu, Oct 10, 2019 at 12:00 PM Donat Zenichev <donat.zenichev at gmail.com>
wrote:

> Now I understand why it happens.
> I just want to leave my deduction here for others who will search for the
> same information.
>
> First, we need to understand the correlation between OpenSIPS subscriber
> activity and the MySQL wait_timeout parameter.
>
> wait_timeout is the timeout responsible for disconnecting idle (unused)
> connections once the corresponding timer fires.
> So, say we have it set to 300 seconds: if a certain MySQL connection
> (worker) is not used for 300 seconds, it will be dropped.
> The default value is 28800 seconds, that is, 8 hours.
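>
> You can check and adjust it from the mysql console like this (300 is just
> the example value from above, not a recommendation):
>
> ```sql
> -- current value(s)
> SHOW VARIABLES LIKE 'wait_timeout';
> -- applies to connections opened afterwards; existing sessions keep theirs
> SET GLOBAL wait_timeout = 300;
> ```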
>
> As long as you have a constant flow of subscribers working with your
> OpenSIPS system, you have a bunch of MySQL connections that regularly send
> data and are never idle (from the MySQL server's point of view), so the
> wait timer for those workers (on the MySQL server side) keeps being reset
> and never reaches its end.
>
> When a certain MySQL connection hits its wait_timeout, it gets dropped by
> the MySQL server. (And I guess OpenSIPS will re-connect this worker
> again?)
>
> To summarize, it is completely normal to see re-connections appearing in
> your OpenSIPS log.
> If you don't want to see them, you can tune the MySQL timers so that no
> connection from the OpenSIPS systems is ever dropped, but this has its
> drawbacks as well, such as an overfilled process list on the MySQL server
> side, which forces you to increase max_connections. And that is not such
> a good idea.
>
> There are also other MySQL settings and timers that can be useful for
> solving certain problems:
> max_allowed_packet - the biggest packet size the MySQL server can receive
> (consider this when you have very large transactions).
> net_read_timeout and net_write_timeout - for cases where you have network
> trouble (delays, packet loss, jitter, etc.).
> wait_timeout and interactive_timeout - both are responsible for
> disconnecting idle sessions (the difference between the two is the
> connection type; see the MySQL manual).
> connect_timeout - the number of seconds the mysqld server waits for a
> connect packet before responding with "Bad handshake".
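>
> Put together in my.cnf form (the values here are purely illustrative,
> mostly the server defaults, and not recommendations):
>
> ```
> [mysqld]
> wait_timeout        = 28800   # drop idle non-interactive connections after 8h
> interactive_timeout = 28800
> net_read_timeout    = 30
> net_write_timeout   = 60
> connect_timeout     = 10
> max_allowed_packet  = 64M
> ```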
>
> On Wed, Oct 9, 2019 at 1:43 PM Liviu Chircu <liviu at opensips.org> wrote:
>
>> On 09.10.2019 12:07, Donat Zenichev wrote:
>> > Yes, 'wait_timeout' has its default value of 8 hours. I would try to
>> > decrease it, but I guess this can lead to more re-connections.
>> > Am I right?
>>
>> No, it's quite the opposite.  The longer MySQL tolerates inactive
>> connections, the less often it will
>> have to destroy them.  Try setting "wait_timeout" to 10 seconds from the
>> console and play with it,
>> you will see what I mean.
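>>
>> For example (note that SET GLOBAL only affects connections opened
>> afterwards, so reconnect the OpenSIPS side, or use SET SESSION on a test
>> connection; revert the value when done):
>>
>> ```sql
>> SET GLOBAL wait_timeout = 10;
>> -- then watch the idle connections disappear:
>> SHOW PROCESSLIST;
>> ```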
>>
>> On 09.10.2019 12:07, Donat Zenichev wrote:
>> > Could you please describe a bit,
>> > why this "the more MySQL ops you will make them do" correlates to this
>> > "the less reconnects they will have to make" ?
>> > Does that mean, that the more job usrloc timer jobs do, the less mysql
>> > re-connections I have?
>>
>> The MySQL "wait_timeout" only tracks inactive connections (no data sent
>> by the client).
>> As soon as the client sends data on the conn, MySQL will reset its timer
>> to "wait_timeout" for that conn.
>>
>> If your calls are in the air for 4-5 seconds on a 100 Trying, I suggest
>> you switch to full DBG logs
>> and try to understand which MySQL op you're dealing with, and maybe
>> change the settings for that module.
>> It cannot be usrloc, since you just switched it to being fully async,
>> thanks to "write-back" SQL.
>>
>> Regards,
>>
>> Liviu Chircu
>> OpenSIPS Developer
>> http://www.opensips-solutions.com
>>
>>
>> _______________________________________________
>> Devel mailing list
>> Devel at lists.opensips.org
>> http://lists.opensips.org/cgi-bin/mailman/listinfo/devel
>>
>
>
> --
>
> Best regards,
> Donat Zenichev
>
>

-- 

Best regards,
Donat Zenichev