[OpenSIPS-Users] OpenSIPS CPU Overload, Blocking detected and Timer Warnings
Bogdan-Andrei Iancu
bogdan at opensips.org
Thu Dec 12 14:17:17 UTC 2024
Hi Devang,
You get such CRITICAL errors when your OpenSIPS processes are blocked /
stuck. The way to debug this is to take a "trap" via the CLI - the
resulting backtraces should be correlated with the processes reported as
blocked. Hopefully the backtraces will provide some hints about the
cause of the blockage.
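For example, from inside the container (this assumes opensips-cli and
gdb are available in the pod image; the pod name is just a placeholder):

    kubectl exec <opensips-pod> -- opensips-cli -x trap

This collects a gdb backtrace from every OpenSIPS process into a single
report; look at the backtraces of the processes reported as blocked in
your logs (id 7 / pid 94 and id 54 / pid 141) and see where they are
stuck.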
Regards,
Bogdan-Andrei Iancu
OpenSIPS Founder and Developer
https://www.opensips-solutions.com
https://www.siphub.com
On 10.12.2024 16:43, Devang Dhandhalya wrote:
> Hello Everyone
>
> I am using OpenSIPS (3.4.9), containerized on Kubernetes, in an
> Active-Active HA setup.
>
> I am facing an issue where the CPU usage of OpenSIPS gradually
> increases, and eventually, I am unable to use opensips-cli to check
> the process list or retrieve statistics.
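> (For reference, the checks we normally run are the standard MI calls,
> e.g.:
>
>     opensips-cli -x mi ps
>     opensips-cli -x mi get_statistics load: shmem:
>
> these stop responding once the CPU usage has climbed.)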
> Below are the errors I am encountering:
>
> WARNING:core:timer_ticker: timer task <nh-timer> already scheduled 117890260 ms ago (now 181015110 ms), delaying execution
> WARNING:core:timer_ticker: timer task <ul-timer> already scheduled 117893270 ms ago (now 181015120 ms), delaying execution
> CRITICAL:core:__ipc_send_job: blocking detected while sending job type 0[RPC] on 39 to proc id 7/94 [SIP receiver udp:172.50.59.6:5060]
> ERROR:core:signal_pkg_status: failed to trigger pkg stats for process 7
> CRITICAL:core:__ipc_send_job: blocking detected while sending job type 0[RPC] on 39 to proc id 7/94 [SIP receiver udp:172.50.59.6:5060]
> ERROR:core:signal_pkg_status: failed to trigger pkg stats for process 7
> CRITICAL:core:__ipc_send_job: blocking detected while sending job type 0[RPC] on 39 to proc id 7/94 [SIP receiver udp:172.50.59.6:5060]
> ERROR:core:signal_pkg_status: failed to trigger pkg stats for process 7
> CRITICAL:core:__ipc_send_job: blocking detected while sending job type 0[RPC] on 39 to proc id 7/94 [SIP receiver udp:172.50.59.6:5060]
> ERROR:core:signal_pkg_status: failed to trigger pkg stats for process 7
> CRITICAL:core:__ipc_send_job: blocking detected while sending job type 0[RPC] on 39 to proc id 7/94 [SIP receiver udp:172.50.59.6:5060]
> ERROR:core:signal_pkg_status: failed to trigger pkg stats for process 7
> ERROR:core:signal_pkg_status: failed to trigger pkg stats for process 7
> CRITICAL:core:__ipc_send_job: blocking detected while sending job type 0[RPC] on 227 to proc id 54/141 [TCP receiver]
> ERROR:core:signal_pkg_status: failed to trigger pkg stats for process 54
> CRITICAL:core:__ipc_send_job: blocking detected while sending job type 0[RPC] on 227 to proc id 54/141 [TCP receiver]
> ERROR:core:signal_pkg_status: failed to trigger pkg stats for process 54
> CRITICAL:core:__ipc_send_job: blocking detected while sending job type 0[RPC] on 227 to proc id 54/141 [TCP receiver]
> ERROR:core:signal_pkg_status: failed to trigger pkg stats for process 54
> ERROR:core:handle_new_connect: maximum number of connections exceeded: 2048/2048
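> (If it helps: I believe the 2048 limit in the last error is just the
> default of the tcp_max_connections core parameter; raising it in
> opensips.cfg would look like the line below, although I assume the real
> issue is that connections are not being released.)
>
>     # core parameter; the default is 2048
>     tcp_max_connections = 4096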
>
> In the OpenSIPS configuration we are handling the TLS and WSS protocols.
> We use the nathelper module for NAT handling, and usrloc details are
> stored in MongoDB using the *federation-cachedb-cluster* working mode,
> with pinging_mode set to ownership.
> We also use auto-scaling profiles, define a tag on each socket core
> parameter, and pass that tag to the save() function as the ownership tag.
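> For reference, the usrloc side of the config looks roughly like this
> (the MongoDB URL and the cluster id below are placeholders, not our
> real values):
>
>     loadmodule "usrloc.so"
>     # federation over a shared cacheDB, pinging only contacts we own
>     modparam("usrloc", "working_mode_preset", "federation-cachedb-cluster")
>     modparam("usrloc", "pinging_mode", "ownership")
>     # placeholders for the real MongoDB URL and clusterer cluster id
>     modparam("usrloc", "cachedb_url", "mongodb://mongo-host:27017/opensips.userlocation")
>     modparam("usrloc", "location_cluster", 1)
>     # each socket definition also carries a tag, which we pass to save()
>     # as the ownership tag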
>
> *Important*: I noticed that after any one of the OpenSIPS pods is
> restarted, we sometimes start seeing the nh-timer and ul-timer warnings
> above, and after that the CPU usage starts increasing.
> After a restart the pod's private IP changes, so we perform the required
> action in the Postgres DB (remove the old record and add the new one)
> and then run a clusterer reload on that OpenSIPS pod.
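> The reload itself is just the standard clusterer MI command on the
> restarted pod, e.g.:
>
>     # after updating the clusterer table in Postgres with the new pod IP
>     opensips-cli -x mi clusterer_reload
>     # quick check that all nodes are visible again
>     opensips-cli -x mi clusterer_list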
>
> So, after performing the above actions, why are we still seeing the
> nh-timer and ul-timer warnings? I suspect this is what starts consuming
> resources, eventually leaving OpenSIPS unable to process calls or
> execute opensips-cli commands effectively.
>
> Any suggestions would be appreciated. Kindly let me know if you require
> any further information about the OpenSIPS configuration.
>
> Regards,
> *Devang Dhandhalya*
>
> _______________________________________________
> Users mailing list
> Users at lists.opensips.org
> http://lists.opensips.org/cgi-bin/mailman/listinfo/users