<div dir="ltr">By the way, <div><br></div><div><span style="font-family:arial,sans-serif;font-size:13px">pkmem:2-real_used_size = 50858056</span><br></div><div><span style="font-family:arial,sans-serif;font-size:13px"><br>
</span></div><div><span style="font-family:arial,sans-serif;font-size:13px">Is the MI FIFO process. We've noticed a strange issue with a lot of broken pipes causing opensips_receiver_XXXX pipes to be left in the /tmp directory on the box, to the order of 100's after just an hour or so. We do have crons that are checking the MI about once a minute for statistics, but this is new behavior that we didn't experience (with the abandoned pipes) in 1.7.</span></div>
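For reference, our cron check does essentially the following (a minimal Python sketch of one MI FIFO round trip with forced cleanup of the reply pipe; /tmp/opensips_fifo is the mi_fifo default, while the 5-second timeout and helper names are just our choices, not anything from OpenSIPS):

#!/usr/bin/env python3
# Sketch of an MI FIFO "get_statistics" poll that always removes the
# per-call reply pipe, even on timeout or error. Names and the timeout
# value are illustrative, not OpenSIPS APIs.
import os
import signal

OPENSIPS_FIFO = "/tmp/opensips_fifo"
REPLY_NAME = "opensips_receiver_%d" % os.getpid()
REPLY_PATH = "/tmp/" + REPLY_NAME

class MITimeout(Exception):
    pass

def _on_alarm(signum, frame):
    raise MITimeout()

def get_statistics(group, timeout=5):
    os.mkfifo(REPLY_PATH)
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(timeout)
    try:
        # MI FIFO request format: ":command:reply_fifo", then one
        # parameter per line, terminated by an empty line.
        with open(OPENSIPS_FIFO, "w") as fifo:
            fifo.write(":get_statistics:%s\n%s\n\n" % (REPLY_NAME, group))
        # open() blocks until OpenSIPS opens the write end of the reply
        # pipe; the alarm bails us out if no answer ever comes, which is
        # exactly where a shell-script reader would hang forever.
        with open(REPLY_PATH, "r") as reply:
            return reply.read()
    except MITimeout:
        return None
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, old_handler)
        os.unlink(REPLY_PATH)  # never leave opensips_receiver_* behind

if __name__ == "__main__":
    print(get_statistics("pkmem:"))

The timeout matters because a reader with no timeout that gets killed mid-wait (say, by cron) is one plausible way to orphan exactly these opensips_receiver_* pipes.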
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Sep 19, 2013 at 4:58 PM, Bobby Smith <span dir="ltr"><<a href="mailto:bobby.smith@gmail.com" target="_blank">bobby.smith@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Greetings list,<div><br></div><div>We're trying to track down some memory issues that we originally thought were related to rabbitmq, but after updating to the latest 1.9 I'm seeing a lot of these errors in the log file:</div>
2013-09-19T20:54:40.115582+00:00 registrar2 /usr/local/opensips/sbin/opensips[3916]: CRITICAL:dialog:log_next_state_dlg: bogus event 2 in state 3 for dlg 0x2acc8b7780b0 [3248:527118168] with clid '3388204-13671@10.215.190.98' and tags '3388204' '13665SIPpTag01563795'
I think I understand what this means (it's related to the order in which the 200 OK and ACK get processed), but repeating the same test on a previous revision doesn't produce these messages.
Also, after running the test for a short time:

2013-09-19T18:02:23.809205+00:00 registrar2 /usr/local/opensips/sbin/opensips[3918]: ERROR:core:build_req_buf_from_sip_req: out of pkg memory
2013-09-19T18:02:23.809231+00:00 registrar2 /usr/local/opensips/sbin/opensips[3918]: ERROR:tm:print_uac_request: no more shm_mem
2013-09-19T18:02:23.809242+00:00 registrar2 /usr/local/opensips/sbin/opensips[3917]: ERROR:core:build_req_buf_from_sip_req: out of pkg memory
2013-09-19T18:02:23.809252+00:00 registrar2 /usr/local/opensips/sbin/opensips[3918]: ERROR:tm:t_forward_nonack: failure to add branches
2013-09-19T18:02:23.809261+00:00 registrar2 /usr/local/opensips/sbin/opensips[3917]: ERROR:tm:print_uac_request: no more shm_mem
2013-09-19T18:02:23.809271+00:00 registrar2 /usr/local/opensips/sbin/opensips[3917]: ERROR:tm:t_forward_nonack: failure to add branches
2013-09-19T18:02:23.809279+00:00 registrar2 /usr/local/opensips/sbin/opensips[3918]: ERROR:tm:_reply_light: failed to allocate shmem buffer
2013-09-19T18:02:23.809288+00:00 registrar2 /usr/local/opensips/sbin/opensips[3917]: ERROR:tm:_reply_light: failed to allocate shmem buffer
2013-09-19T18:02:23.809297+00:00 registrar2 /usr/local/opensips/sbin/opensips[3916]: ERROR:tm:new_t: out of mem
2013-09-19T18:02:23.809306+00:00 registrar2 /usr/local/opensips/sbin/opensips[3916]: ERROR:tm:t_newtran: new_t failed
2013-09-19T18:02:23.809911+00:00 registrar2 /usr/local/opensips/sbin/opensips[3921]: ERROR:tm:new_t: out of mem
2013-09-19T18:02:23.809942+00:00 registrar2 /usr/local/opensips/sbin/opensips[3917]: ERROR:tm:new_t: out of mem
2013-09-19T18:02:23.809970+00:00 registrar2 /usr/local/opensips/sbin/opensips[3917]: ERROR:tm:t_newtran: new_t failed
2013-09-19T18:02:23.809999+00:00 registrar2 /usr/local/opensips/sbin/opensips[3916]: ERROR:tm:new_t: out of mem
2013-09-19T18:02:23.810037+00:00 registrar2 /usr/local/opensips/sbin/opensips[3916]: ERROR:tm:t_newtran: new_t failed
2013-09-19T18:02:23.810068+00:00 registrar2 /usr/local/opensips/sbin/opensips[3921]: ERROR:tm:t_newtran: new_t failed
2013-09-19T18:02:23.810880+00:00 registrar2 /usr/local/opensips/sbin/opensips[3919]: ERROR:core:build_req_buf_from_sip_req: out of pkg memory
2013-09-19T18:02:23.810921+00:00 registrar2 /usr/local/opensips/sbin/opensips[3921]: ERROR:dialog:dlg_add_leg_info: Failed to resize legs array

It seems very strange that we'd run out of both package and shared memory at the same time. When I dump statistics while these messages are flooding the log, I see:
shmem:total_size = 1073741824
shmem:used_size = 168525088
shmem:real_used_size = 390522728
shmem:max_used_size = 1060997488
shmem:free_size = 683219096
shmem:fragments = 1106426

pkmem:0-real_used_size = 601136
pkmem:1-real_used_size = 610592
pkmem:2-real_used_size = 50858056
pkmem:3-real_used_size = 610416
pkmem:4-real_used_size = 610416
pkmem:5-real_used_size = 610416
pkmem:6-real_used_size = 610416
pkmem:7-real_used_size = 610416
pkmem:8-real_used_size = 610416
pkmem:9-real_used_size = 610416
pkmem:10-real_used_size = 610416
pkmem:11-real_used_size = 650864
pkmem:12-real_used_size = 654800
pkmem:13-real_used_size = 650944
pkmem:14-real_used_size = 651136
pkmem:15-real_used_size = 650704
pkmem:16-real_used_size = 650888
pkmem:17-real_used_size = 651712
pkmem:18-real_used_size = 651040
pkmem:19-real_used_size = 601136
pkmem:20-real_used_size = 618512
pkmem:21-real_used_size = 669680
pkmem:22-real_used_size = 669680
pkmem:23-real_used_size = 669680
pkmem:24-real_used_size = 669680
pkmem:25-real_used_size = 669680
pkmem:26-real_used_size = 669680
pkmem:27-real_used_size = 669680
pkmem:28-real_used_size = 669680
pkmem:29-real_used_size = 660464

And pkmem is configured for 64 MB per process.
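If I'm reading these numbers right (assuming real_used_size is used_size plus allocator overhead), the shmem pool looks heavily fragmented rather than genuinely full, and max_used_size says we've brushed the 1 GB ceiling at some point. Rough arithmetic:

# Back-of-the-envelope check on the stats above.
total, used, real_used, max_used = 1073741824, 168525088, 390522728, 1060997488
fragments = 1106426

overhead = real_used - used
print(overhead)               # 221997640 bytes of allocator overhead
print(overhead // fragments)  # ~200 bytes per fragment on average
print(max_used / total)       # ~0.988: shmem has peaked near the 1 GB cap
print(50858056 / (64 << 20))  # ~0.76: MI FIFO process pkg memory usage

So even though free_size reports ~680 MB free, a million-plus fragments could plausibly explain allocation failures if no contiguous block of the requested size is left.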
Any thoughts? Transactions don't actually seem to be getting dropped; we just see these strange errors in the logs.

Thanks,