Hi list,

I've done quite a bit of troubleshooting, and I've found the switch runs clean when not using dp_translate, but as soon as we do use it the errors appear.

After a few thousand calls we start getting the following (no errors before this):
Sep 18 00:09:13 sips /usr/local/sbin/opensips[68260]: ERROR:dialplan:dp_get_svalue: no AVP or SCRIPTVAR found (error in scripts)
Sep 18 00:09:13 sips /usr/local/sbin/opensips[68260]: ERROR:dialplan:dp_translate_f: invalid param 2
Sep 18 00:09:13 sips /usr/local/sbin/opensips[68260]: ERROR:core:do_assign: no value in right expression
Sep 18 00:09:13 sips /usr/local/sbin/opensips[68260]: ERROR:core:do_assign: error at line: 298

Backtrace shows:
#0  0x0000000801ff0211 in rule_translate (msg=0x6fe600, string={s = 0x80282a9c3 "1234569999", len = 10}, rule=Variable "rule" is not available.
) at dp_repl.c:192
192             memcpy(result->s + result->len, match.begin, match.len);
(gdb)

We're using SIPp to test this. I'm setting the source and destination numbers manually in AVP vars and then running dp_translate on them; it takes a 10-digit number and turns it into 11 digits. We have about 45 rules loaded into the dialplan database; this particular dialplan ID has 2 rules in total, and we call dp_translate a total of 4 times for each new call.
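For reference, the relevant part of our route block looks roughly like this (the AVP names, dialplan ID and xlog lines are simplified placeholders, not our exact config):

# simplified sketch of the per-call logic
$avp(s:src) = $fU;       # 10-digit calling number from the SIPp scenario
$avp(s:dst) = $rU;       # 10-digit called number from the SIPp scenario

# dialplan ID 1 holds the 2 rules that turn 10 digits into 11;
# dp_translate ends up being run 4 times per call in total
if (!dp_translate("1", "$avp(s:src)/$avp(s:srcout)")) {
    xlog("L_ERR", "dp_translate failed for src $avp(s:src)\n");
}
if (!dp_translate("1", "$avp(s:dst)/$avp(s:dstout)")) {
    xlog("L_ERR", "dp_translate failed for dst $avp(s:dst)\n");
}

# line 298 of our script is an assignment along these lines, which is what
# throws "no value in right expression" when the output AVP was never set
# because dp_translate errored out
$rU = $avp(s:dstout);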
vmstat is basically all 0's with dp_translate disabled; with it enabled it looks like:

0 9 0 2891M 2574M 1484 0 0 0 3737 0 0 0 2744 29807 11711 13 15 72
1 7 0 2899M 2569M 1493 0 0 0 1983 0 0 0 2678 39221 11355 13 11 76
0 8 0 2891M 2568M 1119 0 0 0 2821 0 0 0 2360 28331 10401 13 15 72
0 8 0 2901M 2565M 1477 0 0 0 2086 0 0 0 2226 39722 9430 11 15 74
1 8 0 2893M 2560M 1250 0 0 0 1993 0 0 0 2912 23983 12123 11 15 74
4 6 0 2901M 2551M 1557 0 0 0 2035 0 0 0 3075 38446 13035 12 18 70
0 9 0 2893M 2548M 1103 0 0 0 1877 0 0 0 2772 26050 11474 12 12 76
0 8 0 2901M 2539M 1434 0 0 0 743 0 0 0 3289 34833 13759 8 17 75
0 9 0 2893M 2534M 943 0 0 0 1533 0 0 0 3372 23843 14379 8 24 68
2 7 0 2901M 2528M 1252 0 0 0 1207 0 0 0 2762 39615 11275 12 13 75
0 8 0 2902M 2521M 1134 0 0 0 703 0 0 0 3364 18464 14069 6 18 76
0 8 0 2901M 2514M 1670 0 0 0 1737 0 0 0 3771 17832 17211 1 16 82
0 8 0 2902M 2508M 1212 0 0 0 803 0 0 0 3141 5263 13990 1 14 85
0 8 0 2901M 2499M 1542 0 0 0 1241 0 0 0 3720 17120 16641 1 17 82
0 7 0 2902M 2497M 1260 0 0 0 2027 0 0 0 2561 6328 11863 1 14 85
0 7 0 2901M 2499M 1979 0 0 0 3653 0 0 0 2442 19121 11724 3 13 85
1 8 0 2902M 2498M 1387 0 0 0 3062 0 0 0 2183 6172 10662 0 13 87
We have run this at 5 CPS and the switch runs fine for several thousand calls; at 60+ CPS it also runs for several thousand calls before failing. So it looks like a memory issue to me: it's the total number of processed calls, not the call rate, that determines when it dies on us.
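One thing I can try on my side, assuming our build has memory debugging compiled in (DBG_QM_MALLOC), is lowering memlog so each process dumps its allocator status to syslog on exit, which might show where the memory is going:

# assumption: memory debugging was enabled at compile time,
# otherwise memlog has little to report
debug=3
memlog=1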
Let me know what else I can do to test/debug on my side to help with this.

Thanks