<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <font face="monospace">Hi,<br>
      <br>
      We deliberately avoided an unset command, because in that case it
      would be impossible to decide which remaining node should take
      over the tag - keep in mind that the fundamental idea is that a
      sharing tag MUST be active on some node. By allowing the tag only
      to be set as active, we (1) are sure we have an active node at all
      times and (2) are sure that only one node is active (as all the
      others will step down upon broadcast).<br>
      <br>
      Of course, if you have some scenarios which do not fit with this
      philosophy, I am open to discussions and patches.<br>
      <br>
      Best regards, <br>
    </font>
    <pre class="moz-signature" cols="72">Bogdan-Andrei Iancu

OpenSIPS Founder and Developer
  <a class="moz-txt-link-freetext" href="https://www.opensips-solutions.com">https://www.opensips-solutions.com</a>
  <a class="moz-txt-link-freetext" href="https://www.siphub.com">https://www.siphub.com</a></pre>
    <div class="moz-cite-prefix">On 20.06.2025 16:01, Wadii wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAEeO=-HLXjhtSz-P9pFc4LcbUUtEQ_SE7VxYWj==d9Cw02mGRg@mail.gmail.com">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="ltr">Hello,<br>
        I had a similar setup; you can avoid the split-brain issue by
        not hardcoding the sharing_tag active state in the clusterer
        module config.<br>
        Set both servers to start in the 'backup' state (or as 'none',
        by removing the param entirely); then, a few seconds after
        startup, each server checks via startup_route whether it
        actually holds the VIP and sets itself active only if needed.<br>
        <br>
        startup_route {<br>
        launch(exec("sleep 10 && /path/check_vip.py",,
        $var(out), $var(err)), vip_check_route);<br>
        }<br>
        <br>
        This way, reality determines which node is active, not the
        config files.</div>
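      <br>
      A minimal shell sketch of such a check (hypothetical; the actual
      /path/check_vip.py is not shown in the thread, and the VIP
      address below is a placeholder):<br>

```shell
#!/bin/sh
# Hypothetical sketch of the VIP check (the thread's /path/check_vip.py
# is not shown; this is a shell equivalent). Assumptions: the VIP is
# 203.0.113.10 and the sharing tag is "vip/1".
VIP="203.0.113.10"
TAG="vip/1"

# Read an `ip -o addr show` listing on stdin; succeed if the VIP appears.
vip_present() {
    grep -qw "$VIP"
}

if ip -o addr show 2>/dev/null | vip_present; then
    # Only the node actually holding the VIP promotes its sharing tag.
    /usr/bin/opensips-cli -x mi clusterer_shtag_set_active "$TAG"
fi
```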
      <br>
      <div class="gmail_quote gmail_quote_container">
        <div dir="ltr" class="gmail_attr">Le ven. 20 juin 2025 à 13:48,
          Alexey <<a href="mailto:slackway2me@gmail.com"
            moz-do-not-send="true" class="moz-txt-link-freetext">slackway2me@gmail.com</a>>
          a écrit :<br>
        </div>
        <blockquote class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi
          list, hi Team,<br>
          <br>
          I would like to know if it is possible to add one more<br>
          MI command to the clusterer module, which would be<br>
          the opposite of the 'clusterer_shtag_set_active' command.<br>
          <br>
          E.g. 'clusterer_shtag_set_inactive'.<br>
          It could be very useful in some scenarios.<br>
          <br>
          We have an active/standby cluster based on Keepalived.<br>
          And Keepalived is configured with the ability to be switched<br>
          manually from active to standby. That is, the keepalived.conf<br>
          on both nodes has these two non-default options:<br>
          <br>
              state BACKUP        # both must be BACKUP for nopreempt to
          work<br>
              nopreempt<br>
          <br>
          And keepalived is configured to execute a script when the
          node<br>
          changes state (becoming 'master' is keepalived terminology
          for active):<br>
          <br>
              ...<br>
              vrrp_instance voip {<br>
          <br>
                  notify_master "/etc/keepalived/master-backup.sh
          MASTER"<br>
                  notify_backup "/etc/keepalived/master-backup.sh
          BACKUP"<br>
                  notify_stop "/etc/keepalived/master-backup.sh STOP"<br>
                  notify_fault "/etc/keepalived/master-backup.sh FAULT"<br>
              ...<br>
          <br>
          <br>
          And this script runs MI command:<br>
          <br>
              ...<br>
              /usr/bin/opensips-cli -x mi clusterer_shtag_set_active
          vip/1<br>
              ...<br>
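          <br>
          For illustration, a minimal sketch of what such a notify
          script could look like (hypothetical; the real
          master-backup.sh is not shown in the thread - only the tag
          name and the set_active call are taken from it):<br>

```shell
#!/bin/sh
# Hypothetical sketch of /etc/keepalived/master-backup.sh; keepalived
# passes the new state (MASTER/BACKUP/STOP/FAULT) as $1.
TAG="vip/1"   # the sharing tag from the clusterer config

# Map a keepalived state to the MI command to run (empty = nothing to do).
mi_command_for() {
    case "$1" in
        MASTER) echo "clusterer_shtag_set_active" ;;
        # BACKUP/STOP/FAULT: no opposite MI command exists today; this
        # is where the proposed clusterer_shtag_set_inactive would fit.
        *)      echo "" ;;
    esac
}

CMD="$(mi_command_for "$1")"
if [ -n "$CMD" ]; then
    /usr/bin/opensips-cli -x mi "$CMD" "$TAG"
fi
```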
          <br>
          Keepalived works well.<br>
          <br>
          One OpenSIPS node has the following configuration<br>
          of the clusterer module and sharing_tags:<br>
          <br>
          ...<br>
          modparam("clusterer", "sharing_tag", "vip/1=active")<br>
          ...<br>
          <br>
          <br>
          The other one - the following:<br>
          ...<br>
          modparam("clusterer", "sharing_tag", "vip/1=backup")<br>
          ...<br>
          <br>
          <br>
          So, if the state of nodes changes, we can run MI command<br>
          'clusterer_shtag_set_active' triggered by Keepalived state
          change,<br>
          and we do it.<br>
          <br>
          But if the first node is rebooted, OpenSIPS starts with the
          configured option -<br>
          modparam("clusterer", "sharing_tag", "vip/1=active").<br>
          <br>
          But because of 'nopreempt' option in keepalived.conf the state<br>
          of the nodes remains unchanged (I configured it on purpose,<br>
          it's convenient for me to be able to switch nodes manually and<br>
          to decide which one will be active - I copied the behavior of
          AcmePacket SBC<br>
          with these options).<br>
          <br>
          So, in some cases the situation becomes as follows:<br>
          the second node became active (either manually or because of
          some<br>
          problems on the first node) and remains in active state.<br>
          <br>
          But after rebooting the first node, OpenSIPS on it starts
          with the<br>
          modparam("clusterer", "sharing_tag", "vip/1=active")
          parameter.<br>
          <br>
          So, from then on, both nodes are sure that each of them
          should have<br>
          the active sharing_tag.<br>
          At the same time, keepalived on the first node enters backup
          state,<br>
          because it sees that the second node is already in master
          state<br>
          (nopreempt option works).<br>
          <br>
          If such a command existed, I could use it in keepalived
          config/script<br>
          and run it during switching to backup, something like -<br>
          <br>
               /usr/bin/opensips-cli -x mi clusterer_shtag_set_INactive
          vip/1<br>
          <br>
          <br>
          -- <br>
          best regards, Alexey<br>
          <a href="https://alexeyka.zantsev.com/" rel="noreferrer"
            target="_blank" moz-do-not-send="true"
            class="moz-txt-link-freetext">https://alexeyka.zantsev.com/</a><br>
          <br>
          _______________________________________________<br>
          Users mailing list<br>
          <a href="mailto:Users@lists.opensips.org" target="_blank"
            moz-do-not-send="true" class="moz-txt-link-freetext">Users@lists.opensips.org</a><br>
          <a
href="http://lists.opensips.org/cgi-bin/mailman/listinfo/users"
            rel="noreferrer" target="_blank" moz-do-not-send="true"
            class="moz-txt-link-freetext">http://lists.opensips.org/cgi-bin/mailman/listinfo/users</a><br>
        </blockquote>
      </div>
      <br>
      <fieldset class="moz-mime-attachment-header"></fieldset>
      <pre class="moz-quote-pre" wrap="">_______________________________________________
Users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Users@lists.opensips.org">Users@lists.opensips.org</a>
<a class="moz-txt-link-freetext" href="http://lists.opensips.org/cgi-bin/mailman/listinfo/users">http://lists.opensips.org/cgi-bin/mailman/listinfo/users</a>
</pre>
    </blockquote>
    <br>
  </body>
</html>