
    paulwong

    Building an Artemis Cluster

    If multiple Artemis instances are started standalone, they know nothing about each other and cannot provide high availability.

    Cluster

    Multiple instances therefore need to be grouped into a cluster to achieve high availability: within a cluster, instances can fail over, messages can be load-balanced, and messages can be redistributed. This requires listing the IP and port of every instance in the cluster configuration section.
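In broker.xml this takes the form of a cluster-connection that lists the other instances as static connectors. A minimal two-node fragment, using the same connector names, credentials, and addresses as the full node configurations later in this post:

```xml
<!-- the connector announced for this node, plus one for each peer -->
<connectors>
   <connector name="artemis">tcp://10.10.27.69:62616</connector>
   <connector name="node2">tcp://10.10.27.69:62617</connector>
</connectors>

<cluster-user>admin-cluster</cluster-user>
<cluster-password>admin-cluster</cluster-password>

<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>artemis</connector-ref>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <static-connectors>
         <connector-ref>node2</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>
```

ON_DEMAND load balancing only forwards messages to nodes that actually have consumers; max-hops 1 limits forwarding to direct neighbours.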

    But if a cluster instance goes down, the messages it stores disappear with it, so high availability is needed. There are two ways to achieve it: shared store and message replication.

    Shared Store

    Shared store uses a master/slave pair: two instances store their messages in the same directory; one is the master, the other the slave, and only one of them (the master) serves clients at any given time. When the master goes down, the slave takes over and becomes the master. Because the messages live in the shared directory, none are lost when the slave takes over.
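In broker.xml the two roles are declared through ha-policy; these fragments match the node1 (master) and node2 (slave) configurations shown later in this post:

```xml
<!-- master (node1) -->
<ha-policy>
   <shared-store>
      <master>
         <failover-on-shutdown>true</failover-on-shutdown>
      </master>
   </shared-store>
</ha-policy>

<!-- slave (node2): its paging/bindings/journal/large-messages
     directories must point at the same location as the master's -->
<ha-policy>
   <shared-store>
      <slave>
         <failover-on-shutdown>true</failover-on-shutdown>
      </slave>
   </shared-store>
</ha-policy>
```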

    Message Replication

    Message replication also uses a master/slave pair: the slave replicates the messages held on its master, so it keeps a backup of them. When the master goes down, the slave becomes the master; since the messages were already replicated, none are lost.
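For replication, ha-policy uses a replication element instead of shared-store. The configs in this post only show the shared-store variant, so this is a sketch based on the standard Artemis 2.x schema:

```xml
<!-- master -->
<ha-policy>
   <replication>
      <master>
         <!-- on restart, check whether a backup has taken over as live -->
         <check-for-live-server>true</check-for-live-server>
      </master>
   </replication>
</ha-policy>

<!-- slave: receives a copy of the master's journal over the network -->
<ha-policy>
   <replication>
      <slave>
         <!-- hand control back to the original master when it returns -->
         <allow-failback>true</allow-failback>
      </slave>
   </replication>
</ha-policy>
```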

    Message Redistribution

    Message redistribution: messages are spread across the instances, and if only one consumer connects to the cluster, all messages are still consumed; there is no need for separate consumers to connect to each instance.
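Redistribution is enabled per address via redistribution-delay, as in the catch-all address-setting used by the node configurations later in this post; 0 means messages move immediately to a node that has consumers:

```xml
<address-settings>
   <address-setting match="#">
      <!-- 0 = redistribute as soon as this node holds messages but has no consumers;
           -1 (the default) disables redistribution entirely -->
      <redistribution-delay>0</redistribution-delay>
   </address-setting>
</address-settings>
```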

    Commands to Create the Cluster

    create-artemis-cluster.sh
    #! /bin/bash

    MQ_PORT=$1
    ARTEMIS_HOME=$2
    HOST=$3
    HTTP_PORT=$4
    STATIC_CONNECTOR=$5
    CLUSTER_NAME=$6
    INSTANCE_FOLDER=$7
    DATA_FOLDER=$8
    HA_MODE=$9
    IS_MASTER_OR_SLAVE=${10}

    echo $MQ_PORT
    echo $IS_MASTER_OR_SLAVE
    # guard against an empty variable before wiping the old instance folder
    [ -n "$INSTANCE_FOLDER" ] && rm -rf "$INSTANCE_FOLDER"

    if [ "$HA_MODE" == "shared-store" ]
    then
      echo "shared-store mode"
      DATA_FOLDER=$ARTEMIS_HOME/data/$CLUSTER_NAME/$DATA_FOLDER
    else
      echo "replicated mode"
      DATA_FOLDER=data
    fi

    if [ "$STATIC_CONNECTOR" == "no" ]
    then
      echo "no staticCluster"
      STATIC_CONNECTOR=""
    else
      echo "need staticCluster"
      STATIC_CONNECTOR="--staticCluster $STATIC_CONNECTOR"
    fi

    ./bin/artemis create \
    --no-amqp-acceptor \
    --no-mqtt-acceptor \
    --no-stomp-acceptor \
    --no-hornetq-acceptor \
    --clustered \
    $STATIC_CONNECTOR \
    --max-hops 1 \
    --$HA_MODE \
    $IS_MASTER_OR_SLAVE \
    --cluster-user admin-cluster \
    --cluster-password admin-cluster  \
    --failover-on-shutdown true \
    --data $DATA_FOLDER \
    --default-port $MQ_PORT  \
    --encoding UTF-8 \
    --home $ARTEMIS_HOME \
    --name $HOST:$MQ_PORT \
    --host $HOST \
    --http-host $HOST \
    --http-port $HTTP_PORT \
    --user admin \
    --password admin \
    --require-login \
    --role admin \
    --use-client-auth \
    $INSTANCE_FOLDER

    create-artemis-cluster-node1.sh
    #! /bin/bash

    BIN_PATH=$(cd `dirname $0`; pwd)

    cd $BIN_PATH/../

    ARTEMIS_HOME=`pwd`
    echo $ARTEMIS_HOME

    MASTER_IP_1=10.10.27.69
    SLAVE_IP_1=10.10.27.69
    MASTER_IP_2=10.10.27.69
    SLAVE_IP_2=10.10.27.69

    MASTER_PORT_1=62616
    SLAVE_PORT_1=62617
    MASTER_PORT_2=62618
    SLAVE_PORT_2=62619

    MASTER_HTTP_PORT_1=8261
    SLAVE_HTTP_PORT_1=8262
    MASTER_HTTP_PORT_2=8263
    SLAVE_HTTP_PORT_2=8264

    MASTER_NODE_1=node1
    SLAVE_NODE_1=node2
    MASTER_NODE_2=node3
    SLAVE_NODE_2=node4

    STATIC_CONNECTOR=tcp://$SLAVE_IP_1:$SLAVE_PORT_1,tcp://$MASTER_IP_2:$MASTER_PORT_2,tcp://$SLAVE_IP_2:$SLAVE_PORT_2

    CLUSTER=cluster-1
    DATA_FOLDER=data-1

    HA_MODE=shared-store

    INSTANCE_FOLDER=brokers/$HA_MODE/$CLUSTER/$MASTER_NODE_1

    ./bin/create-artemis-cluster.sh $MASTER_PORT_1 $ARTEMIS_HOME $MASTER_IP_1 $MASTER_HTTP_PORT_1 $STATIC_CONNECTOR $CLUSTER $INSTANCE_FOLDER $DATA_FOLDER $HA_MODE


    create-artemis-cluster-replication-node1.sh
    #! /bin/bash

    BIN_PATH=$(cd `dirname $0`; pwd)

    cd $BIN_PATH/../

    ARTEMIS_HOME=`pwd`
    echo $ARTEMIS_HOME

    MASTER_IP_1=10.10.27.69
    SLAVE_IP_1=10.10.27.69
    MASTER_IP_2=10.10.27.69
    SLAVE_IP_2=10.10.27.69
    MASTER_IP_3=10.10.27.69
    SLAVE_IP_3=10.10.27.69

    MASTER_PORT_1=63616
    SLAVE_PORT_1=63617
    MASTER_PORT_2=63618
    SLAVE_PORT_2=63619
    MASTER_PORT_3=63620
    SLAVE_PORT_3=63621

    MASTER_HTTP_PORT_1=8361
    SLAVE_HTTP_PORT_1=8362
    MASTER_HTTP_PORT_2=8363
    SLAVE_HTTP_PORT_2=8364
    MASTER_HTTP_PORT_3=8365
    SLAVE_HTTP_PORT_3=8366

    MASTER_NODE_1=node1
    SLAVE_NODE_1=node2
    MASTER_NODE_2=node3
    SLAVE_NODE_2=node4
    MASTER_NODE_3=node5
    SLAVE_NODE_3=node6

    #STATIC_CONNECTOR=tcp://$SLAVE_IP_1:$SLAVE_PORT_1,tcp://$MASTER_IP_2:$MASTER_PORT_2,tcp://$SLAVE_IP_2:$SLAVE_PORT_2,tcp://$MASTER_IP_3:$MASTER_PORT_3,tcp://$SLAVE_IP_3:$SLAVE_PORT_3
    STATIC_CONNECTOR=no

    CLUSTER_1=cluster-1
    CLUSTER_2=cluster-2
    DATA_FOLDER_1=data-1
    DATA_FOLDER_2=data-2
    DATA_FOLDER_3=data

    HA_MODE_1=shared-store
    HA_MODE_2=replicated

    CURRENT_IP=$MASTER_IP_1
    CURRENT_PORT=$MASTER_PORT_1
    CURRENT_HTTP_PORT=$MASTER_HTTP_PORT_1
    CURRENT_NODE=$MASTER_NODE_1
    CURRENT_CLUSTER=$CLUSTER_2
    CURRENT_DATA_FOLDER=$DATA_FOLDER_3
    CURRENT_HA_MODE=$HA_MODE_2

    INSTANCE_FOLDER=brokers/$CURRENT_HA_MODE/$CURRENT_CLUSTER/$CURRENT_NODE

    ./bin/create-artemis-cluster.sh $CURRENT_PORT $ARTEMIS_HOME $CURRENT_IP $CURRENT_HTTP_PORT $STATIC_CONNECTOR $CURRENT_CLUSTER $INSTANCE_FOLDER $CURRENT_DATA_FOLDER $CURRENT_HA_MODE

    Artemis configuration files


    node1:
    <?xml version='1.0'?>
    <!--
    Licensed to the Apache Software Foundation (ASF) under one
    or more contributor license agreements.  See the NOTICE file
    distributed with this work for additional information
    regarding copyright ownership.  The ASF licenses this file
    to you under the Apache License, Version 2.0 (the
    "License"); you may not use this file except in compliance
    with the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing,
    software distributed under the License is distributed on an
    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    KIND, either express or implied.  See the License for the
    specific language governing permissions and limitations
    under the License.
    -->

    <configuration xmlns="urn:activemq"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xmlns:xi="http://www.w3.org/2001/XInclude"
                   xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

       <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="urn:activemq:core ">

          <name>10.10.27.69</name>


          <persistence-enabled>true</persistence-enabled>

          <!-- this could be ASYNCIO, MAPPED, NIO
               ASYNCIO: Linux Libaio
               MAPPED: mmap files
               NIO: Plain Java Files
           
    -->
          <journal-type>ASYNCIO</journal-type>

          <paging-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/paging</paging-directory>

          <bindings-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/bindings</bindings-directory>

          <journal-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/journal</journal-directory>

          <large-messages-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/large-messages</large-messages-directory>

          <journal-datasync>true</journal-datasync>

          <journal-min-files>2</journal-min-files>

          <journal-pool-files>10</journal-pool-files>

          <journal-device-block-size>4096</journal-device-block-size>

          <journal-file-size>10M</journal-file-size>
          
          <!--
           This value was determined through a calculation.
           Your system could perform 62.5 writes per millisecond
           on the current journal configuration.
           That translates as a sync write every 16000 nanoseconds.

           Note: If you specify 0 the system will perform writes directly to the disk.
                 We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
          
    -->
          <journal-buffer-timeout>16000</journal-buffer-timeout>


          <!--
            When using ASYNCIO, this will determine the writing queue depth for libaio.
           
    -->
          <journal-max-io>4096</journal-max-io>
          <!--
            You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
             <network-check-NIC>theNicName</network-check-NIC>
            
    -->

          <!--
            Use this to use an HTTP server to validate the network
             <network-check-URL-list>http://www.apache.org</network-check-URL-list> 
    -->

          <!-- <network-check-period>10000</network-check-period> -->
          <!-- <network-check-timeout>1000</network-check-timeout> -->

          <!-- this is a comma separated list, no spaces, just DNS or IPs
               it should accept IPV6

               Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
                        Using IPs that could eventually disappear or be partially visible may defeat the purpose.
                        You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running 
    -->
          <!-- <network-check-list>10.0.0.1</network-check-list> -->

          <!-- use this to customize the ping used for ipv4 addresses -->
          <!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->

          <!-- use this to customize the ping used for ipv6 addresses -->
          <!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->


          <connectors>
                <!-- Connector used to be announced through cluster connections and notifications -->
                <connector name="artemis">tcp://10.10.27.69:62616</connector>
                <connector name = "node2">tcp://10.10.27.69:62617</connector>
                <connector name = "node3">tcp://10.10.27.69:62618</connector>
                <connector name = "node4">tcp://10.10.27.69:62619</connector>
          </connectors>


          <!-- how often we are looking for how many bytes are being used on the disk in ms -->
          <disk-scan-period>5000</disk-scan-period>

          <!-- once the disk hits this limit the system will block, or close the connection in certain protocols
               that won't support flow control. 
    -->
          <max-disk-usage>90</max-disk-usage>

          <!-- should the broker detect dead locks and other issues -->
          <critical-analyzer>true</critical-analyzer>

          <critical-analyzer-timeout>120000</critical-analyzer-timeout>

          <critical-analyzer-check-period>60000</critical-analyzer-check-period>

          <critical-analyzer-policy>HALT</critical-analyzer-policy>

          
          <page-sync-timeout>212000</page-sync-timeout>


                <!-- the system will enter into page mode once you hit this limit.
               This is an estimate in bytes of how much the messages are using in memory

                The system will use half of the available memory (-Xmx) by default for the global-max-size.
                You may specify a different value here if you need to customize it to your needs.

                <global-max-size>100Mb</global-max-size>

          
    -->

          <acceptors>

             <!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
             <!-- amqpCredits: The number of credits sent to AMQP producers -->
             <!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
             <!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
                                          as duplicate detection requires applicationProperties to be parsed on the server. 
    -->
             <!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
                                           default: 102400, -1 would mean to disable large message control 
    -->

             <!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
                        "anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
                        See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. 
    -->


             <!-- Acceptor for every supported protocol -->
             <acceptor name="artemis">tcp://10.10.27.69:62616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>

          </acceptors>


          <cluster-user>admin-cluster</cluster-user>

          <cluster-password>admin-cluster</cluster-password>
          <cluster-connections>
             <cluster-connection name="my-cluster">
                <connector-ref>artemis</connector-ref>
                <message-load-balancing>ON_DEMAND</message-load-balancing>
                <max-hops>1</max-hops>
                <static-connectors>
                   <connector-ref>node2</connector-ref>
                   <connector-ref>node3</connector-ref>
                   <connector-ref>node4</connector-ref>
                </static-connectors>
             </cluster-connection>
          </cluster-connections>


          <ha-policy>
             <shared-store>
                <master>
                   <failover-on-shutdown>true</failover-on-shutdown>
                </master>
             </shared-store>
          </ha-policy>

          <security-settings>
             <security-setting match="#">
                <permission type="createNonDurableQueue" roles="admin"/>
                <permission type="deleteNonDurableQueue" roles="admin"/>
                <permission type="createDurableQueue" roles="admin"/>
                <permission type="deleteDurableQueue" roles="admin"/>
                <permission type="createAddress" roles="admin"/>
                <permission type="deleteAddress" roles="admin"/>
                <permission type="consume" roles="admin"/>
                <permission type="browse" roles="admin"/>
                <permission type="send" roles="admin"/>
                <!-- we need this otherwise ./artemis data imp wouldn't work -->
                <permission type="manage" roles="admin"/>
             </security-setting>
          </security-settings>

          <address-settings>
             <!-- if you define auto-create on certain queues, management has to be auto-create -->
             <address-setting match="activemq.management#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
             </address-setting>
             <!--default for catch all-->
             <address-setting match="#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
                <redistribution-delay>0</redistribution-delay>
             </address-setting>
          </address-settings>

          <addresses>
             <address name="DLQ">
                <anycast>
                   <queue name="DLQ" />
                </anycast>
             </address>
             <address name="ExpiryQueue">
                <anycast>
                   <queue name="ExpiryQueue" />
                </anycast>
             </address>

          </addresses>


          <!-- Uncomment the following if you want to use the standard LoggingActiveMQServerPlugin plugin to log events
          <broker-plugins>
             <broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
                <property key="LOG_ALL_EVENTS" value="true"/>
                <property key="LOG_CONNECTION_EVENTS" value="true"/>
                <property key="LOG_SESSION_EVENTS" value="true"/>
                <property key="LOG_CONSUMER_EVENTS" value="true"/>
                <property key="LOG_DELIVERING_EVENTS" value="true"/>
                <property key="LOG_SENDING_EVENTS" value="true"/>
                <property key="LOG_INTERNAL_EVENTS" value="true"/>
             </broker-plugin>
          </broker-plugins>
          
    -->

       </core>
    </configuration>

    node2
    <?xml version='1.0'?>
    <!--
    Licensed to the Apache Software Foundation (ASF) under one
    or more contributor license agreements.  See the NOTICE file
    distributed with this work for additional information
    regarding copyright ownership.  The ASF licenses this file
    to you under the Apache License, Version 2.0 (the
    "License"); you may not use this file except in compliance
    with the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing,
    software distributed under the License is distributed on an
    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    KIND, either express or implied.  See the License for the
    specific language governing permissions and limitations
    under the License.
    -->

    <configuration xmlns="urn:activemq"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xmlns:xi="http://www.w3.org/2001/XInclude"
                   xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

       <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="urn:activemq:core ">

          <name>10.10.27.69</name>


          <persistence-enabled>true</persistence-enabled>

          <!-- this could be ASYNCIO, MAPPED, NIO
               ASYNCIO: Linux Libaio
               MAPPED: mmap files
               NIO: Plain Java Files
           
    -->
          <journal-type>ASYNCIO</journal-type>

          <paging-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/paging</paging-directory>

          <bindings-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/bindings</bindings-directory>

          <journal-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/journal</journal-directory>

          <large-messages-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/large-messages</large-messages-directory>

          <journal-datasync>true</journal-datasync>

          <journal-min-files>2</journal-min-files>

          <journal-pool-files>10</journal-pool-files>

          <journal-device-block-size>4096</journal-device-block-size>

          <journal-file-size>10M</journal-file-size>
          
          <!--
           This value was determined through a calculation.
           Your system could perform 25 writes per millisecond
           on the current journal configuration.
           That translates as a sync write every 40000 nanoseconds.

           Note: If you specify 0 the system will perform writes directly to the disk.
                 We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
          
    -->
          <journal-buffer-timeout>40000</journal-buffer-timeout>


          <!--
            When using ASYNCIO, this will determine the writing queue depth for libaio.
           
    -->
          <journal-max-io>4096</journal-max-io>
          <!--
            You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
             <network-check-NIC>theNicName</network-check-NIC>
            
    -->

          <!--
            Use this to use an HTTP server to validate the network
             <network-check-URL-list>http://www.apache.org</network-check-URL-list> 
    -->

          <!-- <network-check-period>10000</network-check-period> -->
          <!-- <network-check-timeout>1000</network-check-timeout> -->

          <!-- this is a comma separated list, no spaces, just DNS or IPs
               it should accept IPV6

               Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
                        Using IPs that could eventually disappear or be partially visible may defeat the purpose.
                        You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running 
    -->
          <!-- <network-check-list>10.0.0.1</network-check-list> -->

          <!-- use this to customize the ping used for ipv4 addresses -->
          <!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->

          <!-- use this to customize the ping used for ipv6 addresses -->
          <!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->


          <connectors>
                <!-- Connector used to be announced through cluster connections and notifications -->
                <connector name="artemis">tcp://10.10.27.69:62617</connector>
                <connector name = "node1">tcp://10.10.27.69:62616</connector>
                <connector name = "node3">tcp://10.10.27.69:62618</connector>
                <connector name = "node4">tcp://10.10.27.69:62619</connector>
          </connectors>


          <!-- how often we are looking for how many bytes are being used on the disk in ms -->
          <disk-scan-period>5000</disk-scan-period>

          <!-- once the disk hits this limit the system will block, or close the connection in certain protocols
               that won't support flow control. 
    -->
          <max-disk-usage>90</max-disk-usage>

          <!-- should the broker detect dead locks and other issues -->
          <critical-analyzer>true</critical-analyzer>

          <critical-analyzer-timeout>120000</critical-analyzer-timeout>

          <critical-analyzer-check-period>60000</critical-analyzer-check-period>

          <critical-analyzer-policy>HALT</critical-analyzer-policy>

          
          <page-sync-timeout>180000</page-sync-timeout>


                <!-- the system will enter into page mode once you hit this limit.
               This is an estimate in bytes of how much the messages are using in memory

                The system will use half of the available memory (-Xmx) by default for the global-max-size.
                You may specify a different value here if you need to customize it to your needs.

                <global-max-size>100Mb</global-max-size>

          
    -->

          <acceptors>

             <!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
             <!-- amqpCredits: The number of credits sent to AMQP producers -->
             <!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
             <!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
                                          as duplicate detection requires applicationProperties to be parsed on the server. 
    -->
             <!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
                                           default: 102400, -1 would mean to disable large message control 
    -->

             <!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
                        "anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
                        See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. 
    -->


             <!-- Acceptor for every supported protocol -->
             <acceptor name="artemis">tcp://10.10.27.69:62617?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>

          </acceptors>


          <cluster-user>admin-cluster</cluster-user>

          <cluster-password>admin-cluster</cluster-password>
          <cluster-connections>
             <cluster-connection name="my-cluster">
                <connector-ref>artemis</connector-ref>
                <message-load-balancing>ON_DEMAND</message-load-balancing>
                <max-hops>1</max-hops>
                <static-connectors>
                   <connector-ref>node1</connector-ref>
                   <connector-ref>node3</connector-ref>
                   <connector-ref>node4</connector-ref>
                </static-connectors>
             </cluster-connection>
          </cluster-connections>


          <ha-policy>
             <shared-store>
                <slave>
                   <failover-on-shutdown>true</failover-on-shutdown>
                </slave>
             </shared-store>
          </ha-policy>

          <security-settings>
             <security-setting match="#">
                <permission type="createNonDurableQueue" roles="admin"/>
                <permission type="deleteNonDurableQueue" roles="admin"/>
                <permission type="createDurableQueue" roles="admin"/>
                <permission type="deleteDurableQueue" roles="admin"/>
                <permission type="createAddress" roles="admin"/>
                <permission type="deleteAddress" roles="admin"/>
                <permission type="consume" roles="admin"/>
                <permission type="browse" roles="admin"/>
                <permission type="send" roles="admin"/>
                <!-- we need this otherwise ./artemis data imp wouldn't work -->
                <permission type="manage" roles="admin"/>
             </security-setting>
          </security-settings>

          <address-settings>
             <!-- if you define auto-create on certain queues, management has to be auto-create -->
             <address-setting match="activemq.management#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
             </address-setting>
             <!--default for catch all-->
             <address-setting match="#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
                <redistribution-delay>0</redistribution-delay>
             </address-setting>
          </address-settings>

          <addresses>
             <address name="DLQ">
                <anycast>
                   <queue name="DLQ" />
                </anycast>
             </address>
             <address name="ExpiryQueue">
                <anycast>
                   <queue name="ExpiryQueue" />
                </anycast>
             </address>

          </addresses>


          <!-- Uncomment the following if you want to use the standard LoggingActiveMQServerPlugin plugin to log events
          <broker-plugins>
             <broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
                <property key="LOG_ALL_EVENTS" value="true"/>
                <property key="LOG_CONNECTION_EVENTS" value="true"/>
                <property key="LOG_SESSION_EVENTS" value="true"/>
                <property key="LOG_CONSUMER_EVENTS" value="true"/>
                <property key="LOG_DELIVERING_EVENTS" value="true"/>
                <property key="LOG_SENDING_EVENTS" value="true"/>
                <property key="LOG_INTERNAL_EVENTS" value="true"/>
             </broker-plugin>
          </broker-plugins>
          
    -->

       </core>
    </configuration>

    node3
          <acceptors>
             <acceptor name="artemis">tcp://10.10.27.69:62618?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
          </acceptors>

          <connectors>
                <!-- Connector used to be announced through cluster connections and notifications -->
                <connector name="artemis">tcp://10.10.27.69:62618</connector>
                <connector name = "node1">tcp://10.10.27.69:62616</connector>
                <connector name = "node2">tcp://10.10.27.69:62617</connector>
                <connector name = "node4">tcp://10.10.27.69:62619</connector>
          </connectors>

          <cluster-user>admin-cluster</cluster-user>

          <cluster-password>admin-cluster</cluster-password>
          <cluster-connections>
             <cluster-connection name="my-cluster">
                <connector-ref>artemis</connector-ref>
                <message-load-balancing>ON_DEMAND</message-load-balancing>
                <max-hops>1</max-hops>
                <static-connectors>
                   <connector-ref>node1</connector-ref>
                   <connector-ref>node2</connector-ref>
                   <connector-ref>node4</connector-ref>
                </static-connectors>
             </cluster-connection>
          </cluster-connections>


          <ha-policy>
             <shared-store>
                <master>
                   <failover-on-shutdown>true</failover-on-shutdown>
                </master>
             </shared-store>
          </ha-policy>

          <address-settings>
             <address-setting match="#">
                <redistribution-delay>0</redistribution-delay>
             </address-setting>
          </address-settings>

    node4
          <acceptors>
             <acceptor name="artemis">tcp://10.10.27.69:62619?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
          </acceptors>

          <connectors>
                <!-- Connector used to be announced through cluster connections and notifications -->
                <connector name="artemis">tcp://10.10.27.69:62619</connector>
                <connector name = "node1">tcp://10.10.27.69:62616</connector>
                <connector name = "node2">tcp://10.10.27.69:62617</connector>
                <connector name = "node3">tcp://10.10.27.69:62618</connector>
          </connectors>

          <cluster-user>admin-cluster</cluster-user>

          <cluster-password>admin-cluster</cluster-password>
          <cluster-connections>
             <cluster-connection name="my-cluster">
                <connector-ref>artemis</connector-ref>
                <message-load-balancing>ON_DEMAND</message-load-balancing>
                <max-hops>1</max-hops>
                <static-connectors>
                   <connector-ref>node1</connector-ref>
                   <connector-ref>node2</connector-ref>
                   <connector-ref>node3</connector-ref>
                </static-connectors>
             </cluster-connection>
          </cluster-connections>


          <ha-policy>
             <shared-store>
                <master>
                   <failover-on-shutdown>true</failover-on-shutdown>
                </master>
             </shared-store>
          </ha-policy>

          <address-settings>
             <address-setting match="#">
                <redistribution-delay>0</redistribution-delay>
             </address-setting>
          </address-settings>

    Java Client

    Because the cluster is made up of multiple active/passive pairs, the Qpid JMS client is used so that the client can fail over across all of the active brokers. If the Artemis client library were used on its own, failover would only work within a single active/passive pair, not across the different active brokers.

    pom.xml
    <project xmlns="http://maven.apache.org/POM/4.0.0"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <groupId>com.paul</groupId>
        <artifactId>test-artemis-cluster</artifactId>
        <version>0.0.1-SNAPSHOT</version>

        <properties>
            <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
            <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
            <java.version>1.8</java.version>
            <maven.compiler.source>${java.version}</maven.compiler.source>
            <maven.compiler.target>${java.version}</maven.compiler.target>
            <qpid.jms.version>0.23.0</qpid.jms.version>
        </properties>
        
        <dependencies>
          <dependency>
             <groupId>org.apache.qpid</groupId>
             <artifactId>qpid-jms-client</artifactId>
             <version>${qpid.jms.version}</version>
          </dependency>
       </dependencies>

    </project>

    jndi.properties
    java.naming.factory.initial=org.apache.qpid.jms.jndi.JmsInitialContextFactory
    java.naming.security.principal=admin
    java.naming.security.credentials=admin
    connectionFactory.ConnectionFactory=failover:(amqp://10.10.27.69:62616,amqp://10.10.27.69:62617,amqp://10.10.27.69:62618,amqp://10.10.27.69:62619)
    queue.queue/exampleQueue=TEST

    Receiver.java
    package com.paul.testartemiscluster;

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class Receiver {
       public static void main(String args[]) throws Exception{

          try {
             InitialContext context = new InitialContext();

             Queue queue = (Queue) context.lookup("queue/exampleQueue");

             ConnectionFactory cf = (ConnectionFactory) context.lookup("ConnectionFactory");
             

             try (
                   Connection connection = cf.createConnection("admin", "admin");
             ) {
                // transacted session: the acknowledge-mode argument is ignored, so use SESSION_TRANSACTED
                Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                MessageConsumer consumer = session.createConsumer(queue);
                connection.start();
                while (true) {
                   for (int j = 0 ; j < 10; j++) {
                      // timeout 0 blocks until a message arrives; null means the consumer was closed
                      TextMessage receive = (TextMessage) consumer.receive(0);
                      if (receive == null) {
                         return;
                      }
                      System.out.println("received message " + receive.getText());
                   }
                   // commit the transacted session to acknowledge the batch of 10 messages
                   session.commit();
                }
             }
          } catch (Exception e) {
             e.printStackTrace();
          }
       }
    }
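The post only shows a receiver; for completeness, a matching sender against the same JNDI setup can be sketched as follows. This is an illustrative sketch, not part of the original post: it assumes the jndi.properties above (queue/exampleQueue mapped to TEST, the failover ConnectionFactory) and a running cluster reachable with the admin/admin account.

```java
package com.paul.testartemiscluster;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class Sender {
   public static void main(String[] args) throws Exception {
      // Same JNDI entries as the receiver: the failover URI covers all four nodes
      InitialContext context = new InitialContext();
      Queue queue = (Queue) context.lookup("queue/exampleQueue");
      ConnectionFactory cf = (ConnectionFactory) context.lookup("ConnectionFactory");

      try (Connection connection = cf.createConnection("admin", "admin")) {
         // Transacted session: messages only become visible to consumers after commit()
         Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
         MessageProducer producer = session.createProducer(queue);
         for (int i = 0; i < 10; i++) {
            TextMessage message = session.createTextMessage("message " + i);
            producer.send(message);
         }
         session.commit();
      }
   }
}
```

With ON_DEMAND load balancing and redistribution-delay 0 as configured above, messages sent to any node should be redistributed to whichever node the receiver is connected to.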

    Reference

    Networks of Brokers in AMQ 7 Broker (Clustering)
    https://github.com/RedHatWorkshops/amqv7-workshop/tree/master/demos/broker/clustering
    https://github.com/RedHatWorkshops/amqv7-workshop/tree/master/demos/broker/high-availability


    Installing the AMQ7 Broker
    https://redhatworkshops.github.io/amqv7-workshop/40-master-slave.html


    Architecting messaging solutions with Apache ActiveMQ Artemis
    https://developers.redhat.com/blog/2020/01/10/architecting-messaging-solutions-with-apache-activemq-artemis


    Artemis Clustering (18)
    https://blog.csdn.net/guiliguiwang/article/details/82423853


    (9) Artemis Network Isolation (Split Brain)
    https://blog.csdn.net/guiliguiwang/article/details/82185987


    Artemis High Availability and Failover (19)
    https://blog.csdn.net/guiliguiwang/article/details/82426949


    Configuring Multi-Node Messaging Systems
    https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html-single/configuring_messaging/index#advanced_configuration


    Creating a broker in ActiveMQ Artemis
    https://medium.com/@joelicious/creating-a-broker-in-activemq-artemis-cb553e9ff9aa


    ActiveMQ Artemis cluster does not redistribute messages after one instance crash
    https://stackoverflow.com/questions/67644488/activemq-artemis-cluster-does-not-redistribute-messages-after-one-instance-crash

    posted on 2021-06-30 16:33 by paulwong, filed under: JMS
