How to Add a New Node to an Already Existing ClickHouse Cluster
April 12, 2024 | by techdbzone.com
Expanding your ClickHouse cluster by adding a new node can seem daunting, but with careful configuration you can seamlessly enhance your system’s scalability and resilience. Here’s a step-by-step guide on how to integrate a new node, clickhouse01-db-prod, into an existing ClickHouse cluster, in this case sng_prod_cluster.
Step 1: Update the Configuration
First, update the config.xml file to include the new node’s display name. This file typically holds the primary configuration for your ClickHouse instance.
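As a minimal sketch, this override can also live under /etc/clickhouse-server/config.d/ rather than in config.xml itself (the file name display-name.xml is illustrative, and the display_name value is just a label):

<clickhouse>
    <!-- Shown in the clickhouse-client prompt; any label works -->
    <display_name>clickhouse01-db-prod</display_name>
</clickhouse>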
Step 2: Create Macros Configuration
The macros.xml file allows ClickHouse to differentiate between shards and replicas within your cluster. Set up macros.xml for your new node as follows:
<clickhouse>
    <macros>
        <shard>01</shard>
        <replica>03</replica>
        <cluster>sng_prod_cluster</cluster>
    </macros>
</clickhouse>
Place this configuration in /etc/clickhouse-server/config.d/.
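Once the server has been restarted (Step 6), you can sanity-check that the macros were picked up by querying the system.macros table:

-- Should list shard=01, replica=03, cluster=sng_prod_cluster
SELECT macro, substitution FROM system.macros;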
Step 3: Define Remote Servers
Next, edit the remote-servers.xml file, which contains the details of all nodes in the cluster. Set the replace="true" attribute so that this file replaces, rather than merges with, any existing remote_servers definition. Add the new node under the appropriate shard:
<clickhouse>
    <remote_servers replace="true">
        <sng_prod_cluster>
            <secret>mysecretphrase</secret>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>clickhouse03-db-prod.sng.taxi.local</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>clickhouse04-db-prod.sng.taxi.local</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>clickhouse01-db-prod.sng.taxi.local</host>
                    <port>9000</port>
                </replica>
            </shard>
        </sng_prod_cluster>
    </remote_servers>
</clickhouse>
This configuration should also be placed in /etc/clickhouse-server/config.d/.
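Before moving on, it can be worth validating the file’s XML syntax; assuming xmllint is installed (any XML validator will do), a quick check looks like this:

# No output means the file is well-formed
xmllint --noout /etc/clickhouse-server/config.d/remote-servers.xml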
Step 4: Update Existing Nodes
It’s important to ensure that existing nodes in the cluster are aware of the new node. Modify the remote-servers.xml on nodes clickhouse03 and clickhouse04 to include the configuration for the new node:
<replica>
    <host>clickhouse01-db-prod.sng.taxi.local</host>
    <port>9000</port>
</replica>
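Recent ClickHouse versions typically re-read remote_servers from config.d without a full restart; if in doubt, you can force a reload and confirm the new replica is visible from an existing node (a quick sanity check, not a required step):

SYSTEM RELOAD CONFIG;

-- The new host should now appear alongside the existing replicas
SELECT host_name, replica_num
FROM system.clusters
WHERE cluster = 'sng_prod_cluster';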
Step 5: Configure ZooKeeper
For clusters that use ZooKeeper (or ClickHouse Keeper, which speaks the same protocol; port 9181 below is Keeper’s default) for coordination, create a use-keeper.xml file specifying the coordination nodes:
<clickhouse>
    <zookeeper>
        <node>
            <host>clickhouse-keeper1-db-prod.sng.taxi.local</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper2-db-prod.sng.taxi.local</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper3-db-prod.sng.taxi.local</host>
            <port>9181</port>
        </node>
    </zookeeper>
</clickhouse>
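After the restart in Step 6, you can verify that the new node actually reaches the coordination ensemble by listing the root znode; if coordination is misconfigured, this query errors out:

SELECT name FROM system.zookeeper WHERE path = '/';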
Step 6: Restart and Verify
Finally, restart the ClickHouse service on your new node and existing nodes to apply the changes:
systemctl restart clickhouse-server.service
After the services are up again, verify the cluster’s status with the following command:
clickhouse-client -u default --password='*****' -q "SELECT cluster, replica_num, host_name, host_address, port, is_local FROM system.clusters WHERE cluster = 'sng_prod_cluster'"
This command provides a snapshot of the cluster configuration, showing how each node is connected and functioning within the cluster.
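For an end-to-end check of replication itself, one option is to create a small ReplicatedMergeTree table across the cluster using the macros from Step 2 (the table name smoke_test is hypothetical):

-- Create a small replicated test table on every node in the cluster
CREATE TABLE default.smoke_test ON CLUSTER sng_prod_cluster
(
    id UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/smoke_test', '{replica}')
ORDER BY id;

-- Insert on any existing node...
INSERT INTO default.smoke_test VALUES (1);

-- ...then on clickhouse01-db-prod this should return 1 once replication catches up
SELECT count() FROM default.smoke_test;

-- Clean up when done
DROP TABLE default.smoke_test ON CLUSTER sng_prod_cluster SYNC;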
Adding a new node to your ClickHouse cluster enhances its performance and reliability. By following these detailed steps, you can ensure a smooth integration of the new node into your existing infrastructure.