Add a Remote Cluster
To build a multicluster deployment, you can add a remote cluster to your service mesh.
To give you clean instructions, this section starts right after you complete the
initial configuration. The new remote cluster, `CLUSTER_2` in this section, shares
the control plane of `CLUSTER_1`, and this guide assumes that `CLUSTER_1` and
`CLUSTER_2` reside on the same network.
Production systems use this configuration when all clusters within a region share a common control plane.
The following diagram shows a multicluster deployment with a primary cluster and a remote cluster:
Complete the initial configuration instructions before you continue.
Configure Trust
For `CLUSTER_2` to participate in cross-cluster load balancing with your
first cluster, in this case `CLUSTER_1`, establish trust between the clusters:
generate a Certificate Authority (CA) certificate for `CLUSTER_2` that is
signed by the common root CA.
Using the previously set environment variables, configure trust with the following steps:
- Go to the `${WORK_DIR}` folder with the following command:
$ cd ${WORK_DIR}
- Generate the intermediate CA files for `CLUSTER_2` with the following command:
$ make -f ${ISTIO}/tools/certs/Makefile ${CLUSTER_2}-cacerts-k8s
- To ensure that the Istio control plane and the secret share the same
namespace, create the `istio-system` namespace in `CLUSTER_2` with the following command:
$ kubectl create namespace istio-system --context=${CTX_2}
- Push the secret with the generated CA files to `CLUSTER_2` with the following command:
$ kubectl create secret generic cacerts --context=${CTX_2} \
-n istio-system \
--from-file=${WORK_DIR}/${CLUSTER_2}/ca-cert.pem \
--from-file=${WORK_DIR}/${CLUSTER_2}/ca-key.pem \
--from-file=${WORK_DIR}/${CLUSTER_2}/root-cert.pem \
--from-file=${WORK_DIR}/${CLUSTER_2}/cert-chain.pem
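Before you push the secret, you can sanity-check that the generated intermediate CA certificate really chains to the common root. The following sketch reproduces that check in a temporary directory with throwaway demo certificates; the file names `root-cert.pem` and `ca-cert.pem` mirror the generated ones, but all paths and subjects here are demo-only assumptions, and the snippet assumes `openssl` is installed:

```shell
# Demo only: build a throwaway root CA and an intermediate signed by it,
# then verify the chain the same way you would verify the generated files.
set -e
DIR=$(mktemp -d)

# Self-signed root CA (stands in for the common root CA).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-root" \
  -keyout "$DIR/root-key.pem" -out "$DIR/root-cert.pem" 2>/dev/null

# Intermediate CA key and signing request (stands in for CLUSTER_2's CA).
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=demo-intermediate" \
  -keyout "$DIR/ca-key.pem" -out "$DIR/ca.csr" 2>/dev/null

# Sign the intermediate certificate with the root.
openssl x509 -req -in "$DIR/ca.csr" -days 1 \
  -CA "$DIR/root-cert.pem" -CAkey "$DIR/root-key.pem" -CAcreateserial \
  -out "$DIR/ca-cert.pem" 2>/dev/null

# The actual check: the intermediate must verify against the root.
openssl verify -CAfile "$DIR/root-cert.pem" "$DIR/ca-cert.pem"
```

Against the real files, the equivalent check is `openssl verify -CAfile ${WORK_DIR}/${CLUSTER_2}/root-cert.pem ${WORK_DIR}/${CLUSTER_2}/ca-cert.pem`, which prints `OK` when the chain is intact.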
Congratulations!
You configured trust in `CLUSTER_2`, enabling workloads in different clusters
of your multicluster mesh to trust each other.
Next, deploy an Istio control plane on `CLUSTER_2`.
Deploy Istio
Next, deploy Istio in `CLUSTER_2` with the discovery address pointing at the
ingress gateway of `CLUSTER_1`. Your initial configuration enabled mesh
expansion on `CLUSTER_1`, so `CLUSTER_2` can access the discovery server in
`CLUSTER_1` through the ingress gateway.
The new remote cluster requires the following configurations:
Configuration field | Description | Value |
---|---|---|
clusterName | Specifies a human-readable cluster name. | ${CLUSTER_2} |
network | Specifies a network ID as an arbitrary string. Clusters on the same network must share the same network ID; because this guide assumes one network, both clusters use the same value. | ${NETWORK_1} |
meshID | Specifies a mesh ID as an arbitrary string. All clusters in your mesh share the same mesh ID. | ${MESH} |
remotePilotAddress | Specifies the IP address of the Istio ingress gateway of `CLUSTER_1`. | ${DISCOVERY_ADDRESS} |
Using the previously set environment variables, deploy Istio in `CLUSTER_2` with the following steps:
- Set the value of the `DISCOVERY_ADDRESS` environment variable to the IP address of the Istio ingress gateway of `CLUSTER_1` with the following command:
$ export DISCOVERY_ADDRESS=$(kubectl \
    --context=${CTX_1} \
    -n istio-system get svc istio-ingressgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
- To pass configuration values to the Istio operator for installation, you define a custom resource (CR). Define and save the `install.yaml` CR with the following command:
$ cat <<EOF > ${WORK_DIR}/${CLUSTER_2}/install.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: ${MESH}
      multiCluster:
        clusterName: ${CLUSTER_2}
      network: ${NETWORK_1}
      # Access the control plane discovery server via the ingress
      remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF
- Install Istio on `CLUSTER_2` with the following command:
$ istioctl --context=${CTX_2} manifest apply -f \
    ${WORK_DIR}/${CLUSTER_2}/install.yaml
- Verify that the control plane of `CLUSTER_2` is ready with the following command:
$ kubectl --context=${CTX_2} -n istio-system get pod
NAME                         READY   STATUS    RESTARTS   AGE
istiod-f756bbfc4-thkmk       1/1     Running   0          136m
prometheus-b54c6f66b-q8hbt   2/2     Running   0          136m
After the status of all pods is `Running`, you can continue configuring your deployment.
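If `CLUSTER_2` cannot reach the control plane, one common cause is an empty or malformed `DISCOVERY_ADDRESS`, for example when the load balancer of the ingress gateway has not yet been assigned an external IP at the time of the export. The following sketch checks the value's format before you generate `install.yaml`; the hardcoded sample address is an assumption so the snippet runs standalone:

```shell
# Assumption: in the real flow, DISCOVERY_ADDRESS comes from the earlier
# kubectl export; a sample value is used here so the check runs standalone.
DISCOVERY_ADDRESS="203.0.113.10"

# A simple IPv4 shape check; an empty value also fails this test.
if printf '%s' "${DISCOVERY_ADDRESS}" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'; then
  echo "DISCOVERY_ADDRESS looks like an IPv4 address: ${DISCOVERY_ADDRESS}"
else
  echo "DISCOVERY_ADDRESS is empty or not an IPv4 address; check the ingress gateway service" >&2
fi
```

If the check fails, re-run the `kubectl get svc istio-ingressgateway` export once the load balancer reports an external IP, then regenerate `install.yaml`.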
Configure Endpoint Discovery
To enable cross-cluster load balancing in your mesh, configure endpoint discovery. Endpoint discovery requires clusters to share secrets with each other: the shared secret provides the trust each cluster in the mesh needs to access the API server of the other clusters directly.
Using the environment variables that you set previously, configure endpoint discovery with the following steps:
- Share the secret of `CLUSTER_2` with `CLUSTER_1` with the following command:
$ istioctl x create-remote-secret \
    --context=${CTX_2} \
    --name=${CLUSTER_2} | \
    kubectl apply -f - --context=${CTX_1}
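The `istioctl x create-remote-secret` command emits a Kubernetes Secret manifest that the pipe then applies to `CLUSTER_1`. As a rough sketch of its shape (illustrative field values, not verbatim `istioctl` output), the secret lives in `istio-system`, carries an `istio/multiCluster` label so the control plane discovers it, and stores a kubeconfig for `CLUSTER_2` under a key named after the cluster:

```yaml
# Illustrative sketch, not verbatim istioctl output: a remote secret is a
# regular Kubernetes Secret in istio-system whose data holds a kubeconfig
# for reaching the remote cluster's API server.
apiVersion: v1
kind: Secret
metadata:
  name: istio-remote-secret-cluster2     # derived from --name
  namespace: istio-system
  labels:
    istio/multiCluster: "true"
  annotations:
    networking.istio.io/cluster: cluster2
stringData:
  cluster2: |                            # key matches the cluster name
    # kubeconfig for CLUSTER_2's API server (contents elided in this sketch)
type: Opaque
```

Assuming the label shown above, you can list the applied remote secrets in `CLUSTER_1` with `kubectl get secret -n istio-system -l istio/multiCluster=true --context=${CTX_1}`.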
Congratulations!
You successfully added a remote cluster to your mesh.
Now, you can verify that your newly deployed cluster works as intended.
Next, continue to add clusters until you complete your deployment.
If you completed your deployment, the following sections provide possible next steps:
To configure additional Istio features, go to our Tasks section.
To operate your service mesh, go to our Operations section.
To deploy example applications, go to our Examples section.
To troubleshoot your service mesh, go to our Common problems and Diagnostic tools sections.