# Multi-Cluster
An ambient mesh can be expanded to form a single mesh across multiple Kubernetes clusters.
## Pre-requisites
A number of steps must be taken before an ambient mesh can be linked to other clusters. These changes can all safely be made to individual installations. Following these steps when you install an ambient mesh, even if you are not yet sure you will need to link multiple clusters, makes adopting a multi-cluster mesh easier in the future.
### Shared root of trust
Each cluster in the mesh must have a shared root of trust. This can be achieved by providing a root certificate signed by a corporate CA, or a custom root certificate created for this purpose. That root certificate then signs a unique intermediate CA certificate for each cluster.
Read the Istio documentation on plugging in CA certificates to learn how to provide these certificates when installing a cluster.
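For example, here is a minimal sketch using the self-signed CA Makefile shipped in the `tools/certs` directory of an Istio release, with `cluster1` as a placeholder cluster name. The `cacerts` secret must exist before istiod is installed:

```shell
# Create a root CA, then an intermediate CA for this cluster
$ mkdir -p certs && cd certs
$ make -f ../tools/certs/Makefile.selfsigned.mk root-ca
$ make -f ../tools/certs/Makefile.selfsigned.mk cluster1-cacerts

# Provide the intermediate CA to Istio as the 'cacerts' secret
$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
    --from-file=cluster1/ca-cert.pem \
    --from-file=cluster1/ca-key.pem \
    --from-file=cluster1/root-cert.pem \
    --from-file=cluster1/cert-chain.pem
```

Repeat the `<cluster>-cacerts` target and the secret creation in each cluster, giving every cluster its own intermediate signed by the same root.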
### Installation options
A few installation options must be customized in order to link clusters.
These should be provided in a file passed in with `--values` when installing Istio with Helm.
During the installation of the `istiod` chart:

```yaml
meshConfig:
  # Optional: giving each cluster a unique trust domain allows writing policies about a specific cluster.
  # Without this set, AuthorizationPolicies cannot distinguish which cluster a request is from.
  trustDomain: "my-cluster.local"
global:
  # Identifies the cluster by a name. This is strongly recommended to be set to a unique value for each cluster.
  multiCluster:
    clusterName: "my-cluster"
  # The name of the 'network' for the cluster. It is recommended to keep this the same as clusterName.
  network: "my-cluster"
env:
  # Enables assigning multi-cluster services an IP address
  PILOT_ENABLE_IP_AUTOALLOCATE: "true"
  # Required if you have distinct trust domains per-cluster
  PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true"
# Required to enable multi-cluster support
platforms:
  peering:
    enabled: true
```
During the installation of the `ztunnel` chart:

```yaml
env:
  # Required if you have distinct trust domains per-cluster
  SKIP_VALIDATE_TRUST_DOMAIN: "true"
# Must match the setting during Istio installation
network: "my-cluster"
```
During the installation of the `istio-cni` chart:

```yaml
ambient:
  # Enables DNS capture, required so that multi-cluster service hostnames resolve to their assigned IP addresses
  dnsCapture: true
```
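Putting these together, the installation of each cluster might look like the following sketch. It assumes the official Istio Helm charts (`istio/base`, `istio/istiod`, `istio/cni`, `istio/ztunnel`) with the ambient profile, and that the values above were saved to `istiod-values.yaml`, `cni-values.yaml`, and `ztunnel-values.yaml`:

```shell
$ helm install istio-base istio/base -n istio-system --create-namespace
$ helm install istiod istio/istiod -n istio-system --set profile=ambient --values istiod-values.yaml
$ helm install istio-cni istio/cni -n istio-system --set profile=ambient --values cni-values.yaml
$ helm install ztunnel istio/ztunnel -n istio-system --values ztunnel-values.yaml
```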
After installing, ensure the `istio-system` namespace is labeled with the network name chosen above:

```shell
$ kubectl label namespace istio-system topology.istio.io/network=my-cluster
```
## Deploy an East-West Gateway
While an ingress gateway serves traffic from clients outside the mesh, an east-west gateway facilitates traffic between clusters joined in the same mesh. Before clusters can be linked together, a gateway must be deployed in each cluster.
This can be done with `istioctl`:

```shell
$ istioctl multicluster expose --namespace istio-gateways
```
This gateway can also be deployed manually, or with your regular deployment tooling. `istioctl multicluster expose --namespace istio-gateways --generate` will generate a YAML manifest that can then be applied to the cluster directly.
It is recommended not to change any values except the `name`, `namespace`, and `topology.istio.io/network` label, which must match the values set during installation.
```shell
$ istioctl multicluster expose --namespace istio-gateways --generate
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  labels:
    istio.io/expose-istiod: "15012"
    topology.istio.io/network: "my-network"
  name: istio-eastwest
  namespace: istio-gateways
spec:
  gatewayClassName: istio-eastwest
  listeners:
  - name: cross-network
    port: 15008
    protocol: HBONE
    tls:
      mode: Passthrough
  - name: xds-tls
    port: 15012
    protocol: TLS
    tls:
      mode: Passthrough
```
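Once the gateway is applied, you can confirm it has been accepted and assigned an address reachable from the other clusters; the `PROGRAMMED` condition shown here is standard Kubernetes Gateway API status:

```shell
$ kubectl get gateway istio-eastwest -n istio-gateways
```

The gateway should report `PROGRAMMED: True` and an `ADDRESS` that the other clusters can reach.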
## Linking clusters
Linking clusters enables cross-cluster service discovery and allows traffic to traverse cluster boundaries.
Before linking clusters, you should ensure each cluster you want to configure is set in your `kubeconfig` file.
You can view the list of clusters currently configured with `kubectl config get-contexts`.
If you have multiple `kubeconfig` files you need to join into one, you can run `KUBECONFIG=kubeconfig1.yaml:kubeconfig2.yaml:kubeconfig3.yaml kubectl config view --flatten` to get a merged file.
Linking clusters is as simple as:

```shell
$ istioctl multicluster link --contexts=context1,context2,context3 --namespace istio-gateways
```

This command will bi-directionally link the three configured clusters to one another. Now you are ready to send traffic across clusters!
A few alternative operations are available.

The command below will generate an asymmetrical topology, where cluster `alpha` can reach cluster `beta`, but `beta` cannot reach `alpha`:

```shell
$ istioctl multicluster link --from alpha --to beta
```
As above, the `--generate` flag can be used to output YAML that can be applied to clusters directly:

```shell
$ istioctl multicluster link --generate --context target-cluster | kubectl apply -f - --context=source-cluster
$ istioctl multicluster link --generate --context source-cluster | kubectl apply -f - --context=target-cluster
```
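To verify a link, you can inspect the gateways in each cluster; linking creates an additional Gateway resource representing each remote peer alongside the east-west gateway deployed earlier (the exact names vary with the clusters being linked):

```shell
$ kubectl get gateways -n istio-gateways --context=context1
```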
## Exposing a Service across clusters
When you want a Service to be reachable from other clusters, all you need to do is label the service:
```shell
$ kubectl label service hello-world solo.io/service-scope=global
```
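The label can also be declared in the Service manifest itself. Below is a minimal sketch, assuming a `hello-world` Service in the `application` namespace; the selector and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: application
  labels:
    # Marks this Service to be exposed to the other clusters in the mesh
    solo.io/service-scope: global
spec:
  selector:
    app: hello-world
  ports:
  - name: http
    port: 80
    targetPort: 8080
```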
Exposing a service doesn’t change the semantics of the existing service. Rather, it creates a new service, derived from the original, whose endpoints are the union of that service’s endpoints across all clusters.
Each multi-cluster service gets a hostname of the form `<name>.<namespace>.mesh.internal` (mirroring the `<name>.<namespace>.svc.cluster.local` naming scheme of standard Services).
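For example, assuming the `hello-world` Service sketched above serves HTTP on port 80, a client pod in any linked cluster could reach it at the multi-cluster hostname:

```shell
$ curl http://hello-world.application.mesh.internal
```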
Multi-cluster ambient mesh adheres to the principle of namespace sameness: a Service named `hello-world` in the `application` namespace in Cluster 1 is equivalent to the `hello-world` Service in the `application` namespace in Cluster 2.
### Details
A global service is created if any cluster exposes it; its endpoints are the union of the endpoints from each cluster that exposes the service.
The table below shows the complete set of possible states:
| Local Cluster Service | Remote Cluster Service | Result |
|---|---|---|
| Exists, no `service-scope` | Exists, no `service-scope` | No global service created |
| None | Exists, `service-scope=global` | Global service exists; remote endpoints |
| Exists, no `service-scope` | Exists, `service-scope=global` | Global service exists; remote endpoints |
| Exists, `service-scope=global` | Exists, `service-scope=global` | Global service exists; local and remote endpoints |
| Exists, `service-scope=global` | None | Global service exists; local endpoints |
| Exists, `service-scope=global` | Exists, but without `service-scope=global` | Global service exists; local endpoints |
### Traffic control
By default, traffic will be configured to remain in the same network when possible. If there are no healthy endpoints available locally, traffic will be sent to remote networks. An endpoint is considered “healthy” if the `Pod` is “ready”.
Traffic can be controlled further with the `networking.istio.io/traffic-distribution` annotation on a `Service`:

- `PreferClose` can be set to prefer traffic as close as possible, taking into account zone, region, and network. Note this can also be set as `.spec.trafficDistribution`, as this is a standard Kubernetes option.
- `PreferNetwork` can be set to prefer traffic within the same network. This is the default for global services.
- `PreferRegion` can be set to prefer traffic as close as possible, taking into account region and network.
- `Any` can be set to consider all endpoints equally.
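For example, to consider endpoints in all clusters equally rather than preferring the local network, you could annotate the example Service from earlier (the Service name is illustrative):

```shell
$ kubectl annotate service hello-world networking.istio.io/traffic-distribution=Any
```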