# Istio Namespace Isolation
I’m working on a project that requires a CockroachDB instance in each of multiple namespaces (prod/uat/dev), in an Istio-enabled Kubernetes cluster.
There are currently some pretty significant drawbacks to using Istio with “headless TCP” services, one of which is that you can only have a single instance of a service on a specific TCP port within the entire service mesh. So, no multiple CockroachDB instances on port 26257.
The problem hinges on the fact that Istio can only intercept traffic to a TCP-based service by recognizing its port number.
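To make the clash concrete, here’s a hypothetical pair of headless CockroachDB Services (the names and labels are illustrative, not from my actual manifests) which Istio would conflate, since both expose TCP port 26257:

```yaml
# Hypothetical headless Service in the "prod" namespace
apiVersion: v1
kind: Service
metadata:
  name: cockroachdb
  namespace: prod
spec:
  clusterIP: None          # headless: no virtual IP, so Istio keys off the port
  ports:
  - name: tcp-cockroach
    port: 26257
  selector:
    app: cockroachdb
---
# An identical Service in "dev" -- from Istio's point of view,
# traffic to port 26257 is ambiguous between the two
apiVersion: v1
kind: Service
metadata:
  name: cockroachdb
  namespace: dev
spec:
  clusterIP: None
  ports:
  - name: tcp-cockroach
    port: 26257
  selector:
    app: cockroachdb
```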
A useful workaround to this issue, however, is Istio’s namespace isolation. Namespace isolation lets you configure the Envoy sidecar proxies to “see” only a subset of the services on the mesh, scoped by namespace.
## How would you use Istio namespace isolation?
My project is an easy example. Say you have three namespaces (dev/uat/prod), each of which should talk to its own CockroachDB instance (in its own namespace), but be blissfully unaware of any CockroachDB instances in other namespaces.
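Assuming automatic sidecar injection (the label below is Istio’s standard injection label; the namespace names match my scenario), each namespace would be labelled so that its pods get an istio-proxy sidecar:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    istio-injection: enabled   # Istio's automatic sidecar-injection label
---
apiVersion: v1
kind: Namespace
metadata:
  name: uat
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    istio-injection: enabled
```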
What you want is a “Sidecar” custom resource, which limits the scope of the service mesh config deployed to your sidecars. For example, the following CR restricts istio-proxy sidecars to being “aware” of only other services in the same namespace, or in the istio-system namespace:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: istio-system
spec:
  egress:
  - hosts:
    - "./*"
    - "istio-system/*"
```
If this Sidecar is applied within the mesh’s root namespace (istio-system by default), and it doesn’t explicitly match any source workloads, then it applies as the default to all namespaces. This is a simple way to create a default namespace isolation policy.
Or so I thought…
Turns out there’s a wrinkle, that’s outlined in the docs:
> NOTE 2: A Sidecar configuration in the MeshConfig root namespace will be applied by default to all namespaces without a Sidecar configuration. This global default Sidecar configuration should not have any workloadSelector.
The subtle implication here is that if you:

a) set up a default Sidecar resource in the istio-system namespace, and
b) create a Sidecar resource in your namespace with a workloadSelector which doesn’t match all pods (for example, permitting a pod named “logmaster” to connect to pod “logcatcher” in namespace “logmonkey”),

then any pods not matched by that workloadSelector will have NO Sidecar resource applied, and thus no namespace isolation.
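To make the pitfall concrete, here’s an illustrative selector-scoped Sidecar (names borrowed from the logmaster/logmonkey example above; the label key is an assumption). If this were the only Sidecar in “prod”, any pod without the matching label would fall outside it, and get full mesh visibility:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: logmaster-only
  namespace: prod
spec:
  workloadSelector:
    labels:
      app: logmaster          # only "logmaster" pods are matched...
  egress:
  - hosts:
    - "./*"
    - "logmonkey/*"           # ...and permitted to reach the logmonkey namespace
```

Every other pod in “prod” has no Sidecar applied to it at all, so it inherits the default unrestricted egress.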
If you want namespace isolation, either specify a default Sidecar resource in your MeshConfig root namespace (default istio-system) and don’t create any more Sidecar resources in your target namespace, OR create a default Sidecar resource in each namespace, followed by more workload-specific Sidecar resources.
In my case, since I legitimately need some pods in each prod/uat/dev namespace to be able to communicate with select services in different namespaces, I’ve created a default Sidecar resource in each namespace, which restricts all pods to communicating only within their own namespace:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: istio-config
  namespace: prod
spec:
  egress:
  - hosts:
    - "./*"
```
Having established this per-namespace default, I can now apply more lenient Sidecar resources to only a subset of my pods, as in the following example:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: permit-logs-to-logmonkey
spec:
  egress:
  - hosts:
    - "./*"
    - "logmonkey/logcatcher.logmonkey.svc.cluster.local"
  workloadSelector:
    labels:
      app.kubernetes.io/name: batman-app
```
Now I can happily spin up multiple headless TCP services in separate namespaces, without having Istio arbitrarily forward the traffic to the first-available service listening on a particular port :)
Update (29 Apr 2020)
Got any questions / suggestions? I’m an Istio-n00b too, hit me up!