[Feature] Enable different connectivity modes among clusters (a.k.a. Security Phase 1) #1879
Labels
deprecated: This issue or pull request refers to a deprecated version of Liqo
feat: Adds a new feature to the codebase
Currently, Liqo enables full connectivity from pods in the first cluster to pods in the second cluster. This is not always desirable, as (optionally) some connectivity restrictions should be in place to protect the non-offloaded pods running in the second cluster (and vice versa).
This feature request aims at providing the first step in that direction.
Overall architecture
This first phase of security aims at providing a set of predefined behaviours, which can be easily turned on/off in Liqo.
More granular security mechanisms are left for future phases.
This feature request proposes three operating modes.
Terminology
Pictures and tables in this document will use the following terminology:
Furthermore, yellow cells in the table highlight the differences compared to the default behaviour (a.k.a. full pod-to-pod connectivity).
Full pod-to-pod connectivity (default mode)
This represents the behaviour of the current Liqo: there are no restrictions in terms of connectivity, and all pods of the first cluster can connect to all pods of the second cluster (and vice versa). This means that, focusing on the first cluster (Cluster1), its "home" pods can communicate with all pods of the second cluster, irrespective of whether they are offloaded (hence, owned by Cluster1) or not (hence, owned by Cluster2).
Possible use-case: both clusters are under control of the same organization, and Liqo is used simply to "cross the boundaries" among distinct clusters (e.g., because clusters are running in different geographical regions).
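For reference, this default mode corresponds to what Kubernetes does when no NetworkPolicy selects a pod: all ingress traffic is allowed. A minimal sketch of the equivalent explicit policy, built with the upstream k8s.io/api/networking/v1 Go types (the package, function, and policy names below are illustrative, not part of Liqo):

```go
package policies

import (
	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// allowAllIngressPolicy makes the default behaviour explicit: the empty pod
// selector matches every pod in the namespace, and a single empty ingress
// rule accepts traffic from any source (local or remote).
func allowAllIngressPolicy(namespace string) *netv1.NetworkPolicy {
	return &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-all-ingress", Namespace: namespace},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{},
			Ingress:     []netv1.NetworkPolicyIngressRule{{}},
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeIngress},
		},
	}
}
```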
Intra-cluster traffic segregation
Full pod-to-pod connectivity is provided only within each real cluster; for traffic between the two clusters, starting a connection (and receiving the corresponding response traffic) is allowed only for:
local pods, which can reach on the remote cluster only the pods offloaded by their own cluster;
remote pods, which can reach the endpoints of a local service reflected on the remote cluster.
These two behaviours, enabled by this connectivity mode, make it possible to set up different connectivity scenarios, such as the one presented in the following picture.
Possible use-case: TODO
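A minimal sketch of how the "local pods reach only their offloaded pods" half of this mode could be enforced with standard NetworkPolicies on the provider cluster, assuming cross-cluster traffic can be identified by the CIDR through which the peer cluster's pods are reached (the 10.70.0.0/16 value and the names below are illustrative assumptions, not existing Liqo parameters):

```go
package policies

import (
	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// protectNativePods shields the native (non-offloaded) pods of a namespace in
// Cluster2 from connections started by Cluster1: ingress is allowed from any
// source except the CIDR through which Cluster1's pods are reached, so only
// the namespaces hosting pods offloaded by Cluster1 remain reachable from it.
func protectNativePods(namespace, peerPodCIDR string) *netv1.NetworkPolicy {
	return &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-peer-cluster-pods", Namespace: namespace},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{}, // every native pod in the namespace
			Ingress: []netv1.NetworkPolicyIngressRule{{
				From: []netv1.NetworkPolicyPeer{{
					IPBlock: &netv1.IPBlock{
						CIDR:   "0.0.0.0/0",
						Except: []string{peerPodCIDR}, // e.g. "10.70.0.0/16"
					},
				}},
			}},
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeIngress},
		},
	}
}
```

The symmetric half (remote pods reaching only the endpoints of reflected local services) would need analogous rules on the egress side, or enforcement at the gateway; the sketch covers only ingress.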
Protected borders
Full pod-to-pod connectivity is provided within the virtual cluster: pods and services falling under the same ownership (e.g., pods running on Cluster1, and pods offloaded by Cluster1 in Cluster2) have full connectivity, as if Cluster1 simply became "bigger". However, such pods are protected from the pods owned by Cluster2 (and, obviously, running in Cluster2).
In this model, pods from Cluster1 (either native or offloaded) can also contact native services running in Cluster2 (i.e., owned by Cluster2), through properly configured "holes". This enables a fast communication path between the pods of one cluster (the virtual cluster set up by Cluster1) and the services of the second cluster (Cluster2), e.g., for fast communication between applications under the control of different owners.
The following picture shows the possible communication patterns; orange cells in the table highlight the differences compared to the default behaviour (a.k.a. full pod-to-pod connectivity).
HOLE: Px must be able to contact Py/Sy only through an explicit NetworkPolicy configuration. In that case, the corresponding responses must be allowed back (relying on connection tracking).
Possible use-case: creation of an extended cluster that spans multiple physical clusters, while enforcing strong security boundaries between the pods and services under the control of Cluster1 and the ones under the control of Cluster2.
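A minimal sketch of what such a hole could look like as a standard NetworkPolicy applied on Cluster2 next to the protected-borders defaults (the "app: sy" label, the port, and the peer CIDR are illustrative assumptions):

```go
package policies

import (
	corev1 "k8s.io/api/core/v1"
	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// holeForPeerCluster lets the pods reached through the peer cluster's CIDR
// contact the backends of one native Cluster2 service (selected here by a
// hypothetical "app: sy" label) on a single TCP port. Response traffic flows
// back thanks to connection tracking, so no symmetric rule is needed.
func holeForPeerCluster(namespace, peerPodCIDR string, servicePort int) *netv1.NetworkPolicy {
	proto := corev1.ProtocolTCP
	port := intstr.FromInt(servicePort)
	return &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-peer-cluster-to-sy", Namespace: namespace},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"app": "sy"}},
			Ingress: []netv1.NetworkPolicyIngressRule{{
				From: []netv1.NetworkPolicyPeer{{
					IPBlock: &netv1.IPBlock{CIDR: peerPodCIDR}, // e.g. "10.70.0.0/16"
				}},
				Ports: []netv1.NetworkPolicyPort{{Protocol: &proto, Port: &port}},
			}},
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeIngress},
		},
	}
}
```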
Required steps
To be defined