
Strange behavior when generating various policies #687

r0binak opened this issue Mar 14, 2023 · 0 comments
r0binak commented Mar 14, 2023

My environment:

  • Kubernetes 1.23.17
  • containerd 1.6.18
  • Cilium 1.13.0
  • KubeArmor 0.9
  • Discovery Engine 0.8

I used Google's Online Boutique microservices application to see how the Discovery Engine generates policies. The result surprised me.

When I first ran the Discovery Engine with `karmor discover -n boutique` to get KubeArmor policies, the result was very poor: the policies were incomplete and did not cover the main processes running in the containers. I had fully modeled and simulated the workload of all the microservices, so network and load activity was present.

Example:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: autopol-system-2133208058
  namespace: boutique
spec:
  action: Allow
  file:
    matchDirectories:
    - dir: /
      recursive: true
    - dir: /lib/x86_64-linux-gnu/
      recursive: true
  process:
    matchPaths:
    - path: /bin/grpc_health_probe
  selector:
    matchLabels:
      app: recommendationservice
  severity: 1
```
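To illustrate why the missing process path matters: with `action: Allow`, KubeArmor treats the listed paths as a whitelist, so the container's main process (here the Python interpreter of recommendationservice) would be blocked because it is not in `matchPaths`. A minimal sketch of that whitelist semantics (a simplified model, not the actual KubeArmor enforcement code):

```python
# Simplified model of an Allow-action KubeArmor policy: only process paths
# listed under process.matchPaths are permitted; everything else is denied.
def process_allowed(spec: dict, path: str) -> bool:
    allowed = {p["path"] for p in spec.get("process", {}).get("matchPaths", [])}
    return path in allowed

spec = {
    "action": "Allow",
    "process": {"matchPaths": [{"path": "/bin/grpc_health_probe"}]},
}
print(process_allowed(spec, "/bin/grpc_health_probe"))     # True
print(process_allowed(spec, "/usr/local/bin/python3.10"))  # False: main process blocked
```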

Then I decided to check how network policies are generated. It was especially interesting to see what the CIDR, L7, and FQDN policies would look like, but the results were not good.

Example:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: autopol-egress-wcpetxnypaieyyp
  namespace: boutique
spec:
  egress:
  - ports:
    - protocol: UDP
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Egress
```

Cilium Network Policy:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: autopol-egress-wcpetxnypaieyyp
  namespace: boutique
spec:
  egress:
  - toCIDR:
    - 0.0.0.0/32
    toPorts:
    - ports:
      - port: "0"
        protocol: UDP
  endpointSelector:
    matchLabels:
      app: frontend
```
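The egress rule above is effectively a placeholder: `0.0.0.0/32` matches only the single unspecified address `0.0.0.0`, and port `"0"` is not a usable destination port. A small sketch (a hypothetical checker, using only the standard-library `ipaddress` module) that flags such degenerate entries in a generated spec:

```python
import ipaddress

def degenerate_egress_rules(spec: dict) -> list[str]:
    """Return reasons why egress entries in a CiliumNetworkPolicy spec
    look like placeholders rather than real allow rules."""
    problems = []
    for rule in spec.get("egress", []):
        for cidr in rule.get("toCIDR", []):
            net = ipaddress.ip_network(cidr)
            if net.network_address.is_unspecified:
                problems.append(f"CIDR {cidr} covers only the unspecified address")
        for to_port in rule.get("toPorts", []):
            for port in to_port.get("ports", []):
                if port.get("port") == "0":
                    problems.append("port 0 is not a usable destination port")
    return problems

spec = {
    "egress": [{
        "toCIDR": ["0.0.0.0/32"],
        "toPorts": [{"ports": [{"port": "0", "protocol": "UDP"}]}],
    }],
}
for reason in degenerate_egress_rules(spec):
    print(reason)
```

Run against the generated spec above, this reports both the `0.0.0.0/32` CIDR and the port `"0"` entry, i.e. the rule allows no real traffic.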

I did not manage to generate FQDN policies at all, and the CIDR-based policies are very poorly formed. I tried curl from the container to other containers, and also to external services on the public Internet, such as Google. After some time, I reinstalled KubeArmor and the Discovery Engine, and the quality of the KubeArmorPolicy output improved noticeably, but the problems with NetworkPolicy remained the same.

KubeArmorPolicy for the same microservice after the reinstall:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: autopol-system-2133208058
  namespace: boutique
spec:
  action: Allow
  file:
    matchDirectories:
    - dir: /
      recursive: true
    - dir: /lib/x86_64-linux-gnu/
      recursive: true
  process:
    matchPaths:
    - path: /bin/grpc_health_probe
    - path: /usr/local/bin/python3.10
  selector:
    matchLabels:
      app: recommendationservice
  severity: 1
```