The OpenStack discoverer provides service discovery for an OpenStack cluster. It does this by monitoring all configured Load Balancer as a Service (LBaaS) objects as well as their corresponding Members. These are synchronized to the team namespace as Services and Endpoints, with the namespace name taken from the tenant name in OpenStack.
The discoverer polls the OpenStack API on a customizable interval and updates the Gimbal cluster accordingly.
The discoverer is responsible for monitoring a single cluster at a time; to watch multiple clusters, deploy multiple discoverers.
The OpenStack discoverer adds a label to discovered services and endpoints that contains the OpenStack Load Balancer name. Given that Kubernetes has specific requirements around label values, the discoverer will do the following when necessary:
- Replace any disallowed character with a dash (`-`).
- Prepend `lb` to the name when it does not begin with an alphanumeric character.
- Append `lb` to the name when it does not end with an alphanumeric character.
Examples:
Load Balancer name | Label value |
---|---|
`bar:8080` | `bar-8080` |
`foo!@#$%^&*()_+bar` | `foo----------_-bar` |
`foo-bar_BAZ.123` | `foo-bar_BAZ.123` (no change) |
`foo!` | `foo-lb` |
`!foo` | `lb-foo` |
`!foo!` | `lb-foo-lb` |
`foo-` | `foo-lb` |
`foo_` | `foo_lb` |
`foo.` | `foo.lb` |
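The sanitization rules above can be sketched in shell. This is only an illustration of the rules, not the discoverer's actual implementation, and the exact set of allowed characters (`A-Z`, `a-z`, `0-9`, `.`, `_`, `-`) is an assumption inferred from the examples in the table:

```shell
# Sketch of the label-value sanitization rules.
# Assumed allowed character set: [A-Za-z0-9._-]
sanitize() {
  # Replace any disallowed character with a dash.
  name=$(printf '%s' "$1" | sed 's/[^A-Za-z0-9._-]/-/g')
  # Prepend "lb" when the name does not begin with an alphanumeric character.
  case "$name" in [A-Za-z0-9]*) ;; *) name="lb$name" ;; esac
  # Append "lb" when the name does not end with an alphanumeric character.
  case "$name" in *[A-Za-z0-9]) ;; *) name="${name}lb" ;; esac
  printf '%s\n' "$name"
}

sanitize 'bar:8080'   # bar-8080
sanitize '!foo!'      # lb-foo-lb
```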
See the naming conventions documentation for additional information around handling names.
The following sections outline the technical implementations of the discoverer.
See the design document for additional details.
Command-line flags are available to customize the discoverer. Most have defaults, but some must be configured by the cluster administrator:
flag | default | description |
---|---|---|
version | false | Show version and build information, then quit |
num-threads | 2 | Specify number of threads to use when processing queue items |
gimbal-kubecfg-file | "" | Location of kubecfg file for access to Kubernetes cluster hosting Gimbal |
backend-name | "" | Name of cluster scraping for services & endpoints (Cannot start or end with a hyphen and must be lowercase alpha-numeric) |
debug | false | Enable debug logging |
reconciliation-period | 30s | The interval of time between reconciliation loop runs |
http-client-timeout | 5s | The HTTP client request timeout |
openstack-certificate-authority | "" | Path to cert file of the OpenStack API certificate authority |
prometheus-listen-address | 8080 | The address to listen on for Prometheus HTTP requests |
gimbal-client-qps | 5 | The maximum queries per second (QPS) that can be performed on the Gimbal Kubernetes API server |
gimbal-client-burst | 10 | The maximum number of queries that can be performed on the Gimbal Kubernetes API server during a burst |
openstack-project-watchlist | "" | List of projects to watch for reconciliation. If empty, load balancers across all projects are reconciled. Provide as a comma-separated list, e.g. `--openstack-project-watchlist=project1,project2,...` |
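Putting the flags together, a typical invocation might look like the following. The binary name `openstack-discoverer` and the kubeconfig path are assumptions for illustration:

```shell
# Hypothetical invocation; binary name and paths are illustrative.
openstack-discoverer \
  --backend-name=openstack \
  --gimbal-kubecfg-file=/etc/gimbal/kubeconfig \
  --reconciliation-period=30s \
  --openstack-certificate-authority=/etc/gimbal/ca.pem \
  --openstack-project-watchlist=project1,project2
```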
The discoverer requires the following credentials to access the backend OpenStack cluster. Similar to the OpenStack CLI, the credentials can be provided using environment variables:
Credential | Environment Variable | Description |
---|---|---|
Username | OS_USERNAME | The OpenStack username |
Password | OS_PASSWORD | The password of the OpenStack user |
Authentication URL | OS_AUTH_URL | The URL of the endpoint to use for authentication |
Tenant Name | OS_TENANT_NAME | The OpenStack user's tenant name |
User Domain Name | OS_USER_DOMAIN_NAME | The OpenStack user's domain name |
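For example, the environment can be set up as follows before starting the discoverer. The values shown here are assumptions mirroring the secret example below:

```shell
# Example credential values; all values are illustrative.
export OS_USERNAME=admin
export OS_PASSWORD=abc123
export OS_AUTH_URL=https://api.openstack:5000/
export OS_TENANT_NAME=gimbal
export OS_USER_DOMAIN_NAME=Default
```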
If you need to provide a CA certificate to establish a secure connection with the authentication endpoint, use the `--openstack-certificate-authority` flag to provide the path to a CA certificate.
The following example creates a Kubernetes Secret that the OpenStack discoverer consumes to obtain the credentials and other information needed to discover services and endpoints:
```shell
kubectl create secret generic remote-discover-openstack \
  --from-file=certificate-authority-data=./ca.pem \
  --from-literal=backend-name=openstack \
  --from-literal=username=admin \
  --from-literal=password=abc123 \
  --from-literal=auth-url=https://api.openstack:5000/ \
  --from-literal=tenant-name=gimbal
```
Credentials to the backend OpenStack cluster can be updated at any time if necessary. To do so, we recommend taking advantage of the Kubernetes deployment's update features:
- Create a new secret with the new credentials.
- Update the deployment to reference the new secret.
- Wait until the discoverer pod is rolled over.
- Verify the discoverer is up and running.
- Delete the old secret, or rollback the deployment if the discoverer failed to start.
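The rotation steps above can be sketched with kubectl. The deployment name `openstack-discoverer` and the new secret name are assumptions for illustration:

```shell
# Create a new secret with the updated credentials (values are illustrative).
kubectl create secret generic remote-discover-openstack-v2 \
  --from-literal=backend-name=openstack \
  --from-literal=username=admin \
  --from-literal=password=new-password \
  --from-literal=auth-url=https://api.openstack:5000/ \
  --from-literal=tenant-name=gimbal

# Update the deployment to reference the new secret.
kubectl set env deployment/openstack-discoverer \
  --from=secret/remote-discover-openstack-v2

# Wait for the rollout and verify the discoverer is up and running.
kubectl rollout status deployment/openstack-discoverer

# Delete the old secret, or roll back the deployment if the discoverer failed.
kubectl delete secret remote-discover-openstack
```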
The discoverer has two configuration parameters that control the request rate limiter of the Kubernetes client used to sync services and endpoints to the Gimbal cluster:
- Queries per second (QPS): Number of requests per second that can be sent to the Gimbal API server. Set using the `--gimbal-client-qps` command-line flag.
- Burst size: Number of requests that can be sent during a burst period. A burst is a period of time in which the number of requests can exceed the configured QPS, while still maintaining a smoothed QPS rate over time. Set using the `--gimbal-client-burst` command-line flag.
These configuration parameters are dependent on your requirements and the hardware running the Gimbal cluster. If services and endpoints in your environment undergo a high rate of change, increase the QPS and burst parameters, but make sure that the Gimbal API server and etcd cluster can handle the increased load.
Data flows from the remote cluster into the Gimbal cluster. Replication proceeds as follows:
- A connection is made to the remote cluster, and all LBaaS objects and their corresponding Members are retrieved.
- Those objects are translated into Kubernetes Services and Endpoints, then synchronized to the Gimbal cluster in the same namespace as the remote cluster's tenant. Labels are also added during synchronization (see the labels section for more details).
- Once the initial list of objects is synchronized, further updates happen each time a new reconciliation loop starts, at the configured `reconciliation-period`.
All synchronized services & endpoints have additional labels added to help identify where the objects were sourced from.
Labels added to services and endpoints:
gimbal.projectcontour.io/service=<serviceName>
gimbal.projectcontour.io/backend=<nodeName>
gimbal.projectcontour.io/load-balancer-id=<LoadBalancer.ID>
gimbal.projectcontour.io/load-balancer-name=<LoadBalancer.Name>
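These labels can be used as selectors when querying the Gimbal cluster. The namespace `gimbal` and backend name `openstack` below are assumptions matching the earlier secret example:

```shell
# List discovered objects from one backend, using the synchronization labels.
kubectl get services,endpoints -n gimbal \
  -l gimbal.projectcontour.io/backend=openstack
```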