The easiest way to deploy RIG is using Helm:

```shell
helm repo add accenture https://accenture.github.io/reactive-interaction-gateway

# Helm v3
helm install rig accenture/reactive-interaction-gateway

# Helm v2
helm install --name=rig accenture/reactive-interaction-gateway-helm-v2
```
Check out the Helm v2 README or Helm v3 README and Operator's Guide for more information on configuring RIG.
This deployment is not recommended, as many configuration values are hard-coded:

```shell
kubectl apply -f kubectl/rig.yaml
```
Check out the getting-started tutorial, the examples, and the features for more information on what you can do with RIG.
Both kubectl and helm deploy a set of Kubernetes resources:

- deployment - manages the pod(s)
- service - provides the main communication point for other applications
- headless service - takes care of the DNS discovery used internally
To allow external communication (from outside your cluster), run:

```shell
helm upgrade --set service.type=LoadBalancer rig accenture/reactive-interaction-gateway

# for kubectl, update kubectl/rig.yaml to use a service of type LoadBalancer instead of ClusterIP
```
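For the kubectl variant, the relevant change in `kubectl/rig.yaml` is the service type. A sketch of the service spec is shown below; the service name and port are placeholders, so match them to your actual manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: reactive-interaction-gateway   # placeholder; use the name from your manifest
spec:
  type: LoadBalancer   # changed from ClusterIP
  ports:
    - port: 4000       # example port; use the port(s) from your manifest
```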
Scale the deployment to create multiple pods:

```shell
helm upgrade --set replicaCount=<replicas> rig accenture/reactive-interaction-gateway

# or
kubectl scale deployment/<deployment_name> --replicas <replicas>
```
You can also inspect the logs of the pods with `kubectl logs <pod_name>` to see how they automatically re-balance Kafka consumers (if you are using Kafka) and adapt Proxy APIs from other nodes.
Every node in the cluster needs to be discoverable by the other nodes. For that, Elixir/Erlang uses a so-called long name or short name. We are using the long name, which is formed as `app_name@node_host`. In our case, `app_name` is set to `rig`, and `node_host` is taken from the `NODE_HOST` environment variable. This can be an IP address, a container alias, or anything else that is routable in the network by the other nodes.
We are using the pod IP with:

```yaml
- name: NODE_HOST
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
```
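With `NODE_HOST` resolved to the pod IP, the resulting Erlang long name can be sketched as follows (the IP address is a placeholder):

```shell
# The long name is app_name@node_host; for RIG, app_name is "rig"
NODE_HOST=10.1.2.3        # placeholder pod IP, injected via the Downward API above
echo "rig@${NODE_HOST}"   # -> rig@10.1.2.3
```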
Nodes in an Erlang cluster use cookies as a form of authentication and authorization between them: only nodes with the same cookie can communicate with each other. Ideally, the cookie is a generated hash, which is why we recommend adapting the `NODE_COOKIE` environment variable in the `values.yaml`.
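One way to generate such a value is a random hex string, for example (the exact `values.yaml` key that maps to `NODE_COOKIE` depends on the chart, so this only sketches producing the secret itself):

```shell
# Generate a random 64-character hex string suitable for use as an Erlang cookie
openssl rand -hex 32
```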
You can configure a number of environment variables; please check the Helm v2 README or Helm v3 README and the Operator's Guide.
To remove RIG from your cluster:

```shell
# kubectl
kubectl delete -f kubectl/rig.yaml

# Helm v3
helm uninstall rig

# Helm v2
helm delete --purge rig
```