Kubernetes
Below are the instructions to install the Aperture Controller on Kubernetes.
Prerequisites
You can perform the installation using either the aperturectl CLI tool or Helm. Install the tool of your choice before proceeding.
- Refer to aperturectl install controller to see all the available command-line arguments.
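If aperturectl is already installed, a quick way to inspect those arguments locally (assuming the CLI supports the conventional --help flag) is:
aperturectl install controller --help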
Once the Helm CLI is installed, add the Aperture Controller Helm chart repository to your environment for installation or upgrade:
helm repo add aperture https://fluxninja.github.io/aperture/
helm repo update
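To confirm the repository was added correctly, you can search it for the controller chart (helm search repo is a standard Helm command; --versions lists all published chart versions):
helm search repo aperture-controller --versions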
Installation
The Aperture Controller can be installed on Kubernetes using either of the following options:
Upgrading from one of the installation modes below to the other is discouraged and can result in unpredictable behavior.
- The Aperture Controller can be installed using the Kubernetes Operator available for it. This method requires permissions to create cluster-level resources such as ClusterRole, ClusterRoleBinding, CustomResourceDefinition, and so on (a quick permission check is sketched below).
- The Aperture Controller can also be installed using only namespace-scoped resources.
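Before choosing the Operator-based mode, you can verify that your account is allowed to create those cluster-level resources; this sketch uses the standard kubectl auth can-i subcommand:
kubectl auth can-i create clusterroles
kubectl auth can-i create clusterrolebindings
kubectl auth can-i create customresourcedefinitions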
Exposing etcd and Prometheus services
If the Aperture Controller is installed with the packaged etcd and Prometheus, follow these steps to expose them outside the Kubernetes cluster so that an Aperture Agent running on Linux can access them.
The following steps use Contour as the Kubernetes Ingress Controller, exposing the etcd and Prometheus services outside the Kubernetes cluster through a Load Balancer.
Any other tool can also be used to expose the etcd and Prometheus services outside the Kubernetes cluster, depending on your infrastructure.
Add the Helm chart repository for Contour in your environment:
helm repo add bitnami https://charts.bitnami.com/bitnami
Install the Contour chart by running the following command:
helm install aperture bitnami/contour --namespace projectcontour --create-namespace
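To verify that Contour is up before continuing, check the pods in its namespace:
kubectl get pods --namespace projectcontour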
It might take a few minutes for the Contour Load Balancer IP to become available. You can watch the status by running:
kubectl get svc aperture-contour-envoy --namespace projectcontour -w
Once the EXTERNAL-IP is no longer <pending>, run the following command to get the External IP for the Load Balancer:
kubectl describe svc aperture-contour-envoy --namespace projectcontour | grep Ingress | awk '{print $3}'
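Alternatively, a one-line sketch using kubectl's JSONPath output (this assumes the provider assigns an IP; on providers that assign a hostname instead, such as AWS, use .hostname in place of .ip):
kubectl get svc aperture-contour-envoy --namespace projectcontour -o jsonpath='{.status.loadBalancer.ingress[0].ip}'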
Add an entry for the above IP in your cloud provider's DNS configuration. For example, follow the steps in Cloud DNS on GKE for Google Kubernetes Engine.
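As an illustration only, a wildcard A record created with the gcloud CLI might look like the following; the zone name my-zone is a placeholder, and EXTERNAL_IP is the Load Balancer IP obtained above (a wildcard record covers both the etcd and prometheus subdomains used later):
gcloud dns record-sets create '*.YOUR_DOMAIN_HERE.' --zone=my-zone --type=A --ttl=300 --rrdatas=EXTERNAL_IP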
Configure the below parameters to install the Kubernetes Ingress along with the Aperture Controller, by updating the values.yaml file created during installation and passing it with the install command:

ingress:
  enabled: true
  domain_name: YOUR_DOMAIN_HERE

etcd:
  service:
    annotations:
      projectcontour.io/upstream-protocol.h2c: "2379"

Replace YOUR_DOMAIN_HERE with the actual domain name under which the External IP is exposed.
Then run the install command using the tool of your choice:
- aperturectl:
aperturectl install controller --version v2.6.0 --values-file values.yaml
- Helm:
helm upgrade --install controller aperture/aperture-controller -f values.yaml
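To confirm the controller rollout before moving on, you can list its pods; the label selector below follows the common app.kubernetes.io/name convention and is an assumption, not a documented value of this chart:
kubectl get pods -l app.kubernetes.io/name=aperture-controller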
It might take a few minutes for the Ingress resource to get the ADDRESS. You can watch the status by running:
kubectl get ingress controller-ingress -w
Once the ADDRESS matches the External IP, etcd will be accessible at http://etcd.YOUR_DOMAIN_HERE:80 and Prometheus will be accessible at http://prometheus.YOUR_DOMAIN_HERE:80.
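As a final sanity check, you can probe the services' standard health endpoints (etcd serves /health and Prometheus serves /-/healthy by default; this assumes those defaults are unchanged):
curl http://etcd.YOUR_DOMAIN_HERE:80/health
curl http://prometheus.YOUR_DOMAIN_HERE:80/-/healthy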
Applying Policies
Policies for Aperture can be created either after installing the controller or after installing the agent, depending on your preference. The Generating and applying policies guide includes step-by-step instructions on how to create policies for Aperture in a Kubernetes cluster, which you can follow to create policies according to your needs.