
Selector

info

Selectors are used by flow control and observability components instantiated by Aperture Agents, such as Classifiers, Flux Meters, and Load Schedulers. Selectors define scoping rules that determine how these components select flows for their operations.

A Selector consists of:

  • Control Point
  • Label Matcher (optional)
  • Agent Group (optional)
  • Service (optional)

Control Point

Control points are similar to feature flags. They identify the location in the code or data plane (web servers, service meshes, API gateways, and so on) where flow control decisions are applied. They're defined by developers using the SDKs or configured when integrating with API Gateways or Service Meshes.

graph LR
  users(("users"))
  subgraph Frontend Service
    fingress["ingress"]
    recommendations{{"recommendations"}}
    live-update{{"live-update"}}
    fegress["egress"]
  end
  subgraph Checkout Service
    cingress["ingress"]
    cegress["egress"]
  end
  subgraph Database Service
    dbingress["ingress"]
  end
  users -.-> fingress
  fegress -.-> cingress
  cegress -.-> dbingress

In the above diagram, each service has HTTP or gRPC control points. Every incoming API request to a service is a flow at its ingress control point. Likewise, every outgoing request from a service is a flow at its egress control point.

In addition, the Frontend service has feature control points identifying recommendations and live-update features inside the Frontend service's code.
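
For instance, a selector targeting the recommendations feature control point could be sketched as follows; the service FQDN is an assumed placeholder for the Frontend service:

service: frontend.myns.svc.cluster.local  # assumed placeholder FQDN for the Frontend service
agent_group: default
control_point: recommendations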

note

The Control Point definition does not care about which particular entity (like a pod) is handling a particular flow. A single Control Point covers all the entities belonging to the same service.

tip

Use the aperturectl flow-control control-points CLI command to list active control points.

Label Matcher

The Label Matcher optionally narrows down the selected flows based on conditions defined over flow labels.

There are multiple ways to define a label matcher. The simplest way is to provide a map of labels for exact-match:

label_matcher:
  match_labels:
    http.method: GET

Matching expression trees can also be used to define more complex conditions, including regular expression matching. Refer to Label Matcher reference for further details.
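
Beyond exact matching, the Kubernetes-style match_expressions form (also used in the filtering example later on this page) expresses set-based conditions. A minimal sketch, assuming a user_tier flow label with hypothetical values:

label_matcher:
  match_expressions:
    - key: user_tier    # hypothetical flow label
      operator: In      # Kubernetes-style set operator, as in the NotIn example later on this page
      values:
        - gold
        - silver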

Example

service: checkout.myns.svc.cluster.local
agent_group: default
control_point: ingress
label_matcher:
  match_labels:
    user_tier: gold

Agent Group

note

The Agent Group and Service are optional constructs that help scale Aperture configuration in complex environments, such as Kubernetes, or in multi-cluster installations.

In standalone Aperture Agent deployments (not co-located with any service), Control Points alone can be used to match flows to policies; such a deployment can act as a feature-flag decision service serving remote flow control requests.

Agent Group is a flexible label that defines a collection of agents that operate as peers. For example, an Agent Group can be a Kubernetes cluster name in the case of DaemonSet deployment, or it can be a service name for sidecar deployments.

Agent Group defines the scope of agent-to-agent synchronization, with agents within the group forming a peer-to-peer network to synchronize fine-grained state, such as per-label global counters used for rate-limiting purposes. Additionally, all agents within an Agent Group instantiate the same set of flow control components as published by the controller.
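
As a sketch, a selector scoped to a non-default Agent Group (the group name below is a hypothetical per-cluster name) could look like:

agent_group: cluster-east-1  # hypothetical Agent Group named after a Kubernetes cluster
service: checkout.myns.svc.cluster.local
control_point: ingress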

Service

A service in Aperture is similar to services tracked in Kubernetes or Consul. Services in Aperture are usually referred to by their fully qualified domain names (FQDNs).

A service is a collection of entities delivering a common functionality, such as checkout or billing. Aperture maintains a mapping of entity IP addresses to service names. For each flow control decision request sent by an entity, Aperture looks up the service name and then decides which flow control components to execute.

note

An entity (Kubernetes pod, VM) might belong to multiple services.

Special Service Names
  • any: Can be used in a policy to match all services
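
For example, a selector that applies to ingress flows of every service within the default Agent Group could be sketched as follows, assuming any is given directly as the service name:

service: any          # special name matching all services
agent_group: default
control_point: ingress
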
Service Discovery

Aperture Agents perform automated discovery of services and entities in environments such as Kubernetes and watch for any changes. Service and entity entries can also be created manually through configuration.

Services in Aperture are scoped within Agent Groups, creating a two-level hierarchy, for example:

graph TB
  subgraph group2
    s3[search.mynamespace.svc.cluster.local]
    s4[db.mynamespace.svc.cluster.local]
  end
  subgraph group1
    s1[frontend.mynamespace.svc.cluster.local]
    s2[db.mynamespace.svc.cluster.local]
  end

In this example, there are two independent db.mynamespace.svc.cluster.local services.

For single-cluster deployments, a single default Agent Group can be used:

graph TB
  subgraph default
    s1[frontend.mynamespace.svc.cluster.local]
    s3[search.mynamespace.svc.cluster.local]
    s2[db.mynamespace.svc.cluster.local]
  end

At the other extreme, if Agent Groups already group entities into logical services, the Agent Group itself can be treated as a service to match flows to policies (useful when installing as a sidecar):

graph TB
  subgraph frontend
    s1[*]
  end
  subgraph search
    s2[*]
  end
  subgraph db
    s3[*]
  end

The Agent Group name together with the Service name determines the service to select flows from.
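
In the sidecar scenario above, a minimal selector sketch could lean on the Agent Group alone, assuming the frontend group from the diagram and omitting the optional service field:

agent_group: frontend   # Agent Group standing in for the logical service in a sidecar deployment
control_point: ingress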

Gateways Integration

Aperture can be integrated with Gateways to control traffic before it is routed to the upstream service. Gateways can be configured to send flow control requests to Aperture for every incoming request.

As the requests to Aperture are sent from the Gateway, the service selector has to be configured to match the Gateway's service. For example, if the Gateway controller is running as the service nginx-server in the nginx namespace, and the upstream service is served at the location/route /service1, the selector should be configured as follows:

service: nginx-server.nginx.svc.cluster.local
agent_group: default
control_point: service1
label_matcher:
  match_labels:
    http.target: "/service1"

Alternatively, if the control point is configured uniquely for each location/route, the control_point field alone can be used to match the upstream service, and the rest of the fields can be omitted:

agent_group: default
control_point: service1

Filtering out liveness/health probes and metrics endpoints

Liveness and health probes are essential for checking the health of the application, and metrics endpoints are necessary for monitoring its performance. However, these endpoints do not contribute to the overall latency of the service, and if included in latency calculations, they might cause requests to be rejected, leading to unnecessary pod restarts.

To prevent these issues, traffic to these endpoints can be filtered out using matching expressions. In the example below, flows whose http.target is /health, /live, /ready, or /metrics, or whose User-Agent is kube-probe/1.23, are filtered out.

service: checkout.myns.svc.cluster.local
agent_group: default
control_point: ingress
label_matcher:
  match_expressions:
    - key: http.target
      operator: NotIn
      values:
        - /health
        - /live
        - /ready
        - /metrics
    - key: http.user_agent
      operator: NotIn
      values:
        - kube-probe/1.23

Filtering out traffic to these endpoints can prevent unnecessary pod restarts and ensure that the application is available to handle real user traffic.

Other flows can be filtered out by matching on different keys and operators, such as http.method with the NotIn operator and a value of GET. For more information on how to configure the Label Matcher, see the Label Matcher reference.
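
A sketch of that http.method case:

label_matcher:
  match_expressions:
    - key: http.method
      operator: NotIn   # excludes GET flows from selection
      values:
        - GET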

info

Remember that while these endpoints might have a low latency, they should not be included in the overall latency of the service. Filtering them out can help improve the accuracy of latency calculations and prevent requests from being rejected.