### Component(s)

target allocator

### Problem
The Target Allocator (TA) can only be deployed via the OpenTelemetry Operator today. Users who want to run the TA without the operator - for example in environments where installing a CRD-based operator isn't feasible, or when managing their own collector fleet - have no official deployment manifests to work with. They must reverse-engineer the operator-generated resources or write their own from scratch.
### Proposal

Add a `cmd/otel-allocator/deploy/` directory containing kustomize base manifests for deploying the TA standalone:
| Resource | Description |
|---|---|
| ServiceAccount | Identity for the TA pod |
| ClusterRole | Read access to pods, nodes, services, endpoints, endpointslices |
| ClusterRoleBinding | Binds the ClusterRole to the ServiceAccount |
| Deployment | Runs the TA (1 replica, mounts config from ConfigMap) |
| Service | Exposes port 80 → 8080 for collectors |
| `kustomization.yaml` | Kustomize base with image placeholder |
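As an illustrative sketch only (file names and the image reference are assumptions, not a committed layout), the base `kustomization.yaml` could look like:

```yaml
# cmd/otel-allocator/deploy/kustomization.yaml -- hypothetical sketch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - serviceaccount.yaml
  - clusterrole.yaml
  - clusterrolebinding.yaml
  - deployment.yaml
  - service.yaml
# Image placeholder, intended to be overridden from an overlay
images:
  - name: ghcr.io/open-telemetry/opentelemetry-operator/target-allocator
```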
Users provide their own ConfigMap with `targetallocator.yaml` and create a kustomize overlay to set the namespace, image tag, and ClusterRoleBinding subject namespace.
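A user overlay might then look like the following sketch (the namespace, tag, and ConfigMap name are hypothetical examples, not prescribed values):

```yaml
# overlays/my-cluster/kustomization.yaml -- hypothetical user overlay
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: observability
resources:
  # Reference the base directly from GitHub
  - github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/deploy
images:
  - name: ghcr.io/open-telemetry/opentelemetry-operator/target-allocator
    newTag: v0.100.0   # example tag
configMapGenerator:
  - name: targetallocator-config   # assumed name; must match the Deployment's volume
    files:
      - targetallocator.yaml       # user-supplied TA configuration
```

Applying the overlay is then a single `kubectl apply -k overlays/my-cluster`.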
### Why kustomize

- Already used throughout this project and the broader Kubernetes ecosystem
- Users can reference the base directly from GitHub (`github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/deploy`)
- Easy to override image, namespace, and names via overlays
- No additional tooling required beyond `kubectl apply -k`
### E2E validation
An accompanying Go-based integration test prepared here deploys the TA from these manifests into a kind cluster, verifies target distribution across collectors, tests scale-up/down with consistent hashing, and validates the HTTP API contract - all without the operator.
### Acceptance criteria
- `cmd/otel-allocator/deploy/` with README
- `kustomize build` produces valid, deployable manifests