Prerequisites

  • Kubernetes 1.25+
  • Helm 3.8+

Install

helm install proxy-hopper oci://ghcr.io/cams-data/helm/proxy-hopper \
  --namespace proxy-hopper \
  --create-namespace
This deploys a single-instance Proxy Hopper with the in-memory backend and the default configuration. See Configuration below to supply your own.
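To confirm the release came up, a quick smoke test might look like the following (a sketch: the `app.kubernetes.io/instance` label and service port 8080 are assumptions based on standard Helm chart conventions, so check the chart's rendered manifests for the actual values):

```shell
# List the release's pods; they should reach Running/Ready.
kubectl get pods --namespace proxy-hopper \
  -l app.kubernetes.io/instance=proxy-hopper

# Forward the service locally to poke at it (port 8080 is an assumption).
kubectl port-forward --namespace proxy-hopper svc/proxy-hopper 8080:8080
```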

Configuration

Inline config (default)

Pass your config.yaml content directly in values:
helm install proxy-hopper oci://ghcr.io/cams-data/helm/proxy-hopper \
  --namespace proxy-hopper \
  --create-namespace \
  --set-file config.inline=config.yaml
Or in a values.yaml file:
config:
  inline: |
    proxyProviders:
      - name: my-provider
        auth:
          type: basic
          username: user
          password: secret
        ipList:
          - "10.0.0.1:3128"
          - "10.0.0.2:3128"
        regionTag: US-East

    targets:
      - name: general
        regex: '.*'
        ipPool: my-provider
        minRequestInterval: 1s
        numRetries: 3
helm install proxy-hopper oci://ghcr.io/cams-data/helm/proxy-hopper \
  --namespace proxy-hopper \
  --create-namespace \
  -f values.yaml

Existing ConfigMap or Secret

If you manage config separately (e.g. via an operator or external-secrets), reference it instead:
# Reference a ConfigMap — key must be "config.yaml"
config:
  existingConfigMap: my-proxy-hopper-config

# Or reference a Secret (for configs containing credentials)
config:
  existingSecret: my-proxy-hopper-secret
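If you create the Secret by hand rather than through an operator, one way is `kubectl create secret` from a local file. The `config.yaml` key name mirrors the requirement noted for ConfigMaps above; that it also applies to Secrets is an assumption worth confirming against the chart's templates:

```shell
# Create a Secret whose "config.yaml" key holds the full config file.
kubectl create secret generic my-proxy-hopper-secret \
  --namespace proxy-hopper \
  --from-file=config.yaml=./config.yaml
```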

Redis backend

Enable the bundled Redis subchart for multi-instance HA deployments:
backend:
  type: redis

redis:
  enabled: true
  architecture: standalone
  auth:
    enabled: false
  master:
    persistence:
      size: 1Gi
The chart automatically selects the -redis Docker image and wires up the Redis URL when backend.type: redis is set. To use an external Redis instance instead:
backend:
  type: redis
  redis:
    url: redis://my-redis:6379/0

redis:
  enabled: false

Scaling

Enable the HorizontalPodAutoscaler:
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
HPA with multiple replicas requires the Redis backend so that all instances share pool state. The in-memory backend is not suitable for multi-replica deployments.
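Putting the two together, a minimal HA values file, using only the keys shown in the Redis backend and Scaling sections above, might look like:

```yaml
# values-ha.yaml — shared Redis state plus autoscaling
backend:
  type: redis

redis:
  enabled: true
  architecture: standalone

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```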

Ingress

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: proxy-hopper.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: proxy-hopper-tls
      hosts:
        - proxy-hopper.example.com

Prometheus metrics

Enable the ServiceMonitor for Prometheus Operator:
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    interval: 30s
    additionalLabels:
      release: prometheus
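To verify the scrape target before wiring up Prometheus, you can hit the metrics endpoint directly. This is a sketch: the `/metrics` path and port 8080 are assumptions, not confirmed chart defaults:

```shell
# Forward the service and spot-check the exposition format.
kubectl port-forward --namespace proxy-hopper svc/proxy-hopper 8080:8080 &
sleep 2
curl -s http://localhost:8080/metrics | head
```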

Upgrading

helm upgrade proxy-hopper oci://ghcr.io/cams-data/helm/proxy-hopper \
  --namespace proxy-hopper \
  --reuse-values \
  --version <new-version>
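Before upgrading, it can help to inspect the latest published chart metadata and your release history:

```shell
# Show the chart's metadata (name, version, appVersion) from the OCI registry.
helm show chart oci://ghcr.io/cams-data/helm/proxy-hopper

# Review past revisions of the installed release.
helm history proxy-hopper --namespace proxy-hopper
```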

Uninstall

helm uninstall proxy-hopper --namespace proxy-hopper

Production checklist

  • Avoid putting credentials in config.inline in plain values files. Use config.existingSecret to reference a Secret managed by your secrets operator (e.g. External Secrets, Sealed Secrets).
  • Proxy Hopper is I/O-bound: CPU usage is low, but memory scales with the number of open connections. Set resources.requests and resources.limits based on observed usage.
  • The bundled Redis subchart is convenient but not hardened for production. For serious workloads, use a managed service (AWS ElastiCache, GCP Memorystore, Redis Cloud) and reference it via backend.redis.url.
  • Configure a PodDisruptionBudget (PDB) to maintain minimum availability during node drains and rolling upgrades. The chart does not include one, but you can add it manually.
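The PDB mentioned above can be sketched as a standalone manifest. The label selector below assumes the chart applies the standard `app.kubernetes.io/name` and `app.kubernetes.io/instance` Helm labels and that the release is named proxy-hopper; adjust it to match the labels on your pods:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: proxy-hopper
  namespace: proxy-hopper
spec:
  # Keep at least one replica available during voluntary disruptions.
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: proxy-hopper
      app.kubernetes.io/instance: proxy-hopper
```

With autoscaling.minReplicas: 2, minAvailable: 1 lets node drains proceed one pod at a time without taking the service fully offline.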