
Unable to maintain client IP with SSL Passthrough in Ingress NGINX Controller

Describe the bug:

I'm currently hosting an ASP.NET application in a Kubernetes environment, using the Ingress NGINX Controller (deployed via the official Helm chart) to expose it externally. The application authorizes requests via client-provided certificates. When server-side validation of these certificates fails, the application should return custom error codes and messages, and if the client IP is not in the allowed pool, it should return a 403 HTTP error.

However, when a request with an expired certificate is sent, the expected 403 HTTP error is not returned. Instead, NGINX validates the certificate itself and returns a 400 HTTP error, as described in #8229 (closed) and https://github.com/openssl/openssl/issues/14036.

To address this, I enabled ssl-passthrough on the controller and on the Ingress rule for the service, and added the extra validation in the application itself. Unfortunately, with ssl-passthrough enabled on the Ingress, the client IP seen by the application is rewritten to ::ffff:<INTERNAL_IPV4_OF_INGRESS_POD>.

A similar issue was reported in #8052 (closed), where the client IP is always 127.0.0.1, but the solution provided there (setting enable-real-ip: "true" and forwarded-for-header: proxy_protocol in the Ingress NGINX Controller ConfigMap) did not solve my issue.

I know that ssl-passthrough works at L4, but is there another workaround to provide the client IP to the application (via HttpContext.Request.RemoteIpAddress, or the X-Forwarded-For or X-Real-IP headers)?
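
For reference, this is the configuration I understand the proxy_protocol route would need, combining the ConfigMap keys I already set with the pieces I believe are still missing. This is only a sketch: use-proxy-protocol and the load balancer annotation are assumptions on my side, the AWS annotation is just an example, and I am not sure whether the ssl-passthrough TCP proxy honours PROXY protocol at all.

controller:
  config:
    # already set in my values:
    enable-real-ip: "true"
    forwarded-for-header: proxy_protocol
    # presumably also needed, so NGINX actually expects the PROXY
    # header on incoming connections (assumption on my side):
    use-proxy-protocol: "true"
  service:
    annotations:
      # example only: asks an AWS load balancer to send PROXY protocol;
      # other cloud providers use different annotations
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    # already set in my values; keeps the source IP at the node level
    externalTrafficPolicy: Local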

What you expected to happen:

I am hoping for a workaround to preserve the client IP in HttpContext.Request.RemoteIpAddress, or in the X-Forwarded-For or X-Real-IP headers, even when ssl-passthrough mode is enabled.

NGINX Ingress controller version:

NGINX Ingress controller
  Release:       v1.6.4
  Build:         69e88338
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.14", GitCommit:"3321ffc07d2f046afdf613796f9032f4460de093", GitTreeState:"clean", BuildDate:"2022-11-09T13:32:47Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}

  • How was the ingress-nginx-controller installed: with the official Helm chart (helm ls output below):

NAME                   NAMESPACE            REVISION  UPDATED                                   STATUS    CHART                APP VERSION
public-ingress-nginx   ingress-nginx-test   4         2023-11-30 13:35:05.882801305 +0000 UTC   deployed  ingress-nginx-4.5.2  1.6.4

Helm values:

USER-SUPPLIED VALUES:
commonLabels: {}
controller:
  admissionWebhooks:
    annotations: {}
    certManager:
      enabled: false
    certificate: /usr/local/certificates/cert
    createSecretJob:
      resources:
        limits:
          cpu: 10m
          memory: 20Mi
        requests:
          cpu: 10m
          memory: 20Mi
      securityContext:
        allowPrivilegeEscalation: false
    enabled: true
    existingPsp: ""
    failurePolicy: Fail
    key: /usr/local/certificates/key
    namespaceSelector: {}
    networkPolicyEnabled: false
    objectSelector: {}
    patch:
      enabled: true
      image:
        digest: ""
        image: ingress-nginx/kube-webhook-certgen
        pullPolicy: IfNotPresent
        registry: registry.k8s.io
        tag: v20220916-gd32f8c343
      labels: {}
      nodeSelector:
        kubernetes.io/os: linux
      podAnnotations: {}
      priorityClassName: ""
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      tolerations: []
    patchWebhookJob:
      resources: {}
      securityContext:
        allowPrivilegeEscalation: false
    port: 8443
    service:
      annotations: {}
      externalIPs: []
      loadBalancerSourceRanges: []
      servicePort: 443
      type: ClusterIP
  allowSnippetAnnotations: true
  autoscaling:
    apiVersion: autoscaling/v2
    behavior: {}
    enabled: true
    maxReplicas: 5
    minReplicas: 1
    targetCPUUtilizationPercentage: 80
    targetMemoryUtilizationPercentage: 80
  autoscalingTemplate: []
  config:
    allow-snippet-annotations: "true"
    enable-real-ip: "true"
    forwarded-for-header: proxy_protocol
    gzip-min-length: "1"
    gzip-types: text/plain application/json application/xml
    use-gzip: "true"
  configMapNamespace: ""
  containerName: controller
  containerPort:
    http: 80
    https: 443
  customTemplate:
    configMapKey: ""
    configMapName: ""
  dnsPolicy: ClusterFirst
  electionID: ""
  enableTopologyAwareRouting: false
  existingPsp: ""
  extraArgs:
    enable-ssl-passthrough: ""
  extraEnvs: []
  healthCheckPath: /healthz
  hostNetwork: false
  hostPort:
    enabled: false
    ports:
      http: 80
      https: 443
  image:
    allowPrivilegeEscalation: true
    chroot: false
    digest: ""
    image: ingress-nginx/controller
    pullPolicy: IfNotPresent
    registry: registry.k8s.io
    runAsUser: 101
    tag: v1.6.4
  ingressClass: nginx
  ingressClassByName: false
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx
    default: false
    enabled: true
    name: public-ingress-nginx
    parameters: {}
  keda:
    enabled: false
  kind: DaemonSet
  livenessProbe:
    failureThreshold: 5
    httpGet:
      path: /healthz
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  maxmindLicenseKey: ""
  metrics:
    enabled: true
    port: 10254
    prometheusRule:
      additionalLabels: {}
      enabled: false
      rules: []
    service:
      annotations: {}
      externalIPs: []
      loadBalancerSourceRanges: []
      servicePort: 10254
      type: ClusterIP
    serviceMonitor:
      enabled: false
  minAvailable: 1
  name: controller
  nodeSelector:
    kubernetes.io/os: linux
  opentelemetry:
    containerSecurityContext:
      allowPrivilegeEscalation: false
    enabled: false
    image: registry.k8s.io/ingress-nginx/opentelemetry:v20230107-helm-chart-4.4.2-2-g96b3d2165@sha256:331b9bebd6acfcd2d3048abbdd86555f5be76b7e3d0b5af4300b04235c6056c9
  publishService:
    enabled: true
    pathOverride: ""
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /healthz
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  replicaCount: 1
  reportNodeInternalIp: false
  resources:
    requests:
      cpu: 300m
      memory: 220Mi
  scope:
    enabled: false
  service:
    annotations: {}
    appProtocol: true
    enableHttp: false
    enableHttps: true
    enabled: true
    external:
      enabled: true
    externalIPs: []
    externalTrafficPolicy: Local
    internal:
      enabled: false
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    labels: {}
    loadBalancerIP: ""
    nodePorts:
      http: ""
      https: ""
      tcp: {}
      udp: {}
    ports:
      https: 443
    targetPorts:
      http: http
      https: https
    type: LoadBalancer
  tcp:
    configMapNamespace: ""
  tolerations: []
  udp:
    annotations: {}
    configMapNamespace: ""
  updateStrategy: {}
  watchIngressWithoutClass: false
defaultBackend:
  enabled: false
podSecurityPolicy:
  enabled: false
rbac:
  create: true
  scope: false
serviceAccount:
  annotations: {}
  automountServiceAccountToken: true
  create: true
  name: ""
  • Current State of the controller:
Name:         public-ingress-nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=public-ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.6.4
              helm.sh/chart=ingress-nginx-4.5.2
Annotations:  meta.helm.sh/release-name: public-ingress-nginx
              meta.helm.sh/release-namespace: ingress-nginx-test
Controller:   k8s.io/ingress-nginx
Events:       <none>
  • Additional context:

I am including the relevant sections of the Helm chart values, the start of the Ingress rule file, and the application HttpMiddleware logs for more context.

Helm chart values config section:

config:
    use-gzip: "true"
    gzip-types:
        text/plain
        application/json
        application/xml
    gzip-min-length: "1"
    allow-snippet-annotations: "true"
    enable-real-ip: "true"
    forwarded-for-header: proxy_protocol

Start of ingress rule:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/use-forwarded-headers: "true"

Application logs of headers and Client IP:

[
  {
    "@t": "2023-11-30T15:07:57.6371371Z",
    "@m": "X-Forwarded-For: ''",
    "@i": "e6231517",
    "@l": "Warning",
    "SourceContext": "Program",
    "RequestId": "0HMVHMEUBTND3:00000001",
    "RequestPath": "/api",
    "ConnectionId": "0HMVHMEUBTND3",
    "Scope": [
      "Start processing the request with trace: 0HMVHMEUBTND3:00000001"
    ]
  },
  {
    "@t": "2023-11-30T15:07:57.6371881Z",
    "@m": "Connection:RemoteIpAddress: '::ffff:172.17.4.148'",
    "@i": "05dda92f",
    "@l": "Warning",
    "SourceContext": "Program",
    "RequestId": "0HMVHMEUBTND3:00000001",
    "RequestPath": "/api",
    "ConnectionId": "0HMVHMEUBTND3",
    "Scope": [
      "Start processing the request with trace: 0HMVHMEUBTND3:00000001"
    ]
  }
]

How to reproduce this issue:

  1. enable ssl-passthrough on the ingress-nginx controller;
  2. enable ssl-passthrough on the Ingress rule (a minimal rule sketch is included after this list);
  3. perform a request from an external IP;
  4. log the request headers and client IP on the application side.
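
A minimal Ingress rule like the one below should be enough to reproduce; the name, host, and backend service/port are placeholders for my real values.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-passthrough                        # placeholder
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  ingressClassName: public-ingress-nginx
  rules:
  - host: app.example.com                          # placeholder
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-app                      # placeholder
            port:
              number: 443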

Anything else we need to know:

Kindly help me figure out what I might be missing here. How can I ensure that the client IP is preserved as expected?