How to use Helm 3

Helm 2 has been marked as legacy and is no longer recommended, so this post uses Helm 3 throughout.

Installing Helm

On Mac

brew install helm

Using Docker

echo "alias helm='docker run -e KUBECONFIG=\"/root/.kube/config:/root/.kube/some-other-context.yaml\" -ti --rm -v \$(pwd):/apps -v ~/.kube:/root/.kube -v ~/.helm:/root/.helm alpine/helm'" >> ~/.bashrc

source ~/.bashrc
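Either way, a quick sanity check that the helm client is available:

helm version
# prints the client version, e.g. v3.x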

Configuration

Add repo

The stable helm charts repo already has a firm deprecation date, so avoid relying on it if you can.

helm repo add stable https://kubernetes-charts.storage.googleapis.com/

helm repo update
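
The examples later in this post install bitnami/nginx, so add the bitnami repo as well (the URL below is the public Bitnami charts repo):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update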

(Optional) Completion

helm completion zsh > ~/.zsh/completion/_helm
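
This assumes ~/.zsh/completion is already on your fpath; if it isn't, a minimal sketch of the ~/.zshrc lines to add:

fpath=(~/.zsh/completion $fpath)
autoload -Uz compinit && compinit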

How to use

Helm 3's commands are deliberately modeled after kubectl, with flags such as --kubeconfig and --kube-context, so anyone comfortable with kubectl should pick it up quickly.

Search repo

helm search repo nginx
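
helm search repo only looks through the repos you have added locally; to search charts published on Artifact Hub instead:

helm search hub nginx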

Show chart values

helm show values bitnami/nginx
# or save values.yaml
helm show values bitnami/nginx > values.yaml

List installed charts

helm list
# or all namespaces
helm list -A

Install and upgrade chart

helm --namespace=${NS} install ${NAME} bitnami/nginx -f values.yaml
# or
helm --namespace=${NS} upgrade --install ${NAME} bitnami/nginx -f values.yaml
# or override individual values with --set
helm --namespace=${NS} upgrade --install ${NAME} bitnami/nginx --set "ingress.enabled=true"
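
If an upgrade misbehaves, the release history can be inspected and rolled back:

helm --namespace=${NS} history ${NAME}
# roll back to a previous revision, e.g. revision 1
helm --namespace=${NS} rollback ${NAME} 1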

Uninstall chart

# uninstall also has the aliases: del, delete, un
helm --namespace=${NS} uninstall ${NAME}
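
Helm 3 removes the release history on uninstall as well; pass --keep-history if you want to keep the record around (e.g. for helm status or a later rollback):

helm --namespace=${NS} uninstall --keep-history ${NAME}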

Examples

Take bitnami/nginx as an example:

  1. Use helm show values bitnami/nginx > values.yaml to save the chart's default values.yaml:

    ## Global Docker image parameters
    ## Please, note that this will override the image parameters, including dependencies, configured to use the global value
    ## Current available global Docker image parameters: imageRegistry and imagePullSecrets
    ##
    # global:
    #   imageRegistry: myRegistryName
    #   imagePullSecrets:
    #     - myRegistryKeySecretName

    ## Bitnami NGINX image version
    ## ref: https://hub.docker.com/r/bitnami/nginx/tags/
    ##
    image:
      registry: docker.io
      repository: bitnami/nginx
      tag: 1.19.3-debian-10-r7
      ## Specify a imagePullPolicy
      ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
      ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
      ##
      pullPolicy: IfNotPresent
      ## Optionally specify an array of imagePullSecrets.
      ## Secrets must be manually created in the namespace.
      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
      ## E.g.:
      ## pullSecrets:
      ##   - myRegistryKeySecretName
      ##
      pullSecrets: []
      ## Set to true if you would like to see extra information on logs
      ##
      debug: false

    ## String to partially override nginx.fullname template (will maintain the release name)
    ##
    # nameOverride:

    ## String to fully override nginx.fullname template
    ##
    # fullnameOverride:

    ## Kubernetes Cluster Domain
    ##
    clusterDomain: cluster.local

    ## Extra objects to deploy (value evaluated as a template)
    ##
    extraDeploy: []

    ## Add labels to all the deployed resources
    ##
    commonLabels: {}

    ## Add annotations to all the deployed resources
    ##
    commonAnnotations: {}

    ## Command and args for running the container (set to default if not set). Use array form
    ##
    # command:
    # args:

    ## Additional environment variables to set
    ## E.g:
    ## extraEnvVars:
    ##   - name: FOO
    ##     value: BAR
    ##
    extraEnvVars: []

    ## ConfigMap with extra environment variables
    ##
    # extraEnvVarsCM:

    ## Secret with extra environment variables
    ##
    # extraEnvVarsSecret:

    ## Get the server static content from a git repository
    ## NOTE: This will override staticSiteConfigmap and staticSitePVC
    ##
    cloneStaticSiteFromGit:
      enabled: false
      ## Bitnami Git image version
      ## ref: https://hub.docker.com/r/bitnami/git/tags/
      ##
      image:
        registry: docker.io
        repository: bitnami/git
        tag: 2.28.0-debian-10-r64
        ## Specify a imagePullPolicy
        ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
        ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
        ##
        pullPolicy: IfNotPresent
        ## Optionally specify an array of imagePullSecrets.
        ## Secrets must be manually created in the namespace.
        ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
        ##
        # pullSecrets:
        #   - myRegistryKeySecretName
      ## Repository to clone static content from
      ##
      # repository:
      ## Branch inside the git repository
      ##
      # branch:
      ## Interval for sidecar container pull from the repository
      ##
      interval: 60

    ## Custom server block to be added to NGINX configuration
    ## PHP-FPM example server block:
    ## serverBlock: |-
    ##   server {
    ##     listen 0.0.0.0:8080;
    ##     root /app;
    ##     location / {
    ##       index index.html index.php;
    ##     }
    ##     location ~ \.php$ {
    ##       fastcgi_pass phpfpm-server:9000;
    ##       fastcgi_index index.php;
    ##       include fastcgi.conf;
    ##     }
    ##   }
    ##
    # serverBlock:

    ## ConfigMap with custom server block to be added to NGINX configuration
    ## NOTE: This will override serverBlock
    ##
    # existingServerBlockConfigmap:

    ## Name of existing ConfigMap with the server static site content
    ##
    # staticSiteConfigmap

    ## Name of existing PVC with the server static site content
    ## NOTE: This will override staticSiteConfigmap
    ##
    # staticSitePVC

    ## Number of replicas to deploy
    ##
    replicaCount: 1

    ## Pod extra labels
    ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
    ##
    podLabels: {}

    ## Pod annotations
    ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
    ##
    podAnnotations: {}

    ## Pod affinity preset
    ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
    ## Allowed values: soft, hard
    ##
    podAffinityPreset: ""

    ## Pod anti-affinity preset
    ## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
    ## Allowed values: soft, hard
    ##
    podAntiAffinityPreset: soft

    ## Node affinity preset
    ## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
    ## Allowed values: soft, hard
    ##
    nodeAffinityPreset:
      ## Node affinity type
      ## Allowed values: soft, hard
      type: ""
      ## Node label key to match
      ## E.g.
      ## key: "kubernetes.io/e2e-az-name"
      ##
      key: ""
      ## Node label values to match
      ## E.g.
      ## values:
      ##   - e2e-az1
      ##   - e2e-az2
      ##
      values: []

    ## Affinity for pod assignment
    ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
    ##
    affinity: {}

    ## Node labels for pod assignment. Evaluated as a template.
    ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}

    ## Tolerations for pod assignment. Evaluated as a template.
    ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: {}

    ## NGINX pods' Security Context.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
    ##
    podSecurityContext:
      enabled: false
      runAsUser: 1001
      runAsNonRoot: true
      ## sysctl settings
      ## Example:
      ## sysctls:
      ##   - name: net.core.somaxconn
      ##     value: "10000"
      ##
      sysctls: {}

    ## NGINX Core containers' Security Context (only main container).
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
    ##
    containerSecurityContext:
      enabled: false
      fsGroup: 1001

    ## Configures the ports NGINX listens on
    ##
    containerPorts:
      http: 8080
      # https: 8443

    ## NGINX containers' resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      # We usually recommend not to specify default resources and to leave this as a conscious
      # choice for the user. This also increases chances charts run on environments with little
      # resources, such as Minikube. If you do want to specify resources, uncomment the following
      # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
      limits: {}
      #   cpu: 100m
      #   memory: 128Mi
      requests: {}
      #   cpu: 100m
      #   memory: 128Mi

    ## NGINX containers' liveness and readiness probes.
    ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
    ##
    livenessProbe:
      enabled: true
      initialDelaySeconds: 30
      timeoutSeconds: 5
      periodSeconds: 10
      failureThreshold: 6
      successThreshold: 1
    readinessProbe:
      enabled: true
      initialDelaySeconds: 5
      timeoutSeconds: 3
      periodSeconds: 5
      failureThreshold: 3
      successThreshold: 1

    ## Custom Liveness probe
    ##
    customLivenessProbe: {}

    ## Custom Rediness probe
    ##
    customReadinessProbe: {}

    ## Autoscaling parameters
    ##
    autoscaling:
      enabled: false
      # minReplicas: 1
      # maxReplicas: 10
      # targetCPU: 50
      # targetMemory: 50

    ## Array to add extra volumes (evaluated as a template)
    ##
    extraVolumes: []

    ## Array to add extra mounts (normally used with extraVolumes, evaluated as a template)
    ##
    extraVolumeMounts: []

    ## NGINX Service properties
    ##
    service:
      ## Service type
      ##
      type: LoadBalancer

      ## HTTP Port
      ##
      port: 80

      ## HTTPS Port
      ##
      httpsPort: 443

      ## Specify the nodePort(s) value(s) for the LoadBalancer and NodePort service types.
      ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
      ##
      nodePorts:
        http: ""
        https: ""

      ## Target port reference value for the Loadbalancer service types can be specified explicitly.
      ## Listeners for the Loadbalancer can be custom mapped to the http or https service.
      ## Example: Mapping the https listener to targetPort http [http: https]
      targetPort:
        http: http
        https: https

      ## Set the LoadBalancer service type to internal only.
      ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
      ##
      # loadBalancerIP:

      ## Provide any additional annotations which may be required. This can be used to
      ## set the LoadBalancer service type to internal only.
      ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
      ##
      annotations: {}

      ## Enable client source IP preservation
      ## ref http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
      ##
      externalTrafficPolicy: Cluster

    ## LDAP Auth Daemon Properties
    ##
    ## Daemon that will proxy LDAP requests
    ## between NGINX and a given LDAP Server
    ##
    ldapDaemon:

      enabled: false

      ## Bitnami NGINX LDAP Auth Daemon image
      ## ref: https://hub.docker.com/r/bitnami/nginx-ldap-auth-daemon/tags/
      ##
      image:
        registry: docker.io
        repository: bitnami/nginx-ldap-auth-daemon
        tag: 0.20200116.0-debian-10-r141
        pullPolicy: IfNotPresent

      ## LDAP Daemon port
      ##
      port: 8888

      ## LDAP Auth Daemon Configuration
      ##
      ## These different properties define the form of requests performed
      ## against the given LDAP server
      ##
      ## BEWARE THAT THESE VALUES WILL BE IGNORED IF A CUSTOM LDAP SERVER BLOCK
      ## ALREADY SPECIFIES THEM.
      ##
      ##
      ldapConfig:

        ## LDAP URI where to query the server
        ## Must follow the pattern -> ldap[s]:/<hostname>:<port>
        uri: ""

        ## LDAP search base DN
        baseDN: ""

        ## LDAP bind DN
        bindDN: ""

        ## LDAP bind Password
        bindPassword: ""

        ## LDAP search filter
        filter: ""

        ## LDAP auth realm
        httpRealm: ""

        ## LDAP cookie name
        httpCookieName: ""

      ## NGINX Configuration File containing the directives (that define
      ## how LDAP requests are performed) and tells NGINX to use the LDAP Daemon
      ## as proxy. Besides, it defines the routes that will require of LDAP auth
      ## in order to be accessed.
      ##
      ## If LDAP directives are provided, they will take precedence over
      ## the ones specified in ldapConfig.
      ##
      ## This will be evaluated as a template.
      ##
      ##

      nginxServerBlock: |-
        server {
          listen 0.0.0.0:{{ .Values.containerPorts.http }};

          # You can provide a special subPath or the root
          location = / {
            auth_request /auth-proxy;
          }

          location = /auth-proxy {
            internal;

            proxy_pass http://127.0.0.1:{{ .Values.ldapDaemon.port }};

            ###############################################################
            # YOU SHOULD CHANGE THE FOLLOWING TO YOUR LDAP CONFIGURATION #
            ###############################################################

            # URL and port for connecting to the LDAP server
            proxy_set_header X-Ldap-URL "ldap://YOUR_LDAP_SERVER_IP:YOUR_LDAP_SERVER_PORT";

            # Base DN
            proxy_set_header X-Ldap-BaseDN "dc=example,dc=org";

            # Bind DN
            proxy_set_header X-Ldap-BindDN "cn=admin,dc=example,dc=org";

            # Bind password
            proxy_set_header X-Ldap-BindPass "adminpassword";
          }
        }

      ## Use an existing Secret holding an NGINX Configuration file that
      ## configures LDAP requests. (will be evaluated as a template)
      ##
      ## If provided, both nginxServerBlock and ldapConfig properties are ignored.
      ##
      existingNginxServerBlockSecret:

      ## LDAP Auth Daemon containers' liveness and readiness probes.
      ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
      ##
      livenessProbe:
        enabled: true
        initialDelaySeconds: 30
        timeoutSeconds: 5
        periodSeconds: 10
        failureThreshold: 6
        successThreshold: 1
      readinessProbe:
        enabled: true
        initialDelaySeconds: 5
        timeoutSeconds: 3
        periodSeconds: 5
        failureThreshold: 3
        successThreshold: 1

      ## Custom Liveness probe
      ##
      customLivenessProbe: {}

      ## Custom Rediness probe
      ##
      customReadinessProbe: {}

    ## Ingress paramaters
    ##
    ingress:
      ## Set to true to enable ingress record generation
      ##
      enabled: false

      ## Set this to true in order to add the corresponding annotations for cert-manager
      ##
      certManager: false

      ## When the ingress is enabled, a host pointing to this will be created
      ##
      hostname: example.local

      ## Ingress annotations done as key:value pairs
      ## For a full list of possible ingress annotations, please see
      ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
      ##
      ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
      ##
      annotations: {}

      ## Enable TLS configuration for the hostname defined at ingress.hostname parameter
      ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
      ## You can use the ingress.secrets parameter to create this TLS secret, relay on cert-manager to create it, or
      ## let the chart create self-signed certificates for you
      ##
      tls: false

      ## The list of additional hostnames to be covered with this ingress record.
      ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
      ## E.g.
      ## extraHosts:
      ##   - name: example.local
      ##     path: /
      ##
      extraHosts: []

      ## The tls configuration for additional hostnames to be covered with this ingress record.
      ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
      ## E.g.
      ## extraTls:
      ##   - hosts:
      ##       - example.local
      ##     secretName: example.local-tls
      ##
      extraTls: []

      ## If you're providing your own certificates, please use this to add the certificates as secrets
      ## key and certificate should start with -----BEGIN CERTIFICATE----- or -----BEGIN RSA PRIVATE KEY-----
      ## name should line up with a secretName set further up
      ## If it is not set and you're using cert-manager, this is unneeded, as it will create the secret for you
      ## If it is not set and you're NOT using cert-manager either, self-signed certificates will be created
      ## It is also possible to create and manage the certificates outside of this helm chart
      ## Please see README.md for more information
      ##
      ## E.g.
      ## secrets:
      ##   - name: example.local-tls
      ##     key:
      ##     certificate:
      ##
      secrets: []

    ## Health Ingress parameters
    ##
    healthIngress:
      ## Set to true to enable health ingress record generation
      ##
      enabled: false

      ## Set this to true in order to add the corresponding annotations for cert-manager
      ##
      certManager: false

      ## When the health ingress is enabled, a host pointing to this will be created
      ##
      hostname: example.local

      ## Health Ingress annotations done as key:value pairs
      ## For a full list of possible ingress annotations, please see
      ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
      ##
      ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
      ##
      annotations: {}

      ## Enable TLS configuration for the hostname defined at healthIngress.hostname parameter
      ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.healthIngress.hostname }}
      ## You can use the healthIngress.secrets parameter to create this TLS secret, relay on cert-manager to create it, or
      ## let the chart create self-signed certificates for you
      ##
      tls: false

      ## The list of additional hostnames to be covered with this health ingress record.
      ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
      ## E.g.
      ## extraHosts:
      ##   - name: example.local
      ##     path: /
      ##
      extraHosts: []

      ## The tls configuration for additional hostnames to be covered with this health ingress record.
      ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
      ## E.g.
      ## extraTls:
      ##   - hosts:
      ##       - example.local
      ##     secretName: example.local-tls
      ##
      extraTls: []

      ## If you're providing your own certificates, please use this to add the certificates as secrets
      ## key and certificate should start with -----BEGIN CERTIFICATE----- or -----BEGIN RSA PRIVATE KEY-----
      ## name should line up with a secretName set further up
      ## If it is not set and you're using cert-manager, this is unneeded, as it will create the secret for you
      ## If it is not set and you're NOT using cert-manager either, self-signed certificates will be created
      ## It is also possible to create and manage the certificates outside of this helm chart
      ## Please see README.md for more information
      ##
      ## E.g.
      ## secrets:
      ##   - name: example.local-tls
      ##     key:
      ##     certificate:
      ##
      secrets: []

    ## Prometheus Exporter / Metrics
    ##
    metrics:
      enabled: false

      ## Bitnami NGINX Prometheus Exporter image
      ## ref: https://hub.docker.com/r/bitnami/nginx-exporter/tags/
      ##
      image:
        registry: docker.io
        repository: bitnami/nginx-exporter
        tag: 0.8.0-debian-10-r98
        pullPolicy: IfNotPresent
        ## Optionally specify an array of imagePullSecrets.
        ## Secrets must be manually created in the namespace.
        ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
        ##
        # pullSecrets:
        #   - myRegistryKeySecretName

      ## Prometheus exporter pods' annotation and labels
      ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
      ##
      podAnnotations: {}

      ## Prometheus exporter service parameters
      ##
      service:
        ## NGINX Prometheus exporter port
        ##
        port: 9113
        ## Annotations for the Prometheus exporter service
        ##
        annotations:
          prometheus.io/scrape: "true"
          prometheus.io/port: "{{ .Values.metrics.service.port }}"

      ## NGINX Prometheus exporter resource requests and limits
      ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
      ##
      resources:
        # We usually recommend not to specify default resources and to leave this as a conscious
        # choice for the user. This also increases chances charts run on environments with little
        # resources, such as Minikube. If you do want to specify resources, uncomment the following
        # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
        limits: {}
        #   cpu: 100m
        #   memory: 128Mi
        requests: {}
        #   cpu: 100m
        #   memory: 128Mi

      ## Prometheus Operator ServiceMonitor configuration
      ##
      serviceMonitor:
        enabled: false
        ## Namespace in which Prometheus is running
        ##
        # namespace: monitoring

        ## Interval at which metrics should be scraped.
        ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
        ##
        # interval: 10s

        ## Timeout after which the scrape is ended
        ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
        ##
        # scrapeTimeout: 10s

        ## ServiceMonitor selector labels
        ## ref: https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#prometheus-configuration
        ##
        # selector:
        #   prometheus: my-prometheus
  2. Install bitnami/nginx with the values file:

    helm --namespace=default install test-server bitnami/nginx -f values.yaml
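
After the install completes, a quick way to confirm the release came up (a minimal sketch; it assumes the chart applies the usual app.kubernetes.io/instance label to its resources):

helm --namespace=default status test-server
kubectl --namespace=default get pods,svc -l app.kubernetes.io/instance=test-server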