
Helm Package Management Practice Guide


Installing Helm

Here we choose the simplest installation method: the official binary release.

1. Download the Helm version you need.
2. Extract it (tar -zxvf helm-v2.8.2-linux-amd64.tgz).
3. Copy the resulting helm binary into a directory on $PATH (cp linux-amd64/helm /usr/local/bin/helm).
4. Run helm help; if it prints the help info, the Helm client installation is complete (a consolidated sketch of these steps follows below).
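
Putting the steps together, a minimal install sketch; the download URL is an assumption for v2.8.2, so verify it against the official Helm release listing:

# Download, unpack, and install the Helm v2.8.2 client binary
# (the URL below is assumed; check the official releases for the exact archive name)
curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.8.2-linux-amd64.tar.gz
tar -zxvf helm-v2.8.2-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/helm
helm help   # should print the help text shown below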

[root@tcz-dev-adam ~]# helm help
The Kubernetes package manager
To begin working with Helm, run the 'helm init' command:
$ helm init
This will install Tiller to your running Kubernetes cluster.
It will also set up any necessary local configuration.
Common actions from this point include:
- helm search: search for charts
- helm fetch: download a chart to your local directory to view
- helm install: upload the chart to Kubernetes
- helm list: list releases of charts
…………

Installing Tiller

Here Tiller is installed by specifying its image explicitly, so prepare the image first.

1. Pull a suitable image from Docker Hub and push it to your own registry (see the push note below).

[root@tcz-dev-adam ~]# docker pull jiang7865134/tiller:v2.8.2 
[root@tcz-dev-adam ~]# docker tag jiang7865134/tiller:v2.8.2 hub.xxx.xxx/tiller:v2.8.2
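
To actually land the retagged image in your own registry, a docker push should follow (assuming you are already logged in to hub.xxx.xxx):

[root@tcz-dev-adam ~]# docker push hub.xxx.xxx/tiller:v2.8.2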

 

2. Install Tiller via helm init. The service account is set to tiller here, so the corresponding RBAC resources must be created before installing Tiller.

# Create the Tiller RBAC resources. cluster-admin (the broadest permission) is used by default here;
# if that is more than you need, change the ClusterRoleBinding or bind a custom role instead.
[root@tcz-dev-adam ~]# cat tiller-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

# Create the Tiller RBAC resources
[root@tcz-dev-adam ~]# kubectl create -f tiller-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
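
If cluster-admin is broader than you want, a namespace-scoped binding is one alternative. The sketch below is illustrative only (the tiller-world namespace and the API group/resource lists are assumptions; adjust them to what your releases actually create):

# tiller-role.yaml (illustrative): limit Tiller to a single namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: tiller-world
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system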

3. Install Tiller, specifying the Tiller image and the service account

[root@tcz-dev-adam ~]# helm init --service-account tiller --tiller-image hub.xxx.xxx/tiller:v2.8.2 --debug

NOTE:

If the repositories file could not be fetched because of network problems and you see the error below, copy a /root/.helm/repository/repositories.yaml from another machine and then rerun the helm init command above.

[root@tcz-dev-adam ~]# helm repo list
Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
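
A minimal sketch of that workaround (working-host is a hypothetical machine that already has a healthy Helm client):

[root@tcz-dev-adam ~]# mkdir -p /root/.helm/repository
[root@tcz-dev-adam ~]# scp root@working-host:/root/.helm/repository/repositories.yaml /root/.helm/repository/
[root@tcz-dev-adam ~]# helm init --service-account tiller --tiller-image hub.xxx.xxx/tiller:v2.8.2 --debug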

After Tiller is installed, kubectl get pods,deployment,service -o wide -n kube-system | grep tiller shows the resources that were created.
By default the Tiller deployment has no nodeSelector and no tolerations; you can edit the deployment to reschedule it onto specific nodes as needed (see the sketch below).
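
For example, pinning Tiller to labeled nodes might look like the following (tiller-deploy is the default deployment name created by helm init; the label and toleration values are illustrative assumptions):

[root@tcz-dev-adam ~]# kubectl -n kube-system edit deployment tiller-deploy
# then add something like this under spec.template.spec:
#   nodeSelector:
#     node-role.kubernetes.io/infra: "true"
#   tolerations:
#   - key: node-role.kubernetes.io/infra
#     operator: Exists
#     effect: NoSchedule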

4. Run helm version to confirm Helm is working; both the client and server versions should be displayed normally.

[root@tcz-dev-adam ~]# helm version
Client: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}

NOTE:

If you see the error below, there are two likely causes: socat is not installed (yum install -y socat), or the HELM_HOST environment variable is not exported. Get the Tiller service's ClusterIP and port with kubectl get service -n kube-system | grep tiller, then export HELM_HOST=<tiller-service-ClusterIP>:44134. A sketch of both fixes follows the error output.

E0327 10:36:41.258053 30270 portforward.go:331] an error occurred forwarding 39855 -> 44134: error forwarding port 44134 to pod 
a8d76186f92eea818842492a803683cc91cc24f4bff60c3cf3f4a7cd2f34ad53, 
uid : unable to do port forwarding: socat not found.
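
A minimal sketch of the two fixes (tiller-deploy is the default service name created by helm init; adjust if yours differs):

# Fix 1: install socat on the nodes (kubectl port-forward depends on it)
yum install -y socat
# Fix 2: point the Helm client directly at the Tiller service instead of port-forwarding
export HELM_HOST=$(kubectl get service tiller-deploy -n kube-system -o jsonpath='{.spec.clusterIP}'):44134
helm version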

(Optional) Uninstall Tiller and add command completion

# uninstall tiller
[root@tcz-dev-adam ~]# helm reset ( or helm reset -f )

# Add bash completion for helm
[root@tcz-dev-adam ~]# helm completion bash > .helmrc 
[root@tcz-dev-adam ~]# echo "source .helmrc" >> .bashrc

Installing Services with Helm

1. List the currently available repos

Helm comes preconfigured with two repositories: stable and local. stable is the official repository, and local is a local repository for charts you develop yourself.

[root@tcz-dev-adam ~]# helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879/charts

2. Use helm search to see which charts can currently be installed

[root@tcz-dev-adam ~]# helm search
NAME CHART VERSION APP VERSION DESCRIPTION
stable/acs-engine-autoscaler 2.2.2 2.1.1 DEPRECATED Scales worker nodes within agent pools
stable/aerospike 0.2.3 v4.5.0.5 A Helm chart for Aerospike in Kubernetes
stable/airflow 2.3.0 1.10.0 Airflow is a platform to programmatically autho...
stable/ambassador 1.1.5 0.50.3 A Helm chart for Datawire Ambassador
stable/anchore-engine 0.12.0 0.3.3 Anchore container analysis and policy evaluatio...
stable/apm-server 0.1.0 6.2.4 The server receives data from the Elastic APM a
……

Or search for a specific chart directly; Helm lists every matching chart that can be installed from any repo:

[root@tcz-dev-adam base]# helm search prometheus
NAME CHART VERSION APP VERSION DESCRIPTION
local/prometheus 0.1.2 1.0 A Helm Prometheus chart for Kubernetes
stable/prometheus 8.9.0 2.8.0 Prometheus is a monitoring system and time seri...
stable/prometheus-adapter v0.4.1 v0.4.1 A Helm chart for k8s prometheus adapter
stable/prometheus-blackbox-exporter 0.2.0 0.12.0 Prometheus Blackbox Exporter
stable/prometheus-cloudwatch-exporter 0.4.2 0.5.0 A Helm chart for prometheus cloudwatch-exporter
stable/prometheus-snmp-exporter 0.0.2 0.14.0 Prometheus SNMP Exporter
stable/prometheus-to-sd 0.1.1 0.2.2 Scrape metrics stored in prometheus format and ...
telemetry/prometheus 0.1.2 1.0 A Helm Prometheus chart for Kubernetes
stable/elasticsearch-exporter 1.1.3 1.0.2 Elasticsearch stats exporter for Prometheus

3. Install a chart with Helm, specifying the chart to install plus the release name and namespace

[root@tcz-dev-adam ~]# helm install stable/influxdb -n influxdb --namespace kube-system
# The output has three parts
# (1) A description of this deployment, including the name set with -n (a random name is generated if omitted) and the namespace set with --namespace (default: default)
NAME: influxdb
LAST DEPLOYED: Wed Mar 27 15:17:13 2019
NAMESPACE: kube-system
STATUS: DEPLOYED # DEPLOYED means the release has been deployed to the cluster
# (2) The resources this release deployed to the cluster: configmap/service/deployment/pod
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
influxdb 1 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
influxdb ClusterIP 10.109.47.137 <none> 8086/TCP,8088/TCP 0s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
influxdb 1 0 0 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
influxdb-855769f97b-mbqff 0/1 Pending 0 0s
# (3) The NOTES section shows how to use the release
NOTES:
InfluxDB can be accessed via port 8086 on the following DNS name from within your cluster:
- http://influxdb.kube-system:8086
You can easily connect to the remote instance with your local influx cli. To forward the API port to localhost:8086 run the following:
- kubectl port-forward --namespace kube-system $(kubectl get pods --namespace kube-system -l app=influxdb -o jsonpath='{ .items[0].metadata.name }') 8086:8086
You can also connect to the influx cli from inside the container. To open a shell session in the InfluxDB pod run the following:
- kubectl exec -i -t --namespace kube-system $(kubectl get pods --namespace kube-system -l app=influxdb -o jsonpath='{.items[0].metadata.name}') /bin/sh
To tail the logs for the InfluxDB pod run the following:
- kubectl logs -f --namespace kube-system $(kubectl get pods --namespace kube-system -l app=influxdb -o jsonpath='{ .items[0].metadata.name }')

After the install succeeds, the resources can be seen in the cluster:

[root@tcz-dev-adam ~]# kubectl get deployment -n kube-system |grep influxdb
influxdb 1 1 1 1 3m
[root@tcz-dev-adam ~]# kubectl get configmap -n kube-system |grep influxdb
influxdb 1 3m
influxdb.v1 1 3m
[root@tcz-dev-adam ~]# kubectl get service -n kube-system |grep influxdb
influxdb ClusterIP 10.109.47.137 <none> 8086/TCP,8088/TCP 3m
[root@tcz-dev-adam ~]# kubectl get pod -n kube-system |grep influxdb
influxdb-855769f97b-mbqff 1/1 Running 0 3m
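
To inspect or clean up the release later, the usual Helm v2 commands apply (a quick reference, not part of the original walkthrough):

helm status influxdb                  # re-print the release status and NOTES
helm list --namespace kube-system     # list releases in the namespace
helm delete influxdb --purge          # remove the release and its history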

Developing a Helm Chart

1. First create a custom chart locally

Although the official repositories provide a large number of charts, deploying our own microservice applications still requires developing our own charts.

[root@tcz-dev-adam helm-hub]# helm create kube-state-metrics
Creating kube-state-metrics

Helm creates the kube-state-metrics directory and generates the standard chart files, which you can modify to produce your own YAML. A newly created chart ships with an nginx application example in values.yaml by default.

[root@tcz-dev-adam helm-hub]# tree kube-state-metrics
kube-state-metrics
├── charts
├── Chart.yaml
├── templates
│ ├── deployment.yaml
│ ├── _helpers.tpl
│ ├── ingress.yaml
│ ├── NOTES.txt
│ └── service.yaml
└── values.yaml

2. You can delete all of the files under the templates directory, replace them with your own YAML, and develop your chart around values.yaml

After the replacement the tree looks like this. templates.tpl is the original _helpers.tpl, renamed here because it is used purely as a template file.

[root@tcz-dev-adam helm-hub]# tree kube-state-metrics
kube-state-metrics
├── charts
├── Chart.yaml
├── templates
│ ├── configmap.yaml
│ ├── deployment.yaml
│ ├── rbac.yaml
│ └── templates.tpl
└── values.yaml

If the chart must be deployed to several environments that differ only in a few parameters, you can define a template, put the conditional logic inside it, and pass a specific parameter at install time to select the right values. (A sample values.yaml for these references is sketched after the deployment.yaml snippet below.)

[root@tcz-dev-adam templates]# cat templates.tpl
{{- define "kube-state-metrics.containers.args" -}}
args:
- --kubeconfig
- /etc/kube-state-metrics/kubeconfig
- --apiserver
{{- /* .Values.context is defined in values.yaml */ -}}
{{- if eq (.Values.context | upper) "ZONE-XXX" }}
- https://10.xx.xxx.xxx:443
{{- else if eq (.Values.context | upper) "ZONE-XXX" }}
- https://10.xx.xxx.xxx:443
{{- else }}
- https://127.0.0.1:6443
{{- end }}
- --collectors
- namespaces,nodes,pods
{{- end -}}

# The kube-state-metrics.containers.args template defined above is then referenced in deployment.yaml
[root@tcz-dev-adam templates]# cat deployment.yaml
……
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      containers:
      - name: kube-state-metrics
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: IfNotPresent
{{ include "kube-state-metrics.containers.args" . | indent 8 }}
……
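
For reference, a values.yaml that satisfies the references above might look like this (the repository and tag values are illustrative assumptions):

# values.yaml (illustrative)
context: ZONE-XXX                              # selects the apiserver branch in templates.tpl
image:
  repository: hub.xxx.xxx/kube-state-metrics   # assumed private registry path
  tag: v1.5.0                                  # assumed image tag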

3. Debug the chart

helm lint and helm install --dry-run --debug can be used to check whether the newly developed chart is well-formed.

[root@tcz-dev-adam kube-state-metrics]# helm lint
==> Linting .
[INFO] Chart.yaml: icon is recommended
[ERROR] templates/: parse error in "kube-state-metrics/templates/templates.tpl": template: kube-state-metrics/templates/templates.tpl:6: unexpected EOF
Error: 1 chart(s) linted, 1 chart(s) failed

Fix the chart according to the error message; once the problem is resolved, helm lint reports:

[root@tcz-dev-adam kube-state-metrics]# helm lint
==> Linting .
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures

--dry-run simulates installing the chart and prints the YAML rendered from every template, so you can inspect the output that would be deployed and check that it matches expectations.

[root@tcz-dev-adam helm-hub]# helm install ./kube-state-metrics --dry-run --set context=ZONE-xxx --debug
[debug] Created tunnel using local port: '40281'
[debug] SERVER: "127.0.0.1:40281"
[debug] Original chart version: ""
[debug] CHART PATH: /root/helm-hub/kube-state-metrics

NAME: silly-dragon
REVISION: 1
…...
…
…

4. Deploy the developed chart for testing

Remove the --dry-run flag used for debugging:

[root@tcz-dev-adam helm-hub]# helm install ./kube-state-metrics --name kube-state-metrics --namespace kube-system --set context=ZONE-xxx --debug

5. Package the chart, generate an index, and upload them to a remote file server (a self-hosted repo)

[root@tcz-dev-adam helm-hub]# helm package ./kube-state-metrics -d charts-packages/
Successfully packaged chart and saved it to: charts-packages/kube-state-metrics-0.1.0.tgz

# Generate the index for the remote repo
[root@tcz-dev-adam helm-hub]# helm repo index charts-packages/ --url http://xxxx.xxx.xxx.com/devops/kubernetes/charts
[root@tcz-dev-adam helm-hub]# cat charts-packages/index.yaml # the newly generated index.yaml records information about every chart in the repository
apiVersion: v1
entries:
  kube-state-metrics:
  - apiVersion: v1
    appVersion: "1.0"
    created: 2019-03-28T19:00:14.422812821+08:00
    description: A Helm chart for Kubernetes
    digest: d7a8efac3149268df45411b50aa346f154f5aac1cc8cc63352a1e20159672fe5
    name: kube-state-metrics
    urls:
    - http://xxxx.xxx.xxx.com/devops/kubernetes/charts/kube-state-metrics-0.1.0.tgz
    version: 0.1.0
  prometheus:
  - apiVersion: v1
    appVersion: "1.0"
    created: 2019-03-28T19:00:14.456720188+08:00
    description: A Helm Prometheus chart for Kubernetes
    digest: 940d457c6cb9047869f4bccb3a7c49a3a6f97bc3cb39ebc2c743dc3dc1f138e2
    name: prometheus
    urls:
    - http://xxxx.xxx.xxx.com/devops/kubernetes/charts/prometheus-0.1.2.tgz
    version: 0.1.2
  - apiVersion: v1
    appVersion: "1.0"
    created: 2019-03-28T19:00:14.448098372+08:00
    description: A Helm Prometheus chart for Kubernetes
    digest: 010925071ffa5350fb0e57f7c22e9dbc1857b3cdf2f764f49fbade6b13a020ee
    name: prometheus
    urls:
    - http://xxxx.xxx.xxx.com/devops/kubernetes/charts/prometheus-0.1.1.tgz
    version: 0.1.1
  - apiVersion: v1
    appVersion: "1.0"
    created: 2019-03-28T19:00:14.438597594+08:00
    description: A Helm chart for Kubernetes
    digest: 42859453dbe55b790c86949947f609f8a23cac59a605be79910ecc17b511d5cc
    name: prometheus
    urls:
    - http://xxxx.xxx.xxx.com/devops/kubernetes/charts/prometheus-0.1.0.tgz
    version: 0.1.0
generated: 2019-03-28T19:00:14.421820865+08:00

Upload the chart packages and the index to the remote repo:

[root@tcz-dev-adam helm-hub]# scp charts-packages/* root@xxxx.xxx.xxx.com:/var/www/dl/devops/kubernetes/charts
root@xxxx.xxx.xxx.com's password:
index.yaml 100% 1525 1.4MB/s 00:00
kube-state-metrics-0.1.0.tgz 100% 2030 1.5MB/s 00:00
prometheus-0.1.0.tgz 100% 2781 2.4MB/s 00:00
prometheus-0.1.1.tgz 100% 2021 1.2MB/s 00:00
prometheus-0.1.2.tgz

6. On any other machine that has the Helm client installed, add the remote repo to Helm with helm repo add

Here the repo is named telemetry:

[root@SVRxxxxxx ~]# helm repo add telemetry http://xxxx.xxx.xxx.com/devops/kubernetes/charts
"telemetry" has been added to your repositories
[root@tcz-dev-adam ~]# helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879/charts
telemetry http://xxxx.xxx.xxx.com/devops/kubernetes/charts 
# Update the repos
[root@SVRxxxxx ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "telemetry" chart repository
Update Complete. ⎈ Happy Helming!⎈

7. Install the chart from the newly added repo

# Search for the chart that was just uploaded
[root@SVRxxxx ~]# helm search kube-state-metrics
WARNING: Repo "local" is corrupt or missing. Try 'helm repo update'.
NAME CHART VERSION APP VERSION DESCRIPTION
telemetry/kube-state-metrics 0.1.0 1.0 A Helm chart for Kubernetes
# Install the chart
[root@SVRxxxx ~]# helm install telemetry/kube-state-metrics --name kube-state-metrics --namespace kube-system --set context=ZONE-xxx --debug
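
When a new chart version is published later, the same repo can drive an in-place upgrade (a sketch; the release and chart names follow the install above):

[root@SVRxxxx ~]# helm repo update
[root@SVRxxxx ~]# helm upgrade kube-state-metrics telemetry/kube-state-metrics --set context=ZONE-xxx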

Some other commonly used commands:

# List pods in all namespaces
kubectl get pods --all-namespaces

kubectl apply -f prometheus/kubernetes-prometheus.yaml

kubectl proxy --port=8888 --address=10.0.197.61 --accept-hosts=^*$

http://10.0.197.61:8888/api/v1/namespaces/kube-system/services/kibana-logging/proxy
kubectl get crd;
kubectl delete crd alertmanagers.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com

# Start monitoring
kubectl get pods --all-namespaces
kubectl get clusterroles prometheus-operator -o yaml

kubectl create -f manifests/setup && \
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done && \
kubectl create -f manifests/

kubectl get customresourcedefinitions alertmanagers.monitoring.coreos.com -o yaml

kubectl delete namespace monitoring
kubectl delete clusterroles prometheus-operator
kubectl delete clusterrolebindings prometheus-operator
kubectl delete customresourcedefinitions alertmanagers.monitoring.coreos.com
kubectl delete customresourcedefinitions podmonitors.monitoring.coreos.com
kubectl delete customresourcedefinitions prometheuses.monitoring.coreos.com
kubectl delete customresourcedefinitions prometheusrules.monitoring.coreos.com
kubectl delete customresourcedefinitions servicemonitors.monitoring.coreos.com
kubectl delete customresourcedefinitions thanosrulers.monitoring.coreos.com

nohup kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090 --address 10.0.197.61 >> log.log 2>&1 &
nohup kubectl port-forward $(kubectl get pods --selector=app=grafana -n monitoring --output=jsonpath="{.items..metadata.name}") -n monitoring 3000 --address 10.0.197.61 >> log.log 2>&1 &
nohup kubectl port-forward -n monitoring alertmanager-main-0 9093 --address 10.0.197.61 >> log.log 2>&1 &

kubectl get ingress

for f in manifests/*.yaml
do
  sed -i 's/namespace: custom-metrics/namespace: monitoring/g' "$f" && kubectl apply -f "$f"
done

sed 's/namespace: custom-metrics/namespace: monitoring/g' manifests/custom-metrics-apiserver-auth-delegator-cluster-role-binding.yaml | kubectl apply -f -
