## Preface

This article covers how to deploy single-node MySQL and Redis in a production K3s + Rancher environment (2023).
If you have not deployed K3s and Rancher yet, see the previous article: 【K3S】01 - 异地集群初始化.
If you want an HA cluster deployment, see this article: 【K3S】03 - Rancher 中间件集群HA部署.
## Environment

| hostname | OS | Specs | Node IP | Roles | Deployed components |
| --- | --- | --- | --- | --- | --- |
| m1 | Ubuntu-Server (20.04) | 2c4g | 192.168.0.67/32 | control-plane, etcd, master | k3s (v1.24.6+k3s1) server, nginx, Rancher (2.7.1), Helm (3.10.3) |
| n1 | Ubuntu-Server (20.04) | 1c2g | 192.168.0.102/32 | control-plane, etcd, master | k3s (v1.24.6+k3s1) server |
| m2 | Ubuntu-Server (20.04) | 2c4g | 172.25.4.244/32 | control-plane, etcd, master | k3s (v1.24.6+k3s1) server |
| harbor | Ubuntu-Server (20.04) | 2c4g | 192.168.0.88 | Docker-Hub, Jenkins CI/CD | Harbor (2.7.1), Jenkins (2.3), Docker-Compose |
All nodes are connected over a WireGuard private network, and all inter-node traffic below uses these internal IPs. For the detailed node setup, see the previous article.
## Persistent Storage

In a Kubernetes cluster, NFS is commonly used to provide persistent storage, allowing multiple Pods to share the same volume and thus achieve both data sharing and persistence.
### Installing NFS

Unless stated otherwise, all commands in this article are run as the root user.

On the NFS server (m1):
```bash
# Install the NFS server
apt-get install nfs-kernel-server -y

# Create the shared directory and open up its permissions
mkdir -p /nfs
chmod a+w+x /nfs

# Export the share: add the following line to /etc/exports
nano /etc/exports
# /nfs *(rw,sync,no_subtree_check,no_root_squash)

# Apply the exports and start the NFS server
exportfs -a
sudo /etc/init.d/nfs-kernel-server start
```
On each client node (n1, m2):

```bash
# Install the NFS client
apt-get install nfs-common -y

mkdir -p /nfs
chmod a+w+x /nfs

# Mount the share exported by m1
mount -t nfs m1:/nfs /nfs

# Make the mount persistent across reboots: add this line to /etc/fstab
nano /etc/fstab
# m1:/nfs /nfs nfs rw 0 1
```
Finally, verify the export:
```bash
root@m1:~# showmount -e
Export list for m1:
/nfs *
```
### Creating the StorageClass

Prepare the YAML files required by NFS:
**class.yaml**

Run only on the m1 node:

```bash
mkdir /root/nfsdir
kubectl apply -f /root/nfsdir/class.yaml
```
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
```
**deployment.yaml**

The nfs-client-provisioner image address is swapped here, because the image in the original upstream manifest does not work on Kubernetes v1.20 and above.

Run only on the m1 node; the IP and path below are m1's:

```bash
kubectl apply -f /root/nfsdir/deployment.yaml
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.0.67
            - name: NFS_PATH
              value: /nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.67
            path: /nfs
```
**rbac.yaml**

Run only on the m1 node:

```bash
kubectl apply -f /root/nfsdir/rbac.yaml
```
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
The output below shows that the StorageClass was created successfully:
```bash
root@m1:~/nfsdir# kubectl get sc
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  14d
nfs-client             fuseim.pri/ifs          Delete          Immediate              false                  12d
```
### Creating PersistentVolumeClaims (PVC)
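The claims can be created through the Rancher UI or from YAML. As a minimal sketch of a claim against the nfs-client StorageClass (the name `volmysql` and the 5Gi size are placeholders; the backup section below uses a similarly shaped claim named `volmysqlbackup`):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volmysql            # placeholder name; create one PVC per middleware
  namespace: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi          # adjust to your data size
  storageClassName: nfs-client
```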
Verify the claims and their bound volumes:

```bash
kubectl get pvc
kubectl get pv
```
With that, persistent storage is in place. This groundwork must be done before we can deploy MySQL and Redis, so you need to create **at least two** PVCs, one for each middleware.
## Deploying MySQL

### Configuring Service Discovery
MySQL listens on port 3306 by default, so it is enough to register 3306 as the service discovery port.
The selector is what later binds the Service to the StatefulSet we create; the key/value pair itself is arbitrary, as long as both sides match.
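No manifest is given for this step; purely as a sketch, an equivalent headless Service might look like the following (the label `app: mysql` is an assumption, while the name `testmysql` matches the `MYSQL_HOST` referenced by the backup CronJob below):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: testmysql          # referenced as MYSQL_HOST by the backup CronJob
  namespace: mysql
spec:
  clusterIP: None          # headless Service, suitable for a StatefulSet
  selector:
    app: mysql             # assumed label; must match the StatefulSet pod labels
  ports:
    - name: mysql
      port: 3306
      targetPort: 3306
```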
### Creating a Secret

This method only injects the password as a plain-text environment variable; if you need stronger security, use a certificate-based configuration instead.
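A sketch of such a Secret, matching the name (`mysql.root.password`) and key (`password`) that the backup CronJob later references; the value shown is only an example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql.root.password   # referenced via secretKeyRef by the CronJob below
  namespace: mysql
type: Opaque
stringData:
  password: "changeme"        # example value; replace with your real root password
```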
### Creating a ConfigMap

This provides MySQL's my.cnf:
```ini
###### [mysql] section ######
[mysql]
# Default character set for the MySQL client
default-character-set=utf8mb4

###### [mysqld] section ######
[mysqld]
port=3306
user=mysql
socket=/var/run/mysqld/mysqld.sock
log-bin=mysql-bin
# Enable binlog in ROW mode
binlog-format=ROW
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION
datadir=/var/lib/mysql
# MySQL 8 password authentication plugin
default_authentication_plugin=mysql_native_password
# Disable symbolic links to prevent various security risks
symbolic-links=0
# Maximum number of connections
max_connections=1000
max_connect_errors=1000
# Server character set (would otherwise default to the 8-bit latin1)
character-set-server=utf8mb4
# Default storage engine for newly created tables
default-storage-engine=INNODB
# Case handling of table names on disk (0 = stored as given, compared case-sensitively)
lower_case_table_names=0
max_allowed_packet=16M
# Time zone
default-time_zone='+8:00'
# binlog settings
# log-bin = /logs/mysql-bin.log
expire-logs-days = 30
max-binlog-size = 500M
# server-id
server-id = 1

###### [client] section ######
[client]
default-character-set=utf8mb4
```
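If you'd rather create the ConfigMap from the command line than through the UI, something like this works (the ConfigMap name `mysql-config` is a placeholder, assuming the content above is saved locally as `my.cnf`):

```bash
kubectl create configmap mysql-config --from-file=my.cnf -n mysql
```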
### Configuring the StatefulSet

Inject the environment variables `MYSQL_DATABASE` and `MYSQL_ROOT_PASSWORD`.

The pod labels here are the key/value pair configured in the Configuring Service Discovery section; they must be identical, otherwise other services will not be able to find this StatefulSet. A sketch of the whole workload follows.
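This step is configured through the UI in the original workflow; purely as an illustrative sketch, an equivalent manifest might look like the following (the PVC name `volmysql`, ConfigMap name `mysql-config`, and label `app: mysql` are the placeholders introduced above):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: mysql
spec:
  serviceName: testmysql              # the headless Service from the discovery step
  replicas: 1
  selector:
    matchLabels:
      app: mysql                      # must match the Service selector
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0.22
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_DATABASE
              value: test1            # example database name
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql.root.password
                  key: password
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
            - name: config
              mountPath: /etc/mysql/conf.d    # my.cnf drop-in directory of the official image
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: volmysql
        - name: config
          configMap:
            name: mysql-config
```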
At this point, single-node MySQL is deployed. If you get image pull timeouts, pull the image manually on the relevant node ahead of time.
### Scheduled Backups

There are many ways to back up a single node; the main options are:

- Use mysqldump to export a sql.gz (fine for few tables and little data; good for testing)
- Back up the persistent volume (if you only back up inside the local cluster rather than to S3, the data is still at risk)
- Use a third-party tool for synchronous or incremental backups
- … (multi-node Longhorn off-site disaster recovery)
Since I have little data and am shifting toward NoSQL, de-emphasizing MySQL, the simplest mysqldump export to sql.gz is enough here. And for scheduled tasks in Kubernetes, the natural choice is the CronJob controller.
Create the PersistentVolumeClaim for the backups:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volmysqlbackup
  annotations: {}
  labels: {}
  namespace: mysql          # must live in the same namespace as the CronJob that mounts it
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-client
```
Create the database backup task:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysql-dump
  namespace: mysql
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          name: mysql-dump
          annotations:
            "kubectl.kubernetes.io/job-completion-timeout": "1"
        spec:
          containers:
            - name: mysql-dump
              image: mysql:8.0.22
              env:
                - name: TZ
                  value: "Asia/Shanghai"
                - name: MYSQL_HOST
                  value: testmysql
                - name: MYSQL_USER
                  value: root
                - name: MYSQL_DATABASE
                  value: "test1 test2 test3"
                - name: MYSQL_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      key: password
                      name: mysql.root.password
              # Keep 7 days of backups: delete the folder from 7 days ago,
              # then dump the listed databases into a dated, gzipped file
              command: ["/bin/sh", "-c", "rm -rf /backup/$(date -d \"$(date +%Y-%m-%d) -7 days\" +%Y-%m-%d) && mkdir -p /backup/$(date +%Y-%m-%d) && mysqldump --host=$MYSQL_HOST -u$MYSQL_USER -p$MYSQL_PASSWORD --databases $MYSQL_DATABASE | gzip > /backup/$(date +%Y-%m-%d)/backup_$(date +%Y-%m-%d_%H-%M-%S).sql.gz"]
              volumeMounts:
                - name: job-pvc
                  mountPath: "/backup"
          restartPolicy: Never
          volumes:
            - name: job-pvc
              persistentVolumeClaim:
                claimName: volmysqlbackup
      ttlSecondsAfterFinished: 0
```
Here we tidy the script up by moving it into a ConfigMap. First, create the ConfigMap holding the script to run:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-backup
  annotations: {}
  labels: {}
  namespace: mysql
data:
  backup.sh: >-
    rm -rf /backup/$(date -d "$(date +%Y-%m-%d) -7 days" +%Y-%m-%d)

    mkdir -p /backup/$(date +%Y-%m-%d)

    mysqldump --host=$MYSQL_HOST -u$MYSQL_USER -p$MYSQL_PASSWORD --databases $MYSQL_DATABASE | gzip > /backup/$(date +%Y-%m-%d)/backup_$(date +%Y-%m-%d_%H-%M-%S).sql.gz
```
The final CronJob:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysql-dump
  namespace: mysql
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          name: mysql-dump
          annotations:
            "kubectl.kubernetes.io/job-completion-timeout": "1"
        spec:
          containers:
            - name: mysql-dump
              image: mysql:8.0.22
              env:
                - name: TZ
                  value: "Asia/Shanghai"
                - name: MYSQL_HOST
                  value: testmysql
                - name: MYSQL_USER
                  value: root
                - name: MYSQL_DATABASE
                  value: "test1 test2 test3"
                - name: MYSQL_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      key: password
                      name: mysql.root.password
              command: ["/bin/bash"]
              args: ["-c", "bash /scripts/backup.sh"]
              volumeMounts:
                - name: job-pvc
                  mountPath: "/backup"
                - name: scripts-volume
                  mountPath: /scripts
                  readOnly: true
          restartPolicy: Never
          volumes:
            - name: job-pvc
              persistentVolumeClaim:
                claimName: volmysqlbackup
            - name: scripts-volume
              configMap:
                name: config-backup
      ttlSecondsAfterFinished: 0
```
## Deploying Redis

### Configuring Service Discovery
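As with MySQL, no manifest is given for this step; a hedged sketch of an equivalent headless Service (the name `testredis` and the label `app: redis` are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: testredis          # placeholder name
spec:
  clusterIP: None          # headless Service for the StatefulSet
  selector:
    app: redis             # assumed label; must match the StatefulSet pod labels
  ports:
    - name: redis
      port: 6379
      targetPort: 6379
```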
### Configuring the ConfigMap

The configuration file content is as follows:
```conf
port 6379
timeout 0
requirepass test123456
save 900 1
save 300 10
save 60 10000
rdbcompression yes
dbfilename dump.rdb
dir /var/lib/redis
appendfsync everysec
```
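If you create this ConfigMap from the command line instead of the UI (the name `redis-config` is a placeholder, assuming the content above is saved as `redis.conf`):

```bash
kubectl create configmap redis-config --from-file=redis.conf
```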
### Configuring the StatefulSet

Note the startup arguments here: Redis must be started with an explicit configuration file, i.e. `redis-server /usr/local/redis/redis.conf`.

To populate `/usr/local/redis/redis.conf`, we need to inject the ConfigMap and the persistent storage.

Add labels whose values come from the selector in the Configuring Service Discovery section, as in the sketch below.
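Again purely as an illustrative sketch (the image tag, the PVC name `volredis`, and the ConfigMap name `redis-config` are assumptions):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: testredis              # the headless Service sketched above
  replicas: 1
  selector:
    matchLabels:
      app: redis                      # must match the Service selector
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:6.2            # assumed image tag
          command: ["redis-server", "/usr/local/redis/redis.conf"]  # start with the explicit config file
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: config
              mountPath: /usr/local/redis     # redis.conf injected from the ConfigMap
            - name: data
              mountPath: /var/lib/redis       # matches "dir" in redis.conf
      volumes:
        - name: config
          configMap:
            name: redis-config
        - name: data
          persistentVolumeClaim:
            claimName: volredis               # placeholder PVC created earlier
```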
### Other

At this point, single-node Redis is deployed.
In the next article, we will deploy an HA MySQL cluster with k3s and Rancher.