Managing Kubernetes Storage on Debian: A Structured Approach
Kubernetes storage management on Debian involves configuring persistent storage solutions, defining storage abstractions, and ensuring high availability for stateful applications. Below is a step-by-step guide covering essential components and common storage options.
1. Prerequisites
Before configuring storage, ensure your Debian nodes meet the following requirements:
- Kubernetes Cluster: Deploy a cluster using `kubeadm` (initialize with `sudo kubeadm init --pod-network-cidr=10.244.0.0/16`).
- Network Plugin: Install a plugin like Calico (`kubectl apply -f https://docs.projectcalico.org/v3.25/manifests/calico.yaml`) or Flannel (`kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml`) for Pod communication.
- Container Runtime: Use `containerd` (the default for newer Kubernetes versions) with an optimized configuration: load the `overlay` and `br_netfilter` kernel modules and set `net.bridge.bridge-nf-call-iptables=1` via sysctl.
2. Core Kubernetes Storage Concepts
Two key abstractions manage storage in Kubernetes:
- PersistentVolume (PV): A cluster-wide resource representing physical or virtual storage (e.g., NFS, Ceph). Defined statically by an administrator or dynamically via a `StorageClass`.
- PersistentVolumeClaim (PVC): A user's request for storage, specifying size and access mode. It binds to an available PV or triggers dynamic provisioning.
3. Common Storage Solutions for Debian
A. NFS (Network File System)
NFS is a simple shared storage solution for stateful applications (e.g., WordPress databases).
- On the NFS Server (Debian node):

```shell
sudo apt install nfs-kernel-server -y
sudo mkdir -p /data/nfs-server
echo "/data/nfs-server *(rw,async,no_subtree_check)" | sudo tee -a /etc/exports
sudo systemctl start nfs-kernel-server && sudo systemctl enable nfs-kernel-server
```

- In Kubernetes:
  - PV (Static):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany  # Allows multiple Pods to write
  nfs:
    server: <NFS_SERVER_IP>
    path: /data/nfs-server
```

  - PVC (Request Storage):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

Apply each manifest with `kubectl apply -f <filename>.yaml`.
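To consume the claim, mount it in a Pod. A minimal sketch, assuming the PV and PVC above have been applied (the Pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod          # illustrative name
spec:
  containers:
    - name: app
      image: nginx:alpine     # illustrative image
      volumeMounts:
        - name: nfs-data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-data
      persistentVolumeClaim:
        claimName: nfs-pvc    # references the PVC defined above
```

Because the access mode is `ReadWriteMany`, several such Pods on different nodes can mount the same NFS-backed volume concurrently.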
B. Ceph (Unified Storage)
Ceph provides scalable block, file, and object storage. Use Rook to simplify deployment on Kubernetes.
- Prerequisites: Install `ceph-common` (`sudo apt install ceph-common -y`).
- With Rook:
  - Create the Rook operator:

```shell
kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-1.13/cluster/examples/kubernetes/ceph/operator.yaml
```

  - Deploy a Ceph cluster (adjust node roles in the manifest):

```shell
kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-1.13/cluster/examples/kubernetes/ceph/cluster.yaml
```

  - Create a storage class for dynamic provisioning:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: <CLUSTER_ID>  # From the Rook dashboard
  pool: replicapool
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner-secret
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node-secret
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
```

  - Reference the `rook-ceph-block` class in PVCs to provision Ceph block volumes dynamically.
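A claim against this class triggers dynamic provisioning, so no PV has to be defined by hand. A minimal sketch (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-block-pvc        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce           # RBD block volumes are single-node writers
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 5Gi
```

When a Pod using this claim is scheduled, the CSI driver creates a matching RBD image in the `replicapool` pool and binds a PV to the claim.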
C. Longhorn (Distributed Block Storage)
Longhorn is a lightweight, distributed block storage solution ideal for edge and small-to-medium clusters.
- Installation:

```shell
helm repo add longhorn https://charts.longhorn.io
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
```

- Usage: Longhorn automatically creates a `longhorn` storage class. Use it in PVCs to provision block volumes dynamically.
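For stateful workloads, the `longhorn` class is typically consumed through a `volumeClaimTemplate`, which gives each replica its own volume. A minimal sketch (the names, image, and sizes are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db                 # illustrative name
spec:
  serviceName: demo-db
  replicas: 2
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
        - name: db
          image: redis:7-alpine     # illustrative image
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: longhorn  # the class Longhorn creates
        resources:
          requests:
            storage: 1Gi
```

Each replica (`demo-db-0`, `demo-db-1`) gets an independent Longhorn-replicated volume that survives Pod rescheduling.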
4. Dynamic Provisioning with StorageClasses
A `StorageClass` defines how PVs are dynamically created (e.g., by Ceph or Longhorn). Key parameters include:
- Provisioner: Identifies the plugin (e.g., `kubernetes.io/no-provisioner` for static local volumes, `rook-ceph.rbd.csi.ceph.com` for Ceph).
- Volume Binding Mode: `WaitForFirstConsumer` (delays binding until a Pod is scheduled) or `Immediate` (binds as soon as the PVC is created).
- Reclaim Policy: `Delete` (removes the PV when the PVC is deleted) or `Retain` (preserves the PV for manual reuse).
Example of a local storage class (for direct-attached disks):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
```

Apply with `kubectl apply -f storageclass.yaml`.
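Because `kubernetes.io/no-provisioner` never creates volumes itself, each local disk still needs a manually defined PV referencing this class. A minimal sketch (the PV name, path, node name, and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-0            # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd0     # illustrative mount point on the node
  nodeAffinity:               # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1      # illustrative node name
```

With `WaitForFirstConsumer`, the claim stays pending until a Pod is scheduled, so the scheduler can pick the node where this disk actually lives.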
5. Best Practices for Optimization
- Disable Swap: Turn off swap on all nodes to prevent kubelet failures:

```shell
sudo swapoff -a && sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```

- Kernel Parameters: Load the required modules and set sysctl rules for containerd:

```shell
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay && sudo modprobe br_netfilter
echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system
```
- High-Performance Plugins: Choose distributed storage (Ceph, GlusterFS) for HA and performance. Use SSDs for critical workloads.
- Monitoring: Use Prometheus + Grafana to track storage metrics (e.g., PV usage, IOPS).
By following these steps, you can effectively manage Kubernetes storage on Debian, ensuring your stateful applications have reliable, scalable, and performant storage.