Following the Rook Ceph Storage Quickstart for Rook v1.3, let's verify that dynamic provisioning of PersistentVolumes works even on an on-prem bare-metal Kubernetes cluster with none of the storage features of a managed cloud K8s or an IaaS (such as vSphere Volume).
Ceph Storage Quickstart | Rook Docs
- Environment
- Preparation
- git clone
- Deploy the Rook Operator
- Create the Rook Ceph cluster (test environment version)
- toolbox
- Create and try a StorageClass
- (Appendix) ceph commands
- Links
- Addendum: on notation (Rook/Ceph)
Environment
A Kubernetes v1.18 cluster built on ESXi, with 1 master and 2 workers. (All nodes: 4 CPU / 8 GB RAM)
Preparation
The current documentation lists the prerequisites as follows:
- Raw devices (no partitions or formatted filesystems)
- Raw partitions (no formatted filesystem)
- PVs available from a storage class in block mode
The simplest option is for each node to have one unformatted (and unmounted) disk, separate from the OS installation disk.
Actually, when I first touched Rook-Ceph around the end of last year, I understood neither Rook nor Ceph and didn't grasp this requirement. I couldn't figure out why the storage never became usable (and couldn't isolate whether the cause was Rook, Ceph, OpenShift, or the air-gapped environment), so I left it alone for a while 💦
I barely remember, but this apparently wasn't in the docs back then; it's nice that the current v1.3 documentation spells it out.
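To check whether a disk on a node qualifies, lsblk is enough; a device with an empty FSTYPE column (and no mount point) has no filesystem, so Rook can consume it. A quick sanity check, assuming the spare disk shows up as /dev/sdb:

$ lsblk -f
# sdb should appear with nothing in its FSTYPE and MOUNTPOINT columns,
# i.e. a raw, unformatted, unmounted device that Rook can use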
git clone
$ git clone --single-branch --branch release-1.3 https://github.com/rook/rook.git
Fetch the files locally.
(Passing the remote file URLs as arguments to kubectl would probably work too.)
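For example, something like this should work, pointing kubectl at the raw files of the release-1.3 branch (the raw URL is an assumption based on the repository layout; I used the local clone here):

$ kubectl create -f https://raw.githubusercontent.com/rook/rook/release-1.3/cluster/examples/kubernetes/ceph/common.yaml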
Deploy the Rook Operator
$ cd rook/cluster/examples/kubernetes/ceph
First, create the namespace and the various other resources the Operator needs.
[zaki@cloud-dev ceph]$ kubectl create -f common.yaml
namespace/rook-ceph created
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephclients.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created
customresourcedefinition.apiextensions.k8s.io/objectbuckets.objectbucket.io created
customresourcedefinition.apiextensions.k8s.io/objectbucketclaims.objectbucket.io created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-object-bucket created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt-rules created
role.rbac.authorization.k8s.io/rook-ceph-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global-rules created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster-rules created
clusterrole.rbac.authorization.k8s.io/rook-ceph-object-bucket created
serviceaccount/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
serviceaccount/rook-ceph-osd created
serviceaccount/rook-ceph-mgr created
serviceaccount/rook-ceph-cmd-reporter created
role.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system-rules created
role.rbac.authorization.k8s.io/rook-ceph-mgr created
role.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
podsecuritypolicy.policy/00-rook-privileged created
clusterrole.rbac.authorization.k8s.io/psp:rook created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-default-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter-psp created
serviceaccount/rook-csi-cephfs-plugin-sa created
serviceaccount/rook-csi-cephfs-provisioner-sa created
role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created
clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin-rules created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner-rules created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created
serviceaccount/rook-csi-rbd-plugin-sa created
serviceaccount/rook-csi-rbd-provisioner-sa created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin-rules created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner-rules created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
The namespace has been created:
[zaki@cloud-dev ceph]$ kubectl describe ns rook-ceph
Name:         rook-ceph
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.
Next, deploy the Operator itself.
[zaki@cloud-dev ceph]$ kubectl create -f operator.yaml
configmap/rook-ceph-operator-config created
deployment.apps/rook-ceph-operator created
[zaki@cloud-dev ceph]$ kubectl get pod -n rook-ceph
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-5ff5c45d49-s2j4x   1/1     Running   0          4m36s
rook-discover-jdqg8                   1/1     Running   0          3m12s
rook-discover-tp5np                   1/1     Running   0          3m12s
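Instead of re-running kubectl get pod by hand, kubectl wait can block until the Operator's Deployment reports Available (a minimal sketch; the timeout value is arbitrary):

$ kubectl -n rook-ceph wait --for=condition=Available deployment/rook-ceph-operator --timeout=300s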
Incidentally, for OpenShift there is a separate operator-openshift.yaml file.
Create the Rook Ceph cluster (test environment version)
Use cluster-test.yaml rather than cluster.yaml.
[zaki@cloud-dev ceph]$ kubectl create -f cluster-test.yaml
configmap/rook-config-override created
cephcluster.ceph.rook.io/my-cluster created
[zaki@cloud-dev ceph]$ kubectl get pod -n rook-ceph -o wide
NAME                                            READY   STATUS              RESTARTS   AGE     IP               NODE                        NOMINATED NODE   READINESS GATES
csi-cephfsplugin-provisioner-7469b99d4b-v4jt7   0/5     ContainerCreating   0          19s     <none>           k8s-worker01.esxi.jp-z.jp   <none>           <none>
csi-cephfsplugin-provisioner-7469b99d4b-vjpdh   0/5     ContainerCreating   0          19s     <none>           k8s-worker02.esxi.jp-z.jp   <none>           <none>
csi-cephfsplugin-rmsk7                          0/3     ContainerCreating   0          20s     192.168.0.126    k8s-worker02.esxi.jp-z.jp   <none>           <none>
csi-cephfsplugin-xtpw8                          0/3     ContainerCreating   0          20s     192.168.0.125    k8s-worker01.esxi.jp-z.jp   <none>           <none>
csi-rbdplugin-6mnzg                             0/3     ContainerCreating   0          20s     192.168.0.126    k8s-worker02.esxi.jp-z.jp   <none>           <none>
csi-rbdplugin-provisioner-865f4d8d-lg7tq        0/6     ContainerCreating   0          20s     <none>           k8s-worker01.esxi.jp-z.jp   <none>           <none>
csi-rbdplugin-provisioner-865f4d8d-zs66b        0/6     ContainerCreating   0          20s     <none>           k8s-worker02.esxi.jp-z.jp   <none>           <none>
csi-rbdplugin-wxsxt                             3/3     Running             0          20s     192.168.0.125    k8s-worker01.esxi.jp-z.jp   <none>           <none>
rook-ceph-detect-version-4l9sn                  0/1     PodInitializing     0          26s     10.244.219.196   k8s-worker02.esxi.jp-z.jp   <none>           <none>
rook-ceph-operator-5ff5c45d49-s2j4x             1/1     Running             0          8m12s   10.244.127.66    k8s-worker01.esxi.jp-z.jp   <none>           <none>
rook-discover-jdqg8                             1/1     Running             0          6m48s   10.244.127.67    k8s-worker01.esxi.jp-z.jp   <none>           <none>
rook-discover-tp5np                             1/1     Running             0          6m48s   10.244.219.195   k8s-worker02.esxi.jp-z.jp   <none>           <none>
After waiting a while, it settles into the following state:
[zaki@cloud-dev ceph]$ kubectl get pod -n rook-ceph -o wide
NAME                                                    READY   STATUS      RESTARTS   AGE     IP               NODE                        NOMINATED NODE   READINESS GATES
csi-cephfsplugin-provisioner-7469b99d4b-v4jt7           5/5     Running     0          12m     10.244.127.70    k8s-worker01.esxi.jp-z.jp   <none>           <none>
csi-cephfsplugin-provisioner-7469b99d4b-vjpdh           5/5     Running     0          12m     10.244.219.198   k8s-worker02.esxi.jp-z.jp   <none>           <none>
csi-cephfsplugin-rmsk7                                  3/3     Running     0          12m     192.168.0.126    k8s-worker02.esxi.jp-z.jp   <none>           <none>
csi-cephfsplugin-xtpw8                                  3/3     Running     0          12m     192.168.0.125    k8s-worker01.esxi.jp-z.jp   <none>           <none>
csi-rbdplugin-6mnzg                                     3/3     Running     0          12m     192.168.0.126    k8s-worker02.esxi.jp-z.jp   <none>           <none>
csi-rbdplugin-provisioner-865f4d8d-lg7tq                6/6     Running     0          12m     10.244.127.69    k8s-worker01.esxi.jp-z.jp   <none>           <none>
csi-rbdplugin-provisioner-865f4d8d-zs66b                6/6     Running     0          12m     10.244.219.197   k8s-worker02.esxi.jp-z.jp   <none>           <none>
csi-rbdplugin-wxsxt                                     3/3     Running     0          12m     192.168.0.125    k8s-worker01.esxi.jp-z.jp   <none>           <none>
rook-ceph-mgr-a-59dfd65fb9-8h4hf                        1/1     Running     0          9m8s    10.244.219.201   k8s-worker02.esxi.jp-z.jp   <none>           <none>
rook-ceph-mon-a-56f68dc774-k4s4m                        1/1     Running     0          9m22s   10.244.219.200   k8s-worker02.esxi.jp-z.jp   <none>           <none>
rook-ceph-operator-5ff5c45d49-s2j4x                     1/1     Running     0          20m     10.244.127.66    k8s-worker01.esxi.jp-z.jp   <none>           <none>
rook-ceph-osd-0-6b6598769-62qb6                         1/1     Running     0          8m59s   10.244.219.203   k8s-worker02.esxi.jp-z.jp   <none>           <none>
rook-ceph-osd-1-c7947546-gc6dh                          1/1     Running     0          7m23s   10.244.127.72    k8s-worker01.esxi.jp-z.jp   <none>           <none>
rook-ceph-osd-prepare-k8s-worker01.esxi.jp-z.jp-7m7hz   0/1     Completed   0          9m6s    10.244.127.71    k8s-worker01.esxi.jp-z.jp   <none>           <none>
rook-ceph-osd-prepare-k8s-worker02.esxi.jp-z.jp-zjhzp   0/1     Completed   0          9m5s    10.244.219.202   k8s-worker02.esxi.jp-z.jp   <none>           <none>
rook-discover-jdqg8                                     1/1     Running     0          18m     10.244.127.67    k8s-worker01.esxi.jp-z.jp   <none>           <none>
rook-discover-tp5np                                     1/1     Running     0          18m     10.244.219.195   k8s-worker02.esxi.jp-z.jp   <none>           <none>
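Besides watching the pods, the overall state can be read off the CephCluster resource itself (my-cluster, as created above; the exact columns shown vary by Rook version):

$ kubectl -n rook-ceph get cephcluster my-cluster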
toolbox
I don't fully understand how to use it yet, but it provides maintenance tools, so let's install it.
[zaki@cloud-dev ceph]$ kubectl apply -f toolbox.yaml
deployment.apps/rook-ceph-tools created
[zaki@cloud-dev ceph]$ kubectl get pod -l app=rook-ceph-tools -n rook-ceph
NAME                              READY   STATUS    RESTARTS   AGE
rook-ceph-tools-5754d4d5d-b2sp2   1/1     Running   0          74s
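Since the pod name carries a random suffix that changes on every rollout, stashing it in a variable saves retyping in the commands below (a small convenience sketch, reusing the app=rook-ceph-tools label from above):

$ TOOLS_POD=$(kubectl get pod -n rook-ceph -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -n rook-ceph -it "$TOOLS_POD" -- ceph status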
[zaki@cloud-dev ceph]$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph status
  cluster:
    id:     06fcac99-7645-4b3d-95bb-085c2245c23a
    health: HEALTH_WARN
            1 pool(s) have no replicas configured

  services:
    mon: 1 daemons, quorum a (age 17m)
    mgr: a(active, since 16m)
    osd: 2 osds: 2 up (since 15m), 2 in (since 15m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 38 GiB / 40 GiB avail
    pgs:     1 active+clean
It's running, but the health check is in a warning state.
The message essentially says "this pool has no redundancy configured".
Since this is just a test environment, let's press on for now.
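By the way, to see exactly which check is firing and for which pool, ceph health detail gives a per-check breakdown (a standard Ceph command, run against the same toolbox pod):

$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph health detail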
Create and try a StorageClass
Storage via Rook comes in the following three types:
- Block
- Object
- Shared Filesystem
First, let's try Block, the block storage type.
Block Storage
The docs say "create the (CephBlockPool and) StorageClass definitions" and show the manifest inline, but the file already exists in the working directory at csi/rbd/storageclass.yaml.
For testing, csi/rbd/storageclass-test.yaml sets the replicated size to 1, so that one looks like the better fit here.
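For reference, the test manifest pairs a CephBlockPool with the StorageClass; the pool definition looks roughly like this (excerpted from memory of the v1.3 tree, so the failureDomain value is an assumption; the resource names match the output below):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: osd
  replicated:
    size: 1   # single replica: fine for testing, but no redundancy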
Create the StorageClass resource using this file:
[zaki@cloud-dev ceph]$ kubectl get sc
No resources found in default namespace.
[zaki@cloud-dev ceph]$ kubectl apply -f csi/rbd/storageclass-test.yaml
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created
[zaki@cloud-dev ceph]$ kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   2s
(sample) Deploying WordPress
With the StorageClass ready, let's deploy an application.
The sample WordPress manifests live under cluster/examples/kubernetes at the repository root.
https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/mysql.yaml
No namespace is specified in them, so pass one at deploy time.
Also, the manifests request 20GB of storage, but the disks attached to the nodes of this test cluster are only 20GB to begin with, so I shrank the requested size to about 4GB.
Before starting, there are no PVs or PVCs at all:
[zaki@cloud-dev kubernetes]$ kubectl get pv
No resources found in default namespace.
[zaki@cloud-dev kubernetes]$ kubectl get pvc -A
No resources found
[zaki@cloud-dev kubernetes]$ kubectl create ns rook-example
namespace/rook-example created

First, MySQL.
diff --git a/cluster/examples/kubernetes/mysql.yaml b/cluster/examples/kubernetes/mysql.yaml
index 0b6d9f6..2df258b 100644
--- a/cluster/examples/kubernetes/mysql.yaml
+++ b/cluster/examples/kubernetes/mysql.yaml
@@ -24,7 +24,7 @@ spec:
     - ReadWriteOnce
   resources:
     requests:
-      storage: 20Gi
+      storage: 4Gi
 ---
 apiVersion: apps/v1
 kind: Deployment
[zaki@cloud-dev kubernetes]$ kubectl apply -f mysql.yaml -n rook-example
service/wordpress-mysql created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/wordpress-mysql created
[zaki@cloud-dev kubernetes]$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS      REASON   AGE
pvc-c9e3af03-35e2-4bbe-9596-c543e49ec339   4Gi        RWO            Delete           Bound    rook-example/mysql-pv-claim   rook-ceph-block            6s
[zaki@cloud-dev kubernetes]$ kubectl get pvc -A
NAMESPACE      NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rook-example   mysql-pv-claim   Bound    pvc-c9e3af03-35e2-4bbe-9596-c543e49ec339   4Gi        RWO            rook-ceph-block   10s
A PV matching the created PVC was provisioned automatically, and its status is Bound.
[zaki@cloud-dev kubernetes]$ kubectl get pod,svc -n rook-example
NAME                                   READY   STATUS    RESTARTS   AGE
pod/wordpress-mysql-764fc64f97-6zxqq   1/1     Running   0          3m19s

NAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/wordpress-mysql   ClusterIP   None         <none>        3306/TCP   3m19s
Next, WordPress itself.
Again, tweak the storage size a bit:
diff --git a/cluster/examples/kubernetes/wordpress.yaml b/cluster/examples/kubernetes/wordpress.yaml
index f400abc..8410961 100644
--- a/cluster/examples/kubernetes/wordpress.yaml
+++ b/cluster/examples/kubernetes/wordpress.yaml
@@ -24,7 +24,7 @@ spec:
     - ReadWriteOnce
   resources:
     requests:
-      storage: 20Gi
+      storage: 4Gi
 ---
 apiVersion: apps/v1
 kind: Deployment
[zaki@cloud-dev kubernetes]$ kubectl apply -f wordpress.yaml -n rook-example
service/wordpress created
persistentvolumeclaim/wp-pv-claim created
deployment.apps/wordpress created
[zaki@cloud-dev kubernetes]$ kubectl get pvc -n rook-example
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-c9e3af03-35e2-4bbe-9596-c543e49ec339   4Gi        RWO            rook-ceph-block   8m24s
wp-pv-claim      Bound    pvc-771c613d-6df7-4c5d-b03a-cb0f553b241f   4Gi        RWO            rook-ceph-block   13s
[zaki@cloud-dev kubernetes]$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS      REASON   AGE
pvc-771c613d-6df7-4c5d-b03a-cb0f553b241f   4Gi        RWO            Delete           Bound    rook-example/wp-pv-claim      rook-ceph-block            15s
pvc-c9e3af03-35e2-4bbe-9596-c543e49ec339   4Gi        RWO            Delete           Bound    rook-example/mysql-pv-claim   rook-ceph-block            8m25s
The PVs were provisioned properly.
Now let's try accessing it.
[zaki@cloud-dev kubernetes]$ kubectl get svc -n rook-example
NAME              TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
wordpress         LoadBalancer   10.106.183.232   <pending>     80:31819/TCP   6m21s
wordpress-mysql   ClusterIP      None             <none>        3306/TCP       14m
Nothing is in place yet to satisfy a type: LoadBalancer Service, so its EXTERNAL-IP stays pending. (I haven't tried it yet, but MetalLB is apparently the way to go for on-prem K8s.)
For now, switch it to NodePort:
--- a/cluster/examples/kubernetes/wordpress.yaml
+++ b/cluster/examples/kubernetes/wordpress.yaml
@@ -10,7 +10,7 @@ spec:
   selector:
     app: wordpress
     tier: frontend
-  type: LoadBalancer
+  type: NodePort
 ---
 apiVersion: v1
 kind: PersistentVolumeClaim
[zaki@cloud-dev kubernetes]$ kubectl apply -f wordpress.yaml -n rook-example
service/wordpress configured
persistentvolumeclaim/wp-pv-claim unchanged
deployment.apps/wordpress unchanged
[zaki@cloud-dev kubernetes]$ kubectl get svc -n rook-example
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
wordpress         NodePort    10.106.183.232   <none>        80:31819/TCP   9m1s
wordpress-mysql   ClusterIP   None             <none>        3306/TCP      17m
Now access http://<any of the nodes>:31819 and you're in.
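For a quick smoke test from the terminal, hitting the NodePort with curl should get a response (a fresh WordPress redirects to its setup page; the node IP below is one of the worker addresses from the earlier listing, and the output wasn't captured at the time):

$ curl -sI http://192.168.0.125:31819/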
After running through the initial setup, it looks like this.
So, is the pod really storing its data on the provisioned volume?
root@wordpress-7bfc545758-fkwls:/var/www/html# df -h
Filesystem               Size  Used  Avail Use% Mounted on
overlay                   59G  5.4G    54G  10% /
tmpfs                     64M     0    64M   0% /dev
tmpfs                    3.9G     0   3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root   59G  5.4G    54G  10% /etc/hosts
shm                       64M     0    64M   0% /dev/shm
/dev/rbd0                3.9G   43M   3.8G   2% /var/www/html
tmpfs                    3.9G   12K   3.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                    3.9G     0   3.9G   0% /proc/acpi
tmpfs                    3.9G     0   3.9G   0% /proc/scsi
tmpfs                    3.9G     0   3.9G   0% /sys/firmware
root@wordpress-7bfc545758-fkwls:/var/www/html# ls -F /var/www/html/
index.php    readme.html      wp-blog-header.php    wp-config.php  wp-includes/       wp-login.php     wp-signup.php
license.txt  wp-activate.php  wp-comments-post.php  wp-content/    wp-links-opml.php  wp-mail.php      wp-trackback.php
lost+found/  wp-admin/        wp-config-sample.php  wp-cron.php    wp-load.php        wp-settings.php  xmlrpc.php
There it all is.
A little mischief:
root@wordpress-7bfc545758-fkwls:/var/www/html# echo hello > zzz.html
root@wordpress-7bfc545758-fkwls:/var/www/html# cat zzz.html
hello
And it's served by the site, too.
Now delete the pod.
[zaki@cloud-dev kubernetes]$ kubectl get pod -n rook-example
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-7bfc545758-fkwls         1/1     Running   0          21m
wordpress-mysql-764fc64f97-6zxqq   1/1     Running   0          29m
[zaki@cloud-dev kubernetes]$ kubectl delete pod -n rook-example wordpress-7bfc545758-fkwls
pod "wordpress-7bfc545758-fkwls" deleted
[zaki@cloud-dev kubernetes]$ kubectl get pod -n rook-example
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-7bfc545758-2c6kt         1/1     Running   0          5s
wordpress-mysql-764fc64f97-6zxqq   1/1     Running   0          29m
[zaki@cloud-dev kubernetes]$ kubectl exec -it -n rook-example wordpress-7bfc545758-2c6kt -- ls /var/www/html
index.php        wp-blog-header.php    wp-includes        wp-signup.php
license.txt      wp-comments-post.php  wp-links-opml.php  wp-trackback.php
lost+found       wp-config-sample.php  wp-load.php        xmlrpc.php
readme.html      wp-config.php         wp-login.php       zzz.html
wp-activate.php  wp-content            wp-mail.php
wp-admin         wp-cron.php           wp-settings.php
zzz.html survived the pod replacement, which confirms the PV was provisioned correctly and the data persists.
The Helm chart MySQL's PV
Previously, in this article, I had to "prepare a hostPath PV because there was no storage usable for PVs"; let's confirm that the same deployment now works with dynamic provisioning via Rook-Ceph.
First, storageclass/rook-ceph-block, as created from the Rook-Ceph manifests, is not marked as the default. The MySQL chart in the official Helm chart repository doesn't specify a StorageClass, so mark it as the default.
[zaki@cloud-dev ~]$ kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   134m
[zaki@cloud-dev ~]$ kubectl patch storageclass rook-ceph-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/rook-ceph-block patched
[zaki@cloud-dev ~]$ kubectl get sc
NAME                        PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block (default)   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   135m
The (default) marker now shows up, so we're ready.
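One assumption before deploying: the stable chart repository and the target namespace both need to exist. If they don't yet, something like the following should do it (the URL below was the stable repository's location as of mid-2020):

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm repo update
$ kubectl create ns helm-sample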
Now deploy:
[zaki@cloud-dev ~]$ helm install sample-mysql stable/mysql -n helm-sample
NAME: sample-mysql
LAST DEPLOYED: Sun Jul  5 16:58:05 2020
NAMESPACE: helm-sample
STATUS: deployed
REVISION: 1
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
sample-mysql.helm-sample.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace helm-sample sample-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:

    $ mysql -h sample-mysql -p

To connect to your database directly from outside the K8s cluster:

    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following command to route the connection:
    kubectl port-forward svc/sample-mysql 3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
Confirm that the PV/PVC are Bound:
[zaki@cloud-dev ~]$ kubectl get pvc -n helm-sample
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
sample-mysql   Bound    pvc-26d19ab5-621d-4d1a-b76b-a3df4bddc061   8Gi        RWO            rook-ceph-block   26s
[zaki@cloud-dev ~]$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS      REASON   AGE
pvc-26d19ab5-621d-4d1a-b76b-a3df4bddc061   8Gi        RWO            Delete           Bound    helm-sample/sample-mysql      rook-ceph-block            29s
pvc-771c613d-6df7-4c5d-b03a-cb0f553b241f   4Gi        RWO            Delete           Bound    rook-example/wp-pv-claim      rook-ceph-block            90m
pvc-c9e3af03-35e2-4bbe-9596-c543e49ec339   4Gi        RWO            Delete           Bound    rook-example/mysql-pv-claim   rook-ceph-block            98m
The pod is up as well:
[zaki@cloud-dev ~]$ kubectl get pod -n helm-sample
NAME                            READY   STATUS    RESTARTS   AGE
sample-mysql-599cdf796c-bmt8m   1/1     Running   0          103s
[zaki@cloud-dev ~]$ kubectl logs -n helm-sample sample-mysql-599cdf796c-bmt8m
2020-07-05 07:59:29+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.30-1debian10 started.
2020-07-05 07:59:29+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-07-05 07:59:29+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.30-1debian10 started.
2020-07-05 07:59:29+00:00 [Note] [Entrypoint]: Initializing database files
[...]
2020-07-05T07:59:40.286326Z 0 [Note] Event Scheduler: Loaded 0 events
2020-07-05T07:59:40.286532Z 0 [Note] mysqld: ready for connections.
Version: '5.7.30'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
(Appendix) ceph commands
Running ceph -h prints a mountain of subcommands that I can't quite digest yet, so here are a few that caught my eye.
status
[zaki@cloud-dev ceph]$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph status
  cluster:
    id:     06fcac99-7645-4b3d-95bb-085c2245c23a
    health: HEALTH_WARN
            1 pool(s) have no replicas configured

  services:
    mon: 1 daemons, quorum a (age 18m)
    mgr: a(active, since 18m)
    osd: 2 osds: 2 up (since 16m), 2 in (since 16m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 38 GiB / 40 GiB avail
    pgs:     1 active+clean
The following is from right after creating the StorageClass:
[zaki@cloud-dev kubernetes]$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph status
  cluster:
    id:     06fcac99-7645-4b3d-95bb-085c2245c23a
    health: HEALTH_WARN
            2 pool(s) have no replicas configured

  services:
    mon: 1 daemons, quorum a (age 91m)
    mgr: a(active, since 39m)
    osd: 2 osds: 2 up (since 89m), 2 in (since 89m)

  data:
    pools:   2 pools, 33 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 38 GiB / 40 GiB avail
    pgs:     33 active+clean
The following is from right after deploying MySQL, which claims a PV:
[zaki@cloud-dev kubernetes]$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph status
  cluster:
    id:     06fcac99-7645-4b3d-95bb-085c2245c23a
    health: HEALTH_WARN
            2 pool(s) have no replicas configured

  services:
    mon: 1 daemons, quorum a (age 99m)
    mgr: a(active, since 47m)
    osd: 2 osds: 2 up (since 97m), 2 in (since 97m)

  data:
    pools:   2 pools, 33 pgs
    objects: 66 objects, 192 MiB
    usage:   2.2 GiB used, 38 GiB / 40 GiB avail
    pgs:     33 active+clean

  io:
    client:   4.0 KiB/s wr, 0 op/s rd, 0 op/s wr
And with WordPress deployed on top:
[zaki@cloud-dev kubernetes]$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph status
  cluster:
    id:     06fcac99-7645-4b3d-95bb-085c2245c23a
    health: HEALTH_WARN
            2 pool(s) have no replicas configured

  services:
    mon: 1 daemons, quorum a (age 106m)
    mgr: a(active, since 54m)
    osd: 2 osds: 2 up (since 104m), 2 in (since 104m)

  data:
    pools:   2 pools, 33 pgs
    objects: 96 objects, 262 MiB
    usage:   2.2 GiB used, 38 GiB / 40 GiB avail
    pgs:     33 active+clean
df
[zaki@cloud-dev ceph]$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph df
--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED     RAW USED  %RAW USED
ssd    40 GiB  38 GiB  320 KiB  2.0 GiB   5.00
TOTAL  40 GiB  38 GiB  320 KiB  2.0 GiB   5.00

--- POOLS ---
POOL                   ID  STORED  OBJECTS  USED  %USED  MAX AVAIL
device_health_metrics  1   0 B     0        0 B   0      36 GiB
[zaki@cloud-dev kubernetes]$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph df
--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED     RAW USED  %RAW USED
ssd    40 GiB  38 GiB  357 MiB  2.3 GiB   5.87
TOTAL  40 GiB  38 GiB  357 MiB  2.3 GiB   5.87

--- POOLS ---
POOL                   ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics  1   0 B      0        0 B      0      36 GiB
replicapool            2   357 MiB  125      357 MiB  0.97   36 GiB
pg stat
[zaki@cloud-dev ceph]$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph pg stat
1 pgs: 1 active+clean; 0 B data, 320 KiB used, 38 GiB / 40 GiB avail
[zaki@cloud-dev ceph]$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph osd stat
2 osds: 2 up (since 39m), 2 in (since 39m); epoch: e18
[zaki@cloud-dev ceph]$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph osd status
ID  HOST                       USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  k8s-worker02.esxi.jp-z.jp  1024M  18.9G      0        0       0        0   exists,up
 1  k8s-worker01.esxi.jp-z.jp  1024M  18.9G      0        0       0        0   exists,up
[zaki@cloud-dev ceph]$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph osd pool stats
pool device_health_metrics id 1
  nothing is going on
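A few more subcommands that looked worth trying next (all standard ceph CLI; output omitted here since I didn't capture it):

$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph osd tree
$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph osd df
$ kubectl exec -n rook-ceph -it rook-ceph-tools-5754d4d5d-b2sp2 -- ceph versions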
Links
Rookだらけの Advent Calendar 2019
Last year's Rook Advent Calendar. Plenty of topics beyond Ceph, too.
Rookの概要とRook-Ceph - うつぼのブログ
Day 2 of that Advent Calendar; re-reading its explanations, which cover what this article does, after the fact made everything much clearer.
「Ceph、完全に理解した」って Tweetする為のセッション ー Ceph 101 ー
The Ceph 101 slides from the first session of the recently held Japan Rook Meetup #3. Now I've "completely understood" Ceph.
Addendum: on notation (Rook/Ceph)
I wrote "Rook-Ceph" throughout this article, but when I tweeted a question about the notation, someone far more expert replied that they use "Rook/Ceph", so I'll switch to that going forward.
I often use Rook/Ceph
— 🐢sat🦥 (@satoru_takeuchi) July 5, 2020