
Docker Swarm + Microceph

My personal notes for setting up Docker Swarm + Microceph, based on a VirtualizationHowTo video. My setup runs in VirtualBox rather than Proxmox.

Video

Initial setup

3 VMs, network interface on each one is enp0s3

IP addresses:

vmservera 192.168.100.175

vmserverb 192.168.100.176

vmserverc 192.168.100.177

Docker installation

Docker install

Swarm setup

sudo docker swarm init --advertise-addr 192.168.100.175
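swarm init prints a join token for worker nodes. A minimal sketch of joining the other two VMs, with <token> as a placeholder for the token from your own init output:

#on vmserverb and vmserverc; replace <token> with the token printed by swarm init
sudo docker swarm join --token <token> 192.168.100.175:2377
#optional, on vmservera: promote the other nodes to managers for an HA control plane
sudo docker node promote vmserverb vmserverc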

Keepalived installation

sudo apt install keepalived -y

sudo nano /etc/keepalived/keepalived.conf

vrrp_instance VI_1 {
       state MASTER             #BACKUP for nodes 2 and 3
       interface enp0s3
       virtual_router_id 51
       priority 120             #lower for BACKUP nodes, e.g. 110 and 100
       advert_int 1
       authentication {
               auth_type PASS
               auth_pass abc123
       }
       unicast_peer {            #IPs of the other two nodes (192.168.100.175 and 192.168.100.177 on node 2; 192.168.100.175 and 192.168.100.176 on node 3)
               192.168.100.176 
               192.168.100.177
       }
       virtual_ipaddress {
               192.168.100.180    #same on each node
       }
}

Enable and start keepalived

sudo systemctl start keepalived

sudo systemctl enable keepalived
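A quick sanity check (my own addition): the virtual IP should appear on the interface of whichever node is currently MASTER.

#should list 192.168.100.180 on the current MASTER only
ip addr show enp0s3 | grep 192.168.100.180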

Microceph setup

sudo snap install microceph

sudo snap refresh --hold microceph

#only on one primary node, otherwise 3 separate clusters are created
sudo microceph cluster bootstrap
#not mandatory, just to check status
sudo microceph status     
#run on the primary for the second and third node; each command generates a join token
sudo microceph cluster add vmserverb
sudo microceph cluster add vmserverc
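Each cluster add prints a join token; as I understand it, that token is then used on the matching node to actually join the cluster (tokens below are placeholders):

#on vmserverb, using the token printed for it on vmservera
sudo microceph cluster join <token>
#on vmserverc, using the token printed for it on vmservera
sudo microceph cluster join <token>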

Removing microceph

This is mainly to recover from issues, misconfiguration, etc.

sudo snap stop microceph

sudo snap remove microceph
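If leftover state still causes problems on reinstall, snap remove also accepts --purge to drop the snap's saved data; use with care, since it deletes the local Microceph state:

#optional: also remove the snap's saved data
sudo snap remove microceph --purge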

#on all 3 nodes; check that sdb is the disk you want to add
sudo microceph disk add /dev/sdb --wipe
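To double-check the device name and confirm the disk was picked up, something like this should work (my addition, not from the video):

#list block devices to confirm sdb is the spare disk
lsblk
#list disks known to microceph after adding
sudo microceph disk list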

These 3 commands are run only on the primary (vmservera)

sudo ceph osd pool create cephfs_data 64
sudo ceph osd pool create cephfs_metadata 64
sudo ceph fs new cephfs cephfs_metadata cephfs_data
#on all nodes
sudo mkdir /mnt/cephfs
#only on primary node (vmservera)
sudo ceph auth get-key client.admin
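Before mounting, it can help to confirm the cluster and filesystem are healthy (optional checks, not in the original notes):

#cluster health and OSD count
sudo ceph -s
#state of the new cephfs filesystem
sudo ceph fs status cephfs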

fstab must be modified on all nodes; the mount and daemon reload are run on all nodes as well

sudo nano /etc/fstab

#secret generated by the previous command
192.168.100.175:6789,192.168.100.176:6789,192.168.100.177:6789:/ /mnt/cephfs ceph name=admin,secret=AQBPrnNnj2XPBBAA+54tJ4/gi7+W/erSQFwL5w==,_netdev 0 0
sudo mount -a
sudo systemctl daemon-reload
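To verify the mount actually came up:

#both should show the cephfs mount on /mnt/cephfs
df -h /mnt/cephfs
mount | grep cephfs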

After this, /mnt/cephfs can be used as a bind mount volume for Swarm services
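As a rough illustration (not from the video), a Swarm service could bind-mount a directory on the shared CephFS path; the image, port, and directory below are just example values:

#example: store service data on the shared cephfs mount (run on a manager node)
sudo mkdir -p /mnt/cephfs/nginx-data
sudo docker service create --name web --replicas 3 \
  --mount type=bind,source=/mnt/cephfs/nginx-data,target=/usr/share/nginx/html \
  --publish 8080:80 nginx

Because every node mounts the same CephFS at /mnt/cephfs, the replicas see the same data regardless of which node they land on.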