GlusterFS Guide

You can use GlusterFS, a network-attached storage (NAS) file system, to set up a shared filesystem across nodes inside Crane. This guide illustrates how to install, set up, and tear down Gluster. The commands below are intended to be executed manually on each node; alternatively, Gluster provides a repository for users who prefer to set up distributed systems with Ansible instead.

Installation

You can install Gluster on Ubuntu by running the following commands. (Gluster 7 is the latest version as of February 2020.)

$ sudo apt install software-properties-common
$ sudo add-apt-repository ppa:gluster/glusterfs-7
$ sudo apt update
$ sudo apt install glusterfs-server

Start the GlusterFS management daemon and check that it is running.

$ sudo systemctl start glusterd
$ systemctl status glusterd
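Optionally, you can also enable the daemon so it starts automatically after a reboot (a sketch, assuming a systemd-based Ubuntu install):

```shell
# Register glusterd to start at boot; "start" alone does not persist across reboots
$ sudo systemctl enable glusterd
```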

Setup

Configure the Trusted Pool

Assume we have three nodes with hostnames "server1", "server2" and "server3".

From server1:

$ sudo gluster peer probe server2
$ sudo gluster peer probe server3

Note: When using hostnames, the first server needs to be probed from one other server to set its hostname.

From server2:

$ sudo gluster peer probe server1

Note: Once this pool has been established, only trusted members may probe new servers into the pool. A new server cannot probe the pool, it must be probed from the pool.

Check the peer status (on any node):

$ sudo gluster peer status

Create a GlusterFS Volume

Create brick directories on each node. (Replace the path with wherever you wish to store data.)

$ sudo mkdir -p /data/brick1

Now we will create a Gluster volume named "crane_volume" that distributes files across these bricks. If you wish to configure your volume further (e.g. replicating data), refer to the Gluster documentation on setting up volumes.
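For example, a replicated volume keeps a full copy of every file on each brick, trading capacity for redundancy. A sketch of creating a 3-way replicated volume with the same nodes and brick paths as below (adjust the replica count and paths to your setup):

```shell
# "replica 3" stores every file on all three bricks instead of distributing files across them
$ sudo gluster volume create crane_volume replica 3 \
    server1:/data/brick1 server2:/data/brick1 server3:/data/brick1
```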

From any single node:

$ sudo gluster volume create crane_volume server1:/data/brick1 server2:/data/brick1 server3:/data/brick1
$ sudo gluster volume start crane_volume

Confirm that the volume's status is "Started":

$ sudo gluster volume info

Mounting the Volume

Mount the volume to /mnt/gluster on each node. For example, on server1:

$ sudo mkdir -p /mnt/gluster
$ sudo mount -t glusterfs server1:crane_volume /mnt/gluster
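A mount created this way does not survive a reboot. To remount automatically at boot, you can add an entry to /etc/fstab (a sketch, using the volume and mount point from this guide; the _netdev option delays mounting until the network is up):

```
server1:crane_volume  /mnt/gluster  glusterfs  defaults,_netdev  0  0
```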

To use the volume with Docker, bind mount /mnt/gluster as the source directory.
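For instance, a container started like this sees the shared Gluster data under /data (the image name and container path here are illustrative, not prescribed by this guide):

```shell
# Any file written to /data inside the container lands on the Gluster volume
# and is visible from every node that has mounted it
$ docker run --rm -v /mnt/gluster:/data ubuntu ls /data
```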

Cleanup

$ ./script/gluster_teardown.sh
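The contents of the teardown script are not shown here; its effect is roughly equivalent to the following commands, which unmount the volume, stop and delete it, and dissolve the trusted pool (a sketch, assuming the volume, mount point, and hostnames used throughout this guide):

```shell
# On each node: unmount the volume
$ sudo umount /mnt/gluster

# On any one node: stop and delete the volume, then detach peers
$ sudo gluster volume stop crane_volume
$ sudo gluster volume delete crane_volume
$ sudo gluster peer detach server2
$ sudo gluster peer detach server3
```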

Note: Reusing a stale brick to create a new volume will fail with "</path/to/brick> is already part of a volume". Either append force to the end of gluster volume create ..., or clear the brick's old metadata with the following commands:

$ sudo setfattr -x trusted.glusterfs.volume-id /path/to/brick
$ sudo setfattr -x trusted.gfid /path/to/brick
$ sudo rm -rf /path/to/brick/.glusterfs

Last update: February 18, 2022