Installing GlusterFS
When several servers work on one job, one thing you always end up needing is a way to share the files being worked on.
Sharing those files through something like a NAS brings a number of problems of its own.
First, as the amount of data on a single storage box grows, network and disk-capacity issues start to appear,
and you respond by splitting data across different paths, adding more NAS hardware, adding more disks, and so on.
The system gradually evolves into an expensive high-performance setup, and once backups enter the picture there are even more issues to carry.
A scalable setup built out of ordinary servers, on the other hand, takes most of those worries away.
Storage space is of course not without limits, and past a certain size there is room for performance to degrade,
but that ceiling is on the order of petabytes, so I don't think it is anything to worry about.
GlusterFS has a similar shape to HDFS (Hadoop Distributed File System), but it is software meant for general file management,
and it is said to be considerably faster than HDFS.
Next, let's look at the installation guide from Gluster ( https://www.gluster.org/ ).
The following link is the Quick Start Guide:
http://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
Step 1 – Have at least two nodes
Prepare two nodes. ( I prepared two machines; I actually installed CentOS 6.3 on three in advance so another can be added later. )
The guide says each node needs two disks:
sda [ OS ]
sdb [ GlusterFS ]
Right now each server has only one disk prepared:
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 15G 345M 14G 3% /
tmpfs 246M 0 246M 0% /dev/shm
/dev/sda1 194M 25M 159M 14% /boot
/dev/sda7 49G 180M 46G 1% /data
/dev/sda5 29G 648M 27G 3% /usr
/dev/sda6 2.9G 134M 2.7G 5% /var
Let's create /dev/sdb1.
In VirtualBox, go to Storage > add a hard disk.
Create a new disk.
I chose a fixed size rather than dynamic allocation.
I named it storage. [ This seems to take quite a long time. ]
The host currently has about 270GB, and I plan to prepare roughly three of these VMs,
so 50GB * 3 = about 150GB of space will be used.
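For reference, the same disk can also be created and attached from the command line instead of clicking through the GUI. This is just a sketch; the VM name (glusterfs-01) and the controller name ("SATA") are assumptions about my setup, so check what VBoxManage showvminfo reports before running it.
VBoxManage createhd --filename storage.vdi --size 51200 --variant Fixed
VBoxManage storageattach glusterfs-01 --storagectl "SATA" --port 1 --device 0 --type hdd --medium storage.vdi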
root@glusterfs-01:/root> fdisk -l
Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00098ca7
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 1938 15360000 83 Linux
/dev/sda3 1938 2448 4096000 82 Linux swap / Solaris
/dev/sda4 2448 13055 85195776 5 Extended
/dev/sda5 2448 6273 30720000 83 Linux
/dev/sda6 6273 6655 3072000 83 Linux
/dev/sda7 6656 13055 51400704 83 Linux
Disk /dev/sdb: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
The new physical disk is now visible.
Step 2 - Format and mount the bricks
(on both nodes): Note: We are going to use the XFS filesystem for the backend bricks. These examples are going to assume the brick is going to reside on /dev/sdb1.
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /data/brick1
echo '/dev/sdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
mount -a && mount
You should now see sdb1 mounted at /data/brick1
The guide uses /data/brick1; I'm creating it as /storage instead.
ext3 supports up to 16TB; ext4 can go beyond 16TB with extra configuration;
xfs is said to support sizes up to 100TB.
root@glusterfs-01:/root> mkfs.xfs -i size=512 /dev/sdb1
-bash: mkfs.xfs: command not found
Check whether the kernel supports xfs:
lsmod | grep xfs
Load the module:
modprobe xfs
root@glusterfs-01:/root> uname -r
2.6.32-279.el6.x86_64
Install the xfs tools.
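The actual install command didn't make it into my notes; on CentOS 6 mkfs.xfs ships in the xfsprogs package, so it was presumably something like:
yum install xfsprogs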
root@glusterfs-01:/> fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xb4059fc8.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
1
Invalid partition number for type `1'
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6527, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-6527, default 6527): 6527
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
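For reference, the same single full-disk partition could probably be created non-interactively as well, e.g. with parted (a sketch, not what I actually ran here):
parted -s /dev/sdb -- mklabel msdos mkpart primary 0% 100%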
root@glusterfs-01:/> fdisk -l
Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00098ca7
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 1938 15360000 83 Linux
/dev/sda3 1938 2448 4096000 82 Linux swap / Solaris
/dev/sda4 2448 13055 85195776 5 Extended
/dev/sda5 2448 6273 30720000 83 Linux
/dev/sda6 6273 6655 3072000 83 Linux
/dev/sda7 6656 13055 51400704 83 Linux
Disk /dev/sdb: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb4059fc8
Device Boot Start End Blocks Id System
/dev/sdb1 1 6527 52428096 83 Linux
root@glusterfs-01:/root> mkfs.xfs -i size=512 /dev/sdb1
meta-data=/dev/sdb1 isize=512 agcount=4, agsize=3276756 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=13107024, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=6399, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
mkdir -p /storage
echo '/dev/sdb1 /storage xfs defaults 1 2' >> /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Oct 30 00:17:49 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=155618df-107f-4e1f-86e6-27d31a5679b1 / ext4 defaults 1 1
UUID=e86ef798-b7e6-4990-bc0a-717412dbe884 /boot ext4 defaults 1 2
UUID=9c415605-0e92-4174-87b0-7739e3f728eb /data ext4 defaults 1 2
UUID=c8be8b42-1ed1-43bd-a088-3502b6c36383 /usr ext4 defaults 1 2
UUID=4606eb75-d265-4c6a-9f8e-4c356ab5e438 /var ext4 defaults 1 2
UUID=b71637ee-c4f7-44fb-9e6f-9a6fe3e3c818 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/sdb1 /storage xfs defaults 1 2
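Side note: the anaconda-generated entries above reference filesystems by UUID, and the new brick could be mounted the same way so it survives device-name changes. Roughly, get the UUID with blkid and use it in fstab (the value is obviously machine-specific):
blkid /dev/sdb1
UUID=<value-from-blkid> /storage xfs defaults 1 2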
mount -a && mount
root@glusterfs-01:/> mount -a && mount
/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
/dev/sda7 on /data type ext4 (rw)
/dev/sda5 on /usr type ext4 (rw)
/dev/sda6 on /var type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/sdb1 on /storage type xfs (rw)
Set up the remaining two machines the same way.
Step 3 - Installing GlusterFS
(on both nodes) Install the software
yum install glusterfs-server
root@glusterfs-01:/> yum install glusterfs-server
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.mirror.cdnetworks.com
* extras: centos.mirror.cdnetworks.com
* updates: centos.mirror.cdnetworks.com
Setting up Install Process
No package glusterfs-server available.
Error: Nothing to do
http://docs.gluster.org/en/latest/Install-Guide/Install/
Following the CentOS link from that page, it turns out CentOS provides the repo through yum:
root@glusterfs-01:/etc/yum.repos.d> yum install centos-release-gluster
centos-gluster38 centos-gluster38/primary_db
With that repo in place, yum install glusterfs-server works.
It really does install a lot of packages..
https://buildlogs.centos.org/centos/6/storage/x86_64/
Judging from that page, versions up to 3.12 seem to be available, but what actually gets pulled down is 3.8.15.
root@glusterfs-01:/etc/yum.repos.d> chkconfig
auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
blk-availability 0:off 1:on 2:on 3:on 4:on 5:on 6:off
crond 0:off 1:off 2:on 3:on 4:on 5:on 6:off
glusterd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
glusterfsd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
ip6tables 0:off 1:off 2:on 3:on 4:on 5:on 6:off
iptables 0:off 1:off 2:on 3:on 4:on 5:on 6:off
lvm2-monitor 0:off 1:on 2:on 3:on 4:on 5:on 6:off
netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off
netfs 0:off 1:off 2:off 3:on 4:on 5:on 6:off
network 0:off 1:off 2:on 3:on 4:on 5:on 6:off
nfs 0:off 1:off 2:off 3:off 4:off 5:off 6:off
nfslock 0:off 1:off 2:off 3:on 4:on 5:on 6:off
postfix 0:off 1:off 2:on 3:on 4:on 5:on 6:off
rdisc 0:off 1:off 2:off 3:off 4:off 5:off 6:off
restorecond 0:off 1:off 2:off 3:off 4:off 5:off 6:off
rpcbind 0:off 1:off 2:on 3:on 4:on 5:on 6:off
rpcgssd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
rpcsvcgssd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
rsyslog 0:off 1:off 2:on 3:on 4:on 5:on 6:off
saslauthd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
sshd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
udev-post 0:off 1:on 2:on 3:on 4:on 5:on 6:off
Installation complete.
I installed it the same way on the other node.
root@glusterfs-01:/root> service glusterd status
glusterd (pid 1028) is running...
root@glusterfs-02:/root> service glusterd status
glusterd (pid 1399) is running...
Step 4 - Configure the firewall
The gluster processes on the nodes need to be able to communicate with each other. To simplify this setup, configure the firewall on each node to accept all traffic from the other node.
iptables -I INPUT -p all -s <ip-address> -j ACCEPT
where ip-address is the address of the other node.
root@glusterfs-01:/root> iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
ACCEPT tcp -- 192.168.1.22 anywhere
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
-A INPUT -s 192.168.1.22/32 -p tcp -j ACCEPT
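A rule added with iptables -I only lives in memory and is lost on reboot. On CentOS 6 it can be persisted to /etc/sysconfig/iptables (which is where a line like the one above would end up) with something like:
iptables -I INPUT -p all -s 192.168.1.22 -j ACCEPT
service iptables save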
Subnet cheat sheet (usable hosts = total minus 2 for the network and broadcast addresses):
/24 256-2
/25 128-2
/26 64-2
/27 32-2
/28 16-2
/29 8-2
/30 4-2
/32 means exactly that single IP.
Step 5 - Configure the trusted pool
From "server1"
gluster peer probe server2
Note: When using hostnames, the first server needs to be probed from one other server to set its hostname.
From "server2"
gluster peer probe server1
Note: Once this pool has been established, only trusted members may probe new servers into the pool. A new server cannot probe the pool, it must be probed from the pool.
root@glusterfs-01:/root> gluster peer probe 192.168.1.22
peer probe: success.
root@glusterfs-02:/root> gluster peer probe 192.168.1.21
peer probe: success. Host 192.168.1.21 port 24007 already in peer list
Check the peer status on server1
gluster peer status
You should see something like this (the UUID will differ)
Number of Peers: 1
Hostname: server2
Uuid: f0e7b138-4874-4bc0-ab91-54f20c7068b4
State: Peer in Cluster (Connected)
root@glusterfs-01:/root> gluster peer status
Number of Peers: 1
Hostname: 192.168.1.22
Uuid: abb9f77d-2d59-4035-8f74-f1a1a0ca0d0f
State: Peer in Cluster (Connected)
root@glusterfs-02:/root> gluster peer status
Number of Peers: 1
Hostname: 192.168.1.21
Uuid: ff2dfe63-b72e-4b68-b1be-4889ecf85063
State: Peer in Cluster (Connected)
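Another quick sanity check, assuming the sub-command is available in this 3.8 build, is gluster pool list, which also shows the local node alongside its peers:
gluster pool list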
Step 6 - Set up a GlusterFS volume
On both server1 and server2:
mkdir -p /data/brick1/gv0
From any single server:
gluster volume create gv0 replica 2 server1:/data/brick1/gv0 server2:/data/brick1/gv0
gluster volume start gv0
As before, /data/brick1 becomes /storage.
Run on both servers:
mkdir -p /storage/gv0
Run on just one of the two servers:
root@glusterfs-01:/root> gluster volume create gv0 replica 2 192.168.1.21:/storage/gv0 192.168.1.22:/storage/gv0
volume create: gv0: success: please start the volume to access data
root@glusterfs-01:/root> gluster volume start gv0
volume start: gv0: success
Confirm that the volume shows "Started":
gluster volume info
You should see something like this (the Volume ID will differ):
Volume Name: gv0
Type: Replicate
Volume ID: f25cc3d8-631f-41bd-96e1-3e22a4c6f71f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1:/data/brick1/gv0
Brick2: server2:/data/brick1/gv0
Options Reconfigured:
transport.address-family: inet
Note: If the volume is not started, clues as to what went wrong will be in log files under /var/log/glusterfs/glusterd.log on one or both of the servers.
root@glusterfs-01:/root> gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: d742a9e3-d045-4a75-b7f8-12a14454a7a5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.1.21:/storage/gv0
Brick2: 192.168.1.22:/storage/gv0
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
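Beyond gluster volume info, gluster volume status is handy for confirming that the brick processes are actually running and which ports they listen on:
gluster volume status gv0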
Step 7 - Testing the GlusterFS volume
For this step, we will use one of the servers to mount the volume. Typically, you would do this from an external machine, known as a "client". Since using this method would require additional packages to be installed on the client machine, we will use one of the servers as a simple place to test first , as if it were that "client".
mount -t glusterfs server1:/gv0 /mnt
Now let's try connecting from a separate work server, acting as the client.
root@linux-01:/root> mount -t glusterfs 192.168.1.21:/gv0 /mnt
mount: unknown filesystem type 'glusterfs'
Install glusterfs-client:
yum install centos-release-gluster
yum install glusterfs-client
root@linux-01:/root> mount -t glusterfs 192.168.1.21:/gv0 /mnt
Mount failed. Please check the log file for more details.
[2017-11-18 17:54:43.120661] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.15 (args: /usr/sbin/glusterfs --volfile-server=192.168.1.21 --volfile-id=/gv0 /mnt)
[2017-11-18 17:54:43.233512] W [socket.c:897:__socket_keepalive] 0-socket: failed to set TCP_USER_TIMEOUT 0 on socket 9, Protocol not available
[2017-11-18 17:54:43.233579] E [socket.c:3031:socket_connect] 0-glusterfs: Failed to set keep-alive: Protocol not available
[2017-11-18 17:54:43.235820] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-11-18 17:54:43.238876] E [socket.c:2309:socket_connect_finish] 0-glusterfs: connection to 192.168.1.21:24007 failed (No route to host)
[2017-11-18 17:54:43.238942] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: 192.168.1.21 (Transport endpoint is not connected)
[2017-11-18 17:54:43.238957] I [glusterfsd-mgmt.c:1926:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2017-11-18 17:54:43.240138] W [glusterfsd.c:1327:cleanup_and_exit] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1ed) [0x7f3dd315ad4d] -->/usr/sbin/glusterfs() [0x40a16b] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6a) [0x405c6a] ) 0-: received signum (1), shutting down
[2017-11-18 17:54:43.240178] I [fuse-bridge.c:5788:fini] 0-fuse: Unmounting '/mnt'.
Looks like I need to open up iptables on the servers..
-A INPUT -s 192.168.1.0/24 -p tcp -j ACCEPT
I'll just open the whole 192.168.1.x range.
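A minimal sketch of applying it on each gluster server: add the line above the final REJECT rule in /etc/sysconfig/iptables, then reload:
service iptables restart
A narrower alternative would be opening only the gluster ports (24007/tcp for glusterd plus the brick ports from 49152 upward in this version, as far as I know), but for a lab the subnet rule is simpler.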
mount -t glusterfs 192.168.1.21:/gv0 /mnt
root@linux-01:/mnt> df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 15G 344M 14G 3% /
tmpfs 939M 0 939M 0% /dev/shm
/dev/sda1 194M 25M 159M 14% /boot
/dev/sda7 49G 180M 46G 1% /data
/dev/sda5 29G 763M 27G 3% /usr
/dev/sda6 2.9G 138M 2.7G 5% /var
192.168.1.21:/gv0 50G 33M 50G 1% /mnt
mkdir /glusterfs-01
root@linux-01:/var/log/glusterfs> mount -t glusterfs 192.168.1.21:/gv0 /glusterfs-01
mkdir /glusterfs-02
root@linux-01:/var/log/glusterfs> mount -t glusterfs 192.168.1.22:/gv0 /glusterfs-02
root@linux-01:/glusterfs-01> touch /glusterfs-01/test123
root@linux-01:/glusterfs-01>
root@linux-01:/glusterfs-01> ls /glusterfs-02/test123
/glusterfs-02/test123
touch /glusterfs-01/test321
ls /glusterfs-02/test321
The file shows up on the other mount immediately.
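If the client mount should come back after a reboot, an fstab entry along these lines is the usual pattern (the _netdev option keeps it from being attempted before the network is up):
192.168.1.21:/gv0 /mnt glusterfs defaults,_netdev 0 0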
-----------------------------------------------
Next, let's add server 23 as well and expand the pool..!!!
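I haven't actually run this yet, but the expansion would presumably look something like the following: probe the new node, then grow gv0 from replica 2 to replica 3 and let self-heal populate the new brick.
gluster peer probe 192.168.1.23
gluster volume add-brick gv0 replica 3 192.168.1.23:/storage/gv0
gluster volume heal gv0 full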
How long it takes for data to sync between the 192.168.1.21 mount and the 192.168.1.22 mount,
and setting up a virtual IP > domain so clients can mount from either 21 or 22,
are things I'll write up later when I actually put them to use..
Mission complete for today..