Backing Up a Synology (XPEnology) NAS

Data matters, data matters, data matters!!!

Important things deserve saying three times. As it happens, I bought myself a 6-bay XPEnology ("black Synology") box precisely to keep my photos and data safe…

Honestly, 1-bay and 2-bay NAS boxes are a bad joke, next to useless; only 4 bays and up are even barely dependable. That's why I got this XPEnology box, though I only bought 3 drives and set them up as RAID 5. Even so, one drive inexplicably died. Luckily it was still under warranty, and swapping in a replacement cleared the fault, but shuffling the data around on the remaining 2 drives in the meantime was a nightmare all the same.

With data this important, then, how do we get a complete backup?

The data lives on the 3-disk RAID 5, which already offers some protection; what's left is making sure the system itself can be backed up and restored.

Synology DSM is Linux underneath, and the storage stack is mdadm + LVM, so the backup has to start from there.

First, back up the USB boot drive. Start with `fdisk -l` to survey the disks:

```
Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sda1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sda2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary
/dev/sda3             588      243201  1948793440+ fd Linux raid autodetect

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdb1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sdb2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary
/dev/sdb3             588      243201  1948793440+ fd Linux raid autodetect

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdc1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sdc2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary
/dev/sdc3             588      243201  1948793440+ fd Linux raid autodetect

Disk /dev/sdu: 4227 MB, 4227858432 bytes
4 heads, 32 sectors/track, 64512 cylinders
Units = cylinders of 128 * 512 = 65536 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdu1   *           1         256       16352+  e Win95 FAT16 (LBA)
```

Three 2 TB drives, each with three partitions, and finally the USB stick as /dev/sdu. Back that up first:

```
dd if=/dev/sdu | gzip -c > /root/usb.img
```
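Should the stick die, the image restores with the same tools in reverse. The round trip below is my own sketch, demonstrated on a scratch file instead of the real /dev/sdu so it can run anywhere without root; on the NAS you would substitute the device paths.

```shell
# Stand-ins for /dev/sdu, /root/usb.img and a replacement stick.
SRC=$(mktemp)
IMG=$(mktemp)
RESTORED=$(mktemp)
head -c 1048576 /dev/urandom > "$SRC"   # fake 1 MiB of "USB stick" contents

# Backup: raw-copy the device, compressing on the fly.
dd if="$SRC" 2>/dev/null | gzip -c > "$IMG"

# Restore: decompress and raw-copy onto the replacement.
gzip -dc "$IMG" | dd of="$RESTORED" 2>/dev/null

# The restored copy should be byte-identical to the original.
cmp -s "$SRC" "$RESTORED" && echo "round-trip OK"
```

On the real box the restore is simply `gzip -dc /root/usb.img | dd of=/dev/sdu`, run against a stick at least as large as the original.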

With the USB stick saved, the RAID is next. First look at the RAID layout:

```
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sda3[0] sdc3[2] sdb3[1]
      3897584512 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md1 : active raid1 sda2[0] sdb2[1] sdc2[2]
      2097088 blocks [12/3] [UUU___]

md0 : active raid1 sda1[0] sdb1[1] sdc1[2]
      2490176 blocks [12/3] [UUU___]
```

So there are three arrays: md0 and md1 are RAID 1, md2 is RAID 5.

md0's members are sda1, sdb1 and sdc1;

md1's members are sda2, sdb2 and sdc2;

md2's members are sda3, sdb3 and sdc3.

All three drives should be partitioned identically, so examining one is enough:

```
fdisk -l /dev/sda

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sda1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sda2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary
/dev/sda3             588      243201  1948793440+ fd Linux raid autodetect
```

You can see it is a 2 TB drive, and the start and end cylinders of all three partitions are clearly listed. Write them down.

So if a drive fails, check its partition table first. If the table is intact, go straight to the next step; if it is damaged, recreate the partitions with fdisk from the recorded values, then start the repair:

Check each array's UUID:

```
mdadm --examine --scan /dev/sda1 /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 UUID=35d393bd:1f4dde6b:3017a5a8:c86610be
mdadm --examine --scan /dev/sda2 /dev/sdb2 /dev/sdc2
ARRAY /dev/md1 UUID=8f02f0d4:e249900a:3017a5a8:c86610be
mdadm --examine --scan /dev/sda3 /dev/sdb3 /dev/sdc3
ARRAY /dev/md/2 metadata=1.2 UUID=d1411045:24723563:3a19cef5:07732afa name=DiskStation:2
```

For the repair, use the information above to write /etc/mdadm.conf by hand. Keep the arrays in order, md0 -> md1 -> md2, and if a member disk is dead, put `missing` in its place in the `devices` list…

```
# UUIDs are the ones scanned above; this example assumes /dev/sdc has died.
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=3
    UUID=35d393bd:1f4dde6b:3017a5a8:c86610be
    devices=/dev/sda1,/dev/sdb1,missing
ARRAY /dev/md1 level=raid1 num-devices=3
    UUID=8f02f0d4:e249900a:3017a5a8:c86610be
    devices=/dev/sda2,/dev/sdb2,missing
ARRAY /dev/md2 level=raid5 num-devices=3
    UUID=d1411045:24723563:3a19cef5:07732afa
    devices=/dev/sda3,/dev/sdb3,missing
```

Then assemble the arrays and check that they came up healthy:

```
mdadm -A -s
cat /proc/mdstat
```
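Eyeballing /proc/mdstat can also be scripted; the checker below is my own sketch, not part of the original procedure. One caveat specific to DSM: the system arrays report things like `[12/3] [UUU___]` even when healthy (slots appear pre-allocated for a 12-bay chassis), so grepping for `_` would false-positive; the explicit `(F)` faulty marker on a member is a safer signal. It is demonstrated against a captured sample so it runs anywhere; on the NAS, `MDSTAT` defaults to the real /proc/mdstat.

```shell
# Report arrays that have a member explicitly marked faulty, e.g. sdb3[1](F).
MDSTAT=${MDSTAT:-/proc/mdstat}

check_mdstat() {
    # On each 'mdNN : active ...' header line, print the array name
    # if any member carries the (F) faulty flag.
    awk '/^md[0-9]+ :/ { if (/\(F\)/) print $1 }' "$1"
}

if [ -r "$MDSTAT" ]; then
    check_mdstat "$MDSTAT"
fi
```

Run from cron, a non-empty output is your cue to go shopping for a replacement drive.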

If the arrays are wrecked beyond assembly and you have to recreate the md devices from scratch, do it like this (note that `--create` writes fresh superblocks, so treat it strictly as a last resort):

```
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3
```

That takes care of backing up the md/RAID layer. LVM sits on top of md, so next we back up the LVM side.

First, look at the volume information:

```
DiskStation> pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               3.63 TB / not usable 2.88 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              951558
  Free PE               0
  Allocated PE          951558
  PV UUID               5li2xk-tZdQ-c63W-qpM7-jo9F-5xGg-xFP1Wr

DiskStation> vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.63 TB
  PE Size               4.00 MB
  Total PE              951558
  Alloc PE / Size       951558 / 3.63 TB
  Free  PE / Size       0 / 0
  VG UUID               UUCftW-HIFK-vmL0-0MTG-XzOT-BCTI-okhVf3

DiskStation> lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg1/syno_vg_reserved_area
  VG Name                vg1
  LV UUID                BmrrfE-vjMY-rJO8-C7OP-2los-KfeO-1hBrWb
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                12.00 MB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/vg1/volume_1
  VG Name                vg1
  LV UUID                80vCET-1yH4-KMXS-E0N7-3tiP-m24I-Gzw7ij
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.63 TB
  Current LE             951555
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:1
```

OK, back it up:

```
vgcfgbackup
```

The backup file lands in /etc/lvm/backup/vg1; copy that file off the box.

To restore (afterwards the volumes may also need reactivating, e.g. with `vgchange -ay vg1`):

```
vgcfgrestore -f vg1 vg1
pvscan
vgscan
lvscan
```

Copy the USB image, the fdisk record and the LVM backup file out to Dropbox, and everything is squared away.
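To make the offsite copy harder to forget, the pieces can first be bundled into one dated tarball. This is my own sketch; the paths in the comments are the ones used earlier in this post, and the demo runs against a scratch directory so it works anywhere.

```shell
# Scratch directory standing in for e.g. /root/nas-backup on the NAS.
BACKUP_DIR=$(mktemp -d)
OUT="$BACKUP_DIR.tar.gz"

# On the real box, collect the artifacts from the earlier steps here, e.g.:
#   cp /root/usb.img        "$BACKUP_DIR"/            # USB boot image
#   fdisk -l /dev/sda     > "$BACKUP_DIR"/sda.fdisk.txt
#   mdadm --examine --scan > "$BACKUP_DIR"/mdadm-scan.txt
#   cp /etc/lvm/backup/vg1  "$BACKUP_DIR"/            # LVM metadata
date > "$BACKUP_DIR/MANIFEST"   # placeholder so the demo tarball isn't empty

# Pack the whole directory, then list the result.
tar -czf "$OUT" -C "$(dirname "$BACKUP_DIR")" "$(basename "$BACKUP_DIR")"
tar -tzf "$OUT"
```

A single file is also much easier to sync to Dropbox than a loose pile of images and dumps.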

