• OUR TOWER SETUP 01 
      • RAID5 SETUP
      • 3 drives [6 TB] + 1 spare [1 TB]
  • available space at the time of writing: 2 drives [4 TB total] for data + 1 drive [2 TB] for checksums [the exact split depends on the size of the written data]
  • NB: sdc is faulty and needs to be replaced
      •  
      • OS SETUP
      • LIVE USB  - debian bullseye
      • only SSH server selected during the OS install [no standard system utilities]
      • GRUB bootloader on all 4 drives
      •  
      • RAID status
      • cat /proc/mdstat
      •  
      • /dev/sdc failed [1 bad sector: 2064] >> at the moment we are already using sdd, so sdc needs to be replaced
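      • a quick way to confirm from SMART data [our addition, not run in this session; smartctl -A dumps the vendor attribute table]:
      • sudo smartctl -A /dev/sdc | grep -i -e reallocated -e pending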
  •  
      • before removing the faulty drive physically
      • mdadm --manage /dev/md0 --remove /dev/<faulty drive>
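      • NB [our note]: mdadm normally refuses to --remove a device the array still considers active; if the kernel hasn't already marked it failed, fail it first:
      • mdadm --manage /dev/md0 --fail /dev/<faulty drive>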
      •  
      • After inserting a healthy drive 
      • mdadm --manage /dev/md0 --add /dev/<new drive>
      • Installed nmon 
      • apt install nmon
      • Installed midnight commander, openvpn and htop 
      • apt install mc openvpn htop sudo
      •  
      • echo "idle" > /sys/block/md0/md/sync_action [wasn't allowed because the OS is running on all drives simultaneously]
      •  
      • mounted a 16 GB USB stick to copy the openvpn config dir to /etc/openvpn/client/neutrinet
      • mount /dev/sde /mnt
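      • the full copy sequence, as a sketch [our reconstruction; we assume the config dir on the stick is called neutrinet, and a partitioned stick would be /dev/sde1 instead of /dev/sde]:
      • mount /dev/sde /mnt
      • cp -r /mnt/neutrinet /etc/openvpn/client/neutrinet
      • umount /mnt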
      •  
      • in midnight commander
      • ctrl + o [toggles the panels away for a clean view of the terminal]
      •  
      • browse subdirs of the dir and evaluate space allocation 
      • ncdu
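      • e.g. [our example] scan from the root filesystem:
      • ncdu /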
  • TODO:
  • disable pwd login
  • change default ssh port
  • add firewall
  • implement borg 
  • replace the sdc disk
      • RAID 
      • RAID0 = stripe [lose any drive and you lose everything]
      • RAID1 = mirror [you can lose 1 drive]
      • RAID5 = minimum 3 drives; whatever the number of drives, you can lose exactly 1
      •  
      • Raid calculator http://www.raid-calculator.com/
      • RAID 5 rebuild capacity: data and parity blocks are split and distributed over the 3 elements, so the array does not depend on any single drive and can be rebuilt after one failure
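      • worked example [our addition]: N drives of size S in RAID5 give (N - 1) x S usable space; here 3 x 2 TB = 6 TB raw, so (3 - 1) x 2 TB = 4 TB usable and 2 TB worth of checksums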
      •  
      • Howto setup a RAID : https://wiki.archlinux.org/title/RAID
      • MDADM
      • https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm
      • example: mdadm --detail /dev/md127
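      • a minimal creation sketch [our example, not run on this machine; device names assumed]:
      • mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1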
      •  
      • 1 md device assembled from 3 disks ( /dev/sda, /dev/sdb, /dev/sdc)
      •  
      • Disk tool >> gnome disk utility
      •  
      • drives [except USB sticks] expose SMART info about their health << use smartmontools to check the status
      • https://www.smartmontools.org/
      •  
      • sudo smartctl -i /dev/sda 
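      • overall health verdict [our addition; -H prints the SMART pass/fail status, -a the full report]:
      • sudo smartctl -H /dev/sda
      • sudo smartctl -a /dev/sda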
      •  
      • memtest << ram tester
      •  
      • lsblk 
      •  
      • UUID: to know a filesystem's signature, use its logical unique identifier for mounting instead of the device name
      •  
      • ls /dev/disk/by-uuid/ -l
      •  
      • UUID << unique filesystem id [will only change if you reformat]
      • UUID=<uuid>  /mnt/storage  [fstab line skeleton]
      •  
      • /etc/fstab
      •  
      • e.g
      • UUID=54a90f46-431b-4e1f-8012-b0364102711c /boot           ext2    defaults        0       2
      • UUID=51f324c8-b5bf-4917-b790-bd2fb2dfecd5 /boot           ext2    defaults        0       2
      •  
      • - what is the adequate CPU && RAM for the backup process?
      •  
      • 4 GB RAM
      • 2.4 GHz CPU
      •  
      • https://community.synology.com/enu/forum/1/post/130208
      • https://unix.stackexchange.com/questions/296138/starting-additional-openvpn-connection-under-systemd
      •  
      • AUTOMOUNTING USB <<
      • https://unix.stackexchange.com/questions/560358/how-do-i-automount-usb-drives-in-linux-debian
      •  
      • usb device >> /dev/sde
      • mountpoint >> /mnt/ooooo
      •  
      • systemd.mount
      • systemd.automount
      •  
      • sudo nano /etc/systemd/system/mnt-ooooo.mount
      •  
      • [Unit]
      • Description=Mount sde
      •  
      • [Mount]
      • What=/dev/disk/by-uuid/a164f220-cc96-450f-aa4a-27849ed21d44
      • Where=/mnt/ooooo
      • Type=auto
      • Options=defaults
      •  
      • [Install]
      • WantedBy=multi-user.target
      •  
      •  
      • sudo nano /etc/systemd/system/mnt-ooooo.automount
      •  
      • [Unit]
      • Description=Automount usb
      •  
      • [Automount]
      • Where=/mnt/ooooo
      •  
      • [Install]
      • WantedBy=multi-user.target
      •  
      • sudo systemctl daemon-reload
      • sudo systemctl enable --now  mnt-ooooo.mount mnt-ooooo.automount
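      • NB [our note]: enabling just the .automount would be enough, since the .mount is pulled in on first access; to test the trigger:
      • ls /mnt/ooooo
      • systemctl status mnt-ooooo.automount mnt-ooooo.mount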
      •  
      • TO STOP [not strictly necessary, but recommended before unplugging]
      • umount /mnt/ooooo
      •  
      • to do 
      • RSYNC or RESTIC /BORG - autoplug 
      • https://wiki.archlinux.org/title/Synchronization_and_backup_programs#Chunk-based_increments
      • SSHFS ??
      •  
      • automount usb https://unix.stackexchange.com/questions/560358/how-do-i-automount-usb-drives-in-linux-debian
      • https://wiki.archlinux.org/title/SSHFS#Mounting
      • https://linuxconfig.org/how-to-use-google-drive-on-linux
      •  
      • sudo dd bs=4M status=progress if=<>.img of=/dev/<>
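      • NB [our note]: double-check the target device first, dd overwrites it without asking:
      • lsblk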
      •  
      •  
      • Troubleshooting after power surge ==================================================
      •  
      • TIERCE's email:
      •     
      • - remove the 3rd drive (sdc, pale blue sata)  
      • - boot from a live usb containing mdadm, like  https://www.system-rescue.org/Download/  
      •  
      • - have a look if the liveusb has detected the existing raid.  
      • # cat /proc/mdstat  - check if there is a /dev/md127 or /dev/md0
      •  
      • $ echo "repair" > /sys/block/md127/md/sync_action  (replace md127 by the found raid if there is one)  
      •  
      • - look if the repair resumes from 95.2% or not.  
      • # cat /proc/mdstat  
      •  
      • - if yes, let the sync finish to 100% and reboot.  
      •  
      • -------------------------------------------------------------------------
      • OUR NOTES [follow-up on Tierce's email]:
      •  
      • - have a look if the liveusb has detected the existing raid. 
      •  
      • $ cat /proc/mdstat
      •  
    • Personalities : [raid6] [raid5] [raid4]
    • md127 : inactive sdb1[1] sda1[0] sdc1[3]
      • 5860144120 blocks super 1.2
      •  
    • unused devices: <none>
    •  
      • - try to force the repair  
      • $ echo "repair" > /sys/block/md127/md/sync_action
      •  
      • we hit a permission issue when running this command as a normal user
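      • NB [our note]: sudo on echo doesn't help here because the redirection is done by the unprivileged shell; a tee pipe works:
      • echo "repair" | sudo tee /sys/block/md127/md/sync_action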
      •  
      • $ lsblk
      • ___ all disks have an md127 sub-block except for sdc, and none of them are mounted
      • ___ /etc/fstab is empty
      •  
      • we manage to see 4 disks, and also md127
      • we see 3 md127 member devices plus sdc
      • - do we have to mount the disks?
      •  
      • we also ran 
      • $ mdadm --manage /dev/md0 --remove /dev/sdc1
      • mdadm: hot remove failed for /dev/sdc1: No such file or directory
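      • NB [our note]: the array was assembled as md127 here (see mdstat above), so /dev/md0 is likely the wrong target; probably worth retrying as:
      • mdadm --manage /dev/md127 --remove /dev/sdc1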
      •  
      • OUR TOWER SETUP 02 2021-09-28 \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
      •  
      • RAID5 : 3 x 2 TB drives + 0 spare
      • available space at the time of writing data : the equivalent of 2 drives [4 TB total] for data + 1 drive [2 TB] for checksums
      •  
      • OS SETUP
      • only SSH server selected during the OS install [no standard system utilities]
      • GRUB bootloader on all 3 drives
      •  
      •  
      • -- check raid status
      • cat /proc/mdstat                           
      •  
      • -- installs
      • apt install mc openvpn htop sudo nmon      
      •  
      • -- check drives
      • /sbin/mdadm --detail /dev/md0              
      •  
      • -- openvpn
      • copied openvpn config files [from neutrinet dir] to >>> /etc/openvpn/client 
      • renamed openvpn.ovpn to openvpn.conf
      • created system file /etc/systemd/system/openvpn@neutrinet.service 
      •  
      • [Unit]
      • Description=OpenVPN service for %I
      • After=syslog.target network-online.target
      • Wants=network-online.target
      • Documentation=man:openvpn(8)
      • Documentation=https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage
      • Documentation=https://community.openvpn.net/openvpn/wiki/HOWTO
      •  
      • [Service]
      • Type=notify
      • PrivateTmp=true
      • WorkingDirectory=/etc/openvpn/client/%i/
      • ExecStart=/usr/sbin/openvpn --status %t/openvpn-server/status-%i.log --status-version 2 --suppress-timestamps --cipher AES-256-GCM --ncp-ciphers AES-256-GCM:AES-128-GCM:AES-256-CBC:AES-128-CBC:BF-CBC --config /etc/openvpn/client/%i.conf
      • CapabilityBoundingSet=CAP_IPC_LOCK CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETGID CAP_SETUID CAP_SYS_CHROOT CAP_DAC_OVERRIDE
      • LimitNPROC=10
      • DeviceAllow=/dev/null rw
      • DeviceAllow=/dev/net/tun rw
      • ProtectSystem=true
      • ProtectHome=true
      • KillMode=process
      • RestartSec=5s
      • Restart=on-failure
      •  
      • [Install]
      • WantedBy=multi-user.target
      •  
      • systemctl daemon-reload                    
      • systemctl enable openvpn@neutrinet.service 
      • systemctl start openvpn@neutrinet.service  
      • systemctl status openvpn@neutrinet.service 
      • ● openvpn@neutrinet.service - OpenVPN service for neutrinet
      •      Loaded: loaded (/etc/systemd/system/openvpn@neutrinet.service; enabl>
      •      Active: active (running) since Wed 2021-09-29 12:37:23 BST; 1h 4min >
      •        Docs: man:openvpn(8)
      •              https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage
      •              https://community.openvpn.net/openvpn/wiki/HOWTO
      •    Main PID: 72302 (openvpn)
      •      Status: "Initialization Sequence Completed"
      •       Tasks: 1 (limit: 1122)
      •      Memory: 1.6M
      •         CPU: 1.466s
      •      CGroup: /system.slice/system-openvpn.slice/openvpn@neutrinet.service
      •              └─72302 /usr/sbin/openvpn --status /run/openvpn-server/statu>
      •  
      • Sep 29 12:37:31 s14 openvpn[72302]: net_iface_mtu_set: mtu 1500 for tun0
      • Sep 29 12:37:31 s14 openvpn[72302]: net_iface_up: set tun0 up
      • Sep 29 12:37:31 s14 openvpn[72302]: net_addr_v4_add: 80.67.181.168/25 dev>
      • Sep 29 12:37:31 s14 openvpn[72302]: net_iface_mtu_set: mtu 1500 for tun0
      • Sep 29 12:37:31 s14 openvpn[72302]: net_iface_up: set tun0 up
      • Sep 29 12:37:31 s14 openvpn[72302]: net_addr_v6_add: 2001:913:1fff:ffff::>
      • Sep 29 12:37:31 s14 openvpn[72302]: add_route_ipv6(2000::/3 -> 2001:913:1>
      • Sep 29 12:37:31 s14 openvpn[72302]: add_route_ipv6(2001:913:1f00::/40 -> >
      • Sep 29 12:37:31 s14 openvpn[72302]: WARNING: this configuration may cache>
      • Sep 29 12:37:31 s14 openvpn[72302]: Initialization Sequence Completed
      •  
      • cat /proc/mdstat                           
      • Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
      • md0 : active raid5 sda1[0] sdb1[1] sdc1[2]
      •       3906762752 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      •       bitmap: 4/15 pages [16KB], 65536KB chunk
      • unused devices: <none>
      •  
      • lsblk                                      
      • NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
      • sda       8:0    0  1.8T  0 disk  
      • └─sda1    8:1    0  1.8T  0 part  
      •   └─md0   9:0    0  3.6T  0 raid5 /
      • sdb       8:16   0  1.8T  0 disk  
      • └─sdb1    8:17   0  1.8T  0 part  
      •   └─md0   9:0    0  3.6T  0 raid5 /
      • sdc       8:32   0  1.8T  0 disk  
      • └─sdc1    8:33   0  1.8T  0 part  
      •   └─md0   9:0    0  3.6T  0 raid5 /
      •  
      • df -h                                      
      • Filesystem      Size  Used Avail Use% Mounted on
      • udev            468M     0  468M   0% /dev
      • tmpfs            97M  720K   97M   1% /run
      • /dev/md0        3.6T  1.1G  3.4T   1% /
      • tmpfs           485M     0  485M   0% /dev/shm
      • tmpfs           5.0M     0  5.0M   0% /run/lock
      • tmpfs            97M     0   97M   0% /run/user/0
      • tmpfs            97M     0   97M   0% /run/user/1000
      •  
      • sudo /sbin/mdadm --detail /dev/md0         
      • [sudo] password for leverburns: 
      • /dev/md0:
      •            Version : 1.2
      •      Creation Time : Tue Sep 28 00:43:46 2021
      •         Raid Level : raid5
      •         Array Size : 3906762752 (3725.78 GiB 4000.53 GB)
      •      Used Dev Size : 1953381376 (1862.89 GiB 2000.26 GB)
      •       Raid Devices : 3
      •      Total Devices : 3
      •        Persistence : Superblock is persistent
      •  
      •      Intent Bitmap : Internal
      •  
      •        Update Time : Tue Sep 28 09:42:59 2021
      •              State : active 
      •     Active Devices : 3
      •    Working Devices : 3
      •     Failed Devices : 0
      •      Spare Devices : 0
      •  
      •             Layout : left-symmetric
      •         Chunk Size : 512K
      •  
      • Consistency Policy : bitmap
      •  
      •               Name : s14:0  (local to host s14)
      •               UUID : f394d34b:b6b102d2:a007a7d3:36bff5ab
      •             Events : 11543
      •  
      •     Number   Major   Minor   RaidDevice State
      •        0       8        1        0      active sync   /dev/sda1
      •        1       8       17        1      active sync   /dev/sdb1
      •        2       8       33        2      active sync   /dev/sdc1
      •