BUILD YOUR OWN CENTOS COMPUTER CLUSTER
======================================

This document contains the instructions to build a cluster of PCs running the CentOS Linux system with a shared network file system (NFS). The modus operandi of the cluster mimics that of the Rocks Cluster (see http://www.rocksclusters.org/). The customization and packages to be pre-installed are those commonly used by my research group, which carries out research on computational materials science, DFT, molecular dynamics, DFTB, machine learning, Quantum Monte Carlo, and computational physics.

The customization of the cluster takes into account that selected computational packages/software are to be pre-installed once and for all during the setting up of the frontend and the addition of the compute nodes. This allows the cluster to be used in the long run without the need to repeatedly install the same packages as new nodes or new users are added.

Going through this script package step by step, it is possible to understand what the scripts are logically doing. You may or may not need all the pre-installed software intended by this script package. The script package can be adopted and adapted to suit your own use, but it requires you to understand clearly what these scripts are doing (they are understandable by reading the instructions in the scripts, written in shell script). Some procedures are essential while others are optional, and some require specific tuning based on your systems (e.g., IP addresses and gateways).

The general structure of the cluster is as follows: the cluster contains a frontend node (a.k.a. 'mother node') plus a number of compute nodes. All nodes are connected via a 100/1000 Mbps switch. In principle this document can also be applied to other Linux OSes with or without modification. The 'cluster' is not one that is defined in a strict sense (like the Rocks Clusters), as ours is much simpler.

If you want to use this script package to set up a CentOS cluster in your institution, you need to modify only the following information, which is specific to your network environment. The default values used in this script package are listed below. You will need this information to build the cluster, namely: DOMAIN, IPADDR, GATEWAY, DNS1 and DNS2 (DNS2 is optional). Ask your network admin for this information. For my case, these are

HOSTNAME=anicca
IPADDR=10.205.18.133
DOMAIN=usm.my
GATEWAY=10.205.19.254
DNS1=10.202.1.1
DNS2=10.202.1.2    ### DNS2 is optional

======================
Hardware requirements:
======================

i).   One frontend PC + at least one compute node PC.
ii).  Two hard disks on the frontend (one of small capacity and the other large, e.g., 500 GiB + 1 TiB).
iii). LAN cables, plus a 100/1000 Mbps switch.
iv).  The frontend node has to be equipped with two network cards, a built-in one and an externally plugged-in one.
v).   All compute nodes must be equipped with a minimum of one network card (either built-in or external).

Naming convention: We shall denote the built-in network card eth0 and the external network card eth1. eth0 is the network card that connects to the switch, while eth1 is the network card that connects to the internet. Note that it is not essential to stick strictly to this convention; it is merely for the sake of naming consistency.

Important IPs to take note of: The IP for the frontend at eth0 is by default set to 192.168.1.10. The IPs for node1, node2, node3, etc. at their respective eth0 are by default set to (in sequential order): 192.168.1.21, 192.168.1.22, 192.168.1.23, ...
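As a convenience, you may wish to record these site-specific values in one place before customizing the scripts. The snippet below is only an illustrative sketch: the variable names follow those used in frontend-1of3.txt, but the file name cluster_site.conf and the extra FRONTEND_LOCAL_IP variable are hypothetical additions of my own, not something the script package requires.

# cluster_site.conf -- hypothetical scratch file of site-specific settings (edit to suit your network)
HOSTNAME=anicca                   # frontend hostname
IPADDR=10.205.18.133              # frontend IP on eth1 (internet-facing card)
DOMAIN=usm.my
GATEWAY=10.205.19.254
DNS1=10.202.1.1
DNS2=10.202.1.2                   # optional
FRONTEND_LOCAL_IP=192.168.1.10    # frontend IP on eth0 (switch-facing card), fixed by convention

You can then copy these values into frontend-1of3.txt when you customize it in step (10) below.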
==========================================================================================
To build a CentOS cluster, follow the procedure below step-by-step in sequential order.
==========================================================================================

(2) Use Rufus or other software to burn the latest CentOS iso onto a bootable thumb drive. This manuscript is prepared based on CentOS-7-x86_64-DVD-1804, but in principle it should also work for other versions of CentOS (version 6 or above).

(4) Connect the eth0 port of every PC in the cluster to a 100/1000 Mbps switch. eth1 of the frontend is to be connected to the internet network (e.g., in USM, it is the usm.my network).

(6) Install CentOS using the bootable thumb drive onto the frontend's smaller hard disk. Leave the larger hard disk as it is for the moment. For the sake of uniformity, you should use the installation option 'Development and Creative Workstation'. Choose to add all software packages offered at the installation prompt. Use a common root password for the frontend as well as all compute nodes.

(7) After the installation of CentOS on the frontend has completed, proceed to the next step.

(8) Check out the labels of the hard disks in the frontend using fdisk -l. Say the larger hard disk is labelled /dev/sdb. Keep a backup copy of /etc/fstab, and mount the larger hard disk (/dev/sdb in this case) on the folder /export in the root directory of the frontend. This can be done by typing the following commands (as su) in the terminal:

mkdir /export
mount -t xfs /dev/sdb /export
chmod -R 777 /export
cp /etc/fstab /etc/fstab.orig

Mount this hard disk permanently by adding the line

/dev/sdb    /export    xfs    rw    2 2

to /etc/fstab. The permanent mounting of /dev/sdb to /export will take effect after a reboot. In case the hard disk is not formatted properly, it may refuse to be mounted. The hard disk can be forcefully formatted into XFS format using

mkfs.xfs -f /dev/sdb

Alternatively, a hard disk can also be formatted using the 'gnome-disks' GUI application.

(OPTIONAL) The task of formatting and mounting the external hard disk onto the CentOS root directory is well described in the executable instruction mount_export.txt, downloadable from
http://anicca.usm.my/tlyoon/configrepo/howto/customise_centos/Centos_cluster/mount_export.txt

=============================================================
(10) Download the following script into /root/
=============================================================

http://anicca.usm.my/configrepo/howto/customise_centos/Centos_cluster/frontend-1of3.txt

Customize the following variables in the frontend-1of3.txt script to suit your case:

HOSTNAME=
IPADDR=
DOMAIN=
GATEWAY=
DNS1=
DNS2=

Then, as su in the frontend, run

chmod +x frontend-1of3.txt
./frontend-1of3.txt

frontend-1of3.txt will do the necessary preparatory configuration for the frontend. This includes (i) setting SELINUX to permissive in /etc/sysconfig/selinux, (ii) activating sshd.service so that ssh can work, (iii) setting the IP of the frontend by making changes to its network configuration, such as assigning the IP address, setting up the DNS servers, etc. (the instruction is based on http://www.techkaki.com/2011/08/how-to-configure-static-ip-address-on-centos-6/), (iv) creating /state/partition1, (v) etc. The frontend will reboot at the end of frontend-1of3.txt.
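If you ever need to verify or redo any of these preparatory steps by hand, the commands below are a rough, hedged equivalent of items (i), (ii) and (iv) above; the actual frontend-1of3.txt may do more or differ in detail.

# (i) set SELinux to permissive (fully takes effect after reboot)
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config   # /etc/sysconfig/selinux is a symlink to this file
setenforce 0                                                     # switch to permissive immediately

# (ii) enable and start the ssh daemon
systemctl enable sshd
systemctl start sshd

# (iv) create the local scratch directory
mkdir -p /state/partition1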
(12) After rebooting the frontend, log in as root. Check the frontend to see whether the following functions have been configured successfully by frontend-1of3.txt.

(a) Check the mode of SELINUX using the following commands:

getenforce
sestatus

SELINUX should be set to permissive. This can be independently confirmed by checking that the file /etc/sysconfig/selinux has a line stating 'SELINUX=permissive'.

(b) Check manually whether eth0 and eth1 are connected to the switch and the internet respectively. This can be done by issuing the following commands one by one:

(i)   ping google.com
(ii)  ssh into the frontend from a third-party terminal
(iii) ssh into a third-party terminal from the frontend
(iv)  ifconfig
(v)   cat /etc/sysconfig/network-scripts/ifcfg-$nc0
(vi)  cat /etc/sysconfig/network-scripts/ifcfg-$nc1

where

nc0=$(lshw -class network | grep -E 'Ethernet interface|logical name' | grep 'logical name' | awk '{print $3}' | awk '!/vir/' | tail -n1)
nc1=$(lshw -class network | grep -E 'Ethernet interface|logical name' | grep 'logical name' | awk '{print $3}' | awk '!/vir/' | awk 'NR==1{print}')

(14) Check that the HWADDR for both $nc0 and $nc1 are explicitly specified. Assure that

(a) DOMAIN=local, DNS1=127.0.0.1 for the network card $nc0 (=eth0)
(b) DOMAIN=xxx, DNS1=xxx, DNS2=xxx for the network card $nc1 (=eth1)

where xxx are the values of DOMAIN, DNS1 and DNS2 for $nc1 that you have set in frontend-1of3.txt.

The network configuration that frontend-1of3.txt attempts to establish may fail to work. In such a case, the network configuration can be set up manually by tweaking NetworkManager. To use the NetworkManager GUI to establish a connection to the internet via the eth1 network card, do as follows: NetworkManager can be found via Settings -> Network. Under the 'Wired' panel, click on the network card item. In case you don't see the NetworkManager icon (which can happen when a fresh copy of CentOS has just been set up), issue the command

systemctl restart NetworkManager

or

service NetworkManager restart

(try whichever works) in the terminal to launch it. The connection to the internet (via eth1) is established if ping google.com returns positive responses.

(15) Alternatively, the required network configuration (as stated in item (14)) can be configured manually, if needed, by editing the files

/etc/sysconfig/network-scripts/ifcfg-$nc0
/etc/sysconfig/network-scripts/ifcfg-$nc1

Issue service network restart to restart the network service. However, this alternative may result in unexpected glitches. Avoid doing it if you don't know exactly what you are doing.

(16) In any case, both network cards must be active and connected before proceeding to the next step. Reboot the frontend if you have done any manual configuration. Often both cards will be connected after rebooting. Be reminded that, in our convention, the network card $nc0 (=eth0) is to be connected to the switch, while $nc1 (=eth1) is to be connected to the internet.

========================================================================================
(18) Download and execute the following script in /root/ once the internet connection is established
========================================================================================

wget http://anicca.usm.my/configrepo/howto/customise_centos/Centos_cluster/frontend-2of3.txt
chmod +x frontend-2of3.txt
./frontend-2of3.txt

Note that the contents of frontend-2of3.txt will accommodate a total of 12 nodes. If you wish to have more nodes in your cluster, extend the list in /etc/exports by adding, e.g.,

node13='/export 192.168.1.32/24(rw,sync,no_root_squash)'
echo $node13 >> /etc/exports

CentOS will reboot at the end of the script.
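After the script has run (or after any manual edit of /etc/exports), a quick way to confirm the NFS exports are in place is sketched below. This assumes the standard CentOS 7 nfs-utils service names; the exact output will depend on how many node entries frontend-2of3.txt wrote.

cat /etc/exports                 # inspect the export list written by frontend-2of3.txt
exportfs -ra                     # re-read /etc/exports after any manual additions
showmount -e localhost           # list the directories currently exported by the frontend
systemctl status nfs-server      # confirm the NFS server is running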
All the steps described up to this stage must be completed before initiating the following steps.

===========================================================================
(20) Download and execute the following script in /share/apps/local/bin
===========================================================================

cd /share/apps/local/bin
wget http://anicca.usm.my/configrepo/howto/customise_centos/Centos_cluster/frontend-3of3.txt
chmod +x frontend-3of3.txt
./frontend-3of3.txt

This will download all required scripts into /share/apps/local/bin. In addition, it will also customize the content of ~/.bashrc for root and execute the basic_packages.txt script.

=============================
Customization of the frontend
=============================

Once the frontend is up and running, install the following three categories of packages step-by-step. The installation can be carried out independently while installing other nodes.

(20.1) CUDA packages, to be installed in the root directory. Manual responses are required when running this installation package.

cd /share/apps/configrepo/
wget http://comsics.usm.my/tlyoon/configrepo/howto/customise_centos/nvidia_cuda/inst_cuda_centos_11.1.0_local.txt
chmod +x inst_cuda_centos_11.1.0_local.txt
./inst_cuda_centos_11.1.0_local.txt

(20.2) Run packages0. Manual responses are required when running this installation package. It could be time-consuming since it involves large files. Issue the command:

packages0.txt

(20.3) packages1, to be installed in the /share/apps directory. Automated installation. Issue the command:

packages1.txt

Despite being time-consuming, packages1.txt is an automated installation process.

This finishes the part on setting up the frontend of the cluster. Now proceed to the setting up of the nodes.

===========================
Install CentOS in each node
===========================

(22) Install CentOS on all other compute nodes using the CentOS installation file on a pen drive. All nodes should use the same installation options as those used in setting up the frontend.

(24) During the installation of a node, physically connect the network card of the node (which is referred to as 'eth0' here; by default we choose the built-in/internal network card as eth0) to the switch.

(24.2) Manually identify the latest value of $ipaddlastnumber. This is the number which appears in the form compute-0-$ipaddlastnumber. Figure out the right value by checking the latest value in the list of compute-0-$ipaddlastnumber entries in /etc/hosts on the frontend. For example,

cat /etc/hosts
10.205.19.225   anicca.usm.my
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.1.10    anicca.local anicca
192.168.1.22    compute-0-22.local compute-0-22 c22
192.168.1.23    compute-0-23.local compute-0-23 c23
192.168.1.24    compute-0-24.local compute-0-24 c24
192.168.1.25    compute-0-25.local compute-0-25 c25
192.168.1.26    compute-0-26.local compute-0-26 c26
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6

In this example, the latest value of ipaddlastnumber from the existing list is ipaddlastnumber=26. Hence, to add a new node to the list in /etc/hosts, it has to be assigned a value of ipaddlastnumber=27.
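If you prefer not to read /etc/hosts by eye, a hedged one-liner such as the following (run on the frontend) prints the highest last octet currently in use; the new node then takes that value plus one. It assumes the /etc/hosts entries follow the compute-0-* format shown above.

grep 'compute-0-' /etc/hosts | awk '{print $1}' | awk -F'.' '{print $4}' | sort -n | tail -n1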
Fixing the correct value of $ipaddlastnumber is essential, as each node has to be assigned a unique integer as its identity.

(24.5) When installing a node using the CentOS installation pen drive, manually set the IP addresses using NetworkManager as follows:

For eth1 (if it exists; optional): Click the IPv4 tab. Set the network card to use DHCP, and make sure to check 'Connect Automatically'.

For eth0 (mandatory): Click the IPv4 tab. Check the 'Manual' option. Use the following settings:

Address   192.168.1.$ipaddlastnumber
Netmask   255.255.0.0
Gateway   192.168.1.10
DNS       192.168.1.10

(25) Note that if a node is equipped with two network cards, one will be used to connect to the switch (eth0) and the other (eth1) to the internet-accessing network. Having two network cards in a node is optional but preferred. If only one network card is present, it should be connected to the switch. This card will be identified as eth0, and there will be no eth1 card. The network card eth0 is used to connect to the local network. It is preferably one with a speed of 100/1000 Mbps.

(25.5) After successfully rebooting a node upon installing CentOS, check whether the node has connected to the internet via eth1 by issuing

ifconfig
ping google.com

If eth1 is present and has been configured correctly in the previous step, the node should be able to access the internet. If it does not, manually tweak the NetworkManager GUI until it does (assuming eth1 is present). However, connecting the node to the internet is optional.

(25.7) Manually tweak the NetworkManager GUI so that the eth0 network card, with the settings mentioned in (24.5), will 'connect automatically' every time the node boots up.

If the node does not get connected to the internet, or the ssh connection to the frontend fails despite both network cards being physically present, it may be due to a wrong guess of the physical identities of eth0 and eth1. One possible way to fix this issue is to manually swap the LAN cables between the two network cards to see whether the expected connection can be established. You may have to issue the command service network restart after swapping the LAN cables. If it still fails, just leave out eth1 (the connection to the internet) and use only eth0 for the local network connection. A small helper for identifying which card is which is sketched below.
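If you are unsure which physical interface is actually cabled to the switch, a check such as the following (run on the node, as root) lists each interface together with its link state; the interface whose link goes from 'no' to 'yes' when you plug in a cable is the one attached to that port. This is only a suggested helper, assuming ethtool is available (it is part of the standard CentOS installation).

for dev in $(ls /sys/class/net | grep -v '^lo$'); do
    echo "== $dev =="
    ethtool $dev | grep 'Link detected'    # 'yes' means a cable with an active link is plugged in
done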
(25.9.1) Once the node has successfully established a connection to the frontend via the eth0 card, the node should see a positive response when pinging the frontend from the node's terminal:

ping 192.168.1.10

(25.9.2) If the node has also successfully established a connection to the internet via the eth1 card, and if (25.9.1) is established, you should see output similar to the following sample (from ifconfig):

eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
      inet 10.205.19.29  netmask 255.255.254.0  broadcast 10.205.19.255
      inet6 fe80::c179:dee0:8b19:51b0  prefixlen 64  scopeid 0x20<link>
      ether 54:04:a6:28:c8:0c  txqueuelen 1000  (Ethernet)
      RX packets 34649849  bytes 3348276078 (3.1 GiB)
      RX errors 0  dropped 0  overruns 0  frame 0
      TX packets 411108  bytes 34124296 (32.5 MiB)
      TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
      device interrupt 18  memory 0xfb600000-fb620000

enp8s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
      inet 192.168.1.21  netmask 255.255.0.0  broadcast 192.168.255.255
      inet6 fe80::a109:26ab:e943:65ae  prefixlen 64  scopeid 0x20<link>
      ether 1c:af:f7:ed:32:d3  txqueuelen 1000  (Ethernet)
      RX packets 217611  bytes 179611464 (171.2 MiB)
      RX errors 0  dropped 0  overruns 0  frame 0
      TX packets 171507  bytes 17822576 (16.9 MiB)
      TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

=============================================
Execute the coc-nodes_manual script in a node
=============================================

(26) In a node which has already established an ssh connection to the frontend and/or a connection to the internet, copy the following file into the /root directory (or anywhere) of the freshly installed node:

http://anicca.usm.my/configrepo/howto/customise_centos/Centos_cluster/coc-nodes_manual

This script can be obtained in either of the following ways:

(i) if an internet connection from the current node is available:
wget http://anicca.usm.my/configrepo/howto/customise_centos/Centos_cluster/coc-nodes_manual

(ii) if an internet connection from the current node is not available but ssh into the mother node is available:
scp 192.168.1.10:/share/apps/local/bin/coc-nodes_manual .

(27) Run the script:

chmod +x coc-nodes_manual
./coc-nodes_manual

(27.5) Among others, the major functions of the script coc-nodes_manual include:

(a) Automatically setting the value of $ipaddlastnumber for the new node by checking the local IP of eth0 (via the command line: ifconfig | grep 255.255.0.0 | awk '{ print $2}' | awk -F'.' '{print $4}'). Note that the value of $ipaddlastnumber grepped here is the value that you entered while installing CentOS on the current node in steps (24.2) and (24.5) above.

(b) Mounting the node to the frontend as an NFS client, setting the hostname of the node to 'compute-0-$ipaddlastnumber', alias c-$ipaddlastnumber. It will create a shared directory /share/ in the compute-0-$ipaddlastnumber node, which is an NFS directory physically kept on the hard disk of the frontend. It will set the hostname in /etc/hostname of the new node, as well as generating the necessary content in the .bashrc file for root in the node (via gen_bashrc-root). In addition, a local folder /state/partition1 will also be created in the node.

(c) Setting GSSAPIAuthentication to 'no' in both /etc/ssh/sshd_config and /etc/ssh/ssh_config, i.e.,

GSSAPIAuthentication no

(d) It will also yum install epel-release and sshpass, and execute the basic_packages.txt script. Item (d) will work completely only if the node is connected to the internet. If it fails due to the absence of an internet connection, just leave it alone, as the node can still work even if installation step (d) fails.

(28) At the ending stage of the execution of coc-nodes_manual, you will be prompted to establish passwordless ssh for root from the current node -> the frontend (if ssh has been established).
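For reference, establishing passwordless root ssh from a node to the frontend amounts to the usual key exchange. A manual equivalent, should you ever need to redo it outside coc-nodes_manual, is roughly as follows (the exact prompts of coc-nodes_manual may differ):

# on the node, as root
ssh-keygen -t rsa                    # accept the defaults; leave the passphrase empty
ssh-copy-id root@192.168.1.10        # copy the public key to the frontend
ssh root@192.168.1.10 hostname       # should now return the frontend's hostname without a password prompt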
(29) After the execution of coc-nodes_manual has completed, reboot the node manually and log in as root. Assure that the node still works correctly after rebooting. Specifically,

a) The node must be ssh-able to and from the mother node (192.168.1.10).
b) The local IP set, 192.168.1.$ipaddlastnumber, is consistent with the other nodes in the cluster.
c) NFS is established correctly as specified in /etc/fstab.
d) The shared directory /share/ is mounted and visible.
e) The node's IP is correctly recorded in /etc/hosts, which is linked to /share/cluster/hosts.
f) Try to copy a large file (~200 MB) to and from the frontend and the new node. If the eth0 connection via the switch works properly, the transfer rate of the file-copying process should be of the order of > 20 MB/s (or at least larger than ~12 MB/s).
g) ssh into other existing nodes via a command such as ssh c-21 or ssh 192.168.1.21 (assuming c-21 is not the node itself).

(37) After rebooting a new node, you will be forced to create a local user. For the sake of consistency, create a local user 'user'. But the naming of the user is immaterial.

(37.5) The most preliminary setting up of a node is considered done if items (a) - (g) are successfully established.

(38) Repeat steps (22) - (37.5) to set up all other nodes.

(39) All users existing on the mother node are to be synced to any new node. This can be done by simply executing the following commands as su in the frontend:

su
ssh 192.168.1.10 coc-sync-users

The functions of coc-sync-users are as follows:

(a) Users will be added to a node if they exist on the frontend but not on the node.
(b) The uid of a user on the nodes will be made to match that on the frontend if they differ (so that a user has a common uid on both the node and the frontend).
(c) Users that exist on a local node but not on the frontend will not be added to the frontend. These local users are referred to as 'orphaned' users. If the uid of an 'orphaned' user clashes with any uid on the frontend, the uid of the 'orphaned' user on the local node will be modified to avoid clashing with a uid on the frontend.
(d) The home directories of all users are also to be linked to those kept in the NFS partition /share/home/ on the frontend.

Whenever one or more new nodes are added to the cluster, run (39) to sync all users across the cluster.

(40) For best practice in the maintenance of the cluster, always assure that passwordless ssh by su to any node in the cluster is maintained and working.

==========================
Customization of the nodes
==========================

Once a node is up and running, install the following categories of packages step-by-step.

(64) CUDA packages

cd /share/apps/configrepo/
wget http://comsics.usm.my/tlyoon/configrepo/howto/customise_centos/nvidia_cuda/inst_cuda_centos_11.1.0_local.txt
chmod +x inst_cuda_centos_11.1.0_local.txt
./inst_cuda_centos_11.1.0_local.txt

(66) statepartition1_packages, to be installed in the /state/partition1 directory. Manual responses are required when running this installation package. It could be time-consuming since it involves large files. Issue the command line:

statepartition1_packages.txt

=============================
Adding a user to the cluster
=============================

(72) To add a new user to the cluster, issue the following command in a terminal on the frontend as su:

coc-add_new_user

Root will be prompted for the username to be added. After providing the username, a new file, newuser.dat, will be created as /share/apps/configrepo/users_data/newuser.dat. In the file newuser.dat, a one-line record about the new user will be created, in the format

$index $user $uid $passwd

For example,

19 mockuser1 1019 ds!Jw3QXZ

Note that the value of $index is immaterial. The password is generated automatically. The username suggested at the prompt will be subjected to an automatic check against usernames that already exist in the cluster. In case the suggested username has already been taken, the addition of the user will be rejected and a new username needs to be suggested. Retry /share/apps/local/bin/coc-add_new_user.
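Before suggesting a username, you can check on the frontend whether it is already taken. This is only a convenience pre-check (coc-add_new_user performs its own check in any case), shown here with the hypothetical candidate name mockuser1:

getent passwd mockuser1 >/dev/null && echo "mockuser1 already exists" || echo "mockuser1 is available"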
(76) The script /share/apps/local/bin/coc-add_new_user will first attempt to create the new user on the frontend, and then ssh into each node in turn to create the new user there. The process is fully automatic. The password of the added user is generated randomly and archived in /share/apps/configrepo/users_data/userpass_xx.dat. The permissions of the directory /share/apps/configrepo/users_data/ are 700, hence it can only be accessed by root.

(78) Towards the end of the /share/apps/local/bin/coc-add_new_user script, the script coc-pwlssh2 will be invoked automatically. This will create passwordless ssh into each node to and from the frontend for the user $user. The script will also customize the .bashrc file for the user $user.

(80) After the script coc-add_new_user is done, check that passwordless ssh has indeed been achieved for the user $user by performing some trial ssh, e.g., ssh c-21, to and from the frontend.

=================================
Syncing all users into a new node
=================================

(90) If a new node is later added to the cluster, perform (39).

==============================
Remove a user from the cluster
==============================

(92) To remove a named user globally from the cluster, issue the command

coc-remove-a-user-globally

(95) The administrator can also execute the following script from time to time to sync all users across the cluster:

coc-sync-users

(96) At times, the passwordless ssh of some users between the frontend and certain nodes may stop working. The administrator can execute the following script to fix this problem so that passwordless ssh for all users between the nodes and the frontend can be resumed:

coc-sync-pwlssh

(97) Miscellaneous management issues:

(i) The entries generated by Anaconda in a user's ~/.bashrc may interfere unexpectedly with the entries generated by add_g09_user (the Gaussian setup script). To solve the issue, simply source $g09root/bsd/g09.profile in the very first line of ~/.bashrc, before other sourcing or exports are performed.

(ii) Occasionally, passwordless ssh by root to and from the nodes and frontend may be lost. To fix this, run ssh-keygen -t rsa in each node to generate a new set of id_rsa and id_rsa.pub keys, and then run ssh-copy-id c2X from the frontend one by one. If this does not work, remove the .ssh directory in each affected user's home directory and redo the above steps.

(iii) It is possible that passwordless ssh by an existing user to and from the nodes and frontend may be lost. Restoring the passwordless function in this case has proven unfruitful: only the ssh from node to frontend may be recovered, but not in the opposite direction.

(iv) At times, ssh into a node by either root or a generic user may be accompanied by the warning 'ABRT has detected 1 problem(s). For more info run: abrt-cli list --since 1602006848', and may take a very long time (or even hang).
To fix the problem, issue the following command as su on the client host:

abrt-auto-reporting enabled

(v) At times, the LAN cable contact in a node's network card at eth0 may become loose, rendering the node detached from the cluster. Even after the contact is re-secured manually, the node may still remain detached from the local network. It is possible to restore the node's local network connection to the cluster without rebooting. To do so, check out the name of the eth0 network card that is down, e.g.,

[root@compute-0-23 ~]# ifconfig
enp6s1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.205.19.83  netmask 255.255.255.0  broadcast 10.205.19.255
        inet6 fe80::1a7:7d8a:879c:a162  prefixlen 64  scopeid 0x20<link>
        ether 00:e0:4c:69:36:75  txqueuelen 1000  (Ethernet)
        RX packets 929  bytes 85131 (83.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 142  bytes 14938 (14.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp7s0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 4c:ed:fb:41:05:68  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 17  bytes 2473 (2.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 12  bytes 988 (988.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12  bytes 988 (988.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

cd /etc/sysconfig/network-scripts
[root@compute-0-23 network-scripts]# ls
ifcfg-enp6s0  ifcfg-Wired_connection_1  ifdown-eth   ifdown-ipv6  ifdown-ppp     ifdown-Team      ifup          ifup-eth  ifup-ipv6  ifup-plusb  ifup-routes  ifup-TeamPort  init.ipv6-global
ifcfg-enp7s0  ifdown                    ifdown-ib    ifdown-isdn  ifdown-routes  ifdown-TeamPort  ifup-aliases  ifup-ib   ifup-isdn  ifup-post   ifup-sit     ifup-tunnel    network-functions
ifcfg-lo      ifdown-bnep               ifdown-ippp  ifdown-post  ifdown-sit     ifdown-tunnel    ifup-bnep     ifup-ippp ifup-plip  ifup-ppp    ifup-Team    ifup-wireless  network-functions-ipv6

The network card that needs to be fixed in this case is enp7s0. To fix the connection,

ifconfig enp7s0 up

Yoon Tiem Leong
Universiti Sains Malaysia
11800 USM
Penang, Malaysia

updated 2 Oct 2020