
Saturday 22 January 2022

Can not remove Logical Volume, message "Logical volume contains a filesystem in use"

On my local machine, I have multiple logical volumes. When I try to remove one of them, I get the error below.

[root@redhat001:~]# lvremove /dev/mapper/system-lv_redhat

Logical volume /dev/mapper/system-lv_redhat contains a filesystem in use.

To remove this logical volume on a Red Hat Linux system, please follow the methods below.


1. Check whether the logical volume is mounted on your system. To check this, you can simply use the "mount" command piped to grep.

[root@redhat001:~]# mount | grep /dev/mapper/system-lv_redhat

The above command tells you whether the logical volume is mounted. If the filesystem is mounted, please unmount it first, remove its entry from /etc/fstab, and then try to remove the logical volume.
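For example, if the output shows the volume mounted on /redhat (a hypothetical mount point used here only for illustration), the sequence would look roughly like this:

[root@redhat001:~]# umount /redhat
[root@redhat001:~]# vi /etc/fstab    # remove or comment out the line for /dev/mapper/system-lv_redhat
[root@redhat001:~]# lvremove /dev/mapper/system-lv_redhat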


2. If no logical volume is mounted but you still cannot remove it, check whether any open file or active process is using the logical volume. To check this, please run one of the commands below.

[root@redhat001:~]# lsof | grep /dev/mapper/system-lv_redhat

or 

[root@redhat001:~]# ps -ef | grep /dev/mapper/system-lv_redhat

The first command shows whether any file on this logical volume is currently open; if so, please check and close those files.

Use the second command to find any process using this logical volume, and kill that process. If the process is tied to the root filesystem, I would suggest that instead of killing it you stop any running application and reboot the server, which will terminate the process automatically.
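If the volume is mounted somewhere, fuser can also show which processes are using it (here /redhat is a hypothetical mount point):

[root@redhat001:~]# fuser -vm /redhat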

3. You can deactivate the logical volume and then remove it.

[root@redhat001:~]# lvchange -an /dev/mapper/system-lv_redhat

[root@redhat001:~]# lvs

[root@redhat001:~]# lvremove -f /dev/mapper/system-lv_redhat

Deactivate the logical volume with the lvchange command, then run lvs to verify that it has actually been deactivated before removing it.

I hope one of the above three methods resolves this error for you. If you have any questions, please drop a comment on this article.

Friday 7 January 2022

mount: wrong fs type, bad option, bad superblock on /dev/mapper/system-lv_redhat error and solution

When we mount a filesystem or disk on a Red Hat Linux server, we sometimes receive the error message below.

[root@redhat001:~]# mount /redhat

mount: wrong fs type, bad option, bad superblock on /dev/mapper/system-lv_redhat

In the example above, /redhat is the mount point and /dev/mapper/system-lv_redhat is the device mapper name for this filesystem: system is the volume group and lv_redhat is the logical volume.


To resolve this issue, check the points below and take the corresponding action.

1. Check the /etc/fstab entry; the filesystem type (e.g. xfs, ext4) must be correct.

2. Check the actual filesystem type on the device using the command below:

[root@redhat001:~]# file -sL /dev/mapper/system-lv_redhat

/dev/mapper/system-lv_redhat: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)

The output of the above command shows that the filesystem type is XFS.

3. Check the filesystem type specified in /etc/fstab:

[root@redhat001:~]# cat /etc/fstab

/dev/mapper/system-lv_redhat /redhat xfs defaults 1 2

4. If a bad superblock is found, the filesystem needs to be repaired.

To check and repair a bad superblock, you can run "e2fsck" and "tune2fs" for ext-formatted filesystems and "xfs_repair" for XFS filesystems.
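For example, for an ext4 filesystem on a hypothetical device /dev/mapper/system-lv_data, a rough sketch would be to list the backup superblocks and then repair using one of them:

[root@redhat001:~]# dumpe2fs /dev/mapper/system-lv_data | grep -i superblock
[root@redhat001:~]# e2fsck -b 32768 /dev/mapper/system-lv_data    # 32768 is a common backup superblock location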

In the example above the filesystem is XFS, so please run the command below to repair it.

Before doing this, please make sure you have a backup of this filesystem in a tape library or another backup solution.

[root@redhat001:~]# xfs_repair  /dev/mapper/system-lv_redhat

Repairing the filesystem takes time and depends on the filesystem size; the larger the filesystem, the longer the repair will take.
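Once the repair completes, try mounting the filesystem again and confirm it is accessible:

[root@redhat001:~]# mount /redhat
[root@redhat001:~]# df -h /redhat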


Sunday 24 January 2021

How to clear cache on Linux

In this article, we will show you how to clear the memory cache on a Linux system by clearing the PageCache, dentries, and inodes from the command line.

On a Linux system there are basically three different types of cache that can be cleared.

PageCache holds cached file contents. Files that were recently accessed are stored here so they do not need to be read from disk again, unless the file changes or the cache is cleared to make room for other data.

The dentry and inode caches hold directory and file attributes. This information goes hand in hand with the PageCache, although it does not contain the actual contents of any files.

Please use the commands below to clear these caches on a Linux machine.

To clear PageCache only, use this command:-

[root@localhost:~]# sysctl vm.drop_caches=1

To clear dentries and inodes, use this command:-

[root@localhost:~]# sysctl vm.drop_caches=2

To clear PageCache, plus dentries and inodes, use this command:-

[root@localhost:~]# sysctl vm.drop_caches=3
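Dropping caches is non-destructive, but it is good practice to run sync first so that dirty (unwritten) pages are flushed to disk before the caches are dropped, for example:

[root@localhost:~]# sync; sysctl vm.drop_caches=3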

Please use the free or top command to check your system's RAM usage and verify that the cache has been cleared.

You can also use the following commands to accomplish the same thing as the respective sysctl commands:

Clear PageCache:-

[root@localhost:~]# echo 1 > /proc/sys/vm/drop_caches 

Clear dentries and inodes:-

[root@localhost:~]# echo 2 > /proc/sys/vm/drop_caches 

Clear PageCache, dentries and inodes:-

[root@localhost:~]# echo 3 > /proc/sys/vm/drop_caches

Using the above commands you can clear the cache on a Linux system. If you have any questions, please comment on this post. Thanks!

Saturday 22 August 2020

SSH or SFTP Authentication issue in linux

We sometimes get the error below while accessing a destination server via the SSH or SFTP protocol.

Error:

root@localhost> sftp  root@XYZ.com

warning: Authentication failed.

FATAL: ssh2 client failed to authenticate. (or you have too old ssh2 installed, check with ssh2 "-V")

To resolve this error, we first need to understand the issue. For this type of error, the problem is usually on the destination server that you are trying to connect to from your system.

In the /etc/ssh/sshd_config file, the value of the "MaxAuthTries" parameter is very low, so when we attempt to access the destination server over SSH or SFTP, we get this error if the account does not authenticate within the first couple of attempts.

So to resolve this issue, you need to increase the value of "MaxAuthTries" from its default.

Edit the /etc/ssh/sshd_config file on the destination server.

Search for the "MaxAuthTries" parameter.

Increase the value of "MaxAuthTries" to "5" and restart the SSH service:

systemctl restart sshd

Log in on the source server again and try to access the destination server. If you still face the issue, increase the value again, for example to 20.
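To confirm which value is currently in effect on the destination server, sshd can print its effective configuration (this needs to be run as root on that server):

sshd -T | grep -i maxauthtries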


Saturday 29 February 2020

Failed to start LSB: Bring up/down networking in RHEL 7

We received this type of error after upgrading the Red Hat Linux operating system from version 7.x to version 7.y.

The root cause of the error is a NetworkManager upgrade during the operating system patching.

To troubleshoot this error, restart the network service and check its status:

[root@localhost network-scripts]# systemctl restart network

Job for network.service failed because the control process exited with error code. See "systemctl status network.service" and "journalctl -xe" for details.

[root@localhost network-scripts]# systemctl status network

You can see " Failed to start lsb bring up/down networking" error message

Solution: To resolve this type of network issue, perform the steps below.

Go to the /etc/sysconfig/network-scripts/ directory and list the files.

[root@localhost]# cd /etc/sysconfig/network-scripts/

You will see a file named ifcfg-lo, which you need to remove.

After removing this file, restart the service; the network service should then restart properly without any issue. If you have any other duplicate or backup ifcfg files, please remove them as well.

[root@localhost network-scripts]# rm -rf ifcfg-lo

[root@localhost network-scripts]# systemctl restart network
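After the restart, verify that the service is healthy and the interfaces are back up:

[root@localhost network-scripts]# systemctl status network
[root@localhost network-scripts]# ip a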

Now try to access the Red Hat machine via SSH. Please post a comment if you have any questions about this post.

Sunday 20 May 2018

NFS Stale File Handle error and solution

On a Linux machine, NFS-mounted directories sometimes contain stale file handles. If you run a command such as ls or vi there, you will see an error:

# ls
.: Stale File Handle

Before moving on to fix this issue, we first need to understand the concept of a stale file handle.

A file handle becomes stale whenever the file or directory referenced by the handle is removed by another host while your client still holds an active reference to the object. A typical example occurs when the current directory of a process running on your client is removed on the server (either by a process running on the server or on another client).

This can also occur if the directory is modified on the NFS server but the directory's modification time is not updated.

To fix this issue, the best solution is to remount the directory on the NFS client using the mount command.

# umount -f /test
# mount -t nfs nfsserver:/path/to/share /test
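After remounting, you can confirm the share is available again (the server path here is the same placeholder used above):

# mount | grep /test
# df -h /test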

Thursday 3 May 2018

How to configure Network Bonding on RHEL 7

Step by Step method to configure the network bonding on RHEL 7:
 
➤ Please log on to the Linux server and run the "ip a" command to check the available interfaces.

    [root@localhost]# ip a
    lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever

    eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:bd:c7:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever

➤ Load the bonding driver called “bonding” in the kernel with the modprobe command if it is not already loaded, and verify with the modinfo command:

[root@localhost]# modprobe bonding
[root@localhost]# modinfo bonding
 
➤ Generate a UUID for each slave interface with the uuidgen command and note the output; you will use it in the UUID lines of the ifcfg files below.

[root@localhost]# uuidgen
 
➤ Now create a file called ifcfg-bond0 in the /etc/sysconfig/network-scripts directory for bond0 with the following settings. Please use vi editor to edit this file.

[root@localhost]# cd /etc/sysconfig/network-scripts
[root@localhost]# vi ifcfg-bond0

DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=balance-rr"
ONBOOT=yes
IPADDR=192.168.1.23
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
 
➤ Now create ifcfg-eth1 and ifcfg-eth2 files in the /etc/sysconfig/network-scripts directory for the eth1 and eth2 interfaces with the following settings. Set the MASTER directive to bond0. Both interfaces will act as slaves with no IP addresses assigned to them.

[root@localhost]# vi ifcfg-eth1

DEVICE=eth1
TYPE=Ethernet
NAME=eth1
UUID=23a32d65-343d-48a2-8rf7-d2jh2388f666
ONBOOT=yes
MASTER=bond0
SLAVE=yes

[root@localhost]# vi ifcfg-eth2

DEVICE=eth2
TYPE=Ethernet
NAME=eth2
UUID=22a32d65-443d-48d2-8rf7-d2jh222f666
ONBOOT=yes
MASTER=bond0
SLAVE=yes
 
➤ Now deactivate and reactivate bond0 with the ifdown and ifup commands:

[root@localhost]# ifdown bond0; ifup bond0
 
➤ Check the status of bond0 and the slaves with the ip command. It should also show the assigned IP.

[root@localhost]# ip addr
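➤ You can also inspect the bonding driver status (mode, link state, and active slaves) through the proc interface:

[root@localhost]# cat /proc/net/bonding/bond0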
 
➤ Restart the system to ensure the configuration survives system reboots
[root@localhost]# reboot

Sunday 29 April 2018

How to install and configure samba server in RHEL 7 or redhat linux 7

Log in on the Samba server.
Check whether the samba rpm is installed; if not, install it:

[root@localhost ~]# rpm -qa | grep samba
[root@localhost ~]# yum install samba*

Create a directory on the root filesystem that will be shared with the client.

[root@localhost ~]# mkdir -p /home/testuser/test

Add a new group (or use an existing group).

To provide access to the shared directory, we are adding a new group called samba:

[root@localhost ~]# groupadd samba

Change the group and permissions of the shared folder:

[root@localhost ~]# chgrp -R samba /home/testuser/test
[root@localhost ~]# chmod -R 777 /home/testuser/test

Create the user, add it to the group, and set a Samba password:

[root@localhost ~]# useradd testuser
[root@localhost ~]# usermod -aG samba testuser
[root@localhost ~]# smbpasswd -a testuser

Now edit the /etc/samba/smb.conf file.

Note: Please take a backup of the original file first.

[root@localhost ~]# cd /etc/samba/
[root@localhost ~]# cp -p smb.conf smb.conf.orig

Then add the following contents at the end of the /etc/samba/smb.conf file.

vi /etc/samba/smb.conf

[test]
comment = shared-directory
path = /home/testuser/test
public = no
valid users = testuser, @samba
writable = yes
browseable = yes
create mask = 0774
directory mask = 4774

## Edit these lines in /etc/samba/smb.conf to allow the network to reach the Samba server

interfaces = lo ens32 192.168.1.0/24
hosts allow = 127. 192.168.1.

security = user
passdb backend = tdbsam
netbios name = localhost
server string = Samba Server localhost
workgroup = MYGROUP
log file = /var/log/samba/samba.log
max log size = 50

Add the services to the /etc/services file:

vi /etc/services
 
netbios-ns    137/tcp    # netbios name service
netbios-ns    137/udp    # netbios name service
netbios-dgm    138/tcp    # netbios datagram service
netbios-dgm    138/udp    # netbios datagram service
netbios-ssn    139/tcp    # netbios session service
netbios-ssn    139/udp    # netbios session service

Note: Please check that the above ports are open between this Samba server and the client machine.

Now start the smb and nmb services.

systemctl start smb.service
systemctl start nmb.service

Enable the smb and nmb services so they start at boot:

systemctl enable smb.service
systemctl enable nmb.service
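You can also validate the smb.conf syntax with testparm before or after starting the services:

[root@localhost ~]# testparm -s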

Note 1: The firewalld service is not enabled on this server, so no firewall rule needs to be added.
Note 2: SELinux is in permissive mode, so there is no need to change the SELinux security context.

Now log in on a Windows machine and access this Samba share using the following UNC path:

\\localhost.redhat.com\test
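From a Linux client, the same share could be mounted roughly like this (this assumes the cifs-utils package is installed; the server and share names are the ones used above, and /mnt is just an example mount point):

# mount -t cifs //localhost.redhat.com/test /mnt -o username=testuser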

Thursday 26 April 2018

Job for smb.service failed because the control process exited with error code. Redhat 7 or RHEL 7

Job for smb.service failed because the control process exited with error code.

I recently installed the Samba server on my Red Hat Linux 7 / RHEL 7 server. After configuration, when I restarted the Samba service, I got the above error, and as a result the Samba service was not running on my Linux machine.

[root@localhost~]# systemctl restart smb.service

I received the error below:

Job for smb.service failed because the control process exited with error code.
See "systemctl status smb.service" and "journalctl -xe" for details.

In such a case, first check whether the Samba configuration is OK; to do this, please run the command below.

# smbstatus

The above command gave me the following output:

"WARNING: Ignoring invalid value 'share' for parameter 'security'
Can't load /etc/samba/smb.conf - run testparm to debug it"

That means something is wrong in my Samba configuration.

So open the /etc/samba/smb.conf file and comment out the line that starts with security:

#security = server
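As the error message itself suggests, testparm can be used to validate the configuration file after editing it:

[root@localhost~]# testparm -s /etc/samba/smb.conf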

Now run the smbstatus command again, which gives new output:

[root@localhost~]# smbstatus

Samba version 4.2.10
PID     Username      Group         Machine            Protocol Version
------------------------------------------------------------------------------

Service      pid     machine       Connected at
-------------------------------------------------------

If you get output like this, your configuration is OK; now restart the Samba service again.

[root@localhost~]# systemctl restart smb.service

[root@localhost~]# systemctl status smb.service
● smb.service - Samba SMB Daemon
   Loaded: loaded (/usr/lib/systemd/system/smb.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-04-26 16:19:34 BST; 12h ago
 Main PID: 4719 (smbd)
   Status: "smbd: ready to serve connections..."
   CGroup: /system.slice/smb.service
           ├─4719 /usr/sbin/smbd
           ├─4720 /usr/sbin/smbd
           ├─4721 /usr/sbin/smbd
           └─4722 /usr/sbin/smbd

Apr 26 16:19:33 localhost.redhat.com systemd[1]: Starting Samba SMB Daemon...
Apr 26 16:19:33 localhost.redhat.com systemd[1]: smb.service: Supervising process 4719 which is not our child. We'll most likely not notice when it exits.
Apr 26 16:19:34 localhost.redhat.com smbd[4719]: [2018/04/26 16:19:34.173413,  0] ../lib/util/become_daemon.c:124(daemon_ready)
Apr 26 16:19:34 localhost.redhat.com systemd[1]: Started Samba SMB Daemon.
Apr 26 16:19:34 localhost.redhat.com smbd[4719]:   STATUS=daemon 'smbd' finished starting up and ready to serve connections

Job for nfs-server.service failed because the control process exited with error code. See "systemctl status nfs-server.service" and "journalctl -xe" for details. Resolution

On a Red Hat Linux 7 system, you will sometimes get the NFS service failure message below.

"Job for nfs-server.service failed because the control process exited with error code. See "systemctl status nfs-server.service" and "journalctl -xe" for details."

The above error occurs when you restart the NFS service.

# systemctl restart nfs.service
Job for nfs-server.service failed because the control process exited with error code. See "systemctl status nfs-server.service" and "journalctl -xe" for details.
Check the status of the service for more details:

# systemctl status nfs-server.service
nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Fri 2018-04-27 09:56:08 IST; 8s ago
  Process: 21370 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
  Process: 21366 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
  Process: 21362 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
  Process: 21273 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
  Process: 2714 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=1/FAILURE)
 Main PID: 21273 (code=exited, status=0/SUCCESS)

Apr 27 09:56:07 localhost.redhat.com systemd[1]: Starting NFS server and services...
Apr 27 09:56:08 localhost.redhat.com exportfs[2714]: exportfs: Failed to resolve foobar.com <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Apr 27 09:56:08 localhost.redhat.com exportfs[2714]: exportfs: Failed to resolve foobar.com <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Apr 27 09:56:08 localhost.redhat.com systemd[1]: nfs-server.service: control process exited, code=exited status=1
Apr 27 09:56:08 localhost.redhat.com systemd[1]: Failed to start NFS server and services.
Apr 27 09:56:08 localhost.redhat.com systemd[1]: Unit nfs-server.service entered failed state.
Apr 27 09:56:08 localhost.redhat.com systemd[1]: nfs-server.service failed.

To resolve this issue, follow the process below.

First check which version of the NFS rpm is installed on the server.

# rpm -qa | grep nfs

The above command shows you the installed nfs-utils version.

Normally the above error occurs due to an older version of nfs-utils, so to resolve the issue you need to upgrade it.

Install the latest nfs-utils rpm, nfs-utils-1.3.0-0.33.el7 or later:

# yum update nfs-utils

The above command upgrades the nfs rpm; alternatively, you can download the rpm file manually and install it with the rpm -Uvh command.
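Since the log above also shows "exportfs: Failed to resolve foobar.com", it is worth re-exporting the shares and confirming that the hostname listed in /etc/exports actually resolves (foobar.com is the hostname taken from the log output above):

# exportfs -r
# getent hosts foobar.com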

After that restart the nfs service again.

# systemctl restart nfs.service

If it starts successfully, you can check the status with the command below.

# systemctl status nfs.service

[root@localhost ~]# systemctl status nfs.service
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           └─order-with-mounts.conf
   Active: active (exited) since Thu 2018-04-27 03:47:57 IST; 18h ago
  Process: 32477 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
  Process: 32472 ExecStartPre=/bin/sh -c /bin/kill -HUP `cat /run/gssproxy.pid` (code=exited, status=0/SUCCESS)
  Process: 32469 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 32477 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service

Apr 26 03:47:57 localhost systemd[1]: Starting NFS server and services...
Apr 26 03:47:57 localhost systemd[1]: Started NFS server and services.