Thursday, 1 May 2025

Yum Error "Unable to read consumer identity" in Redhat

When we use the yum command to update or clean the repository on Red Hat 8 servers, the command gets stuck on various subcommands, including update and repolist. Similarly, when we attempt to register the Red Hat device with the subscription-manager command, it hangs during registration.

To troubleshoot this issue, the first step is to check the RHSM logs.

# cat /var/log/rhsm/rhsm.log
---
2024-12-10 06:13:51,644 [ERROR] dnf:35631:MainThread @subscription-manager.py:91 - HTTP error (410 - Gone): Unit ec3b3043-4a40-4cce-9131-5249ae825bed has been deleted
2024-12-10 06:15:26,758 [ERROR] yum:9735:MainThread @lock.py:216 - Could not lock file: /run/rhsm/cert.pid Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/subscription_manager/lock.py", line 196, in acquire
    f.open(blocking=False)
  File "/usr/lib64/python3.6/site-packages/subscription_manager/lock.py", line 58, in open
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
BlockingIOError: [Errno 11] Resource temporarily unavailable
---

To identify the process causing the issue, review the above logs and note the process ID.
 
# ps aux | grep rhsm
---
root      1780  0.0  0.0 115744  2632 ?        -    Nov10   0:00 /usr/bin/rhsmcertd
root     21078  0.0  1.1 493032 89580 ?        -    Dec07   0:00 /usr/libexec/platform-python /usr/libexec/rhsmcertd-worker

To fix this, run the following commands on the affected system: remove the stale lock file, confirm that no rhsm process is still holding it, and check the SELinux context of the Katello CA certificate.

# rm -f /var/run/rhsm/cert.pid
# ps aux | grep rhsm
# cat /var/run/rhsm/cert.pid
# ls -lZ /etc/rhsm/ca/katello-server-ca.pem
# ls -lZd /etc/rhsm/ca/
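
If the rhsmcertd-worker process shown in the ps output is still hung, you may also need to stop it before removing the lock file. A minimal sketch, assuming the worker PID is the 21078 seen above:

# systemctl stop rhsmcertd
# kill 21078
# rm -f /var/run/rhsm/cert.pid
# systemctl start rhsmcertd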

Then retry the registration or run the yum command again. Your problem should be resolved after applying the above steps.

 

Saturday, 3 February 2024

Deleting vvols inside a container directly from 3PAR or HPE Primera storage

On HPE 3PAR or Primera storage, we have noticed that sometimes, after deleting the disks of virtual machines, the space is not returned to the storage. This happens when the vVols are not deleted from the storage back-end.

To remove vVols from the storage back-end, we need to run the commands below on the storage CLI.

Follow these steps carefully before removing vVols from a container.

1. Log in to the storage CLI with a service account and check the container name.

HPE cli% showvvolsc
                                --------(MiB)--------
Name          Num_VMs Num_VVols    In_Use Provisioned
hpe.vvol.ds01     269      1339 103369860   275678184
-----------------------------------------------------
total             269      1339 103369860   275678184

The showvvolsc command shows the vVol storage container name on HPE 3PAR or Primera storage. All vVols are part of this container.

2. To check for VMs in the unbound state, run the command below.

HPE cli% showvvolvm -d -sc hpe.vvol.ds01

The above command shows the following fields, which you need in order to check the VM state.

VM_Name - Virtual machine name

UUID - Virtual machine UUID

Num_vv - Number of vVols belonging to the virtual machine

Num_snap - Number of snapshots belonging to the virtual machine

Physical - Total vVol size of the virtual machine

Logical - Used vVol size of the virtual machine in the container

GuestOS - Operating system information

VM_State - VM state: bound (powered on) or unbound (powered off or deleted)

UsrCPG - User CPG on the storage

SnpCPG - Snapshot CPG on the storage

Container - Name of the container where all the vVols are kept

CreationTime - Time the vVols were created

3. To find the vVol names of a particular virtual machine from the back-end, run the command below.

HPE cli% showvvolvm -sc sys:all -vv redhat
                                                                   -----(MiB)------
VM_Name VV_ID VVol_Name                 VVol_Type Prov Compr Dedup Physical Logical
redhat   8000 cfg-redhatba-e3fb20cb     Config    tpvv No    No        2457    4096
         8001 dat-redhatba-a8d0bea0     Data      tpvv No    No        9830   51200
         8002 dat-redhatb-de253534-Snap Data      snp  NA    NA           0   51200
-----------------------------------------------------------------------------------
       1 total                          3                             12287  106496

In the above output, the virtual machine name is "redhat", and the listing shows its configuration file, data vVol, and snapshot vVol.

4. To remove this virtual machine "redhat" from the storage, run the commands below.

HPE cli% removevvset hpe.vvol.ds01 dat-redhatba-a8d0bea0
Remove dat-redhatba-a8d0bea0 from vv set hpe.vvol.ds01
select q=quit y=yes n=no: y
hpe.vvol.ds01 is a vVol storage container. vVols should not be removed from a vVol storage container individually. Instead use "setvvolsc -remove <vvsetname>" to remove the entire storage container. However, the matchbulkobjs environment variable can be used to remove vVols from a storage container. Use with caution.

The above error message is interesting. According to it, you need to set the CLI environment variable below.

HPE cli% setclienv matchbulkobjs 1

Now run the commands below to remove the data and snapshot vVols from the vv set.


HPE cli% removevvset hpe.vvol.ds01 dat-redhatba-a8d0bea0
Remove dat-redhatba-a8d0bea0 from vv set hpe.vvol.ds01
select q=quit y=yes n=no: y

HPE cli% removevvset hpe.vvol.ds01 dat-redhatb-de253534-Snap
Remove dat-redhatb-de253534-Snap from vv set hpe.vvol.ds01
select q=quit y=yes n=no: y

Run the command below to remove the config vVol from the vv set.

HPE cli% removevvset hpe.vvol.ds01 cfg-redhatba-e3fb20cb
Remove cfg-redhatba-e3fb20cb from vv set hpe.vvol.ds01
select q=quit y=yes n=no: y

Above "removevvset" command to use to remove virtual machine from container "hpe.vvol.ds01

5. Now remove the virtual machine's vVols from the storage to reclaim the space, using the commands below.

HPE cli% removevv dat-redhatba-a8d0bea0
Removing vv dat-redhatba-a8d0bea0
select q=quit y=yes n=no: y
VV dat-redhatba-a8d0bea0 has snapshot children which must be removed first.

If you get the above message while deleting the data vVol, it means you must remove the snapshot vVol from the storage first, then the data vVol.

HPE cli% removevv dat-redhatb-de253534-Snap
Removing vv dat-redhatb-de253534-Snap
select q=quit y=yes n=no: y

HPE cli% removevv dat-redhatba-a8d0bea0
Removing vv dat-redhatba-a8d0bea0
select q=quit y=yes n=no: y

Remove config file:

HPE cli% removevv cfg-redhatba-e3fb20cb
Removing vv cfg-redhatba-e3fb20cb
select q=quit y=yes n=no: y

Once you have removed all the vVols of the virtual machine from the storage, run the command below to check whether the virtual machine "redhat" still exists.


HPE cli% showvvolvm -sc sys:all -vv redhat
No VM listed

If you get the above message, the virtual machine has been successfully removed from the storage and the space can be reclaimed.

You can run the command below to reclaim the space in the CPG.

HPE cli% compactcpg "CPG_NAME"

You can use "-dr" before run the reclaim the space, it will dry run and provide you the amount of space which you can reclaim.

Tuesday, 25 April 2023

RHEL 8: ifup fails with Error: unknown connection '/etc/sysconfig/network-scripts/ifcfg-ensX'

If you get the error below when trying to bring up the ens192 interface on a Red Hat 8 system, follow the standard process below to resolve the issue.

[root@localhost~]# ifup ens192

Error: unknown connection '/etc/sysconfig/network-scripts/ifcfg-ens192'. 

To resolve this issue, follow one of the two approaches below (a sketch of both follows the list).

1. Remove the NM_CONTROLLED=no option from the ifcfg-ensX file and reload the connection in NetworkManager.

2. Install the network-scripts rpm package to use the legacy ifup command on Red Hat 8.
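
A minimal sketch of both approaches; nmcli connection reload tells NetworkManager to re-read the ifcfg files, and network-scripts is the RHEL 8 package that ships the legacy ifup/ifdown scripts:

# Approach 1: let NetworkManager manage the interface
[root@localhost~]# sed -i '/^NM_CONTROLLED=no/d' /etc/sysconfig/network-scripts/ifcfg-ens192
[root@localhost~]# nmcli connection reload
[root@localhost~]# ifup ens192

# Approach 2: install the legacy network scripts
[root@localhost~]# yum install network-scripts
[root@localhost~]# ifup ens192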

Hopefully one of the above approaches resolves your issue.


Friday, 4 February 2022

How to add mount option nodev for /var/lib/nfs/rpc_pipefs partition on RHEL 7

Recently, a vulnerability was found on a Red Hat Enterprise Linux 7 system during an audit by the security team. According to the audit team, the 'nodev' mount option needs to be added for the /var/lib/nfs/rpc_pipefs partition.

To resolve this vulnerability, perform the steps below as suggested by Red Hat.

1. Create a drop-in directory for var-lib-nfs-rpc_pipefs.mount:

[root@redhat001:~]# mkdir -p /etc/systemd/system/var-lib-nfs-rpc_pipefs.mount.d/

2. Create a configuration file adding the mount option:

[root@redhat001:~]# printf '[Mount]\nOptions=nodev\n' > /etc/systemd/system/var-lib-nfs-rpc_pipefs.mount.d/99-nodev.conf
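
The resulting drop-in file contains exactly the two lines written by the printf above:

[root@redhat001:~]# cat /etc/systemd/system/var-lib-nfs-rpc_pipefs.mount.d/99-nodev.conf
[Mount]
Options=nodev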

3. Reload the systemd daemon:

[root@redhat001:~]# systemctl daemon-reload

Now restart the mount unit:

[root@redhat001:~]# systemctl restart var-lib-nfs-rpc_pipefs.mount

Using the above steps we can add the nodev mount option for the /var/lib/nfs/rpc_pipefs partition on RHEL 7. To verify the mount point, run the command below.

[root@redhat001:~]# grep rpc /proc/self/mounts

sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,nodev,relatime 0 0

If you see the above output, the nodev option has been successfully applied to the mount. These are the straightforward steps to follow to resolve this vulnerability.


Saturday, 22 January 2022

Cannot remove Logical Volume, message "Logical volume contains a filesystem in use"

On my local machine I have multiple logical volumes. When I try to remove the logical volume below, I get the following error.

[root@redhat001:~]# lvremove /dev/mapper/system-lv_redhat

Logical volume /dev/mapper/system-lv_redhat contains a filesystem in use.

To remove this logical volume on a Red Hat Linux system, follow the methods below.


1. Check whether the logical volume is mounted on your system. To check this, you can simply use the "mount" command with grep.

[root@redhat001:~]# mount | grep /dev/mapper/system-lv_redhat

The above command tells you whether the logical volume is mounted. If the file system is mounted, please unmount it first, remove its entry from /etc/fstab, and then try to remove the logical volume.


2. If no logical volume is mounted but you still cannot remove it, check whether any file is open on it or any active process is using the logical volume. To check this, run the commands below.

[root@redhat001:~]# lsof | grep /dev/mapper/system-lv_redhat

or 

[root@redhat001:~]#ps -ef | grep /dev/mapper/system-lv_redhat

The first command shows whether any files on the logical volume are open at the system end; please check and close those files.

The second command shows any process using the logical volume; kill that particular process. If the process belongs to the root file system, I would suggest that instead of killing it you stop any running applications and reboot the server, which will kill the process automatically.
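
The fuser command is another way to list, and if necessary kill, the processes holding the filesystem open. A minimal sketch, assuming the logical volume is mounted at a hypothetical /redhat:

[root@redhat001:~]# fuser -vm /redhat

[root@redhat001:~]# fuser -km /redhat

The -v flag lists the processes verbosely; -k sends a kill signal to them, so use it with caution.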

3. You can deactivate the logical volume and then remove it.

[root@redhat001:~]# lvchange -an /dev/mapper/system-lv_redhat

[root@redhat001:~]# lvs

[root@redhat001:~]# lvremove -f /dev/mapper/system-lv_redhat

Deactivate the logical volume using the lvchange command, then run lvs to verify that it has been deactivated.

Hopefully one of the above three methods resolves this error. In case of any query, please drop a comment on this article.

Friday, 7 January 2022

mount: wrong fs type, bad option, bad superblock on /dev/mapper/system-lv_redhat error and solution

When we mount a file system or disk on a Red Hat Linux server, we sometimes receive the error message below.

[root@redhat001:~]# mount /redhat

mount: wrong fs type, bad option, bad superblock on /dev/mapper/system-lv_redhat

Here, /redhat is the mount point and /dev/mapper/system-lv_redhat is the device mapper name for this file system: "system" is the volume group and "lv_redhat" is the logical volume.


To resolve this issue, check the points below and take the corresponding action.

1. Check the /etc/fstab entry; the file system type (e.g. xfs, ext4) must be correct.

2. Check data integrity or the filesystem type using below command:

[root@redhat001:~]# file -sL /dev/mapper/system-lv_redhat

/dev/mapper/system-lv_redhat: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)

The output of the above command shows that the filesystem type is XFS.

3. Check filesystem type mentioned in /etc/fstab:

[root@redhat001:~]# cat /etc/fstab

/dev/mapper/system-lv_redhat /redhat xfs defaults 1 2

4. If a bad superblock is found on the server, we need to repair the file system.

To check for a bad superblock, you can run "e2fsck" and "tune2fs" for ext-formatted file systems and "xfs_repair" for XFS file systems.

In the example above the file system is XFS, so run the command below to repair it. Note that xfs_repair requires the file system to be unmounted.

Before doing this, please ensure you have a backup of the file system in a tape library or another backup solution.

[root@redhat001:~]# xfs_repair  /dev/mapper/system-lv_redhat
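
A minimal sketch of a safer sequence: unmount first, then do a dry run with -n (which reports problems without modifying anything) before the actual repair:

[root@redhat001:~]# umount /redhat
[root@redhat001:~]# xfs_repair -n /dev/mapper/system-lv_redhat
[root@redhat001:~]# xfs_repair /dev/mapper/system-lv_redhat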

The repair takes time, depending on the file system size: the bigger the file system, the longer the repair process.


Sunday, 7 February 2021

How to fix Host key verification failed error on linux servers

When you connect to a server for the first time, the server prompts you to confirm that you are connected to the correct system. 

The following example uses the ssh command to connect to a remote host named redhat007:

[root@redhat001:~]# ssh user02@redhat007

The authenticity of host 'redhat007 (192.168.1.24)' can't be established.

ECDSA key fingerprint is ...

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'redhat007,192.168.1.24' (ECDSA) to the list of known hosts.

The command checks to make sure that you are connecting to the host that you think you are connecting to. 

When you enter yes, the client appends the server's public host key to the user's ~/.ssh/known_hosts file, creating the ~/.ssh directory if necessary.

The next time you connect to the remote server, the client compares this key to the one the server supplies. If the keys match, you are not asked whether you want to continue connecting.

If someone tries to trick you into logging in to their machine so that they can sniff your SSH session, you will receive a warning similar to the following:


@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!

Someone could be eavesdropping on you right now (man-in-the-middle attack)!

It is also possible that the RSA host key has just been changed.

The fingerprint for the RSA key sent by the remote host is

22:cf:23:31:7a:5d:93:13:1s:99:23:c2:5k:19:2a:1c.

Please contact your system administrator.

Add correct host key in /home/redhat001/.ssh/known_hosts to get rid of this message.

Offending key in /home/redhat001/.ssh/known_hosts:7

RSA host key for redhat007 has changed and you have requested strict checking.

Host key verification failed.


To resolve the above error, we have two different methods.

1. Remove old key manually:

Normally the key is stored in the ~/.ssh/known_hosts file.

If root wants to ssh to the server, removing the entry from the /root/.ssh/known_hosts file is enough.

If user01 wants to ssh to the server, then remove the entry from the file /home/user01/.ssh/known_hosts.

Here I will remove the key for the destination server redhat007 from the file /home/user02/.ssh/known_hosts.

# vi /home/user02/.ssh/known_hosts

redhat003 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLrY91bQOihgFZQ2Ay9KiBG0rg51/YxJAK7dvAIopRaWzFEEis3fQJiYZNLzLgQtlz6pIe2tj9m/Za33W6WirN8=

redhat005 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCrY/m16MdFt/Ym51Cc7kxZW3R2pcHV1jlOclv6sXix1UhMuPdtoboj+b7+NLlTcjfrUccL+1bkg8EblYucymeU=

redhat007 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCrY/m16MdFt/Ym51Cc7kxZW3R2pcHV1jlOclv6sXix1UhMuPdtoboj+b7+NLlTcjfrUccL+1bkg8EblYucymeU=
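
Instead of editing the file by hand in vi, you can delete the offending line directly; a sketch, assuming the offending line is 7, as reported in the warning above:

# sed -i '7d' /home/user02/.ssh/known_hosts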


2. Removing old key using the ssh-keygen command

[root@redhat001:~]# ssh-keygen -R [hostname|IP address]

[root@redhat001:~]# ssh-keygen -R redhat007

Once you have removed the entry, please log in again.

[root@redhat001:~]# ssh user02@redhat007

The authenticity of host 'redhat007 (redhat007)' can't be established.

ECDSA key fingerprint is SHA256:V+iGp3gwSlnpbtYv4Niq6tcMMSZivSnYWQIaJnUvHb4.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'redhat007' (ECDSA) to the list of known hosts.


Sunday, 24 January 2021

How to clear cache on Linux

In this article, we will show you how to clear the memory cache on a Linux system by clearing the PageCache, dentries, and inodes from the command line.

A Linux system basically has the following caches, which can be cleared in three different combinations.

PageCache holds cached file contents. Files that were recently accessed are stored here so they do not need to be read from the hard disk again, unless the file changes or the cache is cleared to make room for other data.

The dentry and inode caches hold directory and file attributes. This information goes hand in hand with the PageCache, although it does not contain the actual contents of any files.

Please find below the commands to clear the cache on a Linux device.

To clear PageCache only, use this command:-

[root@localhost:~]# sysctl vm.drop_caches=1

To clear dentries and inodes, use this command:-

[root@localhost:~]# sysctl vm.drop_caches=2

To clear PageCache, plus dentries and inodes, use this command:-

[root@localhost:~]# sysctl vm.drop_caches=3

Please use the free or top command to check your system's RAM usage and verify that the cache has been cleared.

You can also use the following commands to accomplish the same thing as the respective sysctl commands:

Clear PageCache:-

[root@localhost:~]# echo 1 > /proc/sys/vm/drop_caches 

Clear dentries and inodes:-

[root@localhost:~]# echo 2 > /proc/sys/vm/drop_caches 

Clear PageCache, dentries and inodes:-

[root@localhost:~]# echo 3 > /proc/sys/vm/drop_caches
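
drop_caches only frees clean (non-dirty) pages, so it is common practice to flush dirty pages to disk with sync first; a sketch:

[root@localhost:~]# sync; echo 3 > /proc/sys/vm/drop_caches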

Using the above commands you can clear the cache on a Linux system. In case you have any query, please comment on this post. Thanks!!

Puppet agent: Exiting; no certificate found and waitforcert is disabled


This type of error appears when a Puppet agent connects to a Puppet master server for the first time: the agent generates a certificate and gives it to the Puppet master server to sign.

Depending on your Puppet configuration, the default behavior is that the certificate must be signed manually, and thus the Puppet agent exits with an error.

[root@puppet-client:~]# puppet agent -t

Exiting; no certificate found and waitforcert is disabled

To resolve this issue, log in to the Puppet master server and run the command below to list all certificates awaiting a signature.

[root@puppet-master ~]# puppet cert list

"puppet-client"      (SHA256)

B3:67:17:66:8E:78:1F:69:4E:11:8E:34:BA:86:A0:E7:07:84:BF:E9:8A:94:A9:41:DD:6C:9D:1B:07:D2:72:6A

From the above output we can see that a certificate from a single host, puppet-client, is waiting to be signed.

Note: Your output may be different and contain multiple certificates awaiting a signature.

Now we have two options on how to sign the above certificate. 

Option 1: We can sign each certificate individually.

Option 2: We can sign all awaiting certificates at once.

For option 1, please run the command below.

[root@puppet-master ~]# puppet cert sign puppet-client

For option 2, please run the command below.

[root@puppet-master ~]# puppet cert sign --all
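
Alternatively, instead of letting the agent exit, you can make it keep polling until its certificate is signed, using the waitforcert option named in the error message (the 60-second interval here is just an example):

[root@puppet-client:~]# puppet agent -t --waitforcert 60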

Using either option you can clear this error. Now log in on the puppet-client machine and run the puppet agent again.

[root@puppet-client:~]# puppet agent -t

You will no longer receive the certificate error. In case you have any query on the above article, please comment on this post. Thanks!!

Saturday, 22 August 2020

SSH or SFTP Authentication issue in linux

We typically get the error below while accessing a destination server via the SSH or SFTP protocols.

Error:

root@localhost> sftp  root@XYZ.com

warning: Authentication failed.

FATAL: ssh2 client failed to authenticate. (or you have too old ssh2 installed, check with ssh2 "-V")

To resolve this error, first we need to understand the issue. With this type of error, the problem is usually on the destination server that you are trying to connect to from your system.

In the /etc/ssh/sshd_config file, the value of the "MaxAuthTries" parameter is too low. When we attempt to access the destination server using the SSH or SFTP protocol, we get this error if the account does not authenticate within the first couple of attempts.

So to resolve the issue, you need to increase the value of "MaxAuthTries" from its current value.

Edit the /etc/ssh/sshd_config file and search for this parameter.

Increase the "MaxAuthTries" value to "5" and restart the ssh service:

systemctl restart sshd
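
A minimal sketch of the change on the destination server (hostname hypothetical), using sshd's own config check before restarting:

[root@destination ~]# vi /etc/ssh/sshd_config
MaxAuthTries 5
[root@destination ~]# sshd -t
[root@destination ~]# systemctl restart sshd

sshd -t validates the configuration file; no output means it is OK.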

Log in on the source server again and try to access the destination. If you face this issue again, increase the value further, up to 20.


Sunday, 9 August 2020

Zoning in Brocade FC SAN switch

SAN zoning is a method to manage communication between hosts and storage nodes.

Each device in a Fibre Channel fabric has a unique World Wide Name (WWN). A zone contains the WWNs of these devices.

There are two types of WWNs:

World Wide Node Name (WWNN)
World Wide Port Name (WWPN)


We can identify devices in FC using the WWNN or WWPN. The idea is to bind the WWPNs of the intended devices (ports) together.


This binding is called zoning, and it manages communication between hosts and storage nodes.
 

SAN Zoning Method:
 

Please run the "switchshow" command. This command will help you to identify the HBA address of both the target and initiator ports which will be required for SAN Zoning configuration.

Output of this command will provide below output.

barcode01:admin> switchshow
switchName:     barcode01
switchType:      71.2
switchState:      Online
switchMode:     Native
switchRole:      Subordinate
switchDomain: 101
switchId:          fffc65
switchWwn:     90:XX:XX:99:XX:XX:ef:XX
zoning:             ON (fabric_A)
switchBeacon: OFF

Index Port Address Media Speed State     Proto
==============================================
   0   0   650000   --     N8   No_Module   FC
   1   1   650100   id     N8   Online      FC  F-Port  90:XX:XX:99:XX:XX:cf:XX
   2   2   650200   id     N8   No_Light    FC
   3   3   650300   id     N8   Online      FC  E-Port  90:01:c4:f5:7c:e7:3e:23
   4   4   650400   id     N8   Online      FC  F-Port  90:00:02:e0:db:1e:f6:10
   5   5   650500   id     N8   Online      FC  F-Port  90:00:00:e0:db:1e:f6:10
   6   6   650600   id     N8   Online      FC  F-Port  90:01:43:80:21:df:7a:12
   7   7   650700   id     N8   Online      FC  F-Port  90:01:43:80:21:df:78:16
   8   8   650800   id     N8   Online      FC  F-Port  91:02:00:02:ac:01:eb:11
   9   9   650900   id     N8   Online      FC  E-Port  90:00:50:eb:1a:ed:21:10
 
 
Here 90:XX:XX:99:XX:XX:cf:XX is the WWPN of the device connected to that port. We will use the WWPN of the connecting device to zone it with another device.

In the step below we create an alias for the WWPN above, since the raw WWPN is very difficult to remember during zoning.

barcode01:admin> alicreate hostname_port1,"90:XX:XX:99:XX:XX:cf:XX"

To verify, run the command alishow "hostname_port1".
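
The zone created below also references a storage-port alias, so create it the same way; a sketch, assuming the storage port is the F-Port on index 8 of the switchshow output above:

barcode01:admin> alicreate storage_port01,"91:02:00:02:ac:01:eb:11"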

Now we are going to create a zone containing the two aliases.

barcode01:admin> zonecreate zone01,'hostname_port1;storage_port01'

To verify, run the command zoneshow "zone01".

Once the zone is created, add it to the active configuration or to a new configuration by running either the cfgadd or the cfgcreate command.

barcode01:admin> cfgadd fabric_A,zone01

If a zone configuration does not exist yet, run the command below to create one consisting of the zones we just created.

barcode01:admin> cfgcreate "fabric_A", "zone01"

In my case the configuration already exists; the above is just an example of creating a zone configuration if one does not exist.

Next, save the configuration by running cfgsave. It will prompt for yes/no; answer yes to save the configuration.

barcode01:admin> cfgsave

WARNING!!!
The changes you are attempting to save will render the
Effective configuration and the Defined configuration
inconsistent. The inconsistency will result in different
Effective Zoning configurations for switches in the fabric if
a zone merge or HA failover happens. To avoid inconsistency
it is recommended to commit the configurations using the
'cfgenable' command.

Do you want to proceed with saving the Defined
zoning configuration only?  (yes, y, no, n): [no] y
Updating flash ...


To activate the new zoning configuration, run cfgenable. It will prompt for yes/no; answer yes to activate the configuration.

barcode01:admin> cfgenable fabric_A

You are about to enable a new zoning configuration.
This action will replace the old zoning configuration with the
current configuration selected. If the update includes changes
to one or more traffic isolation zones, the update may result in
localized disruption to traffic on ports associated with
the traffic isolation zone changes
Do you want to enable 'fabric_A' configuration  (yes, y, no, n): [no] y
zone config "fabric_A" is in effect
Updating flash ...


barcode01:admin>

Note: This will put the zone into the Effective Configuration and will be live in production.

Hopefully, after reading the above article, you can perform SAN zoning without any issue. In case of any query, please comment in the section below. Thanks!!

Saturday, 29 February 2020

How to reset HP iLO password from command line in Linux

If you have lost or forgotten the iLO password, then please find below the steps to reset the iLO password from the command line.

To reset or set the password of the iLO from within the Linux operating system on an HP server, the hponcfg utility needs to be installed.

Here is some information about hponcfg

The hponcfg utility is an online configuration tool used to set up and reconfigure the local iLO without requiring a reboot of the server operating system. It can be used to retrieve and change the iLO configuration of the local server from the linux command line.

Please login on the linux machine and create a new xml file

[root@localhost]# vim ilo_password.xml

<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="x" PASSWORD="x">
  <USER_INFO MODE="write">
    <MOD_USER USER_LOGIN="Administrator">
      <PASSWORD value="XXXXXXXX*"/>
    </MOD_USER>
  </USER_INFO>
  </LOGIN>
</RIBCL>

Save the ilo_password.xml file.

In the above xml file, you just need to set the PASSWORD value. Put your new iLO password in place of XXXXXXXX.

To load this xml file into the iLO, use the hponcfg command.

[root@localhost]# hponcfg -f ilo_password.xml

Your iLO password will be reset; now please log in to the HPE iLO with the new password.

Please let me know in case of any query about this post. Thanks. 

Failed to start lsb bring up/down networking in rhel 7

We received this type of error after upgrading the Red Hat Linux operating system from version 7.x to version 7.y.

The root cause of the error is the NetworkManager upgrade during operating system patching.

To troubleshoot this error, restart the network service and check its status.

[root@localhost network-scripts]# systemctl restart network

Job for network.service failed because the control process exited with error code. See "systemctl status network.service" and "journalctl -xe" for details.

[root@localhost network-scripts]# systemctl status network

You can see " Failed to start lsb bring up/down networking" error message

Solution: To resolve this type of network issue, perform the steps below.

Go to /etc/sysconfig/network-scripts/ directory and list the files.

[root@localhost]# cd /etc/sysconfig/network-scripts/

You will see the ifcfg-lo file, which you need to remove.

After removing this file, restart the service; the network service will then restart properly without any issue. Also, if you have any other duplicate or backup ifcfg files, please remove them too.

[root@localhost network-scripts]# rm -rf ifcfg-lo

[root@localhost network-scripts]# systemctl restart network
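
If you prefer not to delete the files outright, moving them out of the directory works just as well; a sketch with a hypothetical backup location:

[root@localhost network-scripts]# mkdir -p /root/ifcfg-backup
[root@localhost network-scripts]# mv ifcfg-lo /root/ifcfg-backup/
[root@localhost network-scripts]# systemctl restart network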

Now try to access the Red Hat machine via ssh. Please post a comment if you have any query regarding this post.

Sunday, 20 May 2018

NFS Stale File Handle error and solution

On a Linux machine, NFS-mounted directories sometimes contain stale file handles. If you run a command such as ls or vi on them, you will see an error:

# ls
.: Stale File Handle

Before moving on to fix this issue, we first need to understand the concept of a stale file handle.

A file handle becomes stale whenever the file or directory referenced by the handle is removed by another host while your client still holds an active reference to the object. A typical example occurs when the current directory of a process running on your client is removed on the server (either by a process running on the server or on another client).

This can also occur if the directory is modified on the NFS server but the directory's modification time is not updated.

To fix this issue, the best solution is to remount the directory on the NFS client using the mount command.

# umount -f /test
# mount -t nfs nfsserver:/path/to/share /test
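
If the forced unmount hangs because processes still hold references into the mount, a lazy unmount is the usual fallback; a sketch with the same hypothetical server path:

# umount -l /test
# mount -t nfs nfsserver:/path/to/share /test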

Tuesday, 8 May 2018

How can I mount a read-only filesystem as read-write in redhat linux

This post is related to the issues below, which you may have faced on Red Hat Linux 5/6 operating systems.

Issue :
 
➤ One of my partition has been mounted as read-only. How can I make it read-write without rebooting?

➤ How can I remount the root filesystem as read-write after it goes read-only?

➤ My filesystem went read-only; can I remount it without rebooting? The / filesystem suddenly became read-only, and I am unable to write to files.

➤ Also, if you are trying to create any file or directory inside the file system, you will get the message "Read-only file system":

 [root@localhost]# touch test
touch: cannot touch `test': Read-only file system

 [root@localhost]# cat /proc/mounts
rootfs / rootfs rw 0 0
/dev/root / ext3 ro,data=ordered 0 0

Solution:

To remount an already mounted file system with the read-write option, please run the command below.

 [root@localhost]# mount -o remount,rw <filesystem_path>
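
For the root filesystem shown in the /proc/mounts output above, that would be:

[root@localhost]# mount -o remount,rw /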

Remounting as read-write may work; however, if the file system remounts as read-only again, a filesystem check and a reboot of the system will be required.

Thursday, 3 May 2018

How to configure Network Bonding on RHEL 7

Step by Step method to configure the network bonding on RHEL 7:
 
➤ Please log on to the Linux server and run the "ip a" command to check the available interfaces.

    [root@localhost]# ip a
    lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever

    eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:bd:c7:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever

➤ Load the bonding driver called “bonding” in the kernel with the modprobe command if it is not already loaded, and verify with the modinfo command:

[root@localhost]# modprobe bonding
[root@localhost]# modinfo bonding
 
➤ In this step you need to generate UUIDs for the slave interfaces using the command below.

[root@localhost]# uuidgen <interface-name>
 
➤ Now create a file called ifcfg-bond0 in the /etc/sysconfig/network-scripts directory for bond0 with the following settings. Please use the vi editor to edit this file.

[root@localhost]# cd /etc/sysconfig/network-scripts
[root@localhost]# vi ifcfg-bond0

DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=balance-rr"
ONBOOT=yes
IPADDR=192.168.1.23
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
 
➤ Now create ifcfg-eth1 and ifcfg-eth2 files in the /etc/sysconfig/network-scripts directory for the eth1 and eth2 interfaces with the following settings. Set the MASTER directive to bond0. Both interfaces will act as slaves with no IP addresses assigned to them.

[root@localhost]# vi ifcfg-eth1

DEVICE=eth1
TYPE=Ethernet
NAME=eth1
UUID=23a32d65-343d-48a2-8rf7-d2jh2388f666
ONBOOT=yes
MASTER=bond0
SLAVE=yes

[root@localhost]# vi ifcfg-eth2

DEVICE=eth2
TYPE=Ethernet
NAME=eth2
UUID=22a32d65-443d-48d2-8rf7-d2jh222f666
ONBOOT=yes
MASTER=bond0
SLAVE=yes
 
➤ Now deactivate and reactivate bond0 with the ifdown and ifup commands:

[root@localhost]# ifdown bond0; ifup bond0
 
➤ Check the status of bond0 and the slaves with the ip command. It should also show the assigned IP.

[root@localhost]# ip addr
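
The bonding driver also exposes its live state through procfs; this shows the active slaves and their link status:

[root@localhost]# cat /proc/net/bonding/bond0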
 
➤ Restart the system to ensure the configuration survives system reboots
[root@localhost]# reboot

Wednesday, 2 May 2018

How to Create a Link Aggregation – Bonding on Solaris

A link aggregation consists of several interfaces on a system that are configured together as a single, logical unit. Link aggregation is also referred to as trunking.

Basically, link aggregation is like bonding on a server. It can work in active/passive mode: at any given time one network device is up and the other device is in standby mode.

Requirements for Link Aggregations:-

Your link aggregation configuration is bound by the following requirements:

1. You must use the dladm command to configure aggregations.

2. An interface that has been plumbed cannot become a member of an aggregation.

3. Interfaces must be of the GLDv3 type: xge, e1000g, and bge.

4. All interfaces in the aggregation must run at the same speed and in full-duplex mode.

Only if the Solaris box meets the above requirements can we create a link aggregation.

How to create a link aggregation in Solaris operating system:-

1. Log in to the Solaris operating system as the root user, so you have the full administrative role to perform these actions.

2. Determine which interfaces are currently installed on your system.

For this, run the command below.

#dladm show-link

The above command shows you which interfaces are currently installed on your system.

#dladm show-link
ce0             type: legacy    mtu: 1500       device: ce0
ce1             type: legacy    mtu: 1500       device: ce1
bge0            type: non-vlan  mtu: 1500       device: bge0
bge1            type: non-vlan  mtu: 1500       device: bge1

As per the link aggregation requirements, we can only use interfaces whose names start with bge, etc. Here the devices bge0 and bge1 are currently installed on the server.

3. In this step, determine which interfaces have been plumbed. For this, run the command below.

#ifconfig -a

lo0: flags=2001000322 <UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000 
ce0: flags=1000420 <UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.1.43 netmask ffffff00 broadcast 192.168.1.255
        ether 0:4:7c:8:92:7d 

4. Now create a link aggregation using the command below.

#dladm create-aggr -d bge0 -d bge1 1

Here 1 is the key number that identifies the link aggregation; it is the lowest number.


5. Now configure and plumb the newly created aggregation. Use the command below to do this.

#ifconfig aggr1 plumb 192.168.1.56 up

6. To check the status of the aggregation you just created, run the command below.

#dladm show-aggr

key: 1 (0x0001) policy: L4      address: 0:4:7c:8:92:7d (auto)
device   address           speed         duplex  link    state
bge0     0:4:7c:9:87:4e    1000  Mbps    full    up      attached
bge1     0:4:7c:9:32:9e    0     Mbps    unknown down    standby

7. For link aggregations with IPv4 addresses, create an /etc/hostname.aggr<key> file (here the key is 1).

#vi /etc/hostname.aggr1
192.168.1.56

8. Perform a reconfiguration reboot.

#reboot -- -r

Sunday, 29 April 2018

How to install and configure samba server in RHEL 7 or redhat linux 7

Log in on the Samba server and check whether the samba rpm is installed; if it is not, install it:

[root@localhost ~]# rpm -qa | grep samba
[root@localhost ~]# yum install samba*

Create a directory that will be shared with the client:

[root@localhost ~]#mkdir -p /home/testuser/test

Add a new group (or use an existing group) to grant access to the shared directory. Here we add a new group called samba:

[root@localhost ~]#groupadd samba

Change the group and permissions of the shared folder:

[root@localhost ~]#chgrp -R samba /home/testuser/test
[root@localhost ~]#chmod -R 777 /home/testuser/test

Create a user, add it to the group, and set the samba password:

[root@localhost ~]#useradd testuser
[root@localhost ~]#usermod -aG samba testuser
[root@localhost ~]#smbpasswd -a testuser

Now edit the /etc/samba/smb.conf file.

Note: Please take a backup of the original file first.

[root@localhost ~]#cd /etc/samba/
[root@localhost ~]#cp -p smb.conf smb.conf.orig

Then add the contents below at the end of the /etc/samba/smb.conf file.

vi /etc/samba/smb.conf

[test]
comment = shared-directory
path = /home/testuser/test
public = no
valid users = testuser, @samba
writable = yes
browseable = yes
create mask = 0774
directory mask = 4774

## Edit these lines in /etc/samba/smb.conf to allow the network to reach the samba server:

interfaces = lo ens32 192.168.1.0/24
hosts allow = 127. 192.168.1.

security = user
passdb backend = tdbsam
netbios name = localhost
server string = Samba Server localhost
workgroup = MYGROUP
log file = /var/log/samba/samba.log
max log size = 50

Add services in /etc/services files

vi /etc/services
 
netbios-ns    137/tcp    # netbios name service
netbios-ns    137/udp    # netbios name service
netbios-dgm    138/tcp    # netbios datagram service
netbios-dgm    138/udp    # netbios datagram service
netbios-ssn    139/tcp    # netbios session service
netbios-ssn    139/udp    # netbios session service

Note: Please check that the above ports are open from this samba server to the client machine.
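
Before starting the services, it is worth validating the configuration with Samba's own syntax checker:

[root@localhost ~]# testparm /etc/samba/smb.conf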

Now start the smb and nmb services.

systemctl start smb.service
systemctl start nmb.service

Enable smb and nmb service at booting of system

systemctl enable smb.service
systemctl enable nmb.service

Note 1: The firewalld service is not enabled on this server, so no firewall rule needs to be added.
Note 2: SELinux is in permissive mode, so there is no need to change the SELinux security context.

Now log in on a Windows machine and map the samba share:

\\localhost.redhat.com\test
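
From a Linux client, the same share can be mounted with cifs-utils; a minimal sketch, with the client hostname hypothetical and the share path taken from the example above:

[root@client ~]# yum install cifs-utils
[root@client ~]# mkdir -p /mnt/test
[root@client ~]# mount -t cifs //localhost.redhat.com/test /mnt/test -o username=testuser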