Showing posts with label Server.

Saturday, 29 February 2020

Failed to start LSB: Bring up/down networking in RHEL 7

We received this type of error after upgrading the Red Hat Linux operating system from version 7.x to version 7.y.

The root cause of the error is the NetworkManager upgrade that happens during operating system patching.

To troubleshoot this error, restart the network service and check its status:

[root@localhost network-scripts]# systemctl restart network

Job for network.service failed because the control process exited with error code. See "systemctl status network.service" and "journalctl -xe" for details.

[root@localhost network-scripts]# systemctl status network

The status output shows the error message "Failed to start LSB: Bring up/down networking".

Solution: To resolve this issue, perform the steps below.

Change to the /etc/sysconfig/network-scripts/ directory and list the files.

[root@localhost]# cd /etc/sysconfig/network-scripts/

You will see a file named ifcfg-lo, which needs to be removed.

After removing this file, restart the service; the network service should now start cleanly. Also remove any other duplicate or backup ifcfg files in this directory.

[root@localhost network-scripts]# rm -rf ifcfg-lo

[root@localhost network-scripts]# systemctl restart network
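
If the service now starts cleanly, you can confirm its status and check that the interfaces came up (a quick sanity check; interface names will differ on your system):

[root@localhost network-scripts]# systemctl status network
[root@localhost network-scripts]# ip addr show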

Now try to access the Red Hat machine via SSH. Please post a comment if you have any questions about this post.

Sunday, 20 May 2018

NFS Stale File Handle error and solution

On a Linux machine, NFS-mounted directories sometimes contain stale file handles. If you run a command such as ls or vi against them, you will see an error:

# ls
.: Stale File Handle

Before fixing this issue, we first need to understand what a stale file handle is.

A file handle becomes stale whenever the file or directory referenced by the handle is removed by another host while your client still holds an active reference to the object. A typical example occurs when the current directory of a process running on your client is removed on the server (either by a process running on the server or on another client).

This can also occur if the directory is modified on the NFS server but the directory's modification time is not updated.

To fix this issue, the best solution is to remount the directory from the NFS client using the mount command.

# umount -f /test
# mount -t nfs nfsserver:/path/to/share /test
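
If the forced unmount itself hangs, which can happen with stale handles, a lazy unmount is a reasonable fallback; it detaches the mount point immediately and cleans up the references once they are no longer busy:

# umount -l /test
# mount -t nfs nfsserver:/path/to/share /test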

Tuesday, 16 January 2018

How to Flush Memory Cache and Buffer Cache on Linux Server

When you run the "free -m" command and get output like the example below, you may notice that the free memory value is low while the buffers+cache value is comparatively high.

This is not actually a bad thing, since the OS uses this memory to speed up your most frequently used processes by keeping their data in cache. If a new process starts and your system is low on memory, these caches are automatically released to make room for it.

[root@localhost]# free -m
             total       used       free     shared    buffers     cached
Mem:          2345       1234        1145          0         24        400
-/+ buffers/cache:        664        345
Swap:         4032          0       4032

If you still want to clear the cache/buffer memory, run the command below.

[root@localhost]# echo 3 > /proc/sys/vm/drop_caches

These are the different values you can use with the above command:

echo 1 clears only the page cache

[root@localhost]# echo 1 > /proc/sys/vm/drop_caches

echo 2 clears the reclaimable dentries and inodes

[root@localhost]# echo 2 > /proc/sys/vm/drop_caches

echo 3 clears the page cache, dentries, and inodes

[root@localhost]# echo 3 > /proc/sys/vm/drop_caches
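
A common precaution, not shown above, is to run sync first so that dirty pages are written to disk before the caches are dropped, and then re-check memory usage:

[root@localhost]# sync; echo 3 > /proc/sys/vm/drop_caches
[root@localhost]# free -m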

Thursday, 11 January 2018

SSH login without password in linux

If you want to connect from one Linux host to another through SSH without a password, perform the steps below.

Let's suppose you need password-less login from host "server01" / user "redhat" to host "server02" / user "centos".

1. First, log in on "server01" as user "redhat" and generate a pair of authentication keys.

[redhat@server01]# ssh-keygen -t rsa

Generating public/private rsa key pair.
Enter file in which to save the key (/home/redhat/.ssh/id_rsa):
Created directory '/home/redhat/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/redhat/.ssh/id_rsa.
Your public key has been saved in /home/redhat/.ssh/id_rsa.pub.
The key fingerprint is:
1e:4f:05:79:3a:9f:96:7c:2b:ad:e9:58:37:sc:37:e4 redhat@server01

Note: Do not enter a passphrase.

2. Now use ssh to create the directory ~/.ssh as user "centos" on server02.

Note: If the directory already exists, you do not need to create it again.

[redhat@server01]# ssh centos@server02 mkdir -p .ssh

centos@server02's password:

3. Finally, append redhat's new public key to centos@server02:.ssh/authorized_keys and enter centos's password one last time:

[redhat@server01]# cat .ssh/id_rsa.pub | ssh centos@server02 'cat >> .ssh/authorized_keys'

centos@server02's password:

Now you can log into server02 as "centos" from server01 as "redhat" without a password.

4. Now test the password-less connection.

[redhat@server01]# ssh centos@server02

You should be logged in to server02 without being asked for a password.

Note: In case of any permission issues, set "700" permissions on the .ssh directory on server02 (and "600" on the authorized_keys file).
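
As an alternative to steps 2 and 3, most modern OpenSSH installations ship the ssh-copy-id helper, which creates the remote ~/.ssh directory, appends the public key, and sets the permissions in one step (same hypothetical hosts and users as above):

[redhat@server01]# ssh-copy-id centos@server02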

Thursday, 21 December 2017

Multipath command with examples

In the Linux operating system, device mapping through multipath is widely used. Here are some examples of how to use the multipath command on a Linux server.

➤ Normally a multipath device has a World Wide Identifier (WWID), which is globally unique and unchanging.

➤ When new devices are brought under the control of DM-Multipath, the new devices may be seen in three different places under the /dev directory: /dev/mapper/mpathn, /dev/mpath/mpathn, and /dev/dm-n

➤ The devices in /dev/mapper are created early in the boot process. Use these devices to access the multipathed devices, for example when creating logical volumes.
The devices in /dev/mpath are provided as a convenience so that all multipathed devices can be seen in one directory. These devices are created by the udev device manager and may not be available on startup when the system needs to access them.

Note: Do not use the /dev/mpath devices for creating logical volumes or filesystems.

➤ Any devices of the form /dev/dm-n are for internal use only and should never be used.

The multipath command options used on Linux are listed below.


-l    -> Display the current multipath configuration gathered from sysfs and the device mapper.
-ll   -> Display the current multipath configuration gathered from sysfs, the device mapper, and all other available components on the system.
-f    -> Remove (flush) the named multipath device.
-F    -> Remove (flush) all unused multipath devices.
-v    -> Verbosity level:
          . 0 no output
          . 1 print created devmap names only
          . 2 default verbosity
          . 3 print debug information
-d    -> Dry run; do not create or update devmaps (see the example after this list).
-r    -> Force devmap reload.
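
For example, combining the verbosity and dry-run options previews the device maps that would be created or updated without actually changing anything:

[root@localhost~]# multipath -v2 -d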

⧪ How to display the current multipath configuration with all information.

[root@localhost~]# multipath -ll
Dec 21 11:27:17 | multipath.conf line 35, invalid keyword: selector
mpathf (3600c0ff00019e9e9dc94c25801000000) dm-6 HP,MSA 2040 SAN
size=466G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=70 status=active
  |- 1:0:0:4 sdg 8:96 active ready running
  |- 2:0:0:4 sdj 8:144 active ready running
  |- 1:0:1:4 sdq 65:0 active ready running
  `- 2:0:1:4 sdt 65:48 active ready running

⧪ How to remove multipath devices with multipath Command

[root@localhost~]# multipath -f mpathf

Note: if we use the -F option, all unused multipath devices are removed.
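
Based on that note, flushing all unused multipath maps looks like this:

[root@localhost~]# multipath -F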

⧪ How to Force reload device map with multipath Command

[root@localhost~]# multipath -r

Solaris Server process monitoring tool - prstat

There are different tools and commands used in Solaris and other Unix systems to monitor system processes. On a Sun Solaris server specifically, a very good process monitoring tool is "prstat".

In this post, we will look at how prstat works on the Solaris platform.

   !-[solaris]# prstat

When you run the above command, you get the output below on the CLI screen; it refreshes every few seconds and sorts the processes by resource usage.

  PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 21322 root      11M 3236K cpu0    59    0   0:00:00 0.0% prstat/1
 21323 root      18M 4788K sleep   59    0   0:00:00 0.0% sshd/1
 22345 root      10M 2188K sleep   59    0   0:00:00 0.0% bash/1
   584 root      13M 3832K sleep   59    0   0:01:59 0.0% nscd/51
   154 root      13M 2068K sleep   59    0   0:00:00 0.0% syseventd/18
   183 root    1772K  776K sleep   59    0   0:00:13 0.0% utmpd/1
   538 root      11M 2572K sleep   59    0   0:00:00 0.0% picld/4
Total: 12 processes, 31 lwps, load averages: 0.00, 0.00, 0.00


This is the default view of the prstat command, but you can also get a different view of the same information, such as a summary of which users own the CPU-consuming processes:

   !-[solaris]# prstat -a

If you run prstat with the -a option (prstat -a), you get output similar to the default, but the last few lines provide a really useful per-user summary of the top system resource consumers.


  PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 21322 root      11M 3236K cpu0    59    0   0:00:00 0.0% prstat/1
 21323 root      18M 4788K sleep   59    0   0:00:00 0.0% sshd/1
 22345 root      10M 2188K sleep   59    0   0:00:00 0.0% bash/1

 NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
     5 root       52M   13M   1.3%   0:00:00 0.0%
    50 root      841M  571M    56%   0:22:22 0.0%
     2 daemon     17M 4520K   0.4%   0:00:04 0.0%
Total: 12 processes, 31 lwps, load averages: 0.00, 0.00, 0.00

Other useful prstat options for monitoring Solaris server processes are listed below.

!-[solaris]# prstat -L  -> Shows one line per thread (LWP) instead of one line per process.
!-[solaris]# prstat -s  -> Sorts the output by a given key; valid keys are cpu, pri, rss, size, and time (see the example after this list).
!-[solaris]# prstat -t  -> Provides a per-user summary of resource utilization.
!-[solaris]# prstat -Z  -> Provides a summary per local zone.
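
For example, to sort the output by CPU usage and take three samples at a 5-second interval (interval and count are prstat's standard trailing arguments):

!-[solaris]# prstat -s cpu 5 3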

Friday, 20 October 2017

How to mount a CIFS file system on a Linux server

Step-by-step method to mount CIFS on a Linux server:

➤ In the initial step, we check that the package required for the CIFS file system is installed.

[root@localhost]# rpm -qa | grep cifs 
cifs-utils-4.8.1-20.el6.x86_64

As the output above shows, this package is required for the CIFS file system on a Linux system. You can install it with the rpm command if you have the package file on the server; otherwise, install it using the YUM utility.

[root@localhost]#rpm -ivh cifs-utils-4.8.1-20.el6.x86_64.rpm

If you are using YUM, install the package with the command below.

[root@localhost]#yum install cifs-utils*

This installs all the required dependencies related to the CIFS file system.

➤ In this step, we create the mount point on the server where the CIFS file system will be mounted.

[root@localhost]# mkdir -p /backup/cifs

This creates the mount directory where the file system will be mounted.

➤ Now, we create a credentials file containing the CIFS user name and password that will be used to access the share.

[root@localhost]# touch /etc/cifspasswd
[root@localhost]# chmod 600 /etc/cifspasswd

[root@localhost]#vi /etc/cifspasswd

user=castwebsvc
password=*******

➤ In this step, we add a permanent entry for the mounted file system in /etc/fstab so that it is mounted again automatically after a reboot.

[root@localhost]# vi /etc/fstab

Add an fstab entry:

//WindowsServer/share /mount/point cifs rw,mand,user=USER,password=PASS 0 0

Example :
//192.168.0.1/CAST_data4/AICCodeUpload  /backup/cifs cifs   rw,mand,credentials=/etc/cifspasswd         0 0

Use this example as a template for your own system, changing the share and mount point as required.

➤ In the final step, mount the file system using the commands below.

[root@localhost]# mount /backup/cifs
[root@localhost]# mount -a

With the above commands, the CIFS file system is mounted successfully.
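
To confirm the share is really mounted, you can check it with df or look for the cifs entry in /proc/mounts (same example mount point assumed):

[root@localhost]# df -h /backup/cifs
[root@localhost]# grep cifs /proc/mounts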

Tuesday, 18 July 2017

How to use zoneadm command in Solaris Servers

Please find the "zoneadm" commands with examples, as described below.
 
How to Verify a Solaris zone:

To verify a local Solaris zone, run the command below:

!-[solaris]#zoneadm -z <zone> verify

Example: !-[solaris]#zoneadm -z sunz01 verify

In this example, if the zone is configured properly the command produces no output, which means the zone is verified. If it prints a message, check the configuration of the zone.

How to Install a Solaris zone:

To install a local Solaris zone, run the command below:

!-[solaris]#zoneadm -z <zone> install

Example: !-[solaris]#zoneadm -z sunz01 install

In this example, running the command starts the installation of the local zone. The zone is installed from the Solaris repository or from flar images kept at a location on the server.

How to Ready a Solaris zone:

To move the local zone into the ready state, run the command below:

!-[solaris]#zoneadm -z <zone> ready

Example: !-[solaris]#zoneadm -z sunz01 ready

In this example, running the command moves the local zone into the ready state.

How to Boot a Solaris zone:

To boot the Solaris zone from the global zone, run the command below:

!-[solaris]#zoneadm -z <zone> boot

Example: !-[solaris]#zoneadm -z sunz01 boot

After running the above command, your local Solaris zone boots successfully if no errors appear. If you see any messages during boot, investigate where the issue lies.

How to Reboot a Solaris zone:

To reboot or restart a Solaris zone, run the command below from the global zone:

!-[solaris]#zoneadm -z <zone> reboot

Example: !-[solaris]#zoneadm -z sunz01 reboot

With this command, the local zone "sunz01" restarts successfully. It must be run from the global zone only.

How to Shutdown/Halt a Solaris zone:

If you want to shut down or halt your local zone without logging in to it, run the command below from the global zone.

!-[solaris]#zoneadm -z <zone> halt

Example: !-[solaris]#zoneadm -z sunz01 halt

After running this command, the local Solaris zone's state changes from running to installed, which means the zone is now shut down.

How to Uninstall a Solaris zone:

If you want to uninstall a Solaris zone, run the command below:

!-[solaris]#zoneadm -z <zone> uninstall -F

Example: !-[solaris]#zoneadm -z sunz01 uninstall -F

"-F" syntax is used to uninstall the zone forcefully.

How to View Solaris zones:

If you want to see the current status of local zones and other display information, run the command below.

!-[solaris]#zoneadm list -icv

 ID NAME      STATUS    PATH            BRAND    IP
  0 global    running   /               solaris  shared
  1 sunz01    running   /zones/sunz01   solaris  excl
  2 sunz02    running   /zones/sunz02   solaris  excl

Using the above command, you can check the current status of all installed zones on the Solaris server, along with the zone path and other information. All of the above commands must be run from the global zone with root privileges.
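
Putting these commands together, a typical first-boot sequence for a freshly configured zone (reusing the example zone name sunz01) looks like this:

!-[solaris]#zoneadm -z sunz01 verify
!-[solaris]#zoneadm -z sunz01 install
!-[solaris]#zoneadm -z sunz01 boot
!-[solaris]#zoneadm list -icv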

Thursday, 13 July 2017

How to mount a CIFS file system on a Linux server machine

Generally, on Linux or Unix servers, mounting a folder shared from another Linux machine is quite easy compared to mounting a Windows shared folder on a Linux machine.

Step-by-step method to mount CIFS on a Linux server:
 
➤ In the initial step, we check that the package required for the CIFS file system is installed.

[root@localhost]# rpm -qa | grep cifs
cifs-utils-4.8.1-20.el6.x86_64

As the output above shows, this package is required for the CIFS file system on a Linux system. You can install it with the rpm command if you have the package file on the server; otherwise, install it using the YUM utility.

[root@localhost]# rpm -ivh cifs-utils-4.8.1-20.el6.x86_64.rpm

If you are using YUM, install the package with the command below.

[root@localhost]# yum install cifs-utils*

This installs all the required dependencies related to the CIFS file system.
 
➤ In this step, we create the mount point on the server where the CIFS file system will be mounted.

[root@localhost]# mkdir -p /backup/cifs

This creates the mount directory where the file system will be mounted.
 
➤ Now, we create a credentials file containing the CIFS user name and password that will be used to access the share.

[root@localhost]# touch /etc/cifspasswd
[root@localhost]# chmod 600 /etc/cifspasswd

[root@localhost]# vi /etc/cifspasswd

user=castwebsvc
password=*******
 
➤ In this step, we add a permanent entry for the mounted file system in /etc/fstab so that it is mounted again automatically after a reboot.

[root@localhost]# vi /etc/fstab

Add an fstab entry:

//WindowsServer/share /mount/point cifs rw,mand,user=USER,password=PASS 0 0

Example :
//192.168.0.1/CAST_data4/AICCodeUpload  /backup/cifs cifs   rw,mand,credentials=/etc/cifspasswd         0 0

Use this example as a template for your own system, changing the share and mount point as required.
 
➤ In the final step, mount the file system using the commands below.

[root@localhost]# mount /backup/cifs
[root@localhost]# mount -a

With the above commands, the CIFS file system is mounted successfully.
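
If you prefer to test the share before relying on the fstab entry, the same share can also be mounted manually by passing the file system type and credentials file on the command line (same example share and mount point as above):

[root@localhost]# mount -t cifs //192.168.0.1/CAST_data4/AICCodeUpload /backup/cifs -o credentials=/etc/cifspasswd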

Tuesday, 11 July 2017

How to Remove or Delete a Non-Global Zone from Solaris Operating system

As you are aware, non-global zones are hosted on the global zone of the Solaris operating system. You can check the list of non-global zones using the "zoneadm" command, which shows the running and installed zones on the global zone.
 
Please find below the step-by-step method to remove a local zone from the global zone.

Step by Step Method to Remove a Non-Global Zone:

First of all, check the non-global zone list to see which zones are running on the server.

global# zoneadm list -iv

You will see a display that is similar to the following:

ID  NAME      STATUS    PATH            BRAND    IP
 0  global    running   /               solaris  shared
 1  sunz01    running   /zones/sunz01   solaris  shared

In the output above, you can see that the non-global zone "sunz01" is running; this is the zone we want to remove from the Solaris server.

Now we need to shut down the zone we want to delete. We can shut down the non-global zone using any of the commands below.
--------------------------------------------
global#zoneadm -z sunz01 halt
or
global#zoneadm -z sunz01 shutdown
or
global#zlogin sunz01 shutdown
-------------------------------------------
In the next step, once the non-global zone is shut down, uninstall the local zone. You can use the command below to uninstall the non-global zone.

global#zoneadm -z sunz01 uninstall

Using the above command, the non-global zone "sunz01" is uninstalled successfully.

In the last step, delete the configuration of the non-global zone "sunz01" from the global zone; its datasets and zonepath directory can then be removed separately.

global#zonecfg -z sunz01 delete

Using the above command, all the configuration related to this non-global zone is deleted successfully. Now you can remove the directory (zonepath) related to this zone.
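
Assuming the zonepath shown in the earlier listing (/zones/sunz01), removing the leftover directory looks like this; double-check the path before running it:

global#rm -rf /zones/sunz01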

Using the above method, we can remove or delete a non-global zone from the global zone on a Solaris operating system. Please let me know if you face any issues while using this process.

Friday, 7 July 2017

How to configure NTP Server on AIX Operating system

NTP (Network Time Protocol) is one of the oldest internet protocols still in use; it synchronizes computer clocks by distributing UTC (Coordinated Universal Time) over the network.

There are several ways to configure NTP on different Linux servers, but doing the configuration on the AIX operating system can seem tricky. This post walks through the step-by-step configuration of NTP on AIX.

Step by Step Configuration of NTP Server:

➤ In the initial step, check the current NTP status on the AIX server. To do this, run the command below.

ibm_aix:/>lssrc -ls xntpd
-----------------------------------------------
 Program name:    /usr/sbin/xntpd
 Version:         3
 Leap indicator:  00 (No leap second today.)
 Sys peer:        ntp.aix.in.com
 Sys stratum:     4
 Sys precision:   -18
 Debug/Tracing:   DISABLED
 Root distance:   0.014709
 Root dispersion: 0.066422
 Reference ID:    192.168.1.22
 Reference time:  dc721077.d3a8e000  Tue, Mar 14 2017  7:47:19.826
 Broadcast delay: 0.003906 (sec)
 Auth delay:      0.000122 (sec)
 System flags:    pll monitor filegen
 System uptime:   19248381 (sec)
 Clock stability: 0.000107 (sec)
 Clock frequency: 0.000000 (sec)
 Peer: ntp.aix.in.com
      flags: (configured)(sys peer)
      stratum:  3, version: 3
      our mode: client, his mode: server
 Peer: ntpuk.aix.in.com
      flags: (configured)(sys peer)
      stratum:  3, version: 3
      our mode: client, his mode: server
Subsystem         Group            PID          Status
xntpd            tcpip            8520514      active
------------------------------------------------------

You will see output like the above once you run the command. On my AIX machine, the Sys peer line shows a valid server (ntp.aix.in.com). If no valid NTP server is shown, correct it by adding a server line to /etc/ntp.conf and restarting the "xntpd" service.

Note: In this post I use dummy NTP server names instead of the real ones for security reasons.

➤ If the NTP server is not configured and the sys peer shows "insane", add the server entries manually to the NTP configuration file.

ibm_aix:/>vi /etc/ntp.conf

server ntp.aix.in.com
server ntpuk.aix.in.com

Once you have added these NTP server entries to the configuration file, restart the NTP service.

ibm_aix:/>stopsrc -s xntpd
ibm_aix:/>startsrc -s xntpd

Using the above commands, we stop and start the "xntpd" service on the AIX operating system.

➤ In this step, verify the status of the newly added NTP server again.

ibm_aix:/>lssrc -ls xntpd

It can take some time because the synchronization process is running. Once synchronization is complete and you run the above command, you will find the NTP server entry as described in step 1.

Step by Step configuration of NTP Client:

➤ On the client machine, first verify whether you have a server suitable for synchronization. To do this, run the command below.

ibm_aix:/>ntpdate -d ntp.aix.in.com
-----------------------------------------------------------
14 Mar 08:16:21 ntpdate[64356890]: 3.4y
transmit(192.168.1.22)
receive(192.168.1.22)
transmit(192.168.1.22)
receive(192.168.1.22)
transmit(192.168.1.22)
transmit(192.168.1.22)
transmit(192.168.1.22)
server 192.168.1.22, port 123
stratum 16, precision -6, leap 11, trust 000
refid [63.15.23.11], delay 0.03688, dispersion 24.00334
transmitted 4, in filter 4
reference time:      00000000.00000000  Thu, Feb  7 2036  7:28:16.000
originate timestamp: dc721745.3ff1b000  Tue, Mar 14 2017  8:16:21.249
transmit timestamp:  dc721746.3d08a000  Tue, Mar 14 2017  8:16:22.238
filter delay:  0.03688  0.05624  0.00000  0.00000
               0.00000  0.00000  0.00000  0.00000
filter offset: -0.00081 -0.00750 0.000000 0.000000
               0.000000 0.000000 0.000000 0.000000
delay 0.03688, dispersion 24.00334
offset -0.000812

14 Mar 08:16:23 ntpdate[64356890]: no server suitable for synchronization found
--------------------------------------------------------------------------

If you get the message "no server suitable for synchronization found", verify that xntpd is running on the server and that no firewall is blocking port 123.

➤ If no server is suitable for synchronization, specify the xntpd server in /etc/ntp.conf.

ibm_aix:/>vi /etc/ntp.conf

server ntp.aix.in.com

Once you have added the NTP server entry to the client configuration file, restart the "xntpd" service again.

ibm_aix:/>startsrc -s xntpd

➤ If you want the xntpd service to start at boot time, uncomment the line below in the configuration file.

ibm_aix:/>vi /etc/rc.tcpip

Uncomment the following line:

start /usr/sbin/xntpd "$src_running"

➤ Now verify whether the client machine has synchronized with the NTP server. Use the same command we used earlier to check the status.

ibm_aix:/>lssrc -ls xntpd

This time, on the NTP client machine, the Sys peer should display the IP address or name of your "xntpd" server. Synchronization takes some time, so be patient.
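
If the ntpq utility is available on your AIX release (an assumption; lssrc remains the primary check here), it gives a quick view of the peer status and offsets:

ibm_aix:/>ntpq -p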

Tuesday, 4 July 2017

Step by Step Configuration of NTP Server on HP-UX Server

In this post, I would like to explain how to configure NTP (Network Time Protocol) on an HP-UX server. In my recent posts you can find the NTP configuration for the Solaris and AIX platforms.

As you know, NTP (Network Time Protocol) is one of the oldest internet protocols still in use; it synchronizes computer clocks by distributing UTC (Coordinated Universal Time) over the network. It is basically used for time synchronization on Unix servers.

Step by Step Configuration of NTP Server on HP-UX:

➤ In the first step, we check the configuration file of the "xntpd" daemon. By default, the startup configuration file for this daemon is "/etc/rc.config.d/netdaemons".

hpx:/> vi /etc/rc.config.d/netdaemons

######################################
# xntp configuration.  See xntpd(1m) #
######################################
#
#  Time synchronization daemon
#
# NTPDATE_SERVER: name of trusted timeserver to synchronize with at boot
# (default is rootserver for diskless clients)
# XNTPD:        Set to 1 to start xntpd (0 to not run xntpd)
# XNTPD_ARGS:  command line arguments for xntpd
#
# Also, see the /etc/ntp.conf and /etc/ntp.keys file for additional
# configuration.
#
export NTPDATE_SERVER=
export XNTPD=0
export XNTPD_ARGS=

This is the default content of the file; to enable the xntpd daemon we need to change the variables defined there.

export NTPDATE_SERVER='ntp.in.pool.org'
export XNTPD=1
export XNTPD_ARGS=

Note: You must change NTPDATE_SERVER to your own NTP server name.

➤ For the NTP configuration, make sure the correct timezone is set in the /etc/TIMEZONE file.

hpx:/> cat /etc/TIMEZONE
TZ=IST-5:30
export TZ

You can edit the file in vi editor and change the time zone as per your location.

➤ Now, we need to make some changes in NTP configuration files. 

hpx:/> cat /etc/ntp.conf
# NTP server configuration
server ntp.in.org.com
server ntpin.in.org.com

You need to replace the NTP server names accordingly. In this post I use dummy server names.

➤ After setting the NTP server names, restart the NTP service on the HP-UX operating system and verify the NTP configuration.

hpx:/> /sbin/init.d/xntpd restart

hpx:/> ntpq -p

It should now show the correct NTP server information, which you can match against the server names used in the step above.
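
If the clock is far off before the daemon starts, you can optionally do a one-time manual sync with ntpdate while xntpd is stopped (assuming ntpdate is present on your HP-UX release; the server name is the same dummy one used above):

hpx:/> ntpdate ntp.in.org.com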

HP-UX Logical Volume Manager (LVM) Commands with an Example

In this post, you can get an idea of the HP-UX Logical Volume Manager commands through examples. As you know, LVM is used for disk management in the operating system; it allows you to manage physical disks and logical volumes.

Please find the HP-UX LVM commands with examples below.

➤ Create a new volume group, logical volume and file system:

You can use the commands below on the HP-UX operating system to create a new volume group, logical volume, and file system.

hpx:/>pvcreate /dev/rdsk/c2t1d0

To create a new volume group, we first need to create a physical volume, as shown in the command above.

hpx:/>mkdir /dev/vg01
hpx:/>mknod /dev/vg01/group c 64 0x010000

In the above step, we create the volume group directory and its group device file.

hpx:/>vgcreate /dev/vg01 /dev/dsk/c2t1d0

After successfully creating the volume group, we create a new logical volume, as shown in the command below.

hpx:/>lvcreate -L 2048 -n vgvol1 /dev/vg01

hpx:/>newfs -F vxfs -o largefiles /dev/vg01/rvgvol1

Using the above command, we create a new file system. In the next step, we create a directory and mount the newly created file system on it.

hpx:/>mkdir /backup
hpx:/>mount /dev/vg01/vgvol1 /backup

Once you mount the file system on the logical volume, you can run the bdf or mount command to verify that it mounted successfully.
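
To make this mount persistent across reboots, an /etc/fstab entry along these lines can be added (field layout follows the usual HP-UX fstab format; adjust the mount options to your needs):

hpx:/>vi /etc/fstab
/dev/vg01/vgvol1 /backup vxfs delaylog 0 2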

Create a striped file system:

Here, we create a striped logical volume within the volume group.

hpx:/>lvcreate -i 2 -I 32 -L 48 -n vgvol1 /dev/vg01

-i  number of stripes
-I  stripe size in KB (32 KB here)
-L  size of the volume in MB

HP-UX display boot information:

You can use the below command to display boot information.

hpx:/>lvlnboot -v /dev/vg00

Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
        /dev/dsk/c2t0d0 (0/1/1/0.0.0) -- Boot Disk
        /dev/dsk/c2t1d0 (0/1/1/0.1.0) -- Boot Disk
Boot: lvol1     on:     /dev/dsk/c2t0d0
                        /dev/dsk/c2t1d0
Root: lvol3     on:     /dev/dsk/c2t0d0
                        /dev/dsk/c2t1d0
Swap: lvol2     on:     /dev/dsk/c2t0d0
                        /dev/dsk/c2t1d0
Dump: lvol2     on:     /dev/dsk/c2t0d0, 0

When you run the above command, you get output like the above; the boot information shows that two disks are available for booting.

HP-UX display all disk information:

hpx:/> ioscan -funC disk
Class     I  H/W Path        Driver   S/W State   H/W Type     Description
==============================================================
disk      0  0/0/2/0.0.0.0   sdisk    CLAIMED     DEVICE       TEAC    DV-28E-N
                            /dev/dsk/c0t0d0   /dev/rdsk/c0t0d0
disk      1  0/1/1/0.0.0     sdisk    CLAIMED     DEVICE       HP 146 GMAX3147NC
                            /dev/dsk/c2t0d0   /dev/rdsk/c2t0d0
disk      2  0/1/1/0.1.0     sdisk    CLAIMED     DEVICE       HP 146 GMAX3147NC
                            /dev/dsk/c2t1d0   /dev/rdsk/c2t1d0

In the above output, you can find all the disks available in the system.

HP-UX display dump devices:

hpx:/> lvlnboot -v

This shows the boot information, in which you can check the dump device names.