8. EXPRESSCLUSTER command reference

This chapter describes commands that are used on EXPRESSCLUSTER.

This chapter covers:

8.1. Operating the cluster from the command line

EXPRESSCLUSTER provides various commands for operating a cluster from the command line. These commands are useful, for example, when you are setting up a cluster or when you cannot use the Cluster WebUI. A greater number of operations can be performed from the command line than from the Cluster WebUI.

Note

When a group resource (for example, a disk resource or an exec resource) is configured as a recovery target in the error detection settings of a monitor resource and the monitor resource detects an error, do not perform the following operations, either with the related commands or from the Cluster WebUI, while recovery (reactivation -> failover -> final action) is in progress:

  • terminate/suspend the cluster

  • start/terminate/migrate a group

If you perform the operations listed above while recovery triggered by an error detected by a monitor resource is in progress, other group resources of that group may fail to stop. However, once the final action has been executed, these operations can be performed even if a monitor resource has detected an error.

Important

The installation directory contains executable files and script files that are not listed in this guide. Do not execute these files from any program or application other than EXPRESSCLUSTER. Any problems caused by executing them outside of EXPRESSCLUSTER will not be supported.

8.2. EXPRESSCLUSTER commands

Commands for configuring a cluster

  • clpcfctrl: Distributes configuration data created by the Cluster WebUI to servers, and backs up the cluster configuration data to be used by the Cluster WebUI. (Section 8.9.)

  • clplcnsc: Manages the product or trial version license of this product. (Section 8.12.)

  • clpcfchk: Checks the cluster configuration data. (Section 8.36.)

Commands for displaying status

  • clpstat: Displays the cluster status and configuration information. (Section 8.3.)

  • clphealthchk: Checks the process health. (Section 8.28.)

Commands for cluster operation

  • clpcl: Starts, stops, suspends, or resumes the EXPRESSCLUSTER daemon. (Section 8.4.)

  • clpdown: Stops the EXPRESSCLUSTER daemon and shuts down the server. (Section 8.5.)

  • clpstdn: Stops the EXPRESSCLUSTER daemon across the whole cluster and shuts down all servers. (Section 8.6.)

  • clpgrp: Starts, stops, or moves groups. This command also migrates the virtual machine. (Section 8.7.)

  • clptoratio: Extends or displays the various time-out values of all servers in the cluster. (Section 8.10.)

  • clproset: Modifies and displays the I/O permission of a shared disk partition device. (Section 8.13.)

  • clpmonctrl: Controls monitor resources. (Section 8.17.)

  • clpregctrl: Displays or initializes the reboot count on a single server. (Section 8.19.)

  • clprsc: Stops or resumes group resources. (Section 8.18.)

  • clpcpufreq: Controls the CPU frequency. (Section 8.21.)

  • clpledctrl: Controls the chassis identify function. (Section 8.22.)

  • clptrnreq: Requests a server to execute a process. (Section 8.23.)

  • clprexec: Requests that an EXPRESSCLUSTER server execute a process from external monitoring. (Section 8.24.)

  • clpbmccnf: Changes the information on the BMC user name and password. (Section 8.25.)

  • clpbwctrl: Controls the cluster activation synchronization wait processing. (Section 8.26.)

Log-related commands

  • clplogcc: Collects logs and OS information. (Section 8.8.)

  • clplogcf: Modifies and displays the configuration of the log level and the size of log output files. (Section 8.11.)

  • clpperfc: Displays cluster statistics data about groups and monitor resources. (Section 8.35.)

Script-related commands

  • clplogcmd: Outputs a desired message from an exec resource script to the specified destination. (Section 8.16.)

Mirror-related commands (when the Replicator is used)

  • clpmdstat: Displays the mirroring status and configuration information. (Section 8.14.1.)

  • clpmdctrl: Activates/deactivates a mirror disk resource or recovers a mirror. Displays or modifies the maximum number of request queues. (Section 8.14.2.)

  • clpmdinit: Initializes the cluster partition of a mirror disk resource, and creates a file system on the data partition of a mirror disk resource. (Section 8.14.3.)

Hybrid disk-related commands (when the Replicator DR is used)

  • clphdstat: Displays the hybrid disk status and configuration information. (Section 8.15.1.)

  • clphdctrl: Activates/deactivates a hybrid disk resource or recovers a mirror. Displays or modifies the maximum number of request queues. (Section 8.15.2.)

  • clphdinit: Initializes the cluster partition of a hybrid disk resource. (Section 8.15.3.)

System monitor-related commands (when the System Resource Agent is used)

  • clpprer: Estimates future values from the trend of the given resource usage data. (Section 8.27.)

DB rest point-related commands

  • clpdb2still: Controls the securing/release of a DB2 rest point. (Section 8.29.)

  • clpmysqlstill: Controls the securing/release of a MySQL rest point. (Section 8.30.)

  • clporclstill: Controls the securing/release of an Oracle rest point. (Section 8.31.)

  • clppsqlstill: Controls the securing/release of a PostgreSQL rest point. (Section 8.32.)

  • clpmssqlstill: Controls the securing/release of a SQL Server rest point. (Section 8.33.)

  • clpsybasestill: Controls the securing/release of a Sybase rest point. (Section 8.34.)

Other commands

  • clplamp: Turns off the warning light of the specified server. (Section 8.20.)

8.3. Displaying the cluster status (clpstat command)

The clpstat command displays the cluster status and configuration information.

Command line
clpstat -s [--long] [-h hostname]
clpstat -g [-h hostname]
clpstat -m [-h hostname]
clpstat -n [-h hostname]
clpstat -p [-h hostname]
clpstat -i [--detail] [-h hostname]
clpstat --cl [--detail] [-h hostname]
clpstat --sv [server_name] [--detail] [-h hostname]
clpstat --hb [hb_name] [--detail] [-h hostname]
clpstat --np [np_name] [--detail] [-h hostname]
clpstat --svg [servergroup_name] [--detail] [-h hostname]
clpstat --grp [group_name] [--detail] [-h hostname]
clpstat --rsc [resource_name] [--detail] [-h hostname]
clpstat --mon [monitor_name] [--detail] [-h hostname]
clpstat --xcl [xclname] [--detail] [-h hostname]
clpstat --local
Description

This command displays the cluster status and configuration data.

Option
-s (or no option)

Displays the cluster status.

--long

Displays the full cluster name and resource names without truncation.

-g

Displays a cluster group map.

-m

Displays status of each monitor resource on each server.

-n

Displays each heartbeat resource status on each server.

-p

Displays the status of the network partition resolution resources on each server.

-i

Displays the configuration information of the whole cluster.

--cl

Displays the cluster configuration data. For the Replicator or Replicator DR, the Mirror Agent information is also displayed.

--sv [server_name]

Displays the server configuration information. By specifying the name of a server, you can display information of the specified server.

--hb [hb_name]

Displays heartbeat resource configuration information. By specifying the name of a heartbeat resource, you can display only the information on the specified heartbeat.

--np [np_name]

Displays network partition resolution resource configuration information. By specifying the name of a network partition resolution resource, you can display only the information on the specified network partition resolution resource.

--svg [servergroup_name]

Displays server group configuration information. By specifying the name of a server group, you can display only the information on the specified server group.

--grp [group_name]

Displays group configuration information. By specifying the name of a group, you can display only the information on the specified group.

--rsc [resource_name]

Displays group resource configuration information. By specifying the name of a group resource, you can display only the information on the specified group resource.

--mon [monitor_name]

Displays monitor resource configuration information. By specifying the name of a monitor resource, you can display only the information on the specified resource.

--xcl [xclname]

Displays configuration information of exclusion rules. By specifying an exclusion rule name, you can display only the information on the specified exclusion rule.

--detail

Displays more detailed information on the setting.

-h hostname

Acquires information from the server specified with hostname. When the -h option is omitted, information is acquired from the server on which the command is run (local server).

--local

Displays the cluster status. This option displays the same information as when the -s option is specified or when no option is specified. However, it displays only the information of the server on which this command is executed, without communicating with other servers.

Return Value

When the -s option is not specified

  0                    : Success
  9                    : The command was run in duplicate (another instance was already running).
  Other than the above : Failure
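
As a simple way to check the return value from the shell, run the command and then examine the exit code (the -g option is used here only as an example of a run without the -s option):

# clpstat -g
# echo $?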

Remarks

Depending on the combination of options, the configuration information is displayed in various forms.

Notes
  • Run this command as the root user.

  • The cluster daemon must be activated on the server where you run this command.

  • When you specify the name of a server for the -h option, the server should be in the cluster.

  • For the language used for command output, see "Cluster properties - Info tab" in "2. Parameter details" in this guide.

  • When you run the clpstat command with the -s option or without any option, names such as the cluster name and resource names are displayed only partway (truncated); specify the --long option to display them in full, as in the example below.
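
    For example, to display the status with full names:

    # clpstat -s --long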

Example of Execution

Examples of information displayed after running these commands are provided in the next topic.

Error Messages

  Message: Log in as root.
  Cause/Solution: Log on as the root user.

  Message: Invalid configuration file. Create valid cluster configuration data.
  Cause/Solution: Create valid cluster configuration data by using the Cluster WebUI.

  Message: Invalid option.
  Cause/Solution: Specify a valid option.

  Message: Could not connect to the server. Check if the cluster daemon is active.
  Cause/Solution: Check if the cluster daemon is activated.

  Message: Invalid server status.
  Cause/Solution: Check if the cluster daemon is activated.

  Message: Server is not active. Check if the cluster daemon is active.
  Cause/Solution: Check if the cluster daemon is activated.

  Message: Invalid server name. Specify a valid server name in the cluster.
  Cause/Solution: Specify the valid name of a server in the cluster.

  Message: Invalid heartbeat resource name. Specify a valid heartbeat resource name in the cluster.
  Cause/Solution: Specify the valid name of a heartbeat resource in the cluster.

  Message: Invalid network partition resource name. Specify a valid network partition resource name in the cluster.
  Cause/Solution: Specify the valid name of a network partition resolution resource in the cluster.

  Message: Invalid group name. Specify a valid group name in the cluster.
  Cause/Solution: Specify the valid name of a group in the cluster.

  Message: Invalid group resource name. Specify a valid group resource name in the cluster.
  Cause/Solution: Specify the valid name of a group resource in the cluster.

  Message: Invalid monitor resource name. Specify a valid monitor resource name in the cluster.
  Cause/Solution: Specify the valid name of a monitor resource in the cluster.

  Message: Connection was lost. Check if there is a server where the cluster daemon is stopped in the cluster.
  Cause/Solution: Check if there is any server on which the cluster daemon has stopped in the cluster.

  Message: Invalid parameter.
  Cause/Solution: The value specified as a command parameter may be invalid.

  Message: Internal communication timeout has occurred in the cluster server. If it occurs frequently, set a longer timeout.
  Cause/Solution: A time-out occurred in the EXPRESSCLUSTER internal communication. If the time-out keeps occurring, set the internal communication time-out longer.

  Message: Internal error. Check if memory or OS resources are sufficient.
  Cause/Solution: Check to see if the memory or OS resources are sufficient.

  Message: Invalid server group name. Specify a valid server group name in the cluster.
  Cause/Solution: Specify the correct server group name in the cluster.

  Message: The cluster is not created.
  Cause/Solution: Create and apply the cluster configuration data.

  Message: Could not connect to the server. Internal error. Check if memory or OS resources are sufficient.
  Cause/Solution: Check to see if the memory or OS resources are sufficient.

  Message: Cluster is stopped. Check if the cluster daemon is active.
  Cause/Solution: Check if the cluster daemon is activated.

  Message: Cluster is suspended. To display the cluster status, use --local option.
  Cause/Solution: The cluster is suspended. To display the cluster status, use the --local option.

8.3.1. Common entry examples

8.3.2. Displaying the status of the cluster (-s option)

The following is an example of display when you run the clpstat command with the -s option or without any option:

Example of a command entry
# clpstat -s
Example of the display after running the command
===================== CLUSTER STATUS ======================
Cluster : cluster
<server>
 *server1............: Online server1
   lanhb1            : Normal LAN Heartbeat
   lanhb2            : Normal LAN Heartbeat
   diskhb1           : Normal Disk Heartbeat
   comhb1            : Normal COM Heartbeat
   witnesshb1        : Normal Witness Heartbeat
   pingnp1           : Normal ping resolution
   pingnp2           : Normal ping resolution
   httpnp1           : Normal http resolution

 server2.............: Online server2
   lanhb1            : Normal LAN Heartbeat
   lanhb2            : Normal LAN Heartbeat
   diskhb1           : Normal Disk Heartbeat
   comhb1            : Normal COM Heartbeat
   witnesshb1        : Normal Witness Heartbeat
   pingnp1           : Normal ping resolution
   pingnp2           : Normal ping resolution
   httpnp1           : Normal http resolution

<group>
  failover1..........: Online failover group1
   current           : server1
   disk1             : Online /dev/sdb5
   exec1             : Online exec resource1
   fip1              : Online 10.0.0.11
  failover2..........: Online failover group2
   current           : server2
   disk2             : Online /dev/sdb6
   exec2             : Online exec resource2
   fip2              : Online 10.0.0.12
<monitor>
  diskw1             : Normal disk monitor1
  diskw2             : Normal disk monitor2
  ipw1               : Normal ip monitor1
  pidw1              : Normal pidw1
  userw              : Normal usermode monitor
  sraw               : Normal sra monitor
=============================================================

Information on each status is provided in "Status Descriptions".

8.3.3. Displaying a group map (-g option)

To display a group map, run the clpstat command with the -g option.

Example of a command entry
# clpstat -g
Example of the display after running the command
===================== GROUPMAP INFORMATION =================
Cluster : cluster
 *server0 : server1
  server1 : server2
-------------------------------------------------------------
  server0 [o] : failover1[o] failover2[o]
  server1 [o] : failover3[o]
=============================================================
  • Groups that are not running are not displayed.

  • Information on each status is provided in "Status Descriptions".

8.3.4. Displaying the status of monitor resources (-m option)

To display the status of monitor resources, run the clpstat command with the -m option.

Example of a command entry
# clpstat -m
Example of the display after running the command
=================== MONITOR RESOURCE STATUS =================
Cluster : cluster
 *server0 : server1
  server1 : server2
 Monitor0 [diskw1 : Normal]
-------------------------------------------------------------
  server0 [o] : Online
  server1 [o] : Online
 Monitor1 [diskw2 : Normal]
-------------------------------------------------------------
  server0 [o] : Online
  server1 [o] : Online
 Monitor2 [ipw1 : Normal]
-------------------------------------------------------------
  server0 [o] : Online
  server1 [o] : Online
 Monitor3 [pidw1 : Normal]
-------------------------------------------------------------
  server0 [o] : Online
  server1 [o] : Offline
 Monitor4 [userw : Normal]
-------------------------------------------------------------
  server0 [o] : Online
  server1 [o] : Online
 Monitor5 [sraw : Normal]
-------------------------------------------------------------
  server0 [o] : Online
  server1 [o] : Online
=============================================================

Information on each status is provided in "Status Descriptions".

8.3.5. Displaying the status of heartbeat resources (-n option)

To display the status of heartbeat resources, run clpstat command with the -n option.

Example of a command entry
# clpstat -n
Example of the display after running the command
================== HEARTBEAT RESOURCE STATUS ====================
Cluster : cluster
 *server0 : server1
  server1 : server2
  HB0 : lanhb1
  HB1 : lanhb2
  HB2 : diskhb1
  HB3 : comhb1
  HB4 : witnesshb1

  [on server0 : Online]
         HB   0 1 2 3 4
-----------------------------------------------------------------
  server0 : o o o o o
  server1 : o o o x o

  [on server1 : Online]
         HB   0 1 2 3 4
-----------------------------------------------------------------
  server0 : o o o x o
  server1 : o o o o o
=================================================================

Detailed information on each status is provided in "Status Descriptions".

The status of the example shown above

The example above presents the status of all heartbeat resources seen from server0 and server1 when the COM heartbeat resource is disconnected.

Because comhb1, a COM heartbeat resource, is not able to communicate from both servers, communication to server1 on server0 or communication to server0 on server1 is unavailable.

The other heartbeat resources on both servers are in a status that allows communication.

8.3.6. Displaying the status of network partition resolution resources (-p option)

To display the status of network partition resolution resources, run clpstat command with the -p option.

Example of a command entry
# clpstat -p
Example of the display after running the command
=============== NETWORK PARTITION RESOURCE STATUS ================
Cluster : cluster
 *server0 : server1
  server1 : server2
  NP0 : pingnp1
  NP1 : pingnp2
  NP2 : httpnp1

  [on server0 : Caution]
       NP    0 1 2
-----------------------------------------------------------------
  server0 : o x o
  server1 : o x o

  [on server1 : Caution]
       NP    0 1 2
-----------------------------------------------------------------
  server0 : o x o
  server1 : o x o
=================================================================

Detailed information on each status is provided in "Status Descriptions".

The status of the example shown above

The example above shows the status of all the network partition resolution resources seen from server0 and server1 when the ping destination device of the network partition resolution resource pingnp2 is down.

8.3.7. Displaying the cluster configuration data (--cl option)

To display the configuration data of a cluster, run the clpstat command with the -i, --cl, --svg, --hb, --grp, --rsc, --mon, or --xcl option. You can see more detailed information by specifying the --detail option.

For details of each item of the list, see "Cluster properties" in "2. Parameter details" in this guide.

To display the cluster configuration data, run the clpstat command with the --cl option.

Example of a command entry
# clpstat --cl
Example of the display after running the command
===================== CLUSTER INFORMATION ==================
[Cluster : cluster]
Comment  : failover cluster
=============================================================

8.3.8. Displaying only the configuration data of certain servers (--sv option)

When you want to display only the cluster configuration data on a specified server, specify the name of the server after the --sv option in the clpstat command. If you want to see the details, specify the --detail option. When the name of the server is not specified, cluster configuration data of all servers are displayed.

Example of a command entry
# clpstat --sv server1
Example of the display after running the command
===================== CLUSTER INFORMATION ==================
 [Server0 : server1]
   Comment                : server1
   Virtual Infrastructure : vSphere
   Product                : EXPRESSCLUSTER X 4.2 for Linux
   Internal Version       : 4.2.0-1
   Edition                : X
   Platform               : Linux
=============================================================

8.3.9. Displaying only the resource information of certain heartbeats (--hb option)

When you want to display only the cluster configuration data on a specified heartbeat resource, specify the name of the heartbeat resource after the --hb option in the clpstat command. If you want to see the details, specify the --detail option.

Example of a command entry

For a LAN heartbeat resource:

# clpstat --hb lanhb1
Example of the display after running the command
==================== CLUSTER INFORMATION ===================
[HB0 : lanhb1]
   Type                 : lanhb
   Comment              : LAN Heartbeat
=============================================================
Example of a command entry

For disk heartbeat resource:

# clpstat --hb diskhb1
Example of the display after running the command
===================== CLUSTER INFORMATION ==================
[HB2 : diskhb1]
   Type                  : diskhb
   Comment               : Disk Heartbeat
=============================================================
Example of a command entry

For COM heartbeat resource:

# clpstat --hb comhb1
Example of the display after running the command
===================== CLUSTER INFORMATION ==================
[HB3 : comhb1]
   Type                  : comhb
   Comment               : COM Heartbeat
=============================================================
Example of a command entry

For kernel mode LAN heartbeat resource:

# clpstat --hb lankhb1
Example of the display after running the command
===================== CLUSTER INFORMATION ==================
[HB4 : lankhb1]
   Type                  : lankhb
   Comment               : Kernel Mode LAN Heartbeat
=============================================================
Example of a command entry

For a BMC heartbeat resource:

# clpstat --hb bmchb1
Example of the display after running the command
==================== CLUSTER INFORMATION =======================
   [HB0 : bmchb1]
   Type                 : bmchb
   Comment              : BMC Heartbeat
=================================================================
Tips

By using the --sv option and the --hb option together, you can see the information as follows.

Example of a command entry
# clpstat --sv --hb
Example of the display after running the command:
===================== CLUSTER INFORMATION =================
 [Server0 : server1]
  Comment                : server1
  Virtual Infrastructure :
  Product                : EXPRESSCLUSTER X 4.2 for Linux
  Internal Version       : 4.2.0-1
  Edition                : X
  Platform               : Linux
 [HB0 : lanhb1]
  Type                   : lanhb
  Comment                : LAN Heartbeat
 [HB1 : lanhb2]
  Type                   : lanhb
  Comment                : LAN Heartbeat
 [HB2 : diskhb1]
  Type                   : diskhb
  Comment                : Disk Heartbeat
 [HB3 : comhb1]
  Type                   : comhb
  Comment                : COM Heartbeat
 [HB4 : witnesshb1]
  Type                   : witnesshb
  Comment                : Witness Heartbeat
 [Server1 : server2]
  Comment                : server2
  Virtual Infrastructure :
  Product                : EXPRESSCLUSTER X 4.2 for Linux
  Internal Version       : 4.2.0-1
  Edition                : X
  Platform               : Linux
 [HB0 : lanhb1]
  Type                   : lanhb
  Comment                : LAN Heartbeat
 [HB1 : lanhb2]
  Type                   : lanhb
  Comment                : LAN Heartbeat
 [HB2 : diskhb1]
  Type                   : diskhb
  Comment                : Disk Heartbeat
 [HB3 : comhb1]
  Type                   : comhb
  Comment                : COM Heartbeat
 [HB4 : witnesshb1]
  Type                   : witnesshb
  Comment                : Witness Heartbeat
============================================================

8.3.10. Displaying only the configuration data of certain network partition resolution resources (--np option)

When you want to display only the cluster configuration data on the specified network partition resolution resource, specify the name of the network partition resolution resource after the --np option in the clpstat command. If you want to see the details, specify the --detail option. When you do not specify the name of the network partition resolution resource, the cluster configuration data of all the network partition resolution resources is displayed.

Example of a command entry

For a PING network partition resolution resource:

# clpstat --np pingnp1
Example of the display after running the command
===================== CLUSTER INFORMATION =====================
 [NP0 : pingnp1]
   Type                  : pingnp
   Comment               : ping resolution
=================================================================
Example of a command entry

For a HTTP network partition resolution resource:

# clpstat --np httpnp1
Example of the display after running the command
===================== CLUSTER INFORMATION =====================
 [NP0 : httpnp1]
   Type                  : httpnp
   Comment               : http resolution
=================================================================

8.3.11. Displaying only the configuration data of a certain server group (--svg option)

To display only the cluster configuration data on a specified server group, specify the name of the server group after the --svg option in the clpstat command. When you do not specify the name of a server group, the cluster configuration data of all the server groups is displayed.

Example of a command entry
# clpstat --svg servergroup1
Example of the display after running the command
===================== CLUSTER INFORMATION =====================
 [ServerGroup0 : servergroup1]
   server0               : server1
   server1               : server2
   server2               : server3
=================================================================

8.3.12. Displaying only the configuration data of certain groups (--grp option)

When you want to display only the cluster configuration data on a specified group, specify the name of the group after the --grp option in the clpstat command. If you want to see the details, specify the --detail option. When you do not specify the name of group, the cluster configuration data of all the groups is displayed.

Example of a command entry
# clpstat --grp failover1
Example of the display after running the command
===================== CLUSTER INFORMATION ==================
[Group0 : failover1]
   Type                  : failover
   Comment               : failover group1
============================================================

8.3.13. Displaying only the configuration data of a certain group resource (--rsc option)

When you want to display only the cluster configuration data on a specified group resource, specify the name of the group resource after the --rsc option in the clpstat command. If you want to see the details, specify the --detail option. When you do not specify the name of a group resource, the cluster configuration data of all the group resources is displayed.

Example of a command entry

For floating IP resource:

# clpstat --rsc fip1
Example of the display after running the command
===================== CLUSTER INFORMATION =====================
 [Resource2 : fip1]
   Type                  : fip
   Comment               : 10.0.0.11
   IP Address            : 10.0.0.11
================================================================
Tips

By using the --grp option and the --rsc option together, you can display the information as follows.

Example of a command entry
# clpstat --grp --rsc
Example of the display after running the command
===================== CLUSTER INFORMATION ==================
 [Group0 : failover1]
  Type                      : failover
  Comment                   : failover group1
 [Resource0 : disk1]
  Type                      : disk
  Comment                   : /dev/sdb5
  Disk Type                 : disk
  File System               : ext2
  Device Name               : /dev/sdb5
  Raw Device Name           :
  Mount Point               : /mnt/sdb5
 [Resource1 : exec1]
  Type                      : exec
  Comment                   : exec resource1
  Start Script Path         : /opt/userpp/start1.sh
  Stop Script Path          : /opt/userpp/stop1.sh
 [Resource2 : fip1]
  Type                      : fip
  Comment                   : 10.0.0.11
  IP Address                : 10.0.0.11
 [Group1 : failover2]
  Type                      : failover
  Comment                   : failover group2
 [Resource0 : disk2]
  Type                      : disk
  Comment                   : /dev/sdb6
  Disk Type                 : disk
  File System               : ext2
  Device Name               : /dev/sdb6
  Raw Device Name           :
  Mount Point               : /mnt/sdb6
 [Resource1 : exec2]
  Type                      : exec
  Comment                   : exec resource2
  Start Script Path         : /opt/userpp/start2.sh
  Stop Script Path          : /opt/userpp/stop2.sh
 [Resource2 : fip2]
  Type                      : fip
  Comment                   : 10.0.0.12
  IP Address                : 10.0.0.12
=============================================================

8.3.14. Displaying only the configuration data of a certain monitor resource (--mon option)

When you want to display only the cluster configuration data on a specified monitor resource, specify the name of the monitor resource after the --mon option in the clpstat command. If you want to see the details, specify the --detail option. When you do not specify the name of a monitor resource, the cluster configuration data of all monitor resources is displayed.

Example of a command entry

For floating IP monitor resource:

# clpstat --mon fipw1
Example of the display after running the command:
===================== CLUSTER INFORMATION =====================
 [Monitor2 : fipw1]
   Type                        : fipw
   Comment                     : fip monitor1
=================================================================

8.3.15. Displaying the configuration data of a resource specified for an individual server (--rsc option or --mon option)

When you want to display the configuration data on a resource specified for an individual server, specify the name of the resource after the --rsc or --mon option in the clpstat command.

Example of a command entry

When the monitor target IP address of the IP monitor resource is set to an individual server:

# clpstat --mon ipw1
Example of the display after running the command:
===================== CLUSTER INFORMATION =====================
 [Monitor2 : ipw1]
  Type                        : ipw
  Comment                     : ip monitor1
  IP Addresses                : Refer to server's setting
 <server1>
  IP Addresses                : 10.0.0.253
                              : 10.0.0.254
 <server2>
  IP Addresses                : 10.0.1.253
                              : 10.0.1.254
=================================================================

8.3.16. Displaying only the configuration data of specific exclusion rules (--xcl option)

When you want to display only the cluster configuration data on a specified exclusion rule, specify the exclusion rule name after the --xcl option in the clpstat command.

Example of a command entry
# clpstat --xcl excl1
Example of the display after running the command
===================== CLUSTER INFORMATION =====================
 [Exclusive Rule0 : excl1]
  Exclusive Attribute         : Normal
  group0                      : failover1
  group1                      : failover2
=================================================================

8.3.17. Displaying all configuration data (-i option)

By specifying the -i option, you can display the configuration information that is shown when --cl, --sv, --hb, --svg, --grp, --rsc, --mon, and --xcl options are all specified.

If you run the command with the -i option and the --detail option together, all the detailed cluster configuration data is displayed. Because this option displays a large amount of information at a time, pipe the output to a command such as less, or redirect it to a file.
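
For example, the full detailed output can be paged with less or redirected to a file (the file path here is only an illustration):

# clpstat -i --detail | less
# clpstat -i --detail > /tmp/clpstat_all.txt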

Tips

Specifying the -i option displays all the information on a console. If you want to display some of the information, it is useful to combine the --cl, --sv, --hb, --svg, --grp, --rsc, and/or --mon option. For example, you can use these options as follows:

Example of a command entry

If you want to display the detailed information of the server whose name is "server0," the group whose name is "failover1," and the group resources of the specified group, enter:

# clpstat --sv server0 --grp failover1 --rsc --detail

8.3.18. Displaying the status of the cluster (--local option)

By specifying the --local option, you can display only information of the server on which you execute the clpstat command, without communicating with other servers.

Example of a command entry
# clpstat --local
Example of the display after running the command
===================== CLUSTER STATUS ======================
  Cluster : cluster
   cluster..........: Start        cluster
  <server>
  *server1..........: Online       server1
  lanhb1            : Normal       LAN Heartbeat
  lanhb2            : Normal       LAN Heartbeat
  diskhb1           : Normal       DISK Heartbeat
  comhb1            : Normal       COM Heartbeat
  witnesshb1        : Normal       Witness Heartbeat
  pingnp1           : Normal       ping resolution
  pingnp2           : Normal       ping resolution
  httpnp1           : Normal       http resolution

  server2...........: Online       server2
  lanhb1            : -            LAN Heartbeat
  lanhb2            : -            LAN Heartbeat
  diskhb1           : -            DISK Heartbeat
  comhb1            : -            COM Heartbeat
  witnesshb1        : -            Witness Heartbeat
  pingnp1           : -            ping resolution
  pingnp2           : -            ping resolution
  httpnp1           : -            http resolution

  <group>
  failover1.........: Online       failover group1
  current           : server1
  disk1             : Online       /dev/sdb5
  exec1             : Online       exec resource1
  fip1              : Online       10.0.0.11
  failover2.........: -            failover group2
  current           : server2
  disk2             : -            /dev/sdb6
  exec2             : -            exec resource2
  fip2              : -            10.0.0.12
  <monitor>
  diskw1            : Online       disk monitor1
  diskw2            : Online       disk monitor2
  ipw1              : Online       ip monitor1
  pidw1             : Online       pidw1
  userw             : Online       usermode monitor
  sraw              : Online       sra monitor
=============================================================

Information on each status is provided in "Status Descriptions".

8.3.19. Status Descriptions

Cluster

  Function: Status display (--local)

    Start             : Starting
    Suspend           : Being suspended
    Stop              : Offline Pending
    Unknown           : Status unknown

Server

  Function: Status display / Heartbeat resource status display

    Online            : Starting
    Offline           : Offline Pending
    Online Pending    : Now being started
    Offline Pending   : Now being stopped
    Caution           : Heartbeat resource failure
    Unknown           : Status unknown
    -                 : Status unknown

  Function: Group map display / Monitor resource status display

    o                 : Starting
    x                 : Offline Pending
    -                 : Status unknown

Heartbeat Resource

  Function: Status display

    Normal            : Normal
    Caution           : Failure (Some)
    Error             : Failure (All)
    Unused            : Not used
    Unknown           : Unknown
    -                 : Status unknown

  Function: Heartbeat resource status display

    o                 : Able to communicate
    x                 : Unable to communicate
    -                 : Not used or status unknown

Network Partition Resolution Resource

  Function: Status display

    Normal            : Normal
    Error             : Failure
    Unused            : Not used
    Unknown           : Status unknown
    -                 : Status unknown

  Function: Network partition resolution status display

    o                 : Able to communicate
    x                 : Unable to communicate
    -                 : Not used or status unknown

Group

  Function: Status display

    Online            : Started
    Offline           : Stopped
    Online Pending    : Now being started
    Offline Pending   : Now being stopped
    Error             : Error
    Unknown           : Status unknown
    -                 : Status unknown

  Function: Group map display

    o                 : Started
    e                 : Error
    p                 : Now being started/stopped

Group Resource

  Function: Status display

    Online            : Started
    Offline           : Stopped
    Online Pending    : Now being started
    Offline Pending   : Now being stopped
    Online Failure    : Starting failed
    Offline Failure   : Stopping failed
    Unknown           : Status unknown
    -                 : Status unknown

Monitor Resource

  Function: Status display

    Normal            : Normal
    Caution           : Error (Some)
    Error             : Error (All)
    Not Used          : Not Used
    Unknown           : Status unknown

  Function: Status display (--local) / Monitor resource status display

    Online            : Started
    Offline           : Stopped
    Caution           : Caution
    Suspend           : Stopped temporarily
    Online Pending    : Now being started
    Offline Pending   : Now being stopped
    Online Failure    : Starting failed
    Offline Failure   : Stopping failed
    Not used          : Not used
    Unknown           : Status unknown
    -                 : Status unknown

8.4. Operating the cluster (clpcl command)

The clpcl command operates a cluster.

Command line
clpcl -s [-a] [-h hostname]
clpcl -t [-a] [-h hostname] [-w timeout] [--apito timeout]
clpcl -r [-a] [-h hostname] [-w timeout] [--apito timeout]
clpcl --suspend [--force] [-w timeout] [--apito timeout]
clpcl --resume
Description

This command starts, stops, suspends, or resumes the cluster daemon.

Option
-s

Starts the cluster daemon.

-t

Stops the cluster daemon.

-r

Restarts the cluster daemon.

--suspend

Suspends the entire cluster.

-w timeout

Specifies, in seconds, how long the command waits for the stop or suspend processing of the cluster daemon to complete when the -t, -r, or --suspend option is used.

When a timeout value is not specified, the command waits indefinitely.

When "0 (zero)" is specified, the command does not wait.

When the -w option is not specified, the command waits for (heartbeat time-out x 2) seconds.

--resume

Resumes the entire cluster. The status of the group resources at the time of suspension is kept.

-a

Executes the command on all servers.

-h hostname

Makes a request to run the command to the server specified in hostname. Makes a processing request to the server on which this command runs (local server) if the -h option is omitted.

--force

When used with the --suspend option, forcefully suspends the cluster regardless of the status of all the servers in the cluster.

--apito timeout
Specify the interval (internal communication timeout) to wait for the EXPRESSCLUSTER daemon start or stop in seconds. A value from 1 to 9999 can be specified.
If the --apito option is not specified, waiting for the EXPRESSCLUSTER daemon start or stop is performed according to the value set to the internal communication timeout of the cluster properties.
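
As a brief illustration of the -w and --apito options above, the following examples stop the cluster daemon on all servers while waiting up to 300 seconds for completion, and suspend the cluster with a 120-second internal communication timeout (the timeout values are arbitrary examples):

# clpcl -t -a -w 300
# clpcl --suspend -w 300 --apito 120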
Return Value

0

Success

Other than 0

Failure

Remarks
When this command is executed with the -s or --resume option specified, it returns control when processing starts on the target server.
When this command is executed with the -t or --suspend option specified, it returns control after waiting for the processing to complete.
When this command is executed with the -r option specified, it returns control when the EXPRESSCLUSTER daemon restarts on the target server after stopping once.

Run the clpstat command to display the started or resumed status of the EXPRESSCLUSTER daemon.

Notes

Run this command as the root user.

This command cannot be executed while a group is being started or stopped.

For the name of a server for the -h option, specify the name of a server in the cluster.

When you suspend the cluster, the cluster daemon should be activated in all servers in the cluster. When the --force option is used, the cluster is forcefully suspended even if there is any stopped server in the cluster.

When you start up or resume the cluster, access the servers in the cluster in the order below, and use one of the paths that allowed successful access.

  1. via the IP address on the interconnect LAN

  2. via the IP address on the public LAN

When you resume the cluster, use the clpstat command to confirm that there is no activated server in the cluster.

This command starts, stops, restarts, suspends, or resumes only the EXPRESSCLUSTER daemon. The mirror agent and the like are not started, stopped, restarted, suspended, or resumed together.

Example of a command entry

Example 1: Activating the cluster daemon in the local server

# clpcl -s

Example 2: Activating the cluster daemon in server1 from server0

# clpcl -s -h server1

Start server1 : Command succeeded.

If a server name is specified, the display after running the command should look similar to above.

Start hostname : Execution result

(If the activation fails, cause of the failure is displayed)

Example 3: Activating the cluster daemon in all servers

# clpcl -s -a

Start server0 : Command succeeded.

Start server1 : Performed startup processing to the active cluster daemon.

When all the servers are activated, the display after running the command should look similar to the above.

Start hostname : Execution result

(If the activation fails, cause of the failure is displayed)

Example 4: Stopping the cluster daemon in all servers

# clpcl -t -a

When stopping the cluster daemon on all the servers, the command waits until the EXPRESSCLUSTER daemon has stopped on every server.

If stopping fails, an error message is displayed.

Error Messages

  Message: Log in as root.
  Cause/Solution: Log on as the root user.

  Message: Invalid configuration file. Create valid cluster configuration data.
  Cause/Solution: Create valid cluster configuration data using the Cluster WebUI.

  Message: Invalid option.
  Cause/Solution: Specify a valid option.

  Message: Performed stop processing to the stopped cluster daemon.
  Cause/Solution: The stopping process has been executed on the stopped cluster daemon.

  Message: Performed startup processing to the active cluster daemon.
  Cause/Solution: The startup process has been executed on the activated cluster daemon.

  Message: Could not connect to the server. Check if the cluster daemon is active.
  Cause/Solution: Check if the cluster daemon is activated.

  Message: Could not connect to the data transfer server. Check if the server has started up.
  Cause/Solution: Check if the server is running.

  Message: Failed to obtain the list of nodes. Specify a valid server name in the cluster.
  Cause/Solution: Specify the valid name of a server in the cluster.

  Message: Failed to obtain the daemon name.
  Cause/Solution: Failed to obtain the cluster name.

  Message: Failed to operate the daemon.
  Cause/Solution: Failed to control the cluster.

  Message: Resumed the daemon that is not suspended.
  Cause/Solution: The resume process was performed on the EXPRESSCLUSTER daemon that is not suspended.

  Message: Invalid server status.
  Cause/Solution: Check that the cluster daemon is activated.

  Message: Server is busy. Check if this command is already run.
  Cause/Solution: This command may already have been run.

  Message: Server is not active. Check if the cluster daemon is active.
  Cause/Solution: Check if the cluster daemon is activated.

  Message: There is one or more servers of which cluster daemon is active. If you want to perform resume, check if there is any server whose cluster daemon is active in the cluster.
  Cause/Solution: When you execute the command to resume, check that there is no server in the cluster on which the cluster daemon is activated.

  Message: All servers must be activated. When suspending the server, the cluster daemon need to be active on all servers in the cluster.
  Cause/Solution: When you execute the command to suspend, the cluster daemon must be activated on all servers in the cluster.

  Message: Resume the server because there is one or more suspended servers in the cluster.
  Cause/Solution: Execute the command to resume, because some server(s) in the cluster are in the suspend status.

  Message: Invalid server name. Specify a valid server name in the cluster.
  Cause/Solution: Specify the valid name of a server in the cluster.

  Message: Connection was lost. Check if there is a server where the cluster daemon is stopped in the cluster.
  Cause/Solution: Check if there is any server on which the cluster daemon is stopped in the cluster.

  Message: Invalid parameter.
  Cause/Solution: The value specified as a command parameter may be invalid.

  Message: Internal communication timeout has occurred in the cluster server. If it occurs frequently, set the longer timeout.
  Cause/Solution: A time-out occurred in the EXPRESSCLUSTER internal communication. If the time-out keeps occurring, set the internal communication time-out longer.

  Message: Processing failed on some servers. Check the status of failed servers.
  Cause/Solution: If stopping was executed with all the servers specified, there are one or more servers on which the stopping process has failed. Check the status of the server(s) on which the stopping process failed.

  Message: Internal error. Check if memory or OS resources are sufficient.
  Cause/Solution: Check to see if the memory or OS resources are sufficient.

  Message: There is a server that is not suspended in cluster. Check the status of each server.
  Cause/Solution: There is a server that is not suspended in the cluster. Check the status of each server.

  Message: Suspend %s : Could not suspend in time.
  Cause/Solution: The server failed to complete the suspending process of the cluster daemon within the time-out period. Check the status of the server.

  Message: Stop %s : Could not stop in time.
  Cause/Solution: The server failed to complete the stopping process of the cluster daemon within the time-out period. Check the status of the server.

  Message: Stop %s : Server was suspended. Could not connect to the server. Check if the cluster daemon is active.
  Cause/Solution: The request to stop the cluster daemon was made, but the server was suspended.

  Message: Could not connect to the server. Check if the cluster daemon is active.
  Cause/Solution: The request to stop the cluster daemon was made, but connecting to the server failed. Check the status of the server.

  Message: Suspend %s : Server already suspended. Could not connect to the server. Check if the cluster daemon is active.
  Cause/Solution: The request to suspend the cluster daemon was made, but the server was already suspended.

  Message: Event service is not started.
  Cause/Solution: The event service is not started. Check it.

  Message: Mirror Agent is not started.
  Cause/Solution: The Mirror Agent is not started. Check it.

  Message: Event service and Mirror Agent are not started.
  Cause/Solution: The event service and the Mirror Agent are not started. Check them.

  Message: Some invalid status. Check the status of cluster.
  Cause/Solution: The status of a group may be changing. Try again after the status change of the group is complete.

  Message: Failed to shut down the server.
  Cause/Solution: Failed to shut down or reboot the server.

8.5. Shutting down a specified server (clpdown command)

The clpdown command shuts down a specified server.

Command line

clpdown [-r] [-h hostname]

Description

This command stops the cluster daemon and shuts down a server.

Option
None

Shuts down a server.

-r

Reboots the server.

-h hostname

Makes a processing request to the server specified in hostname. Makes a processing request to the server on which this command runs (local server) if the -h option is omitted.

Return Value

0

Success

Other than 0

Failure

Remarks
This command internally runs one of the following commands after stopping the cluster daemon:

  • Without any option specified: shutdown

  • With the -r option specified: reboot

This command returns control when the group stop processing is completed.

This command shuts down the server even when the EXPRESSCLUSTER daemon is stopped.

Notes

Run this command as the root user.

This command cannot be executed while a group is being started or stopped.

For the name of a server for the -h option, specify the name of a server in the cluster.

Example of a command entry

Example 1: Stopping and shutting down the cluster daemon in the local server

# clpdown

Example 2: Shutting down and rebooting server1 from server0

# clpdown -r -h server1
Error Message

See "Operating the cluster (clpcl command)".

8.6. Shutting down the entire cluster (clpstdn command)

The clpstdn command shuts down the entire cluster.

Command line

clpstdn [-r] [-h hostname]

Description

This command stops the cluster daemon in the entire cluster and shuts down all servers.

Option
None

Executes cluster shutdown.

-r

Executes cluster shutdown reboot.

-h hostname

Makes a processing request to the server specified in hostname. Makes a processing request to the server on which this command runs (local server) if the -h option is omitted.

Return Value

0

Success

Other than 0

Failure

Remarks

This command returns control when the group stop processing is completed.

Notes

Run this command as the root user.

This command cannot be executed while a group is being started or stopped.

For the name of a server for the -h option, specify the name of a server in the cluster.

A server that cannot be accessed from the server that runs the command (for example, a server whose LAN heartbeat resources are all offline) will not shut down.

Example of a command entry

Example 1: Shutting down the cluster

# clpstdn

Example 2: Performing the cluster shutdown reboot

# clpstdn -r
Error Message

See "Operating the cluster (clpcl command)".

8.7. Operating groups (clpgrp command)

The clpgrp command operates groups.

Command line
clpgrp -s [group_name] [-h hostname] [-f] [--apito timeout]
clpgrp -t [group_name] [-h hostname] [-f] [--apito timeout]
clpgrp -m [group_name] [-h hostname] [-a hostname] [--apito timeout]
clpgrp -l [group_name] [-h hostname] [-a hostname] [--apito timeout]
clpgrp -n group_name
Description

This command starts, stops, or moves groups. This command also migrates groups.

Option
-s [group_name]

Starts groups. When you specify the name of a group, only the specified group starts up. If no group name is specified, all groups start up.

-t [group_name]

Stops groups. When you specify the name of a group, only the specified group stops. If no group name is specified, all groups stop.

-m [group_name]

Moves a specified group. If no group name is specified, all the groups are moved. The status of the group resource of the moved group is kept.

-l [group_name]
Migrates the specified group. The group type must always be the migration type.
If no group name is specified, all the active migration groups on the server are migrated.
-h hostname

Makes a processing request to the server specified in hostname. Makes a processing request to the server on which this command runs (local server) if the -h option is omitted.

-a hostname

Defines the server specified by hostname as the destination to which the group will be moved. When the -a option is omitted, the group is moved according to the failover policy.

-f
If you use this option with the -s option against a group activated on a remote server, it will forcefully be started on the server that requested the process.
If this command is used with the -t option, the group will be stopped forcefully.
-n group_name

Displays the name of the server on which the group has been started.

--apito timeout
Specify the interval (internal communication timeout) to wait for the group resource start or stop in seconds. A value from 1 to 9999 can be specified.
If the --apito option is not specified, waiting for the group resource start or stop is performed according to the value set to the internal communication timeout of the cluster properties.
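
For example, to display the name of the server on which a group (here groupA, the group used in the execution example below) has been started:

# clpgrp -n groupA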
Return Value

0

Success

Other than 0

Failure

Notes

Run this command as the root user.

The cluster daemon must be activated on the server that runs this command.

Specify the name of a server in the cluster for the -h and -a options.

Make sure to specify a group name when you use the -m option.

If the group is moved by using the -m option, the move is judged successful at the point when the group start processing is performed on the destination server. Be aware that even if this command completes successfully, the activation of a resource may still fail on the server to which the group was moved. To check whether the group actually started by using the return value, execute the following:
# clpgrp -s [group_name] [-h hostname] -f

In order to move a group belonging to an exclusion rule whose exclusion attribute is set to "Normal" by using the -m option, explicitly specify the destination server with the -a option.

With the -a option omitted, moving such a group fails if a group belonging to an exclusion rule whose exclusion attribute is set to "Normal" is active on every movable server.
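
For example, to move such a group while explicitly specifying the destination server (the group and server names are those used in the execution example below):

# clpgrp -m groupA -a server2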

Example of Execution

The following is an example of status transition when operating the groups.

Example: The cluster has two servers and two groups.

Failover policy of group

groupA server1 -> server2
groupB server2 -> server1
  1. Both groups are stopped.

  2. Run the following command on server1.

    # clpgrp -s groupA
    

    GroupA starts in server1.

  3. Run the following command in server2.

    # clpgrp -s
    

    All groups that are currently stopped but can be started start in server2.

  4. Run the following command in server1

    # clpgrp -m groupA
    

    GroupA moves to server2.

  5. Run the following command in server1

    # clpgrp -t groupA -h server2
    

    GroupA stops.

  6. Run the following command in server1.

    # clpgrp -t
    Command Succeeded.
    

    When the command is executed, there is no group running on server1. So, "Command Succeeded." appears.

  7. Add -f to the command you have run in Step 6 and execute it on server1.

    # clpgrp -t -f
    

    Groups which were started in server2 can be forcefully deactivated from server1.

Error message

Message

Cause/Solution

Log in as root.

Log on as the root user.

Invalid configuration file. Create valid cluster configuration data.

Create valid cluster configuration data using the Cluster WebUI

Invalid option.

Specify a valid option

Could not connect to the server. Check if the cluster daemon is active.

Check if the cluster daemon is activated.

Invalid server status.

Check if the cluster daemon is activated.

Server is not active. Check if the cluster daemon is active.

Check if the cluster daemon is activated.

Invalid server name. Specify a valid server name in the cluster.

Specify the valid name of sever in the cluster.

Connection was lost. Check if there is a server where the cluster daemon is stopped in the cluster.

Check if there is any server on which the cluster daemon has stopped in the cluster.

Invalid parameter.

The value specified as a command parameter may be invalid.

Internal communication timeout has occurred in the cluster server. If it occurs frequently, set a longer timeout.

A time-out occurred in the EXPRESSCLUSTER internal communication.

If time-out keeps occurring, set the internal communication time-out longer.

Invalid server. Specify a server that can run and stop the group, or a server that can be a target when you move the group.

The server that starts/stops the group or to which the group is moved is invalid.

Specify a valid server.

Could not start the group. Try it again after the other server is started, or after the Wait Synchronization time is timed out.

Start up the group after waiting for the remote server to start up, or after waiting for the time-out of the start-up wait time.

No operable group exists in the server.

Check if there is any group that is operable in the server which requested the process.

The group has already been started on the local server.

Check the status of the group by using the Cluster WebUI or the clpstat command.

The group has already been started on the other server. To start/stop the group on the local server, use -f option.

Check the status of the group by using the Cluster WebUI or the clpstat command.

If you want to start up or stop a group which was started in a remote server from the local server, move the group or run the command with the -f option.

The group has already been started on the other server. To move the group, use "-h <hostname>" option.

Check the status of the group by using the Cluster WebUI or clpstat command.

If you want to move a group which was started on a remote server, run the command with the -h hostname option.

The group has already been stopped.

Check the status of the group by using the Cluster WebUI or the clpstat command.

Failed to start one or more group resources. Check the status of group

Check the status of group by using Cluster WebUI or the clpstat command.

Failed to stop one or more group resources. Check the status of group

Check the status of group by using the Cluster WebUI or the clpstat command.

The group is busy. Try again later.

Wait for a while and then try again because the group is now being started up or stopped.

An error occurred on one or more groups. Check the status of group

Check the status of the group by using the Cluster WebUI or the clpstat command.

Invalid group name. Specify a valid group name in the cluster.

Specify the valid name of a group in the cluster.

Server is not in a condition to start group or any critical monitor error is detected.

Check the status of the server by using the Cluster WebUI or clpstat command.

An error is detected in a critical monitor on the server on which an attempt was made to start a group.

There is no appropriate destination for the group. Other servers are not in a condition to start group or any critical monitor error is detected.

Check the status of the server by using the Cluster WebUI or clpstat command.

An error is detected in a critical monitor on all other servers.

The group has been started on the other server. To migrate the group, use "-h <hostname>" option.

Check the status of the group by using the Cluster WebUI or clpstat command.

If you want to move a group which was started on a remote server, run the command with the -h hostname option.

The specified group cannot be migrated.

The specified group cannot be migrated.

The specified group is not vm group.

The specified group is not a virtual machine group.

Migration resource does not exist.

Check the status of the group by using the Cluster WebUI or clpstat command.

The resource to be migrated is not found.

Migration resource is not started.

Check the status of the group by using the Cluster WebUI or clpstat command.

The resource to be migrated is not started.

Some invalid status. Check the status of cluster.

The status is invalid for some reason. Check the status of the cluster.

Internal error. Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

8.8. Collecting logs (clplogcc command)

The clplogcc command collects logs.

Command line

clplogcc [ [-h hostname] | [-n targetnode1 -n targetnode2 ......] ] [-t collect_type] [-r syslog_rotate_number] [-o path] [-l]

Description

This command collects information including logs and the OS information by accessing the data transfer server.

Option
None

Collects logs in the cluster.

-h hostname

Specifies the name of the server to access for collecting cluster node information.

-t collect_type

Specifies a log collection pattern. When this option is omitted, a log collection pattern will be type1. Information on log collection types is provided in "Collecting logs by specifying a type (-t option)".

-r syslog_rotate_number

Specifies how many generations of syslog will be collected. When this option is omitted, only one generation will be collected.

-o path

Specifies the output destination of collector files. When this option is skipped, logs are output under tmp of the installation path.

-n targetnode

Specifies the name of a server that collects logs. With this specification, logs of the specified server, rather than of the entire cluster, will be collected.

-l
Collects logs on the local server without going through the data transfer server.
The -h option and the -n option cannot be specified at the same time.
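As an illustration of how these options can be combined (the server name "server1" and the output path "/tmp/logs" are hypothetical), the following collects type3 logs from only server1 and writes the archive to the specified directory:

# clplogcc -n server1 -t type3 -o /tmp/logs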
Return Value

0

Success

Other than 0

Failure

Remarks

Since log files are compressed by tar.gz, add the xzf option to the tar command to decompress them.
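For example, assuming the collected archive is named server1-log.tar.gz (the actual name depends on the server name), it can be decompressed as follows:

# tar xzf server1-log.tar.gz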

Notes

Run this command as the root user.

For the name of server for the -h option, specify the name of a server in the cluster that allows name resolution.

For the name of server for the -n option, specify the name of server that allows name resolution. If name resolution is not possible, specify the interconnect or public LAN address.

When you run this command, access the servers in the cluster in the order below, and use one of the paths that allowed successful access.

  1. via the IP address on the interconnect LAN

  2. via the IP address on the public LAN

  3. via the IP address whose name was resolved by the server name in the cluster configuration data

If the log files collected on Linux OS (pax format of the tar command's compression) are decompressed with gnutar format of the tar command, a PaxHeaders.X folder is generated. However, it does not affect the operation.

Example of command execution

Example 1: Collecting logs from all servers in the cluster

# clplogcc
Collect Log server1 : Success
Collect Log server2 : Success

Log collection results (server status) of servers on which log collection is executed are displayed.

Process hostname: Result of log collection (server status)

Execution Result

For this command, the following processes are displayed.

Steps in Process

Meaning

Connect

Displayed when the access fails.

Get File size

Displayed when acquiring the file size fails.

Collect Log

Displayed with the file acquisition result.

The following results (server status) are displayed:

Result (server status)

Meaning

Success

Success

Timeout

Time-out occurred.

Busy

The server is busy.

Not Exist File

The file does not exist.

No Free space

No free space on the disk.

Failed

Failure caused by other errors.

Error Message

Message

Cause/Solution

Log in as root.

Log on as the root user.

Invalid configuration file. Create valid cluster configuration data.

Create valid cluster configuration data using the Cluster WebUI.

Invalid option.

Specify a valid option.

Specify a number in a valid range.

Specify a number within a valid range.

Specify a correct number.

Specify a valid number.

Specify correct generation number of syslog.

Specify a valid number for the syslog generation.

Collect type must be specified 'type1' or 'type2' or 'type3' or 'type4' or 'type5' or 'type6'. Incorrect collection type is specified.

Invalid collection type has been specified.

Specify an absolute path as the destination of the files to be collected.

Specify an absolute path for the output destination of collected files.

Specifiable number of servers are the max number of servers that can constitute a cluster.

The number of servers you can specify is within the maximum number of servers for cluster configuration.

Could not connect to the server. Check if the cluster daemon is active.

Check if the cluster daemon is activated.

Failed to obtain the list of nodes. Specify a valid server name in the cluster.

Specify the valid name of a server in the cluster.

Invalid server status.

Check if the cluster daemon is activated.

Server is busy. Check if this command is already run.

This command may have been already activated. Check the status.

Internal error. Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

8.8.1. Collecting logs by specifying a type (-t option)

To collect only the specified types of logs, run the clplogcc command with the -t option.

Specify a type from 1 through 6 for the log collection.

No.  Information to be collected              type1  type2  type3  type4  type5  type6

1    Default collection information             y      y      y      y      n      n
2    syslog                                     y      y      y      n      n      n
3    core file                                  y      y      n      y      n      n
4    OS information                             y      y      y      y      n      n
5    script                                     y      y      n      n      n      n
6    ESMPRO/AC                                  y      y      n      n      n      n
7    HA Logs                                    n      y      n      n      n      n
8    Mirror statistics information              n      n      n      n      y      n
9    Cluster statistics information             n      n      n      n      n      y
10   System resource statistics information     y      y      y      y      n      y

(y=yes, n=no)

Run this command from the command line as follows.

Example: When collecting logs using type2

# clplogcc -t type2

When the -t option is not specified, the log collection type is type1.

  1. Information to be collected by default

    Information on the following is collected by default:

    • Logs of each module in the EXPRESSCLUSTER Server

    • Alert logs

    • Attribute of each module (ls -l) in the EXPRESSCLUSTER Server

      • In bin, lib

      • In cloud

      • In alert/bin, webmgr/bin

      • In ha/jra/bin, ha/sra/bin, ha/jra/lib, ha/sra/lib

      • In drivers/md

      • In drivers/khb

      • In drivers/ka

    • All installed packages (rpm -qa expresscls execution result)

    • EXPRESSCLUSTER version

    • distribution (/etc/*-release)

    • License information

    • Cluster configuration data file

    • Policy file

    • Cloud environment configuration directory

    • Dump of shared memory used by EXPRESSCLUSTER

    • Local node status of EXPRESSCLUSTER (clpstat --local execution results)

    • Process and thread information (ps execution result)

    • PCI device information (lspci execution result)

    • Service information (execution results of the commands such as systemctl, chkconfig, and ls)

    • Output result of kernel parameter (result of running sysctl -a)

    • glibc version (rpm -qi glibc execution result)

    • Kernel loadable module configuration (/etc/modules.conf, /etc/modprobe.conf)

    • File system (/etc/fstab)

    • IPC resource (ipcs execution result)

    • System (uname -a execution result)

    • Network statistics (netstat and ss execution results, IPv4/IPv6)

    • ip (execution results of the ip addr, ip link, ip maddr, ip route, and ip -s l commands)

    • All network interfaces (ethtool execution result)

    • Information collected at an emergency OS shutdown (See "Collecting information when a failure occurs".)

    • libxml2 version (rpm -qi libxml2 execution result)

    • Static host table (/etc/hosts)

    • File system export table (exportfs -v execution result)

    • User resource limitations (ulimit -a execution result)

    • File system exported by kernel-based NFS (/etc/exports)

    • OS locale

    • Terminal session environment value (export execution result)

    • Language locale (/etc/sysconfig/i18n)

    • Time zone (env - date execution result)

    • Work area of EXPRESSCLUSTER server

    • Monitoring options
      This information is collected if options are installed.
    • Dump information collected when a monitor resource timeout occurred

    • Detailed Oracle information collected when the Oracle monitor resource detected an error

  2. syslog

    • syslog (/var/log/messages)

    • syslog (/var/log/syslog)

    • Syslogs for the number of generations specified (/var/log/messages.x)

    • journal log (such as files in /var/run/log/journal/)

  3. core file

    • core file of EXPRESSCLUSTER module
      Stored in /opt/nec/clusterpro/log by the following archive names.

      Alert related:

      altyyyymmdd_x.tar

      The WebManager server related:

      wmyyyymmdd_x.tar

      EXPRESSCLUSTER core related:

      clsyyyymmdd_x.tar

      srayyyymmdd_x.tar

      jrayyyymmdd_x.tar

      yyyymmdd indicates the date when the logs are collected. x is a sequence number.

  4. OS information

    OS information on the following is collected by default:

    • Kernel mode LAN heartbeat, keep alive

      • /proc/khb_moninfo

      • /proc/ka_moninfo

    • /proc/devices

    • /proc/mdstat

    • /proc/modules

    • /proc/mounts

    • /proc/meminfo

    • /proc/cpuinfo

    • /proc/partitions

    • /proc/pci

    • /proc/version

    • /proc/ksyms

    • /proc/net/bond*

    • All files in the /proc/scsi/ directory

    • All files in the /proc/ide/ directory

    • /etc/fstab

    • /etc/rc*.d

    • /etc/syslog.conf

    • /etc/syslog-ng/syslog-ng.conf

    • /etc/snmp/snmpd.conf

    • Kernel ring buffer (dmesg execution result)

    • ifconfig (the result of running ifconfig)

    • iptables (the result of running iptables -L)

    • ipchains (the result of running ipchains -L)

    • df (the result of running df)

    • raw device information (the result of running raw -qa)

    • kernel module load information (the result of running lsmod)

    • host name, domain name information (the result of running hostname, domainname)

    • dmidecode (the result of running dmidecode)

    • LVM device information (the result of running vgdisplay -v)

    • snmpd version information (snmpd -v execution result)

    • Virtual Infrastructure information (the result of running virt-what)

    • blockdev (the result of running blockdev --report)

    When you collect logs, you may find the following message on the console. This does not mean failure. The logs are collected normally.

    hd#: bad special flag: 0x03
    ip_tables: (C) 2000-2002 Netfilter core team
    

    (Where hd# is the name of the IDE device that exists on the server)

  5. Script

    Start/stop script for a group that was created with the Cluster WebUI.

    User-defined scripts stored outside the above directory (/opt/nec/clusterpro/scripts) are not included in the collected logs and must be collected separately.

  6. ESMPRO/AC Related logs

    Files that are collected by running the acupslog command.

  7. HA logs

    • System resource information

    • JVM monitor log

    • System monitor log

  8. Mirror statistics information

    • Mirror statistics information

      • In perf/disk

  9. Cluster statistics information

    • Cluster statistics information

      • In perf/cluster

  10. System resource statistics information

    • System resource statistics information

      • In perf/system

8.8.2. Syslog generations (-r option)

To collect syslogs for the number of generations specified, run the following command.

Example: Collecting logs for the 3 generations

# clplogcc -r 3

The following syslogs are included in the collected logs.

/var/log/messages
/var/log/messages.1
/var/log/messages.2
  • When no option is specified, only /var/log/messages is collected.

  • You can collect logs for 0 to 99 generations.

  • When 0 is specified, all syslogs are collected.

Number of Generation

Number of generations to be acquired

0

All Generations

1

Current

2

Current + Generation 1

3

Current + Generation 1 to 2

:

x

Current + Generation 1 to (x-1)

8.8.3. Output paths of log files (-o option)

  • The log file is named and saved as "server name-log.tar.gz".

  • If an IP address is specified for the -n option, a log file is named and saved as "IP address-log.tar.gz."

  • Since log files are compressed by tar.gz, decompress them by adding the xzf option to the tar command.

When the -o option is not specified:

Logs are output under tmp of the installation path.

# clplogcc
Collect Log hostname : Success
# ls /opt/nec/clusterpro/tmp
hostname-log.tar.gz

When the -o option is specified:

If you run the command as follows, logs are located in the specified /home/log directory.

# clplogcc -o /home/log
Collect Log hostname: Success
# ls /home/log
hostname-log.tar.gz

8.8.4. Specifying log collector server (-n option)

By using the -n option, you can collect logs only from the specified server.

Example: Collecting logs from Server1 and Server3 in the cluster.

# clplogcc -n Server1 -n Server3
  • Specify a server in the same cluster.

  • The number of servers you can specify is within the maximum number of servers in the cluster configuration.

8.8.5. Collecting information when a failure occurs

When the following failure occurs, the information for analyzing the failure is collected.

  • When the cluster daemon that constitutes the cluster terminates abnormally due to an interruption by a signal (core dump), an internal status error, or the like

  • When a group resource activation error or deactivation error occurs

  • When a monitoring error occurs in a monitor resource

Information to be collected is as follows:

  • Cluster information

    • Some module logs in EXPRESSCLUSTER servers

    • Dump files in the shared memory used by EXPRESSCLUSTER

    • Cluster configuration information files

    • Core files of EXPRESSCLUSTER module

  • OS information (/proc/*)

    • /proc/devices

    • /proc/partitions

    • /proc/mdstat

    • /proc/modules

    • /proc/mounts

    • /proc/meminfo

    • /proc/net/bond*

  • Information created by running a command

    • Results of the sysctl -a

    • Results of the ps

    • Results of the top

    • Results of the ipcs

    • Results of the netstat -in

    • Results of the netstat -apn

    • Results of the netstat -gn

    • Results of the netstat -rn

    • Results of the ifconfig

    • Results of the ip addr

    • Results of the ip -s l

    • Results of the df

    • Results of the raw -qa

    • journalctl -e execution result

These are collected by default in the log collection. You do not need to collect them separately.

8.9. Changing, backing up, and checking cluster configuration data (clpcfctrl command)

8.9.1. Creating a cluster and changing the cluster configuration data

The clpcfctrl --push command delivers cluster configuration data to servers.

Command line

clpcfctrl --push -l|-w [-c hostname|IP] [-h hostname|IP] [-p portnumber] [-x directory] [--force] [--nocheck]

Description

This command delivers the configuration data created by the Cluster WebUI to servers.

Option
--push

Specify this option when delivering the data. You cannot omit this option.

-l

Specify this option when using the configuration data saved by the Cluster WebUI on Linux.

-w
Specify this option when using the configuration data saved by the Cluster WebUI on Windows.
You cannot specify -l and -w together.
-c hostname | IP
Specifies a server to access for acquiring a list of servers. Specify a host name or IP address.
When this option is omitted, the address in the configuration data will be used.
-h hostname | IP
Specifies a server to which configuration data is delivered. Specify host name or IP address.
If this option is omitted, configuration data is delivered to all servers.
-p portnumber
Specifies a port number of data transfer port.
When this option is omitted, the default value will be used. In general, it is not necessary to specify this option.
-x directory
Specify this option when delivering configuration data to the specified directory.
This option is used with -l or -w.
When -l is specified, configuration data saved on the file system by the Cluster WebUI on Linux is used.
When -w is specified, configuration data saved by the Cluster WebUI on Windows is used.
--force

Even if there is a server that has not started, the configuration data is delivered forcefully.

--nocheck

When this option is specified, cluster configuration data is not checked. Use this option only when deleting a server.

Return Value

0

Success

Other than 0

Failure

Notes

Run this command as the root user.

When you run this command, access the servers in the order below, and use one of the paths that allowed successful access.

  1. via the IP address on the interconnect LAN

  2. via the IP address on the public LAN

Example of command execution

Example 1: Delivering configuration data that was saved on the file system using the Cluster WebUI on Linux

# clpcfctrl --push -l -x /mnt/config
file delivery to server 10.0.0.11 success.
file delivery to server 10.0.0.12 success.
The upload is completed successfully.(cfmgr:0)
Command succeeded.(code:0)

Example 2: Delivering the configuration data to the server which has been reinstalled.

# clpcfctrl --push -h server2
The upload is completed successfully.(cfmgr:0)
Command succeeded.(code:0)
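Example 3 (illustrative): Forcefully delivering the configuration data in /mnt/config even if some servers have not started (the directory name is hypothetical; output is omitted here)

# clpcfctrl --push -l -x /mnt/config --force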
Error Message

Message

Cause/Solution

Log in as root.

Log on as the root user.

This command is already run.

This command has been already started.

Invalid option.

The option is invalid.
Check the option.
Invalid mode.
Check if --push is specified.

Check if the --push option is specified.

The target directory does not exist.

The specified directory is not found.

Invalid host name.
Server specified by -h option is not included in the configuration data
The server specified with -h is not included in configuration data.
Check if the specified server name or IP address is valid.

Canceled.

Displayed when anything other than "y" is entered for command inquiry.

Failed to initialize the xml library. Check if memory or OS resources are sufficient.

Check if the memory or OS resource is sufficient.

Failed to load the configuration file.
Check if memory or OS resources are sufficient.

Same as above.

Failed to change the configuration file.
Check if memory or OS resources are sufficient.

Same as above.

Failed to load the policy files.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to load the cfctrl policy file.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to get the install path.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to get the cfctrl path.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to get the list of group.

Failed to acquire the list of groups.

Failed to get the list of resource.

Failed to acquire the list of resources.

Failed to initialize the trncl library.
Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Failed to connect to server %s.
Check if the other server is active and then run the command again.
Accessing the server has failed. Check if other server(s) has been started.
Run the command again after the server has started up.
Failed to connect to trnsv.
Check if the other server is active.

Accessing the server has failed. Check that other server has been started up.

Failed to get the collect size.

Getting the size of the collector file has failed.
Check if other server(s) has been started.

Failed to collect the file.

Collecting of the file has failed. Check if other server(s) has been started.

Failed to get the list of node.
Check if the server specified by -c is a member of the cluster.

Check to see if the server specified by -c is a cluster member.

Failed to check server property.
Check if the server name or ip addresses are correct.

Check if the server name and the IP address in the configuration information have been set correctly.

File delivery failed. Failed to deliver the configuration data.
Check if the other server is active and run the command again.
Delivering configuration data has failed. Check if other server(s) has been started.
Run the command again after the server has started up.
Multi file delivery failed. Failed to deliver the configuration data.
Check if the other server is active and run the command again.
Delivering configuration data has failed. Check if other server(s) has been started.
Run the command again after the server has started up.
Failed to deliver the configuration data.
Check if the other server is active and run the command again.
Delivering configuration data has failed. Check if other server(s) has been started.
Run the command again after the server has started up.
The directory "/work" is not found.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to make a working directory.

Check to see if the memory or OS resource is sufficient.

The directory does not exist.

Same as above.

This is not a directory.

Same as above.

The source file does not exist.

Same as above.

The source file is a directory.

Same as above.

The source directory does not exist.

Same as above.

The source file is not a directory.

Same as above.

Failed to change the character code set (EUC to SJIS).

Same as above.

Failed to change the character code set (SJIS to EUC).

Same as above.

Command error.

Same as above.

Failed to initialize the cfmgr library.
Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Failed to get size from the cfmgr library.
Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Failed to allocate memory.

Check to see if the memory or OS resource is sufficient.

Failed to change the directory.

Same as above.

Failed to run the command.

Same as above.

Failed to make a directory.

Same as above.

Failed to remove the directory.

Same as above.

Failed to remove the file.

Same as above.

Failed to open the file.

Same as above.

Failed to read the file.

Same as above.

Failed to write the file.

Same as above.

Internal error.
Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

The upload is completed successfully.
To start the cluster, refer to "How to create a cluster"
in the Installation and Configuration Guide.
The upload is successfully completed.
To start the cluster, refer to "Creating a cluster" in "Creating the cluster configuration data"
The upload is completed successfully.
To apply the changes you made, shutdown and reboot the cluster.

The upload is successfully completed. To apply the changes you made, shut down the cluster, and reboot it.

The upload was stopped.
To upload the cluster configuration data, stop the cluster.

The upload was stopped. To upload the cluster configuration data, stop the cluster.

The upload was stopped.
To upload the cluster configuration data, stop the Mirror Agent.
The upload was stopped.
To upload the cluster configuration data, stop the Mirror Agent.
The upload was stopped.
To upload the cluster configuration data, stop the resources to which you made changes.
The upload was stopped.
To upload the cluster configuration data, stop the resources to which you made changes.
The upload was stopped.
To upload the cluster configuration data, stop the groups to which you made changes.

The upload was stopped. To upload the cluster configuration data, suspend the cluster. To upload, stop the group to which you made changes.

The upload was stopped.
To upload the cluster configuration data, suspend the cluster.

The upload was stopped. To upload the cluster configuration data, suspend the cluster.

The upload is completed successfully.
To apply the changes you made, restart the Alert Sync.
To apply the changes you made, restart the WebManager.
The upload is completed successfully.
To apply the changes you made, restart the Alert Sync.
To apply the changes you made, restart the WebManager service.
Internal error.
Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

The upload is completed successfully.

The upload is successfully completed.

The upload was stopped.
Failed to deliver the configuration data.
Check if the other server is active and run the command again.
The upload was stopped.
Failed to deliver the configuration data.
Check if the other server is active and run the command again.
The upload was stopped.
There is one or more servers that cannot be connected to.
To apply cluster configuration information forcibly, run the command again with "--force" option.

The upload was stopped. There is a server that cannot be connected to. To forcibly upload the cluster configuration information, run the command again with the --force option.

8.9.2. Backing up the Cluster configuration data

The clpcfctrl --pull command backs up cluster configuration data.

Command line

clpcfctrl --pull -l|-w [-h hostname|IP] [-p portnumber] [-x directory]

Description

This command backs up cluster configuration data to be used for the Cluster WebUI.

Option
--pull

Specify this option when performing backup. You cannot omit this option.

-l
Specify this option when backing up configuration data that is used for the Cluster WebUI on Linux.
You cannot specify both -l and -w together.
-w
Specify this option when backing up configuration data that is used for the Cluster WebUI on Windows.
You cannot specify both -l and -w together.
-h hostname | IP
Specifies the source server for backup. Specify a host name or IP address.
When this option is omitted, the configuration data on the server running the command is used.
-p portnumber
Specifies a port number of data transfer port.
When this option is omitted, the default value is used. In general, it is not necessary to specify this option.
-x directory
Backs up the configuration data in the specified directory.
Use this option with either -l or -w.
When -l is specified, configuration data is backed up in the format which can be loaded by the Cluster WebUI on Linux.
When -w is specified, configuration data is saved in the format which can be loaded by the Cluster WebUI on Windows.
Return Value

0

Success

Other than 0

Failure

Notes

Run this command as the root user.

When you run this command, access the servers in the cluster in the order below, and use one of the paths that allowed successful access.

  1. via the IP address on the interconnect LAN

  2. via the IP address on the public LAN

Example of command execution

Example 1: Backing up configuration data to the specified directory so that the data can be loaded by the Cluster WebUI on Linux

# clpcfctrl --pull -l -x /mnt/config
Command succeeded.(code:0)
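Example 2 (illustrative): Backing up the configuration data held by a specific server, in the format for the Cluster WebUI on Windows (the server name "server2" and the directory are hypothetical; output is omitted here)

# clpcfctrl --pull -w -x /mnt/config -h server2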
Error Message

Message

Cause/Solution

Log in as root.

Log on as the root user.

This command is already run.

This command has been already started.

Invalid option.

The option is invalid. Check the option.

Invalid mode.
Check if --push or --pull option is specified.

Check to see if the --pull option is specified.

The target directory does not exist.

The specified directory does not exist.

Canceled.

Displayed when anything other than "y" is entered for command inquiry.

Failed to initialize the xml library.
Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Failed to load the configuration file.
Check if memory or OS resources are sufficient.

Same as above.

Failed to change the configuration file.
Check if memory or OS resources are sufficient.

Same as above.

Failed to load the all.pol file.
Reinstall the RPM

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to load the cfctrl.pol file.
Reinstall the RPM

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to get the install path.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to get the cfctrl path.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM

Failed to initialize the trncl library.
Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Failed to connect to server %1.
Check if the other server is active and then run the command again.

Accessing the server has failed. Check if other server(s) has been started.
Run the command again after the server has started up.

Failed to connect to trnsv.
Check if the other server is active.

Accessing the server has failed. Check if other server(s) has been started.

Failed to get configuration data.
Check if the other server is active.

Acquiring configuration data has failed. Check if other server(s) has been started.

The directory "/work" is not found.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM

Failed to make a working directory.

Check to see if the memory or OS resource is sufficient.

The directory does not exist.

Same as above.

This is not a directory.

Same as above.

The source file does not exist.

Same as above.

The source file is a directory.

Same as above.

The source directory does not exist.

Same as above.

The source file is not a directory.

Same as above.

Failed to change the character code set (EUC to SJIS).

Same as above.

Failed to change the character code set (SJIS to EUC).

Same as above.

Command error.

Same as above.

Failed to initialize the cfmgr library.
Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Failed to get size from the cfmgr library.
Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Failed to allocate memory.

Check to see if the memory or OS resource is sufficient.

Failed to change the directory.

Same as above.

Failed to run the command.

Same as above.

Failed to make a directory.

Same as above.

Failed to remove the directory.

Same as above.

Failed to remove the file.

Same as above.

Failed to open the file.

Same as above.

Failed to read the file.

Same as above.

Failed to write the file.

Same as above.

Internal error.
Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

8.9.3. Adding a resource without stopping the group

The clpcfctrl --dpush command adds a resource without stopping the group.

Command line

clpcfctrl --dpush -l|-w [-c hostname|IP] [-p portnumber] [-x directory] [--force]

Description

This command dynamically adds a resource without stopping the group.

Option
--dpush

Specify this option when dynamically adding a resource. You cannot omit this option.

-l
Specify this option when using the configuration data saved by the Cluster WebUI on Linux.
You cannot specify -l and -w together.
-w
Specify this option when using the configuration data saved by the Cluster WebUI on Windows.
You cannot specify -l and -w together.
-c hostname | IP
Specifies a server to access for acquiring a list of servers. Specify a host name or IP address.
When this option is omitted, the address in the configuration data will be used.
-p portnumber
Specifies a port number of data transfer port.
When this option is omitted, the default value will be used. In general, it is not necessary to specify this option.
-x directory
Specify this option when delivering configuration data to the specified directory.
This option is used with -l or -w.
When -l is specified, configuration data saved on the file system by the Cluster WebUI on Linux is used.
When -w is specified, configuration data saved by the Cluster WebUI on Windows is used.
--force

Even if there is a server that has not started, the configuration data is delivered forcefully.

Return Value

0

Success

Other than 0

Failure

Notes

Run this command as the root user.

When you run this command, access the servers in the order below, and use one of the paths that allowed successful access.

  1. via the IP address on the interconnect LAN

  2. via the IP address on the public LAN

For details on resources that support dynamic resource addition, refer to "How to add a resource without stopping the group" in "The system maintenance information" in the "Maintenance Guide".

To use this command, the internal version of EXPRESSCLUSTER of all the nodes in the cluster must be 3.2.1-1 or later.

While the dynamic resource addition command is running, do not resume the command. Otherwise, the cluster configuration data may become inconsistent, and the cluster may stop or the server may shut down.

If you abort the dynamic resource addition command, the activation status of the resource to be added may become undefined. In this case, run the command again or reboot the cluster manually.

Example of command execution

Example 1: Dynamically adding a resource using configuration data that was saved on the file system using the Cluster WebUI on Linux

# clpcfctrl --dpush -l -x /mnt/config
file delivery to server 10.0.0.11 success.
file delivery to server 10.0.0.12 success.
The upload is completed successfully.(cfmgr:0)
Command succeeded.(code:0)
Error Message

Message

Cause/Solution

Log in as root.

Log on as the root user.

This command is already run.

This command has been already started.

Invalid option.

The option is invalid.
Check the option.
Invalid mode.
Check if --push or --pull option is specified.

Check if the --push option is specified.

The target directory does not exist.

The specified directory is not found.

Invalid host name.
Server specified by -h option is not included in the configuration data.

The server specified with -h is not included in configuration data. Check if the specified server name or IP address is valid.

Canceled.

Displayed when anything other than "y" is entered for command inquiry.

Failed to initialize the xml library.
Check if memory or OS resources are sufficient.

Check if the memory or OS resource is sufficient.

Failed to load the configuration file.
Check if memory or OS resources are sufficient.

Same as above.

Failed to change the configuration file.
Check if memory or OS resources are sufficient.

Same as above.

Failed to load the all.pol file.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to load the cfctrl.pol file.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to get the install path.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to get the cfctrl path.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to get the list of group.

Failed to acquire the list of groups.

Failed to get the list of resource.

Failed to acquire the list of resources.

Failed to initialize the trncl library.
Check if memory or OS resources are sufficient.

Check to see if memory or OS resource is sufficient.

Failed to connect to server %1.
Check if the other server is active and then run the command again.
Accessing the server has failed. Check if other server(s) has been started.
Run the command again after the server has started up.
Failed to connect to trnsv.
Check if the other server is active.

Accessing the server has failed. Check if other server(s) has been started up.

Failed to get the collect size.

Getting the size of the collector file has failed. Check if other server(s) has been started.

Failed to collect the file.

Collecting the file has failed. Check if other server(s) has been started.

Failed to check server property.
Check if the server name or ip addresses are correct.

Check if the server name and the IP address in the configuration information have been set correctly.

File delivery failed.
Failed to deliver the configuration data. Check if the other server is active and run the command again.
Delivering configuration data has failed. Check if other server(s) has been started.
Run the command again after the server has started up.
Multi file delivery failed.
Failed to deliver the configuration data. Check if the other server is active and run the command again.
Delivering configuration data has failed. Check if other server(s) has been started.
Run the command again after the server has started up.
Failed to deliver the configuration data.
Check if the other server is active and run the command again.
Delivering configuration data has failed. Check if other server(s) has been started.
Run the command again after the server has started up.
The directory "work" is not found.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to make a working directory.

Check if the memory or OS resource is sufficient.

The directory does not exist.

Same as above.

This is not a directory.

Same as above.

The source file does not exist.

Same as above.

The source file is a directory.

Same as above.

The source directory does not exist.

Same as above.

The source file is not a directory.

Same as above.

Failed to change the character code set (EUC to SJIS).

Same as above.

Failed to change the character code set (SJIS to EUC).

Same as above.

Command error.

Same as above.

Failed to initialize the cfmgr library.
Check if memory or OS resources are sufficient.

Check if the memory or OS resource is sufficient.

Failed to get size from the cfmgr library.
Check if memory or OS resources are sufficient.

Check if the memory or OS resource is sufficient.

Failed to allocate memory.

Check if the memory or OS resource is sufficient.

Failed to change the directory.

Same as above.

Failed to run the command.

Same as above.

Failed to make a directory.

Same as above.

Failed to remove the directory.

Same as above.

Failed to remove the file.

Same as above.

Failed to open the file.

Same as above.

Failed to read the file.

Same as above.

Failed to write the file.

Same as above.

Internal error.
Check if memory or OS resources are sufficient.

Check if the memory or OS resource is sufficient.

The upload is completed successfully.
To start the cluster, refer to "How to create a cluster"
in the Installation and Configuration Guide.
The upload is successfully completed.
The upload is completed successfully.
To apply the changes you made, shutdown and reboot the cluster.

The upload is successfully completed. To apply the changes you made, shut down the cluster, and reboot it.

The upload was stopped.
To upload the cluster configuration data, stop the cluster.

The upload was stopped. To upload the cluster configuration data, stop the cluster.

The upload was stopped.
To upload the cluster configuration data, stop the Mirror Agent.

The upload was stopped. To upload the cluster configuration data, stop the Mirror Agent.

The upload was stopped.
To upload the cluster configuration data, stop the resources to which you made changes.

The upload was stopped. To upload the cluster configuration data, stop the resource to which you made changes.

The upload was stopped.
To upload the cluster configuration data, stop the groups to which you made changes.

The upload was stopped. To upload the cluster configuration data, suspend the cluster. To upload, stop the group to which you made changes.

The upload was stopped.
To upload the cluster configuration data, suspend the cluster.

The upload was stopped. To upload the cluster configuration data, suspend the cluster.

The upload is completed successfully.
To apply the changes you made, restart the Alert Sync.
To apply the changes you made, restart the WebManager.

The upload is completed successfully. To apply the changes you made, restart the Alert Sync service. To apply the changes you made, restart the WebManager service.

The upload is completed successfully.

The upload is successfully completed.

The upload was stopped.
Failed to deliver the configuration data.
Check if the other server is active and run the command again.

The upload was stopped. Failed to deliver the cluster configuration data. Check if the other server is active and run the command again.

The upload was stopped.
There is one or more servers that cannot be connected to.
To apply cluster configuration information forcibly, run the command again with "--force" option.

The upload was stopped. There is a server that cannot be connected to. To forcibly upload the cluster configuration information, run the command again with the --force option.

The upload was stopped.
Failed to active resource.
Please check the setting of resource.

The upload was stopped. Failed to activate the resource. Check the setting of the resource.

8.9.4. Checking cluster configuration data

The clpcfctrl --compcheck command checks cluster configuration data.

Command line

clpcfctrl --compcheck -l|-w [-c hostname|IP] [-p portnumber] [-x directory]

Description

This command checks whether or not cluster configuration data is correct.

Option
--compcheck
Specify this option when checking configuration data.
You cannot omit this option.
-l
Specify this option when using the configuration data saved by the Cluster WebUI on Linux.
You cannot specify -l and -w together.
-w
Specify this option when using the configuration data saved by the Cluster WebUI on Windows.
You cannot specify -l and -w together.
-x directory
Specify this option when checking the configuration data in the specified directory.
This option is used with -l or -w.
When -l is specified, configuration data saved on the file system by the Cluster WebUI on Linux is used.
When -w is specified, configuration data saved by the Cluster WebUI on Windows is used.
Return Value

0

Success

Other than 0

Failure

Notes

Run this command as the root user.

When you run this command, access the cluster servers in the order below, and use one of the paths that allowed successful access.

  1. Via the IP address on the interconnect LAN

  2. Via the IP address on the public LAN

This command finds the difference between the new and existing configuration data, and checks the resource configuration data in the added configuration data.

Example of command execution

Example 1: Checking configuration data that was saved on the file system using the Cluster WebUI on Linux

# clpcfctrl --compcheck -l -x /mnt/config
The check is completed successfully.(cfmgr:0)
Command succeeded.(code:0)
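Example 2 (illustrative): Checking configuration data while acquiring the server list from a specific server (the server name "server1" and the directory are hypothetical; output is omitted here)

# clpcfctrl --compcheck -l -c server1 -x /mnt/config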
Error Message

Message

Cause/Solution

Log in as root.

Log in as the root user.

This command is already run.

This command has been already started.

Invalid option.

The option is invalid.
Check the option.

The target directory does not exist.

The specified directory is not found.

Canceled.

Displayed when anything other than "y" is entered for command inquiry.

Failed to initialize the xml library.
Check if memory or OS resources are sufficient.

Check if the memory or OS resource is sufficient.

Failed to load the configuration file.
Check if memory or OS resources are sufficient.

Same as above.

Failed to change the configuration file.
Check if memory or OS resources are sufficient.

Same as above.

Failed to load the all.pol file.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to load the cfctrl.pol file.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to get the install path.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to get the cfctrl path.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to get the list of group.

Failed to acquire the list of groups.

Failed to get the list of resource.

Failed to acquire the list of resources.

Failed to initialize the trncl library.
Check if memory or OS resources are sufficient.

Check if the memory or OS resource is sufficient.

Failed to connect to server %1.
Check if the other server is active and then run the command again.
Accessing the server has failed. Check if other server(s) has been started.
Run the command again after the server has started up.
Failed to connect to trnsv.
Check if the other server is active.

Accessing the server has failed. Check that other server has been started up.

Failed to get the collect size.

Getting the size of the collector file has failed. Check if other server(s) has been started.

Failed to collect the file.

Collecting of the file has failed. Check if other server(s) has been started.

Failed to get the list of node.
Check if the server specified by -c is a member of the cluster.

Check to see if the server specified by -c is a cluster member.

Failed to check server property.
Check if the server name or ip addresses are correct.

Check if the server name and the IP address in the configuration information have been set correctly.

File delivery failed.
Failed to deliver the configuration data. Check if the other server is active and run the command again.
Delivering configuration data has failed. Check if other server(s) has been started.
Run the command again after the server has started up.
Multi file delivery failed.
Failed to deliver the configuration data. Check if the other server is active and run the command again.
Delivering configuration data has failed. Check if other server(s) has been started.
Run the command again after the server has started up.
Failed to deliver the configuration data.
Check if the other server is active and run the command again.
Delivering configuration data has failed. Check if other server(s) has been started.
Run the command again after the server has started up.
The directory "work" is not found.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to make a working directory.

Check if the memory or OS resource is sufficient.

The directory does not exist.

Same as above.

This is not a directory.

Same as above.

The source file does not exist.

Same as above.

The source file is a directory.

Same as above.

The source directory does not exist.

Same as above.

The source file is not a directory.

Same as above.

Failed to change the character code set (EUC to SJIS).

Same as above.

Failed to change the character code set (SJIS to EUC).

Same as above.

Command error.

Same as above.

Failed to initialize the cfmgr library.
Check if memory or OS resources are sufficient.

Check if the memory or OS resource is sufficient.

Failed to get size from the cfmgr library.
Check if memory or OS resources are sufficient.

Check if the memory or OS resource is sufficient.

Failed to allocate memory.

Check if the memory or OS resource is sufficient.

Failed to change the directory.

Same as above.

Failed to run the command.

Same as above.

Failed to make a directory.

Same as above.

Failed to remove the directory.

Same as above.

Failed to remove the file.

Same as above.

Failed to open the file.

Same as above.

Failed to read the file.

Same as above.

Failed to write the file.

Same as above.

Internal error.
Check if memory or OS resources are sufficient.

Check if the memory or OS resource is sufficient.

8.10. Adjusting time-out temporarily (clptoratio command)

The clptoratio command extends or displays the current time-out ratio.

Command line
clptoratio -r ratio -t time
clptoratio -i
clptoratio -s
Description

This command displays or temporarily extends the various time-out values of the following on all servers in the cluster.

  • Monitor resource

  • Heartbeat resource (except kernel heartbeat resource)

  • Mirror Agent

  • Mirror driver

  • Alert synchronous service

  • WebManager service

Option
-r ratio
Specifies the time-out ratio. Use an integer of 1 or larger. The maximum time-out ratio is 10,000.
If you specify "1", the modified time-out ratio returns to the original value, as it does with the -i option.
-t time
Specifies the extension period.
You can specify minutes for m, hours for h, and days for d. The maximum period of time is 30 days.
Example: 2m, 3h, 4d
-i

Sets back the modified time-out ratio.

-s

Refers to the current time-out ratio.

Return Value

0

Success

Other than 0

Failure

Remarks

When the cluster is shut down, the time-out ratio you have set becomes ineffective. However, as long as at least one server in the cluster remains running, the time-out ratio and the extension period you have set are maintained.

With the -s option, you can only refer to the current time-out ratio. You cannot see other information such as remaining time of extended period.

You can see the original time-out value by using the status display command.

Heartbeat time-out

# clpstat --cl --detail

Monitor resource time-out

# clpstat --mon monitor resource name --detail
Notes

Run this command as the root user.

Make sure that the cluster daemon is activated in all servers in the cluster.

When you set the time-out ratio, make sure to specify the extension period. However, if you set "1" for the time-out ratio, you cannot specify the extension period.

You cannot specify a combination such as "2m3h" for the extension period.

When the server restarts within the ratio extension period, the time-out ratio is not returned to the original even after the extension period. In this case, run the clptoratio -i command to return it to the original.

Example of a command entry

Example 1: Doubling the time-out ratio for three days

# clptoratio -r 2 -t 3d

Example 2: Setting back the time-out ratio to original

# clptoratio -i

Example 3: Referring to the current time-out ratio

# clptoratio -s
present toratio : 2

The current time-out ratio is set to 2.

Error Message

Message

Cause/Solution

Log in as root.

Log on as the root user.

Invalid configuration file. Create valid cluster configuration data.

Create valid cluster configuration data by using the Cluster WebUI.

Invalid option.

Specify a valid option.

Specify a number in a valid range.

Specify a number within a valid range.

Specify a correct number.

Specify a valid number.

Scale factor must be specified by integer value of 1 or more.

Specify 1 or larger integer for ratio.

Specify scale factor in a range less than the maximum scale factor.

Specify a ratio that is not larger than the maximum ratio.

Set the correct extension period.

Set a valid extension period.

Ex) 2m, 3h, 4d

Set the extension period which does not exceed the maximum ratio.

Set the extension period in a range less than the maximum extension period.

Could not connect to the server. Check if the cluster daemon is active.

Check if the cluster daemon is activated.

Server is not active.
Check if the cluster daemon is active.

Check if the cluster daemon is activated.

Connection was lost.
Check if there is a server where the cluster daemon is stopped in the cluster.

Check if there is any server in the cluster with the cluster daemon stopped.

Invalid parameter.

The value specified as a parameter of the command may be invalid.

Internal communication timeout has occurred in the cluster server.
If it occurs frequently, set the longer timeout.

Time-out has occurred in the internal communication of EXPRESSCLUSTER.
If it occurs frequently, set the internal communication time-out longer.

Processing failed on some servers. Check the status of failed servers.

There are servers that failed in processing. Check the status of the servers in the cluster.
Run the command while all the servers in the cluster are up and running.

Internal error. Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

8.11. Modifying the log level and size (clplogcf command)

The clplogcf command modifies and displays the log level and the log output file size.

Command line

clplogcf -t type -l level -s size

Description

This command modifies the log level and log output file size, or displays the values currently configured.

Option
-t type
Specifies a module type whose settings will be changed.
If both -l and -s are omitted, the information set to the specified module will be displayed. For the types which can be specified, see the list of "Types that can be specified for the -t option".
-l level
Specifies a log level.
You can specify one of the following for a log level.
1, 2, 4, 8, 16, 32
You can see more detailed information as the log level increases.
For the default values for each module type, see the list of "Default log levels and log file sizes".
-s size
Specifies the size of a file for log output.
The unit is byte.
None

Displays the entire configuration information currently set.

Return Value

0

Success

Other than 0

Failure

Remarks

Each type of log output from EXPRESSCLUSTER uses four log files. Therefore, disk space of four times the size specified by -s is required for each type.
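As a rough worked example with illustrative values, setting the pm log file size to 2,000,000 bytes means about 8,000,000 bytes (4 files x 2,000,000 bytes) of disk space should be available for that module's logs:

# clplogcf -t pm -s 2000000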

Notes

Run this command as the root user.

To run this command, the EXPRESSCLUSTER event service must be started.

The changes made are effective only for the server on which this command was run.
The settings revert to the default values when the server restarts.
Example of command execution

Example 1: Modifying the pm log level

# clplogcf -t pm -l 8

Example 2: Displaying the pm log level and log file size

# clplogcf -t pm
TYPE, LEVEL, SIZE
pm, 8, 1000000

Example 3: Displaying the values currently configured

# clplogcf
TYPE, LEVEL, SIZE
trnsv, 4, 1000000
xml, 4, 1000000
logcf, 4, 1000000
Error Message

Message

Cause/Solution

Log in as root.

Log on as the root user.

Invalid option.

The option is invalid. Check the option.

Failed to change the configuration. Check if clpevent is running.

clpevent may not have been started.

Invalid level

The specified level is invalid.

Invalid size

The specified size is invalid.

Failed to load the configuration file. Check if memory or OS resources are sufficient.

Non-clustered server

Failed to initialize the xml library. Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Failed to print the configuration. Check if clpevent is running.

clpevent may not be started yet.

Types that can be specified for the -t option (y=yes, n=no)

Type | Module | Description | The EXPRESSCLUSTER Server | Replicator | Replicator DR
apicl | libclpapicl.so.1.0 | API client library | y | y | y
apisv | libclpapisv.so.1.0 | API server | y | y | y
bmccnf | clpbmccnf | BMC information update command | y | y | y
cl | clpcl | Cluster startup and stop command | y | y | y
cfctrl | clpcfctrl | Cluster generation, cluster information and backup command | y | y | y
cfmgr | libclpcfmgr.so.1.0 | Cluster configuration data operation library | y | y | y
cpufreq | clpcpufreq | CPU Frequency control command | y | y | y
down | clpdown | Server stopping command | y | y | y
grp | clpgrp | Group startup, stop, move, and migration command | y | y | y
rsc | clprsc | Group resource startup and stop command | y | y | y
haltp | clpuserw | Shutdown monitoring | y | y | y
healthchk | clphealthchk | Process health check command | y | y | y
ibsv | clpibsv | Information Base server | y | y | y
lcns | libclplcns.so.1.0 | License library | y | y | y
lcnsc | clplcnsc | License registration command | y | y | y
ledctrl | clpledctrl | Chassis identify control command | y | y | y
logcc | clplogcc | Collect Logs command | y | y | y
logcf | clplogcf | Log level and size modification command | y | y | y
logcmd | clplogcmd | Alert producing command | y | y | y
mail | clpmail | Mail Report | y | y | y
mgtmib | libclpmgtmib.so.1.0 | SNMP coordination library | y | y | y
mm | libclpmm.so.1.0 | External monitoring coordination library | y | y | y
monctrl | clpmonctrl | Monitoring control command | y | y | y
nm | clpnm | Node map management | y | y | y
pm | clppm | Process management | y | y | y
rc/rc_ex | clprc | Group and group resource management | y | y | y
reg | libclpreg.so.1.0 | Reboot count control library | y | y | y
regctrl | clpregctrl | Reboot count control command | y | y | y
rm | clprm | Monitor management | y | y | y
roset | clproset | Disk control | y | y | y
relpath | clprelpath | Process kill command | y | y | y
scrpc | clpscrpc | Script log rotation command | y | y | y
skgxnr | libclpskgxnr.so.1.0 | Oracle Clusterware linkage library | y | y | y
stat | clpstat | Status display command | y | y | y
stdn | clpstdn | Cluster shutdown command | y | y | y
toratio | clptoratio | Time-out ratio modification command | y | y | y
trap | clptrap | SNMP trap command | y | y | y
trncl | libclptrncl.so.1.0 | Transaction library | y | y | y
trnreq | clptrnreq | Inter-cluster processing request command | y | y | y
rexec | clprexec | External monitoring link processing request command | y | y | y
bwctrl | clpbwctrl | Cluster activation synchronization wait processing control command | y | y | y
trnsv | clptrnsv | Transaction server | y | y | y
vxdgc | clpvxdgc | VxVM disk group import/deport command | y | y | y
alert | clpaltinsert | Alert | y | y | y
webmgr | clpwebmc | WebManager server | y | y | y
webalert | clpaltd | Alert synchronization | y | y | y
rd | clprd | Process for smart failover | y | y | y
rdl | libclprdl.so.1.0 | Library for smart failover | y | y | y
disk | clpdisk | Disk resource | y | y | y
disk_fsck | clpdisk | Disk resource | y | y | y
exec | clpexec | Exec resource | y | y | y
fip | clpfip | FIP resource | y | y | y
fipw | clpfipw | FIP monitor resource | y | y | y
nas | clpnas | NAS resource | y | y | y
volmgr | clpvolmgr | Volume manager resource | y | y | y
vip | clpvip | Virtual IP resource | y | y | y
vm | clpvm | VM resource | y | y | y
ddns | clpddns | Dynamic DNS resource | y | y | y
arpw | clparpw | ARP monitor resource | y | y | y
bmcw | clpbmcw | BMC monitor resource | y | y | y
diskw | clpdiskw | Disk monitor resource | y | y | y
ipw | clpipw | IP monitor resource | y | y | y
miiw | clpmiiw | NIC link up/down monitor resource | y | y | y
mtw | clpmtw | Multi target monitor resource | y | y | y
osmw | clposmw | Oracle Clusterware Synchronization Management monitor resource | y | y | y
pidw | clppidw | PID monitor resource | y | y | y
volmgrw | clpvolmgrw | Volume manager monitor resource | y | y | y
userw | clpuserw | User-mode monitor resource | y | y | y
vipw | clpvipw | Virtual IP monitor resource | y | y | y
vmw | clpvmw | VM monitor resource | y | y | y
ddnsw | clpddnsw | Dynamic DNS monitor resource | y | y | y
mrw | clpmrw | Message receive monitor resource | y | y | y
genw | clpgenw | Custom monitor resource | y | y | y
bmchb | clpbmchb | BMC heartbeat | y | y | y
bmccmd | libclpbmc | BMC heartbeat library | y | y | y
snmpmgr | libclpsnmpmgr | SNMP trap reception library | y | y | y
comhb | clpcomhb | COM heartbeat | y | y | y
diskhb | clpdiskhb | Disk heartbeat | y | y | y
lanhb | clplanhb | LAN heartbeat | y | y | y
lankhb | clplankhb | Kernel mode LAN heartbeat | y | y | y
pingnp | libclppingnp.so.1.0 | PING network partition resolution | y | y | y
exping | libclppingnp.so.1.0 | PING network partition resolution | y | y | y
mdadmn | libclpmdadmn.so.1.0 | Mirror disk admin library | n | y | y
mdfunc | libclpmdfunc.so.1.0 | Mirror disk function library | n | y | y
mdagent | clpmdagent | Mirror agent | n | y | y
mdctrl | clpmdctrl | Mirror disk resource operation command | n | y | n
mdinit | clpmdinit | Mirror disk initialization command | n | y | n
mdstat | clpmdstat | Mirror status display command | n | y | n
hdctrl | clphdctrl | Hybrid disk resource operation command | n | n | y
hdinit | clphdinit | Hybrid disk resource initialization command | n | n | y
hdstat | clphdstat | Hybrid status display command | n | n | y
md | clpmd | Mirror disk resource | n | y | n
md_fsck | clpmd | Mirror disk resource | n | y | n
mdw | clpmdw | Mirror disk monitor resource | n | y | n
mdnw | clpmdnw | Mirror disk connect monitor resource | n | y | n
hd | clphd | Hybrid disk resource | n | n | y
hd_fsck | clphd | Hybrid disk resource | n | n | y
hdw | clphdw | Hybrid disk monitor resource | n | n | y
hdnw | clphdnw | Hybrid disk connect monitor resource | n | n | y
oraclew | clp_oraclew | Oracle monitor resource | y | y | y
db2w | clp_db2w | DB2 monitor resource | y | y | y
psqlw | clp_psqlw | PostgreSQL monitor resource | y | y | y
mysqlw | clp_mysqlw | MySQL monitor resource | y | y | y
sybasew | clp_sybasew | Sybase monitor resource | y | y | y
odbcw | clp_odbcw | ODBC monitor resource | y | y | y
sqlserverw | clp_sqlserverw | SQL Server monitor resource | y | y | y
sambaw | clp_sambaw | Samba monitor resource | y | y | y
nfsw | clp_nfsw | NFS monitor resource | y | y | y
httpw | clp_httpw | HTTP monitor resource | y | y | y
ftpw | clp_ftpw | FTP monitor resource | y | y | y
smtpw | clp_smtpw | SMTP monitor resource | y | y | y
pop3w | clp_pop3w | POP3 monitor resource | y | y | y
imap4w | clp_imap4w | IMAP4 monitor resource | y | y | y
tuxw | clp_tuxw | Tuxedo monitor resource | y | y | y
wlsw | clp_wlsw | WebLogic monitor resource | y | y | y
wasw | clp_wasw | WebSphere monitor resource | y | y | y
otxw | clp_otxw | WebOTX monitor resource | y | y | y
jraw | clp_jraw | JVM monitor resource | y | y | y
sraw | clp_sraw | System monitor resource | y | y | y
psrw | clp_psrw | Process resource monitor resource | y | y | y
psw | clppsw | Process name monitor resource | y | y | y
mdperf | clpmdperf | Disk related information | n | y | y
vmctrl | libclpvmctrl.so.1.0 | VMCtrl library | y | y | y
vmwcmd | clpvmwcmd | VMW command | y | y | y
awseip | clpawseip | AWS Elastic IP resource | y | y | y
awsvip | clpawsvip | AWS Virtual IP resource | y | y | y
awsdns | clpawsdns | AWS DNS resource | y | y | y
awseipw | clpawseipw | AWS Elastic IP monitor resource | y | y | y
awsvipw | clpawsvipw | AWS Virtual IP monitor resource | y | y | y
awsazw | clpawsazw | AWS AZ monitor resource | y | y | y
awsdnsw | clpawsdnsw | AWS DNS monitor resource | y | y | y
azurepp | clpazurepp | Azure probe port resource | y | y | y
azuredns | clpazuredns | Azure DNS resource | y | y | y
azureppw | clpazureppw | Azure probe port monitor resource | y | y | y
azurelbw | clpazurelbw | Azure load balance monitor resource | y | y | y
azurednsw | clpazurednsw | Azure DNS monitor resource | y | y | y
gcvip | clpgcvip | Google Cloud Virtual IP resource | y | y | y
gcvipw | clpgcvipw | Google Cloud Virtual IP monitor resource | y | y | y
gclbw | clpgclbw | Google Cloud load balance monitor resource | y | y | y
ocvip | clpocvip | Oracle Cloud Virtual IP resource | y | y | y
ocvipw | clpocvipw | Oracle Cloud Virtual IP monitor resource | y | y | y
oclbw | clpoclbw | Oracle Cloud load balance monitor resource | y | y | y
perfc | clpperfc | Cluster statistics information display command | y | y | y
cfchk | clpcfchk | Cluster configuration information check command | y | y | y

Default log levels and log file sizes

Type | Level | Size (byte)
apicl | 4 | 5000000
apisv | 4 | 5000000
bmccnf | 4 | 1000000
cfmgr | 4 | 1000000
cl | 4 | 1000000
cfctrl | 4 | 1000000
cpufreq | 4 | 1000000
down | 4 | 1000000
grp | 4 | 1000000
rsc | 4 | 1000000
haltp | 4 | 1000000
healthchk | 4 | 1000000
ibsv | 4 | 5000000
lcns | 4 | 1000000
lcnsc | 4 | 1000000
ledctrl | 4 | 1000000
logcc | 4 | 1000000
logcf | 4 | 1000000
logcmd | 4 | 1000000
mail | 4 | 1000000
mgtmib | 4 | 1000000
mm | 4 | 2000000
monctrl | 4 | 1000000
nm | 4 | 2000000
pm | 4 | 1000000
rc | 4 | 5000000
rc_ex | 4 | 5000000
rd | 4 | 1000000
rdl | 4 | 1000000
reg | 4 | 1000000
regctrl | 4 | 1000000
rm | 4 | 5000000
roset | 4 | 1000000
relpath | 4 | 1000000
scrpc | 4 | 1000000
skgxnr | 4 | 1000000
stat | 4 | 1000000
stdn | 4 | 1000000
toratio | 4 | 1000000
trap | 4 | 1000000
trncl | 4 | 2000000
trnreq | 4 | 1000000
rexec | 4 | 1000000
trnsv | 4 | 2000000
vxdgc | 4 | 1000000
alert | 4 | 1000000
webmgr | 4 | 1000000
webalert | 4 | 1000000
disk | 4 | 2000000
disk_fsck | 4 | 1000000
exec | 4 | 1000000
fip | 4 | 1000000
fipw | 4 | 1000000
nas | 4 | 1000000
volmgr | 4 | 1000000
vip | 4 | 1000000
vm | 4 | 1000000
ddns | 4 | 1000000
bwctrl | 4 | 1000000
arpw | 4 | 1000000
bmcw | 4 | 1000000
db2w | 4 | 4000000
diskw | 4 | 1000000
ftpw | 4 | 1000000
httpw | 4 | 1000000
imap4w | 4 | 1000000
ipw | 4 | 1000000
miiw | 4 | 1000000
mtw | 4 | 1000000
mysqlw | 4 | 4000000
nfsw | 4 | 1000000
odbcw | 4 | 4000000
oraclew | 4 | 4000000
osmw | 4 | 1000000
otxw | 4 | 1000000
pidw | 4 | 1000000
pop3w | 4 | 1000000
psqlw | 4 | 4000000
volmgrw | 4 | 1000000
sambaw | 4 | 1000000
smtpw | 4 | 1000000
sqlserverw | 4 | 4000000
sybasew | 4 | 4000000
tuxw | 4 | 1000000
userw | 4 | 1000000
vipw | 4 | 1000000
vmw | 4 | 1000000
ddnsw | 4 | 1000000
mrw | 4 | 1000000
genw | 4 | 1000000
wasw | 4 | 1000000
wlsw | 4 | 1000000
jraw | 4 | 1000000
sraw | 4 | 1000000
psrw | 4 | 1000000
psw | 4 | 1000000
bmchb | 4 | 1000000
bmccmd | 4 | 1000000
snmpmgr | 4 | 1000000
comhb | 4 | 1000000
diskhb | 4 | 1000000
lanhb | 4 | 1000000
lankhb | 4 | 1000000
pingnp | 4 | 1000000
exping | 4 | 1000000
mdadmn | 4 | 10000000
mdfunc | 4 | 10000000
mdagent | 4 | 10000000
mdctrl | 4 | 10000000
mdinit | 4 | 10000000
mdstat | 4 | 10000000
hdctrl | 4 | 10000000
hdinit | 4 | 10000000
hdstat | 4 | 10000000
md | 4 | 10000000
md_fsck | 4 | 10000000
mdw | 4 | 10000000
mdnw | 4 | 10000000
hd | 4 | 10000000
hd_fsck | 4 | 10000000
hdw | 4 | 10000000
hdnw | 4 | 10000000
vmctrl | 4 | 10000000
vmwcmd | 4 | 1000000
liscal [1] | - | 0
clpka [1] | - | 0
clpkhb [1] | - | 0
awseip | 4 | 10000000
awsvip | 4 | 10000000
awsdns | 4 | 10000000
awseipw | 4 | 10000000
awsvipw | 4 | 10000000
awsazw | 4 | 10000000
awsdnsw | 4 | 10000000
azurepp | 4 | 10000000
azuredns | 4 | 10000000
azureppw | 4 | 10000000
azurelbw | 4 | 10000000
azurednsw | 4 | 10000000
gcvip | 4 | 10000000
gcvipw | 4 | 10000000
gclbw | 4 | 10000000
ocvip | 4 | 10000000
ocvipw | 4 | 10000000
oclbw | 4 | 10000000
perfc | 4 | 1000000
cfchk | 4 | 1000000

[1] The output destination of the log is syslog.

* If the module's size is zero, its log will not be produced.

8.12. Managing licenses (clplcnsc command)

The clplcnsc command manages licenses.

Command line
clplcnsc -i [licensefile...]
clplcnsc -l [-a]
clplcnsc -d serialno [-q]
clplcnsc -d -t [-q]
clplcnsc -d -a [-q]
clplcnsc --distribute
clplcnsc --reregister licensefile...
Description

This command registers, displays, and removes the licenses of the product version and trial version of this product.

Option
-i [licensefile...]

When a license file is specified, license information is acquired from the file for registration. You can specify multiple licenses. If nothing is specified, you need to enter license information interactively.

-l [-a]

Displays the registered licenses. The displayed items are as follows:

Item | Explanation
Serial No | Serial number (product version only)
User name | User name (trial version only)
Key | License key
Licensed Number of CPU | The number of licenses (per CPU)
Licensed Number of Computers | The number of licenses (per node)
Start date | Start date of the valid period [2] [3]
End date | End date of the valid period [2] [3]
Status | Status of the license

Status | Explanation
valid | valid
invalid | invalid
unknown | unknown
inactive | Before the valid period [2] [3]
expired | After the valid period [2] [3]

[2] Displayed in the case of the fixed-term license
[3] Displayed in the case of the trial version license

When the -a option is not specified, licenses whose status is "invalid", "unknown", or "expired" are not displayed.

When the -a option is specified, all the licenses are displayed regardless of their status.

-d <param>

param

serialno

Deletes the license with the specified serial number.

-t

Deletes all the registered licenses of the trial version.

-a

Deletes all the registered licenses.

-q

Deletes licenses without displaying a warning message. This option is used with the -d option.

--distribute

License files are delivered to all servers in the cluster. Generally, it is not necessary to run the command with this option.

--reregister licensefile...

Reregisters the fixed term license. Generally, it is not necessary to run the command with this option.

Return Value

0

Normal termination

1

Cancel

2

Normal termination (with licenses not synchronized)

* This means that license synchronization failed in the cluster at the time of license registration.

For the actions to be taken, refer to "Troubleshooting for licensing" in Appendix A "Troubleshooting" in the "Installation and Configuration Guide".

3

Initialization error

5

Invalid option

8

Other internal error

Example of a command entry
  • for registration

    • Registering the license interactively

      # clplcnsc -i
      

    Product Version/Product Version (Fixed Term)

    Select a product division

    Selection of License Version
      1. Product Version
      2. Trial Version
      e. Exit
    Select License Version. [1, 2, or e (default:1)] ...
    

    Enter a serial number

    Enter serial number [ Ex. XXXXXXXX000000] .
    

    Enter a license key

    Enter license key
    [ Ex. XXXXXXXX-XXXXXXXX-XXXXXXXX-XXXXXXXX] ...
    

    Trial Version

    Select a product division

    Selection of License Version
      1. Product Version
      2. Trial Version
      e. Exit
    Select License Version. [1, 2, or e (default:1)] ...
    

    Enter a user name

    Enter user name [ 1 to 63byte ] .
    

    Enter a license key

    Enter license key
    [Ex. XXXXX-XXXXXXXX-XXXXXXXX-XXXXXXXX].
    
    • Specify a license file

      # clplcnsc -i /tmp/cpulcns.key
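
    • Specifying multiple license files at once (a minimal sketch; the second file name is only illustrative)

      # clplcnsc -i /tmp/cpulcns.key /tmp/nodelcns.key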
      
  • for referring to the license

    # clplcnsc -l
    

Product version

< EXPRESSCLUSTER X <PRODUCT> >
Seq... 1
    Key..... A1234567-B1234567-C1234567-D1234567
    Licensed Number of CPU... 2
    Status... valid
Seq... 2
    Serial No..... AAAAAAAA000002
    Key..... E1234567-F1234567-G1234567-H1234567
    Licensed Number of Computers... 1
    Status... valid

Product version (fixed term)

< EXPRESSCLUSTER X <PRODUCT> >

Seq... 1
    Serial No..... AAAAAAAA000001
    Key..... A1234567-B1234567-C1234567-D1234567
    Start date..... 2018/01/01
    End date...... 2018/01/31
    Status........... valid
Seq... 2
    Serial No..... AAAAAAAA000002
    Key..... E1234567-F1234567-G1234567-H1234567
    Status........... inactive

Trial version

< EXPRESSCLUSTER X <TRIAL> >
Seq... 1
    Key..... A1234567-B1234567-C1234567-D1234567
    User name... NEC
    Start date..... 2018/01/01
    End date...... 2018/02/28
    Status........... valid
  • for deleting the license with a specified serial number

    # clplcnsc -d AAAAAAAA000001 -q
    
  • for deleting the trial version licenses

    # clplcnsc -d -t -q
    
  • for deleting all the licenses

    # clplcnsc -d -a
    

Deletion confirmation

Are you sure to remove the license? [y/n] ...
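
  • for redistributing or reregistering license files (a minimal sketch; these options are normally not needed, and the license file name is only illustrative)

    # clplcnsc --distribute
    # clplcnsc --reregister /tmp/fixedterm.key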
Notes

Run this command as the root user.

When you register a license, verify that the data transfer server is started up and a cluster has been generated for license synchronization.

When synchronizing the licenses, access the cluster servers in the order below, and use one of the paths that allowed successful access:

  1. via the IP address on the interconnect LAN

  2. via the IP address on the public LAN

  3. via the IP address whose name was resolved by the server name in the cluster configuration data.

When you delete a license, only the license information on the server where this command was run is deleted. The license information on other servers is not deleted. To delete the license information in the entire cluster, run this command in all servers.

Furthermore, when you use the -d option and the -a option together, all the trial version licenses and product version licenses will be deleted. To delete only the trial version licenses, specify the -t option together with the -d option. If licenses including the product license have been deleted, register the product license again.

When you display a license that includes multiple licenses, the information on all the included licenses is displayed.

If one or more servers in the cluster are not working, it may take time to execute this command.

Error Messages

Message

Cause/Solution

Processed license num
(success : %d error : %d).

The number of processed licenses (success:%d error:%d).
If error is not 0, check if the license information is correct.

Command succeeded.

The command ran successfully.

Command failed.

The command did not run successfully.

Command succeeded.
But the license was not applied to all the servers in the cluster
because there are one or more servers that are not started up.

One or more servers in the cluster are not running.
Perform the cluster generation steps on all servers in the cluster.
Refer to "Installing EXPRESSCLUSTER" in the "Installation and Configuration Guide" for information on cluster generation.

Log in as root.

You are not authorized to run this command. Log on as the root user.

Invalid cluster configuration data. Check the cluster configuration information.

The cluster configuration data is invalid. Check the cluster configuration data by using the Cluster WebUI.

Initialization error. Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

The command is already run.

The command is already running. Check the running status by using a command such as the ps command.

The license is not registered.

The license has not been registered yet.

Could not open the license file. Check if the license file exists on the specified path.

Input/Output cannot be done to the license file. Check to see if the license file exists in the specified path.

Could not read the license file. Check if the license file exists on the specified path.

Same as above.

The field format of the license file is invalid. The license file may be corrupted. Check the destination from where the file is sent.

The field format of the license file is invalid. The license file may be corrupted. Check it with the file sender.

The cluster configuration data may be invalid or not registered.

The cluster configuration data may be invalid or not registered. Check the configuration data.

Failed to terminate the library. Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Failed to register the license. Check if the entered license information is correct.

Check to see if the entered license information is correct.

Failed to open the license. Check if the entered license information is correct.

Same as above.

Failed to remove the license.

License deletion failed. Parameter error may have occurred or resources (memory or OS) may not be sufficient.

This license is already registered.

This license has already been registered.
Check the registered license.

This license is already activated.

This license has already been activated.
Check the registered license.

This license is unavailable for this product.

This license is unavailable for this product.
Check the license.

The maximum number of licenses was reached.

The maximum number of registrable licenses was reached.
Delete the expired licenses.

Internal error. Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

8.13. Locking disk I/O (clproset command)

The clproset command modifies and displays the I/O permission of partition devices.

Command line
clproset -o [-d device_name | -r resource_name -t resource_type | -a | --lockout]
clproset -w [-d device_name | -r resource_name -t resource_type | -a | --lockout]
clproset -s [-d device_name | -r resource_name -t resource_type | -a | --lockout]
Description

This command sets the I/O permission of a shared disk partition device to ReadOnly or ReadWrite.

This command displays the configured I/O permission status of the partition device.

Option
-o

Sets the partition device I/O to ReadOnly. When ReadOnly is set for a partition device, you cannot write data to the partition device.

-w

Sets the partition device I/O to ReadWrite. When ReadWrite is set for a partition device, you can read from and write data to the partition device.

-s

Displays the I/O permission status of the partition device.

-d device_name

Specifies a partition device.

-r resource_name

Specifies a disk resource name.

-t resource_type

Specifies a group resource type. For the current EXPRESSCLUSTER version, always specify "disk" as group resource type.

-a

Runs this command against all disk resources.

--lockout

Runs this command against the device specified as a disk lock device.

Return Value

0

Success

Other than 0

Failure

Notes

Run this command as the root user.

This command can only be used on shared disk resources. It cannot be used for mirror disk resources and hybrid disk resources.

Make sure to specify a group resource type when specifying a resource name.

Example of command execution

Example 1: When changing the I/O of disk resource name, disk1, to RW:

# clproset -w -r disk1 -t disk
/dev/sdb5 : success

Example 2: When acquiring I/O information of all resources:

# clproset -s -a
/dev/sdb5 : rw (disk)
/dev/sdb6 : ro (raw)
/dev/sdb7 : ro (lockout)
Error Messages

Message

Cause/Solution

Log in as root.

Log on as the root user.

Invalid configuration file. Create valid cluster configuration data.

Create valid cluster configuration data by using the Cluster WebUI.

Invalid option.

Specify a valid option.

The -t option must be specified for the -r option.

Be sure to specify the -t option when using the -r option.

Specify 'disk' or 'raw' when specifying a group resource type.

Specify "disk" or "raw" when specifying a group resource type.

Invalid group resource name. Specify a valid group resource name in the cluster.

Specify a valid group resource name.

Invalid device name.

Specify a valid device name.

Command timeout.

The OS may be heavily loaded. Check to see how heavily it is loaded.

Internal error. Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Note

Do not use this command for purposes other than those mentioned in "Verifying operation" in the "Installation and Configuration Guide".
If you run this command while the cluster daemon is activated, the file system may become corrupted.

8.16. Outputting messages (clplogcmd command)

The clplogcmd command registers the specified message with syslog and alert, reports the message by mail, or sends it as an SNMP trap.

Command line

clplogcmd -m message [--syslog] [--alert] [--mail] [--trap] [-i eventID] [-l level]

Note

Generally, it is not necessary to run this command when constructing or operating a cluster; instead, write the command in the exec resource script.

Description

Write this command in the exec resource script and output messages you want to send to the destination.

Options
-m message
Specifies a message. This option cannot be omitted. The maximum size of message is 511 bytes. (When syslog is specified as an output destination, the maximum size is 485 bytes.) The message exceeding the maximum size will not be shown.
You can use alphabetic characters, numbers, and symbols. See the notes below on using symbols in the message.
--syslog
--alert
--mail
--trap

Specify the output destination from syslog, alert, mail, and trap. (Multiple destinations can be specified.)

This parameter can be omitted. The syslog and alert will be the output destinations when the parameter is omitted.

For more information on output destinations, see "Directory structure of EXPRESSCLUSTER" in "The system maintenance information" in the "Maintenance Guide".

-i eventID

Specify event ID. The maximum value of event ID is 10000.

This parameter can be omitted. The default value 1 is set when the parameter is omitted.

-l level

Select a level of alert output from ERR, WARN, or INFO. The icon on the alert logs of the Cluster WebUI is determined according to the level you select here.

This parameter can be omitted. The default value INFO is set when the parameter is omitted. For more information, see the online manual.

Notes on using symbols in the message:

The symbols below must be enclosed in double quotes (" "):

# & ' ( ) ~ | ; : * < > , .

(For example, if you specify "#" in the message, # is produced.)

The symbols below must have a backslash \ in the beginning:

\ ! " & ' ( ) ~ | ; : * < > , .

(For example, if you specify \\ in the message, \ is produced.)

The symbol below must be enclosed in double quotes (" ") and have a backslash \ at the beginning:

`

(For example, if you specify "\`" in the message, ` is produced.)

  • When there is a space in the message, the message must be enclosed in double quotes (" ").

  • The symbol % cannot be used in the message.
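
As a brief illustration of these rules (the message text is only an example), a message that contains spaces or the symbols listed above must be quoted or escaped when written in the script:

clplogcmd -m "logcmd test #1"
clplogcmd -m test\!2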

Return Value

0

Success

Other than 0

Failure

Notes

Run this command as the root user.

When mail is specified as the output destination, you need to make the settings to send mails by using the mail command.

Example of command execution

Example 1: When specifying only message (output destinations are syslog and alert):

When the following is written in the exec resource script, the message is produced in syslog and alert.

clplogcmd -m test1

The following log is the log output in syslog:

Sep 1 14:00:00 server1 expresscls: <type: logcmd><event: 1> test1

Example 2: When specifying message, output destination, event ID, and level (output destination is mail):

When the following is written in the exec resource script, the message is sent to the mail address set in the Cluster Properties. For more information on the mail address settings, see "Alert Service tab" in "Cluster properties" in "2. Parameter details" in this guide.

clplogcmd -m test2 --mail -i 100 -l ERR

The following information is sent to the mail destination:

Message:test2
Type: logcmd
ID: 100
Host: server1
Date: 2004/09/01 14:00:00

Example 3: When specifying a message, output destination, event ID, and level (output destination is trap):

When the following is written in the exec resource script, the message is sent to the SNMP trap destination set in Cluster Properties of the Cluster WebUI. For more information on the SNMP trap destination settings, see "Alert Service tab" in "Cluster properties" in "2. Parameter details" in this guide.

clplogcmd -m test3 --trap -i 200 -l ERR

The following information is sent to the SNMP trap destination:

Trap OID: clusterEventError
Attached data 1: clusterEventMessage = test3
Attached data 2: clusterEventID = 200
Attached data 3: clusterEventDateTime = 2011/08/01 09:00:00
Attached data 4: clusterEventServerName = server1
Attached data 5: clusterEventModuleName = logcmd

8.17. Controlling monitor resources (clpmonctrl command)

The clpmonctrl command controls monitor resources.

Command line
clpmonctrl -s [-h <hostname>] [-m resource_name ...] [-w wait_time]
clpmonctrl -r [-h <hostname>] [-m resource_name ...] [-w wait_time]
clpmonctrl -c [-m resource_name ...]
clpmonctrl -v [-m resource_name ...]
clpmonctrl -e [-h <hostname>] -m resource_name
clpmonctrl -n [-h <hostname>] [-m resource_name]

Note

Because this command controls monitor resources on a single server, the -c and -v options must be run on every server on which you want to control monitoring.
It is recommended to use the Cluster WebUI if you suspend or resume monitor resources on all the servers in a cluster.
Description

This command suspends and resumes monitor resources, displays and resets the times counter of the recovery action, and enables and disables Dummy Failure.

Option
-s

Suspends monitoring

-r

Resumes monitoring

-c

Resets the times counter of the recovery action.

-v

Displays the times counter of the recovery action.

-e

Enables the Dummy Failure. Be sure to specify a monitor resource name with the -m option.

-n

Disables the Dummy Failure. When a monitor resource name is specified with the -m option, the function is disabled only for the resource. When the -m option is omitted, the function is disabled for all monitor resources.

-m resource_name ...
Specifies one or more monitor resources to be controlled.
This option can be omitted. All monitor resources are controlled when the option is omitted.
-w wait_time
Waits for the monitoring control on a per-monitor-resource basis (in seconds).
This option can be omitted. The default value 5 is set when the option is omitted.
-h

Makes a processing request to the server specified in hostname. Makes a processing request to the server on which this command runs (local server) if the -h option is omitted. The -c and -v options cannot specify the server.

Return Value

0

Normal termination

1

Privilege for execution is invalid

2

The option is invalid

3

Initialization error

4

The cluster configuration data is invalid

5

Monitor resource is not registered.

6

The specified monitor resource is invalid

10

The cluster is not activated

11

The cluster daemon is suspended

12

Waiting for cluster synchronization

90

Monitoring control wait time-out

128

Duplicated activation

200

Server connection error

201

Invalid status

202

Invalid server name

255

Other internal error

Example of command execution

Example 1: When suspending all monitor resources:

# clpmonctrl -s
Command succeeded.

Example 2: When resuming all monitor resources:

# clpmonctrl -r
Command succeeded.
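
Example 3: When enabling and then disabling Dummy Failure for a single monitor resource (a minimal sketch; the monitor resource name diskw1 is only illustrative):

# clpmonctrl -e -m diskw1
Command succeeded.
# clpmonctrl -n -m diskw1
Command succeeded.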
Remarks

If you suspend a monitor resource that is already suspended or resume one that is already resumed, this command terminates with an error without changing the status of the monitor resource.

Notes

Run this command as the root user.

Check the status of monitor resources by using the clpstat command or the Cluster WebUI.

Before you run this command, use the clpstat command or Cluster WebUI to verify that the status of monitor resources is either "Online" or "Suspend."

If the recovery action for the monitor resource is set as follows, "Final Action Count", which is displayed by the -v option, indicates the number of times "Execute Script before Final Action" has been executed.

  • Execute Script before Final Action: Enable

  • Final Action: No Operation

Error Messages

Message

Causes/Solution

Command succeeded.

The command ran successfully.

Log in as root.

You are not authorized to run this command. Log on as the root user.

Initialization error. Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Invalid cluster configuration data. Check the cluster configuration information.

The cluster configuration data is invalid. Check the cluster configuration data by using the Cluster WebUI.

Monitor resource is not registered.

The monitor resource is not registered.

Specified monitor resource is not registered. Check the cluster configuration information.

The specified monitor resource is not registered.
Check the cluster configuration data by using the Cluster WebUI.

The cluster has been stopped. Check the active status of the cluster daemon by using the command such as ps command.

The cluster has been stopped.
Check the activation status of the cluster daemon by using a command such as ps command.

The cluster has been suspended. The cluster daemon has been suspended. Check activation status of the cluster daemon by using a command such as the ps command.

The cluster daemon has been suspended. Check the activation status of the cluster daemon by using a command such as ps command.

Waiting for synchronization of the cluster. The cluster is waiting for synchronization. Wait for a while and try again.

Synchronization of the cluster is awaited.
Try again after cluster synchronization is completed.
Monitor %1 was unregistered, ignored. The specified monitor resources %1 is not registered, but continue processing. Check the cluster configuration data.

There is an unregistered monitor resource among the specified monitor resources, but it is ignored and processing continues.
Check the cluster configuration data by using the Cluster WebUI.
%1: Monitor resource name

Monitor %1 denied control permission, ignored. but continue processing.

The specified monitor resources contain the monitor resource which cannot be controlled, but it does not affect the process.
%1: Monitor resource name

This command is already run.

The command is already running. Check the running status by using a command such as ps command.

Internal error. Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Could not connect to the server.
Check if the cluster service is active.

Check if the cluster service has started.

Some invalid status.
Check the status of cluster.

The status is invalid.
Check the status of the cluster.

Invalid server name. Specify a valid server name in the cluster.

Specify the valid server name in the cluster.

Monitor resource types that can be specified for the -m option

Type | Suspending/resuming monitoring | Resetting the times counter of the recovery action | Enabling/disabling Dummy Failure
arpw | n | y | n
bmcw | y | y | y
diskw | y | y | y
fipw | y | y | y
ipw | y | y | y
miiw | y | y | y
mtw | y | y | y
pidw | y | y | y
volmgrw | y | y | y
userw | y | y | n
vipw | n | y | n
vmw | y | y | n
ddnsw | n | y | n
mrw | y | y | n
genw | y | y | y
mdw | y | y | n
mdnw | y | y | n
hdw | y | y | n
hdnw | y | y | n
oraclew | y | y | y
osmw | y | y | y
db2w | y | y | y
psqlw | y | y | y
mysqlw | y | y | y
sybasew | y | y | y
odbcw | y | y | y
sqlserverw | y | y | y
sambaw | y | y | y
nfsw | y | y | y
httpw | y | y | y
ftpw | y | y | y
smtpw | y | y | y
pop3w | y | y | y
imap4w | y | y | y
tuxw | y | y | y
wlsw | y | y | y
wasw | y | y | y
otxw | y | y | y
jraw | y | y | y
sraw | y | y | y
psrw | y | y | y
psw | y | y | y
awsazw | y | y | y
awsdnsw | y | y | y
awseipw | y | y | y
awsvipw | y | y | y
azurednsw | y | y | y
azurelbw | y | y | y
azureppw | y | y | y
gclbw | y | y | y
gcvipw | y | y | y
oclbw | y | y | y
ocvipw | y | y | y

8.18. Controlling group resources (clprsc command)

The clprsc command controls group resources.

Command line
clprsc -s resource_name [-h hostname] [-f] [--apito timeout]
clprsc -t resource_name [-h hostname] [-f] [--apito timeout]
clprsc -n resource_name
clprsc -v resource_name
Description

This command starts and stops group resources.

Option
-s

Starts group resources.

-t

Stops group resources.

-h

Requests processing to the server specified by hostname.

When this option is omitted, the processing request is made to the following server:

  • When the group is offline, the server on which the command is run (local server).

  • When the group is online, the server on which the group is active.

-f

When the group resource is online, all group resources that the specified group resource depends on are started.

When the group resource is offline, all group resources that the specified group resource depends on are stopped.

-n

Displays the name of the server on which the group resource has been started.

--apito timeout

Specify the interval (internal communication timeout) to wait for the group resource start or stop in seconds. A value from 1 to 9999 can be specified.

If the --apito option is not specified, waiting for the group resource start or stop is performed according to the value set to the internal communication timeout of the cluster properties.

-v

Displays the failover counter of the group resource.

Return Value

0

success

Other than 0

failure

Example

Group resource configuration

# clpstat

=========== CLUSTER STATUS ===========
 Cluster : cluster
 <server>
  *server1..................: Online
     lanhb1                 : Normal
     lanhb2                 : Normal
     pingnp1                : Normal
   server2..................: Online
     lanhb1                 : Normal
     lanhb2                 : Normal
     pingnp1                : Normal
 <group>
     ManagementGroup........: Online
      current               : server1
      ManagementIP          : Online
     failover1..............: Online
      current               : server1
      fip1                  : Online
      md1                   : Online
      exec1                 : Online
     failover2..............: Online
      current               : server2
      fip2                  : Online
      md2                   : Online
      exec2                 : Online
 <monitor>
     ipw1                   : Normal
     mdnw1                  : Normal
     mdnw2                  : Normal
     mdw1                   : Normal
     mdw2                   : Normal
======================================

Example 1: When stopping the resource (fip1) of the group (failover 1)

# clprsc -t fip1
Command succeeded.
#clpstat

========== CLUSTER STATUS =============
 <abbreviation>
 <group>
     ManagementGroup........: Online
      current               : server1
      ManagementIP          : Online
     failover1..............:Online
      current               : server1
      fip1                  : Offline
      md1                   : Online
      exec1                 : Online
     failover2..............: Online
      current               : server2
      fip2                  : Online
      md2                   : Online
      exec2                 : Online
  <abbreviation>

Example 2: When starting the resource (fip1) of the group(failover 1)

# clprsc -s fip1
Command succeeded.
# clpstat

========== CLUSTER STATUS ============
 <Abbreviation>
 <group>
      ManagementGroup.......: Online
       current              : server1
       ManagementIP         : Online
      failover1.............: Online
       current              : server1
       fip1                 : Online
       md1                  : Online
        exec1                : Online
      failover2.............: Online
       current              : server2
       fip2                 : Online
       md2                  : Online
       exec2                : Online
 <Abbreviation>
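
Example 3: When starting the resource (fip1) with an internal communication timeout of 60 seconds and then displaying the server on which it has been started (a minimal sketch; the output of -n is assumed to be the server name):

# clprsc -s fip1 --apito 60
Command succeeded.
# clprsc -n fip1
server1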
Notes

Run this command as a user with root privileges.

Check the status of the group resources by the status display or the Cluster WebUI.

When there is an active group resource in the group, the group resources that are offline cannot be started on another server.

Error Messages

Message

Causes/Solution

Log in as root.

Run this command as a user with root privileges.

Invalid cluster configuration data. Check the cluster configuration information.

The cluster configuration data is not correct. Check the cluster configuration data by using the Cluster WebUI.

Invalid option.

Specify a correct option.

Could not connect server. Check if the cluster service is active.

Check if the EXPRESSCLUSTER is activated.

Invalid server status. Check if the cluster service is active.

Check if the EXPRESSCLUSTER is activated.

Server is not active. Check if the cluster service is active.

Check if the EXPRESSCLUSTER is activated.

Invalid server name. Specify a valid server name in the cluster.

Specify a correct server name in the cluster.

Connection was lost. Check if there is a server where the cluster service is stopped in the cluster.

Check if there is any server in the cluster on which the EXPRESSCLUSTER service is stopped.

Internal communication timeout has occurred in the cluster server. If it occurs frequently, set the longer timeout.

Timeout has occurred in internal communication in the EXPRESSCLUSTER.
Set the internal communication timeout longer if this error occurs frequently.

The group resource is busy. Try again later.

Because the group resource is in the process of starting or stopping, wait for a while and try again.

An error occurred on group resource. Check the status of group resource.

Check the group resource status by using the Cluster WebUI or the clpstat command.

Could not start the group resource. Try it again after the other server is started, or after the Wait Synchronization time is timed out.

Wait until the other server starts or the wait time times out, and then start the group resources.

No operable group resource exists in the server.

Check that there is an operable group resource on the specified server.

The group resource has already been started on the local server.

Check the group resource status by using the Cluster WebUI or clpstat command.

The group resource has already been started on the other server.

Check the group resource status by using the Cluster WebUI or clpstat command.
Stop the group to start the group resources on the local server.

The group resource has already been stopped.

Check the group resource status by using the Cluster WebUI or clpstat command.

Failed to start group resource. Check the status of group resource.

Check the group resource status by using the Cluster WebUI or clpstat command.

Failed to stop resource. Check the status of group resource.

Check the group resource status by using the Cluster WebUI or clpstat command.

Depended resource is not offline. Check the status of resource.

Because the status of the depended group resource is not offline, the group resource cannot be stopped. Stop the depended group resource or specify the -f option.

Depending resource is not online. Check the status of resource.

Because the status of the depended group is not online, the group resource cannot be started. Start the depended group resource or specify the -f option.

Invalid group resource name. Specify a valid group resource name in the cluster.

The group resource is not registered.

Server is not in a condition to start resource or any critical monitor error is detected.

Check the group resource status by using the Cluster WebUI or clpstat command.
An error is detected in a critical monitor on the server on which an attempt to start a group resource was made.

Internal error. Check if memory or OS resources are sufficient.

Memory or OS resources may be insufficient. Check them.

8.19. Controlling reboot count (clpregctrl command)

The clpregctrl command controls the reboot count limitation.

Command line
clpregctrl --get
clpregctrl -g
clpregctrl --clear -t type -r registry
clpregctrl -c -t type -r registry

Note

This command must be run on all servers that control the reboot count limitation because the command controls the reboot count limitation on a single server.

Description

This command displays and/or initializes reboot count on a single server.

Option
-g, --get

Displays reboot count information.

-c, --clear

Initializes reboot count.

-t type

Specifies the type to initialize the reboot count. The type that can be specified is rc or rm.

-r registry

Specifies the registry name. The registry name that can be specified is haltcount.

Return Value

0

Normal termination

1

Privilege for execution is invalid

2

Duplicated activation

3

Option is invalid

4

The cluster configuration data is invalid

10 to 17

Internal error

20 to 22

Obtaining reboot count information has failed.

90

Allocating memory has failed.

91

Changing the work directory has failed.

Example of command execution

Display of reboot count information

# clpregctrl -g
******************************

-------------------------
   type       : rc
   registry   : haltcount
   comment    : halt count
   kind       : int
   value      : 0
   default    : 0

-------------------------
   type       : rm
   registry   : haltcount
   comment    : halt count
   kind       : int
   value      : 3
   default    : 0

******************************
Command succeeded.(code:0)

The reboot count is initialized in the following examples.

Run this command on server2 when you want to control the reboot count of server2.

Example1: When initializing the count of reboots caused by group resource error:

# clpregctrl -c -t rc -r haltcount
Command succeeded.(code:0)
#

Example2: When initializing the count of reboots caused by monitor resource error:

# clpregctrl -c -t rm -r haltcount
Command succeeded.(code:0)
#
Remarks

For information on the reboot count limit, see "Attributes common to group resources" "Reboot count limit" in "3. Group resource details" in this guide.

Notes

Run this command as the root user.

Error Messages

Message

Causes/Solution

Command succeeded.

The command ran successfully.

Log in as root.

You are not authorized to run this command. Log on as the root user.

The command is already executed. Check the execution state by using the "ps" command or some other command.

The command is already running. Check the running status by using a command such as ps command.

Invalid option.

Specify a valid option.

Internal error. Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

8.20. Turning off warning light (clplamp command)

The clplamp command turns the warning light off.

Command line

clplamp -h hostname

Description

Turns the warning light of the specified server off.

If audio file playback is configured, the playback is also stopped.

Option
-h hostname

Specify a server whose warning light you want to turn off.

Return Value

0

Normal termination

Other than 0

Abnormal termination

Example

Example 1: When turning off the warning light and audio alert for server1

# clplamp -h server1
Command succeeded
Notes

This command should be performed by the user with root privilege.

8.21. Controlling CPU frequency (clpcpufreq command)

The clpcpufreq command controls CPU frequency.

Command line:
clpcpufreq --high [-h hostname]
clpcpufreq --low [-h hostname]
clpcpufreq -i [-h hostname]
clpcpufreq -s [-h hostname]
Description

This command enables/disables power-saving mode by CPU frequency control.

Option
--high

Sets CPU frequency to highest.

--low

Sets CPU frequency to lowest.

-i

Switches to automatic control by the cluster.

-s

Displays the current CPU frequency level.

  • high: Frequency is highest

  • low: Frequency is lowered and it is in power-saving mode

-h hostname

Requests the server specified in hostname for processing.

If this is omitted, it requests the local server for processing.

Return Value

0

Completed successfully.

Other than 0

Terminated due to a failure.

Example
# clpcpufreq -s
performance
Command succeeded.
# clpcpufreq --high
Command succeeded.
# clpcpufreq --low -h server1
Command succeeded.
# clpcpufreq -i
Command succeeded
Remark

If the driver for CPU frequency control is not loaded, an error occurs.

If the Use CPU Frequency Control checkbox is not selected in the power saving settings in cluster properties, this command results in error.

Notes

This command must be executed by a user with the root privilege.

When you use CPU frequency control, the frequency must be changeable in the BIOS settings, and the CPU must support frequency control by the OS power management function.

Error Messages

Message

Cause/Solution

Log in as root.

Log in as the root user.

This command is already run.

This command has already been run.

Invalid option.

Specify a valid option.

Invalid mode.
Check if --high or --low or -i or -s option is specified.

Check if one of the --high, --low, -i, or -s options is specified.

Failed to initialize the xml library.
Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Failed to load the configuration file.
Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Failed to load the all.pol file.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to load the cpufreq.pol file.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to get the install path.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to get the cpufreq path.
Reinstall the RPM.

Reinstall the EXPRESSCLUSTER Server RPM.

Failed to initialize the apicl library.
Reinstall the RPM.

Check to see if the memory or OS resource is sufficient.

Failed to change CPU frequency settings.
Check the BIOS settings and the OS settings.
Check if the cluster is started.
Check if the setting is configured so that CPU frequency control is used.

Check the BIOS settings and the OS settings.
Check if the cluster service is started.
Check if the setting is configured so that CPU frequency control is used.

Internal error. Check if memory or OS resources are sufficient.

Check if the memory or OS resource is sufficient.

8.22. Controlling chassis identify lamp (clpledctrl command)

The clpledctrl command controls the chassis identify function.

Command line
clpledctrl -d [-h hostname] [-a] [-w timeout]
clpledctrl -i [-h hostname] [-a] [-w timeout]
Description

This command disables/enables chassis identify function.

Option
-d

Disables the chassis identify function.

-i

Enables the chassis identify function.

-h hostname

Specifies the name of the server which enables/disables the chassis identify function. Specify -a to omit this.

-a

All servers in the cluster are the targets.

The -a option can be omitted. If so, specify hostname.

-w timeout

Specifies the timeout value of the command in seconds.

If the -w option is not specified, it waits for 30 seconds.

Return Value

0

Completed successfully.

Other than 0

Terminated due to a failure.

Notes

This command must be executed by a user with the root privilege.

Execute this command in the server operating normally in the same cluster as the one which the target server belongs to.

If you disable the chassis identify function with this command, the disabled state is canceled when the cluster is restarted or when the target server returns to the normal status.

Examples

Example 1: When disabling (i.e. turning off the lamp of) the chassis identify function on server1 (with a command timeout of 60 seconds)

# clpledctrl -d -h server1 -w 60

Example 2: When disabling chassis identify on all servers in the cluster

# clpledctrl -d -a

Example 3: When enabling the chassis identify function on server1 where the function was disabled

# clpledctrl -i -h server1

The result of command execution is displayed as follows:

Detail of the processing Server name: Result (Cause if failed)

Error messages

Message

Cause/solution

Log in as root.

Log in as the root user.

Invalid option.

The command line option is invalid. Specify the correct option.

Could not connect to the data transfer server.
Check if the server has started up.

Check if the server has started up.

Could not connect to all data transfer servers.
Check if the servers have started up.

Check the all servers in the cluster have started up.

Command timeout.

The cause may be heavy load on OS and so on. Check this.

Chassis identify is not setting or active at all servers.

Chassis identify is disabled or not used.

Failed to obtain the list of nodes.
Specify a valid server name in the cluster.

Specify a valid server name in the cluster.

All servers are busy. Check if this command is already run.

This command may be run already. Check it.

Internal error. Check if memory or OS resource is sufficient.

Check if the memory or OS resource is sufficient.

8.23. Processing inter-cluster linkage (clptrnreq command)

The clptrnreq command requests a server to execute a process.

Command line

clptrnreq -t request_code -h IP [-r resource_name] [-s script_file] [-w timeout]

Description

This command issues a request to execute the specified process to a server in another cluster.

Option
-t request_code
Specifies the request code of the process to be executed.
The following request codes can be specified:
GRP_FAILOVER Group failover
EXEC_SCRIPT Execute script
-h IP

Specifies, by IP address, the server to which the request to execute the process is issued. You can specify more than one server by separating the IP addresses with commas.

When you specify group failover for request code, specify the IP addresses of all the servers in the cluster.

-r resource_name

Specifies the resource name which belongs to the target group for the request for process when GRP_FAILOVER is specified for request code.

If GRP_FAILOVER is specified, -r cannot be omitted.

-s script_file

Specifies the file name of the script to be executed (e.g. batch file or executable file) when EXEC_SCRIPT is specified for request code.

The script needs to be created in the work/trnreq directory, under the directory where EXPRESSCLUSTER is installed, on each server specified with -h.

If EXEC_SCRIPT is specified, -s cannot be omitted.

-w timeout

Specifies the timeout value of the command in seconds.

If the -w option is not specified, the command waits 30 seconds.

Return Value

0

Completed successfully.

Other than 0

Terminated due to a failure.

Notes

This command must be executed by a user with the root privilege.

Examples

Example 1: When performing a failover on the group having the exec1 resource of another cluster

# clptrnreq -t GRP_FAILOVER -h 10.0.0.1,10.0.0.2 -r exec1
Command succeeded.

Example 2: When executing the script1.bat script on the server with IP address 10.0.0.1

# clptrnreq -t EXEC_SCRIPT -h 10.0.0.1 -s script1.bat
Command succeeded.
Error messages

Message

Cause/solution

Log in as root.

Log in as the root user.

Invalid option.

The command line option is invalid. Specify the correct option.

Could not connect to the data transfer server.
Check if the server has started up.

Check if the server has started up.

Could not connect to all data transfer servers.
Check if the servers have started up.

Check if all the servers in the cluster have started up.

Command timeout.

The cause may be heavy load on OS and so on. Check this.

All servers are busy. Check if this command is already run.

This command may be run already. Check it.

GRP_FAILOVER %s : Group that specified resource(%s) belongs to is offline.

Failover process is not performed because the group to which the specified resource belongs is not started.

EXEC_SCRIPT %s : Specified script(%s) does not exist.

The specified script does not exist.
Check it.

EXEC_SCRIPT %s : Specified script(%s) is not executable.

The specified script could not be executed.
Check that execution is permitted.

%s %s : This server is not permitted to execute clptrnreq.

The server that executed the command does not have permission. Check that the server is registered to the connection restriction IP list of Cluster WebUI.

GRP_FAILOVER %s : Specified resource(%s) does not exist.

The specified resource does not exist.
Check it.

%s %s : %s failed in execute..

Failed to execute the specified action.

Internal error. Check if memory or OS resource is sufficient.

Check if the memory or OS resource is sufficient.

8.24. Requesting processing to cluster servers (clprexec command)

This command requests a server to execute a process.

Command line
clprexec --failover ( [group_name] | [-r resource_name] ) -h IP [-w timeout] [-p port_number] [-o logfile_path]
clprexec --script script_file -h IP [-p port_number] [-w timeout] [-o logfile_path]
clprexec --notice ( [mrw_name] | [-k category[.keyword]] ) -h IP [-p port_number] [-w timeout] [-o logfile_path]
clprexec --clear ( [mrw_name] | [-k category[.keyword]] ) -h IP [-p port_number] [-w timeout] [-o logfile_path]
Description

This command is an expansion of the existing clptrnreq command and has additional functions such as issuing a processing request (error message) from the external monitor to the EXPRESSCLUSTER server.

Option
--failover

Requests group failover. Specify a group name for group_name.

When not specifying the group name, specify the name of a resource that belongs to the group by using the -r option.

--script script_name

Requests script execution.

For script_name, specify the file name of the script to execute (such as a shell script or executable file).

The script must be created in the work/rexec directory, which is in the directory where EXPRESSCLUSTER is installed, on each server specified using -h.

--notice

Sends an error message to the EXPRESSCLUSTER server.

Specify a message receive monitor resource name for mrw_name.

When not specifying the monitor resource name, specify the category and keyword of the message receive monitor resource by using the -k option.

--clear

Requests changing the status of the message receive monitor resource from "Abnormal" to "Normal."

Specify a message receive monitor resource name for mrw_name.

When not specifying the monitor resource name, specify the category and keyword of the message receive monitor resource by using the -k option.

-h IP Address

Specify the IP addresses of EXPRESSCLUSTER servers that receive the processing request.

Up to 32 IP addresses can be specified by separating them with commas.

* If this option is omitted, the processing request is issued to the local server.

-r resource_name

Specify the name of a resource that belongs to the target group for the processing request when the --failover option is specified.

-k category[.keyword]

For category, specify the category specified for the message receive monitor when the --notice or --clear option is specified.

To specify the keyword of the message receive monitor resource, specify it after category, separated by a dot.

-p port_number

Specify the port number.

For port_number, specify the data transfer port number specified for the server that receives the processing request.

The default value, 29002, is used if this option is omitted.

-o logfile_path

For logfile_path, specify the path of the file to which the detailed log of this command is output.

The file contains the log of one command execution.

* If this option is omitted on a machine where EXPRESSCLUSTER is not installed, the log is output to the standard output.

-w timeout

Specify the command timeout time. The default, 180 seconds, is used if this option is not specified.

A value from 5 to MAXINT can be specified.

Return Value

0

Completed successfully.

Other than 0

Terminated due to a failure.

Notes

To issue an error message by using the clprexec command, a message receive monitor resource must be registered and started, and the action to be taken on the EXPRESSCLUSTER server when an error occurs must be specified for it.

The server that has the IP address specified for the -h option must satisfy the following conditions:

  • EXPRESSCLUSTER X3.0 or later must be installed.

  • EXPRESSCLUSTER must be running.
    (When an option other than --script is used)
  • mrw must be set up and running.
    (When the --notice or --clear option is used)

When using the Controlling connection by using client IP address function, add the IP address of the device on which the clprexec command is executed to the IP Addresses of the Accessible Clients list.

For details of the Controlling connection by using client IP address function, see "WebManager tab" in "Cluster properties" in "2. Parameter details" in this guide.

Examples

Example 1: This example shows how to issue a request to fail over the group failover1 to EXPRESSCLUSTER server 1 (10.0.0.1):

# clprexec --failover failover1 -h 10.0.0.1 -p 29002

Example 2: This example shows how to issue a request to fail over the group to which the group resource (exec1) belongs to EXPRESSCLUSTER server 1 (10.0.0.1):

# clprexec --failover -r exec1 -h 10.0.0.1

Example 3: This example shows how to issue a request to execute the script (script1.sh) on EXPRESSCLUSTER server 1 (10.0.0.1):

# clprexec --script script1.sh -h 10.0.0.1
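
The script must already exist in the work/rexec directory on the target server before Example 3 is run. Assuming the default installation directory /opt/nec/clusterpro (adjust the path to your environment), placing the script might look like this:

# cp script1.sh /opt/nec/clusterpro/work/rexec/
# chmod +x /opt/nec/clusterpro/work/rexec/script1.sh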

Example 4: This example shows how to issue an error message to EXPRESSCLUSTER server 1 (10.0.0.1):

*mrw1 set, category: earthquake, keyword: scale3

  • This example shows how to specify a message receive monitor resource name:

    # clprexec --notice mrw1 -h 10.0.0.1 -w 30 -o /tmp/clprexec/clprexec.log
    
  • This example shows how to specify the category and keyword specified for the message receive monitor resource:

    # clprexec --notice -k earthquake.scale3 -h 10.0.0.1 -w 30 -o /tmp/clprexec/clprexec.log
    

Example 5: This example shows how to issue a request to change the monitor status of mrw1 to EXPRESSCLUSTER server 1 (10.0.0.1):

*mrw1 set, category: earthquake, keyword: scale3

  • This example shows how to specify a message receive monitor resource name:

    # clprexec --clear mrw1 -h 10.0.0.1
    
  • This example shows how to specify the category and keyword specified for the message receive monitor resource:

    # clprexec --clear -k earthquake.scale3 -h 10.0.0.1
    
Error messages

Message

Cause/solution

rexec_ver:%s

-

%s %s : %s succeeded.

-

%s %s : %s will be executed from now.

Check the processing result on the server that received the request.

%s %s : Group Failover did not execute because Group(%s) is offline.

-

%s %s : Group migration did not execute because Group(%s) is offline.

-

Invalid option.

Check the command argument.

Could not connect to the data transfer servers. Check if the servers have started up.

Check whether the specified IP address is correct and whether the server that has the IP address is running.

Command timeout.

Check whether the processing is complete on the server that has the specified IP address.

All servers are busy. Check if this command is already run.

This command might already be running. Check whether this is so.

%s %s : This server is not permitted to execute clprexec.

Check whether the IP address of the server that executes the command is registered in the list of client IP addresses that are not allowed to connect to the Cluster WebUI.

%s %s : Specified monitor resource(%s) does not exist.

Check the command argument.

%s %s : Specified resource(Category:%s, Keyword:%s) does not exist.

Check the command argument.

%s failed in execute.

Check the status of the EXPRESSCLUSTER server that received the request.

8.25. Changing BMC information (clpbmccnf command)

The clpbmccnf command changes the information on BMC user name and password.

Command line

clpbmccnf [-u username] [-p password]

Description

This command changes the user name/password for the LAN access of the baseboard management controller (BMC) which EXPRESSCLUSTER uses for chassis identify or forced stop.

Option
-u username

Specifies the user name for BMC LAN access used by EXPRESSCLUSTER. A user name with root privilege needs to be specified.

The -u option can be omitted. Upon omission, when the -p option is specified, the value currently set for user name is used. If there is no option specified, it is configured interactively.

-p password
Specifies the password for BMC LAN access used by EXPRESSCLUSTER. The -p option can be omitted.
Upon omission, when the -u option is specified, the value currently set for password is used. If there is no option specified, it is configured interactively.
Return Value

0

Completed successfully.

Other than 0

Terminated due to a failure.

Notes

This command must be executed by a user with root privilege.

Execute this command when the cluster is in normal status.

BMC information update by this command is enabled when the cluster is started/resumed next time.

This command does not change the BMC settings. Use a tool attached with the server or other tools in conformity with IPMI standard to check or change the BMC account settings.

Examples

If you have changed the IPMI account password of the BMC on server1 to mypassword, execute the following command on server1:

# clpbmccnf -p mypassword

Alternatively, enter the data interactively as follows:

# clpbmccnf
New user name: <- If there is no change, press Return to skip
New password: *********
Retype new password: *********
Cluster configuration updated successfully.
Error messages

Message

Cause/solution

Log in as root

Log in as the root user.

Invalid option.

The command line option is invalid. Specify the correct option.

Failed to download the cluster configuration data. Check if the cluster status is normal.

Downloading the cluster configuration data failed. Check if the cluster status is normal.

Failed to upload the cluster configuration data. Check if the cluster status is normal.

Uploading the cluster configuration data failed. Check if the cluster status is normal.

Invalid configuration file. Create valid cluster configuration data.

The cluster configuration data is invalid. Check the cluster configuration data by using the Cluster WebUI.

Internal error. Check if memory or OS resources are sufficient.

Check if the memory or OS resource is sufficient.

8.26. Controlling cluster activation synchronization wait processing (clpbwctrl command)

The clpbwctrl command controls the cluster activation synchronization wait processing.

Command line
clpbwctrl -c
clpbwctrl -h
Description

This command skips the cluster activation synchronization wait time that occurs if the server is started when the cluster services for all the servers in the cluster are stopped.

Option
-c, --cancel

Cancels the cluster activation synchronization wait processing.

-h, --help

Displays the usage.

Return Value

0

Completed successfully.

Other than 0

Terminated due to a failure.

Notes

This command must be executed by a user with root privileges.

Examples

This example shows how to cancel the cluster activation synchronization wait processing:

# clpbwctrl -c
Command succeeded.
Error messages

Message

Cause/solution

Log in as root

Log in as a root user.

Invalid option.

The command option is invalid.
Specify the correct option.

Cluster service has already been started.

The cluster has already been started. It is not in startup synchronization waiting status.

The cluster is not waiting for synchronization.

The cluster is not waiting for startup synchronization. The cluster service may have been stopped, or there may be another cause.

Command Timeout.

Command execution timeout.

Internal error.

Internal error occurred.

8.27. Estimating the amount of resource usage (clpprer command)

Estimates the future value from the transition of the resource use amount data listed in the input file, and then outputs the estimate data to a file. Also, the result of threshold judgment on the estimate data can be confirmed.

Command line

clpprer -i inputfile -o outputfile [-p number] [-t number [-l]]

Description

Estimates the future value from the tendency of the given resource use amount data.

Option
-i inputfile

Specifies the resource data for which a future value is to be obtained.

-o outputfile

Specifies the name of the file to which the estimate result is output.

-p number

Specifies the number of estimate data items.

If omitted, 30 items of estimate data are obtained.

-t number

Specifies the threshold to be compared with the estimate data.

-l

Valid only when the threshold is set with the -t option. Judges the status to be an error when the data value is less than the threshold.

Return Value

0

Normal end without threshold judgment

1

Error occurrence

2

As a result of threshold judgment, the input data is determined to have exceeded the threshold.

3

As a result of threshold judgment, the estimate data is determined to have exceeded the threshold.

4

As a result of threshold judgment, the data is determined to have not exceeded the threshold.

5

If the number of data items to be analyzed is less than the recommended number of data items to be analyzed (120), the input data is determined to have exceeded the threshold as a result of threshold judgment.

6

If the number of data items to be analyzed is less than the recommended number of data items to be analyzed (120), the estimate data is determined to have exceeded the threshold as a result of threshold judgment.

7

If the number of data items to be analyzed is less than the recommended number of data items to be analyzed (120), the data is determined to have not exceeded the threshold as a result of threshold judgment.

Notes

This command can be used only when the license for the system monitor resource (System Resource Agent) is registered. (If the license is registered, you do not have to set up the system monitor resource when configuring a cluster.)

The maximum number of input data items of the resource data file specified with the -i option is 500. A certain number of input data items are required to estimate the amount of resource usage. However, if the number of input data items is large, it takes a considerable amount of time to perform the analysis. So, it is recommended that the number of input data items be restricted to about 120. Moreover, the maximum number of output data items that can be specified in option -p is 500.

If the time data for the input file is not arranged in ascending order, the estimate will not be appropriate. In the input file, therefore, set the time data arranged in ascending order.

Input file

The input file format is explained below. Prepare an input file which contains the resource usage data for which to obtain an estimate, in the following format.

The input file format is CSV. Each data entry is written on one line as a date and time followed by a numeric value, separated by a comma.

The date and time format is YYYY/MM/DD hh:mm:ss.

File example
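
An illustrative input file in this format (the values are samples only):

2012/06/14 10:00:00,10.0
2012/06/14 10:01:00,10.5
2012/06/14 10:02:00,11.0
2012/06/14 10:03:00,11.5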

Examples

The estimation of the future value is explained using a simple example.

When an error is detected in the input data:

If the latest value of the input data exceeds the threshold, an error is assumed and a return value of 2 is returned. If the number of input data items is less than the recommended value (=120), a return value of 5 is returned.

Figure: Error detection in the input data

When an error is detected in the estimate data:

If the estimate data exceeds the threshold, an error is assumed and a return value of 3 is returned. If the number of input data items is less than the recommended value (=120), a return value of 6 is returned.

Figure: Error detection in the estimate data

When no threshold error is detected:

If neither the input data nor the estimate data exceeds the threshold, a return value of 4 is returned. If the number of input data items is less than the recommended value (=120), a return value of 7 is returned.

Figure: When no threshold error is detected

When the -l option is used:

If the -l option is used, an error is assumed when the data is less than the threshold.

Figure: Use of the -l option

Examples

Prepare a file which contains data in the specified format, and then execute the clpprer command. The estimate result can be confirmed as the output file.

Input file: test.csv

2012/06/14 10:00:00,10.0
2012/06/14 10:01:00,10.5
2012/06/14 10:02:00,11.0
# clpprer -i test.csv -o result.csv

Output result: result.csv

2012/06/14 10:03:00,11.5
2012/06/14 10:04:00,12.0
2012/06/14 10:05:00,12.5
2012/06/14 10:06:00,13.0
2012/06/14 10:07:00,13.5

:

Also, by specifying a threshold as an option, you can confirm the threshold judgment result for the estimate at the command prompt.

# clpprer -i test.csv -o result.csv -t 12.5

Execution result

Detect over threshold. datetime = 2012/06/14 10:06:00, data = 13.00, threshold = 12.5
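
Similarly, adding the -l option reverses the judgment so that values below the threshold are treated as errors; an illustrative invocation (the threshold value is a sample):

# clpprer -i test.csv -o result.csv -t 5.0 -l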

Error messages

Message

Causes/Solution

Normal state.

As a result of threshold judgment, no data exceeding the threshold is detected.

Detect over threshold. datetime = %s, data = %s, threshold = %s

As a result of threshold judgment, data exceeding the threshold is detected.

Detect under threshold. datetime = %s, data = %s, threshold = %s

As a result of threshold judgment with the -l option, data less than the threshold is detected.

License is nothing.

A valid license for the System Resource Agent is not registered. Check the license.

Inputfile is none.

The specified input data file does not exist.

Inputfile length error.

The path for the specified input data file is too long. Specify no more than 1023 bytes.

Output directory does not exist.

The directory specified with the output file does not exist. Check whether the specified directory exists.

Outputfile length error.

The path for the specified output file is too long. Specify no more than 1023 bytes.

Invalid number of -p.

The value specified in the -p option is invalid.

Invalid number of -t.

The value specified in the -t option is invalid.

Not analyze under threshold(not set -t) .

The -t option is not specified. When using the -l option, also specify the -t option.

File open error [%s]. errno = %s

The file failed to open. The amount of memory or OS resources may be insufficient. Check for any insufficiency.

Inputfile is invalid. cols = %s

The number of input data items is not correct. Set the number of input data items to 2 or more.

Inputfile is invalid. rows = %s

The input data format is incorrect. Each line must consist of two comma-separated fields (date/time and value).

Invalid date format. [expected YYYY/MM/DD HH:MM:SS]

The date of the input data is not in the correct format. Check the data.

Invalid date format. Not sorted in ascending order.

Input data is not arranged in ascending order of date and time. Check the data.

File read error.

An invalid value is set in the input data. Check the data.

Too large number of data [%s]. Max number of data is %s.

The number of input data items exceeds the maximum value (500). Reduce the number of data items.

Input number of data is smaller than recommendable number.

The number of input data items is less than the recommended number of data items to be analyzed (120).
* The data is analyzed even if the number of data items is less than the recommended number.

Internal error.

An internal error has occurred.

8.28. Checking the process health (clphealthchk command)

Checks the process health.

Command line

clphealthchk [ -t pm | -t rc | -t rm | -t nm | -h ]

Note

This command must be run on the server whose process health is to be checked because this command checks the process health of a single server.

Description

This command checks the process health of a single server.

Option
None

Checks the health of all of pm, rc, rm, and nm.

-t <process>

process

pm

Checks the health of pm.

rc

Checks the health of rc.

rm

Checks the health of rm.

nm

Checks the health of nm.

-h

Displays the usage.

Return Value

0

Normal termination

1

Privilege for execution is invalid

2

Duplicated activation

3

Initialization error

4

The option is invalid

10

The process stall monitoring function has not been enabled.

11

The cluster is not activated (waiting for the cluster to start, or the cluster has been stopped).

12

The cluster daemon is suspended

100

There is a process whose health information has not been updated within a certain period.

If the -t option is specified, the health information of the specified process is not updated within a certain period.

255

Other internal error

Examples

Example 1: When the processes are healthy

# clphealthchk
pm OK
rc OK
rm OK
nm OK

Example 2: When clprc is stalled

# clphealthchk
pm OK
rc NG
rm OK
nm OK
# clphealthchk -t rc
rc NG

Example 3: When the cluster has been stopped

# clphealthchk
The cluster has been stopped
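
The return value can also be used from a shell script to automate the check; a minimal sketch, assuming logging via logger is an acceptable alert action (the script itself is illustrative and not part of EXPRESSCLUSTER):

# cat /root/healthcheck.sh
#!/bin/sh
# Illustrative wrapper: run the overall health check and log a possible stall.
clphealthchk > /dev/null 2>&1
ret=$?
if [ "$ret" -eq 100 ]; then
    # 100: some process has not updated its health information within a certain period
    logger "clphealthchk: possible process stall (return value 100)"
fi
exit $ret
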
Remarks

If the cluster has been stopped or suspended, the process is also stopped.

Notes

Run this command as the root user.

Error Messages

Message

Cause/Solution

Log in as root.

You are not authorized to run this command. Log on as the root user.

Initialization error. Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

Invalid option.

Specify a valid option.

The function of process stall monitor is disabled.

The process stall monitoring function has not been enabled.

The cluster has been stopped.

The cluster has been stopped.

The cluster has been suspended.

The cluster has been suspended.

This command is already run.

The command has already been started. Check the running status by using a command such as the ps command.

Internal error. Check if memory or OS resources are sufficient.

Check to see if the memory or OS resource is sufficient.

8.29. Controlling the rest point of DB2 (clpdb2still command)

Controls the rest point of DB2.

Command line
clpdb2still -d databasename -u username -s
clpdb2still -d databasename -u username -r
Description

Controls the securing/release of the rest point of DB2.

Option
-d databasename

Specifies the name of the target database for the rest point control.

-u username

Specifies the name of a user who executes the rest point control.

-s

Secures the rest point.

-r

Releases the rest point.

Return Value

0

Normal completion

2

Invalid command option

5

Failed to secure the rest point.

6

Failed to release the rest point.

Examples
# clpdb2still -d sample -u db2inst1 -s

Database Connection Information

Database server = DB2/LINUXX8664 11.1.0
SQL authorization ID = DB2INST1
Local database alias = SAMPLE

DB20000I The SET WRITE command completed successfully.
DB20000I The SQL command completed successfully.
DB20000I The SQL DISCONNECT command completed successfully.
# clpdb2still -d sample -u db2inst1 -r

Database Connection Information

Database server = DB2/LINUXX8664 11.1.0
SQL authorization ID = DB2INST1
Local database alias = SAMPLE

DB20000I The SET WRITE command completed successfully.
DB20000I The SQL command completed successfully.
DB20000I The SQL DISCONNECT command completed successfully.
Notes

Run this command as the root user.

A user specified in the -u option needs to have the privilege to run the SET WRITE command of DB2.

Error Messages

Message

Cause/Solution

invalid database name

The database name is invalid.
Check the database name.

invalid user name

The user name is invalid.
Check the user name.

missing database name

No database name is specified.
Specify a database name.

missing user name

No user name is specified.
Specify a user name.

missing operation '-s' or '-r'

Neither the securing nor release of the rest point is specified.
Specify either the securing or release of the rest point.

suspend command return code = n

Failed to secure the rest point.
If an error message from the su command is output, check the user name and password. If an error message from the db2 command is output, take appropriate action based on that message.

resume command return code = n

Failed to release the rest point.
If an error message from the su command is output, check the user name and password. If an error message from the db2 command is output, take appropriate action based on that message.

8.30. Controlling the rest point of MySQL (clpmysqlstill command)

Controls the rest point of MySQL.

Command line
clpmysqlstill -d databasename [-u username] -s
clpmysqlstill -d databasename -r
Description

Controls the securing/release of the rest point of MySQL.

Option
-d databasename

Specifies the name of the target database for rest point control.

-u username

Specifies the name of the database user who executes rest point control. This option can be specified only when the -s option is specified. If it is omitted, root is automatically set as a default user.

-s

Secures the rest point.

-r

Releases the rest point.

Return Value

0

Normal completion

2

Invalid command option

3

DB connection error

4

Authentication error for the user specified in the -u option

5

Failed to secure the rest point.

6

Failed to release the rest point.

99

Internal error

Examples
# clpmysqlstill -d mysql -u root -s
Command succeeded.
# clpmysqlstill -d mysql -r
Command succeeded.
Notes

Run this command as the root user.

Add the directory that contains libmysqlclient.so, the MySQL client library, to the LD_LIBRARY_PATH environment variable.

Configure the password of the user specified in the -u option in advance, in the stillpoint.conf file in the etc directory under the EXPRESSCLUSTER installation directory. Use the following format for the password, and put a colon ":" at the end of the line.

"User name:Password:"

Example of file path: /opt/nec/clusterpro/etc/stillpoint.conf

Example of password setting: root:password:

A user specified in the -u option needs to have privileges to execute FLUSH TABLES WITH READ LOCK statement of MySQL.
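
As a hypothetical example (the user name and host are illustrative, and the exact privilege requirements should be confirmed for your MySQL version; RELOAD typically covers FLUSH TABLES WITH READ LOCK), the privilege could be granted as follows:

# mysql -u root -p -e "GRANT RELOAD ON *.* TO 'clpuser'@'localhost';"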

If the rest point has been secured by running the command for securing the rest point with the -s option, the control is not returned while the command remains resident. By running the command for releasing the rest point with the -r option at a different process, the resident command for securing the rest point finishes and the control is returned.

Error Messages

Message

Cause/Solution

Invalid option.

Invalid command option.
Check the command option.

Cannot connect to database.

Failed to connect to the database.
Check the name and the status of the database.

Username or password is not correct.

User authentication failed.
Check your user name and password.

Suspend database failed.

Failed to secure the rest point.
Check the user privileges and the database settings.

Resume database failed.

Failed to release the rest point.
Check the user privileges and the database settings.

Internal error.

An internal error has occurred.

8.31. Controlling the rest point of Oracle (clporclstill command)

Controls the rest point of Oracle.

Command line
clporclstill -d connectionstring [-u username] -s
clporclstill -d connectionstring -r
Description

Controls the securing/release of the rest point of Oracle.

Option
-d connectionstring

Specifies the connection string for the target database for rest point control.

-u username

Specifies the name of a database user who executes rest point control. This option can be specified only when the -s option is specified. If it is omitted, OS authentication is used.

-s

Secures the rest point.

-r

Releases the rest point.

Return Value

0

Normal completion

2

Invalid command option

3

DB connection error

4

User authentication error

5

Failed to secure the rest point.

6

Failed to release the rest point.

99

Internal error

Examples
# clporclstill -d orcl -u oracle -s
Command succeeded.
# clporclstill -d orcl -r
Command succeeded.
Notes

Run this command as the root user.

Add the directory that contains libclntsh.so, the Oracle client library, to the LD_LIBRARY_PATH environment variable.

Additionally, set the Oracle home directory in the ORACLE_HOME environment variable.
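
For example, these environment variables might be set as follows before running the command (the Oracle home path is illustrative; adjust it to your installation):

# export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
# export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH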

If OS authentication is used without specifying the -u option, a user who runs this command needs to belong to the dba group, in order to gain administrative privileges for Oracle.

Configure the password of the user specified in the -u option in advance, in the stillpoint.conf file in the etc directory under the EXPRESSCLUSTER installation directory. Use the following format for the password, and put a colon ":" at the end of the line.

"User name:Password:"

Example of file path: /opt/nec/clusterpro/etc/stillpoint.conf

Example of password setting: root:password:

A user specified in the -u option needs to have administrative privileges for Oracle.

If the rest point has been secured by running the command for securing the rest point with the -s option, the control is not returned while the command remains resident. By running the command for releasing the rest point with the -r option at a different process, the resident command for securing the rest point finishes and the control is returned.

Configure Oracle in the ARCHIVELOG mode in advance to run this command.

If an Oracle data file is acquired while this command is used to secure the rest point, the backup mode will be set for the data file. To restore and use the data file, disable the backup mode on Oracle to restore the data file.

Error Messages

Message

Cause/Solution

Invalid option.

Invalid command option.
Check the command option.

Cannot connect to database.

Failed to connect to the database.
Check the name and the status of the database.

Username or password is not correct.

User authentication failed.
Check your user name and password.

Suspend database failed.

Failed to secure the rest point.
Check the user privileges and the database settings.

Resume database failed.

Failed to release the rest point.
Check the user privileges and the database settings.

Internal error.

An internal error has occurred.

8.32. Controlling the rest point of PostgreSQL (clppsqlstill command)

Controls the rest point of PostgreSQL.

Command line
clppsqlstill -d databasename -u username -s
clppsqlstill -d databasename -r
Description

Controls the securing/release of the rest point of PostgreSQL.

Option
-d databasename

Specifies the name of the target database for rest point control.

-u username

Specifies the name of the database user who executes rest point control.

-s

Secures the rest point.

-r

Releases the rest point.

Return Value

0

Normal completion

2

Invalid command option

3

DB connection error

4

Authentication error for the user specified in the -u option

5

Failed to secure the rest point.

6

Failed to release the rest point.

99

Internal error

Examples
# clppsqlstill -d postgres -u postgres -s
Command succeeded.
# clppsqlstill -d postgres -r
Command succeeded.
Notes

Run this command as the root user.

Add the directory that contains libpq.so, the PostgreSQL client library, to the LD_LIBRARY_PATH environment variable.

If a port number other than the default (5432) is used to connect to PostgreSQL, set that port number in the PQPORT environment variable.

Configure the password of the user specified in the -u option in advance, in the stillpoint.conf file in the etc directory under the EXPRESSCLUSTER installation directory. Use the following format for the password, and put a colon ":" at the end of the line.

"User name:Password:"

Example of file path: /opt/nec/clusterpro/etc/stillpoint.conf

Example of password setting: root:password:

A user specified in the -u option needs to have superuser privileges for PostgreSQL.

Enable the WAL archive of PostgreSQL in advance to run this command.
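
As a minimal sketch, WAL archiving can be enabled with settings such as the following in postgresql.conf (the archive destination is illustrative; see the PostgreSQL documentation for details):

wal_level = replica
archive_mode = on
archive_command = 'cp %p /var/lib/pgsql/archive/%f'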

If the rest point has been secured by running the command for securing the rest point with the -s option, the control is not returned while the command remains resident. By running the command for releasing the rest point with the -r option at a different process, the resident command for securing the rest point finishes and the control is returned.

Error Messages

Message

Cause/Solution

Invalid option.

Invalid command option.
Check the command option.

Cannot connect to database.

Failed to connect to the database.
Check the name and the status of the database.

Username or password is not correct.

User authentication failed.
Check your user name and password.

Suspend database failed.

Failed to secure the rest point.
Check the user privileges and the database settings.

Resume database failed.

Failed to release the rest point.
Check the user privileges and the database settings.

Internal error.

An internal error has occurred.

8.33. Controlling the rest point of SQL Server (clpmssqlstill command)

Controls the rest point of SQL Server.

Command line
clpmssqlstill -d databasename -u username -v vdiusername -s
clpmssqlstill -d databasename -v vdiusername -r
Description

Controls the securing/release of the rest point of SQL Server.

Option
-d databasename

Specifies the name of the target database for rest point control.

-u username

Specifies the name of the database user who executes rest point control.

-v vdiusername

Specifies the name of the OS user who executes the VDI client.

-s

Secures the rest point.

-r

Releases the rest point.

Return Value

0

Normal completion

2

Invalid command option

3

DB connection error

4

Authentication error for the user specified in the -u option

5

Failed to secure the rest point.

6

Failed to release the rest point.

7

Timeout error

99

Internal error

Examples
# clpmssqlstill -d userdb -u sa -v mssql -s
Command succeeded.
# clpmssqlstill -d userdb -v mssql -r
Command succeeded.
Notes

Run this command as the root user.

Add the directories that contain libsqlvdi.so, the SQL Server VDI client library, and libodbc.so, the ODBC library, to the LD_LIBRARY_PATH environment variable.

Configure the password of the user specified in the -u option in advance, in the stillpoint.conf file in the etc directory under the EXPRESSCLUSTER installation directory. Use the following format for the password, and put a colon ":" at the end of the line.

"User name:Password:"

Example of file path: /opt/nec/clusterpro/etc/stillpoint.conf

Example of password setting: sa:password:

A user specified in the -u option needs to have privileges to execute the BACKUP DATABASE statement of SQL Server.

The OS user specified in the -v option needs to have privileges to execute the VDI client.

Configure the timeout values of this command in advance, in the stillpoint.conf file in the etc directory under the EXPRESSCLUSTER installation directory. Use the following format for the timeout, and put a colon ":" at the end of the line. If a value is not set, the value shown in the corresponding example below is used as the default.

"Timeout name: number of seconds:"

Example of file path: /opt/nec/clusterpro/etc/stillpoint.conf

Example of time-out (GetConfiguration) configured: cfgtimeout:1:

Example of time-out (GetCommand) configured: cmdtimeout:90:

Example of time-out (SQL) configured: sqltimeout:60:

Configure the ODBC driver used for operating the database in advance, in the stillpoint.conf file in the etc directory under the EXPRESSCLUSTER installation directory. Use the following format for the ODBC driver, and put a colon ":" at the end of the line. If it is not set, the value shown in the example below is used as the default.

"ODBC driver: Name of ODBC driver to be used:"

Example of file path: /opt/nec/clusterpro/etc/stillpoint.conf

Example of ODBC driver: odbcdriver:ODBC Driver 13 for SQL Server:
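
Putting the settings above together, a stillpoint.conf for SQL Server might contain lines such as the following (the values are samples):

sa:password:
cfgtimeout:1:
cmdtimeout:90:
sqltimeout:60:
odbcdriver:ODBC Driver 13 for SQL Server: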

If the rest point has been secured by running the command for securing the rest point with the -s option, the control is not returned while the command remains resident. By running the command for releasing the rest point with the -r option at a different process, the resident command for securing the rest point finishes and the control is returned.

Error Messages

Message

Cause/Solution

Invalid option.

Invalid command option.
Check the command option.

Cannot connect to database.

Failed to connect to the database.
Check the name and the status of the database.

Username or password is not correct.

User authentication failed.
Check your user name and password.

Suspend database failed.

Failed to secure the rest point.
Check the user privileges and the database settings.

Resume database failed.

Failed to release the rest point.
Check the user privileges and the database settings.

Timeout.

The command timed out.

Internal error.

An internal error has occurred.

8.34. Controlling the rest point of Sybase (clpsybasestill command)

Controls the rest point of Sybase.

Command line
clpsybasestill -d databasename -u username -s
clpsybasestill -d databasename -r
Description

Controls the securing/release of the rest point of Sybase.

Option
-d databasename

Specifies the name of the target database for rest point control.

-u username

Specifies the name of the database user who executes rest point control.

-s

Secures the rest point.

-r

Releases the rest point.

Return Value

0

Normal completion

2

Invalid command option

3

DB connection error

4

Authentication error for the user specified in the -u option

5

Failed to secure the rest point.

6

Failed to release the rest point.

99

Internal error

Examples
# clpsybasestill -d master -u sa -s
Command succeeded.
# clpsybasestill -d master -r
Command succeeded.
Notes

Run this command as the root user. Add the directory that contains libsybdb64.so, the Sybase client library, to the LD_LIBRARY_PATH environment variable. Additionally, configure appropriate settings for the following environment variables.

SYBASE: Installation directory of Sybase.
LANG: Languages which the installed Sybase can accommodate.
DSQUERY: Database server name of Sybase.

Configure the password of the user specified in the -u option in advance, in the stillpoint.conf file in the etc directory under the EXPRESSCLUSTER installation directory. Use the following format for the password, and put a colon ":" at the end of the line.

"User name:Password:"

Example of file path: /opt/nec/clusterpro/etc/stillpoint.conf

Example of password setting: root:password:

A user specified in the -u option needs to have privileges to execute the quiesce database command of Sybase.

If the rest point has been secured by running the command for securing the rest point with the -s option, the control is not returned while the command remains resident. By running the command for releasing the rest point with the -r option at a different process, the resident command for securing the rest point finishes and the control is returned.

Error Messages

Message

Cause/Solution

Invalid option.

Invalid command option.
Check the command option.

Cannot connect to database.

Failed to connect to the database.
Check the name and the status of the database.

Username or password is not correct.

User authentication failed.
Check your user name and password.

Suspend database failed.

Failed to secure the rest point.
Check the user privileges and the database settings.

Resume database failed.

Failed to release the rest point.
Check the user privileges and the database settings.

Internal error.

An internal error has occurred.

8.35. Displaying the cluster statistics information (clpperfc command)

The clpperfc command displays the cluster statistics information.

Command line
clpperfc --starttime -g group_name
clpperfc --stoptime -g group_name
clpperfc -g [group_name]
clpperfc -m monitor_name
Description

This command displays the median values (millisecond) of the group start time and group stop time.

This command displays the monitoring processing time (millisecond) of the monitor resource.

Option
--starttime -g group_name

Displays the median value of the group start time.

--stoptime -g group_name

Displays the median value of the group stop time.

-g [group_name]

Displays the median values of the group start time and group stop time.

If group_name is omitted, the median values of the start time and stop time of all the groups are displayed.

-m monitor_name

Displays the last monitor processing time of the monitor resource.

Return value

0

Normal termination

1

Invalid command option

2

User authentication error

3

Configuration information load error

4

Configuration information load error

5

Initialization error

6

Internal error

7

Internal communication initialization error

8

Internal communication connection error

9

Internal communication processing error

10

Target group check error

12

Timeout error

Example of Execution

When displaying the median value of the group start time:

# clpperfc --starttime -g failover1
200

When displaying each median value of the start time and stop time of the specific group:

# clpperfc -g failover1
            start time    stop time
failover1          200          150

When displaying the monitor processing time of the monitor resource:

# clpperfc -m monitor1
100
Remarks

The time is output in milliseconds by this command.

If the valid start time or stop time of the group was not obtained, - is displayed.

If the valid monitoring time of the monitor resource was not obtained, 0 is displayed.

Notes

Execute this command as a root user.

Error Messages

Message

Cause/Solution

Log in as root.

Run this command as the root user.

Invalid option.

The command option is invalid. Check the command option.

Command timeout.

Command execution timed out.

Internal error.

Check if memory or OS resources are sufficient.

8.36. Checking the cluster configuration information (clpcfchk command)

This command checks the cluster configuration information.

Command line
clpcfchk -o path [-i conf_path]
Description

This command checks the validity of the setting values based on the cluster configuration information.

Option
-o path

Specifies the directory to store the check results.

-i conf_path

Specifies the directory that stores the configuration information to be checked.

If this option is omitted, the applied configuration information is checked.

Return Value

0

Normal termination

Other than 0

Termination with an error

Example of Execution

When checking the applied configuration information:

# clpcfchk -o /tmp
server1 : PASS
server2 : PASS

When checking the stored configuration information:

# clpcfchk -o /tmp -i /tmp/config
server1 : PASS
server2 : FAIL
Execution Result

For this command, the following check results (total results) are displayed.

Check Results (Total Results)

Description

PASS

No error found.

FAIL

An error was found.
Check the check results.
Remarks

Only the total results of each server are displayed.

Notes

Run this command as a root user.

When checking the configuration information exported through Cluster WebUI, decompress it in advance.
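
For example, if the configuration information exported from the Cluster WebUI was saved as a zip archive (the file name below is illustrative), it could be decompressed and then checked as follows:

# unzip clpcfg.zip -d /tmp/config
# clpcfchk -o /tmp -i /tmp/config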

Error Messages

Message

Cause/Solution

Log in as root.

Log in as a root user.

Invalid option.

Specify a valid option.

Could not opened the configuration file. Check if the configuration file exists on the specified path.

The specified path does not exist. Specify a valid path.

Server is busy. Check if this command is already run.

This command has already been activated.

Failed to obtain properties.

Failed to obtain the properties.

Failed to check validation.

Failed to check the cluster configuration.

Internal error. Check if memory or OS resources are sufficient.

The amount of memory or OS resources may be insufficient. Check for any insufficiency.