EXPRESSCLUSTER X Getting Started Guide is intended for first-time users of EXPRESSCLUSTER. It covers topics such as a product overview of EXPRESSCLUSTER, how a cluster system is installed, and a summary of the other available guides. It also describes the latest system requirements and restrictions.
This guide is intended for system engineers and administrators who want to build, operate, and maintain a cluster system. Instructions for designing, installing, and configuring a cluster system with EXPRESSCLUSTER are covered in this guide.
This guide is intended for system administrators. It covers topics such as how to operate EXPRESSCLUSTER, the function of each module, and troubleshooting. This guide is a supplement to the "Installation and Configuration Guide".
This guide is intended for administrators, and for system administrators who want to build, operate, and maintain EXPRESSCLUSTER-based cluster systems. It describes maintenance-related topics for EXPRESSCLUSTER.
A key to success in today's computerized world is to provide services without interruption. A single machine going down due to a failure or overload can stop an entire service you provide to customers. This results not only in enormous damage but also in the loss of credibility you have built up.
Introducing a cluster system allows you to minimize the period during which your system stops (down time) or to improve availability by load distribution.
As the word "cluster" suggests, a cluster system is a system that aims to increase reliability and performance by grouping multiple computers. Cluster systems come in various types, which can be classified into the three categories listed below. EXPRESSCLUSTER is categorized as a high availability cluster.
High Availability (HA) Cluster
In this cluster configuration, one server operates as an active server. When the active server fails, a stand-by server takes over the operation. This cluster configuration aims for high availability. The high availability cluster is available in the shared disk type and the mirror disk type.
Load Distribution Cluster
This is a cluster configuration where requests from clients are allocated to each of the nodes according to appropriate load distribution rules. This cluster configuration aims for high scalability. Generally, data cannot be passed between nodes. The load distribution cluster is available in a load balance type or parallel database type.
High Performance Computing (HPC) Cluster
This is a cluster configuration for operations with a huge computation amount, where a single operation is performed as if by a supercomputer: the CPUs of all nodes are used to perform that single operation.
To enhance the availability of a system, it is generally considered important to provide redundancy for the system's components and to eliminate single points of failure. A "single point of failure" is a single computer component (hardware component) whose failure would cause interruption of services. The high availability (HA) cluster is a cluster system that minimizes the time during which the system is stopped and increases operational availability by establishing redundancy across multiple nodes.
The HA cluster is called for in mission-critical systems where downtime is fatal. The HA cluster can be divided into two types: shared disk type and mirror disk type. The explanation for each type is provided below.
Data must be inherited from one server to another in cluster systems. A cluster topology where data is stored on an external disk (shared disk) accessible from two or more servers (for example, a FibreChannel disk array device with SAN connection) and inherited among the servers through that disk is called the shared disk type.
Fig. 2.1 HA cluster configuration (Shared disk type)
Expensive since a shared disk is necessary.
Ideal for systems that handle large volumes of data
If a failure occurs on a server where applications are running (active server), the cluster system automatically detects the failure and starts applications in a stand-by server to take over operations. This mechanism is called failover. Operations to be inherited in the cluster system consist of resources including disk, IP address, and application.
In a non-clustered system, a client needs to access a different IP address if an application is restarted on a server other than the one where it was originally running. In contrast, many cluster systems assign a virtual IP address on an operational basis rather than using the IP address of a specific server. Which server the operation is running on, active or stand-by, remains transparent to the client; the operation continues as if it had been running on the same server.
If a failover occurs because an active server is down, data on the shared disk is inherited by a stand-by server without the necessary application-ending processing having been completed. For this reason, the logical integrity of the data must be checked on the stand-by server. Usually this processing is the same as that performed when a non-clustered system is rebooted after an abrupt shutdown. For example, roll-back or roll-forward is necessary for databases. With these actions, a client can continue operation simply by re-executing the SQL statements that have not yet been committed.
After a failure occurs, the failed server can return to the cluster system as a stand-by server once it has been physically separated from the system, repaired, and successfully reconnected. It is not necessary to fail back a group to the original server when continuity of operations is the priority. If the operations must run on the original server, move the group back.
Fig. 2.2 From occurrence of a failure to recovery
Normal operation
Occurrence of failure
Recovering server
Operation transfer
When the specification of the failover destination server does not meet the system requirements, or when overload occurs due to multi-directional stand-by, running operations on the original server is preferable. In such a case, after the original node has been recovered, stop the operations and start them again on the original node. Returning a failover group to the original server is called failback.
A stand-by mode where there is one operation and nothing runs on the stand-by server, as shown in Fig. 2.3 HA cluster topology (Uni-directional standby), is referred to as uni-directional standby.
Fig. 2.3 HA cluster topology (Uni-directional standby)
A mode where there are two or more operations with each server in the cluster serving as both active and standby server, as shown in Fig. 2.4 HA cluster topology (Multi-directional standby), is referred to as multi-directional standby.
Server 1 is the active server for Application A and also the standby server for Application B.
Server 2 is the active server for Application B and also the standby server for Application A.
Fig. 2.4 HA cluster topology (Multi-directional standby)
The shared disk type cluster system is good for large-scale systems. However, building a system of this type can be costly because shared disks are generally expensive. The mirror disk type cluster system provides the same functions as the shared disk type at a lower cost, by mirroring the servers' disks.
The mirror disk type is not recommended for large-scale systems that handle a large volume of data since data needs to be mirrored between servers.
When a write request is made by an application, the data mirror engine writes the data to the local disk and sends the written data to the stand-by server via the interconnect. The interconnect is a network link connecting the servers; in a cluster system it is used to monitor whether each server is alive. In the data mirror type cluster system, the interconnect is also sometimes used to transfer data. The data mirror engine on the stand-by server achieves data synchronization between the stand-by and active servers by writing the data to the local disk of the stand-by server.
For read requests from an application, data is simply read from the disk on the active server.
Snapshot backup is an applied use of data mirroring. Because the data mirror type cluster system holds the same data in two locations, you can keep the stand-by server's copy as a snapshot backup simply by separating that server from the cluster.
HA cluster mechanism and problems
The following sections describe cluster implementation and related problems.
In a shared disk-type cluster, a disk array device is shared between the servers in a cluster. When an error occurs on a server, the standby server takes over the applications using the data on the shared disk.
In the mirror disk type cluster, a data disk on the cluster server is mirrored via the network. When an error occurs on a server, the applications are taken over using the mirrored data on the stand-by server. Data is mirrored for every I/O. Therefore, viewed from a high-level application, the mirror disk type cluster appears the same as the shared disk type.
The following describes the shared disk type cluster configuration.
A failover-type cluster can be divided into the following categories depending on the cluster topologies:
Uni-Directional Standby Cluster System
In the uni-directional standby cluster system, the active server runs applications while the other server, the standby server, does not. This is the simplest cluster topology and you can build a high-availability system without performance degradation after failing over.
Multi-directional standby cluster system with the same application
In the same-application multi-directional standby cluster system, the same application is activated on multiple servers, and these servers also operate as standby servers for each other. Each application instance operates on its own. When a failover occurs, two instances of the same application run on one server; therefore, the application must tolerate this mode of operation. When the application data can be split into multiple partitions, you can build a load distribution system on a data-partitioning basis by changing which server each client connects to.
Fig. 2.9 Multi-directional standby cluster system with the same application (1)
Fig. 2.10 Multi-directional standby cluster system with the same application (2)
Multi-directional standby cluster system with different applications
In the different-application multi-directional standby cluster system, different applications are activated on multiple servers, and these servers operate as standby servers for each other. When a failover occurs, two or more applications are activated on one server; therefore, these applications must be able to coexist. You can build a load distribution system on a per-application basis.
Application A and Application B are different applications.
Fig. 2.11 Multi-directional standby cluster system with different applications (1)
Fig. 2.12 Multi-directional standby cluster system with different applications (2)
N-to-N Configuration
The configuration can be expanded to more nodes by applying the configurations introduced thus far. In the N-to-N configuration described below, three different applications run on three servers, and a single standby server takes over an application if any problem occurs. In a uni-directional standby cluster system, the stand-by server runs nothing, so one of the two servers (50%) is dedicated to standby; in an N-to-N configuration, only one of the four servers (25%) is. In addition, no performance deterioration is anticipated as long as an error occurs on only one server.
Cluster software executes a failover (that is, the passing of operations) when it detects a failure that can affect continued operation. The following section gives you a quick view of how the cluster software detects a failure.
EXPRESSCLUSTER regularly checks whether other servers are properly working in the cluster system. This function is called "heartbeat communication."
Heartbeat and detection of server failures
Failures that must be detected in a cluster system are failures that can cause all servers in the cluster to stop. Server failures include hardware failures such as power supply and memory failures, and OS panic. To detect such failures, the heartbeat is used to monitor whether the server is active or not.
Some cluster software programs use the heartbeat not only for checking whether the target is alive through a ping response, but also for sending status information about the local server. Such cluster software begins failover when no heartbeat response is received, treating the lack of response as a server failure. However, a grace period should be allowed before determining failure, since a highly loaded server can respond late. Allowing a grace period results in a time lag between the moment a failure occurs and the moment the cluster software detects it.
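As an illustration of this grace-period logic, the following is a minimal sketch of ping-based aliveness checking written as a Windows batch script; the peer address, interval, and retry count are hypothetical, and real cluster software uses dedicated heartbeat protocols rather than ping alone:

    @echo off
    rem Minimal heartbeat sketch: declare the peer failed only after
    rem several consecutive missed responses (the grace period).
    set PEER=192.168.0.2
    set MAX_MISSES=3
    set MISSES=0
    :loop
    rem One echo request with a 1-second timeout; errorlevel 1 means no reply.
    ping -n 1 -w 1000 %PEER% >nul
    if errorlevel 1 (set /a MISSES+=1) else (set MISSES=0)
    if %MISSES% geq %MAX_MISSES% goto failed
    timeout /t 1 /nobreak >nul
    goto loop
    :failed
    echo Peer %PEER% missed %MAX_MISSES% heartbeats: server failure or network partition.
    exit /b 1

Note how the grace period (three missed heartbeats here) trades detection speed for tolerance of momentary load spikes, which is exactly the time lag described above.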
Detection of resource failures
Factors causing stop of operations are not limited to stop of all servers in the cluster. Failure in disks used by applications, NIC failure, and failure in applications themselves are also factors that can cause the stop of operations. These resource failures need to be detected as well to execute failover for improved availability.
Accessing the target resource is one way to detect a resource failure when the target is a physical device. For monitoring applications, probing service ports, within a range that does not affect operation, is another way of detecting an error, in addition to monitoring whether application processes are alive.
In a failover cluster system of the shared disk type, multiple servers physically share the disk device. Typically, a file system enjoys I/O performance greater than the physical disk I/O performance by keeping data caches in a server.
What happens if a file system is accessed by multiple servers simultaneously?
Because a general file system assumes that no server other than the local one updates data on the disk, inconsistency arises between the caches and the data on the disk, and ultimately the data will be destroyed. The failover cluster system locks the disk device to prevent multiple servers from mounting a file system simultaneously due to a network partition, explained below.
When all interconnects between servers are disconnected, it is impossible to tell whether a server is down by heartbeat monitoring alone. In this state, if a failover is performed on the assumption that the server has been shut down and multiple servers mount the file system simultaneously, data on the shared disk may be corrupted.
The problem explained in the section above is referred to as "network partition" or "Split Brain Syndrome." To resolve this problem, the failover cluster system is equipped with various mechanisms to ensure shared disk lock at the time when all interconnects are disconnected.
As mentioned earlier, resources to be managed by a cluster include disks, IP addresses, and applications. The functions used in the failover cluster system to inherit these resources are described below.
In the shared disk type cluster, the data to be passed from one server to another in the cluster system is stored in a partition on the shared disk. This means that inheriting data amounts to re-mounting, from a healthy server, the file system holding the files the application uses. Because the shared disk is physically connected to the server that inherits the data, all the cluster software has to do is mount the file system.
The diagram above (Figure 2.16 Inheriting data) may look simple. Consider the following issues in designing and creating a cluster system.
One issue to consider is recovery time for a file system or database. The file to be inherited may have been in use by another server or in the middle of being updated just before the failure occurred. For this reason, a cluster system may need to run consistency checks on the data it is moving for some file systems, and may need to roll back data for some database systems. These checks are not cluster-system-specific; they are required in many recovery processes, including rebooting a single server that was shut down by a power failure. If this recovery takes a long time, the whole of that time is added to the failover time (the time to take over operation), reducing system availability.
Another issue you should consider is write assurance. When an application writes important data to the shared disk, usually the data is written through a file system. However, if the file system holds the data only in a disk cache and has not yet written it to the shared disk, the data in the disk cache will not be inherited by the stand-by server when the active server shuts down. For this reason, important data that must be inherited by the stand-by server has to be written to the disk, for example by using synchronous writing. This is the same as preventing data loss when a single server shuts down: only the data actually written to the shared disk is inherited by the stand-by server, and data held in memory, such as a disk cache, is not. The cluster system needs to be configured with these issues in mind.
By inheriting IP addresses, clients do not have to be concerned about which server the operations are running on when a failover occurs. The cluster software inherits the IP addresses for this purpose.
The last item to be inherited by the cluster software is the applications. Unlike in fault tolerant computers (FTC), no process state, such as the contents of memory, is inherited in typical failover cluster systems. The applications that were running on the failed server are inherited by rerunning them on a healthy server.
For example, when a database instance fails over, the database started on the stand-by server cannot continue the exact processes and transactions that were running on the failed server; uncommitted transactions are rolled back, just as when a database is restarted after going down. Clients need to reconnect to the database. The time needed for this database recovery is typically a few minutes, though it can be controlled to a certain extent by configuring the DBMS checkpoint interval.
Many applications can resume operations simply by being re-executed. Some applications, however, require recovery procedures when a failure occurs. For these applications, the cluster software can start scripts instead of the applications directly, so that the recovery process can be scripted. In a script, the recovery process, including cleanup of half-updated files, is written as necessary, based on the factors that triggered the script and information about the executing server.
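For example, a start script for such an application might clean up half-updated work files before launching the application. The following is a minimal, hypothetical BAT sketch; the paths, recovery tool, and application names are assumptions for illustration, not EXPRESSCLUSTER defaults:

    @echo off
    rem Hypothetical recovery-and-start script executed on the failover target.
    set DATA_DIR=F:\appdata
    rem Discard half-updated temporary files left by the failed server.
    if exist "%DATA_DIR%\*.tmp" del /q "%DATA_DIR%\*.tmp"
    rem Run the application's own recovery tool, if it provides one (assumed name).
    if exist "%DATA_DIR%\journal.log" myapp_recover.exe --journal "%DATA_DIR%\journal.log"
    rem Start the application itself.
    start "" myapp.exe --data "%DATA_DIR%"
    exit /b 0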
Cluster software is required to complete each task quickly and reliably (see Figure 2.17 Failover time chart). Cluster software achieves high availability with due consideration of everything described so far.
Having a clear picture of the availability level required or targeted is important in building a high availability system. This means that when you design a system, you need to study the cost effectiveness of countermeasures against the various failures that can disturb system operations, such as establishing a redundant configuration to continue operations and recovering operations within a short period.
A single point of failure (SPOF), as described previously, is a component whose failure can stop the whole system. In a cluster system, you can eliminate the system's SPOFs by establishing server redundancy. However, components shared among servers, such as a shared disk, may become a SPOF. The key to designing a high availability system is to duplicate or eliminate such shared components.
A cluster system improves availability, but a failover takes a few minutes to switch systems, so failover time is itself a factor that reduces availability. Although measures that improve the availability of a single server, such as ECC memory and redundant power supplies, are also important, the following three components, which are likely to become SPOFs, are discussed hereafter: the shared disk, the access path to the shared disk, and the LAN.
Typically, a shared disk uses a RAID disk array, so the bare drives in the disk array do not become a SPOF. The problem is the RAID controller incorporated in the array. Shared disks commonly used in many cluster systems allow controller redundancy.
In general, the access paths to the shared disk must also be duplicated to benefit from redundant RAID controllers. There are still things to be done to use redundant access paths in Linux (described later in this chapter). If the shared disk is configured so that the same logical unit (LUN) can be accessed simultaneously from the duplicated controllers, and each controller is connected to a different server, you can achieve high availability through failover between nodes when an error occurs in one of the controllers.
Fig. 2.18 Example of a RAID controller and access paths both being SPOF
Fig. 2.19 Example of RAID controllers and access paths both being redundant
* HBA stands for Host Bus Adapter. This is an adapter of the server not of the shared disk.
With a failover cluster system of the data mirror type, where no shared disk is used, you can create an ideal system having no SPOF, because all data is mirrored to the disk on the other server. However, you should consider the following issues:
Degradation of disk I/O performance in mirroring data over the network (especially writing performance)
Degradation of system performance during mirror resynchronization in recovery from server failure (mirror copy is done in the background)
Time for mirror resynchronization (failover cannot be done until mirror resynchronization is completed)
In a system with frequent data viewing and a relatively small volume of data, choosing the failover cluster of data mirror type is effective to increase availability.
In a typical configuration of the shared disk type cluster system, the access path to the shared disk is shared among servers in the cluster. To take SCSI as an example, two servers and a shared disk are connected to a single SCSI bus. A failure in the access path to the shared disk can stop the entire system.
What you can do about this is to build a redundant configuration by providing multiple access paths to the shared disk and making them look like a single path to applications. The device driver that allows this is called a path failover driver.
In any system that provides services over a network, a LAN failure is a major factor that disturbs system operations. With appropriate settings, the availability of a cluster system can be increased through failover between nodes upon NIC failure. However, a failure in a network device outside the cluster system still disturbs system operations.
In the case of the figure above, if the router fails, access from the PC to the service on the server cannot be maintained (the router becomes a SPOF).
LAN redundancy is a solution for device failures outside the cluster system and improves availability. The ways used to increase LAN availability for a single server also apply here. For example, a primitive approach is to keep a spare network device powered off and manually replace a failed device with it. Another is to multiplex the network path with a redundant configuration of high-performance network devices that switch paths automatically. Yet another option is to use a driver that supports NIC redundancy, such as Intel's ANS driver.
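On recent Windows Server versions, one way to build such NIC redundancy is the operating system's built-in NIC teaming (LBFO); this is an OS feature rather than an EXPRESSCLUSTER function, and the team and adapter names below are examples only:

    rem Create a switch-independent team from two physical adapters
    rem (run from an elevated command prompt; names are hypothetical).
    powershell -Command "New-NetLbfoTeam -Name 'PublicTeam' -TeamMembers 'Ethernet1','Ethernet2' -TeamingMode SwitchIndependent -Confirm:$false"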
Load balancing appliances and firewall appliances are also network devices that are likely to become SPOF. Typically, they allow failover configurations through standard or optional software. Having redundant configuration for these devices should be regarded as requisite since they play important roles in the entire system.
Given that many of the factors causing system trouble are said to be the product of incorrect settings or poor maintenance, evaluation before actual operation is important for realizing a high availability system and its stable operation. The following are key to improving availability in actual operation of the system:
Clarify and list failures, study actions to be taken against them, and verify effectiveness of the actions by creating dummy failures.
Conduct an evaluation according to the cluster life cycle and verify performance (such as in degenerate mode).
Arrange a guide for system operation and troubleshooting based on the evaluation mentioned above.
Having a simple design for a cluster system contributes to simplifying verification and improvement of system availability.
Despite the above efforts, failures still occur. If you use a system for a long time, you cannot escape failures: hardware suffers from aging deterioration, and software produces failures and errors through memory leaks or operation beyond the originally intended capacity. Improving the availability of hardware and software is important, yet monitoring for failures and troubleshooting problems is even more important. For example, in a cluster system, you can keep the system running by spending a few minutes on switching even if a server fails. However, if you leave the failed server as it is, the system no longer has redundancy, and the cluster system becomes meaningless should the next failure occur.
If a failure occurs, the system administrator must immediately take actions such as removing a newly emerged SPOF to prevent another failure. Functions for remote maintenance and reporting failures are very important in supporting services for system administration.
To achieve high availability with a cluster system, you should:
Remove single points of failure or keep them under complete control.
Have a simple design that is tolerant of and resistant to failures, and be equipped with a guide for operation and troubleshooting.
Detect a failure quickly and take appropriate action against it.
A core component of EXPRESSCLUSTER. Install this on the server machines that constitute the cluster system. It includes all the high availability functions of EXPRESSCLUSTER. The server functions of the Cluster WebUI are also included.
Cluster WebUI
This is a tool for creating the configuration data of EXPRESSCLUSTER and for managing EXPRESSCLUSTER operations. It uses a Web browser as its user interface. The Cluster WebUI is installed as part of the EXPRESSCLUSTER Server, but it is distinguished from the EXPRESSCLUSTER Server because it is operated from a Web browser on the management PC.
The software configuration of EXPRESSCLUSTER should look similar to the figure below. Install the EXPRESSCLUSTER Server (software) on a server that constitutes a cluster. Because the main functions of Cluster WebUI are included in EXPRESSCLUSTER Server, it is not necessary to separately install them. The Cluster WebUI can be used through the web browser on the management PC or on each server in the cluster.
EXPRESSCLUSTER Server (Main module)
Cluster WebUI
Fig. 3.1 Software configuration of EXPRESSCLUSTER
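To open the Cluster WebUI, point a browser on the management PC at one of the cluster servers or at the management floating IP; the WebUI is typically served on port 29003 by default (confirm the port for your version in the manuals). Using the sample addresses that appear later in this chapter:

    http://10.0.0.11:29003/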
3.3.1. How an error is detected in EXPRESSCLUSTER
There are three kinds of monitoring in EXPRESSCLUSTER: (1) server monitoring, (2) application monitoring, and (3) internal monitoring. These monitoring functions let you detect an error quickly and reliably. The details of the monitoring functions are described below.
Server monitoring is the most basic function of the failover-type cluster system. It monitors if a server that constitutes a cluster is properly working.
Server Monitoring (heartbeat) uses the following communication paths:
Primary Interconnect
LAN dedicated to communication between the cluster servers. This is used to exchange information between the servers as well as to perform heartbeat communication.
Fig. 3.2 LAN heartbeat/Kernel mode LAN heartbeat (Primary Interconnect)
Secondary Interconnect
A communication path also used for communicating with clients. It is used for exchanging data between the servers as well as serving as a backup interconnect.
Fig. 3.3 LAN heartbeat/Kernel mode LAN heartbeat (Secondary Interconnect)
Witness
An external server running the Witness server service is used to check, through communication with it, whether the other servers constituting the failover-type cluster are alive.
Application monitoring is a function that monitors applications and factors that cause a situation where an application cannot run.
Using the monitoring options, applications and/or protocols can be monitored to see whether they have stalled or failed.
In addition to the basic monitoring of successful startup and existence of applications, you can even monitor stalls and failures in applications including specific databases (such as Oracle or DB2), protocols (such as FTP or HTTP), and application servers (such as WebSphere or WebLogic) by introducing the optional monitoring products of EXPRESSCLUSTER. For details, see "Monitor resource details" in the "Reference Guide".
Monitoring activation status of applications
An error can be detected by starting an application with an application-starting resource (application resource or service resource) of EXPRESSCLUSTER and regularly checking whether its process is alive with an application-monitoring resource (application monitor resource or service monitor resource). This is effective when the application stops due to abnormal termination.
Note
An error in a resident process cannot be detected for an application started by EXPRESSCLUSTER.
Note
An internal application error (for example, application stalling and result error) cannot be detected.
Resource monitoring
An error can be detected by monitoring cluster resources (such as disk partitions and IP addresses) and the public LAN using the monitor resources of EXPRESSCLUSTER. This is effective when the application stops due to an error in a resource that the application needs in order to operate.
Internal monitoring refers to mutual monitoring of modules within EXPRESSCLUSTER. It monitors whether each monitoring function of EXPRESSCLUSTER is working properly. The activation status of EXPRESSCLUSTER processes is monitored within EXPRESSCLUSTER.
Monitoring activation status of an EXPRESSCLUSTER process
There are monitorable and non-monitorable errors in EXPRESSCLUSTER. It is important to know what kind of errors can or cannot be monitored when building and operating a cluster system.
3.3.6. Detectable and non-detectable errors by server monitoring
Monitoring conditions: A heartbeat from a server with an error is stopped
Example of errors that can be monitored:
Hardware failure (from which the OS cannot continue operating)
Stop error
Example of error that cannot be monitored:
Partial failure on OS (for example, only a mouse or keyboard does not function)
3.3.7. Detectable and non-detectable errors by application monitoring
Monitoring conditions: Termination of application with errors, continuous resource errors, disconnection of a path to the network devices.
Example of errors that can be monitored:
Abnormal termination of an application
Failure to access the shared disk (such as HBA failure)
Public LAN NIC problem
Example of errors that cannot be monitored:
Application stalls and erroneous results.
EXPRESSCLUSTER cannot monitor application stalls and erroneous results [1]. However, it is possible to perform a failover by creating a program that monitors the application and terminates itself when an error is detected, starting that program with the application resource, and monitoring it with the application monitor resource.
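A minimal sketch of such a self-terminating watchdog, as a BAT script: it is started by an application resource, polls the application, and exits on error so that the application monitor resource detects the process termination and a failover can start. The health-check URL and intervals are hypothetical:

    @echo off
    rem Watchdog started via an application resource. The application
    rem monitor resource watches THIS process; exiting signals an error.
    :check
    rem Hypothetical health check: HTTP probe of the local service
    rem (curl.exe ships with recent Windows Server versions).
    curl -s -o nul --max-time 5 http://localhost:8080/health
    if errorlevel 1 (
        echo Health check failed - terminating so that failover can start.
        exit /b 1
    )
    timeout /t 10 /nobreak >nul
    goto check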
Upon detecting that a heartbeat from a server is interrupted, EXPRESSCLUSTER determines whether the cause of the interruption is an error in the server or a network partition. If it is determined to be a server failure, a failover is performed (resources are activated and applications started on a healthy server). If it is determined to be a network partition, protecting data takes priority over continuing operations, and processing such as an emergency shutdown is performed.
The following are the network partition resolution methods:
When a server failure is detected, a healthy server can send a stop request to the failed server. Making the failed server stop eliminates the possibility of simultaneously starting business applications on two or more servers. The forced stop is made before a failover is started.
Upon detecting that a heartbeat from a server is interrupted, EXPRESSCLUSTER determines whether the cause of this interruption is an error in a server or a network partition before starting a failover. Then a failover is performed by activating various resources and starting up applications on a properly working server.
The group of resources which fail over at the same time is called a "failover group." From a user's point of view, a failover group appears as a virtual computer.
Note
In a cluster system, a failover is performed by restarting the application on a properly working node. Therefore, data held in application memory cannot be failed over.
From the occurrence of an error to the completion of a failover takes a few minutes. See the time chart below:
The time for a standby server to detect an error after that error occurred on the active server.
The setting values of the cluster properties should be adjusted depending on the delay caused by application load. (The default value is 30 seconds.)
Fencing
The time for network partition resolution and forced stopping.
For network partition resolution, EXPRESSCLUSTER checks whether stop of heartbeat (heartbeat timeout) detected from the other server is due to a network partition or an error in the other server.
Confirmation completes immediately.
For forced stopping, a stop request is sent to the server that is recognized to be the failure source.
How long it will take varies depending on the cluster's operating environment such as a physical one, a virtual one, or the cloud.
Activating resources
The time to activate the resources necessary for operating an application.
The file system recovery, transfer of the data in disks, and transfer of IP addresses are performed.
The resources can be activated in a few seconds in ordinary settings, but the required time changes depending on the type and the number of resources registered to the failover group. For more information, see the "Installation and Configuration Guide".
Recovering and restarting applications
The startup time of the application to be used in operation. The data recovery time such as a roll-back or roll-forward of the database is included.
The time for roll-back or roll-forward can be predicted by adjusting the check point interval. For more information, refer to the document that comes with each software product.
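As one concrete illustration of checkpoint tuning, SQL Server (for example) exposes a server-wide "recovery interval (min)" option that bounds automatic-recovery time; option names and mechanisms differ per DBMS, so treat this sqlcmd one-liner as a hedged example only:

    rem Cap SQL Server automatic recovery to roughly one minute
    rem ('recovery interval (min)' is an advanced sp_configure option).
    sqlcmd -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'recovery interval (min)', 1; RECONFIGURE;"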
3.5.1. Hardware configuration of the shared disk type cluster configured by EXPRESSCLUSTER
The hardware configuration of the shared disk type cluster in EXPRESSCLUSTER is described below. In general, the following is used for communication between the servers in a cluster system:
Two NIC cards (one for external communication, one for EXPRESSCLUSTER)
A specific area of the shared disk
SCSI or FibreChannel can be used as the communication interface to the shared disk; FibreChannel is more commonly used these days.
Fig. 3.6 Example of cluster configuration (Shared disk type)
FIP1
10.0.0.11 (Access destination from the Cluster WebUI client)
FIP2
10.0.0.12 (Access destination from the operation client)
NIC1-1
192.168.0.1
NIC1-2
10.0.0.1
NIC2-1
192.168.0.2
NIC2-2
10.0.0.2
Shared disk:
Drive letter of the partition for disk heartbeat
E
Drive letter of the disk resource
F
File system
NTFS
3.5.2. Hardware configuration of the mirror disk type cluster configured by EXPRESSCLUSTER
The mirror disk type cluster serves as an alternative to a shared disk device by mirroring partitions on the servers' disks. It is good for smaller-scale, lower-budget systems compared to the shared disk type cluster.
Note
To use a mirror disk, it is a requirement to purchase the Replicator option or the Replicator DR option.
A network for copying mirror disk data is required, but normally the interconnect (the NIC for EXPRESSCLUSTER internal communication) is used for this purpose.
The hardware configuration of the data mirror type cluster configured by EXPRESSCLUSTER is described below.
Sample cluster environment with mirror disks used (when the cluster partitions and data partitions are allocated to the OS-installed disks)
In the following configuration, free partitions of the OS-installed disks are used as cluster partitions and data partitions.
Fig. 3.7 Example of cluster configuration (1) (Mirror disk type)
FIP1
10.0.0.11 (Access destination from the Cluster WebUI client)
FIP2
10.0.0.12 (Access destination from the operation client)
NIC1-1
192.168.0.1
NIC1-2
10.0.0.1
NIC2-1
192.168.0.2
NIC2-2
10.0.0.2
Drive letter of the cluster partition
E
File system
RAW
Drive letter of the data partition
F
File system
NTFS
Sample cluster environment with mirror disks used (when disks are prepared for cluster partitions and data partitions)
In the following configuration, disks are prepared for cluster partitions and data partitions and connected to the servers.
Fig. 3.8 Example of cluster configuration (2) (Mirror disk type)
FIP1
10.0.0.11 (Access destination from the Cluster WebUI client)
FIP2
10.0.0.12 (Access destination from the operation client)
NIC1-1
192.168.0.1
NIC1-2
10.0.0.1
NIC2-1
192.168.0.2
NIC2-2
10.0.0.2
Drive letter of the cluster partition
E
File system
RAW
Drive letter of the data partition
F
File system
NTFS
3.5.3. Hardware configuration of the hybrid disk type cluster configured by EXPRESSCLUSTER
By combining the shared disk type and the mirror disk type and mirroring the partitions on the shared disk, this configuration allows operation to continue even if a failure occurs on the shared disk device. Mirroring between remote sites can also serve as a disaster countermeasure.
Note
To use the hybrid disk type configuration, it is a requirement to purchase the Replicator DR option.
As is the case with the mirror disk configuration, a network for copying the data is necessary. In general, the NIC for internal communication in EXPRESSCLUSTER is used for this purpose.
The hardware configuration of the hybrid disk type cluster configured by EXPRESSCLUSTER is as follows:
Sample cluster environment with hybrid disks used (a shared disk is used by two servers and the data is mirrored to the normal disk of the third server)
Fig. 3.9 Example of cluster configuration (Hybrid disk type)
FIP1
10.0.0.11 (Access destination from the Cluster WebUI client)
FIP2
10.0.0.12 (Access destination from the operation client)
NIC1-1
192.168.0.1
NIC1-2
10.0.0.1
NIC2-1
192.168.0.2
NIC2-2
10.0.0.2
NIC3-1
192.168.0.3
NIC3-2
10.0.0.3
Shared disk
Drive letter of the partition for heartbeat
E
File system
RAW
Drive letter of the cluster partition
F
File system
RAW
Drive letter of the data partition
G
File system
NTFS
The above figure shows a sample of the cluster environment where a shared disk is mirrored in the same network. While the hybrid disk type configuration mirrors between server groups that are connected to the same shared disk device, the sample above mirrors the shared disk to the local disk in server3. Because of this, the stand-by server group svg2 has only one member server, server3.
Fig. 3.10 Example of cluster configuration (Hybrid disk type, remote cluster)
VIP1
10.0.0.11 (Access destination from the Cluster WebUI client)
VIP2
10.0.0.12 (Access destination from the operation client)
NIC1-1
192.168.0.1
NIC1-2
10.0.0.1
NIC2-1
192.168.0.2
NIC2-2
10.0.0.2
NIC3-1
192.168.0.3
NIC3-2
10.0.0.3
Shared disk
Drive letter of the partition for heartbeat
E
File system
RAW
Drive letter of the cluster partition
F
File system
RAW
Drive letter of the data partition
G
File system
NTFS
The above shows a sample cluster environment where mirroring is performed between remote sites. This sample uses virtual IP addresses instead of floating IP addresses because the server groups are on different network segments of the Public-LAN. When a virtual IP address is used, every router located in between must be configured to pass the host route. The mirror disk connect communication transfers the written data as-is; it is recommended to use a VPN with a dedicated line, or to enable the compression and encryption functions.
In EXPRESSCLUSTER, the objects that perform monitoring and the targets to be monitored are both managed as "resources". Resources are divided into four types and managed separately: heartbeat resources, network partition resolution resources, group resources, and monitor resources. Managing them as resources makes it clearer what is monitoring and what is being monitored, and it also makes building a cluster and handling errors easier.
See also
For the details of each resource, see the "Reference Guide".
Heartbeat resources are used for verifying whether the other server is working properly between servers. The following heartbeat resources are currently supported:
LAN heartbeat resource
Uses Ethernet for communication.
Witness heartbeat resource
Uses an external server running the Witness server service, and shows the status (of communication with each server) obtained from that external server.
A group resource constitutes a unit when a failover occurs. The following group resources are currently supported:
Application resource (appli)
Provides a mechanism for starting and stopping an application (including user-created applications).
Floating IP resource (fip)
Provides a virtual IP address. A client can access a virtual IP address the same way as accessing a regular IP address.
Mirror disk resource (md)
Provides a function to mirror a specific partition on the local disk and to control access to it. It can be used only in a mirror disk configuration.
Registry synchronization resource (regsync)
Provides a mechanism to synchronize specific registry keys between two or more servers, so that applications and services are set up identically across the servers that constitute the cluster.
Script resource (script)
Provides a mechanism for starting and stopping a script (BAT), such as a user-created script. (A minimal example appears after the notes below.)
Disk resource (sd)
Provides a function to control access to a specific partition on the shared disk. This can be used only when the shared disk device is connected.
Service resource (service)
Provides a mechanism for starting and stopping a service such as database and Web.
Virtual computer name resource (vcom)
Provides a virtual computer name. This can be accessed from a client in the same way as a general computer name.
Dynamic DNS resource (ddns)
Registers a virtual host name and the IP address of the active server to the dynamic DNS server.
Virtual IP resource (vip)
Provides a virtual IP address. This can be accessed from a client in the same way as a general IP address. This can be used in the remote cluster configuration among different network addresses.
CIFS resource (cifs)
Provides a function to disclose and share folders on the shared disk and mirror disks.
Hybrid disk resource (hd)
A resource in which the disk resource and the mirror disk resource are combined. Provides a function to perform mirroring on a certain partition on the shared disk or the local disk and to control access.
LB probe port resource (lbpp)
Provides a system for opening a specific port on a node on which the operation is performed.
AWS elastic ip resource (awseip)
Provides a system for giving an elastic IP (referred to as EIP) when EXPRESSCLUSTER is used on AWS.
AWS virtual ip resource (awsvip)
Provides a system for giving a virtual IP (referred to as VIP) when EXPRESSCLUSTER is used on AWS.
AWS secondary ip resource (awssip)
Provides a system for giving a secondary IP when EXPRESSCLUSTER is used on AWS.
AWS DNS resource (awsdns)
Registers the virtual host name and the IP address of the active server to Amazon Route 53 when EXPRESSCLUSTER is used on AWS.
Azure probe port resource (azurepp)
Provides a system for opening a specific port on a node on which the operation is performed when EXPRESSCLUSTER is used on Microsoft Azure.
Azure DNS resource (azuredns)
Registers the virtual host name and the IP address of the active server to Azure DNS when EXPRESSCLUSTER is used on Microsoft Azure.
Google Cloud virtual IP resource (gcvip)
Provides a system for opening a specific port on a node on which the operation is performed when EXPRESSCLUSTER is used on Google Cloud.
Google Cloud DNS resource (gcdns)
Registers the virtual host name and the IP address of the active server to Cloud DNS when EXPRESSCLUSTER is used on Google Cloud.
Oracle Cloud virtual IP resource (ocvip)
Provides a system for opening a specific port on a node on which the operation is performed when EXPRESSCLUSTER is used on Oracle Cloud Infrastructure.
Oracle Cloud DNS resource (ocdns)
Registers the virtual host name and the IP address of the active server to Oracle Cloud DNS when EXPRESSCLUSTER is used on Oracle Cloud Infrastructure.
Note
To use a mirror disk resource, the EXPRESSCLUSTER X Replicator license or the EXPRESSCLUSTER X Replicator DR license is required.
To use a hybrid disk resource, the EXPRESSCLUSTER X Replicator DR license is required.
The above resources are not listed on the resource list of the Cluster WebUI if their licenses are not registered.
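As referenced in the script resource description above, start and stop scripts are plain BAT files. A minimal, hypothetical pair follows; the service name is an assumption, and the exact script conventions are described in the "Reference Guide":

    rem start.bat - executed when the failover group starts on a server.
    net start MyAppService
    exit /b %errorlevel%

    rem stop.bat - executed when the failover group stops.
    net stop MyAppService
    exit /b %errorlevel%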
A monitor resource monitors a cluster system. The following monitor resources are currently supported:
Application monitor resource (appliw)
Provides a monitoring mechanism to check whether a process started by application resource is active or not.
Disk RW monitor resource (diskw)
Provides a monitoring mechanism for the file system, and a function to perform a failover through a hardware reset or an intentional stop error when file system I/O stalls. This can be used for monitoring the file system of the shared disk.
Floating IP monitor resource (fipw)
Provides a monitoring mechanism of the IP address started by floating IP resource.
IP monitor resource (ipw)
Provides a mechanism for monitoring the network communication.
Mirror disk monitor resource (mdw)
Provides a monitoring mechanism of the mirroring disks.
NIC Link Up/Down monitor resource (miiw)
Provides a monitoring mechanism for link status of LAN cable.
Multi target monitor resource (mtw)
Provides a single status combining those of multiple monitor resources.
Registry synchronization monitor resource (regsyncw)
Provides a monitoring mechanism for the synchronization process of a registry synchronization resource.
Disk TUR monitor resource (sdw)
Provides a mechanism to monitor the operation of access path to the shared disk by the TestUnitReady command of SCSI. This can be used for the shared disk of FibreChannel.
Service monitor resource (servicew)
Provides an alive monitoring mechanism for services.
Virtual computer name monitor resource (vcomw)
Provides a monitoring mechanism of the virtual computer started by a virtual computer name resource.
Dynamic DNS monitor resource (ddnsw)
Periodically registers a virtual host name and the IP address of the active server to the dynamic DNS server.
Virtual IP monitor resource (vipw)
Provides a monitoring mechanism of the IP address started by a virtual IP resource.
CIFS monitor resource (cifsw)
Provides a monitoring mechanism of the shared folder disclosed by a CIFS resource.
Hybrid disk monitor resource (hdw)
Provides a monitoring mechanism of the hybrid disk.
Hybrid disk TUR monitor resource (hdtw)
Provides a monitoring mechanism for the behavior of the access path to the shared disk device used as a hybrid disk by the TestUnitReady command. It can be used for a shared disk using FibreChannel.
Custom monitor resource (genw)
Provides a monitoring mechanism that monitors the system based on the results of arbitrary monitoring commands or scripts. (A minimal example appears after the notes below.)
Process name monitor resource (psw)
Provides a monitoring mechanism for checking whether a process specified by a process name is active.
DB2 monitor resource (db2w)
Provides a monitoring mechanism for the IBM DB2 database.
ODBC monitor resource (odbcw)
Provides a monitoring mechanism for the database that can be accessed by ODBC.
Oracle monitor resource (oraclew)
Provides a monitoring mechanism for the Oracle database.
PostgreSQL monitor resource (psqlw)
Provides a monitoring mechanism for the PostgreSQL database.
SQL Server monitor resource (sqlserverw)
Provides a monitoring mechanism for the SQL Server database.
FTP monitor resource (ftpw)
Provides a monitoring mechanism for the FTP server.
HTTP monitor resource (httpw)
Provides a monitoring mechanism for the HTTP server.
IMAP4 monitor resource (imap4w)
Provides a monitoring mechanism for the IMAP server.
POP3 monitor resource (pop3w)
Provides a monitoring mechanism for the POP server.
SMTP monitor resource (smtpw)
Provides a monitoring mechanism for the SMTP server.
Tuxedo monitor resource (tuxw)
Provides a monitoring mechanism for the Tuxedo application server.
WebLogic monitor resource (wlsw)
Provides a monitoring mechanism for the WebLogic application server.
WebSphere monitor resource (wasw)
Provides a monitoring mechanism for the WebSphere application server.
WebOTX monitor resource (otxw)
Provides a monitoring mechanism for the WebOTX application server.
External link monitor resource (mrw)
Specifies the action to take when an error message is received and how the message is displayed on the Cluster WebUI.
JVM monitor resource (jraw)
Provides a monitoring mechanism for Java VM.
System monitor resource (sraw)
Provides a monitoring mechanism for the resources of the whole system.
Process resource monitor resource (psrw)
Provides a monitoring mechanism for running processes on the server.
User mode monitor resource (userw)
Provides a stall monitoring mechanism for the user space and a function for performing failover by an intentional STOP error or an HW reset at the time of a user space stall.
LB probe port monitor resource (lbppw)
Provides a monitoring mechanism for ports for alive monitoring for the node where a LB probe port resource has been activated.
AWS Elastic Ip monitor resource (awseipw)
Provides a monitoring mechanism for the elastic ip given by the AWS elastic ip (referred to as EIP) resource.
AWS Virtual Ip monitor resource (awsvipw)
Provides a monitoring mechanism for the virtual ip given by the AWS virtual ip (referred to as VIP) resource.
AWS Secondary Ip monitor resource (awssipw)
Provides a monitoring mechanism for the secondary ip given by the AWS secondary ip resource.
AWS AZ monitor resource (awsazw)
Provides a monitoring mechanism for an Availability Zone (referred to as AZ).
AWS DNS monitor resource (awsdnsw)
Provides a monitoring mechanism for the virtual host name and IP address provided by the AWS DNS resource.
Azure probe port monitor resource (azureppw)
Provides a monitoring mechanism for ports for alive monitoring for the node where an Azure probe port resource has been activated.
Azure load balance monitor resource (azurelbw)
Provides a mechanism for monitoring whether the port number that is same as the probe port is open for the node where an Azure probe port resource has not been activated.
Azure DNS monitor resource (azurednsw)
Provides a monitoring mechanism for the virtual host name and IP address provided by the Azure DNS resource.
Google Cloud virtual IP monitor resource (gcvipw)
Provides a mechanism for monitoring the alive-monitoring port for the node where a Google Cloud virtual IP resource has been activated.
Google Cloud load balance monitor resource (gclbw)
Provides a mechanism for monitoring whether the same port number as the health-check port number has already been used, for the node where a Google Cloud virtual IP resource has not been activated.
Google Cloud DNS monitor resource (gcdnsw)
Provides a monitoring mechanism for the virtual host name and IP address provided by the Google Cloud DNS resource.
Oracle Cloud virtual IP monitor resource (ocvipw)
Provides a mechanism for monitoring the alive-monitoring port for the node where an Oracle Cloud virtual IP resource has been activated.
Oracle Cloud load balance monitor resource (oclbw)
Provides a mechanism for monitoring whether the same port number as the health-check port number is already in use, for the node where an Oracle Cloud virtual IP resource has not been activated.
Oracle Cloud DNS monitor resource (ocdnsw)
Provides a monitoring mechanism for the virtual host name and IP address provided by the Oracle Cloud DNS resource.
Note
To use the DB2 monitor resource, ODBC monitor resource, Oracle monitor resource, PostgreSQL monitor resource, and SQL Server monitor resource, the EXPRESSCLUSTER X Database Agent license is required.
To use the FTP monitor resource, HTTP monitor resource, IMAP4 monitor resource, POP3 monitor resource and SMTP monitor resource, the EXPRESSCLUSTER X Internet Server Agent license is required.
To use Tuxedo monitor resource, WebLogic monitor resource, WebSphere monitor resource and WebOTX monitor resource, the EXPRESSCLUSTER X Application Server Agent license is required.
To use the JVM monitor resources, the EXPRESSCLUSTER X Java Resource Agent license is required.
To use the system monitor resources and the process resource monitor resources, the EXPRESSCLUSTER X System Resource Agent license is required.
The above monitor resources are not listed on the monitor resource list of the Cluster WebUI if their licenses are not registered.
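For the custom monitor resource referenced above, the monitored object is a command or script whose exit code reports the status; conventionally 0 is treated as normal and non-zero as an error (confirm the expected codes in the "Reference Guide"). A minimal hypothetical sketch, reusing the sample data partition F: from the configuration examples:

    @echo off
    rem genw-style monitoring script: exit 0 = normal, non-zero = error.
    rem Hypothetical check: the data partition must be accessible.
    if not exist F:\appdata\ exit /b 1
    exit /b 0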
When switching to the asynchronous method, changing the queue size, or changing the differential bitmap size, additional memory is required. Memory consumption increases as disk load increases, because memory is used in proportion to mirror disk I/O.
For the required size of a partition for a DISK network partition resolution resource, see "Partition for shared disk".
The use of the JVM monitor requires a Java runtime environment.
Java(TM) Runtime Environment
Version 8.0 Update 11 (1.8.0_11) or later
Java(TM) Runtime Environment
Version 9.0 (9.0.1) or later
Java(TM) SE Development Kit
Version 11.0 (11.0.5) or later
Java(TM) SE Development Kit
Version 17.0 (17.0.2) or later
Java(TM) SE Development Kit
Version 21.0 (21.0.3) or later
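You can check which Java runtime the server will use from a command prompt; the version reported must meet the requirements above:

    rem Verify the Java runtime found on PATH.
    java -version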
4.2.6. Operation environment for the system monitor, the process resource monitor, or the function of collecting system resource information
The use of the System Resource Agent requires the Microsoft .NET Framework environment.
Microsoft .NET Framework 4.6.2 or later
Note
On Windows Server 2016 or later, .NET Framework 4.6.2 or later is pre-installed (the pre-installed version varies depending on the OS).
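To verify the installed .NET Framework version, you can query the Release value in the registry; per Microsoft's published mapping, a value of 394802 or greater generally indicates 4.6.2 or later:

    rem .NET Framework 4.x version check (Release >= 394802 means 4.6.2+).
    reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full" /v Release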
4.2.7. Operation environment for AWS Elastic IP resource, AWS Elastic IP monitor resource and AWS AZ monitor resource
The use of the AWS Elastic IP resource, AWS Elastic IP monitor resource, and AWS AZ monitor resource requires the following software.
Software: AWS CLI
Version: 1.12.0 or later, or 2.0.0 or later
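You can confirm the installed CLI version from a command prompt; the same pattern applies to the other CLIs required in the following sections (az version, gcloud version, oci --version):

    rem Verify the AWS CLI version against the requirement above.
    aws --version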
4.2.8. Operation environment for AWS Virtual IP resource and AWS Virtual IP monitor resource
The use of the AWS Virtual IP resource and AWS Virtual IP monitor resource requires the following software.
Software: AWS CLI
Version: 1.12.0 or later, or 2.0.0 or later
4.2.9. Operation environment for AWS secondary IP resource and AWS Secondary IP monitor resource
The use of the AWS Secondary IP resource and AWS Secondary IP monitor resource requires the following software.
Software: AWS CLI
Version: 1.12.0 or later, or 2.0.0 or later
4.2.10. Operation environment for AWS DNS resource and AWS DNS monitor resource
The use of the AWS DNS resource and AWS DNS monitor resource requires the following software.
Software: AWS CLI
Version: 1.12.0 or later, or 2.0.0 or later
4.2.11. Operation environment for AWS forced stop resource
The use of the AWS forced stop resource requires the following software.
Software: AWS CLI
Version: 1.15.0 or later, or 2.0.0 or later
4.2.12. Operation environment for Azure DNS resource and Azure DNS monitor resource
The use of the Azure DNS resource and Azure DNS monitor resource requires the following software.
Software: Azure CLI
Version: 2.0 or later
Remarks: Use the 64-bit version.
4.2.13. Operation environment for Azure forced stop resource
The use of the Azure forced stop resource requires the following software.
Software: Azure CLI
Version: 2.0 or later
Remarks: Use the 64-bit version.
4.2.14. Operation environments for Google Cloud DNS resource, Google Cloud DNS monitor resource
The use of the Google Cloud DNS resource and Google Cloud DNS monitor resource requires the following software.
Software: Google Cloud SDK
Version: 295.0.0 or later
4.2.15. Operation environments for Oracle Cloud DNS resource, Oracle Cloud DNS monitor resource
The use of the Oracle Cloud DNS resource and Oracle Cloud DNS monitor resource requires the following software.
Software: OCI CLI
Version: 3.27.1 or later
4.2.16. Operation environment for OCI forced stop resource
The use of the OCI forced stop resource requires the following software.
Software: OCI CLI
Version: 3.5.3 or later
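The cloud CLIs required in 4.2.12 through 4.2.16 can be checked for in one pass. The sketch below is our own; the version banners printed by each tool are assumptions rather than guaranteed formats.

```python
# Minimal sketch: confirm the Azure CLI, Google Cloud SDK, and OCI CLI
# are installed and on PATH, printing each tool's version banner.
import shutil
import subprocess

REQUIRED = {
    "az": "Azure CLI 2.0 or later (use the 64-bit version)",
    "gcloud": "Google Cloud SDK 295.0.0 or later",
    "oci": "OCI CLI 3.5.3 or later (3.27.1 or later for Oracle Cloud DNS)",
}

for cmd, requirement in REQUIRED.items():
    path = shutil.which(cmd)
    if path is None:
        print(f"MISSING: {cmd} -- {requirement}")
        continue
    proc = subprocess.run([cmd, "--version"], capture_output=True, text=True)
    first_line = proc.stdout.splitlines()[0] if proc.stdout else "(no banner)"
    print(f"found {cmd} at {path}: {first_line}")
```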
4.2.17. Operation environment for enabling encryption
For EXPRESSCLUSTER components, enabling communication encryption requires the following software:
Software: OpenSSL
Version (any of the following):
- 1.1.1 (1.1.1a or later)
- 3.0 (3.0.0 or later)
- 3.1 (3.1.0 or later)
- 3.2 (3.2.0 or later)
- 3.3 (3.3.0 or later)
- 3.4 (3.4.0 or later)
- 3.5 (3.5.0 or later)
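To confirm which OpenSSL series a server provides, the version banner can be parsed; the sketch below is our own illustration, not an EXPRESSCLUSTER utility.

```python
# Minimal sketch: check whether the OpenSSL binary on PATH belongs to one of
# the supported series. "openssl version" prints a banner such as
# "OpenSSL 3.0.13 30 Jan 2024" or "OpenSSL 1.1.1w 11 Sep 2023".
# (The "1.1.1a or later" letter-suffix check is omitted for brevity.)
import re
import subprocess

SUPPORTED_SERIES = {"1.1.1", "3.0", "3.1", "3.2", "3.3", "3.4", "3.5"}

banner = subprocess.run(
    ["openssl", "version"], capture_output=True, text=True, check=True
).stdout.strip()
m = re.search(r"OpenSSL (\d+)\.(\d+)\.(\d+)", banner)
if not m:
    raise SystemExit("could not parse: " + banner)
major, minor, patch = m.groups()
# 1.x releases are identified down to the patch level (e.g. "1.1.1").
series = f"{major}.{minor}.{patch}" if major == "1" else f"{major}.{minor}"
print("supported" if series in SUPPORTED_SERIES else "unsupported", "-", banner)
```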
The following components support communication encryption using the above software:
When using an IP address to connect to Cluster WebUI, the IP address must be added to the Local intranet zone in advance.
Note
No mobile devices, such as tablets and smartphones, are supported.
Note
When upgrading EXPRESSCLUSTER, it is recommended to upgrade your browser as well. If the browser version is outdated, the Cluster WebUI screen may not display correctly.
5.1. Correspondence list of EXPRESSCLUSTER and a manual
The descriptions in this manual assume the following version of EXPRESSCLUSTER. Be sure to check which EXPRESSCLUSTER version corresponds to which edition of the manual.
The following features and improvements have been released.
No.
Internal Version
Contents
1
13.00
Windows Server 2022 is now supported.
2
13.00
Along with the major upgrade, some functions have been removed. For details, refer to the list of removed functions.
3
13.00
Added a function to suppress the automatic failover against a server crash, collectively in the whole cluster.
4
13.00
Added a function to give a notice in an alert log that the server restart count was reset as the final action against the detected activation error or deactivation error of a group resource or against the detected error of a monitor resource.
5
13.00
Added a function to exclude a server (with an error detected by a specified monitor resource) from the failover destination, for the automatic failover other than dynamic failover.
6
13.00
Added the clpfwctrl command for adding a firewall rule.
7
13.00
Added AWS secondary IP resources and AWS secondary IP monitor resources.
8
13.00
The forced stop function using BMC has been redesigned as a BMC forced-stop resource.
9
13.00
Redesigned the function for forcibly stopping virtual machines as a vCenter forced-stop resource.
10
13.00
The forced stop function in the AWS environment has been added to forced stop resources.
11
13.00
The forced stop function in the OCI environment has been added to forced stop resources.
12
13.00
Redesigned the forced stop script as a custom forced-stop resource.
13
13.00
Added a function to collectively change actions (followed by OS shutdowns such as a recovery action following an error detected by a monitor resource) into OS reboots.
14
13.00
Improved the alert message regarding the wait process for start/stop between groups.
15
13.00
The configuration information display option of the clpstat command now allows displaying the setting value of the resource start attribute.
16
13.00
The clpcl/clpstdn command now allows specifying the -h option even when the cluster service on the local server is stopped.
17
13.00
A warning message is now displayed when Cluster WebUI is connected via a non-actual IP address and is switched to config mode.
18
13.00
In the config mode of Cluster WebUI, cluster configuration data can now be applied and exported even when information on the partition to be excluded cannot be obtained.
19
13.00
In the config mode of Cluster WebUI, a group can now be deleted with the group resource registered.
20
13.00
Changed the content of the error message that a communication timeout occurred in Cluster WebUI.
21
13.00
Changed the content of the error message that executing the full copy failed on the mirror disk screen in Cluster WebUI.
22
13.00
Added a function to copy a group, group resource, or monitor resource registered in the config mode of Cluster WebUI.
23
13.00
Added a function to move a group resource registered in the config mode of Cluster WebUI, to another group.
24
13.00
The settings can now be changed at the group resource list of [Group Properties] in the config mode of Cluster WebUI.
25
13.00
The settings can now be changed at the monitor resource list of [Monitor Common Properties] in the config mode of Cluster WebUI.
26
13.00
The dependency during group resource deactivation is now displayed in the config mode of Cluster WebUI.
27
13.00
Added a function to display a dependency diagram at the time of group resource activation/deactivation in the config mode of Cluster WebUI.
28
13.00
Added a function to narrow down a range of display by type or resource name of a group resource or monitor resource on the status screen of Cluster WebUI.
29
13.00
The default value for [Errors in restoring file share setting are treated as activity failure] of CIFS resource has been changed from [On] to [Off].
30
13.00
An intermediate certificate can now be used as a certificate file when HTTPS is used for communication in the WebManager service.
31
13.00
Added the clpcfconv command, which changes the cluster configuration data file from the old version to the current one.
32
13.00
Added a function to delay the start of the cluster service for starting the OS.
33
13.00
Details such as measures can now be displayed for error results of checking cluster configuration data in Cluster WebUI.
34
13.00
The OS type can be specified for specifying the create option of the clpcfset command.
35
13.00
Added a function to delete a resource or parameter from cluster configuration data, which is enabled by adding the del option to the clpcfset command.
36
13.00
Added the clpcfadm.py command, which enhances the interface for the clpcfset command.
37
13.00
The start completion timing of an AWS DNS resource has been changed to before it is confirmed that the record set has propagated to AWS Route 53.
38
13.00
Changed the default value for [Wait Time to Start Monitoring] of AWS DNS monitor resources to 300 seconds.
39
13.00
Multiple instances of the clpstat command can now run simultaneously.
40
13.00
Added the Node Manager service.
41
13.00
Added a function for statistical information on heartbeat.
42
13.00
A proxy server can now be used even when a Witness heartbeat resource is not used for an HTTP NP resolution resource.
43
13.00
HTTP monitor resources now support digest authentication.
44
13.00
FTP monitor resources can now monitor FTP servers that use FTPS.
45
13.00
Multiple system monitor resources can now be registered.
46
13.00
Multiple process resource monitor resources can now be registered.
47
13.00
Added a function to target only specific processes for a process resource monitor resource.
48
13.00
A single service monitor resource alone can now monitor any service.
49
13.00
The options for the clpmdctrl and clpmdstat commands have been made the same as those for the clphdctrl and clphdstat commands.
50
13.02
JVM monitor resource supports Apache Tomcat 10.0.
51
13.10
Added protection against vulnerabilities (CVE-2022-34824 and CVE-2022-34825): a feature for appropriately giving permission to the installation folder during installation.
52
13.10
Added SMTPS and STARTTLS support for the mail reporting function.
53
13.10
Added a forced stop function for the Azure environment to the forced stop resource.
54
13.10
Added a forced stop function for the vCenter forced stop resource used with vSphere Automation APIs.
55
13.10
Allowed specifying a log-file storage period.
56
13.10
Expanded the check items of cluster configuration data.
57
13.10
Allowed changing the transmission source IP address of a floating IP resource.
58
13.10
Accelerated the initial construction and full copying of a ReFS-based mirror disk resource.
59
13.10
Added a feature for allowing a failover on a mirror break.
60
13.10
Allowed registering the following monitor resources with the multi target monitor resource:
- AWS Elastic IP monitor resource
- AWS Virtual IP monitor resource
- AWS Secondary IP monitor resource
- AWS AZ monitor resource
- AWS DNS monitor resource
- Azure probe port monitor resource
- Azure load balance monitor resource
- Azure DNS monitor resource
- Google Cloud Virtual IP monitor resource
- Google Cloud load balance monitor resource
- Google Cloud DNS monitor resource
- Oracle Cloud Virtual IP monitor resource
- Oracle Cloud load balance monitor resource
61
13.10
Added a feature for setting as a warning a value returned from the specified script, to custom monitor resources.
62
13.10
Added support for SQL Server 2022 for SQL Server monitor resources.
63
13.10
Added support for PostgreSQL 15.1 for PostgreSQL monitor resources.
64
13.10
Eliminated the need for Python for configurations in AWS environments where only AWS Virtual IP resources and AWS Virtual IP monitor resources are used.
65
13.10
Allowed using Cluster WebUI to specify environment variables for AWS-related features to access instance metadata and the AWS CLI.
66
13.10
Added a feature for specifying command line options for the AWS CLI accessed by AWS-related features.
67
13.10
Added support for WebSAM SVF PDF Enterprise 10.1 for JVM monitor resources.
68
13.10
Added support for WebSAM RDE SUITE 10.1 for JVM monitor resources.
69
13.10
Added support for WebSAM SVF Connect SUITE Standard 10.1 for JVM monitor resources.
70
13.10
Added a feature for outputting process resource statistics.
71
13.10
Added support for client authentication for HTTP monitor resources.
72
13.10
Added support for OpenSSL 3.0 for FTP monitor resources.
73
13.10
Added a feature for JVM monitor resources to output retry count information to the operation log.
74
13.10
Added support for Java 17 for JVM monitor resources.
75
13.10
Removed support for Java 7 from JVM monitor resources.
76
13.10
Allowed a shutdown in case of an NP state due to the abnormal statuses of all PING NP resolution resources.
77
13.10
Added an option for the clpbackup command not to perform a server shutdown or restart.
78
13.10
Added an option in the clpcfadm.py command to create a backup file of existing cluster configuration data.
79
13.10
Allowed Cluster WebUI to display its operation log.
80
13.10
Implemented measures to safeguard against changes in cluster configuration data during the mirroring process.
81
13.10
Added support for OpenSSL 3.0 for Cluster WebUI.
82
13.10
Disabled TLS 1.1 for the HTTPS connection of Cluster WebUI.
83
13.10
Added a feature for Cluster WebUI to apply cluster configuration data only to the servers with which it can communicate.
84
13.10
Added a feature for the status screen of Cluster WebUI to list settings with which cluster operation is disabled.
85
13.10
Added features for the config mode of Cluster WebUI to display or hide and to sort the following:
- Group resource list in [Group Properties]
- Monitor resource list in [Monitor Resources Common Properties]
86
13.10
Changed [Accessible number of clients] in the cluster properties: renamed it to [Number of sessions which can be established simultaneously] and changed its lower limit value.
87
13.10
Hid [Received time] by default in the Alert logs of Cluster WebUI.
88
13.10
Changed the description of the [Restart the manager] button on the status screen of Cluster WebUI to "Restart WebManager service".
89
13.10
Allowed [Copy the group] in the config mode of Cluster WebUI to copy group resources' dependency on a case-by-case basis as well.
90
13.10
Implemented safeguards in Cluster WebUI to prevent configuration errors of AWS DNS resources.
91
13.10
Implemented safeguards in Cluster WebUI to prevent configuration errors with [Monitor Type] of custom monitor resources set to [Asynchronous].
92
13.10
Implemented safeguards in Cluster WebUI to prevent configuration errors of the PING NP resolution resource.
93
13.10
Allowed distinguishing in cluster statistics between automatic failover due to error detection and manual failover.
94
13.11
Added support for OpenSSL 3.0 for RESTful API.
95
13.11
Added support for OpenSSL 3.0 for Witness heartbeat resources.
96
13.11
Added support for OpenSSL 3.0 for HTTP network partition resolution resources.
97
13.12
Added support for OpenSSL 3.1 for the following functions:
- Cluster WebUI
- RESTful API
- Mirror disk resources
- Hybrid disk resources
- FTP monitor resources
- Mail report
98
13.20
Allowed collecting log files for investigation when a failure is detected in a group/monitor/forced-stop resource, and downloading the log files from the Alert logs of Cluster WebUI.
99
13.20
Changed the action against a stop failure of a group targeted for awaiting its stop: The stop timeout is no longer awaited.
100
13.20
Added Oracle Cloud DNS resources and Oracle Cloud DNS monitor resources.
101
13.20
Changed the default dependency values of the following group resources:
- Azure probe port resources
- Google Cloud virtual IP resources
- Oracle Cloud virtual IP resources
- Script resources
- Application resources
- Service resources
- Dynamic DNS resources
- Registry synchronization resources
- Virtual computer name resources
102
13.20
Eliminated the need for Python for the following AWS-related resources and monitor resources:
- AWS Elastic IP resources
- AWS DNS resources
- AWS Elastic IP monitor resources
- AWS DNS monitor resources
- AWS AZ monitor resources
103
13.20
Supported environments where HTTP/1.1 is required for HTTP monitor resources.
104
13.20
Added POP3S as an authentication method of POP3 monitor resources.
105
13.20
Changed the operation environment for system monitoring, process resource monitoring, and system resource information, to Microsoft .NET Framework 4.6.2 or higher.
106
13.20
Supported WebOTX V11.1 for WebOTX monitor resources.
107
13.20
Supported WebOTX V11.1 for JVM monitor resources.
108
13.20
Supported Oracle Tuxedo 22c (22.1.0) for Tuxedo monitor resources.
109
13.20
Allowed specifying a URI as a target for an HTTP network partition resolution resource.
110
13.20
Supported giving a notice in an alert log, in an environment where an AWS forced-stop resource is set, that protection against stopping an EC2 instance is enabled.
111
13.20
Added, for the forced-stop action of the Azure forced-stop resource, an option to immediately stop without resource deallocation.
112
13.20
Allowed checking the status of a mirror/hybrid disk resource with a value returned by the clpmdstat/clphdstat command.
113
13.20
Changed the folder that stores script files specified with the --script option of the clprexec command from work\trnreq to work\rexec, to match the command name.
114
13.20
Provided more error messages about cloud-related functions.
115
13.20
Modified the type of message outputted when a server goes down.
116
13.20
Allowed outputting the RESTful API operation log to the server.
117
13.20
Added an API for getting the following metrics information with the RESTful API:
- Group's continuous operation time
- Date and time when cluster configuration data was last applied
118
13.20
Provided more check items for cluster configuration data to be checked.
119
13.20
Reduced the process time for cluster configuration data to be checked.
120
13.20
Added time data to the name of a cluster configuration data file (.zip) to be saved with [Exporting the setting] of Cluster WebUI.
121
13.20
Supported making a warning pop up with [Action at NP Occurrence] changed to any of the following options in the config mode of Cluster WebUI:
- Stop the cluster service
- Stop the cluster service and shutdown OS
- Stop the cluster service and reboot OS
122
13.20
Supported displaying server statuses in color in the status tab of Cluster WebUI.
123
13.20
Changed the display position of a pop-up alert in Cluster WebUI, from the upper right to the lower right.
124
13.20
Supported displaying the expiry date and remaining days of the license in the operation mode of Cluster WebUI.
125
13.21
The HBA filtering settings configured during the installation of EXPRESSCLUSTER are now automatically imported when the cluster configuration data is uploaded.
126
13.21
Added support for OpenSSL 3.2 and OpenSSL 3.3 for the following functions:
- Cluster WebUI
- RESTful API
- Witness heartbeat resources
- HTTP network partition resolution resources
- FTP monitor resources
- POP3 monitor resources
127
13.21
Added support for PostgreSQL 16.3 for PostgreSQL monitor resources.
128
13.30
Added support for Windows Server 2025.
129
13.30
Allowed specifying more than one receiver for the Amazon SNS linkage function.
130
13.30
Added a function for alerting the user to a failure in notification to a receiver specified with the Amazon SNS linkage function.
131
13.30
Allowed checking a cluster's status with a value returned by the clpstat command.
132
13.30
Changed the specifications so that starting/stopping a target with the clpgrp or clprsc command is treated as success if the target is already started/stopped.
133
13.30
Changed the name of [User URI], an item to set Azure-related resources, to [Application ID].
134
13.30
Added support for SSH for the network warning light feature.
135
13.30
Enabled [Disable Group Failover When Execution Fails], an item for forced stop resources, by default.
136
13.30
Simplified how to script custom forced-stop resources.
137
13.30
Modified the default dependency values of the following group resources:
- AWS DNS resources
- Azure DNS resources
- Dynamic DNS resources
- Google Cloud DNS resources
- Oracle Cloud DNS resources
- Application resources
- Script resources
- Registry synchronization resources
- Service resources
138
13.30
Improved some expressions in the alert service configuration window opened from Cluster WebUI.
139
13.30
Allowed choosing IPMI or Redfish from [Server Properties] for using a BMC.
140
13.30
Added the feature of dummy server failure to the verification mode.
141
13.30
Changed the expression of the [Password] item (on the button and label) seen in, for example, the monitor resource properties in the config mode of Cluster WebUI.
142
13.30
Added Integrated Cluster WebUI.
143
13.30
Added LB probe port resources and LB probe port monitor resources.
144
13.30
Added the following features for RESTful APIs:
- Generating a dummy failure in a monitor resource
- Clearing a dummy failure in a monitor resource
145
13.30
Added clpalttrace, a command for exporting a file of server-specific alert logs.
146
13.30
Improved the behavior so that neither a pre- nor a post-deactivation script will be executed during an emergency shutdown.
147
13.30
Added support for Apache Tomcat 10.1 for JVM monitor resources.
148
13.30
Added support for Java 21 for JVM monitor resources.
149
13.30
Added support for PostgreSQL 17.2 for PostgreSQL monitor resources.
150
13.30
Added support for OpenSSL 3.4 for the following features:
- Cluster WebUI
- RESTful APIs
- FTP monitor resources
- POP3 monitor resources
- Mail reporting
151
13.31
Added support for DB2 v12 for DB2 monitor resource.
152
13.31
Added support for OpenSSL 3.5 for the following features:
- Cluster WebUI
- FTP monitor resources
- POP3 monitor resources
- Mail reporting
153
13.31
Added support for clpfwctrl.sh command for LB probe port resources.
154
13.31
When adding a Google Cloud DNS resource in the config mode of Cluster WebUI, a Google Cloud DNS monitor resource is now automatically added.
155
13.31
When adding an AWS secondary IP resource in the config mode of Cluster WebUI, an AWS secondary IP monitor resource is now automatically added.
156
13.31
Added support for forced stop via Redfish on BMCs other than iLO (such as ASPEED AST2600).
Modifications have been made in the following minor versions.
Critical level:
L: Operation may stop. Data destruction or mirror inconsistency may occur. Setup may not be executable.
M: Operation stop should be planned for recovery. The system may stop if duplicated with another fault.
S: A matter of displaying messages. Recovery can be made without stopping the system.
No.
Version in which the problem has been solved / Version in which the problem occurred
Phenomenon
Level
Occurrence condition/Occurrence frequency
1
13.00
/ 9.00 to 12.32
In a group, when a group resource alone is successfully activated, the restoration of another group resource may be executed.
S
This problem occurs in a group where a group resource alone is activated with another group resource failing in activation.
2
13.00
/ 12.10 to 12.32
In the config mode of Cluster WebUI, modifying a comment on a group resource may not be applied.
S
This problem occurs in the following case: A comment on a group resource is modified, the [Apply] button is clicked, the change is undone, and then the [OK] button is clicked.
3
13.00
/ 12.10 to 12.32
In the config mode of Cluster WebUI, modifying a comment on a monitor resource may not be applied.
S
This problem occurs in the following case: A comment on a monitor resource is modified, the [Apply] button is clicked, the change is undone, and then the [OK] button is clicked.
4
13.00
/ 12.10 to 12.32
When Cluster WebUI is connected to a stopped server, the [Recover] button remains disabled for a server restarting after its crash.
S
This problem occurs in the following case: When Cluster WebUI is connected to a stopped server, there is a server restarting after its crash.
5
13.00
/ 12.10 to 12.32
In the config mode of Cluster WebUI, the [Install Path] item is not required to be entered in the [Monitor (special)] tab of a WebLogic monitor resource.
S
This problem always occurs.
6
13.00
/ 12.00 to 12.32
In the status screen of Cluster WebUI, a communication timeout during the operation of a cluster causes a request to be repeatedly issued.
M
This problem always occurs when a communication timeout occurs between Cluster WebUI and a cluster server.
7
13.00
/ 12.10 to 12.32
Cluster WebUI may freeze when dependency is set in the config mode of Cluster WebUI.
S
This problem occurs when two group resources are made dependent on each other.
8
13.00
/ 12.20 to 12.32
The response of the clpstat command may be delayed.
S
This problem may occur when communication with other servers is cut off.
9
13.00
/ 11.10 to 12.32
In the alert log for a delay warning of a monitor resource, the response time may read zero (0).
S
This problem may occur when the alert log for a delay warning of a monitor resource is outputted.
10
13.00
/ 12.20 to 12.32
An AP error of clpwebmc may occur.
S
This problem rarely occurs when cluster configuration data with a server removed is applied in the config mode of Cluster WebUI.
11
13.00
/ 12.00 to 12.32
A monitor resource may mistakenly detect a monitoring timeout.
M
This problem very rarely occurs when a monitoring process is executed by a monitor resource.
12
13.00
/ 12.20 to 12.32
An error occurs when the status code of a target response is 301 in an HTTP NP resolution resource.
S
This problem occurs when the response status code is 301.
13
13.00
/ 12.00 to 12.32
In [Monitoring usage of memory] for process resource monitor resources, [Duration time (min)] has been replaced with [Maximum Refresh Count (time)].
S
This problem occurs when the properties are displayed with Cluster WebUI or the clpstat command.
14
13.00
/ 12.00 to 12.32
In an HTTP monitor resource, a warning instead of an error is issued in the following case: The status code of a response to an issued HEAD request is in the 400s or 500s, and a non-default URI is specified as the monitor URI.
S
This problem occurs in the following case: The status code of a response to an issued HEAD request is in the 400s or 500s, and a non-default URI is specified as the monitor URI.
15
13.00
/ 12.10 to 12.32
In a custom monitor resource, when the process of a script to be monitored disappears, the corresponding monitor resource name is not outputted to the alert message.
S
This problem occurs when the process of a script to be monitored disappears in a custom monitor resource.
16
13.00
/ 11.01 to 12.32
A response to a mirror-related command may take time.
S
This problem occurs when a mirror disk connection is disconnected or when some of the servers constituting a cluster are down.
17
13.00
/ 12.20 to 12.32
The EXPRESSCLUSTER Information Base service may abend.
S
This problem very rarely occurs when one of the following is performed:
- Cluster startup
- Cluster stop
- Cluster suspension
- Cluster resumption
18
13.01
/ 9.00 to 12.32, 13.00
The vulnerabilities of CVE-2021-20700 to 20707 may cause the following acts by third parties:
- Execution of an arbitrary code
- Upload of an arbitrary file
- Reading of an arbitrary file
L
These problems occur when a specific process in EXPRESSCLUSTER receives a packet crafted by a malicious third party against the internal protocol of EXPRESSCLUSTER.
19
13.01
/ 13.00
For the clprexec command, the --script option does not work.
S
This problem occurs when the clprexec command is executed with the --script option specified.
20
13.01
/ 13.00
After a forced-stop resource is added by executing the clpcfset command, the cluster fails to start up.
S
This problem occurs during an attempt to start up a cluster to which cluster configuration data (including a forced-stop resource added by executing the clpcfset command) was applied.
21
13.02
/ 13.00 to 13.01
The EXPRESSCLUSTER Node Manager service starts without waiting for a service startup delay time.
S
This problem occurs with [Service Startup Delay Time] set to a value larger than zero seconds.
22
13.02
/ 13.01
Update installation registers the EXPRESSCLUSTER Old API Support service.
S
This problem occurs with the internal version 13.00 updated to 13.01.
23
13.02
/ 13.00 to 13.01
After a server is removed from the [Servers that can run the Group] list of the failover group, trying to apply the configuration data does not lead to a group-stop request.
S
This problem occurs in the following case: After a server is removed from the [Servers that can run the Group] list of the failover group, applying the configuration data is tried.
24
13.02
/ 13.00 to 13.01
The STOP error may occur during the application of cluster configuration data including a mirror/hybrid disk resource.
M
This problem occurs with the mirror/hybrid disk resource named with eight or more characters.
25
13.02
/ 13.00 to 13.01
A monitor resource may detect a monitoring timeout by mistake.
S
This problem occurs on very rare occasions during a monitoring process by the monitor resource.
26
13.02
/ 13.00 to 13.01
When [Recovery Action tab] for a monitor resource is set with [Generate an intentional stop error], the recovery action may not be performed.
S
This problem occurs on rare occasions when the recovery action is tried.
27
13.02
/ 13.00 to 13.01
An initialization error may occur in a kernel mode LAN heartbeat resource during a cluster service start.
M
This problem occurs when the kernel mode LAN heartbeat resource starts up with the network device yet to become available.
28
13.02
/ 12.00 to 13.01
A cluster service stop as an action at NP occurrence is not completed.
M
This problem occurs with [Action at NP Occurrence] set to [Stop the cluster service].
29
13.02
/ 13.00 to 13.01
Forcibly stopping more than one server may fail.
S
This problem occurs on rare occasions when one of three or more servers in a cluster tries to forcibly stop other servers.
30
13.02
/ 9.00 to 13.01
An application error may occur with the clpstat command.
S
This problem occurs in an environment where a failover group is set with no group resources registered.
31
13.02
/ 13.00 to 13.01
With a cluster suspended, Cluster WebUI or the clpstat command may show the server status as stopped.
S
This problem occurs when both of the following services are restarted with the cluster suspended:
- EXPRESSCLUSTER Node Manager
- EXPRESSCLUSTER Information Base
32
13.02
/ 13.00 to 13.01
A group/monitor resource status may be incorrectly shown.
S
This problem occurs with something wrong in the internal processing of cluster services during OS startup.
33
13.02
/ 13.00 to 13.01
Cluster WebUI or the clpstat command incorrectly shows the status of a server using no forced-stop resources.
S
This problem occurs when any of three or more servers in a cluster is configured not to use the forced-stop function.
34
13.02
/ 9.00 to 13.01
A STOP error may occur during OS startup or OS shutdown.
M
This problem occurs on very rare occasions during OS startup or OS shutdown.
35
13.02
/ 9.00 to 13.01
The vulnerabilities of CVE-2022-34822 to 34823 may cause the following acts by third parties:
- Reading of an arbitrary file
- Execution of an arbitrary code
L
These problems occur when a specific process in EXPRESSCLUSTER receives a packet crafted by a malicious third party against the internal protocol of EXPRESSCLUSTER.
36
13.10
/ 12.20 to 13.02
The EXPRESSCLUSTER Information Base service may abend.
S
This problem occurs on rare occasions when a cluster shutdown is performed.
37
13.10
/ 13.00 to 13.02
The clpnm.exe process may abend, leading to an OS restart.
M
This problem occurs on very rare occasions.
38
13.10
/ 13.00 to 13.02
After a cluster service is started up, an alert may be outputted due to an abnormal heartbeat.
S
This problem occurs on rare occasions after a cluster service is started up.
39
13.10
/ 12.00 to 13.02
A cluster may not be started up, due to a corrupted license file.
S
This problem occurs on rare occasions in the following case: While a cluster is being started up, its server is de-energized.
40
13.10
/ 12.00 to 13.02
Instead of a product version license, a fixed-term license may become active despite its expiration.
S
This problem occurs with both an unused fixed-term license and a product version license registered, when the former expires.
41
13.10
/ 13.00 to 13.02
The status of the BMC forced stop resource becomes abnormal.
S
This problem occurs with the iLO shared network port enabled.
42
13.10
/ 9.00 to 13.02
Failure in resuming a cluster may lead to its abend.
M
This problem occurs when a cluster is repeatedly suspended and resumed in the following environment: Two or more monitor resources are registered and each of their names consists of only one letter.
43
13.10
/ 13.00 to 13.02
When a server is shut down, the notification may not be sent.
S
This problem occurs on rare occasions during a server shutdown.
44
13.10
/ 12.10 to 13.02
A recovery script for a monitor resource may not be run.
S
This problem occurs in the following case: With [Execute Script before Recovery Action] on in Cluster WebUI, the user does not edit the script or simultaneously changes the script and something else.
45
13.10
/ 9.00 to 13.02
A monitor resource, configured to perform continuous monitoring, may not work.
S
This problem occurs in a monitor resource with the setting of [Monitor Timing] changed from [Active] to [Always].
46
13.10
/ 9.00 to 13.02
With [Service Name] of a service resource or service monitor resource set to the service display name of the service, the monitoring process may fail.
M
This problem occurs with a failure in obtaining the service name from the service display name.
47
13.10
/ 11.10 to 13.02
A CIFS monitor resource considers the monitoring result as normal by mistake.
S
This problem occurs at the time of the first monitoring by a CIFS monitor resource.
48
13.10
/ 12.10 to 13.02
[JVM Monitor Resource Tuning Properties] does not allow specifying a usage threshold for [Metaspace].
S
This problem always occurs.
49
13.10
/ 9.00 to 13.02
Hostname resolution by an HTTP monitor resource may fail even if the host is accessible.
S
This problem may occur when the hostname (not the IP address) is specified as a connection destination.
50
13.10
/ 13.00 to 13.02
With more than one DISK NP resolution resource configured, cluster resumption may cause an error message to be displayed.
S
This problem may occur depending on the timing.
51
13.10
/ 12.20 to 13.02
The display of the clpstat command may vary depending on the server where the command is executed.
S
This problem may occur when the command is executed on the server with the cluster service stopped.
52
13.10
/ 12.30 to 13.02
After the clpcfset command is executed to create cluster configuration data, its XML attribute value may be wrong.
S
This problem occurs when an ID attribute node is added by executing the clpcfset command.
53
13.10
/ 13.00 to 13.02
After the clpcfset command is executed to create cluster configuration data, its object count may be wrong.
S
This problem occurs when, by executing the clpcfset command, the object count is added to or deleted from the cluster configuration data including a forced stop resource.
54
13.10
/ 13.00 to 13.02
The clpcfadm.py command may not be correctly executed.
S
This problem occurs in the following case: Cluster WebUI executes the clpcfadm.py command on cluster configuration data from which all failover groups were deleted.
55
13.10
/ 13.00 to 13.02
The clpcfadm.py command may allow an invalid monitor resource to be configured.
S
This problem occurs in the following case: When the clpcfadm.py command is used to add a monitor resource, jra is specified as the type of monitor resource.
56
13.10
/ 13.00 to 13.02
After the clpcfadm.py command is executed to create cluster configuration data, its resource activation/deactivation timeout value may be wrong.
S
This problem occurs when executing the clpcfadm.py command changes the parameter requiring the calculation of the resource activation/deactivation timeout value.
57
13.10
/ 12.20 to 13.02
For a cluster with a RESTful API, obtaining its status may fail.
S
This problem may occur with the EXPRESSCLUSTER Information Base service restarted.
58
13.10
/ 12.20 to 13.02
A RESTful API may show the status of a cluster different from its actual status.
S
This problem may occur in the following case: The status is obtained while communication with other servers is cut off.
59
13.10
/ 12.20 to 13.02
A RESTful API may fail to collect information.
S
This problem occurs on rare occasions in the following case: An API for collecting information is executed just after an API for operation is executed.
60
13.10
/ 12.22 to 13.02
In group information retrieval with a RESTful API, an incorrect response to an exception may occur.
S
This problem may occur when a cluster server encounters an internal error.
61
13.10
/ 12.00 to 13.02
Display on Cluster WebUI may be delayed for a configuration with multiple mirror/hybrid disk resources registered.
S
This problem may occur when mirror recovery is performed for multiple resources.
62
13.10
/ 12.00 to 13.02
Cluster WebUI may fail to suspend mirror recovery.
S
This problem occurs in the following case: Mirror recovery suspension is tried with a browser session different from that of Cluster WebUI, where the mirror recovery was started; or the browser session of Cluster WebUI is reloaded during the mirror recovery.
63
13.10
/ 12.10 to 13.02
The cluster-creating wizard of Cluster WebUI fails to automatically register a floating IP monitor resource corresponding to [Management IP Address].
S
This problem occurs with [Management IP Address] registered through the cluster-creating wizard.
64
13.10
/ 12.30 to 13.02
Cluster WebUI may fail to obtain cloud environment information.
S
This problem occurs with Cluster WebUI connected via a proxy server.
65
13.10
/ 12.00 to 13.02
After [TTL] is changed for an Azure DNS resource in the config mode of Cluster WebUI, the change is not applied to the record.
S
This problem always occurs.
66
13.10
/ 12.10 to 13.02
When configuring strings such as a resource name in Cluster WebUI, consecutive spaces of two or more bytes are reduced to a single byte.
S
This problem occurs when the setting of cluster configuration data is changed while two or more bytes of spaces are input consecutively.
67
13.10
/ 12.10 to 13.02
In Cluster WebUI, when a group of PING NP resolution resources is added, the group list may be incorrectly displayed.
S
This problem may occur with one or more groups registered in the list of PING NP resolution resource groups.
68
13.11
/ 12.20 to 13.10
Applying cluster configuration data may fail.
S
This problem may occur when applying cluster configuration data repeatedly in the config mode of the Cluster WebUI.
69
13.11
/ 12.30 to 13.10
Cluster operation may be disabled.
S
This problem occurs in an environment where both a CPU license and a VM node license are registered.
70
13.11
/ 13.00 to 13.10
When the EXPRESSCLUSTER service starts, a failover group may not be started.
M
This problem may occur in the following case: The EXPRESSCLUSTER service of each server is stopped one server at a time, and then the EXPRESSCLUSTER service is started.
71
13.11
/ 11.30 to 13.10
After [Startup Server] is changed, the appropriate method of applying the cluster configuration data is not requested.
S
This problem always occurs.
72
13.11
/ 11.10 to 13.10
A SQL Server monitor resource may not detect an error.
S
This problem occurs when [Monitor Level] is 0.
73
13.11
/ 13.10
The mail reporting function may not work.
S
This problem occurs when the version is upgraded from X 5.0.2 or earlier to X 5.1.0 while the mail reporting function is configured.
74
13.11
/ 12.20 to 13.10
Heartbeat status may be incorrect.
S
This problem may occur in the following cases: Cluster WebUI is connected on multiple cluster servers, or the clpstat command is executed on multiple cluster servers.
75
13.11
/ 13.00 to 13.10
Group resource status may be incorrect.
S
This problem may occur when restarting the EXPRESSCLUSTER service on a single node.
76
13.11
/ 13.00 to 13.10
When a cluster is configured with ESMPRO/ARC, the process of waiting for a shared disk to power on does not work.
S
This problem occurs when a cluster is started.
77
13.11
/ 9.00 to 13.10
EXPRESSCLUSTER system services may not be started, due to a failure of applying cluster configuration data.
S
This problem very rarely occurs when applying cluster configuration data.
78
13.11
/ 13.00
In the config mode of the Cluster WebUI, a service monitor resource may not be registered.
S
This problem occurs in the following case: A service monitor resource is registered while there are no group resources registered.
79
13.12
/ 13.11
A cluster may not start due to an incorrect cluster server status.
M
This problem may occur after a cluster service is stopped.
80
13.12
/ 9.00 to 13.11
A mirror disk connection may be disconnected when a failover group moves repeatedly.
S
This problem may occur when a failover group is moving at short intervals.
81
13.12
/ 13.10 to 13.11
Failover of a failover group including a hybrid disk resource may fail.
S
This problem occurs when a failover group fails over to a server other than the current server in the server group designated as the failover destination, with [Allow failover on mirror break for specified time] enabled.
82
13.12
/ 13.00 to 13.11
An alert that a restart count has been reset may appear when a monitor resource executes the recovery action.
S
This problem occurs when a monitor resource executes one of the following recovery actions.
- Stop the cluster service and shutdown OS
- Stop the cluster service and reboot OS
- Generate an intentional stop error
83
13.12
/ 9.00 to 13.11
The EXPRESSCLUSTER Node Manager service (clpnm.exe) may abend when a network partition is resolved, causing a STOP error.
M
This problem occurs rarely when all of the following conditions are met.
- In an environment where a DISK NP resolution resource is configured.
- On a server other than the master server.
- When a network partition is resolved.
84
13.12
/ 13.10 to 13.11
The screen may not display when connecting to Cluster WebUI via HTTPS.
S
This problem occurs rarely with OpenSSL 3.0 or later.
85
13.12
/ 12.30 to 13.11
In the Cluster WebUI operation mode, the specific configuration values of some resources cannot be displayed. The clpstat command also fails to display these values.
S
This problem occurs when one of the following items is set to the maximum length.
- A resource name of AWS secondary ip resource (31 characters)
- A resource name of AWS virtual ip resource (31 characters)
- A resource name of Google Cloud DNS resource (31 characters)
- A zone name of Google Cloud DNS resource (63 characters)
- A DNS name of Google Cloud DNS resource (253 characters)
86
13.12
/ 9.00 to 13.11
The vulnerabilities of CVE-2023-39544 to 39548 may cause the following acts by third parties:
- Execution of an arbitrary code
- Uploading of an arbitrary file
- Skimming a cluster configuration data file
L
These problems occur when a specific process in EXPRESSCLUSTER receives a packet crafted by a malicious third party against the internal protocol of EXPRESSCLUSTER.
87
13.20
/ 12.20 to 13.12
The EXPRESSCLUSTER Information Base service may abend.
S
This problem may occur when cluster configuration data is uploaded with its server data deleted.
88
13.20
/ 11.00 to 13.12
The EXPRESSCLUSTER Transaction service may abend and the OS may be restarted.
S
This problem occurs when starting the EXPRESSCLUSTER Transaction service leads to initialization failure.
89
13.20
/ 12.20 to 13.12
Starting the OS may lead to outputting the error log of the EXPRESSCLUSTER API service.
S
This problem occurs when the OS is restarted without a cluster created.
90
13.20
/ 9.00 to 13.12
EXPRESSCLUSTER does not work normally.
L
This problem occurs in the following case: EXPRESSCLUSTER was installed, the system locale was changed from Japanese to another language, and then EXPRESSCLUSTER was reinstalled.
91
13.20
/ 13.00 to 13.12
A cluster service may fail to start up.
S
This problem may occur when a cluster service is starting up just after its stop.
92
13.20
/ 9.00 to 13.12
An emergency shutdown may occur during an attempt to stop a cluster service.
M
This problem occurs when one hour passes in stopping a cluster service.
93
13.20
/ 9.00 to 13.12
clprc.exe, a cluster service process, may abend.
M
This problem occurs on rare occasions with a delay in stopping a monitor resource which monitors an active target.
94
13.20
/ 9.00 to 13.12
During an attempt to restart a resource due to a monitoring error or to perform a failover, a stopped resource is also started.
S
This problem occurs when starting up a resource fails, with its final action against a resource activation failure set to [No operation (not activate next resource)], and then the recovery action due to a monitoring error is taken.
95
13.20
/ 12.30 to 13.12
A stopped resource may be started during a failover due to a server failure.
S
This problem occurs when a failover occurs with a resource that was set to be manually started but has never been started since the cluster started.
96
13.20
/ 9.00 to 13.11
The mirror disk connect communication is disconnected.
M
This problem occurs on rare occasions with a failover group moved repeatedly in a short period of time.
97
13.20
/ 9.00 to 13.12
An application error occurs in an attempt to stop a virtual computer name resource, which may fail.
M
This problem occurs on rare occasions depending on the timing.
98
13.20
/ 12.00 to 13.12
Starting a dynamic DNS resource may lead to outputting an unnecessary error message.
S
This problem occurs with both of the following cases true:
- [Delete the Registered IP Address] is enabled.
- The resource is configured separately for each server.
99
13.20
/ 12.00 to 13.12
The alert log of an Azure DNS resource may be outputted incorrectly.
S
This problem occurs depending on the error type.
100
13.20
/ 12.10 to 13.12
With the monitoring timing of a monitor resource set to active, the monitor resource may perform monitoring despite the deactivation state of the target resource.
S
This problem may occur with the resource repeatedly restarted.
101
13.20
/ 12.20 to 13.12
Stopping a monitor resource may lead to outputting the following invalid alert log: "Failed to stop monitor <name of the monitor resource>".
S
This problem occurs on rare occasions in an attempt to stop a monitor resource.
102
13.20
/ 12.30 to 13.12
The following monitor resources may consider their normal targets to be abnormal:
- AWS Virtual IP monitor resources
- AWS Secondary IP monitor resources
- Google Cloud DNS monitor resources
M
This problem occurs after the internal process becomes abnormal.
103
13.20
/ 12.00 to 13.12
An Azure DNS monitor resource fails in the normal monitoring process.
S
This problem occurs when the version of the Azure CLI is 2.50.0 or higher.
104
13.20
/ 11.10 to 13.12
An unnecessary event log may be outputted.
S
This problem occurs in either of the following cases:
- [Monitor NIC Link Up/Down] is set to [On] for a floating IP monitor resource.
- An NIC Link Up/Down monitor resource is configured.
105
13.20
/ 11.35 to 13.12
A heartbeat resource may detect a timeout by mistake.
M
This problem may occur with the heartbeat timeout value set to 400 seconds or more.
106
13.20
/ 12.10 to 13.12
More than one DISK network partition resolution resource can be configured.
S
This problem always occurs.
107
13.20
/ 13.10 to 13.12
The Azure forced-stop resource may not work normally.
S
This problem may occur with the configuration of [Servers in Use] for the Azure forced-stop resource changed in an environment with three or more nodes.
108
13.20
/ 13.10 to 13.12
It may take time for the Azure forced-stop resource to reboot an instance.
S
This problem occurs with [Forced Stop Action] set to [reboot].
109
13.20
/ 13.10 to 13.12
When a timeout occurs in a forced-stop resource in a cloud environment, a regular check may fail.
S
This problem may occur with the system heavily loaded.
110
13.20
/ 13.10 to 13.12
Running the Amazon CloudWatch linkage function may fail.
S
This problem occurs with [Send polling time metrics] set to [On] for at least two monitor resources.
111
13.20
/ 13.00 to 13.12
When cluster configuration data is created by executing the clpcfadm.py command, either of the following may occur:
- A value is set different from the specified one.
- The specified value is not set.
S
This problem occurs after a particular parameter is set.
112
13.20
/ 13.10 to 13.12
The operation log of Cluster WebUI may fail to be collected.
S
This problem occurs with the path of [Log output path] including either of the following:
- A symbolic link
- "\" at the end
113
13.20
/ 9.00 to 13.12
When applying a setting from Cluster WebUI leads to an authentication error, necessary services may not restart.
S
This problem occurs with the following performed at the same time:
- Creating or changing a password on the cluster password method
- A change involving a service restart
114
13.20
/ 12.00 to 13.12
In Cluster WebUI, a forcible mirror recovery may fail.
S
This problem occurs when an unknown-status server exists in hybrid disk configuration.
115
13.20
/ 9.00 to 13.12
In the HTTP response header of the WebManager server, no appropriate character encoding method is specified.
S
This problem always occurs in Cluster WebUI.
116
13.20
/ 13.00 to 13.12
RESTful API execution may fail.
S
This problem may occur in RESTful API execution just after an OS startup.
117
13.20
/ 13.00 to 13.12
In cooperation with ESMPRO/AC, the alert log may display an unnecessary error message.
S
This problem may occur in the following case: When a power failure occurs, ESMPRO/AC makes a cluster shutdown performed simultaneously on two or more servers.
118
13.20
/ 12.00 to 13.12
In Alert logs of Cluster WebUI, the display may become invalid.
S
This problem occurs when Cluster WebUI displays a corrupted alert log.
119
13.20
/ 13.00 to 13.12
In the config mode of Cluster WebUI, a dependency diagram may not be displayed.
S
This problem occurs with an extremely large number of resources.
120
13.20
/ 12.20 to 13.12
Cluster WebUI may become uncontrollable in config mode when uploading configuration data with its server data deleted.
S
This problem occurs with at least two failover groups started on the server whose data was removed.
121
13.20
/ 12.10 to 13.12
In the config mode of Cluster WebUI, the setting for [Network Partition Resolution Tuning Properties] is not saved after the [Apply] button is pressed and [Cluster properties] is closed by pressing the [Cancel] button.
S
This problem occurs after the setting for [Network Partition Resolution Tuning Properties] is changed and then [Cluster properties] is closed by pressing the [Cancel] button.
122
13.20
/ 12.10 to 13.12
In the config mode of Cluster WebUI, [User Name] in the [Monitor (special)] tab for an FTP monitor resource is not a mandatory item.
S
This problem always occurs.
123
13.20
/ 13.00 to 13.12
In the config mode of Cluster WebUI, going to [Group properties] -> the [Resources] tab -> [Resource Properties] wrongly displays the [Recovery Operation] tab.
S
This problem occurs with [Failover Count Method] set to [Cluster].
124
13.20
/ 12.00 to 13.12
The display of Cluster WebUI is delayed.
S
This problem may occur when several Hybrid disk resources are configured.
125
13.21
/ 13.20
When the EXPRESSCLUSTER service is stopped, an application error may occur in a specific process of EXPRESSCLUSTER (either clprc.exe or clpibsv.exe).
S
This problem very rarely occurs when the EXPRESSCLUSTER service is stopped.
126
13.21
/ 12.20 to 13.20
The EXPRESSCLUSTER Information Base service may take time to stop.
S
This problem may occur when the EXPRESSCLUSTER Information Base service is restarted after operating cluster for an extended period of time.
127
13.21
/ 9.00 to 13.12
In the EXPRESSCLUSTER Web Alert service, unnecessary local communication may occur.
S
This problem may occur when a blank is set for some servers in [Interconnect] tab for a heartbeat I/F.
128
13.21
/ 12.20 to 13.20
In Cluster WebUI, when applying cluster configuration data, the service restart screen may not close.
S
This problem occurs when restarting the WebManager service, the Information Base service, and the API service is simultaneously required as the method of applying the data.
129
13.21
/ 11.30 to 13.20
The Azure load balance monitor resource may become abnormal in monitoring.
S
This problem may occur depending on the timing during the failover group shutdown process.
130
13.21
/ 9.00 to 13.20
The DISK network partition resolution resource may take time to stop.
S
This problem may occur when the cluster service is stopped by a network partition resolution with [Action at NP Occurrence] set to [Stop the cluster service].
131
13.21
/ 13.00 to 13.20
The DISK network partition resolution resource fails to start.
S
This problem occurs when the cluster service is started after stopping it by a network partition resolution with [Action at NP Occurrence] set to [Stop the cluster service].
132
13.21
/ 13.00 to 13.20
An application error may occur in the DISK network partition resolution resources.
M
This problem rarely occurs when the DISK network partition resolution resource is stopped.
133
13.21
/ 12.00 to 13.20
The activation and deactivation timeout values for Azure DNS resources created with the clpcfadm command may be incorrect.
S
This problem occurs when the parameters that require calculation of the activation/deactivation timeout values for Azure DNS resources are changed using the clpcfadm command.
134
13.21
/ 13.20
The WebManager server internal logs may be partially lost.
S
This problem may occur when the WebManager server is restarted.
135
13.21
/ 13.20
The DISK network partition resolution resource cannot be configured per server group in a hybrid disk configuration environment.
S
This problem always occurs.
136
13.21
/ -
In Cluster WebUI, it may not be possible to configure group resources and monitor resources that require optional product licenses.
S
This problem may occur in an environment running with trial version licenses when valid licenses and expired licenses are registered together.
137
13.21
/ 13.20
In Cluster WebUI, if a user without the operation right logs in, an authentication error message is displayed.
S
This problem occurs when one of the following is configured:
- Control connection by using password
- Control connection by using client IP address
138
13.21
/ 13.20
The following alert log is output, and the log file for investigation cannot be downloaded.
Module Type: trnsv
Event ID: 2301
S
This problem occurs in an environment where the [Control connection by using client IP address] setting is enabled.
139
13.30
/ 13.10 to 13.21
Forced stops may fail in Azure environments.
S
This problem occurs on rare occasions when Azure login fails.
140
13.30
/ 13.00 to 13.21
Forced stops may fail in vCenter environments.
S
This problem occurs when the HTTP response from the vSphere Automation API is several kilobytes or larger.
141
13.30
/ 12.20 to 13.21
For a suspended cluster, executing the clpstat command with some options displays incorrect results.
S
This problem occurs when the clpstat -s --cl command is executed with the cluster suspended.
142
13.30
/ 12.00 to 13.21
With an expired license and a product version license coexisting, opening the WebUI license information screen displays the product version license in red.
S
This problem occurs when a product version license and an expired license coexist.
143
13.30
/ 13.00 to 13.21
When the maximum reboot count is zero, the following alert log may be outputted:
Module: rc
Event ID: 1106
Module: rm
Event ID: 1602
S
This problem occurs one hour after the cluster is started.
144
13.30
/ 12.30 to 13.21
When a monitor resource times out with the maximum reboot count reached, the recovery action may occur.
S
This problem may occur with [Generate an intentional stop error] set in [Monitor Resource Properties] ([Monitor(common)] tab -> [Operation at Timeout Detection]).
145
13.30
/ 12.30 to 13.21
When a monitor resource times out with the recovery target yet to be started, the following problems may occur:
- The recovery action occurs.
- No output occurs to the alert log to the effect that the recovery action will be suppressed.
S
This problem may occur with [Generate an intentional stop error] set in [Monitor Resource Properties] ([Monitor(common)] tab -> [Operation at Timeout Detection]).
146
13.30
/ 11.30 to 13.21
Changing the configuration of [Servers that can run the Group] for a failover group may not request the appropriate method of applying the configuration data.
S
This problem may occur when the configuration of [Servers that can run the Group] for a failover group is changed in an environment with three or more nodes.
147
13.30
/ 11.13 to 13.21
A CIFS monitor resource may unexpectedly encounter a monitoring error.
S
This problem may occur when [Access Check] in [Monitor(special)] tab is set to [Folder Check] or [File Check].
148
13.30
/ 9.00 to 13.21
Executing the clpcfctrl command may fail.
S
This problem always occurs when the clpcfctrl command has a value of 32768 or more specified as the argument of the -p option.
149
13.30
/ 13.10 to 13.21
An AWS VIP monitor resource may consider the monitoring result as normal by mistake.
S
This problem may occur when an AWS VIP monitor resource fails to execute the AWS CLI command.
150
13.30
/ 12.20 to 13.21
After a cluster is created, applying cluster configuration data for the first time may request the user to take an unnecessary step of restarting the Information Base service.
S
This problem occurs when restarting the OS and restarting the Information Base service are simultaneously requested as methods of applying the data.
151
13.30
/ 13.00 to 13.21
For the vCenter forced stop resource, a setting with a special character causes a status error.
S
This problem occurs when any of the following settings with a special character is registered:
- Virtual machine name
- Datacenter name
- Username
- Password
152
13.30
/ 9.00 to 13.21
The usage of the clpstat command does not show a combination of the --sv and --detail options.
S
This problem always occurs.
153
13.30
/ 13.00 to 13.21
Detecting a server failure may not trigger a failover.
M
This problem occurs when the EXPRESSCLUSTER NodeManager service is restarted without the cluster stopped or suspended.
154
13.30
/ 13.00 to 13.21
After a cluster is resumed, the disk RW monitor resource may encounter a monitoring error.
M
This problem may occur after a cluster resumption, depending on the timing.
155
13.30
/ 9.00 to 13.21
During a server start, the following alert message may appear before the occurrence of a heartbeat timeout: "The status of heartbeat %1 is abnormal."
S
This problem occurs when a server is started with only some heartbeats able to communicate.
156
13.30
/ 12.30 to 13.21
In the config mode of Cluster WebUI, adding/removing a server or changing a server name changes the cluster identifier (UUID) of cluster configuration data.
S
This problem occurs when a server is added or removed in the config mode of Cluster WebUI.
157
13.30
/ 13.00 to 13.21
In the config mode of Cluster WebUI, [Script created with this product] for custom forced-stop resources allows the user to add or remove their original scripts.
S
This problem always occurs.
158
13.30
/ 12.10 to 13.21
When a RESTful API for getting information is executed for a monitor resource with a dummy failure, the status turns Unknown.
S
This problem always occurs.
159
13.30
/ 13.20 to 13.21
A RESTful API may fail to get a group's continuous operation time due to daylight saving time.
S
This problem always occurs when the group start time is ahead of the system time.
160
13.30
/ 13.00 to 13.21
For a configuration of three or more servers, a forced stop is performed for a server which is set not to use the forced-stop feature.
S
This problem occurs in a configuration of three or more servers, when a server that is set not to use the forced-stop feature fails while its group is running.
161
13.30
/ 9.00 to 13.21
During a cluster shutdown, a monitoring error may occur in a monitor resource whose [Monitor Timing] is set to [Active].
S
This problem occurs with a cluster shutdown in an environment where there is a monitor resource with its [Monitor Timing] set to [Active].
162
13.30
/ 11.10 to 13.21
A memory leak may occur in a Java process of a JVM monitor resource.
S
This problem occurs with a JVM monitor resource paused.
163
13.30
/ 11.10 to 13.21
A disabled monitoring option on the [Memory] tab for a JVM monitor resource may be enabled unexpectedly.
S
This problem occurs between [Monitor Heap Memory Rate] and [Monitor Non-Heap Memory Rate] (a JVM monitor resource's [Monitor (special)] tab -> [JVM Monitor Resource Tuning Properties] -> the [Memory] tab):
Enabling one of the two options with the other disabled causes the latter to be enabled.
164
13.30
/ 11.10 to 13.21
After a JVM monitor resource detects an error, the status may not return to normal even though the monitoring target has recovered normally.
S
This problem occurs when both of the following are true:
- In a JVM monitor resource's [Monitor (special)] tab -> [JVM Monitor Resource Tuning Properties] -> the [GC] or [WebLogic] tab, the monitoring-related settings are enabled.
- In either of the [Memory] or [Thread] tabs, any of the monitoring-related boxes is unchecked.
165
13.30
/ 13.20
EXPRESSCLUSTER may prevent the user from following the correct steps described in the Maintenance Guide (e.g., those for restoring a disk image).
M
This problem occurs if the GUID of a data partition or cluster partition changes from that in cluster configuration data through the steps to, for example, restore a disk image.
166
13.30
/ 13.20 to 13.21
Activating an AWS DNS resource fails or an AWS DNS monitor resource encounters a monitoring error.
S
This problem occurs, depending on the runtime environment, during the activation of an AWS DNS resource or during monitoring by an AWS DNS monitor resource.
167
13.30
/ 13.21
Stopping or suspending a cluster may cause the cluster service process to abend.
M
This problem occurs on rare occasions when a cluster is stopped or suspended.
168
13.30
/ 12.00 to 13.21
The following may fail: activating an Azure DNS resource or performing an Azure forced-stop.
S
This problem occurs in an environment with the Azure CLI (version 2.67.0 or higher) installed, during the activation of an Azure DNS resource or the performance of an Azure forced-stop.
169
13.31
/ 13.20 to 13.30
Activation of an AWS DNS resource may fail.
M
This problem very rarely occurs when activating an AWS DNS resource.
170
13.31
/ 13.30
Alert logs related to some cluster operations may not be output.
S
This problem always occurs.
171
13.31
/ 13.10 to 13.30
The log file of system resource statistics is not rotated.
L
This problem occurs when the log storage period feature is enabled and any of the following is configured:
- System resource statistics
- System monitor resource
- Process resource monitor resource
172
13.31
/ 12.00 to 13.30
A cluster reboot may be issued twice.
S
This problem very rarely occurs when connecting to the IP address set for a Floating IP resource and executing a cluster reboot from Cluster WebUI.
173
13.31
/ 13.20 to 13.30
The clpgrp command may output an inappropriate message.
S
This problem occurs when the clpgrp command is executed with the -t option to specify a group name that is running on another server.
174
13.31
/ 12.10 to 13.30
In the config mode of Cluster WebUI, command arguments cannot be set in the [Command] field on the [Monitor (special)] tab of the JVM monitor resource.
S
This problem always occurs.
175
13.31
/ 9.00 to 13.30
Failover may not occur when a server failure is detected.
L
This problem occurs in an environment where network partition resolution resources are configured, when the OS startup of another server is delayed during the OS startup of an active server, and the active server then fails.
176
13.31
/ 13.30
The clpfwctrl command may output an unnecessary message.
S
This problem always occurs when the clpfwctrl --add command is executed for the first time using PowerShell.
6.1.2. Hardware requirements for mirror disk and hybrid disk
Dynamic disks cannot be used. Use basic disks.
The partitions (data and cluster partitions) for mirror disks and hybrid disks cannot be used by mounting them on an NTFS folder.
To use a mirror disk resource or a hybrid disk resource, partitions for mirroring (i.e. data partition and cluster partition) are required.
There are no specific limitations on locating partitions for mirroring, but the data partition sizes need to be perfectly matched with one another on a byte basis. A cluster partition also requires space of 1024MiB or larger.
When creating data partitions as logical partitions on an extended partition, make sure to select the logical partition on both servers. Even when the same size is specified for a primary partition and a logical partition, their actual sizes may differ from each other.
It is recommended to create the cluster partition and the data partition on different disks for load distribution. (Creating them on the same disk causes no problem, but write performance declines slightly with asynchronous mirroring or while mirroring is suspended.)
On both servers, use the same type of disk for the data partitions mirrored by mirror disk resources.
Example:

Combination    server1    server2
OK             SCSI       SCSI
OK             IDE        IDE
NG             IDE        SCSI
The partition size reserved by Disk Management is aligned by the number of blocks (units) per disk cylinder. For this reason, if the disk geometries of the disks used for mirroring differ between servers, the data partition sizes cannot be matched perfectly. To avoid this problem, it is recommended to use the same hardware configuration, including the RAID configuration, for the disks that hold the data partitions on server1 and server2.
When you cannot match the disk type or geometry on both servers, make sure to check the exact size of the data partitions by using the clpvolsz command before configuring a mirror disk resource or a hybrid disk resource. If the sizes do not match, shrink the larger partition by using the clpvolsz command.
When a RAID disk is mirrored, it is recommended to use write-back mode, because write performance decreases considerably when the disk array controller cache is set to write-through mode. However, when write-back mode is used, use a disk array controller with a battery backup, or protect the system with a UPS.
A partition with the OS page file cannot be mirrored.
The cluster configuration cannot be configured or operated in an environment, such as NAT, where an IP address of a local server is different from that of a remote server.
The following figure shows two servers connected to different networks with a NAT device set between them.
For example, assume that the NAT device is set so that packets sent from the external network to 10.0.0.2 are forwarded to the internal network.
To build a cluster with Server 1 and Server 2 in this environment, however, IP addresses belonging to different networks must be set on each server.
In an environment where the servers are thus placed in different subnets, a cluster cannot be properly configured or operated.
Fig. 6.1 Example of the environment where a cluster cannot be configured
The partitions (disk heartbeat and disk resource switchable partitions) for shared disks cannot be used by mounting them on an NTFS folder.
Software RAID (stripe set, mirror set, stripe set with parity) and volume set cannot be used.
6.1.6. Write function of the mirror disk and hybrid disk
There are two types of disk mirroring for mirror disk resources and hybrid disk resources: synchronous mirroring and asynchronous mirroring.
In synchronous mirroring, every request to write data to the mirrored data partition is written to the disks of both servers, and completion on both is awaited. Because the write to the other server goes over the network, write performance declines more significantly than with a normal local disk that is not mirrored. In a remote cluster configuration, where the network communication speed is low and the delay is long, write performance declines drastically.
In asynchronous mirroring, data is written to the local server immediately, whereas data for the other server is first saved to a local queue and then written in the background. Since completion of the write to the other server is not awaited, write performance does not decline significantly even when network performance is low. However, because the data to be updated is queued for every write request, write performance still declines compared with a normal, non-mirrored local disk or a shared disk. For this reason, a shared disk is recommended for systems that require high disk write throughput (such as update-intensive database systems).
With asynchronous mirroring, the write order is guaranteed, but the most recent updates may be lost if the active server shuts down. For this reason, if the data as of immediately before a failure must be carried over without fail, use synchronous mirroring or a shared disk.
In a mirror disk or hybrid disk in asynchronous mode, data that does not fit in the memory queue is temporarily recorded in a folder specified for saving history files. When no limit is set for these files, history files are written to the specified folder without restriction; if the line speed is too low compared with the amount of disk updates by the application, writing to the other server cannot catch up with the disk updates, and the history files will exhaust the disk.
For this reason, also in a remote cluster configuration, reserve a communication line fast enough for the amount of disk updates.
In case the history file folder exhausts the disk because the communication bandwidth narrows or the disk is updated continuously, reserve enough free space on the drive specified as the history file destination and set a limit on the history file size. Specify a drive other than the system drive as the destination wherever possible.
6.1.8. Data consistency among multiple asynchronous mirror disks
In mirror disk or hybrid disk with asynchronous mode, writing data to the data partition of the active server is performed in the same order as the data partition of the standby server.
This write order is guaranteed except during the initial mirror disk configuration and during the recovery (copy) period after mirroring is suspended; the data consistency among the files on the standby data partition is guaranteed.
However, the write order is not guaranteed across multiple mirror disk resources and hybrid disk resources. For example, if files whose mutual consistency must be maintained are distributed across multiple asynchronous mirror disks, one file may end up older than another after a failover caused by a server failure, and the application may not run properly.
For this reason, be sure to place these files on the same asynchronous mirror disk or hybrid disk.
Avoid multi-boot if a mirror disk or shared disk is used: if the operating system is started from another boot disk, the access restrictions on mirroring and on the shared disk become ineffective, so mirror disk consistency is not guaranteed and data on the shared disk is not protected.
Up to 25 Java VMs can be monitored concurrently. The Java VMs that can be monitored concurrently are those which are uniquely identified by the Cluster WebUI (with Identifier in the Monitor(special) tab).
Connections between Java VMs and JVM monitor resources do not support SSL.
It may not be possible to detect thread deadlocks. This is a known problem in Java VM. For details, refer to "Bug ID: 6380127" in the Oracle Bug Database.
The JVM monitor resources can monitor only the Java VMs on the server on which the JVM monitor resources are running.
The Java installation path setting made by the Cluster WebUI (with Java Installation Path in the JVM monitor tab in Cluster Property) is shared by the servers in the cluster. The version and update of Java VM used for JVM monitoring must be the same on every server in the cluster.
The management port number setting made by the Cluster WebUI (with Management Port in the Connection Setting dialog box opened from the JVM monitor tab in Cluster Property) is shared by all the servers in the cluster.
Application monitoring is disabled when an application to be monitored on the IA32 version is running on an x86_64 version OS.
If a large value such as 3,000 or more is specified as the maximum Java heap size by the Cluster WebUI (by using Maximum Java Heap Size on the JVM monitor tab in Cluster Property), the JVM monitor resources will fail to start up. The maximum heap size depends on the environment, so be sure to specify a value based on the capacity of the mounted system memory.
Use NTFS for the file systems of the partition where the OS is installed, of partitions used as disk resources on the shared disk, and of the data partitions of mirror disk resources and hybrid disk resources.
In EXPRESSCLUSTER, the following port numbers are used by default. You can change the port number by using the Cluster WebUI.
Make sure not to access the following port numbers from a program other than EXPRESSCLUSTER.
When setting up a firewall on a server, configure it so that the port numbers below can be accessed:
After installing EXPRESSCLUSTER, you can use the clpfwctrl command to configure a firewall. For more information, see "Reference Guide" -> "EXPRESSCLUSTER command reference" -> "Adding a firewall rule (clpfwctrl command)".
Ports to be set with the clpfwctrl command are marked with ✓ in the clpfwctrl column of the table below. The applicable protocols are ICMPv4 and ICMPv6.
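For example, running the following once on each server adds the firewall rules (a minimal sketch; see the command reference above for the available options):

> clpfwctrl --add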
For a cloud environment, allow access to ports numbered as below, not only in a firewall configuration at the instance side but also in a security configuration at the cloud infrastructure side.
From (source port) -> To (destination port): used for [✓ = can be registered with the clpfwctrl command]

Server (automatic allocation) -> connection destination host of the Witness heartbeat resource (communication port number specified with Cluster WebUI): Witness heartbeat resource
Server (automatic allocation) -> monitor target (icmp): IP monitor resource
Server (automatic allocation) -> monitor target (icmp): monitoring target of the PING method of a network partition resolution resource
Server (automatic allocation) -> monitor target (management port number set by the Cluster WebUI): monitoring target of the HTTP method of a network partition resolution resource
Server (automatic allocation) -> server (management port number set by the Cluster WebUI): JVM monitor resource [✓]
Server (automatic allocation) -> monitoring target (connection port number set by the Cluster WebUI): JVM monitor resource
Server (automatic allocation) -> server (port number set in Cluster WebUI): LB probe port resource [✓]
Server (automatic allocation) -> server (probe port set by the Cluster WebUI): Azure probe port resource [✓]
Server (automatic allocation) -> AWS service endpoint (443/tcp): AWS Elastic IP resource, AWS virtual IP resource, AWS secondary IP resource, AWS DNS resource, AWS Elastic IP monitor resource, AWS virtual IP monitor resource, AWS secondary IP monitor resource, AWS AZ monitor resource, AWS DNS monitor resource, AWS forced stop resource
Server (automatic allocation) -> Azure endpoint (443/tcp): Azure DNS resource
Server (automatic allocation) -> Azure authoritative name server (53/udp): Azure DNS monitor resource
Server (automatic allocation) -> server (port number set in Cluster WebUI): Google Cloud virtual IP resource [✓]
Server (automatic allocation) -> server (port number set in Cluster WebUI): Oracle Cloud virtual IP resource [✓]
Server (automatic allocation) -> Oracle Cloud endpoint (443/tcp): Oracle Cloud DNS resource, Oracle Cloud DNS monitor resource, OCI forced stop resource
For an AWS environment, modify the Security Group setting in addition to the firewall setting.
JVM monitor uses the following two port numbers:
This management port number is a port number that the JVM monitor resource uses internally. To set the port number, open the Cluster Properties window of the Cluster WebUI, select the JVM monitor tab, and then open the Connection Setting dialog box. For more information, refer to "Parameter details" in the "Reference Guide".
This connection port number is the port number used to connect to the Java VM on the monitoring target (WebLogic Server or WebOTX). To set the port number, open the Properties window for the relevant JVM monitoring resource name, and then select the Monitor(special) tab. For more information, refer to "Monitor resource details" in the "Reference Guide".
The following are port numbers used by the load balancer for the alive monitoring of each server: Probeport of an Azure probe port resource, Port Number of a Google Cloud virtual IP resource, Port Number of an Oracle Cloud virtual IP resource, and Port Number of a LB probe port resource.
The above port numbers are used with the AWS CLI, which is executed by the following AWS-related resources:
AWS Elastic IP resource
AWS Virtual IP resource
AWS Secondary IP resource
AWS DNS resource
AWS Elastic IP monitor resource
AWS Virtual IP monitor resource
AWS Secondary IP monitor resource
AWS AZ monitor resource
AWS DNS monitor resource
AWS Forced stop resource
The Azure DNS resource runs the Azure CLI. The above port numbers are used by the Azure CLI.
The above port numbers are used with the OCI CLI, which is executed by the following OCI-related resources:
Oracle Cloud DNS resource
Oracle Cloud DNS monitor resource
OCI Forced stop resource
6.2.3. Changing automatic allocation range of communication port numbers managed by the OS
The automatic allocation range of communication port numbers managed by the OS may overlap the communication port numbers used by EXPRESSCLUSTER.
Check the automatic allocation range of communication port numbers managed by the OS by using the method below. If this range overlaps any of the port numbers used by EXPRESSCLUSTER, prevent the overlap by changing either the port numbers used by EXPRESSCLUSTER or the automatic allocation range managed by the OS.
Display and set the automatic allocation range by using the Windows netsh command.
Checking the automatic allocation range of communication port numbers managed by the OS
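For example, the current range can be displayed as follows (the output shown is illustrative; values vary by environment):

> netsh interface ipv4 show dynamicportrange tcp

Protocol tcp Dynamic Port Range
---------------------------------
Start Port      : 49152
Number of Ports : 16384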
This example indicates that the range in which communication port numbers are automatically allocated for the TCP protocol is 49152 to 65535 (allocation of 16384 ports beginning with port number 49152). If any of the port numbers used by EXPRESSCLUSTER fall within this range, change the port numbers used by EXPRESSCLUSTER or follow the description in "Setting the automatic allocation range of communication port numbers managed by the OS" below.
Setting the automatic allocation range of communication port numbers managed by the OS
netsh interface <ipv4|ipv6> set dynamicportrange <tcp|udp> [startport=]<start_port_number> [numberofports=]<range_of_automatic_allocation>
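For example, the following sketch matches the description below:

> netsh interface ipv4 set dynamicportrange tcp startport=10000 numberofports=1000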
This example sets the range in which communication port numbers are automatically allocated for the TCP protocol (ipv4) to between 10000 and 10999 (allocation of 1000 ports beginning with port number 10000).
If many servers and resources are used with EXPRESSCLUSTER, the temporary ports that EXPRESSCLUSTER uses for internal communication may run short, and the servers may not work properly as cluster servers.
Adjust the range of port numbers and the time before a temporary port is released as needed.
If multiple servers connected to the shared disk are started while access to it is not yet restricted by EXPRESSCLUSTER, data on the shared disk may be corrupted. Until access is restricted, make sure to start only one of the servers.
When a disk method is used to resolve network partitions, create on the shared disk a raw partition (disk heartbeat partition) larger than 17 MB for use by disk network partition resolution resources.
Format the partition (switchable partition) used to transfer data between servers as disk resources with NTFS.
For each partition on the shared disk, assign the same drive letter on all servers.
Partitions on the shared disk can be formatted and created from one of the servers. It is not necessary to recreate or reformat a partition on each server. However, the drive letter needs to be set in each server.
When you continue using the data on the shared disk at times such as server reinstallation, do not create or format a partition. The data on the shared disk gets deleted if you allocate or format a partition.
Create a raw partition larger than 1024MiB on the local disk of each server as a management partition for the mirror disk resource (cluster partition).
Create a partition (data partition) for mirroring on local disk of each server and format it with NTFS. It is not necessary to recreate a partition when the existing partition is mirrored.
Set the same data partition size to both servers. Use the clpvolsz command for checking and adjusting the partition size accurately.
Set the same drive letter to both servers for a cluster partition and data partition.
As a partition for hybrid disk resource management (cluster partition), create a RAW partition of 1024MiB or larger in the shared disk of each server group (or in the local disk if there is one member server in the server group).
Create a partition to be mirrored (data partition) in the shared disk of each server group (or in the local disk if there is one member server in the server group) and format the partition with NTFS (it is not necessary to create a partition again when an existing partition is mirrored).
Set the same data partition size to both server groups. Use the clpvolsz command for checking and adjusting the partition size accurately.
Set the same drive letter to the cluster partitions in all servers, and the same drive letter to the data partitions in all servers.
6.2.9. Access permissions of a folder or a file on the data partition
In a workgroup environment, you must set the access permissions of folders and files on the data partition for the same user on each cluster server. For example, set the access permissions for the user "test" on both "server1" and "server2", which are the cluster servers.
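For example, granting such permissions might look like the following (a sketch assuming a hypothetical folder D:\data on the data partition and the user "test"; adjust to your environment):

> icacls D:\data /grant test:(OI)(CI)F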
It is necessary to configure the time from power-on of each node in the cluster to the server operating system startup to be longer than the following[8]:
The time from power-on of the shared disks to the point they become available.
On all servers in the cluster, verify the status of the following networks by using the ipconfig and ping commands.
Public LAN (used for communication with all the other machines)
Interconnect-dedicated LAN (used for communication between servers in EXPRESSCLUSTER)
Mirror disk connect LAN (used with interconnect)
Host name
An IP address to be used as a floating IP resource does not need to be set in the operating system.
If an IPv6 address is specified in the EXPRESSCLUSTER configuration (such as for heartbeat or mirror disk connect), the Windows media sense function disables that IP address on a server when the NIC link goes down.
In that case, EXPRESSCLUSTER may not work properly. Type the following command to disable the media sense function and avoid this problem.
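For reference, a sketch of disabling the media sense function with netsh (shown for IPv6 as an assumed example; confirm the exact command for your environment):

> netsh interface ipv6 set global dhcpmediasense=disabled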
6.2.12. Coordination with ESMPRO/AutomaticRunningController
The following are the notes on EXPRESSCLUSTER configuration when EXPRESSCLUSTER works together with ESMPRO/AutomaticRunningController (hereafter ESMPRO/AC). If these notes are unmet, EXPRESSCLUSTER may fail to work together with ESMPRO/AC.
The function to use EXPRESSCLUSTER together with ESMPRO/AC does not work on x64 Edition operating systems.
You cannot specify only the DISK-method resource as a network partition resolution resource. When you specify the DISK method, do so while combining with other network partition resolution method such as PING method.
When creating a disk TUR monitor resource, do not change the default value (No Operation) for the final action.
When creating a disk RW monitor resource, if you specify a path on the shared disk as the value of the file name setting, do not change the default value (Active) for the monitor timing.
After recovery from a power outage, the following alerts may appear in the EXPRESSCLUSTER manager. Provided the settings mentioned above are configured, they do not affect actual operation.
ID:18
Module name: nm
Message: Failed to start the resource <resource name of DiskNP>. (server name:xx)
ID:1509
Module name: rm
Message: Monitor <disk TUR monitor resource name> detected an error. (4 : device open failed. Check the disk status of the volume of monitoring target.)
For information on how to configure ESMPRO/AC and notes etc, see the chapter for ESMPRO/AC in the EXPRESSCLUSTER X for Windows PP Guide.
6.2.13. Control by a baseboard management controller (BMC)
Control by a BMC involves OpenIPMI or Redfish APIs.
Using ipmiutil, which is publicly available as an open-source application under the BSD license, allows you to control each server's BMC firmware. To use this feature, install ipmiutil on each cluster server.
Configure each server's BMC so that communication can be established between the IP address of the LAN port for BMC management and the IP address used by the OS. The feature is unavailable with no BMC installed on the server or with the BMC management network blocked. For more information on configuring the BMC, see the server manual.
Users are responsible for making decisions and assuming responsibilities. NEC does not support or assume any responsibilities for:
Inquiries about ipmiutil itself
Operations of ipmiutil
Malfunction of ipmiutil or any error caused by such malfunction
Inquiries about whether or not ipmiutil is supported by a given server
Check if your server (hardware) supports ipmiutil in advance. Note that even if the machine complies with the IPMI standard as hardware, ipmiutil may not run when you actually try to run it.
Using Redfish APIs allows you to control each server's BMC firmware. Using the feature requires Redfish APIs to be supported by each cluster server.
Configure each server's BMC so that communication can be established between the IP address of the LAN port for BMC management and the IP address used by the OS. The feature is unavailable with no BMC installed on the server or with the BMC management network blocked. For more information on configuring the BMC, see the server manual.
No additional applications are required, because the HTTPS protocol is used.
Users are responsible for making decisions and assuming responsibilities. NEC does not support or assume any responsibilities for:
Inquiries about the Redfish API itself
Operations of Redfish API
Malfunction of Redfish API or any error caused by such malfunction
Inquiries about whether or not Redfish API is supported by a given server
Check beforehand if your servers (hardware) support Redfish APIs and what types of power operation can be done through Redfish APIs. Not all hardware conforming to the standard specification for Redfish APIs supports all the types of power operation.
When installing EXPRESSCLUSTER in a Server Core environment of Windows Server 2008, execute menu.exe directly under the root of the CD media at a command prompt. This displays the menu screen.
Although the procedures hereafter are the same as those in normal installation, you cannot select Register with License File in license registration. Make sure to select Register with License Information.
6.2.15. Access restriction for an HBA to which a system disk is connected
When an HBA to which a system disk is connected is listed in HBAs to be managed by the cluster system, access to the system partition in which the OS is installed is restricted and the OS may not start.
When an HBA to which a system disk is connected is added to HBAs to be managed by the cluster system in such an environment that enables SAN boot, the system partition should be added to Partition excluded from cluster management so that the access to it will not be restricted.
This section describes the settings of IAM (Identity & Access Management) in AWS environment.
Some of EXPRESSCLUSTER's functions internally run AWS CLI for their processes. To run AWS CLI successfully, you need to set up IAM in advance.
You can give access permissions for the AWS CLI by using an IAM role or an IAM user. The IAM role method offers a high level of security because you do not have to store the AWS access key ID and AWS secret access key in the instance, so using an IAM role is recommended as a rule.
Advantages and disadvantages of the two methods are as follows:
IAM role
Advantages: This method is more secure than using an IAM user. The procedure for maintaining key information is simple.
Disadvantages: None.

IAM user
Advantages: You can set access permissions for each instance later.
Disadvantages: The risk of key information leakage is high. The procedure for maintaining key information is complicated.
The procedure of setting IAM is shown below.
First, create IAM policy by referring to "Creating IAM policy" explained below.
Next, configure the instance settings.
To use IAM role, refer to "Setting up an instance by using IAM role" described later.
To use IAM user, refer to "Setting up an instance by using IAM user" described later.
Creating IAM policy
Create a policy that describes the access permissions for actions on AWS services such as EC2 and S3. The actions that AWS-related resources and monitor resources require in order to execute the AWS CLI are as follows:
The necessary policies are subject to change.
AWS virtual IP resources / AWS virtual IP monitor resources
- ec2:DescribeNetworkInterfaces, ec2:DescribeVpcs, ec2:DescribeRouteTables: required for obtaining information on VPCs, route tables, and network interfaces.
- ec2:ReplaceRoute: required for updating the route table.

AWS Elastic IP resources / AWS Elastic IP monitor resources
- ec2:DescribeNetworkInterfaces, ec2:DescribeAddresses: required for obtaining information on EIPs and network interfaces.
- ec2:AssociateAddress: required for associating an EIP with an ENI.
- ec2:DisassociateAddress: required for disassociating an EIP from an ENI.

AWS secondary IP resources / AWS secondary IP monitor resources
- ec2:DescribeNetworkInterfaces, ec2:DescribeSubnets: required for obtaining information on network interfaces and subnets.
- ec2:AssignPrivateIpAddresses: required for assigning secondary IP addresses.
- ec2:UnassignPrivateIpAddresses: required for unassigning secondary IP addresses.

AWS AZ monitor resource
- ec2:DescribeAvailabilityZones: required for obtaining information on the availability zone.

AWS DNS resource / AWS DNS monitor resource
- route53:ChangeResourceRecordSets: required when a resource record set is added or deleted, or when the resource record set configuration is updated.
- route53:GetChange: required when a resource record set is added or when the resource record set configuration is updated.
- route53:ListResourceRecordSets: required for obtaining information on a resource record set.

AWS forced stop resource
- ec2:DescribeInstances: required for obtaining information on instances.
- ec2:StopInstances: required for stopping instances.
- ec2:RebootInstances: required for restarting instances.
- ec2:DescribeInstanceAttribute: required for obtaining instance attributes.

Function for sending data on the monitoring process time taken by the monitor resource to Amazon CloudWatch
- cloudwatch:PutMetricData: required for sending custom metrics.

Function for sending alert service messages to Amazon SNS
- sns:Publish: required for sending messages.
The example of a custom policy as shown below permits actions used by all the AWS-related resources and monitor resources.
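A minimal sketch of such a policy is shown below (an illustrative example, not the official policy from this guide: it lists the actions named above and uses Resource "*"; narrow the resources to meet your security requirements):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeVpcs",
                "ec2:DescribeRouteTables",
                "ec2:DescribeAddresses",
                "ec2:DescribeSubnets",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceAttribute",
                "ec2:ReplaceRoute",
                "ec2:AssociateAddress",
                "ec2:DisassociateAddress",
                "ec2:AssignPrivateIpAddresses",
                "ec2:UnassignPrivateIpAddresses",
                "ec2:StopInstances",
                "ec2:RebootInstances",
                "route53:ChangeResourceRecordSets",
                "route53:GetChange",
                "route53:ListResourceRecordSets",
                "cloudwatch:PutMetricData",
                "sns:Publish"
            ],
            "Resource": "*"
        }
    ]
}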
Setting up an instance by using IAM role
Create the IAM role and attach the IAM policy to the role.
You can create the IAM role from [Roles] - [Create New Role] in the IAM Management Console.
When creating an instance, specify the IAM role you created to IAM Role.
Log on to the instance.
Install the AWS CLI.
Download and install the AWS CLI.
The installer automatically adds the path of the AWS CLI to the system environment variable PATH. If the automatic path addition fails, refer to "AWS Command Line Interface" of the AWS document to add the path.
If the AWS CLI has been installed in an environment where EXPRESSCLUSTER is already installed, restart the OS before operating EXPRESSCLUSTER.
Launch the command prompt as the Administrator and execute the command as shown below.
> aws configure
Input the information required to execute the AWS CLI in response to the prompts. Do not input the AWS access key ID or AWS secret access key.

AWS Access Key ID [None]: (Just press the Enter key)
AWS Secret Access Key [None]: (Just press the Enter key)
Default region name [None]: <default region name>
Default output format [None]: text
For "Default output format", other format than "text" may be specified.
If you input wrong data, delete the files under %SystemDrive%\Users\Administrator\.aws, including the directory itself, and repeat the steps described above.
Setting up an instance by using IAM user
In this method, you can execute AWS CLI after creating the IAM user and storing its access key ID and secret access key in the instance. You do not have to assign the IAM role to the instance when creating the instance.
Create the IAM user and attach the IAM policy to the user.
You can create the IAM user in [Users] - [Create New Users] of the IAM Management Console.
Log on to the instance.
Install the AWS CLI.
Download and install the AWS CLI.
The installer automatically adds the path of the AWS CLI to the system environment variable PATH. If the automatic path addition fails, refer to "AWS Command Line Interface" of the AWS document to add the path.
If the AWS CLI has been installed in an environment where EXPRESSCLUSTER is already installed, restart the OS before operating EXPRESSCLUSTER.
Launch the command prompt as the Administrator and execute the command as shown below.
> aws configure
Input the information required to execute the AWS CLI in response to the prompts. Obtain the AWS access key ID and AWS secret access key from the IAM user detail screen and input them.

AWS Access Key ID [None]: <AWS access key>
AWS Secret Access Key [None]: <AWS secret access key>
Default region name [None]: <default region name>
Default output format [None]: text
For "Default output format", other format than "text" may be specified.
If you input wrong data, delete the files under %SystemDrive%\Users\Administrator\.aws, including the directory itself, and repeat the steps described above.
Using a Google Cloud virtual IP resource with Windows Server 2019 requires Startup type for the following services to be set at Automatic (Delayed Start):
This section describes policy setting in the OCI environment.
Some of EXPRESSCLUSTER's functions internally run the OCI CLI for their processes. To run the OCI CLI successfully, the policy setting is required in advance.
Policy setting
For EXPRESSCLUSTER's OCI-related functions to run the OCI CLI, the following policies are required:
These policies are subject to change in the future.
For Oracle Cloud DNS resources and Oracle Cloud DNS monitor resources:
- Allow <subject> to use dns in <location>: required to create, update, or delete an A record of Oracle Cloud DNS, or to retrieve information on it.

For the OCI forced-stop resource:
- Allow <subject> to use instance-family in <location>: required to stop or restart an instance, or to retrieve information on it.
Into each of <subject> and <location>, enter a value suitable for the environment.
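For example, a statement might look like the following (the group and compartment names are hypothetical):

Allow group ClusterAdmins to use instance-family in compartment MyCompartment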
6.3. Notes when creating the cluster configuration data
This section describes notes on creating cluster configuration data, to be considered before configuring a cluster system.
6.3.1. Folders and files in the location pointed to by the EXPRESSCLUSTER installation path
The folders and files in the location pointed to by the EXPRESSCLUSTER installation path must not be handled (edited, created, added, or deleted) by using any application or tool other than EXPRESSCLUSTER.
Any effect on the operation of a folder or file caused by using an application or tool other than EXPRESSCLUSTER will be outside the scope of NEC technical support.
6.3.2. Final action for group resource deactivation error
If you select No Operation as the final action when a deactivation error is detected, the group does not stop but remains in the deactivation error status. Make sure not to set No Operation in a production environment.
6.3.3. Delay warning rate
If the delay warning rate is set to 0 or 100, the following can be achieved:
When 0 is set to the delay warning rate
An alert for the delay warning is issued at every monitoring.
By using this feature, you can measure the polling time of the monitor resource while the server is heavily loaded, which allows you to determine an appropriate monitoring timeout for the monitor resource.
When 100 is set to the delay warning rate
The delay warning will not be issued.
Be sure not to set a low value, such as 0%, except for test operation.
6.3.4. Monitoring method TUR for disk monitor resource and hybrid disk TUR monitor resource
You cannot use the TUR methods on a disk or disk interface (HBA) that does not support the Test Unit Ready (TUR) command of SCSI. Even if your hardware supports these commands, consult the driver specifications because the driver may not support them.
TUR methods place less load on the OS and disks than Read methods.
In some cases, TUR methods may not be able to detect errors in I/O to the actual media.
6.3.5. Heartbeat resource settings
For the interconnect with the highest priority, configure kernel mode LAN heartbeat resources which can communicate between all servers.
Configuring at least two kernel mode LAN heartbeat resources is recommended unless it is difficult to add a network to an environment such as the cloud or a remote cluster.
It is recommended to register both an interconnect-dedicated LAN and a public LAN as LAN heartbeat resources.
Time for heartbeat timeout needs to be shorter than the time required for restarting the OS. If the heartbeat timeout is not configured in this way, an error may occur after reboot in some servers in the cluster because other servers cannot detect the reboot.
6.3.6. Double-byte character set that can be used in script comments
Scripts edited in a Windows environment are handled as Shift-JIS code, and scripts edited in a Linux environment are handled as EUC code. If other character codes are used, characters may be corrupted depending on the environment.
6.3.7. The number of server groups that can be set as servers to be started in a group
The number of server groups that can be set as servers to be started in one group is 2.
If three or more server groups are set, the EXPRESSCLUSTER Disk Agent service (clpdiskagent.exe) may not operate properly.
When the monitoring target is WebLogic, the maximum values of the following JVM monitor resource settings may be limited due to the system environment (including the amount of installed memory):
The number under Monitor the requests in Work Manager
Average under Monitor the requests in Work Manager
The number of Waiting Requests under Monitor the requests in Thread Pool
Average of Waiting Requests under Monitor the requests in Thread Pool
The number of Executing Requests under Monitor the requests in Thread Pool
Average of Executing Requests under Monitor the requests in Thread Pool
To use the Java Resource Agent, install the Java runtime environment (JRE) described in "Operation environment for JVM monitor" in "4.Installation requirements for EXPRESSCLUSTER" or a Java development kit (JDK). You can use either the same JRE or JDK as that used by the monitoring target (WebLogic Server or WebOTX) or a different one. If both JRE and JDK are installed on a server, you can use either one.
The monitor resource name must not include a blank.
The System Resource Agent performs detection by using thresholds and monitoring duration time as parameters.
The System Resource Agent collects the data (used size of memory, CPU usage rate, and used size of virtual memory) on individual system resources continuously, and detects errors when data keeps exceeding a threshold for a certain time (specified as the duration time).
You can specify command line options to be applied when the AWS CLI is run, by going to Cluster properties -> the Cloud tab and setting AWS CLI command line options.
This is effective when, for example, you want to specify the URL of the endpoint to which requests are sent when the AWS CLI runs.
To specify two or more of the command line options, separate each of them with a space.
The command line options can be specified for each AWS service.
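For example, to make the aws ec2 commands use a specific endpoint, you might set the following option for the aws ec2 service (the URL is an illustrative placeholder; --endpoint-url is a standard AWS CLI option):

--endpoint-url https://ec2.ap-northeast-1.amazonaws.com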
The following lists the features for which the settings of AWS CLI command line options are effective:
aws cloudwatch
Amazon CloudWatch linkage
aws ec2
AWS Elastic IP resource
AWS Virtual IP resource
AWS Secondary IP resource
AWS Elastic IP monitor resource
AWS Virtual IP monitor resource
AWS Secondary IP monitor resource
AWS AZ monitor resource
AWS Forced stop resource
Obtaining cloud environment information with Cluster WebUI
aws route53
AWS DNS resource
AWS DNS monitor resource
aws sns
Amazon SNS linkage
For more information on the command line options for the AWS CLI, see AWS documents.
Note
Using any of the following characters disables the command line options specified for the AWS CLI: ;, &&, ||, or `.
Using the --output option disables the command line options specified for the AWS CLI.
6.3.13. Environment variables for running AWS-related features
AWS-related features execute the AWS CLI and also access instance metadata.
You can specify environment variables to be applied to processes for running AWS-related features, by going to Cluster properties -> the Cloud tab and setting Environment variables at the time of performing AWS-related features.
This is effective when, for example, you use a proxy server in an AWS environment, or when you specify a configuration file and an authentication data file for the AWS CLI.
The following lists the features for which the settings of Environment variables at the time of performing AWS-related features are effective:
AWS Elastic IP resource
AWS Virtual IP resource
AWS Secondary IP resource
AWS DNS resource
AWS Elastic IP monitor resource
AWS Virtual IP monitor resource
AWS Secondary IP monitor resource
AWS AZ monitor resource
AWS DNS monitor resource
AWS Forced stop resource
Amazon SNS linkage
Amazon CloudWatch linkage
Obtaining cloud environment information with Cluster WebUI
The environment variables can also be specified by using the environment variable configuration file.
In this case, do not set Environment variables at the time of performing AWS-related features. With Environment variables at the time of performing AWS-related features set, the environment variable configuration file cannot be used.
Note
The environment variable configuration file is for ensuring compatibility with old versions.
Using Environment variables at the time of performing AWS-related features is recommended for configuring the environment variables.
The environment variable configuration file is stored in the following location.
The specifications of the environment variable configuration file are as follows (an example follows the list):
Write [ENVIRONMENT] on the first line; otherwise, the environment variables may not be set.
If the environment variable configuration file does not exist or you do not have read permission for the file, the variables are ignored. This does not cause an activation failure or a monitor error.
If the same environment variables already exist in the file, the values are overwritten.
If an environment variable name follows a space or tab, or if = is placed between two tabs, then the setting may not be applied.
Environment variable names are case sensitive.
Even if a value contains spaces, you do not have to enclose the value in "" (double quotation marks).
The environment variables are not applied to scripts which are common to group and monitor resources (e.g., scripts before final action, ones before and after activation/deactivation).
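A minimal sketch of such a file, assuming a proxy environment (HTTPS_PROXY and AWS_CONFIG_FILE are standard environment variables honored by the AWS CLI; the values are placeholders):

[ENVIRONMENT]
HTTPS_PROXY=http://proxy.example.com:8080
AWS_CONFIG_FILE=C:\aws\config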
6.3.14. Configuration file and authentication data file, for running AWS-related features
The AWS CLI run from AWS-related features uses the configuration file and authentication data file stored in the following folder:
<System drive>\Users\Administrator\.aws
To use a configuration file and an authentication data file in a folder other than the above, you must specify the environment variables.
In the AWS environment, floating IP resources, floating IP monitor resources, virtual IP resources, virtual IP monitor resources, virtual computer name resources, and virtual computer name monitor resources cannot be used.
Only ASCII characters are supported. Check that no characters other than ASCII characters are included in the execution result of the following command.
AWS virtual IP resources cannot be used if access via a VPC peering connection is necessary, because an IP address to be used as a VIP is assumed to be outside the VPC range, and such an IP address is considered invalid in a VPC peering connection. If access via a VPC peering connection is necessary, use AWS DNS resources, which use Amazon Route 53.
When an AWS virtual IP resource is set, Windows registers the physical host name and the VIP record in the DNS (if the corresponding network adapter's property for registering addresses in the DNS is enabled). To have the IP address resolved from the physical host name be a physical IP address, configure the following.
Check the setting of the network adapter to which the corresponding VIP address is assigned, by choosing Properties - Internet Protocol Version 4 - Advanced - DNS tab - Register this connection's address in DNS. If this check box is selected, clear it.
Additionally, execute one of the following in order to apply this setting:
Reboot the DNS Client service.
Explicitly run the ipconfig /registerdns command.
Register the physical IP address of the network adapter to which the corresponding VIP address is assigned to the DNS server statically.
An AWS virtual IP resource starts up normally even if the route table used by instances does not include any route to the IP address used by the resource. This is the expected behavior. When activated, an AWS virtual IP resource updates the content of route tables that include an entry for the specified IP address; finding no such route table, the resource considers that there is nothing to update and treats the situation as normal. Which route table should contain the entry, which depends on the system configuration, is not part of the resource's normality check.
An AWS virtual IP resource uses a Windows OS API to add a virtual IP address to a NIC, without setting the skipassource flag. Hence this flag is disabled after the AWS virtual IP resource is activated. The skipassource flag can, however, be enabled by using PowerShell after the activation of the resource.
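For example, the flag could be re-enabled after activation as follows (a sketch; 10.1.0.20 is a hypothetical VIP address):

PS> Set-NetIPAddress -IPAddress 10.1.0.20 -SkipAsSource $true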
In the AWS environment, floating IP resources, floating IP monitor resources, virtual IP resources, virtual IP monitor resources, virtual computer name resources, and virtual computer name monitor resources cannot be used.
Only ASCII characters are supported. Check that no characters other than ASCII characters are included in the execution result of the following command.
An AWS secondary IP resource adds a secondary IP address to a NIC with the netsh command, without setting the skipassource flag. Hence this flag is disabled after the AWS secondary IP resource is activated. The skipassource flag can, however, be enabled by using PowerShell after the activation of the resource, as in the example above.
When an AWS secondary IP resource is set, Windows registers the physical host name and the secondary IP record in the DNS (if the corresponding network adapter's property for registering addresses in the DNS is enabled). To have the IP address resolved from the physical host name be a physical IP address, configure the following.
Check the setting of the network adapter to which the corresponding secondary IP address is assigned, by choosing Properties - Internet Protocol Version 4 - Advanced - DNS tab - Register this connection's address in DNS. If this check box is selected, clear it.
Additionally, execute one of the following in order to apply this setting:
Reboot the DNS Client service.
Explicitly run the ipconfig/registerdns command.
Register the physical IP address of the network adapter to which the corresponding secondary IP address is assigned to the DNS server statically.
In the AWS environment, floating IP resources, floating IP monitor resources, virtual IP resources, virtual IP monitor resources, virtual computer name resource, and virtual computer name monitor resource cannot be used.
In the Resource Record Set Name field, enter a name without escape codes. If an escape code is included in the Resource Record Set Name, a monitor error occurs.
An AWS DNS resource is associated with a single account and cannot be used across different accounts, AWS access key IDs, or AWS secret access keys. If you need such usage, consider using a script resource to run a script that executes the AWS CLI, setting in the script the environment variables for authenticating with the other account.
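Such a script might look like the following (a sketch; the key values, zone ID, and change-batch file are placeholders to be replaced with values for the other account):

set AWS_ACCESS_KEY_ID=<access key ID of the other account>
set AWS_SECRET_ACCESS_KEY=<secret access key of the other account>
aws route53 change-resource-record-sets --hosted-zone-id <zone ID> --change-batch file://change.json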
Immediately after the AWS DNS resource is activated, monitoring by the AWS DNS monitor resource may fail due to the following events. If monitoring fails, set Wait Time to Start Monitoring of the AWS DNS monitor resource to a value longer than the time it takes for the changed DNS setting of Amazon Route 53 to be reflected (https://aws.amazon.com/route53/faqs/).
When the AWS DNS resource is activated, a resource record set is added or updated.
If the AWS DNS monitor resource starts monitoring before the changed DNS setting of Amazon Route 53 is applied, name resolution cannot be done and monitoring fails.
The AWS DNS monitor resource will continue to fail monitoring while the DNS resolver cache is valid.
The changed DNS setting of Amazon Route 53 is applied.
Name resolution succeeds after the TTL valid period of the AWS DNS resource elapses. Then, the AWS DNS monitor resource succeeds monitoring.
In the Microsoft Azure environment, floating IP resources, floating IP monitor resources, virtual IP resources, virtual IP monitor resources, virtual computer name resources, and virtual computer name monitor resources cannot be used.
6.3.21. Setting up Azure load balance monitor resources
When an Azure load balance monitor resource detects an error, the Azure load balancer may not correctly switch between the active server and the standby server. Therefore, for Final Action of Azure load balance monitor resources, selecting Stop the cluster service and shutdown OS is recommended.
In the Microsoft Azure environment, floating IP resources, floating IP monitor resources, virtual IP resources, virtual IP monitor resources, virtual computer name resources, and virtual computer name monitor resources cannot be used.
6.3.23. Setting up Google Cloud virtual IP resources
IPv6 is not supported.
6.3.24. Setting up Google Cloud load balance monitor resources
For Final Action of Google Cloud load balance monitor resources, selecting Stop the cluster service and shutdown OS is recommended. When a Google Cloud load balance monitor resource detects an error, the load balancer may not correctly switch between the active server and the standby server.
In the Google Cloud environment, floating IP resources, floating IP monitor resources, virtual IP resources, and virtual IP monitor resources cannot be used.
When using multiple Google Cloud DNS resources in the cluster, configure them so that they are not activated or deactivated simultaneously, by setting dependencies or waits for group start/stop. Simultaneous activation/deactivation may cause an error.
6.3.26. Setting up Oracle Cloud virtual IP resources
IPv6 is not supported.
6.3.27. Setting up Oracle Cloud load balance monitor resources
For Final Action of Oracle Cloud load balance monitor resources, selecting Stop the cluster service and shutdown OS is recommended. When an Oracle Cloud load balance monitor resource detects an error, the load balancer may not correctly switch between the active server and the standby server.
In the Oracle Cloud environment, floating IP resources, floating IP monitor resources, virtual IP resources, and virtual IP monitor resources cannot be used.
6.3.29. Configuration file for running OCI-related features
The OCI CLI run from OCI-related features uses the configuration file stored in the following folder:
<System drive>\Users\opc\.oci
6.3.30. Recovery operation on systems with Windows Server 2012 or later when a service fails
This applies to systems running Windows Server 2012 or later with Restart Computer selected as the recovery operation to be performed when a service fails (abends): if the failure actually occurs, the OS is restarted with a STOP error, unlike on Windows Server 2008 or earlier.
The EXPRESSCLUSTER services for which Restart Computer is set as the recovery operation by default are the following:
EXPRESSCLUSTER Disk Agent service
EXPRESSCLUSTER Node Manager service
EXPRESSCLUSTER Server service
EXPRESSCLUSTER Transaction service
6.3.31. Coexistence with the Network Load Balancing function of the OS
The IP address added to the NIC that is used by the Network Load Balancing (NLB) function of the OS is recognized as a virtual IP address of the NLB.
It is assumed that this virtual IP address is assigned to all servers within the NLB cluster.
If a floating IP address is assigned to the relevant NIC, the assigned floating IP address is also recognized as a virtual IP address.
When this floating IP address is accessed, the NLB function also balances the network load. However, since the floating IP address is not assigned to the NIC of the standby server, an error may occur in accessing the floating IP address.
When you create a new cluster by changing the access control settings under the HBA tab of the Server Properties dialog box and uploading the configuration data, you may not be prompted to restart the OS to apply the change. Even so, restart the OS after changing the access control settings under the HBA tab, so that the configuration data takes effect.
6.3.33. Resource types listed in the wizard window for adding resources
By default, the wizard window for adding group and monitor resources lists resource types based on the environment where EXPRESSCLUSTER is installed. In other words, some of the resource types may be hidden.
To display hidden resource types, click the Show All Types button.
6.3.34. Coexistence of a mirror disk resource with a hybrid disk resource
A mirror disk resource and a hybrid disk resource cannot coexist in the same failover group.
6.3.35. Notes on Allow failover on mirror break for specified time
When using this setting, pay attention to the following:
Enabling this setting temporarily suppresses automatic mirror recovery after a mirror break.
Enabling this setting restricts the configuration of some failover attributes.
If you use this feature for a hybrid disk resource, make sure that the clocks of the servers constituting a server group are synchronized.
For Timeout, setting a value equal to or greater than the heartbeat timeout value is recommended.
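6.4. Notes when operating EXPRESSCLUSTER
6.4.1. Restrictions on operations during recovery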
If a group resource (such as a disk resource or an application resource) is specified as a recovery target and a monitor resource detects an error, do not perform the following operations from the Cluster WebUI or the command line while recovery processing is in progress (reactivation -> failover -> last operation).
Stopping or suspending the cluster
Starting, stopping, or moving a group
If these operations are performed during the transition to recovery triggered by an error detected by a monitor resource, other group resources in the group may not stop.
Even when a monitor resource has detected an error, these operations can be performed once the last operation has completed.
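For reference, the operations above correspond to commands such as the following. This is a sketch only; the exact options are described in the "EXPRESSCLUSTER command reference" in the "Reference Guide", and the group name failover1 is hypothetical.

    > clpcl -t (stops the cluster)
    > clpcl --suspend (suspends the cluster)
    > clpgrp -s failover1 (starts the group)
    > clpgrp -t failover1 (stops the group)
    > clpgrp -m failover1 (moves the group)

Do not run these while the recovery processing described above is in progress.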
6.4.2. Executable format file and script file not described in the command reference
Executable files and script files that are not described in "EXPRESSCLUSTER command reference" in the "Reference Guide" exist under the installation directory. Do not run these files by any means other than EXPRESSCLUSTER itself; any consequences of running them are not supported.
6.4.3. Cluster shutdown and cluster shutdown reboot
When using a mirror disk, do not execute a cluster shutdown or cluster shutdown reboot from the clpstdn command or the Cluster WebUI while a group is being activated. A group cannot be deactivated during activation; the OS may shut down before the mirror disk resource is properly deactivated, causing a mirror break.
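6.4.4. Shutting down and rebooting an individual server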
When a mirror disk is used, stopping the cluster service on a server, shutting down a server, or running the shutdown reboot command, whether from a command or the Cluster WebUI, causes a mirror break.
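6.4.5. Recovery from a network partition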
The servers that constitute a cluster cannot check the statuses of the other servers while a network partition exists. If a group is operated (started, stopped, or moved) or a server is restarted in this state, the servers' views of the cluster status diverge. If the network recovers while servers holding different views of the cluster status are running, groups cannot be operated normally afterward. For this reason, during a network partition, shut down the server that has been separated from the network (the one that cannot communicate with clients), or stop its EXPRESSCLUSTER Server service; then, after the network has recovered, start the server again and return it to the cluster. If the network recovers while multiple servers with different views of the cluster status are already running, restart those servers to return to the normal status.
When a network partition resolution resource is used, an emergency shutdown of a server (or of all servers) is performed when a network partition occurs. This prevents two or more servers that cannot communicate with one another from staying up. If a server that underwent an emergency shutdown is restarted manually, or if the operation upon emergency shutdown is set to restart, the restarted server performs an emergency shutdown again. (With the ping method or the majority method, the EXPRESSCLUSTER Server service stops.) However, if two or more disk heartbeat partitions are used with the disk method and a network partition occurs while communication through the disk is unavailable due to a disk failure, both servers may continue operating in a suspended state.
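6.4.6. Notes on the Cluster WebUI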
If the Cluster WebUI is operated in the state that it cannot communicate with the connection destination, it may take a while until the control returns.
When going through a proxy server, configure the proxy server so that it can relay the port number of the Cluster WebUI.
When going through a reverse proxy server, the Cluster WebUI does not operate properly.
When updating EXPRESSCLUSTER, close all running browsers. Clear the browser cache and restart the browser.
Cluster configuration data created using a later version of this product cannot be used with this product.
When you close the web browser, a dialog box asking you to confirm saving may be displayed.
To continue editing, click the Stay on this page button.
When you reload the web browser (by selecting Refresh from the menu or toolbar), a dialog box asking you to confirm saving may be displayed.
To continue editing, click the Stay on this page button.
For notes and restrictions of Cluster WebUI other than the above, see the online manual.
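6.4.7. EXPRESSCLUSTER Disk Agent service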
Do not stop the EXPRESSCLUSTER Disk Agent service. Once stopped, it cannot be started manually. Restart the OS to restart the EXPRESSCLUSTER Disk Agent service.
6.4.8. Changing the cluster configuration data during mirroring
Make sure not to change the cluster configuration data during the mirroring process including initial mirror configuration. The driver may malfunction if the cluster configuration is changed.
6.4.9. Returning the stand-by server to the cluster during mirror-disk activation
If the stand-by server is running with the cluster service (EXPRESSCLUSTER Server service) stopped while the mirror disk is activated, restart the stand-by server before starting the service and returning it to the cluster. If the stand-by server is returned without a restart, the information about mirror differences becomes invalid, causing a mirror disk inconsistency.
6.4.10. Changing the configuration between the mirror disk and hybrid disk
To change the configuration so that the disk mirrored using a mirror disk resource will be mirrored using a hybrid disk resource, first delete the existing mirror disk resource from the configuration data, and then upload the data. Next, add a hybrid disk resource to the configuration data, and then upload it again. You can change a hybrid disk to a mirror disk in a similar way.
If you upload configuration data in which the existing resource has been replaced with a new one without deleting the existing resource as described above, the disk mirroring setting might not be changed properly, potentially resulting in a malfunction.
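6.4.11. Executing chkdsk or defragmentation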
Run the chkdsk command or defragmentation on a switchable partition controlled by a disk resource, or on a data partition mirrored by a mirror disk resource, only on the server where the resource has already been started. Otherwise, the command or defragmentation fails due to access restrictions.
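For example, assuming the resource controls drive X: (a hypothetical drive letter) and is started on the active server, a read-only file system check must be run on that server:

    > chkdsk X:

Running the same command on the standby server fails because of the access restriction.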
When running the chkdsk command in restoration mode (the /f option), stop the failover group and execute the command while only the target disk resource or mirror disk resource is active; otherwise, the command fails because files or folders in the target partition are open. If a disk RW monitor resource monitors the target partition, suspend that monitor resource beforehand.
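6.4.12. Index service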
When you add a shared disk or mirror disk directory to the index service catalog to index folders on the shared disk or mirror disk, configure the index service to start manually and to be controlled by EXPRESSCLUSTER, so that it starts after the shared disk or mirror disk is activated. If the index service is configured to start automatically, it opens the target volume, which causes the next mount to fail; applications and Explorer then fail to access the disk with a message saying the parameter is incorrect.
6.4.13. Issues with User Account Control (UAC) in a Windows Server 2012 or later environment
In a Windows Server 2012 or later environment, User Account Control (UAC) is enabled by default. When UAC is enabled, the following issues exist.
Monitor Resource
The following resource has issues with UAC.
Oracle Monitor Resource
For the Oracle monitor resource, if you select OS Authentication for Authentication Method and then set any user other than those in the Administrators group as the monitor user, the Oracle monitoring processing will fail.
When you set OS Authentication in Authentication Method, the user to be set in Monitor User must belong to the Administrators group.
6.4.14. Environment in which the network interface card (NIC) is duplicated
In an environment in which the NIC is duplicated, NIC initialization at OS startup may take some time. If the cluster starts before the NIC is initialized, the starting of the kernel mode LAN heartbeat resource (lankhb) may fail. In such cases, the kernel mode LAN heartbeat resource cannot be restored to its normal status even if NIC initialization is completed. To restore the kernel mode LAN heartbeat resource, you must first suspend the cluster and then resume it.
In such an environment, we recommend delaying the startup of the cluster with the following setting.
Network Initialization Complete Wait Time Setting
You can configure this setting in the Timeout tab of Cluster Properties. This setting is enabled on all cluster servers. If NIC initialization completes within the timeout, the cluster service starts up.
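6.4.15. EXPRESSCLUSTER service login account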
The login account of the EXPRESSCLUSTER services is set to the Local System account. If this account setting is changed, EXPRESSCLUSTER might not operate properly as a cluster.
6.4.16. Monitoring the EXPRESSCLUSTER resident process
The EXPRESSCLUSTER resident processes can be monitored by process monitoring software. However, recovery actions such as restarting a process when it terminates abnormally must not be executed.
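6.4.17. External link monitor resources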
Error notification to external link monitor resources can be done in either of the following ways: using the clprexec command or linkage with the server management infrastructure.
To use the clprexec command, use the relevant file stored on the EXPRESSCLUSTER CD, choosing the one that matches the OS and architecture of the notification-source server. The notification-source server must be able to communicate with the notification-destination server.
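As a hedged illustration of the clprexec method (the IP address is a placeholder and the monitor resource name mrw1 is hypothetical; see the "EXPRESSCLUSTER command reference" for the exact syntax), an error can be reported to an external link monitor resource as follows:

    > clprexec --notice mrw1 -h 192.168.0.1

6.4.18. JVM monitor resources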
When restarting the monitoring-target Java VM, you must first suspend JVM monitor resources or stop the cluster.
When changing the JVM monitor resource settings, you must suspend and resume the cluster.
JVM monitor resources do not support a delay warning for monitor resources.
6.4.19. System monitor resources, Process resource monitor resource
To change a setting, the cluster must be suspended.
System monitor resources do not support a delay warning for monitor resources.
If the date and time of the OS is changed during operation, the timing of the analysis processing performed at 10-minute intervals changes only once, immediately after the change. This causes the following to occur; suspend and resume the cluster as necessary.
An error is not detected even when the time to be detected as abnormal elapses.
An error is detected before the time to be detected as abnormal elapses.
Up to 26 disks can be monitored simultaneously by the disk resource monitoring function of System monitor resources.
6.4.20. Event log output relating to linkage between mirror statistical information collection function and OS standard function
The following error may be output to an application event log in the environment where the internal version is updated from 11.16 or earlier.
Event ID: 1008
Source: Perflib
Message: The Open Procedure for service clpdiskperf in DLL <EXPRESSCLUSTER installation path>\bin\clpdiskperf.dll failed. Performance data for this service will not be available. The first four bytes (DWORD) of the Data section contains the error code.
If the linkage function for the mirror statistical information collection function and OS standard function is used, execute the following command at the Command Prompt to suppress this message.
When the linkage function is not used, even if this message is output, there is no problem in EXPRESSCLUSTER and performance monitor operations. If this message is frequently output, execute the following two commands at the Command Prompt to suppress this message.
If the linkage function for the mirror statistical information collection function and OS standard function is enabled, the following error may be output in an application event log:
Event ID: 4806
Source: EXPRESSCLUSTER X
Message: Cluster Disk Resource Performance Data can't be collected because the number of performance monitors is too large.
When the linkage function is not used, even if this message is output, there is no problem in EXPRESSCLUSTER and performance monitor operations. If this message is frequently output, execute the following two commands at the Command Prompt to suppress this message.
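The exact suppression commands are given in the corresponding section of the manual and are not reproduced here. As a hedged illustration only: performance counters on Windows are typically unregistered with the OS-standard unlodctr tool, using the service name shown in the event log messages above; confirm the documented procedure before running anything.

    > unlodctr.exe clpdiskperf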
6.4.21. Restoration from an AMI in an AWS environment
If the ENI ID of a primary network interface is set as the ENI ID of an AWS virtual IP resource, AWS Elastic IP resource, or AWS secondary IP resource, the setting of that resource must be changed when restoring data from an AMI.
If the ENI ID of a secondary network interface is set as the ENI ID of an AWS virtual IP resource, AWS Elastic IP resource, or AWS secondary IP resource, the resource does not need to be set again, because the same ENI ID is inherited through detach/attach processing when restoring data from an AMI.
6.5. Notes when changing the EXPRESSCLUSTER configuration
This section describes what happens when the configuration is changed after EXPRESSCLUSTER has begun operating in the cluster configuration.
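6.5.1. Exclusive rules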
When the exclusive attribute of the exclusive rule is changed, the change is applied by suspending and resuming the cluster.
When a group is added to an exclusive rule whose exclusive attribute is set to Absolute, multiple Absolute groups may start on the same server, depending on the group startup status before the cluster is suspended.
Exclusive control will be performed at the next group startup.
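6.5.2. Dependency between resources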
When the dependency between resources has been changed, the change is applied by suspending and resuming the cluster.
If the changed dependency requires the resources to be stopped when it is applied, the startup status of the resources after the resume may not reflect the new dependency.
Dependency control will be performed at the next group startup.
6.5.3. Setting cluster statistics information of external link monitor resources
Once the cluster statistics information settings of a monitor resource have been changed, the settings are not applied to external link monitor resources even if you suspend and resume the cluster. Reboot the OS to apply the settings to the external link monitor resources.
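6.6. Notes on upgrading EXPRESSCLUSTER
6.6.1. Changed functions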
The following describes the functions changed for each of the versions.
Internal version 12.00
Management tool
The default management tool has been changed to Cluster WebUI. If you want to use the conventional WebManager as the management tool, specify "http://management IP address of management group or actual IP address:port number of the server in which EXPRESSCLUSTER Server is installed/main.htm" in the address bar of a web browser.
Mirror/Hybrid disk resource
Considering that the minimum size of a cluster partition has been increased to 1 GiB, prepare a cluster partition of sufficient size before upgrading EXPRESSCLUSTER.
Internal Version 12.10
Configuration tool
The default configuration tool has been changed to Cluster WebUI, which allows you to manage and configure clusters with Cluster WebUI.
Cluster statistical information collection function
By default, the cluster statistical information collection function saves statistics information files under the installation path. To avoid saving the files for such reasons as insufficient disk capacity, disable the cluster statistical information collection function. For more information on settings for this function, see "Parameter details" in the Reference Guide.
System monitor resource
The System Resource Agent process settings part of the system monitor resource has been separated to become a new monitor resource. Therefore, the conventional monitor settings of the System Resource Agent process settings are no longer valid. To continue the conventional monitoring, configure it by registering a new process resource monitor resource after upgrading EXPRESSCLUSTER. For more information on monitor settings for process resource monitor resources, see "Understanding process resource monitor resources" in "Monitor resource details" in the "Reference Guide".
BMC linkage
The ipmiutil parameters have been changed as follows.
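AWS AZ monitor resource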
The way of evaluating the AZ status obtained through the AWS CLI has been changed: available is evaluated as normal, information or impaired as warning, and unavailable as abnormal. (Previously, any AZ status other than available was evaluated as abnormal.)
Internal Version 12.30
WebLogic monitor resource
REST API has been added as a new monitoring method and is now the default value for the monitoring method. When upgrading, reconfigure the monitoring method.
The default value of the password has been changed. If you use weblogic, the previous default value, set the password again.
Internal Version 13.00
Forced stop function and scripts
These have been redesigned as individual forced stop resources adapted to environment types.
Since the forced stop function and scripts configured before the upgrade are no longer effective, set them up again as forced stop resources.
Internal Version 13.10
AWS Virtual IP resources
Some of the parameters have been changed due to a discontinuation of using Python.
Internal Version 13.20
Supported browsers for Cluster WebUI
If you use internal version 13.20 or later, Cluster WebUI does not support Internet Explorer. For information on supported browsers, refer to "4.3.1. Supported browsers".
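6.6.2. Removed functions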
The following describes the functions removed for each of the versions.
Important
Upgrading EXPRESSCLUSTER from its old version requires manually updating the cluster configuration data for functions with corresponding actions described in the table below.
Open Cluster Properties -> NP resolution tab, then remove each NP resolution resource whose type is unknown.
NAS resources
NAS monitor resources
If NAS resources are individually set in group resources' dependency, remove the dependency settings first.
For the group resources, open Resource Properties -> the Dependency tab, select the NAS resources, and then click the Delete button to exclude them from the dependency.
When you delete the NAS resources, the NAS monitor resources are also deleted.
Print spooler resources
Print spooler monitor resources
If print spooler resources are individually set in group resources' dependency, remove the dependency settings first.
For the group resources, open Resource Properties -> the Dependency tab, select the print spooler resources, and then click the Delete button to exclude them from the dependency.
When you delete the print spooler resources, the print spooler monitor resources are also deleted.
Virtual machine groups
Virtual machine resources
Virtual machine monitor resources
You cannot migrate configuration data (for a host cluster) that involves virtual machine groups.
BMC linkage
Delete relevant external link monitor resources.
Compatible commands
Script resources
Custom monitor resources
Scripts before final action
Scripts before and after activation/deactivation
Recovery scripts
Pre-recovery action scripts
Forced-stop scripts
Other scripts configured with EXPRESSCLUSTER
If any of these scripts includes a compatible command, modify the script by excluding the command.
Example
To start or stop services controlled with the armload command, use the sc command instead (see the sketch after this list).
To monitor services, use service monitor resources instead.
If you used armdelay to specify a delay time for starting EXPRESSCLUSTER services, open the Cluster properties Timeout tab, then specify the value in Service Startup Delay Time instead.
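For example, where a script previously used armload to start or stop a service, the OS-standard sc command performs the equivalent operation (the service name sampleservice is hypothetical):

    > sc start sampleservice
    > sc stop sampleservice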
Controlling CPU frequency command (clpcpufreq command): no action required.
Estimating the amount of resource usage command (clpprer command): no action required.
Controlling chassis identify lamp command (clpledctrl command): no action required.
Processing inter-cluster linkage command (clptrnreq command): no action required.
Changing BMC information command (clpbmccnf command): no action required.
Broadcast for kernel mode LAN heartbeat resources
The Broadcast option (see Heartbeat I/F -> Cast Method) has been removed.
If you use cluster configuration data created with an old version, Unicast is applied for the heartbeat transmission.
EXPRESSCLUSTER Task Manager: no action required.
EXPRESSCLUSTER clients: no action required.
Linking with the load balancer (JVM monitor resource): no action required.
The forced stop function using the System Center Virtual Machine Manager (SCVMM)
6.7.1. Compatibility with EXPRESSCLUSTER X 1.0/2.0/2.1/3.0/3.1/3.2/3.3/4.0/4.1/4.2/4.3/5.0/5.1/5.2
The cluster configuration information created with X 1.0/2.0/2.1/3.0/3.1/3.2/3.3/4.0/4.1/4.2/4.3/5.0/5.1/5.2 can be used in X 5.3 or later. Since the default type of failover destination server selection upon failure detection of a group resource or monitor resource is the stable server, the failover destination selected in X 2.0 or later may differ from that in X 1.0 in configurations of three or more nodes.
If the stable server is configured as the failover destination and there are multiple possible destinations, a server with no error is given higher priority when a failover takes place. With X 1.0, on the other hand, the failover destination is the server with the highest priority among the movable servers, so a failback to the server where the error first occurred can take place, which may prevent failing over to a third server.
For the reason described above, configuring the stable server as the failover destination is generally recommended. However, if the same behavior as X 1.0 is required, change the failover destination by selecting Maximum Priority Server in the Settings tab of the properties of each resource.
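Node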
A server that is part of a cluster in a cluster system. In networking terminology, it refers to devices, including computers and routers, that can transmit, receive, or process signals.