Configure to Use Multiworkers


A single process cannot fully utilize system resources such as CPU and memory, while GIS services have high performance requirements due to their massive data volumes and computationally intensive nature, so there is a strong demand for parallel computing across multiple processes. Previously, to fully utilize system resources, you had to run multiple iServer instances on one machine by configuring a single-machine cluster, which required manually modifying port numbers and other cumbersome operations.

Currently, iServer Standard, Advanced, and Ultra support multiworkers and provide a visual multi-process configuration page to help you quickly create multiple iServer processes. You only need to specify the number of processes, and iServer automatically creates and starts them. In addition, with the iServer multi-process architecture, you can deploy independent services for each piece of data, isolating different services in separate processes.

Multi-process Architecture

The iServer multi-process architecture consists of a Master, multiple Workers, and a Daemon that monitors the Master and restarts it if it fails.
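The relationship between the three roles can be pictured with a minimal supervision sketch. All class and attribute names below are illustrative assumptions for explanation only; they are not part of the iServer implementation or API:

```python
# Minimal model of the three roles in the multi-process architecture.
# All names here are illustrative; none of them belong to iServer itself.

class Worker:
    """A child process that hosts GIS service instances on its own port."""
    def __init__(self, port):
        self.port = port

class Master:
    """Creates the Workers and deploys services to them."""
    def __init__(self, worker_count, first_port=8900):
        self.workers = [Worker(first_port + i) for i in range(worker_count)]
        self.alive = True

class Daemon:
    """Watches the Master and brings up a replacement if it fails."""
    def __init__(self, master):
        self.master = master
    def check(self):
        if not self.master.alive:
            # Restart the Master with the same number of Workers.
            self.master = Master(len(self.master.workers))
        return self.master

master = Master(worker_count=2)
daemon = Daemon(master)
master.alive = False       # simulate a Master failure
master = daemon.check()    # the Daemon replaces the failed Master
```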

Configuring to Use Multiple Processes

Enable multiple processes

Enabling multiple processes switches iServer on the current operating system from a single process to multiple processes; disabling multiple processes switches it back from multiple iServer processes to a single process.

You can refer to the following steps to configure and enable multiple processes on a single machine:

  1. Access the iServer service manager, and click "Cluster" > "Multiworkers".
  2. On the multi-process configuration page, click "Enable Multi-process" and set the number of Worker processes according to your system configuration (it is recommended that the number of Workers equal the number of CPU cores for best performance). The default is 2.
  3. Set the Java virtual machine memory. The default is 1024m.
  4. Set the communication port number between the main process and the child processes, avoiding conflicts with other ports. The change takes effect only after iServer is restarted.
  5. Set the connection timeout between the master node and the child nodes, in milliseconds.
  6. Set the port range of the Worker processes according to your network conditions. The default is 8900-9000, with Worker ports assigned incrementally from 8900.
  7. Click "Save Configuration"; iServer will automatically start multiple Workers according to the above configuration. Restart iServer manually to bring the multi-process configuration into effect.
  8. After iServer restarts, access the "Cluster" > "Multiworkers" page in the service manager to view the started Workers, including each Worker's port and its automatically deployed services.
  9. Click each service link in the "Service" column to access and use these services directly; you can also view and access services from the "Service List" page.

Note: On the multi-process configuration page, after enabling or disabling multi-process mode, you need to restart iServer for the change to take effect.

In addition, you can configure the above settings, as well as the communication IP between the main process and the child processes, through the iServer System Config File.
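How the settings above fit together (Worker count, JVM memory, timeout, and sequential port assignment from the bottom of the range) can be sketched as a rough model. The function, its parameter names, and the 30000 ms timeout default are illustrative assumptions, not part of iServer; only the documented defaults (2 Workers, 1024m, ports 8900-9000) come from the steps above:

```python
import os

def plan_workers(worker_count=None, port_range=(8900, 9000),
                 jvm_memory_mb=1024, timeout_ms=30000):
    """Illustrative model of the multi-process settings described above.

    Worker ports are assigned sequentially from the bottom of the range,
    matching the documented default behaviour (8900, 8901, ...).
    """
    if worker_count is None:
        # Recommendation from the steps: one Worker per CPU core.
        worker_count = os.cpu_count() or 2
    low, high = port_range
    if worker_count > high - low + 1:
        raise ValueError("port range too small for the requested workers")
    return [{"port": low + i,
             "jvm_memory_mb": jvm_memory_mb,
             "timeout_ms": timeout_ms}
            for i in range(worker_count)]

workers = plan_workers(worker_count=2)
# workers[0]["port"] == 8900, workers[1]["port"] == 8901
```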


Dynamic addition and subtraction of child nodes

The number of nodes in iServer multi-process mode supports dynamic scaling; you can increase or decrease the number of nodes at any time according to system conditions and usage requirements.

To add or remove child nodes:

  1. Log in to the service manager of the iServer Master node, and click "Cluster" > "Multiworkers".
  2. On the multi-process configuration page, modify the number of Worker processes; it can be larger or smaller than the original number.
  3. Reset the port range for the Worker processes as needed.
  4. After you click "Save Configuration", iServer will dynamically adjust the Workers based on your settings, including updating their ports and deployed GIS services.
  5. If the Worker port range has changed, restart iServer for the configuration to take effect.

After making the above changes, you can view the currently running Worker child nodes on the Multiworkers page.
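The scaling behaviour described above can be pictured as growing or shrinking a list of Workers. This is an illustrative sketch under the assumption that existing port assignments stay stable when scaling, which is a simplification rather than iServer's actual algorithm:

```python
def rescale(workers, new_count, first_port=8900):
    """Illustrative sketch of dynamic scaling: grow or shrink the Worker
    list. Assumption for illustration: existing ports are kept, and new
    Workers take the lowest free ports starting from first_port."""
    if new_count < len(workers):
        # Shrink: drop the most recently added Workers.
        return workers[:new_count]
    used = {w["port"] for w in workers}
    grown = list(workers)
    port = first_port
    while len(grown) < new_count:
        if port not in used:
            grown.append({"port": port})
            used.add(port)
        port += 1
    return grown

workers = [{"port": 8900}, {"port": 8901}]
workers = rescale(workers, 4)   # grow to four Workers
workers = rescale(workers, 3)   # shrink back to three
```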

Managing multi-process GIS services

After enabling multi-process mode, the original iServer becomes the Master node of a multi-process iServer, and the Master automatically deploys iServer's original GIS services to the newly added Worker nodes. The Master node is responsible for unified management of the Workers, including service management, security management, service monitoring, access statistics, and log viewing for each Worker. That is to say, you only need to manage the GIS services on all Workers from the Master node.

In short, after enabling multi-process mode, although multiple iServer processes are started automatically, you do not need to manage each one separately. You manage the services in all Workers uniformly through the Master node at the original port, just as you managed a single iServer before.

Configure multiple service instances

iServer supports multiple service instances. You can dynamically set the number of instances for a service, that is, the number of Workers allocated to it. For example, if you set a service's instance count to n, the service is allocated to n Workers, and those Workers process the service's requests. You can configure multiple instances in several ways:

Configuring multiple service instances through the service manager

To enable multi-instance when publishing a workspace as a GIS service:

When publishing GIS services from other data sources, multi-instance publishing is the default, and you do not need to enable it manually. You can modify the instance count directly in the following ways:

Configuring multiple service instances through the XML file

If you configure the service through the XML file, you can add a parameter to the Service Providers Configuration to enable multiple instances, and set the instance count in the Service Components Configuration. Details are as follows:

    <provider class="com.supermap.services.providers.UGCMapProvider"  enabled="true" name="map-World"> 
      <config class="com.supermap.services.providers.UGCMapProviderSetting"> 
         <workspacePath>E:/supermap_iserver_801_4/samples/data/World/World.sxwu</workspacePath> 
          <multiThread>true</multiThread> 
          <poolSize>0</poolSize> 
          <ugcMapSettings/> 
          <useCompactCache>false</useCompactCache> 
          <extractCacheToFile>true</extractCacheToFile> 
          <queryExpectCount>1000</queryExpectCount> 
          <ignoreHashcodeWhenUseCache>false</ignoreHashcodeWhenUseCache> 
          <cacheDisabled>false</cacheDisabled> 
          <isMultiInstance>true</isMultiInstance> 
       </config> 
    </provider>  
If the service source is not a workspace, the above parameters are not required.
    <component class="com.supermap.services.components.impl.MapImpl"  enabled="true" instanceCount="3" interfaceNames="rest" 
 name="map-World" providers="map-World"> 
      <config class="com.supermap.services.components.MapConfig"> 
          <useCache>true</useCache> 
          <useVectorTileCache>true</useVectorTileCache> 
          <expired>0</expired> 
          <cacheReadOnly>false</cacheReadOnly> 
      </config> 
    </component> 

After modifying the number of service instances in the above way, you can view the result on the Multiworkers page. In addition, if you set the instance count higher than the number of Workers, it defaults to the number of Workers, and the service is allocated to all Workers.
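The instanceCount behaviour just described (a service with instance count n runs on n Workers, capped at the total number of Workers) can be sketched as follows. The function name and the choice of "first n Workers" are illustrative assumptions; the document does not specify which Workers are selected:

```python
def allocate_instances(instance_count, worker_ports):
    """Illustrative sketch: assign a service to n Workers. If the
    requested instance count exceeds the number of Workers, the service
    runs on all of them, matching the documented default behaviour."""
    effective = min(instance_count, len(worker_ports))
    return worker_ports[:effective]

ports = [8900, 8901, 8902]
allocate_instances(2, ports)   # -> [8900, 8901]
allocate_instances(5, ports)   # capped: -> [8900, 8901, 8902]
```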

The advantage of configuring the instance count lies in rational resource allocation. For example, services with high resource consumption can be configured with multiple instances to obtain more resources, including CPU, memory, and network bandwidth, thereby effectively improving resource utilization and optimizing service access efficiency.

Automate worker recycling

iServer supports automatic, periodic recycling of Worker processes. After iServer starts, the system periodically detects and recycles Worker processes with abnormal resource usage.
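The detect-and-recycle cycle can be pictured with a minimal sketch. The memory threshold and the idea of modelling "abnormal resource usage" as excess memory are illustrative assumptions; the actual criteria and parameters are set on the configuration page described below:

```python
def should_recycle(worker_memory_mb, threshold_mb=2048):
    """Illustrative check: flag a Worker whose resource usage is abnormal,
    modelled here (as an assumption) as memory above a threshold."""
    return worker_memory_mb > threshold_mb

def recycle_pass(workers, threshold_mb=2048):
    """One periodic detection pass: recycle (here: reset) flagged Workers."""
    return [{"port": w["port"], "memory_mb": 0}
            if should_recycle(w["memory_mb"], threshold_mb) else w
            for w in workers]

workers = [{"port": 8900, "memory_mb": 512},
           {"port": 8901, "memory_mb": 4096}]
workers = recycle_pass(workers)   # only the second Worker is recycled
```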

To enable automatic recycling of Worker processes:

  1. Log in to the service manager of the iServer Master node, and click "Cluster" > "Multiworkers".
  2. On the multi-process configuration page, enable "Enable Recycling Worker Processes Automatically".
  3. Set the relevant parameters as required; the parameters are as follows:

Application Scenario

With the development of hardware technology, ordinary computers now come with multi-core configurations, and multi-process iServer can improve resource utilization on them. Specifically, the following scenarios can take advantage of multiple processes:

When iServer divides services among multiple processes, services whose data comes from the same workspace are deployed in one Worker, so when there are many data sources, multiple processes can effectively isolate services with different data.

After iServer enables multiple processes, all Worker nodes automatically participate in distributed tiling as tiling nodes, which can greatly improve single-machine tiling efficiency.

In terms of data pushing, in the traditional approach (such as building a distributed tiling environment through a multi-machine cluster), the cluster parent node needs to push task data to each child node. For distributed tiling tasks created with multiple processes, for read-only data files such as read-only UDB data sources and SMTiles files, the cluster parent node only needs to push the task data once to a designated location on the child node, and each Worker of the child node (with multiple processes enabled) obtains the data from that location when tiling. As the number of tiling tasks increases, so does the amount of data that needs to be pushed. To ensure tiling efficiency, all Worker nodes participate in tiling with no obvious difference between them, so with this approach you can manage the pushed data in a unified way.

If distributed tiling is performed in multiworkers mode, there is no need to push the task data at all: the Worker nodes obtain the data directly from the Master, saving the data-pushing time and improving tiling efficiency.

Precautions

The multi-process architecture uses ports 8900-9000 by default to start the Workers' HTTP services. Please make sure that these ports are not occupied; otherwise multi-process mode cannot work normally.
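Before enabling multi-process mode, you may want to verify that the default port range is actually free on the machine. A small local check, using only the Python standard library (this is a generic utility, not an iServer tool), could look like this:

```python
import socket

def free_ports(low=8900, high=9000, host="127.0.0.1"):
    """Report which ports in the default multi-process range can be bound
    locally. A port that cannot be bound is already occupied and would
    prevent a Worker from starting on it."""
    available = []
    for port in range(low, high + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((host, port))
                available.append(port)
            except OSError:
                pass  # port in use
    return available
```

For example, `free_ports(8900, 8910)` lists the usable ports in that sub-range; any missing port should be released or excluded from the Worker port range before saving the configuration.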

For all ports used by SuperMap iServer by default, please refer to: Introduction to Ports.