How to Use Distributed Tiling Service
To address the shortcomings of traditional cache tiling technology, such as long processing times and the lack of a failure recovery mechanism, SuperMap iServer provides a distributed tiling service that supports parallel tiling across multiple machines. You can add tiling nodes located on different machines so that they tile in parallel, improving the efficiency of tiling work.
The distributed tiling service supports tiling any published map service. The data source of the service can be SuperMap workspace data, an image service, a remote WMS service, a remote WMTS service, a remote REST map service, a Bing Maps service, a Tianditu service, a Google Maps service, a Baidu Maps service, an OpenStreetMap service, an ArcGIS REST map service, an MBTiles file, an SMTiles file, etc. You can also tile the images of published image services directly. In addition, the distributed tiling service supports tiling the 3D layers loaded in the scene of a 3D service.
The server that creates the tiling task is the tiling master node (TileMaster); the other cluster nodes are the tiling child nodes (TileWorker). All operations, such as preparing the tiling environment and storage and creating and monitoring the task, are performed on the master node; the child nodes require no operation at all. After the tiling task is created, the data to be tiled is automatically deployed to the child nodes, and if the data on the master node changes, it is automatically synchronized to the child nodes. For the principle and internal communication mechanism of distributed tiling, please refer to: Distributed Tiling Mechanism.
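Because every operation goes through the master node, a tiling job can be driven entirely against the TileMaster's HTTP interface. Below is a minimal sketch in Python, assuming a jobs endpoint under the iServer manager API; the endpoint path, field names, and values are illustrative assumptions rather than the authoritative schema, so check the management API reference of your iServer version.

```python
# Hedged sketch: create a tiling job on the TileMaster over HTTP.
# The endpoint path and all field names are assumptions for illustration.
import requests

MASTER = "http://localhost:8090"  # TileMaster address (default from this page)

job = {
    "mapName": "World",                    # hypothetical: name of a published map
    "scaleDenominators": [4e8, 2e8, 1e8],  # scales (levels) to tile
    "format": "PNG",
    "storeConfig": {                       # where the resulting tiles are written
        "type": "SMTiles",
        "outputPath": "./tiles/world.smtiles",
    },
}

resp = requests.post(f"{MASTER}/iserver/manager/tileservice/jobs.json",
                     json=job, timeout=30)
resp.raise_for_status()
print(resp.json())  # the response typically identifies the new job resource
```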
The distributed tiling service supports the production of 2D and 3D tiles of various types and storage formats.
Raster map layers are stored as map tiles in image formats. Supported storage includes FastDFS (deprecated) and MongoDB distributed storage, as well as the SMTiles, MBTiles, SuperMap UGC (version 5.0 raw cache, i.e. the "UGCV5" storage type), and GeoPackage formats, as sketched in the example after this list.
The vector layers (point, line, region, and text layers) in the map are stored as vector tiles; the SVTiles format is supported.
The attribute data of the vector layers (point, line, region, and text layers) in the map is stored as attribute tiles; the UTFGrid format is supported.
For more details on tile formats, please refer to: Map Cache Format.
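To make the storage options above concrete, here are hedged sketches of per-format storage settings as they might appear in a job request; every field name is an assumption for illustration, not the authoritative iServer schema.

```python
# Hedged examples of tile storage settings per format; all field names
# here are assumptions for illustration.
smtiles_store = {"type": "SMTiles", "outputPath": "./tiles/world.smtiles"}
mbtiles_store = {"type": "MBTiles", "outputPath": "./tiles/world.mbtiles"}
geopackage_store = {"type": "GeoPackage", "outputPath": "./tiles/world.gpkg"}
ugcv5_store = {"type": "UGCV5", "outputPath": "./tiles/world_ugcv5"}
svtiles_store = {"type": "SVTiles", "outputPath": "./tiles/world.svtiles"}    # vector tiles
utfgrid_store = {"type": "UTFGrid", "outputPath": "./tiles/world.utfgrid"}    # attribute tiles
mongodb_store = {
    "type": "MongoDB",               # distributed storage
    "servers": ["127.0.0.1:27017"],  # hypothetical field name
    "database": "tiles",
}
```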
The distributed tiling service also supports tiling the layers in the scene of a 3D service into 3D tiles, which can be stored in MongoDB. Supported 3D tiles include (see the sketch after this list):
Splitting image layers in 3D scenes to generate 3D image tiles;
Splitting terrain layers in 3D scenes to generate 3D terrain tiles;
Splitting vector layers in 3D scenes to generate 3D vector tiles;
Splitting model layers in 3D scenes to generate 3D model tiles.
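As with 2D jobs, a 3D tiling job is created on the TileMaster. The sketch below cuts the image layers of a scene into 3D image tiles stored in MongoDB; the scene name, layer name, endpoint, and field names are all illustrative assumptions, not the documented contract.

```python
# Hedged sketch of a 3D tiling job: cut the image layers of a published
# 3D scene into 3D image tiles stored in MongoDB. All field names are
# assumptions for illustration.
import requests

MASTER = "http://localhost:8090"

job = {
    "sceneName": "CBD",            # hypothetical: scene of a published 3D service
    "layerType": "IMAGE",          # image / terrain / vector / model layers
    "layerName": "image_layer",    # hypothetical layer to tile
    "storeConfig": {
        "type": "MongoDB",
        "servers": ["127.0.0.1:27017"],  # hypothetical field name
        "database": "tiles3d",
    },
}

print(requests.post(f"{MASTER}/iserver/manager/tileservice/jobs.json",
                    json=job, timeout=30).json())
```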
The distributed tiling service is used mainly through the tile management function at the master node's (TileMaster) address: http://localhost:8090/iserver/webapp/index.html#/. The basic operations are all performed at the tiling master node: you create a tiling task there and it is executed across the nodes. However, for a function that coordinates the parallel work of multiple machines, the stability of the tiling process and the availability of the tiling results are all the more important. Therefore, the distributed tiling service of SuperMap iServer provides comprehensive operation and maintenance functions, including real-time monitoring of tiling tasks and version management of tiling results. Using the distributed tiling service, you can create tiling tasks, monitor their execution in real time, and manage versions of the results.
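Real-time monitoring is available in the tile management UI, and the same job state can also be polled over HTTP. A minimal sketch follows, assuming a per-job resource and state fields that may be named differently in your iServer version.

```python
# Hedged sketch: poll a tiling job on the TileMaster until it finishes.
# The resource path and state field names are assumptions.
import time

import requests

MASTER = "http://localhost:8090"
JOB_ID = "job-id-returned-at-creation"  # placeholder

while True:
    state = requests.get(
        f"{MASTER}/iserver/manager/tileservice/jobs/{JOB_ID}.json",
        timeout=30,
    ).json()
    print(state.get("runState"), state.get("progress"))
    if state.get("runState") in ("COMPLETED", "STOPPED"):
        break
    time.sleep(10)  # poll every 10 seconds
```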
The process of tiling 3D tiles with the distributed tiling service is basically the same as the process above, except for a slight difference when creating the tiling task. For creating 3D tiling tasks, please refer to: Create Tiling Task-3D Tiles.
Map and image services can automatically use the 2D tiles produced by the distributed tiling service, whether map tiles, vector tiles, or attribute tiles (deprecated), without additional configuration; the exception is UGCV5 tiles, for which you must manually configure the service provider. Likewise, if you modify the default tile storage path or make other custom settings, you need to configure the service to use the cached tiles.
Furthermore, you can publish map tiles directly as map services, and you can also distribute the cached tile set to share it offline.
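For example, publishing a finished SMTiles tile set back as a map service could look roughly like the following; the fast-publishing endpoint and payload shape are assumptions, so consult the management API reference of your iServer version.

```python
# Hedged sketch: publish a generated SMTiles file as a REST map service.
# The endpoint and payload fields are assumptions for illustration.
import requests

MASTER = "http://localhost:8090"

payload = {
    "type": "SMTILES",                     # hypothetical provider type
    "tilesPath": "./tiles/world.smtiles",  # the tile set produced earlier
    "serviceTypes": ["RESTMAP"],           # hypothetical: expose as a REST map service
}

resp = requests.post(f"{MASTER}/iserver/manager/services.json",
                     json=payload, timeout=30)
print(resp.status_code, resp.text)
```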
Note: After a tiling task is created, the tiling master node will, by default, push all data under the folder containing the tiling workspace (*.smwu, *.sxwu) to each child node. Therefore, store the tiling data and other data in separate directories.