Advanced system settings (Réglages avancés des paramètres système)
Last modified by Aurelie Bertrand on 2025/02/07 10:18
This document describes some advanced tweaks that can be made in the DigDash Enterprise System (DDE). These changes concern:

* Tomcat settings
** Allocating more memory to Tomcat
** Changing Tomcat network ports
** Changing the inactive/idle session timeout
** Changing the number of concurrent request threads
** Enabling HTTP compression
* Advanced performance settings
* Automatic files cleaner
* Using several servers in cluster mode
* Other advanced settings
** Changing the application data path
** LDAP settings: port and instance name
** Advanced parameters for the Dashboard editor/viewer

The following files will be modified:

* **server.xml**
** Location (global Tomcat): <DDE Install>/apache-tomcat/conf/**server.xml**
* **system.xml**
** Location: <user>/Application Data/Enterprise Server/ddenterpriseapi/config/**system.xml**
* **web.xml**
** Location (global Tomcat): <DDE Install>/apache-tomcat/conf/**web.xml**
** Location (ddenterpriseapi): <DDE Install>/apache-tomcat/webapps/**ddenterpriseapi**/WEB-INF/**web.xml**
** Location (dashboard): <DDE Install>/apache-tomcat/webapps/**digdash_dashboard**/WEB-INF/**web.xml**
** Location (adminconsole): <DDE Install>/apache-tomcat/webapps/**adminconsole**/WEB-INF/**web.xml**
** Location (adswrapper): <DDE Install>/apache-tomcat/webapps/**adswrapper**/WEB-INF/**web.xml**
* **setenv.bat**
** Location: <DDE Install>/configure/**setenv.bat**
* **dashboard_system.xml**
** Location: <user>/Application Data/Enterprise Server/**dashboard_system.xml**

{{ddtoc/}}

= Tomcat Settings =

== Allocating more memory to Tomcat ==

Modified file: **setenv.bat**

Find the lines at the top of the file:

set JVMMS=**512**

set JVMMX=**512**

Change both "512" values to the amount of memory (in megabytes) you want to assign to Tomcat. For instance, "**4096**" will allocate 4 GB of memory to Tomcat:

set JVMMS=**4096**

set JVMMX=**4096**

//Important~://

(% class="box warningmessage" %)
(((
On a 64-bit Windows OS there is no limit other than the physical memory of your computer. If the value is too big, Tomcat will not start.
)))

//Note for 32-bit Windows//:

(% class="box infomessage" %)
(((
If you have a 32-bit machine/OS, or if you deployed the 32-bit version of DigDash Enterprise on a 64-bit machine/OS, then the amount of memory you can allocate to Tomcat is limited. The theoretical limit in this case is approximately 1.5 GB; it depends on the current memory fragmentation. Our testing generally shows that a maximum of 1.4 GB can be allocated on a 32-bit Windows computer.
)))

For this reason, we recommend a 64-bit machine/OS.

//Note for "PermGen space" memory//

(% class="box infomessage" %)
(((
If you encounter an error referring to "PermGen space" in the DigDash Enterprise log file, you can increase the value of the JVMMPS variable (same place as JVMMS and JVMMX).
)))

__**Important: If Tomcat is installed as a service**__

(% class="box warningmessage" %)
(((
When you install Tomcat as a Windows service (see document [[install_guide_windows_en.pdf>>path:../input/install_guide_windows_en.pdf]]) using **servers_install_service.bat** or **servers_install_service_64.bat**, the settings of **setenv.bat** are applied //when the service is installed//.
)))

So if you want to change the memory allocated to Tomcat, it is necessary to:

1. Uninstall the service using the command **sc delete tomcat7**
1. Change the JVMMS and JVMMX variables in the **setenv.bat** file
1. Run **servers_install_service.bat** or **servers_install_service_64.bat** again

== Change Tomcat network ports ==

If one or more of the ports required by Tomcat are already in use by another process, Tomcat will not start. It is important to check the availability of the network ports on the system. By default the three following ports are needed: 8005, 8080 and 8009.
Follow these steps to modify them:

1. Open the folder **<DDE Install>\apache-tomcat\conf** then edit the file **server.xml**
1. Find and replace the port values 8005, 8080 and 8009 with port numbers that are available on the system.

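As a sketch, the relevant attributes in **server.xml** look like the following (the port numbers 9005, 9080 and 9009 are arbitrary examples; keep all other attributes of your file unchanged):

{{code language="XML" cssClass="notranslate"}}
<!-- Shutdown port (default 8005) -->
<Server port="9005" shutdown="SHUTDOWN">
  ...
  <!-- HTTP connector (default 8080) -->
  <Connector port="9080" protocol="HTTP/1.1" ... />
  <!-- AJP connector (default 8009) -->
  <Connector port="9009" protocol="AJP/1.3" ... />
</Server>
{{/code}}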
== Change inactive / idle session timeout ==

Modified file: **web.xml** (global Tomcat configuration file located at <DDE Install>/apache-tomcat/conf/web.xml)

Find the lines in the file:

{{code language="XML" cssClass="notranslate"}}
<session-config>
    <session-timeout>30</session-timeout>
</session-config>
{{/code}}

Change the value to modify the timeout of an inactive or idle session. By default the timeout is 30 minutes.

== Change number of concurrent requests threads ==

Modified file: **server.xml**

By default Tomcat will not accept more than 200 **simultaneous** requests. This setting can be too low when the deployment addresses thousands or millions of users, or when benchmarking the performance of the server (e.g. with JMeter), which can execute hundreds or thousands of simultaneous requests.

To increase this limit you must add a **maxThreads** attribute to the **Connector** XML tag corresponding to the connector used.

Example when the connector is HTTP (there is no Apache web server on the front end):

(% class="box" %)
(((
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" **maxThreads="400"** redirectPort="8443" maxPostSize="-1" URIEncoding="UTF-8" />
)))

Example when the connector is AJP (there is an Apache web server on the front end):

(% class="box" %)
(((
<Connector port="8009" protocol="AJP/1.3" **maxThreads="400"** redirectPort="8443" maxPostSize="-1" URIEncoding="UTF-8" />
)))

== Enable HTTP compression ==

Modified file: **server.xml**

HTTP compression is used to decrease network bandwidth usage by compressing HTTP responses. By default this option is not enabled in Tomcat, but all modern browsers support it.

This option can compress responses by up to 90% on some types of data or files (HTML, JavaScript, CSS) while consuming little CPU power.

//Important~://

(% class="box warningmessage" %)
(((
This option works only if Tomcat is directly used as the front-end server of DigDash Enterprise, using the HTTP/1.1 connector. If there is an Apache httpd on the front end, you should activate HTTP compression directly in the Apache httpd configuration itself (see the documentation on the Apache httpd website).
)))

HTTP compression is not supported on the AJP connector or any protocol other than HTTP(S)/1.1.

In the **server.xml** file, add the attributes **compression="on"** and **compressionMinSize="40000"** to the HTTP/1.1 connector:

Example:

(% class="box" %)
(((
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" maxPostSize="-1" URIEncoding="UTF-8" **compression="on" compressionMinSize="40000"**/>
)))

The **compressionMinSize** attribute defines a minimal response size (in bytes) below which compression is not used. It is recommended to give it a high enough value to avoid compressing files that are already very small (PNG icons for example).

//Note~://

(% class="box infomessage" %)
(((
This setting has no negative effect if a client browser does not support compression: Tomcat will automatically decide not to use HTTP compression for that browser.
)))

= Advanced Performance Settings =

Modified file: **system.xml**

XML syntax example:

(% class="box" %)
(((
<Property key="CORE_TP_EXECSIZE" value="64"/>
)))

== Scheduled flow execution threading ==

These parameters act on the number of threads devoted to the scheduled execution of flows (scheduler).

Available parameters:

* //Name//: **INIT_TP_EXECSIZE**
//Value//: integer >= 0 (default: 16)
//Description//: Number of threads created when the server starts
* //Name//: **CORE_TP_EXECSIZE**
//Value//: integer >= 0 (default: 16)
//Description//: Number of threads to keep when the server is idle
* //Name//: **MAX_TP_EXECSIZE**
//Value//: integer > 0 (default: 16)
//Description//: Maximum number of threads when the server is working

== Interactive flow execution threading ==

These parameters act on the number of threads devoted to the interactive execution of flows (admin console, dashboards, mobile, etc.).

Available parameters:

* //Name//: **INIT_TP_PLAYSIZE**
//Value//: integer >= 0 (default: 4)
//Description//: Number of threads created when the server starts
* //Name//: **CORE_TP_PLAYSIZE**
//Value//: integer >= 0 (default: 4)
//Description//: Number of threads to keep when the server is idle
* //Name//: **MAX_TP_PLAYSIZE**
//Value//: integer > 0 (default: 4)
//Description//: Maximum number of threads when the server is working

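For example, doubling the interactive thread pool would mean adding entries such as these to **system.xml** (the value 8 is illustrative; size it to your workload):

{{code language="XML" cssClass="notranslate"}}
<Property key="INIT_TP_PLAYSIZE" value="8"/>
<Property key="CORE_TP_PLAYSIZE" value="8"/>
<Property key="MAX_TP_PLAYSIZE" value="8"/>
{{/code}}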
== Cube Manager timeouts ==

These parameters act on the way the Cube Manager component discards unused in-memory cubes.

These settings discard cubes that have not been used for a period of time, even if the session is still active.

Available parameters:

* //Name//: **CUBE_TIMEOUT_INTERACTIVE**
//Value//: minutes > 0 (default: 10 minutes)
//Description//: Duration of the inactivity period for a cube loaded in interactive mode (server-side cube navigation)
* //Name//: **CUBE_TIMEOUT_SYNC**
//Value//: minutes > 0 (default: 4 minutes)
//Description//: Duration of the inactivity period for a cube loaded in scheduled mode (scheduled flow generation)
* //Name//: **CUBE_TIMEOUT_PERIOD**
//Value//: minutes > 0 (default: 2 minutes)
//Description//: Interval at which cube inactivity is checked; should be at least **CUBE_TIMEOUT_SYNC** / 2

== Data cubes performance ==

Apart from cube compression (CUBE_COMPRESSION), all these settings affect the performance of //interactive// data cube processing (expanding them into result cubes during a chart display). These settings do not affect data cube generation performance.

Available parameters:

* //Name//: **CUBEPART_MAXSIZEMB**
//Value//: megabytes > 0 (default: 100 MB)
//Description//: Size of a data cube chunk in MB. A data cube chunk (or part) is a part of the data cube that can be processed (expanded) in parallel and/or distributed across other DigDash Enterprise servers in cluster mode (see the chapter "//Using Several Servers In Cluster Mode//" in this document).
* //Name//: **TP_MCUBESIZE**
//Value//: threads > 0 (default: 64 threads)
//Description//: Size of the thread pool for concurrent processing units for a data cube. The chunks of big cubes (e.g. millions or billions of rows) are processed in parallel by the server and/or other servers (in cluster mode). This variable is the number of concurrent threads allowed to process these chunks on one machine. Each thread occupies a slot in the pool during its processing. If the pool is full, additional threads are put in a waiting queue.
* //Name//: **MCUBE_ROWS_PER_THREAD**
//Value//: rows > 0 (default: 100000)
//Description//: This is the threshold on the number of rows of a data cube above which DigDash Enterprise activates parallel processing of cube chunks (if there is more than one chunk for this cube). Below that limit, cube processing is not parallelized but serialized.
* //Name//: **CUBE_COMPRESSION**
//Value//: boolean (default: true)
//Description//: As opposed to the other settings, this one affects data cube generation performance, but not processing performance. It (de)activates cube compression for the storage of cubes on the disk drive. By default the DigDash Enterprise server compresses the data cube storage (true), lowering the storage space but also slowing down the saving of the cube (note that loading is almost unaffected by this setting). So if you want to speed up cube generation and have many cubes (and a lot of available disk space), you can set this to false.
* //Name//: **CUBE_UPLOAD_MODE**
//Value//: integer: 0, 1 or 2 (default: 1)
//Description//: Clustering deployment only. Specifies whether the cube parts should be uploaded from the master server to the slave servers when a user interacts with the cube (1), when the cube is generated by the master (2), or never (0). Also see the chapter "//Using Several Servers In Cluster Mode//" in this document, section "Use The Cluster".

== Other performance parameters ==

The following parameters are used to analyze and optimize system performance.

Available parameters:

* //Name//: **LOW_MEMORY_THRESHOLD**
//Value//: percentage > 0 (default: 10%)
//Description//: This is the threshold on the percentage of free memory under which the system raises a low-memory alert. This alert can be viewed in the server status page for 24 hours. It is also recorded in the DDAudit database when the system audit service is started.
Last, a DigDash event is fired when the threshold is reached: SYSCHECK_LOWMEM. An example of using this event can be found in the DDAudit deployment documentation.

= Automatic files cleaning =

The DDE platform comes with an integrated files cleaner (also known as Files GC) that cleans unused files such as old history files, cubes and other flow-dependent files.

The module cleans all files that are no longer used by any user or role wallet.

Cleaning the files scans the indexes of all users as well as the disk to find files that are no longer linked to any index. Files found during the scan are removed. The removed files are the following: cube files (.dcg), cube js files (cube_data_xx.js), models (cube_dm_xx.js) and flows (cube_view_xx.js).

This process frees disk space and can speed up the search for js files, which can take time if you have a lot of files (number of personal cubes * number of history files > 100000).

Depending on the age of your server and the number of files to remove (number of refreshes done, etc.), the cleaning can take a long time on its first execution (one to two hours if DigDash Enterprise is used by many users and has many personalized cubes).

After that, if the cleaning is done on a regular basis, it takes less time. The duration depends on the performance of the file system and the computer, so it is difficult to estimate.

By default, cleaning is done every day at midnight.

//Important~://

(% class="box warningmessage" %)
(((
//The files cleaner starts only if no user sessions are active at that time. Additionally, during its processing, no user is allowed to log in to DigDash Enterprise.//
//Be careful to schedule it correctly so it will not interfere with normal DigDash Enterprise user activity or scheduler activity. Depending on your needs, we advise scheduling it at night, during hours when the scheduler is not working.//
)))

This chapter describes how to configure the activation and scheduling of this module.

== (De)Activation and/or clean on startup ==

Activating the files cleaner can be done in two different ways:

1- __From the server status page__:

You can access the server status page from the welcome page by clicking **Configuration**, then the **Server status** link.

In the Files cleaner status section, click the green arrow beside **Files cleaner started - No** to start the files cleaner:

[[image:serverstatus-filesgc.png||queryString="width=540&height=125" height="125" width="540"]]
The next cleaning will start at midnight. To start the cleaning immediately, click the icon [[image:refresh.png||queryString="width=24&height=23" height="23" width="24"]].

2- __From the web.xml file:__

Modified file: **web.xml (ddenterpriseapi)**

These parameters activate (or not) the Files GC module, and/or make it run at server startup.

Available parameters:

* //Name//: **startCleaner**
//Value//: true or false (default)
//Description//:
* true: //automatic files cleaner scheduled. Note: the cleanup time slot is defined in **system.xml**, in the **FILESGC_SCHEDXML** property.
The default cleanup time slot (if none is specified in system.xml, FILESGC_SCHEDXML) is every day at 0:00//
* false (default): do not use the automatic files cleaner
* //Name//: **cleanOnStart**
//Value//: true or false (default)
//Description//:
* true: clean useless files when the server starts (history, cubes, output, etc.)
* false (default): do not clean useless files on server startup

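As an illustration only — the exact XML element depends on how the ddenterpriseapi **web.xml** declares its parameters, so mirror the existing entries in your own file — enabling both options could look like:

{{code language="XML" cssClass="notranslate"}}
<context-param>
    <param-name>startCleaner</param-name>
    <param-value>true</param-value>
</context-param>
<context-param>
    <param-name>cleanOnStart</param-name>
    <param-value>true</param-value>
</context-param>
{{/code}}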
== Files GC Scheduling ==

Modified file: **system.xml**

Available parameters:

* //Name//: **FILESGC_SCHEDXML**
//Value//: XML string (encoded) (default: none)
//Description//: This setting contains an XML __encoded__ string that describes the schedule.

Example:

(% class="box" %)
(((
<Property key="FILESGC_SCHEDXML" value="<Schedule frequency="**daily**" fromDay="11" fromHour="0" fromMinute="0" fromMonth="7" fromYear="2011" periods="**1**" time="**0:0**"/>"/>
)))

The interesting attributes are: **frequency** (**hourly**, **daily** or **monthly**), **periods** (number of hours, days or months between 2 cleanups) and **time** (hour of the cleanup for daily and monthly frequencies). This example means every day (frequency="daily" and periods="1") at 0:00 (time="0:0").

* //Name//: **FILESGC_SESSIONSCHECK**
//Value//: true/false boolean (default: none, equivalent to true)
//Description//: This setting tells whether the files cleaner should check for active sessions before starting (true), or start even if there are active sessions (false). In the latter case, all active sessions are disconnected immediately.

Example:

(% class="box" %)
(((
<Property key="FILESGC_SESSIONSCHECK" value="false"/>
)))

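Because the value is itself XML, it must be entity-encoded inside the Property attribute so that system.xml stays well-formed XML. A minimal sketch of the encoded form of the same schedule:

{{code language="XML" cssClass="notranslate"}}
<Property key="FILESGC_SCHEDXML" value="&lt;Schedule frequency=&quot;daily&quot; fromDay=&quot;11&quot; fromHour=&quot;0&quot; fromMinute=&quot;0&quot; fromMonth=&quot;7&quot; fromYear=&quot;2011&quot; periods=&quot;1&quot; time=&quot;0:0&quot;/&gt;"/>
{{/code}}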
= Using several servers in cluster mode =

To handle a big volume of data (billions of rows), it is possible to use several servers in cluster mode. Each server becomes a processing node of the cluster. The cluster consists of a master server and slave servers.

The master server handles data models, documents, roles, users, sessions and the refreshing of cubes and flows, exactly like a standard DigDash Enterprise server in a single-machine deployment.

The additional slave servers are only used to help interactive cube expanding, during flow display, filtering, drill-down, etc.

//<TODO schema>//

== Install DigDash Enterprise in cluster mode ==

__Prerequisite: several machines connected to the same network__

=== Master server (on the most powerful machine of the cluster): ===

1. Standard DigDash Enterprise installation (see documentation).
1. Start the server as usual with **start_servers.bat**

=== Slave server (on all other machines of the cluster): ===

1. Standard DigDash Enterprise installation (see documentation).
The difference is that a slave server does not need a license to be used as a processing unit of the cluster. It also needs neither an LDAP directory nor an SVN document server. Finally, it does not need the digdash_dashboard web application module, so that WAR archive can optionally be removed from Tomcat.
1. Start only the Tomcat module with **start_tomcat.bat**

== Configure the cluster ==

__Procedure to repeat on all servers of the cluster__

1. With a browser, connect to the DigDash Enterprise home page (e.g. http:~/~/<server>:8080)
2. Click on **Configuration**, then **Server Parameters**
3. Log in as a DigDash Enterprise administrator (admin/admin by default) to display the server parameters page
4. Click on the **Cluster Parameters** link at the bottom of the page
5. Fill in the different fields according to each server machine (see below for details)

=== Section System Performance ===

[[image:advanced_system_guide_en_html_cc6665f518a39daa.png||queryString="width=534&height=176" height="176" width="534"]]
The **System Performance** section defines the performance specifications of the current machine in the cluster. The parameters **CPU Number**, **CPU Score** and **Allocated Memory** are used to distribute the calculation load optimally.

1. **CPU Number**: the number of processors * the number of cores per processor, possibly multiplied by a factor if the processors have a technology like Hyper-Threading. The default, -1, uses the value reported by the operating system.
1. **CPU Score**: an arbitrary score between 1 and 10 which allows ranking the different machines of the cluster according to their global performance (useful for a heterogeneous cluster). The default, -1, gives an average score (5).
1. **Allocated Memory**: the maximum fraction of the allocated memory allowed to be used for processing cubes. This value is lower than or equal to the maximum memory allocated to Tomcat. The default, -1, allows all the memory.

=== Section Authorized Clusters ===

[[image:advanced_system_guide_en_html_94a180b24cdb5311.png||queryString="width=538&height=259" height="259" width="538"]]
The **Authorized Clusters** section is used to specify whether the current server can be used in one or more clusters. A server can indeed be used in different DigDash Enterprise clusters. This section restricts this server to being used as a slave only by the specified cluster masters (**Selection** list).

(% class="box infomessage" %)
(((
//Note: If the list is empty, then this server can be used by all requesting clusters.//
)))

This section is also where you define an optional password for the current server in the cluster.

To add a cluster allowed to use this server as a slave:

1. **Name**: name of the authorized cluster (arbitrary, used only as a reference in the UI)
2. **Master Server IP Address**: address of the cluster's master server (e.g. http:~/~/192.168.1.1)
3. **Password**: password for the slave server in the context of the selected cluster
4. Click the **Add** button to add this cluster to the list of authorized clusters

(% class="box infomessage" %)
(((
//Note: You can edit and remove authorized clusters by selecting them in the **Selection** list and clicking the **Edit** or **Remove** buttons.//
)))

=== Section Cluster Definition ===

__To be filled in only on the master server of the cluster__

[[image:advanced_system_guide_en_html_fad14db7ad67f4c7.png||queryString="width=538&height=285" height="285" width="538"]]
The **Cluster Definition** section concerns only the master server. This is where you define the list of slave server machines (**Selection** list, fields **Name**, **Address**, **Domain** and **Password**).

To add a slave server to the cluster:

1. **Name**: name of the slave machine (arbitrary)
2. **Server URL**: URL of the slave server (e.g. http:~/~/192.168.1.123:8080)
3. **Domain**: DigDash Enterprise domain (ddenterpriseapi by default)
4. **Password**: the slave's password as you defined it previously during the slave configuration (**Authorized Clusters** section, **Password** field)
5. Click the **Add** button to add this slave server to the cluster.

(% class="box infomessage" %)
(((
//Note: You can edit and remove machines from the cluster by selecting them in the **Selection** list, then clicking the **Edit** or **Remove** buttons.//
)))

== Use the cluster ==

In a simple cluster deployment, there is nothing more to do than what was previously described.

However, there are some interesting details that can help tune the performance of the cluster.

Whether the cluster is used depends on the size of a data cube. Under a certain threshold, which depends on the cube, the master machine and the slaves, the cluster mode may not be used at all. But if one or more data cubes become big, for instance above hundreds of millions of rows, these cubes are split into parts and their calculation (expanding) is parallelized across all available processors in the cluster to decrease the global response time. This happens each time a dashboard user (or mobile user, etc.) requests data from the cube.

It is important to note that cube generation (data source refresh) is done solely by the master server. Slaves are only used for interactive cube expanding (flow display, filtering, drill-down, etc.).

By default, the different parts of a cube to be processed are sent on demand to the slaves (if they do not already have them). This can cause a slowdown of the system on the first expand requested by a user, especially on a low-bandwidth network (< 1 gigabit).

Nevertheless, there are different ways to avoid this network bottleneck. Here are some suggestions:

A first way is to have the cubes folder (a sub-folder of Application Data/Enterprise Server/ddenterpriseapi by default) on a centralized network disk reachable from all the cluster machines, for instance through a symbolic link (Linux, NFS). This link should be established on all the cluster machines. The principle is that the master server directly generates the data cube files in that network folder; then, when a user interacts with the system, master and slaves all have a //common// view of the cubes. Because reading the cube files is done only once in the cube life cycle (//in-memory// cube), the impact of the network folder on performance is negligible.

Another way is to use a third-party folder synchronization tool between the cluster machines. This tool copies the cubes folder from the master to the slaves, after cube generation for instance. The principle is that the master server generates the data cube in its local folder, then the synchronization tool transfers the folder (ideally using an optimized delta algorithm) to all the slave machines, outside the server's main activity periods if possible. Master and slaves all have an //identical// view of the cubes.

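As a minimal sketch of the shared-folder approach on a Linux cluster machine — assuming the NFS share is mounted at /mnt/ddcubes and the cubes sub-folder is named cubes, both of which are illustrative names to check against your actual application data folder:

{{code}}
# Run on each cluster machine, with the server stopped
cd "$HOME/Application Data/Enterprise Server/ddenterpriseapi"
mv cubes cubes.local.bak   # keep the old local cubes as a backup
ln -s /mnt/ddcubes cubes   # point the server at the shared NFS folder
{{/code}}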
= Other advanced settings =

== Change application data path ==

By default DigDash Enterprise stores its configuration, data models, information wallets, cube flow history and other work files in the operating system user folder, in a sub-folder **Application Data/Enterprise Server/<domain>**.

For example, on Windows 7 this folder is:

(% class="box" %)
(((
**C:\Users\<user>\AppData\Roaming\Enterprise Server\ddenterpriseapi**
)))

Under certain circumstances, it can be useful to change this folder path, either to get more storage on another drive, or for organizational purposes, scripting, etc.

There are different ways to change this path.

480 | === On the global level (Tomcat) === | ||
481 | |||
482 | Modified file: **setenv.bat** | ||
483 | |||
484 | The optional Tomcat parameter **digdash.appdata** is used to specify a folder path where DigDash Enterprise will store its data files. | ||
485 | |||
486 | In the file **<DDE Install>/configure/setenv.bat** add the line: | ||
487 | |||
488 | @set CATALINA_OPTS = -Ddigdash.appdata=<path to the new app data folder> | ||
489 | |||
490 | //Important ~:// | ||
491 | |||
492 | (% class="box warningmessage" %) | ||
493 | ((( | ||
494 | //The path is interpreted as a Java variable. Folder separator must be /, not \, even on Windows.// | ||
495 | //There is no space between -D and digdash.appdata// | ||
496 | //This setting will not work if your tomcat is stared as a service// | ||
497 | ))) | ||
498 | |||
499 | If this folder does not exist, DigDash Enterprise will create it. The data will not be stored directly in this folder but in another sub-folder **<digdash.appdata>/Enterprise Server/<domain>** | ||
500 | |||
501 | Example: | ||
502 | |||
503 | To make Digdash Enterprise on another hard drive than the system drive: | ||
504 | |||
505 | 1. Modify **<DD Install>/configure/setenv.bat** eby adding the line: | ||
506 | @set CATALINA_OPTS=-Ddigdash.appdata=D:/digdashdata | ||
507 | 2. Restart Tomcat server | ||
508 | 3. A folder **D:\digdashdata\Enterprise Server\ddenterpriseapi** is created and will contain all the data files of DigDash Enterprise | ||
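If your Tomcat runs on Linux, the same Java parameter can be set in the equivalent shell script, typically **setenv.sh** (the script name and the /data/digdashdata path below are assumptions; adapt them to your installation):

{{code cssClass="notranslate"}}
# Hypothetical Linux equivalent of the setenv.bat change
export CATALINA_OPTS="-Ddigdash.appdata=/data/digdashdata"
{{/code}}

The same rules apply: use / as the folder separator and put no space between -D and digdash.appdata.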
509 | |||
510 | //Pros~:// | ||
511 | |||
512 | This configuration is done at the **setenv.bat** script level, so it will not be overwritten during an update of the DigDash Enterprise WAR files. | ||
513 | |||
514 | //Cons~:// | ||
515 | |||
516 | This configuration is global to all the DigDash Enterprise domains on this Tomcat server. However, the data for the different domains are stored in their own sub-folders, so there is no risk of data collision between the domains. | ||
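As an illustration, with **digdash.appdata** set to D:/digdashdata and two domains deployed on the same Tomcat (the second domain name below is hypothetical), the resulting layout would look like:

{{code cssClass="notranslate"}}
D:\digdashdata\
  Enterprise Server\
    ddenterpriseapi\       <- data files of the first domain
    ddenterpriseapi2\      <- data files of a hypothetical second domain
{{/code}}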
517 | |||
518 | === At the domain level (context's web.xml) === | ||
519 | |||
520 | Modified File: **web.xml (ddenterpriseapi)** | ||
521 | |||
522 | The variable **AppDataPath** defined in this file (empty value by default) has the same behavior as the Java parameter digdash.appdata detailed above. | ||
523 | |||
524 | The only difference is that this parameter is specific to a DigDash Enterprise domain. | ||
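A minimal sketch of the corresponding entry in **web.xml** (ddenterpriseapi). The **AppDataPath** name comes from the text above; the parameter element shown here and the example value are assumptions — locate the existing empty **AppDataPath** entry in your web.xml and fill in its value (with / as folder separator, as for digdash.appdata):

{{code language="XML" cssClass="notranslate"}}
<init-param>
  <param-name>AppDataPath</param-name>
  <param-value>D:/digdashdata</param-value>
</init-param>
{{/code}}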
525 | |||
526 | //Pros~:// | ||
527 | |||
528 | The configuration is specific to a single DigDash Enterprise domain. | ||
529 | |||
530 | //Cons~:// | ||
531 | |||
532 | Updating DigDash Enterprise with new WAR files will overwrite this configuration (web.xml is overwritten when deploying new WAR files). | ||
533 | |||
534 | == LDAP settings (adswrapper): Port and instance name == | ||
535 | |||
536 | === LDAP server network port (adswrapper) === | ||
537 | |||
538 | Modified file: **web.xml (adswrapper)** | ||
539 | |||
540 | The variable **ads.ldap.port** (default value: **11389**) defines the network port used by the LDAP server integrated in DigDash Enterprise. You must change this value if the port is already used by another process on the system, or another LDAP instance (of another DigDash domain on the same system for example). | ||
541 | |||
542 | === LDAP instance name (adswrapper) === | ||
543 | |||
544 | Modified file: **web.xml (adswrapper)** | ||
545 | |||
546 | The variable **ads.instance.name** (default value: **ldapdigdash**) defines the name of the LDAP directory instance used by DigDash Enterprise. You must change this value if two DigDash domains deployed in the same Tomcat need to use their own LDAP instance. | ||
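A sketch of how both LDAP variables might be set in **web.xml** (adswrapper). The parameter names are from the text above; the parameter element shown here and the example values are assumptions — locate the existing **ads.ldap.port** and **ads.instance.name** entries in your web.xml and change their values, for instance for a second DigDash domain on the same system:

{{code language="XML" cssClass="notranslate"}}
<init-param>
  <param-name>ads.ldap.port</param-name>
  <param-value>11390</param-value>
</init-param>
<init-param>
  <param-name>ads.instance.name</param-name>
  <param-value>ldapdigdash2</param-value>
</init-param>
{{/code}}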
547 | |||
548 | == Advanced parameters for the dashboard editor / viewer == | ||
549 | |||
550 | === dashboard_system.xml parameters === | ||
551 | |||
552 | Modified file: **dashboard_system.xml** | ||
553 | |||
554 | This file is located at **<user>/Application Data/Enterprise Server/dashboard_system.xml**. By default this file does not exist; you must create it in order to modify the advanced parameters of the dashboard editor / viewer. It is an XML file with the following format: | ||
555 | |||
556 | {{code language="XML" cssClass="notranslate"}} | ||
557 | <SystemProperties> | ||
558 | <Property key="<parameter name>" value="<parameter value>"/> | ||
559 | <Property key="<parameter name>" value="<parameter value>"/> | ||
560 | <Property key="<parameter name>" value="<parameter value>"/> | ||
561 | </SystemProperties> | ||
562 | {{/code}} | ||
563 | |||
564 | //Note~:// | ||
565 | |||
566 | (% class="box infomessage" %) | ||
567 | ((( | ||
568 | All these parameters can also be defined in **<DDE Install>/apache-tomcat/webapps/digdash_dashboard/WEB-INF/web.xml**. | ||
569 | ))) | ||
570 | |||
571 | Available parameters: | ||
572 | |||
573 | * //Name//: **SERVERURL** | ||
574 | //Value//: URL of the DigDash Enterprise server | ||
575 | //Description//: URL of the server on which the dashboard must connect in priority. | ||
576 | * //Name//: **DOMAIN** | ||
577 | //Value//: Name of the DigDash Enterprise domain | ||
578 | //Description//: Name of the domain on which the dashboard must connect in priority. | ||
579 | * //Name//: **FORCESERVERURL** | ||
580 | //Value//: Boolean (default: false) | ||
581 | //Description//: Used with parameter **SERVERURL**. Forces the server to which the dashboard must connect. The user cannot choose another server. | ||
582 | * //Name//: **FORCEDOMAIN** | ||
583 | //Value//: Boolean (default: false) | ||
584 | //Description//: Used with parameter **DOMAIN**. Forces the domain to which the dashboard must connect. The user cannot choose another domain. | ||
585 | * //Name//: **GRIDSIZEEDITOR** | ||
586 | //Value//: Integer (default: 10) | ||
587 | //Description//: Size in pixels of the magnetic grid in the dashboard editor. | ||
588 | * //Name//: **THEME** | ||
589 | //Value//: Name of the theme (default: none) | ||
590 | //Description//: Name of the graphical theme to be applied for users who do not have a specified theme in their settings. | ||
591 | * //Name//: **urlLogout** | ||
592 | //Value//: URL | ||
593 | //Description//: Specifies a URL to which the user is redirected when they log out of the dashboard. The default behavior is to return to the login screen. See the “//Redirection on dashboard logout//” paragraph. | ||
594 | * //Name//: **CANCHANGEPASSWORD** | ||
595 | //Value//: Boolean (default: false) | ||
596 | //Description//: Activates a “Lost password” hyperlink in the login page of the dashboard. This hyperlink sends a password reset code to the user’s email address. See “//Activate reset password feature//”. | ||
597 | |||
598 | Example of a **dashboard_system.xml** file: | ||
599 | |||
600 | {{code cssClass="notranslate"}} | ||
601 | <SystemProperties> | ||
602 | <Property key="SERVERURL" value="http://localhost:8080"/> | ||
603 | <Property key="FORCESERVERURL" value="true"/> | ||
604 | <Property key="DOMAIN" value="ddenterpriseapi"/> | ||
605 | <Property key="FORCEDOMAIN" value="true"/> | ||
606 | <Property key="GRIDSIZEEDITOR" value="15"/> | ||
607 | <Property key="THEME" value="Flat"/> | ||
608 | <Property key="CANCHANGEPASSWORD" value="true"/> | ||
609 | </SystemProperties> | ||
610 | {{/code}} | ||
611 | |||
612 | === Redirection on dashboard logout === | ||
613 | |||
614 | You can specify a URL to display when the user disconnects from the dashboard (Logout button). | ||
615 | |||
616 | Modified file: **web.xml** (digdash_dashboard) or **dashboard_system.xml** | ||
617 | |||
618 | File **web.xml** (digdash_dashboard) is located in **<DDE Install>/apache-tomcat/webapps/digdash_dashboard/WEB-INF/web.xml**. | ||
619 | |||
620 | Modify the value of the parameter **urlLogout** as in the following example. By default the value is empty, which means the logout action is to return to the dashboard authentication page: | ||
621 | |||
622 | {{code language="XML" cssClass="notranslate"}} | ||
623 | <init-param> | ||
624 | <param-name>urlLogout</param-name> | ||
625 | <param-value>http://www.digdash.com</param-value> | ||
626 | </init-param> | ||
627 | {{/code}} | ||
628 | |||
629 | Relative URLs are allowed. They are relative to the location of the index.html file in the digdash_dashboard application: | ||
630 | |||
631 | {{code language="XML" cssClass="notranslate"}} | ||
632 | <init-param> | ||
633 | <param-name>urlLogout</param-name> | ||
634 | <param-value>disconnected.html</param-value> | ||
635 | </init-param> | ||
636 | {{/code}} | ||
637 | |||
638 | Alternatively, you can modify this value in the **dashboard_system.xml** file: | ||
639 | |||
640 | <Property key="urlLogout" value="**disconnected.html**"/> | ||
641 | |||
642 | === Activate reset password feature === | ||
643 | |||
644 | You can activate the reset password feature to allow users to reset their password when they forget it. The feature displays a “**Lost password**” hyperlink in the login page of the dashboard. The hyperlink sends an email to the user containing a password reset code. The user is then redirected to a reset password form and prompted to enter this code and a new password. | ||
645 | |||
646 | Modified file: **web.xml** (digdash_dashboard) or **dashboard_system.xml**, and Server settings page | ||
647 | |||
648 | Prerequisites on DigDash server: | ||
649 | |||
650 | * The feature must also be activated in the **Server settings page** / **Advanced** / **Allow password reset** | ||
651 | * A valid email server must be configured in **Server Settings page / Advanced / System Email Server** | ||
652 | * The users must have a valid email address configured in the LDAP field **digdashMail** | ||
653 | |||
654 | On the dashboard side, this feature is activated by setting the variable CANCHANGEPASSWORD to **true** in **web.xml** (digdash_dashboard): | ||
655 | |||
656 | {{code cssClass="notranslate"}} | ||
657 | <init-param> | ||
658 | <param-name>CANCHANGEPASSWORD</param-name> | ||
659 | <param-value>true</param-value> | ||
660 | </init-param> | ||
661 | {{/code}} | ||
662 | |||
663 | Alternatively, you can modify this value in **dashboard_system.xml** file: | ||
664 | |||
665 | <Property key="CANCHANGEPASSWORD" value="**true**"/> | ||
666 | |||
667 | |||
668 | __Optional: Customization of the reset code email__ | ||
669 | |||
670 | Email subject and body can be customized in the following way: | ||
671 | |||
672 | 1. Start DigDash Studio | ||
673 | 2. Menu **Tools** / **Dictionary manager...** | ||
674 | 3. Right-click on **GLOBAL** section then **Add…**((( | ||
675 | Key name: **LostPasswordMailSubject** | ||
676 | |||
677 | Enter the subject of the email in the languages of your choosing. | ||
678 | ))) | ||
679 | 4. Right-click on **GLOBAL** section then **Add…**((( | ||
680 | Key name: **LostPasswordMailText** | ||
681 | |||
682 | Enter the body of the email in the languages of your choosing. Make sure the body of the email contains at least the keyword **${code}**. This keyword will be substituted with the password reset code. Another available keyword is **${user}**. We discourage putting too much information in this email; for instance, the default subject and body only include the password reset code. | ||
683 | ))) |