Memory Use in the Server
If the contents of all files that an application requires at any given time could be held in memory, response times would be very fast, because accessing data in main memory is far quicker than performing disk input/output (I/O). In practice, however, it is impossible to keep everything in memory, especially when a database contains millions of records.
Much of the data used by an application must therefore be transferred continually between memory and disk during an application session. The disk I/O includes the data stored in the database, database definitions, screen definitions, and application programs.
Just as important as making disk access efficient is controlling the locks placed on the data each user processes once that data is available in memory. The latter topic is discussed in Locks and Deadlock Conditions; the former is discussed in the following paragraphs.
It is important to remember that Zim Server's efficiency depends entirely on the memory available for its use. When Zim Server starts, it reads its configuration file, calculates the amount of memory needed, and allocates a corresponding block of shared memory, which is used from that point on to perform all services for connecting clients.
Shared memory is allocated through a mapping mechanism that associates the shared memory's address space with physical (real) memory. If enough real memory is available to accommodate the allocated shared memory, all of Zim Server's in-memory operations run with maximum efficiency. If there is not enough real memory to hold the shared memory Zim Server needs, some operations require the operating system to swap the excess portion of shared memory out to disk and back in again (in practice the mechanism is more involved than this, but the description is sufficient for our purposes).
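To illustrate the mapping mechanism described above, the following sketch uses the standard POSIX shared-memory calls (shm_open and mmap) to create a shared region and map it into a process's address space. It is not Zim Server code; the object name and size are placeholders chosen for the example.

/*
 * Minimal sketch of the OS mechanism: a named shared-memory object is
 * created and mapped into the process's address space with mmap().
 * The name and size are illustrative only, not Zim Server values.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char  *name = "/demo_shared_region";   /* hypothetical object name */
    const size_t size = 64UL * 1024 * 1024;      /* hypothetical size: 64 MB */

    /* Create (or open) the shared-memory object and set its size. */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return EXIT_FAILURE; }
    if (ftruncate(fd, (off_t)size) == -1) { perror("ftruncate"); return EXIT_FAILURE; }

    /* Map the object into this process's address space. */
    void *region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    /* The region can now be read and written like ordinary memory;
     * whether it stays resident in RAM or is paged out is up to the OS. */
    memset(region, 0, size);
    printf("mapped %zu bytes at %p\n", size, region);

    munmap(region, size);
    close(fd);
    shm_unlink(name);
    return EXIT_SUCCESS;
}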
Even though swap operations to and from memory are faster than regular file operations, they are still significantly slower than operations on real memory. The administrator must therefore take this factor into account when configuring Zim Server.
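As a rough sizing aid, the sketch below compares a planned shared-memory size against the physical memory reported by the operating system (on systems that support the _SC_PHYS_PAGES query, such as Linux). The planned size is a placeholder and is not derived from any Zim Server formula.

/*
 * Sketch of the sizing check an administrator can make: compare the
 * shared memory the server is expected to allocate against the physical
 * memory installed on the machine. Not Zim Server code.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical figure: the shared memory the server would allocate. */
    const long long planned_shared = 512LL * 1024 * 1024;   /* 512 MB */

    long pages     = sysconf(_SC_PHYS_PAGES);
    long page_size = sysconf(_SC_PAGE_SIZE);
    long long physical = (long long)pages * page_size;

    printf("physical memory:       %lld MB\n", physical / (1024 * 1024));
    printf("planned shared memory: %lld MB\n", planned_shared / (1024 * 1024));

    if (planned_shared > physical)
        printf("warning: shared memory exceeds physical RAM; expect swapping\n");
    else
        printf("shared memory fits in physical RAM\n");
    return 0;
}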
The descriptions of all Zim Server configuration options also cover efficiency considerations. Of particular relevance are Checkpoint Buffers, Checkpoint Transactions, Clustered Commits, Maximum Blocks per User and Maximum Data Blocks.
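The sketch below is purely illustrative arithmetic showing the general point that larger configuration values translate into a larger shared-memory allocation at startup. The option values and per-unit sizes are placeholders, not figures taken from Zim Server, whose actual calculation is internal to the product.

/*
 * Hypothetical back-of-envelope estimate only: assumes, as a
 * simplification, that each configured buffer or block consumes one
 * fixed-size unit of shared memory. Consult the Zim Server
 * configuration documentation for real sizing guidance.
 */
#include <stdio.h>

int main(void)
{
    /* Placeholder configuration values. */
    long checkpoint_buffers  = 2048;
    long max_blocks_per_user = 200;
    long users               = 50;

    /* Placeholder assumption: each buffer/block costs one 4 KB unit. */
    long long unit = 4096;
    long long estimate =
        (checkpoint_buffers + (long long)max_blocks_per_user * users) * unit;

    printf("rough shared-memory estimate: %lld MB\n", estimate / (1024 * 1024));
    return 0;
}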