Locks and Deadlock Conditions

Deadlocks

A deadlock condition occurs when two users have to wait for each other in order to complete their respective transactions. For example, one user’s transaction needs records currently being used by a transaction started by another user. The second user’s transaction in turn requires records that the first user’s transaction is holding (while it waits for the second user’s transaction to be completed). Zim can detect deadlock situations and can abort the transaction of one of the users – the user’s updates are discarded, and locks released. The other user proceeds.

An application that normally responds quickly but deadlocks constantly squanders that performance. Deadlock situations are not errors; they are the natural outcome of enabling more than one user to access the same data at the same time.

Completely eliminating deadlocks is not an acceptable solution, because the only way to do so is to force users to wait in line to use the application. Operating system and application tuning can, however, reduce the likelihood of deadlocks, enabling the application, and its users, to maximize the amount of work done in a given period of time.

Page Locking

Zim uses automatic page locking to control concurrent access to the database. Locks force other users to wait until the user whose actions initiated the lock has finished using the data. This waiting can lead to deadlocks.

Two kinds of locks are used:

  • a read lock on pages that are being read
  • a write lock on pages that are being updated

A read lock enables other users to read the locked page; a write lock prevents other users both from reading and from updating the page.

Example

Two users, Bob and Macy, wish to access certain pages in a file at the same time:

  • Bob wants to update page 5, and then read page 6.
  • Macy wants to update page 6, and then read page 5.

Both users update their chosen pages successfully. However, Bob cannot read page 6 because Macy’s write lock precludes both updating and reading. Macy, on the other hand, cannot read page 5, because Bob has placed a write lock on it. A deadlock occurs because each user is waiting for the other. One user’s transaction (let’s assume Macy’s) is aborted, removing its page locks, and enabling the other user to continue. Macy’s work is lost.

File Locking

Zim can lock entire files (both read and write file locks) under certain conditions. A write file lock prevents other users from accessing any page in a file.

File locks can help reduce the incidence of deadlock. For example, Bob and Macy attempt the actions described under Page Locking, above, but, this time, file locking is employed.

When Bob updates page 5, a write file lock is placed on the file. Macy can neither read nor update pages 5 and 6. She must wait for Bob’s transaction to be completed before she can begin her own. Although waiting for Bob seems like a waste of time, this wait is better than the time wasted re-entering data after a deadlock.

Fine Tuning Locks

The preceding example shows that there is usually a trade-off between reducing the incidence of deadlock and maintaining a high degree of concurrent access to the application (several users accessing the database at the same time). In our example, file locking eliminated a deadlock, but it also reduced concurrent access (Macy had to wait for Bob). The administrator must decide the level of trade-off that is appropriate for a particular application. A number of Zim Server configuration options can be used to fine-tune the applications being serviced by Zim Server. Consider the following before changing configuration options:

  • All locks are maintained in memory by Zim Server. The operating system is not invoked to control any of the locks managed by Zim Server; therefore, the only limitation in configuring locks is the availability of memory.
  • The lock configuration options apply to all databases managed by Zim Server, not to a single database or user.

The documentation for each Zim Server configuration option also discusses its efficiency implications.

Of particular interest in the area of lock management are the Maximum Locks, Maximum File Locks, Quick Locks, Secondary Lock Groups, and Secondary Lock Group Size configuration options.
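For illustration, such options appear as keyword/value lines in the Zim Server configuration file, following the same pattern as the other configuration entries shown later in this chapter. The values below are only hypothetical starting points and must be tuned for each installation:

```
maximum locks              20000
maximum file locks         500
quick locks                100
secondary lock groups      8
secondary lock group size  64
```

Because locks are kept in Zim Server memory, raising these values costs memory rather than operating system resources.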

How to Maintain the Integrity of a Database

A database is generally assumed to be intact unless it exhibits problems. This may sound odd, but the usual practice, in real life, is to run an application against a database until something goes wrong.

To avoid surprises, certain preventive procedures can help guarantee that the database is always available. These procedures include:

  • Check your database at regular intervals if your data is particularly important.
  • Plan to check your database after the occurrence of a potentially corrupting event (e.g., a power failure).
  • Check your database when you get an error message indicating that corruption could have occurred.

In most circumstances, you can check the integrity of a database by two methods:

1) The ZIMFIX utility

If you suspect corruption of your database, or if you want to check the integrity of the database, run the Repair Facility (ZIMFIX) administrative utility against all database files. The Repair Facility checks the integrity of the database files associated with a specified entity set or relationship. In particular, it verifies that records are constructed properly and that indexes correspond properly to the data stored in the records.

Note: Before executing the Repair Facility (ZIMFIX) to detect corruption, run it with the EXTRACT FILE DEFINITIONS box checked (equivalent to ZIMDD).

For example, to check the integrity of the database files corresponding to EntitySets named Pollutants and WaterSamples, execute the following commands at the operating system level:

zimdd
zimfix Pollutants
zimfix WaterSamples

The Repair Facility can also perform structural integrity checks on application directory files, but it cannot check the integrity of the actual object definitions that those directories contain.

Note: The Repair Facility reports the records that fail the integrity checks, enabling recall and viewing of the damaged records.

2) Finding Data in your database

A fast way of detecting errors is to run a series of FIND commands over the entity set or relationship in question:

  • a FIND command that reads all records without using any index;
  • one FIND command per index, to check the integrity of each index.

If there are no sparse indexes, each FIND must find exactly the same number of records.

This method is faster than running ZIMFIX, but it does not check the contents of records. It indicates whether something is wrong, and it may stop directly at a corrupted record. You can then run ZIMFIX to investigate further.
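As a sketch, for a hypothetical entity set WaterSamples with an index on a field SampleNo, the sequence might look like this (the condition in the second FIND is chosen only to force the index to be used; adapt it to your own index definitions):

```
find all WaterSamples
find all WaterSamples where SampleNo >= 0
```

If the two FINDs report different record counts, and no sparse indexes are involved, the index is suspect.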

How to Restore a Corrupted Database

There are three general ways to restore a corrupted database (in particular, a Zim file):

1. Use ZIMFIX with the repair option;

2. If you have found a corrupt record (for example, trying to LIST the record produces an error message), you can:

    • Use the FIND command to retrieve all records of the appropriate entity set or relationship;
    • Use the NEXT command to locate the damaged record;
    • Use the LIST command to display the record on the screen (it will display an error message indicating the corruption);
    • Use the CHANGE command to correct the wrong fields or use the DELETE command to delete the entire record (you can later manually ADD the same record again).

3. If the corruption was in an index, you can ERASE and CREATE that particular index again (always erase and create indexes in the same order they were originally created to avoid the need to recompile all programs).
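As a sketch, the record-repair sequence from step 2 might look like this for a hypothetical entity set WaterSamples (repeat NEXT and LIST until the error message appears):

```
find all WaterSamples
next
list
```

Then use CHANGE or DELETE on the damaged record, as described above, and re-ADD the record manually if necessary.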

Physical structure of a Zim database

The files that make up a Zim application are implemented in your operating environment’s file system. This section discusses how the various types of files are mapped onto that file system. This section looks only at the default organization. The default organization can be changed using Zim tools.

Database Directory

By default, Zim stores all database files, audit files, and document files in one operating system directory. This directory is the database directory; that is, the directory in which the ZIMINIT utility created your database.

When you start Zim Server, all databases that are going to be serviced are described in the zimdb.zim file.

When you start a Zim session, it assumes, unless told otherwise, that the default (i.e., current) operating system directory is the database directory, and it creates a working directory according to the rules described under Working Directory, below.

Working Directory

By default, Zim stores all working files in a sub-directory of the database directory, named either after the number of the user connected to the database or after that user's name.

The working directory can be created in a different place from the database directory by providing the configuration option “work directory” in the zimconfig.zim configuration file.

Another configuration option, “user name directory”, tells whether the working directory name must be a number or the name of the user.
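For example, a zimconfig.zim file might contain entries like the following. The directory path is hypothetical, and the exact value syntax for "user name directory" should be checked against the configuration option reference:

```
work directory       c:\zimwork
user name directory  TRUE
```

With these settings, each user's working files are kept under c:\zimwork in a sub-directory named after the user.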

WARNING: On UNIX systems, the umask setting must be set to allow working directories to be shared so that users can properly log in and log off. Issue the command:

umask 000

Relationship between Zim Directories and Operating System Directories

Many operating systems use the concept of the directory as a means of organizing the various components of a file system. An operating system directory contains the definition of file system components such as files and other directories.

Zim also uses the concept of directories as organizational tools. A Zim directory contains the definitions of objects in your database, such as EntitySets, forms, and other Zim directories.

Zim directories are quite independent of operating system directories. In fact, a Zim directory is itself implemented as an operating system file. For example, the Zim root directory is usually stored in a file called zim0001. For every Zim directory created, Zim also creates an operating system directory in which compiled Zim programs are stored. For example, the operating system directory that Zim creates for the Zim root directory is usually called zim0001.ws.

Special Document File Names

Within Zim, the file name associated with a Zim document is normally a valid operating system file name or device name. However, Zim also recognizes special file names, with specific file prefixes. For more information on these file prefixes and their meanings, see File Path Prefix Characters.

Distributing Database Files

An application is composed of a number of different types of files, including:

  • directory files
  • entity sets and relationship files
  • application program files
  • compiled application program files

You can control where these files are located. Changing the location of files can be useful for

  • increasing performance (locating files on the network nodes and servers where they are used most often)
  • sharing files among applications

Distributing database files using the areas feature

By default, all database files are located in the database operating system directory.

The areas feature enables you to distribute the database files that correspond to entity sets, relationships, directories, and compiled application programs in order to take advantage of the file system and possibly reduce system overhead.

To use the areas feature, create an areas file named areas.zim.

To be effective, the areas file must reside in the database directory.

Using the Areas file

Entries in the areas file indicate the locations to which particular files have been distributed. The areas file must contain one entry per line, and each entry must take the form

nnnn path

where

nnnn is the number taken from the file name (i.e. ZIMnnnn) of the file that corresponds to the desired directory, entity set or relationship.

path is the name of the directory where file ZIMnnnn is to be found. If ZIMnnnn is a file that stores a directory, then path is also the location where the operating system sub-directory for compiled application programs (i.e., ZIMnnnn.WS) is found.

path can be preceded by any of three of the special file name prefixes used with document file names (see Special Document File Names). The list below describes the meaning of these prefixes when used with path.

  )path – The user’s work directory path is added to the front of path.
  “path – The user’s database directory is added to the front of path. This choice is useful when you are working with foreign directories.
  #path – The directory defined by the Zim environment variable in the registry is added to the front of path.

If a file lacks an entry in the areas file, the software assumes that the file is located in the database directory.
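For example (the file numbers and paths below are hypothetical), an areas.zim file might contain:

```
0005 d:\zimdata\sales
0012 "shared
0017 )temp
```

Here, file zim0005 (and zim0005.ws, if it stores a directory) resides in d:\zimdata\sales; zim0012 resides in the shared sub-directory of the database directory; and zim0017 resides in the temp sub-directory of each user's work directory.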

Distributing documents

Zim documents can be distributed throughout your file system by including an appropriate file name in the FileName field of the documents entity set in the object dictionary. The file name can be any file or device name that is valid in your operating environment.

Per-user documents

You can define documents that are stored on a per-user basis in the user’s work directory.

Per-user documents are declared in the Object Dictionary by placing the work path indicator (a right parenthesis) in front of the FileName entry in the Documents EntitySet. Per-user documents operate the same way as per-user database files. Identifying a document as per-user instructs the system to look for the file on the disk as indicated by the work path configuration option.
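For example, declaring a document in the Documents EntitySet with a FileName entry such as the following (the file name itself is hypothetical) makes it per-user:

```
)scratch.log
```

Each user then reads and writes a private copy of scratch.log in his or her own working directory.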

Distributing work files

The temporary work files used by the system are stored in the work directory. The work directory is set in the user configuration files by a work path entry.

Distributing Audit files

Audit files, which are normally created in the database directory, can be relocated by setting the audit path and backup path entries in the Zim Server configuration file.

Files in a Zim Application

Zim uses the operating environment’s file system to store data, programs, data dictionary definitions, configuration information, and work files. Use the ZIMFILES utility to obtain a complete list of database and document file names in your application.

The following files are created by or for Zim.

Database files
  zimnnnn – Stores entity sets, relationships, and Zim directories. nnnn is a four-digit number, with leading zeros retained.
  zim0099 – Stores compiled form definitions.

Backup files
  000000000000 directories – These directories store backup files if the “backup directory” option appears in the “zimconfig.srv” configuration file.

Working files
  errors.trc – Stores all error messages that were output during the most recent Zim session.
  zimcomp – A temporary work file used by the Zim compiler.
  zimsetd – The session directory; stores the definitions of named sets created and used by the current session. Not created or used by Zim Runtime.
  zimsett – The “set file”; stores the members of named sets created and used in the current session.
  zimstnn – Used by the SORT command; nn is a two-digit number. These temporary work files are erased when the SORT command finishes.

Document files
  filename – Stores program source, data, or other text. filename is an operating system file name of the user’s choice and can include a directory path. Within Zim, a device name can also be substituted for filename. In addition, special file names are available for use when defining Zim documents.

Other files
  areas.zim – Stores user-defined information used by the Zim areas facility to distribute database files and compiled program files.
  collate.zim – A file, read by ZIMBOOT, that contains a database-specific character collating table.
  zimconfig.zim – The database configuration file; stores user-defined Zim configuration options for the database. None of the options used in this file can be used in zimconfig.srv.
  zimconfig.srv – Stores Zim Server-specific configuration options. None of the options used in this file can be used in zimconfig.zim.
  zimdb.zim – Indicates which databases Zim Server is to service.
  zimbk.zim – Indicates the databases for which Zim Backup Server must perform an online, real-time backup.
  dirs.zim – Stores user-defined information about the location of foreign Zim directories that are to be accessed by your application.
  zimnnnn.ws – An operating system directory, associated with the Zim directory stored in file zimnnnn, in which compiled Zim programs are stored.

Database Backup

A well-designed database application is of no use if data and the application are not available 100% of the time. Data loss and corruption are not common in most systems, but occasional occurrences are unavoidable. Some of the events that can cause data loss include the following:

  User error – Individual records, or even entire files, can be erased by accident or with malicious intent.
  Hardware failure – Data can be lost if a disk suddenly becomes unreadable for some reason.

Because events such as these can happen anywhere, at any time, you should take the time now to plan how to handle a possible future data loss. Your plan should include a policy regarding the intervals at which you check the integrity of your database.

Another situation that may arise is when power failures or processing interruptions occur while data is being processed and/or committed. In these cases, ZimServer is perfectly capable of resolving the situation the next time it is restarted by applying previously unresolved commits to the database by means of the recorded modifications made to the database in a common transaction file. If transactions could not successfully end for any reason, these transaction records are ignored.

Among other practices, there are two common actions to be taken to guarantee data security:

  Copy your databases at regular intervals – At specific times (usually overnight), the system is shut down (unavailable to users) and a physical copy of the databases is made. Data is guaranteed to be safe only within the backup interval (typically 24 hours); updates made since the last copy are likely to be lost. The application’s ability to come back online may vary from minutes to hours, depending on the size of the database.
  Online backup – Data is backed up continuously, as soon as it is committed. Data loss is limited to at most a few seconds, if any, and the application can come back online as soon as users are routed to the backup computer.

Using ZimBackup and ZimServer to Make Online Backups

Zim:X provides the ability to perform online data commits in backup databases as soon as data is committed by ZimServer in the active databases. The backup databases can reside anywhere in the world, typically in different machines and/or different physical locations in relation to the active databases. Both ZimServer and ZimBackup need to be properly configured but this process takes only a few minutes to be performed.

a) Stop ZimServer

If ZimServer is currently running, stop it gracefully by means of the “-k” option. This guarantees that the databases under its control are in a valid state.

b) Stop ZimBackup

Make sure that ZimBackup is not running; if it is, stop it, too, with the “-k” option. This guarantees that the databases under its control are in a valid state.

c) Copy all databases involved in the backup operation

Make a copy of all databases under the control of the ZimServer instance that has just been stopped. This copy not only saves all the databases but also creates a replica of the databases to be involved in the backup operation. In essence, the backup starts from the current valid state of the databases.

WARNING:
If you have any “areas.zim” files, you must also copy the corresponding files addressed by this configuration file to the proper database path.

d) Change the server database configuration file “zimdb.zim”

Assuming that there are two Zim databases being serviced by ZimServer but only one needs to be backed up, you just need to add the keyword “backup” after the proper entry (in this case, only MyBase will be backed up):

10;MyBase;c:/mybase/;backup;
20;Example;c:/otherexample;

e) Change the server configuration file “zimconfig.srv”

The following lines in this configuration file should be changed:

audit path <a file directory>
backup path <another file directory>
backup port number <a port number, usually 6001>
backup server name <an IP address>

The audit path option tells where ZimServer should place the uncommitted data files. For safety reasons, it should point to some hardware different from the current serviced databases in order to preserve the commits in case of any failures. If not provided, it defaults to the current Zim installation.

The backup path option tells where ZimServer should place a compressed format of all committed data files ready to be sent to ZimBackup. For safety reasons, it should point to some hardware different from the current serviced databases and from the place pointed to by the audit path, so that the backed-up databases preserve their integrity.

The backup port number and the backup server name are the TCP/IP identification where ZimBackup is operating to perform the backups.

f) Change the backup database configuration file “zimbk.zim”

In the environment where ZimBackup is going to run, change “zimbk.zim” to list the databases that are going to be backed up. Using the example above, only one database reference needs to be provided:

10;MyBase;c:\mybackupbase;

WARNING:
The database number (10, in this case) and the database name (MyBase) MUST be the same as used in “zimdb.zim” for ZimServer. However, the destination directory does not need to be the same.

g) Change the backup configuration file “zimconfig.bkp”

In the environment where ZimBackup is going to run, change the following lines:

destination directory <temporary file directory>
backup port number <a port number, usually 6001>

The destination directory is a temporary location in the backup server environment that holds the backup files to be applied to the backup databases. Once applied, these files are erased.

The backup port number is the same TCP/IP port used by ZimServer to “talk” to ZimBackup.

h) Copy the databases from the server to the backup place

Copy the databases that were saved in step (c) to their proper places as described in “zimbk.zim”. In the above example, you would copy the contents saved from “c:/mybase” to “c:\mybackupbase”.

WARNING:
Although Windows and Linux may be incompatible in many ways, it is perfectly possible to have ZimServer running on Windows and ZimBackup running on Linux (or vice-versa) because ZimBackup does not deal with the data within the database. However, before using the backed-up database, you will need to copy it to the original machine.

i) Start ZimServer

All set, you can now start ZimServer.

j) Start ZimBackup

All set, you can now start ZimBackup. In fact, starting ZimServer before or after ZimBackup does not make a difference because when it’s time to back up data, one server waits for the other.

WARNING:
If you run any Zim statements that change data under the setting

SET CHECKPOINT OFF

then the data changed in this way will NOT be backed up, because this setting tells ZimServer to operate in single-user mode: there are no transactions involved, and therefore no transactions to be committed. This setting is used to add data offline in bulk, avoiding memory limitations and significantly increasing the speed of the operation. However, in case of any failure, the database involved may be rendered corrupt, and an offline backup should be used to recover it.

The sections that follow provide general guidelines for keeping a database running smoothly.

File Management

File management can be adapted to the resources available in the operating environment. To manage files efficiently, it is important to know

  • how to control the number of files that the system has open at any one time
  • how to estimate file space
  • how to control the growth of a file
  • how to pre-allocate space in a file

Managing the number of open files

In many of the environments in which Zim runs, there is an operating system limit on the total number of files that can be opened at any one time (per task or for the entire operating system). In some cases, the definition of the file includes each use of a device, such as a terminal or a printer, for reading and writing. The operating system limit affects Zim’s use of its own directories, entity sets, relationships, documents, and compiled programs, and possibly, its use of the terminal, printers, and so on.

Zim manages its use of files in order to be able to function within the limitations imposed by the operating system. The management of line-oriented files is reasonably straightforward. For example, when you invoke a non-compiled application program, the current program file is closed before the new program file is opened. As a result, there is, at any one time, only one file being used for reading commands. Output is directed to two files (by the SET OUTPUT and SET TRACE OUTPUT commands). When a SET OUTPUT or a SET TRACE OUTPUT command is executed, the current output or trace output file is closed before the new file is opened. Each error causes the error file (containing templates for the Zim error messages) to be opened. The error file is closed after an error message is produced.

However, the main use of files involves block-oriented files such as directories, entity sets, relationships with fields, and compiled programs. Each of these objects has a corresponding operating system file. For these files, Zim maintains a pool of file control blocks. Entity set and relationship files are logically opened and closed around each command. At the end of a command, all of these files are marked as no longer being in use; they are, however, left open as far as the operating system is concerned. During the execution of the next command, if a required file is already open, it is marked to show that it is now in use. If the file is not open, an unused file slot in the pool is sought. If no slot is found, a file that is open, but not actually in use, is closed to free a file control block. The required file is then opened. In this way, any number of files can be used during a session while only a fixed number are actually open at one time.

The files configuration option determines the size of the file pool (i.e., the maximum number of block-oriented files that can be open at any one time). Within operating environments that place a limit on this number, the files setting must be lower than the operating environment’s upper limit. The operating environment limit minus the files setting is the number of slots available for line-oriented files such as documents, terminals, and work files. For more information about the performance implications of the files configuration option, see Increasing Speed: Maximizing Memory Use.
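For example, if the operating environment allowed roughly 100 open files per process (a hypothetical limit), the database configuration file could set:

```
files 60
```

This would leave about 40 slots available for line-oriented files such as documents, terminals, and work files.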

Estimating File Sizes

When managing files, it is valuable to know the amount of space that an existing file currently occupies, and how much space a new file will occupy in the future. The method of estimating a file’s size depends on the use of the file. This section describes the techniques for estimating the size of the following types of files:

  • EntitySets and relationships
  • directories
  • compiled programs

Entity Sets and Relationships

In Zim, an entity set and its related indices are stored in a single operating system file. Relationships with fields are stored in the same way. Every file in a database is organized into fixed-size pages; each page contains 1024 bytes.

Pages are the unit of transfer between disk and memory. Every page belongs either to the entity set (or relationship) or to some specific index on that entity set or relationship. Zim manages the data on each page. In particular, it tracks the free space available within partially filled pages and also tracks completely empty pages. The empty pages remain part of the file because the file systems that the program uses do not permit files to get smaller, only bigger. Thus, if you create a large entity set and delete all the members, you have lots of free space, but the file size is still large.

If a new page is needed, one of the empty pages is used, if available; otherwise, the size of the file is extended.

Entity-set records are packed into pages. A record can be split between pages, but excessive splitting is avoided. The size of a record in an entity set is determined by the formula

L + N + 5

where

L is the total length of the non-virtual fields in the record (virtual fields are not included in the calculation because their values are not stored in an EntitySet record)

N is the number of non-virtual fields in the record.

For char, alpha, and numeric fields, length is the length specified in the field definition. The size of int, longint, vastint, and date fields depends on the underlying machine. Usually, the sizes are 2, 4, 8, and 8 respectively. Some machines force the alignment of certain kinds of data. As a result, records can be somewhat longer than the above calculation indicates. Some RISC machines, for example, force all data types except char, alpha, and numeric to start on an even address or on a multiple of 4 or 8.

Each page contains a header of approximately 22 bytes, meaning that the number of records per page, ignoring splitting, is the greatest integer in

(1024 – 22) / RL

where

RL is the calculated length of a record in the entity set

These calculations are approximate and are further complicated by varchar or varalpha (variable length character) fields, which occupy an amount of space equal only to their actual length, plus two bytes to store the length itself.

Using your knowledge of the average size of each variable length field, you can reasonably estimate the number of pages to be occupied by the data in an entity set or relationship. For example, consider an entity set composed of fields shown in the following table:

Field	Type	Actual Length (bytes)

Fld1	Char	12
Fld2	Int	2
Fld3	Longint	4
Fld4	Vastint	8
Fld5	Numeric	6
Fld6	Date	8

In this case, the length of each record in the EntitySet is

(12 + 2 + 4 + 8 + 6 + 8) + 6 + 5 = 51

The most records that can be stored in one 1024-byte page is

(1024 – 22) / 51 = 19

For an entity set containing one thousand records, the data would require 53 pages (i.e., 53K bytes).

Index space is somewhat harder to estimate accurately, because Zim uses a sophisticated BX-tree algorithm that tries to keep the BX-tree as balanced as possible and to keep pages as full as possible. This strategy optimizes performance, but the actual result is heavily dependent on the data and its physical order.

For an indexed field, including virtual fields, each non-null field value is stored as a key in the index. The maximum number of keys that can be stored on a single page is approximately

(1024 – 12) / (LIF + 10)

where

LIF is the length of the indexed field

Note: The length of a key for a variable length field is always its maximum length.

Assuming that all pages are completely full, you can calculate that the minimum number of pages used by an index is the smallest integer greater than

TR / MK

where

TR is the total number of records in the entity set

MK is the calculated maximum number of keys per page

As previously noted, the index algorithms in Zim merge partially filled pages in order to keep pages as full as possible. The actual utilization of pages depends on the distribution of key values and on the pattern of adding and deleting. Typical utilization ranges from 50 percent to 80 percent.

If there is an index on field Fld1 from the preceding example, then the maximum number of keys per page is

(1024-12) / (12+10) = 46

For an entity set containing one thousand records, at 70 percent utilization, this index requires approximately

(1000/46) * (100/70) = 32 pages
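The index-size estimate can be sketched the same way (again in Python for illustration; the 12-byte index page overhead and 10-byte per-key overhead come from the formula above, and the utilization default reflects the typical 50-80 percent range):

```python
import math

PAGE_SIZE = 1024
INDEX_PAGE_OVERHEAD = 12
KEY_OVERHEAD = 10  # per-key overhead, from the formula above

def keys_per_page(indexed_field_length: int) -> int:
    """Approximate maximum keys per index page."""
    return (PAGE_SIZE - INDEX_PAGE_OVERHEAD) // (indexed_field_length + KEY_OVERHEAD)

def index_pages(total_records: int, indexed_field_length: int,
                utilization: float = 0.70) -> int:
    """Estimated index pages at the given page utilization."""
    full_pages = total_records / keys_per_page(indexed_field_length)
    return math.ceil(full_pages / utilization)

print(keys_per_page(12))      # 46 keys per page for the 12-byte Fld1
print(index_pages(1000, 12))  # 32 pages at 70% utilization
```

For a variable length indexed field, pass its maximum length, since keys always occupy the field's maximum length.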

The total size of a file includes the space used for entity set records, the space used for each index, any completely empty pages created by deletions, and one control page.

Directories

Directory files are also block-oriented. Zim directories contain information about every object defined in your application, including entity sets, relationships, fields, roles, variables, virtual fields, directories, named sets, constants, windows, menus, menu items, form fields, and displays. The amount of information maintained for each object varies. For example, an entity set is described by its name, the file number, and links to the information about its fields. Relationships require more space, primarily to store the encoded relationship condition. Information about an object is separated into basic information and descriptor information.

The number of pages occupied by a directory file can be estimated by the following formula:

3 + N/B + N/D

where

N is the number of objects in the directory

B is the number of objects whose basic information can be packed into a single page

D is the number of objects whose descriptor information can be packed into a single page

B is approximately 20 and D is approximately 10.
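The directory-size formula can be expressed as a small helper (Python for illustration; the approximate values of B and D are those given above, and rounding each term up is an assumption about how partially filled pages are counted):

```python
import math

def directory_pages(num_objects: int, basic_per_page: int = 20,
                    descriptor_per_page: int = 10) -> int:
    """3 control pages, plus pages of basic and descriptor information."""
    return (3 + math.ceil(num_objects / basic_per_page)
              + math.ceil(num_objects / descriptor_per_page))

print(directory_pages(200))  # 3 + 10 + 20 = 33 pages
```

For an application defining 200 objects, the directory would therefore occupy roughly 33 pages, before any cross-reference information is added.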

If you have chosen to store cross-reference information, that information is also kept in the directory file. Additional space is required for cross-reference information.

Compiled Programs

Files containing compiled application programs are also block-oriented. The amount of compiled code varies enormously from one source command to another. For example, the command

let V = 1

compiles a rather small amount of code that assigns the value 1 to the variable V. On the other hand, consider

add ent1 from form1

The compiled code for this instruction must assign values from the fields in form1 to the fields with the same name in ent1.

If these two objects had twenty-five fields in common, the compiled code for this ADD command would be at least twenty-five times the size of that produced for the preceding LET command. This comparison is indicative of the expressive power of Zim. Unfortunately, the variability in compiled output makes it virtually impossible to estimate the size of compiled programs.

Controlling Growth Characteristics of Database Files

In most operating systems, files that grow frequently in small increments can become fragmented; each file is stored in many pieces throughout the disk. As a file becomes more heavily fragmented, access to the file becomes very inefficient, as increased disk head movement is required to locate all the pieces. Fragmentation can be reduced by forcing files to grow less frequently but in greater increments.

Overcoming Fragmentation

When all pages of a file have been filled with data, Zim Server normally extends the length of the file by 10% of the current size. This type of growth somewhat minimizes the size of the database file, but it can also reduce system performance if the file becomes extremely fragmented.

Many operating systems store files in numerous separate fragments. These fragments are often called extents. As a file grows, new extents can be created, increasing the fragmentation of the file. The average time that it takes to access the file can increase as the number of extents increases. Under some operating systems, this problem can be remedied by copying the file to another location; copying alone can reduce the number of fragments.

If the file is known to be growing, the data extend configuration options can be used to pre-allocate file space in the database, thereby controlling file growth and reducing fragmentation.

Memory Use in the Client

Because Zim 9 always operates in client/server mode, memory operations performed by the client are not affected by what is being done in the server, and vice versa.

When the Zim client runs on Windows, the local machine is typically dedicated to the client session, so all available resources can be devoted to Zim. Consequently, memory is not a concern, and the administrator can set the configuration options to their maximum values to improve performance.

The configuration file for the client is zimconfig.zim (the database configuration file). The most relevant options are Maximum Forms, Maximum Form Fields, Runtime Buffers, and Sort Buffers; these can be set to their maximum values at all times. Other options, such as Directories, Document Line Length, Maximum Parameters, and Parameter Size, can be sized according to the needs of the application; they can be left at their default values and changed only if Zim reports an error.

On the other hand, Zim sessions running on Unix compete for memory and resources with all other Zim sessions and, most importantly, with Zim Server and its shared memory. The administrator must therefore balance the needs of Zim Server against those of the Zim sessions. On Unix, the Runtime Buffers and Sort Buffers options should be given priority. The other options (as mentioned for Windows, above) can be set to values that still allow Zim sessions to run comfortably without consuming too much memory; in general, the default values suffice and can be changed depending on the needs of the application.

Memory Use in the Server

 

Optimizing Memory Usage for Faster Response Times

If all the files an application needs at any given time could be stored in memory, response times would be significantly faster. Accessing main memory is much quicker than using disk input/output (I/O) because disk access times are much longer. However, in practice, it is impossible to keep everything in memory, especially if a database contains millions of records.

Efficient Data Transfer Between Memory and Disk

During an application session, much of the data must be continually transferred between memory and disk. This includes data stored in the database, database definitions, screen definitions, and application programs.

Controlling Data Locks

Making disk access as efficient as possible is crucial, but so is controlling locks on the data being processed by each user once it is available in memory. This topic is discussed in detail in the Locks and Deadlock Conditions section.

Zim Server Efficiency

Zim Server’s efficiency relies heavily on the available memory allocated for its use. When Zim Server starts, it reads its configuration file, calculates the required memory, and allocates the corresponding shared memory for client connections.

Shared Memory Allocation

Shared memory is allocated using a mapping mechanism that associates an address space with physical memory. If the available real memory is sufficient to accommodate the allocated shared memory, Zim Server operations will be highly efficient. If not, the operating system will need to swap out and swap in portions of shared memory that do not fit in the real memory.

Impact of Swap Operations

Although swap operations are faster than regular file operations, they are still slower than real memory operations. Therefore, administrators must consider this factor when configuring Zim Server.

Configuration Options for Efficiency

All Zim Server configuration options address efficiency issues. Key options include Checkpoint Buffers, Checkpoint Transactions, Clustered Commits, Maximum Blocks per User, and Maximum Data Blocks.

Performance Tuning

When considering the performance of an application, individual users have different criteria in mind. To some, performance is the speed at which an application responds to the requests of an application user. To others, it is the ability of the application to run properly within the limits set by the hardware or operating system. Performance can also be seen as the overall ability of a multi-user application to process a great deal of work for a large number of users quickly and efficiently.

Performance tuning attempts to maximize the success of an application in achieving the criteria described above. Performance tuning can be done by altering the design of the database, using different coding techniques, modifying the design of multi-user transactions, changing the configuration of the operating system, and altering the software’s own configuration. The focus of this section is solely on altering the software’s configuration to fine-tune performance.

To increase the performance of any application, four areas should be considered:

  • File use: Controlling the location, distribution, growth characteristics, and fragmentation of the files that make up the application can substantially improve performance. This subject is discussed in File Management and Distribution.
  • Deadlocks: Zim 9 significantly reduces the number of deadlocks by implementing an improved lock and deadlock mechanism in Zim Server. Deadlocks, however, are an inevitable part of multi-user applications, and tuning can help to reduce them even further. This topic is discussed in Locks and Deadlock Conditions.
  • Memory use: The more an application can use main memory (e.g., to buffer file input/output) instead of continually accessing a disk, the shorter the application's response time. This is especially critical for Zim Server (discussed in Memory Use in the Server) and very important for each session running Zim (discussed in Memory Use in the Client).
  • Zim Server tuning: Apart from maximizing memory usage, several techniques can improve the performance of Zim Server, as discussed in Zim Server Performance Tuning.

The ZimAdmin utility can help with sizing the configuration options by dynamically monitoring the status of Zim Server.
