This article is excerpted from the "Open Watcom C/C++ User's Guide" and summarizes the 16-bit memory models.
16-bit Code Models
There are two code models:
1. the small code model and
2. the big code model.
A small code model is one in which all calls to functions are made with near calls. In a near call, the destination address is 16 bits and is relative to the segment value in segment register CS. Hence, in a small code model, all code comprising your program, including library functions, must be less than 64K.
A big code model is one in which all calls to functions are made with far calls. In a far call, the destination address is 32 bits (a segment value and an offset relative to the segment value). This model allows the size of the code comprising your program to exceed 64K.
16-bit Data Models
There are three data models:
1. the small data model,
2. the big data model and
3. the huge data model.
A small data model is one in which all references to data are made with near pointers. Near pointers are 16 bits; all data references are made relative to the segment value in segment register DS. Hence, in a small data model, all data comprising your program must be less than 64K.
A big data model is one in which all references to data are made with far pointers. Far pointers are 32 bits (a segment value and an offset relative to the segment value). This removes the 64K limitation on data size imposed by the small data model. However, when a far pointer is incremented, only the offset is adjusted. Open Watcom C/C++ assumes that the offset portion of a far pointer will not be incremented beyond 64K.
The compiler will assign an object to a new segment if the grouping of data in a segment will cause the object to cross a segment boundary. Implicit in this is the requirement that no individual object exceed 64K bytes. For example, an array containing 40,000 integers does not fit into the big data model. An object such as this should be described as huge.
A huge data model is one in which all references to data are made with far pointers. This is similar to the big data model. However, in the huge data model, incrementing a far pointer will adjust the offset and the segment if necessary. The limit on the size of an object pointed to by a far pointer imposed by the big data model is removed in the huge data model.
Tiny Memory Model
In the tiny memory model, the application’s code and data must total less than 64K bytes in size. All code and data are placed in the same segment. Use of the tiny memory model allows the creation of a COM file for the executable program instead of an EXE file. For more information, see the section entitled "Creating a Tiny Memory Model Application" in this chapter.